Forging digital ethics at ‘the speed of trust’
[00:00:00.170] – Shimrit Janes
There’s a beautiful phrase that I have come across in a couple of different areas now, which is move at the speed of trust. Folding this ethical approach into your digital workplace allows you to move at the speed of trust. It creates these opportunities where employees can really feel valued, they understand transparently why decisions are being made, and they can see where you’re maybe trying to mitigate harm. And there’s value in that. It can really create the guardrails to then play and be innovative and to say, what can we do? It allows you to play with those new emergent technologies that are coming through in a way that is healthy. We spoke about the potential risk to organizations, the brand and reputational risk and cybersecurity risk, if you’re not behaving ethically. So the flip of that is the opportunity to really minimize that risk and to think through who you are as an organization, with your values as a guide. And an ethical digital workplace really helps it become an attractive place to work.
[00:01:08.630] – Nancy Goebel
It was just wonderful welcoming Shimrit Janes back into the studio. Shimrit is DWG’s Director of Knowledge and author of an insightful and thought-provoking series of reports on nurturing digital inclusion. Today we focused on the third and final conversation tied to this report series: forging digital ethics. In conversation, Shimrit and I explore the nature of ethics and how a digital strain has emerged, one that has been challenged even further by the emergence of disruptors like generative AI, which we now talk about actively in relation to ChatGPT. We then turn our attention to how digital workplace practitioners can start to fold an ethical approach into their work, before introducing two living examples from Oxfam and Comcast, both of which are taking an ethical and inclusive approach to their digital workplaces. Along the way, we also share both practical and aspirational advice for moving at the speed of trust, and you’ll understand more about what that means by listening to the episode. As always, Digital Workplace Impact is brought to you by Digital Workplace Group. And this is Nancy Goebel, your host and DWG’s chief executive. Join me now in conversation with Shimrit Janes. Happy listening.
[00:02:55.350] – Nancy Goebel
Shimrit. It is just wonderful to have you back in the studio. I don’t know that we’ve had a chance to chat this frequently in the pod studio, but of course, this is a rare moment in the sense that you’ve published now a three-part series around nurturing digital inclusion. And how could we not come back together to talk about the key element of ethics as part of that conversation? And so, welcome back into the studio to start.
[00:03:33.490] – Shimrit Janes
Thank you so much for having me back for this final part of the conversation. I love talking about this stuff with you, so it’s a delight to be back.
[00:03:42.500] – Nancy Goebel
As do I. And so I know that on a number of occasions you’ve shared that the topic of ethics rarely comes up in conversations related to the digital workplace. And this is actually something that you surfaced again as part of the executive summary of your latest research around forging digital ethics. Why do you think that is?
[00:04:09.870] – Shimrit Janes
I think the easy answer is that we’ve not been conditioned or taught to discuss ethics within the digital workplace. I know when I was a practitioner, it never came up. It wasn’t even a consideration. And I don’t think it’s something we’ve been trained to do, for whatever reason. So to then expect people to naturally bring it in as part of their planning and their programming for the digital workplace, it’s almost unfair, because it’s not so commonly done. I think that’s the easy answer. If I go deeper, I think ethics, and discussing ethics, can be really uncomfortable. You can feel it in your body when something feels wrong and isn’t aligned with your values. And it takes courage to be able to speak to that in a room, whether that’s with a leader or with the people who hold the purse strings, for example. And also, if you do bring it up and you’re challenging someone based on ethics or morality, that can potentially lead to disagreement based on different morals. And you might want to avoid that within a work environment. It really requires an environment where it’s supported to have those conversations.
[00:05:29.780] – Shimrit Janes
And I think ethics ultimately comes down to exploring if we’re causing harm, and people want to be seen as good, they don’t want to be told that they’re potentially causing harm. So I think at a kind of deeper, more human level, there’s also that challenge around discussing ethics.
[00:05:52.190] – Nancy Goebel
So I think that’s an important grounding point for us. I think there are a couple of level set questions that we need to layer in now before we get into some of the rich conversation around things like the issues, the consequences, the opportunities. So the first level set question is then tied to your opening chapter in this research, where you frame what are ethics? So maybe we can start with that as that first level set question.
[00:06:28.570] – Shimrit Janes
And in the paper, I talk about studying this as a 17-year-old in the UK as part of my A levels, and just being mind-blown at the number of different ethical approaches there were. But if you boil it down to its simplest and you were to say to someone, what are ethics? I think the majority of people would say it’s doing the right thing, and I think we intrinsically know that that’s what ethics is. The challenge comes if you start to dig under that. So how do you determine what the right thing is? There are so many different ethical approaches based on which culture you’re from, which country you’re in, which industry you’re in, even which religion you might belong to or may have been brought up with, and so many other factors that could impact that. So the right thing isn’t just one thing, I guess is a good way of saying it. And in response to your previous question, I said you kind of get a feeling in your body if something feels wrong; if something isn’t aligning with your own values, there’s this kind of tightening and constricting that happens.
[00:07:37.970] – Shimrit Janes
But I think just because something may feel wrong to you, it doesn’t necessarily mean it’s ethically wrong. If you bring in power dynamics, for example, you might have someone in a position of power whose power is being threatened by someone who’s marginalized and has spoken up. That person in power might feel like it’s the wrong thing to have their power threatened, but it doesn’t mean it’s ethically wrong for the other person to raise the challenge. So there’s this really challenging dance between what might feel morally right or wrong for somebody, versus what’s considered ethically right or wrong at a higher societal or organizational level. So how do you then account for that complex web of what the right thing is? In the paper, we look at ethics in two ways. The first is that ethics are almost a set of guidelines that tell you what the right thing to do is in any situation. So almost like rules of thumb that have been agreed and documented as your guidebook. But then, depending on the context, those ethical guidelines may over time become unethical, for example.
[00:09:03.950] – Shimrit Janes
So ethics is also this process of questioning and debating and assessing those guidelines as a group, listening to each other and being aware of what dynamics might be at play that are impacting whether something is ethical or not. So it’s this combination of documenting, or just agreeing, what your guidelines are around what’s ethical, while at the same time being willing to constantly question those guidelines through conversation. And there’s a nice Venn diagram in the paper which basically asks: just because you can do something, does it mean that you should? It’s this intersection of what’s legally allowed or not allowed, what the technology can or can’t do, and then, on top of that, whether you should or shouldn’t do it. And that’s where the ethical part comes in.
[00:09:57.840] – Nancy Goebel
I have to say it makes me think back to conversations that our community has had collectively about accessibility, and in the US in particular, the W3C guidelines, which actually are broken down into the should do, the must do and the can do. And I know it’s positioned in a different sequence in the guidelines, but the idea is that there are things we just know are inherently core to what needs to happen in the organization. And then there are things that are meant to stretch our ambition: to get to a place where we’re not doing harm, and beyond that to create an environment where people can bring their whole selves to work on any given day and feel they have a place not just to survive, but to thrive. And so I think it’s interesting to bring in my level-set question number two at this moment, because I think some of our listeners might actually be surprised that this research topic is nested within a series around nurturing digital inclusion. I wonder if you can add some context to that in particular.
[00:11:26.570] – Shimrit Janes
Absolutely. When Elizabeth, our Director of Research, and I were scoping out the digital inclusion research, we considered it important to include ethics. It was actually the first chapter when it was a single paper, even though it’s been published second in this box set instead. And the reason for that was, for me, that talking about ethics, digital ethics, really sets this foundation of getting us used to talking about doing the right thing and being able to discuss it. Because, exactly as you just said, Nancy, as soon as you start to get deeper into digital inclusion and digital accessibility, you’re working with competing priorities. Do you spend the extra money that’s needed to create a fully accessible digital workplace? Or how do you make decisions about inclusive design, where an inclusive design methodology that involves more users may take more time, but where spending that extra time will lead to a better, more inclusive digital workplace? How do you navigate those decisions that you need to be able to make as a digital workplace practitioner and as an organization? And so I think having that foundation of ethics and just getting people used to asking, why are we making these decisions?
[00:12:46.750] – Shimrit Janes
Who are the different people being impacted? What do we value? is a really helpful, or needed, I should say, part of this bigger conversation around digital inclusion.
[00:13:00.890] – Nancy Goebel
And that’s such an important grounding point. I also think it’s important for us to have some dialogue around the kinds of issues that surface in the digital workplace in particular. And I say that because certainly we experience things in life, we see things in the media around us, but sometimes we need to build a bridge into what does it mean for us and for our immediate context. And so can you share some common ethical issues that do come up in the digital workplace day to day?
[00:13:43.830] – Shimrit Janes
Yeah. If I were to say to somebody, name me a couple of digital ethical issues that you as a digital workplace practitioner face or are thinking about right now, I think the ones that immediately come to mind are to do with AI, with privacy, with the kind of surveillance technology that is in the news, maybe concerns about cybersecurity. They’re the ones that I think people are most used to talking about, particularly with AI and everything that’s going on at the moment around that, and also around surveillance tech. But if you go beyond that, if we’re talking about ethical issues, we did a paper a couple of years ago about technostress, and how the way we use and design the tools can impact wellbeing, people’s home lives and how people behave using those tools. And so, to me, ethical questioning can help you look at the potentially unintended consequences of some of these decisions we’re making around notifications and presence and even flexible working. And then there’s also a question around how change is managed.
[00:14:57.380] – Shimrit Janes
Are you managing change in an ethical way that respects people’s attention, their readiness for change, their capacity for change? Or are you forcing it on them in a way that’s going to do harm? So there are a couple of ways that you can start to extend the conversation around ethical digital workplace issues. And then, as well, and we’re getting more used to talking about this now, there’s the impact on the environment. We have a paper coming out later this year looking at the green digital workplace, and there’s the impact of digital waste. When we’re creating content, there’s an environmental impact. So how are we managing the waste that is created through our digital workplace, even down to the hardware, the actual physical computer and headset and technology we’re using in order to access the digital workplace? Where is that being sourced from? Where does it go when it’s reached its end of life? There are so many more areas you can start to branch out into as soon as you start to think about ethical issues in the digital workplace. And so I think part of this paper is trying to help people expand how they think about digital ethics for their work.
[00:16:10.760] – Nancy Goebel
And I think we can’t help but delve into the consequences when organizations don’t address ethics in the digital workplace as an extension of those issues. So can you talk us through a little bit about how you frame that in the research as well?
[00:16:33.790] – Shimrit Janes
I think some of the easiest direct consequences to think about are the impacts on employees. If you feel like you’re not being trusted to do the work, or you’re not valued and you’re being seen as a robot. We mentioned the idea of surveillance monitoring: if that’s being implemented in an unethical way, there’s a real impact on employees, where they don’t feel valued and they don’t feel trusted. And that then starts to break the trust relationship between the employee and the organization. So that’s one example. And I think ultimately the consequence of an unethical digital workplace is harm. It does harm to the employee, and it can do harm to the organization if people aren’t behaving in an ethical way using the digital tools, if there are risks of cyberbullying or of confidential information being leaked. There’s also reputational harm if you don’t have an ethical digital workplace: you risk people not wanting to stay, wanting to leave. And as we were saying, it then ripples out from there, potentially harm to families, harm to the environment, harm to home lives. And those are real consequences for an organization that leaders need to be thinking about when they think about the digital workplace.
[00:18:00.420] – Shimrit Janes
And there’s an article that I shared internally at DWG yesterday about Timnit Gebru, the former co-lead of Google’s ethical AI team. She and her team were doing research into ethical AI, hence the title, and she was spotlighting practices she was concerned about that she was seeing inside Google, seeing racism and sexism and all types of concerning practices leaking through. And she was shut down: she was told to withdraw the paper, and she was asked to leave. An environment where you aren’t able to raise your concerns about ethics has a knock-on consequence for that organization. It means you can’t have an open culture where you can experiment, be innovative, question each other and raise challenges with each other in a healthy way. So as well as those harms that are created, there’s also an impact on how creative and innovative you can be as an organization. It might feel like the opposite, that if you don’t have ethics you can do anything and be as creative as you want. But actually, the questioning and challenging culture that comes with ethics is a really healthy thing for innovation.
[00:19:21.570] – Nancy Goebel
And just thinking about a visual for this discussion around consequences, I’m immediately brought to the idea of the ripple effect. I can see very clearly the ring around people, both employees and clients, the impact on the organization, on families and surrounding individuals, on the community and then the environment at large. And I think it’s always important to have balance in a conversation, and the consequences are deeply concerning. But people are often motivated by the incentives and the opportunities. So I think it’s important to play the two side by side, so people see a balanced view of what it looks like to have a highly functioning digital workplace and the opportunities that creating an ethical environment can bring.
[00:20:34.090] – Shimrit Janes
Yeah, absolutely. Last week we held our first ever Rewilding Work Trailblazers event, which was really exciting. This is a community for our members who are really thinking through what the future of work means, but through a healthy lens, and how we can navigate some of these questions. And we raised a question, which we also actually raised with some MBA students a couple of months ago: is all innovation good? And it triggered this beautiful conversation around what we mean by that question. I think it’s easy to say the answer is no, not all innovation is good. But the purpose of the question is to really get into understanding, exactly as you were just saying, Nancy, the consequences, but also to look for the opportunities, so we have that balance. And there’s a beautiful phrase that I have come across in a couple of different areas now, which is move at the speed of trust. Folding this ethical approach into your digital workplace allows you to move at the speed of trust. It creates these opportunities where employees can really feel valued.
[00:21:51.130] – Shimrit Janes
They understand transparently why decisions are being made. They can see where you’re maybe trying to mitigate harm. And there’s value in that. It can really create the guardrails to then play and be innovative and to say, what can we do? It allows you to play with those new emergent technologies that are coming through in a way that is healthy. We spoke about the potential risk to organizations, the brand and reputational risk and cybersecurity risk, if you’re not behaving ethically. So the flip of that is the opportunity to really minimize that risk and to think through who you are as an organization, with your values as a guide. And an ethical digital workplace really helps it become an attractive place to work. So, yes, it’s about avoiding and minimizing harm on the one hand. And I think the flip of that is that digital ethics can also be about creating this joyful, healthy, inclusive digital workplace where people really get what they need to do the work.
[00:22:55.370] – Nancy Goebel
That’s a very powerful set of framing points, Shimrit. And I think you always bring such insight to conversations like this and why you’re doing the research that you are. I think it’s incumbent on us to also provide the guidance around what’s involved in managing ethics in the digital workplace. And I’m thinking about this on four different levels, and maybe we can explore each of them together. The first is really around what do we owe ourselves as digital workplace practitioners. When we think about this space in particular.
[00:23:42.390] – Shimrit Janes
And this is a question that came through in the research, and I kind of love it. It feels weird to say that about your own writing, but this idea of what do we owe ourselves as digital workplace practitioners? One of the things that came through in the research was this idea of a moral injury. If you’re asked to do things that don’t feel or sit right with you, it causes this moral injury. And we’re seeing it right now in the digital workplace. There are examples of IT people who are being asked to put in surveillance tech, who have real concerns about what they’re being asked to do, and it doesn’t sit right with them. But they’re not being given the opportunity to express that, to have that ethical conversation. And this could happen on so many different levels within the digital workplace. So I think what we owe ourselves as digital workplace practitioners is really that space to discuss and explore the consequences of the digital worlds we’re creating. The decisions that we make as practitioners, the tools that we choose, how we choose to put them together, have a real impact on people.
[00:24:54.480] – Shimrit Janes
And so how are those decisions sitting with us? Do they feel right? I think having to make decisions that don’t feel right impacts you as a practitioner, and how you feel when you go home at the end of the day. So getting used to asking, what harms could this cause? What harms aren’t we seeing? Is this the right thing to do? can, as a practitioner, start to help you feel right in the work that you’re doing and the potential impacts you’re going to have. I think for me that’s the key part of it.
[00:25:30.020] – Nancy Goebel
And not only are we champions or stewards of the employee experience, but we ourselves are employees inside of organizations and need to create those spaces to allow for thoughtful questioning and discourse. And so by extension, what do we owe those using the tools that we manage inside of our organizational communities?
[00:25:59.410] – Shimrit Janes
Yes, and there’s a phrase that has come up many times over the years while I’ve been working with DWG, which is user experience debt: the kind of debt that’s created through a bad user experience, and the impact that has on the individual. And I think ultimately we owe the people using the tools an experience that has been chosen and put together in a healthy way. We can get used to this idea of talking about health within the digital workplace: not just the health of the tech, but the impact on the health of the people using it. What impact are our decisions having on them? We owe them a healthy experience that doesn’t do them harm, that allows them to feel that joy when they go home at the end of the day, even if going home means moving from the office at home into the living room. And that can be through inclusive design; it can be by providing training that actually really helps them use the tools; by creating that frictionless experience; by providing feedback loops so that people can say if something isn’t working for them and know they’ll be heard.
[00:27:10.010] – Shimrit Janes
And this kind of gets to the heart of it: this is normal stuff that we know as digital workplace practitioners. But if we’re talking about ethics, what we owe them is that healthy experience.
[00:27:24.270] – Nancy Goebel
And what about leadership and organizations at large? What do we owe them?
[00:27:31.650] – Shimrit Janes
Yeah, so as a digital workplace practitioner, I would say that we owe our leadership and organization as a whole our expertise and our ability to challenge, to really be able to say either, this isn’t the right thing to do and this is why, or, this is the right thing to do and this is why. So really helping them move beyond just financial decision-making, which is obviously still important. I’m not saying it’s either/or. But as well as that financial decision-making about the digital workplace, we can help them make decisions based on the values that are important to the organization and to people. We often talk about how the conversation has gone from user-centered to human-centered, and in other circles I’m increasingly seeing discussion of life-centered. So how do we help the organization, based on our expertise, make decisions about how the tools will impact our people, their relationships, the environment? And sometimes we do need to make hard decisions that will still do harm. I’m not utopian; I know that you can’t live in that world. But at least have the guardrails in place to help your leaders question those decisions, and at least think about how it’s being done and how those decisions are being made and communicated.
[00:29:02.150] – Shimrit Janes
I think it comes back to we can help our leaders move at the speed of trust.
[00:29:07.950] – Nancy Goebel
And I think this comes back to an important point that was raised by the MassMutual team in the accessibility part of this research program, the idea of progress over perfection. Because we’re continually learning in this space, because organizations are like living organisms, evolving as the circumstances around us evolve: technology and a myriad of other factors. And so continuing to take steps forward each and every day is part of the paradigm that we need to create. Going back to this idea of the ripple effect: we’ve talked about ourselves; we’ve then radiated out to talk about those using the tools that we’re supporting and enabling as part of the digital workplace; we’ve talked about leadership and the wider organization. Of course, we now need to look at the wider purview, with the idea that we’re creators and stewards of wider digital habitats, as you’ve put it, and need to think about things like our families, our wider society and even nature. So talk to us a little bit about what we owe along those dimensions as well.
[00:30:39.530] – Shimrit Janes
Yes, definitely. And this visual that you created of those circles is absolutely in line with what we talk about in the paper and elsewhere about nested systems. You start with the individual, who is within a team, within a business function, within an organization, within a region, community, society; you can expand out and out. Obviously you can’t take all of those things into consideration, or you would have analysis paralysis and never do anything. But just having an awareness of how the decisions you’re making as a digital workplace team will impact people’s home lives, might impact a community or the natural environment, can help you start to see some of those potential ripple effects of what you might be doing. For example, when we talk about how technology enables flexible working, there’s a pro and a con to that. You might have a manager who is messaging out of hours and expecting people to respond out of hours, which can cause harm. The flip of that is that the flexible working the technology enables will support caregivers who need that flexibility. So just seeing those different sides will help you have the conversation around what we need to put in place to try to minimize the harm and elevate and amplify the good that comes out of it.
[00:32:07.280] – Shimrit Janes
And I think as well, there are so many consequences related to our technology choices that are hidden, that we aren’t necessarily aware of, and that aren’t necessarily obviously related to the digital workplace. If we think about electric vehicles, these are being created and designed and sold and bought as a way of addressing the climate crisis. It feels like we’re doing the right thing in expending our energy there. And then, if you dig under the surface, the batteries needed for those cars are being made under labor conditions in mines where there’s a real human cost, and they’re horrific conditions. So at what cost are we experiencing that green revolution? That’s an extreme example to get us thinking about what this might mean for the digital workplace. But if we then look at AI, what’s the human cost of AI? What’s the environmental cost? Take ChatGPT, which everyone’s talking about. There’s an environmental impact from the amount of energy needed. There are the data sets being used, which are predominantly based on information created in only one part of the world, while the answers being given are accessed all over the world, and that has an impact on representation.
[00:33:23.690] – Shimrit Janes
So it’s just being aware of what sometimes exists underneath the technology that is letting us get there. And that’s not saying don’t innovate or don’t use these solutions, but just helping us become a little bit more informed and curious about the larger web connected to our decisions. It lets us have the ethical discussion. You can’t make an ethical decision that’s 100% good; life is too complex, it’s just not feasible. But just that widening of scope means you can at least start to bring it into the conversation around what choices we want to make, what impacts we are and aren’t comfortable with as an organization.
[00:34:11.490] – Nancy Goebel
And so I think this naturally brings us to take a breath and explore organizations that are approaching this well. Because as we think about this ripple effect, it could be overwhelming. But certainly there are those that are on a path that are having positive impacts, thoughtful conversations, thoughtful discourse, and shaping their practices with ethical considerations in mind. What can you tell us about them based on your research?
[00:34:50.570] – Shimrit Janes
I’m just going to pick two to talk about that I think aren’t too overwhelming, because, yes, it can get overwhelming and you don’t want to get, like I said, into that paralysis of, oh my God, I have no idea what the right thing to do is. The first is Oxfam International. They obviously have a rights-based approach to how they work with the vulnerable populations they serve, to do with privacy and security and lots around that. And they took that organizational value and applied it to their digital workplace as an ethical decision. They’ve shared with us previously that they take a rights-based approach to their digital workplace. What that means for them is that they’ve tried to create the digital workplace in a way that makes sure their people feel safe using the platforms: everything related to privacy, behaviors, what you can and can’t do with the tools, trying to minimize technostress, all those things that affect how safe you feel using the platform. And this is one of the things that comes through when you read about digital ethics: it’s really about looking at your values as an organization and saying, how do those values help us make decisions about how we’re going to use the digital workplace?
[00:36:14.890] – Shimrit Janes
And Oxfam International is a beautiful example of taking an organizational value and bringing it inside to help think through what it means for their digital workplace. So that’s one really good example. The other is Comcast. We had a great conversation with Comcast about how they approach the digital workplace in an inclusive way and some of the ethical things they’ve put in place. And this is a really simple one, but it has such a big impact. For them, it was how they use Yammer. The question we asked is, how do you help people use Yammer in an ethical way? First of all, they said they’ve managed to build up a culture of respect, which means any issues that do come up can be resolved without intervention even being needed. But at the same time, they’ve got systems in place to step in when needed. They have employee policies that set the expectation for ethical behavior. They have training and guidance in place that says, as an employee, this is what the right thing to do is, based on our values as an organization.
[00:37:23.460] – Shimrit Janes
And at the same time, here is a venue to raise any concerns that you have. They use the technology to monitor potentially harmful words. They’ve supported their employee resource groups as some of the first to start to use Yammer, as a way to say, you’re important to us as an organization; we’re going to help you use this so that you can get the exposure that you need and have the conversations you need to be able to have as an employee resource group. And so it’s more an example of how you can start to create a culture where ethical behavior is the norm, and at the same time where concerns can be raised in a way that feels safe. And so they’re two very different examples, but two good examples of how you can start to bring ethics into the conversation.
[00:38:14.030] – Nancy Goebel
I couldn’t agree more. It sounds like Oxfam and Comcast have so many learnings to share. And of course, our listeners will have access to a link to this research report to see those vignettes in greater detail in the research. As a companion piece to this conversation, I wonder if we can pivot a little bit from the examples of specific organizations to examples of some specific practices that would be powerful to talk about as well. So I’m thinking about how digital workplaces need to balance this point around monitoring employee digital activity with employee privacy, since you mentioned it earlier.
[00:39:10.640] – Shimrit Janes
Yeah, and whenever I think about this, an example comes to mind from a book I read, which I can’t remember the name of right now, but it was to do with animal intelligence. And it was talking about how, in a lab (and I use the word lab here not in an animal-testing way, but meaning a behavioral lab), if you put an animal in a behavioral lab and observe them, you’re not observing natural behavior; you’re observing behavior that’s impacted because they’re being watched, because they’re not in a natural environment. And it comes to mind for me whenever I think about employee monitoring: if you know you’re being watched, your behavior changes. And there have been studies that show that if monitoring is done in an unethical way (this isn’t saying you can’t have any monitoring), it will impact the stress levels and the well-being of employees. And actually it has the opposite of the intended effect: instead of improving productivity, it can lead to a productivity decline. And so I think it comes down to purpose. Why is monitoring being put in place, and how is it being done?
[00:40:23.080] – Shimrit Janes
So we have a lovely example from one of our members, who shared with us a few months ago how they’re just using anonymized data from Teams and Outlook to understand trends around where people are collaborating. And it’s part of this bigger strategy around trying to help people get the best out of Teams for collaboration. And so they’re monitoring activity, but they’re doing it in a way that respects privacy, they’re doing it in a transparent way, and they’re using the data to then say, let us help you collaborate better using the tools that we have. So that’s an example of how it can be done in an anonymized way, but that’s not always possible. So we know that there are organizations where digital monitoring is legally required; for example, it might be for financially regulated industries. So there are definitely cases where it makes sense to have the technology in place. At the same time, there are organizations looking at it because they have what I’ve seen described as productivity paranoia. They don’t trust employees to be productive, even though all the other information is telling them that employees are actually being productive as they work in a distributed way.
[00:41:36.130] – Shimrit Janes
And it raises, I think we’ve spoken about this before, this question of what productivity is. I’ve been very open: I will sometimes have a nap in the afternoon, a short nap, because it helps me reset. But is that me being productive? Even though it helps me be productive for the remainder of the day, I’m not technically at my desk, and so the monitoring tech would say I’m not being productive. So I think if you’re going to do this in an ethical way, it’s really about engaging directly with employees around how these tools are used, and looking for the opportunity, as we were saying. So one side is the risk, but the other is the opportunity. So how about working with your employees transparently? If you say, we have these tools, these are the purposes that we’re looking to use them for, what could the impact be on you? How do you feel about it? And what data could be useful for you to help you with your work? It’s essential to have that transparency and really co-create the solution with people. And, for example, it might be that you can say, we’re going to use this technology, but it’s not a stick to beat you with if you’re not being productive.
[00:42:49.250] – Shimrit Janes
It’s me as a manager. It helps us spot potential red flags to then have a management conversation around: are there areas that you’re struggling in? How can I help you? Or maybe there’s an issue at play, and can we start to dig into that so I can support you? Which is very different to saying, you’ve not been productive to 100% today and therefore you’re going to be punished as a result. And so I think those monitoring tools are a really good case study of how ethics can start to bring in an approach that’s healthy, and where the tools can be used in a way that helps build that trust instead of chipping away at it.
[00:43:34.230] – Nancy Goebel
I have to say, I immediately thought about Walmart as part of this conversation. When you walk into a Walmart in the US, there’s someone to greet you. And for the casual shopper, it’s a way of opening up your experience of coming into the store. But behind the scenes, the thinking behind this is that when you make eye contact with individuals coming into a store, someone who is contemplating shoplifting can actually be dissuaded from doing that. So there’s both an opportunity and an avoided consequence, on a statistical scale, by introducing the greeter. And so it just makes me think about the fact that it took a lot of thought to come up with an innovative way of reducing shoplifting in the retail experience. And certainly there are examples of organizations like the Oxfams and the Comcasts of the world that are approaching the digital ethics paradigm in a way that’s equally innovative. And so with Comcast in particular, you broached the topic of policy setting. I guess I’m wondering if there’s a way for us to pull that thread a little bit further on a practice level, and exchange some thoughts around where policies need to consider quality aspects of work and not just the legal and compliance side of things.
[00:45:31.820] – Nancy Goebel
Any particular reflections to start us off on that?
[00:45:35.770] – Shimrit Janes
And I think this is also where we almost go back to the beginning of the conversation, which was that ethics and ethical policy are both setting the guidelines in place and then having that ongoing conversation around whether they’re still relevant or how they apply to different situations. And so in the paper, we set out an example of how you can start to create an ethical strategy: looking at the values of the organization, engaging with employees around what matters to them, and with leaders, and then starting to use that to build your guide. But I think we’re also aware that can take time. And while that’s good practice, in the absence of that, or while you’re working on it, what we recommend is starting to create that culture of ethical questioning that helps you get beyond the regulatory commitments. And so we put together a table, which is in the paper, which is free to download. And we look at, I think it’s four or five, different pillars that you can start to think through: how are we applying an ethical approach within these verticals? So the first is digital workplace design.
[00:46:47.110] – Shimrit Janes
And there is a set of questions within the paper, such as, whose experience is missing from our design process?, that will help you start to think through, okay, what do we need to do if we’re going to create an ethical, inclusive approach to how we’re designing our digital workplace? And so, in terms of policy, you could be looking at, okay, whenever we design something new, we are going to consider these questions as part of the process. The second pillar was looking at digital workplace adoption. We spoke about change. So again, there’s a set of questions in there, such as, what behaviors and beliefs could we potentially be normalizing with the way that we’re going about change, and our change strategy? This is the thing that came up with gamification: are we normalizing quantity of content over quality of content by putting certain gamification tools in place? So really thinking that through. So that’s adoption. And then there’s also digital workplace management. So that’s, again, a whole series of questions. But are we folding in, for example, ethical questioning into our procurement process when we are working with tech vendors?
[00:48:03.190] – Shimrit Janes
So if you’re thinking about policy, are you starting to think about your procurement process, for example, and looking at that through an ethical lens? And are there things that you need to change around your procurement policy to make sure you’ve got that questioning in there as part of your RFP process? And there are some other areas around digital workplace management as well. And then the final one was digital workplace use. So how are the tools themselves actually being used? Again, there’s a whole load of questions in the paper that you can have a look at. But an example is, what safeguarding do we need to consider around the well-being and health impacts of unhealthy use of the digital workplace? And a lot of this doesn’t necessarily need new policies. It’s potentially going back to the policies you already have and looking at them through that ethical lens. So if you already have policies related to well-being and to health, if you already have policies to do with ethical behavior and, for example, bullying, how do you apply those to the digital workplace? How do you bring in those questions to help you make the decisions and have the conversations around what healthy looks like?
[00:49:15.860] – Shimrit Janes
So there are some policy and questioning approaches that you can take. I’m going to introduce one that is going to be, I think, maybe a stretch suggestion for people. At the beginning, we spoke about how the body responds to ethics and when something doesn’t feel ethical. And there are two researcher-designers called Lauren Lin and Sahibzada Mayed, and they have put together a body-mind check-in for research, for radical research. And the idea is that whenever you’re doing design work, which includes designing the digital workplace, or research work, which again includes your user research, there is a set of questions that allow you as a team and as an individual to just check in with how your body is responding to it, and whether what you’re being asked to do, or what you’re asking somebody else to do, feels right. That doesn’t mean you need to make decisions based on that, but it’s just a way of sensing into: does something feel wrong? And you can then bring that to a wider discussion as a team. And I think part of the Rewilding Work Trailblazers community, when we’re talking about healthy workplaces, is this reconnection of body and mind in work.
[00:50:36.960] – Shimrit Janes
So we’re not just making intellectual decisions, but we’re checking in with our gut and making gut checks. And I think it might not be comfortable for a team, but when we talk about ethics and what’s right, our bodies are important signals. Which doesn’t mean you make a decision based on that, but at least you can bring it to the conversation and say, this feels wrong for me, let’s explore that together. And so that’s not something you can necessarily embed in policy, but I think as a practice, it’s a healthy one to get used to.
[00:51:09.830] – Nancy Goebel
I just immediately had the thought around a whole new discipline that can exist in the work that we do as digital workplace practitioners, and that’s thinking about almost the kinesiology of the digital workplace.
[00:51:31.550] – Shimrit Janes
There’s a whole practice around kind of the somatic, the body experience, that so many industries are used to talking about, and we’re starting to get used to bringing that into the digital workplace experience. I think it’s just a healthy practice to start to play with.
[00:51:51.650] – Nancy Goebel
So we’ve covered quite a lot of territory under the banner of ethics in the digital workplace. And we know that there are many aspects of what we do day to day that can fall under the business-as-usual arena. A couple of times you’ve mentioned ChatGPT, and so I think it’s fitting to start to draw this conversation to a close with a little bit of discussion around the generative AI arena and how disruptive technologies challenge the digital ethics paradigm in ways new and old. And so, any points to give our community the confidence to navigate this space, given that we are, or have been, thrust into a very disruptive phase in the lifecycle of the digital workplace?
[00:53:03.450] – Shimrit Janes
This is such a fascinating area, not even just because of the technology, but because you almost begin to see the full spectrum of the human experience in terms of how people react to it. It’s so interesting. And we had the Rewilding Work Trailblazers session last week where we were talking about healthy innovation, and we started that with a sharing circle, just asking people what challenges they are facing related to healthy innovation in the workplace at the moment. And inevitably, things like generative AI and emergent technology came up. And one of the things that you and I and the others in the room were talking about, to try and make sense of all the different challenges coming through, was that there were almost four themes. The first was how the speed of change and the proliferation of all these innovative solutions just feels overwhelming. So how do you start to prioritize where you need to focus? And then the second was, how do you nurture cultures of innovation that celebrate failure, that can move quickly to assess options and be innovative themselves? And then the third was how to work with people through change to help them have agency as these innovations are explored.
[00:54:20.570] – Shimrit Janes
And then there was also this concern around the potential consequences of pursuing certain innovations, and how do you have the structures in place to be able to adapt and change course? And I think with so many of those areas, when it comes to generative AI and things like ChatGPT, having that ethical muscle in place as one of the inputs can help you start to navigate some of them. So for that question of where we need to focus, there are going to be lots of things influencing that. But if you have ethics as one of them, it might well be that you say, let’s play with this, and let’s also have somebody who’s monitoring the potential ethical consequences of what we’re doing. And if something feels like it’s going to have a harmful impact, it’ll give us the room to pause and just say, we need to understand this a little bit more. But then you flip that with highly risk-averse organizations that are almost moving too slowly because the risk appetite is not there. And so how do you nurture a culture of innovation?
[00:55:26.630] – Shimrit Janes
And so then again, ethics can kind of give you that room to play, to say, this is what we can and can’t do, and to have those parameters to say, okay, what are we comfortable with right now? Instead of sticking your head in the sand and saying, we’re not touching it at all, it gives you the scope. Again, as we were saying, that flip to look at it in a joyful way and to say, okay, what does feel right for us right now? And let’s at least play with it in this sandpit so we can start to understand it. And the speed at which it’s maturing does feel overwhelming. And I think in some ways, as you said, to what extent is it new and old? A lot of these questions being raised by generative AI aren’t new. There have been questions raised for decades around the ethics of some of it, around how biases get coded in, the potential impacts. If you look at sci-fi, people were raising ethical concerns about AI decades ago. So this isn’t new for us in some ways. We’ve almost rehearsed it in our culture.
[00:56:31.790] – Shimrit Janes
But I think if we go back to Timnit Gebru, the person who was at Google, the way that she frames it is: I’m not worried about machines taking over the world; I’m worried about groupthink, insularity and arrogance in the AI community. And that comes down to just human behavior. And so how are we building that muscle as an organization to have those conversations? The tech is the tech, and a lot of us don’t have control over how it’s being created, but how are we as an organization building the muscle to be able to play, but also play in a healthy way? So I think there are some ways in which it’s kind of not new, but the speed and the potential feel very new. People are having to make ethical decisions so much quicker than normal, without necessarily having all the information, without knowing what the impacts could be. There’s this real fear of losing out to the competition as well: are we going to be left behind? And so I think that’s where a lot of the things that you and I and DWG talk about in terms of living systems as an organization come in: how do you become more adaptive?
[00:57:42.030] – Shimrit Janes
How do you sense what’s going on and respond to it as you go along? You said progress over perfection. So how do you apply that to ChatGPT, to generative AI, to start to play and not feel like you need the perfect answer, but at least have the guardrails in place so that you’re asking the questions? And I think developing that culture and practice of being able to ask questions, not as one-offs but as part of your day-to-day practice, will help you start to navigate the kind of overwhelm that is ChatGPT and generative AI.
[00:58:16.590] – Nancy Goebel
Shimrit, I think this conversation has been so insightful, so grounding, so useful. I think we have to pause there and recommend that our listeners pick up a copy of the research, whether as members on the extranet or on the DWG website. And of course, we’ll include the link in the show notes. But for now, I think we pause, and it’s a chance for me to thank you for coming into the studio today to have this important conversation about, essentially, nurturing ethical ways of working via the digital workplace and through the digital workplace. And so thank you for having this conversation today.
[00:59:09.710] – Shimrit Janes
And thank you for the space to talk about it. I loved it, so thank you, Nancy.
[00:59:13.270] – Nancy Goebel
Digital Workplace Impact is brought to you by the Digital Workplace Group. DWG is a strategic partner covering all aspects of the evolving digital workplace industry, not only through membership, but also benchmarking and boutique consulting services. For more information, visit digitalworkplacegroup.com.