In this episode of Digital Workplace Impact, Nancy Goebel explores AI literacy – one of the hottest topics for organizations right now – with Elizabeth Marsh, DWG Director of Research, and author of DWG’s recent research report ‘Empowering employees for the AI era: A guide to AI upskilling’. As AI becomes increasingly integral to workplace innovation and productivity, understanding the readiness of employees to embrace this technology is paramount.
Drawing on her research, Elizabeth highlights the importance of fostering a culture of AI literacy to unlock new value and enhance decision-making capabilities. The discussion delves into the current state of AI readiness across organizations, revealing a significant gap between ambition and actual investment in upskilling employees.
Listeners will learn about a model for AI literacy that Elizabeth has developed, which emphasizes the need for a holistic approach to digital skills training. From foundational awareness to advanced application, the model progresses through a sequence of practical strategies for empowering employees and ensuring responsible AI use. Elizabeth also addresses the emotional landscape surrounding AI adoption, underscoring the necessity of building confidence and addressing any fears that may hinder progress.
With real-world examples from Vodafone, Moderna, Danone and others, this episode is packed with actionable insights, making it a must-listen for anyone looking to navigate the complexities of AI in the workplace. Whether you’re shaping strategy or levelling up your own AI skills, this conversation will inspire fresh thinking about what it takes to create – and be part of – an AI-ready workforce.
Episode 164: Raising your organization’s digital IQ: The path to an AI ready workforce
“AI literacy is, of course, having an influence on the degree to which a company can be innovative, productive, perhaps keeping up with competitors, enabling faster decision-making – all of these things come back to that AI literacy piece. What are you trying to do with AI? And do you really know if the readiness is there? And I think that’s why we really need to dive into this area, because it’s perhaps not had the level of attention.” Elizabeth Marsh
Nancy Goebel
AI readiness – and, more specifically, AI literacy – is such a hot topic right now. Join me in conversation today with Elizabeth Marsh, DWG’s Director of Research, for an inspiring and insightful look at how to elevate your organization’s AI literacy journey and unlock new value along the way. And, by the way, please be sure to look up the show notes for her supporting research, titled ‘Empowering employees for the AI era: A guide to AI upskilling’.
This is Nancy Goebel, DWG’s Chief Executive and your host. Digital Workplace Impact is brought to you by Digital Workplace Group. Happy listening!
This is Nancy Goebel and I’m delighted to host this session today with my colleague Elizabeth Marsh, who will be taking centre stage to share some of the latest research that she’s put together in the spirit of helping us think all about AI readiness. And so, Elizabeth, I have to extend a warm welcome to you. It’s always such a treat to have a chance to be in a shared space together to hear the latest of your thinking alongside the wider research programme at DWG, so welcome!
Elizabeth
Great to be here. And I’m excited to talk about this new report, which is the latest in our series of AI readiness reports. So we’ll be focusing specifically on people readiness elements. We’ll be diving in, in just a moment, but to start with looking at: what are you trying to achieve with AI? And there are big expectations – some hype, some reality – but, generally speaking, real excitement around what can be achieved in this area. And yet, if I look across quite a few surveys that I’ve seen, is the workforce ready? Well, maybe not so much!
And so if we look at those arrows, some of the common things that we really want to do with AI and achieve in terms of the common use cases, if we haven’t woven in that AI people enablement piece, there are things that are going to become clear in terms of people being unsure about how to really apply AI to tasks, maybe getting poor results, maybe resisting it altogether, thinking that this is just something that’s going to maybe replace my job rather than enhance me. Perhaps misinterpreting the insights; maybe over-trusting the insights. That can be an issue too. And maybe misusing it in a way that’s going to put the company in a difficult position in terms of ethical breaches or compliance issues. So, whenever we’re talking about these things that we want to achieve with AI, we need to be thinking about this enablement. It’s kind of like, relatively speaking, a low-cost insurance policy for a very high-cost AI transformation. You know, to not weave in this piece is really being shown to be quite a big mistake.
So, setting the report up – again, you can see it across lots of surveys and the conversations that we have. For most organizations using AI, the maturity is not there yet; it’s at various stages for most. Many are investing a great deal more in AI as they go forward and have very high expectations around what can potentially be achieved with that in terms of value. And yet again, the number of organizations that think their staff are ready is much lower and, looking across several pieces of research – including a piece that I did about 18 months ago – only around one in five say that they’re really investing in AI upskilling. And actually we saw that in the DWG Member Survey too – I think 74% saying AI is a priority but only 17% saying they’re focusing on digital literacy. So it’s not quite there yet in terms of priority.
For those of you perhaps based in the EU, or with people in the EU or doing business with the EU, in February the EU’s AI Act brought in the AI literacy principle. And this is about making sure that people have an appropriate level of readiness to use AI productively and responsibly. There’s not, as far as I’m aware, an equivalent in the US, although I think various states are in the process of raising the need for this to be considered. Nancy, you at the end of last year very much highlighted this need for schools of AI – so this focus on the skills that are needed both at a technical level but also across the whole workforce as well.
And so what we have done in this report, what I set out to do, was to build a model of AI literacy specific for the digital workplace – and that’s for the broad employee base – and in there, there’s also pointers about following a roadmap for upskilling. So we’ll dive into that.
This really interested me. Why does it matter? Let’s start with both the way that we think about AI and the way that we feel about it, because this has a big impact on AI adoption. And I just liked this qualitative piece of work from Slack, where they looked at the range of emotions that can come up as we adopt and use AI – from perhaps feeling intimidated, through to confident and resourceful, maybe curiosity, caution, confusion – at the end maybe even guilty. So, you can see there’s quite a big array! I think there’s even more than that: people may be sceptical; they may be resentful; they may be really excited, but unsure.
So what we’ve seen in research is that AI literacy is a really important factor to help people on that journey and through perhaps some of the negatives to some of the more positive states. And people with greater AI literacy are more likely to be positive, as you might expect, around the use of the tools. And those with lower AI literacy may be feeling more apprehensive, afraid, kind of distressed about that.
And then AI literacy is, of course, having an influence on the degree to which a company can be innovative, productive, perhaps keeping up with competitors, enabling faster decision-making – all of these things come back to that AI literacy piece. What are you trying to do with AI? And do you really know if the readiness is there? And I think that’s why we really need to dive into this area because it’s perhaps not had the level of attention.
And I guess we shouldn’t treat AI any differently from any other major digital transformation in terms of that training and support for the workforce. It’s not a big new thing. And that’s true of the way that digital workplace teams apply a lot of their skills to this in terms of the governance etc. as well.
So then I make the point – OK, yes, it’s tongue in cheek – don’t focus on AI literacy. But I’m serious as well. So many of the skills that we need now in the workplace to use AI are very much rooted in general digital workplace literacy. So, organizations that have worked on some of those things – and we have the example of Vodafone, who have taken a very holistic approach to digital upskilling with their WOW programme – have people better set up in terms of that basic digital literacy: both the competence and the confidence to use digital tools, and to take on new tools as they come. Can I solve problems? Can I optimize workflows? Do I know how to formulate a good search? Do I know how to critically evaluate data? So that base level.
Then, even more, getting into the data literacy level – so being able to use data to inform decision-making; again, critically evaluating it, understanding where the data’s come from and what the biases might be, and how can I communicate it? And so I take this step back because, while a lot of you might be focused just on the AI literacy piece, if we can look at this more holistically, the workforce is going to be better prepared for new technologies, new iterations of AI – and it’s evolving fast, right? So they’re going to be better prepared for this.
So we need to think about those kind of foundational literacies. As I said, Vodafone, and AstraZeneca is another great example there. So they’re really thinking about that broader effort to upskill digitally as well. You know, we talk about technical debt sometimes in the digital workplace, where we pay the cost for perhaps legacy tools or poorly governed systems and coding, etc., but I think there’s also – let’s call it a ‘digital literacy debt’ or ‘digital dexterity debt’, ‘the 3Ds’ – where now, as we look at helping people to get up to speed on AI, it’s actually revealing gaps in data literacy and digital literacy that haven’t been addressed before as well. So it’s food for thought.
Now let’s dive into the model. We wanted a model that really puts the ethical component right at the heart – and it’s woven into all of the stages. We also wanted to make sure that this didn’t look linear – I originally had something that was much more linear but, in fact, it’s a cycle. So it’s not a kind of ‘one and done’; hence that arrow round, as we maybe go up that spiral with our AI skills. So this model is really a way of breaking down and thinking about the various components of what the workforce needs. It’s not that you would necessarily structure your programme around this, but it helps with understanding the skills and planning that approach as well.
The first stage – I very much wanted to pick up on the basic awareness, and this sometimes gets missed out. But it’s really important; it’s that foundational stage. Are we aware of the technology? What are the initial perceptions we come to AI with? Do those include misconceptions that are going to potentially hinder going forward with the technology? So, in this section, we’re looking at that core knowledge of what AI is: different types of AI at a basic level, some of the ethical concepts, recognizing that actually we’re already using AI – probably in more places than people might realize. And that can be a good doorway to having conversations about how it can help us. Just bringing in, as well as the ‘can’, the ‘cannot’. People might also have overinflated expectations, so just starting to touch on some of the things that it doesn’t do so well yet, or for the foreseeable future, so it’s getting a realistic perspective. Familiarization in the specific industry that you’re in: how are different organizations using it? What are some of the ways? Just piquing that interest.
And also, of course, understanding what’s in it for me? And so that it’s not just something that’s here to replace you, etc., etc. And that there are different ways that it can help in the way that you’re working on a day-to-day basis. So it’s really getting into those kinds of initial perceptions, some of the concerns or fears, getting to the basic concepts. So we’re laying a foundation here.
And across some of the examples, we saw quite appealing initial programmes, like ‘GPT Kickstart’ at Moderna – so that’s really something that they aimed at every employee just being able to get started. You’re not going to get in-depth here. At Danone their ‘Dan Skills’ programme is kind of weaving the AI Academy into that and making sure that people can understand how in different parts of the operation they can get in and use that.
So then we come around the spiral a little bit and we come into ‘Know’. And so we’re getting into a bit more depth here in terms of the technical and operational proficiency that we’re looking for from employees, and more ability to navigate around AI applications and work with those interfaces – recognizing perhaps a bit more how reliant these models are on the data they’re trained on and the algorithms – so really getting into the data literacy components. So a little bit more understanding about what it is we’re looking at – and some of the potential pitfalls. But also at this stage, getting more into perhaps the AI governance structures and policies that the organization has, so making sure that that’s well understood. I think there’s a desire often from employees to understand these things, but in an approachable way. You know: I want to be safe in terms of how I’m using these; I don’t want to make mistakes. So really helping people with that. And the recognition that this is going to be a process of continuing to upskill as well. We’re building confidence here – and when that confidence is lower, people are more likely to see AI as a threat, to resist wanting to learn about it, and they may have more technostress. So, that confidence element is really important.
I should say that, as we’re going through the framework, we’re looking at the kind of elements of learning and knowledge. I think from the start, we want people hands-on. I think it was Walmart who put in place an ‘AI playground’. The sooner people can get hands on and try things out, I think that’s really important.
So now we’re really getting in deep in terms of use and applying those foundation concepts, so getting much deeper into: What are the use cases? How can I use this on a daily basis? Getting more skill around writing prompts, steering and refining some of the outputs, thinking about AI augmentation. So, how is that human–AI collaboration happening for me and within our team? And how does that work best? Perhaps getting a bit more innovative with the use of the tools that are available, being really clear on where the AI excels and where we can get the most out of it. Perhaps some experimentation there, but also very much knowing the importance of human judgement and contextualizing those outputs. Where is it appropriate to be using those? Where do I need to be surrounding those with my human context as well? And perhaps starting to champion it a little more – so at the last stage, perhaps I can explain concepts now; perhaps I’m actively becoming more of a champion with others as well. Here, really thinking about the use cases, the innovation, the collaboration, and that level of potentially becoming a champion as well.
So again, to ‘Evaluate’ and, as I said, this should be part of our core digital literacy – whether we’re evaluating documents that we’ve found, websites, search results – and now we’re applying it to AI, where it’s suddenly become hyper-important. So, if we’ve got that foundational digital literacy, we’re in a good position. So, what’s the accuracy and reliability of what we’re seeing? It may look great, but perhaps there are inconsistencies, perhaps there are hallucinations. Starting to be more aware of some of the harms that could arise, both personally and for the organization, without just passively accepting an AI output; always having that hat on of: what am I looking at? Is it good? Is it misleading? How do I need to use or correct it? And being alert to some of the generalizations and maybe oversimplifications that may come out. And then also that critical thinking around, for example, if an employee is using an AI-generated sales forecast: critically analysing those assumptions, checking for potential bias in the historical data, validating the predictions that are coming out before acting on them. So, this is a really, really important aspect. We’ve been building it a little bit since the start of the model, but here it really comes into focus and looks at some of those risks. So again, organizations really building this in.
And then, last, that responsible use of AI: understanding its impact both within and beyond the organization, so getting into some of the regulations around use and misuse of information, which need to be understood. The importance of explainable AI – so interpretability, accountability, and being able to see how decisions are being made, to the extent that we can. And also looking at where we use them. So, for example, particularly in people processes – if we’re hiring or assessing performance – lots of care is needed there: looking at things like biased data sets and potential discrimination, and the potential to perpetuate existing silos in the organization, depending on how this is rolled out and the data that it’s using. So it could actually reinforce some of those silos that we’ve seen. Of course, thinking about things like the environmental aspects of this and understanding those as well. So, thinking about responsible and proportionate use, I guess.
And then another aspect, I suppose – and I was talking with a colleague about this earlier – is the potential for widening that digital divide in the workforce that is already an issue in some organizations between the ‘haves’ and the ‘have-nots’ of who’s using AI. And if someone is slower or resistant, or doesn’t have the access, what does that mean in terms of relative career prospects, performance, etc.? So there’s a lot of intricate aspects to think about there.
So there’s a ton of stuff in ‘Uphold’, because it is intricate, going into some of those areas around bias and fairness, transparency, privacy aspects, oversight and accountability, and looking at the sustainability aspects. And then also that kind of self-management and transparent use – so being discerning and intentional in how we use AI for work. And the example there is being unafraid to let colleagues or managers know how I’m using AI for work tasks. So that’s part of enabling a conversation about how we’re using AI. And I feel really good when I’m given work, or I see communications, and people tell me about how they worked with their AI buddy, as opposed to if I see something and nothing is said. So, you know, there are some interesting layers to this, I think.
I’m not going to go through this in depth, but we put in this roadmap for AI upskilling, not because we’re telling you how to launch a learning programme – obviously you’ve got your expertise around these things – but we wanted to highlight the really AI-relevant things that we’re trying to pull out of this report. And so, you know, I was playing devil’s advocate: I’m always talking about how we need that ‘Listen’ stage; we need to understand the skills that are there, the attitudes that are there, now. And I was thinking about the argument of ‘Well, hey, why bother? We’re going to roll out this programme of learning anyway’ and ‘We’ve kind of got a rough idea’ – but it’s that ‘Do you really?’. It’s really easy to make assumptions about digital skills; it’s really easy to make assumptions about who needs help – for example, based on what generation they are or what department. And actually here we have the opportunity – I said it earlier, it’s like low-cost insurance for that high-cost AI transformation – where we can not just find out about skills, but know about the attitudes and perceptions that may hold us back but also may help us. So, what I’ve seen in quite a few pieces of industry research is that there’s a lot of motivation, there’s a lot of desire for training. I saw an interesting statistic earlier – and I haven’t dived into the report in depth – that for 81% of people who were switching to a different company, one of the motivating forces was the AI training they were, or weren’t, receiving. The point being that when people are looking around for the organization they want to work for now, they want to know that they’re going to be enabled around AI. And so it’s also a feature that is attractive, let’s say – for people to stay in an organization and to move to an organization.
So, here we are getting our baseline, so that we know that what we’re doing makes a difference. You know, we’re demonstrating that commitment to the people-centric approach – as you all well know, in general digital workplace initiatives, but here particularly, as I said at the start, this is integral to the communication ‘We’re enabling you’ … here’s the training, here’s the guardrails, here’s how we’re making it accessible. It means we can tailor the training, we can maybe identify power users. So, lots of good things coming there.
And then the ‘Legitimize’ piece – and I was really pleased to find ‘4Ls’! We did have to work at that a little bit! But that really meaningful AI attestation – and, you know, things like Credly, where people are able to showcase what they’re doing on LinkedIn – kind of encouraging, as in the report, encouraging AI-fluent employees to pay it forward so kind of, again, a network effect. But really thinking about this – and we make some suggestions in the report around helping to legitimize and reward that.
So, the report is set up to support you in conducting your own – I’m using ‘assessment’ less and less because it sounds a bit formal, a bit intimidating, like it’s somehow going to hit people with a ruler afterwards – it’s understanding, discovering what’s happening around AI literacy in its broadest possible sense. And yes, there are things to think about when you’re going through that process. I know it sounds obvious, but really consider what you want to know and how you can use the outputs that come through the questions you’re asking. It’s really easy to end up accidentally assessing the digital workplace; for example, if the search isn’t very good and you ask people about search skills, they can get quite defensive, like ‘It’s not me!’. Actually, it’s maybe a bit of both. There’s just an element of care, also in how it’s communicated, so that people can be honest and give you good-quality answers. I’ve mentioned the idea of some of the items to self-assess also being scenario-based, and I think it’s really useful to get into some of the evaluative and ethical elements as well.
Nancy
Well, Elizabeth, you have given us such a great window into this body of work and the examples that you gave really helped to bring the concepts to reality. And I guess one of the things I’m curious to know is in the course of bringing this research together, was there anything that surprised you along the way, whether it was in terms of the examples that you surfaced or these elements that came together ultimately to form a spiral of literacy and development, or anything else for that matter?
Elizabeth
I think the thing that I did reflect on quite a bit was that it’s the same and different, you know, when I showed my tongue-in-cheek ‘Don’t focus on AI literacy’ – so actually thinking about this holistically. It became clear to me how important that was and, when I looked, I saw that there was research to show that those that are coming forward with AI examples are very much talking about that AI enablement and that kind of broader confidence-building for employees.
I think the other thing that caught my interest – and I mentioned this – was that there is a level of motivation and excitement, as well as desire for training, desire for support, and it doesn’t seem to be quite as forthcoming as people really want it to be. They want the guardrails, they want to feel safe, they want to feel enabled about this. And maybe that’s not always been the case with some previous digital workplace tools. So it seems like an opportunity that we also perhaps need to be careful not to squander as well.
Nancy
And, you know, I know that in the course of the work we do with members, we’ve always strongly encouraged the importance of user research and understanding what’s needed, along with the strategic context of the organization – because you have to work top down, bottom up, all at the same time. And when I think about the listening activities that you were spotlighting earlier in sharing this framework, this is another way to be connected with employees and understand what they’re thinking and what they’re feeling, because this disruption factor is so foundational that people don’t want to be left behind. You may recall when we had the gathering of the Trailblazers a few months ago, we talked about this notion of the fear of becoming obsolete. And, as organizations think about their employee value proposition for the future, this is one of the areas where we’re hearing that a lot of employees really want to make sure that they are fit for the future and they want to see a support system in place within their organizations. But, at the same time, they also need to take a level of ownership and, in some cases, they vote with their feet when an employer isn’t giving them the support they need.
And, you know, I can see very regularly on LinkedIn how people are getting certifications on their own on top of considering role changes. You can either be the disruptor or be disrupted in conversations like this. And so, for me, that’s another angle to think about as part of your status as an employer of choice. This is expected, and to tailor it based on what your employees are concerned about, thinking about, is an important part of the paradigm here – and not just deliver some vanilla training that may come off the shelf from Microsoft University.
Elizabeth
Yes, piquing that digital curiosity – and we do have a shared responsibility; we live in a digital age – so, we have individual responsibility, but it’s much more fun to look at it as piquing digital curiosity. And making it human and tailored and personal is really important.
Nancy
All great insights, as always, Elizabeth.
Digital Workplace Impact is brought to you by Digital Workplace Group. DWG is a strategic partner covering all aspects of the evolving digital workplace industry, not only through membership, but also benchmarking and boutique consulting services.
For more information, visit digitalworkplacegroup.com.

