Your digital journey starts here
Book a free one-to-one consultation to discuss the current status of your digital workplace. Each consultation is followed up with a bundle of useful resources to help get you started.
In the latest episode of the Digital Workplace Impact podcast, our host Nancy Goebel delves into the strategic imperative of AI readiness. Nancy is joined by Edward Taylor, DWG’s Chief Growth Officer, to explore the transformative impact of Microsoft Copilot on large-scale enterprises and the pivotal role of digital literacy in the workforce.
Is your organization ready to seize the opportunities presented by AI?
The discussion covers how early adopters of AI navigated the challenges of data leaks and the proactive measures subsequently implemented. You can hear insights on user engagement during Copilot trials and understand more about why change management is essential for AI integration.
In addition, the pair offer valuable advice for those considering DWG’s AI Readiness Assessment, a tool to help organizations leverage their strengths for AI adoption.
If you have questions about the strategic importance of AI readiness in large-scale organizations, this is the podcast for you. Listen today and find out more.
My main concern is that a lot of organizations are rushing to roll out AI, but haven’t really thought about making sure that they’ve got those foundational structures in place to allow them to do it successfully. And I suspect that the AI readiness assessment might highlight to an organization that it might be better for them to delay the desire to roll out AI at scale and concentrate some of their efforts on making sure that the foundations are in place so they can leverage AI to its fullest extent.
In this episode of the Digital Workplace Impact podcast, we delve into the strategic imperative of AI readiness. I’m joined by Ed Taylor, DWG’s Chief Growth Officer. Together, we explore the transformative impact of generative AI broadly, and Microsoft Copilot specifically, all of which is in play in large-scale enterprises. Digital literacy is playing a pivotal role in transforming the needs of the workforce around AI. We talk about how early adopters have navigated the challenges of data leaks and the proactive measures they’ve implemented since. We talk about insights gained from our digital workplace analytics tools around user engagement during Copilot trials. We talk about why change management is essential for AI integration and how DWG is now guiding organizations to leverage their strengths for AI adoption for maximum advantage, as well as remediating some of the gaps that shine through an assessment like ours. Finally, that allows us to offer some valuable advice to organizations considering undertaking DWG’s AI readiness assessment. As always, Digital Workplace Impact is brought to you by Digital Workplace Group. And this is your host, Nancy Goebel, DWG’s Chief Executive. Happy listening.
Welcome back, Ed. It’s so great to have you in the studio once again to share some of the things that you’re thinking about.
Absolute pleasure to be back, Nancy. Very much looking forward to the discussion.
And so, you know, AI is the topic of the day and a pretty significant presence for lots of organizations. I know you’ve been doing a lot of thinking and writing about this topic, particularly off the back of conversations with DWG members, current and prospective. I thought it would be really interesting for us to frame our conversation around that. I hope that works as a good challenge for you today.
Definitely. Yeah, very happy to talk about that. I certainly view it as an area of challenge and of huge interest for all of our members and other organizations that are looking at large-scale applications of AI across their enterprise. So it’s exciting times indeed.
In DWG’s latest edition of Vantage Point, you talk about the strategic importance of AI readiness in large-scale organizations. Let’s start there. Share your perspective.
The strategic importance of AI readiness is massive; we mustn’t underestimate how important it is. I think it should be viewed as a bit of a wake-up call for enterprises to really think about bringing their A game when it comes to their ability to effectively manage and exploit the potential of AI. Based on some of the things that we’ve observed, and no doubt will talk about, I suspect quite a lot of them aren’t as ready as maybe they should be in terms of the maturity they have inside their enterprise to take advantage of lots of different areas. I’m sure we’ll come on to that. But from my perspective, the whole area of AI readiness really isn’t about implementing technologies. It’s about transforming enterprises, culture, processes, and strategies so they can really leverage AI to its full potential. The technology is building up and, with the introduction of Copilot and other tools and integrations, particularly inside the Microsoft space, the technology is there to exploit. I think it’s more about how organizations get to the point where they can start taking advantage of it and really ensure that they can develop sustainable growth and long-term success with the use of AI.
And so let’s pull on the Microsoft Copilot thread for a moment. What do you think the impact of the launch of Microsoft Copilot has been for member organizations? And how do you think it’s impacted their approach to AI adoption?
I think the launch of Copilot, particularly with it being a Microsoft product, helped fast-track its use within organizations. When you look at the DWG member group of 70-plus large-scale organizations, as a rough figure, about 95% of those organizations are very much using the Microsoft ecosystem as their core, and the majority of them will be on M365 and using tools like SharePoint Online, Viva, Teams, and so on. I also think the fact that it’s come around has given people a certain degree of license to feel comfortable using AI within the confines of their enterprise from a security standpoint, because prior to that we had some member organizations using tools like ChatGPT, and obviously that did introduce a degree of risk because there wasn’t a guarantee that their data wasn’t going off to third parties or being exposed in situations where they didn’t really want it to be. We’re all well aware of some relatively high-profile data leaks. But going back to the Copilot element, I think that really did give organizations a basis to move forward with their desire to adopt AI.
I think that’s the area that we’re in at the moment. There are a lot of organizations out there that are experimenting with Copilot, trialing it and exposing it to different parts of their organization and different user groups. But the bit that we’re looking at is that next step: how do we actually start to demonstrate the use cases that are going to deliver the most value to those organizations? And then how do you scale up and get AI built into those organizations so that it becomes a business-as-usual element of how people go about their daily working lives?
So one of the things I immediately pull out from what you’ve shared over the last few minutes is that risk readiness is an important element, whether it’s in the wake of the data leaks with ChatGPT or just the concerns generally about security and even privacy vis-a-vis Microsoft Copilot.
Absolutely. I was very fortunate to attend a conference last week where we were talking about AI, but more at a macro level of IT, so to speak, where we were looking at IT as a whole across the whole of a large-scale enterprise, not specifically the digital workplace. There was a section that talked about the need for a balance between the desire to innovate around AI and the need to be able to regulate it as well. It was quite an eye-opener for me personally, because there’s a lot of legislation that organizations are going to have to get their heads around, which will introduce a need for compliance teams to make sure they’re up to date with that legislation. I think it also helps people understand some of the risks that they’re potentially bringing into their organization, so that they can effectively gear themselves up to manage the responsibility of having AI inside their business without introducing any unnecessary risk. That’s quite an interesting area. I certainly learned a lot in terms of what’s going to be happening over the next couple of years from a legislative standpoint when it comes to AI.
You’ve got the legislative and compliance element, but also internal governance to match an organization’s risk tolerance while still trying to take advantage of some of the rewards that AI has to bring. All of that falls under the risk readiness and risk mitigation side of things. I know that you’ve also been thinking a lot about the implications for digital literacy in the workforce and would love to hear your thoughts on that, too.
The digital literacy element of AI is important in that we’ve observed a number of organizations that, as I said, are going through piloting of Copilot. The top-level results we’re seeing show there’s a bit of a gap between the perception of the end users interacting with the Copilot trial and where, ideally, we would like to see the competency of those individuals in terms of their ability to use the full portfolio of tools available inside the M365 platform. To me, what we’re seeing in the data certainly highlights a real need for organizations to emphasize change management and make sure that those end users are sufficiently supported to develop the skills they need to get the most from AI and the tools that are going to be improved by the use of AI. Some of those areas are very much designed to help the end users. I was just thinking about some of the use cases that are likely to benefit what they do. We’ve already seen a lot of that inside the DWG member groups.
We’ve had a large healthcare business provide a live tour of their virtual assistant. We’ve got different HR functions looking at AI for employee onboarding and, potentially, performance management. How can we personalize people’s career development? Aspects of collaboration, certainly around smart meeting assistance and the features that we’re seeing inside MS Teams. But I think the big challenge is making sure that those end users are able to leverage those capabilities so that they can become AI-ready employees, so to speak. That’s where the real challenge is going to start surfacing for organizations: how do we take an entire organization, especially if you’re a business that operates in financial services and a lot of your end users are knowledge workers who stand to benefit from using AI as part of their day-to-day work? That’s a key area where they’re going to have to really double down and invest in change management expertise to help effect that change in a positive way.
Another area, certainly, is the analytics capability to measure the impact that AI is having in organizations, whether from a risk readiness standpoint or a digital literacy one: having some objective analytics that surface a view into what’s happening vis-à-vis user engagement and utilization. It becomes clearer and clearer how things are changing in the organization. I know you’ve had access to early insights around what some organizations have been seeing around user engagement, specifically during the Copilot reviews or trials, depending on your preference of language.
Yeah, I think the top-line statistic that I was fed by a very well-placed business leader of a digital workplace analytics tool was that they were seeing a large cohort of individuals who were effectively one and done in terms of their interaction with the Copilot trial. You could almost argue that you’ve got individuals signing up to these trials just to take a peek at what’s happening inside the Copilot world, but who haven’t really been able to grasp what the next step might be. So you could argue that they’re not necessarily getting the best value out of those trials. Now, admittedly, a lot of those trials might just be free-to-air trials where people go in, have a bit of a play, and then potentially report back. Ideally, when we’re launching trials, we should be treating them a bit more like launching a new web service, where we’d go through user testing and get people to work through different scenarios so that they get exposed to some of the different capabilities AI has to offer, and those end users can then make the leap to how it might benefit their day job. Maybe that’s one of the areas we need to start looking at with these Copilot trials: how can we optimize the experience of the trial so that we can showcase the extent of the capability that’s available within AI, so that those users start to have that light-bulb moment, or sparks of inspiration, as to how they could actually apply AI in their own day-to-day lives?
From that, hopefully, we can then start to derive a suite of different use cases that the enterprise could take away and say: right, we can roll this out and build upon this, so that a broader audience of individuals can gain the benefit of using those tools. But really looking at how we’re getting value from those trials is important. Based on that data, I think it’s something that needs to be addressed soon, really.
I would wholly agree with that. But I also know that in moments like this, when there are still more questions than answers in a lot of ways, organizations within our membership often look to DWG for insights into good practice and the pathway to get to that level of good practice. I know that you, alongside a number of other team members inside DWG, have been formulating a suite of services to support AI adoption. I think it would be interesting to talk a little more about what these services entail and your thoughts on timing for that launch.
Absolutely. The first area I would talk to is our intention to launch, in the coming weeks, an AI readiness assessment that will allow organizations to get a really good handle on where their strengths and weaknesses are across the different dimensions they need to be competent in to effectively launch AI. Obviously, that builds upon DWG’s expertise as a benchmarker of different aspects of the digital workplace, so it’s not a difficult step for us to look at the key components or dimensions we need to measure to determine whether an organization is likely to be able to implement AI to its fullest extent. It will also benefit organizations to be able to see the gaps in their capabilities. In terms of the areas we’re looking to appraise as part of that assessment, one key area is leadership: how ready is the leadership within your organization to transform it into an AI-driven business? Executive leadership is one of the key determinants of success for AI initiatives, and it can really act as a cornerstone for moving AI forward.
Making sure that you’ve got leaders who are willing to sponsor AI and who understand the benefits is crucial. Then, as we move through the different areas, we would look at the organizational analysis: starting to understand how effective the organization is at adopting new technologies in general, and what impact that might have on the culture and behavior of your end users. That’s something we’d certainly look to get more detail on. Following on from that, we would look at the support functions as well. What capabilities do you have within HR, IT, and potentially legal and communications, in terms of how they can play an active role in enabling AI to be effectively baked into new ways of working? Are those functions ready to take on the challenge of rolling out AI? Putting my governance hat on, when we’re looking at those support functions and those different parts of the business playing a role in AI, it’s really important that each of them has a voice in the decision-making around rolling out AI. One of the members we met with a couple of weeks ago has 50 different individuals on their AI Governance Council, feeding into how they manage AI.
And that’s where organizations need to be heading when we’re looking at the analysis of their support functions: have you got the right people in the room asking the right questions about how you’re going to roll AI out? The final part of the jigsaw is doing quite a thorough analysis of the business unit areas, trying to understand which of the business units inside your enterprise is most likely to gain business value from using AI, because ultimately, when you’re buying these new tools, there is a financial investment attached to rolling them out. We’ve got to be quite pragmatic about not necessarily dictating that the whole organization needs access to these tools, but actually looking at where we can derive the most value from their use, taking some sensible decisions about where to initiate the use of AI first inside the business, and then looking to scale that up over a longer period of time. The intention is that, once we’ve gone through those four areas of analyzing the leadership, looking at the organization, reviewing the support functions, and doing that business unit analysis, the report will give those organizations a really good understanding of how they can tackle AI and what challenges they might need to address inside their enterprise before they pull the trigger on launching something. That way, they’re able to de-risk certain aspects of how they approach launching AI, and can do so safe in the knowledge that they’re heading in the right direction, based on the areas that we have knowledge of inside DWG.
So hopefully people find that interesting.
So what I take away is that, in a lot of ways, it functions like a diagnostic, similar to DWG’s digital workplace maturity benchmark of old, and can inform not only strategy and roadmap but, because change is such a big part of the equation, can also help support an almost bespoke change management approach for the organization. Is that right?
Yeah, very much so. One of the things you’ll notice with that assessment is that it’s not particularly focused on the technology. It’s very much focused on the people elements and the strategic approach to rolling out AI. And obviously, once we start getting into the execution aspect, this whole conversation is going to pivot very much to change management: how do we go on that journey of making sure we’ve cultivated the right leadership, we’ve got the right human transformation happening, and people have the necessary programs around them to become successful using these tools? From my perspective, we want to take that leadership cohort and make sure they’ve got the foundational learning in place, they’re bought into the business strategy, they’ve created the necessary governance to manage AI, and they really understand where the prioritization is. Then the next part of the change management side of things is more on the human change: how do we make sure that the end users have those enablement programs to get the best out of AI? How do we bake that into the employee experience?
That’s going to be really, really important. The final part of the jigsaw is helping those individuals develop the technical skills needed to get the best from AI. That probably needs to be a lot more hands-on than it has been with any other technology we’ve seen introduced into the digital workplace, and I think that’s worth expanding upon a little. We’ve seen lots of different types of technology rolled out over the last 10 to 15 years: enterprise social networks, enterprise search, aspects of analytics. Not many of those have required end users to really sit down and think about how to use them. They’ve been very intuitive by design or, in the case of analytics, they don’t really exert themselves on end users; it’s other parts of the business that take advantage of those tools. But with AI, I think we’re at a bit of a tipping point where we have to take more of a people-first approach to making sure people are supported to get the best out of those tools, because it is new for everybody.
So I think we all need to go back to AI prompting 101 and start to get our heads around how do we get the best out of these tools.
And so if we come back to this idea of change management for a moment, why do you think it’s so crucial when introducing AI into an organization or in cases where they’ve dabbled with it, really helping them leapfrog into the risk management and value that AI can bring to bear?
Well, when you look at what we’ve discussed, we’ve highlighted, with the data that we saw from the analytics provider, that there’s a cohort of individuals who don’t seem to gel well with AI. There’s a bit of a gap there. You’re going to have this conversation around individuals who are naturally early adopters and want to consume as much as possible and really master AI. But at the other end of the spectrum, you’re going to have, for want of a better word, laggards who aren’t that interested in moving forward at pace. We’ve got to bring those people to the table. That’s where change management can come into the mix and really help sell the benefits to some of those individuals who might not be immediately predisposed to get involved in AI, which is understandable, because it’s a stretch into the new age. From a change management standpoint, that’s where you really need practitioners who understand the different ways and tactics we can use to help effect that change. Organizations need to think about developing those necessary skill sets internally, so that change can constantly be delivered across the enterprise to the fullest extent.
I don’t think that’s something we’ve seen, particularly in the digital workplace arena. I don’t think there are many organizations out there that can claim to have dedicated change resources available. It’s always been a bit ad hoc, should we say, when it comes to change management. This introduction of AI is a big watershed moment in terms of making sure that end users are supported, and I think that’s where the investment probably needs to be. In addition to the financial investment you’re making in having access to the technology, you equally need to be thinking: if we’re looking at larger cohorts of individuals, have we got the skill sets in the business to make sure they’ve got those support structures in place to get the value?
At moments like this, I always think back to a sharing session where Pandora came in and talked about the fact that when they bring technology into the organization, they plan on 20% of the cost being the technology and 80% being the change activities that help individuals adopt and put new capabilities to best use. This just feels like another clear-cut example of the need for the 80/20 rule, in this case with the heavy emphasis on the change management paradigm you’ve been talking about. Just coming back to this readiness assessment and comparing it to the benchmarking we’ve done over the years: is an output of the readiness assessment an articulation of the strengths, weaknesses, and opportunities for AI adoption?
I think when you step back and look at a lot of the benchmarking we’ve done with organizations, particularly around digital workplace maturity, one of the key things we see is that a lot of organizations can buy their way to success in certain areas. Take mobility, for example: you can buy a tool from Microsoft, or an intranet product, that gives you the ability to engage your end users through mobile devices, and that ticks the box quite neatly. Where we start to see real challenges unravel is around the people element, and that’s where we see a real need for ongoing, continuous investment. The dimension I come back to is one called organizational readiness, which is ultimately a measure of how easily an organization is able to roll out new technology to its end users and start them on a successful journey of maturing their skill set in any given technology that comes into their digital workplace. What we’ll probably start to see when we’re doing more of these AI assessments is that, naturally, there’ll be strengths in some of those more straightforward areas.
But I think the AI readiness assessment will really start to uncover some of the weaknesses that some organizations will probably have when it comes to change management, which we’ve discussed a lot, but also around their general governance. My main concern is that a lot of organizations are rushing to roll out AI, but haven’t really thought about making sure they’ve got the foundational structures in place to do it successfully. I suspect that the AI readiness assessment might highlight to an organization that it might be better to delay the desire to roll out AI at scale and instead concentrate some of its efforts on making sure the foundations are in place, so that it can leverage AI to its fullest extent. So, things like: is your directory of end users up to date? Is your content fresh and up to date? Have you removed all the duplication? Is it as well written as possible? Have you got a really good handle on the ownership of all the key assets within your organization? Have you got the necessary governance processes, controls, and standards in place to get the best from it?
Because making sure you’ve got all of those capabilities operating at a high level will put your organization on the best possible footing for AI. I would like to think that the organizations that go on to use the readiness assessment will be able to do so with a degree of comfort that they’ve got those foundational boxes ticked and can move forward with real purpose, rather than dipping a toe into an AI trial or a Copilot trial that doesn’t really serve a longer purpose. So I think it’s important that organizations do the basics first before they start to try and reach for the stars, because I suspect it won’t end well if you start rolling out AI at pace without having the necessary processes in place to make sure you’re operating in as risk-free an environment as you can.
If the first output is the strengths, weaknesses, and opportunities, mirroring some of what we’ve learned through DWG’s benchmarking process, would there also be recommendations for driving AI adoption forward within the scope of the assessment?
Oh, absolutely. That’s where the second phase of work comes into the mix: that’s where we’d look to build with organizations the strategies and roadmaps needed to build out AI for their organizations. DWG has a very long track record of working with organizations to develop a strategy that defines what it is we’re seeking to achieve, and bringing that to life with what we refer to as guiding policies, which provide very granular, measurable goals. One of the things that DWG does very well, in my opinion, is that when we create strategies, we build in measurable KPIs that allow the strategy to be appraised later in time. The element that complements the strategy the most, though, is the roadmap. That’s where we start to work with organizations to build out the key workstreams that need to be focused upon when rolling out AI. We then take each of those workstreams: we might look at content governance as one stream, and change management as another.
We might look at the decision-making bodies as another. What we can then do with each of those workstreams is develop the near-, mid-, and longer-term activities that need to take place, so that we can build up the maturity of those individual workstreams at a cadence that suits the organization, because not every organization is going to have resources on tap to roll things out at a moment’s notice. Some might want to take a more measured approach. We really need to make sure that, when we’re developing the roadmap, we rightsize it in line with the aspiration of the strategy. In many cases, that highlights a need to leverage DWG to provide additional support through our consulting, or a need to recruit or use secondments to bring specific resources in to help manage some of the outputs from the workstreams we’ll be working on. Just to add to that, those roadmaps also help us to bring in the key individuals who are going to play a more long-term, hands-on role in the management and governance of AI inside those organizations for the foreseeable future.
I think that’s one of the more important elements of the whole of AI readiness: making sure that you’re looking at your enterprise from the perspective of how to build up AI capability within the organization as a continuous mechanism in the business, one that seeks to move forward and take the necessary strategic decisions so that you’re able to leverage AI to its fullest extent. By that, I mean having SteerCo buy-in at the highest level, perhaps meeting on a quarterly basis, and then more day-to-day management of those services that’s shared across the key stakeholders, which I would expect to involve individuals from HR, IT, internal comms, potentially facilities, and other business functions on a case-by-case basis for each organization.
And so the process kicks off with an AI readiness assessment. Then the findings and recommendations inform what is tantamount to an AI playbook, largely composed of the strategy, the roadmap, and measurement checkpoints, KPIs, as you put them, so that people can see how things are progressing and where energy is needed next. So, implicit or inherent in that, does it also mean that the AI readiness assessment is something that you use periodically as part of that KPI set?
Definitely. As with a lot of our benchmarks, organizations like having the ability to do a before-and-after: to get a sense check of where they’re at before they initiate a large amount of change within the business. Then, once that change has taken place, we can go back and benchmark the organization again to demonstrate to their leadership that they’ve actually delivered against the objectives of the strategy. We can certainly take that approach with the AI readiness assessment, and I think it’s something we’ll see evolve over time with our own benchmarks as well. The other thing to bear in mind is that we’re always looking at the market and asking: where is the market headed in terms of the evolution of digital workplaces at large, and have we got sufficiently robust benchmarks to test those different areas? So I would envisage that, in the coming months, other benchmarks might join our portfolio to provide more rigorous appraisals of AI. But in the first instance, this AI readiness assessment has really been designed for organizations at the early stages who are asking: are we actually ready for AI?
Do we need to do a bit of a sense check? Those are the organizations that should be looking to reach out to us and have a discussion around whether they’re the right candidates for the AI readiness assessment. From my perspective, it would give me a great deal of peace of mind knowing where the strengths, weaknesses, and opportunities are within my organization before going ahead with rolling out AI. We’ve seen this time and time again: the collective knowledge that DWG brings to the table is hugely valuable in helping leaders validate their own thinking, or in giving them an informed perspective from other organizations that have dealt with, or worked through, these challenges and can provide valuable insights. That’s where we can offer a really helpful perspective to businesses that are inevitably scrabbling around trying to consume as much information as possible on this topic, because it’s new to everybody. You only need to look at the number of organizations that were at the recent member meeting we held at BT.
All of those businesses, I think, were at various stages of trialing AI, but I don’t recall any of them saying they had fully rolled it out across their enterprise, beyond the trial. So we’re at a really interesting juncture. Using something like the AI readiness assessment would really help organizations gain confidence that they’re ready to leverage this tool, because I suspect what we might be seeing is a lot of organizations, as we’ve discussed in this podcast, coming to the conclusion that they’re not quite as ready as they would have liked to be. Having identified some of the problems we’ve discussed, they may have to go back to the drawing board, or recognize that they need more investment to make this as successful as it can be.
For those organizations that are thinking about this space, if you had to offer either a final reflection or final word of advice as part of this conversation, what would that be?
I would say: give me a phone call, or anybody in the DWG consulting team, including yourself, Nancy. Let’s get in touch; let’s talk. At this stage, we’re really interested in understanding how we can add value to organizations on their journey with AI, and I’d like to feel we can give them some peace of mind. For organizations that are DWG members, we’ve already produced a huge quantity of resources. We have access to the live tours I mentioned; the healthcare business I referred to earlier did a live tour of their AI virtual assistant. We’ve published a number of research papers, including one on generative AI, and we have a new research paper coming very soon on the topic of AI readiness support. So there are lots of resources available within the DWG member world. For anybody listening to this podcast who is interested in DWG membership, we do offer a free trial. Reach out to myself or Nancy, and we can get you hooked into that world and help you get a feel for membership.
But, yes, with the AI readiness assessment, for us it’s really about helping organizations understand the potential for AI within their organization and, crucially, giving them the peace of mind that they’ve covered all the bases and have a really good understanding of where their strengths, weaknesses, and opportunities lie as they move forward.
Well, Ed, thank you, as always, for coming in to share your energy and your passion and your insights. I’m sure that off the back of this episode, we’ll be getting lots of questions and opening up some very rich conversations with both members and leaders in our wider circle. So thank you for popping in for a chat. It’s always great to have you.
My pleasure, Nancy. It’s always good to catch up on these topics. Thank you very much for having me.
Digital Workplace Impact is brought to you by the Digital Workplace Group. DWG is a strategic partner covering all aspects of the evolving digital workplace industry, not only through membership, but also benchmarking and boutique consulting services. For more information, visit digitalworkplacegroup.com.