Digital Workplace Impact investigates and explores the ideas, practices and people impacting the new digital worlds of work.
- Chris Tubb, Lead Strategy Consultant and Benchmarking Evaluator at DWG
In the past 20 years, DWG has completed more than 700 benchmarks – both of intranets and digital workplaces – in each case providing insights for organizations wanting to take the next step in achieving their ambitions.
In this episode of Digital Workplace Impact, DWG’s benchmarking architect and lead benchmarking evaluator, Chris Tubb, helps to uncover the mysteries of all things benchmarking. What’s the DWG model? How does it work? How is it different from the Digital Workplace of the Year awards? And what are the biggest benefits to the organization of stepping into the benchmarking spotlight?
At a time when digital workplace teams are looking to sharpen their business cases for investment, provide proof of value or conduct health checks to inform their aspirations, DWG’s benchmarking gives an independent evaluation of how a digital workplace is performing, along with actionable recommendations for ‘what next?’.
Chris is one of DWG’s lead strategy consultants and a published author. DWG’s Chief Executive Officer and host, Nancy Goebel, leaves no question unasked, revealing just how much data and metrics can add to a world full of opinion.
So, for crisp insights and great conversation, take a listen and evaluate for yourself whether DWG’s benchmarking might be for you.
(Show notes, links and transcript for this episode.)
*Note: Sample reports are available to digital workplace practitioners only. Technology providers and consultants fall outside scope for this resource.
[00:00:00.650] – Chris Tubb
There’s no point having a benchmarking model if we refresh it every year. Very typically, we have organizations that come in, I guess, as fixer-uppers. They know that they’re not performing well in an area. This is something that they’re going to look at. They’ve resolved to improve, and they know they’re doing poorly, but what they need to know is how poorly, and that’s how they come in. And they’re interested in understanding how far behind they are, with an express intention of coupling this benchmarking activity with a refreshed strategic direction followed by action.
[00:00:43.550] – Nancy Goebel
My longtime colleague Chris Tubb stopped by the Digital Workplace Impact podcast studio today to chat about all things DWG benchmarking. Chris is not only one of DWG’s lead strategy consultants and published authors, he is also a DWG benchmarking architect and lead evaluator. In our 20-plus-year history as a company, DWG has done over 700 benchmarks spanning intranet and digital workplace. At a time when digital workplace teams are looking to sharpen their business cases for investment, provide proof of value for investments, or conduct health checks to set next-stage ambitions, DWG benchmarking is a way to get an independent evaluation of how your digital workplace, in whole or in part, is performing, as well as to get a crisp set of recommendations to help you take things forward. This is Nancy Goebel, your host for Digital Workplace Impact, which is brought to you, as always, by the Digital Workplace Group. Join me now in conversation with Chris Tubb. Happy listening.
[00:01:52.770] – Nancy Goebel
Chris, I am just delighted to have a chance to catch up with you to talk about all things DWG benchmarking in the Digital Workplace Impact podcast studio today. Welcome, welcome.
[00:02:05.510] – Chris Tubb
It’s fantastic to be here. Thank you, Nancy.
[00:02:07.140] – Nancy Goebel
Chris, we’ve been working together for so many years. Gosh, we must have done over 100 benchmarking report-backs together over the years. And very often we get questions from members and from practitioners in our wider circles about the magic behind DWG benchmarking. And so I just thought it might be really timely for us to come together and have a chat about just that topic.
[00:02:36.540] – Chris Tubb
Yes, benchmarking is a great process. I’ve been doing it for so long now that I stopped counting a long time ago. I was thinking about this because I was previously benchmarked when I was back in practice. This was in the decade before the last one. I was running the intranet at the Orange Group, which was part of the France Telecom Group, and we got benchmarked in design and usability. And I think that was 2005. So this was shortly after the IBF – as it was then, the Intranet Benchmarking Forum – which was, I guess, the beginning of DWG. But that wasn’t long after Paul Miller started that whole business. It’s the 20th anniversary this year, so these things come around, don’t they?
[00:03:29.650] – Nancy Goebel
It’s hard to believe that DWG has been working the benchmarking channels for 20-plus years, as you say. And in that time, we’ve done in excess of 700 benchmarking evaluations, which is quite a lot. And so maybe we need to start with a little bit of a level set for those who may be newer to the concept, because I do hear people use benchmarking in lots of different ways, and we’ve got a very specific view of what DWG benchmarking is and what sets it apart. So maybe we can start there, Chris.
[00:04:06.100] – Chris Tubb
Sure, absolutely. Benchmarking is a way of just comparing. And I guess the way that we do that within DWG is that we come up with some models. Now, we don’t come up with those without a great deal of development; we’re thinking about what a model of best practice would be for a particular management practice, or something that relates to intranet or digital workplace management in particular. So we’ve got a model. We try and understand what’s going on within an organization and compare people to that model. So that’s nice, but the important part is that we’re then comparing each of the performances of those organizations with other organizations that have also been benchmarked. So this allows us to compare not only how well we feel these organizations are doing against our idea of what best practice would be, but against each other in reality. And I think that’s where the power comes from.
[00:05:17.190] – Nancy Goebel
And do you happen to know the origin story behind DWG’s benchmarking? You talked about being an early adopter during your time as an intranet manager.
[00:05:30.750] – Chris Tubb
Once upon a time… I know that it was mostly based around design and usability at that time; we were crafting intranets from scratch. Back then, there were none of these intranet-in-a-box platforms, or SharePoint. So we were just doing the best we could using the tools that we had. And as you can imagine, those intranets were possibly not the most usable, or the great user experiences that we have now.
[00:06:04.560] – Nancy Goebel
And I would just add to that, from the origin stories that I’ve heard about the benchmarking, that it started roughly with a group of half a dozen organizations that were actively managing their intranets. And unlike dot-com sites, where you could actually see things in the public domain, people really wanted to understand how their intranets compared. And so DWG – back then, the Intranet Benchmarking Forum – developed a methodology, a model, as you called it, to be able to compare this group of organizations’ respective intranets to each other, and then to facilitate conversation around the findings and work in some recommendations. So I’d say, from my point of view, that’s the core story that I’ve heard. And I know there were naysayers early on who said it wasn’t possible to do. And here we are, 700-plus benchmarks later. Can you talk a little bit about how DWG benchmarking has changed over the years?
[00:07:13.850] – Chris Tubb
Yeah, I think it was around 2020, at kind of the turn of the pandemic, that we decided to revise the intranet-focused benchmarks. In particular, collaboration had migrated by that time into tools like Microsoft Teams, so there was a large swathe of the management benchmarks which weren’t particularly useful. So we retooled that towards being able to measure and understand the performance of different sorts of teams and ranges of products. We moved the majority of the intranet benchmarking into a new benchmark, called modern intranet management, which we in the team call MIM. And that allows us to look at very typical management tasks for the intranet team – things like strategy, governance, findability and search management. But we also look at quite stretch topics for many organizations, things like user-centric content and design: making sure that you’re using tools like content strategy or content design to make sure that content is as valuable to individuals as possible. We also look at task-focused information – that’s something that very often gets ignored, particularly in very communication-centric organizations. Can people find policies? Can people find locations?
[00:08:46.810] – Chris Tubb
So this was something where we were very much focusing on the intranet manager, wherever they happened to sit within the organization. It was something through which that team could understand its performance and its management practices. The second one that we came up with was very much more focused on digital communicators. It was clear that, whilst communications on the intranet was an important channel, despite us attempting for 15 years to get rid of email, that hasn’t really happened. Email was always retained as an important channel for communicators; people still like the immediacy. So we created a benchmark very much focused on the range of digital channels that communicators use, as well as a variety of other management and prioritization tasks that communicators need to do, focusing on things like: have you got a good central–local balance between the core or group communicators and all of the local or divisional communicators? We also brought in things like real-time and multimedia channels – very much the flavor of that moment, where everybody was suddenly having to do their town hall on a Zoom meeting or a Teams meeting – as well as looking at other broader channels, things like digital signage.
[00:10:13.650] – Chris Tubb
So I think that’s gone down very well. It’s very much focused on what digital communicators need so that they can really understand how they perform against other organizations. We’ve got two other benchmarks that are also focused on, if you like, more sort of digital workplace topics. One is digital workplace management, which focuses on what you need to do to manage this wider digital estate in a coherent way. So we look at strategy, we look at product management, we look at how employee services are organized. We look at, obviously, things like measurement and adoption, change and digital literacy. So those sorts of management topics – things that are not directly intranet-based, but all of these other sorts of tools that digital workplace teams deal with – are dealt with by the digital workplace management benchmark. And then finally, we specifically look at digital collaboration tools. This has been such a pivotal topic in the last couple of years, and there’s a whole range of collaboration tools that organizations put in. Very often these are the defaults that people have – the kind of Microsoft Teams – as well as all of the different sorts of project and team collaboration that people have.
[00:11:42.710] – Chris Tubb
But there’s a bunch of other things that need to go in to mean that people can collaborate well. Can you collaborate with people externally? Can you find good people profiles, so that you can understand who the experts are within the business? Are you sharing innovation, setting clear expectations about what you should and should not share? As well as other, more emergent topics – things like people creating their own collaboration environments, and facilitating people to use and create their own tools. So I think those are four really solid benchmarks. The digital collaboration tools and digital communications channels benchmarks very much focus on the ‘what’ – it’s a product focus. The modern intranet management and digital workplace management benchmarks tend to focus much more on the discipline, the skills that teams need – very much more the ‘how’. And above all of these, we still have this digital workplace maturity benchmark, which is getting on, but I think is going to be revised in the near future. We’ll kind of work out what we need to understand about the digital workplace over the next ten years, so that you can compare your practice against other organizations, really understand where you need to apply your efforts, and start those conversations about these repeated digital transformations that organizations will have to do again and again, for sure.
[00:13:26.900] – Nancy Goebel
And I think one of the important parts about this overall program that you’ve described is that there is a discipline around refreshing these benchmarks as the industry evolves. In the first generation, moving from the four-pronged intranet model that you described, to then bringing in the digital workplace maturity benchmark, to now having these four deeper dives that look at intranet, collaboration, communication and the discipline behind managing the digital workplace. And then, of course, coming back in the not-so-distant future to evolve the next-generation digital workplace maturity benchmark. It’s critically important to recognize that our industry is not a static one. And so the models are robust, but we do need to refresh them from time to time to make sure that we can continue to provide ambition and relevance around these evaluations.
[00:14:33.430] – Chris Tubb
We specifically design them so that they will endure. There’s no point having a benchmarking model if we refresh it every year. Very typically, we have organizations that come in, I guess, as fixer-uppers. They know that they’re not performing well in an area. This is something that they’re going to look at. They’ve resolved to improve, and they know they’re doing poorly, but what they need to know is how poorly, and that’s how they come in. And they’re interested in understanding how far behind they are, with an express intention of coupling this benchmarking activity with a refreshed strategic direction followed by action. They’re going to renew their intranet, or they’re going to renew their communications channels, or they’re going to renew aspects of their digital workplace. And at that point they will then say, great, we’re going to do a before and after. We’re going to understand how far we’ve come, how we have improved, whether we have improved. Very often we find some organizations go away and have to deal with something which might be quite tactical and doesn’t have a great deal of end-user impact. Maybe they’ve moved from on-premises to the cloud, and actually that’s taken all of their attention and they haven’t been focusing on end-user needs.
[00:16:03.010] – Chris Tubb
And we can say, well, actually, you haven’t made much progress since we benchmarked you a couple of years ago; you still need to attend to these various matters. So when we’re designing these, they do need to endure, they do need to be long-lasting. And I’m almost worried now, Nancy, if we look at what we will do next, just because things like generative AI are so much in our attention right now, that the danger is that you kind of overreact and you put this flavor of the month into these benchmarking models, and it ages very badly. Whereas with the digital workplace maturity benchmark, we were still able to benchmark people’s collaboration using this maturity model that we have after Microsoft Teams came in, after Slack came in, even though those things hadn’t been designed into the benchmark itself – because we were focusing on what people needed to do to collaborate, either in terms of meetings or in terms of working together on documents, et cetera, et cetera, et cetera. So it’s very important to make sure that these are long-lasting and not necessarily fashionable.
[00:17:26.680] – Nancy Goebel
Yes, absolutely. I think that’s part of what makes our overall approach stand out, because technology will evolve, but fundamentally, people need to be able to find things. They need to be able to connect with colleagues, and so on down the line. So we’re really looking at the essence of those needs and practices and comparing those things, and less so the tool of the moment that’s supporting that capability. And so through this conversation, Chris, we’ve explored a little bit about what prompts our members to get benchmarks. You talked a little bit about the fixer-uppers who are looking either to generate a business case for change or to support an active redesign process by conducting a before-and-after snapshot. Are there any other reasons why DWG members get benchmarked that you can think of off the top of your head?
[00:18:26.850] – Chris Tubb
I think one of the key ones – and particularly both on the digital workplace side and on the intranet side – is that these things mean different things to different people. And very often organizations want to take an approach which will lead them through some change, but people are not necessarily aware of what that change should be, or perhaps they disagree on what that change should be. And because it’s designed to be such an objective process – we haven’t mentioned it, but we always have two evaluators go through this. So not only are you having an external set of experts come in to look at what you do, comparing you against a set model and comparing you against your peers, but there are also two of them involved in this process, moderating what each is adjudicating upon or evaluating. So you can be really clear that, whatever internal politics you have, whatever different discussions and directions you might want to take your intranet or digital workplace in, you’ve got nice, objective, neutral data to start that conversation with – and possibly sometimes restart that conversation – making sure that these things are as objective as possible, so that everybody can just take five, start again and go, right, what’s the best thing for us to do?
[00:20:01.460] – Chris Tubb
And one of the things that we always do – because when we’re speaking to people, we typically speak to five or six stakeholders within an organization. It’s not a user-focused piece of research; what we’re talking to people about is their capabilities: what are the management practices that they have within this organization? And we’ll come back with lots and lots of quotes about the various practices and opinions of the people that we’ve spoken to. With about five or six people involved, you get a certain level of anonymity, but we anonymize things as much as we can. So we take out any specific names, we take out any specific department names. We want people to concentrate on the substance of what was said. We don’t want them to try and say, well, was that Bob? Was that Jane? We want people to be able to take our, again, neutral, expert view, take their position and the reality of where they are, and take that forward into their internal discussions. And I think, very often when there’s been a lot of discussion, when we report this back, you’ve provided a useful input that will carry into these decisions as they go forward.
[00:21:41.170] – Nancy Goebel
And I think that point around the independent evaluation is an important one, because we do know, human nature being what it is, that sometimes this space is politically charged, and it’s either the louder voices or the better-funded groups that can drive the agenda. But ultimately we want to take a broader view as an industry around what good looks like, and how we measure against that, in order to improve the performance of the digital workplace as a whole within any given organization. And the other thing that I would add to the list of reasons members get benchmarked is that there has been a cadre of organizations that have used it year on year to set their next-stage ambition. And so that’s part of why having these enduring models that you talked about, with headroom for growth, is important as well – and they can be used to set budget as well as to set strategy.
[00:22:54.160] – Chris Tubb
Yeah, absolutely. Because they are these models of best practice as well – well, good practice; best practice is often contentious, so let’s say good practice. It forces practitioners within organizations to talk about a whole range of things which they wouldn’t normally discuss. Each of these models has eight areas that we look at, and 32 things that we’re going to score. So we’re asking about specific capabilities and practices. And if an organization doesn’t do them, it perhaps prompts a discussion that allows you to say, well, why do we not, for example, use content design practices when we’re coming up with content on our intranet? So it’s a process which allows teams to really explore different skills and capabilities that they perhaps haven’t considered, or perhaps haven’t heard of. And I think it’s a very useful tool to be able to bring a team together and discuss how you work, because it’s only through working together that you’re going to make any of this more effective. It’s not going to get better by accident; it’s not going to get better by ignoring it. You need to use all of these different practices and skills and techniques and everything else, and do it really, really well, as a concerted effort, to reach the potential that, as an organization, you hope you can meet.
[00:24:46.920] – Nancy Goebel
So we’ve talked a little bit – or a lot – about why members get benchmarked. Can you give us a slice of what the process looks like? You mentioned that it’s interview-based and that there’s typically a set of eight different framing points within each of the models, with criteria nested within those. Anything else that we need to know about the process overall?
[00:25:14.430] – Chris Tubb
Exactly. What I would say is that it’s not enormously time consuming for the organization, because we’re looking at capabilities. We’re not doing 25 user interviews and a survey needing 1,500 people. It’s not that onerous. We do a two-hour Teams call where we, for example, look at the intranet, get a good understanding of the business context, get a good understanding of the team that’s looking after it, and of what the intranet looks like and what everything is called, and then we run people through. So we’ll typically speak to what we call a ‘hub’, who gives us this tour – that will typically be, for example, the intranet manager or the intranet product owner. We’ll take them through it. We don’t read out these boring questions; we try and interpret them so that we have a nice, free-flowing conversation, and then we try and take down verbatim notes about what people are saying. In the case of the modern intranet management benchmark, we’ll have this two-hour call and then we’ll organize five or six additional interviews with relevant stakeholders. So, for example, for the modern intranet management benchmark, we might want to speak to the head of internal communications.
[00:26:38.710] – Chris Tubb
We might want to speak to someone in IT, we might want to speak to a couple of key publishers, and somebody who’s looking after the content management – something like that. And what we’re trying to do, for each of these 32 little areas that we’re going to evaluate everybody on, is to gather not only some evidence about what the practice is for each one of these areas, but also opinions around it. Do people think that what is being done is comprehensive? Do they think that there are gaps? Are they disappointed in the way that maybe these things are done or adopted? So that’s it in terms of the involvement: we get a lot through that two-hour screen-share session and five one-hour interviews, and that’s enough for us to go through and basically assign a score to each one of these 32 areas. Those areas are grouped in fours into eight wider metrics, and these are the areas that we can compare directly. We go away and have a look at all of that data – the two evaluators look at all of the data, they score it individually and then bring that together in a scoring meeting and agree scores.
[00:28:02.870] – Chris Tubb
And that’s where, I guess, the clever bit happens, because we’ve got the scores against the model – so it’s a percentage of how well you’ve done in each of these areas; you know how many points, out of a possible eight, you can score. But then, using a simple mathematical calculation, we’re effectively normalizing those scores and comparing how each organization has done against all of the other organizations. And that allows us to go, well, you as an organization are above average or below average. And that then gives a good way of understanding where you might want to place your efforts. We’ll then write up a report. There’s a big, long report which we assume nobody reads. And then we do a PowerPoint summary, which we present back in, again, a two-hour session, and we make recommendations in the spirit of the model. So it’s not a hard sell; it’s not like consulting in the slightest. It’s very much more: we believe you’re self-sufficient as a team, and this is what we would do to improve your performance against this model. And then we ride off into the sunset.
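Chris doesn’t spell out DWG’s exact formula, and the real model is certainly more detailed, but a minimal sketch of the kind of normalization he describes might look like the following Python – assuming each criterion is scored out of a possible eight points, converting an area to a percentage, and then expressing one organization’s result as standard deviations from the peer mean (the ‘above or below average’ comparison). All names and numbers here are hypothetical.

```python
# A minimal, hypothetical sketch of the score normalization described above.
# Assumptions: 32 criteria grouped in fours into 8 metric areas, each
# criterion scored out of 8 points; peer scores are illustrative, not DWG data.

from statistics import mean, stdev

MAX_POINTS_PER_CRITERION = 8

def area_percentage(criterion_scores):
    """Convert raw criterion scores (0-8 each) into a 0-100 area percentage."""
    max_total = len(criterion_scores) * MAX_POINTS_PER_CRITERION
    return 100 * sum(criterion_scores) / max_total

def normalize_against_peers(your_score, peer_scores):
    """Return a z-score: standard deviations above/below the peer mean."""
    mu, sigma = mean(peer_scores), stdev(peer_scores)
    return (your_score - mu) / sigma

# Example: one metric area (four criteria), compared with a peer pool.
your_area = area_percentage([6, 5, 7, 4])        # -> 68.75%
peers = [52.0, 61.5, 70.0, 45.5, 58.0, 66.0]     # hypothetical peer pool
z = normalize_against_peers(your_area, peers)
print(f"Area score {your_area:.1f}%, {z:+.2f} standard deviations vs. peers")
```

A z-score above zero reads as ‘above average against the peer pool’, which is the same idea behind the league-table placements discussed later in the episode.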
[00:29:22.510] – Chris Tubb
So it’s not an onerous amount of work for our contact within a member organization. We do virtually all of the running, and overall it typically takes around six weeks from beginning to end. The most difficult thing for the member hub is to get hold of the four or five colleagues who will speak to us – making sure they’re expecting us, and making sure that they’re the right people. But yes, it’s a very useful output at the end of that process.
[00:30:00.640] – Nancy Goebel
I like to say that it’s a very targeted intervention, where the heavy lifting sits with DWG to frame the evidence-gathering process. And on the other side, not only is there a clear view of the findings, but, importantly, we detail very specific recommendations; and for the members participating in that process, we actually tie those recommendations back to resources in our knowledge base to help jumpstart their next steps. And I think that’s part of what makes our approach a unique one. And so, Chris, is this process one that’s designed to be repeatable?
[00:30:46.110] – Chris Tubb
Yeah, most certainly. We’ve talked about how these are designed to be long-term. We can effectively repeat each of the benchmarks at a later date; we have exactly the same questions to ask, and sometimes we even speak to exactly the same people – that happened earlier this year. So, most definitely, that gives a comparable piece of data. We can then understand, at each of these individual levels, whether things have got better or worse. Hopefully they’ve got a lot better and you see scores go up. Sometimes, I’m afraid, scores go down: attention goes elsewhere; maybe somebody who’s really good leaves; maybe a management practice – like whether you have a steering group meeting – falls into disrepair; or somebody has had a bad experience and gives a very strong piece of opinion which sways some of the scores. So sometimes they do go down. And both of those – whether things have gone up or whether things have gone down – are useful inputs for understanding what you’re going to do next time. Very often organizations, even though they are very high performing because they’ve done it again and again, are still begging to be told what the bad news is.
[00:32:12.330] – Chris Tubb
It’s very often that an organization has been above average across the board, and maybe has an overall performance a couple of standard deviations above the mean, but they still want to know: well, in which areas can we perform better? And I think it’s those teams with a real growth mindset that really get the best out of this. Very often we have organizations for which that’s simply their style: they actively go and look, in all aspects of their business, for external benchmarking, because that’s the way that they approach things.
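As a small illustration of the repeat-benchmark comparison Chris describes – with entirely hypothetical area names and scores, not DWG data – a before/after delta per area might be tabulated like this:

```python
# Hypothetical before/after area percentages from two benchmark rounds.
round_2021 = {"strategy": 54.0, "governance": 47.0, "findability": 62.0}
round_2024 = {"strategy": 71.0, "governance": 44.0, "findability": 62.0}

for area, before in round_2021.items():
    after = round_2024[area]
    delta = after - before
    trend = "improved" if delta > 0 else "declined" if delta < 0 else "held steady"
    print(f"{area:12s} {before:5.1f}% -> {after:5.1f}%  ({delta:+.1f} pts, {trend})")
```

Because the questions and scoring scale stay fixed between rounds, a simple per-area difference like this is enough to show where attention has drifted and where effort has paid off.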
[00:32:55.190] – Nancy Goebel
Yeah, I know. Certainly, we’ve told lots of stories over the years, with permission from specific members, about those transformation journeys that you’ve described. I can remember, once upon a time, we did a series of evaluations with Nationwide, and they went from the bottom quartile of our league table to the top quartile over the space of three years. Why? Because they took the findings and the recommendations very seriously, did the hard work of putting in place the recommendations that came through the process, and were able to show demonstrable progress to their leadership as part of this. And we certainly know, coming out of the pandemic, that the digital workplace is an essential part of every organization. It is the digital headquarters of every organization. And as such, it’s an asset that needs to be managed and evaluated and nurtured, along with every other asset in an organization. And so I’m going to take us in a slightly different direction, Chris. DWG is known for its benchmarking. It is equally known for the Digital Workplace of the Year awards. Can you share a thumbnail on how you think they differ?
[00:34:22.210] – Chris Tubb
I think they differ greatly. I do love all of the awards, not just the Digital Workplace of the Year awards, and it’s always fantastic to see the good work that people have done. You see the beautiful screenshot of a beautifully designed intranet, and it’s always wonderful to see – but there’s way more behind a great intranet than the screenshot you see in an award. There’s a variety of practices that people need to put in day in, day out, week in, week out, to really refine what they do and to keep these things sustainable. So after a year, after two years, after three years after launch, is it still as good as when it won that award? So I think that they’re entirely different things. We see great intranets which perhaps aren’t managed very well. We also see perhaps underinvested intranets that are managed brilliantly, and they still tick the boxes and continue to deliver for their business, even though that business might be in a moment of austerity. So I think that they’re very different, and the management of these things needs to be applied. It’s not just a snapshot in time; it’s something that needs to be… I don’t know whether you could have a category, Nancy, for the best-maintained intranet.
[00:36:05.980] – Chris Tubb
It might not look so great on the slide, I’m thinking.
[00:36:10.790] – Nancy Goebel
I guess, just to add to that, Chris, what I would say is that both the benchmarks and the awards that DWG runs can be an opportunity to celebrate a moment in time, moments of accomplishment. With the benchmarking itself, I see it as a holistic process that allows you to hone in not only on the areas of strength, but also on the critical areas for improvement that will allow you to continue to drive the value and the impact of the digital workplace over time. You happen to key off of the intranet, which is, of course, the front door to the digital workplace. But, as you described earlier, there is a wide variety of areas that we look at in exploring the digital workplace’s performance as a whole. And so, any final thoughts or reflections, or even advice you’d like to offer individuals who are thinking about the role and value of benchmarking?
[00:37:17.870] – Chris Tubb
I think I’d encourage members to benchmark. It allows some really good direct comparisons with your peers. It’s a great way of understanding where you need to apply your efforts. And the more people that go through the benchmarking model, the more useful it becomes. The bigger those league tables are, the more likely there’s someone within the same sector as you, and it makes it easier to justify your improvements or to really validate your efforts. So I’d encourage organizations to really make good use of it.
[00:38:03.120] – Nancy Goebel
And I always say, as part of our benchmarking report backs, that the league tables, as you’ve described them, serve a number of different purposes. Not only do they allow you to see how your organization compares to other digital workplaces in whole or in part, but also they give you some important cues about the organizations to connect with for deeper conversation. We’ve had lots of occasions over the years where organizations actually sat down side by side with another organization to talk through the results of their individual benchmarks and to learn from each other. Because ultimately, the benchmarking process is designed to help you surface new ways of approaching things that perhaps you’ve been struggling with on your own. And I think the added benefit is that within the membership, all of those conversations happen in a confidential setting, which allows these organizations to feel comfortable sitting side by side.
[00:39:12.630] – Chris Tubb
Absolutely.
[00:39:14.290] – Nancy Goebel
So, Chris, have we missed anything?
[00:39:17.410] – Chris Tubb
I’d mention one thing as well, Nancy. DWG is very good at doing this. And one thing that I would say is that, if anybody has an idea for a benchmark, we can develop it. We’ve had a few examples in the past of organizations who wanted to be able to compare themselves to other organizations, but what we had provided didn’t quite fit the box, didn’t quite tick the box. So we have, in the past, actually created entire benchmarks to specifically understand where an organization is in a particular practice. And then we’ve even gone to the extent of comparing other organizations using that benchmark, and sharing an anonymized report afterwards – so we can use them to really show off competitive advantage. And that’s another use of them.
[00:40:23.040] – Nancy Goebel
That’s a great addition to cap off our time together. Chris, thank you so much for coming into the studio today. It’s always a treat to be in conversation with you, whether it’s talking about benchmarking, as we did today, or the research that you’ve published, or just generally what you’re thinking about at a moment in time.
[00:40:43.450] – Chris Tubb
It’s a pleasure, Nancy. Thanks so much.
[00:40:47.370] – Nancy Goebel
Digital Workplace Impact is brought to you by the Digital Workplace Group. DWG is a strategic partner covering all aspects of the evolving digital workplace industry, including boutique consulting services. For more information, visit digitalworkplacegroup.com.