Compliance benchmarking: Benefits, limitations, and best practices

 

What you'll learn on this podcast episode

Guidance from the US Department of Justice, particularly the recent 2020 memorandum, stresses that a company's compliance program must reflect and evolve with its risks, and should not be a snapshot or on cruise control. But in assessing those risks, it's helpful to see what other companies in the same area or circumstances have done to meet them. Collective action and coordination can be very useful in dealing with common risks. So, when are benchmarking and a collective approach to risk helpful? And when can they backfire? In this episode of the Principled Podcast, LRN Director of Advisory Services Emily Miner continues the conversation from Episode 6 about benchmarking with her colleague Susan Divers. Listen in as the two discuss the benefits and limitations of benchmarking, and how organizations can ensure they benchmark their E&C programs effectively.



Where to stream

Be sure to subscribe to the Principled Podcast wherever you get your podcasts.

Listen on Apple Podcasts Listen on Spotify Listen on Audible Listen on Google Podcasts Listen on TuneIn

Listen on Amazon Music Listen on iHeart Radio Listen on Podyssey Listen on Listen Notes Listen on Player FM

 

Guest: Susan Divers


Susan Divers is the director of thought leadership and best practices with LRN Corporation. She brings more than 30 years of accomplishments and experience in the ethics and compliance arena to LRN clients and colleagues. This expertise includes building state-of-the-art compliance programs infused with values, designing user-friendly means of engaging and informing employees, fostering an embedded culture of compliance, and sharing substantial subject matter expertise in anti-corruption, export controls, sanctions, and other key areas of compliance.

Prior to joining LRN, Mrs. Divers served as AECOM's Assistant General Counsel for Global Ethics & Compliance and Chief Ethics & Compliance Officer. Under her leadership, AECOM's ethics and compliance program garnered six external awards in recognition of its effectiveness and Mrs. Divers' thought leadership in the ethics field. In 2011, Mrs. Divers received the AECOM CEO Award of Excellence, which recognized her work in advancing the company's ethics and compliance program.

Before joining AECOM, she worked at SAIC and Lockheed Martin in the international compliance area. Prior to that, she was a partner with the DC office of Sonnenschein, Nath & Rosenthal. She also spent four years in London and is qualified as a Solicitor to the High Court of England and Wales, practicing in the international arena with the law firms of Theodore Goddard & Co. and Herbert Smith & Co. She also served as an attorney in the Office of the Legal Advisor at the Department of State and was a member of the U.S. delegation to the UN working on the first anti-corruption multilateral treaty initiative. 

Mrs. Divers is a member of the DC Bar and a graduate of Trinity College, Washington D.C. and of the National Law Center of George Washington University. In 2011, 2012, 2013 and 2014, Ethisphere Magazine listed her as one of the "Attorneys Who Matter" in the ethics & compliance area. She is a member of the Advisory Board of the Rutgers University Center for Ethical Behavior and served as a member of the Board of Directors for the Institute for Practical Training from 2005 to 2008. She resides in Northern Virginia and is a frequent speaker, writer and commentator on ethics and compliance topics.

 

 

Host: Emily Miner


 

Emily Miner is a director of LRN's Ethics & Compliance Advisory Services. She counsels executive leadership teams on how to actively shape and manage their ethical culture through deep quantitative and qualitative understanding and engagement. A skilled facilitator, Emily emphasizes co-creative, bottom-up, and data-driven approaches to foster ethical behavior and inform program strategy. Emily has led engagements with organizations in the healthcare, technology, manufacturing, energy, professional services, and education industries. Emily co-leads LRN's ongoing flagship research on E&C program effectiveness and is a thought leader in the areas of organizational culture, leadership, and E&C program impact. Prior to joining LRN, Emily applied her behavioral science expertise in the environmental sustainability sector, working with non-profits and several New England municipalities; facilitated earth science research in academia; and contributed to drafting and advancing international climate policy goals. Emily has a Master of Public Administration in Environmental Science and Policy from Columbia University and graduated summa cum laude from the University of Florida with a degree in Anthropology.

 

 

Principled Podcast transcription

Intro: Welcome to the Principled Podcast, brought to you by LRN. The Principled Podcast brings together the collective wisdom on ethics, business and compliance, transformative stories of leadership and inspiring workplace culture. Listen in to discover valuable strategies from our community of business leaders and workplace change makers.

Emily Miner: Guidance from the US Department of Justice, particularly the recent 2020 memorandum, stresses that a company's compliance program must reflect and evolve with its risks and should not be a snapshot or on cruise control. But in assessing those risks, it's helpful to see what other companies in the same area or circumstances have done to meet them. Collective action and coordination can be very useful in dealing with common risks. So when is benchmarking and a collective approach to risk helpful, and when can it backfire?

Hello, and welcome to another episode of LRN's Principled Podcast. I'm your host, Emily Miner, director of Advisory Services at LRN. Today I'm continuing my conversation from episode six about benchmarking with my colleague Susan Divers, our director of Thought Leadership and Best Practices. We're going to be talking about the benefits and the limitations of benchmarking and how organizations can ensure they benchmark their E&C programs effectively.

Susan brings more than 30 years' experience in both the legal and E&C spaces to this topic area, with subject matter expertise in anti-corruption, export controls, sanctions, and other key areas of compliance. Susan, thanks for coming on the Principled Podcast.

Susan Divers: Oh, Emily, it's always nice to talk with you.

Emily Miner: So Susan, before we get started, let's kind of define benchmarking and summarize the conversation that I had in our last podcast with our colleague Derek. Benchmarking means comparing what you do as an organization, in this case, to a usually large number of comparable organizations or individuals. And most often, this is done in a quantitative way, although there are also opportunities to benchmark qualitatively.

And at LRN, we've been using benchmarks for a number of years now through our research reports. We've conducted major panel research on the role of ethical culture in an organization and in an organization's risk of misconduct, looking at how that varies across countries and across industries. Every year we conduct research into ethics and compliance program effectiveness, research that you lead and that you and I collaborate on. And we've been doing that for, oh gosh, coming up on, I don't know, maybe eight years now. That's given us an insightful look into ethics and compliance program best practices and how they've evolved over time. We've also conducted research on codes of conduct, analyzing nearly 150 codes of conduct from top publicly listed companies around the world and looking at similarities, differences, and best practices in that space.

But we have a brand new product at LRN that we're launching later this month that I know we're all really excited about called Catalyst Reveal, which is a platform that will, as its name suggests, reveal insights to our clients about their ethics and compliance program, things like course-level training data, employee sentiment, and ethical culture. It will also give our clients the ability to see how their results along these metrics compare with other organizations in the LRN client universe, looking at it by industry, by company size, and a few other comparable filters.

So with that exciting launch as our backdrop, I wanted to talk to you as an expert and a thought leader in this space about benchmarking compliance programs, when to do it, when not to do it, et cetera. So let me turn it over to you, Susan, and let's start with the benefits. What are the benefits of benchmarking an ethics and compliance program?

Susan Divers: Sure, Emily, I'd be happy to talk about that. In thinking about this topic, there are three really good functions that benchmarking is appropriate for, and then there are some where it's not so appropriate, and we can talk about all of that. But starting with what it's very appropriate for, the first is, if you're setting up a program, you need to figure out what are the basics that you need to do at the outset. And it can be very helpful, particularly if it's a new program, and it usually is if you're setting one up, to be able to say to your management, "We have to have a code. We have to have policies. We have to have audit. And we have to have training," because those are kind of the four basic pillars, and to be able to make that case. That's very basic, but it can be very helpful for people who are struggling to get started in what we all know is a really complicated area.

So that's kind of the first setting where benchmarking I think can be very helpful. And then the second is you've got your program and you're up and going. Now, no two companies are alike, no two industries are alike, and I can get into that a little bit later, but it's helpful to know if you're mainstream or not. Like for example, our Ethical Pulse Culture check lets you sort of get an idea from a short questionnaire embedded in our platform in Reveal whether your culture is really out of whack or pretty much along the same lines as mainstream. And again, that's really helpful because it can show you an area where you're maybe excelling and it's good to take credit for that and scale it, or it can show you an area where you're deficient and it's good to know about that too.

And then the last is, and this is where for example Ethisphere has done a lot of really good work, best practices. People are constantly innovating. I'm always amazed at how ethics and compliance programs are changing and getting better. And we can talk about that a little bit, and Reveal's going to be very helpful there. But benchmarking can give you ideas that can be very valuable for enhancing your program. So those are sort of the three big areas where I think benchmarking can be extremely helpful.

Emily Miner: Yeah, thanks Susan. And on that last point that you shared, that's really resonating because if nothing else, benchmarking or surveying what other companies are doing out there with respect to ethics and compliance and different facets of that, it gives you as an ethics and compliance professional just an idea of what's possible. Maybe there's a new approach to communicating with your employees that you haven't thought of that might work for your organization.

I'm at the SCCE's Compliance & Ethics Institute right now, and there was a session yesterday about one particular organization's evolution of its compliance program following some significant trust that was lost in the organization due to senior leader misconduct. One of the things that they talked about was having employees around the globe put on skits that they turned into videos that dealt with ethics moments, and how the actors, who were the employees of the organization, would kind of get famous around the world for their skits. It was a very lighthearted way of communicating very serious topics that resonated for this particular organization. But a lot of people in the room were asking questions, "Oh, well, how could I put together a skit like that? Did you write the script or did the employees come up with it?" and this and that. It's just a way of sharing ideas and fostering innovation across the industry that can be really exciting and powerful.

Susan Divers: Yeah, that's a great example, but maybe it's time to talk a little bit about the limits of benchmarking too, because that's a good illustration of the point that benchmarking is good for the three things we just talked about: setting up, making sure that you're in the mainstream and not at either end, or maybe you want to be excelling, and then getting ideas and best practices. What it's not good for is saying, "Hey, we met the criteria." And the reason is that there is no single set of criteria. In fact, there was a quote two days ago or so from the CEO of Advanced Micro Devices, and she said, I quote, "It's like running a different company every two years."

So the point I'm trying to make here is that your program has to be based on your risks, and those risks can change dramatically, certainly in the semiconductor area, and that's what she was talking about. The risks have changed; they changed radically with all the changes with China, the export sanctions, and the war in Ukraine. So it's not enough to say, "Hey, I'm doing what everybody else is doing in that area."

And secondly, the other big problem is comparing apples to apples. I picked three consumer companies to sort of illustrate this. One is Walmart, which obviously is a big consumer company. Another is PepsiCo, another is Mondelez. And if you look at all three, they all have really different risk profiles. They may be in the same area generally, but Walmart's much bigger than the other two. Walmart had a major scandal a number of years ago where they wound up paying, I think it was $137 million in 2019, because in order to get permits for their stores in Latin America, particularly Mexico, their lawyers were actually paying bribes. When you think about it, that should have been something that they were sensitive to on their risk profile, both training and auditing the local lawyers. There were also some lawyers on their teams internally. That was a risk, and they failed to mitigate it.

PepsiCo does bottling, and Mondelez has plants as well, but it's not quite the same level of regulatory intensity as setting up a store, hiring people, environmental health. So I use that example because I'm trying to pick an industry and say, "Well, if you compared yourself to one, you might miss some of the particular risks that you have."

One of the other things to bear in mind, and you alluded to it when we started, is that DOJ has never recommended benchmarking in any of its guidance. In fact, they've said things that kind of contradict benchmarking if you were using it to say, "Hey, we met the norm." They've said, "You don't want to be on cruise control," and that's because things change. And they've also said, "You don't want to just take a snapshot of your program at a given time." And that's kind of what the CEO of Advanced Micro Devices was saying too. That's because any time you're looking backwards rather than forwards, you could miss the iceberg that's looming up ahead and going to sink the Titanic. So at any rate, I think benchmarking can be very useful, but you have to use it for the right purposes and you have to bear in mind the limitations.

Emily Miner: Right. Absolutely. It's never the be-all and end-all. It's one data point that we should be collecting and looking at in some situations and not others. And in those situations, it's one of many that we should be considering when we're thinking about program effectiveness.

Susan Divers: Yeah, it's an element. Yep, absolutely.

Emily Miner: So let's kind of tease this out a little bit more. Where do you see benchmarking being helpful? I know that you gave those three scenarios, but maybe if you could pick out a concrete example to share against any of those three scenarios to illustrate how it can be helpful or when it can backfire.

Susan Divers: Sure. Well, let's pick another consumer company, Anheuser-Busch. This is a great example because it illustrates how benchmarking can be used very effectively to drive a best practice. Anheuser-Busch had a very prominent CECO who left in the last couple of months to go to the Department of Justice. When he was there, he set up an internal data analytics program that was able to pull data from their own systems, payments, SAP of course, onboarding, and pick out red flags without, if you will, human intervention. In other words, he was able to take a number of data streams from various parts of the company and meld them together. And because he was a very good CECO, he was able to figure out what some of the risk signs, or the red flags, were.

What it did is it enabled Anheuser to manage its third parties, and if you think about it, beer companies have a lot of third parties, beer distributors. And then they could focus in on those companies, those third parties where there were red flags. They didn't have to audit everybody to the same degree of intensity. That approach of internal data analytics was a best practice that was already gathering steam, but once Matt really took it to the next level and showed how it could be done, it became mainstream in the E&C area. And Matt's now at DOJ. So if you're going to go in and have tense talks with regulators, being able to talk about what you're doing in benchmarking is important. And it takes us back to Reveal, which is a really powerful tool that we've developed that will enable you to see red flags or predictive factors. And again, remember, looking backwards doesn't really help you because it doesn't tell you if there's a big iceberg about to sink the Titanic.

But looking forward and saying, gosh, the data that's coming in from Asia, on attempts to pass courses or on our Ethical Pulse Culture check or other features, is worrying. It's nothing specific that we know about at this point, but, and I'm just picking on Asia randomly here, it indicates that we need to spend some time in Asia figuring out what's going on.

So that's really an excellent use of benchmarking, and it's a good story about how understanding what best practices are emerging, and then adapting them for your organization, pays off, because nobody could simply take Matt's system of third-party analytics, plug it into their company, and come up with the same results. It has to be tailored and it has to be specific. But that's a really good example of what DOJ is talking about in this area when they say you have to tailor your program to your risks. So does that make sense?

Emily Miner: Yeah, absolutely. It's a great example with Anheuser-Busch and the system that they set up. I want to talk about specific types of data that we collect, or can collect, in ethics and compliance, because I feel like the two most common ones that organizations want to benchmark are training completion rates, a metric that is easy to collect and is often one that is shared, and hotline data. "Oh, my hotline reports. How do they compare?" And the hotline providers will publish annual benchmarking reports on hotline data.

So we've got course completions, we've got hotline data, but we also collect other data points, or there are other places where we could look, to think about program effectiveness. I'd love to hear from you: as you think about the universe of ethics and compliance data, where do you think benchmarking holds water and where does it not?

Susan Divers: That's a great question, Emily, and I'm glad you asked it. Let's start with the hotline, because that's a really good example in a lot of ways of two of the pitfalls. One of the major pitfalls that we touched on is: are you comparing apples to apples or apples to potatoes? Take a company like Starbucks, for example. They have 300,000 relatively young, many of them first-job, employees. Are they going to call the hotline if they see something or are worried about something? The odds are probably no, even though they've got a big, young, engaged workforce, because they're inexperienced. Most of their employees, I was talking to their CECO last week, really haven't worked extensively in the workplace. So Starbucks might have really low hotline numbers.

Another company that's largely unionized, on the other hand, because unionized workers generally know about the hotline and they know about formal complaint processes, they'll have high hotline usage compared to other companies. Let's just pick a slightly ridiculous example, but a big manufacturer of clothing like the Gap or something. You'll have unionized workers in the plants, but Booz Allen is a consulting company. Are you going to compare hotlines between Booz Allen and the Gap? That really is an apples to potatoes comparison.

So I think hotline benchmarking, and I know most of my colleagues in the E&C area would agree is very, very difficult because you'd have to really know what the workforces are to try to get an idea. And then secondly, it can be driven by other factors such as when I was at AECOM, we deployed a lot of people in the Middle East and the conditions were harsh. So our hotline complaints would go up when people were under stress, but another company might not have that circumstance.

Emily Miner: Yeah, that's such a great point: when you're considering using benchmarking, you have to be really thoughtful about what that benchmark pool is made up of. The union example is such a great one because even within the same industry, you compared the Gap to Booz Allen, but even within the manufacturing industry, for example, not every manufacturing company has a unionized workforce. So you can think, "Oh, well it's manufacturing, so it's comparable," but it might not be, depending on the workforce dynamics. And that level of insight isn't always available when we're using benchmark data sources.

Susan Divers: We forgot one thing that both of us know, which is, I think the last stat I saw was that more than 90% of meaningful issues are not raised through the hotline; they're raised in conversations with managers. So I've never been a fan of hotline benchmarking.

Emily Miner: Yes, absolutely.

Susan Divers: But to turn to training completions, that's an interesting one too. Again, it really depends. If you're using an old-fashioned training provider whose library consists of 45-minute or even longer lectures, sort of Soviet-style, on the evils of sexual harassment, first, it's probably not very effective. And secondly, a lot of people won't complete a 45-minute course just because it's long. If the training is repetitive and hectoring, they'll drop out. Whereas the kinds of courses that we have and that we emphasize are very engaging; they tend to be shorter, more microburst learning.

So again, what are you comparing? Do you have a lot of employees on the shop floor? Well, it's hard for them. They can't really just take a break, sit down at their laptop, and open up a course on antitrust. So again, I think training completions can be tricky. It doesn't mean it isn't interesting to see that data, but figuring out, again, whether you're making an apples-to-apples or an apples-to-potatoes comparison is really important. And then secondly, remember, it's retrospective. It's not telling you anything about what's coming around the corner.

Emily Miner: Mm-hmm. One thing that we've focused on in this discussion is comparing ourselves to other organizations. I mean, that was how I defined benchmarking at the outset, but there's also internal benchmarking, comparing your own performance year over year, or whatever the period of time is. When you were just talking about training completion, it made me think about that internal comparison, though less so with training completion, because that tends to be high; a lot of companies mandate training, so there can be penalties for not completing it. It's high for that reason alone, whether or not it's good or relevant to employees or they liked it or whatever.

But think about metrics like pass/fail rates or number of attempts or test-outs, some of those more nuanced training-related data points, and comparing against yourself year over year and seeing what has changed and what might be the result of that. Maybe you noticed in year one that it was taking the majority of your employees, or a significant minority of your employees, more attempts than you wanted to answer certain questions correctly related to a certain risk topic. And so, as a result, you rolled out some focused communication, and maybe you targeted specific groups of people who you noticed were particularly struggling for additional manager-led conversations, or whatever it might be. And then in year two, does that pass rate or attempt rate improve? That's a helpful metric because you're comparing apples to apples, you're comparing yourself to yourself, and you're able to connect it back directly to specific interventions that you made to drive improvements in that area.

So I just wanted to point out that benchmarking can be done internally as well. It's not always an external exercise even though that does tend to be how we talk about it.

Susan Divers: Well, and you're exactly right, and that's where it gets really valuable because first you can make sure that you're comparing apples to apples. For example, if you've just done a merger and suddenly your population of employees has doubled, well obviously then you know that you've got a much different comparison year over year, but you can break that down and you can make those comparisons by manipulating the data.

Secondly, your Ethical Culture pulse survey is a really good tool year over year, adjusted for employee population size and for new people coming into the company, after a merger for example. And it can be proactive. It can, again, spot trends, as you were just saying, that indicate you may need to spend more time with people. But the beauty of internal benchmarking, particularly the way Reveal has set that up for our clients and made it easy, is that you can get genuine insights by looking at what happened last year and what happened this year, and you know some of the reasons why there may have been a change. Whereas if you're comparing yourself to, I don't know, Ernst & Young, you don't. You don't have visibility in terms of their numbers. So internal benchmarking, I think you're right to stress that, and it's a very, very valuable tool.

Emily Miner: I've done, as you know, a lot of work with organizations evaluating and assessing their ethical culture. The trend that I've noticed with those clients that we've done this type of work with year over year is that the external benchmark just matters less and less. It's important kind of in year one and maybe year two, but after that it ceases to be relevant, and the companies don't really care what it is anymore, because they're not shooting for the benchmark. The benchmark is often the average, and they want to be above average. And so it's more about competing with yourselves: how did we improve against our own performance last year?

And so that's just been interesting to observe. I think as companies get more robust in their use of data and their tools and how it informs their strategy in some areas like ethical culture for example, that external comparison just becomes less relevant over time.

Susan Divers: That's a really good point too. And that gets back to the Department of Justice saying, "Don't put your program on cruise control." I do remember, I think it was 15 years ago, when benchmarking was much more trendy and before people really thought through the limitations, someone was bragging that they had benchmarked their program against Boeing. Boeing then subsequently had major meltdowns left, right, and center, most specifically and tragically the 737 MAX, where people died. So running around saying, "Hey, my program benchmarks well against Boeing" may not have been really a compliment to the program in the end. But it also misses the point, which you're making, that you have to look at your program, at what's gaining traction with your people, and at where the proactive red flags are emerging, because that's what enables you not to be Boeing. Not to pick on Boeing, but it's a good example.

Emily Miner: So Susan, let's wrap up by offering some recommendations to organizations that are thinking about program effectiveness, how they measure that. They want to have those benchmarks. Maybe they fall into those three scenarios that you outlined at the beginning. What recommendations or best practices would you offer to those organizations, to your peers?

Susan Divers: Well, the first one is be really smart about it and avoid comparing apples to potatoes. And to do that, you have to really think it through. What are we comparing to whom and how similar are they? I really, again, think that's most useful for kind of like, "Are we in the mainstream? Or is there something maybe we forgot?" If it turns out that everybody in your industry has suddenly amended their training curriculum to train about trade controls in the wake of the Ukraine war and you haven't, well, that's a helpful benchmark.

But I think the main ones that are valuable are what we were talking about with best practices, and the creative use of data analytics tailored to that particular company is a great example of that. And then the second one, as you pointed out, which I think is equally valuable and really essential too, is internal benchmarking, up to a point, where you're able to see what direction things are going in. And again, it's more in the nature of red flags rather than a way of saying, "Hey, we met the requirement, we're good." It's, "How are people doing this year compared to last? What does that tell me about where I need to focus my resources?"

Emily Miner: Mm-hmm. Mm-hmm. Yeah, Susan, thank you so much. And thank you for joining me on this episode. We are out of time for today. So to everyone out there listening, thank you for listening to the Principled Podcast by LRN. It was a pleasure to talk with you, Susan.

Susan Divers: Oh, it's always a pleasure to talk to you, Emily.

Outro: We hope you enjoyed this episode. The Principled Podcast is brought to you by LRN. At LRN, our mission is to inspire principled performance in global organizations by helping them foster winning ethical cultures rooted in sustainable values. Please visit us at lrn.com to learn more. And if you enjoyed this episode, subscribe to our podcast on Apple Podcasts, Stitcher, Google Podcasts, or wherever you listen and don't forget to leave us a review.

 
