
Can vs. Should: How Salesforce integrates ethics into AI technologies

What you'll learn on this podcast episode

Amid all the conversations about artificial intelligence in the marketplace, there is a growing focus on the ethics behind AI technologies. How do we ensure the responsible development of generative AI tools? What role do we play in the ethical deployment of AI-oriented business initiatives? In this episode of the Principled Podcast, host Emily Miner examines these questions with Rob Katz, the vice president of product management for responsible AI and tech at Salesforce. Listen in as the two discuss what ethical AI means in practice and how organizations can better integrate ethics into the development of their products, technologies, and services.

Read the National Institute of Standards and Technology’s AI Risk Management Framework. 

Read Salesforce’s AI Acceptable Use Policy.  


Where to stream

Be sure to subscribe to the Principled Podcast wherever you get your podcasts.

Listen on Apple Podcasts Listen on Spotify Listen on Audible Listen on Google Podcasts Listen on TuneIn

Listen on Amazon Music Listen on iHeart Radio Listen on Podyssey Listen on Listen notes Listen on PlayerFM

 

Guest: Rob Katz


Rob Katz is the vice president of Product in Salesforce's Office of Ethical and Humane Use of Technology. He co-leads Salesforce's Responsible AI & Tech team, which ensures the company's products and software are designed, developed, and delivered based on a foundation of Responsible AI principles and best practices. He and his wife Clara live in the Seattle area with their two children.

Host: Emily Miner


Emily Miner is a vice president in LRN’s Ethics & Compliance Advisory practice. She counsels executive leadership teams on how to actively shape and manage their ethical culture through deep quantitative and qualitative understanding and engagement. A skilled facilitator, Emily emphasizes co-creative, bottom-up, and data-driven approaches to foster ethical behavior and inform program strategy. Emily has led engagements with organizations in the healthcare, technology, manufacturing, energy, professional services, and education industries. Emily co-leads LRN’s ongoing flagship research on E&C program effectiveness and is a thought leader in the areas of organizational culture, leadership, and E&C program impact.

Prior to joining LRN, Emily applied her behavioral science expertise in the environmental sustainability sector, working with non-profits and several New England municipalities; facilitated earth science research in academia; and contributed to drafting and advancing international climate policy goals. Emily has a Master of Public Administration in Environmental Science and Policy from Columbia University and graduated summa cum laude from the University of Florida with a degree in Anthropology.

 

Principled Podcast transcription

Intro: Welcome to the Principled Podcast brought to you by LRN. The Principled Podcast brings together the collective wisdom on ethics, business and compliance, transformative stories of leadership and inspiring workplace culture. Listen in to discover valuable strategies from our community of business leaders and workplace change makers.

Emily Miner: Amid all the conversations about artificial intelligence in the marketplace, there is a growing focus on the ethics behind AI technologies. How do we ensure the responsible development of generative AI tools? What role do we play in the ethical development of AI-oriented business initiatives? They're tough questions, and they're ones that the software development company Salesforce strives to answer each day through its Office of Ethical and Humane Use. 

Hi, and welcome to another episode of LRN's Principled Podcast. I'm your host, Emily Miner, Vice President of Advisory Services at LRN. Today I'm joined by Rob Katz, Vice President of Product Management for Responsible AI and Tech at Salesforce. We're going to be talking about what ethical AI means in practice and how organizations can better integrate ethics into the development of their products, technologies, and services. Rob is a real expert in this space, having spent the last decade of his career leading ethical and global product management efforts for Salesforce as well as Amazon. Rob, thanks so much for joining me on the Principled Podcast. 

Rob Katz: Hi Emily. Thanks so much for having me today. 

Emily Miner: Rob, let's start by talking about the Office of Ethical and Humane Use at Salesforce. What is the purpose of this office, and what is your role within it? 

Rob Katz: Thanks, Emily, for that question. Salesforce created the Office of Ethical and Humane Use of Technology back in 2018, and it was at the direction of our CEO, Marc Benioff. Marc was asking himself how to deal with ambiguous and hard topics related to questions he was getting from our internal team, from our customers, from stakeholders, and from policymakers about not just whether we can build a particular technology, but whether we should build a particular technology. 

For example, Salesforce was being asked whether or not our technology should be used to facilitate the sale and marketing of certain classes of weapons, firearms, and ammunition online. Now, Salesforce makes customer relationship management software. It allows organizations to connect with their customers, and so some of our customers are businesses and organizations that sell and market a wide variety of things to their clients, both direct to consumer as well as business to business. If an organization wants to sell firearms and ammunition online, it's technically legal to do so in the United States. 

But Salesforce really was uncomfortable with this and, led by the Office of Ethical and Humane Use, created a policy that prohibited our commerce cloud, which is what it sounds like, an e-commerce technology, and our marketing cloud, which is also what it sounds like, a technology that allows people to market products and services, from being used to sell or market these kinds of weapons and ammunition directly to consumers in the United States. That's the kind of ambiguous question that you might have to deal with inside of a large technology company like Salesforce. It's not just whether you can do it, it's whether you should do it. There's a range of topics that we've covered inside the Office of Ethical and Humane Use over the years. 

Emily Miner: Rob, I love the framing that you're using around whether we can versus should build something. That's a framework that we think about a lot at LRN, and I think many in the ethics and compliance space think about it and use it to frame what their organizations should be achieving. For example, there's been a marked shift in the ethics and compliance community away from telling employees what they can and can't do, so here's your list of rules, and toward enabling employees to think about what they should and shouldn't do, which really comes down to values and ethical principles. It's a framework that I think is really analogous, but within your specific context in the tech community. I'm kind of taken by that framing. 

You mentioned it briefly, but to expand on it a little bit: were these ambiguous questions coming from within Salesforce, from your employees, or was it more a reaction to questions from your customers, or a bit of both? 

Rob Katz: As far as the formation of the office, a lot of the initial questions that spurred the creation of the Office of Ethical and Humane Use of Technology came from employees. This was a time, in 2017 and 2018, when a lot of employees inside larger organizations, especially in the technology space, were asking hard questions about what their technology was being used for and what they were building, and whether it was their responsibility to ensure that the technology they were building was used for the desired, stated good purpose and not for something that was unintended. 

It does, to your point, come down to values. Salesforce has five core values. Our number one value is trust, and our number two value is innovation. How do you build something that is trusted while also creating space for innovation? So it's not simply about saying no, we're not going to do this, but rather asking, how might we? In that space of trying to figure out how might we, we created an Office of Ethical and Humane Use of Technology not only to deal with policy questions like the firearms policy that I alluded to earlier, but also to embed directly into our product and engineering organization, to work directly with the people who were designing, developing, and delivering the software that we build, in order to ask, as I said before, not just whether we can but whether we should, and how might we build it in a more responsible and inclusive way. 

That's why I ended up coming to Salesforce: to build that function of embedding directly into the product and technology organization and to lead an effort to ensure that our ethical use principles uphold the values of trust and innovation in partnership with the people who are building the Salesforce technology. 

Emily Miner: Thanks for that. When I was reading different articles and white papers that your office has published in preparation for our conversation, I noticed that it seemed as if there's some sort of interchangeability between the terms ethical and trust, or maybe that's not the right way of putting it, but rather that you can't have one without the other. Often in your writing there's this baseline premise that an ethical development, an ethical deployment, an ethical use of a product, a technology, a service, whatever, can't be divorced from that being trusted, keeping that idea of trust really at the center of how you're doing what you're doing. It's interesting now to learn that that's actually Salesforce's number one value. So it's pulling those threads together for me. 

I want to dig a little bit deeper into some of those frameworks. You mentioned also trusted AI principles, and I know that Salesforce has five trusted AI principles, and then you also have five guidelines for responsible generative AI development. I'm wondering if you can talk a little bit about those and how the principles and the guidelines work together. Do you draw on them at different stages in the product development cycle? What's that interplay, and how are they used in practice? 

Rob Katz: Absolutely. Well, let me take it back to something you were saying as well about trust and about ethics. One of the hardest and most common questions I get from folks inside and outside the company is, "What do you mean by ethics, and what do you mean by trust?" Now, if you zoom all the way back out to 1999 when Salesforce was started, it was a pioneer and still is a pioneer of cloud computing. It took what was then an on-prem software solution, in which people would keep track of their customer relationships using software kept on a server in the corner of a room by the IT department, and pioneered a multi-tenant shared service model in which those data would be stored in the cloud by Salesforce on behalf of those customers. They had to trust that we would not let company A see company B's data and so on and so forth. 

That's why trust was Salesforce's number one value in the first place. It was incumbent on us to convince our customers, who until then had only wanted to keep their data, a very valuable asset to them, in a corner on a server run by the IT department, not only that they could trust us to steward it for them, but also that it would be available all the time. So we built out availability dashboards that were very transparent in showing, here's the uptime of this service. 

Trust is ultimately about how someone feels. When you think about trust and you trust someone, it's about how you feel. Ethics on the other hand, is more normative. It is sort of a set of moral values that you are imposing and we as a society impose on ourselves and on one another. So to translate a set of feelings and an inherently normative concept like ethics into a very quantitative organization like the team that builds software or the team that designs products is a really interesting challenge. 

We start with a set of principles that we can all agree on: the trusted AI principles and the responsible generative AI principles. Responsible generative AI has to be accurate, honest, safe, empowering, and sustainable. Those are based on public standards, they're based on the way that our internal employees are telling us how they feel, and they're based on emerging best practice. The interesting part of the job for me is not only creating those principles, which by the way are not set in stone, we can always revisit them, but figuring out how to operationalize them. So we take the feeling of trust or the normative sense of ethics, translate them into a set of principles, accurate, safe, honest, empowering, sustainable, and then say, okay, we're going to build a generative AI technology. In Salesforce's case, it's going to help a salesperson generate an email, or it's going to help a service agent come up with a really helpful reply during a chat. 

How do we ensure that that email that's generated or that chat that is developed or anything else that is using generative AI meets or exceeds that standard of accurate, safe, honest, empowering and sustainable? Those nuances are the sort of building blocks of the work that the responsible AI team is doing at Salesforce. 

Emily Miner: The examples that you provided are so helpful. When I think about the scale of all the ways that Salesforce, from the commerce cloud, the marketing cloud, and the other tools that are available within Salesforce, can support your clients in the interactions they have with their own customers, it's really quite staggering to think about all of those actions being developed with these five guidelines in mind. 

It's why I love the way that you described it, that the guidelines are how we operationalize our principles, because at the end of the day, that's really what it comes down to. That's where you take the proverbial values off the walls and really embed them in how we do business, putting those values, or in this case the principles, into action. Your team must have been very busy over the past five years to really get into the gears of all of Salesforce's various operations. I know you talked a little bit at the beginning about one of the primary roles of your office as partnering with the teams within Salesforce that are actually building your products and tech. 

Rob Katz: Well, on that we stand on the shoulders of a lot of other teams as well. What's been really interesting and cool about this job is that, yes, we're pioneering a new function of how to translate something normative like ethics into the technical nitty-gritty of designing, developing, and delivering software that helps companies sell, do customer service, market, handle analytics, communicate, all of the things that Salesforce does, but we are also learning from best practices. We learned from our own security team and the security organization; cybersecurity as an industry is still in its relative youth. It didn't really exist 35 years ago. It sounds crazy, but I remember malware, worms, viruses. I remember installing antivirus software on the family computer back in the '80s and '90s, and we don't do that anymore as consumers for the most part because cybersecurity has grown up to handle everything from threat detection to vulnerability identification and everything in between. 

The reason that we've done that is that we've created the right set of principles around security and the right set of repeatable, scalable systems and processes. I know that that sounds pretty dry, repeatable, scalable systems and processes, so forgive me for that, but that's actually where the rubber meets the road. You can do something at scale when you come up with a set of principles for security or for privacy. So we work really closely with our security team, with our privacy team, with our legal teams to ensure that everything that we're doing is in line with not only the letter but also the spirit of the law. Ethics is a team sport at Salesforce, and we're not only working with the product team and the engineering team that are building the software, but we're working with all of those other functions, some of which have been around for a long time and have really well-established norms and processes and systems that we can ladder onto in order to be as effective as possible inside of the company. 

Emily Miner: That's such a good call out, Rob, and the way that you put it about standing on the shoulders of others, and ethics as a team sport. I want to talk a little bit now about how it works partnering with your product and engineering teams because when I was doing some research, I read your AI Ethics Maturity Model white paper, which suggests a maturity model for building an ethical AI practice. In it I was struck by the use of the term ethical debt. I'm familiar with technical debt, which is, for those listening, what can happen when coders prioritize speedy development over perfect code. But I had never heard of ethical debt, and I suspect that many of our listeners might also be unfamiliar with that term. What does that mean? 

Rob Katz: Great question. It is the sister or cousin or sibling of technical debt. When we are working closely with a product or engineering team, we will partner with them and all of their supporting cast, designers, researchers, marketers, competitive intelligence folks, everybody ... It's a lot of folks ... To figure out what we should build, how we should build it, and when we should build it. Then a stark reality sets in, and anybody who's worked in product management can understand this stark reality: you can do whatever you want with technology, assuming you have unlimited time and unlimited resources. Now, the stark reality is that we don't have unlimited time, and we don't have unlimited resources. 

To give an example of what an ethical debt would be: it might be that we need to ensure we're building an explainability feature into this particular AI tool. We're going to build V1 of that explainability feature that doesn't have all of the bells and whistles, but we're going to note that we need to build V2 and put it on the product backlog. It could be on there as a feature, it could be on there as a bug, it could be on there ... Just depends on the team. But the point is that we have to log it and we have to keep track of it. When we make recommendations to a team, we make sure that they're identifying what are must-haves versus nice-to-haves, and anything that's a nice-to-have gets put back onto the list of stuff that needs to get built next time. 

Then we go through and prioritize that ethical debt alongside any other technical debt. By speaking in the language of our stakeholders, in this case, software engineers and product managers, we're able to better understand and relate our priorities to theirs. It helps us build rapport, but it also helps us characterize these things not as, oh, well, that's a high-minded academic concept coming out of the ethics team in the ivory tower, but rather, this is a very tactical, practical request that we need to consider as important as anything else on the backlog. 

Emily Miner: Thank you so much for that explanation of what ethical debt is. I really appreciate what you said at the end about using the language of your stakeholders. I've been struck many times in my career by how effective we can be when we're translators. It's all about finding common ground with your stakeholders and reframing your goals in language and terms that those with whom you need or want to work can understand. It's just interesting to see how this is playing out within Salesforce, and it makes a lot of sense now when you describe ethical debt as the sibling of technical debt and why that relationship framing makes sense, so thanks for that. 

I want to switch gears a little bit here and talk about ethical risk as opposed to compliance risk because I think that many of our listeners who are coming from the ethics and compliance space would have a perspective on what constitutes an ethical risk as opposed to a compliance risk. I think we can kind of roughly go back to that can versus should framing that we were talking about at the beginning. How are your clients or how can your clients use AI to mitigate ethical risk as opposed to compliance risk? Are there any specific examples that you can point to to kind of paint a picture for AI's potentiality there? 

Rob Katz: A lot of times the ethical risks may be a precursor to compliance risks that will be coming, so consider data privacy. Everybody knows about data privacy, especially in the ethics and compliance space, because of the General Data Protection Regulation, GDPR, or its cousin in the United States, CCPA, or the lack of a federal privacy law in the United States, and there are privacy laws everywhere. Back in 2016 and 2017, when data privacy questions were being raised, they were often being raised by people working in roles like mine: should we be using these data for these reasons? What does it mean to take an approach of data ethics? A lot of those laws were written by people who were working on and asking questions like, should we allow data to be shared this way? Should we allow data to be retained this way? Should we allow data to be used this way? 

These data ethicists were shaping what ended up being a monumental law from a data privacy perspective that came with a large compliance burden. Organizations that were thinking about their data ethics best practices early on were in a much better place to comply with a law like GDPR or CCPA when the time came in 2018 for GDPR to go into effect. I would argue that we're in a very similar place right now when it comes to AI and particularly generative AI. Organizations should be taking a responsible AI by default approach, so they're coming up with a set of principles. Salesforce's principles are public, and you can use them: accurate, safe, honest, empowering, sustainable. Go for it. Or you can use your own. 

There are a number of different AI ethics principles out there, and there are public risk management frameworks. I would recommend the one built by the National Institute of Standards and Technology. If it's okay, we can link to that in the show notes. It's a risk management framework that allows an organization to identify, effectively, and not to overuse the Wayne Gretzky quote, where the puck is going. It helps anticipate where the EU AI Act and other AI regulatory actions are headed so that organizations can get ahead of what's coming. 

Now, if that's not where the compliance bar is ultimately set, you've still done a good thing. It's not like you're investing and then you have to throw it all away. Taking a responsible AI by default approach, using a set of principles and a risk management framework like NIST's, is going to help you set your organization up for success regardless. Whether or not a regulator comes asking you to show it, prove it, or sign an audit that says you're compliant, it's actually just as important that you can show it to your internal stakeholders like your employees and your customers, because we're increasingly being asked by our customers: how are you thinking about AI and data retention? How are you thinking about AI and bias prevention? How are you thinking about AI and auditability and explainability? 

These are the kinds of questions that, whether they're being asked by a regulator, by an employee, or by a customer, you can anticipate, that we can anticipate. As a result, an ethics approach or a responsible AI approach effectively anticipates where the compliance bar is going to be set, even though it's not quite clear yet, because the regulatory space is still up in the air. But I would argue that if data privacy is any indication of where things were going and where they ended up, we're going to see something very similar when it comes to regulation and generative AI in particular. 

Emily Miner: I think you've answered my next question in that description, because I was going to ask what you saw as the role of regulation with respect to AI, and perhaps generative AI specifically, versus self-policing or self-monitoring. I think what I'm taking away from what you just said is that the regulations will come, maybe they're not here yet or perhaps not explicitly enough yet, and so you may as well get ahead of that by thinking about it now. You also made the point that even though we can reasonably expect, indeed know, that there will be more regulation in this area, that in and of itself isn't the only reason to consider what one's principles for the use or development of AI tools are, because we also exist in a multi-stakeholder business environment, and we have the interests of employees and the interests of our consumers and our customers. Is that a fair summary of your stance on regulation versus self-policing? 

Rob Katz: Regulation has a huge role to play here, to ensure a fair and even playing field for everybody and to ensure that we're also creating space for innovation. Salesforce advocates for, and has been working on advocating for, a risk-based approach to regulation when it comes to AI, which is to say there are really, really important things to focus on when it comes to protecting people and fostering innovation. We should focus on the most high-risk applications that could cause significant harm or that could impact individuals' rights and freedoms, so we're thinking about consequential decisions, and we have published an AI Acceptable Use Policy that actually outlines this in detail. We believe we're the first enterprise company to have an AI Acceptable Use Policy. We're recording this episode in October 2023, so by the time someone might be listening to it in a few years, potentially other organizations will have done this as well. 

For a company like Salesforce, our Acceptable Use Policy governs how our customers can and can't use the software, and putting out an AI Acceptable Use Policy, which we can link to in the show notes, says: here are the things that you can and can't do with our AI. For example, you can't use it to make legal, financial, or medical decisions, but you could use it, if you were a lawyer, to summarize all of the client notes; it would not be usable to actually advise the client. Similarly, you could potentially use it as a doctor, if it were compliant with the appropriate regulations in the United States, that would be HIPAA, and if it were handling protected health information, PHI, in an appropriate way, so two ifs there, but you could use it to summarize notes from a shift in the emergency room. 

I have a good friend who's an ER doctor, and she is often stuck at the hospital very late after her shift is over writing up her charts. That would be a great way for generative AI to augment a human being, a highly trained human being, so that that doctor or lawyer or financial advisor could focus on what a human being does best, which is to apply judgment and to express empathy with other human beings, and not to summarize all of the notes from a long shift in the ER or all of the notes from a day's worth of legal consultations or financial consultations. So there are opportunities for AI to really augment what humans do best, but we want to ensure that it is regulated so that AI is not taking the place of the judgment that is necessary, especially in consequential decisions. Does that answer the question around the role of regulation? 

Emily Miner: It does, and it makes me think about something I read a number of months ago that I think is such a perfect and terrible case study of these risks that you're highlighting, particularly those involving consequential decision-making. What I'm referring to is the National Eating Disorder Association. People could communicate with the National Eating Disorder Association either by calling or by chatting, and they replaced their human chat staff with a generative AI tool that ended up offering really counterproductive and dangerous advice to those who were chatting in. So, for example, telling them to count their calories, the benefits of calorie deficits, how to measure their body fat, and where to buy the tools that would help them do so, which, as we can all see, is obviously incredibly counterproductive and dangerous for people who have eating disorders. It's a terrible real-life example of exactly what you're talking about, and it just made me think of that. 

Rob Katz: Yes, and yesterday was World Mental Health Day, and we in the United States especially are suffering from a severe shortage of qualified mental health professionals. So I can understand the good intentions of trying to use technology to augment and to serve as many people as possible with good, high-quality counseling when it comes to eating disorders. And you're absolutely right, it would be against our AI Acceptable Use Policy, and it should be against where a thoughtful, risk-based AI regulatory regime goes, to wholesale substitute for people and their judgment in the process of providing mental health care. It wouldn't be a huge leap to ask, what would it take to use generative AI to augment the work of those mental health care professionals so as to be able to serve as many people as possible? Not to replace them, not to take them completely out of the conversation. It's the nuance and the ambiguity of what could we do with this? Why? How? When? How do you assess each and every one of those, in the software world we would call it a job to be done, and then decide where the line is? 

The nature of doing responsible AI or ethical technology is that it's not clear where that line is, and it requires, again, human judgment. It's easy to look at these examples where a lawyer used AI to write a brief for their client or the organization that you were speaking about replaced their human counselors with AI. We're not talking about ripping people out here. If anything, we're talking about augmenting humans with technology in a thoughtful risk-based approach. 

Emily Miner: Rob, personally, when it comes to AI and its growing prominence in our everyday lives, and the thousands of ways that I'm not even aware it's part of my life, I vacillate. On one hand, I'm really scared about the implications and about us getting ahead of ourselves, maybe before there's a robust regulatory framework, as an example. On the other hand, I recognize the really beneficial power that the use of AI can have on our society, such as, and you raised such a good example, addressing the severe lack of mental health resources in the United States. 

I guess it just gives me great comfort to hear you describe Salesforce's perspective of AI as augmenting or empowering humans as opposed to replacing them, because of course, that's another fear that many have: will AI take my job? But it gives me great comfort to recognize that there is a middle ground here, and it's not all or nothing, while still recognizing and appreciating where that line is vague. I think that goes back to what we were talking about at the beginning: the need to have principles that govern the way we operate in these gray areas, and multi-stakeholder discussions on where the line is and how that might change depending on the circumstance or the job. So I leave this conversation feeling really hopeful. 

Rob Katz: Me too. I'm an optimist when it comes to this, and it's really strange sometimes. People say, you're an optimist, but you work in responsible AI. How could you possibly be optimistic? Look at all of the doomsday stuff in the news. We're still very much in the early days of what AI can and will be doing, and these calls to pause or stop are unrealistic. The toothpaste is out of the tube. It's not going back in. The question is rather how we build and deploy these tools to ultimately augment human beings' work. I live in a world of enterprise software, so it's all about augmenting human beings' work with these tools to make work better and less boring and to eliminate drudge work. 

I'm going to go on vacation. When I come back from vacation, I would love to use generative AI to summarize all of the conversations that happened in my chat application. At Salesforce we use Slack, and if it can summarize what happened in that channel, then it saves me a huge amount of time and allows me to be more productive and make that work more fun. There's a lot of opportunities like that. If we focus on what those opportunities are while also being clear-eyed about where the risks lie with respect to bias, toxicity, safety, data privacy, and to be careful about what we're doing, then it gives us an opportunity to harness the power of AI while mitigating the risks of it. 

Ultimately, I feel very fortunate because I get to work on this topic every day with wonderful people who care and who care about what we're building for our customers and for society, but who also are going to take an eye of optimism toward it as well, which is we want to go build something that wouldn't have been possible otherwise, and we want to be able to look back in the future and say we're proud of what we built. 

The sad truth is that there are a lot of people out there in the technology space who are looking back today in 2023 at what they built in 2012 or 2013 or 2014 or 2015 and saying, "I'm not sure we should have done that." I think that the rise of responsible AI and responsible tech is in response to that question of, I'm not sure we should have done that in retrospect. Being thoughtful and mindful about everything is a great place to start. 

Emily Miner: Rob, thank you so much for giving us a peek into how Salesforce is thinking about the ethics of AI and of responsible AI. Like I said, you're giving me a lot of hope and also optimism, so I appreciate that. I also want to put a plus one on the AI to summarize my inbox and my chats after coming back from vacation as I was out last week. So I've spent the past few days digging myself out of that. That would've been really helpful to have a summary. But let's end our conversation on this note of optimism, and thank you for joining us and spending time. 

Rob Katz: Sounds good. Thank you so much for having me, and I'll look forward to talking again soon. 

Emily Miner: My name is Emily Miner, and I want to thank you all for listening to the Principled Podcast by LRN. 

Outro: We hope you enjoyed this episode. The Principled Podcast is brought to you by LRN. At LRN, our mission is to inspire principled performance in global organizations by helping them foster winning ethical cultures rooted in sustainable values. Please visit us at LRN.com to learn more. And if you enjoyed this episode, subscribe to our podcast on Apple Podcasts, Stitcher, Google Podcasts, or wherever you listen. And don't forget to leave us a review.

 
