
What do responsible AI and machine learning look like for business leaders?


What you'll learn on this podcast episode

Generative AI is on the agenda of nearly every company right now. Business leaders are grappling with how to use it in products, services, and workflows. Managers and their teams are wondering if artificial intelligence is coming for their jobs. Regulators are trying to wrap their arms around it, as its potential for misuse is high. If you’re concerned about corporate ethics, culture, and compliance, what is your role in the responsible development and deployment of AI-oriented business initiatives? On the Principled Podcast, host Jen Üner talks about responsible AI with Seth Dobrin, Ph.D., president of the Responsible AI Institute. Listen in as the two unpack what “responsible AI” means and how business leaders can move forward in this rapidly changing landscape, a shift surely as monumental as the invention of the internet.


Where to stream

Be sure to subscribe to the Principled Podcast wherever you get your podcasts.

Listen on Apple Podcasts Listen on Spotify Listen on Audible Listen on Google Podcasts Listen on TuneIn

Listen on Amazon Music Listen on iHeartRadio Listen on Podyssey Listen on Listen Notes Listen on PlayerFM


Guest: Seth Dobrin, Ph.D.


Dr. Seth Dobrin is a leading expert in artificial intelligence (AI) and its application to business. He is the president of the Responsible AI Institute. Previously, he was IBM's global chief AI officer, where he led the company's AI strategy. He has also held senior positions at other Fortune 500 companies, where he used data and AI to create billions of dollars of top- and bottom-line value.

Dr. Dobrin is a sought-after speaker and advisor on AI. He has been featured in major publications such as The Wall Street Journal, Forbes, and The New York Times, and on major broadcast networks such as the BBC, PBS, and NPR. He has also spoken at numerous conferences and events around the world, headlining top-tier events such as IAA Global 2022, AI Week Rimini, TNW Conference 2022, Reuters Momentum 2023, AIMed Global 2023, Total Retail Tech 2023, and many others.

Dr. Dobrin is a passionate advocate for the responsible use of AI. He believes that AI has the potential to solve some of the world's most pressing problems, but that it is essential to ensure that AI is used in a way that benefits all of humanity. Dr. Dobrin holds a Ph.D. in Molecular and Statistical Genetics from Arizona State University.

Here are some of his most notable achievements: 

  • DataIQ 100 USA 2024
  • Corinium's Top 100 Leaders in Data & Analytics 2022
  • AI Innovator of the Year by AIconics in 2021
  • "100 Most Influential People in AI" by Onalytica
  • "100 Most Influential People in Big Data" by DataQuest
  • "Top 50 AI Influencers" by Analytics India Magazine
  • "Top 100 AI Thought Leaders" by AI Business Review
  • "Top 100 AI Influencers in Europe" by Datanami

Dr. Dobrin is a visionary leader who is shaping the future of AI and a true pioneer in the field. He is using his expertise and passion to make a positive impact on the world.


Host: Jen Üner


Jen Üner is the Strategic Communications Director for LRN, where she captains programs for both internal and external audiences. She has an insatiable curiosity and an overdeveloped sense of right and wrong, which she challenges each day through her study of ethics, compliance, and the value of values-based behavior in corporate governance. Prior to joining LRN, Jen led marketing communications for innovative technology companies operating in Europe and the US, and for media and marketplaces in California. She has won recognition for her work in brand development and experiential design, earned placements in leading news publications, and, as founder of the LA Fashion Awards, hosted a closing bell ceremony at the NASDAQ in honor of the California fashion industry. Jen holds a B.A. from Claremont McKenna College.


Principled Podcast transcription

Intro: Welcome to the Principled Podcast, brought to you by LRN. The Principled Podcast brings together the collective wisdom on ethics, business, and compliance; transformative stories of leadership; and inspiring workplace culture. Listen in to discover valuable strategies from our community of business leaders and workplace change makers.

Jen Üner: Generative AI is on the agenda of most every company right now. Business leaders are grappling with how to use it in products, services, and in workflows. Managers and their teams are wondering if AI is coming for their jobs. Regulators are trying to wrap their arms around it, as its potential for misuse is really high. If you're concerned about corporate ethics, culture, and compliance, what is your role in the responsible development and deployment of AI-oriented business initiatives?

Hello, and welcome to another episode of LRN's Principled Podcast. I'm your host, Jen Üner, strategic communications director at LRN and a co-producer of this podcast. Today I'm joined by Dr. Seth Dobrin, president of the Responsible AI Institute. We're going to be talking about responsible AI: what that means, and how business leaders can move forward in this rapidly changing landscape that is surely as monumental a shift as the invention of the internet. Seth Dobrin is a real expert in this space. Until September of last year, he was the global chief AI officer at IBM, and he has a prolific work history in digital strategy and data science at places like Monsanto and Motorola. Dr. Dobrin, thanks for joining me on the Principled Podcast.

Seth Dobrin: Yeah, thank you very much for having me. I really appreciate it, Jen, and I'm excited about the opportunity to talk to you and your audience.

Jen Üner: We are excited too. So you have been at the forefront of the AI discussion since I've been paying attention. First, tell us a bit about yourself. 

Seth Dobrin: Yeah, thanks for that. When we look at where AI is today, it's still very nascent, and a lot of people in this field don't have what you would think would be a regular degree for AI. So my background is actually in human genetics. My PhD was in human molecular and statistical genetics, and this was at the tail end of the Human Genome Project. And I'm self-taught in this field, because part of what we had to do, we were generating what was a tremendous amount of data back then, and we had to figure out: how the heck do we analyze and process all of this data? And so that's when we saw the rise of things like R and Python and a lot of the base machine learning and statistics that underlie what we think about as AI today. I still continue to apply both parts of my degree, both the genetics as well as the data science, if you will, across startups, across some of those multinationals that you mentioned, and in academic situations.

And eventually, as you mentioned, I wound up at Monsanto, where I led the data and AI portion of their digital transformation, which is seen as one of the most successful digital transformations to date. Then I moved over to IBM to become their global chief AI officer, where I worked to define what their overall strategy was across the business, helped with implementation across research, across consulting, and across software, and helped many of IBM's biggest customers, which are basically the Fortune 1000, figure out how to develop strategies, and implement those strategies, to get real business value. And then I left IBM in September, as you mentioned, to become the president of the Responsible AI Institute, which is a member-based nonprofit focused on creating assessments that align to global standards. And these standards are all harmonized, so think ISO, IEEE, UCAST, depending on what part of the world you're in.

Jen Üner: That is so necessary, I think actually that kind of clarity of thinking is going to be really valuable for business leaders moving forward. So I can't wait to see where that goes. 

So moving on: Thomas Friedman, I don't know if you saw that piece, Thomas Friedman recently had an opinion piece in The New York Times where he says, "We as a society are on the cusp of having to decide on some very big trade-offs as we introduce generative AI." In that article he actually cites LRN's founder, Dov Seidman, and he likens our potential missteps with AI to those with social media. He says, "There was a failure of imagination when social networks were unleashed and then a failure to respond responsibly to their unimagined consequences once they permeated the lives of billions of people. We lost a lot of time along the way in utopian thinking that only good things could come from just connecting people and giving people a voice. We cannot afford similar failures with artificial intelligence."

That was a really long quote, but I think it gets to a really important issue: the stakes do seem higher this time around than even with social media. What's your take on this? Do you see a correlation to society's and governments' handling of social media? First of all, do you think there's a correlation there?

Seth Dobrin: Yeah, absolutely. In fact, I'd actually go a little further and say there's a direct correlation, especially with generative AI, to where we were with the internet in the late '90s and early 2000s. The internet started as the wild, wild West: no security, no safety. And it wasn't until we implemented that security and safety that we broadly started seeing enterprises adopt and transact on the internet. We're in a similar position with generative AI, where there is no safety or security, and we need to think about what that will look like. Now, moving on to social media, I think he's spot on, and I've been quoted on podcasts and major media networks saying essentially the same thing. We missed the boat with social media in terms of getting the right handle on it, of not making the assumption that humans only do good, because we know they don't.

And in fact, all of the bad things, or most of the bad things, we think about as AI are really just mirrors of society, because AI models are trained on the past decisions of humans that are instantiated in data, and especially these generative AI models. They're trained on very large data sets, often the whole or significant parts of the internet, and they're just mimicking back the way humans behave on the internet, social media, Reddit, things like that. And in fact, it really is a reflection of how we as a society have failed ourselves and really driven inequity and a lack of inclusion through things like the internet. And we see this divided world that's a direct result of not having appropriate regulation, not having appropriate controls, and allowing companies to treat us as humans as products. That's essentially what these social media companies do: they monetize us as humans, and we've allowed that. We embrace it as a society, in fact.

Jen Üner: Yeah, it makes me think about "move fast and break things." Friedman argues against that, in fact, and it sounds like you do as well. I'd like to talk about the trade-offs, the trade-offs that we make between that free and open approach and that controlled and regulated approach. This is kind of your area of focus, right, with the Responsible AI Institute?

Seth Dobrin: Yeah, I mean, I think there's still a place for some version of "move fast and learn from it." I don't know that I would go so far as to say "break things," but move fast, learn from it, and adjust. But this is where the challenge is going to be for us: in the context of regulation. To the point we made before, with both social media and the internet, we said we'd regulate ourselves, and we failed miserably at that. And so we can't make that mistake today. Now, we have to remember that governments move very slowly. Even the EU AI Act, which is not in place yet, was just moved from the working groups that were defining it to the European Parliament. And if we look back at GDPR, the General Data Protection Regulation that the EU implemented last time around, even after it became regulation, they gave people two years to comply with it.

We don't have two years; at the rate this field is moving, two years is too late. So in that gap, how do we make sure we implement this correctly? And then also remember that in the US, there is no legislation even in process. There are things that have been proposed in committees, but none of them have moved out of committee yet. So how do we provide that level of protection and prevent this move-fast-and-break-things mindset when we're implementing this technology? And I think, to your point earlier, that's where an organization like the Responsible AI Institute can help. We build what are called conformity assessments, which basically, as I mentioned in the intro, allow organizations to align their AI systems, and these could be generative AI systems as well, to globally recognized standards like ISO, which, by the way, the regulators have pointed to and said, "As we start implementing the regulation, these ISO and other standards are a good way to understand whether you're going to comply with the regulation."

We think about regulation as something that slows down innovation when, in fact, regulation, and what you would think of as regulations internal to an organization, which are policies, good, solid, clear, transparent, and concise policies, actually improves innovation. There's a PwC report that came out that validates this: organizations that have good AI regulations actually get more innovation and get AI into production faster, because everyone is aligned as to what is permissible and what is not permissible. So the innovation that occurs happens in value-added things, instead of in things that will get stopped or will cause harm in the long run.

Jen Üner: That goes so much to what we do and what we study at LRN. You need those sorts of external validators as a measure of trust so that you can move faster, with ISO, with SOC 2; there are other kinds of things that you can point to. And then what you're saying about the internal controls, the internal side, it's really about aligning around those shared values: this is how we're going to do it, and this is why we're going to do it this way. I think that does lubricate business, for lack of a better term. It does help streamline processes when you have that sort of working framework.

Seth Dobrin: And I think one of the gaps often seen in organizations that are trying to do this right is that there's no system of accountability within the organization. As I used to say when I was at IBM, and still say: who's the person that's going to be wearing the orange jumpsuit if something goes wrong? Or who's the person that's going to have to sit down with the New York Times reporter, or the Financial Times or Wall Street Journal reporter, and explain why something went wrong for the organization?

And then who's the person that's responsible for each individual system? In fact, this relates to something that was just announced today, and I know the podcast's not going out today, but today is May 9th: the IAPP, the International Association of Privacy Professionals, has started an AI governance effort to make sure that privacy professionals, who seem to be the de facto person wearing that orange jumpsuit, if you will, if something goes wrong, are trained in AI and understand AI regulations, understand the intricacies of AI and how it's different from traditional data privacy. So this is a huge step forward, to have a professional organization like the IAPP start building training and start helping privacy professionals figure out how to do that. And I've had the honor of being part of the governance board for that.

Jen Üner: That's really interesting and really good to hear. I think a lot of it does lean into that data privacy topic, because so much of it is shared content models, and then of course there are all the privacy issues depending on what you're putting into the machines to get answers. That's actually really interesting; we're definitely going to be looking into that. One of the things that I was going to say about that, too: you were talking about who's going to wear the orange jumpsuit and who's responsible in the organization. One of the things that we've noted also is that the DOJ is actually looking to create greater accountability throughout the organization for any kind of breach or misconduct. So it's interesting, because the potholes get bigger with AI since there are so many unknowns. It's going to be interesting to see how the DOJ ultimately responds to some of these things, although they're definitely using regulation as a parameter and not just good governance.

Seth Dobrin: And if you look back at similar kinds of activities, even beyond the internet, if we look at things that we consider important to public health, and I almost see this as a public health issue, we have regulatory bodies like the CDC, the FDA, the USDA, and the EPA, governmental organizations that are responsible for regulation, and most countries have similar governing bodies. We need something like that for AI. It's that important. It's as ingrained in society today as food is, as medication is, as agriculture is, and this is an important topic that we need to get serious about as a society and as citizens of the world.

Jen Üner: And we're still just kind of figuring that out just from an internet standpoint. 

Seth Dobrin: We haven't figured it out yet. 

Jen Üner: No, we haven't. To nerd out for just a sec, since you're immersed in the tech world, I was going to actually quote one of my former colleagues, Dr. John Underkoffler, who also holds a PhD, from MIT. He's very much interested in spatial computing and the spatial operating environment, and he collaborated with Steven Spielberg on the film Minority Report, really about what next-gen tech could look like. But one of the things that he always said, that I thought was really, really interesting, was that we don't really have AI yet; it's just machine learning. Because for him, AI had something to do with some kind of intelligence, and not just, I guess, rote learning and spitting things back, and that there would be some potential ascensions as you get into the word intelligence. He always said it was a misnomer. And I would love to get your take on where we are. Is there still really, truly that bifurcation? Do you see these things coming together? Where are we on that?

Seth Dobrin: Yeah, I agree a hundred percent with him. I mean, AI is pretty much a marketing term at this point. It includes a whole lot of things, including machine learning. Some people call rules "symbolic learning" and lump it in with machine learning. And even these large language or generative AI models, things like ChatGPT or Stable Diffusion, those are just machine learning. They're a part of machine learning called deep learning, which is, you've probably heard of neural networks. So that's all just machine learning. And as I alluded to at the beginning, it's just taking information that's out there in the world and regurgitating it. It is not truly AI. We often talk about the difference between where we are today and what Dr. Underkoffler is referring to. What he's referring to is what we talk about as general AI or sentient AI. We're nowhere near that. We've been saying we're going to be there for 50 years. We're probably closer today than we ever have been, but I think it's still going to be another 50 years.

Jen Üner: Good. So maybe I'll miss it in my lifetime. 

Seth Dobrin: Your kids and grandkids won't, though. 

Jen Üner: Yeah, exactly. Sorry, guys. As long as we're talking about seminal folks in the world of technology: famously, the godfather of AI, Geoffrey Hinton, who has been working in machine learning since the 1970s, resigned from Google last month after a decade working there on AI initiatives, ostensibly so he could speak more freely about the risks. Since our audience is a hundred percent about risk mitigation in their organizations, they might point to things like GDPR and other data privacy and even copyright laws as sources of protection that are already in place. But what do you think ethics and compliance leaders should be looking out for when it comes to risk? If Geoffrey Hinton is jumping out to talk about risk, what is your take on that, and what can our people, ethics and compliance folks, be doing policy-wise or workplace-culture-wise to help their companies?

Seth Dobrin: Yeah, so I think it's not surprising that Geoffrey left Google so he could talk more openly about where we are with the risks and the challenges, and even the rewards. That was part of why I decided to leave IBM: any senior executive in a company is representing the company, and our individual thoughts may not represent the company's. Leaving gives us freedom to operate and to speak more truth, and more plainly, than we would in a large organization. So I think you see lots of people leaving; maybe it's not as widely publicized, because we're not the godfathers or godmothers of AI. But I think it's important that privacy professionals in organizations, and people who care about ethics and things like that, identify for their organizations, and IBM was actually the first large company to do this, what our ethical principles around AI are.

So what are the things that we are absolutely not willing to do? Be transparent, to your employees and to the world: what are the things you're just not doing? For instance, at IBM we said we're not going to create general facial recognition tools that can be used for things that we don't approve of, such as mass surveillance. Be very clear and very bold about those principles. Have an ethics governing body that actually has teeth. Again, IBM was the first company that we know of to set up an AI ethics board that was commissioned by the board of directors of the company. It had teeth: it stopped projects. It resulted in large contracts not being presented to clients because they didn't align to our ethical principles. And so I think it's important, as professionals and leaders in this space, that we take a stand for our company, that we provide a stand for our company.

So what is our position on this? And we have a way to make sure that the company adheres to that. And where we get these kinds of fuzzy areas, there's an escalation process. COVID is a great example. IBM said, "We're not going to do facial recognition." COVID came, and you needed to do facial recognition for mask detection. Now, that's a fuzzy area, because how you do that facial recognition is important. Do you do it on a mass surveillance basis, where you can look in broadly and then identify people? Or do you do it, for instance, when someone is walking into a building and badging in, where you then know who that person is and it's a one-to-one match? Or even outside of COVID: in the US we have Global Entry, where you walk up to a Global Entry booth that looks at your face and ties you to your passport.

You've consented to that by signing up for Global Entry. They're very clear about what you're doing, and it doesn't scan the entire passport database. It says, "Okay, this is Seth. We know he's one of the people that flew in on this flight. Let's match him to his passport." And so that's a whole different conversation than, say, police forces looking at cameras around the world and identifying people, keeping track of people, surveilling people, and using that against them.
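To make that distinction concrete in engineering terms: one-to-one verification and one-to-many identification often use the same similarity math but differ radically in scope and consent. A minimal sketch, assuming an embedding-based matcher, with random vectors standing in for real face embeddings and a hypothetical acceptance threshold:

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Similarity between two face embeddings (1.0 = same direction).
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical stand-ins: a real system would produce these vectors with a
    # face-embedding model applied to camera frames and enrollment photos.
    rng = np.random.default_rng(0)
    live_capture = rng.normal(size=128)        # face seen at the booth or badge reader
    enrolled = {"seth": rng.normal(size=128)}  # one consented enrollment record
    database = {f"person_{i}": rng.normal(size=128) for i in range(10_000)}

    THRESHOLD = 0.8  # where to set this is itself a policy decision

    def verify(live: np.ndarray, claimed_id: str) -> bool:
        # One-to-one verification (Global Entry, badging in): compare the live
        # face only against the record the person claims to be. Narrow, consented.
        return cosine_similarity(live, enrolled[claimed_id]) >= THRESHOLD

    def identify(live: np.ndarray):
        # One-to-many identification (mass surveillance): scan an entire database
        # to work out who a face belongs to. Same math, very different ethics.
        best_id, best_score = max(
            ((pid, cosine_similarity(live, emb)) for pid, emb in database.items()),
            key=lambda pair: pair[1],
        )
        return best_id if best_score >= THRESHOLD else None

    print(verify(live_capture, "seth"))  # checks one consented record
    print(identify(live_capture))        # scans 10,000 people

The code paths differ by a single loop, which is why the governance question Dobrin raises, not the technology alone, determines whether a deployment is a one-to-one match or mass surveillance.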

Now, to be completely clear, there are some organizations in our government that need to do that, right? I would never participate in some of the things that governmental agencies do, because it doesn't align to my ethos. But in order to keep the world safe, we need to know who bad people are, and those agencies need to do these things. And so while I don't want to directly participate in those, there are things that need to happen.

And if you happen to be a privacy professional in one of those organizations, you need to figure out where the gray area is. So for instance, the US and NATO have said they're not going to do automated release of weapons, of any kind of deadly force, without a human being involved. An AI may say, "You should do this," but a human is ultimately making that decision. So even governmental militaries, right? Even militaries have guidelines and an ethos that they adhere to. The US and the UK have been very transparent about what that ethos is, and they have a body internally that governs this. Every organization should do that.

Jen Üner: Oh, completely agree. I mean, you said gray areas; I think so often things are only getting more gray, and that's where the ethics side and the values side of ethics and compliance really become important, because those are the things that are going to help you navigate, and help you create the right kinds of guardrails, structures, and frameworks that are going to reiterate the values that the organization has and guide the group's behavior. So they do have to be really thoughtfully generated by the humans.

Seth Dobrin: Yep. I'll put in a direct plug for the Responsible AI Institute here. That's something that we help our members do: identify and develop governing principles, governing documents, and policies for organizations in this area, and then ways to measure against those. That's what we were established to do. And if you're interested in learning more, you should reach out to me or go to our website, responsible.ai, and learn more. We'd love to talk to you and your audience about that.

Jen Üner: Oh, yeah. It definitely aligns with what we do as well, around compliance training and ethics and compliance program management, and again, measurement and best practices. So, definitely really interesting. Just for a minute, going up the food chain: pretend you're a CEO or a board director of a large company, let's just say a big company, 10,000-plus employees, perhaps operating in several countries, which of course have several different kinds of priority structures around AI and around corporate governance. What's your best piece of advice for your peers now, as a CEO, when it comes to AI? How should company leaders be thinking about it? What's the stance they should take to best prepare for the future?

Seth Dobrin: I think I've alluded to it throughout the whole conversation we've had so far, Jen. Number one, the board of directors and the CEO should anoint someone who's wearing that orange jumpsuit. They should put in place a governing body that establishes the proper governance policies and controls for the organization, and they should ensure that that body has significant teeth. If it says something and the organization doesn't like it, and it gets escalated to the CEO, she should defer to that governing body. She should never override them. It's like parenting: if the kids ask mom one thing and then ask dad and get a different answer, that's chaos. We need to make sure that with something this important, there's no chaos in the organization. The only way to do that is to make sure that the entire senior leadership team is behind a governing body as important as an AI ethics body.

Jen Üner: That's fantastic advice. So if you were going to add one more thing to that, what would it be? 

Seth Dobrin: Number two would be: I would empower my organization to be creative with AI, and have measurements that are in fact value-added. Oftentimes when we get into AI or data science projects, we think about what I call the vanity metrics: how many models do I have? How many people are using that model? How many API calls do I have? As a business, we don't really care about those. They're cool, they're interesting. If you're a CIO or a CDO or a chief AI officer, those are cool things you can stand up and say at a conference or among your peers. But what organizations really care about is the value driven to the business, both in cost savings and new revenue. So what is that value? And then, how does that align to the human values of the organization? So, getting to the ethics: every AI project should start off with, who's the user and who's impacted?

And those two may or may not be the same person. Who is using an AI model in a lending process, for instance? An underwriter. So if I'm a mortgage broker, I'm using the AI. But the person that's impacted is the person who applied for the loan, and they are clearly not the same person. So you need to keep both of them in mind throughout the whole time you're building that AI model, especially if it impacts the health, wealth, or livelihood of a human. Those are really critical, and the two things are very related.

Jen Üner: I love those examples, like really practical, really clear examples. I love your thinking around this, and I do look forward to having more conversations about this. Seth, we could have this conversation all day, but we are running out of time, and I do want to thank you for joining me on the Principled podcast. 

Seth Dobrin: Yeah, thank you so much for having me. It was a great conversation, and I appreciate the opportunity, as I said, to talk to your audience. Check out the Responsible AI Institute, and check out the new IAPP AI governance body.

Jen Üner: Will do. My name is Jen Üner, and I want to thank you all for listening to the Principled Podcast by LRN.

Outro: We hope you enjoyed this episode. The Principled Podcast is brought to you by LRN. At LRN, our mission is to inspire principled performance in global organizations by helping them foster winning, ethical cultures rooted in sustainable values. Please visit us at LRN.com to learn more. And if you enjoyed this episode, subscribe to our podcast on Apple Podcasts, Stitcher, Google Podcasts, or wherever you listen. And don't forget to leave us a review.


Ready to upgrade your ethics and compliance program?

We’re excited to give you a personalized demo of the LRN solution. We’ve been a trusted ethics and compliance partner for over 25 years. With over 30 million learners trained each year, we optimize ethics and compliance programs across the globe to help save your team time, increase engagement, and align with regulation.