
Ethical AI integration and emerging compliance challenges

What you'll learn on this podcast episode

As AI technology evolves, businesses face challenges in ethical implementation. In this episode of the Principled Podcast, Amy Hanan, Chief Marketing Officer at LRN, talks with Justin Garten, LRN’s Senior Director of AI and Data Science, about the AI Implementation Principles brought forth by the White House and Department of Labor.

With experience at Mantium and Google, Justin shares insights on balancing innovation with ethical caution, fostering social innovation, and establishing clear AI governance. Tune in to discover strategies for mitigating AI’s impact on workers, aligning with regulations, and preparing for AI’s future.



Where to stream

Be sure to subscribe to the Principled Podcast wherever you get your podcasts.

Listen on: Apple Podcasts, Spotify, Audible, Google Podcasts, TuneIn, Amazon Music, iHeartRadio, Podyssey, Listen Notes, and Player FM.

 

Guest: Justin Garten


Justin Garten is the Senior Director of AI and Data Science at LRN. He has led the development of cutting-edge AI applications as an AI consultant. Before joining LRN, he was at Mantium and Google, where he developed, trained, and deployed various AI models while contributing to policy development on data privacy and AI usage.

Host: Amy Hanan


Amy Hanan is the chief marketing officer at LRN. A B2B digital marketing leader, Amy has a nearly 20-year track record in product, brand, lifecycle, and demand-generation marketing as well as corporate communications for media, professional services, and technology companies. One of her central areas of expertise is executing tech-enabled marketing initiatives for growth. Before joining LRN, Amy was the chief digital officer at Baretz+Brunelle, a marketing and communications agency serving the legal and financial services industries. Her previous experience includes Reorg Research, ALM Media and The Associated Press. She holds a Bachelor of Arts degree from Northern Arizona University. 

 

Principled Podcast transcription

Intro: Welcome to the Principled Podcast, brought to you by LRN. The Principled Podcast brings together the collective wisdom on ethics, business and compliance, transformative stories of leadership, and inspiring workplace culture. Listen in to discover valuable strategies from our community of business leaders and workplace change-makers.

Amy Hanan:   Are businesses truly equipped to navigate the ethical complexities of AI implementation in the workplace, or are we merely scratching the surface of a much deeper conversation?

As AI continues to reshape the modern workplace, are organizations prepared to balance innovation with ethical caution? And are businesses prioritizing the wellbeing of their workers, or are they sacrificing ethics for the sake of technological advancement?

Hello, and welcome to another episode of LRN's Principled Podcast. I'm your host, Amy Hanan, chief marketing officer here at LRN. Today I'm joined by our senior director of AI and data science, Justin Garten. We're going to be talking about the recent AI implementation principles brought forth by the White House and how organizations can navigate the intersection of artificial intelligence and workplace ethics to ensure ethical behavior, compliance, and employee wellbeing. Justin brings immense expertise to the table. He has led the development of cutting-edge AI applications as an AI consultant, and at Mantium and Google he developed, trained, and deployed various AI models while also contributing to policy development on data privacy and AI usage.

Justin, thanks so much for coming on the Principled Podcast.

Justin Garten: Amy, thanks so much for having me here today.

Amy Hanan:   So as we were just mentioning in the opening, the White House and the Department of Labor recently released a set of AI implementation principles aimed at guiding organizations in the ethical development and use of artificial intelligence in the workplace. They're attempting, with these principles, to underscore the importance of protecting workers' rights and ensuring transparency in AI implementation. But perhaps for all of our listeners, you could give a quick summary of some of the key points of these principles and what you think that we should be really focused on or paying attention to, at least in this initial release.

Justin Garten: Absolutely. Well, I think it's nice we're finally getting some clarity, or at least the start of clarity, in terms of what we should expect on the regulatory front. In terms of these principles, there are sort of three big buckets you can divide them into: AI transparency, AI ethics, and workers' rights. Workers' rights is kind of the newest area; we've seen the other two a little bit before in previous regulations. I think, though, that we have to recognize that these are more of a conversation starting point than final guidance. Some of them are rather loosely described, in things like, "Centering worker empowerment." It's great to know that that's something we need to pay attention to, but it doesn't necessarily give us concrete policy guidance in terms of what we need to do as businesses or providers in this space. What this does, though, is give us, like I said, insight into areas that we need to think about, and it points to some of the existing regulations that we can use as guidance to actually go forward with this.

Amy Hanan:   With that in mind, are there any initial steps you believe businesses should be prioritizing when they're thinking about incorporating these principles into their own operations? What's the starting point?

Justin Garten: I think one of the starting points is just the framing of it. All of these are placed in terms of risk, or things that are challenges for businesses, but each of those risks only exists because of the corresponding opportunity that AI is creating. On the business framing, first off, if it's all in terms of risk, you're going to create a situation in which employees feel disempowered; they feel constrained rather than able to move forward. I'd actually put that as a meta-point: think of this as the opportunity rather than just something where you're trying to stay within dangerous guidelines.

In terms of the transparency and ethics piece of things, I think we have a remarkably large area of existing understanding about current regulations in terms of data privacy, personally identifying information, how we handle algorithmic fairness, and things like, say, bank loans, where there's already a hugely established body of work. My guess, and don't take this as gospel, is that a lot of this is going to transfer straight over into the AI space because we kind of know the rules in that area. We already have organizations that have been dealing with these regulations for years, so that gives you a great starting point in terms of things to look at: things like security standards and practices for protecting user data, wiping it out so that it's not saved.

On the workers' rights front, this one's harder, and maybe even a little more interesting, but what the regulatory structure is going to look like is really under-determined at this point. Empowering workers, what does that mean? I think from a business perspective, there are a few steps you can take today. The first is just understanding your organization, and that means having clear lines of communication and having better measurements; we'll probably talk more about that one later. The second is preparing for accelerated change. This means change in your marketplace, in the means of production, and in the way your employees are interacting with each other. That's something that creates some pretty unique challenges. Once again, probably more to say there.

Then the third is to work with your workers to align incentives; this shouldn't be something that's coming from the top down. This is something where you have to change the way you're thinking about your organization to incorporate a change in the way we're working. I guess finally, principles are different from policies and practices. Every organization is going to have to actually come up with the policies and practices that make sense for it. It just requires us to engage with all of our values as organizations and work from there to how we implement these.

Amy Hanan:   Yeah, so it's understandable that there are a lot of concerns out there. I'm sure a lot of the leaders and professionals listening in on this conversation today have their own concerns about the potential difficulty of both interpreting these types of principles and then implementing them in their own organizations, especially given the rapidly changing business environments that we're all operating in. But I was thinking about what you were saying about using values as part of this process, and how that relates to how organizations can foster organizational and social innovation, perhaps leaning into their values as a key tool, as it relates to AI overall and how they're thinking about AI for their businesses.

Justin Garten: Absolutely. No, I think that's a great question in terms of how you actually operationalize those values, especially in light of the rate of change that we're dealing with here. You also mentioned the pace that we've got on the tech side, and I think there are actually two sides to how I think about this. One is what we can do as a larger industry, as the larger space of those of us who worry about these issues, and the other is what we can do in individual companies. I think both are a little bit informed by why the tech changes are actually happening so fast.

In AI, we have incredibly deep systems for sharing knowledge and innovations. A company comes up with a new idea and a month later it's in every model out there. We have a combination between academia and industry where information sharing has become the norm over the last decades, and we don't have anything similar to that on the organizational front. We tend to be siloed, where HR within a company is treated as a very local operation rather than something where information is shared across different companies or different industries. I think the meta-point might be that if we're going to be building systems to interact with change at that rate, we should take advantage of the systems we've built on the tech side to increase the rate of that change. That means just being a lot more open, not treating our organizational innovations as things that we hold tightly in-house, but rather as something that we share, recognizing that that sharing process is actually how we improve as a whole.

From the company's perspective, there are a few, I think, really concrete items we can act on in terms of rethinking the organizational structure. Some of that is shortening timelines. If you know that the underlying market or tech structure is going to be changing in the course of months, multi-year planning becomes more directional rather than concrete and constraining. But I think the key point, and something we talked a little about earlier, is empowering employees. If the response is coming from the top down in your company, "Okay, here's our response to AI," that may mean you're getting one experiment cycle a year, because there's only so often that that can be updated by top-down leadership. If, on the other hand, every group within your company feels empowered to experiment, to share, to actually work on this, you may get hundreds of experiment cycles in the same amount of time. Then the top-down side of it becomes making clear guardrails, helping people understand what the limitations are. Don't just go rogue and build your AGI and toss it out there, but take it step by step: "Here's how we share knowledge internally," building out tools to help employees share knowledge, and providing clear support when things don't work, because that's the nature of experimentation: most experiments fail.

Amy Hanan:   Mm. I want to stay on this point maybe a little bit longer by taking it in a slightly different direction, toward workers' rights and the worker experience, because I know the principles released by the White House and Department of Labor certainly touch on this. Maybe you could speak a little bit too about how businesses can effectively measure and mitigate the potential impacts on workers, especially considering the lack of tools out there for assessing employee status or even the influence of new technologies like AI.

Justin Garten: No, I think you've really hit the central point of all this. We're talking about protecting workers from impacts, but we don't even have the tools to measure those impacts yet. So there's a level of vagueness there, which means I don't even know how you can really talk about protection in a context where you don't know what you're protecting from, how you'd measure it, or how you'd actually evaluate it.

That said, I think this is also an opportunity, because we've known for a long time that within organizations we aren't measuring enough. We don't understand what's happening between workers or within workers, and we should be treating organizations not just as these legacy institutions, but as things that we're all actively members of and actively responsible for. Like I said, I think it's something we should have been doing for decades, and now, for better or worse, we've got a really strong incentive to start getting this right.

When I talk about measurement, I'm not talking about a sort of neo-Taylorism where we're stopwatching people and hovering over shoulders. I think that's one of the areas where these principles are outlining protections as well: we don't want to be panopticon-ing every worker, telling them exactly what to do, hoovering up what they do so that we can build a model that approximates them. I think that's important in terms of building trust and actually building effective organizations. But it's also just realizing that information within an org isn't necessarily something we can measure from the top down. We have to build the tools so that workers can tell us what's going on, so they can describe what they're experiencing, where both the opportunities and the frustrations are as change happens. Because we all know change fatigue is real, and that's about the only constant I can guarantee over the next five years: there's a lot of change coming.

Amy Hanan:   Yes, and I think a lot of what you were just commenting on naturally ladders up to the topic of oversight. I know this is something our listeners are thinking about every day: governance and oversight. The establishment of AI governance and human oversight as part of what the White House and Department of Labor have just released really raises questions about where this is going to reside in organizations. I'm wondering if you could elaborate a little bit on the importance of clarity in defining these oversight roles when it comes to these types of emerging technologies, and any best practices we should be thinking about related to that.

Justin Garten: Absolutely, and I think once again it's really interesting to imagine how differently the exact same regulations would be interpreted even by different departments within a company. Imagine a comment on AI safety: how would that be interpreted by technology versus compliance versus HR? These are all groups that bring their own backgrounds, their own perspectives on things, and the exact same words are going to be interpreted in very different ways. Part of the fun challenge of AI is dealing with that too, but we'll leave that aside for a moment. I think that actually highlights something: when you've got concerns that are this cross-cutting, you need to push the oversight as high in the organization as possible. I think this certainly means the C-suite, where you have people who have that perspective across the organization.

The paradox of this, and something that I keep coming back to, is that it's also got to be bottom-up, because often at that top level you don't have perspective on what's happening in terms of data privacy practices, exactly how the data's being stored, all of the nitty-gritty details that really make the difference between a system that's safe for worker or user data and a system that really isn't. I think the oversight, or the clarity, has to come from above, ideally from the top of the org, but the day-to-day practice also has to come from the bottom.

This is something I saw at Google, for example: a lot of power is devolved down to the lower-level engineers in terms of protecting privacy, but there's also responsibility devolved down to that level. I think similarly, as a lot of organizations deal with these types of challenges, they're going to have to find that sort of dual structure, where clarity comes from the top but responsibility runs all the way through the organization.

Amy Hanan:   I was just thinking about all these different organizational structures as you were talking through some of your points there. Continuing on this thread a little bit longer, from an organizational standpoint, what steps should businesses be taking to be more proactive about looking at these challenges and aligning with these evolving regulatory frameworks and changing workplace dynamics as they relate to AI?

Justin Garten: I guess I'd re-highlight a couple of things and then move on to a couple of new points. On the first side, I'd say that in terms of things like cybersecurity and data privacy, there are existing best practices that have been implemented by companies, say in the health space, where there's a lot of work in the US around HIPAA and health information privacy generally, that give us a good starting point. I think there's a recognition that data, AI, and distributed intelligence are going to become ubiquitous, not just in companies that have traditionally dealt with them. Your starting point should at the very least be a baseline, maybe a level above what you thought you had to deal with, and there are existing tools, providers, and practices that you can implement today.

I think it's also a matter of looking ahead. This is where regulation can be challenging, because we've all seen in a lot of different areas that the specifics are often driven by whatever the previous crisis was. We know some things that are on the horizon; we don't know which of them are going to hit first. Things like concerns around AI-generated propaganda, fraud, security, job loss. Once again, something is going to become a crisis and then it'll be regulated. How do you prepare for that if you don't even know which direction that wave is going to be moving? Part of it is the monitoring piece. That's something you need to do anyway. You've got to understand your organization, you've got to understand data flows. You've got to know which models and which tools you're using, so that when there is a breach or there is a problem, you're able to, with a single click, say, "Okay, we can swap this out this way," or, "This was one we weren't using." Just that awareness.

Monitoring also applies on the employee side: knowing your organization. There are going to be cases where a crisis hits one part of the organization, or AI impacts one part of the organization differently, and being aware of that, being able to anticipate it, and being able to communicate it matters. Once again, this isn't an AI-specific thing. This is the basics of how we should have been running orgs for a long time anyway. Then there are a couple of things in terms of how you plan for this kind of accelerating change, because we don't have experience with it. We can look at the historical record of things like the Industrial Revolution or past major disruptions, but even those, I think, took place over decades, and we're looking at months and years at this point.

You can also start doing more extreme scenario planning. For example, what would your organization look like if you had to plan for 100% yearly turnover? Now, I'm not saying that as a recommendation by any stretch, but if you think about the way that employee roles might be changing, that's, in some ways, maybe the most similar analog to what we're actually going to be dealing with. We know organizations that deal with this; I'm not going to hold up fast food as the way we should run our businesses, but it points to increased training, increased support, and knowing that we have to treat upskilling as a constant part of the day-to-day. That's just one example, but thinking through some of those more extreme-sounding scenarios is a way we can actually start to think about the impacts and the possibilities that are coming.

Amy Hanan:   Yeah, and some of those scenarios that you just highlighted really underscore the cross-functional collaboration required for an organization to plan effectively, to prepare, and to make decisions on how best to operationalize and react to these innovations that are happening.

But I also want to bring it back to our ethics and compliance professionals here. Ethics and compliance as a function is also one of those core hub functions in an organization, working cross-functionally and collaboratively with every area of the organization. When you're thinking about how businesses can start preparing for the ethical implications, the potential ethical culture impacts, of the AI expansion in their own organization, what would you say are some of the critical priorities there?

Justin Garten: Yeah. I think as ethics and compliance professionals, it's actually worth looking back to some of those historical examples I mentioned and thinking of the level of disruption of something like the Industrial Revolution, where you went from 90-plus percent farm workers down to 2% over the course of not such a long period. The types of disruptions we've seen in the past, and the way those have impacted people's lives and the structure of how organizations have been put together, give us a little bit of perspective. It's also something where I think ethics and compliance professionals need to be prepared, because your roles are going to become far more important, in part because the kinds of issues we're dealing with are ones that, frankly, have been science fiction until very recently.

For example, HR is not really equipped right now to deal with a situation where somebody comes in and complains that the AI is being mistreated, or another worker complains that somebody is treating the AI a little too human-like. These are, like I said, science fiction scenarios, but they're already happening. We saw this at Google with the release of their bot, where they had an engineer who decided it was sentient and started pushing a certain narrative around that. Humans, for all our flaws, are, if nothing else, empathy machines. We're going to see different people complaining that their jobs are being taken away by AIs while others are complaining that the AIs are being mistreated. That's a level of ethical challenge that, frankly, has been in our philosophy classes for a while, but now we get to deal with it in our day-to-day business, and that's exciting. I think that's super-cool. But it's also something where those of you who've been thinking about these issues for a long time, this is the time to step up. Hopefully we aren't dealing with trolley problems in the course of this, but other dilemmas, certainly.

I think also there's an Alan Kay quote: "The best way to predict the future is to invent it." So often we treat invention as a technological phenomenon. Organizations and ethics are areas where we maybe don't apply the same standards in terms of being able to create something new to move us forward. Unfortunately, that time has passed. We have to be innovative as ethics and compliance professionals. We have to be thinking about what this actually means. How do we build structures that support different types of intelligences working together? Sorry, I'm getting maybe a little far afield, but these are things that 10 years ago everyone would've said were 30 or 40 years off, and now it could easily be within the next decade that we see these issues come to the forefront. We have to invent the solutions before the problems come. I think that requires thinking deeply, thinking broadly across a lot of different parts of the organization, but also really thinking about what kind of future we're trying to build. The technical part is one side of that, but the skills of organizations and ethics, these are things that we also have to innovate on.

Amy Hanan:   You mentioned before, I think quite astutely, that principles are not policy, but we all know that policy will be coming. Looking ahead, do you have any opinions or thoughts on what we could potentially be seeing as some of the first policies to be tackled from a regulatory perspective?

Justin Garten: I guess I'll drop an opinion first and then switch to the prediction side of things. I would love it if this weren't tackled as a regulatory issue, but instead as a legal issue. I think you've already got the big AI companies pushing for regulation that would basically be expensive but a fixed cost, which is exactly what you want as an incumbent, and which helps none of us, I think. My preference would be if we had civil and criminal standards in place for how we're treating this rather than a regulatory standard. That said, that's not going to happen. I think we can look ahead to what's happening in the EU right now. They tend to be a couple of years ahead of at least where the US is on this, and probably where most of the rest of the world is as well. That gives us some hints in terms of some of the things we've talked about, where the framing has largely been in terms of fairness, of taking a look at the training data that we're using, of avoiding bias in these systems.

Like I said, those are things we actually have a decent understanding of, so that's not coming out of left field. I think some of it too is just not waiting, because these are changes that are going to be expensive to implement if you try to do it all in a quarter. But if you take advantage of the glimpses into the future we've got, both these principles coming out from the White House, what we're seeing on the EU side, and what we know from other spaces, it gives you some guidance on where to start building today, so that when those policies actually come out, or when the regulations become enforceable, you've got a little lead time.

Finally, I'd just say treat resilience as a business necessity. You're not going to build for every possible future. Instead you want flexibility, resilience, understanding, and the ability to handle a diverse set of circumstances, because even in the past two years we've seen that we didn't predict exactly which types of models were going to leap ahead first. We didn't know exactly where the change was going to be. Five years ago, people were talking about blue-collar jobs being the ones that were going to be taken away, and now we're seeing artists being replaced. This just kind of came out of left field. Some people got it, some of us didn't. But the point is that flexibility and adaptability are the key.

I think, maybe rolling back around to the ethics side of things, an ethical organization is also resilient, also flexible, also understanding, because you've got those lines of communication and those patterns of trust, so you can actually deal with some pretty incredible opportunities. I guess the last piece of advice would be to have fun. We've got the moment to live in interesting times, and this is going to be a heck of a ride. Already, just the things we get to play with are ... We're living in an incredible moment of history, and we have a responsibility with that, but enjoy the ride.

Amy Hanan:   I would be remiss if I didn't pick up on a couple of things that you just said there around having fun, taking advantage of the moment, being passionate, and leaning in to all of the innovation that's available to us right now, and didn't use this as an opportunity to ask you, Justin, as our senior director of AI and data science here at LRN: is there anything you'd like to share as a little teaser with our listeners about what LRN is doing?

Justin Garten: Absolutely. The education space, which is, I think, at least a core of what we do at LRN, is so exciting right now. Just the chance, primarily in our area of ethics and compliance but really across the board, to build systems that change the way education works, it's such a game changer. I think the importance of that just keeps rising, because as these changes happen faster and faster, we have to be able to respond faster and faster. Things that we've been doing in terms of new types of scenarios that users can actually dive into, or some of the things we've been exploring in terms of faster ways to adapt content to individuals so we can meet people where they are, these are things I've been excited about for years in terms of AI tutoring systems and AI education. I feel very excited to be able to help in a small way on that with the work that we're doing here.

Amy Hanan:   I think this is absolutely a conversation we could have all day. We've only scratched the surface here, but I do believe that we're out of time right now. Justin, I want to thank you so much for joining me on this episode. Hopefully you'll come back and speak with us again once we have some more innovations that we could be sharing with our audience.

Justin Garten: Looking forward to it, definitely.

Amy Hanan:   Great. My name is Amy Hanan and I want to thank everybody for listening to this episode of the Principled Podcast by LRN.

Outro: We hope you enjoyed this episode. The Principled Podcast is brought to you by LRN. At LRN, our mission is to inspire principled performance in global organizations by helping them foster winning ethical cultures rooted in sustainable values. Please visit us at lrn.com to learn more. And if you enjoyed this episode, subscribe to our podcast on Apple Podcasts, Stitcher, Google Podcasts, or wherever you listen, and don't forget to leave us a review.


Ready to upgrade your ethics and compliance program?

We’re excited to give you a personalized demo of the LRN solution. We’ve been a trusted ethics and compliance partner for over 25 years. With over 30 million learners trained each year, we optimize ethics and compliance programs across the globe to help save your team time, increase engagement, and align with regulation.