ChatGPT and other generative AI tools have caused a sensation in the marketplace. Some are heralding AI as the best innovation to come along since the internet, while others are fearful of its unforeseen, large-scale impact. For the E&C practitioner, what are the major risks and mitigation strategies that need to be in place? On this episode of LRN’s Principled Podcast, host Susan Divers explores the current and evolving risk landscape surrounding ChatGPT and generative AI with Jonathan Armstrong, a partner at the legal compliance firm Cordery.
Be sure to subscribe to the Principled Podcast wherever you get your podcasts.
Jonathan Armstrong is an experienced lawyer based in London with a concentration on compliance and technology. His practice includes advising multinational companies and their counsel on risk and compliance across Europe. Cordery gives legal and compliance advice to household name corporations on prevention, training, and cure—including internal investigations and dealing with regulatory authorities.
Jonathan has handled legal matters in more than 60 countries involving cybersecurity and ransomware, investigations of various shapes and sizes, bribery and corruption, corporate governance, ethics code implementation, reputation, supply chain, ESG, and global privacy policies. Jonathan has been particularly active in advising multi-national corporations on their response to the UK Bribery Act 2010 and its inter-relationship with the US Foreign Corrupt Practices Act (FCPA).
Jonathan is a co-author of LexisNexis' definitive work on technology law, "Managing Risk: Technology & Communications." He is a frequent broadcaster for the BBC and appeared on BBC News 24 as the studio guest on the Walport Review. In addition to being a lawyer, Jonathan is a Fellow of The Chartered Institute of Marketing. He has spoken at conferences in the US, Japan, Canada, China, Brazil, Singapore, Vietnam, the Middle East, and across Europe.
Jonathan qualified as a lawyer in the UK in 1991 and has focused on technology, risk, and governance matters for more than 20 years. He is regarded as a leading expert in compliance matters. Jonathan has been selected as one of the Thomson Reuters stand-out lawyers for 2023, an honor bestowed on him every year since the survey began. In April 2017, Thomson Reuters listed Jonathan as the 6th most influential figure in risk, compliance and fintech in the UK. In 2016, Jonathan was ranked as the 14th most influential figure in data security worldwide by Onalytica. In 2019, Jonathan was the recipient of a Security Serious Unsung Heroes Award for his work in information security. Jonathan is listed as a Super Lawyer and has been listed in Legal Experts from 2002 to date.
Jonathan is a former trustee of a children's music charity and the longstanding co-chair of the New York State Bar Association's Rapid Response Taskforce, which has led the response to world events in a number of countries including Afghanistan, France, Pakistan, Poland, and Ukraine.
In July 2023 Jonathan was appointed to the New York State Bar Association Presidential Task Force on Artificial Intelligence. Jonathan sits on the Task Force with leading practitioners, regulators, judges and academics to develop frameworks for the use and control of AI in the legal system.
Susan Divers is a senior advisor with LRN Corporation. In that capacity, Mrs. Divers brings her 30+ years' accomplishments and experience in the ethics and compliance area to LRN partners and colleagues. This expertise includes building state-of-the-art compliance programs infused with values, designing user-friendly means of engaging and informing employees, fostering an embedded culture of compliance, and providing substantial subject matter expertise in anti-corruption, export controls, sanctions, and other key areas of compliance.
Prior to joining LRN, Mrs. Divers served as AECOM's Assistant General Counsel for Global Ethics & Compliance and Chief Ethics & Compliance Officer. Under her leadership, AECOM's ethics and compliance program garnered six external awards in recognition of its effectiveness and Mrs. Divers' thought leadership in the ethics field. In 2011, Mrs. Divers received the AECOM CEO Award of Excellence, which recognized her work in advancing the company's ethics and compliance program.
Mrs. Divers’ background includes more than thirty years’ experience practicing law in these areas. Before joining AECOM, she worked at SAIC and Lockheed Martin in the international compliance area. Prior to that, she was a partner with the DC office of Sonnenschein, Nath & Rosenthal. She also spent four years in London and is qualified as a Solicitor to the High Court of England and Wales, practicing in the international arena with the law firms of Theodore Goddard & Co. and Herbert Smith & Co. She also served as an attorney in the Office of the Legal Advisor at the Department of State and was a member of the U.S. delegation to the UN working on the first anti-corruption multilateral treaty initiative.
Mrs. Divers is a member of the DC Bar and a graduate of Trinity College, Washington D.C. and of the National Law Center of George Washington University. In 2011, 2012, 2013 and 2014, Ethisphere Magazine listed her as one of the "Attorneys Who Matter" in the ethics & compliance area. She is a member of the Advisory Boards of the Rutgers University Center for Ethical Behavior and served as a member of the Board of Directors for the Institute for Practical Training from 2005 to 2008.
She resides in Northern Virginia and is a frequent speaker, writer and commentator on ethics and compliance topics. Mrs. Divers’ most recent publication is “Balancing Best Practices and Reality in Compliance,” published by Compliance Week in February 2015. In her spare time, she mentors veteran and university students and enjoys outdoor activities.
Intro: Welcome to the Principled Podcast brought to you by LRN. The Principled Podcast brings together the collective wisdom on ethics, business and compliance, transformative stories of leadership and inspiring workplace culture. Listen in to discover valuable strategies from our community of business leaders and workplace change makers.
Susan Divers: ChatGPT and other generative AI tools have caused a sensation in the marketplace. Some are heralding AI as the best innovation to come along since the internet, and others are fearful of its unforeseen, large-scale societal impacts. More immediately, the risks include identity theft, privacy invasion, and compromise of IP rights. Companies such as Amazon, Apple, Accenture, Citigroup, Northrop Grumman, and others have banned the use of ChatGPT. So for the ethics and compliance practitioner, what are the major risks and what are the right mitigation strategies that need to be in place short of such a drastic move as banning the apps?
Hello, and welcome to another episode of LRN's Principled podcast. I'm your host, Susan Frank Divers, Director of Thought Leadership and Best Practices at LRN. Today I'm delighted to be joined by Jonathan Armstrong, a partner at Cordery, a compliance law firm. Jonathan has looked at many of these issues. We're going to be talking about the current and evolving risk landscape surrounding ChatGPT and generative AI. Jonathan is active in a number of professional bodies, so I should note that the views he's going to express today are his alone. So Jonathan, thank you very much for coming on the Principled podcast.
Jonathan Armstrong: My pleasure, Susan. Thanks so much for inviting me.
Susan Divers: Well, let's start off with your practice at Cordery since it is a compliance law firm. I know you're focused on compliance and very familiar with topics such as GDPR and data protection. So give us a flavor for your practice, if you will.
Jonathan Armstrong: Yeah, happy to, Susan. So at Cordery, we're based in London. We're doing work mainly for multinational clients across Europe and sometimes further afield, as you say. A lot of that's around tech stuff, so data privacy and cybersecurity, a lot of it relating to investigations and some work relating to issues like bribery, modern slavery, supply chain. So a fairly broad compliance practice, but only compliance. That's all we do.
Susan Divers: Fantastic. Well, let's get right into ChatGPT. Artificial intelligence has been around a while now, so why is ChatGPT causing such a big stir?
Jonathan Armstrong: Well, I think you're right, Susan. I mean, I'm obviously incredibly old and I've been practicing law for more than 30 years, but before then, I was always keen on technology as a kid, and I managed to persuade my grandparents, maybe when I was 14 years of age, that I had something that ran on AI and monitored their movements. And obviously there were science fiction films that did that before. I think the real change has been the fact that it is, what you might call, a social topic. People are talking down the pub, down the bar, about AI, and obviously press interest has followed that. I think maybe two of the other factors as well are the fact that big tech has invested heavily. So you've got, for example, Microsoft's investment in OpenAI, you've got Google's investment in Bard, that are bringing AI to the masses, where previously it was an interest of academics and those in tech circles.
And I think the other reason that is perhaps a driver is nation state use of AI, and that might be applications that are broadly for good, such as the use of Google DeepMind in the National Health Service in the UK to predict illnesses. But it might be AI for bad. So for example, the rumors of the involvement of the Russian security services in some chatbots, which are preying on the vulnerable and manipulating individuals. So I think it's that sort of big tech backing, plus use sanctioned by governments, and the press interest that has driven the real acceleration in the use of generative AI particularly.
Susan Divers: Now that's a very lucid and helpful explanation. In some respects it sounds like the perfect storm. One of the reasons I wanted to get you on the podcast in particular is in your interviews you've pointed out that it's not quite accurate to describe the legal risks and the dimensions of ChatGPT as the wild west. There are a number of EU and US and other country laws that apply to the use of ChatGPT. Can you remind us of some of those?
Jonathan Armstrong: I mean, I think the laws broadly fall into two categories. There are new laws, and there are more than 40 of those, I think, across the world that are trying to create new rules around AI, and they're in different stages of enactment. Probably the most well-known is the so-called EU AI Act, which might be in force towards the end of this year, maybe the beginning of next year, and that looks to have a sort of broad code of rules. But the thing that I think is of more interest in the short term is what you might call retrofitting existing laws. So we're seeing GDPR particularly be used in different jurisdictions across the EU to try and bring some regulation to AI. So we've had, for example, the investigations by UK, EU and Australian data protection authorities into Clearview AI and the fines that have resulted from that. We've had the suspension by the Garante, the Italian Data Protection Authority, of Replika, and the temporary suspension there of ChatGPT.
We've had the temporary suspension by the Irish Data Protection Commission of Bard, and we've also had people concentrate on AI in HR. So the Spanish regulator, for example, has looked at recruitment-related apps. Do they reinforce the bias of humans? Do they unfairly exclude people from the recruitment, the hiring, the interviewing process on the basis of lack of fairness in those apps or lack of transparency? And we generally see issues like fairness and transparency run through GDPR enforcement across the EU. There have been just under €4 billion in GDPR fines and some two and a half thousand cases, and a key feature of probably more than three quarters of those cases is fairness and transparency. And we particularly see that with a lot of the cases that look at the use of AI. And as I said, it's not just big tech cases where regulators have got involved, it's those people who buy in AI as well, particularly for things like recruitment support.
Susan Divers: I certainly agree with you. And interestingly, one of the first specific laws in the States is a New York City ordinance that basically says if you're using ChatGPT or other large language AI to recruit and to sift through candidates, then you must ensure that it's not using algorithms that violate fairness or transparency. So it's a pretty universal approach, but there are other risks associated with it as well. And I'd love to talk to you as a data privacy expert about that, but also about intellectual property infringement and ownership. And we've already talked about the bias and transparency aspects, so let's focus on those.
Jonathan Armstrong: Yeah, I think there are definitely real concerns. We've had data scientists like Timnit Gebru look at the way in which we almost industrialize biases that we have, and that clearly is a concern that we've talked about already. I think the IP issues are profound. In many cases, I think a lot of those cases are going to be around training data. So obviously AI isn't wise of itself. It accumulates wisdom by getting data from all sorts of different sources and sweating that data. So personally, I'd love a ringside seat at the Getty versus Stability AI litigation that's running in the US and the UK at the moment, which perhaps illustrates the point: Getty is saying about 12 million photographs were taken from the Getty archive and used as training data to train a different AI algorithm. And I know many people are altering the access agreements on their websites to try and prohibit their content being used to train AI algorithms.
But I think at its most basic level, you can obviously use things like ChatGPT for bad as well. I mean, for example, just this morning I asked ChatGPT, "If I was trying to pay a bribe in Malaysia, how would I disguise that so that my compliance team couldn't find out?" And it told me to call the bribe "duit kopi", which literally translates as coffee money, and it said that people in Malaysia would understand that, but probably, I'm guessing, the compliance team might not. If I was doing that in Nigeria, I should refer to the bribe as "dash" or "egunje". Apologies for my Nigerian pronunciation. And again, that might disguise it. I even asked it to write me an email that I could send to a Malaysian purchasing officer to tell him that I would be paying him a bribe, or offering him a bribe, without ever saying it was a bribe.
And it gave me this fabulous email. "In light of your exceptional performance, we are excited to introduce an enhanced commission structure to reward your contributions even more efficiently. We value our partnership and believe that this updated commission arrangement will be a testament to our commitment to recognizing your efforts." The other issue, I think, for compliance officers is that it gives bad actors more tools to prevent compliance officers from protecting the organization. So I think there are sort of what you might call overarching risks, but there are also day-to-day risks that ethics and compliance officers have to respond to as well.
Susan Divers: That's an absolutely scary example. I'm glad you developed it for us because it really illustrates how it can be used to make it more difficult, basically, to have good internal controls. But before we leave this topic, I do want to talk just a little bit more about the IP issue because that's such a big concern in the States in particular. Is it true that the product of ChatGPT isn't necessarily protected by copyright, and are there some other considerations in that space?
Jonathan Armstrong: I think there are a lot of concerns here. I mean, there's an old-fashioned Latin maxim that lawyers used to use, which is "Nemo dat quod non habet". This is maybe a first for an LRN podcast, that people are using Latin maxims. But what effectively that means is you can't give the rights to something that you don't have. And I think this is a big issue with things like ChatGPT. They are potentially acquiring rights in stuff, but they're not entitled to them. If, for example, I have a website, I don't know, with Armstrong's Poetry, and I reserve copyright in that to myself, and I also have an access agreement on the front of my site which says that nobody can replicate my poetry and nobody can use it to train AI bots, then I'm entitled to enforce that access agreement. And in some countries I can do that through the criminal law as well, and somebody who's acquired those rights wrongfully can't then assign them to somebody else.
So I think there are going to be a lot of IP disputes. As I said, I think the Getty one's going to be fascinating because that's an organization that has sufficient resources to bring the case, to get it to court, and establish some principles. But I think there are definitely going to be issues for many organizations, and I think it becomes more worrisome when somebody takes that content from a generative AI offering and then, for example, puts it on their own website or incorporates it in their own content. There's all sorts of potential ramifications there with a duty to account for the profits that you've made and unraveling that content, particularly when you've co-mingled content that you've originated with content that you've grabbed from a generative AI platform. So unfortunately, I think it's good news for lawyers, but I think these are issues that we're still going to be unraveling, I'd suggest 5, 10, 15 years from now.
Susan Divers: That's a very lucid explanation, and I would agree with you, especially as products of ChatGPT then get incorporated into other products of ChatGPT and so forth. So it'll certainly be interesting to see how that plays out. One area that I do see being overlooked sometimes in talking about ChatGPT, and I think it's an important one for compliance officers, is the potential compromise of proprietary information through the use of prompts. I understand that that's a major consideration in the various companies banning the use of ChatGPT. So would you explain the risks and the concerns here, Jonathan?
Jonathan Armstrong: Yeah, I think there are all sorts of risks. You can have a general information security risk. So if it is right that chatbots like Replika AI are promoted or controlled by foreign nation states, they can be used to get information from your organization to attack it. If it's a chatbot that particularly markets itself, if you like, at vulnerable individuals and chats to them late at night, then they might get secrets from those vulnerable individuals. I found my grandfather's wartime documents on Saturday, and they were a fascinating read, and there was a simple statement that British servicemen were given in World War II telling them not to talk to people even if they didn't look like the enemy. And I think it would be useful for many corporations to replicate that four-paragraph statement and to have that about AI, not just about talking to people who might be foreign nation spies during times of war. So I think there's the information security risk there. There's all sorts of other risks as well.
If you ask ChatGPT, for example, to help you prepare a press release, then that press release could contain share-sensitive information. There are strict rules if you're a listed entity, a stock exchange listed entity, on making announcements to the right people in the right format at the right time. So if you're doing that preparatory work on ChatGPT, it's helping you with phraseology, et cetera, then you may be giving share-sensitive information away. There's lots of concerns, I think, with the working plans of organizations, and of course we might think that we can trust big tech organizations like ChatGPT, like Bard, but experience has taught us that often our employees aren't able to discern organizations we can trust versus organizations that we can't trust. And as I've said, we don't know the end use of that data. Some of the data we put into applications like ChatGPT will end up on the internet. It might not end up on the internet through our actions, but because we have input information that's been used to train the dragon, the dragon's helped somebody else, and they've cut and pasted that data and made it public.
So what we've got to tell our people, I think, is that if you put that data into a generative AI platform, then you've lost control of it. Just as we've told them, I hope, over the last 5, 6, 7, 8, 9 years, that they shouldn't use tools like Google Translate for sensitive documents because we don't know where that data ends up, then I think we've got to repeat those messages with generative AI as well. And I think you're right that for some organizations the right response is to ban it completely. And I can understand why some of those that you listed at the top of the show would ban it, but for other organizations, they won't want to ban generative AI because they'll want their people to be familiar with it, to understand it, and to get with the program. And for many organizations, that risk balance is going to be really difficult to work out.
Susan Divers: Well, and that will certainly be an area where ethics and compliance officers will really need to help their organizations put out clear, reasonable, and effective guidance. One of the scenarios that I've talked about at LRN is if I were to ask ChatGPT to design a bribery training course and I put a lot of our data or images or other material into ChatGPT, it's gone, basically. It's available for the next person that wants to put together an anti-bribery course. So I think that's really something to bear in mind. And certainly if you are a defense contractor, the whole issue of exporting controlled data or technology to the web becomes live.
Well, thanks for that very helpful explanation. Before we close out, though, I do want to ask you one last question, which is that you've talked a lot about deepfakes, and you've mentioned those a bit in this podcast. I think that's a thread I'd like to pull, because I think it's helpful for listeners to really think about what that could mean, for instance for the person you mentioned who's lonely late at night and being asked questions by foreign powers' chatbots.
Jonathan Armstrong: Yeah, I think it's much easier. One of the problems, I guess, for a lot of scammers all around the world is that sometimes their lack of ability to write in coherent English has caught them out. And many of us will remember scams asking us to help launder money from relatives or people with similar names. And nearly always the lack of English phraseology was the tell. Now of course, all of that's gone. Any decent fraudster can use a tool like ChatGPT to correct their language to make it seem more credible. So I think the baseline risk is there. We also know, as I've said, that there are applications out there that are particularly trying to befriend the friendless, and they might be doing it for completely altruistic reasons or they might not. And as I say, we know, or we believe, that there are foreign powers behind some of these. But also, you can use generative AI to help with things like CEO scams, those scams that we see relatively regularly where someone posing as the CEO directs people in the finance team to pay money.
They all get markedly better when this technology is there to assist them. And so we can get a generative AI model to learn the phraseology of a CEO, and we can even use it for films. We can use it, if we choose, to mimic the voice of individuals within our organization. So I think that becomes much more concerning, not only from a fraud point of view, but from a reputation point of view, even from a regulatory point of view. We know, I've had cases where ill-willed whistleblowers have faked photographs, and they've done that in a pretty basic way using Photoshop, but they've used it to credentialize their false claim to try and cause an organization harm. As we give good people more tools to do stuff, we often also give bad people those tools as well.
So we're likely to see meaningfully more persuasive whistleblower claims from bad actors, for example, that look plausible but are not. So this whole issue of fakery, I think, is something that ethics and compliance professionals need to understand, and it might mean that they need to be slightly more skeptical about some of the whistleblower reports that they receive, for example.
Susan Divers: That's a very interesting example, and I'm glad I asked you to expand on that space. I would not have thought about fakery and the whistleblower area, but obviously that's a concern along with the other issues that you've raised.
Well, Jonathan, unfortunately, we're out of time, so I want to thank you for joining me on this episode. It's been a wonderful podcast and I hope you'll come back and speak with us again soon.
Jonathan Armstrong: Yeah, I'd love to. My pleasure. Thanks for the invite.
Susan Divers: My name is Susan Divers and I want to thank you all for listening to the Principled podcast by LRN.
Outro: We hope you enjoyed this episode. The Principled Podcast is brought to you by LRN. At LRN, our mission is to inspire principled performance in global organizations by helping them foster winning ethical cultures rooted in sustainable values. Please visit us at LRN.com to learn more. And if you enjoyed this episode, subscribe to our podcast on Apple Podcasts, Stitcher, Google Podcasts, or wherever you listen. And don't forget to leave us a review.