
The startup leading AI security in the UK.
The great thing about AI security is conceptually [inaudible 00:00:09] make sense. People understand, the policy people understand, that leaking data or losing data is bad. They understand that if someone could steal my AI, that's also bad. So, there's less emphasis on the technical mechanisms to achieve this, and more on how it works and what recommendations are required to mitigate that.
Dave Bittner: Hello, everyone and welcome to "Caveat," N2K CyberWire's privacy, surveillance, law, and policy podcast. I'm Dave Bittner and joining me is my cohost Ben Yelin from the University of Maryland's Center for Health and Homeland Security. Hey, Ben.
Ben Yelin: Hello, Dave.
Dave Bittner: On today's show, I examine the troubling challenges of regulating deep fake porn. Ben looks at a brand new Appeals Court decision on geofencing. And later in the show, Dr. Peter Garraghan, CEO of Mindgard, discussing the UK's recently published AI security guidelines, and the recommendations he made for addressing cybersecurity risks in AI. While this show covers legal topics and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. All right, Ben, before we dig in here, we got a kind note from one of our listeners. This is a listener named Kevin who wrote in and said, "Dear Dave and Ben, most importantly, Ben is one of the few lawyers I respect." There you go, Ben.
Ben Yelin: I'll take it.
Dave Bittner: That's pretty good, right?
Ben Yelin: Honestly, one of the best compliments I've heard for myself. So --
Dave Bittner: There you go.
Ben Yelin: -- I'm already a fan of yours, Kevin.
Dave Bittner: Faint -- faint praise.
Ben Yelin: It's a low bar.
Dave Bittner: Yes, well --
Ben Yelin: Literally, right?
Dave Bittner: -- there's no shortage of lawyer jokes, right? But as my -- another good friend of mine who's a lawyer says, "Everybody jokes about lawyers until they need one." Right?
Ben Yelin: Totally, true. Yes.
Dave Bittner: All right. Kevin goes on and says, "Not being a lawyer, my contact with the law has been as a trial consultant." And he says, "Yes, I know what you're thinking." He says, "Having said that, I listen to "Caveat" where the Chevron deference was discussed. I'm for the change. I think there's a natural bias on the part of the experts to lean in favor of the agency. It has always appeared to me to be a conflict of interest. It would be interesting to know how many times the experts have sided with the agency. Now, could not the court, as it does in many cases, call its own experts who would have a more objective view of the matter? I have seen and been called to testify as an expert witness on a case in which I was not involved. This just seems like a better solution to balance the regulatory system." What do you think here, Ben? I think Kevin makes an interesting point.
Ben Yelin: Yes, it's a really interesting point. I think it's a widely shared perspective that Kevin is putting forth here, and I'm glad, certainly glad that he wrote in. I think I was probably more critical of overturning Chevron than perhaps our median listener, or at least our median listener who knows what the heck Chevron is --
Dave Bittner: Right.
Ben Yelin: -- as a case. I guess I just -- I don't see it the way Kevin does. I don't think agency personnel have any inherent bias. I mean, they are civil servants. They are there in the public interest. Certainly, there are instances where there are probably financial incentives in terms of getting grants or getting on the right side of certain private enterprises, that sort of thing. But I think for the most part, just by the nature of being public servants and working within federal agencies, they are interested in what they see as the optimal policy.
Dave Bittner: But don't you think like -- I'll just be you know, try to -- taking maybe Kevin's side here. Someone who has taken a lifetime position with the EPA, is not likely to be a, you know, drill baby drill, person. Right? I mean, do you think there's an inherent --?
Ben Yelin: I don't think that's necessarily true. I mean --?
Dave Bittner: Don't you think there's an inherent bias in who would be attracted to work for particular agencies?
Ben Yelin: Yes, there's a natural bias toward that. I think people who work for agencies who are in civil service, granted there are some political appointees, and that's different, but if you are a civil servant, you generally believe in the agency's mission. As for the agency's mission, I mean, I guess it depends. Everything is partisan in a sense, but for a lot of these agencies, the mission is quite nonpartisan. I mean, a lot of the ones we never talk about, it's basically just processing old people's healthcare paperwork, and decisions about what Medicare should and should not cover, based on some statute that was enacted 50 years ago.
Dave Bittner: Yes.
Ben Yelin: So, yes, I mean I certainly understand the perspective. I just think having agency expertise cut out of the process entirely is an unwise decision, and one that doesn't reflect my view of both the Administrative Procedure Act and the proper role of the executive branch in developing regulatory policy. But I do think it's a totally valid disagreement. I do think courts are going to start to call in more outside experts. In a sense, those outside experts will be more unbiased because they don't have a direct stake in the outcome, but I still think it's giving away the leverage and the power that agencies have to make decisions without going through arduous, perhaps years-long, litigation, which is going to tie up what could be necessary regulatory policy. So, that's my view on it. I think we have a respectful disagreement there, and certainly Kevin's view is shared by many of our listeners and many people who have been waiting 40 years to see Chevron overturned.
Dave Bittner: Yes. All right, well thank you Kevin for writing in. We do appreciate it. And of course, we would love to hear from you. If you have something you'd like us to consider for the show, you can email us. It's caveat@n2k.com. Well, let's jump into our stories here. Ben, why don't you start things off for us?
Ben Yelin: So, even though the Supreme Court takes a break over the summer, our Appeals Courts, across the many federal circuits, do not take a break. And we've got a very high-profile landmark decision on geofence warrants from the Fourth Circuit Court of Appeals, which is located right here in the Mid-Atlantic. This is actually a case that we had discussed previously when it went up for oral arguments. So, it's always good to come back to these and see what the decision was. This is the case of Okello Chatrie v. the United States. Mr. Chatrie pled guilty in May 2022 to robbing a bank, after a district court refused to suppress evidence on his location obtained from Google. Law enforcement went to Google and asked for a geofence warrant covering all of the cellphones that were in the area of this bank at the time that the robbery occurred. They were able to do some investigative work. They figured out that his device was there and that he was the one who likely committed this crime. He sought to suppress that evidence, saying it violated his Fourth Amendment rights. And the district court refused to suppress that evidence. Mr. Chatrie appealed this to the Fourth Circuit Court of Appeals, and the Fourth Circuit agreed with the district court, not necessarily for the same reasons, which I'll get into, but they agreed with the district court that this evidence should not be suppressed, that Mr. Chatrie does not have a reasonable expectation of privacy as it relates to geofence warrant data obtained from Google, and therefore, this is not a Fourth Amendment search that entitles him to the protections of the Fourth Amendment. So, in the view of this three-judge panel on the Fourth Circuit, a geofence warrant is unlike the information obtained in Carpenter. The Carpenter Supreme Court decision back in 2018 was about historical cell site location information of one individual, spanning a period of seven days. And that's a lot of information. In the view of the court, you can put together a mosaic of a person's life pretty easily and understand their private affiliations and associations by looking at seven days of historical cell site location information. The other factor in Carpenter that I think distinguishes it from this case, in the view of the majority here, is that in Carpenter, there was no opt-in. Mr. Carpenter merely turned on his cellphone, and law enforcement obtained the data from his cellular provider, which collects data on his location without him having to opt-in to anything.
Dave Bittner: Okay.
Ben Yelin: Chatrie, and this is an apocryphal tale, agreed to opt-in to Google, I believe through Google Maps, collecting his data. So, he pressed the "I agree" button. He opted-in to sharing his location data with Google. And I think in the view of the court, not only was that a forfeiture of his reasonable expectation of privacy, but there's also not enough information in these geofence warrants to create that type of broad mosaic view. A relevant precedent case in the Fourth Circuit is a case I believe we've also talked about on this podcast: Leaders of a Beautiful Struggle v. Baltimore Police Department. I don't know if you remember those spy planes that used --
Dave Bittner: Yes.
Ben Yelin: -- to fly over Baltimore and they were taking real-time pictures.
Dave Bittner: Right.
Ben Yelin: It was a crime-fighting tool. The Fourth Circuit held in that case that the data obtained through those low-flying planes did violate the Fourth Amendment because it could create kind of a dossier on a person's private movements. It did create that mosaic that led to Fourth Amendment protection, and in this case, they're distinguishing the geofence warrant from the data collected in the spy plane case. This is a two-to-one decision. The majority was written by Judge Richardson, an appointee of President Trump. And he was joined by a long-time Fourth Circuit judge, J. Harvie Wilkinson, who is a Ronald Reagan appointee.
Dave Bittner: Wow.
Ben Yelin: The dissent was written by an Obama appointee. I mention this because there could be a move on the part of Chatrie to get this reheard en banc. So, in front of the whole Fourth Circuit Court of Appeals.
Dave Bittner: Okay.
Ben Yelin: And the Fourth Circuit Court of Appeals is slightly Democratic-leaning in terms of which judges were appointed. So, it's possible that this decision could get reversed if it went to the whole Fourth Circuit. And then the last thing I'll say is that one of my favorite scholars, who I've talked about a million times on Fourth Amendment technology stuff, Professor Orin Kerr of the University of California, Berkeley, thinks that the judges came to the right conclusion here -- that this was not a search for Fourth Amendment purposes -- but that they did so for the wrong reasons. He, and I think many other scholars, are very critical of the so-called Mosaic Theory, and they don't see how it's a workable standard. At what point does a series of pictures become a mosaic? I think that's very unclear. There's no bright line. In terms of forming that mosaic, how do we distinguish the information collected here, the quality and nature of that information, from the information collected by the Baltimore spy planes? I think that's a really difficult question. So, long story short, in the jurisdiction of the Fourth Circuit, which includes me and you here in Maryland, law enforcement does not need to obtain a warrant to get geofence data. I will say that Google itself voluntarily claims to have discontinued collection of the data that it would submit in response to a request for a geofence warrant. So, this no longer seemingly applies to Google. Google has taken a step to protect its own users from geofence data requests, but there are a lot of other companies that collect your location information that have not made the promise that Google has made, including a lot of the apps that you would never think collect your location, but they clearly do. You know, when I want to order my Dunkin' Donuts, it finds the nearest Dunkin' Donuts to me, and it does that by collecting my location data.
Dave Bittner: Yes.
Ben Yelin: So, I do think this will have broader -- a broader application even though Google, which probably has the largest market share on location data, is no longer going to be sharing that information with law enforcement.
Dave Bittner: So, my perception of this, and help me understand if I'm on the right track here, is that we're kind of making a distinction between coming at this from two different directions. As you say, in Carpenter, it is identifying an individual and tracking that individual over a period of time to see where that person went. In this case, it seems like we are interested in a location, a snapshot of a location, a robbed bank, and we're saying, "Who was in this one place at this time to see if our suspect happened to pass through this area?"
Ben Yelin: That's right. That's exactly right. I mean, and that is a major difference.
Dave Bittner: Right.
Ben Yelin: The problem is, Carpenter, the decision itself, didn't really come up with a workable standard for circumstances that are not exactly identical to Carpenter.
Dave Bittner: Okay.
Ben Yelin: Some people have interpreted Carpenter as creating a multi-factor test. So, you look at the nature and quality of the information, the length of the collection, whether the defendant voluntarily opted-in to the collection. I had a student write a brilliant paper on this, arguing that implicit in the Carpenter decision is a multi-factor test. Professor Kerr, and I think other scholars, have said there is no multi-factor test. If they wanted to create a multi-factor test, they could have done so in that case or in a future case. The court basically has not taken any additional Fourth Amendment cases on any topic since Carpenter. It's amazing for us. We've still had five-plus years of podcast content without the Supreme Court taking any Fourth Amendment cases.
Dave Bittner: Right.
Ben Yelin: But I think the reason that it's so hard to distinguish all of these cases is that the ruling in Carpenter has kind of created a Wild West in this field of jurisprudence where courts are just kind of doing their best to analogize the circumstances of the cases in front of them to Carpenter, and it's really hard to try and create some type of generalized rule, based on exactly what was said in that majority opinion. So, I think this is the Fourth Circuit's interpretation of Carpenter and how it would distinguish Carpenter based on the facts and the case at hand.
Dave Bittner: So, have we just not seen the disagreements among the circuit courts to have some Fourth Amendment issue make its way to the Supreme Court?
Ben Yelin: I think there frankly has been enough disagreement, not only on this issue, but on some of the other Fourth Amendment issues we've covered. For whatever reason, we're on a long streak of the Supreme Court just not taking up the issue. They don't have to explain why they don't grant certiorari in a case. Sometimes they decide to explain it, but for whatever reason, we've just been on a long drought of having any Fourth Amendment cases in front of the Supreme Court. They seem content to let lower courts argue and wrestle over what exactly Carpenter means without having to clarify it. And I suspect that as long as chaos doesn't break out in the streets, that might continue for some time.
Dave Bittner: Yes, interesting. All right, well we will have a link to that story in the Show Notes. My story comes from the IEEE "Spectrum" which is the -- kind of the Journal of the IEEE, which is the Institute of Electrical and Electronics Engineers. They are, I believe, the world's largest technical professional organization that deals with issues with technology and so on and so forth.
Ben Yelin: A frequent source for us, actually.
Dave Bittner: Yes --
Ben Yelin: Yes.
Dave Bittner: -- they've certainly been around for a long time, and I think they're generally well-respected. So, their journal is called "Spectrum," and they have a story here looking at some of the challenges that we're facing when it comes to dealing with deep fake porn as a society. And of course, deep fake porn is -- we have this technology to generate deep fakes where we can basically paste someone's face onto someone else's body or, with only a handful of images of someone, recreate them doing things that they never actually did. And this naturally, humans being humans, leads to people making apps and technology that can make pornography this way. And these apps are not new. They've been around for several years now. There was a study back in 2023 from an organization called Home Security Heroes, who found that if you have one clear image of a face, it takes less than half an hour to create a 60-second deep fake porn video, all for free. Obviously, this has all sorts of implications. We've seen stories of this trickling down to kids in school making videos of their classmates, so on and so forth, which of course has all kinds of additional implications of people being underage. There was a high-profile incident where someone created some deep fake images of Taylor Swift, and that one got 47 million views before it was removed. Interestingly, and I suppose not surprisingly, this article points out that 99% of the victims are women or girls. I guess there's no surprise there. So, that's kind of where we stand right now. And these apps are readily available. Many of them are free. So, we're faced with the challenge of what to do about this. Before I dig into some of the other details here, Ben, what do you think of what I've laid out here so far?
Ben Yelin: Yes, it's such a vexing problem to try and solve. There are so many different issues at play. From a legal perspective, you don't want to unnecessarily suppress First Amendment speech, if deep fakes have some type of satirical value or are part of a political message. There's a question of whether to allow deep fake images if there's some watermark or some warning that tells people that this was created through the use of deep fakes.
Dave Bittner: Right.
Ben Yelin: And then there's the question of, "Is there a proper policy solution when it's so easy to create the deep fakes in the first place?" Once they're out there, they're out there. And the ability of our legal system to respond lags behind the capability of the smartest and brightest minds who are creating these deep fakes and posting them on the internet. Even if somebody's successful in getting a deep fake video taken down, there's a time lag and 47 million people will have seen the video. So, I just see this as a very vexing issue. We've been dealing with it here in Maryland after an incident we talked about on this podcast, where a principal in a high school in Pikesville, which is a suburb of Baltimore, was suspended from his job because a video going around purported to show him saying racist, antisemitic things. It turns out it was a deep fake. And the prosecutor, who was trying to go after the person who created and distributed that deep fake, kind of said he's doing his best. And he did levy some charges, but his hands are sort of tied. There's not a legislative solution to that particular problem.
Dave Bittner: Yes.
Ben Yelin: A lot of states have started to take action to criminalize deep fakes, but usually they are within -- they are siloed within certain subjects, like political deep fakes or deep fakes of a sexual nature. And so, I think we're a long way from coming up with an all-encompassing solution to this problem.
Dave Bittner: This article points out -- there's a woman named Suzanna Gibson who started an organization called "My Own," after she was victimized by a deep fake ordeal during a political campaign. She lives in Virginia, and she was able to successfully push for expanding Virginia's revenge porn laws. And this article points out that while there hasn't been much activity on the federal level, 49 states and the District of Columbia have some form of legislation against the nonconsensual distribution of intimate images. Where do we stand here in Maryland, Ben? You're -- we have laws against this, right?
Ben Yelin: Yes, we have enacted statutes through the Maryland General Assembly. There are a couple that we were pushing last session which didn't get enacted. One dealing specifically with misinformation in the political context. I testified at the hearing for that one. But yes, I mean, I do think despite the laws that we have already enacted, criminalizing the distribution of sexual-based deep fakes, it still leaves a lot of gaps, and it still doesn't provide a great level of recourse for the people who've been victimized by the creation of these deep fakes.
Dave Bittner: Let me ask -- granting and acknowledging and having tremendous empathy for the people who are victims of this, is there a First Amendment issue here? Specifically, I'm thinking of, and this is not a perfect analogy, and this took place before the era of deep fakes, but do you remember probably 10 or 15 years ago, "The Daily Show" put out a book called "America"? Do you remember that?
Ben Yelin: I had that book, as probably most 18 to 25 year olds did at that time.
Dave Bittner: Well, what I'm reminded of is in that book, they had artistic images of the Supreme Court in the nude.
Ben Yelin: Yes.
Dave Bittner: Right? And they said something like, "Here's the Supreme Court. We have, you know, we've stripped their dignity." And obviously done for comedic purposes, but protected speech?
Ben Yelin: I think it is.
Dave Bittner: Yes. Is that because it's the Supreme Court?
Ben Yelin: I think that is a huge part of it. It does carry some type of satirical political value and making a statement.
Dave Bittner: Right. It was not photo-realistic. It was artwork.
Ben Yelin: It was artwork, and it was clearly presented as artwork. So, it wasn't created with the intent to trick people into thinking that these were actual naked pictures.
Dave Bittner: Right.
Ben Yelin: I think that makes a huge difference as well.
Dave Bittner: Yes.
Ben Yelin: I mean, we have to tread very carefully because the default is that we don't want to criminalize the creation of any images or artwork or photographs. That's the default value that we have. There are exceptions to that. And those are pretty well-founded in our legal system, but we have to just tread very carefully. So, I think when you start to get into things like political satire -- I saw a video going around, a deep fake of Fred Trump, criticizing his son for how he's running his campaign.
Dave Bittner: Oh, wow.
Ben Yelin: And that one was actually -- they did a very good job in that video of stating at the beginning that this was created as a deep fake.
Dave Bittner: Right.
Ben Yelin: Which I think also makes a difference. But there are more videos like that going around, and I do think, especially if it's the type of thing that cannot be expressed in any other way, like I believe that Fred Trump example qualifies, considering he's been dead for 20 years. I do think that has some First Amendment value and courts and legislatures have to wrestle with those conflicting values.
Dave Bittner: One of the other things that this article points out is the discussion over whether the legal recourse for these should be criminal or civil, and also whether the victims have the right to sue, either the person who made the deep fake, or the platforms that hosted them, either knowingly or otherwise. Do you have any thoughts there?
Ben Yelin: Yes. Oftentimes the most controversial provisions of these laws are the private right of action. That can be the distinguishing factor that either makes a law succeed in getting enacted or makes it fail. I think a lot of people think that there would be spurious lawsuits, or lawsuits that are not well-founded, but because we've created this private right of action, people are using lawsuits as a tool to take down content that should be protected by the First Amendment. So, I think that's the concern there. There's certainly been an expression from industry, from some of the big tech companies, that they should be immune from suits and the private right of action. I think whether to make this a civil penalty or a criminal penalty is kind of a secondary question in my opinion. I do think it's a question we have to answer at some point, but really, we have to first wrestle with what types of deep fakes are acceptable and what types are illegal. What is the dividing line? And what recourse does a victim of a deep fake have to get it removed from the internet? And I think once we resolve those problems, then I'd feel more comfortable having a broader discussion about this civil-criminal distinction.
Dave Bittner: Yes. It's just another example of how, I guess by design, you know, the legislation lags behind the technology and the things that society deals with.
Ben Yelin: Yes. Although I'll say, like it's pretty impressive that almost all of the states and D.C. have at least proposed legislation on deep fakes. It's such a relatively new problem, so --.
Dave Bittner: I guess it does have bipartisan appeal to tamp down on this thing, right?
Ben Yelin: Yes. I know there are proposals for anti-deep fake legislation making their way through Congress, and all of those have bipartisan sponsors. So, I think it doesn't fall neatly along partisan lines. We know that everybody, no matter their political affiliation, can be affected by these, particularly if you're in the demographics you talked about where it's largely young women who are victims of the distribution of these images and videos. So, I actually do think there's been more action on this than on a lot of the other issues we cover where Congress and state legislatures have been comparatively slow to act. So, I'm kind of hopeful, especially as we've seen action in the EU and in other countries that we can get our act together and come up with sensible regulations on this stuff.
Dave Bittner: All right. Well, we will have a link to that story. Again, it's from the IEEE's "Spectrum" publication. Interesting read if this is something -- a topic that is of interest to you. [ Music ] Ben, I recently had the pleasure of speaking with Dr. Peter Garraghan. He is the CEO of an organization called Mindgard. And we're discussing the UK's recently published AI security guidelines, and some of the recommendations that he made for addressing cybersecurity risks in AI. Here's my conversation with Dr. Peter Garraghan.
Dr. Peter Garraghan: So, the reason we completed this exercise was that in the last 12 to 18 months, there's been a lot of discussions about the cybersecurity risks of artificial intelligence. So, every government and every organization are now defining governance structures and frameworks, explaining what should be done in terms of, "We need risk. We need to do a [inaudible 00:29:00]. We need to fix security issues." However, there was very little in terms of the recommendations needed to reduce the cybersecurity risk of AI. So, the purpose of this research project and report was to actually give empirical evidence on the type of recommendations organizations can use today to minimize the cybersecurity risks within AI.
Dave Bittner: Well, take me through that process. I mean, as you say, it seems as though this has certainly captured the imagination of the public and also, you know, governments around the world. How do you approach this? How do you get started with something that is such an active topic?
Dr. Peter Garraghan: It is true that AI captures the human imagination, but that's also a double-edged sword. Ultimately, AI is still software. It does software activities. It uses data. It runs on hardware. So, to begin with, we started with having a very empirical view of what AI is today and how it's used, and then looking at a whole set of different reports, news articles, technical blogs, and my expertise as a professor at Lancaster University, to really understand the current state of the art of the recommendations to reduce cybersecurity risk, and what the existing knowledge gaps are. This entailed quite a few weeks of literature review, reading many, many papers as a scientist, and then trying to map the current state of the art of which recommendations have been known to work, which recommendations have been suggested to be effective at minimizing cybersecurity risk, and also highlighting the current gaps in the space as well.
Dave Bittner: And what are some of the conclusions that you all came up with here?
Dr. Peter Garraghan: So, there were quite a few conclusions, but I think the main highlight is that if you look carefully at the recommendations, all of the recommendations given, and I'm talking about things both technical and organizational, have very strong analogies to current cybersecurity practices. A recommendation saying, "Make sure you have strict access controls on data," also applies to other types of software. Having user training on the security of AI is very similar to user training on the security of software applications. So, it should be reassuring to [inaudible 00:31:24] because a lot of the suggestions also align with what we [inaudible 00:31:29] understand. However, there's also another difficulty, which is that if you really look at the evidence given with the recommendations and go back to the original source, it either comes from very few sources, or the sources themselves are derived from laboratory experimentation and therefore only inferred to be effective, and in some ways it's also speculation based on expertise. Given how quickly the AI space changes and the type of cybersecurity risks that exist, there's very limited empirical information about their actual effectiveness within production systems. That comes from a lack of scientific activity in this space, the difficulty of doing so, and the nascent nature of AI, which means that if people do find these problems, they're not obligated to actually report them.
Dave Bittner: It really is a fascinating issue. I mean, I'm trying to think of another example in, you know, society throughout history where something of this magnitude was kind of unleashed on the public and captured their imagination, but also had such a big potential for both good and bad.
Dr. Peter Garraghan: It is and it isn't. So, AI is still software. Think of a more recent technology that maybe didn't capture people's imagination in quite the same way: virtualization and cloud computing, which rose to prominence about 10, 15, 20 years ago. At that time, people were talking about, "I'm spinning up resources and I'm putting my data in places I can't see, outside my home, outside my office. This seems incredibly insecure, but also really, really powerful. What do I do?" The cloud is a nebulous concept. There was a lot of fear and a lot of hype in the space, but what ended up happening is people said, "Okay, empirically, it is a computer that's hosted by somebody else, and there's [inaudible 00:33:23] of problems with that type of setup." People went from being very pessimistic about the technology, which has been around for 50, 60 years, same as AI, to using it, and it became overhyped. They figured out some of the pain points, and now virtualization is something a lot of people use, and it's become much more understood in terms of the risks and how it's used. AI follows exactly the same scenario. AI is 50 years old. It's not a new concept. It's only in the last few years that it's gotten into the public limelight, and there are new types of AI, and now there's lots of excitement. There are some genuinely good use cases, but also people perhaps oversell the power of AI in things it's not designed to do. And that's fine. That's true for any technology hype. What's going to happen is that brings cybersecurity risks and problems, and we need to separate "Is there going to be an AI uprising?" from the actual problem, which is "My AI model's leaking my confidential data." That will come in time, and then probably we'll have a much better understanding of how it works.
Dave Bittner: How do you go about preparing your information for policymakers, translating it and putting it in a way that they can both understand and then use in their own work to better serve the public?
Dr. Peter Garraghan: I think with policymakers, they have various different levels of expertise, and their job is to communicate to the public, but also to politicians, actionable insights so they can actually make legislation or suggest best practice to different organizations and other governments. So, going about this as a professor at the university, and also working a lot in business now, we have a lot of experience catering towards very technical individuals, but also writing in layperson's terminology, so they don't need to be an expert in AI or cybersecurity to understand. That requires reading some very, very technical pieces of literature and work and code, and then translating that into a form that a typical person will understand. The great thing about AI security is conceptually [inaudible 00:35:29] make sense. People understand, the policy people understand, that leaking data or losing data is bad. They understand that if someone could steal my AI, that's also bad. So, there's less emphasis on the technical mechanisms to achieve this, and more on how it works and what recommendations are required to mitigate that.
Dave Bittner: What has the reaction been so far when you've presented this to the various stakeholders?
Dr. Peter Garraghan: So, the stakeholders involved have been rather happy, because it's not typical or common that they get someone who's both a professor and a CEO of a tech company to try to give both views. I give a very technical, scientific, academic perspective on the problems objectively, but also try to tie this to the business problems that we face at Mindgard on a day-to-day basis. So, the reactions have been very complimentary. To my knowledge, it has been circulated with various agencies within the UK, but also across other countries, for the UK to actually explain what they do within AI security and cybersecurity, alongside all the other great work and reports that were released at the same time.
Dave Bittner: Where do you suppose this is headed next? I mean, is this a first round? Do you expect there to be updates along the way?
Dr. Peter Garraghan: Yes, I think there will be updates. In the report, at the very end, I do mention that the security of AI is not a solved research topic. It's actively changing on a week-to-week basis, and no one can claim that this space is completely known. Therefore, it's a point-in-time solution, saying, "What is the state of the art from, you know, from the year 2020 all the way up to the beginning of 2024? What is the snapshot of using AI and the cybersecurity recommendations [inaudible 00:37:22]?" I expect that, come a few years later, conceptually a lot of the recommendation advice will still apply. What will be updated, though, are the actual primary sources of which recommendations have been tried and tested to minimize risk within AI.
Dave Bittner: I'm curious. As you were taking part in this research and making your way through all of that literature, was there anything that you came across that surprised you or was unexpected?
Dr. Peter Garraghan: So, I don't think anything was necessarily unexpected, because I've spent a lot of my career looking at, you know, very different types of primary and secondary sources in the literature, but one thing that was quite surprising is that a lot of the innovations in AI security, given it's so new, a lot of the really interesting recommendations and descriptions of attacks, have not come from academic research papers or technical company frameworks. They've come from blog posts, from people who are super-technical, who are passionate and interested in hacking AI systems and how to fix them. The real meat of the evidence to empirically demonstrate their effectiveness comes from non-peer-reviewed sources. And obviously, those have to be scrutinized quite carefully, and as scientists, we can correlate what they mention with our [inaudible 00:38:42] experimentation, but that's been quite surprising. And I suspect this is going to continue, because within the AI security space, there isn't a formal database of vulnerabilities yet. Therefore, there are lots of people trying different things, and I think in the recommendation space, they can apply these to different frameworks and tools, which is great. For the new things, in terms of the technical techniques they need to recommend, it's still unknown how they're going to work.
Dave Bittner: What's your own outlook here? I mean, looking forward, are you optimistic that this is something that we're going to get a handle on and it'll become a, you know, regular part of our day-to-day lives?
Dr. Peter Garraghan: Yes, I think it will, Dave. It comes back to the mantra that AI is software. Replace that word, replace AI, machine learning, ChatGPT, with an application. Applications and software have problems. Yes, we know this. We've spent the last few decades with hacks against systems, data being leaked, or just poor performance, or like people [inaudible 00:39:42] my network from people communicating over botnets. AI is no exception. Therefore, I do expect there'll be a lot of great progress coming in how to secure AI and recommendations, but I also envision there'll be problems, as with any type of software. The difference now is, people were burned quite badly with the rise of cybersecurity as a concept. I think now a lot of the governments and technical companies are getting slightly ahead of things, in terms of they know this is coming as a problem. They're putting down the governance frameworks and recommendations now, ahead of this actually becoming massively adopted at a huge scale. That's very different from previously, where it was a lot of trial and error in terms of, "Let's build this thing and we can then respond to the type of threats and risks we encounter." [ Music ]
Dave Bittner: Ben, what do you think?
Ben Yelin: That was really interesting, talking about security hygiene, things that can be done at the organizational company level. I think that's a good frame of reference for this because we often talk about what can be done at a policy level, but I think it is going to be incumbent on individual organizations to protect their own security in the AI era by hiring people who can manage the legal and regulatory requirements that are increasing by the day, engaging with stakeholders, things like that. So, really interesting interview with Dr. Garraghan.
Dave Bittner: Yes, it was great to have somebody who's been so close to the inner circles, you know, talking about this. In his case, in the UK, but to get his perspective I think is definitely valuable. So again, thanks to Dr. Peter Garraghan from Mindgard for joining us. [ Music ] That is our show. We want to thank all of you for listening. We'd love to know what you think of this podcast. Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly changing world of cybersecurity. If you like our show, please share a rating and review in your podcast app. Please also fill out the survey in the Show Notes or send an email to caveat@n2k.com. We're privileged that N2K CyberWire is part of the daily routine of the most influential leaders and operators in the public and private sector, from the Fortune 500 to many of the world's pre-eminent intelligence and law enforcement agencies. N2K makes it easy for companies to optimize your biggest investment: your people. We make you smarter about your teams, while making your teams smarter. Learn how at n2k.com. This episode is produced by Liz Stokes, our Executive Producer is Jennifer Eiben, the show is mixed by Tre Hester, our Executive Editor is Brandon Karpf, Peter Kilpe is our publisher, I'm Dave Bittner.
Ben Yelin: And I'm Ben Yelin.
Dave Bittner: Thanks for listening. [ Music ]