Afternoon Cyber Tea with Ann Johnson
Ep 82 | 10.3.23

AI: The Promise and Potential Peril

Transcript

Ann Johnson: Welcome to "Afternoon Cyber Tea," where we explore the intersection of innovation and cybersecurity. I'm your host, Ann Johnson. From the frontlines of digital defense to groundbreaking advancements shaping our digital future, we will bring you the latest insights, expert interviews, and captivating stories to stay one step ahead. [ Music ] Today I am joined by Dr. Hyrum Anderson and Ram Shankar Siva Kumar, who are co-authors, and congratulations, guys, the co-authors of the book "Not with a Bug, but with a Sticker." Hyrum is CTO at Robust Intelligence, an AI integrity platform and solutions provider. Hyrum's technical career has focused on security, having directed research projects at MIT Lincoln Laboratory, Sandia National Labs, FireEye, and as chief scientist at Endgame and principal architect of trustworthy machine learning at Microsoft. Hyrum also co-founded and co-organized the Conference on Applied Machine Learning and Information Security, the ML Security Evasion Competition, and the ML Model Attribution Challenge. That's a lot. Ram Shankar is a self-described data cowboy here at Microsoft whose work focuses on the intersection of machine learning and security. He is the founder of Microsoft's AI Red Team, which brings together an interdisciplinary group of researchers and engineers to proactively attack AI systems and defend them from attacks. I am really excited to welcome both of you, Hyrum and Ram.

Ram Shankar Siva Kumar: Thank you for having us, Ann.

Hyrum Anderson: Thank you, Ann.

Ann Johnson: So, we have a lot to talk about today. But let's start -- yeah. It's a big hot topic right now. But let's start with your book, right? "Not with a Bug, but with a Sticker." This was published earlier this year. It's a fantastic read that underscores both the promise and the peril of AI as it goes into the mainstream. So for the audience, who may or may not have read it already, let's have them grab a copy. But also, let's kick off the discussion with why. Why did you write a book right now? What was the genesis of the theme and content? And Hyrum, let's start with you.

Hyrum Anderson: Sure, Ann, thanks. And thanks to listeners. We do encourage you to take a look at this book, which was written before the ChatGPT effect took place. But Ram and I have been collaborating on this topic since, I don't know, maybe 2016 or '17, where we were screaming into the wind about the fact that as people adopt AI, they also adopt AI risk. And this book is about that. It tells stories, and it hopefully will help readers understand the risks and the opportunities to correct those risks as we move into this brave new world with AI.

Ann Johnson: It's interesting. Because I read the book. I actually was blessed to be a person that was able to pre-read the book before it was published. And I find it to be fascinating, not just from the stories you tell, but how the stories are told. So, Ram, can we get your perspective on why you wrote the book right now and what was compelling and what was important?

Ram Shankar Siva Kumar: Yeah. I think this was my pandemic sourdough. I was rudderless during the pandemic, and first off, this book kind of helped me ground the narratives of the space Hyrum and I have been working in. But more than that, one of the things that inspired me was just all these interesting people. We think of securing AI systems as this monolith -- just one set of folks who seem to be working on it, one set of personalities tackling it. But it's really variegated in the types of people, their backgrounds, their stories, and I was very much wrong about that, too. For instance, the book opens with this person who's standing in Magnuson Park. If you ever visit Seattle, it's an off-leash dog park. And this person is holding a stop sign, which looks very unremarkable except for a few graffiti-style stickers, and is waiting for a car to pass by. That's how the book opens. And we use that as a lynchpin to show how everybody from the US government to Google to Microsoft to policymakers, you know, the European Union, these underground organizations, all get drawn in by this one act of this one person holding a stop sign with a sticker. How it balloons into not just a moment but almost a movement that really comes to define how securing AI systems has currently taken shape. So that for me was a very interesting story to tell, Ann.

Ann Johnson: I think, like I said, the way you tell the story captivates. You know, some folks might be intimidated to read the book. Oh no, it's a book about the security of AI. I don't even understand AI and I don't understand security. But the storytelling of the book is so good that I would encourage folks to take the risk, right? Read it. It's also not super long. It's not "War and Peace" length, right? So, it tells a great story and it tells it in a very succinct and crisp way. But anyway. Alright. The first few pages of the book, and Ram, I want to start with you on this question. You underscore some of the most impressive and important AI-powered advances in business and science and society, because it's not just about, you know, technology, right? And I'm sure since the book's published, there have been even more groundbreaking discoveries. Can you help paint the picture of what AI might be able to do for the world? What massive changes and problems can it help solve?

Ram Shankar Siva Kumar: Absolutely. For me, as I was working with Hyrum on this book, it really -- you know, we think of Tesla and we think about Facebook and we think about Google as the forefront of folks who are working on AI systems. We think of them as the ones who are commonly identified as the AI vanguards. But for me, it was super surprising to learn that Hershey's is using AI to identify the ideal number of twists in Twizzlers. You've got McDonald's using AI to optimize their supply chain. So, things you may not think about -- your chicken nuggets are almost powered by AI, I would like to think so. It really is no longer a piece of technology that's only relegated to the people who are creating it; it has been democratized completely across the board. And that's really the interesting aspect. We have invited in this technology whose risks we really don't understand, but we see massive economic gains around it. And that is a very interesting proposition. Here's something whose consequences people still do not know, but it has pervaded everything from the time I wake up -- from driving my car to work, to doing my work, to going back home and unwinding with Netflix. Every part of it is touched by this transformational technology. And the question that Hyrum and I try to tease out in the book is: great, this system is now absolutely essential to our world. What does it mean for an adversary to go after it? Jen Easterly talks about how security is now a kitchen table conversation. It's not something that only CISOs talk about; it's something you teach your kids -- how to secure their, you know, passwords, or how they should lock down their profiles. Now the question is, how do you teach people to lock down a system that completely pervades their life? That they seemingly may have no control over? And more importantly for Hyrum and me, whose risk they do not understand.

Ann Johnson: Yeah, that's something that we talk about a lot on the podcast. Because I use my family as an example, right? And I'll do a little aside here. I had this really healthy debate with my husband, who literally did not want to use multi-factor authentication for his Starbucks app, because it's just one more thing he has to worry about when he's trying to get his morning coffee. And I said, well, do you have a credit card that automatically loads funds into the Starbucks app? He's like, yeah, why is that a problem? I said, just use multi-factor authentication. And we stopped having this conversation. Right? It's like, all you have to do is look at your phone. But people don't. You know, that 10 seconds, if people don't understand the underlying security reasons, really feels inconvenient. Right? Still today. So these are conversations, like Jen said, we're having in the world, not just in security. Alright. I digressed for a second, and to my husband, when he listens to this episode, I'll apologize to him right now. But so, Hyrum, a few pages later in the book, you're quick to point out the peril of AI. In an excerpt, "In AI We Overtrust," you make the point that as AI goes more mainstream, it's important that everyone from researchers and technologists to everyday people understand its limitations -- what it can and can't do, what it should and shouldn't do. Hyrum, can you help unpack that for our audience? Why is that healthy skepticism about AI so important? And why should we continue to have skepticism?

Hyrum Anderson: Yeah, Ann, thanks. And listen, Ram and I both are optimists, especially when it comes to the utility of AI to make a better world, to make a more convenient world for us. And so, when we talk about "In AI We Overtrust," I think the basic thing to remember is that when AI is trained, it's trained to do one thing pretty well. And when it does that one thing pretty well, we often ascribe to it ability in areas it was never designed to perform well in. So, this is one element of people relying on AI. We rely on it for one thing. An example from our book is relying on a robot to give directions in a normal situation. It turns out that because we gained this reliance and trust in that situation, we tend to overtrust it when we depart from the normal behavior the robot was trained for. So surprisingly, this automation bias that we have extends to AI in a way that we need to be careful about. The second thing is that even the things a model is really good at can, when an adversary is present, actually be turned against you. It can be manipulated. So when you trust it, you are falling for something an adversary is doing to the system to manipulate you and your trust in that system. That's not new to AI. Basically, adversaries rely on your trust in systems to get an advantage -- in cybersecurity, in fraud, and whatever. And in AI, that is kind of a new thing for us here.

Ann Johnson: And it's really important. And it brings us into the next question, which is also about data, right? Data suddenly became a thing. We've been talking about this, you know, on our podcast recently. Big data was all the rage, you know, 10 years ago. And now suddenly everybody's talking about data again because of its role in training foundational AI models. You talk about data from a slightly different dimension in your book, Ram, as a potential mechanism of attack on AI systems with data poisoning. You called poisoning out as one of the top concerns for business leaders and researchers. Can you explain first of all, for our audience, what is data poisoning? And then explain why it's such a huge concern?

Ram Shankar Siva Kumar: Absolutely. You know, all the snarky subtitles in the book come from me, because Hyrum is too polished and very academic and so highly statured to come up with snarky titles. So, please excuse me for that, Ann. One of the subchapters of the book is, you know, your AI is trained by vampire novels. There's this now very contentious dataset called the Common Crawl dataset, which essentially indexes everything on the internet and is used to train ML models. And one of the interesting things is, it's not just trained on what you would think of as highly polished academic papers. It's trained on everything -- from Reddit forums to vampire novels to young adult fiction. Everything and anything on the internet goes into training your machine learning system if it was picked up by Common Crawl, which has been used very widely by everybody. I'll come back to Common Crawl and tie it back to this conversation, so keep that in mind. Turns out that when people train on the entire cesspool of the internet, it goes in really weird directions. So, one interesting thing that people have already pointed out is that these ML systems are biased, because all of a sudden you are training on the really dark corners of the internet, which is really not humane. So that is just by default. Now on top of that, you can think of an adversary inserting malicious content into these URLs. And if that gets picked up by the ML model for training, all of a sudden it becomes poisoned. This is like the classic fruit of the poisonous tree. Think of it as poisoning the well -- everybody who draws from the well is affected by it. And in fact, Hyrum showed with folks from Google and NVIDIA that it only takes 60 bucks to poison the Common Crawl dataset. And that has huge ramifications. All of a sudden, this becomes inescapable: if you're not checking whether an adversary has polluted your dataset, you just inherit this poisoned dataset to train your ML models on, and you have very little recourse. If you find out later that your ML model has been poisoned, then for all practical purposes you have to chuck your model out and build a new one. And this is not farfetched. There was an inadvertent poisoning that happened at Unity, the gaming software company. In one of their earnings calls, they pointed out how they inadvertently trained their ML system on bad data, and they essentially lost millions of dollars because they had to chuck the model out and build a new one from scratch. So, this is a really big problem that you should be thinking about, and if you're not thinking about it and you find out post hoc, then you essentially have to start from scratch all over again.
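
For readers who want to see the shape of the attack Ram describes, here is a minimal, hypothetical sketch of label-flip data poisoning in Python. The dataset, model, and 5% poisoning rate are illustrative assumptions, not details from the book or from the Common Crawl research mentioned above.

```python
# A minimal, hypothetical sketch of label-flip data poisoning with scikit-learn.
# Everything here (dataset, model, poisoning rate) is a toy stand-in for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline model.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# An attacker who controls part of the data pipeline flips labels on a small
# fraction of the training examples.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
idx = rng.choice(len(poisoned_y), size=int(0.05 * len(poisoned_y)), replace=False)
poisoned_y[idx] = 1 - poisoned_y[idx]  # flip 5% of the labels

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned_y)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Even a small poisoned fraction can measurably degrade the model, and, as Ram notes, once poisoning is discovered after the fact, the practical remedy is usually to retrain from scratch on vetted data.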

Ann Johnson: I think that's really good context. And I'm also sure, as we get later in the episode, we're going to talk about some positive things, too. But I do want to cover off risk. So Hyrum, I know data poisoning is not the only challenge that's top of mind for leaders and researchers. What are some of the other issues that you're most concerned about?

Hyrum Anderson: Yeah. So poisoning is where one has a causative influence on the model, changing it at the time of training so that it behaves poorly later. Lots of machine learning attacks today actually happen on high-quality, good models that have not been poisoned. And that's because models are just imperfect. Even though they're good at the task they've been trained to do, they can be tricked. This class of attacks is often called exploratory attacks. So take a model that's already out there -- like ChatGPT. After it's deployed, it works pretty well. But by feeding the right inputs to this model, it can be convinced to behave poorly. Some of the earliest examples of this were in changing small parts of an image, imperceptible to you and to me, that make the model believe that this is not, say, a bus, but an ostrich, even though it looks like a bus to you and me. It has also existed in phishing in cybersecurity. In email phishing, how does an adversary change what's presented to the model so that the model is confused, even though a human might not be confused by whatever change has happened? So, exploratory attacks are a broad class. Today, I think top of everybody's mind are all of the things that have to do with large, generative models like ChatGPT: breaking out of the intended purpose of the model, called a jailbreak, or other styles of prompt injection attacks that misappropriate the model to do something that might cause a security violation it was never intended to allow. And these are very top of mind for the people we talk to today.
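
As a companion to Hyrum's description of exploratory attacks, here is a minimal, hypothetical sketch of an evasion-style perturbation against a simple linear classifier. The toy model and perturbation budget are assumptions for illustration; real image attacks like the bus-to-ostrich example apply the same gradient idea to much larger models.

```python
# A minimal, hypothetical sketch of an evasion ("exploratory") attack:
# a fast-gradient-sign-style perturbation against a toy linear classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)
w, b = model.coef_[0], model.intercept_[0]

x, label = X[0], y[0]
p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # model's confidence for class 1
grad = (p - label) * w                   # gradient of the cross-entropy loss w.r.t. the input

eps = 0.3                                # small perturbation budget
x_adv = x + eps * np.sign(grad)          # nudge every feature in the worst-case direction
# (at this budget the prediction may or may not flip; a larger eps flips it more reliably)

print("original prediction: ", model.predict([x])[0], "true label:", label)
print("perturbed prediction:", model.predict([x_adv])[0])
```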

Ann Johnson: I think all of that is real and makes sense. And I think as we learn the promise and the hope of the technology as it proves out, there will be additional risks. I know that one of my favorite excerpts from the book, and perhaps one of the most important, focuses on the role of defenders. Right? And the challenges they are going to have in defending everything within their purview all at once. It's impossible today. In the excerpt, you discuss information asymmetry. And Ram, what is the importance of understanding information asymmetry between attackers and defenders?

Ram Shankar Siva Kumar: You know, when Hyrum and I were writing this book, we were writing it for a wide variety of audiences -- security professionals who are curious about machine learning systems and their failure modes, but also ML folks who say, I don't know anything about security, and pick this book up. So some of these concepts, like information asymmetry, are really rooted, very well established in the security community. The defender has to know everything about their system to fortify it, whereas an attacker only has to find one crack to get through. So there you can already see one aspect of asymmetry, where the onus on the defender is much larger than on the attacker. The other interesting aspect is that the attacker may know much more about how to break the system than the defender, because the defender, focused on fortifying it, may not have the capacity to think through how the system could fail. So one of the things we were thinking about was, hey, with this asymmetry in play -- which is very well established in game theory and in the security community -- what impact does it have on securing AI systems? The way that early defenders in this space thought about securing AI systems was, you know what? I am going to make it really difficult for the attacker. I am just not going to release the ML model. So hey, haha, now I have something the attacker does not know. I know the internals of the ML model, the attacker does not, I gain an advantage. Turns out that this familiar idea of security by obscurity does not really work for ML models at all. If you think your advantage over an attacker is keeping your ML system internal only, that's a faulty assumption. Turns out that to levy any of these attacks on ML models, you need to know virtually nothing about the internals of the model. These are called zero-knowledge or black-box style attacks, and they tend to be very, very effective. And one of the pioneers of this attack whom we profiled in the book, Nicolas Papernot, now at the University of Toronto, was able to crack this as a grad student who knew nothing about machine learning and picked it up by watching YouTube videos. And he brought down some of the most state-of-the-art machine learning systems. Now he's a very well established professor. But for me, that journey -- how somebody who knows nothing about machine learning can ramp up on YouTube, gain just the right amount of knowledge, find that one single crack in the machine learning model, and exploit it -- really undergirds why we need to think about information asymmetry, especially from a defense aspect, for securing machine learning systems.
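
To make the black-box idea concrete, here is a minimal, hypothetical sketch of the substitute-model approach Ram alludes to: query the victim model, train a local copy on its answers, and attack the copy. The models and data are toys, not Papernot's actual setup, and transfer to the victim is not guaranteed in this small example.

```python
# A minimal, hypothetical sketch of a black-box "substitute model" attack.
# The attacker never sees the victim's internals, only its predictions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier   # victim: internals hidden from attacker
from sklearn.linear_model import LogisticRegression   # substitute: attacker's local copy

X, y = make_classification(n_samples=3000, n_features=10, random_state=2)
victim = RandomForestClassifier(random_state=2).fit(X[:2000], y[:2000])

# 1. The attacker queries the victim on inputs of their choosing...
attacker_inputs = X[2000:]
victim_labels = victim.predict(attacker_inputs)

# 2. ...and trains a substitute model on those (input, predicted label) pairs.
substitute = LogisticRegression(max_iter=1000).fit(attacker_inputs, victim_labels)

# 3. Adversarial examples crafted against the white-box substitute often
#    transfer to the black-box victim (not guaranteed in this toy setting).
w, b = substitute.coef_[0], substitute.intercept_[0]
x = attacker_inputs[0]
p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
grad = (p - victim_labels[0]) * w
x_adv = x + 0.5 * np.sign(grad)

print("victim on original: ", victim.predict([x])[0])
print("victim on perturbed:", victim.predict([x_adv])[0])
```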

Ann Johnson: I think it's a really important way to think about it. I also love your point that we can't rely on security by obscurity anymore. We have to have common defense. We have to be working together. And the only way to do that is to be transparent. Alright. Let's talk -- security's not doom and gloom. I'm an optimist about this industry. I'm always an optimist about this industry. There are certainly challenges to work through, but Hyrum, when you talk to leaders and researchers, what advice do you give them to help navigate AI and work through the challenges so that they can realize the promise of AI?

Hyrum Anderson: As we said in the beginning, we're encouraging people to adopt AI, but to do so with the awareness and the kind of mindset that will help them do so responsibly. One comparison that Ram and I make in the book is that the state of AI and AI security is not unlike where the internet was in 1999. Right? Ann, I think you were using Netscape and Clippy was your AI assistant in Microsoft Word. And at that time, there were actually lots of attacks happening on the internet. One was from a 15-year-old named Jonathan James, who created a backdoor into US Department of Defense systems. He was 15. There was another young man, also 15, from Canada, who went by Mafiaboy, and he DDoS-ed Amazon and CNN and eBay and Yahoo. And why did they do this? They did it for fun. So one thing that we say to business leaders, people adopting AI today, is that we're just at the very beginning of this. There's so much more to come, both in the defenses -- Ram and I have interviewed brilliant minds who allowed us to write this book, brilliant people who are working on this problem -- and at the same time, attacks will also commence and grow stronger. That shouldn't have stopped you from doing the internet in 1999, and it shouldn't stop you from doing AI. You should do it. There's something I think that's axiomatic about how software, in this case AI, co-develops with attackers in terms of its security maturity. And when that happens, the software -- AI in this case -- gets stronger and better, as long as we have the mindset to provide defenses for it. So how do business leaders today evaluate and mitigate those challenges? Number one is just being aware. It's adopting a risk mindset for yourself and for your organization, and it's implementing simple practices, mostly fundamental cybersecurity-like hygiene, that allow you to adopt these new technologies in a principled way. There are lots of examples of that I could enumerate here, but just one: when you're using AI, you need to think about trust boundaries for data and what you're allowing it to do automatically. Those kinds of system-level concepts can be every bit as important as preventing a poisoning attack or an evasion attack, which are also important, right? But you can start with the basics and that will take you a long way.

Ann Johnson: That's really helpful. I think it's also, again, about aligning with what people can consume and understand. Look, I've said this for years, even before GenAI, that I think AI is going to tip the scales in favor of cyber defenders and resolve some of the information asymmetry we previously discussed. Do you agree, Ram? I mean, I don't think this is a losing battle for us.

Ram Shankar Siva Kumar: Oh, absolutely not. And I want to go on a little bit of a diversion. Ann, I remember in 2019, when I was just putting together the AI Red Team, I was shopping this idea around to people. And Ann, I distinctly remember our conversation. I felt like I never had to go into salesman mode with you. You were like, Ram, you have to do this because this is important. This was even before the current wave -- now AI safety and AI security are in vogue, and everybody from the White House to all the companies is on board. But for me, the penny dropped when, Ann, I think you mentioned this: security cannot be bolted on top of AI systems. It needs to be built in. And that is super important. And that, I think, is a fantastic opportunity for defenders, who really have seen waves of technology come and go. You know, they've seen the rise and fall of the internet, and now the rise again. They've seen transformational technologies like IoT. I think AI is just yet another transformational technology, just like cloud was, just like IoT was, and just like the internet is and was. So for me, it's old wine in a new bottle for defenders. Table stakes: are you doing proper SDL? Do you have an incident response team that is geared up to face these new threats? What kind of actionable advice are you giving to engineers? How is a CISO going to think about this? It's really not going to change that much with AI, but it's also going to change a lot with AI. And that tension is where I think there's a massive opportunity for veteran security professionals to jump in and say, okay, kids, I know you think AI is new, but here are some hard-earned battle scars that we have incurred and here's what you can learn from them.

Ann Johnson: I think that's really helpful, too. I mean, AI is like a shiny new object to a lot of folks. But at the end of the day, machine learning, you know, the beginnings of this technology have been helping cyber for a decade probably or more. And we're in the infancy of the promise, right? Well, look. I want to thank you both. I know you're both really busy. So Ram, since I have you, I'd love to know what you're working on today that has you excited. And Hyrum, we'll go to you with the same question.

Ram Shankar Siva Kumar: One of my promises to Microsoft is to think about failures more holistically. So the AI Red Team identifies failures in AI systems, Ann. And for the last three years, we have built significant muscle around identifying security failures. Now, building on the great momentum that folks from MSR New York, the [inaudible 00:29:41] Center, and the Responsible AI folks at Microsoft have built, I really want to think about this more holistically -- not just identifying security failures, but conjoining that with responsible AI failures. And tactically this means, as I'm looking for, hey, can adversaries steal my ML model weights, also proactively finding out whether a model is generating content that incites violence, or content that can promote self-harm. We want to be able to look at these things and identify these risks even before they reach the hands of customers, as we have been doing for a very long time. So how do I combine security with responsible AI harms? How do I scale it to the needs of the community? Because Microsoft is blessed to have an AI Red Team that a mom-and-pop shop may not have. So how do I empower them? And how do I raise the sea level for everybody? That is something I am thinking about, Ann.

Ann Johnson: And Hyrum, what are you thinking about and what are you working on today?

Hyrum Anderson: What really strikes me is that as I'm talking to really big, mature, accomplished enterprise companies, even the most mature companies are really on a journey and still trying to wrap their minds around how to grapple with the security of AI. And as you think about this, how long have we had unit tests in software? Or, you know, system integration tests for testing software? But today, that's really not a motion that people have for AI models in an automated, regular way. What Ram has been doing for years now, and what I joined him in at Microsoft, is figuring out -- through expert knowledge and curiosity and tenacity -- the failures of systems and how they could go wrong. And one thing I'm realizing is that even the simple failures need to be automated. Right? Some of those don't require hands-on exploratory creativity. They're just software checks that have to happen --

Ann Johnson: Yeah.

Hyrum Anderson: -- to machine learning. So what I'm thinking about is helping organizations to adopt those motions. How can they have sort of the very basic principles of security that will help them to adopt AI in a meaningful way and a secure way?
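
A minimal, hypothetical sketch of the "unit tests for models" motion Hyrum describes might look like the following pytest-style checks. The thresholds, noise level, and model here are illustrative assumptions, not a prescribed methodology.

```python
# A minimal, hypothetical sketch of automated, regression-style checks for an ML model,
# in the spirit of unit tests for software. All thresholds and data are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=3)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=3)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def test_minimum_accuracy():
    # Regression check: a new model version must not fall below a known baseline.
    assert model.score(X_test, y_test) >= 0.80

def test_small_noise_invariance():
    # Robustness smoke test: tiny random noise should rarely change predictions.
    rng = np.random.default_rng(3)
    noisy = X_test + rng.normal(scale=0.01, size=X_test.shape)
    agreement = np.mean(model.predict(X_test) == model.predict(noisy))
    assert agreement >= 0.95
```

Run under pytest as part of every model release, checks like these would catch the simple failures automatically, leaving the creative, hands-on exploration to the red team.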

Ann Johnson: No, I think that's right. And I think that if we can relate it back to, Ram said it too, learnings from the past and apply those learnings, then we can avoid making, you know, mistakes in the future, right? Look, I'm an optimist about cyber. I say that all the time. I'm optimistic about the future. There's a rise in cyber crime, I know, but I get up every morning and I have for 25 plus years, because I feel good about this industry. Can either of you or both of you tell us, tell the listeners, what you're optimistic about?

Hyrum Anderson: Let me start so Ram can have the last note. I am optimistic about the people that we work with -- you and I, every day, in this journey. These are smart people, collaborative people, engaging people who together are mission driven to solve really serious but not insurmountable challenges in cybersecurity. And, as it relates to AI security, that gets me up every day. I love working in this space. I love collaborating with folks who are likeminded and discovering together, failing together and learning from those failures. It's been tremendous in my life to be -- I feel like, really, I feel like I'm the dumb one in my company. I'm definitely the dumb one in this industry. And I love learning and moving together with all of you folks.

Ann Johnson: I think that may be the first time I've ever heard anyone call you the dumb one, but thank you, Hyrum. Thank you for the humility, also. Ram, what are you optimistic about?

Ram Shankar Siva Kumar: What more can one add to Hyrum's always wise insights into the field? He is almost the grandfather of the field with his seminal contributions. I feel very strongly about the diverse perspectives that are now being included in finding these failures. It's no longer centralized to this small set of folks who understand the technology. Now, anybody with a browser can go and red team these large language models. With this intense democratization, we're going to have all these people who have never thought of themselves as red teamers, who have never thought of themselves as folks who can break these systems, and they're going to come and find these failures. And that for me is going to be so interesting, Ann. I'm just excited that everybody and anybody is now empowered to go find failures in ML systems, and that's going to bring such diverse perspectives that we did not look for in the security field, you know, years before.

Ann Johnson: Thank you both, Ram, Hyrum. I know you're both super busy. I know your schedules are tight. And congratulations again on your book.

Hyrum Anderson: Thank you, Ann. Such an honor to be with you today.

Ram Shankar Siva Kumar: Thank you, Ann.

Ann Johnson: And many thanks to our audience for listening. Join us next time on "Afternoon Cyber Tea." [ Music ] So I've known Ram and Hyrum for a while. They were early at Microsoft in AI defense and thinking about the security of AI, and of course, Ram actually created and leads the AI Red Team practice at Microsoft. So inviting him on the episode was a natural fit, especially after I read their book. I was fortunate enough to be an early reviewer of the book, and it's written in such a fun way with so much detail and quality information. Both the storytelling and the stories they tell are exceptional. I would encourage the audience to pick up a copy of "Not with a Bug, but with a Sticker," and I think you'll also really enjoy the episode. [ Music ]