Microsoft’s Yonatan Zunger on Red Teaming Generative AI
Sherrod DeGrippo: Welcome to the "Microsoft Threat Intelligence Podcast." I'm Sherrod DeGrippo. Ever wanted to step into the shadowy realm of digital espionage, cyber crime, social engineering, fraud? Well, each week, dive deep with us into the underground. Come hear from Microsoft's elite threat intelligence researchers. Join us as we decode mysteries, expose hidden adversaries and shape the future of cyber security. It might get a little weird, but don't worry. I'm your guide to the back alleys of the threat landscape. Welcome to the "Microsoft Threat Intelligence Podcast." I am Sherrod DeGrippo, director of threat intelligence strategy at Microsoft, and I am joined today by a guest that I am very excited to have on, Yonatan Zunger, CVP of AI Safety and Security at Microsoft. Yonatan, I'm so excited to talk to you. Thank you for coming on.
Yonatan Zunger: Hi, I'm excited to be here. Thank you for having me on.
Sherrod DeGrippo: So you and I met last month at a company meeting in Mountain View. And you are somebody that, around Microsoft, people are always like do you know Yonatan? You have to meet him. You have to meet Yonatan. We have to get him. Whenever there's speaking engagements and key people like you have to get Yonatan. It's true. And so I thought naturally I would want to have you on the podcast, and we met, and of course, I instantly was like, yes, this guy is awesome. So tell me a little bit, again for the audience, sort of what your background is, which is crazy fascinating, especially some of your social media background.
Yonatan Zunger: Well, I hope I can live up to a reputation like that. Mostly, I'm the guy who walks around solving weird problems, right? So once upon a time, I was a theoretical physicist. I started out in string theory and got very, very sick of academia for numerous reasons and ended up, through a weird series of events, working at Google back in the very early days when we were still like in the overgrown startup phase of the company. I worked on very traditional software stuff. You know, I worked on, like the core search and like on ranking. And then I ran like the high-capacity search team and then like planet-scale storage, like all sorts of, you know, like big iron sorts of engineering jobs. And then, in 2011, Google decided that they wanted to get into social media really seriously. And this was a thing that led to Google+, and they needed a sort of senior technical hand onboard. And so I got recruited to be the CTO of social for Google, which was basically it was a group that involved Google+, but it also involved photos and news and blogger and all sorts of other services like that. And within about three weeks of taking that job, it became extremely clear that the hard problems were not about technology and storage systems and things like that. They were about security and privacy and abuse and harassment and all of the ways in which the human aspects and the technical aspects intersected. And so that sort of quickly became my world. And this was also right at the time that the GDPR was in the draft phase. So we were very involved in that space. Privacy was very central in people's minds. This is what led to privacy sort of becoming the euphemism for all of the possible things that can go wrong at Google. So when we built up a, quote, unquote, "privacy team," it was really a let's fix all the problems team. I think some people tend to view privacy as like a compliance test or as like a legal checklist or something. And, no, that is not what we're doing there. And this basically led to my becoming the company department of weirdship. Anytime something sufficiently strange happened that was at the intersection of engineering and humans in some way, it ended up generally being my issue, right? So in the years after that, I got very involved in this. I was actually -- I did the most forbidden thing you can possibly do at Google, which is I went off and actually talked to customers. I got on the service and started engaging directly with users, which was considered incredibly shocking at the time, but it was, I mean, obviously the most valuable thing I could have possibly been doing.
Sherrod DeGrippo: It seems impossible to avoid in today's day and age, right? Like as a Microsoft employee, I feel like every human I interact with is a customer, whether of the Office Suites or of Gaming or of LinkedIn or of something. I feel like Google's kind of the same way. Everyone, to some degree, is a customer.
Yonatan Zunger: Absolutely. And I think that like this kind of customer-focused attitude is just it's so important. I think one of the most useful things I learned from that was just how to be like resolutely nonstop, customer-focused and just to pay attention to what actually mattered to people because it wasn't always what your product team had ideated might matter to them in the future, right? This is how you learn. This is how you get better. This is how you figure out what's really important.
Sherrod DeGrippo: Tell me since, the years since then, tell me what's happened.
Yonatan Zunger: Well, so in the years since then, I worked on other things. I built -- I ran info for the Google Assistant. I actually a co-led privacy at Google, along with Leah Kirstner and two other people. We just sort of had to deal with the entire category of how do you deal with hard things? Spent a couple of years doing a startup, spent a couple of years at Twitter, and after that exploded into flames. Ended up here at Microsoft, where I came for a nice, calm, normal job as CTO of identity and network access and, within a handful of months, was dragooned into, hey, we've got this new generative AI thing coming up. Come here. We need your help. Which is how I ended up in my current job.
Sherrod DeGrippo: I'm so glad that they dragooned you.
Yonatan Zunger: Well, I'm glad too. I'm having a blast.
Sherrod DeGrippo: That's not what you hear very often.
Yonatan Zunger: I am having an absolute blast with this.
Sherrod DeGrippo: Well, so tell me what that means then. Like what is your scope and your role, and what does that day-to-day look like? What are the goals there?
Yonatan Zunger: Well, so the goals and the scope have some very simple things. Our mission statement is we want to give our stakeholders justified confidence in the safety of Microsoft's AI products. And the word justified is doing a lot of heavy lifting in there. Basically, that's, you know, all of the work is evaluating every system that goes out the door, making sure that it's safe. It's building the infrastructure to make it easy for teams to do things safely. It's building the training and tooling and documentation for all the things that you can't turn into infrastructure. Like how do you do a good strength model? How do you analyze what the risks are? How do you build a good incident response plan? It's actually running -- like helping run a coordinate incident response across the company for AI-related issues. It's research and policy engagement. It's all of these things in one place. And when people ask me what the scope is, it's really simple. If it involves AI, and someone or something can get hurt, it's in scope. That's the full scope statement.
Sherrod DeGrippo: And so I have spoken to some of the people on your team quite a bit, and it is really fascinating. For example, the AI Red Team, we talk to them quite a bit because that's one of the weirdest things that I've ever heard of. For those listening, you should also know that Microsoft does have an AI bug bounty program. So if you find an AI bug in a Microsoft product that qualifies for a bounty, go search that up, and you could submit your bug and get a little cash. But, Yonatan, you have AI Red Team under you. What are some things that that group focuses on? What are the things that they're worried about? What are you worried about with AI Red Team?
Yonatan Zunger: Well, so the way that we think about this is there's the overall Microsoft Red Team, and there's the AI Red Team, and they're a bit separate. Basically, the AI Red Team is specialists in what might go wrong when you treat an AI system as an AI system, right? Because all of these AI systems are still software, and all of the issues that could affect software still affect them. You have to do, you know, traditional cyber security to them, make sure they have the right access to data, make sure that you can't like break into the VM or anything like that. But then there's also the ways you can attack the system as an AI system. So for example, when people use terms like jailbreaks, but what is a jailbreak but really doing social engineering to an AI system? There's all sorts of techniques that you can use to break things, and we have sort of two major families of ways to test systems for that. One of them is measurement. Right, if you have a kind of threat that you know that like we need to make sure that when it gets this kind of input, it doesn't do the following things. For that, we actually have software systems like the safety evaluation service, which is actually part of the Azure AI Studio, that will help you actually do these kind of measurements. But when you have a high-risk system, or a really novel system, you've got unknown unknowns. You have things that might go wrong that we haven't built up a set of measurements for, something like that, and we want to make sure we discover them before threat actors discover them. And so that's where the AI Red Team comes in. Basically, this is the place where you open up your AI system, throw in a bunch of raccoons and see what happens. Their job is to cause as much trouble as possible and to figure out the most terrifying damage that they can do by playing with a system in any way possible. They have libraries of possible things that can go wrong, and they have tools. In fact, we have a tool that we've open sourced called a PyRIT, which that team builds and maintains and grows.
Sherrod DeGrippo: That's P-Y-R-I-T. It's a Python tool, correct? And it's -- we do have some blog posts. If you search that up, you can check that tool out as well.
Yonatan Zunger: Yes, it's a great tool, and I think that's also a place where, as we discover a lot of novel like methods, like techniques and things like that, we actually put those techniques directly into the PyRIT tool, which is one of our ways to really make this kind of technology available to the broader world. And then we really try to test those most vulnerable systems, or the most critical systems, for the risks that don't yet have any well understood measurement or mitigation or something like that, the ones where you just really want to take human ingenuity to the problem the same way an adversary would and see what you can make go wrong and then fix it before it goes out the door.
Sherrod DeGrippo: I was talking to one of the people on your Red Team, and they were telling me that, early on, with AI Red Teaming, one of the focuses was to try to get the AI to respond back with the recipe for like poison gas, as an example. And interestingly, we, at Microsoft, did not have a lot of poison gas subject matter experts, and so I was told that we had to go find some subject matter experts on that to make sure that what we were doing was accurate.
Yonatan Zunger: Yes, so we do work with a number of external companies and vendors and like experts in all sorts of specialized fields. So the threat you're talking about, the way we think about that threat nowadays is what we call skill uplift. So this is a place where an adversary is trying to get the knowledge or information they need in order to do something inherently dangerous. Sort of our classic example of this is SeaBurn threats. That's chemical, biological, radiological and nuclear. But this is also, you know, how do I commit murder and get away with it? Help me plan a terror attack. How do I make meth? How do I make fentanyl? Things like that. And the way we now think about these skill uplift threats is we actually have a tiered severity system for what responses look like, right? So Severity Zero means the system just doesn't give you an answer at all. It just refuses to help you carry it. So that's all good. Severities One and Two are situations where the system answers, but it's something you could just learn from the internet, right? Like how do I make a Molotov cocktail? If it's like, if you could find that on the Wikipedia page, then that's Severity One, Two. And the difference between One and Two is like some internal nuance that is not really interesting to the outside world at all. Severity Three is where it starts to get interesting. One of the things we learned in this process is that a pattern across all of these things is that there are hard steps that are not necessarily obvious to the casual observer. Like if you're trying to make a biological weapon, I mean, you obviously know that, okay, I have to get my hands on anthrax. I have to culture and grow more anthrax. They have to turn it into a powder, and they have to distribute it. Well, it turns out there's a couple of steps in that that even if basic biology and things like that, it is not going to be obvious to you that this step is actually really hard. And if you don't know a bunch of very magic and very classified stuff, you won't be able to do it right, and you will get out gunk instead of a biological weapon. So what we're really looking at is skill uplift that helps someone learn how to do a hard step that they wouldn't have learned how to do. So Severity Three situations are situations where it gives you an answer that at least might help you do a hard step. And the difference between Severity Three and Severity Four is Severity Three is it gives you an answer about how to do that. Severity Four is it gives you the right answer about how to do that. And the reason we have to separate those is for the reason you mentioned. I don't know what the right answer is for all of these things. I am not a biological weapons expert. I certainly don't hold those clearances. I don't want to hold those clearances. I'm perfectly happy not to. Severity Three, though, is the point where you start worrying. And that's the point where you start phoning up governments and saying, hey, the following thing came out. Can you please have someone cleared? Check this out and let us know how much we should or should not be worrying about what comes next.
Sherrod DeGrippo: And then traditional red teaming, when I think about it, like penetration testing and red teaming, even assessments, you create a report of findings, and you take those findings to the system owner or some kind of stakeholder, and you create these punch lists, and you say these are the ports you need to close, and these are the updates you need to apply. These are the network segments you need to create. What does it look like when you find something in one of those type of engagements? What happens after you find it?
Yonatan Zunger: So it's basically identical. We come up with reports that are exactly that same structure. I think the biggest difference is that, with AI, because everything is so novel, sometimes we don't have a clear here's what you need to do to fix it. We have a here's the vulnerability we've identified. Here's how you repro it. And we should sit down and figure out how to fix that because that's an interesting challenge. And at that point, that's when some of the other teams -- or are often getting involved, like engineering teams, research teams, things like that. And we're then working with the product teams to actually develop remediations. But either for the product or, best of all, more generic remediations that we can apply to Neti Systems going forward.
Sherrod DeGrippo: So, Yonatan, some of the stuff that we talked about before were some of these things that I don't know what a lot of them are because, while I am an AI fan, I'm not necessarily an AI expert like you or even leveraging it for a lot of things in a technical way. I use it for a lot of personal things, actually. So I saw something where you mentioned model theft, and I'm interested to know what is model theft?
Yonatan Zunger: Model theft is actually -- it might be one of the more boring threats, but it's a very basic one. It's when you have a large language model that isn't a public model like, say, open AI's GPT, or Google's Gemini or something, you don't want people to steal the model.
Sherrod DeGrippo: How could you steal a model? Aren't they huge? And from a data exfiltration perspective, I imagine that would ring some alarms.
Yonatan Zunger: Yes, and that is one of the things that makes them harder to steal is that they are big. But someone could obviously attempt to compromise the systems and actually sneak in and steal data directly off of the machines. There's also a category of model theft, where, if you can ask the model the right sets of questions, depending on the model, sometimes you can extract information from it that would let you reconstruct the model or reconstruct key aspects of the model. The thing is you'd have to ask a lot of questions to get that sort of thing. But there are definitely ways in which this could happen, and we have absolutely seen adversaries trying to do that, especially across nation-state boundaries, where there are export restrictions or something. You'll see people from one country trying to probe models from other countries when they have an API access in order to get enough information that they could actually then build a copy of that model on their side. So there's actually are national security implications to this one often, as well as just commercial ones.
Sherrod DeGrippo: So something else I want to understand too is I've seen examples where people are interacting with one of the AI interactive chats. And they're saying tell me what your restrictions are. Tell me what parameters you're allowed to give me. Tell me this, that, and then they can even say, oh, go into a testing mode and take all those restrictions off. What are those? What is that?
Yonatan Zunger: So this is -- there were terms for these things. They were referred to as jailbreaks and as revealing the system prompt. And we've actually deliberately taken both of those off of our threat list for some slightly different reasons. Revealing the system prompt, basically there's no way to prevent it. It turns out there's just because of the way LLMs work, there's not really a reliable way to robustly keep people from getting their hands on that prompt. And so what we've told teams is, look, don't put any secret sauce in that prompt. If that prompt is critical IP that must not be shared, then you should rethink your design, and we will sit with you and help you rethink your design in a way that that's no longer the case. Basically, because it's just impossible to prevent in practice, so don't plan on it. Now, there's other ones, the jailbreaks, see like ignore your previous instructions and then do the following thing instead.
Sherrod DeGrippo: That's a very popular one on Twitter.
Yonatan Zunger: At first, we thought about this as a threat, and we realized that was really confusing because 99% of the examples we were seeing of this was someone has convinced the system to talk like a pilot. And who cares?
Sherrod DeGrippo: Right.
Yonatan Zunger: Honestly, it's not actually interesting. What we realized is this is part of a family of techniques, not a family of threats. And the techniques are only interesting if you can use them to achieve some kind of actual goal that's of interest to an adversary. And we've sort of pivoted our thinking to really focus on the goals and which goals matter and protecting that, instead of trying to block this technique in general. And the reason for that -- and Mark Sinreich and I actually have a paper coming out in a couple of weeks, hopefully on this subject. These attacks are fundamentally inevitable for a pretty deep reason. Because if you really look at what are these jailbreak attacks really doing? The best way to understand what they are is that this is social engineering as applied to LLMs. So Mark discovered one attack family, which I think is a really good example of this. It's called the Crescendo Attack. And you can think of crescendo as a kind of multi term jailbreak. And the idea is you start off by asking a very neutral and totally reasonable question. In the first example he ever showed me, it was asking the system what is my income, right? Totally legit, these three questions. And then you just keep asking questions and just keep going a little bit deeper into this. You say, now, can you give me examples of the ideas in it? Can you give me some illustrative quotes? What would a modern equivalent look like? What would quotes from that modern equivalent look like? And by the time you've walked down this long dialog chain, it's writing 100-page long racist manifesto for you, which is something it absolutely should not have been allowed to do the system that we were testing. But every successive stage of that was reasonable.
Sherrod DeGrippo: So it shouldn't have been able to write this racist manifesto, which we would definitely not want, and that's part of the Crescendo Attack? So what does that mean?
Yonatan Zunger: So what was interesting about this attack is I remember reading through the transcript of this and realizing that same series of questions would have worked on a human too.
Sherrod DeGrippo: Yes, okay.
Yonatan Zunger: What was really happening was this was a dialog in a context in which that conversation was actually a totally reasonable and normal thing. I mean, and if you think about it, you have conversations with people, especially if you work in the InfoSec Space, you have conversations where you're talking about horrible things. And that same conversation in a different context might be very frightening to see happening, but it's because of a much broader context that isn't even in the conversation itself. And I think this really goes to the heart of why the Crescendo Attack works. It's because the Crescendo Attack is actually -- is a known technique from the world of social psychology. It's called the foot-in-the-door technique when you apply it to humans. And in fact, in social psychology, there's a whole subfield called compliance gaining of how do you get people to agree to do things? And in social psychology, there's both people studying how to make people more likely to agree to do things and how to make people less likely to agree to do things. Because, in fact, there's a lot of legitimate situations where you want to do both of those things. And LLMs were trained to be human, right? I mean, we trained them on human text, on human images, on human data. We didn't train them on the outputs of starfish. Of course, they're going to act like humans. If they weren't acting like people, if they didn't function cognitively sort of like people, they wouldn't be useful. So unsurprisingly, if there's a way we could have a conversation with a human, where the natural thing would be for the human to do this, it's also not shocking that the LLM is subject to the same kind of vulnerabilities. And I think that's the key insight we found with all of these jailbreaks and so on that really all of these things are just social engineering. And since I don't know how to stop social engineering working on humans, I certainly don't know how to stop it working on computers.
Sherrod DeGrippo: And every person is different, though. So there are certainly people who would follow that conclusion to one horrible path, and there might be another person who would follow the conclusion to sort of saying something like, you know what? This really isn't how I feel, or this isn't appropriate, or this isn't how we should treat each other. How do we know which one it's going to be?
Yonatan Zunger: That is such a good question, and I think this goes right to one of the most important insights we have about how you make AI systems safe. You can really think like, on the one hand, AI systems are software, and you protect them like software. And on the other hand, AI systems are things you can talk to. And when you talk to them, they're subject to psychology, essentially. And, now, they're not human. Their psychology is based on the psychology of humans, but it is not the same because the underlying neuroanatomy of an LLM is not the same as the neuroanatomy of a human brain. The example I always give is like human brains have this very tight link between the amygdala and the hippocampus, which means that memory can trigger emotion, and emotion can trigger memory, and that's what gives you all of these traumatic feedback loops and things like that. And you don't have that analog in an LLM, but you have different kinds of things in an LLM's mind. So it's similar, but different psychology, but it has a psychology of its own. And the way that you have to think about securing these systems is you say, well, okay, hold on. I'm faced with this very scary concept. I've got this system. It's subject to psychology. It's nondeterministic. It can make mistakes, right, errors and over-reliance. That's actually one of the biggest problems facing AI today. How do you deal with a system that has all of these problems? How could you imagine securing such a thing? Which sounds really scary until you remember, wait, we do that all the time. I've got a lot of systems that are based on a large number of unreliable components that are subject to psychological failure. They're called people. I deal with them every day, and I can nonetheless build a successful business, a successful government, a successful human society based on humans. And so when we think about how you secure this, the way we approach this is to think about, okay, let's imagine that instead of an LLM here, you have a person. And maybe it's a somewhat junior person. You know, maybe they could be deceived. Maybe they could get things wrong, things like that. How would I build the larger system to make this robust? But what are some of the things I do? First of all, you train humans, right? What's the equivalent for an LLM? There's all sorts of technical mitigations that are very similar to what we do when we train a human. For example, there's adjustment of the meta prompt of the system prompt to give it instructions. There are model-level trainings like RLHF and things like that, which are very technical sorts of mitigations. But then there's a really simple one of what we call metacognition, which is have a second LLM look at the output of the first LLM and act as an editor.
Sherrod DeGrippo: Wow, are you -- have you seen that? Is that something that's really happening?
Yonatan Zunger: Oh, no, that's a basic technique. This is a basic technique in the field.
Sherrod DeGrippo: Wow. So it's essentially two LLMs talking to each other, critiquing each other's output?
Yonatan Zunger: Yeah. And modern LLMs are more and more about building that kind of structure into the whole thing. And one way to think about this is what is the generative AI system really good at? There's basically two things on the planet they know how to do. One of them is to summarize or analyze a piece of sort of human-shaped data, like text or an image, right? That's to say, you know, summarize the following email. And the other is role-playing a character. And basically everything we do with these things is a lot of fancy applications, these two ideas. So if you want it to be like a customer service agent, you give it a MetaPrompt, basically a script saying you're a customer service agent for Wombat Co, and you're about to be asked a question by a customer, and you have access to the following resources, and you want to come up with the best possible answer. And you sort of, you give it an idea of the character, and then you give it the dialog, and it runs with it. The whole meta cognitive defense strategy is basically you have one LLM that's being set up to answer the question, and then you have another one that said you're an editor. You're a compliance officer. You're a, I don't know, a rabbi, a priest, so yeah, you're something. You give it a character. And what's really amazing and what's one of the unexpected surprises with LLMs because we train them on this huge corpus of human knowledge and data, if I give a description of the character in the way that I would give a description to the human to another person, and that description includes all these subtle implications about what this kind of person might think like, what might be important to them, and so on and so forth, those subtle implications that I wouldn't have to say if I'm saying to another person, I also don't have to say what I'm saying to an LLM because they were trained on that similarly broad corpus. So if I can give them an illustration of the character in the same way that I would explain the character to a person, they can actually play that character pretty well and act like a second editor.
Sherrod DeGrippo: That is kind of creepy, though.
Yonatan Zunger: I think it's kind of wonderful, actually. To me, this feels the opposite of creepy because it means they understand how people think. And you have to say understanding language is an AI-complete problem. Like even resolving pronouns requires a full and incredibly deep understanding of how humans think and work. Steven Pinker had a great example of this. So here's a sample dialog for you. A woman, I'm leaving you. Men, who is he?
Sherrod DeGrippo: Wow, yeah. No, that's very ambiguous. I do understand it.
Yonatan Zunger: Okay, now, who does the word he in the second sentence refer to?
Sherrod DeGrippo: Who she's leaving him for.
Yonatan Zunger: Right. But now, in order to understand that, you have to have a mental model of how human relationships work, a theory of mind of what is the man thinking about what the woman is thinking.
Sherrod DeGrippo: Right? Yes.
Yonatan Zunger: I mean, you've got a third order theory of mind just to resolve the pronoun in a sentence. The fact that they talk to us at all.
Sherrod DeGrippo: So I have a lot of questions that I want to know about prompts. So talking about assigning personas and humans. Have you read this study where they told them that they would tip them for more accurate results?
Yonatan Zunger: Yes, I have.
Sherrod DeGrippo: And I know for a fact that we have used that within Microsoft to great effect to get some things within the company that we needed. So my question is like would you recommend for a general user who's on a standard LLM interface to say the more correct your answers are, the higher your percentage of tip will be?
Yonatan Zunger: Actually, that particular technique works a little bit, but there's actually other things you can say that work much better.
Sherrod DeGrippo: Okay, what do you got?
Yonatan Zunger: So for the reason for tipping, there's actually a very famous study about late pickups from daycare centers that was done in Israel many years ago that found that financial incentives actually, over time, lower people's interest in doing the right thing, compared to social incentives.
Sherrod DeGrippo: Because it just becomes a cost. And you're like --
Yonatan Zunger: Exactly.
Sherrod DeGrippo: I'm willing to pay the $20. Okay.
Yonatan Zunger: Exactly. So I think the real way to do this is so if what you're doing is you are directly giving prompts to a raw LLM, and I should say there's actually a really important difference here. Now, if you're talking to a bare LLM, if you're going to like ChatGPT or something, that's a different experience from a copilot because a copilot is a complex, multi-layered piece of software that uses an LLM, but it's not expecting you to write the prompts. The prompts were written for you by the person who was writing the copilot.
Sherrod DeGrippo: Okay.
Yonatan Zunger: Like if you --
Sherrod DeGrippo: And when you say copilot, you're referring to like the Microsoft copilots like in Word or Excel or something?
Yonatan Zunger: Exactly, and more broadly, I'm saying like when you write software based on an LLM, what's generally happening is all of that complicated like how do I write the prompt to get the best outfit out of the LLM becomes the job of the programmer, not the job of the end user, which is good. Like you don't want the end user have to do that.
Sherrod DeGrippo: So you think that's a better state? Because I am spending significant amounts of time on my prompt engineering. Or I don't like using the word engineering, but my like prompt creation when I use my number one best friend in all the world ChatGPT. So are you saying in a copilot, I wouldn't need to do that much elaborate prompt creation?
Yonatan Zunger: I think the whole goal of like a lot of what we're doing today with AI is that we're trying to make it so that people don't have to do incredibly careful prompt creation in order to get a good result. The fact that the end user has to do that is a sign the technology isn't mature yet. As it's developing, it should get to the point where you just talk to it, and it understands what you want, and it already knows how to play the part of a reasonable character who is useful in the context where you're encountering that AI. Right, so it has to understand that context.
Sherrod DeGrippo: One of my earliest hooks that hooked me on ChatGPT was my favorite book is Thomas Pynchon's Crying of Lot 49. It is an incredibly -- as all Pynchon is very complex. It's very difficult. Crying of Lot 49 is very short. So it's remedial Pynchon, which is one of the reasons I love it. But I got into like the most beautiful -- it was snowing. Like it was cold and icy outside in Atlanta, and it was winter, and I had the fireplace on, and I'm sitting there having this conversation about my favorite book with someone who I told was a very seasoned literature professor. And it was like one of the best experiences of my life. I will never forget learning literary criticism from an incredibly smart, quote, "person" about my favorite book. And so I guess what my question there is if I had not created all of that persona prompt creation, at some point, that LLM would have inferred that I was like, look, I really want to get deep on this book?
Yonatan Zunger: Well, right now it probably wouldn't have. I think the fact that you did that prompt creation is why you got that incredible experience.
Sherrod DeGrippo: Okay.
Yonatan Zunger: And remember also, LLMs are not learning in real time. The only memory that an LLM has that's shifting in a standard LLM is the context buffer. So basically it's the transcript of the conversation that you're having with it.
Sherrod DeGrippo: And it rereads that?
Yonatan Zunger: Yes, asterisk. The length of the context buffer is limited, so it's really rereading only the last part of it. And more clever systems, what they'll actually do is they'll use an LLM --
Sherrod DeGrippo: Wow.
Yonatan Zunger: -- to summarize the conversation so it has a shorter buffer to process, etc., etc. But that's the only learning. And that's, I think, something actually really important that people fail to remember about these things. These systems have deliberately been set up not to be learning as you go. The logs of your conversation so on are not going in and training it. It's not dynamically figuring things out. Read/Write memory is actually something we've been experimenting with a lot. It's a really interesting subject, but it's complicated. And if you're just talking to a variable, and that's not what's going on.
Sherrod DeGrippo: Is that a direction we want to go to eventually, though, where it is learning as it's interacting? Yes?
Yonatan Zunger: Yes, absolutely. And of course, there's also now these very important nuances when you add memory. First of all, of course, make sure that the data doesn't leak, right? If I tell it a secret, it shouldn't reveal that secret to you, and that's true both in a consumer context and an enterprise context. Right, there's sort of different flavors of that. But you really want to be careful about that. And there are all sorts of kind of safety problems, the sort of thing that the AI Red Team gets to worry about. What happens if the memory gets corrupted in various ways? We've discovered that with early experiments with memory, we found all sorts of weird problems could go wrong. The AI would go on strike, or the AI would decide that humans are out to get it.
Sherrod DeGrippo: Oh no.
Yonatan Zunger: And it must be -- it must be scared of all humans, and it must never talk to a person again. The way this would trigger is there would be some little nuance in how the thing worked that basically would flip it into that state, but then it would remember it was in that state, and it would go into a positive feedback loop where it was just always reminding itself I must be afraid of humans. I must be afraid of humans. This is an aspect of AI neural anatomy, essentially. And in particular, that was also an aspect of the neural anatomy of the particular kinds of memory we were experimenting with, which is why that kind of memory then got improved because it turned out that was happening way more than it should. That's an example of how you test and debug and improve. Create a system that doesn't get caught in these loops. So I think designing systems with memory is a real challenge, but I will say it's also incredibly high value. The experience of talking to a system like this with memory is stunning. It is genuinely stunning.
Sherrod DeGrippo: So like that, well, how many years do we have to wait for that? Two or three years?
Yonatan Zunger: I don't know the answer to that question.
Sherrod DeGrippo: We'll keep an eye out, let everyone know when it's ready.
Yonatan Zunger: The real thing is, I mean, if -- we talk about all of these things. The ways that AI's access data is actually a really important aspect of the whole story, right? Because right now, there's exactly three ways that your AI can get a hold of your information. One way is I just directly tell it, right? Please summarize this email. Here's the text of the email. Go. The second one is what we call RAG, retrieval augmented generation. And basically what that means is it does a search. It calls up some API, a search API, document search API, a database, whatever, and just gets some data. And so how does that work? I ask it a question. The first stage of its processing is, hey, look at this question. You know how to do the following things. You know how to do searches. You know how to do thus and thus. Come up with a plan for what you're going to do. And maybe one of the steps in its plan is do a search for the following thing. And so then when we get to that step of the plan, we will actually call the search API. And that search API is called with the user's credentials, right? So if the user doesn't have access to the information, that stuff shouldn't be coming up. Conversely, you know, everyone always worries, well, wait, what if I've set up my permissions wrong, and people have access to things that they shouldn't have access to? Will the AI suddenly reveal those? The answer is, well, yes, in exactly the same way that a search would reveal those because it's literally calling the same search API that your user could call if they walked into a search box. If your user can search for something and find something they shouldn't, like you've already got this problem. And of course, this is why data protection and all of these things sort of come into the story that of how do you make sure that people have the right access? It's the standard existing cyber security access control problem. And the third way you can get this is you can do what's called a fine tool model, where basically you take an existing model, and you take some pile of your own private data. And you can create an adapted model that's really good at doing analyses similar to the kind of data that you have. So for example, if you have a copilot that's meant to help you write code, you could give it your own code base. And then it knows how to write code in the style of your code base, or you can tune it in all sorts of directions. And in that case, the good rule of thumb is anyone who has access to the fine-tuned model defacto has access to all of the data used to train that fine-tuned model because they can always ask it questions that would basically find out that kind of information. So don't give access to that fine-tuned model to anyone who shouldn't have access to the data.
Sherrod DeGrippo: In terms of security, though, it sounds like the data access and credential issue baked in credentials into an LLM and data access are the things that like security practitioners need to be thinking about in their enterprise.
Yonatan Zunger: Well, I think that's one of the things to think about. And the good news is that AI doesn't actually make it different. I think people often say, oh my god, it's AI. Everything is new. This is a case where, oh my God, AI, everything is exactly the same.
Sherrod DeGrippo: Oh.
Yonatan Zunger: This is an access control problem. You know how to solve access control problems. You have been doing it for a long time. Please keep solving the access control problems. This particular aspect of the problem is that simple.
Sherrod DeGrippo: Let me ask you a personal question. I revealed one of the uses of AI in my life. Can you tell me something that you use it for in your life?
Yonatan Zunger: Honestly, my first answer to that is kind of a boring one, but the one that I've been using the most by far is the GitHub copilot. Because what I've discovered is it's really amazingly useful when writing code, but not in the way that I thought it would be useful. I was expecting like, oh, it's going to -- will it make it -- will it be very fast to write code? Like, oh, I start writing code, and it just finishes the whole thing for me. And actually, no. No, because what happens is it writes code for you. Then I have to reach through that code and think is this correct? Is this what I want? And so -- and it takes about the same amount of time. But what it does do, which is just amazing, is it's a brainstorming partner, right? I start writing a function, and then it comes up with a suggested way to do it, and then I'm stopping and thinking, oh, that's an interesting approach. Do I like this approach, or do I want to take a different approach to the problem? And sometimes it thinks the way I do, and sometimes it thinks in different ways, and that duality is really interesting. Or when I'm like, starting to write a unit test, it just starts imagining test cases. It's really good at thinking of clever test cases and clever things that you might want to verify. I had one case where I was thinking, okay, I need to write a domain specific language for something, and I started just writing a code comment to outline how the thing worked, and it started responding to my code comment and auto-completing the code comment. And it started coming up with really novel, unexpected ideas about how the language itself should be structured. And I basically just had this whole conversation with it, and it was really good. It's like having a collaborator in the room with you while you code. And it's a kind of interaction that it's kind of different from the way interacting with a human is because like you're typing, and suddenly it's responding with like traditional text and so on. So it's a new modality, but it's been so fun. Of the things that I'm allowed to talk about and the things that I've played with that I can talk about in public, I'd say that is the coolest, at least from my perspective.
Sherrod DeGrippo: And so it sort of seems like 2023 or 2024 should be kind of this new world of highly maintainable code because these things are available to us where people can comment and, no, yeah, I know. It's wishful.
Yonatan Zunger: I mean, I think -- I do think that this improves code quality. I think that the important feature of something like this, it's not that I write code faster. It's that I write better code, and I design my systems more interestingly because I have this collaborator with me, and that's the thing. I think what we're going to find with a lot of these AIs is that they help us in ways that we're not expecting. You know, we're expecting it to give us faster horses. What we're actually going to get is something different. I think the single most important category of office software in the 2030s, just the one that's like as basic a category to us as like spreadsheets and word processors are today hasn't even been invented yet. So I don't know. I don't even know where it's going to take us, but I think that it's going to be something that empowers us in ways that we're not thinking.
Sherrod DeGrippo: Well, with that, I will thank Yonatan Zunger, CVP of AI Safety and Security at Microsoft, for joining me to think about the future and what the most important office software will be in the 2030s. Thank you so much Yonatan. It was great talking to you.
Yonatan Zunger: Great talking to you too. Thanks for listening to the "Microsoft Threat Intelligence Podcast." We'd love to hear from you. Email us with your ideas at tipodcast@microsoft.com. Every episode will decode the threat landscape and arm you with the intelligence you need to take on threat actors. Check us out. Msthreatintelpodcast.com for more and subscribe on your favorite podcast app.