
Security and Agentic AI
Ann Johnson: [ Music ] Welcome to "Afternoon Cyber Tea." I am your host, Ann Johnson. On "Afternoon Cyber Tea" we focus on where innovation and security intersect, from the frontlines of digital defense to the groundbreaking advancements shaping our digital future. We bring the latest insights, expert interviews, and captivating stories to help cyber leaders and defenders stay one step ahead. [ Music ] Today, I am excited to welcome my colleague, Yonatan Zunger. Yonatan is a Corporate Vice President at Microsoft and our Deputy Chief Information Security Officer for Artificial Intelligence. Yonatan and his team are responsible for AI research, infrastructure, empowerment, evaluation, and so much more across the entire company. They think about all the ways AI can go right and all the ways it can go wrong. And they make sure that Microsoft has the right plans and tools in place to design and to run AI safely. Welcome to "Afternoon Cyber Tea," Yonatan.
Yonatan Zunger: Hello, Ann. Thank you so much for having me.
Ann Johnson: So, this is going to be a great show. AI obviously is not new; we're all hearing about it in the technology ecosystem, but over the last 18 to 24 months, we have seen AI rapidly progress; it's become more familiar to everyday users. So, from your vantage point, what AI developments have most reshaped the cybersecurity and the risk landscape in recent years?
Yonatan Zunger: Well, most obviously, Generative AI itself. It's since then had a few major impacts on the risk landscape. So, first of all, we have attacks using AI. So, not just to steer traditional cyberattacks, but also to do things like scale spear phishing or use deepfakes to boost social engineering attacks. There are also attacks against AI, such as indirect prompt injection, which are new surfaces for adversaries to go after, but I think actually the biggest risk is the non-adversarial one; it's people misunderstanding AI. It's when they think of it as, you know, "the computer," which gives perfect answers, when it's really a lot more like the new junior employee who's prone to making mistakes. People really need to approach AI with a mindset of applying their human security expertise to it, not just their cybersecurity expertise.
Ann Johnson: I think that makes a lot of sense, and I think that people do have this expectation today that if they ask AI something and it gives an answer, that that's the answer. I'll give you a really quick internal example before we move to the next question. I was working with a colleague to solve a really hard problem, and we brought in this expert from Microsoft who really is the subject matter expert you want on this hard problem, and the expert wrote a statement laying out a couple of things. He said, "Here is what the problem is. Here's how we're going to resolve it." The colleague literally came back and said, "Well, I talked to Copilot and Copilot said this."
Yonatan Zunger: Oh?!
Ann Johnson: And the expert was like, very politely, "Well, thank you. I appreciate that feedback, but it's not entirely right." He was trying to find the polite nuance of, like, "Dude," you know?
Yonatan Zunger: [ Laughter ] So... if you want, I can give, like, that central, like, "the thing you need to know," to start [multiple speakers].
Ann Johnson: Yes!
Yonatan Zunger: people off. So, the thing that you need to know, and understand, and teach everyone in the organization who's going to be using AI, is that AI can make mistakes. It can be deceived. It's not that bright. And this is fundamental to the way AI works, but it's also not a showstopper, because you already know how to build reliable systems out of unreliable components; they're called "people." And so, when you're using AI, you want to think of it like you just asked the clever new hire fresh out of school; you know, they might be very well-spoken, but they also might be spectacularly wrong about things. And if you're building something that's meant to be reused, not just asking AI something one-off, you need to recognize that you're building a business process, and ask yourself, how do you make that process work, imagining that you replace the AI with this very junior person? And this is a lesson that CISOs can keep in their mind; this is a lesson that you can give to everybody who's working with AI in the company. There is also an important lesson for the professional engineers in the room; they need to remember that they're not just building software; they are actually having to design an entire business process that includes the human users of the system, and that's what has to be robust against component failure, which is good advice for engineering in general. It's not AI-specific, but you should just always think of the AI as that clever, well-spoken, eager-to-help, but occasionally very stupid person, and you will be on the right track.
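To make that "junior employee" framing concrete, here is a minimal Python sketch of a business process wrapped around a fallible AI component. Every function name here (`call_model`, `validate_draft`, `escalate_to_human`) is a hypothetical stand-in, not any specific product API.

```python
# A hedged sketch: treat the AI like a junior employee whose work is
# always checked before it affects anything downstream.

def call_model(prompt: str) -> str:
    """Stand-in for an LLM call; in practice this would hit a model."""
    return f"draft answer to: {prompt}"

def validate_draft(draft: str) -> bool:
    """Cheap, deterministic checks a reviewer would apply first."""
    return bool(draft.strip()) and len(draft) < 10_000

def escalate_to_human(draft: str) -> str:
    """Queue the draft for human sign-off; stubbed to pass it through."""
    return draft

def run_with_review(prompt: str, max_retries: int = 2) -> str:
    """The business process: draft, validate, and only then hand to a human."""
    for _ in range(max_retries + 1):
        draft = call_model(prompt)
        if validate_draft(draft):
            return escalate_to_human(draft)
        # Like sending work back to a junior hire with feedback: try again.
    raise RuntimeError("output failed validation; treat this as a normal failure mode")

print(run_with_review("summarize the incident report"))
```

The point of the design is that a model error is a normal, expected event in the process, handled the same way a manager handles a junior employee's mistake, not a rare exception.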
Ann Johnson: Yeah. I like that. I like that a lot. As we think about AI, we keep hearing also about agentic AI; it's getting a lot of attention lately. Most people truly don't know what it is, because it's so new. Can you explain the concept first to a layperson, right? A business leader, someone who is not even a technology professional. And then can you talk about it from a security standpoint, and why does agentic AI matter now?
Yonatan Zunger: The word "agentic" is a bit of a buzzword, and I think the reason people are having trouble understanding what it means is that no two people mean the same thing by it. And as far as I can tell, what it really means at this point is AI that has the ability to do things. And we're increasingly using AI to control a wider range of things. And the security issue is very simply that the more things your deputy is capable of doing, the more things an adversary can accomplish if they confuse them.
Ann Johnson: That's really interesting. And as you think about the future, right, what new opportunities and what threats does agentic AI introduce for organizations when they're navigating and modernizing securely, whether that's their security platforms or just their digital transformation more broadly?
Yonatan Zunger: Well, the obvious opportunity is tremendous automation and the democratization of development. Alright, if anyone who has a routine task can just ask the computer to do it, or if any team that has a process can just build a tool to make it transparent, or anyone that has a one-off ask to do some research or understand an idea can just ask the machine and basically task a virtual analyst on the fly, it's a tremendous boost. And the obvious threat is tremendous automation and the democratization of development. Insecure agentic systems are ripe targets for adversaries, or they can mess up on their own, which can wreak just as much havoc. The average person, even the average software engineer, doesn't have the instincts for how to harden a system, and if they're doing things like writing one-off agents that use their own credentials to access the user interfaces, you can see how that can go wrong pretty quickly. So, the challenge before us is to develop agent development frameworks that do the work of security for the user. And between now and when frameworks manage to fully solve this problem, which is something that everybody in the industry is working very hard on, the role of CISOs is going to be to provide that oversight, training, and management so that these capabilities can roll out in a measured and useful fashion.
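As an illustration of what "frameworks that do the work of security for the user" could look like, here is a hedged sketch of per-task scoped credentials, so an agent never runs with the user's full permissions. All names here (`ScopedToken`, `issue_token`, the scope strings) are invented for the example, not a real framework.

```python
# Hypothetical sketch: the framework issues the agent a narrowly scoped
# token per tool, instead of letting it reuse the user's credentials.

from dataclasses import dataclass

@dataclass(frozen=True)
class ScopedToken:
    scopes: frozenset[str]  # e.g. {"tickets:read"}; nothing more

def issue_token(requested: set[str], allowed: set[str]) -> ScopedToken:
    """Grant only scopes policy allows; refuse loudly otherwise."""
    if not requested <= allowed:
        raise PermissionError(f"denied scopes: {requested - allowed}")
    return ScopedToken(frozenset(requested))

# The agent asks only for what its one task needs:
token = issue_token({"tickets:read"}, allowed={"tickets:read", "tickets:comment"})
print(token.scopes)
```

The design choice is that the citizen developer never has to think about least privilege; the framework makes over-broad access an error rather than a default.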
Ann Johnson: Yeah, and you sort of answered this, but I just kind of want to pull the thread a little bit: how should CISOs think about that balance of innovation with oversight, right? AI is going to become more autonomous. We don't want to stifle innovation. So, how do CISOs work with their business leaders and their peers to strike that balance?
Yonatan Zunger: You know, I get asked this one question a lot, and the important thing when you're evaluating any novel technology as a security person is that even if the technology is completely new, if it's something you've never thought about before, you can always go back to the very, very basics of security; the stuff we learned, sort of, on our very first days: understand what can go wrong, and for each thing have a plan. Design the systems-design the business processes, not just the software systems. This kind of reasoning applies very well to AI, especially when you simply remember, okay, the AI is just software, so normal software rules apply, but also the AI can make mistakes, so human rules apply. So, when we talk about making AI systems more autonomous, right, any time you're talking about "autonomous" systems, you should be thinking about a hierarchy of automation, right? You start with a completely manual system, then you automate the more mechanical tasks, then it suggests actions, then it starts performing actions wholly on its own, and of course this is just the logic of building trust in the system before you hand over the reins. So, if you're thinking about developing an AI system, well, first of all, apply all of your normal software development thinking, but also apply all of your normal security thinking. So, limit its capabilities to what it needs. Think about where humans belong in the loop, right? Think of the AI like a very fallible person, so you would ask, "Where might you have another human checking?" You test your systems; test, test, test. Actually, one of the most important things to know with AI is that the development time is much shorter than with traditional software, but the testing time is much longer. And that's because the dimensionality of possible inputs in natural language is so much higher than it is with traditional controls. So, there are so many more possible corner cases of input. Don't expect a develop-test-ship cycle. Get yourself into that mindset of prototype, break, fix; test, test, test, test a lot. Steadily build that confidence in your system; make sure that you understand how it behaves. And then citizen development is another place where you have to be very cautious, because people tend to not understand how much thinking and care goes into writing software. It's one of those jobs, like writing text or people management, where you get people who say, "Oh, that should be easy. Anyone can do that!" and then get really shocked when it turns out that it's actually hard and these are actually professions. You know, the kind of thoughtful reasoning that we're talking about, this ability to think about, okay, how will this go wrong, isn't something that you just know. This is something I think security people tend to be good at because they spend their careers at it, but the systematic problem with computer science, I think, is that we don't teach our young people how to do that very well. So, I think a very, very important thing that CISOs can be doing is training the people across their organizations to think like security people; to think about how things can fail, as well as how things can succeed and, of course, to adopt things cautiously and thoughtfully.
So, you start out with strong human oversight and, you know, organizationally as well, CISOs should be starting with strong oversight of development and limitation of capabilities, and then gradually expanding those capabilities as you discover what's safe and what works in your organization.
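A minimal sketch of that hierarchy of automation, assuming a simple four-level ladder from manual to autonomous; the level names and the `dispatch` helper are illustrative, not a standard taxonomy.

```python
# Hypothetical autonomy ladder: an action is suggested, approved, or
# executed automatically depending on how much trust has been earned.

from enum import IntEnum
from typing import Callable

class AutonomyLevel(IntEnum):
    MANUAL = 0      # human does the task; the system only observes
    SUGGEST = 1     # system proposes actions; a human executes them
    APPROVE = 2     # system executes, but only after human sign-off
    AUTONOMOUS = 3  # system executes and logs; humans audit afterward

def dispatch(action: Callable[[], None], description: str,
             level: AutonomyLevel,
             human_approves: Callable[[str], bool]) -> bool:
    """Run `action` only if the current autonomy level permits it."""
    if level == AutonomyLevel.MANUAL:
        return False
    if level == AutonomyLevel.SUGGEST:
        print(f"suggestion for the human operator: {description}")
        return False
    if level == AutonomyLevel.APPROVE and not human_approves(description):
        return False
    action()  # promoted to this level only after testing built trust
    return True

# e.g. start every new capability at SUGGEST and promote it over time:
dispatch(lambda: None, "rotate a stale credential",
         AutonomyLevel.SUGGEST, human_approves=lambda d: True)
```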
Ann Johnson: Okay. I like that, and I like the pragmatic advice, right? And I think what you said that's really important is that people don't understand the complexity that goes into developing software, and that means you shouldn't be underestimating the complexity that goes into developing even with AI. It's not an easy button for people to push.
Yonatan Zunger: Yes. And, you know, we had this problem quite a few decades ago with the invention of COBOL, right? So, Grace Hopper, when she invented COBOL, had a very interesting hypothesis, which was that the big challenge people faced with programming was how unfamiliar programming languages looked, right? Because at the time, we were really dealing with assembly languages primarily, and so the very reasonable hypothesis was, maybe if programming looked more like English it would be easier and you could democratize development. And what we discovered over the years that followed was that it turned out the hard part wasn't the language; the hard part was expressing what you wanted clearly enough that the computer could understand it, and understanding, like, how does the computer think, and so on. And I think we're going to be experiencing the exact same story here again, where now you can speak in natural language; it's so fluent, there's so many ways to say it. Well, it turns out that people are often very bad at expressing what they actually want, and a lot of the work of a good engineer, or a good product manager, or a good UX designer is asking people exactly what are you trying to do, and asking them enough times that you actually get clarity about that. And I think people are going to have to learn that, but on the good side, AI might be able to help ask you those questions and force you to clarify yourself until you get it right.
Ann Johnson: I like that. I really do like that approach. Let's move back a little to core security. From a security and a compliance perspective, what are the most pressing risks that you see when you give these AI systems more independence, and even from agentic AI?
Yonatan Zunger: So, I think it's really-it's just the stuff we've been talking about. These systems can make mistakes, or they can be fooled, and making them robust against that is just going to take hard work, testing, and iteration. Just design your whole system out; lay it out. Understand that the AI component is a component that has failures in it, that has errors as just a normal mode, not as a rare corner case that can be eliminated. And you have to get into the mindset of just thinking about it and building that way. And as long as you're stepping back and thinking about things in terms of a business process, then you're actually looking at a system where you have components that are humans, and you're used to thinking that way about them, and so it really turns into a matter of just applying your usual thinking to this novel kind of situation.
Ann Johnson: Okay. How does agentic AI itself change the model of trust, identity, and control within an enterprise?
Yonatan Zunger: Well, to first order, it doesn't. And I mean, agentic AI is just software, and we have existing mental models for how we should handle trust, identity, and control for software, and those still apply. What we have here is a proliferation of software, that's the citizen development part, and software that's potentially unreliable, which is the AI part. And so, what we need to do is combine our two existing security skills. Traditional cybersecurity applies, which means giving the agents the right identities, managing access, least privilege, monitoring, and so on; all the things we are used to. And human security and business process design applies, which means, like, having multiple sets of human and AI eyes on tasks, cross-checking decisions, finding reasonable points at which to stop and have someone else check what you're doing before committing to a decision.
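One way to picture "multiple sets of human and AI eyes" is a quorum of independent checks before a decision commits. This is a hypothetical sketch under that assumption; the quorum approach and all names here are one possible design, not a prescribed one.

```python
# Hedged sketch: a decision commits only when enough independent
# checkers (human reviewers, AI reviewers, policy rules) agree.

from typing import Callable, Sequence

Check = Callable[[str], bool]

def cross_checked(decision: str, checks: Sequence[Check], quorum: int) -> bool:
    """Commit a decision only if at least `quorum` checks pass."""
    approvals = sum(1 for check in checks if check(decision))
    if approvals >= quorum:
        return True
    # A reasonable stopping point: park the decision for another reviewer.
    print(f"held for review: {decision!r} ({approvals}/{quorum} approvals)")
    return False

# Demo with trivially permissive checkers standing in for a human and an AI:
human_review: Check = lambda decision: True
ai_review: Check = lambda decision: True
print(cross_checked("grant access to repo X", [human_review, ai_review], quorum=2))
```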
Ann Johnson: Yeah, okay. That makes sense. What do you think people don't see right now? You know, given current AI governance frameworks and autonomous systems, what are people missing when it comes to AI governance; what are the biggest mistakes people are making?
Yonatan Zunger: I think the biggest challenge for governance frameworks is that they tend to be based on very large-project, waterfall modes of development that involve lots of compliance and checking, and we really need to be getting into a mindset where we're expecting a lot more situations where the average person is asking the computer to do something, and you need a framework that therefore doesn't rely on having these very large review steps in it.
Ann Johnson: Okay. So, let's talk for a minute about Microsoft. We're investing heavily in secure AI development. What principles, what guardrails that we're deploying are most important when we build these agentic systems?
Yonatan Zunger: Well, the way I've been explaining this to people lately is I have a set of 9 principles that I've found very useful, and this is especially true for sort of big systems where you do have that review, that compliance process, the analysis process, although the thinking is something you can apply to every situation. So, 3 of these principles are just generic; they're basic principles of safety engineering: first, know what can go wrong, and for each of those things have a plan; second, design business processes, not software systems; and third, think about how your systems might fail continuously through the life of a system, not as an occasional failure-planning exercise, right? Understanding the ways your system can fail should happen as frequently as understanding the ways you might want your system to work. And a 4th principle that goes with them is how you execute on it, which is that you need to create a written safety plan that captures what you're doing, why you're doing it, what can go wrong, and what your plan is. And so, then, when you actually have a governance structure in place, that written safety plan is the thing that you're looking at, analyzing, and evaluating. And on top of those 4 more general principles, I've got 5 more AI-specific ones. Two that are very AI-specific: one is think of the AI like a human, a potentially very dumb human, and the other one is to expect more testing time relative to coding time. So, think prototype, break, fix, not develop, ship. And three rules that apply really to any kind of summarization, recommendation, analysis, or decision system, whether it's human or AI. The first one is to look out for what I call the 5 errors: garbage in/garbage out; misinterpretation of data; hallucination; omission; and unexpected preferences. Second, to always accompany decision-making with a suite of test cases, so that everyone agrees on what the criteria actually mean. This goes back to that idea earlier where people are very bad at expressing what they actually want, and I've seen some spectacular mistakes in expression. And finally, cross-check and monitor any decision-making so that you can see revealed preferences and validate reliability. And we've actually got a blog post about to come out about all of these things in much more depth, really trying to provide a lot of resources to help people reason through these kinds of things the way that we do at Microsoft.
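Here is a small, hypothetical illustration of the "suite of test cases" principle: a decision criterion only means something once everyone agrees on labeled examples that pin it down. The urgent-ticket rule and all names are invented for the example; in practice the decision function might be an AI call.

```python
# Hedged sketch: agreed-upon test cases define what the criterion
# "urgent" actually means, whether a human or an AI applies it.

def is_urgent_ticket(text: str) -> bool:
    """Stand-in decision function; could be a model call in practice."""
    return "outage" in text.lower()

# The test cases ARE the shared definition of the criterion:
TEST_CASES = [
    ("Customer-facing outage in EU region", True),
    ("Typo on the internal wiki", False),
    ("Password reset request", False),
]

def run_suite() -> None:
    for text, expected in TEST_CASES:
        got = is_urgent_ticket(text)
        assert got == expected, f"disagreement on {text!r}: got {got}"
    print("decision criteria and test cases agree")

run_suite()
```

When the decision-maker and the test cases disagree, that disagreement is exactly the "mistake in expression" to surface and resolve before the system ships.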
Ann Johnson: So, I love that. I love the 9 principles. I love that we're blogging about it. I love that we're getting out there. If you were advising a board today or a CEO today, not a technical audience, not the cyber folks, but if you were actually having a senior-level conversation, what are the one or two questions they should be asking their teams about AI that they're not thinking about, that they're not asking? And I don't mean, you know, I don't mean from a productivity standpoint; I mean from a security or compliance standpoint.
Yonatan Zunger: From a security or a compliance standpoint, I think that the question the board should be asking is actually the same as the question the CISOs should be asking, because that question is so basic that it applies not just to cybersecurity, but to every aspect of how you think about a business: what can go wrong, and for each of those things, do we have a plan? Yeah, and-
Ann Johnson: And I don't think that just applies to AI to your point.
Yonatan Zunger: Yeah, it-this is.
Ann Johnson: That applies to everything.
Yonatan Zunger: This is a rule for building businesses. This is a rule for building systems. This is a rule for planning a lunch with your friends. This is a basic way of thinking about systems: not assuming that everything is always going to go right in your life. And I think with AI, the obvious business question that the boards always need to be asking is, are we using AI to solve a real problem that our customers have; are we not just sticking in AI for AI's sake? And the obvious safety question is, what could go wrong, and what's our plan if that happens, and is that a plan that I'm comfortable with as a leader? Because ultimately it's going to be on the business leaders to deal with the fallout if something goes sufficiently wrong.
Ann Johnson: Yeah, exactly. As you're talking to those business leaders, what do you think are the use cases? You know, all of this is fairly nascent, right? Generative AI, agentic AI. So, which use cases do you think those business leaders should be focusing on solving, that you think are mature enough today to actually have an AI framework around them?
Yonatan Zunger: Oh, my goodness. So, I think that we're really discovering a lot of things right now, and I think the most interesting and important business AI software, in fact, I will say the most important categories of business software of the 2030s, haven't even been invented yet. We're in really early days here. I think with AI, some of the obvious things we have discovered: AI is a really good brainstorming partner. It's really good in places where you can just, sort of, task an analyst to do something, if you understand that the analyst might make mistakes, right? You don't want to say, "Well, I tasked it to analyze this, and oh well, this is better than the professional subject matter expert I hired." It's much better as an assistant to that subject matter expert, where someone who really knows what they're doing can ask it questions, interrogate it, go back and forth, have it do a lot of the legwork while they go out and do things. Another thing I have noticed is that AI is very good at helping you chop up a problem into smaller parts, and help with the work of structuring a problem and analyzing it and, like, dealing with all of those components, which in turn is something we then use in AI systems elsewhere, as it takes a request that you ask of it, breaks it up into smaller parts, and actually tries to execute on those parts. And then there are sort of more practical, forward-facing deployments. I'm always a little cautious about any sort of deployment of AI in a place where you would have a person in a more unsupervised role, and I think that's simply my natural caution about technologies; I think that we should really be paying close attention to what skills people bring to the problems that they're working on, and not just the short list of on-paper skills that we write up because we need to have a formal job description to make labor lawyers happy. I mean the actual set of skills, like, what makes a good person in the role different from a bad person in the role? And are you building an AI that would do the job of a good person or the job of a bad person? And if it's about to do the job of a bad person, maybe you should find a different way to do this, so that you can actually get high value out of the system rather than low value.
Ann Johnson: Yeah, that makes complete sense to me. You know, this has been a great conversation, and I always close out every "Afternoon Cyber Tea" with optimism, and I think this has been optimistic in general, but with that in mind, considering everything we have talked about, what are you optimistic about and, I'm going to be really pointed here, when it comes to the intersection of AI and cybersecurity?
Yonatan Zunger: So, I'm actually really optimistic about this and, you know, you don't often hear security people being optimistic, but this is a place where I think there's actually something exciting going on. You know, we talk about advanced persistent threats all the time, and when we do that, I think we tend to focus on the "advanced" part of the story, but really what's more interesting in practice is often the "persistent" part. A lot of the power of APTs comes from the fact that they're just going to keep trying over and over and over, and one day someone is going to leave the door open, and they're going to go in through that door, and then they're going to move laterally, establish themselves, etcetera, etcetera, etcetera. AI opens up the possibility of persistent defense. Imagine if you had agents that were continuously examining your world, looking for open doors and closing them, looking for anomalies and shutting them down and then asking a human to join in. And, of course, these are AIs that, you know, are looking at your logs and so on, so they know that if they're about to close off a permission or something, they can look at the logs and see, "Oh, is this about to massively affect the business or not?" They can take a pretty good first guess about whether they should call a human first or second. And what's amazing is, this has never been possible before, right? Any scaling of attackers is just a quantitative increase in existing powers, but persistent defense has never existed, because defenders have never been able to afford the giant number of people that it takes to build a persistent defense. So, I think that we're actually about to see defense capabilities that have never existed before, which means in the long run, AI may prove a significant net win for defense, and that's a pretty rare thing in cybersecurity.
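As a purely speculative sketch of that persistent-defense idea, assuming an agent that checks the logs for business impact before closing an open door; every name here is hypothetical.

```python
# Speculative sketch: triage a finding by consulting usage logs first,
# so the agent closes unused doors itself and calls a human otherwise.

from dataclasses import dataclass

@dataclass
class Finding:
    resource: str
    recent_uses: int  # how often the logs show this permission in use

def triage(finding: Finding, usage_threshold: int = 5) -> str:
    """Close quietly if unused; escalate if revoking might break the business."""
    if finding.recent_uses < usage_threshold:
        return f"auto-close: revoke access to {finding.resource}"
    return f"escalate: {finding.resource} is in active use, ask a human first"

print(triage(Finding("legacy-share", recent_uses=0)))
print(triage(Finding("billing-db", recent_uses=120)))
```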
Ann Johnson: So, Yonatan, you always have incredible insights. You always give great advice. You're a wonderful colleague. Thank you again for joining me on "Afternoon Cyber Tea" today.
Yonatan Zunger: It's an absolute pleasure. Thank you for having me, Ann.
Ann Johnson: And many thanks to our audience who are listening. Join us next time on "Afternoon Cyber Tea." [ Music ] I invited Yonatan to join me because there really is no one better to talk about AI and its impact on security. Yonatan has this deep passion for doing the hard things right, and he understands the need to build security into technologies to ensure they have a positive impact on business and beyond. Yonatan's years of experience in security as well as privacy, and his insights on what is coming in the world of AI, are sure to answer some of your most pressing questions. Be sure to listen and to follow us at afternooncybertea.com or wherever you get your favorite podcasts. [ Music ]
