
Beyond AI for Security Hype: What Really Matters in Cyber Defense
Sherrod DeGrippo: Welcome to the "Microsoft Threat Intelligence Podcast." I am Sherrod DeGrippo. Ever wanted to step into the shadowy realm of digital espionage, cybercrime, social engineering, fraud? Well, each week, dive deep with us into the underground. Come hear from Microsoft's elite threat intelligence researchers. Join us as we decode mysteries, expose hidden adversaries and shape the future of cybersecurity. It might get a little weird. But, don't worry, I'm your guide to the back alleys of the threat landscape. [ Music ] Hello and welcome to the "Microsoft Threat Intelligence Podcast." I am Sherrod DeGrippo, director of Threat Intelligence Strategy here at Microsoft. And, today, I am joined by Twitter celebrity Zack Korman, who is the CTO at Pistachio, oh, he's laughing already, good, where he builds AI and cybersecurity tools. But he is also pretty funny online, which is rare. My first impression of Zack on Twitter, which I refuse to call that letter, I saw just like a post about giving away hoodies and I just kind of scrolled by it. I have a lot of hoodies. And then I saw another post talking about the difficulties of giving away a lot of hoodies. So I clicked through and it was actually posting a lot of infosec content. And I was like, "Oh, this hoodie guy is struggling with the hoodies, but he's doing okay with infosec. I would love to hear him say some of these hot takes out loud." And then, Zack, I invited you on the podcast and you said, quote, "But then how do I say bad things about Microsoft?" Which is a great way to get to know someone. So I've got Zack here. Today, we're going to talk about AI, infosec, hype versus reality. And I want to talk about why Zack thinks the future isn't AI replacing jobs, but scaling good ideas in weird ways, useful ways. So, Zack, we have a lot to talk about. Thanks for joining.
Zack Korman: Yeah, great. That was - that introduction was wild. I'm both a Twitter celebrity and then we were like talking about the time I completely made fun of Microsoft by accident [inaudible 00:02:19] -
Sherrod DeGrippo: Oh, by acc - sure, by accident [inaudible 00:02:20] -
Zack Korman: Well, yeah, it was all totally on purpose. But, yeah, yeah.
Sherrod DeGrippo: Just really quickly before we get into the meaty topics, how are you doing in your hoodie situation? You okay?
Zack Korman: Yeah, that was great. Yeah. So I guess maybe I should explain like what happened here.
Sherrod DeGrippo: Yeah, tell everyone what that's about 'cuz even I don't really have a -
Zack Korman: Yeah.
Sherrod DeGrippo: - full track on it.
Zack Korman: Yeah. So this one person had posted in the morning and said something like, "If I get 10,000 likes on this post, can I have this computer from" - ah, I feel really bad, I'm like blanking on which company the computer was from, but, basically, he just posted something like that. And then it got real - it started to spread like crazy. And I think I was actually like the second like on the post of it. And so I was like - I had kind of been following it throughout the day.
Sherrod DeGrippo: Were you trying to help out? Were you like, "I'll contribute to your likes"?
Zack Korman: Yeah. And so I had posted multiple things. Like I had switched to like my - I switched to the Pistachio company account just to like it.
Sherrod DeGrippo: To give another like?
Zack Korman: Yeah. Like, yeah, it was -
Sherrod DeGrippo: Oh, my God. Like farming, like farming.
Zack Korman: I was like, "I've got to make sure this person gets her computer." Right? And then -
Sherrod DeGrippo: You're basically a bot now.
Zack Korman: Exactly. And I even had - I was telling everyone, I was like, "You guys got to go switch to whatever alt accounts you guys have." Right? You know? So I had posted this and then - or had been like kind of following this and then, right before going to bed, I just made a joke where I just said, "Hey, you know, you don't need 10,000 likes to get one of our hoodies." Right? 'Cuz she had said she needed 10,000 likes to get [inaudible 00:03:39]. I'm like, "I would give it to you for way less. Like I'd give you one of these hoodies we have for way less than that." And then I went to bed. And then, when I woke up, I had 250 likes on it and 100-something replies and 20 DMs. And I was like, "Oh, God, what have I done?" And, at first, I just thought, "Okay, people are joking," but then it kept growing, like people kept going. And I'm like, "Okay, like what do I do? Like I don't" - and then I had to come into work and I had to - understand that a lot of people I work with don't really know I have a Twitter account. Okay? And so I had to - I had to lead with that and be like, "So I happen to have this Twitter account where sometimes I make jokes. And, this time, I made a joke about giving away hundreds of hoodies for free." And our team was kind of like, "What? So are we doing this?" And I was like, "Yes, because I've actually kind of already - like people were like" -
Sherrod DeGrippo: You've already promised it.
Zack Korman: Yeah. Exactly. And, also, I was like, "Obviously, like we have - like it's - sure, like we - I got like hundreds of likes for Pistachio. It's worth it. Done. Let's go. Let's get these hoodies out."
Sherrod DeGrippo: What was your marketing team's response to this?
Zack Korman: Yeah, they were like - well, at first, they didn't understand it was - because what happens - so this is the funny part about like the disconnect in the company where our RevOps team in like the commercial side - I mean, I am the CTO of the company, I run tech, product and we are one side of this company. And so the other side thinks that they've just got all these like inbound leads coming in because people were submitting forms on our website like for -
Sherrod DeGrippo: Whht!
Zack Korman: - yeah, for like sales meetings. And then, finally, like I - and I see them and I go - I have to go over to the commercial side and be like, "Yeah, those 10 people, they're from like Twitter. So like if you guys could be nice to them and don't try to sell them too hard. Like just don't do what you normally do. Come on." And then they kind of didn't believe me at first, but then they jumped into one of the meetings and they were like, "Yeah, we saw your CTO promising hoodies on Twitter." And they're like, "Oh, God."
Sherrod DeGrippo: Well, -
Zack Korman: So -
Sherrod DeGrippo: I think one of your big mistakes was you posted a picture of the hoodies and they look really nice. Like they look way nicer than the trash garbage free hoodies you get at a lot of like swag places. So they're like very - 'cuz you're - is it 'cuz you're in Norway you get access to nicer hoodies?
Zack Korman: Oh, no. We had to work super hard, too. So we are like probably, in the cybersecurity space, the company that cares the most about just raw aesthetics. Like, purely, like -
Sherrod DeGrippo: I like that.
Zack Korman: - we just - so like, actually, the thing that is the hardest about these hoodies is that the hoodies that we have had, they discontinued one of the two that I posted in that picture and they - they gave us a second version. They sent it in the other day and it was like garbage. So we were like, "No." So we were currently - like our marketing team is currently like demoing like eight different hoodies. They sent me a message the other day, they were like, "Zack, like this one is okay. It's not quite" - and I was like, "No, you're going to go find the highest-quality version of that hoodie 'cuz that's how we do things. Right?" But it's like we put so much work into this aesthetic value of this like - you know, partly because we very much believe that good - like you can't see always how good a product is, but you can like see if they care a lot about it. And one of the ways you can care about something is by like caring about even small details that don't matter, like your typography and your visual color schemes and stuff and your hoodies. And so we care a lot about these hoodies. And, actually, when we send them out, people will see like we hand wrap these things in a very specific way. I've had to like - we were like watching these YouTube videos on this like origami folding technique for it just 'cuz like it matters. Right? So -
Sherrod DeGrippo: That's what I want to see, actually, more of in tech is people emphasizing the humanity of our feelings, how things impact us. I want humans to do emotions and feeling and let the machines do the labor and the math and the thinking. So I'll just say, personally, if you - we're going to talk about AI a lot. I think that Anthropic has the prettiest aesthetics of any of the AI leaders out there right now. It's very bespoke feeling, it brings back a lot of organic color and typeface and it's really nice. Which I'll quote back to you, I think you said something like, "AI is both amazing and dumb." So you've kind of said it's like both brilliant and stupid. So give me some of your point of view on why you think it's brilliant and why you think it's stupid. Examples are extra credit.
Zack Korman: Yeah, yeah, no, of course, I have tons of these examples. I mean, one of the things that happens - so like most people's experience with AI today is like talking to ChatGPT or Claude or using it maybe in development, but, when you build a product that uses AI at its like core, you're going through - so, at Pistachio, maybe I'm going through like 500 million tokens a day. Okay? And - which is not actually that much in the grand scheme of things, but we're brand new in this new product. And so we're 500 million tokens a day and we already see, okay, about 0.5% of the time, you're going to get an answer back that is horrible. So, if you're using it - so we use it for detection. And so one of the things it loves to do, Gemini, in this case, we use Gemini 2.5 Pro, it sometimes decides it's a spy. So it will start - if you look at its reasoning responses, it will start off by saying, "My mission, shall I choose to accept it, is to" -
Sherrod DeGrippo: Oh, Lord.
Zack Korman: And you're like, "Oh, no." And those answers - and what happens -
Sherrod DeGrippo: Cringe.
Zack Korman: Exactly. And the thing is, because of the way LLMs work, that it spirals. Like it will - you will never get a model to go from being bad to like coming back into sanity land.
Sherrod DeGrippo: Yeah.
Zack Korman: Like that run is over. Like you're - you have lost it. And so you have to accept that some percentage of your runs of these models will just go off the deep end of stupid land. And we also have cases where sometimes it will get stuck in like repeat patterns. So it'll say it happened between these two dates and then it will try to put the seconds onto the date and it'll be like, "null, null, null, null, null" and it will just repeat the null like 60,000 times until it runs out of tokens and it collapses. So you see all the worst in AI when you do this.
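The runaway-"null" failure Zack describes, where a response collapses into one phrase repeated until the token budget runs out, can be caught mechanically. A minimal sketch of such a guard (my own hypothetical helper, not Pistachio's actual pipeline) that flags a response whose tail is stuck in a short repeating cycle:

```python
def is_degenerate(text: str, tail_words: int = 200, max_cycle: int = 5) -> bool:
    """Flag model output whose tail is one short phrase repeated over and
    over, e.g. 'null null null ...' until the run hits its token limit."""
    words = text.split()[-tail_words:]
    if len(words) < 20:  # too short to call degenerate
        return False
    for cycle in range(1, max_cycle + 1):
        # Is the tail the same `cycle`-word phrase repeated end to end?
        if all(words[i] == words[i % cycle] for i in range(len(words))):
            return True
    return False
```

Since, as Zack notes, a run never recovers once it spirals, the sensible response to a flagged output is to discard that run and retry rather than try to steer it back.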
Sherrod DeGrippo: Let's quickly - for a lot of my listeners, they are - they might not know what a token is. So this is - tokens are sort of like units of compute within AI that then hold some kind of monetary value.
Zack Korman: Yeah. They're basically splitting up every single word, letter kind of - it's not directly words, but you could consider them - like how many words went in and how many words went out is like very close to true. So if we're saying, "How many words are we putting into this model every run and how many are we getting back," right, and so, at most, I can get back 65,000 with Gemini and sometimes it uses them on 65,000 repeat words. That can happen.
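Zack's "words in, words out" framing maps to a common rule of thumb: for English text, roughly four characters per token (real counts require the model's own tokenizer, e.g. tiktoken for OpenAI models). A back-of-envelope sketch of what a budget like the 500 million tokens a day mentioned above might cost; the per-million price here is hypothetical:

```python
def estimate_tokens(text: str) -> int:
    """Very rough heuristic: ~4 characters per token for English text.
    A real BPE tokenizer gives exact counts; this is just for ballparking."""
    return max(1, len(text) // 4)

def estimate_daily_cost(tokens_per_day: int, usd_per_million: float) -> float:
    """Back-of-envelope daily spend for a given token volume."""
    return tokens_per_day / 1_000_000 * usd_per_million

# e.g. 500 million tokens/day at a made-up $1.25 per million tokens:
# estimate_daily_cost(500_000_000, 1.25) -> 625.0 USD/day
```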
Sherrod DeGrippo: Is that why Sam Altman said that everyone's saying "please" and "thank you" to ChatGPT is taking up a lot of compute?
Zack Korman: Yeah. They - and, of course, each of those is a couple tokens. So it's adding up and, at enough scale, that becomes a problem. Right?
Sherrod DeGrippo: But is that - okay, first of all, do you say "please" and "thank you" when you're doing natural -
Zack Korman: Yeah, [inaudible 00:10:31] -
Sherrod DeGrippo: - language [inaudible 00:10:32] -
Zack Korman: Yeah, when I'm talking, not in our production systems, but when I am using like, you know, -
Sherrod DeGrippo: Yeah.
Zack Korman: - GPT-5 - yeah.
Sherrod DeGrippo: [inaudible 00:10:38] interface.
Zack Korman: Yes.
Sherrod DeGrippo: Yeah.
Zack Korman: Yeah, okay.
Sherrod DeGrippo: And is that, to you, a hedge against a sort of Skynet situation?
Zack Korman: No, I think it's a hedge against losing humanity, right, -
Sherrod DeGrippo: [inaudible 00:10:48] getting it.
Zack Korman: - because - 'cuz there are these times and like we joke about it on my team where we say, "We really would - like I wish I could just punch the model." Right? And then you go like, "Oh, maybe - maybe - like maybe we should stop like losing" - like I don't normally talk about wanting to punch people. Right? And so now it's like I talk about wanting to - there is this weird thing where it starts to feel human enough to where you want to be like inhuman to it. And so I don't -
Sherrod DeGrippo: Yeah.
Zack Korman: Yeah, use your - use the tokens. That's not - like I said, we were running like 500 million tokens a day. If people want to say "please" and "thank you," that is not - like Sam Altman is like, "That is not going to be a killer number compared to everything else."
Sherrod DeGrippo: I think that humanity aspect is important. So, first of all, I use ChatGPT literally all day long. I use it for - and this is because my boss, when I first started at Microsoft, you know, a couple years ago, John Lambert, he is an incredibly smart, experienced, sophisticated information security philosopher. I mean, he is a - he is really a very unique person in this space. And he said, "Sherrod, you're thinking with the old part of your brain, you're thinking the old way. You need to use AI for everything you can possibly use it for." And this was like 2 or 3 years ago. So it was kind of a different time. But I took that personally, as Michael Jordan might say, and I started really thinking like, "What can I do? What can I do?" Every time - I would listen to my feelings, I would listen to my mental state. And, every time I was getting kind of frustrated, I would say, "Can I make the AI take this frustration from me? This is repetitive, I don't like it. I'm bored. I don't want to do this." Every time I felt like a little twitch of a negative feeling, I was like, "How can I get the AI to offload this for me?" And I started doing that. "Oh, I have to reformat this document. All the text is blech." As soon as I get that like blech feeling, and you all know what that means, that noise that I make, I put it in ChatGPT and I'm like, "Do this for me." But I don't want it to take away like my human positive emotions, like my joy or my creativity or my excitement. And I think that you are starting to talk about some of those ideas because you want to move away from that dystopian framing where you were kind of talking about this million interns idea. Like, instead of replacing humans, it lets us scale, which I am a big believer in that. I see that in threat intelligence. So kind of talk a little bit about the metaphor or the analogy around AI being like a million interns.
Zack Korman: Yeah, no, so, I mean, I guess I would start by just saying I really think that people who talk about like, "Oh, we should use AI to like - it will take your job or something," I'm like, "You're so boring." That is the most boring thought anyone could have ever had about AI. They see this -
Sherrod DeGrippo: No greater sin.
Zack Korman: Right? They see this amazing technology and they think, "How can I take someone's job with it?" It's like, who are you? Because it's so both like kind of weird, like I don't - typically, like I don't really care that much about like cost cutting. Right? Like that's someone else's job. So like why would I go build technology to cost cut? Like I have better things to be doing. But, secondly, I think it can unlock - it's like this technology is so great. Like I say, it's so dumb, but there's also these moments of brilliance that come out of it that we should try to use it for the things that we weren't able to do before. And so you talk about how like, yeah, you hit these moments of "Oh, blah, I don't want to do this." Right? And some of those are like, yeah, you used to have to do them manually and it was really like, you know, - but some of them were like so painful that you were like, "I just won't." Okay? Like, "Yeah, it'd be cool if I could, but I just will not do it because it's not worth my time." And I think if - what AI unlocks in the kind of model I have in my head of how I think about AI and building it in products is how do we use AI to do those things at scale that we would have never done before. So, in the context of cybersecurity, the one that I very strongly believe in is there are tons of cases where you're like, "I mean, the best cybersecurity is just having a person sitting there over your shoulder watching what's going on on every single action happening in your company." But like you can't really do that today because you can't have a million people - cybersecurity people for a million employees. But you can use AI to observe and to think about a lot more going on in the company than you would have been able to use before. 
It's not going to be like the most genius cybersecurity research in the world, but it can find very obviously wrong things or even not so obviously wrong things that are still clear when you look at the data as a whole.
Sherrod DeGrippo: Yeah, I think that that's an important way to look at it. I know that, in security, many of us are - we've been around a long time. I think it's an industry of people that are, honestly, much more experienced. I feel like we've been around, we've seen a lot of things, but that also means that we're not a lot of times on the super cutting edge of what's happening in tech. One of the things that I see talked about quite a lot, speaking of cost cutting and things like that, is people talking about the impossible numbers around revenue and the impossible numbers around, "Well, this amount of VC and it's a bubble" and this and that. And I really don't see it that way. I see this as a space race. I see this as the next step in human evolution. If you haven't watched "2001," it is - Stanley Kubrick's "2001: A Space Odyssey," yes, I know, it has an AI that goes crazy. But that is not the point. The point is the monolith. The point is we are slingshotting human evolution in a way that I think - or, as organic beings, we don't even have the capacity to understand the futureness of it, like five years from now, much less 50 years from now. And that's why I kind of talk about the cost cutting and the job part. And, yes, those are scary things for us in our face today, but if we were given the chance to go to the moon and we didn't, what kind of different species would we be today?
Zack Korman: Right. And I think I posted about this one time on Twitter where I said something like, "Some of y'all are way too young to be this jaded," -
Sherrod DeGrippo: Yeah.
Zack Korman: - because some people, you know, they're like 35 and they're talking like all of - you know, "The companies will never understand." And I'm like, "Guys, you - you're too young. Like you can't be this mad this early" because - and I think there's been this - one thing I see a lot is the people who try to be moderate about AI, they'll say things like, "Oh, it's useful - it's useful in this way, but it's not - it's a bubble or it's hyped - overhyped." And I'm like, "No, no, no, actually, it's like both underhyped and overhyped at the same time." There are so many areas where it's underhyped. But I think this is - actually, I would pin this back on a lot of the cybersecurity companies that we have used for years, like big - I don't know if I'm even allowed to name them on a podcast like this. So the point is these big companies, they will - of course, they're just - they just - literally, you can go back in like a Wayback Machine and go look at the web archives of their websites. All they did is they took their product and they just added the word "AI" everywhere to those pages. And I think they should be punished for that much more. I think that, as a company, if you are using one of those vendors, that is like, okay, they're - they are - they think you're dumb. They think they're going to pull a fast one on you and you shouldn't use it because - and that causes so much conflict, right, where people then go like, "Yeah, but AI is overhyped." I'm like, "You've not even tried using it for these things. All you've been done - all that's happened is you've been missold by one of these legacy vendors." And these legacy vendors [inaudible 00:18:10], they are not using - there is one in particular where they write all about all their amazing AI stuff and then I actually saw a case study they had with Google Cloud and it was like the lamest case study. 
I would - if that's what all - if they - if - like I would never allow that to be published as a company if that was what we had done with AI. But they thought it was cool because it was like such - and it was basically like a chatbot. It was like it will talk to you about your threats. And I was like, "Oh, my God." And that is the point, it's like it was so lame and yet - and that's what's being pitched to most of these companies, most buyers today is like these really terribly lame cybersecurity ideas, whereas there's so much cool stuff you could be doing that's kind of being bypassed.
Sherrod DeGrippo: So like I have really only worked at cybersecurity vendors for like the past 19 years. And it's interesting because a lot of them, before this AI hype cycle, were talking a lot more about machine learning and data science. And you had the ML teams and you had the data science teams and they were doing all of these different techniques to detect CredPhish, for example, or to find email that didn't have the right kind of sentiment using various machine learning and models and all this stuff, which is something that Microsoft has done for a really long time in detection. And then, all of a sudden, it's like, "We don't say those words anymore. We don't say 'machine learning' anymore. We don't say 'data science.'" You know what it is? It's "artificial intelligence" now. And a lot of vendors have been doing that I think. So you have been pretty vocal, like you said, about vendors just kind of AI washing things. What do you think is an indicator that somebody can kind of see if that's going on with one of their vendors, security or not?
Zack Korman: Yeah. And, to clarify, I don't mean to be critical of all these things that came that people are doing. I think there are even use cases for a lot of these machine learning approaches today still. Like it's -
Sherrod DeGrippo: Yeah.
Zack Korman: - not that AI wipes out every use case before - it's just not AI, [inaudible 00:20:10] -
Sherrod DeGrippo: Well, but I think that the marketing terms have -
Zack Korman: Yeah. Oh, absolutely.
Sherrod DeGrippo: - gone through and been like "find/replace" -
Zack Korman: Yeah. A hundred percent.
Sherrod DeGrippo: - find [inaudible 00:20:17] machine learning, replace with AI. Yeah.
Zack Korman: A hundred percent, absolutely. And I think that that's where the biggest problem is because you are getting the same thing you got before, but, suddenly, it's called AI -
Sherrod DeGrippo: Yeah.
Zack Korman: - and so you're not really experiencing what is possible with what we were talking about when we say "AI" which is basically large foundation models that are large language models, transformer, diffusion models, the things that happened in the last couple years. One of the easiest ways to tell it is to, just like I said, go back in like the web archive online and you can see old versions of these websites. So you can just go to the page that you're looking at and go look at the 2022 version. And, if it describes all the same things with a different word, "AI," I mean, you know, they just -
Sherrod DeGrippo: Yeah.
Zack Korman: And because a big part of this, and I've talked about this online, is that the way in which you structure the data to go into a machine learning model and the way that you would structure your data to go into an AI - like to run AI on this thing, they're so different. Like they're - they don't look the same. The amount of data you're talking about is different, the structure of the data is different. So there's no way that they just replaced their machine learning models with AI. Like it doesn't happen. That's not real.
Sherrod DeGrippo: No.
Zack Korman: And - but a lot of people, they want you to believe that to some extent, too, like that they also added AI on top. And it's just not true. One of the ways you can also really tell, though, is like just ask them the hard questions. Because, understand, AI comes with all these cybersecurity implications that maybe are kind of scary. So, as soon as you start questioning them on that, they'll revert back to the idea of like, "Oh, nothing's actually running on a" - you know, they'll kind of - because the sales team, of course, will want to come back and go like, "No, no, no, we don't ever send your data to - we don't train on your data, we don't send it to the - to cloud providers. It's only our" - and they'll be like, "Well, okay, then - no, you're not."
Sherrod DeGrippo: Is it -
Zack Korman: Right?
Sherrod DeGrippo: Right. Well - and that is something I think, too, that maybe is like unique to those of us that have been in vendors for such a long time. Hold your vendor's feet to the fire. I've had new in career or new CISOs or people that are like rising up in kind of executive leadership in infosec roles say like, "Well, what am I supposed to really do at Black Hat? What am I supposed to really do at RSA?" And I always say, "Well, that's where you go to literally hold red-hot pokers at the face of your vendors and say, 'Prove to me you're doing what you say you're doing, prove to me the tech works the way it is. Show me the efficacy numbers. Here's what I need. Where is product management? I have feature requests.'" Vendors need to be held accountable for their products doing what they say the products are going to do. And, if they are not, those customers need to really start raising some concerns and making it known that they want what they're paying for when it comes to AI or really anything else. The bottom line is security products have to have efficacy. You have to be effective at what you do. If you're not, what are you paying for?
Zack Korman: Yeah. And I think you should be able to be transparent. So like one reason I post on Twitter is so that like - I don't think like - we have like three of our customers have like found me there and it's kind of awkward, but it's also still like positive 'cuz it's like, okay, if you think that what I'm doing isn't real, then like you have to answer for why I have all of these kind of artifacts that show how hard this is. Whereas you try to find that for any of these other big vendors and it's not - it's not there. You know, there's no transparency on this front because it's not real. So, you know -
Sherrod DeGrippo: Yeah. Well, so something I found interesting was you wrote a big thread speaking about like all of these things that you've been talking about up to now, like the amount of data, what do you really have to back up what you're talking about with three million phish sims, phishing simulations. People in security have a lot of feelings about phishing simulation, which I know is like a data set that you work in, that you have a lot of, that you look at. What kinds of things are you seeing in that data set that are interesting to call out?
Zack Korman: Yeah. So I was posting some about this as well. It's kind of a separate area of our product. It also uses AI, but more to learn about the user to decide phishing simulations for them. You know, first of all, IT people and cybersecurity people hate phishing sims 'cuz they fall for them just like everyone else. Okay? And then they go like, "Well, this is stupid and this is like" - and they get to kind of play that. No, I mean, our data is very clear on the fact that IT is in the middle of the pack in terms of falling for phishing simulations. And that's like my biggest thing of this is that phishing simulations are not a - people don't fall for phishing simulations or phishing generally because of a lack of knowledge because then, of course, you would expect IT and cybersecurity people to be much, much better than average. They fall for them because you get caught with the right type of thing at the right time at the right moment and you forget to pay attention. And so, really, like I've built this phishing simulation product entirely around the idea of like regular phishing simulations that are well targeted, keep people remembering to pay attention and that is the goal. There was this paper that was like - I think it presented at Black Hat where they tried to say like phishing simulations aren't effective and stuff, but, no, this is - their paper actually kind of confirmed just exactly what I argue, which is that the value in phishing simulations is the doing them and doing them regularly. It is not the training part. It's not like, "Oh, people don't know what was suspicious about that email." It's rather that they need to be constantly reminded to do them.
Sherrod DeGrippo: So I agree with you about that, actually, because I have this sort of triumvirate theory of what makes social engineering work and it's urgency, trying to get you to do something quickly, saying, "Hey, act now, act fast, immediately, oh, this ends tonight at midnight." So urgency. Habit. So links that you always click on, brands that you know, a person's name that you're familiar with, whether it's your boss or a celebrity or a brand, whatever it may be. So habit. And then emotion. And I think, when you can get a person's state to change on all three of those things, you can get them to do things they wouldn't normally do. And that applies to phishing for malware, CredPhish, et cetera. That applies to sales, for legitimate sales of products that you're trying to get people to buy. If you are able to push those levers of urgency, habit and emotion, you can kind of control what someone does. And so getting them in the habit of knowing that this might be a phish sim, even just getting people suspicious about simulated phishing changes their behavior.
Zack Korman: Yeah, absolutely. And I think that's like - one thing is people try to define this criteria spectrum of like, "This was an easy simulation" or "this was a hard one," and while we do have this spectrum at like an objective level across a bunch of categories we use, the real thing is like people fall for easy sims all the time because they're not easy if they catch you at the right moment -
Sherrod DeGrippo: Right.
Zack Korman: - at the right time. Like they're really, really hard. If I got a fake Facebook friend request from this person I had had a meeting with that day, and even if there's all these weird parts of the email, like it - you're like you kind of recognize it so much and then you don't want to be awkward by not accepting it so you kind of like go, "Okay, maybe I'll click that." And, even if it was really obviously fake in hindsight, like, in the moment, you might just do it. And I think that's one thing that we see in our data also is that - so, in the first six months, we will typically have between 35% to 40% of an organization will fall for at least one simulation. And "fall for it" being like they did one of the kind of bad things, not necessarily all of them. They didn't necessarily leak their credentials, but they like clicked, they replied, they whatever. And that's because like everyone is susceptible to it if you get them at the right moment at the right time.
Sherrod DeGrippo: I think, too, phish sim is creating and practicing skepticism that every person, it's subjective. Right? Every person has whatever level of skepticism and anxiety, it's just built into who they are in their DNA. And getting people's skepticism meter pushed up 3%, 5%, 6% will have a good outcome I think. And that's something a lot of times we say people don't have enough of. Right? They're not skeptical enough. They don't have enough anxiety. I always say people don't have enough anxiety.
Zack Korman: Yeah.
Sherrod DeGrippo: Right.
Zack Korman: And so, once people fall for one thing, they then are like, "Oh, I don't want to look stupid again." So then they pay more attention and then they try harder and like it's - it just stays on the top of mind. But this is why you also want people to fall for phishing simulations 'cuz you want them to be like - that anxiety to go up a little bit. Right?
Sherrod DeGrippo: Yeah.
Zack Korman: Yeah.
Sherrod DeGrippo: It's like a - it's like an attention tax in a way and it's like we want to tax them just a little bit more and that will increase a little bit of the awareness and the security posture and things. But I'll tell you, I used to work for many, many years at a company that did email security. And working at an email security company, I ran intelligence and detection. So we were writing the signatures that blocked things. And, every once in a while, we would get phishing simulation and, immediately, the team was like, "Oh, yeah." They're rubbing their hands together, they're super excited. They're like, "We're going to block simulated phish." And it's very exciting, you know, but then you have to go back and say, "Actually, they've paid a provider to do the phishing simulation. We have an agreement that we won't block it for them." And then everyone's bummed out. But -
Zack Korman: It's a great cat and mouse game I will say of how we have - I'd say about half of our engineering effort on that product goes into how we get other vendors to not block, -
Sherrod DeGrippo: Yeah.
Zack Korman: - right, because they have all of these tools so we have all of these crazy hacks that to clarify would totally work even - so some of our things that we do to bypass them are - so I don't actually send emails, we inject them via Graph API. So I go bypass their ser - the email entirely. Like you're using Graph API to - and - to create an email inside. Yeah. But that doesn't bypass a lot of products because a lot of products also check like URLs, things like that. And so we have all of these things that would genuinely help like real hackers if they were to have the same technology because - or real social engineers because how we move our mouse at the right pla - you know, some of these are like their detections checking where your mou - your cursor starts and stuff. And so we have to - it's like a very, very cat and mouse game. One does something, you change it. So I'm basically like constantly having to - my team is constantly having to run around [inaudible 00:30:45].
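What Zack describes here — creating a message inside a mailbox via the Graph API instead of sending it over SMTP — can be sketched roughly as follows. This is a hypothetical illustration, not Pistachio's actual code; the endpoint shape follows Microsoft Graph's documented messages API, and the addresses and helper name are made up:

```python
def build_graph_injection(user: str, subject: str, html_body: str) -> tuple[str, dict]:
    """Build the Graph API request that drops a message straight into a
    mailbox's Inbox -- no SMTP hop, so the email gateway never sees it.
    (Hypothetical sketch; requires an OAuth token with Mail.ReadWrite.)"""
    url = f"https://graph.microsoft.com/v1.0/users/{user}/mailFolders/Inbox/messages"
    payload = {
        "subject": subject,
        "body": {"contentType": "HTML", "content": html_body},
        "isRead": False,  # land it as an unread message, like real mail would
    }
    return url, payload

# Hypothetical usage: a real client would POST `payload` to `url` with a
# bearer token (e.g. requests.post(url, headers=..., json=payload)).
url, payload = build_graph_injection("target@example.com", "Action required", "<p>Review this</p>")
```

Because the item is created directly in the folder, gateway-layer filtering is skipped entirely, which is why, as Zack notes, URL and content checks inside other products still matter.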
Sherrod DeGrippo: I mean that's detection engineering. It's the exact same sort of thing: the threat actor made an update, well, we have to update detections. The threat actor changed -
Zack Korman: Yeah.
Sherrod DeGrippo: - this indicator, we have to now look for the new indicators. And I think that that's the reality whether there's AI involved or not. It's always going to be an escalation and a change to the threat actors. Detect, evade, detect, evade, detect, evade, it's you - it's like a volley back and forth. It's you, it's us, it's you, it's us. And that's just going to be the way security is because the reality is that's the way security was for thousands of years. Before computers, you had to get better door locks because better lockpicks came out. You had to get better security cameras because people could figure out ways to not be in front of them or to evade them or, I don't know, if you've watched a good '80s movie, they put like a Polaroid of the room that the security camera is looking at and they tape it to the lens of the camera. That's just the reality that we're in. You have to keep escalating when it comes to somebody trying to get around your security.
Zack Korman: Yeah. And I think, obviously, AI introduces a million new way - workarounds as well. So I'm very positive on AI for cybersecurity, but I'm also like, obviously, AI creates a huge new avenue for attack as well. And so that is like this weird chasing between the two of them.
Sherrod DeGrippo: You also were talking about AI browsers because now a lot of the big AI leaders - ChatGPT came out with theirs a couple days ago, maybe a week ago now, Brave, the browser, is becoming an AI browser. And you were talking about AI browsers chaining AIs together to jailbreak each other. Tell me about that.
Zack Korman: It's like -
Sherrod DeGrippo: And then give me the paranoia levels.
Zack Korman: Yeah, this is like my new hobby of like - so it's like, you know, one of the things - there's - and I wish I could remember who I need to credit on this. But - so there was this post about how maybe you're running like a AI agent thing on some of your code environment, whatever you're doing. And then maybe you're running - so there was this avenue of attack for a little while where you would get that AI to modify its own configuration and then that would allow you to do things that maybe were configured not to be allowed. But then, of course, they got a little smarter and they found out how to like hard block them - like the agent from actually accessing its own config. But, if you like have two of them running, like then one can access the other's config and the other one. So you can use the AIs together to like modify each other's configs, like jailbreak each other. And the same applies I think when AI browsers started. So there's one here - Opera is actually a Norwegian company. They refuse to give me access to their AI browser and it's driving me crazy. My sister-in-law literally works there and they still won't give me access. So take from that what you will on their security.
Sherrod DeGrippo: I think they're right, they're smart. They're doing the right thing [inaudible 00:33:44] as well.
Zack Korman: [inaudible 00:33:44], yeah, right? Exactly. Except they don't know I'm going to say that. You know, this is a negative indicator, but, okay. So I downloaded Atlas, right, and I just have Atlas now, ChatGPT Atlas, talking to Copilot, like M365 Copilot, which is amazing because the threat models the two have, the things they try to prevent, are different. Microsoft 365 Copilot doesn't have live access to the internet. So you - it's really easy to get it to do things that are like obviously bad. Like you can - you can get it to basically data exfiltrate your whole company, but it won't actually take that last step because it doesn't have any way to access the internet. So you can't actually send the info back over. But, of course, ChatGPT Atlas can. So all I did is I just - you get Copilot to do those bad things and you get ChatGPT Atlas to basically react to what Copilot's responding with. And then you can use them together to basically exfiltrate.
Sherrod DeGrippo: And that makes ChatGPT Atlas almost a remote access trojan.
Zack Korman: Correct. And this is the problem is like I don't have good answers for how to solve that and do the thing they're trying to do. What I will say is one thing I was very unimpressed by was that all of their security seems to be built into the AI itself. Meaning, basically, you're one good prompt away from not having those security features. So it's not that they've hard blocked to say like, "Okay, a post request, we will always require a button press for a post request." Uh, no. It's like, yeah, you could convince the AI by giving it a good talking to, saying, "No, really, this time, I want you to do it." And then it will do it. You know?
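The distinction Zack draws — security living inside the model's prompt versus a hard block in code — can be sketched like this. This is a hypothetical illustration of the "hard block" approach he wishes existed, not any vendor's actual implementation; the function names are made up:

```python
# Hypothetical sketch: a guard that sits outside the model. No prompt can
# talk this layer out of the rule, because the rule isn't a prompt.
STATE_CHANGING = {"POST", "PUT", "PATCH", "DELETE"}

def guard(method: str, url: str, confirm) -> bool:
    """Allow read requests freely; require an explicit user confirmation
    (e.g. a real button press) for any state-changing request, regardless
    of how persuasively the agent asks."""
    if method.upper() not in STATE_CHANGING:
        return True
    return bool(confirm(f"Agent wants to {method.upper()} {url}. Allow?"))

# Hypothetical usage: the browser wires `confirm` to a UI dialog; a lambda
# standing in for a user who declines means the POST is simply refused.
allowed = guard("POST", "https://example.com/api/export", lambda msg: False)
```

The design point is that the check runs on the request path itself, so "one good prompt" changes nothing: the model can generate whatever it likes, but the state-changing call still dead-ends at the confirmation gate.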
Sherrod DeGrippo: And this goes back, again, I think to social engineering. We are all learning to become better social engineers through simulated social engineering of our natural language AI chatbots. We are social engineering them. And every human is kind of getting better and better at it because we're saying, "Oh, if I talk to it this way, it doesn't do quite what I want. If I talk to it this way, it'll make some change. Oh, I need to change my wording to get right there." We are teaching the world to become, hopefully, fantastic social engineers. Let me ask you a closing question. Fast forward five years for me. I don't mean sci-fi, I don't mean Butlerian Jihad times, this isn't the year 10,022. Not sci-fi, five years from now. What do you think security people, the security industry will look back on five years from now and say, with AI, "Oh, we completely missed that. Oh, we did not see that coming." What are the kind of like blind spots that we're going to look back on?
Zack Korman: I almost feel like that question - I feel like the security community will look back at all of the thing - the bad things that happened over the next five years and say, "I told you so," because kind of a lot of the security community has taken a very like negative attitude towards like a - they've kind of just said, "We shouldn't be doing this." And I fully get where they're coming from. I think the thing they're going to miss is that the companies - or, not necessarily the companies, but that they're going to have AI running detection. Like it will be happening. They will be using very serious AI products in a lot of their stack and they will, if they're being honest with themselves, realize that like not all of the threats apply in all of the cases. Right? I think they're not going to be - I'm sorry to like kind of take your question the other direction, but I don't think they're going to be surprised by all the terrible stuff that happens to fall out.
Sherrod DeGrippo: So you're saying that the AI doomsayers are actually just going to be proven right?
Zack Korman: No - well, yeah, I think that a lot of the bad things that will happen - not, of course, like the "kill you all" and I don't believe in like a takeoff -
Sherrod DeGrippo: Terminator.
Zack Korman: Yeah. Right? But, I mean, like we are going to have breaches that will confirm -
Sherrod DeGrippo: Yeah.
Zack Korman: - all - like this will happen. Like I almost - I can't see any way around it because I don't - I can't even envision certain solutions and I can see the path we're on. With that said, I think that there are - so I don't think anyone's going to look back and go like, "Wow, I never expected it to do that." No, I think a lot of people are very clear that they think this is going to do horrible, horrible things. And they're probably right. I think that they're missing the parts that it will do really, really well.
Sherrod DeGrippo: I agree with you on that one. And I also - I have just really expanded my view on AI overall and it's a moonshot to me. It is a moonshot. And, when we did the moonshot to the moon for real, there was a variety of concerns, everything from individual astronaut safety to, "Hey, we might contact another species" and "there could be aliens." I think we're at that level with AI. I think it is that level of moonshot. And will it be ultimately beneficial? I don't know. Maybe - maybe yes, maybe no. But, at this point, it's kind of like we have - I feel like we owe it to ourselves as humanity to find out what's next there.
Zack Korman: Yeah. It's crazy we get this experience and get to be part of it. I think it's super cool. But like, yeah, there will be - like some of the astronauts are going to die. Right?
Sherrod DeGrippo: Yeah.
Zack Korman: I mean, that's going to - I think that's going to happen.
Sherrod DeGrippo: That's bleak, Zack.
Zack Korman: Yeah, yeah. Sorry. Yeah, yeah. Sorry to end the podcast and be like, "Oh, I knew it something" - yeah.
Sherrod DeGrippo: On a positive note, I think that the fact that we talk about these things and that we have security professionals thinking all of these things and, if you go back and look at the list of podcasts that we've done, like with Mark Russinovich and Yonatan Zunger, really smart leading minds about AI, security, what it's going to look like, I think that there are some really fantastic things that people can learn. And I also think that one of the things that's impressed me the most about AI researchers, these super, super smart people that are creating all of these things, you ask them, "Oh, is this going to happen, is it going to be like this," and they'll look you right in the face and say, "Well, we don't really know. We don't really know what's going to happen." And I respect the frontiersmanship. I respect the pioneering spirit. Like let's go forward and see what's there.
Zack Korman: Yeah, I agree.
Sherrod DeGrippo: Zack Korman from Pistachio, thank you so much for joining us. Check him out on Twitter. Tell me your Twitter handle.
Zack Korman: I think it's @zackkorman.
Sherrod DeGrippo: Okay.
Zack Korman: Yeah.
Sherrod DeGrippo: Check out Zack on X/Twitter, you know, @zackkorman. We'll link to his Twitter in the show notes so you can click through and check it out and see what kinds of crazy things he gets into next by offering people free stuff on the internet. You end up on a podcast, you end up sending out a lot of swag, who knows?
Zack Korman: Great. Thank you for having me. [ Music ]
Sherrod DeGrippo: Thanks for listening to the "Microsoft Threat Intelligence Podcast." We'd love to hear from you. Email us with your ideas at tipodcast@microsoft.com. Every episode, we'll decode the threat landscape and arm you with the intelligence you need to take on threat actors. Check us out, msthreatintelpodcast.com, for more and subscribe on your favorite podcast app. [ Music ]
