The Microsoft Threat Intelligence Podcast 1.28.26
Ep 61 | 1.28.26

Fact vs Hype: How Threat Actors Are Really Using AI Right Now

Transcript

Sherrod DeGrippo: Welcome to the "Microsoft Threat Intelligence Podcast." I'm Sherrod DeGrippo. Ever wanted to step into the shadowy realm of digital espionage, cybercrime, social engineering, fraud? Well, each week, dive deep with us into the underground. Come here for Microsoft's elite threat intelligence researchers. Join us as we decode mysteries, expose hidden adversaries, and shape the future of cybersecurity. It might get a little weird, but don't worry. I'm your guide to the back alleys of the threat landscape. Hello and welcome to the "Microsoft Threat Intelligence Podcast." I'm Sherrod DeGrippo with Microsoft. Now, on the show, we usually talk about adversaries. We talk about threat actors, talk about how they operate, how campaigns work, what defenders are actually working against in the real world. But today we're going to talk about the technology that is increasingly shaping everything. You may have heard of it: artificial intelligence. This is the AI Hot Takes episode. I just want to be clear, though. This is not AI 101, so if you don't know what a large language model is, you might be a little bit behind. So we're not repeating any narratives you've probably heard before. We want to have a conversation for people who actually work in tech, work in security, work in threat intelligence, and people who want to understand where AI actually is, how is it being used, and what that means for defenders, as well as what it means for threat actors. For me personally, I really think AI is a space race. I'm a believer. I use it day in, day out. I'm fascinated by it. I do think that it is an evolutionary moment in how we operate as a species. But it's beyond just economic, geopolitical, and security concerns. This is something where if you fall behind, it will compound so quickly you really won't even believe it. So obviously, Microsoft is very much a part of that race. We're not here just to compete, but we want to help shape how those capabilities are built, how things are secured, and how AI can be used responsibly. So we're going to have this conversation properly. And I am joined by two people who live at the intersection of technology, security, risk, and the global realities that come with all of those things. First, I have Crane Hassold, a security researcher, with me here at Microsoft. He has deep experience tracking threat actor behavior, understanding how threat actors work, understanding how they use new technology and are able to adapt so quickly. I'm also joined by Chloé Messdaghi, who focuses on AI governance, cybersecurity risk, and helping organizations think clearly about responsibility, oversight, and resilience. So the three of us are practitioners. We're in this space all day long, and we're here to really talk about the thought experiment of what AI is and what it could be. So Crane, Chloé, welcome to Hot Takes.

Chloé Messdaghi: Thanks for having us.

Crane Hassold: I'm excited to get this started.

Sherrod DeGrippo: Oh boy. So just to be clear, this was Crane's idea. This topic was Crane's idea.

Chloé Messdaghi: The soap box. I remember the soap box comment last time we were all together.

Crane Hassold: Yeah? This is -- I have so many hot takes. I cannot wait.

Sherrod DeGrippo: I'm excited to hear those hot takes.

Crane Hassold: Let's see where this goes.

Chloé Messdaghi: I think that if you're not thinking critically and evaluatively about AI, not just in your organization, not just in your enterprise, but in your life, for real in your life, you have got to catch up. And I know that there is AI fatigue. There's AI messaging fatigue. I'm fatigued. I get it. But it is really a transformational time. It is a transformational technology. And in order to feel the power of that technology, you have to talk about it critically. And that's something that, you know, a lot of think pieces and a lot of certainly, you know, the marketing stuff, you won't necessarily get that critical evaluation that I think all of us need to be doing. So let's start by getting rid of the hype. Where are we actually with AI today? What do each of you think it does well, and where do you think things have been over, promised or shown to be fragile? Crane, I see your face. Where are we with AI? Where are we for real with AI today?

Crane Hassold: Yeah, so AI is one of those things that, you know, when it came -- when it came on the scene, what, about five years ago, we were all -- we were all having fun inside of our houses during COVID.

Sherrod DeGrippo: All fun and games. All fun and games.

Crane Hassold: ChatGPT came on the scene. And it became, you know, pretty clear very, very quickly how important and useful it could be in our day-to-day lives for a variety of different things. And as we've taken those thoughts of what we do in our personal lives and applied them to what we see or what we expect to see in our jobs, like from a cybersecurity perspective, I think there was an expectation that, hey, we found it to be very useful. The bad guys must be finding it to be very useful too. How are they -- how are they using it? How are they using AI in these devious, bad, malicious ways? And so there have been so many think pieces and presentations and symposiums and journal articles --

Sherrod DeGrippo: Conference-stage hours.

Crane Hassold: Yes, so many hours, so many marketing people working on PowerPoint presentations about AI and how it can be used maliciously. And you see all of these news articles out there that said, hey, this presentation was given at DEFCON about something scary related to AI. This is how the bad guys could be using it. And I think that there's a big difference between how AI could be used and how it is being used in various cyberattacks. And that's really where I think there's a really big deviation between what is real and what is assumed or just completely made up. And so from what we see today, if we're -- if we're thinking about -- we think about sort of a cyberattack in three different forms, right? So you have pre-attack, the actual attack, and post-attack. Where we are not seeing AI being used very much -- and very clearly I gotta -- I gotta say this is based on what we can actually see and measure and be empirically sound on -- we don't see a ton of AI being used within the attack itself, right? So we assume that attackers could be using ChatGPT or Copilot or Gemini to write really sophisticated emails, to create some variation in those emails, and send them out, and a lot of people are going to fall for them. We assume that's the case, but we don't really know if that's the case. It's really hard to measure these at scale to, you know, to look at an email and say, hey, that's written very well. Was that AI or not AI? Like well-written emails have been around for decades, at this point. You know, ChatGPT has only been around for a couple of years. And so just because an email is written very well, it doesn't mean that it was -- AI was used to create it.

Sherrod DeGrippo: Well, I want to just mention too contextually there. What I think about with that is, you know, things like autocorrect, spell check, these things have been around for years. It is, again, a slot-in tool that makes the threat actors objective, faster, easier, whatever it may be. Instead of just using, you know, spell check, and grammar check, and all these things, now they can put it through an AI check. It's a tool that's gotten a little bit better. Yeah, I agree with that, and I think it does make things better. But at the end -- at the end of the day, you know, we're teaching people to look out for these really sophisticated emails that look totally legitimate. We've been teaching people to do that for decades with security awareness training. Sometimes it works. Sometimes it doesn't work. But at the end of the day, even if a threat actor used AI, some sort of, you know, sophisticated LLM to create, you know, to make a more realistic-looking email, that only impacts the body of the email. It doesn't impact a lot of the other signals that we use to actually detect the attacks. So at the end of the day, it doesn't really matter if a malicious email was created using an LLM or some sort of AI because we can detect it in other different ways. And it's, you know, that's just what's creating the email itself. The infrastructure that's being used to do host phishing, phishing content, or malware -- that doesn't change. It's all about the content and making it look more realistic, and there are other ways to detect those. I'll let Chloé chime in here a little bit before I go too off the rails because my stream of consciousness is just going right now. So what you're saying is that off the rails is coming?

Crane Hassold: It's coming.

Sherrod DeGrippo: Later in the episode. Later in the episode, everyone.

Crane Hassold: It's the new year. It's 2026.

Sherrod DeGrippo: Everyone stay tuned for off the rails a little bit later. Chloé, let me kind of recenter us back. So in your opinion, where really, actually, where are we with AI today, the beginning of 2026? It's a new year. Where are we?

Chloé Messdaghi: Yeah, so I would say Crane does bring up some pretty valid points. And the whole thing with AI using it for like phishing emails, there is a 4.5 times more likely the person will do the click-through rate versus one that isn't but that's because, like you shared, like misspellings are no longer there, and that does help, in a sense, for people to do click-through. But we've always been telling everyone like, hey, don't click on something if you haven't had your cup of coffee yet for the day. So it is still continuing those same safety practices. What I would say is what we have seen with adversarial threat when using AI, it's really for automating vulnerability discovery -- we have seen -- and also for phishing. You've also seen it with like potentially malware generation and data analysis as well. But like Crane also mentioned, it's very hard to know what things were actually used by AI. We can make a good --

Sherrod DeGrippo: Yeah.

Chloé Messdaghi: -- idea, a good guess, but we really don't have like solid proof sometimes, so that makes things very challenging in trying to understand how much is AI being used when it comes to adversaries? But I can tell you that AI has been helpful for like advancing defense. You know, it helps with detection, like identifying detection gaps, also for threat analysis. Also, it helps with when we think about like detection authoring -- these are all great things that it can do and is helping us out, especially when you're introducing agents too. That's also another thing to think about. But I mean, at the end of the day, what AI is really doing is just creating new attack surfaces. So we have to just keep an eye on those such as like, you know, GenAI prompt and responses to like AI data and orchestration, web data and source context, of course, your plugins and your functions. So these are things to think about. But I do think that when we think about dangerous capabilities of AI, we have to think about like the production-sensitive materials and also scale uplifts. So say, for example, someone who doesn't know how to develop a chemical weapon could possibly learn. So these are things that we have to think about.

Sherrod DeGrippo: Yeah, I think, you know, talking a little bit about detection, I think that if you've been in the detection engineering space, which is really where I started my security career, 20 plus years ago, the AI capabilities today to help you do things like write rules or improve your detection capability is massive, and it also is an accelerator for people, for example, who struggle with regular expressions, for example. I, in previous roles, have worked with teams of regular expression wizards doing detection engineering, and today to be able to work with a decent AI and say, hey, I need a regular expression that detects this and this and this and this and this, and can you write this for me? And can we iterate on it? And can we get it, you know, really, really tight on this particular rule and really, really honed in? I think that that's something that is going to accelerate the ability for defenders to defend more effectively. And I'm excited about that aspect of it. I'm excited about the aspect of, hey, you might be really good at looking at PCAPs or doing an understanding what a credential-phish landing page looks like, but maybe you have a hard time with some of the other complementary pieces that go with it. Now you can become really good at those other pieces really fast. So I think it creates almost like super-powered detection engineers and defenders.

Chloé Messdaghi: Absolutely.

Sherrod DeGrippo: So something else I want to talk about is integrating AI into workflows. The model is not really the end of what the value is of a particular AI system. The value is putting into your workflows, using it in your tooling, somebody knowing how to actually interact with these AI systems. You know, it's become a joke, and I do find it comical. But the concept of the prompt engineer sort of came and went as this new career field we were supposed to see everywhere that didn't happen. But the reality of that is there is a learning curve and a need to know how to use these systems. And I will complain all the time to friends of mine. It used to give me what I want, and now it doesn't behave the way that I like. It's not doing what I want it to do. And when my friends will say, oh, did you set -- did you set your prompt right? Did you -- did you tell it this in this way? Are you, frankly, are you socially engineering your AI to do what you want in the way that you want it? And I think, oh, I have some refinement to do. So I guess the question that comes from that is where will threat actors pull AI into their tooling that maybe we haven't seen yet that we could see, that doesn't necessarily fall into that FUD category?

Crane Hassold: So the last time we talked about this, you brought up a really good point that I think is probably the more realistic scenario when it comes to thinking about how threat actor is going to use AI in their overall -- in the overall attack chain, and that is pre-attack recon, like recon information collection, intelligence collection.

Sherrod DeGrippo: You'd be a fool not to.

Crane Hassold: -- about targets being able to do the -- yeah, doing that so much more quickly and efficiently -- that's easy -- and then post-attack sort of data triage, right? So we have -- when ransomware first started going after the enterprises, what, about sevenish years ago now, you know, all -- and then they started pivoting over to the extortion and dumping everything on these dark websites. One of the biggest issues with that was, yeah, you have this information from these organizations that didn't pay, but practically speaking, like no one's going to go through all of that information. Now you have tools that can go through all of that data very, very easily and efficiently, sort it, find the really interesting components to that -- to that information, and then use that in malicious ways. So triaging exposed data or documents that have been collected in cyberattacks, that's another way that I think is more realistic for actors to actually use the AI's LLMs more effectively than simply making more realistic-looking attacks within the attack cycle. I think the other side of it is what we don't -- we don't see like there's a -- there's a scare tactic out there about, you know, this super malware where you have actors that are creating, you know, zero days right and left because they have, you know, ChatGPT or some other LLM creating it for them. Again, if you think about AI, AI is only as good as what it's trained on, and it's trained on stuff that is -- has already happened. It's learning stuff that is already there. If anyone has ever tried to use ChatGPT in a creative way, like say, hey, ChatGPT, give me 10 funny fantasy football names that I can use this year. You'll know it's really bad at it because it's --

Sherrod DeGrippo: Yes.

Crane Hassold: -- not creative at all, inherently.

Sherrod DeGrippo: Finally, something that humans can keep for ourselves.

Chloé Messdaghi: Right?

Crane Hassold: Yeah.

Sherrod DeGrippo: We're funny. We're funny kind of.

Crane Hassold: Yeah, sometimes.

Sherrod DeGrippo: But no, I completely agree. I think that's really important. These models depend heavily on data quality, on guardrails, on context. What they are trained on is what gives them their power. And that can be why putting them into operation -- that's why that can be so hard is because you don't always have that visibility. You don't really always know what it's trained on, how brittle it is. What is this data quality? And that's why sometimes, you know, you get results that are wild or unhelpful, or you get, you know, ask for jokes, and you get jokes that aren't funny. So are they really even jokes at that point? I think that that's important to mention, and I also think that it's important to understand something that I've spent so much of my career working on, which is threat actor psychology. Threat actors generally, generally, even many of the sophisticated threat actors, they stop once they get what they want. There are very few threat actor groups that will get access and then go get access from another pivot point that will keep going and going and going after they've gotten what they want via exfiltration or data or whatever it may be. Yes, there are apex predator threat actors out there, but they're rarer. They're more dangerous, and obviously they're high priority. But in general, we see threat actors get what they need and move on, and they're going to do the same thing with AI. If the email that they write is passable enough, and their return on that investment of time and money is getting them what they want, they'll generally say, oh, this is good enough for my purposes. I'm good. And they're not gonna -- I don't think. I'm open to what Chloé and Crane think. I just don't think threat actors are gonna push this technology to the edge of the possible.

Chloé Messdaghi: Yeah, I would agree with that. Whatever I could give money the fastest and get out, like that's basically the thought process.

Crane Hassold: I mean, we still have Nigerian print scams, for God's sake, that have been around --

Sherrod DeGrippo: Yeah.

Crane Hassold: -- for 30 years that are still around because they're still making money. Like and I think 99.999% of people would receive that email and be like this is stupid. Why is someone still wasting time on it? But someone's still wasting time on it because they're still making enough money to make it worth it. And so as long as -- and those are the most basic of basic emails. The templates, the formats, have been around for decades, at this point. Some of them have barely changed. Like you look at some of these emails, they're this like the same emails from 2005.

Sherrod DeGrippo: Yeah.

Crane Hassold: It's crazy.

Sherrod DeGrippo: Why not? It works.

Crane Hassold: But they still work.

Chloé Messdaghi: Yeah.

Sherrod DeGrippo: Why mess with a winning formula? And I think that that is threat actor attitude from beginning to end for the most part. I will never discount or dismiss our super apex, top of the top, top, persistent and sophisticated and capable threat actors. I'm not dismissing them, but by and large, most threat actor psychology says don't mess with what's working. Get what you get and get out. Make it easy on yourself. Take the path of least resistance. Do what you got to do, and then stop.

Crane Hassold: When you think about the motivation of threat actors because you're absolutely right, you know, 90% to 95% of all threat actors are motivated by making money, like financially motivated. If they're financially motivated, they want to do the least amount of work to make the most amount of money possible, increase their profit margins or ROI. If that's their motivation, then AI doesn't really do much for them. That being said, those apex predators, the state actors, the mission-oriented attackers who only care about getting to an end result, getting to that final goal, completing that goal, regardless of resources, regardless of time and effort, those are the actors that are likely to be more invested in AI, not only because AI could probably do more for them, but also they're the ones that probably have the resources to develop their own tools that leverage AI a little bit more effectively than normal financially-motivated actors.

Sherrod DeGrippo: I want to mention -- that's a really good point. I always think about too, I think, you know, we see -- we're surrounded in this AI discourse. We're surrounded by the push, right, and threat actors are surrounded by it too. And I think about, you know, we see ransomware threat actor groups highly operationalized, operating essentially like a business or a startup. They're using ticketing systems. They're doing stand-up meetings. They're discussing their paychecks and working conditions and all of these things. Are their bosses pushing them to use AI too? Are their assignment generators saying, hey, you know, we're going to go out and make $5 million of ransom this year, but I want to see you leveraging AI to get it done. I'd love to know. Release some more chat logs, please. That's my personal call to see some more chat logs. Chloé, what do you think? Do you think that threat actors are leveraging AI because it just kind of seems like the thing to do? It's like compulsory?

Chloé Messdaghi: Right? Everyone else is doing it, so I need to do it too, right? You don't want to feel left out. But I think your points are valid about, you know, the apex ones are probably going to find ways and take the time to do it. But I feel like most times when it comes to malicious actors, it's usually like how do I do something that doesn't take a lot of my time, and it's quick, fast, and I get money? And so like, I could see them using it for trying to figure out how to do malware development. But like Crane said, a lot of these things is old, and so it's not too much of a concern. But the same time you have to have like think about what does that -- so when it comes to the use of AI, what are they really using it for? At the end of the day, I think it's mostly based on phishing.

Sherrod DeGrippo: I want to know too like are they just like -- are threat actors just trying to get things done that they can get done, and they don't really care? I think that that's ultimately the truth is that --

Chloé Messdaghi: Yeah.

Sherrod DeGrippo: They're looking for a payday. They're looking to take action on objectives. But that's kind of a different sort of, you know, motivation, motivational factor. But I also wonder too if like they try vibe coding some malware. It doesn't really go well, and they think maybe I'll vibe code an eCommerce store instead and try and like sell some legitimate merch. The need to do malware might kind of disappear if --

Chloé Messdaghi: Yeah.

Sherrod DeGrippo: -- it really does accelerate the ability to create software and do coding and engineering really super fast that it didn't before.

Chloé Messdaghi: Yeah, I would agree on that.

Sherrod DeGrippo: Let's hope some people go legit instead. So I want to ask both of you. Chloé, I'll start with you.

Chloé Messdaghi: Okay,

Sherrod DeGrippo: Does AI shift the balance of power more toward threat actors or toward defenders? Who's winning right now?

Chloé Messdaghi: I would say an organization that is using AI for defense is definitely winning. I think the organizations that don't really know what they're doing, where they're just saying like, oh, we'll just use AI, but then don't even do any preparation for it, no discovery for it, no protection for it, and governance for it, then I see that where AI for defense is not working. You need to be able to know what is in your systems, your infrastructure, and what your organization is using. That's where I see it. If you're using it, you're doing a better job.

Sherrod DeGrippo: Praying who's winning.

Crane Hassold: I totally agree. If an organization is leveraging AI, and its, you know, similar derivatives, you know, machine learning and all those things that get lumped into AI category anyway, it, I mean, at the end of the day, it makes the ability to detect potential threats more efficient and more effective because you're able to go through all of the tedious stuff much more quickly. So having agents that can triage potential threats for you as essentially a, you know, a first line of defense, instead of having individual people doing it manually, having that capability and having those tools in place makes things much more -- much more easy. It also allows you to scale the ability to search, you know, email for potential email threats by using machine learning and AI in ways that we, you know, probably wouldn't do or couldn't do as human beings. And so I think from a who does it help more? I think 100% it helps defenders more than the threat actors because there are more clear and easier use cases for us to use them in ways that really help us take our ability to detect threats to the next level.

Chloé Messdaghi: Oh, but one thing to add we should probably talk about, which is the fact that like also employees are responsible when they're using AI because so then you help your defense team out. Because if you're doing over-reliance or have any -- you're like leaking any information that is, you know, that should be private, these are all things that also puts your organization at risk, even your own employees. That's why it's so important is to not, you know, be aware of over-reliance and information linkage.

Sherrod DeGrippo: I think that's important too. If you're familiar with Maslow's Hierarchy of Needs, it was developed by a psychologist, Abraham Maslow. He also said a very famous quote, which is, "If the only tool that you have is a hammer, you tend to see every problem as a nail."

Crane Hassold: Yeah.

Sherrod DeGrippo: And I actually think that AI is an electric nail gun that you, you know, walk through your enterprise and, you know, somebody says, oh, well, did you use the nail gun for that? And you pick up the nail gun, and you think, well, I don't really need a nail gun for this, but I'm going to use the nail gun now. Let's go. So I think, you know, Chloé, what you're saying about over-reliance is really important. And I think seeing AI, you know, as a nail gun, it is a tool. It is a tool. And if you do not have an application for a tool, you are going to damage and destroy things. So you need to be careful. It's like fire. You know, it can warm you, and you can make hot cocoa, and you can roast marshmallows. You can burn your house down. It's really something that you need to use judiciously and smartly and responsibly, and you'll get results that you want. Just applying a nail gun to something, it's going to cause havoc.

Crane Hassold: Yeah, I'll give you a great example of that because -- so as a threat intelligence analyst, as someone who's been in the threat intelligence world for 20 plus years, when GenAI first came up -- came out, I think one of the first things that I tried to use it for, and I've still tried to use it for today, is, hey, give me -- if I have a collection of indicators or artifacts that I know is associated with some sort of campaign or some sort of threat actor, can you summarize that for me? Can you put all that together in a way that we know -- I think, as intelligence analysts, we write a certain way. We write in a way that not only conveys what we want the audience to understand, but also to understand the so-what of that, understand why we're telling that to you, and understand how you can apply it. And what I've found is that I've tried this prompt engineering in so many different ways, trying to have AI -- sort of an AI platform give me a response that really concisely summarizes a threat of -- based on indicators I'm giving it in a way that I could actually use it in my -- in my job. Like essentially say, hey, AI, here's -- here are all these indicators. Here are these artifacts that I have that are associated with this campaign or threat actor. Give me something back that I can just literally just copy and paste into a document, make my life much easier, more productive. I cannot tell you how terrible it is.

Sherrod DeGrippo: Oh.

Crane Hassold: It is so bad --

Sherrod DeGrippo: Yes. Oh no.

Crane Hassold: -- the output that I get, it makes -- it is like it's reading one of the most boring outputs --

Sherrod DeGrippo: Yeah.

Crane Hassold: -- and something that is completely uninteresting that doesn't give you the point of what you're actually trying to convey. It's literally just regurgitating information without that analyst's so-what, without sort of giving that nuance and understanding to the data that threat analysts and intelligence analysts do when we're writing our intelligence products. That is one of the -- like you can't use that type of nail gun. Like I remember this was probably 15 years ago before AI, but the same concept. I was working for the government at the time, and we had created an automated case-matching system that would take -- it's very similar to what we do today in our -- in clustering. And we created this sort of very basic algorithm that would cluster stuff together, cluster cases together and then come out with and essentially say, hey, here's the end result. Here are some cases that are likely associated with the same actor. And I remember at that time, this was way before -- way before AI. My supervisor, at the time, was like, well, that's great. So how many analysts do we not need anymore because we have this? And I'm, at the time, I'm like what, are you joking?

Sherrod DeGrippo: Oh no.

Crane Hassold: This is -- this is a great correlate to what we're doing today with AI. Like AI is not the silver bullet. It's not going to solve all of our problems. It's a tool in our toolbox. So like that case-matching system giving us that step 1 of these possible matches is great. Now we need an analyst to review those and understand the context of them, understand the nuance of them. Very similar to what we're doing today with GenAI and some of the AI tools. It -- the initial triage for like the very rudimentary, repeatable stuff that humans have -- were previously doing over and over and over again AI, take it, run with it, fantastic. But the stuff that still requires some thought, some brain power, like those are the things that an AI agent can't do very well. I have no doubt 5, 10 years from now that probably will not be true anymore, but in present day, the present-day world reality, there are so many things that AI still can't do and, you know, we're, as human beings, we're still -- there's still something that they need us for as our AI overlords.

Sherrod DeGrippo: I agree.

Chloé Messdaghi: I agree here.

Sherrod DeGrippo: I think it's funny too that, for whatever reason, people are very comfortable saying, oh, you know, I had some technical difficulties. I need to reboot my phone. I need to close this program. Oh, it's giving me some weird errors. It's all computer. Even though that AI that you're talking with seems very legit, and it seems like you're having a chat, it also could be prone to a variety of errors that, for whatever reason, we accept as reality with every other piece of technology, you know, that we use day-to-day, whether it's in our work or our home or, oh, you know, this little error popped up on my car or whatever. We're like fine with that, but for some reason, oh, the AI is infallible. It will -- it will lead us, you know, to the Promised Land, and it's definitely right every time. And I don't -- I don't know how that sort of mind virus happened.

Chloé Messdaghi: Yeah, I don't know either.

Sherrod DeGrippo: We're not as skeptical, and, you know, I don't think that we are treating the usage of AI with the same kind of skepticism and, you know, fallibility that we do with everything else, with computers. I won't use the C word, clanker, but -- or is it clunker? It's clanker, right? I won't use that word, but, you know, I think that there is an element there where it's like, look, these -- technology is never going to be exactly perfect every single time. That's what evolution means is that it gets better and better over time. So, yeah, it'll get better. Will it become indistinguishable from magic? Maybe. But it's not quite there yet.

Chloé Messdaghi: Agree.

Crane Hassold: But also like, I'm a huge -- I love history. I'm a huge history buff. And one of the things that I -- that I think when you think about AI today, like in the past, there have been transformational technologies throughout history that I would imagine, at the time, people probably thought the same way that we're thinking about AI right now, right? So let's say when penicillin came out, you know, in the early 1900s that everyone was like, oh my gosh. Is this going to eliminate all disease in the world? We're never going to have to worry about diseases again? No, but that was something that was so transformational at the time --

Sherrod DeGrippo: Yeah.

Crane Hassold: -- that, you know, everyone was like, oh my gosh. But down the road, once it's been around, once it's been used, you know, over and over again by a bunch of the population, then you start to sort of see what it can be used and what it can't be used for. So I would, I mean, so this has been going on forever, as far as having technologies like this in place, and I think in a few years, like we'll probably have a better understanding of, you know, what's realistic and what's not realistic when it comes to AI. You know, my son actually came home from school today starting a new semester, and he was like, hey, we started doing some coding, and he's in elementary school. And I was like, that's interesting. I wonder if -- like I didn't, right, when I was growing up, coding wasn't a thing in my elementary school, not a thing, but how much coding does he actually need to know? Because as he gets older, he could probably just have AI code something for him or create something for him.

Sherrod DeGrippo: Oh.

Chloé Messdaghi: Yeah.

Sherrod DeGrippo: Yeah, and I thought that when I was -- like in my 20s, I really believed that every person one day would have their own Linux distro that they created, that every person would have their own web page, that every person would know how to use a computer, open a computer, and I was in a dying field because eventually it would be so ubiquitous. But the truth is it's like the television. I don't open my TV and like tinker around inside of it. I don't open up, really, hardware anymore at all. Maybe, you know, when I was younger, I did, but these are consumable products now, and I don't think that -- like the general population, I don't think that they're going to get more technical with AI. I think that the divide is going to get bigger.

Chloé Messdaghi: I could see that.

Sherrod DeGrippo: So Chloé, something that I've been hearing a lot about is kind of what the emerging threats are around AI, using AI. What have you seen with that?

Chloé Messdaghi: Well, I would say like the emerging threats that I'm concerned about is probably about securing long-running agents. You know, they can drift from their intended goals, or they could be manipulated through hostile data. But also the risk from read-write memory is another concern that I have these days, but there is some hope in that, of course, there are these things called guardian agents, which you can like embed alongside your operational agents, and that can really detect any type of manipulation, but also can enforce policy. So that's actually something I'm looking more and more forward to is more discussions and more learning about AI agents. But of course, the thing that does keep me up, still, to this day, is prompt manipulation attacks and, you know, direct and indirect prompt injections, but also the exploitation through protocols like model control protocol like MCP and ATA, which is agent-to-agent. These are things that I do get nervous about because, you know, even though recon is usually involved at some point, the thing is if someone has the ability to inject malicious payload into an AI-processing stream, that's concerning because it can hijack like behavior and run attacker controlled instructions. So that's why that is stuff that I still think about. People trying to attack AI systems, that is the thing I'm concerned about as an emerging threat.

Sherrod DeGrippo: What's interesting about that, to me is that, yet again, in our security worlds, in our security realities, it is the glue. It is the seams, those little seams in between systems that, you know, end up being where vulnerabilities happen and where exploits happen. Crane, you have looked into AI as a target. What are you seeing there? What are threat actors doing there? What do you think they might do?

Crane Hassold: Yeah, so, I mean, that's one of the things that, you know, Chloé just sort of referenced a little bit and something that we haven't really touched on today. You know, we've talked about -- a lot about how AI can be used to help the threat actors, but AI being the target in and of itself, I think, is probably one of the more, you know, pressing issues of today.

Sherrod DeGrippo: Yeah.

Crane Hassold: So prompt injection attacks, essentially having an attacker inject a malicious prompt into an AI tool and have it give back information that wasn't intended as a way to expose sensitive information, sensitive data, and stuff like that. I think that's probably a more realistic and pressing issue because that is a direct attack vector that is more exposed and more available to many, many more attackers that probably has a higher ROI than AI being used in other different ways, like attacking the AI itself. Like there's a lot that you could potentially get out of that through relatively simple prompt injection attacks. And I could easily see, you know, a -- an as-a-service group essentially selling prompts that can be used to exploit various vulnerabilities, similar to what we've seen with malware-as-a-service being, you know, selling malware to exploit various vulnerabilities. That I think is something that we've seen in the wild is absolutely happening, and I think that is probably only going to get more and more active as we -- as we move along.

Sherrod DeGrippo: I think that's true. So I will wrap this up by asking everyone to treat AI like a power tool, something that you learn, something that you master, something that you're good at, not something that you just, you know, shoot from the hip with, literally or figuratively, and not something that you just kind of take advantage of when you have time. I want to thank Crane Hassold and Chloé Messdaghi for a very honest conversation. Thank you for joining me, and thank you to everyone listening. We will see you next time on the "Microsoft Threat Intelligence Podcast." Thanks for listening to the "Microsoft Threat Intelligence Podcast." We'd love to hear from you. Email us with your ideas at tipodcast@microsoft.com. Every episode, we'll decode the threat landscape and arm you with the intelligence you need to take on threat actors. Check us out, msthreatintelpodcast.com, for more and subscribe on your favorite podcast app. [ Music ]