The Microsoft Threat Intelligence Podcast 4.10.24
Ep 16 | 4.10.24

Microsoft Secure in San Francisco


Sherrod DeGrippo: Welcome to the "Microsoft Threat Intelligence Podcast." I'm Sherrod DeGrippo. Ever wanted to step into the shadowy realm of digital espionage, cyber crime, social engineering, fraud? Well, each week dive deep with us into the underground. Come hear from Microsoft's elite threat intelligence researchers. Join us as we decode mysteries, expose hidden adversaries, and shape the future of cybersecurity. It might get a little weird. But don't worry. I'm your guide to the back alleys of the threat landscape. Hello.

Brandon Dixon: Hello.

Sherrod DeGrippo: This is the "Microsoft Threat Intelligence Podcast." I am here with Brandon Dixon, product manager for Microsoft Copilot for Security. Is that right?

Brandon Dixon: Partner.

Sherrod DeGrippo: Partner group product manager of Microsoft Copilot for Security.

Brandon Dixon: It's a mouthful. It's a lot.

Sherrod DeGrippo: Yeah. Brandon, you are like an AI person. And a security person. So you came from the RiskIQ acquisition.

Brandon Dixon: Yes.

Sherrod DeGrippo: And we had dinner the other night and we were talking about the olden days of cool hacker stuff. And RiskIQ's a big part of that story.

Brandon Dixon: I think so. Many years spent, you know, analyzing adversaries on the internet and good old attack surface management, which is making a comeback in the industry. It's good to see. Axonius just went to 100 million. They were celebrating 100 million on Nasdaq. So it's interesting to see vendors that we previously competed with or worked with, that launched capabilities at different times, still growing in the industry. It shows that we've still got a thriving market, and it kind of speaks to the fact that we were ahead of the times.

Sherrod DeGrippo: Speaking of the times, let's talk about evolution. So what do you think the difference is between like the security realities of today and the security realities of like when you were a punk kid?

Brandon Dixon: A punk kid? The realities of the past? It used to be a lot easier to get into systems, I can recall.

Sherrod DeGrippo: So you're saying the industry's successful? That's a hot take.

Brandon Dixon: I think yeah. I think so. I think we don't give ourselves enough credit. Right? Like it's a stressful industry to operate in, but oftentimes if you think too critically about the work that you're doing, it's very easy to get lost in the noise. Like did I actually succeed? Because threat actors are still getting into businesses. But for me, I think we've made a lot of positive and tremendous gains, and the attack surface is fundamentally shifting. Like I have hope for the fact that, you know, with a serverless system that I build or an application, I could launch a company today and have a fundamentally different attack surface that is more resilient than it was 5 years ago, 10 years ago. And I think of the days of writing buffer overflow and heap overflow exploits -- this was prior to ASLR and DEP being employed, and memory management. Those things didn't necessarily stop the attacks from recurring, but they became less prevalent because they pushed the level of sophistication up. So I just think that in some ways the industry itself has gotten healthier. I don't think we give ourselves enough credit for it.

Sherrod DeGrippo: When you say healthier, you mean more successful at preventing attacks?

Brandon Dixon: Yes.

Sherrod DeGrippo: And what about mental health?

Brandon Dixon: Mental health I think is still a challenge. It's a stressful job. I think people are under a lot of demand to, you know, succeed against the adversary, which only has to be right once, while a defender has to be right every single time. I am encouraged, though, by the fact that, you know, it used to be one of those taboo things where if you were compromised everyone would be like, "Oh, my gosh. I can't believe this happened." And so I think there was a shift. I forget who said it. It was probably CrowdStrike. It was something to the effect of there are companies that know they've been compromised and companies that -- I can't remember the exact phrase, but they were effectively saying that everyone has to get familiar with potentially having a breach and become resilient. That you're not going to be able to hold everybody outside of your environment, and these things will happen, and it's okay. It's how you design your organization to be resilient after the fact. So I think the stress is still high, but I'm encouraged by the fact that it seems less taboo to suffer a compromise, which is the reality. It happens.

Sherrod DeGrippo: And building that muscle is kind of the key. Right? Like your first incident is way harder than your second or your third. And that incident response gets hopefully better and better even though it could possibly be different every time. People are learning. Learning how to deal with that. I do see that acceleration as well. I feel like breach response, incident response, and hardening has become a much stronger muscle in the industry than we had, you know, five or six years ago.

Brandon Dixon: Yeah. And we have a litany of tools to monitor. Right? And tell us all this information. So I mean I remember the days when I would do incident response and we would have a -- we'd go into a customer site and I'd say, "Okay. Like where are the logs associated with your web servers?" And they're like --

Sherrod DeGrippo: What?

Brandon Dixon: We'll go -- we'll turn those on. Which ones do we turn on? I'm like, "Guys, it's already over."

Sherrod DeGrippo: That's the point of a log -- it's the past.

Brandon Dixon: Yeah. I'm like we don't have any information. And now -- now I think it's quite ironic that like in the day and age of compute that we have it's that storage is so cheap that data's now ubiquitous. It's all over. And, in fact, there's so much of it that people can't honestly keep up.

Sherrod DeGrippo: Okay. I'm -- you just triggered me. You were talking earlier about context windows because you're an AI guy.

Brandon Dixon: Yes.

Sherrod DeGrippo: What is a context window? I don't even know what that is.

Brandon Dixon: All right. So I don't -- I don't consider myself an AI guy per se.

Sherrod DeGrippo: Don't tell your boss that.

Brandon Dixon: A generative -- a generative AI guy.

Sherrod DeGrippo: It's literally your job.

Brandon Dixon: Well, so it's a new technology. Right? But to me what's exciting about any new technology is, what are the capabilities of it? What can we do? And so within generative AI you have your foundation models, and you have a lot of conversational interfaces that people have created. So there are a couple of words we'll throw out there, like prompt and prompt engineering. You have responses, retrieval-augmented generation. You have context windows. You have tokens. These are all technical terms that I don't think, or I hope, most people won't have to understand as this technology becomes more integrated. But a context window is effectively the number of tokens, both the input tokens and the output tokens, that's available to the model. You can almost think of it as a buffer in a way. It holds information. So it holds context. And so when you're using something like ChatGPT and you ask it a question, and then you have subsequent follow-on questions and you don't have to refer back to the entities that you previously introduced, that's the context window at work.

Sherrod DeGrippo: Oh.

Brandon Dixon: And so it gives this illusion in a way that it's reasoning over the information, that it understands what you've talked about. And each one of these models has a different context window given the way in which it's designed. Some of them have larger ones, and it also just requires more compute to be able to process more tokens.
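Brandon's buffer analogy can be sketched in a few lines of Python. This is a simplified illustration, not any real model's behavior: real systems count tokens with a proper tokenizer (e.g. BPE), while the whitespace-splitting `count_tokens` here is just a stand-in so the sketch is self-contained.

```python
# Sketch: a context window as a fixed-size token buffer. When the chat
# history exceeds the budget, the oldest turns fall out of "memory."

def count_tokens(text: str) -> int:
    # Stand-in tokenizer: real models use subword tokenizers, not words.
    return len(text.split())

def fit_context(history: list[str], window: int) -> list[str]:
    """Keep the most recent messages whose combined token count
    fits inside the model's context window."""
    kept, used = [], 0
    for message in reversed(history):      # walk newest-first
        cost = count_tokens(message)
        if used + cost > window:
            break                          # older turns no longer fit
        kept.append(message)
        used += cost
    return list(reversed(kept))            # restore chronological order

history = [
    "user: who is Octo Tempest",
    "assistant: a financially motivated threat actor",
    "user: what TTPs do they use",
]
print(fit_context(history, window=12))     # the oldest turn is dropped
```

This is why a long conversation can "forget" an entity you introduced early on: once those turns are trimmed to fit the window, the model never sees them again.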

Sherrod DeGrippo: I learned from our coworker Eric Douglas [assumed spelling] who is this like mythic iconic figure at work, oh my gosh, he is wow, I can't -- I want him on the podcast so bad. This guy is wow. He is -- he's iconic in my opinion. He was talking about RAG. What is that?

Brandon Dixon: So RAG, lovingly referred to as RAG, is retrieval-augmented generation. So when you think about foundation models -- let's just use OpenAI because, you know, we partner with them, we use them extensively -- training these foundation models takes a lot of compute and a lot of time. Now just imagine the amount of information that gets generated on the internet every day. It's -- if you had to retrain --

Sherrod DeGrippo: It seems like a lot.

Brandon Dixon: It is a lot. I would imagine. I wonder what the quantifier is.

Sherrod DeGrippo: The internets are vast. Infobahn.

Brandon Dixon: Huge. So if you had to retrain every single day, it's just costly to do that, and potentially the amount of time it would take to train would put you behind by the next cycle. You're always behind. And I know that folks like OpenAI are trying to keep up with the latest information. But one of the tricks for pulling context into these foundation models and making them aware is something called retrieval-augmented generation. You can almost think of it as retrieving the data from another source and then augmenting the model's understanding by putting that information in there. And then the generation is the fact that it generates the response afterwards. So, for example, I might develop a plug-in that goes to a weather website, and I might ask for the weather forecast for the next week and have it summarized into some audio stream or something. Weather data changes, you know, sometimes by the hour, and even then I don't even know where it's coming from because it's seemingly never right, but you've got a plug-in that reaches out to the weather source. It pulls in that information and tells the model, "Hey, this is weather data for the next seven days. Summarize this into some text-based format." And there you go. That's RAG. RAG in a nutshell.
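The weather plug-in Brandon describes can be sketched as three steps: retrieve, augment, generate. Both `fetch_forecast` and `generate` below are hypothetical stand-ins -- a real plug-in would call a live weather API and a foundation model's API -- stubbed here so the flow itself is visible and runnable.

```python
# Minimal sketch of retrieval-augmented generation (RAG).

def fetch_forecast(city: str) -> str:
    # Retrieval: pull fresh data the model was never trained on.
    # (Stub: a real plug-in would hit a live weather service.)
    return f"{city}: highs near 60F, rain likely Tuesday"

def generate(prompt: str) -> str:
    # Generation: a real call would send the prompt to an LLM.
    return f"[model summary of {len(prompt)} prompt chars]"

def rag_answer(question: str, city: str) -> str:
    context = fetch_forecast(city)                 # 1. retrieve
    prompt = (                                     # 2. augment the prompt
        f"Here is weather data for the next seven days:\n{context}\n\n"
        f"Using only that data, answer: {question}"
    )
    return generate(prompt)                        # 3. generate

print(rag_answer("Will I need an umbrella?", "San Francisco"))
```

The key design point is that the model itself is never retrained: fresh knowledge arrives inside the prompt, inside the context window, on every call.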

Sherrod DeGrippo: Well, so this is -- this is where I'm heading with this is I come from an infosec background. Like 20 years in infosec. And now I'm like I have to learn AI things. And so what are the old school infosec dorks, what do you think they need to know about AI?

Brandon Dixon: Oh man. Like see, this to me is exciting. When I see these new technologies come out, that's why I lean into it, because I'm like, oh man, I wonder how this can make my job better. Like the things that bother me -- I like to try and automate portions of my job when I can. I think that's a huge opportunity. So I actually talk with people inside of Microsoft, but also outside of Microsoft, because I feel like I just want people to get excited about the potential for generative AI. And one of the common things that I hear when someone's not necessarily engaged is, "Well, I'm not as technical as you are. I'm not a developer. I'm not X. And so it's just not for me. I'm going to wait until it's in my phone and just ubiquitous in technology." And I go back to them and say the thing that's interesting about generative AI for somebody in tech or infosec, it doesn't matter, is the fact that it's controlled through natural language. And so yeah, you've got to figure out how to write the right prompt and talk to it in a certain way, and there are idiosyncrasies between each of these models and how they respond, but at the end of the day it's not a specialized tool set. It's not like we're talking about operating system mechanics in Linux, right, or the kernel and how we compile things. It's articulating what it is that you want to achieve, or the information you want, or the structure, in natural language. And so my hyperbolic statement is: if you can write, you're a developer. What I mean by that is if you can articulate the problem that you want to solve in natural language, then the model will help produce the response. Now, the way that response gets generated -- it used to be that you had to write the code to do it. 
And we're now at a point where in some cases, for certain activities, especially natural language processing ones where you're summarizing text or pulling out sentiment or analyzing something in detail, the model can do that for you. So I don't have to write code anymore. I can replace it with a prompt and a response, calling the model through its API.

Sherrod DeGrippo: Is that -- is that really true? I don't have to write code anymore?

Brandon Dixon: I mean, it's not fully there, but I'll give you an example. There are so many research papers coming out on security related to LLMs, and I just don't have enough time to look at them. I think something like 120 have come out this year -- about 10 to 20 a week -- and each of these papers is an average of 15 pages long. I don't have the time to review them all. So I wrote an application in Python. It's basically a script, a couple hundred lines of code. What it does is it pulls down all those papers, "reads" them, quote "reads," summarizes them into a bunch of bullet points, and then sends me an email. The actual code that I wrote to automate all of that is a lot of what I would call scaffolding code. The actual code to engage with the LLM, to do the heavy lifting -- summarizing the papers, putting them into the appropriate points, generating a cute little emoji that goes along with each title so I can put it in the newsletter -- that is less than 10 lines of code. And it's boilerplate code. I would argue it's probably like three lines. And the model will give that to you. There are enough examples on the web that you could copy and paste your way to building what I built.
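The scaffolding-versus-LLM split Brandon describes looks roughly like the sketch below. `download_new_papers`, `send_email`-style plumbing, and `call_llm` are all hypothetical stand-ins (his real script's sources, client library, and model are not specified in the conversation); the point is only that the model-facing part is a short prompt while everything else is plumbing.

```python
# Sketch: most of the paper-digest script is scaffolding; the part that
# engages the model is a few boilerplate lines around one prompt.

def download_new_papers() -> list[dict]:
    # Scaffolding stub: the real script would crawl paper feeds.
    return [{"title": "Prompt Injection Attacks", "text": "... 15 pages ..."}]

def call_llm(prompt: str) -> str:
    # Stub so the sketch runs offline; a real version calls a model API.
    return "- point one\n- point two\n- point three"

def summarize(paper: dict) -> str:
    # The "less than 10 lines" of LLM code: one prompt, one call.
    prompt = (
        f"Summarize this security paper into 3 bullet points and suggest "
        f"one emoji for the title.\n\nTitle: {paper['title']}\n{paper['text']}"
    )
    return call_llm(prompt)

def build_digest() -> str:
    # More scaffolding: stitch the summaries into an email-ready digest.
    return "\n\n".join(
        f"{p['title']}\n{summarize(p)}" for p in download_new_papers()
    )

print(build_digest())
```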

Sherrod DeGrippo: So I think what's interesting about that is that most infosec operators, most infosec professionals, would say, "I'm a lousy coder." That's something I hear all the time: "I write junky code that's not that good." And I think that's one of the ways the Copilots can change security, because you've got a lot of people who don't feel strong in their coding and a lot of people who don't feel strong in their reverse engineering.

Brandon Dixon: Yep.

Sherrod DeGrippo: And this gives them almost like a super power to take themselves if they rate themselves a D minus, to take them to a C plus, to take them to a B. And then have them get to the point where they're like, "You know what? I'm not a great coder, but I've got a copilot that kind of helps me out and I'm getting better and I feel a hopefulness about my skill around that."

Brandon Dixon: Yeah. I mean like I don't want to -- I don't want to diminish like the capacity for copilots to produce code, but like if -- I've been in interviews before where you know here in like Silicon Valley where they're like, "Brandon, you're not a -- you're not a coder."

Sherrod DeGrippo: Yeah.

Brandon Dixon: You're not a developer.

Sherrod DeGrippo: It's a gatekeepy world with that.

Brandon Dixon: They're like, "Sorry. You're -- " never mind the fact that you've, you know, built products and launched them. I have a different philosophical viewpoint on what it means to write code. If you're writing engineering code that needs to be performant and scalable, of course you want engineers to do that. But when people in infosec say, "Hey, my code's not great. I'm not a developer. I just write scripts," that's okay. In fact, that's great, because it means you're going to be comfortable enough to start to leverage these models, and you'll be able to take the minimal amount of code that you have and then use prompts to facilitate some of the more advanced things that you may not have the capacity to do. And as you articulated, the way these things function as copilots means you could conceivably use GitHub Copilot to have it start suggesting the actual code. So one of the processes I have is what I call scaffolding. I'll use the foundation model to describe the problem that I want in natural language. And then once I get a consistent output occurring, I ask the model to write the code to produce that output. So it's sort of like I'm working backwards. I'm getting to the solution first in a way that's not incredibly performant or efficient, and not always consistent. And then from there I'm saying, "Okay, now that we have this working, I could model it in prompts, but I'm going to potentially pay more for that," because there's a cost imposed with executing the prompts and the inference. Or I can have the model output the code so that I can reproduce this on my own with alternative approaches like scripting. I do think it will give people more confidence, and we've certainly seen that with the customers we interact with using Copilot for Security.
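The "work backwards" pattern Brandon describes can be sketched with a toy task: extracting IP addresses from incident notes. `call_llm` is a hypothetical stub (here it "answers" deterministically so the sketch runs offline); the two functions show the two phases -- prototype the behavior as a prompt, then replace the per-call-cost prompt with plain code once the output is consistent.

```python
import re

IP_PATTERN = r"\b\d{1,3}(?:\.\d{1,3}){3}\b"

def call_llm(prompt: str) -> str:
    # Stub: pretend the model answered correctly and consistently.
    return "\n".join(re.findall(IP_PATTERN, prompt))

def extract_ips_via_prompt(notes: str) -> list[str]:
    # Phase 1: describe the problem in natural language; every call
    # incurs inference cost and may vary slightly between runs.
    prompt = f"List every IPv4 address in this text, one per line:\n{notes}"
    return call_llm(prompt).splitlines()

def extract_ips_via_code(notes: str) -> list[str]:
    # Phase 2: once the behavior is stable, have the model emit this
    # code and run it locally -- deterministic, no inference cost.
    return re.findall(IP_PATTERN, notes)

notes = "Beacon to 10.1.2.3, then exfil via 203.0.113.7 over 443."
assert extract_ips_via_prompt(notes) == extract_ips_via_code(notes)
print(extract_ips_via_code(notes))  # ['10.1.2.3', '203.0.113.7']
```

The trade-off is exactly the one he names: the prompt version is faster to build and easier to change, while the code version is cheaper and more predictable to run at scale.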

Sherrod DeGrippo: And so here's my other question to you about that. You have a lot of philosophical thoughts here. You have a lot of LinkedIn posts, and you recently moved those. Is that right?

Brandon Dixon: They're at a blog that I control now, only because they're generic. And I don't like using the word, but I want to try and inspire people -- actually, that's not what I'm after. I want to find other people who are interested in how they could push the technology's boundaries, to then generate ideas. I like being in the public forum and publishing because it allows me to be vulnerable. Right? Like I'm putting out an idea for critique. And somebody could say this is a horrible idea. They could say it's a great idea. But what I'm always interested in, why I write, is I want ideas to get out, because that sharing of an idea then maybe inspires someone else to do something. And in fact that's been a net positive, because this is a new technology, and when I write about it, people who otherwise are saying, "Dude, I have no clue what I'm doing," or "I've been struggling to figure out how this actually works," or "I'm worried about how this is going to augment my job. What can I do to get ahead of that?" -- it's nice to see people message me from those writings to say, "Hey, I actually did get an idea here," or, you know, "I was writing up a blog about something similar. Could you take a look at that? Could we talk about it?" So yeah, I write a lot, but I want to bring people in. And the philosophical part is for the fact that this stuff hasn't fully landed. It's not like we've had the technology for a long period of time -- it's been over a year now since ChatGPT launched and revolutionized how people are thinking about things -- but the technology is still rapidly evolving. We're seeing developments occur within competition, within other businesses, within the open source community. So there's still a lot of room. 
So the philosophical aspect is more or less that it's fun to think through all the different problems.

Sherrod DeGrippo: We have Rom [assumed spelling] here, who's another one of our coworkers. I was talking to him yesterday. He will be on the podcast soon. There are a lot of Rom fans around Microsoft and elsewhere, but Rom is so cool. What I love about talking to Rom is that he says things that are sort of similar, which is, I'll ask him these crazy questions like, "What's going to happen with the AI? What's the future?" And he's like, "Yeah. I don't know. Anything could happen. I don't know." And I love that about a lot of the people I talk to that are working in the AI space. They're very open to the idea that this is so nascent and early and the beginning of something -- as Princess Irulan reminds us, the beginning is a very delicate time. And I love the AI people's attitude of, "Yeah, I don't know. We're going to just keep working on stuff and kind of see what happens."

Brandon Dixon: Yeah. The tricky bit is doing it responsibly. And that is one thing I enjoy a lot about working at Microsoft. You know, you want to move -- I've heard the phrase uttered before: move fast and break things.

Sherrod DeGrippo: I think that's a 37signals thing. I think that's a DHH term. One of those guys.

Brandon Dixon: When you -- move fast and break things is fun until you're on the receiving end of the broken thing.

Sherrod DeGrippo: Until you yourself are broken.

Brandon Dixon: And when the thing is broken consistently or all the time, that can be frustrating. So what I like about Microsoft's approach with responsible AI is that it's integrated into our solutions. That actually takes the cognitive load off of me from a product management standpoint, letting me move quickly and iterate with our customers, but also do it safely so that they know they can trust the product and that it's not going to do anything they didn't intend for it to do. And there are times where that -- I don't want to say holds back innovation, but it certainly forces us to think more critically. You know, we really want to bring state-changing plug-ins to Copilot for Security, for instance.

Sherrod DeGrippo: What does that mean? What's a state-changing plug-in?

Brandon Dixon: So a state-changing plug-in -- today we read a lot of information and then we provide responses. A state-changing plug-in might say, "Well, I actually want to go and update that incident." Maybe I want to write a comment back into it, or I want to isolate this user, or delete this user. So a state-changing plug-in would be anything where there's some sort of action occurring on behalf of the user that the generative AI is taking. The philosophical or interesting part of this is, well, what does it mean to take action across the enterprise in security? There's a lot, right? A lot of the principles that come to us are like, well, we need to get consent before the AI can do anything. And I'm like, "Anything?" Like, anything. And I'm trying to contend with this from a product perspective. What does that mean for automation? What does that mean for the user experience? Every single time, am I going to have to ask them whether they want to do this, whether they continue to want to do it? Are all actions created equally? What are the actions that exist across security? How do we emulate those? It's a fun problem to work through because there's a lot of nuance to it. In product land, you know, you just want to push it out there and see what happens. But then, what I like again about the RAI aspect and our deployment safety board is that they come and interview us and say, "Okay, tell us how the feature will work and what it will do." And they'll be like, "What did you think of this case?" And it's like, oh no, didn't cover that one. They're like, "Yeah. That would be a problem if that got executed. Right?" And I'm like, "Yeah. It totally would." And you realize the power of natural language is the ease of use, but it's also the ease of making a mistake, too. 
So it's very important that we educate our customers about what the system is doing and how it's doing it. That's a very long-winded way of saying I appreciate working for a company that takes these things responsibly.

Sherrod DeGrippo: The responsible AI people are very interesting too. We should get them on. What do you say to somebody who's like, "I'm in threat intel," which is the majority of our listeners -- they're threat intelligence and information security focused. What do you say to them when they say, "I don't need this. What can it really do for me?" If you're a threat intelligence professional and you're chasing threat actors, you're doing attribution, you're doing all those things that we associate with threat intelligence, what can you sit down and do with Copilot for Security?

Brandon Dixon: Yeah. I mean, the response that I sometimes get is, "Well, is it ready? What is it going to do for me right now? How does it operate?" Or, "Hey, this isn't going to replace me." And it's like, well, it's not meant to replace you. The name copilot itself is interesting. It's probably one of the best names that we've seen because it's not the name of a thing, which I liked. We're not naming the model as a person. So I appreciated that from a product perspective. But the name, the moniker itself, suggests that it's helping you. It's assisting you in the operation itself. So on the threat intelligence side, a lot of what we work with is long-form unstructured reporting, or in some cases structured reporting. But it's information that has to be read, digested, and often presented to a variety of different users. It could be management. It could be analysts. It could be incident responders that may not have enough time to consume the data that they need. There's of course the raw signal that we have, the enrichment that takes place, the formulation of who the adversary is and keeping track of them. I think any case where there's text generation or information that needs to be synthesized, Copilot for Security could help quite significantly. An example of that would be, you know, if you have a pretty extensive profile associated with --

Sherrod DeGrippo: Octo Tempest.

Brandon Dixon: Octo Tempest. And maybe you don't want to read all of that. You could ask Copilot for Security to summarize it into five points. And if you're somebody who's junior and you're getting into the industry and you're looking at an incident that references that actor, you may have no clue about it, and you can say, write it for a non-technical audience. You could ask, you know, what TTPs are associated with this threat actor. What MITRE ATT&CK framework techniques have we observed? And take this particular piece of activity that we saw, maybe from DART. DART's doing incident response. They see a bunch of activities take place. They put their notes into little bullet points as they're moving quickly. Copilot for Security can help them summarize that into a report. It can help them find the intersection where MITRE ATT&CK techniques are relevant, to help make that information more actionable. So really, in my opinion, it's not going to be great for constructing the adversary profile in the sense of, you know, go collect all the technical indicators of compromise and put them all in the same spotlight. You still use the traditional techniques for that, and you still have your traditional threat hunting approaches, but I think it can help summarize information, present the information, draft the reports, and keep them up to date without a user necessarily having to do all that work.
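The kind of request Brandon describes -- summarize a long actor profile for a chosen audience and pull out the ATT&CK techniques -- is ultimately just a prompt. The helper and sample profile text below are illustrative, not a real Copilot for Security API or real Microsoft profile data; the SIM-swapping and help-desk social engineering details are drawn from Microsoft's public Octo Tempest reporting.

```python
# Sketch: building the natural-language request an analyst might send to
# a Copilot-style tool; the model does the summarizing and mapping.

def profile_prompt(profile: str, audience: str, points: int = 5) -> str:
    return (
        f"Summarize this threat actor profile into {points} bullet points "
        f"for a {audience} audience, and list any MITRE ATT&CK techniques "
        f"the profile mentions:\n\n{profile}"
    )

profile = (
    "Octo Tempest uses help-desk social engineering and SIM swapping "
    "to gain initial access, then moves laterally to cloud resources."
)
print(profile_prompt(profile, audience="non-technical"))
```

The same profile text can be re-rendered for a different reader -- `audience="CISO"` or `audience="incident responder"` -- without anyone rewriting the underlying report.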

Sherrod DeGrippo: I like that. And I also like the idea that I know I'm going to -- it sounds crazy. I like the idea of CISOs having Copilot for Security.

Brandon Dixon: For sure.

Sherrod DeGrippo: Because it can really cut down on the PIRs, and it can really cut down on the, "Hey, what do you know about this?" Which, props to all the CISOs asking great questions, but it can be very interruptive to an operational security professional to have a super high-up executive say, "I saw this on the news, and I need you to tell me how this relates to us and what this is for us." I would love to see executives have direct access to Copilot for Security because that gives them friendly natural language access to all of Microsoft's threat intelligence.

Brandon Dixon: And they can get it in the format that they want, and it goes beyond the threat intel. To me what I find exciting is that, you know, we want to bring that capability to Defender XDR, to Entra, to Purview, to the Access PM solution that we have. I want to make it such that you can present a dashboard to the CISO. But dashboards are difficult because they always require some level of customization. Everyone wants their own little flavor of how they want to tune it. But that is a great point that you've made. If somebody could just come in at a CISO level -- and the system just sort of knows it's a CISO -- and say, "Hey, look, I don't have enough time to read all this stuff. I saw a new vulnerability came out. Tell me, does it impact my environment?" That's a common question.

Sherrod DeGrippo: That's the number -- that's the number one.

Brandon Dixon: Right? And Copilot being able to surface that. And then find the intersection: okay, of the assets that you've returned, which ones are my quote, "crown jewels," my really special ones? Is there any one of those that's impacted? Oh yeah, there are like two of them. Okay. And, you know, is there a threat actor exploiting the vulnerability? And is that threat actor, you know, targeting my industry? These are questions that you could start to ask in natural language, and Copilot could return those answers. And it becomes like this self-service way to get someone, you know, like a CISO, the information that they conceivably need. But it also creates room for vulnerability for someone who's senior or junior who otherwise may not want to ask a particular question of their peer.

Sherrod DeGrippo: Yes. That's where I was going next. I love that.

Brandon Dixon: I mean, it's nice. Right? Like a lot of people are like, how do you know all this stuff, and it's like, I just go like two layers deeper than I think most people do. And I use the models to help me learn. So I could see a day where, you know, a junior analyst is in the system and, I don't know, some acronym comes back, like, "Hey, make sure as a recommendation to practice the principle of least privilege." Maybe on the surface I understand what that means, but I want more detail. Can you tell me more about the principle of least privilege? And the system will go and do that. So it's this opportunity to learn while you're doing the job.

Sherrod DeGrippo: From a completely non judgmental entity.

Brandon Dixon: Oh for sure.

Sherrod DeGrippo: That's the cool thing, I think, too. Like I do see Copilot and ChatGPT and all of the -- I use Bing AI. I use them all. The thing that I really like is that I have no shame about what it thinks of me. Like I'm just like, I'm going to ask you whatever I want to know, and I don't really care if you think I'm dumb. Which, you have to admit, in security there is a lot of, I don't want someone to think I'm -- I don't want them to think I don't know. And I've only been at Microsoft a year, and let me tell you, the acronyms are mind-bending -- it's -- oh, it's so hard. And being able to have a Copilot where you can be like, "What is this?" And it's a quick little thing and it's like, "Oh, that acronym is actually the old name. It's now called this with this new acronym." Those are the kinds of things that I think can up-level a person's ability to operate within their professional environment, because they're not afraid to ask and they can get access to that information incredibly quickly.

Brandon Dixon: Yep. I mean I think it's transformative. Like that to me is the biggest opportunity. We -- we always see like shortages of staff that exist across our industry and like, you know, I was previously working with like recruiting folks, trying to get people jobs and pair them with the hiring managers. And it's tough. There's just not enough people. And to me like one of the interesting things, you know, I wrote about this a couple months ago, but like at RiskIQ one of the things that I enjoyed was the fact that we had headquarters or an office within Kansas City. And, you know, we got all walks of life that were interested in changing careers. So it'd be car salesmen or real estate agents that are like, "Hey, look. You guys are a tech company that came into our town. I'd really like an opportunity to be able to support the work that you guys are doing and learn how I can break into this field." And it's -- it's sort of -- it takes two to tango. Right? Like you need the person with the interest and like the capacity and willingness to learn and the company that's willing to take the chance on the person and invest the time to make them better. And that to me was something that we saw as successful at RiskIQ. Now of course not everybody works out and achieves the results that we would want. And some people realize it's not a good fit for them, but we had several people in that company that progressed up the various stages of the business and really became like experts of their domain across, you know, the business itself. And I always think back to that now with generative AI and, you know, what we just spoke about. The fact that someone can feel vulnerable, that they can learn on the job. I'm hopeful that the technology enables a case where a junior person who may be wanting to change their career -- and when I say junior it's not indicative of an age --

Sherrod DeGrippo: Well, and everybody's junior in something.

Brandon Dixon: Yeah.

Sherrod DeGrippo: Like in security there's so many different domains it's like you don't know them all. Nobody knows it all.

Brandon Dixon: Right, but I'm hopeful that businesses will feel more empowered to hire people with less experience because the copilots will give them the confidence that the people they're hiring are going to get the information that they need and that they'll be able to learn much faster by having that resource at their disposal. Like that would be the transformative thing. Right? Like all of a sudden a lot more people are there to do the work and to check it. And like what does that mean for senior folks? Well, it just means that you, as you learn these systems, don't have to deal with the drudgery of writing the reports or doing the things that people complain about anyway.

Sherrod DeGrippo: Get time back on that.

Brandon Dixon: Right. It should lower the stress. Like that's my hope, right, is that the pressure maybe subsides a little bit and that we all don't have to run at 150% all the time with limited resources. Not only could we get more resources. The resources that we have could be more empowered. And that we ourselves who sit in these more senior positions being in the industry for so long can democratize that knowledge or kind of take a step back and focus on being more proactive. That's super exciting.

Sherrod DeGrippo: Yeah. I think that's really cool. I -- I've started saying, like, I know in AI the A stands for artificial, but to me it's really an accelerant. It's an accelerant of intelligence. It makes things more efficient and more accurate and faster. And I think that that's the way people need to see it, is that it really does have that acceleration capability. I want to ask you one more question before I let you go. If people want to learn more about the Brandon Dixon philosophy, where can they go?

Brandon Dixon: So I write in two locations. So I have an applied security generative AI website that I can link in the show notes, but then there's also the tech community that's part of Copilot for Security. Anything that's related more to the product offering I'll try and publish in both locations, and our tech community is a great resource for people to get familiar with the product and all of the great advancements that we're making. Of course for our customers we have a tremendous amount of resources as well. Going to be doing weekly sessions to engage and have live Q and A, live examples. Really excited about that. And then outside of Microsoft, just more generally, I -- I write on that blog that we'll share out. And that's more my philosophy, things that I'm trying to work through in terms of automation and, you know, education. What does it mean to bring in a junior person or a senior person? And this is more from my personal standpoint, but it's meant to solicit conversation with others I know in the industry.

Sherrod DeGrippo: I want to share that in the show notes. So check the show notes and you can learn more Brandon Dixon hot takes. Brandon, I freaking love working with you. I find you inspiring and fun and like an old school nerd. So it's like hanging out with friends.

Brandon Dixon: I know. It's like -- we're sitting here in a cube recording right now surrounded by people that can't hear us. And it's wild to think, right? Like folks like the OGs, if you will, of threat intelligence -- we kind of just float around and do different things, and when we get a chance to speak with each other it's just really refreshing. You get to see, like, you know, years have passed and everybody's just kind of in their thing, doing their thing.

Sherrod DeGrippo: Still doing it.

Brandon Dixon: So I love working with you. I love the energy that you bring and the excitement and the fact that like you have a platform like this to get people on and I'm very much grateful that you took the time to chat with me.

Sherrod DeGrippo: Of course. And you have to come back again. This isn't our last time.

Brandon Dixon: We'll just keep going.

Sherrod DeGrippo: Okay. Thank you so much, Brandon. And we'll have more for you from the Microsoft Copilot for Security launch event.

Brandon Dixon: Thanks, Sherrod.

Sherrod DeGrippo: Thanks.

Brandon Dixon: Bye.

Sherrod DeGrippo: Hello, and welcome to the "Microsoft Threat Intelligence Podcast." I have a superstar everyone wants to see and everyone wants to hear: Vasu Jakkal, CVP of Microsoft Security. Vasu, thank you so much for coming on the pod.

Vasu Jakkal: Oh, my goodness. It's wonderful to be here with the lovely Sherrod. And I look forward to a great conversation. Thank you so much for having me here.

Sherrod DeGrippo: Thanks for coming on. So you're kind of quite the figure in Microsoft security. Everyone knows Vasu. Everyone wants to have you in the media talking on podcasts. Everyone's reading your new newsletter, "The Heart of Security," which just came out. Check LinkedIn for that. What is it that you really love about security and being in this world?

Vasu Jakkal: Well, first of all, thank you for those very kind words. Merely a reflection of all the amazing humans around me including you, Sherrod. So all these kind words right back at you all. The reason I'm in security is when I chose security I wanted to do something which was truly impactful for our world. And if I look at Maslow's hierarchy like let's get to real basics. Right?

Sherrod DeGrippo: Yes.

Vasu Jakkal: Food. Water. Shelter. What's right next to it? It's safety.

Sherrod DeGrippo: Yes.

Vasu Jakkal: And if we think of our lives today, we live so much in the digital sphere and so much of that safety is digital security. So that was one of the reasons. I thought that if I wanted to do something really impactful changing the world doing my teeny tiny part in making a better world, security might be a great place to be in. And that's why I chose security.

Sherrod DeGrippo: Do you feel like you have more or less or the same amount of anxiety as the average person?

Vasu Jakkal: [Laughs] Well, I think I have more or less the same level of anxiety as any security person.

Sherrod DeGrippo: Okay.

Vasu Jakkal: It really depends. I feel security is the mettle of you know like superheroes and I just feel very privileged to be a part of a world where we get to see like some incredible human beings do incredible work. I am a massive optimist, Sherrod. You probably --

Sherrod DeGrippo: I love that. I know. And we need optimists. Oh, my gosh.

Vasu Jakkal: So I'm a massive optimist. So it's hard for me to have anxiety because of my just optimistic heartbeat. I would like to believe that we can work together to build a better world. So I don't really get anxiety other than if it's my kids doing something.

Sherrod DeGrippo: Okay.

Vasu Jakkal: And I feel like if we work together, all problems are solvable. And we should lead with heart and optimism through it all.

Sherrod DeGrippo: So that being the case, what would you tell someone who came to you maybe like a college student and they were like, "I really want to do security." Is there a path, a piece of advice, a tip? What's the tip?

Vasu Jakkal: So first I would say yay. Yes. Please do security. I would say security is a great field for all no matter who you are, no matter where you are, no matter how you work, how you learn. And I would say like this is an opportunity for you to change security in your own way to include more people because today, Sherrod, we have such a massive talent shortage in security. Right? 4 million jobs worldwide that are not being filled. It's heartbreaking. So I would tell this young amazing human, "Let's go." Like let's do great security and let's invite other people so they can join us and let's build an optimistic, positive, inclusive world for security where everyone feels like they belong and has a great seat at the table.

Sherrod DeGrippo: So let's talk a little bit about how you got into security. How did you end up here? You have an incredible --

Vasu Jakkal: Very accidental. When -- I come from very humble beginnings and when I was around nine years old this show called "Star Trek" had a huge impact on my life. So for all you Trekkies out there, live long and prosper.

Sherrod DeGrippo: I want to just point out really quickly that this is completely true and authentic. She talks about "Star Trek" all the time. Like she really does. Like something will just be happening and she'll be like, "Do you know what? This is like on 'Star Trek.'"

Vasu Jakkal: Yes. Exactly. "Star Trek" rocks. So I grew up -- and that's kind of how I fell in love with technology. I didn't know that that was what science or technology was. I asked someone, "Well, what is that? Like I want to be on that Starship Enterprise. What do I need to study?" That's called science. So I got into science. I actually started in engineering. I'm a silicon designer and I loved it. And that was my first like encounter with security. Remember, we were doing security encryption technologies, and at that time security was a back-end conversation in the labs. It was not at the forefront. But that was my first introduction. I was at Intel at that time. And then a few years down the line at Intel, when I was part of a business, the embedded business which became the IoT business, we started doing more in security. Like how do we protect the internet of things? The cloud was being reborn, like what does that mean? So that's when I met security. But my first like pure play security role was when I was at Mandiant FireEye. And it was brilliant. Like I loved every part of it. I loved threat intelligence, which is your world. It's finding the bad actors and figuring out what they're doing and helping defenders. I mean how can you not love that?

Sherrod DeGrippo: It's fascinating. It's like if you're someone who's interested in humans, you can be interested in threat actors.

Vasu Jakkal: Yeah. And so I just -- I've had, as you mentioned, a very interesting loop-de-loop career. I've learned so much and have so many great experiences. Great humans that I'm grateful for. And I fell in love with security along the way, and I fell in love with it because of all the technology aspects, but also the humanity aspects of it. I think security's a cultural challenge and a cultural solution as much as it's a technology challenge and technology solution.

Sherrod DeGrippo: Security for me -- I feel like there's so much of it that's subjective. And I love playing in the subjective. I love those things where you can't definitively say absolutely it's this. In security, if you did it this way, it goes from something that's objective to subjective, and it's on a spectrum where we're constantly striving to be more secure. And I like that.

Vasu Jakkal: Absolutely. And I love what you said about subjective because security's about trust and trust is about -- it's about heart. And it's about that mind share that you get and it is a very human thing. Like security's a very human thing. And that's why it's a very subjective thing, because to solve security there's no one way of doing it. There's so many ways, and each can be a different way to solve these problems.

Sherrod DeGrippo: So you talked a little bit about "Star Trek." And today, literally today, we are at this event, Microsoft Copilot for Security, and I guess kind of do you feel a bit of that "Star Trek" reality happening when we look at AI? Are you kind of getting that feeling of if this evolves a little bit more, we really are going to do those human leaps?

Vasu Jakkal: I think so. I mean remember Data and remember all of that. I'm a big believer in the superpowers of generative AI for security. I think security's a defining challenge of our year, our decade, our century. And I do believe that with generative AI we can tilt the balance in favor of defenders. And here at this event with you and all the amazing wonderful leaders, it feels like a little bit of that magic is happening around us. I also think, going back to "Star Trek" -- as much as the earlier conversation we had on subjective and culture and all -- as much as "Star Trek" was about the technology, which is what our AI and [inaudible 00:42:43] do, "Star Trek" also highlighted a lot of cultural things that we have to solve for. And that's why, Sherrod, the message that we have from Microsoft is security for all. You know, inclusivity. Like having everyone around that table. And as we build AI it's going to be very important for us to have a diverse workforce and do it in an inclusive way so we can truly have that utopian world where we can leverage AI to be our ally and our copilot in all things security.

Sherrod DeGrippo: I think that's one of the things too with Copilot that people are talking about. It really is this trusted ally that literally feels like it's sitting next to you being your copilot. It feels like, I just need a little help here. And that's kind of what Copilot can do. So what do you think is the most transformative thing that Copilot's going to do?

Vasu Jakkal: Well, first and foremost Copilot is going to help us defend at machine speed and scale, which has been -- you know, it's been hard to do that. Second, Copilot is going to help us catch what may have been really difficult to find. Like the -- you've seen this tool and this product, Sherrod, like me, and it does things at such scale and in such a way, connecting dots which we may have missed earlier. And then third, I think Copilot is going to reduce the talent shortage that we have because it's a tool for everyone. You can be early in your career. You can be late in your career. You can be part of a small organization. You can be part of a large organization. You can be anywhere in the world. And it's going to help you. It's going to help you learn. It's going to help you be faster. It's going to help you be more productive. It's going to bring a diverse set of skills that you may not have. So those are the three things that I think Copilot's going to help us bring to the table, and it's going to do all of this in what's going to be the most powerful coding language now, which is natural language.

Sherrod DeGrippo: That's amazing too because I think, you know, I've said before, and I feel like you are as well, we're chatty. We like to talk. Right? We like to -- we like to really communicate that way. And I think the ability to put natural language into the copilots has really democratized so many things. Something I'm excited for -- and I'd be interested if you've heard any of this from some of those Copilot customers that have it now -- I love the idea of executives getting it and not going to their SOC every time and saying, "Wow. I heard about this. Can you -- can you tell me? Can you help me? What is this?" I love the idea of CISOs and executives saying, "I don't need to put in a request. I don't need to put in an email every single time. I can self serve."

Vasu Jakkal: 1,000%. A million percent. And imagine how much of productivity --

Sherrod DeGrippo: You can save your people.

Vasu Jakkal: We can save. Right?

Sherrod DeGrippo: If the CISO doesn't ask you every morning what they saw on the news that day. Because I've worked in those roles with threat intelligence. Right? It's the CISOs and the SOC directors -- and the executive boards -- who are the ones who say, "I saw this on the news this morning. What does it mean for us? And I don't understand." And somebody who's in like a junior role -- or maybe not even a junior role, but, you know, who's on the floor operationally -- gets this big beautiful profile of the threat actor and the TTPs and the IOCs and atomics all the way down. And then they say, "Actually, Copilot, could you give me this in like a paragraph and some bullet points because I need to give this to an executive?" And at a certain point I think those executives are just going to have access to Copilot and do it themselves.

Vasu Jakkal: And that's our goal: to give Copilot for Security to everyone who needs it so they can ask those questions versus asking someone else. Isn't that saving like minutes from our days and meetings? And how powerful is that? Like if Copilot can summarize a report and give it to us in this beautiful taxonomy that you and the team have developed for all these threat actors, wow. And they're all getting smarter learning about this. You know, you asked me earlier about what I'd say if I met someone early in their career. I actually met a cohort of 15 girls, 19 to 21, in Milan, Italy, while meeting our customers, I think two weeks back. Amazing. Smart. Wonderful. Brilliant. They had come to discuss whether cybersecurity was a great profession for them. And I was just thinking, imagine if we give Copilot for Security to each of these girls and they can use it and they can get trained on security using it in their own way, because -- you know, we all learn differently, so we can ask questions that make sense to us. I think that's going to change how we do security.

Sherrod DeGrippo: I think that there's incredible potential in that because it can accelerate those minds that are interested and want to go faster and want to do more. To me a lot of the copilots just make things faster, easier, more efficient. I know we talk all the time about the study that Microsoft did on Copilot users. You're able to rattle off those statistics better than I am, but I think it was something like 26% faster and more accurate. And across the spectrum. And then isn't there a stat about people saying they want it again? Something like that.

Vasu Jakkal: I call it my joy stat.

Sherrod DeGrippo: What's the joy stat? So you were talking about this earlier.

Vasu Jakkal: That's just how I call it.

Sherrod DeGrippo: Well, no. You've made a really good point. So walk me through again what was the stat about they want it back?

Vasu Jakkal: Yeah. 97%.

Sherrod DeGrippo: There's -- they said, "I want to play with it again." Okay. Yeah.

Vasu Jakkal: So that's why I call it my joy stat. I don't know. I just feel like, you know, isn't it wonderful that people are like, "Yeah. I want to use that tool again."

Sherrod DeGrippo: Well, you said something earlier and I'm just going to steal your line. You said, "How often do we say we get joy from a security product?" Never. I think that's a really rare experience. And it was very funny. Everyone sort of laughed. And the fact that everyone sort of laughed is really proving the point that those people don't associate their security products with a joyful experience. But if 97% of people said they want to use the copilot again, that kind of is turning that around.

Vasu Jakkal: Totally. I think we should just call it a joy stat because isn't it like awesome? Hey, I like this tool. I want to use it again. I feel I'm more productive. I feel like I'm getting more results out of this. And yes. You're right. We have data like 26% faster. 35% more accurate. For an early-in-career professional I think that's a game changer. So I love it. I love that people are using it. I love they're playing with it. I love that they're making it their own.

Sherrod DeGrippo: I really like also that, you know, I have found in my day that I'll be doing something like, "Oh this. I hate this. It's taking me for -- " And then I go wait. Why am I not using one of the AI tools that are available to me? I'm hoping that it can be a vibe change for people.

Vasu Jakkal: I love it.

Sherrod DeGrippo: In their day where the tedious pain in the rear type work that they instantly say, "Oh. I'm just going to have the copilot do it for me and I can focus on the creative, the interesting, the depth things that I really want to do." And I can elevate my vibe a little bit where I'm not down in those like, "Oh. I've got to comb through all these IOCs. I've got to normalize this data. I've got to make this look good. I've got to fix this." And just say, "I'm going to have the AI do it for me." I'll have the copilot do it.

Vasu Jakkal: You know I love you. Like I adore you and I love the vibe. I'm so using that. We need more language like this in security, Sherrod. I mean for those who don't know, and I know you all know Sherrod, like she's absolutely freaking brilliant.

Sherrod DeGrippo: And the vibe.

Vasu Jakkal: Right, but like absolutely. Like if you can use your time -- you know, we have 1440 minutes in a day.

Sherrod DeGrippo: Yes. Yes.

Vasu Jakkal: If you can use your time to do critical thinking, to think differently about solving a problem, to bring those new ways of solving something, isn't that more worthwhile? And then there's all of what I call drudgery.

Sherrod DeGrippo: Yes. I hate that.

Vasu Jakkal: Like all of this you're like, "Oh. It's just using my time."

Sherrod DeGrippo: Yes.

Vasu Jakkal: To do this stuff. Like to do this report. To like go back and look at all of these. Like what if Copilot did that for you?

Sherrod DeGrippo: Yes.

Vasu Jakkal: And, by the way, also like a shout out to our community. Security's a team sport.

Sherrod DeGrippo: Yes.

Vasu Jakkal: Like we've said, it takes a village. We need everyone to be in this with us. Like Microsoft is not going to solve this by ourselves. So use it. Make it your own. Like the more people who use it, the more it learns differently. And I think that's going to help.

Sherrod DeGrippo: So we were talking just now about I want Copilot to do those things for me that I'm like, "I don't like all this drudgery." What do you love working on? If you're going to have Copilot free up a bunch of your time, let's say Copilot's going to free up a few hours for you every week, what would you love to spend that time on?

Vasu Jakkal: I would love to spend that time on more visioning of how we can simplify security. You know, one of the things which gives me deep joy is just saying, "Well, how can we change security?" Like how can we reshape how we think about different ways in which we can help defenders? How can we do more security for small and medium businesses? For consumers? So I would use that time to do strategy work and visioning work to say, "Well, how should we define security for the future?" How can we use AI? That's what I would use it for.

Sherrod DeGrippo: So you -- you like to spend time on the philosophy of security.

Vasu Jakkal: I do love to spend time on the philosophy of security.

Sherrod DeGrippo: We have quite a few people in MSTIC that I really consider security philosophers and you know they -- they love to --

Vasu Jakkal: The what ifs.

Sherrod DeGrippo: They love to what if. And they love to posit. And they love to think about, you know, almost, what is the true meaning of security? What are we really doing? And I love those conversations. And so, okay. You've freed up this time. I want to ask you professionally, what are you using AI for? And then I'll also ask in your personal life, which I think will probably be pretty interesting. So what are you using AI for at work?

Vasu Jakkal: Sure. So we talked a lot about Copilot for Security. Of course I use it to do a bunch of stuff, but I won't talk about Copilot for Security. I'll talk about like generic AI that I'm using. So I use our Copilot products that we have at Microsoft. My favorite is, summarize what's happened in this meeting.

Sherrod DeGrippo: Because you've been on chat or --

Vasu Jakkal: For me? Or like, you know, like we're just back to that. Right? Meeting. So I'll join a meeting and I'm already late. And then I love --

Sherrod DeGrippo: For the record, Vasu's never joined a meeting late that I've been on, but apparently she thinks she joins late.

Vasu Jakkal: Thank you, Sherrod. That's very kind. And then I'm just like, "Oh, my god." Because you also know I love being engaged in a meeting if I'm going to like ask a lot of questions. So then I'm just trying to catch up. And have you seen this Copilot feature where it's like, should I summarize to you what has happened in this meeting so far? And I love that because I'm like, yep. Quickly. Like tell me. Like -- there were introductions. I love like meeting actions, and you know I like to be organized. So much of [inaudible 00:53:37], who's my amazing chief of staff -- so much of her time goes in just summarizing meetings and actions. So that would be like, "Nope. I'm just going to use Copilot to summarize and give me actions." How many times -- I know I'm still on the meetings example -- how many times have we all had three meetings all happening at the same time? That we all have to be part of. And you're like, "Okay. I physically cannot attend these." But you can just say, "Copilot, summarize all these meetings and tell me." And it can give you a quick summary of that. So those are some of the ways I use it at work. I use Copilot in Word. I use it in PowerPoint. I use it in all of the productivity tools. And I love it. Everything from, "Tell me a better way to write this email message."

Sherrod DeGrippo: That's better. Yes. I do that too.

Vasu Jakkal: I need a -- all these like grammatical errors that I'm making. Or like, "Oh. What's a better word for that?" Right? To all the way like, "Hey, give me some guidance on -- I'm just about to start writing a strategy doc or an operational doc or a business status." So all of that. And in my personal consumer life, as a mom -- I love cooking, Sherrod. I really, really do. And I love experimentation. So I use it for everything from, "I want a recipe from this region which is vegan which has these three ingredients." You know, like coconut. I look in my fridge.

Sherrod DeGrippo: Which is what I try to use a search engine for.

Vasu Jakkal: Which is very hard because you can't add on these things. Help me find a recipe from Turkey which uses I don't know basil. I'm making it up. Or like uses these three ingredients. Tomatoes and cilantro and chickpeas or something like that. You know, so I use it for that as well. So I use it for all kinds of stuff. And I love it because like you it makes my life efficient.

Sherrod DeGrippo: And I do love that. That's very important to me. And I think that, you know, you kind of said, I'm still talking about the meetings, but what is one of the number one complaints people have at work? Too many meetings.

Vasu Jakkal: And we didn't even rehearse this.

Sherrod DeGrippo: That's true. It's the duo. It's we are aligned. It's too many meetings. And so if you can get the copilot to make the meetings less painful, more efficient, faster, or reduce the need for them, that's actually one of the biggest needs people have. It's like get me out of these meetings. Make them better so that I can focus on the visioning that you were talking about. For me I love to just go research threat actors. Like --

Vasu Jakkal: Critical thinking.

Sherrod DeGrippo: Fun stuff. I want the machines to do the drudgery stuff so I can focus on the creative thing.

Vasu Jakkal: Your vibe can be higher. I love that.

Sherrod DeGrippo: Yeah. I want to choose -- I want the better vibe. And so like the too many meetings thing is I think really, really huge for people. And so I'll ask you one final question. I have seen both with customers, with friends, with coworkers, everyone, we are all on this mindset change journey of incorporating AI as a tool into our mapping of the world. I feel like about a year ago I was very early. I was playing. I was playing with ChatGPT. Now I have the copilots at work and I'm leveraging them for labor. Where do you feel you are personally in the journey of reaching for Copilot first? Or reaching for an AI solution most of the time. Like where are you in that journey?

Vasu Jakkal: That's an interesting question. I think I'm still -- I look at Copilot just like an ally. Like I'm in the journey where if I need help and I don't want to put a burden on my team, an example, right, like, "Oh, Sherrod, get me these three things." Or [inaudible 00:57:20] like I am in the part where I would like to reduce drudgery in your lives because of my questions. So I look at Copilot as an ally to like, "Hey, can I just get this information quickly?" So that I can get smart and have more context on these questions. And then go to our teams and not waste their time. Like go to them when I'm really looking for, "Hey, I would like you to give me your assessment on this. Can you just -- "

Sherrod DeGrippo: Right.

Vasu Jakkal: Or this document for me. So I think I'm in the journey where I'm sort of getting used to Copilot. You know, it's not yet become, like, completely integrated into all my workflows, which is what I want it to be. Like seamless. I call it muscle memory. Like, you know, it becomes a habit. It becomes seamless. Big believer in atomic habits too. But right now I seek it. You know, I'm like, "Hey, find that answer and give it." And I treat it really as an ally in my -- in my life.

Sherrod DeGrippo: I love that. I feel like -- like I don't really cook, but I do make cocktails. I feel like --

Vasu Jakkal: Try that recipe. Try looking for it.

Sherrod DeGrippo: Yes. I do. And I instantly in the kitchen am like -- if I'm cooking, which I don't do much, but if I'm making drinks or people are coming over, I do instantly go to ChatGPT.

Vasu Jakkal: Yeah.

Sherrod DeGrippo: And I've made that rote -- like you said, muscle memory. It's a habit. I'm integrating it into my tasks and my labor and my world. And there's a different style of that for me at work still. I'm still not fully there. My boss John Lambert was like -- I said, you know, "I need daily and weekly threat reports for intelligence. I need this." And he said, "No. No. No. You don't need people to do that. You need to get the AI to do that." And he's further along than I am. And so I'm trying to get to that point where I grab the right tool at the right time.

Vasu Jakkal: And you know the beauty of it is because it's so customized. Right? That's the beauty of gen AI is you can customize it for your life because you're going to look for different information. And then once you have that habit, you build it, amazing. Like you've just saved that time. And like, you know, your cocktail example earlier, my -- the reason I use it for like recipes is you can go online and look for recipes. But you may not have the ingredients. Exactly. Like oh my god. I have to go grocery shop. So that's why I like it because you can say, "Make a drink. I have lemons."

Sherrod DeGrippo: Right? No. I'm always out of simple syrup.

Vasu Jakkal: Like fruits.

Sherrod DeGrippo: You're drinking sangria.

Vasu Jakkal: I'm drinking sangria. Exactly. I love sangria. So, you know, you're like well I have only these three things so I need a recipe. Because otherwise you're like, "Well I don't have like 10 out of the 15 ingredients."

Sherrod DeGrippo: Right. Right.

Vasu Jakkal: So that's where I think like customizing it for what you have and there's still so much to discover in our world. I feel, Sherrod, like we have this incredible opportunity to live, you know, being philosophical for a moment. Like we have this blessing. Like we get to live. We have this beautiful world with so much magic in it. But it's so hard sometimes to find the information that we need. Like, you know, I want to learn about something. Or I want to read a new book and I want to find a book which is just going to vibe with me on that day. Now we have a tool which can help us too. So I'm truly hoping that our AI tools can help us discover more magic of our world, and that security AI tools can help it be safer for us, so we can then feel empowered to go and live our lives in the best way possible. Right? Because if you don't feel safe, you can't really do anything else. And that's why I think AI in security is so important and AI in the rest of our world is so important. So I'm looking forward to our world just being glorious and amazing and joyful and abundant and using these tools to help us live that way.

Sherrod DeGrippo: There are so many people in the security industry that need more exposure. So I'll go ahead and let you go. But I want to leave with -- everyone heard it here. Vasu Jakkal said AI can help you discover more magic. And I love that.

Vasu Jakkal: Well, Sherrod, you're magical. And you're amazing and I absolutely adore you and love you and thank you for doing what you do. You are -- we are so lucky to have you. And thank you for having me. What an honor and privilege. And I hope this was helpful.

Sherrod DeGrippo: Of course. And having Vasu on the "Microsoft Threat Intelligence Podcast" has been a treat, a delight. And we'll have you back again to talk more about all kinds of other things in your life and what you're doing and all of the magical things that you are discovering using the copilots.

Vasu Jakkal: I look forward to it. Thank you.

Sherrod DeGrippo: Thanks for listening to the "Microsoft Threat Intelligence Podcast." We'd love to hear from you. Email us with your ideas at Every episode we'll decode the threat landscape and arm you with the intelligence you need to take on threat actors. Check us out. for more. And subscribe on your favorite podcast app. [ Music ]