The Microsoft Threat Intelligence Podcast 6.25.25
Ep 47 | 6.25.25

The Art and Science of Microsoft’s Red Team

Transcript

Sherrod DeGrippo: Welcome to the Microsoft Threat Intelligence Podcast. I'm Sherrod DeGrippo. Ever wanted to step into the shadowy realm of digital espionage, cybercrime, social engineering, and fraud? Well, each week, dive deep with us into the underground. Come hear from Microsoft's elite threat intelligence researchers. Join us as we decode mysteries, expose hidden adversaries, and shape the future of cybersecurity. It might get a little weird. But don't worry, I'm your guide through the back alleys of the threat landscape. Hello, everyone, and welcome to the Microsoft Threat Intelligence Podcast. I am Sherrod DeGrippo, Director of Threat Intelligence Strategy here at Microsoft. And today I'm joined by a fascinating guest who I think you're all going to really enjoy hearing from, my colleague at Microsoft, Craig Nelson, who leads the Microsoft Red Team. Craig, welcome to the show.

Craig Nelson: Thank you very much for having me.

Sherrod DeGrippo: So Craig, you're on Red Team, but tell me, what was your first exposure to threat intelligence and threat actor behavior?

Craig Nelson: So I like to think that my first exposure was really that moment where threat intelligence became mainstream. So that moment stuck with me for a while and it goes back to about 2013. I remember being at the RSA Conference in San Francisco, and right before that conference, Mandiant and FireEye, right, they dropped a report on APT1. It was right before the conference, and during the conference they announced an impromptu lunchtime briefing. And this was the first time a private security firm publicly called out a nation-state actor by name. And they didn't hold back. They talked about 140 targets and years of sustained intrusion, the forensics that they did. They had photos of the facility in Shanghai where they thought these threat actors were working from. And if you search for "Mandiant APT1" on YouTube, you'll actually find a video that has guards chasing down a car where a reporter was investigating the report. It was super dramatic. But I remember at that time the industry was really buzzing, and that moment changed a lot for me. Because suddenly attribution wasn't just something governments whispered about, it was fair game for the private sector. And that's what spawned a lot of what we see today in threat intelligence. So it really legitimized threat intelligence as a force that could influence policy, defense strategy, and most certainly red teaming. And that really pushed red teams to evolve. Red teams weren't just simulating generic threats anymore, they had to start thinking like real threat actors and simulating what started coming out from all the threat intelligence reports that had a lot of geopolitical context and operational realism. We're still seeing a lot of that play out today. Of course, this also brought some problems. It made a lot of companies suddenly invest to defend against nation-state level actors, even if that level of threat wasn't even relevant to them.
So I then saw this as a problem: companies decided to train against the cyber equivalent of a heavyweight boxer, but then they get knocked out by street fighters like fraudsters. But, you know, I think the key point here, the biggest takeaway, is that it shows the power of naming the threat, and that's why I absolutely appreciate this podcast.

Sherrod DeGrippo: So Craig, I am sure that people hearing that they get to learn about red teaming inside Microsoft are super excited. It's one of the most elite red teams really in the world, looking at one of the most unique attack surfaces globally. I can't really imagine another organization that has the same kind of profile as Microsoft. So I guess what I want to know from you to start is as a red teamer, what makes it exciting that Microsoft is your target?

Craig Nelson: Microsoft's exciting because it's such a large attack surface, we have to emulate so many different styles of attackers, and condition a very large organization to be resilient towards such a diverse set of attacks and technology landscape that's always changing.

Sherrod DeGrippo: Tell me, how did you get into the Red Team? Like give me sort of the timeline of your journey, because I've talked to you about this a little bit and it's pretty fascinating. Because you and I are weirdly similar in, like, age and tech adoption, so where did you start and how did you end up leading this Red Team?

Craig Nelson: Yes, thanks for leading off with that, because again, we both got lucky and started at the right time in history, like in the '90s and --

Sherrod DeGrippo: Nineties, man, that was the time. [Laughs]

Craig Nelson: Yes. That's right. But yes, I was born with that foundational skill of being able to sit in front of the computer for 12 hours a day, and back then that's what you needed to know how to do. Because we just didn't have very much, we had dial-up modems and BBSs, and early command line systems that forced you to figure things out. But you know, it really created a frame that I share with so many others like yourself that started in the '90s; because if you wanted to accomplish something in tech, you really had to build it, and break it, and bend it to play by your rules. And that time demanded a lot of creativity and resourcefulness. And let's just say there was a very vibrant hacking scene, and that's where I found my footing. So you know, it was definitely a perfect time to be learning. And as I got deeper into computers and finished my computer science degree -- this is back in the time when Perl and Java were just showing up, the internet started to take off, and you could really feel the rules being written in real time. I think that's something we both share. And what always fascinated me was how the hacker culture kept trying to bend those rules and do more with less. And I've always appreciated and kept that mindset of challenging the system, pushing the boundaries, and making tech do things that it wasn't really originally intended to do. So now, tying that back to red teaming, what I realized was if you really want to influence technology, you have to bring that '90s hacker spirit to today and you've got to pressure the system and make it bend. And that means influencing people, process, tech, and where that tech exists today and where we anticipate it's going to move in the future. So we have to challenge it in the right way for it to evolve, and that's why red teaming has become a great path for me, because I just love to do that.

Sherrod DeGrippo: It's funny you mention Perl specifically because I feel like in the early 2000s Larry Wall was this like spiritual guide of --

Craig Nelson: Right.

Sherrod DeGrippo: -- the TimToady, "There's more than one way to do it" Perl mantras. It was almost like -- he really was like a guru that had these like lofty ideas, "the cathedral" and the "bazaar", and Stallman was out there telling people what to think about software. And it was just --

Craig Nelson: Yes.

Sherrod DeGrippo: -- a different time I think that formed people like you and me? [Laughs]

Craig Nelson: Yes, it's weird; remember back then, you know, the race was to see who could write the best program in the fewest lines of code?

Sherrod DeGrippo: Yes.

Craig Nelson: And now you have Python that makes you put your tabs in. And I am a tab person, by the way.

Sherrod DeGrippo: [Laughs] People do often like to say whether they are a tab person or not. Since you brought that up, what's your text editor of choice?

Craig Nelson: I do want to go with Vi by default. But I gravitate now to Visual Studio Code a lot because of all the awesome extensions and plugins.

Sherrod DeGrippo: Yes, and like -- and it's such a Microsoft-y foundational piece.

Craig Nelson: Yes.

Sherrod DeGrippo: Yes.

Craig Nelson: But if you land on a shell, you've got to just go with Vi.

Sherrod DeGrippo: So Craig, you lead Red Team. A lot of people understand what that is; essentially your responsibility is to hack Microsoft, find weaknesses, find vulnerabilities any way necessary. But tell me, for yourself and your team, what do you convey to your team really is your focus and mission?

Craig Nelson: Yes, so as you said, we simulate real attacks on Microsoft's infrastructure. We don't touch customer infrastructure, we don't touch any targets outside of Microsoft. We don't hack back, nothing like that. We focus on attacking Microsoft infrastructure. And I frame it to my team that we are the lawful good bad guys, right, we are chartered with pushing Microsoft in the same way that the real threat actors we talk about on this podcast all the time do when they're looking at Microsoft as a target. So we're actually doing the same operations from all levels of the spectrum, from very simple attacks all the way through nation-state level, and then we're also looking at new technology in the same way that real threat actors use new technology to make themselves better and faster.

Sherrod DeGrippo: I love that. I love to talk to your Red Team, and for those listening -- and Craig, you know this, the Red Team will ping me and just say, "Hey, this threat actor, Citrine Sleet, Star Blizzard, Luna Tempest, Vanilla Tempest, what do they do for this? What does the exact TTP chain look like from this threat actor?" And I've noticed as we've combined forces closer with MSTIC and the Red Team, they start really getting creative questions. So have you seen some of that intelligence capability being brought in since we've sort of gotten closer with MSTIC and Red Team?

Craig Nelson: Yes, absolutely. So I think that kind of goes to where we intersect. And I will say first, red teaming and threat intel really meet at that point of storytelling, right, we're both very much built on narratives. Threat intelligence tells that story of all the threat actors that you mentioned and what they're doing globally. Red teaming takes that and we tell the story grounded to that organization: if that threat actor was operating against us, what would that look like? And I really love the storytelling because, you know, storytelling is what drives emotion and action. So we're not just coming up with a list of theoretical things to fix, we're making it real. So we can see that exact signal on-site, and then at that point we can use that to drive change. And I think the second area is that we -- Threat Intel and Red Team -- really meet in the data. So TI, and my observations of TI -- and it's hard for me to keep up with all the threat actors that you talk about on the podcast, even though I listen every week -- TI produces these massive sets of data, like indicators, and TTPs, and narratives about how an attacker works, but does that data make a difference? Red teaming is taking that data -- that's why they're asking you about the TTPs -- so we can use it and initiate real tests. If red teamers are finding footholds based on threat intel, you know, that intelligence is working. Right, if we know that an actor is really good at deserialization, for example, well, we want to make sure that we're looking at our perimeter for that particular tactic and really pushing the organization to solve all those problems. And if we can't turn threat intel into a detection, an enrichment, or some sort of decision, you know, the value is questionable. So I really want to position the team to work with our threat intelligence colleagues and validate whether or not the intelligence loop is working for us.
And I think the other kind of key thing that I've observed as we work with threat intel is that we're really starting to meet in the future. So threat intelligence looks outward to emerging threats, red teaming, we're focused on trying to beat new technology, and together we're exploring, you know, what happens if this threat actor adopts this technology quickly? So we have to really collaborate so we can get ahead of the threat, not just report and execute on what's already happening. So we get that signal from Threat Intelligence and then Red Teaming is testing and tuning and just trying to make it real.
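To make the deserialization tactic Craig mentions concrete, here is a minimal Python sketch of why untrusted deserialization is such an attractive foothold. The `record` function and `Payload` class are purely illustrative stand-ins for this example, not any real Red Team tooling:

```python
import pickle

hits = []

def record(msg):
    # Stand-in for attacker logic; a real payload might invoke
    # os.system or fetch a second stage instead.
    hits.append(msg)
    return msg

class Payload:
    def __reduce__(self):
        # Tells pickle: "to rebuild this object, call record('pwned')".
        # Deserializers like pickle happily run that callable.
        return (record, ("pwned",))

blob = pickle.dumps(Payload())   # bytes an attacker would send over the wire
result = pickle.loads(blob)      # victim side: the callable fires right here

print(hits)    # → ['pwned']
print(result)  # → 'pwned'
```

The point of the sketch is that code runs as a side effect of merely loading attacker-controlled bytes, before any application logic gets a chance to validate them, which is why scanning the perimeter for endpoints that deserialize untrusted input is a natural red team test.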

Sherrod DeGrippo: So that covers threat intel. I want to talk a little bit about defenders, because there are a lot of people listening out there that are defenders, just very solidly blue team listeners out there. How do you feel about getting caught?

Craig Nelson: Well, I don't feel too bad about it, because it shows that something worked great. That's actually a success story. I don't mind sharing good news when things are working as designed. Now, I do incentivize the team to try to avoid getting detected, because we want to put pressure, we want to try to evade systems, because real threat actors are doing that exact same thing; right, we know that one of the areas that they're very conscious of when they're landing in an environment is, are they going to get caught? They want to avoid that; so do we. But if we are caught, it is a good story.

Sherrod DeGrippo: Yes, I love that back and forth in that relationship between Red Team and Blue Team, and how everyone really is on the same side, right? Like whether you're threat intel, or incident response, or on the frontlines in defense, ultimately that red team simulation activity really is like training day, or practice camp, or whatever you want to call it in a sports metaphor, which I'm not great at; it gets you ready with that muscle memory for when threat actors really do try those things.

Craig Nelson: Yes.

Sherrod DeGrippo: Okay, we're going to do rapid fire; are you ready?

Craig Nelson: Yes, I'm ready for the rapid fire.

Sherrod DeGrippo: What's your go-to hacking soundtrack; what music are you listening to?

Craig Nelson: Radiohead today on repeat.

Sherrod DeGrippo: Okay, wow. Any other preferred choices?

Craig Nelson: I'm sticking with pretty much any Radiohead album. There's just something that just clicks with my brain and it just works, and it gets me into that level of focus that I need.

Sherrod DeGrippo: Love it. And how about a favorite movie villain?

Craig Nelson: Hands down the Joker, because there's this one iconic scene in "The Dark Knight" that says a lot, and that's where Commissioner Gordon is interrogating the Joker and he's demanding to know where another villain, Harvey Dent, is; and the Joker tilts his head and he asks, "What time is it?" And Commissioner Gordon snaps back, "Why does that matter?" And then the Joker says, "Well, because depending on the time, he might be in one place, or several." And I love that. I love the villain, I love that scene. And it's such a perfect moment that just shows the importance of time. A threat actor can spread to multiple places really quickly, and that's a big problem to deal with, even for a superhero. So time is really not on the side of the defenders.

Sherrod DeGrippo: Would you red team for the Joker?

Craig Nelson: Well, no, of course not. [Laughter]

Sherrod DeGrippo: I don't know if that's very good that you had to think about that. What's one nontechnical thing that you're really good at?

Craig Nelson: I would say I'm good at playing guitar; that's my thing. There are a lot of patterns to it, and it's just pretty amazing to me. There are so many variations and yet only so many notes, and it's been a part of the world for thousands of years; there's always a place for new innovation, and it's just the soundtrack for our lives. I'm still a child of the '90s for --

Sherrod DeGrippo: Yes, me too.

Craig Nelson: We had them back then.

Sherrod DeGrippo: I'll tell you, for those of you amongst us who did not live through the '90s that are listening, that really was, in my opinion, the era of music that just -- I think it was just incredible release after incredible release for 10 years straight, just everything across, like, grunge, Britpop, rock, goth, everything. The '90s had everything for everybody. Maybe we need to put together a playlist.

Craig Nelson: Or a new podcast.

Sherrod DeGrippo: Oh, God, no. [Laughter] Please, God, no more podcasts. So tell me, when you guys are doing Red Team operations and engagements, is it more threat actor simulation or are you looking to find actual vulnerabilities? What's the kind of approach there?

Craig Nelson: So we do both, we simulate actors to see how they'd operate in our environment and we also hunt for vulnerabilities that would matter to those actors; what would they use? So it's a very tight loop. Emulation informs research, and the research informs the next round of what we emulate as a red team. So say, for example, one big focus right now is source code. We live in a world where source code is a very high-value asset, and we know it's actively targeted. So we're starting to see new tools emerge that can scan and reason through source code at AI speed. And that completely changes how we think about vulnerabilities. So just the core idea of what a zero-day is could look very different in the near future. And it's not just because someone found a bug the hard way, it's because they used AI to scan public and stolen repositories to discover subtle little logic flaws or abuse paths that are exploitable. So if we know real threat actors are looking there, right, we have to do the same thing. And that leads us to find vulnerabilities.
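As a rough illustration of the "scan source code at machine speed" idea, here is a toy Python scanner that walks a syntax tree flagging calls to a handful of risky sinks. The sink list and sample snippet are assumptions made up for this sketch; real AI-assisted tooling reasons about taint and data flow far more deeply:

```python
import ast

# Toy set of dangerous sinks an automated scanner might flag.
RISKY_CALLS = {"eval", "exec", "pickle.loads", "os.system"}

def flag_risky_calls(source: str) -> list[tuple[int, str]]:
    """Return (line_number, call_name) pairs for calls to known-risky sinks."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            name = None
            if isinstance(node.func, ast.Name):            # e.g. eval(...)
                name = node.func.id
            elif isinstance(node.func, ast.Attribute) and isinstance(node.func.value, ast.Name):
                name = f"{node.func.value.id}.{node.func.attr}"  # e.g. os.system(...)
            if name in RISKY_CALLS:
                findings.append((node.lineno, name))
    return findings

# Hypothetical snippet pulled from a repository under review.
sample = "import os\ndata = input()\nos.system(data)\n"
print(flag_risky_calls(sample))  # → [(3, 'os.system')]
```

Running something like this across thousands of repositories is trivial to automate, which is the core of the concern: the expensive part of finding a flaw stops being the reading.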

Sherrod DeGrippo: I love that. I also think it's interesting that you kind of mention what we would think of as software supply chain. We see a variety of threat actors, Silk Typhoon, a lot of the Blizzards, the Russia-based threat actors, they actively go after software vendors, VARs, IT suppliers, consultancies because --

Craig Nelson: Yes.

Sherrod DeGrippo: -- if they can get into those upstream providers, they can easily pivot into the downstream customers.

Craig Nelson: Absolutely.

Sherrod DeGrippo: So can you tell me a little bit about a Red Team operation that surprised you or that wasn't what you maybe expected?

Craig Nelson: Yes, I have to be careful what I say here, but one does come to mind. It was quite a while ago, but it's exciting. So this is a good story to tell. The code name of this op that we used internally was "Crashers". This was a mission that really blurred the line between technical precision and real-world unpredictability. So again, this was many years ago, but it's a good story that I can share. There was a very specific Microsoft building that the Red Team had to get into to get to another target. I won't go into details about what that was, but this was anchored to a real security incident that happened at Microsoft in the past. There was one called the "Xbox Hacker Underground" that actually did involve physical intrusions and stealing Xbox hardware from the Microsoft campus, right, so this stuff happens. So that inspired the Red Team. And the setup for the story is that we spent a bunch of time gaining access to camera systems and we were able to observe how the facilities operated, hoping to use some of that insight to breach the physical perimeter of this building and then reach a high-value internal target. So we did a lot of reconnaissance around the systems that supported the particular facility, and we built a very elegant attack chain payload that we could surgically inject into the system, and that system was heavily isolated. We had to manipulate some tokens, and we had to also execute something with a perfectly timed fault within the authentication flow. And within that, we actually purchased some biometric readers that were the same model that we saw in pictures of the building that we were targeting. So every detail was tested, simulated, rehearsed; we were pretty confident. So breach day rolls around and the team huddled around the monitors watching the camera systems that we had compromised a few weeks prior, all eyes watching these feeds. We launched the payload, it worked.
And the system believed everything was legit, all green lights to go forward. So then came the big moment. So one of the red teamers approached the facility, she confidently scanned her hand on the biometric reader at the secure entrance, and then the building security guard glanced up, had a strange frown on his face, and he double-checked his screen. And now they -- we were just sitting there watching on the camera feeds trying to silently figure out what was going on. So then in a twist of the simulation that no one had predicted, the guard looked up and then just opened the manual side door that should have only been used in emergencies and let the red teamer in. So just like that, she's in the location. Once --

Sherrod DeGrippo: What -- what --

Craig Nelson: -- [inaudible 00:18:33], that was that.

Sherrod DeGrippo: [Laughs] Let me ask you --

Craig Nelson: Yes, yes.

Sherrod DeGrippo: -- tell me -- I mean, I feel like there's a big reveal coming next, but --

Craig Nelson: Oh, yes.

Sherrod DeGrippo: -- what did you feel in that moment when you -- I mean well your heart rate must have been off the charts.

Craig Nelson: Yes. Yes, months and months of planning this thing, and we thought everything we did would work, but it turns out the data we injected into the database altered the application and didn't work. And the guard saw the error on the screen and figured there was a tooling problem. So yes, we were definitely nervous, but, you know, that guard, it turns out, made the decision to optimize for a good experience and let her in. [Laughter] So then at that point you follow through with the rest of the plan. But the takeaway there was clear: sometimes it's not the fancy exploit that gets you in, it's just human nature. So they were definitely watching us, and it was a very exciting moment.

Sherrod DeGrippo: That's incredible. So tell me that phrase you said again, "Optimized for a better user experience?"

Craig Nelson: Yes.

Sherrod DeGrippo: That sounds like code for, "Made a bad security choice."

Craig Nelson: It is. Essentially, you know, if you're a security guard and you're dealing with humans, you might want to optimize for a good experience for those humans rather than tell them to stand aside while the problem is debugged. And what's interesting is, in this particular case, just a few hours later, that application I was telling you about was investigated by the Engineering team, who determined it was a red teamer, or an intruder. So they launched an investigation, figured out what happened, and did a great investigation. And it turns out that also triggered a lot of updates about how we do a lot of our physical security at Microsoft, how the identity system is connected, the infrastructure, some of the isolation. So again, this was many years ago and a lot of stuff has changed since then, but it is a good reminder that security isn't just all about the tech, to your point.

Sherrod DeGrippo: Well, I'll tell you, too, interestingly, we hired somebody new on my team eight months ago and she had all kinds of badge issues. And I remember very vividly Security absolutely would not let her in, absolutely not. She was a confirmed employee, but the access had not been provisioned yet, so absolutely not. And I remember so vividly sitting in the lobby inside the first set of doors in one of the buildings, where there are some couches and stuff where you can wait for somebody to come pick you up. And I was like, "It's great to meet you, but unfortunately, we can only sit in this lobby waiting area and you cannot come in." So I do feel like the physical security choices, certainly on campus, are incredible. So you're doing all these Red Team operations, and I'm sure that there are significant findings brought back for a variety of teams, and go-do's, and action items. What's something about Red Team findings that you wish teams and organizations paid more attention to; what should people pay more attention to in their red team readout?

Craig Nelson: It's looking past that initial symptom, looking beyond the initial exploit that worked or detection that didn't fire, and focusing on the root cause of the system that made the weakness possible in the first place. So at Microsoft we have a healthy system in place to ask these questions. And in fact a lot of it's based on a structure Amazon adopted many years ago called the "5 Whys", which is a good thing for listeners to zoom into if you haven't heard about it. The key point there is that problems are usually a manifestation of deeper forces, and you have to ask those five whys to really drill down into what happened. And every red team finding has a connection to organizational dynamics, sometimes unclear ownership, design assumptions, or the state of technical dependencies, or other kinds of broken decision loops. So once you find that root cause, you can then pivot to the question, "What other areas of the system do we have to test?" You know, it's not just this one system, you're probably going to see it many other places. At cloud scale, that usually happens about a million times.

Sherrod DeGrippo: And how do you find -- when you take those readouts to org owners, or system owners, or code responsible individuals, what kind of response do you usually get from them? I'm sure they're ready to work with you and they're happy about it, but is it shock, surprise, concern about adding additional work into their workload? What do those readouts typically vibe -- like what's the vibe of those?

Craig Nelson: Yes. Fortunately, we have a culture at Microsoft that really appreciates red teaming. So there is a lot of appreciation there that we're finding it before real threat actors do, because at that point the cost is much higher and there are a lot of other complications we have to deal with. So I would say 80% of the time it's one of appreciation, things get fixed, and we actually build a lot of good partnerships that help us with whatever the next mission is.

Sherrod DeGrippo: I also think about -- I see the kind of technology evolution in a couple of different parts. But one of them is the pre-cloud era, like you would buy software in a box. I remember -- you probably do too, you would go buy Windows at Staples. [Laughs] Like --

Craig Nelson: Yes.

Sherrod DeGrippo: -- you would buy a box of software at a store and you'll bring that box home. And so securing the software in the box is so different from securing a cloud, right, securing this incredible interconnectedness that we have now. And so from a red team perspective, how do you see the cloud, how do you see attacking the cloud, and sort of what's the approach there now that we're in this cloud transformation post era? Pretty much, everybody's in the cloud now.

Craig Nelson: Yes, so the cloud, the rise of the cloud, and now the shift to AI. Most organizations are in the cloud, and that shift over the last decade plus has just unlocked a lot of capabilities. But it's also introduced a lot of complexity. We see a lot of identity sprawl, and shared infrastructure, and tightly coupled services. And then, you know, the rise of microservices that are sometimes just really complex in how they're used, how they're plugged together, how they authenticate, how they pass data. It's all definitely super complex, and the red team over the last decade has really evolved to push those boundaries and adapt to that growing attack surface, very different, of course, than boxed software. And now we've got this AI thing that is following a similar trajectory to where the cloud was about 15 years ago, with super fast adoption. And that has a lot of security impact. And from a red team perspective -- and this is where we're getting really far away from boxed software over to AI -- there are really two key areas that I see. Number one is this shift has made AI an attack path. If you look at how AI systems are connected to internal enterprise data, usually through patterns like Retrieval-Augmented Generation -- and you'll hear your engineers talk about the RAG pattern, which complements the traditional prompt with real-time data that you can pull in from services like Azure, send that to the prompt, and then the response is more up to date than what the static model was trained on -- all those interconnections to data have become very high-value targets, so there's a lot of need for tight controls around isolation, authentication, and monitoring. But that RAG pattern is just the start.
At the time of this podcast, MCP is starting to pop up, and that's basically an AI interface that can, you know, allow the system to access more data and trigger automation, and it's really turned AI into this programmable, somewhat nondeterministic interface that I think is going to really mark the next year or two, and the cloud is definitely evolving. Another area that I want to zoom in on is AI as an attacker tool. AI is a dual-use system, and we definitely saw, starting a few years ago with phishing, attackers using generative models to craft more convincing emails. Now it's going to change quite a bit. Those attackers are going to be loading up Visual Studio Code with different chat clients, like GitHub Copilot, or there's one called "Roo", and then Cline, that allow them to basically write a spec for their attack and then have it translated into code. And that code will accelerate attacker development cycles and just the overall sophistication of what we see threat actors doing. So I think threat intelligence is going to take new forms in this AI era. And if you look at it across the span of time that we both can, from the '90s to today, it is quite incredible, these three key phases, and we're just at the very tip of the AI phase.
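For listeners who haven't seen the RAG pattern spelled out, here is a deliberately simplified Python sketch. The word-overlap retriever and the in-memory document store are hypothetical stand-ins invented for this example; a real deployment would use an embedding search over an enterprise data service:

```python
def retrieve(query: str, docs: dict[str, str], top_k: int = 2) -> list[str]:
    # Toy relevance score: count of words shared with the query.
    q = set(query.lower().split())
    scored = sorted(docs.values(),
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(query: str, docs: dict[str, str]) -> str:
    # Retrieved text is spliced straight into the prompt: this is the
    # "augmentation" step, and everything in it reaches the model.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical internal documents.
docs = {
    "hr": "Holiday policy: employees accrue leave monthly.",
    "it": "VPN outage expected Friday during maintenance.",
}
print(build_prompt("When is the VPN maintenance?", docs))
```

Whoever controls or poisons the retrieved documents influences the model's output, which is exactly why those data interconnections become high-value targets and need tight isolation, authentication, and monitoring.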

Sherrod DeGrippo: So I think we have a lot of listeners that do threat intelligence, and we also have a lot of listeners in security. Red teaming is sort of for a lot of people I think this big goal, they want to get into red teaming. Help me understand like for you where you see the intersection of threat intelligence, where that might help someone get into red teaming, how do you see that, and then I want to talk about what it's like kind of using threat intelligence at Microsoft?

Craig Nelson: Got it. So yes, I think some of the things that make folks a great threat intelligence analyst and a great Red Team member at Microsoft are pretty similar. Being great at this work really starts with how you think, not just how deep your technical skills go. And a lot of this is not something that people are born with; both threat intelligence and attacker intelligence are built up over time. And I am lucky that, you know, I have some of the best red teamers on my team and I get to learn from them. And what I see is that the great red teamers really use three hacks to support how they operate. And I can go into that a bit. Hack number one is they have a breadth of knowledge that they can connect together really fast, right, both red teamers and threat intelligence analysts, so definitely a core skill. You've got to think like an attacker across many domains, [inaudible 00:27:49], identity, endpoint, code, all of it. One of my favorite quotes is, "Talent can hit targets that no one else can hit, but genius can hit targets that no one else can see." And that's exactly what makes someone stand out in red teaming, and threat intelligence as well. Great red teamers can look at a system and just instinctively know how it's going to break. And not just because they've seen that exact issue before, but because they know the patterns, right, they've written the code, they've experimented with the technology, and they're fearless in diving into new tech and connecting it in ways that folks just didn't anticipate during the design.

Sherrod DeGrippo: I want to say something about that really quickly because one thing I agree with -- you know, I'm deep in MSTIC all day; my whole life is MSTIC, and I love it, and that's where I spend my time. And one of the things that I've noticed is in common with great red teamers is something I think people don't instantly realize, and it's a lot of intuition. There's a lot of, I'll say, "Well, how did you know to do that?" -- whether it's looking at actor attribution or a red teamer going to whatever next step they're doing -- "How did you know?" And they say, "You know, I just felt like that was the right thing." They're listening to their creativity, to what is pushing and pulling them and where their interest is driving them. And I think that's really unique and special.

Craig Nelson: A hundred percent; and I'm glad you bring that up because, I mean, that's like -- well, let's go with that hack number two: great threat intelligence analysts and red teamers know their flow, they know how to focus and unlock that exact creativity, right, they know when they're the sharpest, they know how to get into the creative rhythm, and they know how to stay there. So I call this the "flow hack", right, it's the skill of building your environment around your brain, rather than incurring all the costs of having your brain retune to your environment. So let me give you some examples of what I've observed on my team, right? Some people hit their stride early in the morning, some late at night. I see red teamers reset their thinking with a midday run. And I love watching people who have a rigorous process for how they take notes and then connect those notes to other ideas, almost offloading that content so they can use their brain for thinking rather than just for storage and retrieval. So when you add that to what we're talking about in AI and how real attackers want to use AI, the next-gen red teamers and threat intelligence analysts are going to master AI, take all of that to the next level, and use AI as their second brain. But it kind of bothers me when I see people who don't protect their flow, right, people that, you know, are trying to do red teaming while juggling instant messages, and social media, and news, and they spend all their energy fighting their tool chain, building VMs, debugging their Python environments, without ever really getting into thinking mode. So unlocking --

Sherrod DeGrippo: Yes.

Craig Nelson: -- creativity is just so, so important. I'm with you.

Sherrod DeGrippo: I think something that I really have focused on, even in the past 10 years, we're knowledge workers. And if you are a knowledge worker and you're not setting aside time to think -- and I mean laptop closed, notifications off, time to think and use the brain, and use the knowledge in an isolated deep work state --

Craig Nelson: Yes.

Sherrod DeGrippo: -- you're not bringing as much value and efficacy to your role as you could be.

Craig Nelson: Yes, so you've got to protect your flow. To your point, that takes us to hack number three. And I think great red teamers reflect on both their wins and their losses. So after every op, what I observe is that some folks just step back and ask questions, "What did I miss?" Even if it was a win, they'll say, "What did I miss? What in the infrastructure behaved in an unusual way? What credential did I pick up that might have a big blast radius or unlock another door that I didn't see the first time?" So the key point here is that reflection isn't a luxury, it's a system of growth that I see folks use to take themselves to that next level and unlock that next level of creativity.

Sherrod DeGrippo: I love that. I think in intelligence analysis, too, "What did I miss," and then leveraging your team and saying, "What do you think," has been one of the biggest boosts -- like an exponential boost -- to my ability to understand things and get things done: going to someone and saying, "What did I miss here?" "What do you think?" and giving big space to myself and other experts to just go look, like, "I think this, I think this, I think this." And then eventually you usually hit a bullseye. If enough people are looking, and thinking, and talking about a problem, eventually you kind of whittle your way down to something that's going to work.

Craig Nelson: Yes, and I recognize that folks that are newer to this might be hearing us talk like this and find it somewhat intimidating. But that takes us to, let's say, hack number four, and that is the other thing that I see red teamers do, and we have to be really honest about this: one of those underrated skills is the ability to just operate with incomplete information. So we both know that real threat actors don't have a full map of what they're attacking, and they're comfortable with that; they're probing, and pivoting, and just learning as they go, and they move fast. And great red teamers need to be just as fast living in that ambiguity. But what's really interesting is when you're doing red teaming at a large company, the corporate world tends to reward precision: waiting for data, analyzing every angle, having a bunch of meetings, taking action on the lowest common denominator that everyone agrees to, and then making sure it's defensible as part of a performance review. And that's just not the way the real world of threat actors works. They move fast, they make decisions on limited signal, they're creative, and they are exploiting things, and if they can do that faster than the hunters can catch them then, you know, we're in big trouble. So operating with incomplete information is okay. And you've got to be in that flow and have that level of comfort, so you can sit back and think, and then take action quickly.

Sherrod DeGrippo: I love that. I think that operating with incomplete information is not only an imperative, I think it is what stretches you and makes you grow, right? If you're lacking in the ability to do that, you're not going to get the experience, and the skill, and growth that you would otherwise. There's a television show called "Succession" that I really love.

Craig Nelson: Oh, yes.

Sherrod DeGrippo: And at the very end of it, at one point, I think it's -- God, I can't remember his name, the -- Tom Wambsgans. He says, "Gerri's not afraid of the dark." And he's basically saying, "She will operate even when she doesn't have all the information." And we're talking about billionaire executives. But the point is, having that skill of being not afraid of the dark gets you further down the path than people who are.

Craig Nelson: Absolutely.

Sherrod DeGrippo: When you're doing like actor emulation or adversary emulation, Craig, are you normally drawing more from crime actors or nation-sponsored actors? Do you have a preference there?

Craig Nelson: So we are focused primarily on nation-state actors at this point in time. And that's because of some of the key moments that have shaped red teaming at Microsoft. So stepping back about a decade, there was moment number one, where we embraced the assumed breach model in the company, and the industry did as well. And that's where you basically said that you have to look at yourself from the attacker's point of view, full end to end. Just assume the attacker, nation-state or not, can get past the outside shell and get in, and you have to just assume that breach is going to happen. And that's what shifted red teaming to be much more mainstream. And so that was definitely a really positive thing. But you know, and back to your point, within Microsoft, the more recent incidents involving Storm-0558 and Midnight Blizzard took us to that next level. And again, these are in the recent past. And Microsoft has always faced a very wide range of threat actors, so we try our best to emulate as much as possible. But you know, these two incidents, Storm-0558 and Midnight Blizzard, were very different. They were really big inflection points that pushed Microsoft to fundamentally evolve how our security governance is done and transformed organizational priorities. We have a massive amount of funding with the Secure Future Initiative that goes technically really deep into making sure that the core pillars of security work, and into how we execute threat actor tracking, response, hunting, and red teaming.

Sherrod DeGrippo: A hundred percent, I feel the impact of the Secure Future Initiative, or SFI, every day in my work. I do things that are driven by SFI, and I know that you, myself, my leadership, we're handing out action items, we're handing out to-do lists to developer orgs, engineering orgs saying, "Hey, Security says you need to do this and let's get it done." And I haven't seen that dynamic before at Microsoft.

Craig Nelson: Yes, it is, I think, rare in the industry that it is, "Here are very clear priorities, here's exactly what you have to do," and then you have to worry about the Red Team and others testing that, Red Team and real adversaries. So it is very significant and very serious.

Sherrod DeGrippo: Well, I want to wrap up with one thing that I think a lot of people will be interested to hear from you, somebody who's had such an incredible red team career and background, and who now leads Red Team at Microsoft, which I just think is so, so cool. What advice would you give to somebody who wants to get into Red Team but doesn't know where to start?

Craig Nelson: Yes, so starting on Red Team is a tough thing. You want to have a foundation, you know, in either system administration, engineering, or some other IT job, right, that will help you get accustomed to how systems work. But as you're going through those jobs over time, look at them as investments, right: you'll be looking at the edge cases of whatever you're working on and asking, how would you exploit that? So when you're sitting down for your red team interview, and you have to recognize that a lot of this is really competitive, you can anchor back, not to the academic things that everyone talks about, but to real experiences of how you see the world. And go into those edge cases, right, and talk about how you fixed something, because you are looking at those edge cases for how a real attacker could abuse them. So the key thing here is, when you're thinking of red teaming, it's about looking at the fix side of the equation as well as the find side. That fix side is actually really important, and that's what will differentiate you when you're at that moment in your life where you've decided you want to be a red teamer and you're having that interview to open that door for your career.

Sherrod DeGrippo: I love that because that goes back to something I tell people all the time, and that is mindset and point of view are your differentiators in this industry. Getting skill, experience, understanding, and then putting your mindset and point of view out there in the world, that's what's going to take you far. So I love that you encourage people to kind of start seeing the world that way if they want to be a good red teamer. I love that. Craig Nelson, leader of Microsoft's Red Team, thank you so much for joining us on the podcast. It was great to talk to you. I love every time we get to work together. So I deeply appreciate you joining, and I definitely have a little agenda now to send some crime info over to your team.

Craig Nelson: Fantastic. Thanks, Sherrod.

Sherrod DeGrippo: Thanks for listening to the Microsoft Threat Intelligence Podcast. We'd love to hear from you. Email us with your ideas at tipodcast@microsoft.com. Every episode, we'll decode the threat landscape and arm you with the intelligence you need to take on threat actors. Check us out at msthreatintelpodcast.com for more, and subscribe on your favorite podcast app.