Security Unlocked 4.28.21
Ep 25 | 4.28.21

Knowing Your Enemy: Anticipating Attackers’ Next Moves

Transcript

Nic Fillingham: Hello, and welcome to Security Unlocked, a new podcast from Microsoft, where we unlock insights from the latest in news and research from across Microsoft Security engineering and operations teams. I'm Nic Fillingham.

Natalia Godyla: And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft Security, deep dive into the newest threat intel, research, and data science.

Nic Fillingham: And profile some of the fascinating people working on artificial intelligence in Microsoft Security.

Natalia Godyla: And now, let's unlock the pod. Welcome, everyone, to another episode of Security Unlocked, and hello, Nic, how's it going?

Nic Fillingham: It's going well, good to see you on the other side of this Teams call. Although, you and I were in person not 24 hours ago. You were here in Seattle, we were filming some more episodes of the Security Show. I don't think we've really given listeners of the podcast a full, meaty introduction to the Security Show, have we? Do you wanna let listeners know what they can find?

Natalia Godyla: We play games and hang out with experts in the industry, and we've done everything from building robots with folks, to building blocks, to painting our nails. You can find the Security Show on our YouTube channel, so, YouTube.com/MicrosoftSecurity, or you can go to aka.ms/securityshow. We talk with Chris Wysopal, the CTO and co-founder of Veracode, on modern secure software development, and Dave Kennedy, who comes to talk to us about SecOps and everything you need for a survival kit in SecOps, so come check them out.

Nic Fillingham: Bad news is you, you have to deal with, uh, Natalia and I on another, uh, media format. But before you go there, make sure you listen to today's episode of Security Unlocked. We have a couple of returning guests. We have Cole and Justin, who have been on before, as well as Josh Neil, who comes on in the, in the last few minutes. And new guest, Melissa. They're all from the Microsoft 365 Defender research team, and they all co-authored a blog from April 1st called Automating Threat Actor Tracking: Understanding Attacker Behavior for Intelligence and Contextual Alerting, which is exactly what it is, but I think it buries the lede. Natalia, you had a great TL;DR, what did they do?

Natalia Godyla: The team used statistics to predict the threat actor group and the next stage in the attack, really early in the attack, so that we could identify the attack and inform customers so that they could stop it. I think what's really incredible here is not only the ability to predict that information, but to do it so early in the kill chain.

Nic Fillingham: Within two minutes of an attack beginning, using this model, Microsoft Threat Experts were able to send a notification to the customer to let them know an attack was underway. The customer was able to do, you know, the necessary things to get that attack shut down. We'd love, as always, your feedback. Send us emails, securityunlocked@microsoft.com. Hit us up on the Twitters. On with the pod.

Natalia Godyla: On with the pod.

Nic Fillingham: Well, welcome back to the Security Unlocked podcast, Cole and Justin, and welcome to the Security Unlocked podcast, Melissa. Thanks for joining us today. We have three wonderful guests, with maybe a, a fourth special guest appearing at the end. And today we're gonna be talking about a blog post appearing on the Security blog from April the 1st, called Automating Threat Actor Tracking: Understanding Attacker Behavior for Intelligence and Contextual Alerting. All of the authors from that blog are here with us. Cole, if I could start with you, if you could sort of reintroduce yourself to the audience, give us a little bit, uh, about your role, what you do at Microsoft, and then perhaps hand off to one of your colleagues for the next intro.

Cole Sodja: Sure. Will do, thank you. So, hi, I'm Cole. I work in the Microsoft 365 Defender group. I'm a statistician. Primarily my responsibilities are driving, kind of, research and innovation in general, supporting threat analytics, threat hunting, threat research in general. Yeah, been doing that for about three years now, and I love it. That's a little bit about myself, so I'll hand it over to Melissa.

Melissa Turcotte: All right. My name's Melissa, I work with Cole, so in the same group, Microsoft 365 Defender. I'm also a statistician by background. I've been in the cyber domain for probably about seven years now. I was working for a Department of Energy research laboratory in their cyber research group for five years, and I joined Microsoft a year ago. I like all sorts of problems related to cyber. My expertise probably would be in anomaly detection, but anything related to cyber where there's data in the problem, I like to be involved.

Nic Fillingham: And Justin.

Justin Carroll: Hey. I also work in the Microsoft 365 Defender team, doing threat intelligence. My main focus is uncovering new threats and actor groups and understanding what they're doing, different modifications to how they're conducting their attacks, and the outcomes of those attacks, and then figuring out the most effective ways to either communicate that out to customers or act on detection capabilities to stop them from succeeding.

Nic Fillingham: Listeners of the podcast will note that you have a super sweet ninja turtles tattoo, is that correct?

Justin Carroll: This is accurate, this is definitely accurate.

Nic Fillingham: And, and we may or may not have a super secret fourth guest on this episode, who may join us towards the end, who you would, you would know from a very early episode of the podcast, but perhaps we'll keep them secret until the very end. Thank you all for joining us, thank you for your time. Again, we're referring to a, a blog post that, that all of you authored from April 1st. This is a, quite a complex and, and sort of technical blog post, which I know a lot of our audience will love.

Nic Fillingham: I got a little lost in the math, but I, I absolutely was enthralled by what you all have undertaken here. Cole, if I could start with you, can you give us, give us an overview of what's covered in this blog post, and sort of what this project was, how you tackled it, and what we're gonna talk about, uh, on this episode today.

Cole Sodja: Yeah. So if I step back, being someone kind of still fairly new, uh, to cyber security, I approached things pretty much with just using data, right? Doing data-driven inference, as I'd say. And through my research, what I started to, um, kinda ask myself is, can we kinda get ahead of cyber security attacks, you know, from a post-breach perspective? Once we see an adversary in a network, can we start to make some predictions, basically, on what they're likely gonna do? Who is the adversary, or is it human operated, is it an automated script, for example. And then if we recognize the adversary, kinda recognize their tactics, their techniques, their procedures, can we say, okay, we're, we're likely gonna see they're gonna ransom this enterprise, for example.

Cole Sodja: So I tried to look at it as more of a data mining exercise initially, it's like, can I recognize these types of patterns, and then how predictive are these patterns that we're seeing in terms of what likely is gonna occur. Or, put it another way, what type of threat is this, essentially, to the enterprise? So, so that's kinda the background, the motivation. Now, when I started this project, back with Justin and then with Melissa, it started really as let's look for particular, uh, threat actors that we're aware of, that we recognize, that we know about, and see, like, can we start, from a data perspective, classifying okay, is it this group, is it that group, and what does this group tend to do?

Cole Sodja: And one of the challenges in that is sparsity. Basically, we don't have a lot of labels sitting around out there saying, it's threat actor group A, B, C, D, and so on. We have handfuls of those. Some of these actors, they don't tend to do attacks very frequently, right? They're extremely sparse. So, so one challenge of this, and one of the motivations, is, how can we actually partner with threat intelligence, for example, and our threat hunters, to try and essentially encode or extract some of their information to help us build models, to help us reason over the uncertainty, essentially.

Cole Sodja: And when we say probabilistic modeling, that's what we mean. It's how do we actually quantify this uncertainty, both in what we believe about the actors, or the adversaries in general, as well as what they're gonna do, right, once they've breached your network. So that's kinda how it started, and what this blog's really about is kinda giving a walk-through, essentially, of what we did initially with this research. It started with, and Justin will talk about this in a moment, it started with looking at a few select threat actors that are very serious.

Cole Sodja: We started to understand their behaviors more and more and we thought it was a good opportunity, initially, to try and build a model to, again, understand what they're doing, track what they're doing, because they do change their tactics over time, as well as just see if we could get ahead of them. Can we actually notify a customer in advance, before, uh, for example, their organization's ransomed? So, so that's one part of the blog that we'll discuss, and I'll hand it over to my good friend Justin to take it from here.

Justin Carroll: So, like, one of the, the main challenges that we kinda face in the intelligence sphere is understanding the particulars of an actor and when they are present in an environment. A lot of times, you'll see the intelligence is really focused on a very particular indicator such as, like, a known IP address that's malicious, or a single behavior. But it's kinda difficult to frequently pivot them out to understand when a suspected attacker is in an environment. A lot of that is because they don't always do the exact same behaviors when they're compromising an organization or device. There will be some variation, and a lot of the time it basically requires manual enrichment of devices to try and understand the specifics of the attacks and what

Justin Carroll: ... the final outcomes of those attacks were. So this opportunity presented one to work with data scientists to, like, really supercharge our efforts so that we could kinda come in understanding a much bigger picture and knowing, essentially, what behaviors we saw occur and then which ones we might suspect. A lot of times with these human-operated ransomware ones, the time to alert, to notify of the expected outcome, is often fairly short, in particular with, uh, one of the ones that we worked on to kinda test this method out. We had seen very short instances from time to compromise to ransom, so, um, this was to try and see if we could have a, a highly confident method of enriching that intelligence, um, and then working with other teams to get those alerts out.

Natalia Godyla: If I could jump in here for a moment. So, at the beginning of your description, you noted that typically you'd use manual enrichment. Can you talk a little bit about that? So prior to this probabilistic model, how did you go through that manual enrichment process to try to, uh, predict which threat actors they were or determine what stage of an attack it was?

Justin Carroll: It would be something along the lines of, let's say, you had intelligence from either a partner team or open source intelligence that says, you know, "These threat actors are using this IP address as part of their attack," and then looking for the presence of that and then finding out what actually occurred on those devices to understand the entirety of the attack. Or looking more generically and saying, like, "Okay, we know these attackers like to use a particular behavior as part of their credential theft," and then looking for all sorts of instances of that credential theft and then kinda continuing to pivot down into the one that is leading to the behavior that you're looking for. One of the difficulties that you'll see, in particular with this and other actors, is, like, they will use multiple shared open source tools and payloads. Um, many of them aren't even malware, they're clean tools with legitimate purposes, so it can make it difficult to try and suss out malicious versus administrative use, so you have to look for that combination of different behaviors to indicate something malicious is afoot.

Nic Fillingham: Justin, if I look at the blog, I think it might be the first chapter here, there's a MITRE ATT&CK framework diagram, Figure One, and it, uh, outlines sort of the steps taken here for how this model was able to, with high confidence, identify the, the actor and, uh, send an alert to the customer, who was able to shut it down. I wonder if you could sort of, could you walk us through these sort of six steps as an example of, of how this worked in, in sort of real life?

Justin Carroll: Yeah. I can walk through basically, from a model's perspective, essentially, how it works. Timing, that's more a function of, like, how the attack, uh, typically progresses with this actor. Technically speaking, what the model's really doing is it's encoding each behavior we have, in this case, each MITRE technique in particular, in terms of what's the confidence that once we see, for example, initial access under, let's say, RDP brute force, followed by lateral tool transfer with a subset of tools recognized, that particular sequence right there, that's where the model would be like, "Okay, the probability that it's this particular threat actor group conditional on those two things occurring in sequence will be X," and that sequence could occur in a matter of minutes or even days and weeks, dependent on the actor, of course, we're talking about.

Justin Carroll: With the, the actor we're showing in this graph, this actor typically will penetrate a network through RDP brute force, but then sometimes they won't immediately transfer their tools. They might wait a day or two, or sometimes they'll, they'll do it very fast. Like, once they basically compromise a log-in, then, uh, they'll, they'll go to that machine, there might be some, um, discovery-related commands before they transfer, or they might just transfer their tools, and then that will be the attack box, basically, in which they stage their attack, and then they'll do some additional things.

Justin Carroll: So at each step, basically, or each stage of the attack, as we like to call it, the model is basically gonna then update its probabilities and say, "Okay, based on all the information I've seen up to this stage, the probability that it's this actor is P, and now, conditional that it's this actor with probability P, the probability that we'll now see, for example, defense evasion in this attack will be Q," or, or we could even go further in the attack stage to say, "Now, given all this, what's the probability that we'll see, for example, ransomware or inhibit system recovery in the coming hour? Or in the coming, you know, X time?"

Justin Carroll: So the model's able to do that, but it's primarily conditional on the stages it's observed up to a point in time, not so much in terms of the time it takes for the actors to do X.
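
To make that update step concrete, here is a minimal sketch of the kind of sequential Bayesian updating Justin describes. Everything in it, the candidate adversaries, the TTP names, and the probabilities, is invented for illustration, and it uses a naive-Bayes simplification (each observed technique treated as independent given the adversary) rather than the team's full Bayesian network:

    # Hypothetical prior beliefs about what the intrusion is.
    priors = {"actor_x": 0.05, "other_human_operated": 0.15, "commodity_malware": 0.80}

    # P(technique observed | adversary) -- illustrative numbers only.
    likelihoods = {
        "rdp_brute_force":       {"actor_x": 0.90, "other_human_operated": 0.40, "commodity_malware": 0.10},
        "lateral_tool_transfer": {"actor_x": 0.80, "other_human_operated": 0.50, "commodity_malware": 0.05},
    }

    def update(posterior, ttp):
        # One Bayes step: weight each hypothesis by the likelihood of the
        # newly observed TTP, then renormalize so probabilities sum to one.
        unnormalized = {h: p * likelihoods[ttp][h] for h, p in posterior.items()}
        total = sum(unnormalized.values())
        return {h: v / total for h, v in unnormalized.items()}

    posterior = dict(priors)
    for ttp in ["rdp_brute_force", "lateral_tool_transfer"]:
        posterior = update(posterior, ttp)
        print(ttp, "->", {h: round(p, 3) for h, p in posterior.items()})

With these made-up numbers, just two observed stages push "actor_x" from a 5% prior to roughly 51%, which mirrors how the model can become confident very early in the kill chain.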

Natalia Godyla: So, in this blog and in our discussion today, we're gearing up to talk about probabilistic graphical modeling as a way to address the challenge that, Cole and Justin, you've set up for us today, and, and for any of our listeners who'd like to follow along in the blog, the blog is titled "Automating threat actor tracking: Understanding attacker behavior for intelligence and contextual alerting" and you can find it on the Microsoft Security blog. I'd love to dive into the probabilistic graphical modeling and perhaps start with a definition of what that means. So, M- Melissa, could you give us an overview of this approach?

Melissa Turcotte: Yeah. We have this problem which, what they are essentially saying is, we have a collection of things which... I'm a statistician so I often call them variables, but, you know, features, if you will, if that's easier for you to understand. These TTPs, right? The sets of things that the actors are doing, and we have a collection of them. And given some collection of these, we wanna make a statement about whether or not it's ransomware, or whether or not it's a specific threat actor, or a group of actors. Right? And this is, this is, like, a perfect, um, example of where probability can help you make these decisions, and one thing I'd like to stress is that no one of these features gives you enough information about whether or not it's this actor or this, this group of actors, or it's ransomware, you know, whatever your variable of interest is.

Melissa Turcotte: It really is the collection of these together that, you know, kind of in Justin's mind, as an analyst, he's, he's making these connections in his head, and I wanna be able to replicate that in some sense. I wanna take into account his knowledge and kind of his decision-making process, combined with the data that I have, to make these probabilistic statements about what I think is happening. And graphical models are really great here, probabilistic graphical models in particular, as they kind of provide this joint probability distribution over all these features, and the variable of interest in this case is kind of, maybe, is it this actor, but not necessarily. I may wanna know something about any one of these other features. I may already know it's this actor, and I may wanna be like, "Wh- what are the common things I see this actor do?"

Melissa Turcotte: So, so graphical models really shine in this case where you have this collection of things that you are observing, and you kind of want to ask questions about any subset of them, given some observations of others. And so this is a really great tool to use in this setting, and it's also quite interpretable. So if you're looking at the blog and you see this Figure Two, which is a toy example, you kind of, as a human, you can look at that and you can kind of understand that, "Okay, so I'm seeing transfer tools and lateral movement are related." Um, and you can kind of understand sort of what relationships the model is making. Um, and so that kind of provides this extra, you know, benefit, in that, yeah, I can talk an analyst through what this kind of is showing, and then it's quite interpretable for them even if they don't understand the underlying maths, and that's kind of something we really wanna strive for. Um, you shouldn't have to understand the underlying maths to kind of understand the decisions that are being made.

Melissa Turcotte: It's really attractive in this sense. And then the Bayesian networks, why I really like it, is kind of, the Bayesian paradigm is... So in statistics generally, or data science, you have some data and you're kind of, you know, making inference given the set of data to make statements about things of interest. So the data tells you something about your beliefs and the state of the world, but you have your own subjective beliefs about what you think could and could not happen. The Bayesian paradigm kind of combines those two things, so you have your beliefs and then you have what the data is telling you, and your ultimate kind of predictions are based on the combination of those things. And generally, the way it works is the more data you have, the data will always win through.

Melissa Turcotte: So this problem, bringing it back to attacker prediction, is a case where we don't have a lot of data, right? We say companies get attacked all the time, but not at the scale at which we collect the underlying data. So, like, you know, you as a user are performing actions, logging into computers you use... This shows up in the data thousands of times a day, whereas an attack happens, kind of, like, on a monthly scale. So the scale of attacks relative to the data we're getting is just really small, and then when you go into attacks that we've labeled as being attributed to a threat actor, I mean, that's even way smaller. So it's, it's kind of a small data problem, uh, in terms of the number of labels you have.

Melissa Turcotte: But what we do have is these analysts who have spent years tracking these people and have their kind of, you know, beliefs about what they do and how they've changed over time. And so we

Melissa Turcotte: Wanna capture that. We definitely want to include the evidence we see and the data, but we wanna capture that really rich knowledge that we get from the analysts. And so kind of that's where the Bayesian network part becomes attractive because it, it provides a very principled way to, to capture the analysts' expertise, combine that information with the data we're seeing to make these ultimate predictions.

Natalia Godyla: For our audience, could you really quickly describe a Bayesian network?

Melissa Turcotte: So, a Bayesian network is a way of building a model for a collection of variables whereby the idea is that you have different variables which are related to each other. It, it kind of helps draw out or show what those relationships are. So, like, in the graph, you know, if there's an arrow from transfer tools to impact, that's saying if I see transfer tools... I'm gonna use the word impact twice here... that has a direct impact on whether or not I'm going to see impact. So, so it's kind of the way the variables relate to each other and the way the probabilities change according to those relationships. And so a Bayesian network encodes all this information.
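
As a rough illustration of the toy network Melissa describes (the blog's Figure Two is the authoritative version), here is how a three-node Bayesian network could be written down with the open-source pgmpy library. The structure, the probability tables, and the choice of pgmpy are all assumptions made for this sketch, not details of the production model, and pgmpy's class names vary a little between releases:

    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    # Arrows encode direct influence: the actor drives tool transfer, and
    # both drive whether we ultimately see impact (e.g., ransomware).
    model = BayesianNetwork([("Actor", "TransferTools"),
                             ("Actor", "Impact"),
                             ("TransferTools", "Impact")])

    model.add_cpds(
        # P(Actor): invented base rate that the intrusion is this actor.
        TabularCPD("Actor", 2, [[0.95], [0.05]]),
        # P(TransferTools | Actor): this actor usually stages tools.
        TabularCPD("TransferTools", 2, [[0.9, 0.2], [0.1, 0.8]],
                   evidence=["Actor"], evidence_card=[2]),
        # P(Impact | Actor, TransferTools): columns are (Actor, TransferTools)
        # = (0,0), (0,1), (1,0), (1,1).
        TabularCPD("Impact", 2, [[0.99, 0.70, 0.80, 0.10],
                                 [0.01, 0.30, 0.20, 0.90]],
                   evidence=["Actor", "TransferTools"], evidence_card=[2, 2]),
    )
    assert model.check_model()

    infer = VariableElimination(model)
    # Query in either direction: who is it, given what we saw...
    print(infer.query(["Actor"], evidence={"TransferTools": 1}))
    # ...or what comes next, given who we think it is.
    print(infer.query(["Impact"], evidence={"Actor": 1}))

The same fitted network answers both questions, which is the "multiple target variables for free" property Cole contrasts with decision trees a little later.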

Nic Fillingham: If I can take another swing at that one... Thank you, Melissa. I'm wondering what were some of the other, uh, techniques that you either considered for this approach? Like, did you experiment with other methods and then ultimately chose Bayesian?

Melissa Turcotte: Yes, um, in fact, uh, so the initial kind of... The perhaps most obvious thing to do is to think of decision trees, right? You're, you're seeing, you know, these things over time. Okay, I saw, um, what was the first one? Initial access with this... You don't go as broad as initial access, but I saw initial access using this, you know, minor technique. And so you can kind of think, like, you have a tree that's kind of... Okay, I saw this, I didn't see this, but I saw this and I didn't see this, so now I think it's this actor. But kind of where this is preferable is the fact that, as Cole says, we don't want to see the whole attack happen before we make a statement about what we think it is. And Bayesian networks work really well in, in the absence of some observed variables.

Cole Sodja: Yeah, I'll just quickly chime in. I agree with Melissa. So, I did experiments, for example, with several models, including decision trees. Even, um, different forms of Bayesian decision trees, like BART, for example. And in addition to what Melissa is saying where, for example, we're predicting the probability that it's a threat actor conditioned on certain variables we saw, uh, we might also, as Melissa pointed out, want to say, okay, let's predict, for example, that this threat actor is going to do impact, or a certain form of impact. And with decision trees, that means basically you're building multiple decision trees to do that. You can't just build one decision tree... Well, let's put it this way. You can't easily build one decision tree to have multiple target variables. That's something you get for free with the Bayesian network. Another thing I'll say, in addition to, um, marginalization, is the Bayesian network is more general. So, it could actually handle kind of a broader graphical structure. The decision tree is a specific graph.

Cole Sodja: So, it kind of already constrains you, if you will, to learning a certain structure over the data. Whereas the Bayesian nets, they could give you a little more general structure. We could also build these models that are time dependent, what are called dynamic Bayesian networks. That's something much harder to do with tree models. So, it's just a more flexible model as well, I would say. In my experiments, the Bayesian network did perform better on average than the set of decision trees I considered.

Nic Fillingham: I'd like to better understand the relationship between this model and folks like Justin. So, is Justin, as a very experienced threat analyst, is Justin helping you define labels and helping you sort of build some of the initial... I'm gonna get the taxonomy wrong here, so please correct me. But the initial sort of properties of the model? Or is, is Justin, as an analyst, interpreting what you sort of think you have in the model? How, how do I understand the relationship between the analyst and, and how they're providing their expertise into, into this model?

Melissa Turcotte: All three.

Nic Fillingham: Oh, great. (laughs)

Melissa Turcotte: All three things you said are actually correct. So, so hopefully we, we've explained it somewhat well. So, yes. The first stage, right, Justin? The analysts are providing us our label data. So, yes. That's the first thing. And then they also help us with... You have the raw data, but there's so much data processing that happens before this data's in this tabular form that's like, yes, these are the features we are tracking, so think of your TTPs, the different nodes in your graph. Getting the data into that, kind of that schema, the threat analysts help with. So, you know, they help define what these tactics, techniques, and procedures are that we should track. Like you said, you can't be super broad. Lateral movement doesn't really have a lot of meaning compared to, um, kind of, the different ways in which someone can do lateral movement, and how granular you want to go.

Melissa Turcotte: So, we discuss with the analysts all the time to kind of build up, you know, the ontology, if you will. And then, you know, as a first stage, like I said, it's a small data sample, so Justin helps inform what the model thinks, in a probabilistic sense. So, one thing I might ask him... you know, I'm borrowing from our toy example, but... if I saw network scanning, modify system process, and transfer tools, but didn't see any of the others, do you think it would be this actor X? Or do you think it would be ransomware? And he would be like, hmm, I would probably be 60% certain. I can take that information and encode that directly, so that, in the absence of any data, the model would return 60%. If I didn't see any data, it would return what Justin believed was the probability in the presence of a certain number of variables.

Melissa Turcotte: And then we kind of see data and we update our beliefs over time based on that. And then, also, after we've kind of trained these things, I go back to Justin and say does this make sense to you? So, he, he's kind of involved in all three, the whole process.
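
A minimal sketch of what "encoding 60% directly" can look like, using a Beta prior over the probability that this TTP combination means actor X. The pseudo-counts and the labeled-incident counts below are invented for illustration:

    # Justin's "60% certain" becomes a Beta(6, 4) prior: its mean is 0.6,
    # and the pseudo-count of 6 + 4 = 10 controls how strongly his belief
    # resists being moved by new data.
    alpha, beta = 6.0, 4.0
    print("prior mean:", alpha / (alpha + beta))        # 0.6 -- no data yet

    # Hypothetical labels: 9 of the next 10 incidents showing this TTP
    # combination were confirmed by threat intelligence as actor X.
    alpha, beta = alpha + 9, beta + 1
    print("posterior mean:", alpha / (alpha + beta))    # 0.75

    # With ten times more labeled data, the prior barely matters anymore:
    # "the data will always win through."
    alpha, beta = alpha + 90, beta + 10
    print("posterior mean:", alpha / (alpha + beta))    # 0.875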

Nic Fillingham: Melissa, I think you're telling me you've built a virtual Justin.

Melissa Turcotte: That, that is what we are literally trying to do, and, you know, back it up with data as well. You know, I'm a firm believer that everyone has their subjective beliefs, and Justin has beliefs as well. Oftentimes, I can prove analysts wrong. They think something, and I'm like, well, the data is telling me something else, so we need to figure out, you know, that discrepancy. But, yes. We are essentially trying to build virtual Jus- uh, Justins. Although I don't think there's any stage at which we won't need the analysts to constantly feed back in with the new information they have.

Nic Fillingham: Got it. And then can it come full circle? Justin, how do you as an analyst, how do you get smarter and better at what you do by what this model is, is telling you? What's the feedback loop look like here for you?

Justin Carroll: It's one of those where, basically, using the model kind of supercharged my abilities, where, instead of having to look at this very granular kind of ad hoc, oh, this may be interesting, now I have the instances already surfaced to me, and I have a good understanding of what success rate through the kill chain the attacker was able to get. And maybe figure out which ones I needed to enrich more, to understand was there data that we can add into the model because they've done something different that we need to capture, and then look for opportunities in that way. So, really, it made it where, give or take, it would sometimes take anywhere from 10 to 20 minutes to try and figure out, like, is this who I think it is? And, like, what have they done? What are their goals? To just looking at the result from the model and, usually within seconds, being like, yeah, that looks exactly right. It's confirmed, I think that's spot on.

Natalia Godyla: So, Justin, was there something that was the most surprising in working with this model? Something that the model taught you either about threat actors or any details about the features?

Justin Carroll: One of the things was kind of reexamining my confidence levels on different parts of the attack. Um, where Melissa was stating, for instance, you know, the data's suggesting this and the model's coming to this conclusion, uh, you know, thinking that it's this probability, there would be times where I'd have to kind of reevaluate and think, like, hmm, I might've been missing something or overestimating the prevalence of a particular thing and saying it's related to such. Like, uh, I can tend to get very biased based on my narrow scope of the attacks that I'm looking at and think that it's related to this thing, but the model was able to provide a lot of clarity on some of the behaviors that maybe I didn't think were as confident a signal, or were an extremely confident signal that I wasn't giving the appropriate weight. That's one of the advantages of using it to understand what the attacker's doing: I let it do much of the leg work once everything's kind of coded in. And then occasionally, if we found opportunities where it was like, hmm, this still isn't quite right, then it could be tuned as necessary.

Justin Carroll: I think that was probably one of the biggest ones, kind of trying to work through and actually spell out, like, my own thinking processes when I'm evaluating the data. It was something that you just kind of do without thinking, where you're constantly, as an intelligence analyst, looking at data and making conclusions on that data. But you're not usually saying, like, okay, I saw this, so I'm gonna give it a 60% probability that it's this. Sometimes it's either gut intuition or working on it that way. But actually having the model encode and return back what it was understanding made a, a pretty big impact in trying to understand how my own decision processes work and basically how best to kind of think

Justin Carroll: About these different, wide array of attacks that we're constantly investigating.

Nic Fillingham: The types of indicators that you're building this model on, again, please correct me on my taxonomy here, but you're not looking for, you know, NFO files, or like ASCII art, or, you know, the actual threat actor's name being sort of hidden somewhere in a jpeg that they drop for the LOLs. Like, you're not looking for a sort of literal signature of these threat actor groups. What you're doing is you're seeing the actions that have been taken, and without any other way of attributing them to an individual group, you're piecing them together.

Nic Fillingham: And as you, as you get more actions and you piece them together based on the, the labels that you get from people like Justin, you're able to, to ultimately have a high probability that it's this threat actor group, and they're doing this thing, and they're likely to do this thing next. Have I got that right? In no way, shape, or form are you actually finding a secret text file that has, you know, the, the handles for all the hackers who are doing it for the LOLs.

Cole Sodja: So let me just quickly jump in, you pretty much nailed it. I'll say this, so, we wanted to do both actually, right, because we don't want to restrain the model if it's, if it's gonna add predictive power. So like you said, we're not actually searching, grepping for example, for a threat actor name in some file or image, certainly not that level. But, for example, some of the actors, maybe they have common infrastructure, maybe they use particular types of tools in their attack typically, right? Like, maybe there's a SHA-1 out there they've used a lot in their attacks, or, or recurring IP addresses they use as part of brute forcing.

Cole Sodja: Those are there, but those are very specific and if you just relied on those, like Melissa was saying, either one or a few of those, you're not gonna generalize. You'll probably miss that attacker, right? But we certainly don't want to exclude it from the model because, um, if we happen to see that, the model will, uh, come back with a different type of probability, right? It'd be like, okay. Now the model might be more confident early, rather than waiting to see how the rest of the kill chain progresses. On the more general side, we probably won't go to the MITRE categories, 'cause they're a little too general, right? But if we go to some of the sub techniques, we don't actually have to look at the particular types of executables, or tools, or IPs used.

Cole Sodja: Sometimes just the timing and sequencing is enough, actually, to narrow down to, maybe not a particular threat actor, but a group of actors. Or, more generally, we can say with high confidence, you know, this is a human adversary. They're taking this amount of time to do discovery commands, they're, they're doing lateral movement in these types of ways. And the model could recognize that, even without knowing the particular commands, it's just seeing the more general techniques involved, right? So we do a bit of both, actually. We tend to want to rely more on, kind of, the general attacks or indicators, as you're saying, that's right. But we certainly don't want to throw away specifics that are reused, because we could get ahead of the attack much earlier too. So it's a bit of both at the end of the day.
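
To illustrate the point that sequencing alone can carry signal, here is a toy likelihood-ratio comparison between two hypotheses, a human-operated intrusion versus automated commodity activity, using invented first-order transition probabilities between broad attack stages. It is a sketch of the idea only, not the team's model:

    import math

    # Invented transition probabilities between broad attack stages,
    # one table per hypothesis.
    human_operated = {
        ("initial_access", "discovery"): 0.6,
        ("discovery", "lateral_movement"): 0.7,
        ("lateral_movement", "impact"): 0.5,
    }
    automated = {
        ("initial_access", "discovery"): 0.1,
        ("discovery", "lateral_movement"): 0.2,
        ("lateral_movement", "impact"): 0.1,
    }

    def log_likelihood(stages, table, floor=1e-3):
        # Transitions a hypothesis has never produced get a small floor
        # probability instead of zero, so one surprise isn't fatal.
        return sum(math.log(table.get(step, floor))
                   for step in zip(stages, stages[1:]))

    observed = ["initial_access", "discovery", "lateral_movement", "impact"]
    llr = (log_likelihood(observed, human_operated)
           - log_likelihood(observed, automated))
    print(f"log-likelihood ratio (human-operated vs automated): {llr:.2f}")

A large positive ratio says the ordering of stages alone already favors a human adversary, before any tool-, hash-, or IP-specific indicator is consulted.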

Melissa Turcotte: So yes, Nic, if, if, if you have an evil bit, look for the evil bit. You don't need data science for that.

Nic Fillingham: (laughs)

Natalia Godyla: And how is this model being used today? Meaning, is this a model that's being used by our internal security team to protect Microsoft and its customers, is it being used by the Microsoft Threat Experts group, or is this actually embedded in some of our solutions today, and our customers are feeling that benefit? And what is the future intent of the model?

Justin Carroll: So, there are multiple uses in place for the model. One of the big things for me, so in my own selfish interest, is intelligence. It's one of the easiest ways that I can keep tabs on the attacker and continually build new profiles and understand, basically, report out, this is what they're doing, this is how they're doing it, this is how active they are. Like, are we seeing, you know, large volumes of their attacks, are they taking a break, that kinda stuff. Then, Microsoft Threat Experts are using it as a signal to help understand attacks early on in the kill chain so that they can get those notifications out, ideally before the ransom, which can be quite difficult a lot of the time depending on the adversary and how quickly they seek to ransom. A lot of times there isn't a great deal of time.

Cole Sodja: Yeah, there's other products, for example, M365D. So, um, there are plans, uh, it requires some engineering, ultimately, because this is a big product, um, huge customer base and so on. But there are already plans in motion to take what we've built as part of this framework and integrate it into that product. There's other products as well, both from a threat intelligence perspective and possibly from a SOC alerting perspective, where I'm in active discussions with other product teams across Microsoft to do the POC, make sure it works with their data, make sure they're comfortable, and then work with their engineering teams to at least get that in the plan. Those are ongoing discussions, but M365D does have, kinda, I'll say, in their planning cycle, to get this in the product.

Nic Fillingham: I wonder if this might be a good time to bring our secret special guest on microphone. Josh, if you're there, I wonder if you could jump in on this one. I think you've understated the power of what you've built here. From everything that you've just explained, you know, within a couple of minutes of a threat actor getting initial access, to have a high-probability prediction and be able to contact the customer and say, here's who we think is inside your network, here's what we think they're gonna do next, so they can shut it down. This is the next level, right? And, and Josh, when we interviewed you on episode three, you were hinting at this, if I'm not mistaken. Is this, is this sort of what you guys have been working on?

Joshua Neil: Yeah, I'm so proud that we, that we took it from concept to realized value for the customers, and, and at this point we've had that impact with our customers in stopping human-operated attacks. And, and so it's really exciting, and, and it's, it's on the journey. But, you know, if I extract an overall theme from this, it's consistent with that podcast that we had before, because I was sort of complaining about AI. And I was sort of complaining about what we see in some of the, in some of the branding and marketing that, that folks do in, in cyber security. And I think this team and, and the work they've done exemplifies the right applications of data-driven methods.

Joshua Neil: There is no magical artificial intelligence today. What there is, and this is a, an experience that all of us on the data science team have had over the, over the past few years, and really for me about 20 years, is we can use data and some mathematics and some computing to begin to automate and accelerate what the humans are doing. And so, by sitting very closely with, and working very hard with, the human experts like Justin, we're explicitly encoding their knowledge into models. So that's one thing, is that the data science we're doing is to automate some of the stuff they're doing today. But the intention is not to solve the world, not to give our customers a license to solve security, we're, we're not gonna be able to do that. What we are able to do is uplift the sophistication of our customers' operations.

Joshua Neil: So, you know, what Justin sort of reflected on, uh, he's able to do a more interesting job, a more sophisticated job, because we're taking the data and his knowledge and encoding it and accelerating and automating some of the stuff that he's having to do manually now. And that's where the real nuts and bolts are, you know, where the rubber really meets the road here: there's no magic gun that's gonna blow away all the adversaries with, with AI. What there is is hard work between data scientists and threat expertise to uplift their capabilities and accelerate their effectiveness in the face of the adversary. And that's what I would like to get across to the, to the listeners, is that by hard work and careful and close collaboration between data science and threat expertise, that's how we really make progress in this space.

Nic Fillingham: Thank you so much, Josh. And I just wanted to quickly clarify, from a previous comment from Cole, so this model is in use now, correct? Folks like Justin, Microsoft threat analysts, they are using this model now to make the model better, and to be able to get that additional information and those confidence levels in, in, in doing their analyst work. And so Microsoft Threat Experts customers are directly benefiting from this work, as of today. Is that correct?

Joshua Neil: That's correct. We've sent targeted attack notifications to customers based on this model.

Nic Fillingham: You've all been very, very generous.

Natalia Godyla: Thank you for that. And, and thank you to the whole team here for joining us on the show today.

Melissa Turcotte: Absolutely.

Cole Sodja: My pleasure.

Joshua Neil: It was a lot of fun as always. And, and thank you, Nic and Natalia for this.

Natalia Godyla: Well, we had a great time unlocking insights into security, from research to artificial intelligence. Keep an eye out for our next episode.

Nic Fillingham: And don't forget to tweet us at MSFTSecurity or email us at securityunlocked@microsoft.com with topics you'd like to hear on a future episode. Until then, stay safe...

Natalia Godyla: Stay secure.