Security Unlocked
Ep 2 | 10.14.20

Unmasking Malicious Scripts With Machine Learning

Transcript

Nic Fillingham: Hello, and welcome to Security Unlocked, a new podcast from Microsoft where we unlock insights from the latest in news and research from across Microsoft security engineering and operations teams. I'm Nic Fillingham-

Natalia Godyla: And I'm Natalia Godyla. In each episode we'll discuss the latest stories from Microsoft Security, deep dive into the newest threat intel, research and data science.

Nic Fillingham: And profile some of the fascinating people working on artificial intelligence in Microsoft Security. If you enjoy the podcast, have a request for a topic you'd like covered, or have some feedback on how we can make the podcast better-

Natalia Godyla: Please contact us at securityunlocked@microsoft.com or via Microsoft Security on Twitter. We'd love to hear from you.

Natalia Godyla: Hi Nic. So we're finished with episode one. We're onto episode two.

Nic Fillingham: Yes. Welcome back everybody. This is episode two. We made it. We are now officially expert podcast hosts.

Natalia Godyla: Yeah, I got my certification in the mail.

Nic Fillingham: Nice. Mine hasn't come through yet. I may have been denied, but I'm glad you got yours. One of us is qualified. That's good.

Natalia Godyla: Yeah, this validates the whole thing.

Natalia Godyla: So today we have another great lineup of guests. We'll be talking with three experts from Microsoft, kicking it off with Ankit Garg and Geoff McDonald, who will be telling us about AMSI and how we're using machine learning to stop Active Directory attacks.

Nic Fillingham: This was a fantastic conversation, and I thought I knew about AMSI and sort of what it did and how it worked. I'm really glad that we asked them to go back to first principles and explain that to us, because I got a much better feel for how the AMSI interface works, how powerful it is, and what its relationship is to machine learning. And hearing Geoff and Ankit and their team talk through how they utilize machine learning was just fantastic, or fascinating I should say. So it's a great conversation. I hope folks enjoy it.

Natalia Godyla: Yeah. And paired with that, we had Dr. Josh Neil, a principal data science manager, who talked to us about some really interesting perspectives on AI, which were controversial and definitely enlightening. Ultimately he doesn't like using the word AI and prefers different terminology, and he definitely puts into perspective what terms we should be using and how we should be defining them.

Nic Fillingham: Yes, a very compelling argument from Josh on why we probably shouldn't say AI, or perhaps not say AI in the places where we are saying AI, and something I'm definitely going to try and take to heart. I also loved hearing Josh talk about the links between what he does in data science and music as a former professional drummer, which is a bit of a spoiler there from the conversation. But I'm a bit of a musician myself, and so it was great to bump into another musically inclined person in the security space.

Natalia Godyla: Yeah. And I am the opposite of musical, so you can carry that piece of the show.

Nic Fillingham: But you've got some dance moves, though, so that counts.

Natalia Godyla: Yes. Yeah. So to all of our listeners, just know that I have awesome dance moves that I'm doing during the show.

Nic Fillingham: When the podcast becomes a video, we'll definitely get that captured at some point. All right. So shall we get on with the show?

Natalia Godyla: Yeah, let's do it. Episode two, here we come. Well, welcome to the show Ankit and Geoff. Thank you for joining us today.

Geoff McDonald: Thank you.

Ankit Garg: Thank you.

Geoff McDonald: Excited to be here.

Natalia Godyla: We're excited to have you. To kick things off, I'd love to let our audience get to know you a little bit better. So if both of you could share your role at Microsoft and what your day-to-day looks like, that would be great.

Ankit Garg: Hey, I'm Ankit. I work in the Windows Defender research team in Melbourne. In my day-to-day work we analyze new attacker techniques and campaigns, and then think about what will be the best fit to cover those techniques and campaigns using the various types of detection approaches we have. So whenever we get a new technique or a campaign, we look at whether we can cover it with a client-side detection or a cloud-side detection, or whether we can create a new machine learning model to cover the technique at a broader scale. So that's what my day-to-day looks like, just trying to discover the malware and all.

Natalia Godyla: Are you focused on a specific product for these detections?

Ankit Garg: So actually, it's not a specific product. We try to look at a broader range of products. So it can be, let's say, an EXE which is doing something malicious, or a script file which is doing some malicious stuff. But yeah, most of it is on the Windows side of things.

Natalia Godyla: Interesting, thank you for that. And Geoff?

Geoff McDonald: So, I'm Geoff McDonald. I work on Microsoft Defender Antivirus ATP. I lead a team of data scientists who build machine learning models to protect our customers from malware attacks. We build machine learning models into the antivirus product itself which run on your device, usually highly performant, with low memory overhead and low CPU overhead so as not to slow down the device. And then where we build a lot of our machine learning models is for our cloud protection service. So we've got clusters of servers in each region around the world which run real-time machine learning models, and that's where we get most of our impact.

Geoff McDonald: A lot of what we do on our team involves building machine learning model pipelines. So we'll be coding machine learning and big data pipelines, training the latest machine learning models, and then setting up pipelines to automatically retrain, redeploy, and test these every single day. And we build machine learning models for a lot of really interesting scenarios. Today we're talking about our AMSI script behavioral integration, where we've built machine learning models specifically for the scripting engines.
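[Editor's note: conceptually, the daily retrain-and-redeploy loop Geoff describes might look like this minimal sketch. All names, thresholds, and the scikit-learn model choice are illustrative assumptions, not Defender's actual pipeline.]

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split

def daily_retrain(features, labels, deploy, min_precision=0.99):
    """Retrain on the freshest labeled telemetry, test, then redeploy.

    `features`/`labels` are the day's featurized samples; `deploy` is a
    hypothetical callback that pushes the model to the serving fleet.
    """
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.2, stratify=labels)

    model = GradientBoostingClassifier().fit(X_train, y_train)

    # Gate deployment on an evaluation step so one bad day's labels
    # can't silently ship a noisy model.
    precision = precision_score(y_test, model.predict(X_test))
    if precision >= min_precision:
        deploy(model)
    return model, precision
```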

Natalia Godyla: So if both of you could tell us a little bit more about what you discussed in your recent blog on stopping Active Directory attacks with AMSI and machine learning, that would be awesome.

Ankit Garg: Yeah. So in the blog, we discuss the broader machine learning project which we did. Initially we started with a challenging problem: how do we detect script-based attacks in a fairly generic way? As we all know, there has been a shift in the last few years from normal disk-based attacks, which are more focused on PE files, to script-based attacks. And it is very easy to obfuscate a script and tailor it to the environment as well. So that was a big, challenging problem for us. What we did is look at the behavior and the content of the script at runtime using AMSI, then create a machine learning model using that data and try to stop the attacks by looking at the patterns.

Geoff McDonald: One of the challenges we often see with our customers is dealing with human-operated ransomware attacks. This is a big issue for our enterprise customers, where the attackers breach an endpoint in the targeted enterprise and then use lateral movement techniques to infect the whole network, encrypt everything within the organization at the exact same time, and then demand a very large ransom from the enterprise. Even a million dollars can be a reasonable amount that they would demand after encrypting everything in the organization. So this has been a really big plague upon the enterprise businesses out there. And once they breach a box in your network, which is usually through phishing, or maybe an RDP brute force attack, what they're going to be doing is trying to infect as many devices on your network as possible.

Geoff McDonald: Now Active Directory is the infrastructure which manages identities within organizations, and it's often the target that a lot of attackers go after in order to try to move laterally within the organization. For example, there are two Active Directory attacks that our machine learning model stopped in this blog, and both of them were for really different purposes. One is a blue team tool called BloodHound. BloodHound is used by defenders to help analyze and enumerate Active Directory within the organization, to look at everyone's roles and the permissions and access of all resources within the organization. So it's a really useful defender tool. But we actually see this same Active Directory enumeration tool being used by attackers to find defense flaws in the organization. Often they're trying to move laterally to the domain controller of the enterprise, because once they hit the domain controller, they have full access to the entire organization, and that's kind of the jackpot where they can encrypt the entire organization at once.

Geoff McDonald: Now, the second attack against Active Directory in the blog was a Kerberoasting attack, which is used to elevate privileges within the organization. So they've compromised a single device within the enterprise, and they can use Kerberoasting by interacting with Active Directory on the domain network to dump credential hashes for other accounts and resources on the enterprise network. They extract all of these hashes through Active Directory, then do password cracking offline, outside the target environment, to crack the passwords of higher-privileged users and resources within the organization. Once they crack those passwords, they can move laterally to infect more devices within the organization.
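[Editor's note: as a hedged illustration, one classic heuristic defenders use to surface Kerberoasting is counting Kerberos service-ticket requests (Windows Security event ID 4769) that ask for the crack-friendly RC4 cipher. The record shape and threshold below are hypothetical; real detections layer in prevalence, time windows, and account filtering.]

```python
from collections import Counter

RC4_HMAC = "0x17"  # encryption type commonly requested by Kerberoasting tools

def flag_kerberoasting(events, threshold=10):
    """Flag accounts requesting unusually many RC4 service tickets.

    `events` is an iterable of parsed 4769 log records, e.g.
    {"account": "bob", "ticket_encryption_type": "0x17", "service": "sql01"}.
    """
    counts = Counter(
        e["account"] for e in events
        if e.get("ticket_encryption_type") == RC4_HMAC)
    return [acct for acct, n in counts.items() if n >= threshold]
```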

Nic Fillingham: There's a lot to unpack here in

Natalia Godyla: Yeah.

Nic Fillingham: ... This one. Thank you for that, such a detailed overview from both of you. I'm going to start... I might even use the blog post as a bit of a treasure map here. A-M-S-I, or AMSI: I wonder if you could give us a description of what AMSI is. Is that, in and of itself, the new technique that you're talking about here in the blog? I don't think it is, but I'd love your clarification of what it is, how long it's been around, and what role it's playing in what's being discussed in the blog.

Geoff McDonald: We have a big problem in the security industry where attackers are using the scripting engines to obfuscate and pack their script content. So if they have malicious JavaScript content, for example, they don't just put the malware code directly in the JavaScript. They write their malicious JavaScript content, and then they pack and obfuscate it, so it's really hard to analyze and see the intent of the script. This is in order to evade antivirus products, to keep them from being able to identify and detect the underlying malicious script.

Geoff McDonald: So to help with that, in Windows 10 we launched a new feature called AMSI, which stands for Antimalware Scan Interface. This is an interface where any application on your computer can ask the default installed antivirus product being used by the user to scan content. So this isn't Windows Defender specific; this is a Windows feature we introduced which allows applications to call the default installed AV product, whether it's Defender or Norton or any antivirus product, to scan content. It allows us to cut through a lot of the obfuscation and packing that attackers use to hide the script content, and it allows us to see the actual intent of the scripts in a more behavioral manner, which is a lot more robust for protecting customers.
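[Editor's note: for the curious, any application can call AMSI through a small Win32 API exported by amsi.dll. A minimal sketch in Python via ctypes, for Windows 10 and later, with error handling omitted; the app name and scanned string are placeholder values.]

```python
import ctypes

# amsi.dll ships with Windows 10 and later
amsi = ctypes.windll.amsi

ctx = ctypes.c_void_p()
session = ctypes.c_void_p()
result = ctypes.c_int()

# Register with AMSI, open a scan session, and ask the default
# installed antivirus (Defender, Norton, etc.) to scan some content.
amsi.AmsiInitialize(ctypes.c_wchar_p("DemoApp"), ctypes.byref(ctx))
amsi.AmsiOpenSession(ctx, ctypes.byref(session))
amsi.AmsiScanString(ctx,
                    ctypes.c_wchar_p("Write-Host 'hello'"),  # content to scan
                    ctypes.c_wchar_p("demo.ps1"),            # content name
                    session, ctypes.byref(result))

AMSI_RESULT_DETECTED = 32768  # results at or above this value mean "malware"
print("malicious" if result.value >= AMSI_RESULT_DETECTED else "clean")

amsi.AmsiCloseSession(ctx, session)
amsi.AmsiUninitialize(ctx)
```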

Nic Fillingham: Do I need to do anything, either as an end user or an admin to actually turn this stuff on or configure it, or is it, it's baked in and a part of the product?

Geoff McDonald: This is baked in and part of the product. Yeah.

Nic Fillingham: That's always the best kind of protection. You don't need to do anything; it's already there and it's working.

Natalia Godyla: What was most difficult about identifying these types of Active Directory attacks?

Ankit Garg: This is a really interesting question. So when we actually moved our models to production, we tried to figure out what we were really blocking. First of all, we were very excited that some of our machine learning models which stop risky behavior moved to production, and we were getting a good number of blocks in a particular week. So we were excited to look at what all those blocks looked like, what exactly we were blocking, and what types of attacks those were. Then we dug into the telemetry of the blocks, and one of the interesting things which came up was these Active Directory attacks. When we started looking at some of our PowerShell blocks, we saw we were stopping a lot of these Active Directory based attacks, as they are also based on PowerShell.

Ankit Garg: And when we looked specifically at what they were doing, we found that our model, trained on behavior, was detecting these behaviors where some attacker or some pentester tries to move laterally in the Active Directory environment, or tries to elevate privileges using PowerShell and all. So these are some of the challenges which we had in the past, detecting these types of attacks, and the machine learning models were able to overcome that and fill the gap.

Geoff McDonald: Yeah. One other interesting thing to note is you might be wondering like why machine learning in these cases.

Nic Fillingham: Yes, why machine learning?

Geoff McDonald: Yeah. So one of the big problems we face is the scale and diversity of the attacks we see in the real world. We get an enormous number of attacks every single day, and having humans analyze and write signatures or behavior traps for those attacks doesn't really scale to the volume of attacks we see every day in the wild. So a lot of using machine learning is being able to automatically learn and block these attacks without having to get humans in the loop, which by itself isn't a scalable approach to protecting our customers. We're training the machine learning models to broadly defend against these scripting attacks, and they weren't specifically trained to detect Active Directory PowerShell attacks on the network.

Geoff McDonald: This machine learning model actually preempted the human signatures which would have detected and prevented these attacks. It learned automatically, so we don't need a human in the loop writing more signature-based solutions, which aren't very robust and are a lot more reactive. Machine learning is able to be proactive and scale in a way that human response can't.

Nic Fillingham: This is what machine learning can do. It's not just about infinite monkeys with infinite typewriters. It's about being able to see the evolutions faster and more efficiently than a bunch of humans sitting in the SOC.

Geoff McDonald: Yep. Yep. Exactly. This PowerShell AMSI protection was probably the hardest scenario that we shipped. For a lot of the other scripting engines, like JavaScript and Visual Basic macros, there isn't quite the same diversity of clean scripts in the whole world as we see with PowerShell. With PowerShell, there is just a humongous, enormous amount of clean PowerShell script being used by all of these enterprises, often custom to those enterprises. And it was one of the hardest ones for us to ship. So we had to work through a lot of problems and a bunch of iterations in order to get it successfully working with a very high signal-to-noise ratio.

Nic Fillingham: And as you said, that's because PowerShell is just so prevalent, and PowerShell in and of itself is so powerful and also so customizable, that there's probably not a lot of overlap between two enterprise customers doing the same thing with PowerShell at the same time.

Geoff McDonald: Exactly. And the implications of a false positive can be fairly disruptive to an enterprise, on top of that too.

Nic Fillingham: How do you tackle that problem of trying to sift benign from malicious with something like PowerShell? It sounds like maybe it's an easier, and I use easier in inverted commas that no one can see because it's a podcast, but maybe an easier task, as you say, with some of the other scripting languages, but due to the nature of PowerShell, how do you tackle that?

Ankit Garg: So initially, when we started looking at the signal-to-noise ratio, we found, as Geoff mentioned, that a lot of the blocks our model was making were very similar to benign things. So what we did is narrow down those cases to understand why exactly our model was detecting benign content. When we dug into the data, we came up with new features and looked at how we could keep these benign things from getting detected. For that, we included a lot of guardrails, which is more like, "Okay, we start looking at age and prevalence, and we also start collecting a lot of new features which we can use." Those features can eliminate the benign samples and let us recognize more of the malicious content.
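[Editor's note: a minimal sketch of the kind of age-and-prevalence guardrail Ankit describes. The thresholds and parameter names are illustrative assumptions, not production values.]

```python
def should_block(ml_score, first_seen_days, machines_seen_on,
                 score_cutoff=0.9, age_cutoff=30, prevalence_cutoff=10_000):
    """Apply age/prevalence guardrails on top of an ML verdict.

    Content that is both old and widespread across many machines is far
    more likely benign enterprise tooling than a fresh attack, so the
    guardrail suppresses the block rather than risk disruption.
    """
    likely_benign = (first_seen_days > age_cutoff
                     and machines_seen_on > prevalence_cutoff)
    if likely_benign:
        return False  # suppress the block; prefer a miss over disruption
    return ml_score >= score_cutoff
```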

Geoff McDonald: So one of the challenges is that there's an enormous amount of clean PowerShell script that is custom to each enterprise. One of the ways we learn that all of that PowerShell content is benign is that we track what we call healthy machines. If these are enterprise machines that see PowerShell AMSI content on their devices but have never encountered a threat, then there's a high likelihood that the PowerShell content is benign. So when we train our cloud classifiers, we train all of this custom enterprise PowerShell script as benign, and we don't actually have the PowerShell script itself. We just have a featurized description of the PowerShell script. So we can't train on the PowerShell script content itself; we're training on the featurized PowerShell script as negative.

Geoff McDonald: Now, as Ankit mentioned, it's very hard to get malicious labels for PowerShell behavior. So one trick we had to improve our catch rate of true attacks is that we look at the timelines of devices during a known malware attack. If a device encountered malware, we look in retrospect at the first time malware was seen on that device, and then we look at the PowerShell AMSI buffers from around the time the malware was first seen, and we use that to expand our positive label set.
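[Editor's note: putting Geoff's two labeling tricks together in one hedged sketch: featurized scripts from "healthy" machines become negatives, and AMSI records near a device's first known malware encounter become candidate positives. The record shapes and the 24-hour window are assumptions.]

```python
from datetime import timedelta

def build_labels(amsi_records, first_malware_seen, window_hours=24):
    """Derive training labels from device health, not manual triage.

    `amsi_records`: list of {"device": id, "time": datetime, "features": vec}
    `first_malware_seen`: {device_id: datetime of first known detection}
    Devices absent from that map are treated as healthy, so their script
    features are labeled benign.
    """
    positives, negatives = [], []
    window = timedelta(hours=window_hours)
    for rec in amsi_records:
        hit = first_malware_seen.get(rec["device"])
        if hit is None:
            negatives.append(rec["features"])   # healthy machine -> benign
        elif abs(rec["time"] - hit) <= window:
            positives.append(rec["features"])   # seen near the attack
        # Records on infected devices but outside the window stay unlabeled.
    return positives, negatives
```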

Nic Fillingham: That's fascinating, so in order to find malicious, you first focus on benign and clean, and focus your attention there. And then you almost in sort of a post-mortem sense, go backwards in time, looking at telemetry from where there were known attacks and find out what was happening from a feature perspective in PowerShell on those devices. And then you can sort of ascertain maybe where the malicious stuff was happening. Is that accurate?

Geoff McDonald: Yeah. That's a great description.

Nic Fillingham: That is some fascinating problem solving. You guys must feel pretty good about that one.

Geoff McDonald: Yeah. It's really nice to have that ship. It was a challenge with PowerShell, especially.

Natalia Godyla: So what's next for the team then?

Ankit Garg: So actually, right now we are in the process of shipping the WMI AMSI model to production, which will add a WMI capability to our suite. And then we are also thinking of working on the .NET AMSI, as it is pretty new. So we are thinking of working on that and trying to ship those models to production as well.

Nic Fillingham: And then what's next for the cloud detections or the cloud machine learning that you and your team are working on Geoff, is there anything you can give us a sneak peek on?

Geoff McDonald: Oh yeah, I'm really excited. Our cloud machine learning service is really exciting. We run it all in real time as queries arrive. So when your device talks to the cluster of servers nearest to you, we're running about 90 machine learning models in parallel against every single query, producing classifications and using ensembles to make decisions. But the really cool new part is that, in the Azure regions which support it, that cloud service is now getting GPU inferencing. So we're going to be able to scale up a lot of our deep learning models to actually run at the scale we need.

Geoff McDonald: Because each day we have about 800 million queries, and we have to run all 90 ML classifiers against each of these in parallel to come to classification decisions. So it's quite a large-scale problem, but we're really excited about our new GPU capability.
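[Editor's note: a toy sketch of the ensemble fan-out Geoff describes: score one query's features against every model and combine the votes. The real service parallelizes roughly 90 models across server clusters; the simple averaging below is an illustrative assumption, as production ensembles can weight and stack models in more sophisticated ways.]

```python
def classify_query(features, models, block_threshold=0.5):
    """Score one cloud query against an ensemble of classifiers.

    `models` is a list of fitted binary classifiers exposing
    predict_proba; the verdict here is the mean malware probability.
    """
    scores = [m.predict_proba([features])[0][1] for m in models]
    verdict = sum(scores) / len(scores)
    return verdict >= block_threshold, verdict
```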

Nic Fillingham: So all of those queries are running against physical or virtual CPUs, and now they're going to ship over to GPUs?

Geoff McDonald: Yep, exactly.

Nic Fillingham: Wow.

Geoff McDonald: So we're going to be using GPU acceleration for a few of the model types.

Nic Fillingham: Wow. That's pretty exciting. Well, we'll have to get you both on the podcast at another time to talk about that.

Natalia Godyla: Geoff and Ankit. Thank you so much for joining us. It was a fascinating episode.

Geoff McDonald: Thank you so much for having us. It was really a pleasure.

Ankit Garg: Yeah, thank you so much for having us. It was really fun.

Natalia Godyla: And now let's meet an expert from the Microsoft security team to learn more about the diverse backgrounds and experiences of the humans creating AI and tech at Microsoft. Today, we have Dr. Josh Neil on the show with us. So Josh, would you mind kicking it off with telling us what is your role at Microsoft and what does your day-to-day look like?

Dr. Josh Neil: Sure, Natalia, it's nice to meet you. So I'm a principal data science manager, and I work in Microsoft Threat Protection. We're a research team supporting several products, but focused on enterprise security. In terms of what my day looks like, it's quite busy. That's pretty obvious. At this point my career mostly consists of mentorship, guidance and development of research directions and strategy, and interfacing with engineering in order to bridge the gap between research and production solutions for our customers.

Dr. Josh Neil: We're certainly motivated to create innovation, but to do so in a scaled way that can actually help our customers. We're not a pure research organization; we're very motivated to help protect our customers from the threats they face on a day-to-day basis. So there's a real combination between actual research with the scientific method, and the needs of at-scale production computing. A lot of my time is spent understanding the production engineering requirements and letting engineering know what my team needs in order to bridge the gap between the research and a solution for our customers.

Nic Fillingham: How did you find your way into this position? How did you find your way to Microsoft? What was your path to here?

Dr. Josh Neil: Boy-

Nic Fillingham: We have all the time in the world.

Dr. Josh Neil: Okay, great. And I love talking about myself.

Nic Fillingham: Well, you're on the right podcast then.

Dr. Josh Neil: I am. So yeah, I started out as a music major, a music performance major. I played the drums in high school and was in bands in college. It turns out that a formal education in music wasn't consistent with my passion for music, which was really about performance and playing music, not studying music, if that makes sense. And after a couple of years as a professional musician, I also realized that that life wasn't for me. So I went back to school; I always had another passion for mathematics and computing. I wandered around in various majors, including geology and chemistry.

Dr. Josh Neil: But I did end up in pure mathematics with a minor in computer science. And then I got a job at Los Alamos National Laboratory. I think I was hired as a scientific programmer. I was lucky enough to be able to go to school again while employed, and ended up getting a master's in electrical engineering, then a master's in statistics, and then a PhD in statistics. In those days, and this is in the early 2000s, they didn't call it data science. That's actually a relatively new term. I was a numerical programmer or research programmer for a while. Then eventually they called me a statistician, and it's only recently they started calling me a data scientist. That's okay, I'm happy with that. But the work that I did then, and that I continue to do today, is in the application of statistical methods for identification of attacks in computer networks.

Nic Fillingham: Staying with data science for a sec, from your perspective, when did the work that you were doing that was initially referred to as statistics or in the statistics realm, when did it start to bleed over into this field that we maybe sort of broadly think of as AI?

Dr. Josh Neil: That is a loaded question, because... and I have these arguments on LinkedIn, actually; you can see my LinkedIn feed for some of this kind of discussion... I'm demanding that we define AI in the first place. What is it? People have a lot of different answers for that. I think it's another term which is a bit confusing, just like data science, because under the hood it's all a bunch of things. And I will be controversial at times about this, but I don't think we have a good, concrete definition for it, and therefore I don't really like to use the term.

Nic Fillingham: For all the Dr. Josh Neils out there, what word should we laypeople use that is more accurate, if nothing else?

Dr. Josh Neil: I guess I've settled on data-driven methods.

Nic Fillingham: Data-driven methods?

Dr. Josh Neil: Yeah. So they're informed by the data. And the only definition I can really come up with for AI which is appropriate and defensible is when we're trying to actually mimic human brain intelligence: the neurons that are firing in the brain, and the patterns and the learning that we do over time, and so forth. Can we write algorithms specifically to try to mimic that? Then I sort of feel like, "Okay, that's AI." But actually, we can use computers in ways that brains don't work, and that scales, and for problems that humans aren't very good at.

Dr. Josh Neil: So should we really be trying to mimic the brain? I don't know, and if we're not, I'm not sure we're talking about artificial intelligence. People can argue with me about these; these are just terms. But I think what we're really doing, and we should try to make it as transparent as possible, is a bit of math, and a bit of computing, and a lot of data to try to solve people's problems.

Dr. Josh Neil: I could spend quite a bit of time with you talking about explainability. And I know the feeling among our customers that AI in cybersecurity has a lot of snake oil in the market. It's bothered me from the beginning to see intentional or unintentional obfuscation of what we do. Most of the methods that my team develops are focused on explaining what the data is telling us, as well as making decisions with the data.

Dr. Josh Neil: So a mistake that some in machine learning make is to focus only on the raw performance of the machine learning model: error, precision, and recall types of metrics for their detectors or whatever they're trying to do. I will give up some precision and recall, that is, I'll accept more false positives and more false negatives, in order to be able to explain what the algorithm is saying about the data, and be able to pass that explanation all the way to the customer, in this case a SOC. In the end, we want to be able to give our customers extremely clear answers as to why we think something's unusual. Not just "It is unusual and you need to look at it," but why.

Natalia Godyla: What are you passionate about trying to solve within Microsoft?

Dr. Josh Neil: Yeah, thanks for asking. Now you're letting me talk about my passion. That's amazing. So I came to Microsoft in 2018 on purpose, because it was the first company I thought was mature enough in data collection to accomplish what I'm about to tell you. Okay, so that's a big setup for what I'm going to tell you, which is: I believe that signal combination is the wave of the future.

Dr. Josh Neil: That no longer should we be focusing on "Oh, that's phish," and "That's a weird login," and "There's malware on that computer," but instead make a comprehensive effort to combine signals across the enterprise in order to identify attacks. A lot of the work that we do is heuristic. So it's a rule that says, "If X is less than 17, or Y equals 56 and Z is 37, alert," and it'll alert on very specific behavior. And the parameters there, the 56s and the 37s, are actually extremely valuable, because our security experts have worked very hard to

Dr. Josh Neil: ... get these things right and very precise in identifying attacks. Those are part of the ecosystem. Then there's supervised machine learning to decide malware versus benign, or to score malware versus benign probability-wise, and then unsupervised methods, anomaly detection, to say, "That was weird with probability X." All these bits and pieces, and we've been digging so hard into each one of them. We've got these massive deep learning models with a billion layers and 4 billion parameters, whatever, to identify malware. Right? We can say, "This file is bad." Right?

Dr. Josh Neil: I think the, well, the next passion for us, for me and my team, is in the combination of maybe weak signals. "Yeah, you think that's malware, but we can't alert on it because we'd have too many false positives. This thing is suspicious, but it's not suspicious enough." This is how this stuff gets through. Then the next thing that happens, they disable the security tools on the box, they change the registry so they can survive a restart. But in combination: yeah, we have some suspicion they got in here, but if we also combine that with some suspicion that they disabled the security tools or did some reconnaissance, those two together have a really strong multiplicative effect on our probability of detecting true positives and not detecting false positives.

Dr. Josh Neil: So the overall performance of our detectors goes way up just by combining these signals together. So a little bit of a sales pitch is Microsoft threat protection is the product to do that, and I'm so excited to be a part of that research team building that product. That's what I'm here to do. I'm just very passionate about that.
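[Editor's note: Josh's multiplicative intuition has a simple textbook form: under an independence assumption, weak detections combine by adding log-likelihood ratios, so two signals that are individually below the alert bar can cross it together. A minimal naive-Bayes-style sketch with purely illustrative numbers:]

```python
import math

def combined_alert(signal_probs, prior=1e-4, alert_at=0.5):
    """Fuse weak detector outputs into one posterior, naive-Bayes style.

    `signal_probs` maps a signal name to (P(signal | attack),
    P(signal | benign)). The independence assumption and all values
    here are illustrative.
    """
    log_odds = math.log(prior / (1 - prior))
    for p_attack, p_benign in signal_probs.values():
        log_odds += math.log(p_attack / p_benign)  # each signal adds evidence
    posterior = 1 / (1 + math.exp(-log_odds))
    return posterior >= alert_at, posterior

# Two weak signals, neither alert-worthy alone, cross the bar together:
fired, p = combined_alert({"suspicious_script": (0.6, 0.01),
                           "tools_disabled": (0.5, 0.001)})
print(fired, round(p, 2))  # True, ~0.75; either signal alone stays below 0.01
```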

Nic Fillingham: Could you talk just briefly about the makeup of your team and some of the folks that may also have diverse and maybe unorthodox... although what does unorthodox mean... but paths to Microsoft? What kind of experiences are they bringing? Then what do you also look for when you're hiring new people into your team to do the work that you do?

Dr. Josh Neil: Yeah. Great, great question. I work with a lot of students at UW and elsewhere to tell them what I'm looking to hire. So, the team. Let's see. Some basic tenets: I want diversity in the team, both in backgrounds and... well, all aspects of diversity. I very much believe in different backgrounds, experiences, genders, races, ethnicities. I believe very much that those things help us do good research and serve our customers.

Dr. Josh Neil: Experience-wise... I don't believe in having a too top-heavy team, all principal level, 15 or 20 years in. No, I want junior-level folks, mid-career folks, and senior folks. There's a mentorship pipeline that I like to have, where the senior folks get to teach the juniors what they've learned, and the juniors get to learn it. I like that sort of environment of learning and progression.

Nic Fillingham: How many former professional drummers are on the team? Just you?

Dr. Josh Neil: Oh gosh, is there anybody else? Not on my direct team, but there are many musicians in the larger security research org. In research, and in security in general, you tend to find musicians. I like patterns, and so drums and rhythm go together well with that. That's appealing to me.

Nic Fillingham: Have you found yourself consciously bringing any music theory, or that sort of pattern creation or recognition, into this work?

Dr. Josh Neil: I don't think formally, but I trained from an early age, when I was in elementary school, practicing drums, implanting patterns and rhythms into my head. That probably influences what I do. It's not direct, but I certainly have a predilection for identifying patterns in data. That's what I do for a living, and that's also what you do when you're playing drums.

Dr. Josh Neil: Although there's this other subtle thing, which is passion. Musical expression, it's magical. I don't get that feeling with anything else. When I'm playing the drums, it's a little bit different than on a whiteboard with a piece of math or a computer.

Natalia Godyla: What would you say to students to encourage them to enter a similar space?

Dr. Josh Neil: Yeah, thanks, Natalia. I think that we're on the edge of a great innovation time. The data availability... I've suffered through poor data, but the data has really come along. There's almost too much of it for us. So the opportunities are tremendous in data science in general, and the combination of data science and security, like we talked about earlier, is extremely nascent. There is much work to be done, and high demand. So I encourage you to study, then work hard and learn how to write code.

Dr. Josh Neil: But I think also learn the mathematics and do your homework with the mathematics. Really make sure that you understand the fundamentals of probability and statistics, not just application of black boxes. I tend to hire folks who are builders, not tool users. They're toolmakers, and we really get to the fundamentals and you need to know these. But if you do spend the time, you've got such a tremendous promise in your careers that this old guy would encourage you very much to go with gusto into the future.

Nic Fillingham: What gives you hope? It sounds like you're passionate about students and the next generation. Is that the golden, shining light? Is that what gives you hope?

Dr. Josh Neil: Yeah, tremendous hope. I've seen so much progress over the last 20 years. These are amazing times to be alive. I think we're miles ahead of where we were in the past, and the future is very promising for defense. We are going to exceed the adversary, and this next generation is the one that's going to do it, I think. So that's what excites me. Don't get me wrong, I'm not quitting today, but I think we'll see this in the next 10, 20 years. It's going to be a good time for security, coming around.

Natalia Godyla: Innovation and passion will see us through.

Dr. Josh Neil: That's right.

Natalia Godyla: Thanks, Josh, for joining us today. It was a great discussion, and I loved your definitions, or your contrarian definitions, of AI. It was very eye-opening.

Dr. Josh Neil: It was my pleasure, Natalia. Thanks so much.

Natalia Godyla: Well, we had a great time unlocking insights into security from research to artificial intelligence. Keep an eye out for our next episode.

Nic Fillingham: Don't forget to tweet us @msftsecurity or email us at securityunlocked@microsoft.com with topics you'd like to hear on a future episode. Until then, stay safe.

Natalia Godyla: Stay secure.