Security Unlocked
Ep 1 | 10.14.20

Going Deep to Find the Unknown Unknowns

Transcript

Nic Fillingham: Hello and welcome to Security Unlocked, a new podcast from Microsoft where we unlock insights from the latest in news and research from across Microsoft Security, engineering and operations teams. I'm Nic Fillingham.

Natalia Godyla: And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft Security, deep dive into the newest threat intel, research and data science.

Nic Fillingham: And profile some of the fascinating people working on artificial intelligence in Microsoft Security. If you enjoy the podcast, have a request for a topic you'd like covered, or have some feedback on how we can make the podcast better.

Natalia Godyla: Please contact us at securityunlocked@microsoft.com or via Microsoft Security on Twitter. We'd love to hear from you.

Nic Fillingham: Well hello, Natalia.

Natalia Godyla: Hi Nic, how's it going?

Nic Fillingham: It's good. I'm in Seattle and surprise, surprise, it is overcast.

Natalia Godyla: I'm in Boston and from where I'm sitting, it looks pretty sunny. So feeling good, ready for this podcast. What about you?

Nic Fillingham: I'm also excited and ready for this podcast. This is our first one, we're doing it, this is it. This is the first episode.

Natalia Godyla: Yeah, and I feel prepared for it, considering all the podcasts I hosted before, which is a whopping zero.

Nic Fillingham: Yeah. I also have no experience hosting a podcast, but I've listened to a lot.

Natalia Godyla: Which counts. In our podcast we're going to be listening to a bunch of experts. So I think we're primed for that.

Nic Fillingham: Listening and asking questions and that's something that I've done over the past 15 years at Microsoft. I've often listened to incredibly smart people, much smarter than me, talk and every now and then I got to ask them a question. And almost always at the end of those conversations, I've thought I should have been recording. That would have been such a fascinating conversation for other people to listen to. And that's what we're doing here on the podcast. So I'm excited that I can bring that to other people through this format. But you're relatively new to Microsoft, so what have you been doing for the past 15 years?

Natalia Godyla: Yeah, I'm definitely not a Microsoft old timer like yourself, very much a newbie.

Nic Fillingham: Hey, watch it.

Natalia Godyla: Well, I've been in the security vendor, compliance vendor space for a while, and I am super excited to just start meeting people within Microsoft. So a little selfishly excited to have the conversations myself. This place is huge, and so each new episode, I get to meet a couple of new people and learn a lot of things.

Nic Fillingham: The first person we meet in the interview you're about to hear is Arie Agranonik, who is currently in Israel. He talked to us from his attic, which he temporarily turned into a mini recording studio, which we very much appreciated. And he's going to introduce us to the concept of deep learning and how it works in Microsoft 365 Defender, which I think is a great way to kick off the podcast. And then after that.

Natalia Godyla: We talked with Holly Stewart, principal research manager, about her path to cybersecurity and how she builds an awesome security research team. I am particularly excited about this one; she is known as the AI queen, and what a legacy that is, to be known within your organization as the AI queen.

Nic Fillingham: Absolutely, that's a pretty cool nickname. I would try and get that one on a vanity license plate for my car, if possible. And look, before we jump into the podcast and the interviews, I just want to say, you probably heard it in the intro, you may have heard it in the trailer: we genuinely, genuinely want this podcast to be your podcast. We want to represent you, dear listener. We want to ask the questions that you want to hear. We want to cover the topics you want to hear covered. So if you have requests for how we can make this better, or stuff you'd like us to cover in a future episode, please, please, please reach out to us through all those various contact methods, and we will do our very best to incorporate it into future episodes of the show.

Natalia Godyla: Our goal is to be your voice and to provide a platform for you to learn more from these people.

Nic Fillingham: And so with that, let's get on with the podcast.

Natalia Godyla: Let's do it.

Nic Fillingham: So I'd like to welcome Arie Agranonik, a Senior Data Scientist in the Microsoft Defender ATP research team. Arie, welcome to the Security Unlocked podcast.

Arie Agranonik: Hi, thanks for having me.

Nic Fillingham: So Arie, can you tell us a little bit about yourself and the work that your team does?

Arie Agranonik: So I work on the Defender ATP EDR product; I've been at Microsoft for two and a half years. In the EDR product, we basically develop models to detect breaches in the operating system. For anything that is attacking the operating system through different attack vectors, we write machine learning models to defend against it.

Nic Fillingham: Now, it is late July in 2020. We are all still in the work from home mandate due to COVID. So that's why we're not in the same room, but Arie, you're actually over on the other side of the globe from us. You're in Israel, is that correct?

Arie Agranonik: Yes. I'm actually sitting in my attic in Israel. It's 9:00 PM here. So, yeah, it's near Tel Aviv, pretty far. And we're still working from home, so lots of fun.

Nic Fillingham: Well, thank you for taking the time to talk to us on the podcast. Now, you were one of the co-authors of a blog post on July 23rd titled Seeing The Big Picture: Deep Learning-Based Fusion of Behavior Signals for Threat Detection. This was a fascinating blog post.

Arie Agranonik: Thank you.

Nic Fillingham: I understood several portions of it, but I'm really excited to have you on the podcast so that Natalia and I can better understand what is deep learning and how it's being used. And I wondered if you could give us a bit of a summary of what you talked about in the blog post that was published on July 23.

Arie Agranonik: So basically the blog post is talking about a model that we created. We have many models in our product, but this model is a little bit different ... I guess in two ways. First of all, it's a deep learning model, and we can talk about what deep learning is in a minute. And the second thing is, it's using behavioral signals to learn different types of attack vectors that can happen inside the operating system.

Arie Agranonik: And that's very interesting, because it takes things to a higher level: it looks at behavioral signals instead of the lower-level data that machine learning models usually look at. You find that those types of models can actually find very interesting attacks that were not seen before, and that's really the goal of building machine learning or anomaly detection models in security. So that's why I think it's interesting.

Nic Fillingham: So let's start with that first term you just used, deep learning. What is deep learning? Is it a type of artificial intelligence? Is it a type of machine learning? Can you break that down for us?

Arie Agranonik: Sure. So deep learning is basically a neural network with lots of layers; that's why it's called deep. And I think what made it so interesting in the past few years is three aspects. First is algorithms, new algorithms that were developed to train those models. Second is the amount of data; we now have big data that we didn't have before, and labeled data, which is very important as well. And the third is compute, which is something that, I guess prior to 2005, 2010, we had less of, and also GPUs, because deep learning models are trained on GPUs.

Arie Agranonik: All those factors together created a pivot point where those types of models became really, really powerful, and in the last few years we've seen many, many breakthroughs in many areas of deep learning. Think about translation, speech recognition, natural language processing, understanding language, self-driving cars; all of that, really, is essentially a deep neural network working behind the scenes to make it happen.

Nic Fillingham: When you say a neural network, what is it that defines something as a neural network?

Arie Agranonik: First of all, it's an algorithm, a training algorithm, and there's a representation of the model. The model itself is, you could say, loosely based on how the brain works. You have neurons, which are basically activation functions, and those activation functions are only activated when a signal comes in that is high enough. And when you have thousands and millions and billions of those neurons stacked together in the right way, you can create what's called representation learning. Take an image, for example, that has thousands and sometimes hundreds of thousands of pixels: the algorithm can look at the pixels and create layers of intelligence to say, "Okay, what's in the picture?"

Arie Agranonik: There are features that, as you go up the layers, are saying, "Okay, that's a corner, that's a car, that's a face, that's two people shaking hands," and so on and so forth. And as you go up the layers, the network can actually infer, or understand better, what's going on in those complicated data points or complicated images. So that's just one example. Of course, you can also transfer that to any area, like NLP, natural language processing, understanding text, understanding voice, and things like that.
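
To make the idea of stacked layers and activation functions concrete, here is a minimal sketch in PyTorch. It is an illustration only, not the model Arie's team built, and the layer sizes are arbitrary assumptions.

```python
# A minimal sketch of a deep (multi-layer) network in PyTorch, showing how
# stacked linear layers with non-linear activations build up progressively
# higher-level representations. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(1024, 256),  # raw input features -> low-level representation
    nn.ReLU(),             # activation: a neuron "fires" only on a strong enough signal
    nn.Linear(256, 64),    # mid-level representation
    nn.ReLU(),
    nn.Linear(64, 2),      # final layer: e.g. benign vs. malicious
)

scores = model(torch.randn(8, 1024))  # a batch of 8 example inputs
print(scores.shape)                   # torch.Size([8, 2])
```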

Nic Fillingham: So is it accurate to say that a neural network is an algorithm that more closely mimics the way the human brain functions?

Arie Agranonik: In a way, yes and no. Yes, in the sense that it's neurons, of course, and it's based on the brain. But no, because there's a long way to go before we can reach what's called AGI, artificial general intelligence. And there are a lot of hurdles that we need to jump through to get there, because the neural network is just a representation; we still need planning, we still need memory, we still need common sense, things like that, that make our brain much, much more complicated and smarter than those neural networks. But identifying images, identifying malware, identifying cars on the street, things like that we can already do pretty well.

Natalia Godyla: And is that the goal, to move closer to neural networks acting like the brain?

Arie Agranonik: I think it is the goal. There's a lot of research going on in the industry to create AGI; there are companies like DeepMind and OpenAI that do only this, only the research that needs to happen to make AGI. But usually the things that work in industry are much less than AGI. They're basically just solving known problems and trying to solve representation learning.

Natalia Godyla: Right, makes sense. So we already have applications that we can go after. We don't have to evolve to the state where it mimics the brain exactly.

Arie Agranonik: Yeah. I think even if we stopped all AI research right now and just started to implement what was researched in the past 10 years, it would take us a decade to actually put it all in production. So we have a few years to go.

Natalia Godyla: And then, what are the challenges that the team had to overcome when building the deep learning models and applying them to the product?

Arie Agranonik: The challenges, I think, fall into two types. The first is the data itself and its representation. Our goal was to take a process tree, look at the behaviors of this process tree, and extract the behaviors that researchers wrote. We call them observations. Observations are basically anything that happens in the operating system that could be a little bit malicious, but not distinctively enough to create an alert. For example, if you create a file in some folder in the operating system where you shouldn't, or you rename a file to the name of one of the operating system's tools, that would be considered an observation. So taking those observations, grouping them together into a form that a neural network can accept, and feeding all that data into the neural network, that's challenge number one, I think.
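
As a rough illustration of that first challenge, here is a sketch of one plausible encoding. The observation names are invented for the example, not taken from the product.

```python
# A hypothetical sketch: multi-hot encoding researcher-written "observations"
# per process so a neural network can consume them. Observation names here
# are invented for illustration.
import numpy as np

OBSERVATIONS = [
    "file_created_in_suspicious_folder",
    "file_renamed_to_system_tool_name",
    "registry_run_key_modified",
    "outbound_connection_to_rare_host",
]
INDEX = {name: i for i, name in enumerate(OBSERVATIONS)}

def encode_process(observed):
    """Multi-hot encode the observations seen for one process."""
    vec = np.zeros(len(OBSERVATIONS), dtype=np.float32)
    for name in observed:
        vec[INDEX[name]] = 1.0
    return vec

# One row per process in the tree; a matrix like this is what a network
# could be fed.
tree = np.stack([
    encode_process(["outbound_connection_to_rare_host"]),
    encode_process(["registry_run_key_modified",
                    "file_renamed_to_system_tool_name"]),
])
print(tree.shape)  # (2, 4)
```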

Arie Agranonik: And challenge number two is basically creating the architecture that is able to learn those features, or those representations. And I guess there's also challenge number three, which is putting it into production. There's a lot of research going on in deep learning, but much less of it actually works in production. So those are the three challenges in this project.

Nic Fillingham: Just to clarify, the work that you and your team talked about in this blog post, this is actually in production right now. This is a part of the ATP product and it is protecting customers.

Arie Agranonik: Exactly. We have many, many models in the product that discover those types of things, and this is just one of them, but yes. It's been in production and is providing good value.

Nic Fillingham: And this model is sort of uniquely qualified at identifying malicious process trees, is that accurate?

Arie Agranonik: Yeah. So think about what happens when someone takes over your machine: someone basically sends you a phishing email and you click on some link or open a Word document. What happens is that a script is executed on the machine. This script might be very thin and very small, and it might just call out to the network, to a C2 or command-and-control server out there that the attacker is using, to download some other file. And that file will also be executed. Once those scripts are executed, they actually create a process tree on the machine. The process tree has many, many processes. Some of those processes do next to nothing, and others might do some malicious activity, but not too much. So if you look at a single process in the process tree, you might not even recognize that it's malicious.

Arie Agranonik: You have to look at the big picture. And when you look at the entire process tree, you can find that, say, the first process went to the network and downloaded something. The second process wrote to the registry. The third process created persistence on the machine, so that the next time you reboot the machine, it will run itself again. Things like that, if you look at the entire process tree, are actually very, very malicious. So what we try to do is look at all those activities that happen in some timeframe, of course, and try to classify them as malicious or benign. And we have millions and millions of examples, so it's not an easy task.
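
A toy sketch of that "big picture" idea follows: embed each process, pool the signals across the whole tree, and emit one verdict. The architecture, sizes, and pooling choice are assumptions for illustration, not the production model.

```python
# A toy process-tree classifier: no single process looks malicious, but
# fusing signals across the whole tree before classifying can reveal it.
import torch
import torch.nn as nn

class TreeClassifier(nn.Module):
    def __init__(self, n_observations=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_observations, 32), nn.ReLU())
        self.head = nn.Linear(32, 2)  # benign vs. malicious

    def forward(self, tree):
        # tree: (num_processes, n_observations) for one process tree
        per_process = self.encoder(tree)        # embed each process
        pooled = per_process.max(dim=0).values  # fuse signals across the tree
        return self.head(pooled)                # one verdict for the whole tree

model = TreeClassifier()
verdict = model(torch.rand(5, 4))  # a tree with 5 processes
```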

Nic Fillingham: Got it. And so, you talk about in the blog post, the Bondat worm. What was unique about the Bondat worm and I guess, also what was unique about this approach in being able to identify it?

Arie Agranonik: So the Bondat worm, I think, was introduced in early 2008. What it does is, when someone sends you an email or a phishing campaign and you download it and run it, it will download some more things from the C2. And then it might do coin mining, or DDoS attacks against other machines, WordPress sites, or other locations. That's what we saw happening on the internet. But eventually, with this type of worm, once it's installed on the machine, the attacker can choose to do anything they want, because they have control through the C2. They can send commands; they have a botnet, and the botnet can do really whatever the attacker chooses to do.

Natalia Godyla: So when we're thinking about new strains of malware, how are we able to evolve our behavioral observations to combat sophisticated attacks that are eluding our detection mechanisms?

Arie Agranonik: Yeah. So think about ransomware, for example. Ransomware will usually have a very distinctive set of activities that it performs. Now, there are different types of ransomware, obviously, and each one might have different signatures and different behaviors. But if you look at it from a higher level, as a human, you know what ransomware is actually trying to do: scan your machine, scan your computer, and maybe spread itself to other machines. Once it scans and finds Word documents or other types of documents, PDFs, whatever, it goes to each one of them, encrypts it, saves the file, deletes the original file, and goes to the next. That's a pattern that we as humans can really understand. It's quite challenging for machine learning models to do that.

Natalia Godyla: That's fascinating. When we think about the variation in attacks and how they evolve over time, there's also the variation among different organizations. So, can you speak to the future of customizable machine learning models? For instance, users and endpoints all act differently in each organization. How do you see machine learning models evolving over time to meet that need?

Arie Agranonik: In this case, we don't use a per-organization model; we have a general model across all organizations. But what we can do, and that's probably something to think about in the future, is transfer learning. Basically, you teach the model on millions and millions of examples that you collected from many, many organizations, really the same way that transfer learning is done in image classification. In image classification, you can train a model on millions of images, and when you have a small dataset of your own, with several types of images that the model was not trained on, you can do transfer learning. You can freeze the network and only train the last couple of layers on your own data, once it has already learned from the millions of examples, and then that network will be able to classify the types of examples that you want. The same methodology could be used for per-organization activity.
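
Here is a minimal sketch of the freeze-and-retrain step Arie describes, in PyTorch. The "pretrained" model is a stand-in defined inline, and the sizes are assumptions.

```python
# A sketch of transfer learning: freeze a network trained on millions of
# examples, then retrain only the final layer on a small new dataset.
import torch.nn as nn
import torch.optim as optim

pretrained_model = nn.Sequential(        # stands in for the large trained model
    nn.Linear(1024, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 2),
)

for param in pretrained_model.parameters():
    param.requires_grad = False          # freeze the learned representation

pretrained_model[-1] = nn.Linear(64, 2)  # fresh, trainable final layer

optimizer = optim.Adam(
    (p for p in pretrained_model.parameters() if p.requires_grad), lr=1e-3
)
# A training loop would now update only the last layer on the small dataset.
```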

Nic Fillingham: Arie, can you walk us through the process with which you and your colleagues created this technique? How long did it take? Who was involved? And what kind of learnings did you go through along the way?

Arie Agranonik: My colleagues who worked on this are [inaudible 00:19:15] and [inaudible 00:09:16], and both of them helped a lot with the blog. It took us a few months to really get the collection process working correctly. When you work on a neural network, part of the challenge is to find the architecture that will work best given the data that you have. That process also takes a while; you have to run many, many experiments on different architectures and different types of parameters, what we call grid search or parameter sweeps, until you find the right architecture. And once you've found it, it's ready for testing and production. So it took us a while to get there, and eventually we got there.
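
A grid search like the one Arie mentions can be as simple as the sketch below. The search space and the train_and_evaluate helper are hypothetical.

```python
# A sketch of grid search / parameter sweeps: try every combination of
# candidate hyperparameters and keep the best.
from itertools import product

search_space = {
    "hidden_size": [64, 128, 256],
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "num_layers": [2, 4, 8],
}

best_score, best_params = float("-inf"), None
for values in product(*search_space.values()):
    params = dict(zip(search_space.keys(), values))
    score = train_and_evaluate(**params)  # hypothetical train/eval helper
    if score > best_score:
        best_score, best_params = score, params
print(best_params)
```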

Nic Fillingham: What kind of tools do you use to build a model like this, are you in Python? Are you in Kusto?

Arie Agranonik: To extract the data, we use Azure ML and we use Cosmos as well. But to train the actual network, we use Python and different tools and libraries in the Python ecosystem for AI, like Keras, TensorFlow, and PyTorch; these types of libraries are very popular these days.

Nic Fillingham: I think my final question is, what do you know of the inverse of the work that you're doing? The bad actors, the folks out there creating this malware, which is getting more and more complex every day: are they utilizing any adversarial machine learning, or how are they deconstructing the work that you do to try and create more complex malware?

Arie Agranonik: I think the advantage of our product in general is that all the models are in the cloud, so attackers don't have the luxury, or the capability, of actually looking at the model and starting to dissect or reverse engineer it. That's something we've seen in other areas: if you show them the model, they can actually start running it against many, many examples and then start doing adversarial machine learning. But on the flip side, we also always try to develop models that can be given adversarial examples and will try to detect them as well. That's a very complicated process, because what you have to do is basically train two types of models: one is the attacker model and one is the defender model, and those two models are learning from each other. One model might find different variations of the same data point to attack, and the other model will learn from that data point how to tell that it's actually fake, or how to tell that it's malicious. So that's something I feel we're investing in as well.
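
One standard way to realize that attacker/defender loop is adversarial training with gradient-based perturbations. The sketch below uses an FGSM-style step, which is a common published technique, not necessarily what the team uses.

```python
# A rough sketch of adversarial training: the "attacker" perturbs inputs in
# the direction that fools the model, and the "defender" trains on those
# perturbed examples. Epsilon is an arbitrary assumption.
import torch

def adversarial_step(model, x, y, loss_fn, epsilon=0.1):
    x = x.clone().requires_grad_(True)
    loss_fn(model(x), y).backward()                 # gradients w.r.t. the input
    x_adv = (x + epsilon * x.grad.sign()).detach()  # attacker's move
    model.zero_grad()                               # clear gradients from the attack pass
    loss = loss_fn(model(x_adv), y)                 # defender learns from the attack
    loss.backward()
    return loss                                     # optimizer.step() happens outside
```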

Natalia Godyla: One last big question. What's next for deep learning?

Arie Agranonik: Oh, wow. I think there are many, many areas we can go into, anything that has to do with representation learning. In security in general, I think we're a little bit late with deep learning, because it's very difficult to take the data and make it so that it can be represented to a deep learning model. This is a constant challenge in security. When you think about images or sound or any of those other areas, the data is continuous and much better suited for a neural network to learn from. So I think one of the challenges will be to create more and more models like this that can look at the security situation, or security data, and learn from it, given millions or even billions of examples. That's one of the challenges, I think.

Arie Agranonik: And again, adversarial machine learning is also a very big challenge; I think a lot of people are working on it. In general, putting those models into production is also a challenge, because they usually require a lot of compute. A normal or standard linear model, like we used to work with for 15 years, might have, I don't know, 5,000 or 10,000 parameters, while a deep learning model might have millions or billions of parameters. So you have to have really good hardware to actually run it at scale, and this, I think, is a big challenge as well.

Nic Fillingham: So Arie, if listeners of the podcast would like to learn more about deep learning and the technique that you and your team have created, is there somewhere you'd recommend they go to read more, either about this directly or about related topics?

Arie Agranonik: If you look at our blog post, there are links to other machine learning articles that our team has already written, specifically on PowerShell, and there's a lot on the Microsoft Security blog as well that relates to machine learning.

Natalia Godyla: Thank you, Arie, for your time and all of the insights that you shared today. It was a fascinating discussion, and definitely one I'm going to keep diving into.

Arie Agranonik: Thank you for inviting me. That was great.

Natalia Godyla: And now let's meet an expert on the Microsoft Security team and learn more about the diverse backgrounds and experiences of the humans creating AI and tech at Microsoft. Today, we're talking to Holly Stewart, Principal Research Manager in the Microsoft Defender Research Group. Welcome, Holly. Thanks for joining us.

Holly Stewart: Hello. Thank you for having me.

Nic Fillingham: Awesome. So let's start with, if you could just give us your title at Microsoft, but maybe more interestingly, walk us through what the day-to-day function is of your role.

Holly Stewart: Sure. So I am a principal research lead at Microsoft, and I work on the endpoint protection side of research. I like to say our team's superpower is using AI to help protect people. Machine learning and data science techniques are used everywhere within our research team, but our team has a primary focus on using those techniques to try to help people and keep them safe.

Nic Fillingham: That's awesome. And you run a team, is that right, Holly? How big is the team?

Holly Stewart: It's about 25 now.

Nic Fillingham: Yep, and they're all in the AI and data science realm?

Holly Stewart: Yeah. Actually, they're this super interesting mix of researchers and data scientists, and they come from all walks of life. We have folks who are security experts, who really understand what threats do and how they work; some of them understand criminal undergrounds and other things like that. And then we have data scientists who come from many different facets; many of them are not particularly experienced in security, but one may be an expert in deep learning, and another person may be more on the anomaly detection side. You take all these folks with different perspectives and different strengths, you put them together, and really cool things happen.

Natalia Godyla: I love that. And speaking of backgrounds, what was your path to Microsoft?

Holly Stewart: My path to Microsoft was, I'd say, a little unconventional. I studied international business and French in school. I thought I would end up in the Peace Corps in Africa somewhere, and instead I ended up working at a security startup out of Atlanta, Georgia for many years, and found my love and passion for security and data science. I've worked with a ton of researchers in my time and really found that data science was the way forward for me; it was the way of the future for me. So I came to Microsoft, where we have this amazing data, amazing researchers, and great compute power from Azure, and it was my perfect world, where I could take all of these ideas about how we can use data science to solve customer security problems and really put them into practice here.

Nic Fillingham: So, Holly, you talked about learning French and what you studied at college. What other things in your education, in your history pre-Microsoft, do you feel brought you to where you are now, and that you use in your day-to-day? Perhaps things that seem a little unorthodox?

Holly Stewart: You know, I'll say that I grew up with a really strong work ethic. My family actually comes from farming, and my father has this really strong work ethic. He gets these guilt complexes where, if he's not doing something productive, his day is not complete, and somehow that was instilled in me too. So when I got into security, I kept seeing so many problems. There's the threat du jour; every single day, we're just bombarded with information. It's an overload, and I always thought, how can we better solve this problem? How can we help people really understand what matters? And when I started getting into data science, I thought, this is the way, this is how we can help people make better decisions and help protect them, because focusing on the problem du jour really wasn't getting us anywhere, really wasn't moving the needle.

Nic Fillingham: So perhaps that drive that you thought would take you to the Peace Corps, you're drawing on a similar motivation there, but now in the data science realm.

Holly Stewart: Yeah, absolutely. I mean, I love being able to say that I go to work and, with the work that my team does, we are trying to help people every single day, to keep them safe, keep them protected. It's something that I feel good about.

Natalia Godyla: That's great, and how does AI and ML factor into that when you're thinking about all of these big complex problems you want to take on?

Holly Stewart: Yeah, it's a great question. Think about how we traditionally approached security research: a researcher might reverse engineer some malicious program, figure out what it does, find some heuristic techniques to be able to detect it in the future, and make sure those heuristic techniques don't detect the good things that we want our computers to run. That takes a lot of time, and the truth is that malware has now become so complex that there are literally hundreds of millions of features that feed into what makes malware malware. It's really difficult for the human brain to wrap itself around all these permutations, but that's the beauty of machine learning and AI; it's built for that. And so we take this incredible ecosystem diversity, from benign applications to malicious applications, and we feed that information into the machine learning systems. We train them how to recognize good from bad, and they can come up with these permutations that the human brain just wouldn't be able to wrap itself around. And that's really how I connect all of those things together in our day-to-day.

Nic Fillingham: Got it. And so what types of... When we say AI and ML, that's a relatively broad set of acronyms there. What type of techniques, what type of approaches do you and your team use, or where are you sort of heavily invested?

Holly Stewart: We invest in lots of things. I'll say "AI" in quotes, because I kind of use it interchangeably to really just mean data science and the data science approach. We use many different techniques, from what you call supervised machine learning to unsupervised machine learning. With supervised machine learning, you're using signals to help teach the machine how to detect something new. So I may take a set of, say, 100 files, where 10 of them are bad and 90 of them are good. I extract a bunch of features from those files and then feed that into the machine learning system to teach it how to detect new things that are similar to those files in the future. That's what you call supervised.
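
Holly's 100-file example maps almost directly onto a few lines of scikit-learn. This is a toy sketch with random stand-in features, not the production pipeline.

```python
# A toy version of the supervised example: 100 files, 10 bad and 90 good,
# each reduced to a feature vector, used to train a classifier that then
# scores new files. Features are random stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.random((100, 20))          # 100 files x 20 extracted features
y = np.array([1] * 10 + [0] * 90)  # 10 labeled bad, 90 labeled good

clf = GradientBoostingClassifier().fit(X, y)

new_file = rng.random((1, 20))
print(clf.predict_proba(new_file))  # how bad does the new file look?
```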

Holly Stewart: Unsupervised is really good at finding what we call the unknown unknowns. In supervised learning, you're teaching it something that you already know, and it just gets better at that. With unsupervised, you're trying to find those pockets of uncertainty that maybe haven't even been classified before, or maybe should be clustered together; or perhaps, using past data, you find that, hey, this is an anomaly, something I haven't seen before that doesn't have a label, but that could indicate that something bad is going on. And so we really use a combination of all of these approaches to help train machines to amplify human knowledge, and also to find the things that maybe, as humans, we weren't thinking about in the first place.
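
The unsupervised side can be sketched just as briefly, for instance with an isolation forest; again, a toy illustration rather than the team's actual approach.

```python
# A toy anomaly detector: fit on unlabeled "normal" activity, then flag
# anything that looks unlike everything seen before (the unknown unknowns).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_activity = rng.normal(0, 1, size=(1000, 20))  # typical behavior, no labels
detector = IsolationForest(random_state=0).fit(normal_activity)

odd = rng.normal(6, 1, size=(1, 20))  # something far from anything seen before
print(detector.predict(odd))          # [-1] means anomaly
```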

Natalia Godyla: Can you share a couple of examples of how this AI and ML is driving some of the Microsoft products, even products that, like Nic said, we use day-to-day?

Holly Stewart: Yeah, absolutely. So there are a lot of files that use what we call social engineering to try to trick people into opening them. One example that we saw over the past year: attackers were using local business names and making it look like they were sending an invoice from a local business. I think it was a landscaping firm or something like that, and so they were using that invoice, which looked like it was from a local landscaper, sending it to other businesses to try to trick them into opening it. Inside, it led to a phishing site where they'd try to collect credentials. So when you're just looking at this file, it may look benign, but the machine learning system, because it was able to extract all these different features from that file, was able to see, hey, this is not a normal type of invoice that I would see from a legitimate business, and it was able to flag it as malicious and help keep those customers protected.

Natalia Godyla: So, Holly, what's next on the horizon? What are you most passionate about trying to solve next?

Holly Stewart: Sure. So today we've done a pretty good job of using AI to help discriminate malicious software from benign software. It's not perfect, but we've made a lot of progress in that area. What's next on the horizon for us is really deeper than that. It's great to discriminate malicious from benign, but what more can I learn from that? Say, for example, we understand the entire kill chain of that malicious activity: from how it arrived at the victim, to what it did after the victim installed it or clicked it, to the final motive of the attacker.

Holly Stewart: And if we can understand that entire story, we can look at all of the pieces in that kill chain, as we call it, and be able to provide protective guidance and automate protections, to essentially learn from what attackers are doing today and make our defenses stronger and stronger over time. That's really the evolution of AI in security: to help automate that for the customer. Because the amount of threats that we're facing, the amount of security information, is an overload, and we have to get better, we have to automate, and we have to use AI to do it, to really get to where we need to go.

Natalia Godyla: And how far away do you think this next step in the evolution is?

Holly Stewart: I'm sure I'll be working on it for the rest of my life.

Nic Fillingham: Holly, do you have a Twitter account? Do you have a blog? Do you have anything you want to promote, if folks want to learn more about you or your team, or if you're hiring?

Holly Stewart: So we post all of our content on the Microsoft Security blog, so you can find it there. And we are hiring data scientists; in the next week or so, we should have the postings up.

Nic Fillingham: Great. And so you would find them on the Microsoft Careers website probably under data science?

Holly Stewart: Under data science, or look for Defender and data science, and you'll find us.

Natalia Godyla: Thank you, Holly, for your time today. It was fantastic to hear your insights on AI.

Nic Fillingham: Yeah, thank you, Holly. I know you're busy, you're running a big team doing some great work. We really appreciate you coming on the podcast.

Holly Stewart: Thank you.

Natalia Godyla: Well, we had a great time unlocking insights into security from research to artificial intelligence. Keep an eye out for our next episode.

Nic Fillingham: And don't forget to tweet us @msftsecurity or email us at securityunlocked@microsoft.com with topics you'd like to hear on a future episode. Until then, stay safe.

Natalia Godyla: Stay secure.