Security Unlocked
Ep 7 | 12.9.20

Threat Modeling for Adversarial ML

Transcript

Nic Fillingham: Hello, and welcome to Security Unlocked, a new podcast from Microsoft, where we unlock insights from the latest in news and research from across Microsoft security engineering and operations teams. I'm Nic Fillingham.

Natalia Godyla: And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft security, deep dive into the newest threat intel research and data science.

Nic Fillingham: And profile some of the fascinating people working on artificial intelligence in Microsoft security. If you enjoy the podcast, have a request for a topic you'd like covered, or have some feedback on how we can make the podcast better...

Natalia Godyla: Please contact us at securityunlocked@microsoft.com, or via Microsoft Security on Twitter. We'd love to hear from you.

Nic Fillingham: Hello, Natalia, welcome to episode seven of Security Unlocked. How are you?

Natalia Godyla: I'm doing well. Refreshed after Thanksgiving break. What about yourself? Did you happen to eat the pork that you were planning? Those bratty pigs?

Nic Fillingham: The bratty pigs actually have survived for another day. They live to tell another tale and to eat more of my home-delivered fresh produce. But we did eat a duck that we raised on our farm. So that's the second time we've been able to enjoy some meat that we grew on the farm, the little mini farm that we live on, so that was pretty exciting.

Natalia Godyla: Yeah. That's been the goal all along, right? To be self-sustaining?

Nic Fillingham: To some degree. Yeah. So we achieved that a little bit over Thanksgiving which was cool. How about you, what'd you do over your Thanksgiving break?

Natalia Godyla: Well, I made apple bread. Oh, there's a special name for the apple bread, but I forgot it. Pull-apart apple bread. And I spent a lot of time on WolframAlpha.

Nic Fillingham: You spent a lot of time on WolframAlpha? Did your firewall break and it was the only accessible website? How did you even get to WolframAlpha?

Natalia Godyla: WolframAlpha is like Sporcle. It's like if you have time, you get to play with their technology and they've got...

Nic Fillingham: Sporcle? Sorry, Sporcle?

Natalia Godyla: What? Do you not know Sporcle?

Nic Fillingham: I'm really old. You'll have to explain that one to me. Is this a millennial thing?

Natalia Godyla: Wow. Okay.

Nic Fillingham: Bring me up to speed on Sporcle.

Natalia Godyla: Sporcle is like fast, quick trivia games that you play with a group, and one person just types in the answers while you're running through it.

Nic Fillingham: I thought it was when you take a fork and a spoon and you dip them in glitter. Anyway, so you're on Sporcle, and you're like, "I've completed Sporcle. What's next?"

Natalia Godyla: And you go to WolframAlpha. That's the next step?

Nic Fillingham: So, what did you pose to WolframAlpha?

Natalia Godyla: All right, at what point does a cat's meow become lethal to humans? Good question, right?

Nic Fillingham: At what point does a cat's meow become lethal to a human? When it's connected to a flame thrower? When the meow is a series of poison darts? What does that mean?

Natalia Godyla: There are a lot of use cases for cats. In this one, it's about how high the decibel level of their meow is, because that can eventually hurt a human. But it's really about spacing. Where you put the cat is very critical.

Nic Fillingham: The question was how loud can I make a cat's meow, so that it hurts a human?

Natalia Godyla: A well-trained army of cats that meow at the exact same time, synchronized cats.

Nic Fillingham: Oh, a synchronized army of cats, all directed at a single person. Would their collective uber meow, would that serve as a rudimentary weapon? That was your question?

Natalia Godyla: Yes.

Nic Fillingham: And? Answer?

Natalia Godyla: Theoretically, but it depends on how far away all the cats end up being. I'm now thinking that I should have just like planned to capture the cat's meows in a can or something.

Nic Fillingham: Capture them in a can. What does that mean?

Natalia Godyla: Like a can of whoopass.

Nic Fillingham: Who would capture a cat's meow in a can? Okay, Professor Farnsworth.

Natalia Godyla: You can tell I'm not the expert on these podcasts.

Nic Fillingham: So hang on, did you work out how many cats you needed in a single location to produce a loud enough meow to hurt somebody? Do you know the answer to this question?

Natalia Godyla: No. No. I was more focused on the total, and I also don't know the answer to the question.

Nic Fillingham: All right, to all the mathematicians out there and audiologists who have dual specialties in the capturing of cat meows into cans, and then the math required to multiply them into a focused beam of uber meow as a rudimentary weapon, please send us an email, securityunlocked@microsoft.com. Oh, Oh, segue, segue. We have email, we have email. We got messages from people who've been listening to the show, and they said some very nice things, which is great. And they also gave us some topics they would like us to cover on the show, and we're going to cover one of them today.

Nic Fillingham: Shout out to Ryan and to Christian and to Tyler, who all asked us to continue the thread on adversarial ML and protecting AI systems. We're doing exactly that today on this episode. We have Andrew Marshall joining us, who is going to absolutely continue the thread that Sharon Xia started a couple episodes back, talking about protecting AI systems in the MDDR report. And then who are we talking to, Natalia?

Natalia Godyla: Sam Schwartz. She is a security PM at Microsoft and works directly with the Microsoft Threat Experts team to deliver managed services to our customers. She helps provide threat intel back to customers and is working on scaling that out, so that more and more customers can benefit from the billions of signals that we have, which we then apply to the data that we get from customers in order to help identify threats. On to the podcast.

Nic Fillingham: Welcome to the Security Unlocked Podcast, Andrew Marshall. Thank you for joining us.

Andrew Marshall: Thank you. It's great to be here. Appreciate you having me on today.

Natalia Godyla: Yeah, definitely. So why don't we start off by chatting a little bit about your role at Microsoft. Can you let us know what your day to day looks like?

Andrew Marshall: Sure. So I'm a Principal Security Program Manager in the Customer Security and Trust Organization at Microsoft. My role is a little bit different from a lot of people who are security engineers. I'm not part of a product group. Instead, I work across the company to solve long-tail security engineering problems that one particular group may not have the authority to lead all up. So I do a variety of different things, like killing off old cryptographic protocols, where we have to bring the entire company together to solve a problem.

Andrew Marshall: And lately, I'd say the past two or three years in particular, my focus has been AI and ML, in particular the security issues that are new to the space, because it brings an entirely new threat landscape that we have to deal with. And we have to do this as an entire company. So it's another one of those cross-company security engineering challenges that I really enjoy tackling.

Natalia Godyla: And what does the ML landscape look like at Microsoft? So if it's cross-company, how many models are you looking at? How many different groups are using ML?

Andrew Marshall: It's really all over the place. And by that, I mean everybody's using it; it really is pretty much in universal usage across the engineering groups. There's been a big focus from everybody, whether it's at Microsoft or elsewhere; everybody's been interested in jumping on this bandwagon. But over the past couple of years, we've started to see that there are specific security issues, unique to AI and machine learning, that we're only now, as an industry, starting to see come out of the world of research-driven, proof-of-concept contrivances, where somebody created a research paper and a vulnerability that they had to make a bunch of leaps to justify. The pivot is occurring now from that into actual weaponized exploitation of these attacks.

Andrew Marshall: So what we're trying to solve here from a security perspective is: with this worldwide rush to jump on the AI and ML bandwagon, what is the security debt around that? What are the new products and features and detections and mitigations that we need to build as a company to solve these issues for ourselves and for the world? One of those efforts is really focused on education right now, because we've published a series of documents that we made sure we could publish externally. We've got a machine learning threat taxonomy, which covers the intentional and unintentional threats that are specific to machine learning. We've got some documents that were built on top of that, one of which is called Threat Modeling AI/ML Systems and Dependencies.

Andrew Marshall: And this is a foundational piece of security engineering education that's being used at Microsoft right now. The issue being: you can be a great security engineer with tons of experience. You could have been doing this for 15 years or more, but that most likely also means you don't have any data science expertise or familiarity. Security engineering and data science are not two skillsets that often overlap. Ann Johnson calls them "platinum unicorns," because that's just this mythical creature that nobody really seems to see. But the idea here is that we want all of our security engineers across the company to be intimately familiar with these net new security threats, specific to AI and ML.

Andrew Marshall: But here's the problem with all of that. This is still such a nascent field, especially machine learning-specific InfoSec, that if you're going to address these problems today, you need automation. You need new development work to be able to detect a lot of these attacks, because of the way that they occur. They can either be direct attacks against your model, or they can be attacks against the data that is used to create the model. The detections are very primitive, if they exist at all, and the mitigations are very bespoke. So that means if you find a need to mitigate one of these machine learning threats right now, you're probably going to have to design that detection or that mitigation specific to your service in order to deal with that issue. That's not a scalable solution for any company.
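
To make the first of those two attack classes concrete, here is a minimal, hypothetical sketch of a direct evasion attack against a trained model, in the spirit of the perturbation attacks the taxonomy covers. Everything in it is invented for illustration: the stand-in linear classifier, its weights, and the epsilon step size. It is not Microsoft's code or any specific attack from the guidance.

    import numpy as np

    # Stand-in for a trained linear classifier: score = w.x + b, label = sign(score).
    # The weights, bias, and epsilon below are all invented for illustration.
    w = np.array([0.7, -1.2, 0.3, 2.0, -0.5])
    b = 0.1

    def predict(x):
        return int(np.sign(w @ x + b))

    # A benign input constructed so the model scores it exactly +0.5 (label +1).
    x = w * (0.5 - b) / (w @ w)

    # Fast-gradient-style evasion: step against the gradient of the score. For a
    # linear model, that gradient with respect to x is just w, so the attacked
    # score is 0.5 - epsilon * sum(|w|) = 0.5 - 0.3 * 4.7 = -0.91: the label flips.
    epsilon = 0.3
    x_adv = x - epsilon * np.sign(w)

    print("original prediction:   ", predict(x))      # 1
    print("adversarial prediction:", predict(x_adv))  # -1

The point of the sketch is the asymmetry Andrew describes: the perturbation is tiny and cheap to compute, while detecting it reliably in production takes purpose-built automation.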

Andrew Marshall: So where we need to be is: we need to get the detections and mitigations for these machine learning-specific threats to be transparent, on by default, inherited by the nature of using the platform, where it just works under the hood and you can take it for granted, like we take for granted all of the compiled-in threat mitigations that you get when you build code in Visual Studio. If you build code there, you inherit all of these different compiled-in threat mitigations. You don't have to be a security engineer or know anything about this stuff, but you get all of that goodness just by nature of using the platform. It's on by default and you're oblivious to it. And that makes it easy to use. That's where we need to get with this threat landscape too. It's just a very exciting, very challenging space to be a part of.

Nic Fillingham: Well, I think we're done. Thanks very much, Andrew. No, joking. Wow, so much there. Thank you for that intro. So, my first question: this conversation follows one that we had with Sharon Xia recently, talking about the machine learning security section in the recently published Microsoft Digital Defense Report. You're referring to the Threat Modeling AI/ML Systems and Dependencies work that's up on the docs page; we'll put a link to that in the show notes. When we spoke to Sharon, she really called out upfront, and I think you've just really emphasized, that this is a very nascent topic, and especially at the customer level, awareness is very low and there needs to be awareness in this field. So, what is Microsoft's role in promoting awareness of this new category, and what are we doing there?

Andrew Marshall: So we have a role on a couple of fronts here, both within the company and, more broadly, within industry and with different governments and customers around the world. Internally, we help shape not only the educational efforts within the company, but also the research and engineering investments that are made in order to address these issues and solve these problems in this space. There's a policy-shaping side of that as well, which is working with governments and customers around the world to help them shape meaningful, actionable policy. Policy in any kind of space can be a dumping ground for good intentions. So whenever people are working on some kind of new policy or some kind of new standard, we always want to make sure that everything is as practical and as actionable as it can be. It has to be really crisp, because you can't have ambiguous goals. You have to have exit criteria for all of these things.

Andrew Marshall: And the reason I'm elaborating on that is because my team in the company owns the security development lifecycle. And we're very, very careful about new security requirements that get introduced into that, so much so that we try not to introduce new security requirements there unless we've got some kind of automation already ready to roll for people to use. That way, we can just tell them, "Hey, this is now a mandatory thing that you have to do, but it's really just: run this tool and fix these errors. It's not some kind of new manual attestation or big painful exercise to go through." And that's how we can keep adapting the SDL policy. On the responsible AI side, and AI and ethics, we've got this responsible AI standard that we're working on, which is basically the guiding principles around responsible AI for Microsoft, in terms of how we deal with bias and fairness and transparency and reliability and safety issues as they relate to AI, as well as to security. And this is another element of policy that's being shaped within the company.

Nic Fillingham: So you mentioned that very little of this guidance has been automated. Obviously, one of the goals is, I assume, to get it automated into toolsets and into the SDL. So let me put a customer hat on: I'm a customer of Microsoft. How should I feel about the work that Microsoft is doing to secure its own AI and ML systems? Obviously, we're practicing what we preach here and putting this guidance into place. How is success being measured? What are the metrics that we're using, be it manually or automated, to make sure that our own AI and ML systems are protected?

Andrew Marshall: We're spinning up significant research and engineering investments across the company specifically to tackle these kinds of problems. Part of that is largely security, and it's part of this broader series of AI and ethics investments that we're making. But for the security issues in particular, because we know that we've got customers reporting these kinds of things, and because we know that we've got our very own specific concerns in this space, we're putting together roadmaps to deal with these kinds of issues as specific sets of new product features and threat detections and mitigations.

Andrew Marshall: We understand that you can't catch any of these things manually. It takes automation to catch any of this stuff. So that gives us a roadmap of engineering investments that we can prioritize and work directly with engineering groups across the company to go solve. And the idea here is that when we deliver those solutions, they're not just available to Microsoft services; they'll be made available to customers of Microsoft as well.

Natalia Godyla: So, Andrew, how are we seeing these attacks start to evolve already? So if you could talk us through a couple of examples, like data poisoning, that would be awesome.

Andrew Marshall: Oh, I'd love to. So data poisoning is something that we've seen our own customers impacted by, because, as we point out in our threat modeling guidance, there's a tremendous over-reliance on using public, uncurated data feeds to train machine learning models. Here's an example of a real situation that did happen. A customer was aggregating trading data feeds for a particular futures market. Let's just say it was oil. They're taking these training data feeds from different trading institutions, brokerages, or trading websites, whatever, all over a secure channel, and they're generating a machine learning model from it. And then they're using that ML model to make some really high-consequence decisions, like: is this location a good place to drill for oil, or to bid on rights by which you can drill for oil? Or do we want to take a long position or a short position in the oil futures market?

Andrew Marshall: So they're essentially trusting the decisions that come out of this machine learning model. And what's the nature of futures trading data feeds? There's new data every day, so they're constantly incorporating this new data. Talking about the blind reliance on this untrusted data: even though it came over a secure channel, one of the training data providers was compromised, not in a way that resulted in the website being shut down, but their data was contaminated, the data that they were sharing with everybody else. Unless you're actively monitoring for something like this as the provider of that data, there's no way you're going to know that you're sending out contaminated data to everybody else.

Andrew Marshall: So if the providers are unaware of the compromise, then the consumer of the data is going to be equally oblivious to it. What happens is, over time, that data became trusted, high-confidence garbage within that trading data model. That then led to decisions like drilling for oil in the wrong place, or longing the futures market when they should have been shorting it, and vice versa. So the point here is, without automation to detect that kind of data poisoning attack, you don't know anything went wrong until it blows up in your face.

Natalia Godyla: It really gives you perspective, because I feel like normally when you're hearing about cyber attacks, you're hearing about data being stolen and then sold, or money itself being stolen. But in the case that you just explained, it's really about altering decision-making; it wasn't just directly stealing money.

Andrew Marshall: That was an interesting case, because we're also thinking, all right, well, was it a targeted attack against the consumer or the people building machine learning models? How did the attacker know that? Were they looking to see what kind of outcomes this would generate? Is this the only place that they were doing that? Of course, the data provider doesn't know. That's one of the more interesting, more insidious attacks that we've seen, because we've got to create new types of tools and protections in order to even detect that kind of stuff in the first place. So as your machine learning models are being built, you're taking on new data and looking for statistically significant drift in certain parts of the data, parts that deviate from what looks normal in the rest of your data, and we're looking at ways of solving that. It's an interesting space. So, yeah.
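
That kind of drift check can be sketched in a few lines. The hypothetical example below, assuming a standard scipy environment, compares each feature of an incoming training batch against a trusted historical baseline with a two-sample Kolmogorov-Smirnov test and holds back any column that deviates significantly. The threshold, the synthetic data, and the simulated poisoning are all invented for illustration; this is the shape of the idea, not the detection Microsoft is building.

    import numpy as np
    from scipy.stats import ks_2samp

    def drift_report(baseline, new_batch, alpha=0.01):
        """Compare each column of new_batch against the trusted baseline with a
        two-sample Kolmogorov-Smirnov test; return the columns that drifted."""
        drifted = []
        for col in range(baseline.shape[1]):
            stat, p = ks_2samp(baseline[:, col], new_batch[:, col])
            if p < alpha:
                drifted.append((col, stat, p))
        return drifted

    rng = np.random.default_rng(1)
    baseline = rng.normal(0.0, 1.0, size=(5000, 3))   # historical, trusted feed
    new_batch = rng.normal(0.0, 1.0, size=(1000, 3))  # today's feed
    new_batch[:, 2] += 0.5                            # simulate a poisoned column

    for col, stat, p in drift_report(baseline, new_batch):
        print(f"column {col} drifted (KS={stat:.3f}, p={p:.2e}): hold for review")

A real pipeline would gate retraining on a report like this, quarantining suspect data for human review instead of letting it quietly become trusted, high-confidence garbage.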

Natalia Godyla: So you noted that one of the potential reasons the threat actor was playing around with that ML system in that customer example was because they were trying to figure out what they could do. So if the field is so nascent, are threat actors in a similar place as us? Are they ahead of us?

Andrew Marshall: Well, we've already had that pivot from contrived research exploits, where people are just trying to show off, into actual exploitation. So I don't know how to go back and attribute the level of attacker sophistication there. In the attack that I mentioned here, the oil company scenario, the data provider was compromised through traditional security vulnerabilities. And I think the jury is still out on the final attribution of all of that, as well as the level of attacker sophistication. What would be even more interesting than all of that is what other customers of that data provider were compromised and were building machine learning models that were contaminated by that data. Think about hedge funds: who else was compromised by this and never even found out? Or who else had a model blow up in their face? That'd be a very interesting thing to see.

Nic Fillingham: The question I wanted to wrap up with, Andrew, is: make me feel like we're on a good path here. Can we end on a high note? We talked about a lot of very serious scenarios and the consequences of adversarial ML. And obviously it's very important and very nascent, but should I feel like the good guys are winning? Should I feel like we've got good people on this, that we're making great progress, that we should feel confident in AI and ML systems in-

Andrew Marshall: Yeah, absolutely.

Nic Fillingham: The second half of 2020?

Andrew Marshall: That's our entire focus with the AI and ethics in engineering and research group. We are bringing the entire weight of Microsoft to bear on these issues from a research, engineering, and policy perspective. And we want to solve all these issues so that you do have trustworthy interactions with all of our products. That's an essential thing that we realized collectively as a company: it has to happen, because otherwise people won't use these kinds of products. If it doesn't generate an outcome that you can trust is going to be accurate and free of bias, something that you can rely on, then people just won't use those things. So we've got the AI and security centers of gravity working across the company with research and policy experts to tackle these issues. It's a fascinating time to be a part of this. I just had my 20-year anniversary last month, and I think this is about the most fun I've had, period, in the past 20 years, working on this stuff now.

Nic Fillingham: It wasn't the launch of Windows Vista?

Andrew Marshall: I have so many horror stories from that. We really don't want to air those.

Nic Fillingham: Well, that's awesome. Gosh, what was I... I had this great question I was going to ask you and then the Vista joke popped in and now my brain is mulched.

Natalia Godyla: I love how that took priority.

Nic Fillingham: Like the most intelligent question I'm going to ask the entire interview and it's like just a joke bonk.

Andrew Marshall: I have some very, very funny stories from Vista, but none that are appropriate for here.

Nic Fillingham: Well, we may have to bring you on another time, Andrew, and try and sanitize some of those stories because the statute of limitations has surely run out on having to revere every single release of Windows. Surely we can make fun of Vista soon, right?

Andrew Marshall: I'm sure we can.

Nic Fillingham: So, Andrew, final question, where do you recommend folks go to learn more about this space and keep up to speed with any of the advancements, new taxonomy, new guidelines that come out?

Andrew Marshall: I would definitely keep tabs on the Microsoft Security blog. That's going to be the place where we drop all of the new publications related to anything in this space, and it connects you with security content more broadly, not just AI and ML specific. But yeah, the Microsoft Security blog, that's where you want to be.

Nic Fillingham: Great. Thanks, Andrew Marshall, for your time. We'll also put a link up to the guidelines on the docs page.

Andrew Marshall: All right. Thank you very much for having me today. It's been great.

Natalia Godyla: And now let's meet an expert in the Microsoft Security Team to learn more about the diverse backgrounds and experiences of the humans creating AI and tech at Microsoft.

Natalia Godyla: Hello everyone. We have Sam Schwartz on the podcast today. Welcome Sam.

Sam Schwartz: Hi, thanks for having me.

Natalia Godyla: It's great to have you here. So you are a security PM at Microsoft. Is that correct?

Sam Schwartz: That is correct.

Natalia Godyla: Awesome. Well, can you tell us what that means? What does that role look like? What is your day-to-day function?

Sam Schwartz: Yeah, so I currently support a product called Microsoft Threat Experts, and what I'm in charge of is ensuring that the incredible security analysts that we have, who are out saving the world every day, have the correct tools and processes and procedures and connections to be the best that they can be.

Natalia Godyla: So what do some of those processes look like? Can you give a couple of examples of how you're helping to shape their day to day?

Sam Schwartz: Yeah. So Microsoft Threat Experts is a managed threat hunting service provided through the Microsoft Defender ATP product. Our hunters will go through our customer data in a compliant, safe way, and they will find bad guys, human adversaries, inside of the customer telemetry. And then they notify our customers via a service called the targeted attack notification service.

Sam Schwartz: So we'll send an alert to our customers and say, "Hey, you have this adversary in your network. Please go do the following things. Also, this is the story about what happened, how they got there, and how you can fix it."

Sam Schwartz: So what I do is I try to make their lives easier by initially providing them with the best amount of data that they can have when they pick up an incident.

Sam Schwartz: So when they pick up an incident, how do they have an experience where they can see all of the data that they need to see, instead of just seeing one machine that could have potentially been affected, how do they see multiple machines that have been affected inside of a single organization? So they have an easier time putting together the kill chain of this attack.

Sam Schwartz: So it's getting the data, and then also having a place to visualize the data and easily make a decision as to whether or not they want to tell a customer about it. Does it fit the criteria? Does it not? Is this worth our time? Is this not worth our time? And then also providing them with a path to quickly create an alert for our customers from that data, so that the customers know what's going on.

Sam Schwartz: So rather than our hunters having to sit and write a five-paragraph essay about what happened and how it happened, they have the ability to take the data that we already have, create words in a way that's intuitive for our customers, and then send it super quickly, within an hour to two hours of us finding that behavior.

Sam Schwartz: So all of those little tools and the tracking and metrics, creating words from data and sending it to the customers, all of that is what I plan at a higher level, so that the hunters are able to do it.
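
The "creating words from data" step lends itself to a simple sketch: render customer-ready alert text from a structured incident record instead of writing a five-paragraph essay by hand. The template and every field below are invented for illustration; the real notifications are far richer and go out through the targeted attack notification service.

    # Hypothetical incident record, as a hunter might confirm it.
    incident = {
        "org": "contoso",
        "technique": "credential dumping via lsass access",
        "machines": ["host-1", "host-2"],
        "first_seen": "2020-11-30T08:42Z",
        "recommendations": [
            "Isolate the affected machines",
            "Reset credentials for accounts that signed in to them",
        ],
    }

    def render_alert(inc):
        """Turn a structured incident into customer-ready alert text."""
        lines = [
            f"Targeted attack notification for {inc['org']}",
            f"Observed activity: {inc['technique']} (first seen {inc['first_seen']})",
            f"Affected machines: {', '.join(inc['machines'])}",
            "Recommended actions:",
        ]
        lines += [f"  - {step}" for step in inc["recommendations"]]
        return "\n".join(lines)

    print(render_alert(incident))

Run as-is, it prints a compact notification for the hypothetical contoso incident, which is the kind of fast, templated output Sam describes.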

Nic Fillingham: And to better understand the scale of what's happening here, like with a typical customer, what is the volume of signal or alerts or, I'm not sure what the correct taxonomy is, but what's the volume of stuff that's being monitored from the customer and then is being synthesized down to a bunch of alerts that then go and get investigated by a hunter?

Sam Schwartz: So I don't have a per-customer basis, but I think we have about 450 customers currently enrolled in our program. And unfortunately, we can't take everyone that would like to join us. Our goal is that we will eventually be able to do that, but we don't have enough people, and we're still building our tooling to allow us to scale.

Sam Schwartz: So with our 450 customers, we have about 200,000 incidents every month that get created, and we then bring that down. Some of those incidents don't get investigated because they don't meet our bar. Some of those incidents get investigated, but aren't interesting enough to actually have an alert created. And for some of them, even though the alert is created, it's not actually interesting enough to send, or we've already sent something similar and it's not worth it.

Sam Schwartz: So from those 200,000, we send about 200 to 250 alerts a month, but it also depends on the landscape. Like, it depends on what's going on that-

Nic Fillingham: And if I go even higher up the funnel, so before the 200,000, what's the right taxonomy? Is it an alert?

Sam Schwartz: Incidents. We call them incidents.

Nic Fillingham: ... What's above an incident? Because I assume it's just tons and tons and tons of network logs and smaller signals that end up getting coalesced into an incident. Is that correct?

Sam Schwartz: Yeah. So we call them traps. What they are is queries that run over data that find something super interesting. And you can think about these as similar to the alerts that customers get, but much, much, much lower fidelity.

Sam Schwartz: So for our products, if a trap fires a hundred times and 99 of those hundred are false positives, 99% of them are not super helpful for the customer, and we're not going to send that to the customer. That's bothering them 99 times that they don't need to be bothered. But for our service, our whole thing is that we are finding that 1% that our customer doesn't know about.

Sam Schwartz: So we have extremely low fidelity traps, and some high fidelity ones. A trap can fire a thousand times and only one time is it important, and we want to see all thousand times, because that one time is worth it. So we have traps, I think about 500 of them. Some of them return thousands of results a day. Some of them won't return results for months.

Sam Schwartz: And if a trap gets a hit, those are the things that get bubbled up into our incidents. We cluster all of those trap results into the incidents, and that's ensuring that our hunters get all the information that they need when they log on. So the signals are massive. There's a massive amount. I don't even have a number.
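
As a rough mental model of that funnel, here is a hypothetical sketch: a couple of deliberately low-fidelity "traps" expressed as predicates over telemetry events, with their hits clustered by organization into incidents for a hunter to pick up. The trap logic, telemetry shape, and clustering rule are all invented for illustration; the real service's queries and clustering are far richer.

    from collections import defaultdict
    from dataclasses import dataclass, field

    @dataclass
    class TrapHit:
        trap_id: str
        org: str
        machine: str
        detail: str

    @dataclass
    class Incident:
        org: str
        hits: list = field(default_factory=list)

    # Traps: low-fidelity queries over telemetry. Each one is noisy on its own,
    # but together on one org they can suggest a hands-on-keyboard attacker.
    traps = {
        "T001-recon": lambda e: "whoami" in e["cmd"],        # admins run this too
        "T002-dump": lambda e: "lsass" in e["cmd"].lower(),  # credential theft hint
    }

    def run_traps(telemetry):
        """Evaluate every trap against every telemetry event; keep the hits."""
        return [TrapHit(t, e["org"], e["machine"], e["cmd"])
                for e in telemetry for t, match in traps.items() if match(e)]

    def cluster_into_incidents(hits):
        """Cluster trap hits by org so a hunter sees the whole picture at once."""
        by_org = defaultdict(list)
        for hit in hits:
            by_org[hit.org].append(hit)
        return [Incident(org, org_hits) for org, org_hits in by_org.items()]

    telemetry = [
        {"org": "contoso", "machine": "host-1", "cmd": "whoami /all"},
        {"org": "fabrikam", "machine": "host-9", "cmd": "notepad.exe"},
        {"org": "contoso", "machine": "host-2", "cmd": "procdump -ma lsass.exe"},
    ]

    for incident in cluster_into_incidents(run_traps(telemetry)):
        print(incident.org, [(h.trap_id, h.machine) for h in incident.hits])

Run as-is, this prints one incident for contoso carrying both trap hits and nothing actionable for fabrikam, which is the clustering benefit Sam describes: the recon and the credential-dump hint land in front of the hunter together.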

Natalia Godyla: I have literally so many questions.

Sam Schwartz: Oh my God, happy to help.

Natalia Godyla: So you said earlier, there's a bar for what the Microsoft Threat Experts will focus on. So what is in scope for them? What meets the criteria?

Sam Schwartz: We are focusing on human adversaries. So we're not focusing as much on commodity malware as we are on a hands-on-keyboard attacker. There are some traps that are commodity malware, but paired with other traps, paired with other signals, that could be a hands-on-keyboard person. And those are things we look at, but some of the traps on their own don't meet the bar for us to go look at.

Nic Fillingham: Is that because commodity malware is basically covered by other products, other services?

Sam Schwartz: (Affirmative). It's covered by our Defender ATP product in general, so our hunters wouldn't be adding value there. Our whole point is that we have hunters who are adding context and value to the already incredible ATP product. And since ATP is already alerting on and covering that, we'd rather find the things that aren't being covered.

Nic Fillingham: So Sam, let's go back in time a little bit. Tell us about how you found yourself in the security space and how you got to Microsoft; maybe it's a separate story, maybe it's the same story. We'd love to learn your journey, please.

Sam Schwartz: It is the same story. Growing up, I loved chemistry.

Nic Fillingham: That's too far back.

Sam Schwartz: I know.

Nic Fillingham: Oh, sorry. Let's start there.

Sam Schwartz: I loved chemistry. I loved molecules and building things and figuring out how that all works. So when I went to college, I was like, I want to study chemical engineering. So through my education I became a chemical engineer, but I found that I really liked coding. We had to take a fundamentals class at the beginning, and I really enjoyed the immediate feedback that you got from coding. Like, you did something wrong, and it tells you immediately that you messed up.

Sam Schwartz: And also when you mess up and you're super frustrated and you're like, why didn't this work? Like I did it right. You didn't do it right, it messed up for a reason. And I really liked that. And I thought it was super interesting. And I found myself like gravitating towards jobs that involved coding.

Sam Schwartz: So I worked for Girls Who Code for a summer. I worked for the Dow Chemical Company, but in their robotics division, so it was still chemical engineering, but I got to do robots. And then when I graduated, I was like, I think I want to work in computer science. I don't like this chemical engineering. It was quite boring; even though they said it would get more fun, it never did. We ended up watching water boil for a lot of my senior year of college. And I was like, I want to join a tech company.

Sam Schwartz: And I looked at Microsoft, and they're one of the only companies that provide a program management job for college hires. A lot of PM positions, because there's a lot of high-level thinking, coordinating, and collaboration, are one of those where you need experience, but in order to get experience, you have to do the job. It's one of those weird circles, and Microsoft allows college hires to do it.

Sam Schwartz: So when I interviewed, I was like, I want to be a PM. It sounds fun to get to hang out with people. And I ended up getting the job, which is awesome.

Nic Fillingham: Is that all you said in the interview? Just, it sounds fun to get to hang out with people?

Sam Schwartz: Yes. I was like, this is it, this is my thing. In my interviews, they asked me a very easy coding question. I was so happy. I was so nervous that I wasn't going to pass that one, but it was easy. And then they asked me a design question. They asked me, "Pick your favorite technology." And me, I'm sad to say it, I feel like I'm better now looking back on myself, but I'm really not good with technology in general.

Sam Schwartz: So they're like pick your favorite technology. And I was like, I'm going to pick a chemical engineering plant because I didn't know anything. So I picked an automation plant as my favorite technology. And they asked me a lot of questions around like, who are the customers? What would you do to change this to affect your customers? Who gets changed? How would you make it better?

Sam Schwartz: Then I was talking specifically about a bottling plant, just because that's easy to understand. And I left that interview, and my interviewer said, "I didn't know anything that you were talking about, but everything you said made perfect sense, because it's about how you can take inputs, do something fun, and then have an output that affects someone. And that's everything that we do here. Even though it's a bit obfuscated, and you have a bunch of data and bad guys and hunters hunting through things, it's taking an input and creating something great from it."

Sam Schwartz: And that's what we learned in our chemical engineering world. And I ended up getting this job, and I walked in on my first day to meet my team, and they're like, "You're on a Threat Intelligence Team." I was like, "What does that mean?" And-

Nic Fillingham: Oh, hang on. So did you not know what PM role you were actually going to get?

Sam Schwartz: No. They told me that I was slated for Windows. I was going to be on a Windows team. So in my head, that entire summer, I was telling people I was going to work on the start button, just because that's what... I was like, "If I'm going to get stuck anywhere, I'm going to have to do the start button. Like, that's where my-"

Nic Fillingham: That's all there is. Windows is just now a start button.

Sam Schwartz: I was like, that's what... I was guaranteed, I'm going to get the start button, or like Paint. Actually, I probably would have enjoyed Paint a lot. But the start button. And I came in, and they were like, "You're on the Threat Intelligence Team." And I was like, "Oh, fun."

Sam Schwartz: And it was incredible. It was an incredible start to something where I had no idea what anyone was talking about. When they were first trying to explain it to me in layman's terms, they're like, oh, well, there's malware, and we have to decide how it gets made and how we stop it. And I was like, what's malware? You need to really dumb it down; I have no idea what we're talking about. And initially, when I started on this threat intelligence team, there were only five of us. So I was a PM, and they had been really wanting a PM. Apparently, before they met me, they were happy to get a PM, but weren't so happy it was a college hire. They're like-

Nic Fillingham: Who had never heard of malware.

Sam Schwartz: We need structure.

Nic Fillingham: And thought Windows was just a giant anthropomorphic start menu button.

Sam Schwartz: They're like, we need structure, we need a person to help us. And I was like, hi, nice to meet you all. So we had two engineers who were building tools for our two analysts, and we called ourselves a little startup inside of security research, inside of the security and compliance team, because we were figuring it out. We were like, threat intelligence is a big market; how do we provide this notion of actionable threat intelligence? Rather than having static indicators of compromise, how do we actually provide a full story, tell customers how to configure and harden their machines, and tell a story around the actions you take to initiate all of this? These configurations are going to help you more than just blocking IOCs that are months old. So it was figuring out how to best give our TI analysts tools, and then allow us to improve Microsoft products as a whole.

Sam Schwartz: So based on the information that our analysts have, how do we spread that message across the teams in Microsoft and make our products better? We were figuring it out, and I shadowed a lot of analysts, and I read a lot of books and watched a lot of talks. I would watch talks and write down just a bunch of questions. Then finally, as you're around all these incredibly intelligent security people, you start to pick it up, and after about a year or so, I would sit in meetings and listen to myself speak, and I was like, did I say that? Was that me that, one, understood the question that was asked of me, and then also was able to give an educated answer? It was very shocking and quite fun. And I still feel that way sometimes, but I guess that's my journey into security.

Natalia Godyla: Do you have any other suggestions for somebody who is in their last years of college or just getting out of college and they're listening to this and saying, heck yes, I want to do what Sam's doing. Any other applicable skills or tricks for getting up to speed on the job?

Sam Schwartz: I think a lot of the PM job is the ability to work with people and the ability to communicate, to understand what people need and be able to communicate it in a way that maybe they can't, to see people's problems and be able to fix them. But a lot of the PM skills you can get by working collaboratively in groups, and you can do that in jobs, you can do that in classes. There's ample opportunity to work with different people: volunteering, mentoring, working with people. Being able to communicate effectively, connect with people, be empathetic, understand their issues, and try to help is something that everyone can do, and I think everyone can be an effective PM. On the security side, I think it's reading and listening. Even the fact that the hypothetical is someone listening to this podcast means they're already light years ahead of where I was when I started. But just listening, keeping up to date, reading what's going on in the news, understanding the threats, scouring Twitter for all the goodness going on.

Sam Schwartz: That's the way to stay on top.

Nic Fillingham: Tell us about your role and how you interface with data scientists that are building machine learning models and AI systems. Are you a consumer of those models and systems? Are you contributing to them? Are you helping design them? How do you fit into that picture?

Sam Schwartz: So, a little bit of all of the things that you mentioned. Being a part of our MTE service, we have so many parts that would love some data science, ML, and AI help, and we are both consumers of and contributors to that. We have data scientists who are creating those traps that I was talking about earlier for us, who are creating the indicators of malicious, anomalous behavior that our hunters then key off of. Our hunters also grade these traps, and then we can provide that back to the data scientists to make their algorithms better. So we provide that grading feedback back to them to have them make their traps better. And our hope is that eventually their traps, these low fidelity signals, become so good and so high fidelity that we actually don't even need them in our service; we can just put them directly in the product. So we start from the incubation, we provide feedback, and then we hopefully see our anomaly detection traps grow and become product detections, which is an awesome life cycle to be a part of.
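
That grading loop lends itself to a short sketch. Assuming hunters label each trap hit as a true or false positive, per-trap precision can be recomputed continuously, and a trap that clears some fidelity bar becomes a candidate to graduate into the product as a built-in detection. The bar, the trap names, and the grades below are all invented for illustration, not the service's actual thresholds.

    from collections import defaultdict

    PROMOTE_AT = 0.90  # assumed precision bar for promotion to a product detection

    def trap_precision(grades):
        """grades: iterable of (trap_id, was_true_positive) pairs from hunters."""
        totals, trues = defaultdict(int), defaultdict(int)
        for trap_id, is_tp in grades:
            totals[trap_id] += 1
            trues[trap_id] += int(is_tp)
        return {t: trues[t] / totals[t] for t in totals}

    def triage(grades):
        for trap_id, precision in sorted(trap_precision(grades).items()):
            if precision >= PROMOTE_AT:
                print(f"{trap_id}: precision={precision:.2f} -> promote to product detection")
            else:
                print(f"{trap_id}: precision={precision:.2f} -> keep in service, hunter-triaged")

    # A 1-in-100 trap: still worth a hunter's time, nowhere near product quality.
    grades = [("T001-recon", False)] * 99 + [("T001-recon", True)]
    # A 19-in-20 trap: a candidate to ship directly in the product.
    grades += [("T002-dump", True)] * 19 + [("T002-dump", False)]

    triage(grades)

With these invented grades, T001-recon stays in the hunter-triaged service at 1% precision, the "one in a hundred worth seeing" case, while T002-dump clears the bar at 95% and graduates.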

Nic Fillingham: I want to change topics, but this one's going to need a little bit of context setting, because you are famous inside of Microsoft to anyone who has completed one of our internal compliance trainings. I don't even know how to describe this to people who haven't experienced it. Natalia, we've both done it. There's this thing at Microsoft called Standards of Business Conduct; it's an internal employee compliance training. This is how you should behave, this is how you should function as a responsible employee and member of the Microsoft family, but also how we work with customers and everything. And it's been going on for a few years. Sam, you had a cameo; you were the only non-professional actor in the recent series, is that correct?

Sam Schwartz: I was, I was, I'm famous, I will be signing headshots when we're all back in the office.

Nic Fillingham: So tell us about how did this happen?

Sam Schwartz: So, as anyone who has seen the Standards of Business Conduct videos knows, I wouldn't call them a training, I would call them a production.

Nic Fillingham: An experience. Or production.

Sam Schwartz: An experience, yeah. An experience.

Nic Fillingham: They're like a soap opera. It's almost like Days of Our Lives. They really stir the emotion and we get attached to these characters and they go on wild journeys in a very short space of time.

Natalia Godyla: I was just watching an episode and I literally got stressed.

Sam Schwartz: Yeah, you're so invested in these characters and their stories, and you're rooting for them to do the right thing. And you're like, come on, just be compliant. In my first week on the job, I watched this training, as everyone who starts at Microsoft has to do, and I was telling my team that I was obsessed with the main character, who has his own trials and tribulations throughout the entire series. And I just thought it was fun, and I was like, how do I get on it? That was my thing when I first joined: how do I get on Standards of Business Conduct? And every year, Microsoft is super passionate about giving, giving back, donating money, and every October we have this thing called the Give Campaign, where every employee is encouraged to give back to their community.

Sam Schwartz: And one of the ways that they do that is they have an auction. So some of the auction items are: you get lunch or golf with Satya, or you get a signed, I don't know, computer or Xbox from Phil Spencer, or whatever it is. I made those up.

Nic Fillingham: You get to be the Windows start button for a day.

Sam Schwartz: You get to be the Windows start button for a day. And one of those is a cameo in Standards of Business Conduct. You can donate a certain amount of money, and there's a bid going, where the person who donates the most money is at the top of the leaderboard, and then if you donate more money, you get on top. So it's a silent auction for giving back and donating. And I saw that last year in the Give Campaign, but I didn't think much of it. It had a high price tag, and I didn't want to deal with it. And then a couple of months later, I had just gotten back from vacation, and my skip-level was like, hey, I missed you a lot, let's get lunch. And I was like, okay, great, I love that.

Sam Schwartz: And he was like, I want to go somewhere fun, I want to go to Building 35, which is the nice executive cafeteria building at Microsoft, which is not near our office. And I was like, okay, weird, he wants to go to another building for lunch, but we can go do that. So I went with him, and five to 10 minutes into our lunch, these people come up to our table and they're like, can we sit with you? And I'm looking around, and there are tons of tables. I'm like, what are these people encroaching on my lunch for? I just want to have lunch and chat, and these people want to come sit at my table. But of course, we're going to let them sit at our table. And I look over at the guy who's sitting next to me, and it's the main character from Standards of Business Conduct. It is the actor, it is-

Nic Fillingham: It's Nelson.

Sam Schwartz: It's Nelson. And I had fangirled over him for a year and a half at that point. I've seen all his work; I'm a huge fan.

Nic Fillingham: Please tell me it was a Beatles on their first tour to America moment. Please tell me there was screaming, there was fainting.

Sam Schwartz: I blacked out.

Nic Fillingham: That's the picture in my head.

Sam Schwartz: I don't remember. I don't remember what happened, because I actually blacked out. And there's a video of this, and you can see my body language: when I realize what's happening, you can see me grab the arms of the chair, my whole body tenses up, and I'm looking around frantically, like, what's happening? And the woman who was sitting next to my skip-level actually created Standards of Business Conduct, and she's in a lot of the videos; her name is Rochelle. And she's like, your team has pulled together their money and bought you a cameo in our next Standards of Business Conduct. And I turned around, and my entire team was on the balcony of the cafeteria filming me, and it was very cute and very emotional. And I got to see Nelson, and then I got to be in Standards of Business Conduct, which is awesome. It was a super fun experience.

Nic Fillingham: So in the Microsoft cinematic universe, what is your relationship to Nelson? Are you a colleague?

Sam Schwartz: We're all on the same team.

Nic Fillingham: So you're a colleague.

Sam Schwartz: We are on the same team.

Nic Fillingham: So forever, you're a colleague of Nelson's.

Sam Schwartz: Yeah, I am. And he knows who I am and that makes me sleep well at night.

Natalia Godyla: Thank you, Sam, for joining us on the show today, it was great to chat with you.

Sam Schwartz: Thank you so much for having me. I've had such a fun time.

Natalia Godyla: Well, we had a great time unlocking insights into security, from research to artificial intelligence. Keep an eye out for our next episode.

Nic Fillingham: And don't forget to tweet us @MSFTSecurity, or email us at SecurityUnlocked@microsoft.com with topics you'd like to hear on a future episode. Until then, stay safe.

Natalia Godyla: Stay secure.