Security Unlocked 3.8.21
Ep 18 | 3.8.21

Celebrating Women in Security


Nic Fillingham: Hello, and welcome to Security Unlocked, a new podcast from Microsoft, where we unlock insights from the latest in news and research from across Microsoft security engineering, and operations teams. I'm Nic Fillingham.

Natalia Godyla: And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft Security, deep diving into the newest threat intel, research, and data science.

Nic Fillingham: And, profile some of the fascinating people working on artificial intelligence in Microsoft Security.

Natalia Godyla: And now, let's unlock the pod.

Nic Fillingham: Hello Natalia, welcome to a very special episode of Security Unlocked, how are you doing?

Natalia Godyla: I'm doing great, and it is a very special episode. It is International Women's Day today and, we are going to be celebrating that with our compilation episode, pulling together a few of the awesome women that we have been interviewing throughout the course of the podcast.

Nic Fillingham: Yeah, we have taken, uh, three interviews that actually went live, uh, in episodes one, four, and seven respectively. So, if you haven't made your way back through the archive, if you haven't binged the Security Unlocked series so far, uh, you may have missed these ones. And, they are amazing, uh, interviews so we wanted to sort of, bring them out of the archive and pull them together for this special episode. First up, you're gonna hear from Holly Stewart, who was the first person that we profiled on the, on the podcast, on the first episode. Holly is affectionately known inside the Defender Org as the Queen of AI. She gives a sort of, a wonderful perspective on, on ML and AI.

Nic Fillingham: Then we hear from Dr. Anna Bertiger, who has a PhD in Math and has this incredible energy and passion for how she uses her math to catch villains, I think you'll, you'll love that perspective. And then, we round it out with Sam Schwartz, who provides a wonderfully fresh viewpoint on security, coming into security as someone that's a little, sort of, newer in career in the cybersecurity space. I think it's gonna be a great episode.

Natalia Godyla: Yes, and it doesn't stop there. We will be highlighting women throughout the month. So, we'll be covering different Deep Dive topics with female security leaders at Microsoft as well as profiling a few women in their careers.

Nic Fillingham: On with the pod?

Natalia Godyla: On with the pod.

Nic Fillingham: Welcome to the podcast, Holly Stewart. Hi Holly, thanks for your time today.

Holly Stewart: Hello, thank you for having me.

Nic Fillingham: Awesome. So, let's start with if you could just give us your title at Microsoft but, maybe more interestingly, sort of, walk us through what the day to day function is of your role?

Holly Stewart: Sure. So, I am a Principal Research Lead at Microsoft, and I work in the endpoint protection side of research. And, I like to say, sort of, our team's superpower is using AI to help protect people. Machine learning and data science techniques are used everywhere within our research team, but with our team we have a primary focus on using those techniques to try to help people and keep them safe.

Nic Fillingham: That's awesome. And, you run a team is that right Holly, how big's the team?

Holly Stewart: It's about 25 now.

Nic Fillingham: Yep, and they're all in the, sort of, AI data science, sort of, realm?

Holly Stewart: Yeah, actually they're this super interesting mix of researchers and data scientists, and they come from all walks of life. We have folks who are security experts, who really understand what threats do, how they work, some of them understand criminal undergrounds and other things like that. And then, we have data scientists that come from many different facets, many of them not particularly experienced in security, but some may be an expert in deep learning, another person may be more on the anomaly detection side. But, you know, you take all these folks with different perspectives and different strengths and you put them together and really cool things happen.

Nic Fillingham: So Holly, you talked about learning French and, sort of, what you studied at college, what other things in your, your education, your history pre-Microsoft do you, sort of, feel, sort of, brought you to where you are now, and that you're, sort of, using in your day? Perhaps things that, that maybe seem a little unorthodox.

Holly Stewart: You know, I'll say that I, I grew up with a really strong work ethic, my family actually comes from farming. And, you know, my father has this really strong work ethic, he gets these guilt complexes about... if he's not doing something productive, his day is not complete. And, and somehow I'm instilled with that and so when I got into security, I kept seeing so many problems, just sort of the threat du jour, every single day we're just bombarded with information, it's, it's sort of an overload. And I always thought, how can we better solve this problem? How can we help people really understand what matters? And when I started getting into data science, I thought, this is the way, this is how we can make better decisions, help people make better decisions, and help protect them in a way where, you know, sort of focusing on the problem du jour really wasn't getting us anywhere, really wasn't moving the needle.

Nic Fillingham: So perhaps that drive that maybe had you thinking you were going to the Peace Corps, you're, you're sort of utilizing a similar motivation there, but now in the data science realm.

Holly Stewart: Yeah, absolutely. I mean, I love being able to say that I go to work and the work that my team does, we are trying to help people every single day to keep them safe, keep them protected. It's, it's something that I feel good about.

Natalia Godyla: That's great. And how does AI and ML factor into that when you're thinking about all of these big, complex problems you want to take on?

Holly Stewart: Yeah, it's a great question. Like if you think about how maybe we traditionally approach security research, where a researcher might reverse engineer some malicious program, figure out what it does, find some heuristic techniques to be able to detect that in the future, make sure those heuristic techniques don't detect the good things that we want our computers to run. That takes a lot of time. And the truth is that malware has become so complex, that there's literally hundreds of millions of features that feed into what makes malware malware. It's really difficult for the human brain to wrap your mind around all these permutations, but that's the beauty of machine learning and AI, it's built for that.

Holly Stewart: And so we take this incredible ecosystem diversity from, you know, benign applications to malicious applications, we feed that information into the machine learning systems, we train them how to recognize good from bad, and they can come up with these permutations that the human brain wouldn't be able to wrap their heads around. And that, that's really how I connect all those things together in our day to day.

Natalia Godyla: Got it. And so what types of... when we say AI and ML, that's a relatively broad set of acronyms there, you know, what type of techniques, what type of approaches do you and your team use, or where are you sort of heavily invested?

Holly Stewart: We invest in lots of things, so if I break down, and I'll say AI in quotes because I, I kind of use it interchangeably to really just mean data science, a data science approach. We use many different techniques, from what you call supervised machine learning to unsupervised machine learning. With supervised machine learning, you're using signals to help teach the machine how to detect something new. So I may take a set of say, 100 files and 10 of them are bad and 90 of them are good, I extract a bunch of features from those files and then I feed that into a machine learning system to teach it how to detect new things that are similar to those files in the future. So that's what you call supervised.

Holly Stewart: Unsupervised, is really good at finding what we call the unknown unknown. So, you know with supervised learning, you're teaching it something that you already know and it just gets better at that. With unsupervised, you're trying to find those pockets of uncertainty that maybe haven't even been classified before, or maybe should be clustered together. Or perhaps you know, using past data you find that, "Hey this is an anomaly, something I haven't seen before that doesn't have a label, but that could indicate that something bad is going on." And so we really use a combination of all of these approaches to help train machines to amplify human knowledge and also find the things that maybe as humans we were not thinking about in the first place.
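Holly's two approaches can be sketched in a few lines of code. Everything below is illustrative only: the toy data, the two invented features, and the thresholds are assumptions for the example, not Microsoft's actual pipeline. A nearest-centroid classifier stands in for supervised learning on Holly's 100 labeled files, and a simple z-score check stands in for unsupervised anomaly detection, which needs no labels at all.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Supervised sketch: 100 "files", 90 good, 10 bad (labels are known). ---
# Each file is reduced to 2 made-up features for illustration.
good = rng.normal(loc=[0.3, 0.2], scale=0.1, size=(90, 2))
bad = rng.normal(loc=[0.8, 0.9], scale=0.1, size=(10, 2))
X = np.vstack([good, bad])
y = np.array([0] * 90 + [1] * 10)  # 0 = benign, 1 = malicious

# Learn the "center" of each labeled class from the training data.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(sample):
    """Predict the label of a new file via its closest class centroid."""
    dists = np.linalg.norm(centroids - sample, axis=1)
    return int(np.argmin(dists))

print(classify(np.array([0.85, 0.88])))  # resembles the known-bad cluster -> 1

# --- Unsupervised sketch: no labels, just flag what deviates from "normal". ---
mean, std = X.mean(axis=0), X.std(axis=0)

def is_anomaly(sample, threshold=3.0):
    """Flag samples more than `threshold` standard deviations from the mean."""
    z = np.abs((sample - mean) / std)
    return bool(np.any(z > threshold))

print(is_anomaly(np.array([2.0, 2.0])))  # far from anything seen -> True
```

The supervised model only gets better at recognizing what it was taught, while the anomaly check can surface the "unknown unknowns" Holly describes, at the cost of no label telling you why something looks strange.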

Natalia Godyla: Can you share a couple examples of how this AI and ML is driving some of the Microsoft products, even products that, like Nic said, we use day to day?

Holly Stewart: Yeah, absolutely. So there are a lot of files that use what we call social engineering to try to trick people into opening them. So one example that we saw over the past year is these attackers were using local business names and making it look like they were sending an invoice for that local business name, I think it was, uh, a landscaping firm or something like that. And so they were using that invoice that looked like it was from a local landscaper, sending it to these other businesses to try to trick them into opening up this invoice. And inside, it led to this phishing site and they would try and collect their credentials. Uh, and so, you know, when you're just looking at this file, it may look benign, but the machine learning system, because it was able to extract all these different features from that file, was able to see, hey, this is not a normal type of invoice that I would see from a legitimate business, and it was able to flag that as malicious and help keep those customers protected.

Natalia Godyla: So Holly, what's next on the horizon, what are you most passionate about trying to solve next?

Holly Stewart: Sure. So today we've done a pretty good job of using AI to help discriminate malicious software from benign software, not perfect but we've made a lot of progress in that area. But what's next on the horizon for us is really deeper than that, so it's great to discriminate malicious from benign, but what more can we learn from that? Say for example, if we understand the entire Kill Chain of that malicious activity: from how it arrived to the victim, to what it did after, if the victim installed it or clicked it, to the, sort of, final motive of the attacker. And if we can understand that entire story, we can look at all of the pieces in that, what we call Kill Chain, and be able to provide protective guidance and automate protections, to essentially learn from what attackers are doing today and make our defenses stronger and stronger over time. And that's really the evolution of AI in security, is to help automate that for the customer. Because the amount of threats that we're facing, the amount of security information, is an overload. And we have to get better, we have to automate, and we have to use AI to do it, to really get to where we need to go.

Natalia Godyla: And how far away do you think this next step in the evolution is?

Holly Stewart: I'm sure I'll be working on it for the rest of my life. (laughs).

Natalia Godyla: (laughs).

Nic Fillingham: Holly, do you have a Twitter account, do you have a blog, do you have anything you want to promote if folks want to learn more about you, your team, if you're hiring?

Holly Stewart: So we post all of our content on the Microsoft Security blog, so you can find it there. And we are hiring data scientists, uh, here in the next week or so, we should have the postings up.

Nic Fillingham: Great, so you would find them on the Microsoft careers website, probably under data science?

Holly Stewart: Under data science or look for defender and data science, and you'll find us.

Natalia Godyla: Thank you, Holly for your time today, it was fantastic to hear about your insights on AI.

Nic Fillingham: Yeah thank you Holly, uh, you know, your time is busy, you're running a big team, doing some great work. We really appreciate you coming on the podcast.

Holly Stewart: Thank you.

Nic Fillingham: It was great to revisit that conversation with Holly, I'm really glad we got to pull that one out of the archive and bring it to newer listeners of the podcast. Up next, Dr. Anna Bertiger, who tells us about her superpowers, which are utilizing math to catch villains. So I hope you enjoy the conversation.

Nic Fillingham: Dr Anna Bertiger, thank you so much for joining us. Welcome to the Security Unlocked podcast.

Dr Anna Bertiger: Thank you so much for having me.

Nic Fillingham: Um, if we could start with what is your title, and what does that really mean in sort of day to day terms. What do you do with Microsoft?

Dr Anna Bertiger: So my title is senior applied scientist, but what I do is I find villains.

Nic Fillingham: You find villains, how do you find villains?

Dr Anna Bertiger: So I-I find villains in computer networks, it's all the benefits of the job as a superhero with none of the risks. And I do that using a combination of security expertise, and mathematics and statistics.

Nic Fillingham: So you find villains with math?

Dr Anna Bertiger: Yes, exactly.

Nic Fillingham: Got it. And so, let's talk about math, what is your path to Microsoft, because I know it heavily involves math. How did you get here, and maybe what other sort of interesting entries might be on your LinkedIn profile?

Dr Anna Bertiger: So, I got here by math, I guess.

Nic Fillingham: (laughs).

Dr Anna Bertiger: So, I come from academic mathematics, I have a PhD in math, and then I had a postdoctoral fellowship in the department of combinatorics and optimization at the University of Waterloo, in Waterloo Ontario, Canada.

Nic Fillingham: Could you explain what that is because I, I heard syllables that I understood, but not words?

Dr Anna Bertiger: (laughs). So that is the department unique to the University of Waterloo. So, optimization is, you know, maximizing, minimizing type problems.

Nic Fillingham: Got it.

Dr Anna Bertiger: And combinatorics is a fancy word for counting things.

Nic Fillingham: Combinatorics.

Dr Anna Bertiger: Yeah, which you can do in fancy and complicated ways, and so-so that's what I did when I was an academic mathematician, is I counted things in fancy and complicated ways that told me interesting things, frequently about geometry. And then I decided that I wanted to see the impact of what I did in mathematics in the real world, in a timeframe that I could see, and not on the sort of like, you think beautiful thoughts, it's really lovely, it's a lot of fun, and then hopefully someone uses them eventually. And so I looked for jobs outside of academia. And then one day, a friend at Microsoft, uh, sent me a note that said, if you like your job that's great, but if you don't, my team wants to hire somebody with a PhD in combinatorics. And I said, that's me. (laughs).

Nic Fillingham: (laughs).

Dr Anna Bertiger: And so, I, uh, you know, it took a while. I flew out for an interview, they asked me lots of questions. I, when I'm interviewing for a job, I evaluate how cool the job is by how cool the questions they asked me are. If they asked me interesting questions, that's a good sign. If they asked me boring questions, maybe I don't want to work there.

Natalia Godyla: Was there something that drew you to the cybersecurity industry when your friend showed you this job wo-, did you see security and go, Yeah that's cool?

Dr Anna Bertiger: So I didn't actually see security in that job, like, that team didn't only work on fraud, we also worked on a bunch of marketing related problems. But I really loved the fraud related problems, I really loved the adversarial problems, I-I like having an adversary. I view it as this like comforting, friendly thing, like, you solve the problem? Don't worry, they'll make you a new one.

Nic Fillingham: (laughs).

Dr Anna Bertiger: It's true.

Nic Fillingham: So hang on, so you, you go to bed at night and sleep soundly knowing that there are more villains out there?

Dr Anna Bertiger: I mean, I would kind of like to get rid of all the villains, but also like, they're bringing me some really cool problems, like on a-

Nic Fillingham: Yeah, you-you're a problem solver and they're throwing some good challenges at you.

Dr Anna Bertiger: Right, on the, like, make-the-world-a-better-place school of thought, I would like them all to disappear off the face of the planet. On the, like, entertaining-me portion, the problems are pretty good. And so I worked a bunch on-on credit card fraud related problems on that team, and at some point a PM joined that team, who was a cybersecurity person who had migrated to fraud. And I said, well, you know, I'm not a cybersecurity person. And he said, oh no, you are. It's a personality type and it's you.

Nic Fillingham: (laughs).

Dr Anna Bertiger: And then, and then I worked at some other things, you know, worked on some other teams at Microsoft, did some windows quality related things. And it-it just wasn't as much fun, and I found my way back to cybersecurity and I've been here since.

Natalia Godyla: How do you use AI or ML tools to solve some of these problems?

Dr Anna Bertiger: So, the AI and ML is about learning what's normal. And then when you say, hey, this isn't normal, that might be malicious, someone should look at it. So, our AI and ML is human-in-the-loop driven. We don't act on the basis of the AI and ML the way that some other folks might, and there are certainly security teams that have AI and ML that makes decisions and then acts on them on its own. That is not the case for us. My team builds AI and ML that powers humans

Dr Anna Bertiger: ... who work in security operations centers, to look at the results. And so, I use ML to learn what's normal. Then, what's not normal, I say, "Hey, you might want to look at this because it's a little squiffy looking." And, then a person acts on it.

Dr Anna Bertiger: And so, I use a lot of statistical modeling to figure out what's normal. So, I fit, uh, a statistical distribution to some numerical data about the way the world is working. And, then calculate a p-value, that you might remember from Stat 1 if that's something you've done, to say, "Oh, yeah. Well, there's, you know, only a tenth of a percent chance that, like, this many bytes transferred between this pair of machines under normal behavior. Someone should look at that. That's a lot of data moving."
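Anna's p-value idea can be sketched concretely. The code below is an illustrative assumption, not her team's actual model: it pretends that bytes transferred between a pair of machines follows a log-normal baseline, fits a normal distribution to the log of the observations, and flags any transfer whose one-sided p-value falls below the "tenth of a percent" she mentions.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical baseline: bytes transferred between a machine pair, assumed
# roughly log-normal, so we model log(bytes) as a normal distribution.
baseline_bytes = rng.lognormal(mean=13.0, sigma=1.0, size=10_000)
log_obs = np.log(baseline_bytes)
mu, sigma = log_obs.mean(), log_obs.std()

def transfer_p_value(num_bytes):
    """One-sided p-value: chance of a transfer at least this large
    under the fitted 'normal behavior' distribution."""
    z = (math.log(num_bytes) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))  # normal survival function

ALERT_THRESHOLD = 0.001  # "a tenth of a percent", as in the episode

typical = transfer_p_value(1_000_000)     # ~1 MB: unremarkable
huge = transfer_p_value(50_000_000_000)   # ~50 GB: worth a human's attention
print(typical > ALERT_THRESHOLD, huge < ALERT_THRESHOLD)
```

Matching Anna's workflow, nothing here acts automatically: a low p-value would just route the event to an analyst for a look.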

Dr Anna Bertiger: And, there, I like to use a group of methods called spectral methods. So, they're about, if I have this graph, I have a bunch of vertices and I can have edges between them, I could make a matrix that has a one in cell IJ if there's a vert- if there's an edge between vertex I and vertex J. Let me know if I am getting too technically deep here.

Nic Fillingham: You are but keep going.

Dr Anna Bertiger: And (laughs), and then, now I have a giant matrix. And so, I can apply all the tools of linear algebra class to it. And, one of the things I can do is look at its eigenvalues and eigenvectors. And, one way you might, sort of, compress this data is to project along the eigenvectors corresponding to large-in-absolute-value eigenvalues. And, now, you know, we can say things like, "All the points that are likely to be connected end up close together."

Dr Anna Bertiger: And, we can try and learn something about the structure of the network and what's strange. And, we've done a bunch of research in that direction. That is stuff I'm particularly proud of.
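The spectral method Anna walks through can be shown end to end on a toy network (the six-machine graph below is invented for illustration): build the adjacency matrix with a one in cell (i, j) for each edge, take its eigendecomposition, and project each vertex onto the eigenvectors with the largest absolute eigenvalues, so tightly connected vertices land close together in the embedding.

```python
import numpy as np

# Toy network: machines 0-2 talk among themselves, machines 3-5 likewise,
# with a single cross-group link: two communities, loosely joined.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0  # cell (i, j) is 1 when i and j share an edge

# Eigendecomposition of the symmetric adjacency matrix.
eigvals, eigvecs = np.linalg.eigh(A)

# Project onto the eigenvectors with the largest |eigenvalue|: vertices
# that are tightly connected end up close together in this embedding.
order = np.argsort(-np.abs(eigvals))
embedding = eigvecs[:, order[:2]]  # 2-D coordinates per vertex

d01 = np.linalg.norm(embedding[0] - embedding[1])  # same community: close
d04 = np.linalg.norm(embedding[0] - embedding[4])  # across communities: far
print(d01 < d04)
```

In Anna's setting, points that land far from every cluster of normal connectivity are the ones worth showing to an analyst.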

Natalia Godyla: What are you most interested in solving next. What are you really passionate about?

Dr Anna Bertiger: I'm really passionate about two things. One of which is, sort of, broadly speaking, finding- finding villains. Finding bad guys. So, part of what I do is dictated by what they do. Right? They- They change their- change their games, I have to change mine, too. And then, also, I have a collection of tools that I think are really mathematically beautiful that I'm really passionate about. And, those are spectral methods on graphs and, sort of, graphs in general.

Dr Anna Bertiger: And so, I'm really passionate about finding good applications for those. I'm passionate about understanding the structure of how computers, people, what have you, connect with each other and interact. And, how that tells us things about what is typical and what is atypical and potentially ill-behaved on computer networks. And, using that information to find horrible people.

Dr Anna Bertiger: I think- I stopped being surprised by what our adversaries can do. Because, they are smart people who work hard. Sometimes, I'm disappointed in the sense of like, "Damn, I thought I solved that problem and they're back." But that's (laughs), I mean, that's mostly just, you feel like the, like, sad balloon three days after the party.

Natalia Godyla: At the end of the day, why do you do what you do?

Dr Anna Bertiger: I think there are two reasons I do what I do. Uh, the first which is I want to make the world a better place with the ways I spend my time. And, I think that catching horrible people on computer networks makes the world a better place. And, the other of which is that it's really just a ton of fun. I- I really do have a lot of fun. We- We think about really cool things. Neat concepts in computing and beautiful mathematics. And, I get to do that all day, every day, with other smart people. Who wouldn't want to sign up for that?

Natalia Godyla: You've called Mathematics beautiful a couple of times. Can you elaborate? What do you find beautiful about Math? What draws you to Math?

Dr Anna Bertiger: I find the ideas in Math really beautiful. And, I think that's a very common thing for people who have a bunch of exposure to Advanced Mathematics. But, it isn't a thing we filter down to folks in school as well as I would like. The- If you think about the Pythagorean theorem, that's a theorem that most people learned in high school geometry, that says that-

Nic Fillingham: I know that one.

Dr Anna Bertiger: ... the squares of the lengths of the two sides of a right- two legs of a right triangle, summed together, equal the square of the length of the hypotenuse. And, if you-

Nic Fillingham: Correct.

Dr Anna Bertiger: That is-

Natalia Godyla: (Laughs)

Dr Anna Bertiger: ... a fact. Okay. And, if you learn it as a piece of trivia then you go, "Okay, that's a thing that I know for the test," and you write it down and you put it on a flash card or whatever. But, what I think is really beautiful is the idea of, "How do you think that up?" And, the, sort of, human ingenuity in figuring out that that's true. And, the- the beautiful ways you can show that that is true. For sure, there's some really, really beautiful ways to be able to prove to yourself that that is true.

Nic Fillingham: Changing topics, sort of, slightly. Are you all Math all the time? You know, do you have a TV show you're binging on Netflix? Do you have computer games you like to play? Are you a rock climber? What's the other side of the Math brain?

Dr Anna Bertiger: So, the other side of the Math brain for me is things that force my brain to focus on something that is entirely not work. And so, I really love horses and I have a horse. And, I love spending time with her and I love riding her. She's both a wonderful pet and just a thrill to ride.

Nic Fillingham: Awesome.

Natalia Godyla: Well, Anna, it was a pleasure to have you on the show today. Thank you for sharing your love of Math and horses and hopefully we'll be able to bring you back to the show another time.

Dr Anna Bertiger: Thank you so much for having me.

Natalia Godyla: I'm so thankful we got to re-listen to Anna's episode. Up next, we'll be talking with Sam Schwartz, who is a program manager for the Microsoft Threat Experts team. But, she wasn't always targeting security. She started out as a chemical engineer. So, hope you enjoy hearing about her career from chemistry to security.

Natalia Godyla: Hello, everyone. We have Sam Schwartz on the podcast today. Welcome, Sam.

Sam Schwartz: Hi, thanks for having me.

Natalia Godyla: It's great to have you here. So, uh, you are a security PM at Microsoft. Is that correct?

Sam Schwartz: That is correct.

Natalia Godyla: Awesome. Well, can you tell us what that means? What does that role look like? What is your day to day function?

Sam Schwartz: Yeah. So, I support, currently, a product called Microsoft Threat Experts. And, what I'm in charge of is ensuring that the incredible security analysts that we have, that are out saving the world every day, have the correct tools and processes and procedures and connections to be the best that they can be.

Natalia Godyla: So, what do some of those processes look like? Can you give a couple examples of how you're helping to shape their day to day?

Sam Schwartz: Yeah. So, what Microsoft Threat Experts does is, it is a managed threat hunting service provided by our Microsoft Defender ATP product. And, what they do is our hunters will go through our customer data in a compliant, safe way and they will find bad guys, human adversaries, inside of the customer telemetry. And, then they notify our customers via a service called the Targeted Attack Notification service. So, we'll send an alert to our customers and say, "Hey, you have an adversary in your network. Please go do the following things. Also, this is the story about what happened, how they got there, and how you can fix it."

Sam Schwartz: So, what I do is I try to make their lives easier by initially providing them with the best amount of data that they can have when they pick up an incident. So, when they pick up an incident, how do they have an experience where they can see all of the data that they need to see. Instead of just seeing one machine that could have potentially been affected, how do they see multiple machines that have been affected inside of a single organization? So, they have an easier time putting together the kill chain of this attack.

Sam Schwartz: So, getting the data and then also be- having a place to visualize the data and easily make a decision as to whether or not they want to tell a customer about it. Does it fit the criteria? Does it not? Is this worth our time? Is this not worth our time? And then, also, providing them with a path, with that data, to quickly create an alert to our customers so that they know what they're doing.

Sam Schwartz: So, rather than our hunters having to sit and write a five-paragraph essay about what happened and how it happened, have the ability to take the data that we already have, create words in a way that's intuitive for our customers, and then send it super quickly, within an hour to two hours of us finding that behavior.

Sam Schwartz: So, all of those little tools and tracking and metrics, and making it easier, like, from data, creating words, sending it to the customers, all of that is what I plan, from a higher level, to enable the hunters to do.

Nic Fillingham: Tell us about how you found yourself in the security space and, maybe it's a separate story, maybe it's the same story, and how you got to Microsoft. We'd love to learn your journey, please.

Sam Schwartz: It is the same story. Growing up, I loved chemistry.

Nic Fillingham: That's too far back, too far back.

Sam Schwartz: I know, I know, I know.

Nic Fillingham: Oh, sorry, sorry.

Sam Schwartz: I loved-

Nic Fillingham: No, let's start there.

Sam Schwartz: I loved chemistry. I loved like molecules and building things and figuring out how that all works. So when I went to college, I was like, "I want to study chemical engineering." Um, so I, through my education, became a chemical engineer (laughing). But I found that I really liked coding. Uh, we had to take a- a fundamentals class at the beginning and I really enjoyed the immediate feedback that you got from coding, like you did something wrong, it tells you immediately that you messed up. And also when you messed up and you're super frustrated and you're like, "Why didn't this work?" like, "I did it right," you didn't do it right. It messed up for a reason, and I really liked that and I thought it was super interesting. And I found myself like gravitating towards jobs that- that involved coding. So I worked for Girls Who Code for a summer. I worked for Dow Chemical Company, but in their robotics division so I was still like chemical engineering, but I got to do robots.

Sam Schwartz: And then when I graduated, I was like, "I think I want to work in- in computer science. I don't like this chemical engineering." It was quite boring. Even though they said it would get more fun, it never did. We ended up watching water boil for a lot of my senior year of college and I was like, "I want- I want to join a tech company." And I looked at Microsoft and they're one of the only companies that provide a program management job for college hires so ... And I interviewed, I was like, "I want to be a PM," sounds fun, get to hang out with people and I ended up getting the job, which is awesome. And I walked in on my first day, met my team, and they're like, "You're on a threat intelligence team." I was like, "What does that mean?" (Laughs) And-

Nic Fillingham: Oh, hang on. So did you not know what PM role you were actually going to get?

Sam Schwartz: Nope. They told me that I was slated for the Windows ... I was going to be on a Windows team. So in my head like that entire summer, I was telling people (laughing) I was going to work on the start button just 'cause like that's what I ... I was like, "If I'm going to get stuck anywhere, I'm going to have to do the start button." Like that's what my-

Nic Fillingham: That's all there is. Windows is just now ... It's just a start button. So yeah.

Sam Schwartz: I was like that's what ... I was like, "Guaranteed, I'm going to get the start button," or like Paint. Actually, I probably would've enjoyed Paint a lot, but the start button (laughing). And I came and they're like, "You're on a threat intelligence team," and I was like, "Oh, fun." And it was incredible. It was an incredible start of something that I had no idea what anyone was talking about. When they were first trying to explain it to me like in layman's terms, they're like, "Oh, well, there's malware and we have to decide how it gets made and how we stop it." And I was like, "What's malware? Like I don't ... " I was like, "You need to like really dumb it down (laughs). I have no idea what we're talking about."

Sam Schwartz: And initially when I started on this threat intelligence team, there were only five of us. So I was a PM and they had been really wanting a PM. They, apparently before they met me, weren't ... were happy to get a PM, but weren't so happy I was a college hire. They're like, "We need ... " They were like, "We need s-

Nic Fillingham: Who had never heard of malware.

Sam Schwartz: "We need structure." (Laughs)

Nic Fillingham: And thought Windows was just a giant anthropomorphic start menu button.

Sam Schwartz: They're like, "We need structure and we need a person to help us." And I was like, "Hi. Nice to meet you all." And so we had two engineers who were building tools for our two analysts. Um, and it was ... We called ourself like a little startup inside of, uh, security research, inside of the security and compliance team 'cause we were kind of figuring it out. We were like, "Threat intelligence is a big market. How do we provide this notion of actionable threat intelligence?" So rather than having static indicators of compromise, how do we actually provide a full story and tell customers to configure to harden their machines and tell a story around the acts that you take to initiate all of these- These configurations are going to help you more than just blocking IOCs that are months old. So figuring out how to best provide ... give our analysts tools, our TI analysts, and then allow us to make Microsoft products better as a whole. So based on the information that our analysts have, how do we kind of spread that message ac- across the teams in Microsoft and make our products better?

Sam Schwartz: So we were kinda figuring it out and I shadowed a lot of analysts and I read a lot of books and watched a lot of talks. I would watch talks and write just like a bunch of questions and finally, as you're around all these incredibly intelligent security people, you start to pick it up. And after about a year or so, I would sit in meetings and I would listen to myself speak and I was like, "Did I say that?" Like, "Was that- was that me that, one, understood the question that was asked of me and then also was able to give an educated answer?" It was very shocking and quite fun, and I still feel that way sometimes. But I guess that's my journey into security.

Natalia Godyla: Do you have any other suggestions for somebody who is in their last years of college or just getting out of college and they're listening to this and saying, "Heck, yes. I want to do what Sam's doing?" Uh, any other applicable skills or tricks for getting up to speed on the job?

Sam Schwartz: I think a lot of the PM job is the ability to work with people and the ability to communicate: to understand what people need and communicate it in a way that maybe they can't, to see people's problems and be able to fix them. But a lot of the PM skills you can get by working collaboratively in groups, and you can do that in jobs, you can do that in- in classes. There's ample opportunity to work with different people: volunteering, mentoring. Working with people, communicating effectively, connecting with people, being empathetic, understanding their issues and trying to help is something that everyone can do, and I think everyone can be an effective PM.

Sam Schwartz: On the security side, I think reading and listening. I mean, anyone listening to this podcast is already light years ahead of where I was when I- when I started, but just listening, keeping up to date, reading what's going on in the news, understanding the threats, scouring Twitter for all the- all the goodness going on. (Laughing)

Sam Schwartz: That's a way to- to stay on- on top.

Nic Fillingham: Tell us about your role and how you interface with data scientists that are building machine learning models and sort of AI systems. Where- where are you able to ... Are you a consumer of those models and systems? Are you contributing to them? Are you helping design them? What's ... How do you- how do you fit into that picture?

Sam Schwartz: So a little bit of all of the things that you mentioned. Being a part of- of our MTE service, we have so many parts that would love some- some data science, ML, AI help, and we are both consumers and contributors to that. So we have data scientists who are creating those traps that I was talking about earlier for us, who are creating the indicators of malicious, anomalous behavior that our hunters then key off of. Our hunters also grade these traps and then, we can provide that back to the data scientists to make their algorithms better. So we provide that grading feedback back to them to have them then make their traps better. And our hope is that eventually, their traps, so these low fidelity signals, become so good and so high fidelity that we actually don't even need them in our service. We can just put them directly in the product. So we work, we start from the- the incubation, we provide feedback and then we, hopefully, see our- our anomaly detection traps grow and- and become product detections, which is an awesome lifecycle to be a part of.
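The trap lifecycle Sam describes — data scientists ship low-fidelity signals, hunters grade them, and a trap "graduates" to a product detection once it is reliable enough — can be sketched in code. This is purely an illustrative model, not Microsoft's actual system; the class, the precision metric, and the promotion thresholds (`PROMOTION_THRESHOLD`, `MIN_GRADES`) are all hypothetical choices made for the example.

```python
from dataclasses import dataclass, field

PROMOTION_THRESHOLD = 0.95  # hypothetical precision bar for product promotion
MIN_GRADES = 50             # hypothetical minimum number of hunter grades

@dataclass
class Trap:
    """A low-fidelity anomaly signal that hunters grade over time."""
    name: str
    # Each grade records a hunter's verdict: True = real threat, False = false positive.
    grades: list = field(default_factory=list)

    def record_grade(self, is_true_positive: bool) -> None:
        """Feed one hunter verdict back to the data scientists."""
        self.grades.append(is_true_positive)

    @property
    def precision(self) -> float:
        """Fraction of fired alerts that hunters confirmed as real threats."""
        if not self.grades:
            return 0.0
        return sum(self.grades) / len(self.grades)

    def ready_for_product(self) -> bool:
        """High fidelity and enough evidence: promote from hunting service to product."""
        return len(self.grades) >= MIN_GRADES and self.precision >= PROMOTION_THRESHOLD

# Simulate a trap accumulating hunter feedback: 48 confirmed threats, 2 false positives.
trap = Trap("anomalous-service-creation")
for _ in range(48):
    trap.record_grade(True)
for _ in range(2):
    trap.record_grade(False)

print(trap.precision)            # 0.96
print(trap.ready_for_product())  # True
```

The design choice worth noting is that the feedback loop is explicit: every grade both tunes the signal and builds the evidence needed to decide when the trap no longer needs a human in the loop.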

Natalia Godyla: Thank you, Sam for joining us on the show today. It was great to chat with you.

Sam Schwartz: Thank you so much for having me. I've had such a fun time.

Natalia Godyla: Well, we had a great time unlocking insights into security from research to artificial intelligence. Keep an eye out for our next episode.

Nic Fillingham: And don't forget to tweet us @msftsecurity or email us with topics you'd like to hear about on a future episode. Until then, stay safe.

Natalia Godyla: Stay secure.