Security Unlocked 11.11.20
Ep 5 | 11.11.20

Protecting Machine Learning Systems

Transcript

Nic Fillingham: Hello, and welcome to Security Unlocked, a new podcast from Microsoft, where we unlock insights from the latest in news and research from across Microsoft Security Engineering and Operations teams. I'm Nic Fillingham.

Natalia Godyla: And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft Security, deep dive into the newest threat intel, research and data science.

Nic Fillingham: And profile some of the fascinating people working on artificial intelligence in Microsoft Security. If you enjoy the podcast, have a request for a topic you'd like covered, or have some feedback on how we can make the podcast better...

Natalia Godyla: Please contact us at securityunlocked@microsoft.com or via Microsoft Security on Twitter. We'd love to hear from you.

Nic Fillingham: Hello, Natalia. Welcome to another episode of Security Unlocked.

Natalia Godyla: Hello, Nic. How's it going?

Nic Fillingham: It's going really well. We've got some early data, some early data hot off the presses from our listeners. I thought we might jump straight into that, instead of finding out what smells have permeated my basement. Is that-

Natalia Godyla: Great to hear it.

Nic Fillingham: Yeah. So, we just got some data coming out of the various podcast hosting platforms, and we have been listened to in over 60 countries, which is, I mean, that's amazing. That's if my math is correct, that's a quarter of all sovereign nations on earth. So that's pretty cool. Right?

Natalia Godyla: Yeah, we're making headway. I feel like global just makes it sound like such a big deal. We're currently listened to in Estonia, Kazakhstan, the UK, both of our father slash motherlands Australia and Poland. So, it's great to see the representation. Thank you all.

Nic Fillingham: I want to list a few more, because I just want to make sure that the few listeners that I think are there, they're getting a shout out. Myanmar, Azerbaijan, Albania, Haiti. Thank you so much to all of you listening to the podcast. On today's episode, we speak first with Sharon Xia, who is the Principal PM in the Cloud Security team. This will be the first of five or six interviews we're going to have over the next few episodes with authors and contributors to the Microsoft Digital Defense Report, the MDDR. You can download that at aka.ms/digitaldefense. This is what I like to call the spiritual successor or the successor to the Security Intelligence Report, the SIR, which is a document that Microsoft has produced for the last 15 years on trends and insights in the security space. Natalia you've read the report. What would you say to folks that are sort of thinking of downloading it and giving it a read?

Natalia Godyla: Well, first off, the machine learning attack section is definitely one to read. It's fascinating to read about the new attacks that are out there, model poisoning, model inversion; we'll touch on them in future episodes, so I'll leave it at that, but there's lots of new goodness. In general, the MDDR is a huge effort within Microsoft. It's highly collaborative and it brings together a ton of experts who really know their stuff. You'll see that breadth of knowledge and intelligence when reading the report and in all of our upcoming episodes, since we'll be spotlighting a number of experts who contributed to the report. In addition to the MDDR, we'll have Emily Hacker on the episode, who is a threat analyst, and she'll talk about her journey from literature major to the cybersecurity realm.

Nic Fillingham: Awesome. We hope you enjoy the episode. Sharon Xia, thank you so much for joining us. Welcome to the Security Unlocked podcast.

Sharon Xia: Hey everybody, thank you for inviting me.

Nic Fillingham: Oh, you're very welcome. We're happy to have you. Could you give us a brief introduction to yourself? What's your title? Tell us about what you do day to day in your team, and the mission and goal of your role and the team that you run.

Sharon Xia: Sure. So I'm a principal program manager, managing the PM team within the Azure Security data science team. We have six PMs and 30 data scientists. Our day-to-day work is using machine learning to build threat detections and other features that protect Azure, protect our customers, and also protect machine learning models.

Nic Fillingham: So that's a team of 30 data scientists, sort of machine learning experts, that are protecting all of Azure and Azure customers. Is that right?

Sharon Xia: That's right. Actually, it includes more than Azure customers, because our products and our solutions apply to on-prem systems as well as other clouds like AWS and GCP.

Natalia Godyla: Microsoft recently published the Microsoft Digital Defense Report, in which we talked about machine learning and security. As I understand it, you contributed to this report, and one of the themes was something you just touched on: preparing the industry for attacks on machine learning systems. So can you talk a little bit about how the cybersecurity space is viewing these machine learning attacks? What's happening? What measures can organizations take to protect themselves against these attacks?

Sharon Xia: Yeah, as we all know, machine learning plays an increasingly important role in operations and in our day-to-day life, right? It applies not only to facial recognition or voice, but also to many medical devices and analyses.

Sharon Xia: So it's just embedded in our day-to-day life nowadays. But when it comes to cyber attacks on machine learning systems and machine learning models, we're just getting to know them. And they're more and more prevalent, based on our research. We did a survey of 28 large enterprise customers; 25 told us they had no idea what attacks are out there against machine learning systems. So that's kind of alarming, right? For example, take the model poisoning attack. A real-world example is that an attacker can manipulate the training data so that a street sign classifier learns to recognize a stop sign as a speed limit sign. That's really dangerous if you think about it, right? If you're driving a Tesla and you're supposed to stop. I'm not saying Tesla is vulnerable to this attack, but this is an example of a model poisoning attack.
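The label-flipping flavor of model poisoning Sharon describes can be sketched in a few lines. Everything here is illustrative: a toy 1-nearest-neighbour "classifier" over made-up 2-D features stands in for a real street-sign model.

```python
import random

random.seed(0)

# Toy stand-in for a street-sign classifier: 1-nearest-neighbour over
# 2-D feature vectors. Features and labels are illustrative only.
def predict(training_data, x):
    nearest = min(training_data,
                  key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    return nearest[1]

# Clean training set: "stop" samples cluster near (1, 1),
# "speed_limit" samples near (10, 10).
clean = (
    [((random.uniform(0, 2), random.uniform(0, 2)), "stop") for _ in range(50)]
    + [((random.uniform(8, 12), random.uniform(8, 12)), "speed_limit")
       for _ in range(50)]
)

# Poisoning: the attacker flips the labels of the stop-sign samples.
# (Flipping every label is the extreme case; real attacks may alter
# only a small fraction of the training data.)
poisoned = [(x, "speed_limit" if y == "stop" else y) for x, y in clean]

sign = (1.2, 0.8)  # feature vector of a genuine stop sign
print(predict(clean, sign))     # -> stop
print(predict(poisoned, sign))  # -> speed_limit
```

Flipping every label is the blunt version; an attacker who controls only a fraction of the training data can still shift the decision boundary, which is what makes the telemetry checks discussed later in the episode matter.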

Nic Fillingham: So, we talked about the report, the Microsoft Digital Defense Report that was released. It's a pretty lengthy document, full of a lot of incredible guidance. You and your team specifically contributed what we're talking about on the podcast today: the section within the state of cybercrime chapter called machine learning and security. And as you just touched on, the very first of the four trends called out there is simply awareness and preparation. I want to touch on that stat you mentioned a minute ago. So you surveyed 28 organizations, and 25 of those 28 said that they don't have a plan, they don't have tools, they're not prepared for adversarial ML. Is that an accurate takeaway?

Sharon Xia: Yeah. So what we've seen at this moment is that the security team and the machine learning team are running in two parallel orbits right now. They do not interact; they're doing their own things, not aware of security for machine learning systems. So the first step we've been putting a lot of effort into is community awareness. And we definitely need the community's help to pull those orbits together so they finally interact, right? So that's a call to the community: raise that awareness and work together to first become aware of this, then build some tools and trainings to get our defenses up. You have a red team and a blue team, right? So they'll get our defenses up to speed.

Nic Fillingham: You mentioned a few types of sort of attacks there against models, model stealing, I think is relatively self-explanatory. Model inversion is interesting the way you explained it, it sounds like it's the ability to sort of reverse engineer or extract the data out of a model. The one that I sort of want to touch on here is, is sort of model poisoning. So you, you explained it as poisoning a model so that instead of seeing a stop sign, if it was trying to identify road and traffic signs, it may see something else. It may see a speed limit or something. How does that happen? How do we know how model poisoning works? Have we seen it in action? Have we been able to sort of post-mortem any successful model poisonings to understand how it actually happens?

Sharon Xia: Yeah. There are multiple ways for model poisoning to happen because, like I described, it's about manipulating the training data, right? If you have access to the training data directly, you can manipulate it on purpose, though that needs some machine learning knowledge to do right. Or, let's say at first glance you don't have access to poison the data, but you do have access to the network. Then you can do a traditional man-in-the-middle attack to disrupt the training. And there are two kinds: an integrity attack or an availability attack. If you disrupt the training so the model can't be trained effectively, that's basically an attack from the availability point of view. And if you change the data, like making the street sign classifier read a stop sign as a speed limit, that's a kind of integrity attack.

Sharon Xia: So there are multiple ways to do that.

Natalia Godyla: So how are we thinking about assessing the trustworthiness of an ML system? It sounds like it's clear that we're still at the awareness stage and we're partnering with organizations to build out frameworks. What elements are we bringing into these frameworks or standardizations to measure trustworthiness of ML systems and identify whether they've been impacted?

Sharon Xia: Yeah. We came up with an amendment to our Security Development Lifecycle at Microsoft. One of the processes is threat modeling, so we have threat modeling for machine learning systems, with specific guidelines and questions: how do you do threat modeling on a machine learning system to identify the potential attack surfaces and potential risks in the development process? That's the first step we are taking, and it's also part of the awareness effort, right? When you are doing regular threat modeling, you are asked these questions. For example: if your data is poisoned or tampered with, how would you know? The follow-up question is: do you have telemetry to detect skewed data quality in your training data? And are you training from user-supplied inputs?

Sharon Xia: If yes, what kind of input validation or sanitization are you doing? Or if your training is against an online data store, what steps do you take to ensure the security of those connections? There's a long list of questions we ask in our regular threat modeling like that. We actually published the documentation a while ago on the Microsoft security engineering site; it's public documentation, with all these questions for the community to reference.
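One of those threat-modeling questions, telemetry to detect skewed training data, can be approximated with a very simple check. The labels, baseline, and threshold below are all invented for illustration.

```python
from collections import Counter

def label_distribution(labels):
    # Fraction of the dataset carrying each label.
    total = len(labels)
    return {label: count / total for label, count in Counter(labels).items()}

def drift_alert(baseline, current, threshold=0.2):
    # Flag any label whose share of the data moved more than
    # `threshold` away from the recorded baseline.
    labels = set(baseline) | set(current)
    return {l for l in labels
            if abs(baseline.get(l, 0.0) - current.get(l, 0.0)) > threshold}

# Baseline captured when the training set was known-good...
baseline = label_distribution(["stop"] * 50 + ["speed_limit"] * 50)
# ...versus the batch about to be trained on.
current = label_distribution(["stop"] * 10 + ["speed_limit"] * 90)

print(drift_alert(baseline, current))  # flags both labels as drifted
```

A real pipeline would track feature distributions too, not just label shares, but even a crude check like this surfaces the kind of sudden skew a poisoning attempt introduces.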

Nic Fillingham: Sharon, what should Microsoft customers know about how we are securing our AI systems and machine learning models that are in production? Obviously we're doing everything we can, we're investing heavily, but this is a very new area.

Sharon Xia: Right. Yeah. So like I said at the very beginning, we work at Microsoft scale, and leadership is aware of the effort. We work with the responsible AI effort Microsoft-wide. Also, we have a working group that focuses on responsible AI and adversarial AI. So it's a Microsoft-wide effort to make sure that on the engineering side, we are building secure machine learning systems.

Natalia Godyla: And aside from protecting our machine learning systems, how are we taking this technology, taking machine learning, and applying it to our security solutions so that we can empower security teams?

Sharon Xia: Good question. We're building solutions and detections in our cloud-native SIEM product, Azure Sentinel. It's not released yet, but we are working on it so that our customers can use the technology, based on our experience and our study, and apply it to their machine learning systems to at least detect attacks on those systems. On the other end, we have a red team actively doing red teaming activities against our machine learning systems, and we keep learning new attack techniques that way.

Nic Fillingham: Got it. So we've covered that first trend, which is really about awareness of this new category of threat: attacks on machine learning systems. I might move on to the second of the four trends in the report, and that one is about leveraging machine learning to reduce alert fatigue. Can you talk a bit about that trend for us? What happened in 2020, or in the last 12 months, around how ML has advanced and the use of ML to help reduce alert fatigue?

Sharon Xia: Yeah. So when you look at security operations, the security analysts in every organization are dealing with a lot of fatigue. If you are working in the security operations field, you have to deal with alerts from many different products: antivirus, firewalls, EDR solutions, XDR solutions. All these security solutions just keep sending alerts, like a thousand alerts. A typical security analyst in the security operations center for a large enterprise has to deal with about 2,000 alerts daily. That obviously causes lots of issues, right? If you're not able to go through all these alerts, you may drop the real attacks. And among all these alerts there are lots of false positives. There is a survey saying some products generate more than 50% false positives, or even 70% false positives, and that really prevents the defender team, the SOC analysts, from dealing with the true attacks, the real threats.

Sharon Xia: So one of the reasons for all these false positives is that the traditional rule-based approach doesn't adapt to changes in the environment. The advantage of machine learning is that it learns the new environment and adapts to its changes. So if we look at Azure Sentinel, we have this machine learning threat detection called Fusion. Fusion technology uses several different machine learning algorithms, a graph, and the kill chain model. We basically correlate signals from multiple products and multiple sources, like your identity management system, your firewall, your EDR, your endpoints, and other sources of data. We look at all these anomalies, chain them together in the sense of the kill chain, and fire high-fidelity alerts.

Sharon Xia: To give you an example: if you find a suspicious login from an anonymous IP address, say via a Tor browser, then on its own that is maybe not that suspicious. It doesn't mean with high fidelity that the account is compromised or the login is malicious, right? But if it's followed by an unusual mass download, or by setting up a mailbox forwarding rule in Outlook that forwards all the company's business email to a Gmail account or something like that, then if you chain those activities together, you can see there is obviously something like data exfiltration or another attack underway, depending on the signals, right? So this is how we use machine learning to reduce alert fatigue and give you high-confidence, high-fidelity alerts, allowing the security analysts to focus their energy on investigating and mitigating those threats.
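A heavily simplified sketch of the correlation idea Sharon describes, chaining per-entity anomalies across kill-chain stages before firing an alert, might look like this. The stage names, entities, and threshold are invented for illustration and bear no relation to Fusion's actual models.

```python
from collections import defaultdict

# A hypothetical three-stage kill chain; the real models are far richer.
KILL_CHAIN = ["initial_access", "collection", "exfiltration"]

def correlate(alerts, min_stages=2):
    """Group low-fidelity alerts by entity and raise a high-fidelity
    incident only when one entity trips several distinct stages."""
    by_entity = defaultdict(set)
    for entity, stage in alerts:
        by_entity[entity].add(stage)
    return {entity: sorted(stages, key=KILL_CHAIN.index)
            for entity, stages in by_entity.items()
            if len(stages) >= min_stages}

alerts = [
    ("alice", "initial_access"),  # suspicious sign-in from an anonymous IP
    ("alice", "collection"),      # unusual mass download
    ("alice", "exfiltration"),    # forwarding rule to an external mailbox
    ("bob",   "initial_access"),  # a lone anomaly stays low-fidelity
]
print(correlate(alerts))
# -> {'alice': ['initial_access', 'collection', 'exfiltration']}
```

The point of the design is the filtering: each individual alert is weak evidence, but requiring multiple distinct stages per entity is what turns a flood of noisy signals into a small number of high-fidelity incidents.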

Natalia Godyla: The volume of signals and the need for specialized skill sets, data science skills, to develop these ML models brings us to a third theme, which is democratizing ML. So can you talk a little bit about what our ask is to the security community and how we view democratizing ML as a next step in the progression?

Sharon Xia: Across the industry, we've seen that we're short of security experts. We are definitely short of data scientists to build good, high-quality threat detections. We need to boost knowledge: security knowledge as well as machine learning knowledge. And going further, we also need domain knowledge, by which I mean industry domain knowledge, whether it's the financial industry or healthcare or energy. At Microsoft, we have cybersecurity experts for IT, information technology. We also have hundreds of data scientists; my team alone has 30 full-time data scientists. And we work across teams: we work with our threat intelligence team and our security analyst teams and leverage their knowledge. So when you use a product we produce at Microsoft, like this threat detection, it's the result of multiple teams and multiple efforts, with all that expertise in there. But we don't claim we know everything.

Sharon Xia: And like I said, a generic machine learning algorithm may work well in one environment but be less effective in another because of special circumstances in that organization. We fully realize there is a lack of data science resources in enterprises. So what we want to do is enable security analysts, the experts in security and the domain experts in their organization, to improve the built-in machine learning models in our products, for example Azure Sentinel, so they can improve the quality of the models and produce better signals in their environment. This is the effort of democratizing machine learning in the SOC. So we are building this interface and this technology into the product, so security analysts can customize our machine learning models without any machine learning expertise.

Nic Fillingham: And Sharon, that leads us to the fourth and final big trend in the report. Again, this is the Microsoft Digital Defense Report 2020, which you can download at aka.ms/digitaldefense. That final trend is about leveraging anomaly detection for post-breach detection. We had Dr. Josh Neil on the podcast, I think in our second episode; his team is actively involved in this area. Can you talk a little bit about this final trend called out in the report?

Sharon Xia: Yeah. So behavior changes over time, right? And that's the beauty of machine learning. A machine learning model observes the normal behavior, and then signals if anomalous behavior happens, unusual activities, and these are important for post-breach detection. If we observe anything abnormal happening, we stitch all these abnormalities together and then find those strong, attack-relevant incidents. There are supervised machine learning models and unsupervised machine learning models. Because supervised machine learning models require labeling, and that puts a lot of demand on our customers, we are now switching to more unsupervised methods to detect those abnormal behavior changes. These automatically adjust the profile of a user, a machine, or an IP (we call all of them entities in the customer environment) and learn normal behavior versus abnormal behavior. So that's how we use anomalies for post-breach detection, thanks to these kinds of unsupervised machine learning models.

Sharon Xia: Most of these models we are able to run in a streaming fashion, because they don't require training. Being able to run in a streaming fashion brings the mean time to detect down to milliseconds, right? This is important. If we can detect a potential compromise in near real time, we want to do that, right? Otherwise, "Oh," nine months later, or maybe two days later, you find the compromise, right. So-

Nic Fillingham: If it's not instantaneous, it's sort of useless.

Sharon Xia: Right, I know, yeah. So this is a truly important advantage of the technology: we are able to detect those anomalies in real time or near real time and stitch them together as quickly as possible.
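The streaming, training-free anomaly detection Sharon describes can be illustrated with a per-entity running profile. This z-score sketch uses Welford's online mean/variance update; the event counts and thresholds are invented for illustration.

```python
import math

class StreamingProfile:
    """Running per-entity baseline via Welford's online mean/variance.
    No stored history and no training pass, so it suits streaming data."""

    def __init__(self, z_threshold=3.0, min_samples=10):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0
        self.z = z_threshold
        self.min_samples = min_samples

    def observe(self, x):
        # Flag x if it sits far outside the behaviour learned so far.
        anomalous = False
        if self.n >= self.min_samples:
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(x - self.mean) / std > self.z:
                anomalous = True
        # Update the profile in O(1) per event (Welford's update).
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

# Daily download counts for one user: a stable baseline, then a spike.
profile = StreamingProfile()
counts = [100, 98, 103, 101, 99, 102, 97, 100, 101, 99, 100, 5000]
flags = [profile.observe(c) for c in counts]
print(flags[-1])  # -> True: 5000 downloads against a ~100/day baseline
```

Because each update is constant-time with no stored history, the same idea can score events as they arrive, which is what makes near-real-time detection feasible.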

Nic Fillingham: Well thank you, Sharon. There's a lot in the five pages of the machine learning and security section of the report, there is a lot of content to cover and we've really just touched on each of those four trends.

Nic Fillingham: I highly encourage folks to download the report. We'll make sure the link is in the show notes. If you're someone that can hear links and remember them and put them into your browser, it's aka.ms/DigitalDefense.

Sharon Xia: Yeah. What I wanted to say is that it's very exciting that we are working in this really important area, protecting our customers with machine learning technology, right? And there are lots of new areas, new territory we haven't explored. So I would really call for the community to work together with us and innovate in this area, so our customers are better protected.

Natalia Godyla: That's great. Yeah, it'll be a group effort. Well Sharon, thank you for joining us today. It's been great to hear about the progress we've made and the progress we are making in machine learning and security. So really appreciate you walking us through this and sharing the great work your team is doing.

Sharon Xia: Thank you for the opportunity.

Natalia Godyla: And now, let's meet an expert in the Microsoft security team to learn more about the diverse backgrounds and experiences of the humans creating AI and tech at Microsoft. Today, we're speaking with Emily Hacker. Thank you for being here, Emily.

Emily Hacker: Thank you for having me.

Natalia Godyla: Well, let's kick things off by just talking a little bit about your day job. So can you tell us your role at Microsoft and what your day-to-day looks like?

Emily Hacker: Yeah, definitely. So I am a threat intelligence analyst on the TIGER team on Microsoft Defender. And I spend my days doing a variety of things. So specifically, I have a focus on email threats. So I gather a lot of information about email threats from open-source intelligence, from telemetry, from internal teams. And I combine all of these sources to try and find the email threats that are impacting our customers the most, and to put in proactive measures to stop those from impacting customers.

Nic Fillingham: I want to know what the TIGER team is. What's a TIGER team?

Emily Hacker: A TIGER team. It does stand for something, Threat Intelligence Global-

Nic Fillingham: Is it a backronym? Were you all sitting in a room, and you're like "We need a cool name"?

Emily Hacker: Oh, for sure. Definitely a backronym. It was definitely a backronym.

Nic Fillingham: Someone's like "Tigers are cool"?

Emily Hacker: Yeah, I feel very confident.

Nic Fillingham: So you made it work.

Emily Hacker: Yeah.

Nic Fillingham: You made it work, but it's not necessarily memorable?

Emily Hacker: No, we do have a lot of tiger imagery and logos and stuff related to our team now. And so we know what animal we are, but we might not know what we do.

Natalia Godyla: I love that you guys went all in on it.

Nic Fillingham: Are there any other teams based on animals of the Serengeti?

Emily Hacker: No, oh the Serengeti. So there's a phishing org that I've got a dotted line to that we recently backronymed as well. And now it's Osprey, like the bird. So I'm like a member of the animal kingdom here.

Nic Fillingham: Yeah, that's like a seagull, isn't it?

Emily Hacker: I think they're pretty scary looking though. I think that was more the imagery.

Nic Fillingham: It's also the name of the big Marine helicopter I think in the British Navy.

Emily Hacker: The helicopter, yeah. And that's what I usually think of first. I think it's the one, the helicopter that maybe folds up or something.

Nic Fillingham: That's got the wings that fold out? Is that right? It's sort of like half a plane?

Emily Hacker: Yep. Mm-hmm (affirmative).

Nic Fillingham: It's like a VTOL, is it a VTOL?

Emily Hacker: It's fancy looking for sure.

Nic Fillingham: Got it. Well, this has been a great conversation. Thanks, we're done here. No, I think you were... I'm sorry, I derailed us by asking what TIGER stood for.

Natalia Godyla: I was going to start with a rather broad question, so I'm glad we did TIGER first. So you spend your day-to-day on email threats. Do you see any patterns that... you'd like to elucidate the audience on?

Emily Hacker: So patterns, I mean we see a lot of different techniques and patterns and stuff that we're tracking for sure. I think with... We look at both malware threats being delivered by email, and we look at phishing, like credential theft, threats being delivered by email. And one of the things that I would say, maybe a pattern that I've noticed, is that a lot of times the techniques that we see between the two are kind of different. So it's usually noticeable to us if we're looking at certain techniques that are definitely malware versus phishing.

Emily Hacker: And then we've also recently expanded more of our deep dive into business email compromise, which often is completely wholly different from the other two types of threats that I just mentioned.

Natalia Godyla: Can you describe why business email compromise is often treated wholly different? What is the distinction between that and the other two threats?

Emily Hacker: Yeah, definitely. So business email compromise a lot of times is totally different from malware and phishing, because it won't contain any links or attachments. So it's totally social engineering based, which is interesting to me. Personally, I find it super interesting because it's basically just the quote unquote "bad guys," if you will, tricking people into wiring them money.

Emily Hacker: So when we're looking at malware threats, a lot of times they're going to use links or attachments that lead to obviously malicious code being downloaded onto the machine. And the emails themselves might be... We've seen completely blank emails. We've seen emails that use really generic lures, such as "Please see the attached invoice." Of course, the attached invoice is fake. And with phishing, similarly, we'll see lures such as... Actually, we see a lot of "Please join this Zoom call or this Teams call" or whatever.

Emily Hacker: They're going to try and make the recipient click on the link. But with business email compromise, it's totally done in email. So the threat actor will just send an email. A lot of times they will, as the name suggests, compromise one of the accounts of an individual who works at a victim company in accounting or wire transfers or that kind of job, and they will send emails from that account. Or another thing I've seen is they will have some kind of methodology of watching emails on a victim's email network, either via some OAuth phishing that they had done earlier, or perhaps they got credentials to the email inbox. But then when it actually comes time to send the malicious email, rather than using the user's email, they'll create one that looks almost identical, but just change a couple of characters.

Emily Hacker: So they might register a domain. For example, if someone was trying to use my email address, instead of "microsoft.com" they might register "micros0ft.com" with a zero, and then use my exact username. So to an unsuspecting victim, a reply to a thread will look exactly like it came from me, but then the malicious emails themselves aren't going to contain links or attachments. They're literally just going to be the bad guy saying, "Hey, can you wire me these hundred thousand dollars or more, send it to this bank account?" And since there's already a level of trust with the victim, because it's usually coming either from a legitimate email account that they're used to doing business with, or one that's faked to look very similar to it, these are super successful.

Emily Hacker: The people are wiring money to attacker accounts. And there's no malicious code involved. There's no phishing link involved, it's completely social engineering. Sorry, that was a really long answer. I got apparently really into that, sorry.
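The lookalike-domain trick Emily describes ("micros0ft.com" with a zero) is the kind of thing a simple edit-distance check can catch. This sketch assumes a hypothetical allow-list of trusted domains; real mail defenses use much richer signals.

```python
def edit_distance(a, b):
    # Levenshtein distance via a rolling dynamic-programming row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Hypothetical allow-list; contoso.com is a placeholder domain.
TRUSTED = ["microsoft.com", "contoso.com"]

def lookalike(domain, max_distance=2):
    # A lookalike is close to a trusted domain without being equal to it.
    return any(0 < edit_distance(domain, t) <= max_distance for t in TRUSTED)

print(lookalike("micros0ft.com"))  # -> True  (one substituted character)
print(lookalike("microsoft.com"))  # -> False (exact match, legitimate)
print(lookalike("example.org"))    # -> False (unrelated domain)
```

Requiring the distance to be strictly greater than zero is the important detail: the legitimate domain itself must never be flagged, only its near-misses.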

Nic Fillingham: Emily, I wonder if you could tell us how you found your way to Microsoft. Have you been in security for a long time? What was path into your role and how did you find yourself in the security industry?

Emily Hacker: Definitely. So it's definitely a bit of a roundabout interesting story. So it goes back a ways to when I first went to college, I guess. So I have a degree in English and communications and a minor in journalism. And I had every intention of being a newspaper reporter. I worked for my school's newspaper for a while. And then I worked for the city newspaper, for the city that I went to college in. And upon graduation, I decided maybe I wanted a job that had a little bit more normalcy. I really loved newspaper reporting, but it was a lot of late nights in the newsroom and stuff. So I ended up going into technical writing, and my first job out of college, I was actually writing software manuals. So it was pretty dry stuff, I'll admit. Where I was writing the manuals that people would refer to if they were having trouble.

Emily Hacker: This was specifically for software for car dealerships, where the stuff I was writing was like "Press the F5 key to submit", or like that level of manuals, those very dry manuals. And I wasn't all that excited by that work. Some people love it and I understand why, but I didn't. So I was lucky that a girl that I had worked with at that job, I had only worked with her for a couple of months and she had gotten another job. Well, she contacted me about 10 months later and said that she had gotten promoted and wanted to hire me to backfill her. And she said it was a tech writing job, but it was totally different from the type of tech writing that we had been doing previously at the company. So I gave it a shot. I applied and I went to work with her.

Emily Hacker: And what it was was I was actually the tech writer for a threat intelligence team at an oil and gas company, but it was my first foray into security. And it was not something I even knew was a thing honestly before, I didn't realize cybersecurity was kind of a field that people could work in. And it was very exciting to me. And I remember the first year or so that I worked there, everything was new and exciting, like "Oh my God, threat actors, what are those? This is so exciting. Nation States, Oh my God, this is a thing that's real." And it just all seemed like this movie script, except it was real. And after a bit of doing the editing and stuff for their reports, the reports that I was editing were very interesting to me. And I would ask questions because I needed to, to understand the report in order to edit it.

Emily Hacker: But also just because I was legitimately interested, like "How did you do this analysis? What is this?" And I quickly decided I liked their job better than mine. So, I decided I was going to learn from my coworkers. And I am extremely lucky that the team of threat intelligence analysts that I was working with are some of the best people I've met in my life at that job and were super open to helping me learn. If I would say like "Hey, what are you working on? Can I kind of sit with you and learn from you?" Everyone was always just like "Yeah, let's do it, let me show you what I'm doing, blah, blah, blah." So I learned from them, and eventually, there was a time where we were a little short-staffed, as is common in security. And we were in charge of checking the phishing email inbox.

Emily Hacker: So when users at the oil and gas client that I was working for would submit potentially suspicious emails, they would all go to an inbox that we had to analyze to determine if they were malicious or not. And it was a time-consuming job, and we just didn't have enough people on the team to do it and the rest of our work. So I kind of volunteered to help out. And that was how I got to learn how to do actual analysis. And I had job duties related to analysis. So I learned pretty much completely on the job from my coworkers. And then from there, I did that for about a year, maybe a little bit more after that. And I decided I wanted to move to Seattle; I was living in Texas at the time.

Emily Hacker: And I was very interested in living up here in the Pacific Northwest. So I left that job and got a job as a security researcher at a security vendor here in Seattle. So it gave me that other side of security that really allowed me to see the full picture of both having worked at a SOC, having worked at a vendor. And then I did that for just over a year. And this position at Microsoft opened up and I actually applied,

Emily Hacker: I don't want to say as a joke, but I didn't think I was going to get the job.

Nic Fillingham: As a stretch.

Emily Hacker: Yes. It would be like if I applied to be president of the United States or something. It's one of those where I'm like, "Oh, wouldn't that be great to submit the application," and then never think about it again. And then I was shocked to say the least when I got called for an interview and even more shocked when I got offered the job. So that was back in March. So I've only been here for a few months and I am loving it obviously so far. And what is really exciting to me is how in this job, I get both the focus of having endpoint telemetry like I did at my first job and phishing email telemetry. And then I also have a wider berth of just a lot of data and open source intelligence like I did at my second job. And now I have them both here, as well as getting to work with some of obviously the smartest people in the industry. So it was very exciting and I still am a bit amazed that I work here.

Nic Fillingham: When you were writing manuals for the car dealership and probably thinking about what was going to happen in the future, was there a little kernel, was there a little nugget of, it'd be awesome to be at a company like Microsoft doing cool nation state security, investigatory stuff?

Emily Hacker: Absolutely not. I didn't even know that this was a job opportunity. The fact that this is a job that people do, and now that I do, still amazes me. When I had first graduated and gotten my first job out of college, there was just so much about the world that I didn't know, and there was so much about careers that I didn't know. I didn't even know this was an option. And I do remember distinctly, I wasn't a huge fan of that job, but I didn't know what else was out there. And everything's very overwhelming when you're 22 years old and you're like, "What is life like? Is this what I have to do forever?" So I'm just glad that I now know that this is an option.

Nic Fillingham: What is life? Guess what? You keep asking that question. I'm afraid it's one you keep going back to. In a good way though. Do you find yourself bringing your technical writing skills, your formal sort of literature training, into this current role?

Emily Hacker: Yes.

Nic Fillingham: Are you writing a lot of reports and does that help you?

Emily Hacker: Amazingly, so much so that I think this is something that people who work in technology don't always think about. I work in threat intelligence, and a large, extremely important facet of threat intelligence is communicating that intelligence to decision makers. If you know the intelligence but you're unable to communicate it, it's useless. So we write a lot of reports. I have a lot of those skills from my previous work. So writing a report is not difficult for me. It's something I literally used to do for a living, and knowing exactly how to phrase technical situations in a way that everybody, including non-technical people, can understand is something I'm very good at, because I have historically been a non-technical person. So it's something that is very useful to me.

Emily Hacker: The other people who work on my team are also very good at it. But my point is that a lot of them have tech backgrounds. They have degrees or jobs where they have worked in technology. And so they have that tech skillset, but they have to learn the writing and communication on the job. And I have the writing and communication, and I had to learn the tech skillset on the job. And now we all do the job and we're all very good at it, and we all have our things that we specialize in and we can help each other. But the point being, when it comes to working in security or technology and hiring for security or technology, there's a large swath, if you will, of skillsets that are needed, and nobody's going to have all of them for the most part. So find people that have some of them; they can be trained up in the other ones, even if the ones that they're being trained up in are the technology ones.

Nic Fillingham: Yeah. So in the same way that your colleagues were helping you fill in gaps in the early days, when you were somewhat new to the industry, have the tables now turned? Are you now helping your colleagues be better communicators, helping them pass this intelligence on in a way that people understand?

Emily Hacker: Yeah, I think so. So I definitely have edited a few of my colleagues' reports before they went on to the formal editing process, and just kind of taken the time to sit with them and be like, "This is what I'm changing and why." Either A, it's grammatically incorrect and let me explain what grammatically correct would be; or this is unclear and we can make it more clear by saying this; or this is too technical, only a handful of people reading this are going to know what this means and we need to simplify it to layman's terms. And I think people appreciate it. I hope. Either that or I'm like the red pen girl who just comes in and ruins everybody's reports and they're all terrified to see me coming. But I do think that they appreciate it.

Nic Fillingham: What do you like to do Emily?

Emily Hacker: Yeah, I do things.

Nic Fillingham: Good answer.

Emily Hacker: Okay. Believe it or not, I live in the Pacific Northwest, so I like hiking. I know. So does everybody in the entirety of the Pacific Northwest, but I actually really like hiking and that's why I moved here from Texas. So that's something that I greatly enjoy. I do things at home. Oh my God. I actually had made a list. This is sad. But at one time I made a list of things I do for fun, because when people ask this question, I always forget. I like writing. I did go to school to be a newspaper reporter. I still like writing. So it's my goal one day to get a novel published, but that may never come. And I play music. So I play several instruments. And I like running. Do I like running? I run whether or not I like it. It's questionable.

Nic Fillingham: Does anyone really like running?

Emily Hacker: I don't think so.

Natalia Godyla: I actually immediately want to ask what genre novel would you write?

Emily Hacker: I think I would write a mystery, detective novel, because I'm really into true crime, which, I know, so is everybody. But I like watching a lot of stuff about true crime, but then I'm also really... Am I admitting this? Probably. I'm also really into paranormal stuff and Bigfoot and ghosts and what are they doing? And whether or not I believe in them, it's usually no, but they're interesting stories. And I feel like there's this very interesting intersection of detective stories and paranormal that is the X-Files, but could also be a novel one day. So let's just wait and see.

Natalia Godyla: From your background, Emily, and your hobbies it seems you've got a lot of creativity either in writing or music. So what are your final thoughts on how creativity comes into play in the cybersecurity industry or in your day-to-day job?

Emily Hacker: That's a really good question. And I think it's super important, especially in intelligence, which is all I can speak to because it's really all I've worked in in security. But one of the key aspects of working in threat intelligence is seeing a bunch of different data points and being able to connect the dots. I might have a couple of data points here from open-source intelligence, I might see something weird on a machine, and I might have an email. And while that's not always something a machine can do, otherwise we'd all have been replaced by now, it does require this level of creativity and this level of being able to remember, or kind of be like, "I wonder if I could connect this email to this thing that's happening with this machine."

Emily Hacker: I was talking about detective novels earlier, and I think there's an aspect of that that comes into play here too that's also an aspect of creativity, where you have to put the pieces together. You have to be able to see something once, and then three days later, when you have a malicious email in front of you, be like, "Oh my God, this reminds me of this thing from three days ago." There's also this level of creativity that I feel like helps a lot of us. I was just talking about this with one of my coworkers yesterday, actually, about how one of the things that makes everyone on my team so successful, it's not by itself creativity, but I think it's an output of really creative people: this tenacity of, when I see something, I have to get to the bottom of it.

Emily Hacker: And I'm not just going to run one query and be like, "Oh, the computer told me it's X." I'm like, "But what is X? How do I get to the next part? What is it? How do I connect it to this Y over here? Do X and Y both connect over here to A maybe? Are they connected to this actor?" It's this level of just making a story out of the information that's presented to me that helps me, I feel like, be successful as an intelligence analyst. And I feel like there's a level of creativity to that that I honestly didn't think about until I'd been in the industry for a while.

Natalia Godyla: Yeah. I think you see a lot of unending curiosity with security folks as well. Like you said, as soon as you get one answer, it just opens up another question.

Emily Hacker: Exactly.

Nic Fillingham: So, Emily, you joined Microsoft in March of 2020, is that correct?

Emily Hacker: Yes.

Nic Fillingham: So you joined just as the mandatory work from home order was coming into place?

Emily Hacker: Yeah. I've never ever been into the office.

Nic Fillingham: Wow.

Emily Hacker: Well, okay. I went into the office on day one to pick up my laptop and then went home, but I started after the work from home. So I've never met, well, I never met a lot of the people I work with in person. People always talk about the good old days of being in the office. Apparently there's a fridge that has bubbly water in it. One day I'll maybe drink some bubbly water.

Nic Fillingham: It's a myth. It doesn't exist. We just tell that to people when they join the company and when they come in for the first time-

Emily Hacker: Then they start and then they just make you work from home where you can buy your own bubbly water.

Nic Fillingham: Yeah. Hey, where is this bubbly fridge? There's a fridge with bubbly water. No, it doesn't exist. You've been duped. So hang on. So I want to backtrack a bit because you talked about how you've got awesome colleagues and they've really helped you, so that's your experience completely through remote work.

Emily Hacker: Yeah, it is.

Nic Fillingham: So you've been able to join a new company, joined a new team, been supported and had sort of great experiences with colleagues through a hundred percent remote experience.

Emily Hacker: Yep.

Nic Fillingham: That's fascinating.

Emily Hacker: I think one of the things that's been helpful is that there's a lot of new people on my team. So my team grew significantly around the time that I started. So me and another guy started on the same day, and then four weeks later another woman started, and then over the summer we had two more people join. And so we were in this together. And so it helped us. We were all in the same boat. It wasn't like everybody else knew each other and I was the new person, like, "Hey guys, let me join your conversation." We were all new. And so that helped a lot. But even the existing people on the team have been really, I don't know what word I'm trying to go for here, but they've been really open, I guess, to this remote work situation.

Emily Hacker: The number of Teams calls, screen shares I've done where I'm just like, "Help. I don't understand what this means." And anybody I talk to is willing to sit on the other end of the Teams call and just walk me through what's happening. It has been honestly incredible. I'm really grateful for my team. I would like to go into the office one day, but I'd rather not be sick and I am glad that Microsoft is taking precautions. So considering the circumstances, things have definitely been going really well.

Nic Fillingham: That's awesome. Well, Emily Hacker, thank you so much for being on Security Unlocked. We will work out how to send you a case of bubbly water.

Emily Hacker: Thank you. Maybe then I won't go thirsty.

Natalia Godyla: Well, we had a great time unlocking insights into security from research to artificial intelligence. Keep an eye out for our next episode.

Nic Fillingham: And don't forget to tweet us at msftsecurity, or email us at securityunlocked@microsoft.com with topics you'd like to hear on a future episode. Until then, stay safe.

Natalia Godyla: Stay secure.