Security Unlocked 6.16.21
Ep 32 | 6.16.21

A Day in the Life of a Microsoft Principal Architect

Transcript

Nic Fillingham: Hello, and welcome to Security Unlocked, a new podcast from Microsoft where we unlock insights from the latest in news and research from across Microsoft's security, engineering, and operations teams. I'm Nic Fillingham.

Natalia Godyla: And I'm Natalia Godyla. In each episode, we'll discuss the latest stories from Microsoft Security, deep dive into the newest threat intel, research and data science.

Nic Fillingham: And profile some of the fascinating people working on artificial intelligence in Microsoft Security.

Natalia Godyla: And now let's unlock the pod.

Nic Fillingham: Hello Natalia. Hello listeners. Welcome to episode 32 of Security Unlocked. Natalia, how are you?

Natalia Godyla: I'm doing great, Nic. And, and welcome everyone to another episode. Who do we have on the show today?

Nic Fillingham: Today we have Hyrum Anderson, Dr. Hyrum Anderson, who, uh, is the Principal Architect of the Trustworthy Machine Learning group here at Microsoft. We have been trying to get Hyrum on the podcast for a long time, and eagle-eyed, eagle-eared... eagle, eagle-eared? That's a thing, I made it up. We're going to use it. Um, listeners will have actually heard Hyrum's name a bunch of times, as well as a lot of the work that Hyrum has been pioneering. Hyrum is really one of the leading voices, uh, here at Microsoft in this brand new space that is really just sort of being defined now around Adversarial Machine Learning and protecting AI systems. And so it's fantastic to get a chance to get Hyrum on the podcast and hear about Hyrum's journey into security, into Machine Learning, into AI, and then, uh, finding his way to Microsoft.

Natalia Godyla: Yeah. So Hyrum, as you said, is a leading voice in this area. And I think he said it really well when he framed the, the challenge here that an attacker has to be right once and a defender has to be right 100% of the time. And that perspective is what drives him to be proactive about researching Adversarial Machine Learning, knowing that the attacker community is aware that they can use Machine Learning and they'll leverage it when it becomes the right technique for them. So we as organizations and, and defenders listening to this podcast have to start thinking about it early. We just don't have the luxury to not be prepared.

Nic Fillingham: I love that a lot of the work that Hyrum does, uh, ends up getting publicized and made public through research, through GitHub. If you listened to last week's episode with Will Pearce, Will is actually on Hyrum's team. And a lot of, a lot of the work that... A lot of the, the sort of research and, and think tank work that Hyrum and folks do, is not just being sort of absorbed into Microsoft products and services, it's being put out there for the community, for the public, for researchers, for security professionals to really help push the industry forward. So a great conversation, I think you'll really enjoy it. I think with that, on with the pod.

Natalia Godyla: On with the pod. Hello, Hyrum Anderson, Principal Architect of the Azure Trustworthy ML group. Welcome to the show today.

Hyrum Anderson: Thank you, Natalia. Nice to be here.

Natalia Godyla: Well, we're definitely glad to have you, and it'd be great to start by understanding who you are and what your role is at Microsoft. What does your day-to-day look like?

Hyrum Anderson: Well, my role as Principal Architect really means that I code a little, and I talk externally a little, and I'm stuck in that awkward middle. Now that's what, that's what it really means. But it's a really fun role. I joined Microsoft to join a startup inside Microsoft to really address the question, how do we secure AI systems? You know, you might not think about AI systems as a special case, but there is a special case that should be considered in the context of larger security, and our little startup inside Microsoft is to address that. So that's why I joined Microsoft. And that's the title I got, and I'm happy with it.

Natalia Godyla: (laughs) And is this something that you've been working on for some time, understanding the impact of AI systems? Or is this a new endeavor you're taking on at Microsoft?

Hyrum Anderson: Well, I want to just note that this whole idea of Adversarial Machine Learning has been around a long time, way before me. I'm not a founding father in any sense of all the brilliant work that's come since the mid-2000s in exploiting weaknesses in AI systems. But, you know, five or six years ago, I became actively involved in this, especially as it relates to: an attacker who wants to evade your anti-malware model, if he knew it was an AI system, what could he do special about that to make his job easier? So that's where I came into the game. How do I think like an attacker to get around security controls that are implemented as AI systems?
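
For readers who want to picture the kind of attack Hyrum is describing, here is a minimal sketch of a black-box evasion loop in Python. Everything in it is a hypothetical stand-in: score plays the role of a deployed anti-malware model that the attacker can only query, and the single edit used (appending overlay bytes, which leaves a program's behavior unchanged) is just one of many functionality-preserving tricks an attacker might try.

```python
# A minimal, hypothetical sketch of black-box evasion against a malware
# classifier. `score` stands in for a deployed model the attacker can only
# query; appending overlay bytes is a functionality-preserving edit.
import random

def score(sample: bytes) -> float:
    """Toy stand-in for a model's maliciousness score in [0, 1]."""
    if not sample:
        return 0.0
    return sample.count(0x90) / len(sample)  # density of a "suspicious" byte

def evade(malware: bytes, threshold: float = 0.01, max_queries: int = 500) -> bytes:
    """Hill-climb: keep only the random appends that lower the score."""
    candidate, best = malware, score(malware)
    for _ in range(max_queries):
        if best < threshold:
            break  # the classifier would now call this sample benign
        trial = candidate + bytes(random.randrange(256) for _ in range(256))
        if (s := score(trial)) < best:  # keep the edit only if it helps
            candidate, best = trial, s
    return candidate

seed = bytes([0x90]) * 64  # a maximally "suspicious" toy sample
print(score(seed), score(evade(seed)))
```

The point is the shape of the loop: query, mutate, keep what lowers the score. No knowledge of the model's internals is required.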

Hyrum Anderson: And from that time, I think that's, that's where some of my work came to be known. I spoke at Black Hat and DEF CON and things, and, and then, um, that work kind of built and finally, uh, culminated in a new way of thinking at Microsoft: how do we do this here at Microsoft? And what, what would it look like for both us as Microsoft, you know, first party securing our own, as well as what could it look like for our customers, so that everybody who deploys Machine Learning can do it safely and securely.

Nic Fillingham: Hyrum, we've spoken with some of your, your colleagues on the podcast before. Could you sort of expand a little bit upon the, I think you've talked about the mission of Trustworthy Machine Learning at Microsoft, but some of the different roles that are involved? You know, how do you work with, with Ram, if you do? How do you work with folks like, uh, Sharon Shaw? How do you work with Andrew Marshall, uh, the other folks at Microsoft thinking about Adversarial Machine Learning and protecting AI systems?

Hyrum Anderson: Our vision is that you should be able to build your Machine Learning model anywhere, and we can help you to manage the risk, any risk, associated with that. That's the vision. And there's a lot of risk associated with Machine Learning, starting from simple things like, how do I know that my translation service is accurate and works for every language it supports? You know, those, those are risks. There's also risk about ethics and fairness. Does face detection work better for some and not for others?

Hyrum Anderson: And this final piece of risk is security, and that's where we're focused. So this final piece of risk is, if there's somebody trying to deliberately cause my system or company or business harm, am I able to manage that risk? That's where the Azure Trustworthy Machine Learning team has come into play here, managing that third piece and working across Microsoft to manage the other pieces. Ram has been an internal champion for this effort since several years before I joined. We've had a professional relationship for several years, and I, I've known him, and he was instrumental in, in, uh, telling me about the cool efforts he wanted to get started here.

Hyrum Anderson: So he has led this effort, and I joined to help him co-lead this effort, uh, about a year and a half ago. So Andrew [Paverd 00:07:18], for example, we work with, uh, to try to stay abreast of relevant attacks and defenses in MSR. Andrew does a really good job of straddling the line between MSR and applied security, and he's a great resource for us. Our team actually has these, these two interesting parts. One is, how do we go around Microsoft to assess the security of our existing systems? So we have a red team, a, a red team that kind of goes around and does that.

Hyrum Anderson: And the second part is how do we address, you know, how do we take those lessons learned and, and, um, implement defensive tooling, both at Microsoft and for our partners? That's the second piece. And as part of the, the learnings that we have from our red team, we also work with, uh, the great folks like Andrew Marshall on the Aether committee to help us reach all the corners of Microsoft with defensive guidance. Andrew and team conduct risk assessments of AI systems. And we, we try to make a one-Microsoft effort in, uh, making sure that we have a common voice in how we address risk mitigation.

Nic Fillingham: Thank you for that explanation. It was fantastic. Matter of fact, uh, we just recently interviewed, uh, Will Pearce, the, uh, AI red team lead, just days ago.

Hyrum Anderson: Will is a treasure.

Natalia Godyla: (laughs)

Hyrum Anderson: Will is a treasure, and I, I... if, if you haven't listened to Will's podcast... I have not, but I, I want to listen to it. He is a really interesting individual.

Nic Fillingham: Yeah. And we talked quite a bit about Counterfit, which is the, the tool that he sort of built for himself and that then spun up into a, a GitHub project that's been released into the wild. And that was a fascinating conversation. I would love for you to walk us through your journey, as far back as you want to go, into security, into Machine Learning and, and sort of eventually to Microsoft. When did this start? Were you into, you know, into Legos? Were you into pulling apart radios? Did you build your first computer when you were three? Like what, how did this passion and this career start for you?

Hyrum Anderson: Oh, wow. That's, that's a great question. I, I want to just first, be- before I tell stories, I want to say that I am a relative newcomer to security. And the more I learn from real security people, the more I realize what I don't know about security. So I, I would consider myself an engineer, a researcher who has applied his craft to security. And I'm really appreciative of, of members of my team who are teaching me all the time about, uh, new ways. That said, (laughs) that said, I just have a story, a great memory I want to share with you, of when I was in middle school, early high school, maybe.

Hyrum Anderson: I come from a big family and everybody's a nerd. Like, I, I had brothers who were coding on the Commodore 64. They used to get, like, these magazines. And if you were too cheap to buy a game, you could actually, you could actually, like, copy the code from the magazine.

Nic Fillingham: Yeah. And photocopy the pages and key it in.

Hyrum Anderson: Yeah. Do you remember that?

Nic Fillingham: I do. Yeah.

Hyrum Anderson: So this, this is how I got my start with computers. I was actually just watching my much more patient older brothers do this, and they also coded Pascal and BASIC at the time. And so I, I got involved. So the programming started early for me. But the, a really fun security angle is, um, my, my awesome parents, with their big family, to help us to focus on the right things at the right time, they had, they had a BIOS password, right?

Nic Fillingham: Oh, wow.

Hyrum Anderson: So the BIOS password did, did not allow... And this was, like, Windows 3.1 or something. It-

Nic Fillingham: Yeah.

Hyrum Anderson: It didn't allow us to, to log in without the password. So we crafted a way to get around this. It included everything from... Well, they apparently didn't have regard for either physical or cybersecurity controls, and we exploited this weakness.

Nic Fillingham: This is Windows 3.1?

Hyrum Anderson: (laughs) Yeah. No-

Nic Fillingham: Okay keep going.

Hyrum Anderson: ... It was much simpler. One was, um, we taped a mirror to the ceiling.

Nic Fillingham: Nice.

Hyrum Anderson: And then we would tell my dad that it was time we needed to do homework on the family computer, and we would try to watch in the mirror what the BIOS password was. That didn't work so well, 'cause we're not good at, like, reversing the mirror image. We also tried to put sticky glue on the keyboard so we could figure out, like, what the most common keys were and do kind of cryptanalysis, cryptanalysis for a middle schooler, right? What were the most common keys? Can we figure out what words were involved in the password?

Hyrum Anderson: Finally, my brothers and I, we found a BIOS book, and we realized that the keystrokes were logged even after boot, and we inserted a little utility into the autoexec.bat file. If, if this is bringing you back in history, walk with me, enjoy this time.

Nic Fillingham: Please, please keep going. Um, I'm, I'm having visceral memories here of my Osborne 3866. Keep going.

Hyrum Anderson: We, we made this little tool that would read the last characters typed in the BIOS buffer and dump it to disk. That was our, that was our, our finale. So anyway, this, this sort of, like, rudimentary hacking process was my first introduction to computer security. I went on to be an engineer in signal processing and Machine Learning; I got my PhD at the University of Washington and, and did a bachelor's and master's degree at BYU. I actually did not do anything in computer security, but I did work... I was a researcher at the National Labs in security, kind of with the big guys.

Hyrum Anderson: You know, situational awareness for the defense industry, things like that. That kind of helped me appreciate what I think so many people in security just get. And it's this sense of mission and purpose. I don't know that there's a better replacement for getting up to work every day than a sense of mission and purpose. And it's something I have sought at every career step, right? Like, if, if that's missing, I'm not really having a good time. Uh, when I eventually left the National Labs, I started on a data science team at this company called Mandiant, who had just released a, a big report.

Hyrum Anderson: And they were... Honestly, my, my job... Jamie Butler, if you're listening, I remember Jamie saying, saying, um, "Like, we don't really know what to do with you. We just think data science could be cool here. And so we're gonna, yeah, we're trying to build a team and we're just going to kind of figure it out as we go. So there's no purpose." But that was really fortunate for me, because, you know, this was in the days when, uh, data science and security, they're still kind of oil and water, but back then it was very much a new kind of endeavor, and it gave me some early exposure to lots of failed attempts and some, some early wins in that.

Hyrum Anderson: So from then, I've, I've been a data scientist for security. Then, you know, Mandiant became FireEye. And then I went to Endgame and, uh, worked with an excellent team at Endgame. I eventually was the chief scientist at Endgame, which was acquired by Elastic. Elastic is a, a fantastic company. Then this opportunity at Microsoft: Ram said, "Hyrum, come to Microsoft. There's a startup here, securing Machine Learning." And here I am. That's my history.

Natalia Godyla: And what are you working on now at Microsoft?

Hyrum Anderson: Well, we do a, a number of things. So the, the team I lead includes the red team and the defensive side, and we are really busy on both fronts.

Natalia Godyla: (laughs)

Hyrum Anderson: So the red team work that happens now is much more sophisticated than when I started, and I was the red team. You know, when, when I started at Microsoft, we did a red team engagement, parts of which have been publicly disclosed. That was really Hyrum, the Machine Learning person, going for a ride with the Azure red team, and saying, like, "Hey, if you can find something that looks like this, it's probably a Machine Learning model. Let's go find it." And these really, really smart people, Kathy and Susie, were able to find those things. And then I could tinker with, um, that, that model, break it essentially. And they could complete the, the op.

Hyrum Anderson: So it was very much... I was a, kind of a one-trick pony in what I consider a really high-quality Azure red teaming experience, and we were able to effect some big change. Now our red team is, I think, much more robust, uh, with Will Pearce, who you've interviewed. He's actually an ops person who gets ML. He gets both sides of the coin, and he'll go in now and do the whole engagement, like, himself, right? So that keeps us really busy on, on a day-to-day basis. We partner with both first- and third-party teams in assessing if your Machine Learning could be vulnerable to some kind of violation that would cause your business pain.

Hyrum Anderson: And there are lots of them. And nobody knows better than the team itself what that worst night-, nightmare scenario would be. And we try to work with them to say, "Okay, that's the nightmare. Let's try to make it happen." And so we, we try to... take on that, uh, attacker persona, and then we, we work with them to tell them how we did it, with recommendations to plug it.

Nic Fillingham: Hyrum, it feels like we're better at poking AI systems and finding holes and finding flaws than perhaps we are at protecting them. Is that sort of where we're at in this sort of new journey in understanding how to go and secure AI? Are we sort of at the stage where we're working out how to break in, we're working out how to go and poke holes, but we, we maybe haven't quite got the sort of ratified tools or processes in place to, to, to strengthen them? Or am I just missing the other side of the coin?

Hyrum Anderson: You're exactly right. But I guess I would also ask, like, isn't this always the case, Machine Learning or not? It's kind of always easier to be an attacker than a defender because of the asymmetry involved. An attacker has to be right once; a defender has to be right 100% of the time. Those kinds of things. The added wrinkle for Machine Learning, I think, is that whereas in, like, an information security system you can patch a vulnerability, in an AI system, what it means to patch is a really gnarly issue. There are ways proposed to do it in academia and research. They're really cool, and some of them work well in, in some cases, but there are issues.
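
As an illustration of the academic "patch" proposals Hyrum alludes to, the sketch below applies adversarial retraining to a toy logistic-regression model: craft FGSM-style perturbed inputs, pair them with their correct labels, and refit. The model, data, and step sizes are all invented for this example; real systems run into exactly the coverage and regression issues he raises.

```python
# A self-contained sketch of adversarial retraining on a toy logistic
# regression; illustrative only, not any production defense.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy ground truth

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit(X, y, epochs=500, lr=0.5):
    """Plain gradient descent on the logistic loss."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

w = fit(X, y)

# FGSM-style attack: for logistic regression, the input-gradient of the
# loss is (p - y) * w, so step each input along its sign to raise the loss.
eps = 0.5
X_adv = X + eps * np.sign(np.outer(sigmoid(X @ w) - y, w))

# "Patch" the model by retraining on clean plus adversarial examples.
w_patched = fit(np.vstack([X, X_adv]), np.concatenate([y, y]))

acc = lambda w, X, y: ((sigmoid(X @ w) > 0.5) == y).mean()
print(acc(w, X_adv, y), acc(w_patched, X_adv, y))  # robustness before vs. after
```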

Natalia Godyla: When do you expect attackers will start regularly using this technique? When should organizations be prepared to actively be red-teaming and build a program around it? And on the other end, when will we have the resources to build fully fledged programs and understand Adversarial Machine Learning?

Hyrum Anderson: Well, first I want to make sure that we are talking about the, the difference between a risk and a threat. Okay? So the risk is here, and it's everywhere, right? And it can be exploited, and that's, that's our job. And, and the red team side of my team, that's what we do, right? The threat exists in niche areas. And those niche areas often don't actually care that it's Machine Learning they're attacking, right? There's nothing special. So, example: content moderation. It uses Machine Learning to determine if the content you're posting on LinkedIn or, I'm making this up, wherever, whatever platform, is appropriate to, to be seen by others.

Hyrum Anderson: And nefarious people, for whatever motivation, they, they want to get content up there, and they find ways to obfuscate it. Right? So that, that is an adversary attacking a Machine Learning model; the adversary in that case probably doesn't even know it. But the adversary is finding blind spots or design oversights in that system. The same exists in fraud, the same exists in security. So there are adversaries, whether they know it or not, who are attacking Machine Learning systems. What they aren't doing today is using these sophisticated, algorithmic, kind of fuzzing-like procedures to attack.

Hyrum Anderson: That's what we have not seen widely used. We've seen that a lot in sort of research laboratories. And probably the reason we haven't seen it in the wild yet is, like, ease. Like, there's just easier ways, right? If I can just guess with my content moderation upload, and I can be right, like, why in the world do I need to have a fancy algorithm to, to do it? So as security improves for systems in general, to plug some of these guess-and-check methods, which in my opinion will never go away, there will be more economic incentive for adversaries to have a kind of a sure-fire algorithmic way to do this.

Hyrum Anderson: I do not know if that's going to happen in the next year or the next five years, but economically speaking, if we're doing our job as defenders, that is something in the tool bag that exists, that is open source, and that they will reach for when it becomes the lowest-hanging fruit.

Nic Fillingham: This feels like a unique point in time for cybersecurity where, and, and, and maybe I'm being too optimistic here, but where we, we do have an opportunity, we, the industry, have an opportunity to sort of get ahead of something before it, before it gets ahead of us. Would you share that sort of optimistic view, or do you, do you think we're sort of neck and neck?

Hyrum Anderson: Yeah, "by ahead"... I mean, we're thinking about this, and I don't think that adversaries are not thinking about it. I just don't think they have to, to pull out this bag of tricks yet. Right? So are we ahead? We have an opportunity to be ahead. I guess the concern I have is, like, if, if you feel like you're ahead, you're guessing. You're guessing at a defense for an attack that doesn't exist. That means an attacker's gonna choose a different kind of attack. So I would not say that we're ahead. I, I think we have an opportunity to be proactive, especially at these higher-level questions about how to manage risk. I think we are too early for things like, um, any detection tech in this kind of thing right now, right?

Hyrum Anderson: Like, tho-, those things are maybe a bit premature, because kind of by construction, you can't be ahead of a threat in sort of the detection and remediation space. Because they haven't punched you yet, you don't know how to, you don't know to block that one. So I agree with you, Nic, that we have an opportunity to be deliberate in how we frame this problem. And that is an excellent advantage. And when's the last time that's happened?

Nic Fillingham: It certainly feels sort of unique, but I'm with you. You can't block the punch that you haven't experienced yet, and so that's probably a great analogy. I'm thinking back to the episode we did with, uh, Christian Seifert and Josh Neil on CyberBattleSim. You talked about how sometimes, with attacks on Machine Learning systems, I think content moderation was your example, the attacker, the adversary, doesn't even know that they're attacking a, a Machine Learning model. So that's sort of a really interesting perspective. But to sort of bridge the gap there with, with the, uh, CyberBattleSim conversation, how far away do you think we are from having automated agents, automated sort of AI constructs, which I know is a sort of fantastical concept.

Nic Fillingham: But, like, how far away do you think we are from actually having Machine Learning on Machine Learning going at it, to some degree of scale and sophistication? Do you think we're... Are you thinking, like, it's a year, 5, 10, 20? What, what does that timeline look like?

Hyrum Anderson: Now, if you mean Machine Learning versus Machine Learning in a security context for like a breach? I think that's-

Nic Fillingham: Absolutely. Yeah.

Hyrum Anderson: Yeah. Believe it or not, like, that is here, in very narrow, predefined things. So-

Nic Fillingham: Okay.

Hyrum Anderson: An example, my... I'll bring up Will Pearce. He published some research at his previous company about using Machine Learning to detect what kind of sandbox you're in, so you know how to act as a piece of malware; and that sandbox might have Machine Learning employed also. There's this, um, combative element between them. There's been other work published that has attempted to do things like simple reinforcement learning to choose what kinds of, sort of, pen testing actions to get into a network, which I think the authors would, would say is, is not yet mature.

Hyrum Anderson: I myself have done research in using machines against machines, trying, like, a reinforcement learning approach to develop malware strains that will, will evade a Machine Learning malware detector. So it's using Machine Learning against Machine Learning. In all these cases, they're narrow, and there are easier ways, in my opinion, to date, to do that. And if, uh, you know, our listeners are trying to think about, kind of, I dunno... If you think about, like, the Avengers AI, (laughs) Jarvis, like, taking on a big, massive-scale attack and, and another Jarvis defending it, we are very, very, very far away from that.
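
Hyrum's published research here framed evasion as a reinforcement learning problem (the open-source gym-malware work). The sketch below compresses that idea into a toy: an agent learns, by trial and reward, which functionality-preserving mutations most reduce a black-box detector's score. The detector, the action names, and their per-action effects are invented stand-ins, and the learner is reduced to a contextless bandit for brevity; the real setting uses actual PE-file transformations against a trained model, which is what makes the problem hard.

```python
# A heavily simplified, hypothetical sketch of RL-driven malware evasion
# in the spirit of the gym-malware line of research.
import random

# Invented mutations and how much each happens to fool our toy detector.
EFFECT = {"append_overlay": 0.02, "add_section": 0.05,
          "pack": 0.15, "rename_imports": 0.08}
ACTIONS = list(EFFECT)

def detector_score(mutations):
    """Toy black-box detector: maliciousness score in [0, 1]."""
    return max(0.0, 0.9 - sum(EFFECT[m] for m in mutations))

def run_episode(q, epsilon=0.2, steps=8, lr=0.1):
    """One evasion attempt; updates action-value estimates in place."""
    mutations = []
    for _ in range(steps):
        action = (random.choice(ACTIONS) if random.random() < epsilon
                  else max(ACTIONS, key=q.get))      # epsilon-greedy choice
        before = detector_score(mutations)
        mutations.append(action)
        reward = before - detector_score(mutations)  # reward = drop in score
        q[action] += lr * (reward - q[action])       # bandit-style update
        if detector_score(mutations) < 0.5:
            return True                              # the sample now evades
    return False

q = {a: 0.0 for a in ACTIONS}
wins = sum(run_episode(q) for _ in range(200))
print(wins, max(q, key=q.get))  # the agent should learn to prefer "pack"
```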

Hyrum Anderson: I think Machine Learning and AI is best employed today on narrow tasks. In sort of this more general artificial intelligence, we're, we're not very mature at all in that larger level of reasoning. So I would not raise any alarm about AI systems swarming our networks en masse and, and being effective. I think we're, you know, we're five-plus years away from, from that.

Nic Fillingham: So we're not going to have a, uh, "Jarvis, breach SHIELD" sort of moment any time soon, where that's the only instruction required, and then the, the next thing you know, you, you've got root access to, to the SHIELD network. That's a, that's a long way away.

Hyrum Anderson: That's right. And really the thing that, that you should be more concerned about is how Machine Learning could be used by an adversary to make that human much more efficient. And that's actually not a new thing either. I mean, adversaries are smart, they're economically motivated, and they, they use analysis to be smart about how they attack. Think about like a phishing campaign and who they target. They want to use data to inform them. And I, I wouldn't doubt that there are some Machine Learning models that would help them to predict who the ripest target might be, for example. Or in, in, in a breach scenario.

Hyrum Anderson: For a very narrow scope, let me use an agent to, like, you know, find out what kind of, you know, anti-malware is installed, and decide what the, the best payload would be to evade that. Computers are really good at that kind of fast, quick-reflex math, and, uh, Machine Learning would excel at that. I'd be far more, you know, concerned about real adversaries, like human adversaries equipped with Machine Learning that scales their intentions, than I would be about, like, an autonomous AI acting all by itself, doing all the hacking on its own.

Natalia Godyla: And speaking of the future, what's next? What's your next big mission? The next problem you'd like to solve? Is it continuing to educate the ecosystem on Adversarial Machine Learning? Is it to get us to the point where we are establishing preventative measures? Or is it something else entirely?

Hyrum Anderson: Really, it's chasing this goal that, while elusive, I don't, do not believe is impossible. And that is: build your Machine Learning model wherever, and we want to help you to be able to manage that risk. And do it in a way that's natural, in kind of the same kinds of motions that, if you're a security professional, you're used to, like assessing, or, or, like, doing compliance things or doing policy things. If we can do that, as Nic brought up earlier, that can be the beginning... help, help people to begin security programs with AI not as its own thing, but as, as part of an overall security strategy for the business. You know?

Hyrum Anderson: There, there are these special things you have to consider about AI, but you shouldn't make it its own security department, right? Security is a, a business kind of consideration, and we want to make that easy for you now. Today it's hard. Today AI is a special snowflake. We want to make it part of a security network of decisions.

Nic Fillingham: I noticed you are the co-founder of the Conference on Applied Machine Learning in Information Security, CAMLIS. Can you tell us a little bit about CAMLIS, uh, if, if you would like to, and then is there anything else you'd sort of like to point listeners to? Do you have a blog? Do you have a Twitter? Where can we go to play along at home with, with your work?

Hyrum Anderson: So CAMLIS, the Conference on Applied Machine Learning in Information Security, was founded by Keegan Hines and myself several years ago, because we didn't find the right venue that was a mix. Really, it's for Machine Learning people doing security things. And those would surface at major conferences, but there was never a place you could go for, like, a sink-your-teeth-in kind of experience. And I have... I am just so thrilled with the community that has developed around CAMLIS, and the quality of the people there. And so for anybody who would be interested in how Machine Learning is used in security, or maybe you're in Machine Learning and you want to learn a little bit more about security, this is a great place. It's still a boutique conference, in the sense that there's not 3,000 people there, where you can network.

Hyrum Anderson: It's a great location, and that will be happening later this fall. I wanna shout out to Edward Raff, who will be chairing the conference this year, and you can find out more information in the coming months about that. The second thing I wanna give a shout-out to, and this is happening much sooner: for the last several years, a partner, Zoltan Balazs, and I have been sponsoring a really clever competition that you're all going to want to participate in. So if you like packing things, and if you like malware, and you like Machine Learning, this is for you.

Hyrum Anderson: This is the Machine Learning Security Evasion Competition. You get prizes for attacking Machine Learning models to create evasive malware variants. This is as real as it gets. So it's real malware; the malware is actually bytes on disk. So you're t-, you, you take all the bits, you don't get to change code. You take all the bits and you get to disguise your malware, or the malware we provide, rather, to evade a suite of defensive solutions. And this attracts a really, really, really gnarly, smart crowd of people who are good with both, both malware and Machine Learning, and do it in really clever ways. Even if you're not a malware reverse engineering ninja, there'll be ways for you to participate and still evade Machine Learning models.

Hyrum Anderson: And, and I will, I will leave that there. If you'd like to know more about any of this, please do reach out to me on Twitter; I will respond to Twitter eventually. Um, Dr. Hyrum is my handle. Or on LinkedIn, you can find me also. If you've heard the announcement for the Machine Learning Security Evasion Competition, you can head over to MLsec.IO.

Nic Fillingham: Hyrum, what do you do for fun when you're not out there on the frontier of Adversarial Machine Learning?

Hyrum Anderson: Nic, uh, you don't know this about me, but I am the most interesting man alive. And-

Nic Fillingham: Oh, no. I knew that. Rom told us this.

Hyrum Anderson: (laughs) Hey, so first I have five kids. So caveat that, that free time expression with knowing that I'm primarily a bus driver and, uh, an entertainer. But, um, so I, I live in Boise, Idaho. I grew up on a hobby farm, and I, I'm lucky enough to be able to work, uh, in a distributed manner. But my folks still have this farm that has like a milk cow. So my COVID hobby, I make artisanal cheese.

Nic Fillingham: [inaudible 00:32:15].

Hyrum Anderson: Yes. I do.

Nic Fillingham: Keep talking.

Hyrum Anderson: Handcrafted.

Natalia Godyla: (laughs)

Hyrum Anderson: Handcrafted [inaudible 00:32:20], and some Alpine, sort of Swiss-style cheeses. I have a little cheese cave. Also, our viewers can't see this, but in the background you'll, you'll notice, like, a little accordion. And, uh, I was a missionary for my church in Russia. And, you know, we didn't... I didn't have a lot of money, but I could spend $8 and buy that sweet puppy.

Natalia Godyla: (laughs)

Hyrum Anderson: As it turns out, when you have one accordion, they're like, they're like amoeba on a Petri dish. They just multiply. I now have three accordions. And the total amount of money I've spent on accordions is $8.

Nic Fillingham: Hang on. You woke up one morning and your, your accordion had divided and split into two accordions?

Hyrum Anderson: Yes, it's amazing. It's more like the neighbors are like, "Oh, weird nerd with the accordion; I have something in my garage I'm trying to get rid of." But it, it brings such a thrill to me to have three accordions. Kids love accordions. And I am one of the most popular people with, like, elementary school kids. Like, who doesn't like happy birthday played on the accordion to them? I [inaudible 00:33:23] anymore.

Nic Fillingham: I do, I do love a sort of an accordion-powered shindig, you know. A polka or... That's beautiful.

Natalia Godyla: Awesome. Thank you for sharing that. And thank you for joining us on the show today, Hyrum.

Hyrum Anderson: Thank you, Natalia. Thank you, Nic. Great to be with you.

Natalia Godyla: Well, we had a great time unlocking insights into security from research to artificial intelligence. Keep an eye out for our next episode.

Nic Fillingham: And don't forget to tweet us @MsftSecurity, or email us at SecurityUnlocked@Microsoft.com with topics you'd like to hear on a future episode. Until then, stay safe.

Natalia Godyla: Stay secure.