
BlueHat 2024 Day 2 Keynote: Amanda Silver, CVP Microsoft Developer Division
Nic Fillingham: Since 2005 "BlueHat" has been where the security research community and Microsoft come together as peers --
Wendy Zenone: To debate and discuss, share and challenge, celebrate and learn.
Nic Fillingham: On the "BlueHat Podcast" join me, Nic Fillingham --
Wendy Zenone: And me, Wendy Zenone, for conversations with researchers, responders, and industry leaders, both inside and outside of Microsoft --
Nic Fillingham: Working to secure the planet's technology and create a safer world for all.
Wendy Zenone: And now on with the "BlueHat Podcast". [ Music ] Welcome to the "BlueHat Podcast". We're excited for this very special episode. We just finished three days of "BlueHat" events, two of them external, one internal. I missed it this year, but I am super excited that some of these were recorded and we're going to bring some of these to the podcast. And Nic, do you want to talk to us about the day two keynote and what we're going to bring to the podcast?
Nic Fillingham: Absolutely. Wendy, just again, for the record, you were missed, would have been wonderful to have you there. You would have --
Wendy Zenone: Aw, thanks. [Laughs]
Nic Fillingham: -- loved all the Taylor Swift inspired stickers and --
Wendy Zenone: Yes.
Nic Fillingham: -- pins, and sweatshirts, and hats, and everything.
Wendy Zenone: Looked amazing.
Nic Fillingham: Because I know you've got that Taylor Swift tattoo, I know your [laughter] kids are all named after Taylor Swift.
Wendy Zenone: Yes.
Nic Fillingham: Check out Twitter or wherever you play online in the socials if you want to see some photos from the conference. But yes, there were some fun T-Swift inspired shenanigans. But on today's podcast, yes, it is the "BlueHat 2024" day two keynote. We're bringing you the audio from that keynote. And that was presented by Amanda Silver, who is the CVP and Head of Product for the Microsoft Developer Division. Amanda is also the General Manager of First-Party Engineering Systems, which was a great fit for "BlueHat" because Amanda's keynote was on really sharing how Microsoft protects Microsoft's engineering systems, and really tried to wrap together all the ideas around "BlueHat" and bringing together external perspectives and external security researchers and making sure that there's this right balance of transparency and also, you know, obviously checks and balances and evolution in place to keep everything safe and secure. I probably did a terrible job of setting that up, but I'll tell you, it was a great keynote. I think you'll really enjoy it. Let's roll the audio, and please enjoy this special episode of the "BlueHat Podcast". [ Applause ]
Amanda Silver: Thank you so much for the introduction. I'm really thrilled to be here to talk about this topic. I actually have never -- I've talked about a lot of things in the developer space before, developer tooling, you know, developer experience; never actually talked about this topic, so I'm excited to talk about it today. So you know, as Tom said, it's really no secret as to why we're all here today. It's not a question as to if there's going to be another attack, but actually, you know, when it's going to hit, where it's going to hit, how severe it's going to be; and that really all depends on how quickly we can detect and remediate it and address it. And so more and more we're seeing really software developers themselves, their workstations, their engineering systems, actually be the target for threat actors, you know, from SolarWinds to Codecov, to Log4j, to XZ, developer watering holes is kind of how I think about it, and the infrastructure that they work on is really increasingly becoming the target that then, you know, becomes the critical dimension that facilitates the lateral movement. So earlier this month, CSO Online cited a new report from Sonatype that really sounds the alarm about the rate at which malware is infiltrating open-source software. That company has tracked over 500,000 new malicious packages since November 2023, across Java, JavaScript, Python, and .NET packages, all of those package registries, and the number of malicious packages on those registries, according to their report, has grown by 156% year over year, and the previous year's growth was actually even higher. And that obviously poses a very significant risk for anyone who uses open-source, and it really is an imperative for everyone to kind of, you know, manage their dependencies on open-source really effectively. So let me just introduce myself really quickly from my perspective. I basically have two jobs at Microsoft.
The first is that I run the product teams that build all of our developer tools, Visual Studio, Visual Studio Code, you know, TypeScript, .NET, our contributions to Python, C++, you know, the rest of the compilers, you know, all of that kind of stuff, our Azure DevOps solutions. We partner very closely with the GitHub team to build, you know, Advanced Security for GitHub and Azure DevOps, to build GitHub Copilot. And we also build in my team the PaaS layers of Azure, so basically the application platform. That's half of my job. [Laughs] The other half of my job is that I'm also the GM for our First-Party Engineering Systems. And so in a way you could think about this as we take the retail products that we ship to our external customers and we manage them, we administer them, we host them, we extend them, and we incubate new technologies for our first-party Microsoft employees, for our Microsoft digital estate and, you know, over 60,000 employees in the engineering profession at Microsoft. And our engineering systems are really the foundation on top of which all of our products are built. And because of that, they are the prime target -- they are a prime target for threat actors attempting to infiltrate our systems' and our customers' environments. Now, you may have gotten word -- and I know many of you are Microsoft employees, so I'm sure many of you have heard this before, but for those of you who are not Microsoft employees, I would assume that you probably heard in yesterday's conference, you know, about this new initiative that we're calling "SFI", or the "Secure Future Initiative" at Microsoft. How many of you guys have heard of this, everybody; maybe not everybody. Let me just explain it a little bit. You know, obviously as one of the, you know, defenders of kind of the most targeted estates in the world, Microsoft really -- you know, we obviously see firsthand the increasingly hostile security landscape.
And obviously we recognize the imperative to address it as much as we possibly can. SFI launched in November 2023, and our CEO, Satya Nadella, made security the top priority for everyone at the company. I'm not just talking about engineers, I'm not just talking about the executive staff, I'm talking about our lawyers, I'm talking about the people in HR, I'm talking about literally everyone at the company now has security as one of their core commitments. You know, and so, you know, in some senses everybody's like, "Okay, what can I do for security?" Well, you can at least use multifactor authentication. [Laughs] So this is really an education for everyone across the entire company in terms of how they can contribute and do their part to, you know, support security. But you know, as I've been saying to my team for several months now, what this also means -- and this is true for our overall general business as well, is that there is no new feature, no new SKU, no new product, that is more important than ensuring that the infrastructure that our customers take a dependency on for their mission-critical solutions is absolutely rock solid. And so that's the message that Microsoft is getting from Satya, all the way down, right? And I think that, you know, especially in this landscape, that is really the only kind of posture that we can take. So this company-wide initiative was really designed to advance how we design, build, test, operate our products and services, so that we can ensure that we're delivering solutions that meet the highest possible standard for security. And we have the equivalent of 34,000 engineers working on it, making it, as far as we know, the single largest cybersecurity engineering project in the history of digital technology. And so we're empowering basically all of Microsoft to implement the changes that are necessary to deliver security first.
And we're leveraging all of the insights that we've got in the best practices so that we can, you know, deliver the world's most secure cloud; that's really what we want to be focusing on. So all of our work in Microsoft is now guided by these three principles. Secure by design; security comes first when designing any new product or service. This means that, for example, I'm engaging with my design team, you know, people whose primary job we think of as designing the user interface for our products, to actually help them think about how they can create better-designed user interfaces, but also obviously software engineers in terms of how they design the architectures for their solutions. Secure by default, that the security protections in our products are enabled and enforced by default. And that requires no extra effort on behalf of the customer, and it's not optional; it -- you know, at least internally it is not optional, and we can make it optional for our customers, but if we do make it optional for our customers, we actually want to, you know, warn them, help them understand the risk that they might be taking on. And secure operations; we have to make sure that our security controls and our monitoring are going to continuously improve so that we can meet our current and future threats. Now, every part of Microsoft is being brought together through this initiative to advance the cybersecurity protection across our company and our products. Now, the basics of the SFI framework are elevating security and SFI across the entire company; security above all else, you guys have seen that many, many times.
And we've also identified six critical security pillars and we organized the teams to be able to implement these; you know, protect identities and secrets, protect tenants and isolate production systems, protect networks, protect engineering systems, which we'll talk about in a lot more depth in a second, monitor and detect threats, and accelerate response and remediation. Now, in some senses this is almost the first chapter of SFI. And as we are kind of uncovering the layers of what we need to do, we're realizing that there are more and more ways that we need to kind of look about -- look at our entire estate to figure out everything that we need to do to remediate it. But another kind of core principle of the way that we're approaching this is to continuously improve our standards and embed them into the way that our engineers implement the products through paved paths. And this can happen at the architectural layer in terms of the way that they build, you know, say, a new Azure service or a new service for M365, but it also obviously happens in the way that they integrate into the engineering systems to go from, you know, their idea to code into production. And then we have to also make sure that we're ingraining all of this secure posture into the design -- you know, the principles, the secure by design, secure by default, and the operations in everything that we do. And that also means that, for example, we have to make sure that we're actually implementing, you know, telemetry and monitoring for basically everything that we implement so that we can actually see what's happening. Now, when I mentioned that this is an initiative across all of Microsoft, I really mean all of Microsoft. That's over 18,000 organizations in terms of the, you know, sub-organizations that we have inside of Microsoft from an engineering systems perspective; 167,000 repos, over 100,000, you know, devs, engineers, designers, product managers.
That was kind of the count before the Activision acquisition, so it's even more at this point. And just like, you know, many of your organizations outside of Microsoft, we have a lot of legacy code that still needs to be maintained; 20-year-old code, or more, 30-year-old code in some cases. And as a result of that, we have a breadth of a lot of different types of tools in our engineering systems, and also a growing number of programming languages and ecosystems that we need to support. So it is a lot of complexity. And the way that we can manage this, the only way that we can manage this, is through the practice of what I call "platform engineering"; what the industry is coming to call "platform engineering". So platform engineering, I think about it as DevOps plus, plus, plus. It's kind of like the combination of DevOps plus, you know, kind of cloud-native together, but it's basically built up. It's a practice just like DevOps was way back in the day, before it actually became products, that was built up from DevOps principles, that seeks to improve every development team's security, quality, compliance, costs, and time to business value, through, you know, improving the developer experience and enabling self-service systems that enable secure, governed, you know, solutions and infrastructure by default, right? So the idea here is, you know, for many people when they want to create a new product, they actually have to go create a DevOps pipeline, and they have to talk to the engineering system, and then they have to go talk to the Ops team, then they have to talk to the Security team, and they have to, you know, talk to basically every team across their company to actually get the infrastructure that they need. Obviously when you have those kinds of, you know, human interfaces that are being required, there's going to be variation in those human discussions, and it's just going to kind of represent a lot of friction for the engineering teams.
And so really what we're trying to do here is to make sure that we have these paved paths by default so that developers can be self-service. They can basically -- you know, they can be as agile as they can possibly be, but also from an engineering systems perspective and from a security monitoring perspective, we can actually understand what they're creating in the context of our environment and ensure that they're adhering to our best practices. In a sense, though, this is also a mindset shift. It actually requires that we think about our engineering systems through the lens of a product mindset. And let me talk about that for a second. But you know, the solution that we have here is the set of tools and the systems that we use to support it. So you know, as the GM of the internal team that builds these solutions, this is really where platform engineering helps us. So we've invested in making sure that we have the mindset shift and we treat our engineering systems with that product mindset. So our customers are the individual developers on the team who use these products every day as end users, right? But we also have stakeholders, really important stakeholders, our Operations team, our Security team, you know, the boardroom who cares about the agility of our engineering teams. You know, so we have to kind of constantly make sure that we're satisfying all of these customers. But we've built basically this central team so that we can codify all of the best practices into these automated processes that reduce the developer toil. And the developer toil can come either through setting up infrastructure, or it can come through security response and remediation, right? And so if we can actually make sure that we're building a better foundation for engineering systems by default, it actually reduces developer toil. That's kind of -- that's the key to what we're trying to do here.
So we could not handle the scale of driving standardization and consistency across Microsoft's estate without thinking about this practice of platform engineering. And so we -- in a sense, you could think about platform engineering as having three key motions. Now, I could talk about this all day, I'm obviously pretty passionate about this, but I just want to give you a little bit of context on these three motions as they come up in terms of how we think about how Microsoft has to protect our engineering systems. "Start right" really focuses on equipping the developers with these self-service tools, and that allows them to kick-start their projects really quickly and adhere to the company's best practices, which are defined through templates and policies that are then automated. And those "start right" templates include workflows and config as code that actually leads to the deployment pipelines that enable successful, efficient, and secure operations. Now, through "stay right" motions, like policy enforcement, security monitoring, observability, through all of that, organizations can govern their application estate to get better cost control and to reduce risk while delivering a great developer experience. And "get right" motions, which, you know, a lot of people call these "get green" or "get clean". You know, I don't love the term "get green" because there's always red. And red isn't bad, you just need to respond to it. But those are really the campaigns that we drive to, you know, eliminate the technical debt. And the technical debt can kind of happen in two ways. It can either happen because, you know, the code drifted since they first kind of came up with the project. It can happen because our standards evolved. It could happen because there's a new threat that we now have to respond to. And so there are many different ways that we kind of need to drive these "get right" campaigns.
So now let's return to the Secure Future Initiative just to show you kind of the double-click on what it looks like within the engineering systems pillar. The goal of the engineering systems pillar is to protect our software assets and to continuously improve our code security through the governance of the software supply chain and our engineering systems infrastructure. So this is an enormous goal, and you know, acknowledging we are not perfect here. But we're approaching this through the lens of continuously learning and improving our products and our infrastructure so that we can protect both our solutions and our customers' solutions. So with all that said, let's take a quick look at the five objectives that make up the engineering systems pillar. In a sense, you know, these keywords that you see highlighted here, when I first -- you know, when we first realized that we were going to be bringing the Secure Future Initiative company wide -- I got a call, you know, Friday night very, very late and, you know, got the word, "Hey, we're going to be doing this, and I'm going to need your help, you know, given that you run the engineering systems team." And so, you know, it's one of those late-night calls that you spend time kind of thinking about even when you're not intentionally thinking about it. And these are really the five key themes that came to mind, those words. So let's just talk through it. Number one, we have to build and maintain the inventory for our software assets that we use to deploy and operate our Microsoft production environments. Inventory is just so critical to facilitate rapid response and remediation, and to also continue to maintain compliance with our ever more secure standards. So we cannot do something like a "get right" campaign without actually having an inventory. And you know, today this is actually a system that is internal only.
We don't have an equivalent retail solution of what we use internally inside of Microsoft to what we've built for our inventory system. But it is an essential component in terms of how to facilitate this. So as we saw from the recent SaltStack exploit, publicly-known vulnerabilities can really be exploited very quickly. And as a part of our "get right" campaign and our platform engineering motion, over the past several years, we've driven major campaigns to prune our untended repos and to understand which remaining repos are production related. And this has really immeasurably impacted and helped improve our focus for our efforts. It helps us reduce the noise from the findings so that we can focus on, you know, what actually represents the highest risk, and it effectively has improved our MTTR, our mean time to remediation and response, when security issues arise. Now, since some of the earliest open-source incidents, we've really continued to improve our open-source incident response process. And so now we can actually block known bad packages on our deny lists and enable central notification to all of the affected teams within just a few hours, which is much improved. Now, the second big point is access. In the before times, developers built on their own build servers and they copied binaries over from one build server into the production environment. And you know, we don't live in that world anymore -- or we definitely don't want to live in that world anymore, so we're really trying to apply Zero Trust and least privilege policies to secure access to source code and engineering systems infrastructure. Now, this is a defense in-depth approach that really ensures that only authorized personnel have access to those critical resources and to pathways to production. This is a really critical step.
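As a rough sketch of what a least-privilege access policy over source repos can look like -- the org layout, repo names, and policy rules here are hypothetical illustrations, not Microsoft's actual model:

```python
# Toy least-privilege policy (hypothetical orgs/repos, for illustration only):
# developers get write access to their own team's repos, read access to
# non-sensitive repos elsewhere in their top-level org, and nothing else.

ORG = {
    "devdiv/compilers": {"alice", "bob"},
    "devdiv/tools": {"carol"},
    "azure/networking": {"dave"},
}
SENSITIVE = {"azure/networking"}  # no cross-team read for these repos

def access(user: str, repo: str) -> str:
    """Return 'write', 'read', or 'none' for a user on a repo."""
    if user in ORG.get(repo, set()):
        return "write"
    top_level = repo.split("/")[0]
    user_orgs = {r.split("/")[0] for r, members in ORG.items() if user in members}
    if top_level in user_orgs and repo not in SENSITIVE:
        return "read"  # inner source: readable and searchable, not writable
    return "none"
```

The real systems derive these groups from the organizational hierarchy automatically and layer approval workflows on top; this only shows the shape of the decision.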
Now, in the interest of transparency with you all, one of the things that, you know, is an important tension here is supporting the inner -- the culture of inner source, you know, and security, right; because you know, yes, we want to protect all of our assets, but we also want to support collaboration and code reuse inside the company. And so our goal is to be able to allow developers to search for less sensitive source code, to facilitate code reuse as much as possible. And our current approach is to, you know, really make sure that we're standardizing the access groups based on our organizational hierarchy, and also support easy experiences to approve access if somebody wants to contribute to a code base in a sister team, and also, you know, provide search capabilities for folks so they can kind of, you know, learn but not necessarily have write contributions into all of that source code as well. So the third big dimension is code security. Now, every single line of code -- and this is our aspiration -- that deploys to Microsoft production environments really has to be fortified with state-of-the-art security checks. And we have to make sure that every -- all the code that we implement is secure and that we're continuously eliminating categories of attack surface area so that we can continually improve our products. So two examples of how we do this are to, you know, remove secrets from code, as an example using GitHub Advanced Security, and to make sure that we're improving the threat modeling across the company in that practice. Another example of how we eliminate major categories is making sure that we're migrating more of our code bases to memory safe programming languages. And so that's kind of, you know, one of the kind of important dimensions of this. Now, the fourth big dimension is isolation.
By standardizing and securing our build and release systems through these governed pipelines, and isolating our dev test environments from our production environments, that will allow us to have consistent and secure continuous integration, continuous deployment solutions across the board. And that will reduce the risk of lateral movement into the production environments. Now, one of the tools in our arsenal that, you know, we have that facilitates this but, you know, it is kind of a challenging thing to kind of use at the scale of Microsoft -- but it really facilitates the separation between dev test and prod -- is Azure Deployment Environments. And that allows platform engineers to set policies and settings on various different types of environments and control which Azure resources developers can create, and really track those environments across, you know, feature branches and kind of, you know, all the different kind of, you know, ways that they might deploy these things, you know. One of the things that today, you know, developers really struggle with is setting up all of the managed -- the cloud-managed resources that their applications depend on. And it's really error-prone and prone to, you know, configuration failures or other things like that. And you know, we don't really want every single developer to have to become a -- you know, an expert in kind of -- frankly, developers don't love writing YAML. [Laughs] They don't love it, they don't want to write it. So this actually saves them a lot of time. But it also, you know, allows us to kind of have more consistency across all of this. And through this, they can then apply Azure governance and policy based on the type of environment, the, you know, sandbox, testing, staging, or production. Now, another tool in our toolbox is Managed DevOps Pools. Now, this is something that we actually developed internally in Microsoft first.
We've been using it for the last couple of years and have been rolling it out as part of our standardized build pipelines. But basically this is a way to make sure that, you know, we're driving centralized, governed build infrastructure for all of the builds across, you know, Microsoft in this case. But you know, this is actually a feature that we're going to be bringing to our third-party customers at Microsoft Ignite in a couple of weeks. But you know, this really helps to ensure that we're actually centrally managing and governing those build pipelines so that the build infrastructure cannot be compromised, or at least that we have, you know, a more qualified central team that's basically monitoring all of those pipelines. So the fifth objective, which is obviously super critical, is supply chain security. So we're building a more secure software supply chain to protect every component in our production environments and products, which includes aggressively burning down critical open-source vulnerabilities through package upgrades and standardizing on Microsoft-vetted package feeds. And we're super laser-focused on code integrity as well, to ensure that the only services in our production clouds are those that have been built using our production tool chains. Now, as we look at this space in terms of the maturity framework for safe OSS consumption using the Secure Supply Chain Consumption Framework, that allows us to have inventory, audit, and to track our OSS usage in the event of a new zero-day vulnerability getting declared. So Microsoft has developed the S2C2F -- it just doesn't quite roll off the tongue the way that I would like it to -- since 2019, and we've continued to lead and maintain the framework as a part of the OpenSSF. So the eight practices in the S2C2F -- thank you, they work together as part of a holistic strategy so that you can secure the team and organizational supply chain.
And so it starts with ingest, to make sure that your teams are consuming open-source through an artifact repository like our Azure Artifacts, or JFrog, or something like that. And so that way, you can actually continue to build your solutions even if the upstream goes down. Now, you also need to enforce, and that means ensuring that OSS can only be consumed through the artifact repository, so that you can really gain control over how dependencies are brought into your supply chain, and that this is part of establishing the paved path with those guardrails. So then these SCA tools, like dependency scanning, can then collect a more accurate inventory and scan for legal risks, vulnerabilities, malware, et cetera. Now, by improving our inventory, SCA, and other tools like Dependabot, which is part of GitHub Advanced Security, that really helps developers to remediate their known vulnerabilities very quickly. And that allows, you know, security teams to also audit to make sure that the known vulnerabilities are being addressed as part of a "get to green" campaign. Now, we've published all of this high-level, solution-agnostic set of practices with a detailed list of requirements for each practice and real-world supply chain threats that are specific to OSS, and how our framework mitigates them. So for those of you, you know, outside of the Microsoft team who are interested in this, you could check out that link at the bottom for some recommendations. Now, internally, we continue to invest in what we call "Central Feed Services". And this is an extension of Azure Artifacts that's designed to protect against vulnerabilities that are stemming from the consumption of public packages. So CFS really ensures that packages are pulled from secure internal registries, and they provide ingestion gates for OSS looking at malware, looking at provenance. And it provides ongoing continuous scanning with the latest tooling signatures, SAST, DAST, and it's transparent to the developer experience.
So really, the developer experience doesn't change at all. You know, it looks like they're just consuming a package directly from NuGet or PyPI, or wherever, but essentially we make sure that our build systems are pointing to this Central Feed Service. Now, it's not just about us making Microsoft more secure. I also want to call out that, you know, especially given that we have retail products, we have -- you know, we have package managers that Microsoft actually delivers, we also need to think about our responsibility in terms of securing the entire ecosystem. That protects Microsoft and it protects all of our external customers. And so we continue to make really significant investments in the broader ecosystem to secure the consumption of packages and to, you know, reduce the blast radius for the ecosystem. So you know, for the NuGet team, just as an example, for the last couple of years, their -- even before the Secure Future Initiative, their big focus, their number one priority actually has been improving the security of the ecosystem. So you know, here are just a few of the examples of the work that we've done in NuGet. You know, .NET restore audit improvements really help you to check all of your dependencies, including direct and indirect dependencies. Transitive dependency support helps you know what your hidden dependencies are, both at a project and a solution level. OIDC trusted publishers enhances security with robust authentication, and it enables convenient access across multiple services through single sign-on. And that makes our systems easier to use and more secure. And you know, obviously, you know, in some senses, the Cyber Executive Order and kind of its requirement of SBOMs has also had a pretty dramatic impact in terms of the requirements across the entire ecosystem of engineering in kind of the developer world.
And so one of the things that we've been doing is to ensure that our -- that, you know, NuGet can facilitate building SBOMs properly. And then, "dotnet nuget why" displays the full dependency graph for a specific package within a project or a solution. And so we're going to continue to, you know, partner with, you know, the OpenSSF foundation to make sure that we're continuing to improve our open-source insights. And in some ways, you know, from my perspective, NuGet is one of the package managers that Microsoft really kind of fully controls. And so I think that we have an opportunity with NuGet to actually show, you know, some thought leadership in terms of how we can improve and secure the ecosystem for the entire, you know, .NET community, and hopefully that can then kind of spur additional package managers to also adopt similar practices. Now, this all sounds good, yes; so far, yes? Okay. But it's also not enough; and I just want to call that out. You know, detecting the issues, making sure that we have all of the security vulnerabilities identified, it doesn't actually solve the problem if the developer never takes action; right? And so we have to think about the actual ergonomics for the developers. We have to make sure that they are not, you know, barraged with a wall of issues that is going to represent tech debt that they then have to take literally years to burn down manually. And so we really also want to make sure that we, you know, are not just finding the issues, but fixing the issues. You know, yesterday was GitHub Universe; we were talking about "found means fixed", right? That's kind of our adage: we want to make sure that once we find it, our tools and our systems actually automatically help the developers. It just makes it so much -- so easy for the developer to fix the issue once it's found.
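A minimal sketch of that "found means fixed" flow, with made-up advisory data (not real packages or CVEs): rather than only flagging a vulnerable dependency, map it straight to the version that fixes it, so an upgrade can be proposed automatically.

```python
# "Found means fixed" in miniature: for each dependency that matches a
# known-vulnerable range, propose the fixed version instead of just
# reporting the finding. Advisory data below is invented for illustration.

ADVISORIES = {
    # package name -> (versions below this are vulnerable, fixed version)
    "examplelib": ("2.0.0", "2.0.0"),
    "otherlib": ("1.5.1", "1.5.1"),
}

def parse(version: str) -> tuple:
    """Turn '1.4.2' into (1, 4, 2) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

def propose_fixes(deps: dict) -> dict:
    """Return {package: fixed_version} for every vulnerable dependency."""
    fixes = {}
    for name, version in deps.items():
        advisory = ADVISORIES.get(name)
        if advisory and parse(version) < parse(advisory[0]):
            fixes[name] = advisory[1]
    return fixes
```

The real tooling of course resolves semantic version ranges and raises the upgrade pull request for you; this just shows the shape of the mapping from finding to fix.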
And so you know, inside of Microsoft, we use Dependabot, which is, again, part of GitHub Advanced Security, to facilitate developers to -- you know, once we find a vulnerability, for them to upgrade their package so they can build more secure software. I just want to talk about culture and the security mindset really quickly. You know, you guys have all heard about the xz attack, so we're just going to kind of dump -- you know, dig into that a little bit here. But you know, at the center of this was Andres Freund, who is a Microsoft developer who kind of detected it. And so first of all, I just want to -- I want to acknowledge the amazing luck that we had, that we had a highly technical engineer -- this is all advancing without me, unfortunately. So I just want to acknowledge that we were so lucky that we had Andres from the Postgres team in Azure, who was doing benchmarking on the test branch of Debian, and he was spending time digging into the profiler, and he was seeing some issues in the remote connections. And so what he -- the analysis -- you know, obviously, in the security community you all are building amazing YouTube videos that go, you know, very, very deep into all of this work; amazing work. But essentially, what the library does is it alters the symbol table and it hijacks the RSA decrypt calls from sshd. And as part of the certificate CA signing key, it actually contains the hacker's commands, but only if the hacker has signed them with their key. So it's a pretty sophisticated attack. But I just also want to call out that this is really a social engineering attack. So if we go back to the xz mailing list in 2022, the attack actually started much, much earlier. This is really a long game type of attack.
So if you've worked in open-source communities before, you know that the conversations that I'm going to show you here are really not unique, and it's definitely easy to assume that -- you know, that these are, you know, someone who is really trying to contribute back. And it's unfortunate that we're just realizing just how careful we need to be in the open-source space, and we can't really trust that what people are saying, you know, that they always have, you know, kind of positive intentions from a contribution perspective. So here's the maintainer of the xz project, Lasse Collin. And he's noting there that he's been struggling with mental health issues because of the pressures of basically maintaining this pretty significant component. Now, you've heard of a Nebraska dependency. It's a reference to this XKCD sketch. And so, you know, in a sense, this is a great example of an xz -- I mean, a Nebraska dependency. So going back to the mailing list and following what happened in 2022, essentially someone's saying, "Why don't you pass off the ownership, the maintainership to somebody else then?" And finally, we see lastly, the maintainer of the project come back and say, "This other person, Jia Tan, has been helping me quite a bit in the background. And maybe he can help me a little bit more in the future." And he created a GitHub account in 2021. And this looks pretty solid as open-source contributors go. And lots of the GitHub -- he has lots of GitHub commits; he has an avatar, he's very active, you know? But here's where they contribute the backdoors. The pull request is accurate. Yes, there's stuff in those files. And they alter and they build and more -- you know, he's really, really contributing in a really meaningful way, and that actually is preventing anybody from detecting that he is -- he's, you know, trying to do what he's trying to do. So here is the evil pull request. This is essentially a very large false value.
So here you could see that they also contributed to this Google fuzzing project to replace themselves as the maintainer contact in case any fuzzing issues came up. And if you actually look at the GitHub repo, you're not going to find everything. The actual attack files are inside the tarball only. And so a lot of Linux distros will pull a tarball in and build it, as opposed to cloning the Git repo; and so on. So you wouldn't really see it. And also interesting in this case, the hackers modified the .gitignore file, and they've actually said here, "Don't check in this build-to-host.m4 file" -- because that's where the backdoor is, so the backdoor ships in the tarball only. So it's really interesting to see all of these breadcrumbs here. And in unraveling this, Andres really personified what we call the "security mindset". And I think that's a really key element of security culture. He was curious, he was vigilant, and he took his responsibility to bring the findings to light. And so I just want to close where I began this morning. It's not a question of if there's another security attack coming, it's really a question of when it will hit, where it will hit, and how severe it will be. And you know, I just want to kind of encourage everyone here to think about Andres Freund; like let's just talk about him as an example. You know, what was super interesting in that first weekend when -- even before the New York Times article hit, you know, he reached out to me. I think a mail got forwarded to me eventually, and I reached out to him just to say like, "Hey, you know, how's it going? How can I help you?" And I think one of the things that I learned from working with Andres during this period is as much as we have a security culture inside of Microsoft and we have things like "Report It Now", which basically every employee at Microsoft knows to use if they see something, you know, I think about this as almost like you see a backpack by a trashcan in the train station, right?
You see that it's abandoned, right, you see that. It looks somewhat suspicious. If you see somebody who is an authority figure walking by, you're probably going to let them know, "Hey, that's a thing you should go check out." And so essentially what Andres Freund did was the equivalent of seeing that backpack and telling somebody about it. And he did use "Report It Now". But Andres was also an acqui-hire to Microsoft. He came in through an acquisition. He was not in the company very long. You know, and he was a career open-source maintainer contributing to, you know, the Linux kernel, Postgres, other things like that. And so he wasn't necessarily kind of, you know, embedded in the overall Microsoft culture. But I think that that's something that we all really need to think about: if you think about your teams, you know, who really understands what to do when they see something, and are they going to get the support that they need when they see something and report it there? So you know, for Microsoft, we have "Report It Now". That's what you should be using if you see something. But also, for anybody who works with friends on other teams, or works with teams that have come in through acquisition, or works with open-source maintainers for components that we have a dependency on, think about how you can actually help them as well so that we can all really have this security mindset to make sure that, you know, the digital world is more protected. So thank you very much. Have fun at "BlueHat". [ Applause ]
Wendy Zenone: Thank you for joining us for the "BlueHat Podcast".
Nic Fillingham: If you have feedback, topic requests, or questions about this episode --
Wendy Zenone: Please email us at bluehat@microsoft.com or message us on Twitter @msftbluehat.
Nic Fillingham: Be sure to subscribe for more conversations and insights from security researchers and responders across the industry --
Wendy Zenone: By visiting bluehatpodcast.com or wherever you get your favorite podcasts.