
Blue screens of death: A deep dive into the Microsoft CrowdStrike outage.
Dave Bittner: Hello, everyone, and welcome to "Caveat," N2K CyberWire's privacy, surveillance, law, and policy podcast. I'm Dave Bittner, and joining me is my cohost, Ben Yelin from the University of Maryland's Center for Health and Homeland Security. Hey there, Ben.
Ben Yelin: Hello, Dave.
Dave Bittner: On today's show, we are once again joined by our special guest, Caleb Barlow, CEO at Cyberbit. Caleb, thanks for joining us.
Caleb Barlow: Thanks, guys. Pleasure to be here.
Dave Bittner: While this show covers legal topics and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. [ Music ] All right, gentlemen. Let's jump right in here. Gosh, has anything been going on in the cybersecurity space this week? I really haven't seen anything. Been kind of a slow news week. Huh, Caleb?
Caleb Barlow: When I woke up on Friday, most people's first reaction was, "Why are all these flights cancelled?"
Dave Bittner: Yes.
Caleb Barlow: I have to say, my first reaction was, "There is going to be a lot to talk about in the cyber law and policy world."
Dave Bittner: Yes.
Dave Bittner: And I'm sorry for the people who suffered the effects of this outage, but we're going to crush some content here, right, Caleb?
Caleb Barlow: Oh, I mean, this is definitely in the realm of wow, which is exactly why I think -- you know, anytime an incident like this happens, whether it was, you know, WannaCry or NotPetya, it's time for everybody to regroup. This is one of the incidents that causes us to frankly rethink everything. As much as all of this badness occurred, I get excited because it's an opportunity to say, "Okay, how do we improve what we're doing?"
Dave Bittner: For sure.
Dave Bittner: I should say before we get too deep in here, of course, we're talking about the incident last week, as we recorded this, with CrowdStrike, where CrowdStrike had pushed out an update to their monitoring software, which caused Windows systems to go into a kind of looping crash mode. And this had cascading effects. I think Microsoft estimated about 8.5 million PCs went down.
Ben Yelin: The blue screen of death.
Dave Bittner: The blue screen of death.
Caleb Barlow: But it doesn't -- it doesn't have a good name yet. Like, everybody's just calling it the CrowdStrike thing. Like, this needs to be like, Falcon Down or something. Like --
Dave Bittner: Right.
Caleb Barlow: -- as an industry, we always have these really cool names, and like you know, this thing is like -- I mean, why have people not named this thing yet? It needs -- it needs a graphic. It needs a name. It's a big event.
Dave Bittner: Right. It needs a musical score underneath it like they do on CNN you know, when bad news happens. They -- someone creates a musical sting for it.
Ben Yelin: We need to write to the tabloids. Like see what the New York Post and the Daily News can come up with.
Caleb Barlow: I was literally looking at Falcon Down and it turns out that's a movie from like many years ago. So, maybe it's trademarked.
Dave Bittner: Yes.
Ben Yelin: Cross that one off the list.
Dave Bittner: So, Caleb, I'm really curious in your take on this. I mean, as the news breaks on Friday and you're starting to make your way through your day, can you give us some insights here? What was your reaction in real time as things started to unfold?
Caleb Barlow: Well, here is the funny part. You know, one of the things my company does, we do a lot of crisis simulation events, and we were actually in Paris doing a crisis simulation event for an Olympic supplier that deals with transportation systems. So, I was literally coming back from that event, and you know, you do one of these events and, especially nowadays, it's always a little bit of, "Could you believe this could actually happen?" Right? And we always try to make sure we're using things that are realistic. So, of course, by the time I land, all I've got is a ton of messages: "Oh, no, it actually happened. It's all going down. What are we doing?" So, you know, what's interesting about this is it was not positioned obviously as a security event, but I would argue that first and foremost, this is a security event, in that security teams had a massive influence on this. It's security teams that have the ability to stand up rapid response and incident command. The only difference is it wasn't a malicious actor. It was an accident. But that doesn't change our response, in that every company was pulling out their run books and saying, "Okay, massive outage, what do I do?" This wasn't any different than the response you would have had on WannaCry or NotPetya. It just happened to not be a malicious actor.
Dave Bittner: I saw someone on one of the social media channels say that, you know, a significant enough technical error like this is indistinguishable from a malicious attack.
Caleb Barlow: Well, and I think one of the other things we have to -- that's so fascinating about this is, you know, in the security industry, every company has a reputation, right? Like, okay, look. I think it's fair to say without being overly negative that if this was Microsoft, everyone would have been like, "Oh, okay. Here's another incident. We've got to work through it. Blah, blah, blah, blah, blah." Well, this is CrowdStrike. Like, these are the good guys that everybody aspires to be. This is an incredibly trusted company. And I will say personally, both of the last companies I've run, you know, I step in and it's like, "Well, what are you using for EDR?" because it's such an important, you know, piece of your protection. But, yes, "Flip it over to CrowdStrike." Like, it's one of the tools that I sleep well at night knowing my company uses.
Ben Yelin: I'll just but -- sorry, I'll just buttress that quickly, Caleb. In talking to officials here in the state government of Maryland, when it's like, "What is the highest yield investment we can make in units -- in cybersecurity for units in local government?" It's just, "Get them CrowdStrike." Like it's not --
Caleb Barlow: Hundred percent.
Ben Yelin: -- yes.
Dave Bittner: Yes, it's like the old saying, "Nobody ever got fired for choosing IBM." You know, CrowdStrike is a safe choice, or at least it was. Where do we see the fallout from this going? I mean, as we're recording this, mitigation is still under way. I think Delta's the last of the airlines to be kind of digging their way out of this mess here, but I mean, this had such broad effects all over the world. We see reports that Congress is going to bring in the CEO of CrowdStrike for a stern talking-to, which is par for the course, but what do you think the real-world repercussions will be here from a policy point of view?
Caleb Barlow: Well, the first thing we have to realize is, I think we all kind of knew that CrowdStrike obviously is tapping into the kernel. Like, we all kind of technically knew that, but I don't think any of us really sat back and thought about, "Well, what are the potential implications if that goes bad? And why is Microsoft letting them do this?" And then your example with Delta, like there's this great meme going around about, you know, "Why was Southwest not as impacted as Delta?" And the joke was, "Well, they're using Windows 3.1," which, by the way, I'm sure is not the case at Southwest. But let's face it, Delta is an incredibly sophisticated digital airline, whereas some of the others aren't. They're more impacted because they rely on their digital infrastructure. So, you know, what you saw here was this massive implication, and my takeaway is, and I've said this on this show before, we've got to stop thinking of critical infrastructure in World War II parlance. Yes, water's important. Energy is important. But the cloud is the critical infrastructure of all critical infrastructures. And we have got to completely retool our thinking in terms of how we release product, how we regulate companies, how we think about this. Cloud, I'm just going to say it, cloud is the critical infrastructure, and if you're a regulator, if you're a CISO, if you have any authority in how things are released or tested, and you are not treating this like the critical infrastructure of all critical infrastructure, then something is drastically wrong. And I think we saw that in this event.
Dave Bittner: Ben, what do you make of that?
Ben Yelin: I think that's exactly right. I think this was kind of a warning shot, where we saw a house of cards fall down when such a major player -- I guess we can refer to this as the mistake heard round the world, the failed update heard round the world. And we saw what the kinetic impacts were. It far exceeded, I think, what people's expectations would be. I don't know if this is the case for you, Caleb, but certainly among people I've talked to, the impact exceeded what I think many people would have thought for an incident like this. I think now it's happened with this. It happened, for example, with Colonial Pipeline, where people could not conceptualize that a cyberattack on an oil pipeline on the east coast could, downstream, force people to sit in gas lines for the first time since the 1970s. And the more we have incidents like this, the more it's going to be real for people that this is critical infrastructure. This is what powers our economy and our everyday activities, and if we don't treat it that way in our laws and our regulations, then, just like you said, we're going to be stuck in a World War II, 20th Century mindset. And I think that's exactly right.
Caleb Barlow: Well, and don't think every adversary wasn't taking notice, right? I mean, if you want to impact an economy, if you want to impact the free western world, what do you do, right? Like, yes, you can build malware all day long, or you could go get somebody hired into one of these critical companies that, you know, drives the cloud. Get them into a position in the test and build and release process, and look at the mayhem that could be caused. I mean, I think a lot of people are stepping back from this realizing, "Okay, this is CrowdStrike." Now, don't get me wrong. CrowdStrike is a big player in the cybersecurity world. They're not a big player in the cloud world. Like, there are much bigger players with much bigger impact. The good news is, I mean, have we ever had a devastating incident like this from a Google or an Apple? No, they do an exceptionally good job. But it causes me to sit back and go, "Hey, I trust these guys. I trusted CrowdStrike." Well, things can obviously go bad. What does that kinetic impact look like? And what it means for a CISO is, you better have your plans together for either how you don't have homogeneity in your environment, or how you are well-prepared for the kinetic impact of having systems down for an elongated period of time.
Dave Bittner: One of the things that struck me here, that I guess surprised me and was a good reminder of how much of a bubble we can be in, in the cybersecurity world, is how many people I saw commenting -- people in the tech press -- that this was the first time they'd ever heard of CrowdStrike. You know? A name that is ubiquitous in cybersecurity, but even just, you know, one layer larger, right? To the technology folks, not a household name.
Caleb Barlow: Well, they clearly don't listen to the CyberWire, Dave. I mean, this is clearly the root problem of this issue, which I think is --
Dave Bittner: Right.
Caleb Barlow: -- very solvable, right?
Ben Yelin: It's funny, because in the political world, people have heard of CrowdStrike, because it was kind of in the bowels of the DNC hack-and-leak operation in 2016.
Dave Bittner: Interesting.
Caleb Barlow: I think the more interesting thing to see about this is what happens in the next phase, right? So, we have seen responses from regulators. We've seen responses from lawyers, no offense, Ben, of what happens when somebody screws up, right? And in some cases, those things evolve into class action lawsuits, or individual claims, or regulators stepping in saying, "Hey, you're going to do X and Y and Z." Now, I think this one's a little bit different because, again, this is the good guys. This is the people that we've all trusted. This is a company that's been on the front lines of slaying cyber bad guys left and right for years. So, okay, but it's also the largest F-up of all time. Of all time, in all industries, right? Like, so what do we do about it? And I'm fascinated to see, do government officials, regulators, step up and really think about this not in terms of, "How do we punish CrowdStrike?" but, "How do we think about this differently?" But also, this is an opportunity for executives at CrowdStrike, and at Microsoft, which, you know, is not completely off the hook in this, to also step forward and say, "Hey, we learned something. Here's what we're going to do differently. Here's how we become the exemplar of best practice." And you know, that takes on a role of crisis comms. You know, so Shawn Henry, who I think is the president of CrowdStrike or holds some senior position, issued an amazing apology letter that I would encourage everybody to read. It's just so heartfelt, and like, here's a guy who knows how to communicate, right? Like, okay, how does that kind of ethos convert into what we need to do next? I'm really intrigued to see that because, look, when something bad happens to somebody that we all admire, in a strange, screwed up way, they actually now have the platform to say, "Okay, we learned something. Here's where we now need to go." And I really hope we see that out of CrowdStrike.
Ben Yelin: I think that's right, and they have the credibility to say, "If it can happen to us, it can happen to any other company." Like, "We set the industry standard. And if we're the ones who made this epic mistake, then next time, it's going to be one of the big guys in cloud computing." So, I think that's absolutely the case. The legal question's really interesting to me. You can rule out criminal charges, because nobody did anything criminal, per se, here. I don't think there's a valid contractual claim. I mean, at best, the monetary damages on a contractual claim are going to be minimal, and I don't think that's the right avenue to pursue. So, I think we're talking about a torts negligence claim here, which, on first glance, from a 30,000-foot view, I think you could make that case, and that might be the grounds for some class action lawsuit. I guess my instinct on this, and I don't really know how to put it, but like, what would ultimately be the point of that lawsuit, given that the damages, spread across the entire universe of people who are affected by this, are going to be minimal per individual? And do we really need a class action lawsuit to teach CrowdStrike a lesson? I don't know. I don't know the answer to that.
Dave Bittner: But what if you're the CEO at Delta, right? And you've had, what, five days of chaos, thousands of flights cancelled, millions of dollars lost, and you've got to explain all this to your investors, to your employees, to all the people who, you know, track your company. Don't we understand the impulse here, that you want to extract blood from CrowdStrike?
Caleb Barlow: I certainly understand the impulse. I will say this for sure. The renewal conversations on your CrowdStrike contract are certainly going to take a very different tenor than they probably would have normally. But, you know, Dave's got a really interesting point. I mean, other than making lawyers rich, which look, that's going to happen just --
Ben Yelin: Admirable goal, right? Specific lawyers. Very specific lawyers.
Caleb Barlow: Very -- you know, look, I think what's most interesting about Ben's comment here is, like, what is this going to really mean for the individual, right? Like, if you took all the money they had and you divvied it up, what's it going to be? A couple of dollars per person that was impacted? Like, it's just immaterial. On the other hand, I do think that, you know, you can't let everybody off scot-free on this. There's certainly going to be an incentive for someone at Delta. There's certainly going to be an incentive for lawyers that find a unique way to approach this. I mean, what I would love to see out of this, and I'm certainly not the lawyer in the conversation, is some sort of a pathway forward to address this in a different way, and maybe using whatever's gained out of lawsuits to fund that program, that infrastructure, that mechanism to say, "This shouldn't happen again." And by the way, let's not just look at CrowdStrike in this. Like, you know, Microsoft was quick to be out there saying, "Hey, this isn't us. This isn't our fault." On the other hand, like, you've got a third party touching your kernel. You've got to be kidding me. Like, we all kind of knew this was going on, but what's the testing process on the Microsoft side for this? Like, can any of these EDR vendors do this? And let's also not forget, CrowdStrike and Microsoft executives were in, let's just say, a little bit of a tiff over antitrust, because Microsoft has a competing product to CrowdStrike, and that was also getting interesting in all this mix.
Dave Bittner: All right. We're going to take a quick break here. We will be right back after this message from our sponsor. [ Music ] And we are back. I'm trying to look at this from, like, the highest possible level of a policy adjustment as the result of this. And one thing I wonder about is, could there be a push for major industries like the airlines -- to your point, Caleb, critical infrastructure, let's just use that label -- that they have to be able to demonstrate that they have the ability to go into a failsafe mode when stripped of their digital infrastructure?
Caleb Barlow: So, I would position it a little differently, and let me actually use the example of a hospital, right? We're all aware of the incident that occurred down in Alabama where a hospital was impacted by ransomware. An expectant mother was coming in, wasn't aware that the hospital was degraded, and, long story short, because the hospital was degraded, they couldn't get a fetal heartbeat monitor at the nurse's station on the mother, and the child eventually died. And that's currently in litigation. Right? The lesson there isn't, "Oh, we don't shut the hospital down." The lesson isn't, "Oh, we don't move to paper charts," which is what they've done. The lesson is, "We're in a degraded state. That means there are some things we can't do safely, and we need to communicate that. But that also means we need a plan for dealing with this." So, it all starts with what I call a Commander's Intent: understanding what your intent is when you respond to an incident. If you are a company that runs life safety systems, like an airline, what is the threshold at which you say, "We're not going to fly"? Or what is the threshold if you are a hospital that says, "We're going to cancel elective procedures, or we're going to close the ER"? All of that thought process needs to be thought out in advance. I don't think it's realistic to say to any company nowadays, "Oh, you need a mechanism to operate without your IT systems." That's ludicrous. I do think it's appropriate to say, "You need to understand, when certain systems are down, what is your response? What can you live with? And do you have redundancy for that if you need it? And most importantly, at the top of that list is life safety," right? Like, Delta and the FAA said, "Hey, we're going to ground certain flights." A lot of people look at that and go, "Oh, I'm frustrated with Delta, blah, blah, blah."
I look at that and I'm like, "Damn, there's a team that was able to make decisions quickly. They understood what their risk posture was, and they pulled back to the level of risk that they thought was appropriate." Game on. Right? Yes, we're all frustrated, [inaudible 00:21:29], sitting in the airport and everything else. So, that shows me that in the case of like Delta Airlines, they had run books. Like someone made the calculus to say, "We shouldn't fly right now. We need to cancel these flights." And they understood what level they could operate effectively at. That is very different than let's say some of the examples we saw where certain -- you know, whether it's a transportation company that's allowing a large line to build up at a subway station, or a hospital that's not sure what it's going to do, like the thing we want to see here is you have a plan, you understand what gets degraded when you lose systems, and you understand what you need to do. Like, is there a life safety impact of what we're doing? Okay, we have to degrade what we're doing. Companies that have a plan, that's what this is all about. And exercising the living daylights out of that plan.
Dave Bittner: It's a good point. I mean, you know, you're right. I suppose it's kind of like asking an organization to have a plan for what they would do if there was no electricity. IT infrastructure is so much a part of the heartbeat of any organization now. To your point, Caleb, it's unrealistic to think that they could function in any meaningful way without it.
Caleb Barlow: But the point here Dave is, so many entities have a kinetic impact that they haven't thought through. So, if you're a hospital, it's not just about shutting it down and going, "Oh, okay, we're out of this." It's well, "What's the impact of shutting down the ER? Are people going to die because they have to divert somewhere else?" You've got to think through that calculus. If you're a transportation company, what happens if lines start to build up at an airport or a train station and people start to shove and push? That's a kinetic impact that you've got to think through. "How do I mitigate that if this system is down?" If you're a petroleum company and you can't deliver product, is there a way you can do that manually so people can still heat their homes and commerce can continue? We have to think through the kinetic impact and we need to recognize these things are connected to IT.
Dave Bittner: Yes. Ben?
Ben Yelin: It's so basic from an emergency manager's perspective, because that's what I do in my day job. Like, I think we have to refocus on continuity of operations. You identify your mission essential functions, and you figure out how to bring those functions back online within the required 12-hour period, which is the FEMA guideline, or whatever it is for your organization, to prevent mass loss of life, bodily injury or significant financial harms. We test it for loss of personnel, loss of facilities, and loss of systems. Loss of systems has seemed sort of conceptual in the past. We've often thought about it as a loss of internal systems, like a content management system or our email's down. After this incident, we all have to think more broadly about it. Well, if it's CrowdStrike 2.0, how does that affect our inter-dependencies? Maybe our backup method for cloud computing is also going to be down because the incident is so large in scale. So, I think this is going to impact how every type of organization, public and private, does continuity planning for loss of systems. And I hope -- I hope organizations take that seriously.
Caleb Barlow: Well, you know, the other thing that enters into this is, like, if your normal IT plan is, "I have my technical stack, right? That's how we build it. You know, I use Palo Alto firewalls. I use QRadar as my SIEM. I use CrowdStrike as my EDR." Right? This causes CISOs and IT managers to now ask a different question: "Do I want a completely homogeneous environment?" -- which is how we've all built our security stacks. Do we need a heterogeneous environment? Now, if you flip over to emergency management doctrine, I'm imagining, and keep me honest here, Ben, one of the things that you teach is that for any critical function, you can't just have a single supplier, right? So, that's a totally different shift for how we think about IT and security.
Ben Yelin: And when we do injects, it's like, somebody says, "All right, I'm going to call my vendor contact." No, you will not, because the inject is, that vendor is unavailable. They are currently boiling in a stew of their own making because of some massive incident. So, think on your toes what comes next.
Dave Bittner: Well, and I saw, in some of the coverage of this, that there were many organizations who couldn't go to their backups because their backups were running CrowdStrike. And so, their backups were caught in the same blue screen of death loop that their main systems were, because they hadn't thought that that would be the point of failure. I'm curious, you know, Caleb, suppose that I'm one of the companies that dodged this particular bullet, but I'm looking at this and I'm saying, "Okay, how do I adjust my run book from this point out?" What sort of things do you suppose they should be looking at?
Caleb Barlow: Well, I think there are three things that come into this. When we exercise our plans, historically, those are typically kind of technical tabletops. This now absolutely must involve the executive team to consider, "What is my incident command structure? Who is in charge, and who's making decisions when this occurs?" And by the way, the answer is not the CEO and the board, because the CEO is on a plane going to who knows where, and the board can't convene easily. You have to have the authority to make decisions with the people in the room. Second, you have to understand how you're going to communicate in a degraded incident. Like, the difference in this case is that people were still able to communicate. You know, phones typically still worked, but you go back to NotPetya, all the voice over IP phones were out. People were communicating via WhatsApp. Like, how are you going to communicate? What are you going to say? How are you going to interact with your employees and customers? Crisis communications, so critical. And then I think the third thing in all of this is you need to have that commander's intent. What is your intent in a degraded state? What must you maintain amongst everything else, so that your team understands what you're actually chasing in this? I'll give you a perfect example. Years and years ago, I had a national TV broadcast news network that was down. And we responded to the incident. And of course, everybody wants to go chase, "What's the malware? And what variant is it? And who's behind it?" And the customer's like, "I don't care. I just need the ability to run commercials right now, because if I can't run commercials, I'm not making any money." And that was basically their intent. Like, go do your investigation later. Get me the ability to maintain my business. I think every business has that intent of, "What is it you need to do?" If you're a transportation business, I need to get people flowing.
Do I need to collect revenue? No. Do I need people to be happy? No. I need people to move safely through the system. Right? If I'm a hospital, I need to not degrade the life safety of people. Like, you've got to think about what that intent is, and most importantly, you have to, have to, have to exercise it.
Dave Bittner: Yes, it's a great point. I mean, one of the -- some of the comments I saw again, you know, cybersecurity folks on social media were just emphasizing the importance of availability, you know? Like, if -- to what you're saying about the advertising with a broadcast network. You know, I need to be up and running. These systems, they need to be working, first and foremost. And I think it's easy to lose sight of that.
Caleb Barlow: Well, I mean, let's use CyberWire as an example, right? If you guys had some sort of incident, is there a life safety implication? No. Is there a revenue implication? Yes. But what's the most important thing for you? To be able to broadcast your podcasts, right? Super simple for a relatively simple company. That's totally different if you're Delta Airlines. Like, if you're Delta Airlines, there's some very real and tangible implications for life safety, for the movement of people, you've got to have those plans built in place. And again, as much as people are frustrated, it looks like they have those plans in place because they rapidly shut things down where they weren't comfortable. And I think in a strange, screwed up way, we should applaud that. Like, what worries me more, what scares me to death in this is those companies that maybe had a slight impact, and we didn't see any change in what they were doing and they didn't know what they were doing.
Dave Bittner: The airlines have the FAA, right, to say, to tell them, "This is what you will do." Like, you know, "We're going to get on the line together and figure out what's going on," but ultimately, the FAA has the authority to put airplanes back on the ground.
Caleb Barlow: Who's that equivalent in the cloud world, Dave?
Dave Bittner: That's my question. That's what I'm saying. Like, you know, CISA has no authority to tell anybody what to do. Are we headed in that direction? Do events like this make people think that we need an FAA for cyber, for cloud providers?
Caleb Barlow: Ben, what's your thought? I mean, like you don't want to overregulate this, but on the same front, like we need some sort of step forward to say, "This is indeed critical infrastructure that we have to think of differently."
Ben Yelin: Yes, I mean, I think CISA does play a role here, even if they don't have the type of enforcement authority that some of these other agencies do, because otherwise, it's very siloed. Like if it's a medical incident, it's CMS and HHS. And if it affects the airlines, it's the FAA, and if it affects the Veterans Administration, it's the VA. And so, it's just -- it's this very siloed agency by agency response. I think what CISA can do is have an industry-wide standard. I think it's especially important for the public sector, but certainly for the private sector, where there is a protocol in how to respond to incidents like this. We see it in the emergency management world, because FEMA has documents and annexes for all different types of organizations and businesses in how to respond to natural and manmade disasters. And I think CISA could play a similar role in how to respond for cyber incidents. They can have templates available for people who don't work for Delta, but for smaller companies. What do you do when your EDR has failed, or there's a global outage? Here's a template that you should fill in that we've already drafted with the guidance from our best cybersecurity experts. I do think CISA has a major role to play in this.
Dave Bittner: All right, well gentlemen, we are going to leave it at that. This is a story that continues to play out and develop and I'm sure there's going to be repercussions from this one for a long time to come. Caleb Barlow is the CEO at Cyberbit. Caleb, thank you so much for joining us today. [ Music ] That is "Caveat," brought to you by N2K CyberWire. We would love to know what you think of this podcast. Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly changing world of cybersecurity. If you like our show, please share a rating and review in your favorite podcast app. Please also fill out the survey in the Show Notes or send an email to caveat@n2k.com. We're privileged that N2K CyberWire is part of the daily routine of the most influential leaders and operators in the public and private sector, from the Fortune 500 to many of the world's pre-eminent intelligence and law enforcement agencies. N2K makes it easy for companies to optimize your biggest investment: your people. We make you smarter about your teams, while making your teams smarter. Learn how at n2k.com. This episode is produced by Liz Stokes, our Executive Producer is Jennifer Eiben, the show is mixed by Tre Hester, our Executive Editor is Brandon Karpf, Peter Kilpe is our publisher, I'm Dave Bittner.
Ben Yelin: I'm Ben Yelin.
Dave Bittner: Thanks for listening. [ Music ]