
Security remediation automation.
Rick Howard: Hey, everybody. Welcome back to Season 15 of the CSO Perspectives podcast. This is Episode 4, where we turn the microphone over to our regulars who visit us here at the N2K CyberWire Hash Table. You all know that I have a stable of friends and colleagues who graciously come on the show to provide us some clarity about the issues we're trying to understand. That's the official reason we have them on the show. In truth, though, I bring them on to hip-check me back into reality when I go on some of my crazier rants. We've been doing it that way for almost four years now, and it occurred to me that these regular visitors to the Hash Table were some of the smartest and most well-respected thought leaders in the business. And in a podcast called CSO Perspectives, wouldn't it be interesting and thought-provoking to turn the mic over to them for an entire show? We might call it Other CSO Perspectives. So that's what we did. Over the break, the interns have been helping these Hash Table contributors get their thoughts together for an entire episode of this podcast. So hold on to your butts.
Speaker 1: Hold on to your butts, butts, butts.
Rick Howard: This is going to be fun. My name is Rick Howard, and I'm broadcasting from the N2K CyberWire's secret Sanctum Sanctorum Studios located underwater somewhere along the Patapsco River near Baltimore Harbor, Maryland, in the good old US of A. And you're listening to CSO Perspectives, my podcast about the ideas, strategies, and technologies that senior security executives wrestle with on a daily basis. Merritt Baer is a very good friend of mine. She is a Harvard lawyer by training, became a legislative fellow working for Senator Michael Bennet, the current Democrat from Colorado; transitioned to the senior cybersecurity counsel for the US Department of Homeland Security and then later the lead cyber advisor to the Federal Communications Commission. She spent five years as the Deputy CISO for Amazon Web Services and is now the CISO for Reco AI. We met when our paths crossed on the cybersecurity conference circuit. We were on a panel together before COVID and before ChatGPT was a thing, discussing AI and machine learning and butting heads with opposing views; and we became fast friends. She is wicked smart and the perfect person to discuss the cybersecurity profession's potential decisions regarding how to use AI. So here's my conversation with my friend Merritt Baer. So, Merritt, when I was soliciting Hash Table volunteers to do their own episode, you immediately jumped in to claim one of the eight slots for the season. And you eventually settled in on this topic we're talking about today, which is making security decisions around AI use. But then you immediately ran into a big stumbling block. So what happened?
Merritt Baer: Yeah. You know, it's funny because, in theory, writing 1500 words is not a hard thing to do. But I found that this topic in particular, while it, I think, is really timely because we talk about AI in lots of sort of notional ways about the future or about, you know, ethics or things that are very kind of aspirational, I don't see a lot out there that's more tangible, like for folks actively making practical decisions around, you know, using AI, how to know what AI is in their environments, and so on. And, as a practicing CISO myself, I thought that would be a really good topic. But when I came to putting something on paper, it was really hard to just constrain it. I, you know, was tempted to go in the direction of talking about, you know, the dangers of building your own LLM. I was tempted to go down the path of, you know, talking about how to know, you know, how to discover shadow AI in your environment, copilots, or other manifestations. I was tempted to go down the rabbit hole of, you know, ethical considerations or, you know -- or of attacks, right, like poisoning and bias and drift and some of the ways that folks might actively skew your AI results. And it just kind of felt like anything that I chose would be both too much and too little. And so I ran into this block; called you. And you said, Let's make it a conversation instead. So that's -- thanks for indulging me and doing that instead. And, with that disclaimer, you know, we will not cover the whole universe of practical AI considerations. But hopefully we will talk through some of them.
Rick Howard: I get that a lot from the stuff I do because, you know, it's, "Rick, you should, you know, limit it. Okay. Stop talking about the world. Maybe talk about one specific thing." So it is a common problem, Merritt. All right. We both need editors.
Merritt Baer: Right. And it kind of also felt like, if I constrained it, it would feel artificial.
Rick Howard: Yeah.
Merritt Baer: But you have to constrain it. So, yeah. And, to be honest, like, I also was tempted to go back in historic time to, like -- you know, because in a lot of ways AI is, in present form, really machine learning. And that is nothing new. That's been around for decades. So, anyway, yes. It's a --
Rick Howard: Well, let's talk about that because you and I are both sticklers about, you know, throwing these terms around. I know when you and I first met, we were on an AI panel, I think, in Colorado. And we were educating the audience about the differences between, let's say, AI and machine learning and LLMs. You want to give that a crack while we're here?
Merritt Baer: Yeah. I think that folks currently -- I have sort of given up that fight to some extent because I'm using -- even, like, in the intro here, I'm using AI in the term that -- like, in the definitional sense that it's currently being tossed around, which is the idea that, you know, folks are using capabilities that are basically, you know, analytics over large pieces of data. So what I'm talking about, I think, is ML in a traditional sense. We're not talking about this kind of generative AI idea, which I think is still debatably not real. You know, like the idea that an AI would have sentience or would be able to produce some original material, whether that's -- whether you -- I even hesitate to say the word thought, right, because I don't know that I believe that a machine can think. But, yeah, you know.
Rick Howard: I think most of us think -- you and I, I think, when we think of AI and we think in terms of, you know, movies like The Terminator and maybe the movie Her, when the artificial intelligence wakes up, becomes aware of itself and, you know, becomes some sort of being, in the Sci Fi world, that's called the -- what's it called when that happens in the world? I forget. The singularity. That's what I was looking for, right? The singularity, right. So what I -- yeah.
Merritt Baer: Debatable as a -- you know, I think it is a thought exercise to use terms like that. I don't think that we're -- at least in my view, we are not necessarily on a course where that ends in that.
Rick Howard: And I agree that, you know, and future forecasters, they've been saying the singularity has always been 10 to 20 years away, you know, 50 years ago, right? So it's always somewhere out in the future. But I will say that there are some experts in the field that it could -- say that it could happen as early as 2050. We'll see. Okay. You think -- you think it won't happen is what I hear you saying.
Merritt Baer: Yeah. I think that, if you look at the evolution of our tech world, it has been punctuated by, like, really radical transformation. So, like, the internet or, you know, mobile computing, cloud computing. You know, these are really functionalities more than they are computers taking over sort of like human capabilities. And I think that there is something inherently human. You know, computers essentially do what you program them to do, even if that is to build upon analytics that get sort of abstracted away from the underlying task. And so I think you can get many layers deep in that kind of, you know, computer reasoning process. But it is always a process. It is never, like, an original thought or a -- you know, like, I don't think that a computer can fall in love or have its feelings hurt or come up with something truly novel, you know, as a -- in the sense that humans can.
Rick Howard: So a subset, then, of AI is machine learning, which in the security realm, it's been around since, I don't know, 2015. I mean, it's really good at very specific tasks. You want to take a crack at explaining what machine learning is?
Merritt Baer: Yeah. I mean, I don't have a definition memorized, but I consider machine learning to be sort of like the convergence of the fact that we have a ton of data now, and then we can reason upon it in a more holistic and more timely way.
Rick Howard: Right.
Merritt Baer: And so I really think that the rise of machine learning is due to a couple factors, including, you know, the growth in processing power and, you know, the strength of compute that has formed over the last few years, which, of course, has underpinnings in hardware and chips. Like, there's a lot of unsexy parts of this that have allowed for the parts that we see at the surface but also kind of the increasing reliance on sort of like data-driven answers to guide us in how we think about problem sets.
Rick Howard: Right.
Merritt Baer: You know. So I think --
Rick Howard: So it uses statistical models to make predictions about, you know, problems we're trying to solve. And, like I said, security vendors have been using that as an example very successfully with, you know, identifying malware. They can take a file that nobody has ever seen before and predict whether or not it's malware with like a 97% accuracy. So, like you said, machine learning has been around for a while; and it's very useful in very individual cases.
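To make that idea concrete, here is a minimal sketch of the kind of statistical classifier Rick is describing: a model trained on static file features that predicts whether a never-before-seen file is malicious. The feature names, the synthetic data, and the accuracy it prints are illustrative placeholders, not a real detection pipeline.

```python
# A minimal sketch of ML-based malware prediction: train a classifier on static
# file features and score files the model has never seen. The features and data
# below are hypothetical stand-ins for a real labeled corpus.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Rows are files; columns stand in for static features such as
# [file_size_kb, section_entropy, num_imports, num_suspicious_strings].
X = rng.random((5000, 4))
y = (X[:, 1] + 0.5 * X[:, 3] > 0.9).astype(int)  # synthetic "malicious" label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Score never-before-seen files by extracting the same features and predicting.
preds = clf.predict(X_test)
print(f"holdout accuracy: {accuracy_score(y_test, preds):.2%}")
```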
Merritt Baer: Yeah. Exactly. And I think there is -- you know, obviously there are use cases in the security world or doing security work. There's also use cases that are not security related but that need to be secured.
Rick Howard: Sure.
Merritt Baer: You know, like there's levels of relevance for a CISO who's considering how to kind of take these technologies and use them responsibly. As you mentioned, you know, in threat intelligence; in malware analysis; in, you know, areas like anti-DDoS that are inherently reliant on scaled processing, I think it's especially helpful and influential. But it's also something where I think most sophisticated shops are also really concerned about the possibility of, you know, the poisoning and drift that happens when folks do targeted attacks on their models but also just the fact that these models have a tendency to get it wrong sometimes.
Rick Howard: Yeah.
Merritt Baer: And the more we've built upon them, the harder it is to interrogate them.
Rick Howard: So that brings us to large language models, which kind of popped up in our imagination in 2022 when ChatGPT was released, okay, which we all looked at and said, Oh, my God. The world has changed. And, you know, these large language models are, you know, natural language processing, okay. It's an application of AI and machine learning to generate human-like text. And the leap that we found back then -- we were all gobsmacked at how wonderful it was, or how impactful it looked like it was going to be.
Merritt Baer: Yeah. You know, I resisted the urge to have ChatGPT write my essay for this piece. But, yeah. So, as with all forms of current quote, unquote, AI, ChatGPT has a lot of really good use cases; and then it also hits barriers, right? I was seeing this morning some folks were posting that they actually asked ChatGPT to unlock protected documents, and it did it. So I definitely think there are security relevant use cases. And, of course, like, in an enterprise you might have folks just like using some of these, you know, natural language models to generate their employee performance reviews or to, you know, produce finance reports. But we also know that they have some risks, right? Accuracy is one of them. We've seen some hallucinations. There was the case with the law firm where the LLM had made up cases, but also, you know, the ability to have escapes where folks might be able to interrogate your model about what you have inputted in terms of underlying data sources and also what questions you have asked it, which might reveal something proprietary about your business, for example. So, as we think through these, I think most CISOs sort of accept that their employees are using some forms of this but having guard rails around that use and being sort of conscious about what the acceptable uses are and where to put boundaries on that in terms of those considerations that I mentioned and others. I think, you know, folks need to be considering the kinds of, you know, natural language models that might be in your whatever, Microsoft Defender Copilot. It might be in your chatbot that you're releasing as a product when you are a rental car company, which can be hijacked, you know, and then have outcomes you don't like. So, on both the customer facing side and the internal side, I think these are considerations for CISOs.
Rick Howard: Well, and that's the point that's kind of popped up here. It's really easy to use a large language model to put a demo together to solve a particular problem. That's really impressive. But when you, you know, when you get under the covers, it's about an 80% solution. Like you said, it hallucinates or doesn't give you the right answer 20% of the time. And when you're looking for real-world applications of that stuff, for some of the things that CISOs have to be worried about, it needs to be 100% accurate, or you can't use it. Or at least you have to be aware that the answer you get might be wrong. So we're not quite there, right? Am I wrong about that?
Merritt Baer: I mean, I don't think that CISOs assume that every data source they have is 100% accurate --
Rick Howard: What? What?
Merritt Baer: -- except log data. You know, I think you're always -- as the CISO, you're doing risk calculations all the time. And, for some tasks, close enough is good enough. But I certainly think for, yeah, for those high fidelity, yes. I mean, as I mentioned, like, of course you're also going to need to be. And I think one of the skill sets that folks are developing is just, like, how to even ask the model for the thing you actually wanted, you know, like arranging your queries accurately. So, I mean, for example, with code generation, I think that folks are finding that LLMs are not necessarily even worth it because you have to validate so much of what it provides. However, it can be really helpful for reviewing, you know, something that you could validate then, you know, with a team of developers. So I think that, you know, as we navigate the kind of, like, tasks that we are willing to or that we think it's good enough at, you know, part of the consideration is that accuracy, and part of it is just like what is it suited to? I -- like I said, I actually also just don't think that ChatGPT is going to, you know, give you your company mission statement because it's going to sound hollow. You know. That's what -- that's what you need to do. Or, you know, there will always be inherently human tasks. But I think as we -- you know, and we're sort of at this inflection point where we had been giving computers more and more of the menial security tasks that we had 10 years ago, anyway, in things like automation. And, you know, like, I certainly don't see a quote, unquote AI taking over the workforce, for example, but -- because you will always need human decision-making. But if it can take over some of those tasks, I think that's a good thing. You know, with any of the things that it could be doing, we should let it.
Rick Howard: Well, we -- yeah. We've been hinting around about this idea for a little bit, so let me just be specific about it. What problems are we solving in cybersecurity with AI? What -- you know, because you talk to a bunch of InfoSec pros about it, and they view AI as a -- like it's a can of paint. We can just slap a coat on it, on everything, and our problems will be brighter and easier to solve, right? So there -- but there are specific use cases that people are considering, right? So.
Merritt Baer: I think the reason that folks are putting AI in the, you know, title of everything is just because venture capitalists love it. And it's, of course, like a shiny term right now. But I think that, you know, there's -- there are security tasks that are suited to being automated, whether you call it AI or not. So, for example, I spent five and a half years doing security at AWS. We -- everything is a trouble ticket. And I don't -- I don't know any shop that isn't captive to a trouble ticketing system. But, increasingly, those tickets get resolved by bots, right? And those are sensing when it has been remediated and sometimes enacting remediation themselves where it is sort of like the factors that are -- that need to be met are met. So, for example, Rick requests access to this document. He has this level of title and permission. And we had already said, you know, when Rick needs access to this, we will allow it for 11 hours and then close it again.
Rick Howard: So let me restate. Let me restate what you just said there because I'm trying to narrow it down. That is like using large language models to eliminate some of the toil in our environment. This is, you know, reducing the manual efforts to do those things like you're saying, entering tickets, filling out tickets, making decisions about tickets. Potentially, LLMs can help us with that. Is that what -- how Amazon was using it?
Merritt Baer: Yeah. I don't know that the bots actually made decisions unless you consider sort of enforcing a judgment to be making a decision, you know, like -- but, yeah. But they were helpful in automating the processes of doing, you know, like grant -- permission grants or other things that are kind of routine security tasks. And then also sensing when something, even if a human had to remediate it, you could sense when it's remediated and close the ticket, right? So there were ways in which -- and you can call this AI; you can call it automation. But I think, you know, in the security world, there's a number of use cases, as we mentioned. And we're not going to like list the entire universe of them today. Some are on that kind of front line of threat intel and the data itself and how to make those calls. But others are in just kind of the day to day of security shops and how to automate away some of the, you know, what had been manual tasks. And I think, as we get more -- you know, more sophisticated computing, you're going to have more of the -- that need for those kinds of automations. Like, we have to kind of pace ourselves with the tech.
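As a concrete illustration of the kind of rule-driven ticket automation Merritt describes, here is a minimal sketch: if a request meets criteria the team agreed on in advance, a bot grants time-boxed access and closes the ticket; otherwise it stays open for a human. Every name here (the Ticket class, grant_access, the title-to-resource mapping) is hypothetical and stands in for a real ticketing and IAM integration.

```python
# Sketch of auto-resolving access tickets against pre-agreed criteria.
from dataclasses import dataclass
from datetime import datetime, timedelta

GRANT_WINDOW = timedelta(hours=11)  # the pre-agreed time box from the example

@dataclass
class Ticket:
    requester: str
    requester_title: str
    resource: str
    status: str = "open"

# Which titles are pre-approved for which resources (hypothetical policy).
ALLOWED_TITLES_BY_RESOURCE = {
    "finance-report": {"analyst", "controller"},
    "design-doc": {"engineer", "architect"},
}

def grant_access(user: str, resource: str, expires_at: datetime) -> None:
    # Placeholder for the real permission-grant call in your IAM system.
    print(f"granted {user} access to {resource} until {expires_at:%Y-%m-%d %H:%M}")

def auto_resolve(ticket: Ticket) -> bool:
    """Return True if the ticket was auto-resolved, False if a human must review it."""
    allowed = ALLOWED_TITLES_BY_RESOURCE.get(ticket.resource, set())
    if ticket.requester_title in allowed:
        grant_access(ticket.requester, ticket.resource, datetime.now() + GRANT_WINDOW)
        ticket.status = "closed"
        return True
    return False

# Example: Rick's title matches the pre-approved policy, so the bot grants
# access for 11 hours and closes the ticket without waking up a human.
t = Ticket(requester="rick", requester_title="analyst", resource="finance-report")
print(auto_resolve(t), t.status)
```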
Rick Howard: One of the ones you mentioned in your draft paper that we abandoned, okay, was the ability to use an LLM to seek out and identify other LLMs running inside your network. As you know, as the years go by, people are going to try to do different projects for these things, and they're going to proliferate like shadow IT has done in the last 30 years, sort of a software bill of materials kind of a thing, keep track of all the LLMs that are running and maybe figure out if they're acceptable because of all the ethical issues or the sources of data they use and all that kind of thing. An LLM might be perfectly suited for that.
Merritt Baer: Yeah. Agreed. I think using quote, unquote AI to do AI discovery is actually very helpful. And, of course, as folks think about -- and by folks I mean security practitioners think about sort of like the universe of risk here, I think the proliferation of these models across your enterprise is going to be something to worry about because, even if it just means that, like, you know, you don't have to be very good as an insider threat to go figure out how to triangulate around to the company's IP, right, because you could ask a Copilot to go find it for you and find the access paths, you know. But, at the same time, you're going to have AI on the defense side limiting those permissions for you, hopefully, you know, like identifying -- so, for example, Reco, the company that I work at, one of the things that we do is identify, like, overpermissioned OneDrive or shared drive files.
Rick Howard: Yeah.
Merritt Baer: You would be surprised. Well, you probably wouldn't be surprised because you're you, but one might be surprised by the number of enterprises that have, you know, links open to the world that have sensitive data in them. And without sort of a machine learning driven approach, you would just be playing Whack-a-Mole.
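For illustration, here is a stripped-down sketch of the overpermissioning check described above: flag files that are shared with anyone who has the link and whose names suggest sensitive content. The file records and sensitivity markers are made up; a real tool would pull this metadata from the SaaS provider's admin APIs and use far richer classification than keyword matching.

```python
# Sketch of finding overpermissioned shared-drive files (hypothetical data shapes).
SENSITIVE_MARKERS = ("payroll", "ssn", "customer", "credential", "contract")

files = [
    {"name": "Q3 payroll.xlsx", "sharing": "anyone_with_link", "owner": "finance"},
    {"name": "team offsite photos", "sharing": "anyone_with_link", "owner": "hr"},
    {"name": "customer contracts.pdf", "sharing": "domain_only", "owner": "legal"},
]

def looks_sensitive(name: str) -> bool:
    lowered = name.lower()
    return any(marker in lowered for marker in SENSITIVE_MARKERS)

# Flag anything that is both open to the world and likely sensitive.
findings = [
    f for f in files
    if f["sharing"] == "anyone_with_link" and looks_sensitive(f["name"])
]

for f in findings:
    print(f"open to the world and likely sensitive: {f['name']} (owner: {f['owner']})")
```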
Rick Howard: One of the ideas you put forth in your draft paper that I hadn't considered is that many companies that are selling products might have a large language model product they give their customers. And the internal security of that model might fall to the CISO to make sure, you know, it follows all the rules that we're worried about. I hadn't considered that. Do you see that happening a lot right now, or do you anticipate that happening sometime in the future?
Merritt Baer: I do see it happening today. And, of course, like you know, from my vantage point right now, I am working for a company that does a security product. You know, Reco does SaaS security. But that also means that, as the CISO, I'm heavily involved in giving insights into the product itself, like how a CISO will use this and then basically understanding the problem space, which includes the fact that, yeah. There's a Copilot in so many applications now. Folks may not even know which applications they have in their environment in the first place, let alone which ones have Copilots on top. And as I mentioned, I mean, I think in a lot of ways that means that we democratized insecurity because, you know, five years ago, a bad actor had to, like, get in, lateral around looking for the kind of things that they cared about, whether they're looking to hold you hostage or to exfiltrate something that's valuable. You know, if you think about attacker motivations, and this could be an insider or an outside attacker. You know, they're looking for stuff that's either worth money or that they can hold you hostage for and, you know, extract a ransom from. And for all of those things, you used to be able to rely on some level of difficulty just gating folks out. And now it's just like, with a Copilot, you can just use natural language. Like, they don't even have to be a very good attacker to necessarily get in. You really have to -- you, as a CISO, really have to care about that. And so your product, your customer facing products might be a source of insecurity for your enterprise.
Rick Howard: One of the use cases that has been around since, I don't know, 2015 or so that I thought was really going to work very well and it really hasn't is to have one of these models sift through the volumes of network and endpoint telemetry looking for bad guys, right? And there are all kinds of security products that claim to do this, but what they give you is more -- they don't give you a definitive answer like, you know, hey. We found Wicked Spider inside your network. What they give you is, hey. We found something suspicious that you need to go, you know, investigate. And I've got to tell you, Merritt, I don't need more things to go investigate. I don't have time to investigate the things I already have. So I think that's been a big failure so far.
Merritt Baer: Yeah. I think that's fair. I think, you know, a lot of security is nuanced where it's, like, you know, behavioral. So -- and, you know, this is one of the things that we work on internally and that we externalize at Reco, which is, like, looking at not just should Rick have access to this but, like, was he accessing it in a pattern that we haven't seen before, looking at sensitive docs that he doesn't always need to look at from an IP that we don't recognize? You know, it's triangulating behaviors together in a kind of heuristics-based way. And I think AI certainly helps us to be able to sort of prioritize an alarm on those kinds of behaviors. But it's never going to make -- it might just be that Rick actually did need access for those. You know, it's just an unusual day in Rick's work life, and he did need access for that.
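Here is a toy version of that kind of heuristic triangulation: no single signal decides anything, but several unusual signals together raise an access event's priority for review. The weights, threshold, and field names are invented for illustration, not drawn from any real product.

```python
# Sketch of scoring an access event by combining behavioral signals.
from dataclasses import dataclass

@dataclass
class AccessEvent:
    user: str
    document_sensitivity: str      # "low" | "high"
    source_ip_known: bool          # have we seen this IP for this user before?
    matches_usual_pattern: bool    # does the time/volume look like this user's norm?

def anomaly_score(event: AccessEvent) -> float:
    score = 0.0
    if event.document_sensitivity == "high":
        score += 0.4
    if not event.source_ip_known:
        score += 0.3
    if not event.matches_usual_pattern:
        score += 0.3
    return score

REVIEW_THRESHOLD = 0.7  # below this, don't generate yet another alert

event = AccessEvent(
    user="rick",
    document_sensitivity="high",
    source_ip_known=False,
    matches_usual_pattern=False,
)
score = anomaly_score(event)
print(f"score={score:.1f}", "-> flag for review" if score >= REVIEW_THRESHOLD else "-> ignore")
# Even a flagged event may be benign -- maybe it's just an unusual day in Rick's work life.
```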
Rick Howard: I think that's generally a bad idea to get Rick access to anything because he will absolutely screw that up.
Merritt Baer: Or it could be that he's stoned, right, and [inaudible 00:27:46] something.
Rick Howard: That's true.
Merritt Baer: Or it could be that Rick is a bad actor, and he accidentally -- you know, there have been all these headlines recently about folks accidentally hiring North Korean hackers. So there's lots of explanations for behavior that looks questionable. And, to be honest, of course, really sophisticated bad behavior wouldn't show up in a lot of these alarms anyway, or at least it would be obfuscated well enough with what looks like ordinary behavior. So, you know, the challenge of security is that you're doing a lot of kind of calculations based on your risk profile, based on, you know, the data or sensitive information that's at stake and, like, your level of sensitivity to a security threat, because -- I don't think it's always a zero sum game. In fact, I think good security practices are -- look a lot like good architectural practices anyway. But I do think that you have to make some calls when you're the CISO, about that sort of tolerance for when you're going to shut something down while you investigate it versus letting things roll. You know, there's a lot of business judgment calls that go into security day to day.
Rick Howard: So before we wrap this up, Merritt, I was very pleased to read through your draft. And you had a little section about Ada Lovelace, one of my favorite computer science heroes. Many considered her to be the first programmer, okay, ever in existence, back before we even had computers. What was your thought there about Ada? Why did you bring her up?
Merritt Baer: Yeah. You know, so I think I had two motivations there. One was, as we trace back, you know, the kind of like rise of ML, I think in a lot of ways we can credit folks like Ada. And she was one of the first to basically see computers as something more than just like a giant abacus. You know, like, it's not a calculator. It can reason upon the kinds of data that it has, that it is something that's more creative or at least that programmers can make it do more notional tasks. And I think we see that as certainly computers are not just like giant storage devices, that they are doing reasoning and processing in the way that machines can. And I think that she was creative in her exploration of that. And, of course, I see it as not coincidental that she was the daughter of Lord Byron, who was a poet. You know that, like, she saw manifestations of the kind of art world in what folks are understanding to be computer science. And I see that too. Like, I think good security professionals are really creative people and that this is a creative field at its heart. And on that, like, you know, I also was introducing her so that folks who may not have sort of like fit the mold of the hacker in a hoodie understand that, like, this field was built by folks who don't look like traditional computer geeks. You know, that, like, everyone is more than welcome; and in fact, like, it was built by folks who don't fit. And so we are kind of like reconstructing this world all the time. And folks who are architects or artists or 1st grade teachers or stay-at-home parents should feel welcome.
Rick Howard: She became fascinated with Charles Babbage's Analytical Engine, this giant mechanical computer that was never built. But when she read the specifications of it, she had all these ideas about, you know, how to use it and how to make it better. And so her writings about the Analytical Engine were where she was the first one to come up with, like, a step-wise, step-by-step way to run a program through the Analytical Engine, right? So absolutely the first programmer. And she even speculated that you could use -- it could be more than just math. It could be symbols. So it's kind of a precursor to Alan Turing's Turing machine. So that gets us right back into AI right there, right? And so, like, yeah. Go ahead. No, you go for it.
Merritt Baer: It's pretty fascinating. I mean, higher order math is all letters anyway, right.
Rick Howard: Yeah.
Merritt Baer: So the fact that you have -- like the convergence of poetry and math is not so hard to -- you know what I mean?
Rick Howard: Yeah, yeah. Yeah. Absolutely right. I had students of mine who, you know, like, they were so brilliant. I could understand what they did after they did it, but I didn't have the artistry to do what they did. They were so interesting and brilliant. One last thing I'll say about Ada. The military sponsor -- the US military sponsored a programming language called Ada back in the day, back in the 1980s. It was built as a way to get programmers out of their own messes when they would write code in C and C++. It's still being used today if you want really safe programs that won't get themselves into trouble. But I think it's only military use at this point or industrial use of these things. But I actually taught it back in the day, so I just wanted to throw that out there, right, so
Merritt Baer: Interesting.
Rick Howard: Yeah.
Merritt Baer: Yeah.
Rick Howard: So here we are at the end, Merritt. Thanks for doing this for us. What's the takeaway here? You and I have been talking for about 30 minutes about AI. Is there one thing we should be telling CISOs about how to think about this?
Merritt Baer: Yeah. You know, I think that, as with most practical decisions, you're going to have some come up that you haven't expected. But having a sort of framework-based approach to what you're going to allow is the best sort of way forward. So having a set of criteria for what you're going to allow in your environment. So, one, you know, finding what you already have in your environment from an AI perspective. Two, having sort of like ongoing policies. And, three, enforcement of those, which is a mix of human and nonhuman enforcement. But having, you know, a way of deciding how you're going to approach AI security, that is -- it doesn't have to be perfect, but it has to be reasonable, defensible, adjustable, repeatable. And that will both give you a programmatic approach to how you're going to -- you know, what you're going to allow and how you're going to use AI. But it'll also protect you from the scrutiny of your regulators, give you something that you can externalize up to the board and other folks who are overseeing your operations. You know, I think that it probably does make sense, if you are not doing discovery today, to have a vendor that helps you do discovery. And then also to, if you're going to use AI in your environment, to have a vendor that does, you know, guard railing. Encrypt AI, for example, does this, where you can, for example, allow your employees to talk about finance but not to use ChatGPT to, you know, go off the rails and do creepy things. But, you know, like, taking advantage of its benefits but being conscious of its pitfalls as well.
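A bare-bones sketch of that discover-policy-enforce loop might look like the following. The inventory, the policy entries, and the check_use helper are all hypothetical; the point is simply that the criteria are written down, unreviewed tools are blocked by default, and the decision logic is something you can adjust, repeat, and show to a board or regulator.

```python
# Sketch of the "one: discover, two: policy, three: enforce" approach.
DISCOVERED_AI = [
    {"tool": "ChatGPT (browser)", "used_by": "marketing"},
    {"tool": "Copilot in office suite", "used_by": "everyone"},
    {"tool": "internal code-review LLM", "used_by": "engineering"},
]

# Ongoing policy: which tools are allowed, and with which classes of data.
POLICY = {
    "ChatGPT (browser)": {"allowed": True, "data_classes": {"public", "internal"}},
    "Copilot in office suite": {"allowed": True, "data_classes": {"public", "internal", "confidential"}},
    # Anything not listed here is treated as unreviewed and blocked by default.
}

def check_use(tool: str, data_class: str) -> str:
    rule = POLICY.get(tool)
    if rule is None:
        return "block: tool not yet reviewed"
    if not rule["allowed"]:
        return "block: tool explicitly disallowed"
    if data_class not in rule["data_classes"]:
        return f"block: {data_class} data not approved for {tool}"
    return "allow"

# Enforcement pass over the discovered inventory for a given data class.
for item in DISCOVERED_AI:
    print(item["tool"], "->", check_use(item["tool"], "confidential"))
```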
Rick Howard: Well, thank you, Merritt. We appreciate it. Good job. And we'll get you on the next one.
Merritt Baer: Well, thank you. Thanks for being understanding of my writer's block and hopping on with me.
Rick Howard: No problem. And that's a wrap. I want to thank my good friend, Merritt Baer, the CISO for Reco AI, for helping us try to untangle the issues associated with cybersecurity professionals making decisions around the technologies of AI, machine learning, and large language models. CSO Perspectives is brought to you by N2K CyberWire, where you can find us at thecyberwire.com. And don't forget to check out our book, Cybersecurity First Principles: A Reboot of Strategy and Tactics, which we published in 2023, where automation is a key first-principle strategy that runs all through that book, and AI and machine learning and large language models fall within that automation set. And, by the way, we'd love to know what you think of our show. Please share a rating and review in your podcast app. But, if that's too hard, you can fill out the survey in the show notes or send an email to CSOP@N2K.com. We're privileged that N2K CyberWire is part of the daily routine of the most influential leaders and operators in the public and private sector, from the Fortune 500 to many of the world's preeminent intelligence and law enforcement agencies. N2K makes it easy for companies to optimize their biggest investment, their people. We make you smarter about your teams while making your team smarter. Learn how at N2K.com. One last thing. Here at N2K we have a fantastic team of talented people doing insanely great things to make me and this show sound good. I think it's only appropriate that you know who they are.
Liz Stokes: I'm Liz Stokes. I'm N2K CyberWire's Associate Producer.
Tré Hester: I'm Tré Hester, Audio Editor and Sound Engineer.
Elliott Peltzman: I'm Elliott Peltzman, Executive Director of Sound and Vision.
Jennifer Eiben: I'm Jennifer Eiben, Executive Producer.
Brandon Karpf: I'm Brandon Karpf, Executive Editor.
Simone Petrella: I'm Simone Petrella, the President of N2K.
Peter Kilpe: I'm Peter Kilpe, the CEO and Publisher at N2K.
Rick Howard: And I'm Rick Howard. Thanks for your support, everybody. And thanks for listening.