If I only had a brain… Artificial Intelligence Gets Real at RSA 2017 - A CyberWire Special Edition.
Matt Wolff: [00:00:05] The human mind and AI work very differently in how they see things.
Lee Weiner: [00:00:08] A lot of times, people intertwine these words, and we believe they're actually quite different.
Shehzad Merchant: [00:00:14] It's a constant cycle, and the learnings from these feed on each other.
Lee Weiner: [00:00:18] We need to be careful of what we put into the AI bucket, what we don't put into the AI bucket, because it's easy to jump on a bandwagon.
Dave Bittner: [00:00:26] At the 2017 RSA Conference, artificial intelligence and machine learning were on just about everyone's list of hot topics. Countless companies are offering AI and ML solutions, with most of them claiming game-changer status. In this CyberWire Special Edition, we gather a group of experts to sort through the hype, try to agree on some definitions, demystify the technology and make the business case for artificial intelligence. Stay with us.
Dave Bittner: [00:00:58] Time to take a moment to thank our sponsor, Cylance. Are you looking for something beyond legacy security approaches? Of course you are. So you're probably interested in something that protects you at machine speed and that recognizes malware for what it is, no matter how the bad guys have tweaked the binaries or cloaked their malice in the appearance of innocence. Cylance knows malware by its DNA. Their solution scales easily, and it protects your network with minimal updates, less burden on your system resources and limited impact on your network and your users. Find out how Cylance is revolutionizing security with artificial intelligence and machine learning. It may be artificial intelligence, but it's real protection. Visit cylance.com to learn more about the next generation of anti-malware. Cylance - artificial intelligence, real threat prevention. And we thank Cylance for sponsoring our show.
Ravi Devireddy: [00:01:58] The way I define AI is - it's the science and engineering of making intelligent machines that can complement or offset the limitations of human operations in cyber today.
Dave Bittner: [00:02:14] That's Ravi Devireddy. He's chief technology officer for E8 Security. Full disclosure, E8 is a CyberWire sponsor. One of the challenges with an emerging, rapidly evolving technology like artificial intelligence is that not everyone agrees on how to define it. So we'll start there. Here's Ravi again.
Ravi Devireddy: [00:02:32] And there are several branches within AI, but predominantly, it's around machine learning-enabled AI where computers are taught to learn instead of being explicitly programmed to do something.
Shehzad Merchant: [00:02:47] Typically, I break these two into two different buckets.
Dave Bittner: [00:02:49] That's Shehzad Merchant. He's chief technology officer with Gigamon.
Shehzad Merchant: [00:02:53] The way I think about machine learning is that there is an element of what we do as defenders where we try to surface anomalies in your infrastructure, but anomalies are only relative to what is normal, right? So you have to build up context into what normal-like behavior looks like for your organization, and against that, you try to surface anomalies. I think that's the realm of machine learning. You learn what normal behavior is for your organization, and then you try to surface anomalies against that baseline. The next pillar, I think, is artificial intelligence, which is a little bit different, which is that once you have surfaced an anomaly, that anomaly has happened. It's occurred, right? It's already done. It's in the past. The question is, what's going to happen next? And you have to predict. You have to apply some cognition to be able to predict what the next stages in the cycle are. And I think that's the realm of artificial intelligence, right? It's where you apply your knowledge based off - I've seen this kind of behavior before, and based on that, these are the next steps in the attack cycle. That's where the AI pieces come in.
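To make Merchant's two buckets concrete, here is a minimal, hypothetical Python sketch - not any vendor's product - of a learned baseline surfacing an anomaly, with a toy lookup table standing in for the "what happens next" reasoning. The traffic numbers and the attack-stage mapping are invented for illustration.

```python
# Illustrative only: learn a per-host baseline and surface anomalies (the
# "machine learning" piece), then map a surfaced anomaly to the likely next
# stage of a known attack pattern (a toy stand-in for the "AI" piece).
from statistics import mean, stdev

# Hypothetical history: outbound bytes per hour for one host (the "normal").
history = [1200, 1350, 1100, 1280, 1400, 1250, 1320, 1190]

def is_anomalous(observation, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(observation - mu) > threshold * sigma

# Toy knowledge base: "I've seen this behavior before, here's what comes next."
next_stage = {
    "unusual_outbound_volume": "staging / data exfiltration",
    "new_admin_account": "lateral movement",
}

obs = 25000  # a sudden spike in outbound traffic
if is_anomalous(obs, history):
    print("anomaly surfaced:", obs, "bytes/hour")
    print("predicted next stage:", next_stage["unusual_outbound_volume"])
```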
Ravi Devireddy: [00:03:51] We often use AI too broadly.
Dave Bittner: [00:03:54] Ravi Devireddy from E8.
Ravi Devireddy: [00:03:56] When it comes to AI, we can think of it first as two distinct phases. There is, of course, narrow AI and general AI. Narrow AI is around focusing on specific applications such as, let's say, a self-driving car or cyber threat detection, image recognition, NLP - natural language processing - and so on. That's around one specific AI application where AI can learn and do things better than a human could do. And then there is a second phase of general AI, which is an AI system that has intelligent behavior as advanced as a human being. It can do a range of, like, cognitive tasks and perhaps even have, like, emotional intelligence. In today's state, narrow AI is where most of the work is happening. General AI isn't available. I think researchers predict that it could be another 20, 30 years before we see that kind of AI come out. So within that narrow AI, there are multiple methods and tools and techniques that are being applied. Machine learning is the predominant technique that is powering most of the AI work.
Lee Weiner: [00:05:07] To us, machine learning is - it's a series of kind of human-curated algorithms that are built to adjust as data changes over time...
Dave Bittner: [00:05:17] That's Lee Weiner. He's chief product officer at Rapid7.
Lee Weiner: [00:05:21] ...Whereas artificial intelligence really is about a series of machine learning algorithms that will be modified and tuned over time with no human interaction, which, in theory, should be able to achieve a much higher success rate than humans actually could. So whereas machine learning requires people, AI does not. And so at a high level, that's how we think about it. I use machine learning as the thing to compare it to, because a lot of times, people intertwine these words, and we believe they're actually quite different.
Dave Bittner: [00:05:57] Monzy Merza is head of security research at Splunk. I caught up with him on the RSA Conference show floor.
Monzy Merza: [00:06:04] For us, machine learning is really about regression or progression. It's about classification, and it's about clustering. And so that's kind of math mumbo-jumbo, but what that helps us do is instead of getting caught up in the hype, we can be very specific in solving a customer problem. In the cybersecurity space, then, that becomes relevant when we start looking at user behavior analysis, for example. You know, Monzy's a person who generally comes in to work between 8 o'clock and 5 o'clock. And - but then if, for some reason, I somehow come in some place at 2 o'clock in the morning and start logging into a system or going into a room with a badge swipe and I'm behaving very differently, we can very quickly identify that because we're applying machine learning not just to the population but to the individual themselves to classify and cluster that user's behavior.
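Here is an illustrative sketch of per-user behavioral baselining in the spirit of Merza's example - this is not Splunk code, and it uses a simple frequency baseline rather than a full classification or clustering model. The login-hour history is made up.

```python
# Illustrative only: learn which hours a given user normally logs in, then
# flag logins at hours that user has rarely or never used before. A real
# system would cluster far richer features than login hour alone.
from collections import Counter

# Hypothetical history of login hours (0-23) for the user "monzy".
login_hours = [8, 9, 8, 10, 9, 8, 17, 9, 8, 10, 9, 8, 16, 9]

def is_unusual_hour(hour, history, min_fraction=0.05):
    """Flag an hour the user has used in fewer than `min_fraction` of logins."""
    counts = Counter(history)
    return counts[hour] / len(history) < min_fraction

print(is_unusual_hour(9, login_hours))   # False: a normal working hour
print(is_unusual_hour(2, login_hours))   # True: a 2 a.m. login stands out
```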
Denis Kennelly: [00:06:49] There's a lot of different approaches in terms of cognition and machine learning, et cetera.
Dave Bittner: [00:06:55] That's Denis Kennelly, VP of Management and Technology for IBM Security. They aim to make a splash in cybersecurity artificial intelligence this year with their Watson cognitive engine.
Denis Kennelly: [00:07:07] It's not a search engine. It's not a pattern-matching engine. Cognitive systems, they learn at scale and, you know, reason and interact with humans actually. I mean, that's really what our goal here is because when it comes to security and understanding a cyber threat, it's a complex set of dependencies and patterns. I mean, the attacker wants to cover their tracks very quickly, and it's not always obvious, you know, what the attack pattern is because if there was a well-defined pattern, then you could use basic machine learning to identify that. But we're using cognition and the ability to actually look at a pattern and then reason and then infer from that reason another set of questions and another set of queries, and that is much more expensive than doing a straight pattern match.
Dave Bittner: [00:07:56] When Kennelly says expensive, he means in terms of computational power, not necessarily dollars and cents. I also spoke with Matt Wolff, chief data scientist at Cylance - also a CyberWire sponsor - on the RSA show floor.
Matt Wolff: [00:08:11] So if you look at the history of AI, there have already been several sort of peaks in AI followed by AI winters, right? So people have hyped up AI before only to be brought back down to the reality of what AI can actually deliver. Now, in the last few years, we've seen another resurgence in AI, and I think this one's very different. The underlying property of machine learning in particular - which is driving most AI today - is that at this point, we now have an immense amount of data across all of the devices and industries and people out there today. And we have an immense amount of CPU capacity to utilize that, to learn from it, to train machines, and that's driving a lot of the innovation in AI today. So some of the techniques we had 30 years ago are still relevant today, but the reason they weren't catching on in the past is because the data and the CPU weren't there to make these techniques effective.
Ravi Devireddy: [00:09:02] The third gen is what I call assisting the human operations.
Dave Bittner: [00:09:06] Ravi Devireddy from E8.
Ravi Devireddy: [00:09:08] And here, mostly what I've seen is around unsupervised learning, where we are training these AI systems to discover and learn a particular environment and use that learning to identify changes of activities, patterns of activities or anomalies, right? That's what we're seeing today. Deep learning is likely the next evolution of these unsupervised learning techniques. Today we are limited to the features that we feed into these ML engines, and deep learning and these new techniques can help us overcome that limitation - there's a loop of learning that will happen within the engine itself.
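An illustrative sketch of the unsupervised approach Devireddy describes - not E8's implementation: fit a model on feature vectors drawn from an environment's normal activity, then score new activity against it. The features here are hand-chosen and the data is synthetic, which is precisely the limitation he says deep learning aims to remove by learning representations from raw data.

```python
# Illustrative only: unsupervised anomaly detection over hand-picked features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per session: [logins/hour, MB transferred, distinct hosts]
normal_activity = rng.normal(loc=[5, 50, 3], scale=[1, 10, 1], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_activity)

new_sessions = np.array([
    [5, 55, 3],     # looks like the learned baseline
    [40, 900, 60],  # bursty, high-volume, fan-out behavior
])
print(model.predict(new_sessions))  # 1 = looks normal, -1 = flagged as anomalous
```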
Dave Bittner: [00:09:55] Deep learning is perhaps best known for being used with image recognition and natural language processing. Deep Instinct is one of a handful of companies using deep learning to tackle the challenges of cybersecurity. Guy Caspi is CEO of Deep Instinct. I caught up with him on the RSA show floor.
Guy Caspi: [00:10:13] Deep learning actually is a methodology which skips all the processes of manual feature extraction. Actually, if you know how to build it in the right way, you just take the pure data, like we're doing in computer vision, you pour it into the deep neural network, and it will get the result at the end of the process. Of course, this sounds like the Holy Grail and like a black box that - you know, you have the genie out of the bottle and it's solving everything. Still, the barrier to entry into this domain is huge because it's very complex, both math-wise - deep learning is not a single algorithm; it's a family of many tens of algorithms - and implementation-wise, because you need to implement this over GPUs, which is a very, very complex task by itself.
Shehzad Merchant: [00:11:04] And there's a continuum.
Dave Bittner: [00:11:05] Shehzad Merchant from Gigamon.
Shehzad Merchant: [00:11:07] Right. So you do the ML. You surface the anomalies. You feed it into the AI engine. You determine intent, and then you take some action. And then you come back to the machine learning piece, right? So these - it's a constant cycle, and the learnings from these feed on each other. People are talking about this deep learning because this whole security paradigm functions as a constant feedback loop, and that's where the deep learning comes in.
Guy Caspi: [00:11:28] It's not like machine learning, where you need to extract - I don't know - 2,000, 3,000 features, and this is what you have. The more data you have, the better the system gets, every single day. And this is the major advantage of deep learning over machine learning, talking cybersecurity, where we have 1.5 million new malware samples every day, most of which are mutations of previously mutated malware. When you have a methodology that skips feature extraction, you can provide a super-fast answer, and you can deal with this unknown malware the first time you see it. And this is the uniqueness of implementing deep learning in the field of cybersecurity.
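Here is a minimal, hypothetical sketch of the "raw data in, verdict out" idea Caspi describes - this is not Deep Instinct's architecture. A small convolutional network consumes the raw bytes of a file directly, with no hand-engineered features; during training, the layers learn their own representation.

```python
# Illustrative only: a tiny network that scores raw bytes, untrained here.
import torch
import torch.nn as nn

class TinyByteClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(256, 8)          # one vector per byte value
        self.conv = nn.Conv1d(8, 16, kernel_size=5)
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.head = nn.Linear(16, 1)               # malicious/benign score

    def forward(self, byte_seq):                   # byte_seq: (batch, length)
        x = self.embed(byte_seq).permute(0, 2, 1)  # -> (batch, 8, length)
        x = torch.relu(self.conv(x))
        x = self.pool(x).squeeze(-1)               # -> (batch, 16)
        return torch.sigmoid(self.head(x))

model = TinyByteClassifier()
fake_file = torch.randint(0, 256, (1, 4096))       # 4 KB of raw bytes
print(model(fake_file))                            # untrained score in (0, 1)
```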
Denis Kennelly: [00:12:12] A lot of research is in what we call structured data - you know, databases of well-defined objects like vulnerabilities, site domains, et cetera. And it's all well-correlated and stored in the database. Now, that constitutes roughly about 10% of the data to actually, you know, (unintelligible). The other 90% sits in, you know, these unstructured data sources - things like blogs, websites, Twitter feeds, et cetera. And to give you an example - and this thing is constantly evolving - there are about 60,000 new blogs being written every month about vulnerabilities and attack patterns, about 10,000 white papers every year. So when the security operator or the SOC operator is sitting in the SOC, you know, they have opportunities with structured information. We have, over the years, done a very good job of mapping that into systems in the SOC. But then the skill level really comes down to mining this unstructured information. And given the scale and the quantity and the pace of change, it is almost impossible for any human being to be able to research this and keep up and, at the end of the day, remember all this information that is coming at them every day of the week. And that is the problem that Watson is setting out to solve.
Denis Kennelly: [00:13:39] Roughly, what we estimate is that an individual working in a SOC - a level one SOC operator - can deal with approximately 20 major events per day. Some events are pretty benign, right? For example, somebody installs software on an endpoint, and it starts to communicate with a host that it hasn't communicated with before. That might be normal behavior. But in some cases, it is abnormal behavior because that software came from somewhere it shouldn't have come from, and it's communicating with some place it shouldn't be communicating with. And the SOC operator has to deal with that and look at that every day and make a decision. And if you think about it, at 20 of these per day, roughly, he - or she - has between 15 and 20 minutes to make that decision to escalate or actually say, this is benign. So where Watson enters is, it helps in that adjudication, helps in that triage process and really speeds up that triage process and those decisions.
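For reference, the rough arithmetic behind that 15-to-20-minutes figure, assuming a standard eight-hour shift (the shift length is our assumption, not Kennelly's):

```python
# Rough back-of-the-envelope calculation for analyst time per event.
shift_minutes = 8 * 60          # 480 minutes on shift (assumed shift length)
events_per_day = 20             # level-one analyst workload he cites
print(shift_minutes / events_per_day)   # 24 minutes per event, at best
# Subtract meetings, handoffs and breaks and you land near 15-20 minutes
# to decide whether each event is benign or needs escalation.
```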
Lee Weiner: [00:14:41] One of the key challenges organizations and security teams have today is, again, really understanding malicious activity.
Dave Bittner: [00:14:51] Lee Weiner from Rapid7.
Lee Weiner: [00:14:53] And I think both AI and machine learning - machine learning probably more in the short-term - can really help information security professionals identify malicious behavior. If you think about what we looked at before, three, four years ago, we looked for malicious software, right? But the reality is that attackers exhibit behaviors, and looking for malicious behavior is something that machine learning and AI, over time, can absolutely help with. It's hard to detect attackers' malicious behavior 'cause oftentimes, they masquerade as actual users and actual people. So I think there's a great example of a use case where automation and machine learning can really have a big impact. And I think it'll - you know, we'll continue to see that.
Shehzad Merchant: [00:15:39] I think we're at a time in history where the number of threats and the diversity of threats is only increasing. And the bad actors know this, right? And what they're doing is they're using diversion-based techniques - they're creating threats in one direction, and because we are so bogged down by manual processes, we get bogged down trying to identify that threat and figure out what's going on, while the real attack is happening somewhere else. And that's happening today. This is really happening today, right? And so, consequently, we have to be able to respond very quickly, and perhaps in an automated way, so that we don't get bogged down by these diversions, by the volume of threats and attacks that the bad actors are throwing at us. And so I do think that, as people deploy machine learning techniques and as the handover happens to the AI pieces, that has to become an automated process. And the less we get bogged down by human intervention, the better we will be able to scale and deal with these attacks.
Ravi Devireddy: [00:16:32] Making it even worse, the demand for security professionals is outstripping the supply, and that is where I think we should see a lot of new developments where AI will enable security operations. I call it AI-assisted security operations. So we should start to see that emerge in and around 2017.
Lee Weiner: [00:16:57] The workforce shortage, which is significant in IT security, is not going to be solved with trying to enable more people to be able to do the job. You know, the security technology industry needs to take a little bit of responsibility for this problem - right? - because security technology products and solutions are not simple and easy to use. Many of them are built and designed for very sophisticated security - or security professionals - right? - that are very well-educated in information security and different aspects of it to be able to manage their program and manage their environment.
Lee Weiner: [00:17:38] And those organizations, which, you know, I would call the resource-rich organizations that have a lot of budget and could hire a lot of very well-skilled professionals - you know, they can become system integrators, but that will not solve the skills gap, right? The skills gap means that we need to develop automated mainstream solutions that a less sophisticated security pro could use, or even maybe an IT person can use. And I absolutely agree that we need to do that. We need to have a much broader focus on usability, have a much broader focus on adoption of this technology versus kind of the promise of what it might deliver. And yeah, I mean, I think machine learning and AI will be key to solving that problem.
Rick Grinnell: [00:18:21] Now, AI isn't going to replace your CISO. You'll still need, you know, strong security leadership and people who can do those jobs and apply human intelligence to the problem. But I think to shore up the gap, you know, in the workforce, we will need software that is AI-based to help.
Dave Bittner: [00:18:38] That's Rick Grinnell from Glasswing Ventures. They're a venture capital firm with a focus on investing in companies that are innovating in artificial intelligence. We'll hear more from him in a moment, but first, some more from Ravi Devireddy from E8 on the notion of teaming AI and humans together.
Ravi Devireddy: [00:18:55] It's not going to be a replacement for security operations. Where I think, truly, the AI will shine and deliver the promise is when it's human-assisted AI. One of the things that AI cannot do is assess a situation and decide an action based on a specific mission or environment. So that is still a handicap for AI systems. But combining human knowledge of that particular environment with AI-enabled intelligence is where AI will offset the limitations of human operators and vice versa. This is where I think we will see the best outcomes in managing security. Human operators can assist AI by reinforcing the learning, providing feedback to the AI models. And over time, we should expect the system to adapt its analysis based on these human inputs, creating what we call the learning loop between the AI and the human analysts. And that's an important point for customers or companies that are either building AI or adopting AI systems - to know that human beings are essential in maturing this and assisting AI as well.
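A hypothetical sketch of the human-in-the-loop "learning loop" Devireddy describes - not E8's system: the model scores events, an analyst confirms or rejects each alert, and those verdicts are fed back as labels so the model adapts incrementally to that environment. The features and labels are invented for the example.

```python
# Illustrative only: incremental learning driven by analyst feedback.
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)

# Initial fit on a small seed of labeled events (made-up features).
seed_X = [[5, 0], [6, 1], [30, 12], [25, 9]]
seed_y = [0, 0, 1, 1]                        # 0 = benign, 1 = malicious
model.partial_fit(seed_X, seed_y, classes=[0, 1])

# The loop: model flags an event, the analyst adjudicates, feedback flows back.
event = [[22, 7]]
print("model verdict:", model.predict(event)[0])
analyst_label = 0                            # analyst says this one was benign
model.partial_fit(event, [analyst_label])    # the model adapts to the feedback
```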
Matt Wolff: [00:20:16] The human mind and AI work very differently in how they see things.
Dave Bittner: [00:20:20] That's Matt Wolff from Cylance.
Matt Wolff: [00:20:22] And so the combination of both is - it's going to be quite a powerful solution for a while. And there's a lot of research going on into the best way to kind of intertwine those two.
Lee Weiner: [00:20:31] If you take a traditional IDS or even a traditional SIEM that has a bunch of rules - typically based off of signatures of some sort, whether it's IP addresses or maybe hashes, whatever it might be - you know, it's pretty simple to create that rule. Now, that rule will likely be extremely noisy and very challenging for someone to investigate an alert off of, because it's difficult to build a rule that is very specific and low-noise. Now, if you look at machine learning, though - right? - with a person assisting that machine learning, they can look at trends over time. They can look at behaviors. They can look at a much broader set of data to create an algorithm that - you know, while it's not easy to do that, the effectiveness of that machine learning algorithm versus that simple rule is going to be extremely high.
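An illustrative contrast between the two approaches Weiner describes - not Rapid7 code: a signature-style rule that matches known-bad indicators, versus a classifier trained on behavioral features observed over time. The indicator values, features and labels here are made up for the example.

```python
# Illustrative only: static signature rule vs. a behavior-trained classifier.
from sklearn.ensemble import RandomForestClassifier

# 1) The "simple rule": exact match against known-bad indicators.
known_bad_ips = {"203.0.113.7", "198.51.100.23"}
def rule_alert(src_ip):
    return src_ip in known_bad_ips

# 2) The ML alternative: learn from labeled behavior, not static indicators.
#    Features per session: [logins/hour, failed logins, distinct hosts touched]
X = [[5, 0, 3], [6, 1, 2], [4, 0, 4],      # benign sessions
     [30, 12, 45], [25, 9, 60]]            # attacker-like sessions
y = [0, 0, 0, 1, 1]

clf = RandomForestClassifier(random_state=0).fit(X, y)

print(rule_alert("203.0.113.99"))               # False: an unknown IP slips past
print(clf.predict_proba([[28, 10, 50]])[0][1])  # high score for risky behavior
```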
Monzy Merza: [00:21:27] The machine allows us to be - maybe this is too crazy - to be more human...
Dave Bittner: [00:21:32] Monzy Merza from Splunk.
Monzy Merza: [00:21:34] ...Because we can spend the time doing things that we do as human beings, rather than worrying about some of the things that are deterministic, where we can use assistive things - something as simple as a lever, all the way to something very sophisticated, like understanding the result of a medical therapy, for example, and applying that in a certain fashion because we can add more context. And I think human beings have that power - we have context, environmental context, experiential context - that it takes time for machines to learn. Maybe someday they'll get there. But in the meantime, I think that augmentation is going to be very essential to our success.
Dave Bittner: [00:22:12] So far, we've covered the technology. But what about the business case for AI? As we said at the top of the show, artificial intelligence is hot right now. And that means there are a lot of startups and a lot of investors chasing AI-based products and solutions. Let's get back to Rick Grinnell, our venture capital investor from Glasswing Ventures.
Rick Grinnell: [00:22:32] In, I would say, 99.9% of all cases, you are looking at a minimum for a product company. And typically, you're looking for - as they would say in the business - a company, not a product - you know, that you should be investing in companies, not products; products, not features. But, you know, at an early stage, most of the things that we're looking at are, you know, early products. And the company isn't quite there yet. You are missing particular talents. Typically, early-stage companies don't have the VP of marketing that they might need down the road. But anyhow, you can help build that talent around the core technical team that typically starts the company. More often than not, you are looking, as an investor, for a product company that addresses at least a particular business problem and can do it independently. So those are the companies that are easiest to scale - where you are in charge of your own destiny, you're not reliant on a partner company to supply some part of the solution, and you are an easy sale to explain to a customer, as opposed to selling a tool kit of technology.
Rick Grinnell: [00:23:43] And especially now that you're talking about AI and machine learning, I think that's a very difficult sale - to sell, quote, unquote, "technology" to, whether it's a chief security officer or chief information security officer or a VP of marketing - you know, how do you prove that your AI is better than someone else's AI? I think you really need to show that your application solves a business problem more cost-effectively, more simply than other competing solutions. And I think the reason that that is accomplished, or the value behind these better applications, is better AI as a significant piece. Obviously, UI technology and whatnot is not driven by AI. There's ease of use and scalability that is beyond just the math, which is also important to the success. But I think as you think about what will be different - 'cause there's a lot of good applications out there - what will differentiate is having the better mathematics under the hood.
Monzy Merza: [00:24:39] We can get in a language game about, this is AI, or that's AI or - one way or the other. And it's interesting. It's - you know, we can beat each other - you know, beat our chests and say, my algorithm is bigger than your algorithm. At the end of the day, it's about customer value. It's not about AI or machine learning or the term. It's about, what's the value that that's bringing? And if my customer says that this particular set of capability helps them solve the problem that they're interested in solving, that's what we focus on. Our big focus at Splunk is to say, how do we focus on the human - whether that human is an analyst or whether that human is somebody in the C-suite or a board member - say, we want to take this machine information, contextualize it and provide risk-based analytics to go along with it, such that they can make a good decision. Now, if that risk-based analytics or that contextualization requires some sort of machine learning, that's the right answer. If it requires some basic statistical aggregation or just a count or a sum of something, that's the right answer.
Rick Grinnell: [00:25:37] For us, viable investments are, first and foremost, centered around viable machine learning and applied artificial intelligence technology. Step two is they have to be encapsulated in such a way that the technology addresses a particular business problem, particularly for the enterprise. It could be next-generation security technology for the endpoint or the middle point, or things that would be next-generation SaaS applications - next-generation marketing, sales or HR applications. You could look at the marketing and advertising space, how to more effectively mine customer data, click-stream data and the like - again, these narrow use cases that, over time, may get broader. But I think it's easier to first go after a specific problem and do that well before you get too broad.
Rick Grinnell: [00:26:31] So I'm not, you know, as an investor, looking at things that are what I call research areas in AI. These are not 20-year projects. They're not 10-year projects. And we're not trying to focus on, you know, the competitor to Watson or Google DeepMind or things that would be, you know, human brain replacements. We're really looking at applied AI and machine learning that you can develop and get to market in, you know, a two-to-three-year time frame that fits within a venture cycle. You know, typically funds are 10 years in duration. You can, you know, oftentimes get an extension. But think of things that can be invested in and mature within a 10-year cycle. So that's typically not in the realm of, you know, more of the science fiction aspects of artificial intelligence.
Dave Bittner: [00:27:14] As AI gets to be more and more a crowded marketplace, the vetting process becomes crucial. Here's Matt Wolff from Cylance.
Matt Wolff: [00:27:22] Just 'cause it's AI doesn't mean it's very good or well-defined or well-designed. So if you're looking at technology in this space, you should consider really diving in and seeing what they're actually doing, how they're training their systems, what the people kind of will experience in the space - and are they effective at what they're doing? So for people looking at these technologies, certainly do your due diligence. A lot of companies now, in all industries, are touting the power of machine learning and AI. And so just make sure that they're actually correct in what they're doing.
Lee Weiner: [00:27:51] You know, not all AI is invented equal. We've certainly seen folks that are trying to pass off what I would call statistics or stochastic-processes-related technologies that I would've learned in undergrad, you know, over 25 years ago as AI. And that's not AI. You know, we need to be careful of what we put into the AI bucket, what we don't put into the AI bucket 'cause it's easy to jump on a bandwagon. And you can think of - over the last 10 years, there have been various bandwagons that we've all ridden on, thinking that that was the path to the next wave of success or something that differentiated each and every one of us in the startup world or in the venture world. And then, you know, it got over-hyped, overheated. And then people got cynical. So I'd like to kind of head off that cynicism that we might all see in a couple of years now by focusing on, you know, what really is AI and machine-learning-based and what isn't.
Shehzad Merchant: [00:28:48] It's also a little bit disturbing because people are mixing AI, ML and all of these pieces together. And it's becoming hard to discern where the right solutions fit in.
Dave Bittner: [00:28:57] That's Shehzad Merchant from Gigamon.
Shehzad Merchant: [00:28:59] For the industry, over time, that poses a challenge, because people will not quite know where to position the right solutions. So what's missing in the industry is a model that says, this is where ML fits in. This is where AI fits in. This is where security orchestration and workflow fits in. And this is how the whole piece - the whole solution - looks together. Right? And until we can articulate that very simply, it's going to be very difficult for people to discern where all of these different products and solutions fit.
Lee Weiner: [00:29:22] There are great companies out there. There are great opportunities out there to build interesting new companies. And I just think as long as we don't get caught up in all the hype, we'll all be OK.
Monzy Merza: [00:29:34] The other thing I'd like to add is for people who are wanting to get into this space - in terms of wanting to understand it, wanting to apply it, wanting to learn about machine learning in general - there's been a lot of work in the open source space to provide tools for people to kind of be able to do these things themselves as well. So if you're interested in learning about this, there are a lot of great courses online for people to get started. And you don't need that deep a mathematical background to kind of at least get your feet wet. It certainly is the case that, once you get started, a whole new world opens up that you can dive into for 30 years and still never understand everything. But it's gotten much easier to at least get your feet wet in this space. So for people who want to learn about this, don't be intimidated by it - there are easy ways to kind of get started building your own systems to kind of see what these things actually do.
Ravi Devireddy: [00:30:17] The current stage of the AI-enabled SOC, as we see it evolving, is that it's not going to be a replacement for security operations. Where I think, truly, the AI will shine and deliver the promise is when it's human-assisted AI. One of the things that AI cannot do is assess a situation and decide an action based on a specific mission or environment. So that is still a handicap for AI systems. But combining human knowledge of that particular environment with AI-enabled intelligence is where AI will offset the limitations of human operators and vice versa. This is where I think we will see the best outcomes in managing security. Human operators can assist AI by reinforcing the learning, providing feedback to the AI models. And over time, we should expect the system to adapt its analysis based on these human inputs, creating what we call the learning loop between AI and the human analysts. And that's an important point for customers or companies that are either building AI or adopting AI systems - to know that human beings are essential in maturing this and assisting AI as well.
Dave Bittner: [00:31:44] And that's our CyberWire Special Edition on artificial intelligence from the RSA Conference. Our thanks to all of our experts for taking the time to speak with us, to our sponsor Cylance for making this Special Edition possible, and to all of you for listening.
Dave Bittner: [00:31:58] To learn more about the CyberWire and to subscribe to our podcast and daily news brief, visit thecyberwire.com. The CyberWire Podcast is produced by Pratt Street Media. Our editor is John Petrik. Our social media editor is Jennifer Eiben, technical editor is Chris Russell, executive editor is Peter Kilpe. And I'm Dave Bittner. Thanks for listening.