Caveat 4.6.23
Ep 167 | 4.6.23

Insurable cyberattacks?

Transcript

Lee Rossey: Attackers are going after the data. The data is worth money. So to that degree, there's a large extent of what can insurance do to be able to help offset some of the damage and the costs.

Dave Bittner: Hello, everyone, and welcome to Caveat, the CyberWire's privacy, surveillance, law and policy podcast. I'm Dave Bittner, and joining me is my co-host Ben Yelin from the University of Maryland Center for Health and Homeland Security. Hello, Ben.

Ben Yelin: Hello, Dave.

Dave Bittner: Today Ben discusses the RESTRICT Act, a bill making its way through Congress. I've got the story of folks calling for a pause in generative AI experimentation, and later in the show, my conversation with Lee Rossey, CTO of SimSpace, to discuss cyberattacks and whether or not they're still insurable. While this show covers legal topics and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. All right, Ben, we've got some good stories to cover this week. Why don't you start things off for us here?

Ben Yelin: So I would like to talk about this bill making its way through Congress, the so-called "RESTRICT Act," and you know if it's Congress, that means we've got a great acronym and this is no exception. The Restricting the Emergence of Security Threats that Risk Information and Communications Technology Act.

Dave Bittner: Wow.

Ben Yelin: I'm going to give it a B-plus.

Dave Bittner: Okay, those interns are hard at work coming up with these things, right?

Ben Yelin: Yeah, I've definitely seen better, but I've definitely seen worse.

Dave Bittner: Okay.

Ben Yelin: This bill was proposed at the beginning of March by Senator Mark Warner, a Democrat from Virginia, and Senator John Thune, a Republican from South Dakota, and I think in the public's mind, this was a bill to institute a ban on TikTok.

Dave Bittner: Right.

Ben Yelin: That's what most people associate it with, and it kind of came as -- the introduction of this bill came at the -- around the same time that the TikTok CEO testified in front of Congress and did a pretty terrible job. So I think that kind of lumped together in people's mind that TikTok was in trouble.

Dave Bittner: To be fair, Congress didn't exactly put themselves in the best light in that either, right?

Ben Yelin: No, nobody looked good, except us commentators. It gave us a lot to talk about.

Dave Bittner: Right.

Ben Yelin: But for those who were present at the hearing, it certainly did not go well for either TikTok's CEO or any members of Congress.

Dave Bittner: Right.

Ben Yelin: So this RESTRICT Act does more than simply ban TikTok. In fact, it doesn't explicitly ban TikTok at all. The word "TikTok" or its parent company ByteDance or even social media itself, those are not mentioned in the legislation. Instead, the bill gives the power to the Secretary of Commerce to, quote, review and prohibit certain transactions between persons in the United States and foreign adversaries regarding information and communications technology. So the way the bill works is that the Secretary of Commerce would have authorization to identify, deter, disrupt, prevent, prohibit, investigate, or otherwise mitigate, including by negotiating, entering into or imposing -- I'm out of breath -- or enforcing any mitigation measures to address any risks arising from any covered transaction by any person or with respect to any property subject to the jurisdiction of the United States. If this threat comes from either one of the identified foreign adversaries or a -- if the Secretary, through a special committee, identifies a new foreign adversary, then they could take similar action. So the purpose of this bill is to empower the Secretary to restrict applications like TikTok that are allegedly controlled by foreign powers. So ByteDance is controlled by the Chinese government. This is the nature of Congress' concern, and I think that's the impetus behind this piece of legislation.

Dave Bittner: Yeah.

Ben Yelin: When this was first introduced, I think there was pretty bipartisan support for the idea of banning TikTok, even though to many of us who are not involved in politics, it seems like a pretty radical thing to do, especially with 200-some-odd million users, but both Democrats and Republicans in Congress think that there's a major national security threat that our -- one of our most popular applications is controlled by the Chinese Communist Party, in so many words, at least.

Dave Bittner: Yeah.

Ben Yelin: But there's been kind of a pushback, and the article I'm using for this segment comes from reason.com, which is a libertarian blog website, and they wrote on potential unintended consequences of the RESTRICT Act. For one, it would impose civil and/or criminal penalties on users who try to evade the bans on these applications by using something like a VPN to log on to TikTok or any other application that was banned under this statute, and this is really one of the purposes of the legislation: if we're going to actually ban something, we want to make it punishable, or at least impose some type of civil or criminal penalty on individuals who try to evade that ban using a VPN. But that certainly would be a major inhibition on people's privacy. Privacy professionals, and those who have taken an interest in digital privacy, believe strongly in the power of VPNs, of concealing one's identity online. It's important for research purposes. It's important to maintain digital privacy. So even the threat of criminalizing the use of VPNs is something that certainly sticks out to those of us who are concerned about this issue. The civil penalties we're talking about here can be rather hefty, up to $250,000. The criminal penalties would be up to 20 years in prison.

Dave Bittner: That's more than murder.

Ben Yelin: It is more than certainly something like second degree murder or voluntary manslaughter.

Dave Bittner: Right, right.

Ben Yelin: I mean, it's kind of somewhat crazy.

Dave Bittner: Okay.

Ben Yelin: Now, in order to actually be punished for that, you would have to be engaged in sabotage or subversion of communications technology. That would be really hard to prove, but if there were a Justice Department that wanted to set an example, and we've seen stuff like this happen in the past, they could really throw the book at somebody for simply logging on to one of these banned applications using a VPN.

Dave Bittner: Yeah.

Ben Yelin: Then the other threat is that, you know, this is supposed to be targeted to international actors, so applications or companies that are controlled by our foreign adversaries, but it could also be employed against new adversaries that the government itself identifies. So we're giving the power to the Secretary of Commerce and a special government board to determine who those foreign adversaries are, and there are no clear criteria establishing what counts as a foreign adversary for the purposes of this bill. So there's concern, and this is expressed in this Reason article, that the bill could be used to block or disrupt something like cryptocurrency transactions or Americans' access to open source tools or protocols, and that's something that would fully be within the purview of the executive branch once it's granted authority under this bill, and that would go far beyond banning one problematic application. So, you know, if the Director of National Intelligence and the Secretary of Commerce decided they wanted to punish even a Western democracy just because they didn't think they were cracking down on cryptocurrency, even if it was the European Union, at least theoretically, that's something that could be done as part of this legislation. So we're leaving a lot of discretion in the hands of the federal government and the Secretary of Commerce, I think more so than people are comfortable with, and that's something that can be ripe for abuse under the right circumstances. You could certainly think of an example where this power could be used to crush political dissent in one way or another. You could have a flimsy justification that it's connected to a foreign adversary, and it could be used against domestic applications. You know, one example that I think might set off alarm bells, certainly in a place like Fox News, and it has set off alarm bells at a place like Fox News, is what if the Biden administration decided that Truth Social was closely connected with the Russian government and that it should be banned in the United States because, in their view, Donald Trump, the CEO or whatever he is of Truth Social, is beholden to the Russian government. That would be a major suppression of a pretty commonly used social network and --

Dave Bittner: Right.

Ben Yelin: It would be silencing people based on their political views. So it's certainly a threat, and I think it's going to be problematic for the prospects of this piece of legislation.

Dave Bittner: What about just the basic First Amendment issues here that the government is limiting how you can communicate? I've seen people raise that argument. Is that -- does that hold water in your estimation?

Ben Yelin: Yeah, we take the First Amendment very seriously in this country. The language of the First Amendment says Congress shall make no law abridging freedom of speech. In terms of whether this would pass legal muster under the First Amendment, it's uncertain because we've never had a case like this. We've never had an example of the government shutting down such a popular global social network, where the ban has such a major inhibition on people's free speech rights. The government has been granted similar authority in the past to take action against companies through the FTC, the FCC, that have a nexus with international powers, but never on this scale. So I don't know whether courts would see this as a major inhibition on the First Amendment. My thinking is that it is, because TikTok has become such a public square, and an outright ban from the government on this very popular form of communication would seem to me to be a major inhibition on speech. Now, courts might see this as a content-neutral prohibition. They're not just prohibiting certain speech on TikTok. They'd be prohibiting all speech on TikTok, and there are many other avenues. I mean, in the courts' mind, they could just say, "Well, why don't you use an application that's similar to TikTok but that's not controlled by the Chinese government?" But that itself creates a slippery slope, and the reality is that people don't use different applications to the same scale they use TikTok, so there would at least be a transition period where people's free speech could be suppressed. So I think that's certainly a First Amendment concern and something that I think has been under-emphasized in the contemplation of this legislation.

Dave Bittner: The other counterargument that I've seen, and there was a really good editorial in the New York Times this week, and forgive me, I can't remember who wrote it, but the case they were making was that we're really going after the wrong thing here, that what we need is some sort of federal privacy legislation because if you ban TikTok and the Chinese government wants to know where you're going, who you're talking to, or all those other things we've talked about here so often, they can just go buy it on the open market. Like, it's -- banning TikTok doesn't stop -- it may make it less convenient for the Chinese government, if this is what they're after, but by no means does it stop them from getting what they want.

Ben Yelin: No, it's like trying to, you know, stop somebody who's hemorrhaging blood by just putting a tiny little Band-Aid on their finger. It's addressing -- it's sort of the lowest hanging fruit that Congress could address because, in their view, it's egregious that they have such a close nexus to the Chinese government, but really this is an issue, as we've said probably in 20 prior episodes, of just not having a federal data privacy law that would cover not just TikTok but all applications that are playing fast and loose with our personal data. It would be better to have a comprehensive law that covers all threats, both foreign and domestic, to the integrity of our data, that would give users rights over the data they provide to these companies, that would put restrictions on sales to third parties, but that's not what's happening here. I think this is just an easier target for Congress to go after. They're taking advantage of it without doing the hard work of coming to some sort of compromise agreement that's eluded them over the past several years for comprehensive federal data privacy legislation. So, you know, I almost think that this is a cop-out, just kind of taking the easy way out and avoiding the hard work that they really tried to do in the last session of Congress but couldn't get across the finish line. So I certainly am sympathetic to that perspective, even though you can recognize that TikTok does bring with it unprecedented levels of risk considering how many users it has and how closely it is monitored by the Chinese government. But, again, these are all problems that would be better addressed with a comprehensive federal data privacy law and not something like the RESTRICT Act, which is trusting the federal government and our agencies to police applications that it finds objectionable or that it finds are controlled by foreign adversaries.

Dave Bittner: What do you suppose the odds are of this making it through? How are things looking right now?

Ben Yelin: So I would have said several weeks ago that we were looking at like a 60%, 70% chance of this passing. They were talking about, in the Senate Commerce Committee, going through markup. Given that there has been this backlash, particularly a backlash on the political right, I'm at more like 30% now, thinking that, you know, when there is a monkey wrench thrown into legislation in the form of organized opposition, Congress is really good at not doing things. So inertia --

Dave Bittner: Their favorite thing to do is nothing.

Ben Yelin: Right. Inertia kicks in. You know, they asked the Chairwoman of the Senate Commerce Committee, Maria Cantwell of Washington, if she was going to proceed with a markup on this bill, and she was pretty lukewarm on it. She was like, "Oh, you know, I'm still considering it. We're going to read through it. Let's take a step back and try and get this right." We don't know if the Republican leadership in the House of Representatives is going to be on board. Certainly there were indications when the bill first came out that they were supportive of it, but, you know, whether that's going to be reconsidered since Jesse Watters on Fox News, one of their highest rated primetime shows, did a full segment on how dangerous the RESTRICT Act was, you know, I wonder if that changes the calculus there.

Dave Bittner: Right.

Ben Yelin: There's this funny moment on Jesse Watters' show where he was interviewing Lindsey Graham, Senator Lindsey Graham, Republican of South Carolina, and said, basically, "Here are all the terrible things the RESTRICT Act does. Why do you support this?" And Lindsey Graham was like, "Do I support this?" And --

Dave Bittner: Really?

Ben Yelin: Jesse Watters was like, "Well, your name's on it. You're a co-sponsor." And Lindsey Graham was like, "Oh, I'm going to have to look into that for you."

Dave Bittner: Oops.

Ben Yelin: Just one of those hilarious "members of Congress are controlled by their staff" things.

Dave Bittner: Right.

Ben Yelin: Apparently, he is still a co-sponsor.

Dave Bittner: Okay.

Ben Yelin: But I think the indication there is Congress has a lot of issues they need to work through, and now that there is organized opposition, I think something like this faces a much tougher row than I would have seen a couple of weeks ago.

Dave Bittner: Yeah, so frustrating. So frustrating.

Ben Yelin: I know, I know.

Dave Bittner: All right. Well, we will have a link to that story in the show notes, of course. All right, so my story this week comes from the folks over at SC Media. This is an article by Sebastien Goutal and it's titled "Why We Must Hit Pause on Generative AI Experiments." This notion has been making the rounds. So we saw a letter from, I want to say, you know, a bunch of well-known tech leaders. I want to say like Steve Wozniak was on the list of folks who --

Ben Yelin: Didn't Elon Musk sign on to it?

Dave Bittner: Could have been.

Ben Yelin: Yeah.

Dave Bittner: Could have been, yeah. So, you know, maybe not fair to say the usual suspects in this case, but folks who are of note, who have some experience and noteworthiness when it comes to tech things, have gotten on board with this idea of putting a pause on things. In this case, they're talking about a non-profit called "The Future of Life Institute." They published an open letter calling for a six-month pause to study the effects of generative AI and how we can innovate more responsibly. The idea here is that things like ChatGPT, things like DALL-E, which is an image generator, these have kind of been released on to the public, and the public, of course, has been captivated by them, I know I have, and we're playing with them and we're finding all of these things that these things can do. GPT-4, for example, you know, I think it scored in something like the 90th percentile on the bar exam or something like that. I mean --

Ben Yelin: Beat me probably, yeah. I'll never know my actual score, but certainly, I would guess ChatGPT 4.0 exceeded my capabilities.

Dave Bittner: Right. So the notion here is to stop development on these while we figure out how we want them to fit into our lives and do so in a responsible manner. I guess there's a part of me that understands the impulse here, but I'm really having a hard time seeing how this could possibly happen on a practical level given our global marketplace. If we hit pause here in the United States, China's not going to pause, you know, Russia is not going to pause. Who else is not going to pause? What do you make of this, Ben? Do you agree with me that the impulse is coming from the right place but might be hard to implement?

Ben Yelin: Yes, I do. You know, I think we have yet to under -- fully understand the consequences of generative AI. It is so new. I mean, they talk about in this article how it's affecting professions ranging from art, teaching, journalism, the legal profession, real estate, software development, and whether this has the potential to actually displace workers or create additional negative societal impacts beyond just losing jobs, things like copyright violations, appropriating people's creative work through generative AI. These are things that we haven't thought of because we've had this sort of dynamic process of developing the technology and, I mean, ChatGPT admitted that this was basically beta testing when they first put it out. People have loved it and the software over the last several months has continued to develop. From a practical perspective, you're exactly right. I mean, we can't just press the pause button. There is no, like, global order where there's some governing body that demands that all innovators pause for six months and show up at a couple of meetings in Davos to discuss the ethics of generative AI. That body doesn't exist. It's a collective action problem. If, let's say, ChatGPT decided that it wanted to, in the parlance of our former president, take a pause to figure out what the blank is going on out there, they would lose their position in the competitive marketplace. Maybe Microsoft, through Bing and its generative AI, would decide not to take the pause and they would advance leaps and bounds, or as you said, foreign countries, certainly, particularly ones that we're not friendly with, would never comply with a self-imposed six-week -- six-month moratorium while we figure out some of the legal, ethical, and policy issues around AI. I mean, there is an understandable level of concern and panic, but I don't think that's a particularly realistic nor achievable goal in trying to solve some of these problems.

Dave Bittner: How do you suppose we could go forward, then, I mean, understanding that this is a problem, or certainly a potential problem? You know, I don't think it's overstating it that -- to say that it's possible that this is an inflection point, right, just for humanity.

Ben Yelin: It is.

Dave Bittner: I know that sounds like a breathless thing to say, but it's plausible that it might be. And so should we be careful? Yes, but I'm not sure how we can -- how do we put this genie back in the bottle, and do we want to?

Ben Yelin: And I don't know, if you -- yeah, I don't know if you can put the genie back in the bottle. The technology is out there. People are going to use it. If we were to impose a six-month ban, I'm sure some enterprising cybercriminal could put together their own -- their own system, their own generative AI, and make a profit from taking that space in the marketplace. You know, beyond the concerns that we've already talked about, there are cybersecurity concerns. One of the things this article talks about is Microsoft Research ran a series of experiments on GPT-4, and in one experiment, these researchers actually executed a cyberattack that hacked a computer on a local network, so it was the AI that conducted the cyberattack.

Dave Bittner: Yeah.

Ben Yelin: So that's just an example of all the threats that we have yet to discover. I don't know that there -- there's certainly no way to put the genie back in the bottle. I think it's fine to set up informal ethics review boards, get some of the brightest minds in the room to try and address these issues, but you're not going to do that with some type of six-month unrealistic pause. I think we have reached an inflection point. I mean, I think people are dismissive of the inflection point talk because we've heard that with other forms of technology in the past.

Dave Bittner: Right.

Ben Yelin: But generative AI is very different. We've never had artificial intelligence that's actually creating something, and that's where this is so exciting-slash-scary for people, that it's -- that it's moving so quickly.

Dave Bittner: Do you think there are parallels here when you think about like medical ethics, you know, that just because we can doesn't mean that we should and there need to be guardrails on some of these things?

Ben Yelin: Yes, although I'll say, you know, in the COVID experience, we basically had developed a very rigorous process for the approval of vaccines, right? So it had to go through several years of institutional review. There were Phase I studies, Phase II studies. When there was a demand during the COVID period for a type of Operation Warp Speed, where we could skip over a bunch of those steps and get an awesome 90% effective vaccine onto the market, people were fine moving at this lightning pace. I'm wondering if, because the tools are out there and they're so useful, I mean, certainly, the marketplace would support their use. We know that ChatGPT is very popular. It's performing a lot of functions that people find valuable. Maybe that will supersede the need to go through some of these institutional steps to protect the integrity of generative AI. I guess that's just kind of a warning sign that all the people standing atop the mountain yelling "stop" aren't -- those people aren't always on the winning side when it comes to these things.

Dave Bittner: Yeah.

Ben Yelin: We have -- even though you're right that in the past we've done medical experimentation where we should have considered the ethical and moral implications of that type of research, it has happened because there's been a demand for it, and that doesn't make it right or wrong. It's just kind of the reality, and, unlike the medical field, we're dealing in a relatively unregulated space. I mean, there's no governing board controlling AI. We have federal agencies that might dip their toes into it, but it's not like we have the institutional players that we do in the medical field, like CMS. I mean, this is comparatively the Wild, Wild West here.

Dave Bittner: Yeah.

Ben Yelin: So I just don't know that we have the capability to just stand in front of this train and say "stop."

Dave Bittner: Reminds me of the interview I had with Richard Clarke about his book about Cassandras, you know, the people who sounded the alarm, you know, had the warnings, everyone poo-pooed them, and they turned out to be right.

Ben Yelin: Yeah, usually the people who are poo-pooed in the beginning, like Richard Clarke, who was poo-pooed through several presidential administrations --

Dave Bittner: Right.

Ben Yelin: Very frequently they end up being vindicated. So I'm not saying that the people who signed this letter are wrong -- and again, these are very prominent people. They're not wrong. I mean, we are in uncharted territory. This does present a great level of risk that we just don't understand. I just don't think their prescription here is viable, nor necessarily something that we want. I think we can have a separate conversation among some of the big players in the field about ethics and the implications of what we're doing, but I don't think you can just throw a wrench into the creative process here.

Dave Bittner: All right, well, we will have a link to that story in the show notes. We would love to hear from you. If there's something you'd like us to discuss here on the show, you can email us. It's caveat@thecyberwire.com.

Dave Bittner: Ben, I recently had the pleasure of speaking with Lee Rossey. He is the CTO of an organization called SimSpace, and our conversation centers on insurance and whether cyberattacks are insurable. Here's my conversation with Lee Rossey.

Lee Rossey: In general, you know, more and more of the data is on -- is on the computer systems, so a lot of the companies have built up, you know, more and more of the information in applications and services that are traditionally within their networks or in the cloud, and historically, you know, the data was, you know, not that well defended. People have been getting better at actually defending and building it all up. But I would say that it used to be just the big boys, the large insurance companies, the militaries, that really cared about and understood the security and the importance of the data, but now every business and every company has data in their computer systems that they need to protect and maintain on their own. So there's not a single company -- it's not very many companies now that are not dependent, in large part, on IT systems, whether it's on prem or in the cloud, to be able to run and maintain their businesses. So with that, as the attackers look for where the money and where the data is, they go after, as usual, the weakest targets, and historically the weakest targets were the financials. That's where the money was. They shored up their defenses. Now they're going after, I would say, the ones who are not as strong and continue to go after that. And in my mind, the progression was, hey, let's go after where the money is, which is the banks, so they were targeting the banks years ago. They've improved their defenses. Then they go after, you know, other industries, so media, large-scale enterprises, and they improved, and then finally, in my mind, right now one of the weaker areas is the ICS or the OT side. So there's a lot of manufacturing, industrial control systems, oil refineries, power and gas, that maybe are decent on the IT side, but they're not that well off on the OT security, and with all the access that people are making available -- IT systems generally want to be connected -- opening up previously isolated and air-gapped environments for various reasons is now opening up the surface for attacks to be able to get in and target them. So yup, attackers are going after the data. The data is worth money. So to that degree, there's a large extent of what can insurance do to be able to help offset some of the damage and the costs, and probably we can get into that in a second, but that was, I guess, my semi-long-winded answer.

Dave Bittner: Well, in terms of the insurance marketplace, I mean, what's available and who is buying it? Where do we stand there?

Lee Rossey: Yeah, I mean, there's a lot of cyber insurance out there. I will say that in general, most people are getting cyber insurance to some degree, but the requirements to be able to get it are actually going up, and justly so. Whenever an insurance company is perhaps losing more money than they're bringing in, they're going to start tightening things up. So the requirements to be able to get insurance are going up, which is a good thing, and to get insurance, they're demanding that companies show that they're doing a better job and providing some defenses, because, you know, no insurance company wants to just say, "Hey, sure, I'll give you X amount of money for a very weak setup." It's kind of like -- here's my stupid analogy, but with the climate change we were just talking about -- insuring people on the shore that you know are going to get flooded every other year. So that's just -- you're asking for damage.

Dave Bittner: Yeah. I mean, it's interesting, you know, regular listeners of this show will know that I've often wondered if cyber insurance is going the way of flood insurance, in that, you know, the federal government is the only one out there backing up flood insurance because it's not reasonable for the private sector to absorb those costs. The costs are so astronomical when they happen, it's not a bet private industry is willing to make.

Lee Rossey: I think, again, I'm no expert in insurance, but my feeling is it's probably halfway there -- it's probably going to end up to some degree there. Having said that, my general view is there's two types of cyberattacks. There's the high probability ones that have low impact, and I'm going to put those in the category of ransomware and things like that -- you know the attacks are going to hit you, you know you're going to get it at some point, and you're going to pay out some money. And then the other one is the low probability attacks that are going to be high impact, and what I mean by those, those are more the strategic attacks. So picture the ones where some -- a determined adversary, a nation-state, is going to want to get in and take you out and cripple your business, beyond cripple -- just wipe you off the face of the map, if you will. And what I mean by those is hitting a power company at a really bad time to really disrupt, you know, disrupt what's going on. So let's just say if you're at war, hitting the city of Boston during a snowstorm and taking out the power company. They're not doing that for ransom. They're doing that to inflict pain on the city and the population. So in that case, what do you do about both the low probability but, you know, high impact attacks, but also the ones that are going to be commonplace? And for the commonplace, I think companies can do a better job at improving their security, but there should be no expectation that a company can defend against a sophisticated nation-state that is willing to, I'll say, take them out. And we can talk in a second, but I think there are ways that companies can really improve their security without necessarily changing their budget that will give the confidence to insurance companies that, hey, this is now a worthwhile investment, if you will, to insure this organization, and I can expand on that if you'd like.
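[Editor's note: Lee's split between high-probability/low-impact attacks and low-probability/high-impact attacks maps onto the standard expected-loss arithmetic insurers use when pricing a policy. A minimal sketch follows; all probabilities and dollar figures here are hypothetical, purely for illustration.]

```python
# Illustrative annualized expected-loss math for the two attack classes
# Lee describes. Every number below is made up for the example.

def expected_annual_loss(annual_probability: float, impact_usd: float) -> float:
    """Expected loss = likelihood of the event in a given year times its cost."""
    return annual_probability * impact_usd

# High probability, low impact: e.g., commodity ransomware.
ransomware = expected_annual_loss(annual_probability=0.30, impact_usd=500_000)

# Low probability, high impact: e.g., a destructive nation-state attack.
nation_state = expected_annual_loss(annual_probability=0.005, impact_usd=200_000_000)

print(f"Ransomware expected annual loss:   ${ransomware:,.0f}")
print(f"Nation-state expected annual loss: ${nation_state:,.0f}")
# The tail-risk event can dominate expected loss even at a tiny probability,
# which is one reason insurers cap or exclude nation-state scenarios.
```

Even with these toy numbers, the rare catastrophic scenario carries the larger expected loss, which echoes Lee's point that insurers treat the two classes very differently.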

Dave Bittner: Yeah, I'm curious, do we have a sense for how many organizations are taking advantage of cyber insurance? Have we reached the point where it's the majority?

Lee Rossey: I think it is, by just reading up a little bit on the area. I think most people have cyber insurance. The requirements to get it are higher. There's more exclusions, but I think most of them have insurance at this point. The question is, is it enough to cover and what are they -- what are they really paying out?

Dave Bittner: Yeah. Is this -- you sort of mentioned this may be a good carrot for the organizations, the insurance companies saying, "Hey, if you want to be insured, you have to put these things in place." That seems to me like a good way to lead people along here.

Lee Rossey: I think it is. Maybe there are cases where people have had insurance and kept it, but for any new policy, and maybe at renewal, the requirements are really going up to be able to get, I'll say, a good price. It's like, if you don't want to do anything, you don't want to show me anything, then A, I may not give you coverage, or the premium's going to be really high. But if the insurance company gets confidence and sees evidence that they're really improving their security, then I think it's a lot more palatable. But yeah, people have insurance, companies have insurance.

Dave Bittner: What are your recommendations, then, I mean, in terms of the things organizations can do to better protect themselves?

Lee Rossey: Yeah, I think -- this is my opinion here, obviously. So I think a number of companies have made investments in cybersecurity, and they continue to do that. Speaking a little bit historically, they have bought a lot of tools, and they recognize that they need people to be able to run them. So it's not like people are doing nothing. They're spending the money. They're recognizing the potential loss in terms of reputational risk, dollars, and other things, so they are investing. The question now, though, is how are you making sure that the investment is good and you're actually improving? And let me build on that. Throwing more money at the problem isn't necessarily going to help fix it, per se. In other words, buying more tools and more stuff doesn't help. What I think a shop needs to do is figure out what are the right tools to actually focus in on, and these can be by area -- your firewalls, your network detection and response, your endpoint protection. What is that right set of tools? And then here's the key: how do I make sure that the security team members I have in my SOC and elsewhere are really skilled and proficient with those tools, to be able to rapidly detect and respond? The longer somebody is in your network, the greater the damage is likely to be, and I'm going to assume that organizations are going to get breached, so assume they get in. Now the question is, how do you reduce the dwell time, or how do you catch it as quickly as possible? One challenge is, I think many shops have way more tools [inaudible 00:35:25] than they need, so they've spent a lot on tools. But if you assume that the real damage is post-breach, then it's the operators that have to be able to rapidly leverage those tools, detect the attacks, and then knock them out or restore things before they can really get too damaging. So in my mind, it's the right tools -- which may mean fewer tools -- but operators with the right skills to really take advantage of them. And then part two is, we can't think of it as an individual sport. It's more like, in my mind, football or anything else. You have to have great individual players -- a great quarterback, a running back, an offensive lineman -- but it's the team that wins the games. It's the team that's able to rapidly pull everybody together and figure out what's going on -- your host guy, your network guy, your firewall guy, your SIEM guy rapidly converging on what may look suspicious, identifying it, and remediating it. So effective, well-trained teams with the right tools is a much better strategy, in my mind. And then the question becomes, how do you build up strong, effective teams to do that? So in my mind, people have made investments in tools and investments in people, but now how do you actually make sure they're an effective and ready team? And that doesn't come from throwing five more tools at it. More tools basically means more distractions, because a team can't focus in on the real stuff.

Dave Bittner: I'm curious, getting back to what you touched on with things like OT and IT security, you know, critical infrastructure, keeping the lights on, all that sort of stuff. I realize I risk stretching the analogy here, but is there or should there be some sort of federal backstop here, you know, a cyber equivalent to FEMA?

Lee Rossey: Probably. Yeah, and especially for the regulated industries -- say, the utilities, the power companies, the ones that don't have these massive budgets to put into it. Their margins, I think, are pretty low. But I'd say there are a couple of trends here, independent of the FEMA-like things, in what they can actually do to protect themselves. The OT portion has generally not had a lot of cybersecurity on it. They're much more focused on keeping the production lines running, keeping everything operating, with not as much emphasis on security. I think that's starting to change a little bit, too -- I'm talking about the specific OT side of it. But to that point, since the security has been relatively weak, it's basically shifting the landscape for attackers to be able to get into it and go through it. Colonial Pipeline is probably a good example. I think, if I remember right, it was Cobalt Strike -- a $3,500 attack tool -- that was used to get into the IT portion via a compromised VPN, and that encrypted -- or took offline -- the billing system, which had nothing to do with the OT component, per se. But they shut the system down for a couple of days because they couldn't figure out what was going on and how pervasive it was. Just like the financials and others, I think the shops probably will do a better job at getting better, but at a certain point, a utility is probably not well equipped to deal with nation-states. They are equipped to deal with -- they should be equipped to deal with, like, ransomware and annoyances.

Dave Bittner: Where do you suppose we're headed here? Do you anticipate something more towards an equilibrium?

Lee Rossey: Over time, yeah. Yeah, I do, and I think people are recognizing the problem. Across sectors, they're starting to recognize the problem. They're putting investments into cybersecurity, and they're now trying to optimize those investments. The attackers are going to continue to be there -- wherever the money is, that's where they're going to go, or the strategic target. So as the defenses get better, the attacks get better, and that will continue. But if you look at where we were 10, 20 years ago -- I've been playing around with this for 27 years -- there really wasn't much security across these networks and enterprises, and the intelligence community, the DOD, and others had a field day trying to get in. The security is getting a lot better. The teams are getting better. So I think over time there probably will be an equilibrium, and it won't be just a few guys in a basement taking out big companies. It's going to revert back to nation-states' intelligence agencies that have to get into the really hard targets. But yeah, I think there will be an equilibrium at some point.

Dave Bittner: Ben, what do you think?

Ben Yelin: It was a really good overview. I mean, I think one thing that's changed in the realm of cyberattacks over the last decade or so, as he said, is that it's no longer just the military or high-profile government and private sector institutions that are facing cyberattacks. Many small and medium-sized businesses have valuable data, and they are now at risk as well. I liked how he used a football metaphor, which is always going to work with me -- having a teamwork approach where it's not isolated individuals trying to solve the problem, it's a group of people, both prior to a potential incident and after an incident, who will make things right. So I thought it was a really interesting interview.

Dave Bittner: Yeah. Our thanks again to Lee Rossey for joining us. We do appreciate him taking the time.

Dave Bittner: That is our show. We want to thank all of you for listening. The Caveat podcast is proudly produced in Maryland at the startup studios of DataTribe where they're co-building the next generation of cybersecurity teams and technologies. Our Senior Producer is Jennifer Eiben. Our Executive Editor is Peter Kilpe. I'm Dave Bittner.

Ben Yelin: And I'm Ben Yelin.

Dave Bittner: Thanks for listening.