Caveat 4.28.21
Ep 75 | 4.28.21

Cyber insurance: still a work in progress.

Transcript

Paul Moura: It's definitely becoming increasingly standard for most every business across sectors to have cyber insurance in place whenever they're dealing with large amounts of data or consumer data.

Dave Bittner: Hello, everyone, and welcome to "Caveat," the CyberWire's privacy, surveillance, law and policy podcast. I'm Dave Bittner. And joining me is my co-host, Ben Yelin from the University of Maryland Center for Health and Homeland Security. Hello, Ben. 

Ben Yelin: Hello, Dave. 

Dave Bittner: On this week's show, Ben has the story of a lawsuit from some WeChat users. I wonder if the FTC is cracking down on artificial intelligence. And later in the show, my conversation with Paul Moura and David Navetta from Cooley with thoughts on the importance of cyber insurance. 

Dave Bittner: While this show covers legal topics and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. 

Dave Bittner: All right, Ben, let's jump into some stories here. What do you have for us this week? 

Ben Yelin: My story comes from The Washington Post and their technology section, by Jeanne Whalen. And this is about a lawsuit emanating from plaintiffs in the state of California suing the parent company of the WeChat app, which is a Chinese tech giant called Tencent. They are suing this app and its parent company over what The Washington Post describes as alleged censorship and surveillance, basically a violation of their free speech and privacy rights. This is a civil lawsuit taking place in state court in the state of California. 

Ben Yelin: What caught my eye about this case is that the plaintiffs are wishing to stay anonymous. They do not want to be named in the civil complaint. And the reason for that is they are concerned that Chinese authorities, because this is a Chinese company, could pressure Tencent, the parent company here, to turn over their private information. And that, of course, could expose these plaintiffs to harassment or even worse, considering that we're talking about a totalitarian government here. 

Dave Bittner: Right. 

Ben Yelin: The law firm representing the defendant in this case, which is the company, is fighting against this request for anonymity and with good reason. They're basically saying they can't mount a proper defense in this case if they don't have full access to information. You know, if they can't file an answer with specific facts because they don't know who these actual plaintiffs are, then that's going to be a major inhibition on their case. 

Ben Yelin: So before this case moves forward, it has to go in front of a California state judge, and they're going to have to decide if the plaintiffs are required to reveal themselves for this lawsuit to continue. 

Ben Yelin: And so this is just a really interesting example of the issues around anonymity among plaintiffs and of, you know, one of the reasons why it's difficult to pursue a cause of action against a Chinese tech company, frankly. 

Dave Bittner: Now, what about a right to face your accusers? Is that only in criminal law? Or what are we talking about with that? 

Ben Yelin: Yes, that's a criminal law concept. So since this is a civil case, that particular doctrine does not apply here. I will say, though, it is the general practice of courts across the country, as a default, that all plaintiffs and all parties in all cases are identified by name. And there are very few exceptions. 

Ben Yelin: Anyhoo, as I said, you can understand why that is. In order to actually adjudicate a lawsuit, you're going to have to know personal information on people - what their name is, what they look like. You know, if you're trying to assess injuries, you're going to need information on perhaps their financial earnings or other pertinent information. 

Ben Yelin: You know, the default in civil cases is to rule against anonymity. There are a very small number of exceptions. Some of them are things like cases involving minor children, which we can understand. That's not at issue here. And then in other instances, you have cases where there might be a security threat against the individual. So we're talking about people who are involved in organized crime, something like that. 

Ben Yelin: Neither of those scenarios is per se relevant in this case. And the, you know, potential harm that these plaintiffs are alleging through their attorney is theoretical. I mean, they can't prove beyond mere allegation that the Chinese government is going to abuse them or harass them if their names are revealed. 

Ben Yelin: So, you know, the judge just kind of has to work with the information that he or she has. I'm not sure if it's a he or she. And, you know, I think this could end up being a pretty groundbreaking case in terms of the question of anonymity when you're suing giant companies based in a totalitarian communist country. 

Dave Bittner: How does it all come into play that, as this article points out, the plaintiffs are a mix of U.S. and Chinese citizens? They reside in California. And is it because WeChat does business in California that they are able to be sued in the state of California, yes? 

Ben Yelin: Yes. That is why the state of California has jurisdiction over the defendant. Yeah, they have personal jurisdiction over WeChat, which does business, obviously, in California. I believe they have an office there, if I'm not mistaken. 

Dave Bittner: OK. 

Ben Yelin: So they are certainly eligible to be sued under our civil procedure laws in the state of California for sure. 

Dave Bittner: The whole thing strikes me as a bit odd. And how do you suppose their anonymity will affect their ability to go through with this lawsuit? If you're claiming things like censorship and surveillance, you're going to have to have testimony to that effect. How do you prove that? 

Ben Yelin: Right. So you can have anonymous testimony. But at some point, there's going to have to be some sort of revealing information. Otherwise, they're not going to have a case. And so if I were the defendant in this case, I would file a motion to dismiss the lawsuit. You have to allege something to sustain a civil suit where there could possibly be a violation of the law, right? And as a plaintiff, if you're anonymous and you cannot specify those allegations, you know, to a level where they're only applicable to you, the plaintiffs, and not to everybody else, then I think that would be grounds for the judge to throw out that lawsuit. 

Ben Yelin: So what I expect - this is going to kind of be a cold war between the parties here. The longer the plaintiff pursues this strategy of maintaining anonymity, the more likely it is that the defendant is just going to repeatedly file motions to dismiss until the judge sort of says, we can't adjudicate this case unless we have some information on the plaintiff. 

Ben Yelin: Now, the plaintiffs' attorney, I think very reasonably, is saying they're not trying to be difficult here. They want this lawsuit to proceed. They're confident in their claims. But they're saying, essentially, you may be comfortable here in America, but you haven't dealt with communist China. The fear is real. 

Dave Bittner: Right. 

Ben Yelin: And they have seen it in other cases. They're worried that even if the judge does grant anonymity here, that the parent company will somehow find out the names of these plaintiffs, release them to the Chinese government, and there could be harsh consequences. 

Dave Bittner: Right, for the family members who are back in China. 

Ben Yelin: Absolutely. And so that's an area of great concern. So it's really just a quagmire here. I don't know how you can come up with an equitable solution where the court can hear their claims but can do so without the plaintiffs identifying even nominal personal information that would be required to sustain a lawsuit, especially if one of the allegations is a violation of privacy. I think it's going to be... 

Dave Bittner: Right, right. 

Ben Yelin: ...Very difficult for the plaintiffs here. And I think at some point, the court is going to go to them and say, you have to reveal something, or your lawsuit is going to be dismissed. 

Dave Bittner: Wow. Yeah, what an interesting combination of circumstances. Will be interesting to see how this one plays out. 

Ben Yelin: For sure. 

Dave Bittner: It's a good one, yeah. All right, well, we'll have links to that story in the show notes, of course. 

Dave Bittner: My story this week actually - what drew my attention to this was a tweet from a gentleman named Ryan Calo, who's @rcalo on Twitter. And his tweet says, whoa, whoa, whoa. An official FTC blog post by a staff attorney noting that, quote, "the FTC Act prohibits unfair or deceptive practices. That would include the sale or use of, for example, racially biased algorithms." 

Ben Yelin: I'll say to - just to start that any tweet where you say, whoa, whoa, whoa, and the third whoa is capitalized, you know that has to be serious. 

Dave Bittner: (Laughter) Serious stuff here, right? 

Ben Yelin: Yeah. 

Dave Bittner: So this led me to this publication that the Federal Trade Commission put out, and it's titled "Aiming for Truth, Fairness and Equity in Your Company's Use of AI." And it's written by Elisa Jillson. What some of the other commentary in these tweets is getting at is that this could be a shot across the bow - that the FTC is signaling it's going to start holding companies accountable for the types of things that their AI does. 

Dave Bittner: Here are some quotes from this publication. It says, "keep in mind that if you don't hold yourself accountable, the FTC may do it for you. For example, if your algorithm results in credit discrimination against a protected class, you could find yourself facing a complaint alleging violations of the FTC Act and ECOA." And so folks are reading this and saying that this is pretty strong language from the FTC. 

Dave Bittner: Another quote here. It says, "don't exaggerate what your algorithm can do or whether it can deliver fair or unbiased results. Under the FTC Act, your statements to business customers and consumers alike must be truthful, non-deceptive and backed up by evidence." Imagine that, Ben. 

Ben Yelin: Yeah, that seems out of the question in our legal system, right? 

Dave Bittner: (Laughter) What do you make of this? What - do you agree that this is perhaps signaling from the FTC? 

Ben Yelin: I do. And I think this is worthy of the three whoas that we saw in this tweet. Actually, the guy who tweeted it is a University of Washington law professor, so it's not just... 

Dave Bittner: OK. 

Ben Yelin: ...Some dude on Twitter. 

Ben Yelin: First of all, I'll say that this is not something I was expecting to see from the FTC. It's really a striking blog post. Actually, I probably was once aware that the FTC had a blog, but I have to admit I'm not a regular consumer... 

Dave Bittner: Right. 

Ben Yelin: ...Of their blog. And it's certainly not something we would've seen in the previous presidential administration 'cause this really does have a racial justice element to it. They're saying that not only are you potentially doing something immoral or alienating your potential customer base by having biased algorithms, you could also get into legal trouble. There could be civil penalties or even criminal sanctions. So that's really a direction that this agency has not taken in the past. 

Ben Yelin: I think the statutory language backs them up. So the FTC Act prohibits unfair or deceptive practices. And as this blog post says, that includes the sale or use of racially biased algorithms. Then there's the Fair Credit Reporting Act. So if one of your company's algorithms is used to deny people employment, credit, housing, et cetera, et cetera... 

Dave Bittner: Right. 

Ben Yelin: ...Then you have violated the Fair Credit Reporting Act and its sister, as you mentioned, ECOA, the Equal Credit Opportunity Act. And that applies not just to race but to a bunch of other protected classes, like sex, marital status, et cetera. 

Ben Yelin: And one thing that I think this blog post highlights is that it's the responsibility of companies to watch out for discriminatory outcomes. I'm sure there are some organizations who are, you know, intending some sort of discriminatory outcome from their algorithm. That is probably a tiny minority, maybe 0.1% of all companies that have developed algorithms. 

Dave Bittner: Right. 

Ben Yelin: Most of them don't think they're developing something that's going to lead to racial discrimination. But what this agency is saying in this blog post is that it's your responsibility to make sure that your algorithm doesn't lead to discriminatory outcomes. So it's not just about intent. It's about actual outcomes here. They mention a presentation from PrivacyCon 2020. I did not receive an invite, unfortunately. 

Dave Bittner: (Laughter) Maybe next year you'll be keynoting. 

Ben Yelin: I sure hope so. 

Dave Bittner: (Laughter). 

Ben Yelin: I hope PrivacyCon 2022 is a real blast. 

Dave Bittner: Yeah. 

Ben Yelin: But researchers presented work showing that algorithms developed for benign purposes like health care resource allocation and advertising ended up being racially biased. So again, you're coming up with an algorithm not for any discriminatory purpose. You have no intent. But like many things involving artificial intelligence, that leads to unconscious bias... 

Dave Bittner: Right. 

Ben Yelin: ...And is illegal because we, as a legal system, can judge discrimination by outcomes and not just by intention. 

Dave Bittner: They make the point in this really interesting paragraph here titled "Do More Good Than Harm," which basically just lays it out. It says, to put it in the simplest terms, under the FTC Act, a practice is unfair if it causes more harm than good. And they go on to say that, you know, they're concerned that some of these algorithms, if they consider things like race or color or religion or sex, could be the equivalent of digital redlining, you know, the way that they used to have unfair housing practices back in the '60s. 

Ben Yelin: Absolutely. 

Dave Bittner: I'm reminded of the case we saw a few months back when Apple first came out with their credit card, and several people were reporting instances where you'd have a husband and wife, you know, who had totally blended finances, right? 

Ben Yelin: Right. 

Dave Bittner: It's all just one big, shared thing. They live in the same house. Everything's in both people's names. And the husband would apply for an Apple credit card and get, you know, $5,000 worth of credit, and the wife would apply for the credit card and get $1,000 worth of credit. 

Ben Yelin: Right, right. 

Dave Bittner: Right. And so clearly, something was amiss in the algorithm. And to their credit, I mean, Apple took that seriously and, you know, said, hey, this is no good. We recognize it, and we're going to work to fix it. I don't know how successful they've been at that. But to me, that's an example of this kind of thing, where who knows what's going on under the hood? 

Ben Yelin: Yeah. 

Dave Bittner: I think it's interesting to see that the FTC is taking that seriously and certainly signaling that they may come after you if what you're doing doesn't pass muster with them. 

Ben Yelin: You know, I will say just because there is a discriminatory outcome doesn't mean that you have a per se FTC violation. So, for example, I'm charged more for auto insurance than my wife because as a dude, as a guy... 

Dave Bittner: Right (laughter). 

Ben Yelin: ...I'm at greater risk for reckless driving, getting in accidents and having to use my insurance. That's a distinction based on risk, and it's legal under FTC regulations. But, you know, if there's a pattern or practice that leads to discriminatory outcomes - particularly when we're talking about things like fair credit reporting - then companies are certainly exposed to those civil and even criminal penalties. 

Dave Bittner: All right. Well, we will have a link to both Ryan's series of tweets, which has some interesting commentary on this, and then the actual FTC post itself. Highly recommend it. Do check that out. It's a good one. 

Ben Yelin: I will also say that the next tweet down in this thread doesn't start with whoa, whoa, whoa but starts with holy S, so you know it must be a big deal. 

Dave Bittner: (Laughter) Right. Well, Ryan doesn't hold back when it comes to expressing himself. 

Ben Yelin: Exactly. 

Dave Bittner: All right. Well, we would love to hear from you. If you have a question for us, you can call in. It's 410-618-3720. Or send us an email to caveat@thecyberwire.com. 

Dave Bittner: Ben, I recently had the pleasure of speaking with Paul Moura and David Navetta. They are both attorneys at the law firm Cooley LLP. And our conversation centered on where we stand when it comes to cyber insurance, how cyber insurance is evolving and why it's important for everybody. Here's my conversation with Paul Moura and David Navetta. 

David Navetta: I think right now we're at a very interesting point in the cyber insurance world. You know, for a long time, the privacy and data security risks were, I would say, sort of in gestation. They were real in the sense that we were seeing, you know, certain liabilities, certain types of data breaches, certain types of lawsuits, regulatory actions happen, but the volume wasn't necessarily huge. The theories of liability and other kind of gotchas that companies would face were not very mature. 

David Navetta: But I think we are now at a point in time where the cyberthreat environment is very active. It's almost constant. The lawsuits have gained legs, and plaintiffs are very interested, especially in the United States, to bring class-action lawsuits in the privacy and security context. And in Europe and then now increasingly in the U.S., regulators are starting to get involved in the game much more frequently and with much more of an impact on companies. So I think that is starting to affect how the cyber market is looking at the availability of coverage, the pricing of coverage and the scope of coverage. 

Paul Moura: Cyber insurance has been available for many years, but as the years have passed, the policies have gotten more sophisticated and more readily available for more insureds. But one thing that has always been inherently difficult, I think, for a lot of insurance carriers that are issuing cyber coverages is just how do you value them? How do you measure the risk? These sorts of cyber risks can be very difficult to measure. And as the years have passed, we're seeing kind of more attention focused on improving that sort of process. 

Dave Bittner: Has the product been around long enough - and, indeed, has the vertical been around long enough - that we feel like we have, you know, good actuarial tables to be able to accurately set the rates on these sorts of things? Or is that still a work in progress? 

David Navetta: Yeah, I haven't been involved in that end of it for a while, but I still think it's a work in progress. The risks have increased quite a bit, but also, you know, the data associated with these types of risks has also increased. So, you know, at the beginning of cyber insurance, when I was drafting some of these policies, there was, you know, zilch in terms of actuarial data. I think that that has changed significantly. There's data now that can be used as part of an underwriting process. 

David Navetta: Now, not every carrier has the same level of data. And there've been some players in the game since, you know, the early 2000s who probably have more experience and more ability to forecast risk. So, you know, that's not a level playing field per se. 

David Navetta: But what we also are actually seeing is sort of data collection coming from, you know, more of a security-type approach, where there are certain carriers who are going in and running certain types of tools and security kind of assessments in more of an automated fashion to be able to assess risk. 

David Navetta: And so we're seeing that on a kind of a scale basis where, you know, the middle market and smaller businesses who are still targets are now getting access to this type of insurance. And the underwriting process is more in a box, but also increasingly more supported by data that security-oriented type of practitioners are able to get on behalf of these insurers. 

Paul Moura: Yeah, and I think we're going to start to see more innovative types of underwriting processes like that over the next few years. One example was just recently announced by Munich Re and Allianz: they're working on a partnership with Google to develop a cyber insurance product where insureds who use Google Cloud will be able to purchase coverage, and the insurers will have more refined data and information about the insureds' systems because, you know, they have access to the Google Cloud. 

Paul Moura: So we're going to start to see more things like that to help insurance carriers kind of better underwrite and better price their products. 

Dave Bittner: Yeah, that's fascinating. I mean, it reminds me of some of those, you know, consumer insurance companies where if you're willing to put some sort of, you know, device in your car, they'll give you a discount in exchange for keeping track of how often - or how - you drive, you know? It seems like a similar sort of approach. 

David Navetta: I think we're getting to that point, right? I mean, when cyber insurance first kind of came out, the analog - one of the analogs was, you know, auto insurance. So if you can prove that you have, you know, certain types of controls in place, like you can prove that you have a seat belt in place and your car is in good order, you've got a driver's license, you know, then in theory, the premium should go down, right? But obviously, auto insurance has been around for a long, long time. There's a lot of data, a lot of actuarial ability there to crunch numbers. 

David Navetta: And that hasn't been the case with cyber yet. Arguably, cyber risks are more complex as well, and they're constantly evolving, so that makes it a little harder. But yet, as Paul mentioned and as we were talking earlier, some of the data is now available and is able to be crunched more readily. You know, artificial intelligence is playing a role as well, and machine learning, to be able to crunch and analyze that type of data. So, you know, the methods for actually capturing and crunching and understanding the information have improved significantly in the last, you know, 15, 20 years. 

Dave Bittner: Are people generally able to get the coverage that they need? Is there - how often are you seeing folks who are experiencing some, you know, sticker shock when they go out shopping for this sort of thing? Is that a reality, or are these policies falling into line where they're generally pretty affordable? 

Paul Moura: I think more recently, there's been more sticker shock. In the past, some of the cyber coverages were very affordable and, as a result, there were some insurance carriers who maybe mispriced the policies and were paying out more in claims than they expected, so it wasn't a profitable business. 

Paul Moura: So very recently, cyber policies and cyber coverages have gotten very expensive. And there are fewer players willing to offer cyber coverage to certain types of companies. If you're a startup, you might have a little bit of difficulty now getting cyber coverage from some of the bigger names. 

Paul Moura: But the cyber coverages have been improving as far as how they're worded and the types of offerings that they provide. There are endorsements and types of coverages that are becoming increasingly standard, which is very helpful. That's been a positive development as the cyber insurance industry has kind of progressed over time. 

David Navetta: It's worth stepping back a few years. So, you know, as I mentioned, there are big players that have been around for 15, 20 years. And then a lot of second-tier-type players - even first-tier, really, insurers whose names you would know - started to get into the market after the first movers. And for a period of time, you know, almost every other week there was a new cyber insurance company or a cyber insurance product coming out, hitting every aspect of the market - big, small, medium, startups, banks, health care, every sector, every industry. 

David Navetta: And so at that point, the competition for cyber insurance policies started to become very fierce. The premiums went down. The coverage started to expand. There were more and more players willing to try to get out there and grab some market share, and that included, you know, providing more coverage and higher limits for less money. 

David Navetta: I think the market has hardened for some of the reasons we stated earlier. You know, the threats are much worse. Right now, we're going through a really bad ransomware threat that's been going on for maybe the last three years, where insurers are having to pay ransoms in bitcoin right out of pocket. And it's very much scaled across the entire industry. 

David Navetta: We're now seeing things like supply chain breaches, which are systemic breaches. So one of the big concerns and risks for carriers has been, OK, I'm insuring a service provider, and they have 10,000 customers. What if that service provider gets hit and each of its customers gets hit, right? So in that case, you could have a mass risk, a mass loss, if you're insuring not only the service provider but all of its customers at the same time, which could really undermine a carrier's book of business in terms of its insurance reserves and premiums. 

David Navetta: In fact, the New York Department of Financial Services just came out with a notice to the insurance industry, sort of putting them on alert to make sure that they're underwriting properly and also taking these systemic risks into account. 

David Navetta: Another example was the SolarWinds issue that came up recently and hit many companies. And just last week or this week, actually, we heard that there's a potential breach around Microsoft as well, maybe affecting 30,000 organizations running Microsoft Exchange email servers. So a single incident like that can really put a dent in the insurance industry. So I think that is what has caused the tightening of the market, the increase in premiums and maybe some of the less established players taking a second look and maybe not wanting to be in the game as deeply as they were before. 

Dave Bittner: Is there a regulatory component that you're tracking here as well? I mean, are we heading towards the possibility that certain sectors will be required to carry a certain amount of cyber insurance? 

Paul Moura: It's definitely becoming increasingly standard for most every business across sectors to have cyber insurance in place whenever they're dealing with large amounts of data or consumer data. Most businesses require it under their agreements with all their business partners that they work with. It's becoming something that every business really should get in place. 

Paul Moura: And as far as the regulatory aspect of it, what the regulators are focused on is what they can do to promote the stability of the cyber insurance market because of all the things that we mentioned earlier about it being difficult to measure these things and the market hardening recently because of things like systemic risk. Regulators are very much involved in figuring out ways that they can work with the insurance companies to incentivize good practices, good underwriting practices, good practices for educating their insureds about mitigating their cyber risk. And that's been their focus in recent years. 

David Navetta: We haven't seen a requirement for cyber insurance yet in any kind of statute or regulation. I think the drivers of it are what we see with any B2B-type company that is handling data on behalf of its customers: those customers are not only doing their due diligence around that company's security and privacy, but they're also, you know, demanding or requiring that their vendors have cyber insurance. So there's a commercial incentive, in many ways, that is driving a lot of the purchases in the B2B realm. 

David Navetta: In the public company realm, you know, the SEC has been requiring more and more detailed reporting around cyber-related risks. And one of the factors in reporting whether your controls may not be adequate is whether the risk has been appropriately addressed via cyber insurance. So again, not mandating cyber insurance, but it is a factor in whether a company is actually controlling its risks appropriately - you know, in a way that helps its share value stay up when you're a public company. 

David Navetta: So there's been a very kind of organic uptick in the purchasing of this type of coverage. And again, the news stories and everything we hear are probably also incentivizing that and making it almost a mandatory or required type of coverage as opposed to an optional coverage. I think there definitely has been a mindset change over time toward viewing this as something that's not optional. 

Dave Bittner: I'm curious what you all are tracking in terms of thoughts in the industry about long-term viability of cyber insurance. And I guess I'm coming at this from the point of view of, you know, could it be headed in the direction of something similar to flood insurance, where insurance companies have come to realize that flood insurance is not a good business to be in, so they're backed up by the federal government because the losses can be so disproportionate. Is there any talk in the cyber insurance world that things might go that way? Or is the outlook over the long term more in alignment with other types of insurance? 

David Navetta: Some level of that actually already happened. After 9/11, TRIA - the Terrorism Risk Insurance Act - was passed. And around that time was when cyber insurance was starting to gain its foothold. And there was a realization that a terrorist event could be, of course, something like what happened on 9/11, but it could also be, you know, shutting down the electrical grid via a cyberattack and/or hitting some sort of key vendor, perhaps of the entire financial industry, bringing down the stock market or what have you, right? 

David Navetta: That Terrorism Risk Insurance Act was intended to be sort of a reinsurance mechanism for mass losses that could arise out of a terrorist event, which I think over time, many in the cyber world viewed as a potential issue for a cyber event that could be tied to a terrorist type of attack. So there is precedent for that type of coverage in this scenario and that type of risk transfer for, you know, the government being a fallback, essentially, if there is a mass type of cyber event. 

David Navetta: Now, there isn't a mechanism today in place for that. TRIA, you know, is an older statute, and it's more limited because it requires some sort of terrorist-type event. But, you know, I could see a situation if there's a nation-state-actor-type event happening that hits a broad segment of the U.S. that the U.S. government would come in to back up insurers to avoid, you know, insolvency and avoid, you know, the trickle-down effect that could, you know, really impact the stock market and the economy on the whole. 

David Navetta: So I'm not aware of a mechanism yet that's in place, and I'm not necessarily certain that people are talking about it in Congress or otherwise. But I do think there is more of a realization that some of the systemic risks and some of the nation-state activity could have a much more widespread impact that needs to be looked at more carefully. 

Paul Moura: In terms of that realization, we are seeing the insurance carriers do various things to promote the long-term sustainability of the coverages that they're offering to policyholders now. We're seeing quite a few more policies being issued with higher retentions and higher deductibles, and more sublimits for certain coverages. 

Paul Moura: So the insurance carriers are making moves that are reflecting, you know, an effort to try to make this sort of product work more in the long term. But as a result, you know, policyholders are losing some aspect of their coverages. They're paying a price for that. Apart from an increase in premiums, coverages are becoming increasingly limited in some respects. 

Dave Bittner: All right, Ben, what do you think? 

Ben Yelin: Yeah, it was interesting. I mean, I think the insurance industry, in basically all other fields, has over time become so precise, as they were saying. Like, we've become very good at evaluating risk in all different types of domains - auto insurance, life insurance. The actuaries there know what they're doing because they have a lot of experience. 

Ben Yelin: We are still in a relatively early stage of cybersecurity-based insurance, and so it's going to take a while for the industry to develop where risks are being properly evaluated and insurance policies are being properly valued. So it's just really interesting to see where that's going to go as these insurance companies start to get more experience. 

Dave Bittner: Yeah. I think it's fascinating, too, the influence that the insurance companies can have on how people approach their cybersecurity. In other words, their ability to say, look - it's kind of like an insurance company saying to someone who puts up a new building, you got to have sprinklers, right? 

Ben Yelin: For sure. 

Dave Bittner: If you want to get insurance, you've got to have sprinklers and fire escapes and all that kind of stuff. These insurance companies can have the influence to make cybersecurity better by saying, listen; if you want us to insure you, you've got to demonstrate you have all these different basic cyber hygiene things in place, and then let's have a conversation. 

Ben Yelin: And I think that's completely reasonable and probably, you know, good for everybody in the long run. Now, not everybody is going to be able to comply, and not everybody - especially, you know, some poorer localities - is going to be able to obtain insurance. 

Ben Yelin: And unlike other industries - like the auto industry, for example - cyber insurance is not mandatory to engage in the online world. We're a long way from that. But I think, you know, it's an unadulterated good that the insurance industry is encouraging people to use good cyber hygiene. I think that's a really good thing. 

Dave Bittner: Yeah. Well, our thanks to Paul Moura and David Navetta. Again, they are from the law firm Cooley. We appreciate them taking the time to speak with us. 

Dave Bittner: That is our show. We want to thank all of you for listening. 

Dave Bittner: The "Caveat" podcast is proudly produced in Maryland at the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our senior producer is Jennifer Eiben. Our executive editor is Peter Kilpe. I'm Dave Bittner. 

Ben Yelin: And I'm Ben Yelin. 

Dave Bittner: Thanks for listening.