Caveat | Ep 189 | 9.28.23

Challenges in the cyber industry.

Transcript

Simone Petrella: Cyber insurance actually is pretty broad, but now they're trying to think about what sorts of systemic events might happen that they should exclude from policies because insurers can't withstand a catastrophic event that could affect many policyholders at once.

Dave Bittner: Hello, everyone, and welcome to "Caveat", the CyberWire's privacy, surveillance, law and policy podcast. I'm Dave Bittner, and joining me is my co-host, Ben Yelin from the University of Maryland Center for Health and Homeland Security. Hey, Ben.

Ben Yelin: Hello, Dave.

Dave Bittner: Today, Ben has the story of a TikTok account that targets ordinary people using advanced facial recognition technology. I've got the story of the trickle of information being publicly shared in the Google antitrust case. And later in the show, my N2K colleague, Simone Petrella, speaks with Monica Shokrai of Google about what Google does with its own actuarial team to calculate its own risk. While this show covers legal topics and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. All right, Ben, we've got some good stuff to share this week. You want to jump things off for us? You want to jump right in?

Ben Yelin: Let's jump things off for us. So I am back on the Joseph Cox train. It's been a while since we've had an article from Mr. Cox, and I believe he's now in a new outfit, which is 404 Media.

Dave Bittner: Yes.

Ben Yelin: He had previously been with Motherboard, over at Vice. Yeah, this is a fascinating story. It is entitled "The End of Privacy is a Taylor Swift Fan TikTok Account Armed With Facial Recognition Tech." So it's about what seems to be a troll TikTok account. It has 90,000 followers, and this person is doxing ordinary and otherwise anonymous people on the Internet, and he is doing so through the use of readily available facial recognition technology. He's using this to create content to really give a lot of information on people who did not consent to giving that information out publicly. And it's become obviously a very popular account that is still up on TikTok, despite many complaints about it. So basically what happens is somebody will screenshot a video. I'll get to the Taylor Swift angle in a second.

Dave Bittner: I'm on the edge of my seat, Ben.

Ben Yelin: Right. I know. I mean, it turns out that this person is a huge Taylor Swift fan. So a lot of the pictures come from the Eras tour.

Dave Bittner: Well, who isn't really?

Ben Yelin: Right, exactly. She's dating NFL players. Yeah, I mean, you know, if only I could afford those tickets. But they will crop images of a person's face, run that picture through facial recognition software. Then once they do that, using this readily available, advanced tool, they will reveal the person's full name, social media profile, sometimes their employer, to the millions of people who have liked the videos. And this is kind of an entire branch of content on TikTok. This isn't isolated. This is the most prominent account that does this. But it's kind of a way to show off your doxing skills, basically, through open-source intelligence. It's information that anybody could obtain online using the correct tools. And for the most part, that's done -- if you're a hobbyist, you do it generally by obtaining the consent of the person that you're trying to find information about.
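
For a concrete sense of the matching step Ben describes, here is a minimal sketch using the open-source face_recognition Python library. It is illustrative only: the image files, the names, and the database of already-identified faces are hypothetical stand-ins, not anything from the account in the story.

```python
# A minimal sketch of open-source face matching, using the face_recognition
# library (https://github.com/ageitgey/face_recognition). All file names and
# the "known people" database below are hypothetical placeholders.
import face_recognition

# A crowd photo, e.g. a frame screenshotted from a concert video (hypothetical file).
crowd = face_recognition.load_image_file("crowd_screenshot.jpg")
crowd_encodings = face_recognition.face_encodings(crowd)  # one 128-d vector per detected face

# Faces already tied to identities, e.g. from public profile photos (hypothetical files).
known_names = ["Alice Example", "Bob Example"]
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in ["alice_profile.jpg", "bob_profile.jpg"]
]

# For each face in the crowd shot, report the closest known identity.
for unknown in crowd_encodings:
    distances = face_recognition.face_distance(known_encodings, unknown)
    best = distances.argmin()
    if distances[best] < 0.6:  # the library's conventional match threshold
        print(f"Possible match: {known_names[best]} (distance {distances[best]:.2f})")
```

Commercial face-search services run essentially this comparison against indexes of billions of scraped photos, which is why a single cropped frame can be enough to surface a name and social media profiles.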

Dave Bittner: Okay.

Ben Yelin: When you're not a hobbyist and you're just simply a troll who is trying to get some lols on TikTok, then you do not obtain the consent of the people who are being subjected to this type of surveillance here. So I found this a fascinating article, and I was trying to think of the real legal angle here.

Dave Bittner: Yeah.

Ben Yelin: The legal angle here to me is that there's nothing illegal about what's going on here. It does not violate TikTok's terms of service. Joseph Cox, I think, closely read the terms of service and kind of ran it by a couple of attorneys that he knew. And it was basically like, this is voyeurism, but it's not illegal voyeurism. This is not a legal problem because all these people were showing their faces publicly. This is not taking a picture inside somebody's house where they have a reasonable expectation of privacy. It's not peering into somebody's window through a camera. It's people in a public place. They didn't consent to having all of their information shared on the Internet, but the technology exists to do so. So they've lost that expectation of privacy. They have no cause of action against TikTok. There's really nothing that they can bring to law enforcement, but they're being doxed. I mean, it turns out we're getting a horrible result here. So I think what Cox is arguing here, and I agree with this, is this isn't a political problem. This is a cultural problem, as our legal system and our political culture have accepted that if you're out in public, you're consenting to get your photo taken and anything that follows from that is fair game. If you are in plain view of somebody taking a picture, they can take that picture. They can use all different types of facial recognition tools to find information about you. That's too bad. It's your fault for putting yourself out there in public. And I think culturally, I don't think any of us ever explicitly consented to this type of system. And I think it's something that first, it's important for us to recognize that we don't have this expectation of privacy right now. But also, it's something where I think we could use a cultural change and maybe an account like this, which is prominent and is being featured in the story, might change the calculus on this.

Dave Bittner: Is this like a paparazzi problem? The paparazzi type of thing trickling down to regular folks like you and me?

Ben Yelin: Yeah, I mean, it is. It seems so. Joseph Cox tried to get into the motivation of why this person is doing this.

Dave Bittner: Right.

Ben Yelin: That's a great question.

Dave Bittner: It's a great question.

Ben Yelin: It's not like there's money to be made here from getting the -- I mean, maybe there is. I don't really think there is from getting the random identifying information for people that are in the background of pictures of a Taylor Swift concert.

Dave Bittner: Okay.

Ben Yelin: It seems like he's doing it for the thrill of it and just kind of for -- maybe it's sort of like a voyeurism, I don't want to say fetish.

Dave Bittner: Like kind of a flex, maybe? Like look how cool I am. Look what I can do.

Ben Yelin: Right. Look what I can do. I think that's a huge part of it. And so I think it's unmasking people for the fun of it, to show that they can use these capabilities, that this exists. There's a thrill to it when you're picking up 90,000 followers. I mean, I've gotten thrills when I get retweeted by a thousand people. And I understand the virality of it.

Dave Bittner: Yeah.

Ben Yelin: It does get into your head. And so I think there is something psychological about it. They found a way to obtain a bunch of followers, 90,000 by doing this. And I think that's kind of a tacit encouragement to keep doing it.

Dave Bittner: How is this person choosing their subjects?

Ben Yelin: So that's where the Taylor Swift angle comes in. It's kind of a bizarre side note to this whole story. But this person apparently is a big Taylor Swift fan. Many of these doxing videos include Taylor Swift's music videos and also videos of people at the Eras Tour. Otherwise, we don't have a lot of information about where this person is obtaining all of these photos. I think wisely, Joseph Cox did not name the account that is featured in the story, just because that person, despite these complaints and despite the concern expressed, does not seem interested in taking down this account. So I think Joseph Cox didn't want to feed into it, which I get. But the person is doing this without the consent of the people that he's taking pictures of. So I definitely think it's something that we need to be concerned about.

Dave Bittner: Could someone come after this person with a civil suit?

Ben Yelin: I don't think you can come after a person with a civil suit because I don't think you have a proper legal cause of action here, at least the way our legal system exists. I mean, what is the legal cause of action here? There's no type of invasion of privacy. I can't think of any tort. Obviously, this is not the government.

Dave Bittner: Is it harassment?

Ben Yelin: It's not really necessarily harassment. Now, there's one legal scholar here, a former colleague of mine, I will say, Danielle Citron, who said that this seems to violate one clear provision of the terms of service, the EULA with TikTok, which is against doxing. But I guess the legal question comes down to, is it actually doxing if you're posting this information about a lot of people and for the vast majority of them, no one takes any action and the person generally does not know that they've been doxed? So I think theoretically, you might have some type of harassment claim, but that would require more than just posting these pictures on the Internet. Somebody would have to follow through with the harassment using that information that they gained from this account, from this facial recognition software. And until that happens, I don't think you can make the claim that the photos themselves and the posting of the photos and the use of facial recognition is in and of itself a type of harassment, even if it may arguably violate the terms of service for TikTok. And most legal experts that were quoted in the story don't think that this does violate the terms of service for TikTok.

Dave Bittner: If someone were to come up with some legislation that would stop this, what do you imagine? What would be the angle that they would come at this?

Ben Yelin: It's really tough to say. I mean, I don't think there is kind of one law that would be a panacea to solve all of this. Even a federal data privacy law, it's not going to be that effective of a tool when you're dealing with something with this large of a scale. This person is posting content on almost a daily basis. So you have instances in some of these state data privacy laws where people are given a cause of action if their information is released online without their consent or if one of these third party social media companies is negligent with their information. I think if you were to develop a federal data privacy law, it just might be too hard to do a full-frontal attack on something like this, given how large the scale is. It's possible that we could use our regulatory system to go after TikTok for allowing this type of thing to take place. I don't see that as really a viable option just because, kind of as we've been talking about here, this isn't necessarily illegal. So I'm not sure that the FTC or the FCC or any type of enforcement agency would want to publicly target something that might be untoward and might cause us to be uncomfortable, but isn't actually illegal. So in that sense, I mean, I do think it's going to take a cultural change. It's about changing this widespread perception that just because we've put something out there publicly, some type of photo, that shouldn't diminish our general expectation of privacy. I think this is true in a Fourth Amendment context, and I think this is true here when we're talking about private parties, that maybe it made sense 50, 60 years ago to have that be the standard. You're in plain view, you forfeit your expectation of privacy. But that was in a different time. There's only so much you can glean from a single picture. But with the use of facial recognition technology, I mean, you're getting extremely personal, confidential, private information about somebody. So I think this is something that we just have to recognize as a culture and something that we have to change.

Dave Bittner: What about if someone, for example, had the right to demand removal of their image?

Ben Yelin: That would be good. Right now, there are a few states where there's even a mechanism to do that. But again, I mean, you're talking about an issue of scale here.

Dave Bittner: Yeah.

Ben Yelin: My guess is that most people who've had these photos taken have no idea that their photo has been taken and used in this manner.

Dave Bittner: Right.

Ben Yelin: And it's still a problem, even if the person doesn't necessarily know that their information has been taken. Maybe some type of kinetic harm can be done before they're able to file a lawsuit or petition TikTok to take down that information. So there are still going to be issues, even if we were to replicate what we have in several states that have enacted these data privacy laws, which is a way to take this information down or to request this information be taken down.

Dave Bittner: It really is fascinating. I mean, I'm imagining that we're probably not far off, if this doesn't already exist, where you or I could upload a crowd shot from a ballgame we went to or a concert or something like that into one of these facial recognition systems and just, you know, mouse over the different faces and up pops that person's name, their address, what they do for a living, what their interests are -- basically everything that The System, capital T, capital S, knows about us. Right? Because we know these databases are out there. And so it's connecting the dots, and that is not much of a reach.

Ben Yelin: Yeah. And I think one of the things Joseph Cox said, in responding to one of the comments here, is all it takes is a random person to just feel like doing this. This isn't some type of nefarious effort through a shadow organization that has political motivations. This is literally a guy doing it because it's funny and entertaining and it's a way to build his follower count.

Dave Bittner: Right.

Ben Yelin: And so I think all of us need to recognize not only that this technology exists, but even if we don't think you and I are very interesting, you could find out information about us using facial recognition. And, you know, they might find out that you work for N2K and that you like, you know, the Muppets.

Dave Bittner: Right.

Ben Yelin: But I don't think they're going to get much deeper than that.

Dave Bittner: Yeah.

Ben Yelin: But you could see how this could be a danger in situations like somebody who's been the victim of spousal abuse being tracked down, their information being posted online. So it could be doxing in that sense, which really is dangerous.

Dave Bittner: Yeah, no. And I think it speaks to how much, and I think politically we've learned a bunch of lessons about this over the past decade or so, how much of our society is built on adherence to norms rather than actual laws. Right?

Ben Yelin: Right. Exactly. And I think it's a recognition we need to have that the laws aren't necessarily going to protect us the way they exist now. So unless the norms change, and that's kind of a long-term problem, I think it's going to take our individual consciousness of this capability exists and that we need to be very careful with everything that we're posting on the Internet. The problem is it's not us, you know. In this scenario, it's not the person who's done the posting. It's just a random picture that was taken at a Taylor Swift concert. But I think all of us need to recognize that when we're out in public, we are really revealing ourselves in a way that I don't think many of us ever thought we were.

Dave Bittner: Would it be within TikTok's rights to just step up and say, hey, knock it off?

Ben Yelin: Yes. But they're not doing that.

Dave Bittner: Okay. They like the clicks as well.

Ben Yelin: Yep. For the clicks.

Dave Bittner: All right. Oh, man, it's interesting. We'll have a link to that story in the show notes. My story this week comes from Matt Stoller, who is the research director for the American Economic Liberties Project. They're an organization that, from what I gather, seeks to go after monopolies and they believe that we have too many monopolies here in the U.S. and around the world. And they're doing their part to try to come at that issue.

Ben Yelin: I mean, I was Googling monopolies and I just get it.

Dave Bittner: Yeah. Very good, Ben. Very good. All right. You get an extra cookie today.

Ben Yelin: Yes.

Dave Bittner: So Matt has a newsletter called BIG, and he wrote an article focusing on the Google antitrust trial that is going on right now. And he contrasts what's going on with the Google antitrust trial against what happened with the Microsoft antitrust trial back in the late '90s. And back then, much of the Microsoft trial was out in the open, was out in public. We had testimony from folks like Bill Gates. We had just reams and reams of documents that were publicly shared. And evidently, that is not happening here in the Google trial. The judge here, Amit Mehta, they actually quote him in a pretrial hearing in August. He was speaking to Google's attorneys and he said, "Look, I'm a trial judge. I'm not anyone that understands the industry and the markets in the way that you do. And so I take seriously when companies are telling me that if this gets disclosed, it's going to cause competitive harm. And I think it behooves me to be somewhat conservative in thinking about that issue because, you know, I can't see around every corner." And evidently, the way that this judge has approached this trial is to be deferential to Google. At least that's the way that Matt Stoller describes it. And of course, you know, he has his bias when it comes to this sort of issue. But the way he's describing it is that the judge is being unreasonably deferential to Google. And that is a public harm because we're not getting to see the inside of what's going on in this trial. And there's a lot of information in something like this that should be publicly shared, and it's not. What's your take on this, Ben?

Ben Yelin: I mean, it's one of those things where a party is going to use every advantage it can think of in the courtroom. And I think Google knows that this would be a very public trial. The Microsoft one was highly publicized. There were protests surrounding it. It was just a very public event. I think Google knows that as the sort of Goliath in this situation, that any type of visibility into Google's trade practices is going to reflect poorly on them in the court of public opinion.

Dave Bittner: Right.

Ben Yelin: I think what's concerning here is that the judge has been so deferential. I'll note, by the way, that this judge, Mehta, just incidentally -- I don't know that this has to do with anything necessarily, I just find it interesting -- he's done a lot of the criminal trials for the January 6th defendants. So he's become pretty prominent. He's actually given out some of the stronger sentences in those January 6th trials.

Dave Bittner: Interesting.

Ben Yelin: But yeah, I mean, I do think he's being probably unduly deferential here. I'm sure there are certain things that would come up in this case that would really hurt Google's competitive position. But we're also talking about a case that's grounded in anti-competitive practices.

Dave Bittner: Right.

Ben Yelin: So in some sense, I think the public deserves to know, if we're going to bring this litigation in the first place, what those practices are, even if it might hurt Google's position in the marketplace. So yeah, I do think he's being unduly deferential here. I certainly understand the impulse, and I understand why he would want to be conservative in safeguarding that information. But I do think, like Matt Stoller is saying here, this is a bit of a disservice to the public, who deserves to have information about this tech behemoth that's dominating the industry.

Dave Bittner: Matt Stoller is also pointing blame at the Department of Justice here, saying that the trial team from the DOJ should be pushing harder with the judge to make these things publicly accessible. What's your take on that? I mean, is this something where the DOJ needs to tread lightly with the judge?

Ben Yelin: Yeah, I mean, I think that's a huge part of it. They didn't support the brief mentioned in the article, which argued that portions of the trial should be made public. I think this is about just not putting themselves at a disadvantage with the judge. It's just something you have to weigh in these cases. Transparency isn't as valuable to the Department of Justice as not having a judge that dislikes you for arguing against what he's already ruled.

Dave Bittner: Right.

Ben Yelin: It seems like that's what's going on here. In the past, the judge has expressed frustration that certain exhibits in the trial have been posted publicly. And when that's happened, the government has been diligent about taking those posts down. They want to work with Google to make sure that everybody is satisfied with the process. I think this is the Department of Justice just trying to stay on the good side of the judge here and not become part of a kind of story of Google's martyrdom as this trial continues. But I do think in a perfect world, yeah, they'd be more of a zealous advocate for getting this information out publicly. The DOJ is not a media company, and it's not necessarily in their interest to have this information released publicly, although you'd think, in the general interest of justice, they'd try to do so.

Dave Bittner: Yeah, Stoller also points out that this is really potentially fuel for conspiracy theorists to say, what are you hiding, right?

Ben Yelin: Yeah, I mean, it's sort of like you always look at the genesis of how conspiracy theories develop. It's always through this type of secrecy.

Dave Bittner: Right.

Ben Yelin: Now, what bothers me is when some people see secrecy as kind of per se evidence of a conspiracy. I don't think that's happening here, but it certainly lends credence to the idea that there's something shadowy going on here. And you'd think that both parties in this case and the judge would want to avoid something like that.

Dave Bittner: Right.

Ben Yelin: But I think the judge, in weighing all of those factors, is coming down on the side of being cautious, being conservative. That is a conscious choice that he's made. I'm not sure that it is in the public interest, but it's just something that he thought would be the best way to carry on with this trial.

Dave Bittner: Yeah, and he's a judge.

Ben Yelin: He's the judge. He gets to choose, right? He gets to make the decisions. We don't.

Dave Bittner: Right, it's his courtroom. All right. Well, we will have a link to that story in the show notes. And of course, we would love to hear from you. If there's something you'd like us to discuss here on the show, you can email us. It's caveat@n2k.com. Ben, my CyberWire colleague, Simone Petrella, recently had a conversation with Monica Shokrai. She is from Google. They met up at the mWise 2023 Cybersecurity Conference and talked about a bunch of things. Some of the challenges in the industry, from Google's perspective, and specifically some of the things Google does with its own actuarial team to calculate its own risk. Here's Simone Petrella speaking with Monica Shokrai.

Simone Petrella: So maybe to kick that off, because I had no idea Google did anything related to insurance, tell us a little bit about what your roles are as they pertain to insurance, Google and Google Cloud.

Monica Shokrai: Absolutely. So I have a couple of roles in Google, mostly due to the way Google is structured. You can wear a lot of different hats and do a lot of different things. The first thing that I do, which is not directly related to Google Cloud, is I actually lead Alphabet's actuarial team. So we quantify risk to understand the risk within Google, and we use that for insurance purchasing decisions. Second, I lead risk and insurance for Google Cloud, as I mentioned earlier, and that role is more of a risk manager role. So trying to understand the cloud business, the risk that it presents, and making sure I translate that to our insurance policies. The third one, which is more relevant to, I guess, customers within the Google Cloud space, is I lead a program called the Risk Protection Program that offers cyber insurance to Google Cloud customers, and is really trying to bring the cyber insurance industry forward.

Simone Petrella: Now, that's really interesting. Now, before we get to that, because I want to come back and talk about it, but how unusual is it for a company that's not an insurance company to have their own actuarial team that's actually collecting data to sort of compare against their own purchasing power?

Monica Shokrai: That's a great question. So very unusual, highly unusual. The reason that I would attribute to Google having an actuarial team is that our risk manager, Loren Nickel, who runs all risk and insurance for Alphabet, is an actuary himself. And so, coming from an actuarial background, you can understand the power that actuaries can bring to helping make insurance purchasing decisions. But when you look at risk managers across the board, other tech companies, other large companies, generally speaking, a lot of times actuarial functions are outsourced. What we found within Google is that we started as a team that was trying to better understand risk, again, for insurance purchasing decisions. But risk quantification as a field and as a discipline is really gaining a lot of traction within the cyber insurance space, within the cybersecurity space, across the board. And so, we actually work with a lot of different product areas within Google as well, so that we better understand their risk, they better understand their risk, and that we can translate that. So it's a growing team, and it's interesting to have internal.

Simone Petrella: So I think to start on the kind of realities of the insurance industry in the market, because that's kind of one of the areas where risk quantification has become so critical and important as companies are really struggling in the current state of the market to either retain policies or get new policies for cyber insurance, can you just tell us a little bit broadly about what that landscape looks like now, and where do you see it going?

Monica Shokrai: Yeah, absolutely. So cyber insurance market, as you've mentioned, has been through quite a bit. It's an immature industry within the insurance space. And you can tell that because coverage is changing daily, premium is changing daily -- or maybe not daily, but regularly, right? And so, they're still at a point where they're trying to mature the coverage and make it more stable over time. I think as many know, no surprise, 2020, 2021 ransomware became a big issue for the insurance industry. Prior to that, as an actuary, the way that I would explain the way they were looking at cyber insurance was that it was a low-frequency, high-severity line of business. As soon as the ransomware epidemic started happening, it became a higher-frequency, lower-severity line of business. And so that really messed up pricing.
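
In actuarial terms, the pure premium that anchors pricing is expected claim frequency times average severity. A toy calculation with invented numbers shows why the regime shift Monica describes broke existing models:

```python
# Toy illustration, with invented numbers, of why the ransomware-era shift
# from low-frequency/high-severity to high-frequency/lower-severity losses
# broke cyber insurance pricing. Pure premium = expected frequency x severity.

def pure_premium(frequency_per_year: float, avg_severity: float) -> float:
    """Expected annual loss per policy, before expenses and profit load."""
    return frequency_per_year * avg_severity

# Pre-2020 view: rare but catastrophic events.
old = pure_premium(frequency_per_year=0.002, avg_severity=5_000_000)  # $10,000

# Ransomware era: far more claims, individually smaller.
new = pure_premium(frequency_per_year=0.05, avg_severity=750_000)     # $37,500

print(f"old: ${old:,.0f} per policy, new: ${new:,.0f} per policy")
# A book priced on the old assumption collects under a third of the new expected loss.
```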

Simone Petrella: Yeah.

Monica Shokrai: Years ago, they struggled to sell cyber insurance. They were trying to convince buyers of the need. And then suddenly, we're now in a market where it's hard to actually buy the coverage, right?

Simone Petrella: Yeah.

Monica Shokrai: So it's been through quite a bit. In terms of trends, I think there's a growing trend to get better data that is more directly related to the risk. And that also comes from some sort of a scan, whether it be internal or external. That's a big trend we're seeing in the market. The other big trend, which as a risk manager or buyer of insurance, I'm not a huge fan of, is that in order to sell the coverage, they started including everything in it. There's a misperception from my perspective that cyber insurance doesn't cover claims. That comes from the fact that other insurance policies, like property policies, had cyber embedded into them. And there were exclusions that became court cases over the years. Cyber insurance actually is pretty broad. But now, they're trying to think about what sorts of systemic events might happen that they should exclude from policies because insurers can't withstand a catastrophic event that could affect many policyholders at once. And so, that was a very long answer, but we're starting to see more exclusions pop up.

Simone Petrella: So is it fair to say that that evolution of where we are in the market is kind of the reason that companies and risk managers who are actually charged with getting cyber insurance policies for their organizations are looking at much longer audit reports, questionnaires, just to be able to provide the baseline data to even potentially qualify for a policy?

Monica Shokrai: Yes. It's not necessarily exclusion-driven. It's driven by the uptick in claims. And so insurers come out ahead compared to others based on risk selection. So those that can select the best risks do the best over time. And so what happened is, as losses started to uptick quite a bit, they started to ask a bunch more questions. You normally have, let's say, a 40-page PDF that has a bunch of questions that was standard. And then, with the ransomware epidemic, ransomware supplemental applications started coming up, and you're getting different ones from every single company. And it's been really tough.

Simone Petrella: Yeah. Well, then on the reverse, even from my experience and background in the cybersecurity industry, there's kind of the vendor side of this, where the holy grail is how you can think about being part of the quantification of risk in one of those questionnaires that gives actuaries and insurance providers enough comfort and visibility to be able to either issue a policy or potentially give you a better deal on a premium. Is that something we're still seeing? And are you seeing any kind of progress on that side today?

Monica Shokrai: Yeah, absolutely. So I think that there's an interest across the board for most security providers or most technology providers really to enter into that work stream from a quantification perspective. What we're seeing is that there's an uptick of different providers being used, but in order for an insurance company or an actuary to make use of that data, a couple of things have to happen. One, there has to be enough market share, where enough of my policyholders are going to give me those metrics up front, and then I need enough time for the losses to come in to start to correlate how do these metrics actually impact losses, right? And so that first part up front actually restricts the number of providers that can really make a difference in that space, because you need consistent data that you're getting the same way every time that you can then use, and that's a little bit harder. And I think where a lot of tech providers get tripped up is how do you convince the insurance market to utilize this tool that might not be used across the board?
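
The statistical bottleneck Monica describes can be stated simply: an insurer needs many policyholders reporting the same metric, plus developed loss experience, before the metric-to-loss relationship is measurable at all. A toy version of that check, with invented data:

```python
# Toy version, with invented data, of the check an actuary needs before a new
# security metric is usable for underwriting: enough policies reporting the
# metric AND enough realized losses to measure the relationship between them.
import statistics

# (security_metric_score, total_claims_paid) per policy -- all numbers invented.
policies = [
    (0.9, 0),
    (0.8, 0),
    (0.7, 120_000),
    (0.4, 600_000),
    (0.3, 450_000),
    (0.2, 900_000),
]

scores = [score for score, _ in policies]
losses = [loss for _, loss in policies]

# Pearson correlation (Python 3.10+); strongly negative here: higher scores,
# lower losses. With only a handful of policies, or with losses that haven't
# developed yet, this estimate would be too noisy to price on.
print(statistics.correlation(scores, losses))
```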

Simone Petrella: And each insurer has their own data collection that they're using to kind of track and monitor this, so it's not even like you can apply it to one insurer and then hope that that somehow translates to another.

Monica Shokrai: Exactly. That's right. There are certain vendors, specifically with outside-in scanning -- so the slogan might be like, see what a hacker sees -- that are gaining traction in the market. I can't say absolutely, but most insurers will use one or another provider that does some sort of outside-in scan, and there's a lot of traction there. The benefit there is that they don't need the customers to adopt those vendors. They can use them without the customer's consent, really. And so that's been a little bit more successful than inside-out data or scanning.

Simone Petrella: Interesting. So you mentioned the third part of your role now is around the risk protection program, and that's something that you're working with Google Cloud customers on. So I didn't know that Google was in the insurance business, and maybe I've mischaracterized it. Can you tell me a little bit more about what that really means?

Monica Shokrai: Yeah, absolutely. So just to clarify on our position, I see us more as a data provider, and I see us more as a customer advocate, but that doesn't mean we're not in the insurance industry. We're just not providing insurance. So what we do with the program is that we have a tool called Risk Manager. It's embedded within Security Command Center, another tool of ours, that scans a customer's cloud environment for metrics indicative of risk. Customers can use that with or without insurance just to understand their own risk. I mean, these are metrics that a CISO and their security team will normally use in their normal course of business. What we've done is that through a couple clicks, you can send that data directly to Allianz and Munich Re, and they'll start to use it to quantify the cyber insurance premium that they'll offer, and the policy that they'll offer is called Cloud Protection Plus, which provides broader coverage for Google Cloud. So the goal of this program is not to share data back and forth, right? That's not really what we're trying to get after. What we're trying to get after is recognition of the right security practices from an inside-out perspective. So if you think about a security team, they're faced with a bunch of toil -- a bunch of questions and metrics, alerts that they're seeing day-to-day -- and they might not know what matters more and where to prioritize their time. Once we link that to insurance and we have almost a feedback loop of, oh, if I do remediate, I'm going to get better premium or better recognition for this action, we start to create a loop where we're incentivizing better security practices, utilizing insurance. And so, that's one thing we're trying to do, right? Focus the industry on how do we improve security the most. The other thing we're trying to do is that we spend a lot of time securing the cloud. So Google as a whole, pre-Google Cloud, has a lot of security intelligence just from our main business, right? That's been transferred into Google Cloud and how we operate. And we want to allow our customers to benefit from our investments in security. So what we do is we work with the insurance industry so that they can better understand Google Cloud, which otherwise might be opaque for them, so that they can then help reward our customers for their usage of the platform.

Simone Petrella: That makes a ton of sense. How much of that is also like a transfer of the risk too, since you've done so much work to secure the cloud for Google Cloud? And then when customers come in, I know one of the challenges with migrating to a cloud environment is you have to do the configurations and you work in partnership with your cloud provider, but you are taking on some of that risk either contractually or in execution anyway. So is this kind of meant to cover both sides essentially?

Monica Shokrai: Yeah. So what you're touching upon, at least how I'm perceiving it, is this idea of shared fate that we have, which actually was announced with the program. So Google Cloud has moved from the idea of a shared responsibility model to focusing more on shared fate with our customers. What do we mean by that? Not only are we going to start with secure blueprints, as you've mentioned, so that once customers come onto the cloud, we're setting them up securely from day one. We have security products and a security product suite that they can then use for their day-to-day. And then now with insurance, not only are we providing them with tools, but we're helping provide them with an outcome. And hence the term shared fate. And when we think about an outcome, yes, we're partnering with insurers, but we work with the insurers to make sure that we're comfortable with the policies that they're offering and the way that they understand things. It's not a direct risk transfer from a Google perspective.

Simone Petrella: Yeah.

Monica Shokrai: But it's more of an investment to help bridge the gap in areas where there's friction in the market without our investment. And we're doing this ultimately for the ease of our customers.

Simone Petrella: I know I asked this when we were chatting beforehand, but I want to bring it back up because it's a topic near and dear to my heart. But when we go back to that topic of what are the risk calculations, and how do you have the right data in order to think about risk mitigation and what would actually allow us to underwrite policies in the insurance industry, it always strikes me that we talk about things like two-factor authentication or multi-factor authentication and having endpoint protection as absolute necessities. And we're even seeing that with some of the people side of things, because we see questions on survey questionnaires for insurance around, do you have a training awareness program? Are you doing the baseline training to have a qualified team to implement these controls? But these are really, still to this day, yes or no questions. Do you see that changing at any point to provide more quantitative or even qualitative data to kind of help inform this actuarial challenge we have in cyber insurance?

Monica Shokrai: I do. I think when you talk about personnel in particular, I haven't seen much in the industry to date. And that's just, again, my experience, but I haven't seen much from a quantitative perspective enter that space. That being said, on a broader level, I think it's absolutely the trend. I meet with CISOs all the time to talk about the Risk Protection Program. And the feedback they have is, why are they asking me a yes or no question that I can't answer in a yes or no way, right? And so I think over time, we'll have a lot more data-driven metrics that are pulled from within the customer's environment. The challenge is similar to what I mentioned before, coming to an agreement on what those metrics should be. And then when you think about insurance, we're always trying to make sure anything we're using to underwrite risk is directly proportional to loss and that you really understand that relationship. And so that needs to be proven out over time. The way, as an actuary, that we look at that correlation is: does it impact frequency? So for example, you think about a security program, the more secure you are, the less likely you are to have an event. So that impacts frequency. Or does it impact severity? Severity could be, I have backups, right? So if something happens, I'm more likely to get up and running because I have a backup. Whenever we're thinking about new metrics for the industry, if you can position it to them as, this impacts frequency in this way or severity in that way, it's a little bit easier for them to digest and think about how to incorporate it.
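
A sketch of how an underwriter might encode that frequency-versus-severity framing follows. Every baseline number and control multiplier below is invented for illustration; these are not real rating factors from any insurer.

```python
# Hypothetical rating sketch: security controls as frequency or severity
# modifiers on a baseline expected loss. All numbers here are invented.

BASELINE_FREQUENCY = 0.05    # assumed expected claims per policy-year
BASELINE_SEVERITY = 750_000  # assumed average loss per claim, in dollars

# Controls that make an incident less likely reduce frequency; controls that
# soften the blow of an incident reduce severity.
FREQUENCY_FACTORS = {"mfa_enforced": 0.7, "endpoint_protection": 0.85}
SEVERITY_FACTORS = {"tested_backups": 0.6, "incident_response_plan": 0.9}

def expected_loss(controls: set) -> float:
    """Expected annual loss per policy, given the set of controls in place."""
    frequency, severity = BASELINE_FREQUENCY, BASELINE_SEVERITY
    for control in controls & FREQUENCY_FACTORS.keys():
        frequency *= FREQUENCY_FACTORS[control]
    for control in controls & SEVERITY_FACTORS.keys():
        severity *= SEVERITY_FACTORS[control]
    return frequency * severity

print(expected_loss(set()))                               # 37,500: no controls
print(expected_loss({"mfa_enforced", "tested_backups"}))  # 15,750: 0.035 x 450,000
```

This is why answering "yes" to an MFA or backups question can move a quoted premium: each answer maps, however crudely, to one of these two levers.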

Simone Petrella: Yeah, but to the point that we've made earlier, you kind of have to make that point to every individual actuarial team within the various insurers because there's not, as far as I know, a consolidated --

Monica Shokrai: Yes, that's right. Your friend in this would be brokers. So there are brokers that are working with insurers to help give them a better understanding of what controls matter. For example, I think Marsh published the 12 top controls that insurers should care about, and they will help with that conversation and that dissemination of information. But it's the same point with brokers. There's a lot of different brokers out there. And so, yeah, there's not a one-to-one relationship, and there's not a group that's agreeing on exactly how to measure risk.

Simone Petrella: Well, maybe we'll leave it here with this: what's the one piece of advice, if someone could walk away and say, what's the one thing I need to do to navigate what is often a really complex cyber insurance market? What would that be?

Monica Shokrai: It's a good question. So I think I would say try to provide as much information and transparency as possible, at least upfront, to try to have that conversation, and start early. Because insurers, when you're working with them, will actually give customers feedback like, oh, if you implement X, Y, and Z, we're happy to insure this risk. And so getting that information up front where you can have that time to implement the changes would be really helpful. And then also work with the insurance industry to show them, if I'm a security professional or a CISO, what I think matters. Because in your individual insurance renewal, it might feel like extra work. But at the end of the day, if they continue to see that from CISOs in an organized approach, I think that will help generate more change.

Simone Petrella: Great. Well, Monica, thank you so much for taking the time today. Really appreciate everything. And I hope you're enjoying the rest of your mWise conference.

Monica Shokrai: Thanks so much. mWise has been great. And yeah, I really appreciate your time. Thanks.

Dave Bittner: Ben, what do you think?

Ben Yelin: Was that Simone's first interview for "Caveat"?

Dave Bittner: It may have been. Yeah, she's done a number of great things over on the CyberWire podcast, but I think that's her first time joining us here.

Ben Yelin: Well, it is an honor to have her do the interview. And I guess my invitation to the mWise 2023 Cybersecurity Conference and yours must have been lost in the mail.

Dave Bittner: Yeah, could have been.

Ben Yelin: But Simone did a great job. I thought it was a really interesting interview and I learned a lot.

Dave Bittner: Yeah, absolutely. All right. Well, again, our thanks to Monica Shokrai for taking the time for us and to Simone Petrella for sharing that interview with us. That is our show. We want to thank all of you for listening. N2K Strategic Workforce Intelligence optimizes the value of your biggest investment, your people. We make you smarter about your team while making your team smarter. Learn more at n2k.com. Our senior producer is Jennifer Eiben. This show is edited by Trey Hester. Our executive editor is Peter Kilpe. I'm Dave Bittner.

Ben Yelin: And I'm Ben Yelin.

Dave Bittner: Thanks for listening.