Caveat 7.6.23
Ep 178 | 7.6.23

To pay, or not to pay, that is the question.

Transcript

Mark Lance: These cyber criminals are always going to continue to evolve and look for ways to make money, and right now they're doing it effectively, so I don't think we're going to see it going anywhere for the time being.

Dave Bittner: Hello everyone and welcome to "Caveat," the "CyberWire's" privacy, surveillance, law, and policy podcast. I'm Dave Bittner, and joining me is my co-host, Ben Yelin from the University of Maryland Center for Health and Homeland Security. Hello, Ben.

Ben Yelin: Hello, Dave.

Dave Bittner: Today Ben looks at a surprising federal district court decision that limits the Biden Administration's contacts with big tech companies. I've got the story of research questioning the veracity of AI detectors, and later in the show, my conversation with Mark Lance from Guidepoint. We're talking about ransomware policy negotiations and payment impacts. While this show covers legal topics and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on many of the topics we cover, please contact your attorney. Alright Ben, we've got some good stuff to share this week. I know you've got an interesting one here. Why don't you kick things off for us?

Ben Yelin: So while we were enjoying our 4th of July barbeques, a Federal District Court judge in Louisiana by the name of Terry Doughty, an appointee of former President Trump, issued a ruling limiting Biden Administration officials' contact with social media companies. This is a preliminary injunction, but at least for the time being, it prohibits members of a bunch of different federal agencies from contacting these social media companies for the purpose of recommending restricting content, really for any reason.

Dave Bittner: Okay.

Ben Yelin: This was a very broad-based lawsuit. It was filed not only by some private individuals, and I don't want to necessarily disparage them, but basically by a lot of anti-vaxxers and other people who were canceled on social media for having views that didn't align with the federal government or, in some cases, reality.

Dave Bittner: So people who are, fair to say, at the fringe of some things.

Ben Yelin: Yes.

Dave Bittner: Okay.

Ben Yelin: Yeah, especially on things like COVID policy, vaccines--

Dave Bittner: I see.

Ben Yelin: Then the states of Missouri and Louisiana also joined the lawsuit, through their attorneys general, saying that they had an interest in fostering free speech in their own states, and that it was in the states' direct interest to join this lawsuit and limit the ability of federal agencies to interfere with social media companies, or what they saw as interfering with social media companies.

Dave Bittner: Okay.

Ben Yelin: This is a First Amendment claim. The reason this claim is problematic to me, and this decision is problematic to me, is that ultimately we're talking about the decision of a bunch of different private companies: Twitter, Meta, Google, YouTube, and others--

Dave Bittner: Yeah.

Ben Yelin: --to take down content that they don't want to have on their platform. And they are protected in those decisions by Section 230 of the Communications Decency Act.

Dave Bittner: Right.

Ben Yelin: The allegation in this case is that the companies were acting at the behest of various government officials, therefore they were acting in concert with or through coercion from the government, and the plaintiffs here have a First Amendment interest, thus the court order requiring the ceasing of all contact between government agencies and these tech companies. I'll get into why I feel that is a little bit of a problematic viewpoint in just a second. So they go through all of these examples, and they did a lot of discovery in this case, of emails between various Biden Administration officials, actually some Trump Administration officials too, it goes back a few years.

Dave Bittner: Right, because this was sort of, this kicked off because of the pandemic, right?

Ben Yelin: Yes, kicked off in 2020 because of COVID. Not everything in the case is related to COVID, there's a lot related to Hunter Biden's laptop. I mean--

Dave Bittner: Oh, there's something for everyone here.

Ben Yelin: Yeah, to understand this case, you do kind of have to be in the FOX News cinematic universe.

Dave Bittner: Okay.

Ben Yelin: And I clearly believe that this judge is in that universe.

Dave Bittner: Okay.

Ben Yelin: He's just well-versed in these types of topics.

Dave Bittner: Alright.

Ben Yelin: So there are all these conversations, some including rather explicit language where various government officials are pleading with these tech companies to take down false information about vaccines, sometimes they're sending emails saying, what the eff are you doing? Like this is very dangerous, you have to take down this information.

Dave Bittner: Right.

Ben Yelin: The social media companies are responsive, they're saying, you know, we're trying to handle your request, give us some time, we want to work with you. That's the general gist of these conversations. If you're just looking at this objectively, in my view, it's basically protected free speech on behalf of the government and the tech companies themselves. You can disagree with the contents of their conversation, but it is merely that, a conversation about what the government thinks is in the best interest of the public. And they are pleading with these tech companies to assist them in a government-wide effort to crack down on misinformation. Again, you can disagree that this is misinformation, many people do, and you can disagree that the government should be involved at all in this, but where the problem comes in for me is this idea that it was coercion. So how does he argue that it was coercion? He does that in a number of ways; one of them is saying that--

Dave Bittner: This is the, "he" is the judge.

Ben Yelin: He being the Judge Doughty, yeah.

Dave Bittner: Okay.

Ben Yelin: One of the ways he argues coercion is by saying that the government implied, through various remarks in private conversations, that there would be consequences for the tech companies if they didn't comply. All of the language to that effect was very vague, and I think it's definitely a stretch to interpret it as some kind of threat. Oftentimes it was the government just saying something like, "we need to do something about this, like you guys aren't being cooperative, something needs to be done about this."

Dave Bittner: Right.

Ben Yelin: Whether you view that as a specific threat, sufficient enough to limit direct contact between social media companies and a presidential administration, that just seems to me to be a stretch. The other thing he mentioned is that there was a congressional effort at play, potentially, to take away Section 230 protection for these companies, and that in and of itself was an implicit threat causing coercion. But in my view, nothing in the conversations in the record really indicates that type of quid pro quo, where the government says, "you cooperate with us or we're taking away your liability shield." That just never happened. Separately, there was an effort in Congress, among members of both parties, to cut back Section 230 protection for these tech companies, but there was just no evidence that that was related to these requests to take down information. So where we are now is, because of this preliminary injunction, which applies nationwide, members of, for example, NIH or HHS cannot communicate with executives from Twitter, not that Twitter would communicate with them these days, or Meta or Google, about false information and the need to take down false information. That would be a violation of this court order. This is just a preliminary injunction; theoretically this judge will hear the full case, they'll have a trial where he'll be able to consider evidence, and the Biden Administration will bring the best Justice Department lawyers to the table to argue their case. I'm sure the Biden Administration is going to pursue an appeal to the federal circuit court, which, once again, I believe would be the 5th Circuit Court of Appeals, which is extremely conservative.

Dave Bittner: Yeah.

Ben Yelin: And eventually that could make its way up to the United States Supreme Court if the decision is sustained. And in my view--

Dave Bittner: We look forward to a 6-3 decision.

Ben Yelin: Yeah. I think we could probably see where that is going.

Dave Bittner: Right.

Ben Yelin: I just think you're really inhibiting the ability to have a constructive dialog between government officials and big tech companies about how to be responsible regarding false information, whether that information is about vaccines or about election conspiracies. At the very least, I think the dialog is productive, and to prevent even that dialog, even if you're not demanding or, you know, coercing these big tech companies to take down content, the fact that we're limiting this dialog is not only bad from a policy perspective in my view, but in and of itself a major limitation on free speech. So I think in trying to protect free speech, this judge is really inhibiting free speech. And that's why I just think this was a puzzling and problematic decision. Also note, he could have released this decision any day; I believe the final briefs in this case were due in May. He decided to release it on the 4th of July, which I think was on purpose, just as kind of a political message: First Amendment, America, freedom.

Dave Bittner: You think, really?

Ben Yelin: Yeah. And I just am frankly a little bit cynical about that.

Dave Bittner: Okay. So one thing that caught my eye here is that among those prohibited from communicating is CISA Director Jen Easterly. This is the organization that is tasked with protecting our nation and our organizations from cyber-attacks. So to mute the leader of that organization from communicating with the organizations she is tasked with helping protect, and by extension, protecting the security of you, me, our nation, and dare I say, the world? This seems short-sighted to me.

Ben Yelin: It certainly does seem short-sighted. I think a major secondary impact of this decision is going to be a major inhibition on information sharing as it relates to cybersecurity.

Dave Bittner: Yeah.

Ben Yelin: Even if it's not a direct prohibition as it relates to CISA, I mean, I think they can get around this ruling with public meetings and by not directly discussing some of the various prohibited items here.

Dave Bittner: Right.

Ben Yelin: It still could have a chilling effect, where suddenly Jen Easterly and CISA are just concerned about having this conversation, which is bad in and of itself.

Dave Bittner: Right. Those back channel conversations are important, right?

Ben Yelin: Right, especially since, I mean, one of the goals of CISA is to foster information sharing.

Dave Bittner: Right.

Ben Yelin: And information sharing on potential issues. So yeah, the fact that CISA was even involved in this litigation is problematic. I mean kind of everyone was involved in this litigation, a lot of different plaintiffs had a lot of different complaints about a lot of different defendants.

Dave Bittner: Yeah.

Ben Yelin: And you know, I think we're going to end up seeing a ruling here that's extremely broad in how it's being handled. I think it applies to far more potential conversations than any of the parties involved might have anticipated and that's another secondary effect of this decision.

Dave Bittner: Part of what fascinates me about this is that, you know, during all the controversy with the election, you know, former President Trump's accusations and all that kind of thing, when we were talking about these platforms, you and I discussed many times how these platforms are private companies. A lot of the criticism that people would wield against these companies was that, through their moderation, through their censorship, they were violating the First Amendment. And you and I would say over and over again, that's not what the First Amendment does. The First Amendment is there to protect us from the government.

Ben Yelin: But what do we know? I mean.

Dave Bittner: Well no, but that's what I'm getting to, though. That's what this is going at, right? They're saying that's the exact thing that's going on here, that the government is having undue influence, and that's violating the First Amendment rights of the platforms and, by extension, their users. That's the argument they're making, right?

Ben Yelin: Yeah, that is the argument they're making. And I just think that's an argument that goes too far. I think they are drawing lines and connections that are undue; these things shouldn't be connected. Even the court admits in its decision that the standard to show coercion is relatively high. It has to be something where there's a direct connection between the government's action or implied threats and the response of these tech companies. Now, he does say that even if the tech companies were to make these decisions anyway, coercion can still be a valid cause of action here. It still could be an inhibition on free speech.

Dave Bittner: Right.

Ben Yelin: I just think expanding this definition of coercion to something so broad, where even a discussion or recommendation or, you know, a plea to these companies to change their practices qualifies as that type of coercion, is a huge stretch legally, and I think most legal scholars agree that that is just a bridge too far. If all different types of government actions, normal, everyday conversations between government officials and private sector industries, are going to be seen as coercion, even if there's no explicit threat, then we're going to have a very difficult time having public-private partnerships and engagement between federal agencies and companies on things like cybersecurity. So yeah, I really do think it's dangerous in that respect.

Dave Bittner: Do you think this is the kind of thing that could come back and bite them in the butt? If they get what they want through this case, could there be unintended consequences? Let's say things shift the other way, we have a different person, a different party in power in the White House, and suddenly they can't communicate with the social media platforms in ways that they want.

Ben Yelin: Yeah, I mean, certainly if the shoe were on the other foot, I could definitely see that being a possibility.

Dave Bittner: Yeah. But I guess it's fair to say that if you're looking to get rid of misinformation, right, which is what we're trying to do here, public health, you know, those kinds of things, anti-vax types of things, I guess it's fair to say the majority of that is coming from one side.

Ben Yelin: Yeah, and what's interesting is the judge in this case kind of reframes that issue and says, "all this bias that we're alleging in this case seems to be happening against political conservatives." Well, I think the other side would say, "that's because that's where the misinformation is coming from."

Dave Bittner: Yeah, although it's fair, I mean a lot of anti-vax stuff comes from the left.

Ben Yelin: Yeah. Certainly. It was a movement that started on the left, with people like RFK Junior, current presidential candidate.

Dave Bittner: Right.

Ben Yelin: And I do think overbroad restrictions on social media are a valid public policy concern; these restrictions can be overbroad. But I just think you're trying to address this problem with a sledgehammer instead of a scalpel, and you're taking action that's going to have a more deleterious effect than simply having a constructive dialog with these tech companies about, you know, overbroad regulation. Even something like the Twitter files, which I laughed at and disagreed with to a large extent--

Dave Bittner: Yeah.

Ben Yelin: At least that was just a conversation. It was a response from individuals online saying what Twitter was doing was overbroad, was overly restrictive, and we should have a constructive conversation about it and work together to come up with a better solution. But this is not that. I mean, this is the full force of the judicial branch coming down on these companies, and I think that's just going to be a problem.

Dave Bittner: Yeah.

Ben Yelin: It's going to be a problem.

Dave Bittner: What's our timeline here for this playing out?

Ben Yelin: It's going to take a while. We have this preliminary injunction, maybe we get a full hearing and a complete decision by the end of this calendar year, probably goes to the Court of Appeals, maybe that's another year. I can see this being two or three years down the line before we get to whether the Supreme Court even decides to hear it. And depending on what happens on appeal, either in a full hearing in the District Court or in the Court of Appeals, it's kind of 50/50 whether the Supreme Court would take it up.

Dave Bittner: Yeah. What about the injunction itself? Is, does the Biden Administration have an-- are they appealing that to try to get that lifted in the meantime?

Ben Yelin: Yeah, they haven't yet, but it is almost a certainty that they will, and I can see that happening in the next couple of days. I don't think they're just going to stand by. They've already released statements through the Justice Department saying they disagree with the decision, and I could certainly see them seeking to appeal it, probably by the time this podcast airs.

Dave Bittner: Yeah. I guess, why now? Why the force of an injunction like this? What made this judge decide that this needed to happen at this moment? You're making the case that this is mostly just political posturing?

Ben Yelin: I don't want to cast aspersions on this judge; I will note he was a Trump judge who was confirmed 98-0.

Dave Bittner: Okay.

Ben Yelin: So I think he was a respectable figure, but it is hard to figure out why now. I mean I guess he would say, this is when the litigation came before him, but you know, I don't have a great answer to that question.

Dave Bittner: I guess my question is, why an injunction rather than letting the case play out? What was so important, in his estimation, that he needed, from the bench, to put a stop to this immediately, before the case makes its way through, right? It was that important.

Ben Yelin: I mean, if you really believe this is an inhibition on First Amendment rights, and that it would cause irreparable harm, that is the standard for a preliminary injunction. If you believe that the plaintiffs would succeed on the merits, and that this would cause irreparable harm, as he seems to believe here--

Dave Bittner: Yeah.

Ben Yelin: --then a preliminary injunction makes sense. I don't happen to believe either of those things.

Dave Bittner: Right.

Ben Yelin: So I think it doesn't make sense from my perspective, but from the judge's--

Dave Bittner: You're not a judge.

Ben Yelin: Yeah, from the judge's perspective, yes.

Dave Bittner: Here from the cheap seats, right?

Ben Yelin: Exactly.

Dave Bittner: Yeah, yeah. Alright, well it's certainly an interesting one and obviously we will keep a close eye on this one. It's fascinating, right?

Ben Yelin: It sure is, yes.

Dave Bittner: Alright, let's move on. My story this week, a little lighter, but also something important. This has to do with artificial intelligence and detecting the use of it. Let me start out by asking you, Ben, because you are a professor.

Ben Yelin: Yes, I am.

Dave Bittner: You teach law and you have many students. So let's rewind the clock a little bit, before any of this AI stuff hit the world, before ChatGPT, you know, captured our imagination. You surely had cases where you suspected that a student may have been cheating.

Ben Yelin: Yeah, many, many times. Usually what would happen is I would catch them writing in a different font on our, like, Blackboard page.

Dave Bittner: Oh.

Ben Yelin: And as I get to know the students well and understand what their writing style is, when the writing style does not match what they've copied and pasted, I'll become suspicious, for sure.

Dave Bittner: Okay. And how would you address that?

Ben Yelin: I would copy and paste the entire passage and put it into a Google search, and see if I got a direct match with some type of secondary source, and unfortunately, very frequently I would get that match.
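
A minimal sketch of the kind of check Ben describes, in Python. It assumes you already have the suspected source text in hand and uses the standard library's difflib for a rough similarity score rather than an actual Google search; the sample passages and the 0.8 threshold are purely illustrative, not part of his process.

```python
# Rough offline approximation of the manual check described above: compare a
# suspicious passage against a candidate source text. Standard library only;
# an actual web search is out of scope for this sketch.
from difflib import SequenceMatcher

def similarity(passage: str, source: str) -> float:
    """Return a 0-1 ratio of how closely the passage matches the source."""
    return SequenceMatcher(None, passage.lower(), source.lower()).ratio()

# Hypothetical texts for illustration only.
student_answer = "The exclusionary rule bars evidence obtained in violation of the Fourth Amendment."
suspected_source = "The exclusionary rule bars the use of evidence obtained in violation of the Fourth Amendment."

score = similarity(student_answer, suspected_source)
print(f"Similarity: {score:.2f}")
if score > 0.8:  # the threshold is a judgment call, not a standard
    print("High overlap; worth a closer look.")
```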

Dave Bittner: I see. Then what?

Ben Yelin: Then I would go through the official law school disciplinary process. I would usually give the student one chance to correct the behavior, and if it happened again, I would refer the student to the relevant academic committee; they would conduct an investigation and decide what the appropriate punishment would be.

Dave Bittner: I see.

Ben Yelin: It's just according to whatever the institution's honor code is, and our honor code obviously is adamantly against this type of plagiarism without attribution, with consequences ranging from failing the class to, if there are multiple instances, expulsion.

Dave Bittner: I see, yeah. So, this article caught my eye. This is written by Janelle Shane, who writes a blog called "AI Weirdness," where they track the goings-on of different AI issues. And in this blog post they're noting some research, a study that I believe was from Cornell, that was looking at AI detectors. So similarly to how you were saying you could copy and paste something into Google to see if you get a hit on it, there are a number of tools out there now that claim to be able to detect whether or not something was generated by AI. And what this study did was put text from non-native English speakers into these detectors, and they found that these tools flagged non-native speakers' writing as being AI-generated between 48 and 76 percent of the time. This compared to zero to 12 percent for native speakers.

Ben Yelin: Yeah, that's a problem, isn't it?

Dave Bittner: You think?

Ben Yelin: Yeah.

Dave Bittner: You think? So obviously we have a huge false positive issue here. Now, you and I have talked before about false positives with things like facial recognition. It seems to me like, you know, any of these automated systems have these huge problems with false positives, and just in the same way the facial recognition systems seem to have trouble with people of color, these systems have trouble with folks who are non-native English speakers.
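
To make those quoted error rates concrete, here is a minimal back-of-the-envelope sketch in Python. Only the 48 to 76 percent and zero to 12 percent figures come from the study as described above; the class sizes, and the assumption that every essay is honestly human-written, are hypothetical.

```python
# Expected number of honest essays wrongly flagged as AI-generated, using the
# false positive rates quoted in the segment. Class sizes are hypothetical.

def expected_false_flags(num_students: int, false_positive_rate: float) -> float:
    """Expected count of human-written essays flagged as AI-generated."""
    return num_students * false_positive_rate

non_native = 20  # hypothetical number of non-native English speakers
native = 20      # hypothetical number of native English speakers

# Quoted rates: 48-76% for non-native writers, 0-12% for native writers.
low = expected_false_flags(non_native, 0.48) + expected_false_flags(native, 0.00)
high = expected_false_flags(non_native, 0.76) + expected_false_flags(native, 0.12)

print(f"Expected wrongly flagged essays: {low:.0f} to {high:.0f} of {non_native + native}")
# With these assumptions, roughly 10 to 18 of 40 honest essays would be flagged.
```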

Ben Yelin: Once again, our machine overlords are just as biased as we humans are. Sometimes even more biased.

Dave Bittner: Right.

Ben Yelin: But yeah, I mean, this is going to have consequences, because somebody who is a non-native English speaker is going to be accused of some type of academic violation, plagiarism, based on a flaw in the software.

Dave Bittner: Yeah.

Ben Yelin: And that's a really bad outcome. I mean, it's completely unfair for somebody to face that accusation when they never used ChatGPT or any other AI for that matter.

Dave Bittner: Right.

Ben Yelin: So, that's where it becomes problematic.

Dave Bittner: The author of this blog post took a paragraph from their own book, so previous writing that they'd done, ran it through a detector, and the writing was flagged as being likely AI-written. So then the author took that same passage, ran it through ChatGPT and said, rephrase this please, and then ran it through a detector, and the detector said no, this is probably human-written. So, the total opposite of what they were trying to achieve, right?

Ben Yelin: Yeah, it's more useless than if you had just flipped a coin, 50/50 at random, to determine whether it was AI-generated or not.

Dave Bittner: Right. So, let's get back to the original thing here though, I mean you, as a professor, you've certainly thought about this.

Ben Yelin: I've thought about it a lot, yeah.

Dave Bittner: Yeah. Where do you stand right now when it comes to using these tools?

Ben Yelin: I would not use one of these detection tools, based on this story. It's kind of choosing the lesser of two evils, but it seems like using my own intuition on whether a student used artificial intelligence is just as effective, if not more effective, than using one of these detectors, and I'd rather have it be my own mistake than trusting some type of detector and having that be a determinant in potentially punishing a student.

Dave Bittner: Right.

Ben Yelin: So yeah, I mean, at least for the time being, until these tools become more effective and can take into account non-native English speakers. I have a lot of non-native English speakers in my classes. They are some of my best students, and the thought of a false accusation against one of them is just too much for me to bear at this point. So until the technology gets better, I'm going to be very, very reticent to use it.

Dave Bittner: You know, from a higher level, I wonder about, you know, when I was a kid coming up through school and through college even, this was a time when not everybody was using word processors for everything. By the time I was in college, pretty much everybody was, but certainly through high school, some people had access to the technology and other people didn't. And so this brought up the issue of whether or not it should be allowed to use a spellchecker.

Ben Yelin: Right.

Dave Bittner: Right? Because for many teachers, particularly English teachers, spelling counts.

Ben Yelin: Yeah.

Dave Bittner: Well, spelling doesn't count anymore.

Ben Yelin: Right.

Dave Bittner: You know, I think about my own kids coming up through school. I have a kid who's in high school, and spelling and grammar all get flagged automatically; all the work they do uses the Google suite of tools, the school uses the Google suite of tools, and it automatically flags grammar and spelling. And that's a change, and we're okay with that. We all assume now that that is a set of tools that everyone has access to, so why not in the real world? And I wonder to what degree these new AI tools are just going to become like spellcheck, like grammar check, a tool that is part of the regular word processing suite that you have, and we just need to adapt to that reality.

Ben Yelin: Yeah, I think it's very possible. And sometimes it can just be like there can be a time lag between when the technology is available and when we can trust things like detection software. And maybe we're going to get better at it. I just think we're not at that point yet and it's better to be safe than sorry until we get to that point.

Dave Bittner: Yeah.

Ben Yelin: Yeah.

Dave Bittner: I think it may require rethinking though, of how we test our students.

Ben Yelin: I think it'll definitely require rethinking. I mean, I've already thought about it in my own courses. You can tell students, you know, on your honor, as part of this honor code, you are not to use any type of AI software in exams and papers, et cetera, but it's so difficult to detect, and, you know, law school is very, very competitive. Some students are going to do it.

Dave Bittner: Yeah.

Ben Yelin: And so, in the long run, we have to think about integrating these tools. Maybe the test of aptitude is the inputs that a person puts into ChatGPT. I think we're a long way from that being the case, but yeah, we are going to have to adapt, just like we adapted to calculators and Grammarly and everything else.

Dave Bittner: Right, right. Yeah, I'm just laughing because I'm thinking about all the teachers when I was coming up who said, no, you can't use a calculator, you're not going to have a calculator with you all the time.

Ben Yelin: Not in your pocket.

Dave Bittner: Right, it's like, no, I don't have a calculator, I have a supercomputer with access to all the world's knowledge, all the time.

Ben Yelin: Little did you know, Mrs. So-and-So. Yeah.

Dave Bittner: Exactly, exactly. You know, my son was taking finals, and for his English final, the teacher had them do it handwritten. Okay? And my son was like, I haven't handwritten anything in years, and my wrist was exhausted because I haven't--

Ben Yelin: Yeah, oh yeah.

Dave Bittner: --my handwriting is terrible because everything they do these days, for everyone's convenience, is done electronically. So kids know how to type, they don't know how to write.

Ben Yelin: I know, the carpal tunnel is awful. I've definitely been there recently, for sure.

Dave Bittner: Yeah, yeah, no, I recently wrote a bunch of handwritten notes and I was like, oh man, this hurts.

Ben Yelin: This hurts, yeah, even just like signing forms now. We're just not used to it.

Dave Bittner: No, no. It's funny. Alright, well, I guess buyer beware when it comes to these AI detectors.

Ben Yelin: Caveat emptor if you will.

Dave Bittner: Yes, yes. It seems to me like they're just not reliable. And I would say also, the message is that if you get accused of something and it's based on one of these tools, you have a very, very good case to push back--

Ben Yelin: Right.

Dave Bittner: --against these tools, and you should certainly do so. Alright, well, those are our stories for this week; we will have links to those in the show notes. We would love to hear from you. If there's something you'd like us to discuss on the show, you can email us; it's caveat@n2k.com.

And I recently had the pleasure of speaking with Mark Lance from "Guidepoint," and we were talking about ransomware, particularly some of the policy issues, the ongoing question about negotiations, and the impact of making payments for ransom. Here's my conversation with Mark Lance.

Mark Lance: If you look at the history and just kind of the evolution of ransomware as a whole, it started out largely opportunistic, and over time they started targeting businesses. Businesses are going to have larger sums of money than individuals would. And so they moved to more targeted attack techniques, evolving into, you know, clients knew that they should have backups, so they started specifically targeting backups. And then even when victims were able to recover and restore effectively, that's when they moved over to the double extortion techniques, where they're stealing a bunch of information, so even if you're able to successfully recover from backups and, you know, restore your environment to full operations, they're still going to try to get their ransom payment through extortion over the data or information that they've taken from your environment, and prevention from having them leak that on what they would consider their news sites or their name and shame sites. And so I think over time they've continued their evolution and sophistication, all with the goal of monetary gains. At the end of the day, that's what these cyber criminals are after, you know, specifically when it comes to ecrime, is making money. So anything that they can do to drive the potential that they're going to receive that ransom payment, they're going to pull any levers or flip any switches that they can in order to try to achieve that goal, which is those monetary gains. I think over time, again, like you mentioned, the basis was, you know, don't pay ransomware threat actors, you don't know if you're going to get your information back, you don't know if you're going to, you know, successfully be able to recover your infrastructure, and now you've funded a criminal group. Over time, again, these groups are built with the intent to make money. So now they rely heavily on things like the reputation of their criminal organization, the brand that their criminal organization has, and even being able to successfully, you know, recover or decrypt people's information. Pardon me. As well as, you know, ensuring that once they have received payment, your information isn't going to be posted to their name and shame site, that it isn't going to be leaked, because at the end of the day, if they're not delivering on what people are paying them for, it's going to affect their brand, it's going to affect their reputation, and people are going to stop paying that group. So for instance, if you're using something like Royal as an example, which is a threat group we track, if people are paying Royal and they're not getting the decryption keys, or they're still being posted and all their information is being leaked, Royal is going to get the reputation of, don't pay Royal, because they still, you know, release all your information. And at the end of the day, that leads to them not getting ransom payments, which again, their primary motivation is monetary gains. So, again, long explanation to what you said, but you know, these criminal organizations are purpose-built to make money, and so they're doing whatever they can and taking whatever steps they can to ensure that that occurs.

Dave Bittner: You know, as you mention, in the early days I remember specifically the FBI saying, you know, whatever you do, don't pay the ransom. We don't want to fund the criminal enterprise. And I'm curious, where do we stand now, when organizations are making this risk assessment and they're trying to decide, how are we going to come at this? We find ourselves the victim of a ransomware group; perhaps they're threatening double extortion, like you say, posting information out there. How does that conversation go among the decision makers as to how to approach paying the ransom?

Mark Lance: Yeah, there are a lot of different reasons, or, you know, what organizations might feel is business justification, to go ahead and make a ransom payment. Like you had mentioned, early on the guidance was, you know, don't pay them, you're not getting your information back. But we've seen where, over time, that has changed, and in most circumstances you'll see where they'll do everything within their power, including escalations to different members of their internal support organizations and providing updated or revised versions of decryptors, to make sure that you are actually able to get what you're paying for. Now, I think that, again, there are a lot of potential reasons our clients have to make a determination on whether they might consider making that ransom payment. It could be that they have lost critical information and access to systems, and if they're not able to recover because they don't have backups, they feel like it's going to cause issues with their business and they're not going to be able to appropriately recover operations in a sufficient manner. So in that situation, they might pay for decryption keys so they can actually restore and recover their environment. There are other times, based on the extortion, where some clients, even though they do have viable backups and they're able to recover operations, have made a determination to make the ransom payment because they want to do the incident and breach disclosure on their own timelines instead of having it leaked on the threat actor's name and shame site; they feel like they would prefer to do it with assistance from external counsel, you know, public relations and disclosure requirements, to do it more effectively on their own terms. We've also had instances where clients have had access to backups, but recovery or access to those backups was going to take a considerable amount of time, and so it was actually more effective for them, and cheaper for them, to make the ransom payment and get the decryption keys so that they could expedite the recovery process. As an example, I mean, we worked with one healthcare system that had offsite backups, but it was going to take them two weeks to get access to those and start the recovery process. Each day that they were down they were losing a million to two million dollars, which over the span of two weeks, you do the math, is anywhere from, you know, 14 to 28 million dollars that they're potentially going to lose. Based on that, and that the ransom request was only two million dollars, they decided to expedite the ransom payment, because by getting that decryption tool they were able to initiate the decryption process within a matter of four to five days, versus waiting those two weeks, and so paying the ransom was actually cheaper for them than, you know, recovering with what they had available. So, a lot of different considerations in what businesses might consider justification to make a ransom payment. I think at the end of the day, clients should continue to take the position that we're not making a ransom payment if we don't have to or don't have the necessity to. But again, it comes down to a business decision on whether they believe there is that necessity.
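
The healthcare example above comes down to simple break-even arithmetic. Here is a minimal sketch in Python using only the figures quoted in the interview; treating downtime as ending once recovery starts is a simplifying assumption for illustration.

```python
# Break-even arithmetic behind the healthcare anecdote above. Dollar figures
# and timelines are as quoted in the interview; everything else is a
# simplifying assumption.

daily_loss_low, daily_loss_high = 1_000_000, 2_000_000  # quoted: $1-2M lost per day of downtime
backup_days = 14    # quoted: roughly two weeks before offsite backups were accessible
decrypt_days = 5    # quoted: decryption underway within four to five days of paying
ransom = 2_000_000  # quoted: $2M ransom demand

def downtime_cost(days: int) -> tuple[int, int]:
    """Low and high estimates of losses for a given number of days down."""
    return days * daily_loss_low, days * daily_loss_high

restore_low, restore_high = downtime_cost(backup_days)
pay_low, pay_high = downtime_cost(decrypt_days)

print(f"Recover from backups: ${restore_low:,} to ${restore_high:,}")
print(f"Pay and decrypt:      ${pay_low + ransom:,} to ${pay_high + ransom:,}")
# Roughly $14-28M of downtime versus $7-12M including the ransom, which is why
# the client in the anecdote treated payment as the cheaper option.
```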

Dave Bittner: And what part do insurance companies play in this? As this decision is being made, are they, are they at the table? Are they, you know, trying to minimize their own exposure and saying well, maybe paying the ransom is the best for everyone here?

Mark Lance: It ultimately is going to come down to the client's evaluation of their business requirements and determination of what they want to do. Now, availability of cyber insurance and coverage does have an impact on that, because if you're potentially paying a ransom out of pocket, versus having an insurance policy that provides coverage for ransom amounts up to a certain dollar value, you might be more inclined to go ahead and make that ransom payment because you have the coverage, regardless of whether, you know, you have a full necessity to do it.

Dave Bittner: When we talk about the professionalization of these ransomware groups, one of the arguments for not paying them has been, you don't know whether or not they'll just come back for a second helping. You know, you'll pay the ransom, they'll come back and say, hey, this is great, now give us some more. What's the reality of that? Are we seeing that the ransomware organizations, generally, if you treat them professionally and you pay them, live up to their word and go on their way and let you get on with things?

Mark Lance: I believe they do, because they have, again, this reputation and this brand that they have to uphold. So, again, say you're using another cybercriminal group as an example. If they were to target your environment, you make a ransom payment to them to get access back to your information through decryption tools, as well as to prevent the name and shame, and then they come back and hit your organization again, you know, three months later, six months later, people are going to be like, we'll stop paying them, because they're just going to come back and continue to impact you. So that's why I think we see recurrence from a single group to be rare, because again, they have that reputation that they need to uphold, and that brand that they have to uphold, that you're getting what you're paying for. I do think, and we have seen, where they are leaving backdoors and maintaining persistence in the environment, and again, they're driven by monetary gains, so what we believe happens in some circumstances is they then go make more money off of the access into your environment by selling it to a different group, or a different affiliate, or somebody else they can make more money off of. And you might be impacted by a different threat group, you know, three to six months down the road, if you haven't addressed the methods of ingress and all the backdoors that they've identified. But I think it's uncommon for a single group to hit a client multiple times. Instead, they would traditionally hit them, then probably sell their access and their backdoor to another group who could perform similar operations in the future but comes across as a different brand or a different representative, and that way it's not going to harm their reputation.

Dave Bittner: Where do you suppose we're headed here? I mean, it sounds funny saying it, but have we reached a sort of point of equilibrium here, where, you know, ransomware actors are kind of here to stay, we've got tools to parry them, we have insurance, we do backups, all those kinds of things, but it doesn't seem like we're going to eliminate them any time soon. Is that fair to say?

Mark Lance: Oh, absolutely. I don't think ransomware is going anywhere. If you're using Conti as an example, who disbanded earlier this year, you know, based on some of the results of the Russia-Ukraine conflict, which I don't know if you want me to cover some of that, but using Conti as an example prior to them disbanding: once the Russia-Ukraine conflict occurred, Conti initially came out and said, we fully back Russia. Well, that wasn't smart, because they have operators and people that are working out of Ukraine, so all of a sudden there's this inner turmoil within the criminal organization, and they start leaking information, their SOPs, wallet information. And one of the things that we were able to track is that Conti, within the year and a half that they were operational, had collected over two billion dollars in ransom funds. So when we're talking about monetary gains and dollar values associated with these criminal organizations, we're not talking about trivial amounts. And again, that was a year and a half of operations and two billion dollars. So, this is very lucrative for these threat groups, they're making a ton of money, and they're doing it effectively. So, for right now, I don't see ransomware going anywhere. I do think we are seeing some positive trends, driven even by cyber insurance, based on some of the insurability requirements and, you know, the minimum sets of technologies and processes and policies and people you have to have in place. Those are driving positive trends in, you know, core fundamentals and strategies that clients have, things that security researchers and consultants have been advocating for for years. So I do think that's driving positive trends, but these cyber criminals are always going to continue to evolve and look for ways to make money, and right now they're doing it effectively, so I don't think we're going to see it going anywhere for the time being.

Dave Bittner: Ben, what do you think?

Ben Yelin: Really interesting. I mean, definitely a topic we're going to study going forward, particularly as it relates to global network resilience and protecting public and private networks from ransomware, so I definitely appreciated the interview.

Dave Bittner: Is this something that comes up with your law students, the policy implications of paying ransomware and all that kind of stuff?

Ben Yelin: Yeah, I mean it comes up in law classes and also in ethics classes, sociology classes, I mean this is not a purely technological issue. Just like negotiating any type of hostage situation requires actual negotiation skills.

Dave Bittner: Yeah.

Ben Yelin: So yeah, it's definitely something that comes up, it's something we've thought about a lot.

Dave Bittner: Interesting. Alright, well, our thanks to Mark Lance from "Guidepoint" for joining us; we do appreciate him taking the time.

Dave Bittner: That is our show. We want to thank all of you for listening. We'd love to know what you think of this podcast; you can email us at caveat@n2k.com. N2K Strategic Workforce Intelligence optimizes the value of your biggest investment: your people. We make you smarter about your team while making your team smarter. Learn more at n2k.com. Our senior producer is Jennifer Eiben, the show is edited by Elliot Peltzman, our executive editor is Peter Kilpe, and I'm Dave Bittner.

Ben Yelin: And I'm Ben Yelin.

Dave Bittner: Thanks for listening.