Caveat 10.6.22
Ep 144 | 10.6.22

Should Safe Harbor laws be enacted country-wide?

Transcript

Scott Holewinski: And by not encouraging businesses to report certain cyber incidents, ultimately, we lose really valuable insights into what the current kind of cyber threat landscape is and how we can go about, you know, improving and actually protecting our businesses better.

Dave Bittner: Hello, everyone, and welcome to "Caveat," the CyberWire's privacy, surveillance, law and policy podcast. I'm Dave Bittner, and joining me is my co-host Ben Yelin from the University of Maryland Center for Health and Homeland Security. Hello, Ben. 

Ben Yelin: Hello, Dave. 

Dave Bittner: Today Ben discusses the Supreme Court taking up Section 230 of the Communications Decency Act. I've got the story of a judge in Buenos Aires declaring facial recognition software unconstitutional. And later in the show, we've got Scott Holewinski from Arctic Wolf. He's discussing the enactment of countrywide Safe Harbor laws. While this show covers legal topics and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. 

Dave Bittner: All right, Ben, before we jump into our stories this week, we got a little bit of follow-up here. A listener named Neil (ph) wrote in and said, first, tremendous interview about Edward Snowden. Your interviewee's measured and thoughtful opinions were an utter joy to listen to. I agree. I had nothing to do with that interview, which made it even better for me. So that was - he's talking about Ben's interview with Robert Carolina, and, yeah, I'm all in. I thought that was a great interview as well, so tip of the hat to you and Robert. 

Ben Yelin: Yeah, I noticed he didn't compliment the interviewer, but I'll take it as a compliment. And, yes, Robert was fantastic. I thought it was a great conversation. 

Dave Bittner: Right, right. Neil goes on and says, second, Ben said in the Cloudflare Kiwi Farms discussion that profanity falls outside the First Amendment protection. Did Ben misspeak? I could swear that the courts have found that profanity is not obscenity and that profane speech, including all the usual letter-designated bad words, fell squarely under the First Amendment. Certainly, profanity is not on the traditional list of unprotected speech. Thanks for the show. Ben, what's going on here? 

Ben Yelin: Well, Neil is absolutely right. I try and make sure that I get every word perfectly in these interviews, and I have failed our listeners. 

Dave Bittner: (Laughter). 

Ben Yelin: So I said the word profanity. In my mind, I was thinking obscenity, which is the actual category of speech that is not protected by the First Amendment. I think in everyday parlance, obscenity and profanity are actually pretty similar. But in a legal sense, there is a major distinction... 

Dave Bittner: OK, which is... 

Ben Yelin: ...Which Neil gets at. Obscenity is something that - really, the Supreme Court has literally said, we know it when we see it. But it's something that is indecent to the point that it doesn't add any value to a public debate, whereas profanity is very constitutionally protected. In fact, one of my favorite examples comes from a case named Cohen v. California, where there was a statute in California prohibiting profane language in a courthouse, and Mr. Cohen went to that courthouse with a sweatshirt that said, F the draft... 

Dave Bittner: Wow. 

Ben Yelin: ...Although it was not F, as you can imagine. 

Dave Bittner: Right, right. 

Ben Yelin: It was the full word. 

Dave Bittner: Yeah. 

Ben Yelin: And the Supreme Court upheld Mr. Cohen's constitutional rights, basically saying there's no other way you could possibly convey that particular message without using the F-word. Saying, I strongly disapprove of the draft, really doesn't pack the same punch. 

Dave Bittner: (Laughter) Right, right. 

Ben Yelin: So, I mean, there certainly can be time, place and manner restrictions. If you're in a public school and you're a kid, don't go around dropping F-bombs. 

Dave Bittner: Yeah. 

Ben Yelin: You can still get suspended. But in terms of robust First Amendment protection, Neil is absolutely right. And thank you for our correction - for this correction. 

Dave Bittner: So just so I'm crystal clear here, so profanity could be part of obscenity, but it's not automatically obscenity. 

Ben Yelin: That's right. It's a subcategory of obscenity. And obscenity is generally defined as something that adds no artistic, political or social value and that runs against community standards. So it's interesting that it's judged against one's own community. So something that's obscene in Montana might not be obscene in, say, Hollywood. And that comes from a case called Miller v. California on the obscenity side. 

Dave Bittner: All right. Well, thanks to Neil for sending in the thoughtful question. Of course, we would love to hear from you. You can write us. It's caveat@thecyberwire.com. 

Dave Bittner: All right, Ben, let's jump into our stories this week. Why don't you start things off for us? 

Ben Yelin: So we have big news from the United States Supreme Court. Yesterday, as we're recording this, was the first day of their 2022-2023 term. And they granted certiorari on a number of cases, which means - when you grant certiorari, it means the Supreme Court will hear the case. They'll have oral arguments. And one of those cases is a case called Gonzalez v. Google LLC. And in this case, the Supreme Court is going to evaluate the right of these platforms - these Googles, Apples, et cetera - to have the shield of Section 230 of the Communications Decency Act. So, as we know, Section 230 shields companies from lawsuits based on their content moderation decisions. So if they decide to ban somebody, if they decide to allow a certain category of speech on their site, they're generally immune under Section 230 from lawsuits. 

Ben Yelin: The question presented in this case is really interesting. It's about whether the use of algorithms in directing users to particular content counts as the type of content moderation that is protected under Section 230. And the Supreme Court is going to evaluate that question. So the circumstances of this case are fascinating. 

Dave Bittner: Let me just interject and say who better to evaluate algorithms than the octogenarians (laughter)? 

Ben Yelin: Yeah. 

Dave Bittner: I'm - I mean, they're not all that old. But you know what I'm getting at here (laughter). 

Ben Yelin: I'm going to say that our two youngest justices, Justice Jackson and Justice Barrett, might have to counsel their elders on... 

Dave Bittner: Right, right. 

Ben Yelin: ...What an algorithm is. 

Dave Bittner: Yeah. 

Ben Yelin: At least they read their briefs. You know, they'll - they generally have 20-some-odd law clerks who can... 

Dave Bittner: Right. 

Ben Yelin: ...Explain to them how it works. 

Dave Bittner: I suppose I'm being unfair. 

Ben Yelin: Well, no. I mean, they are old. 

(LAUGHTER) 

Ben Yelin: For the most part, although... 

Dave Bittner: Yeah. 

Ben Yelin: ...This is actually among the younger Supreme Courts that we've had in recent history, just because there have been a lot of deaths and retirements in recent years. 

Dave Bittner: Yeah. 

Ben Yelin: So the circumstances around this case are fascinating. Ms. Gonzalez was killed in the 2015 terrorist attack that took place in Paris. She was a U.S. citizen who was traveling abroad in Paris and was killed as part of that complex, coordinated terrorist attack that took place in November 2015. And her family is alleging that part of the responsibility for this woman's death lies with the social media companies because ISIS and radical Islamist extremists are recruiting via some of these platforms, like YouTube, which is owned by Google. And Google is directing people to certain videos through the use of an algorithm based on other videos that somebody has watched. And as a result, they are sort of nudging, potentially - at least this is the allegation. They are nudging people who have a tendency for extremist views to view even more extreme content, getting them more radicalized and potentially leading them to violence. 

Dave Bittner: Right. 

Ben Yelin: The 9th Circuit weighed in on this. This is the judicial circuit based on the West Coast. And there was really a divide in the 9th Circuit on how they saw Section 230 in this case. It was a three-judge panel. So this is really an unsettled area of the law. The Supreme Court - generally, the informal, uncodified rule is that four justices have to agree to grant cert in a case. So what we know right now is there are at least four justices who believe that this is an issue ripe for review. The consequences of a decision against Google here would be really profound because companies would now face potential liability based on their algorithms, based on the content that they direct their users to. It might cause some of these companies to abandon algorithms altogether - even though that's been a valuable business model for them - because they're just going to be too terrified of lawsuits. 

Ben Yelin: The opposition to Google - or the opposition to this case, rather, comes from the trade groups that represent these Big Tech companies, saying, if you side with Gonzalez here against Google, you're going to disrupt decades' worth of internet law, internet precedent, and it's really going to hurt the free flow of ideas on the internet. People value these algorithms because generally, people like to be directed to content that will interest them. And so I think we're looking at a case that could really change the internet as we know it if Gonzalez gets a favorable decision here. 

Dave Bittner: Wow. You know, I'm trying to sort of evaluate my own thoughts on the algorithms. And using YouTube as an example, I watch a lot of things on YouTube. I enjoy a lot of content on YouTube. But I guess, like everything, in my mind, the algorithm giveth and the algorithm taketh away. You know, there are times when I - when it recommends things that I probably wouldn't have found otherwise and it's delightful. And there are other times when it's like, I'm not shopping for a car anymore... 

Ben Yelin: Right. 

Dave Bittner: ...You know? (Laughter) Like, just leave me alone. 

Ben Yelin: I mean, sometimes I hate-watch things. I'll be like, oh, this guy is so ridiculous. I'll... 

Dave Bittner: Right. 

Ben Yelin: ...Watch a clip of his show. And then now it directs me to every single clip of this person's show. And I'm like, can I just pretend that that never happened? 

Dave Bittner: Yeah. I guess I'm trying to imagine what something like YouTube would be like if it didn't have the algorithms, if it were only based on, I guess, search results. If they stopped recommending things based on what you have watched previously, that's a very different experience. 

Ben Yelin: I mean, frankly, it would kind of suck is the conclusion I come up with. I mean... 

Dave Bittner: Would it? I mean, I'm a little older than you, so I grew up in an era of television and TV Guide, right? (Laughter) So... 

Ben Yelin: That was not nearly as good as YouTube. I mean, think about getting... 

Dave Bittner: Good point. 

Ben Yelin: ...A personalized set of videos and how valuable that is to waste time when you're bored. 

Dave Bittner: Yeah. 

Ben Yelin: I mean, I guess if you want to try this out, use your incognito browser or log out. Go to YouTube, and you'll be directed to the most popular or searched-for content. Now, you and I have very particular interests. I doubt that our interests match up with the masses, so to speak. 

Dave Bittner: Yeah. Count on it (laughter). 

Ben Yelin: You know, the top 10 videos are going to be the latest Jonas Brothers music video or some viral TikTok sensation or Cocomelon, Little Baby Bum, the type of videos that babies watch 8 billion times, which means that the algorithm picks that up, and they get millions of views. So it really, in my opinion, would be significantly worse. Now, there is a trade-off because we have this example that's outlined in this case where the algorithm can do significant damage. 

Dave Bittner: Right. 

Ben Yelin: And we've seen it not just in this context, but there have been articles written about how young men in particular start by watching videos of other people playing video games, which I'll never understand. Maybe that makes me an old curmudgeon. 

Dave Bittner: (Laughter) I think it does. 

Ben Yelin: It does not interest me. 

Dave Bittner: Right, right. 

Ben Yelin: But apparently it does interest a lot of young men in particular. 

Dave Bittner: Yeah. 

Ben Yelin: And because a lot of other people who watch other people play video games end up flirting with alt-right content or other extremist content, it leads a lot of young men - the algorithm leads a lot of young men in a pretty dangerous direction. And in some limited number of cases, it leads to acts of violence. 

Dave Bittner: Right. 

Ben Yelin: I think the question is whether we should hold a company like Google liable for those types of outcomes when they are unusual and when we are trying to foster both a free internet and also an internet that is worth using. 

Dave Bittner: Well, so let's come at this from the other direction. Suppose the Supreme Court says that a company like Google is liable for this. What does that look like? If all of a sudden they're on the hook for things that people do as a result of the recommendations that they've made, how do they - what does the internet look like? 

Ben Yelin: Well, these companies are going to have to change their practices or hire the best army of lawyers known to man because if this liability shield is removed, every time there's an incident where you can even make a plausible case that somebody was radicalized on the internet, that is not going to be a good outcome for these companies. Now, it wouldn't be an entire 180 because they would still not be liable for their content moderation decisions, meaning Section 230 would still protect them from being sued for suppressing certain speech, if they decided to ban content from, you know, extremists or ISIS or Nazis or whatever. Under Section 230, they would be allowed to do that. Now, that's complicated by this Texas law that we discussed on a previous episode, but we'll leave that aside for the moment. 

Dave Bittner: Right. 

Ben Yelin: But in terms of the algorithm, I think they would have to completely reorient their business model and get people to watch videos without the use of the algorithm because it would just expose them to too much liability. I mean, they could reconfigure the algorithm and still have it in such a way that it would never lead people to extremist content. I'm sure the technology exists to figure out a way to do that, but it seems like it would be a relatively complicated endeavor. There are videos on YouTube that are still on YouTube. They haven't been taken down through content moderation. And when people watch them and like them, I mean, the natural inclination is for more of those videos to be suggested. 

Dave Bittner: Right. 

Ben Yelin: So, I mean, they could try and ban certain categories of videos from being on that suggested videos list. Maybe that's what they would try to do, but that's very difficult to moderate when anybody can upload a video. And there are more uploaders than there are content moderators in the YouTube universe. So... 

Dave Bittner: Do you suppose they could shed their liability here through the use of a EULA? In other words, say, you know, by using our platform and agreeing to check the box, that means you're on team algorithm. You absolve Google of any liability that may come from your decision to do terrible things based on the things we've recommended. 

Ben Yelin: I don't think you can contract your way out of this with a EULA. Certainly, that would help the companies. They will have a provision like that in their EULA. 

Dave Bittner: Right. 

Ben Yelin: But EULAs aren't the be-all and end-all in legal cases. And we've seen suggestions that the government could actually regulate these Big Tech companies as common carriers. That's what Justice Thomas suggested in a concurrence in a recent case. 

Dave Bittner: Right. 

Ben Yelin: Meaning they can regulate these companies directly. And as a result, they - the government would have significant authority no matter what was in the EULA. They would have this regulatory authority. 

Dave Bittner: Right. 

Ben Yelin: So EULAs are helpful, but they are not the absolute final word on the matter. And if the actions of some of these tech companies violate statutes or lead to negligence or some other type of tort on behalf of somebody else and these tech companies play a role in it, then it doesn't matter what's in the EULA. They could still be held liable. I mean, that's the way our laws work. 

Dave Bittner: I see. 

Ben Yelin: When I sign a waiver saying, you know, if I get injured at the ski resort, I'm not going to sue you, but the ski resort still does something extremely negligent like not inspecting their chairlifts, that's generally not going to completely indemnify the ski resort, so... 

Dave Bittner: Right. 

Ben Yelin: ...I don't think that's going to be a crutch that these companies can rely on. 

Dave Bittner: What sort of timeline are we on here? The Supreme Court takes this up - when might we expect a decision? 

Ben Yelin: I would guess sometime around next June. So we have about nine months of waiting this one out. I think we'll have oral arguments sometime in November or December. I think you and I can check back in on the story then. You can get - sometimes you can get a hint on how the justices are leaning based on the questions they ask at those oral arguments. And then they will draft a decision by the end of this term, which is June 2023. 

Dave Bittner: OK. 

Ben Yelin: So unlike some other instances where, you know, we talk about cases that are just starting through the process of making their way through our court system, we will have some finality on this and in a relatively short period of time. 

Dave Bittner: Relatively short. 

Ben Yelin: Yes. 

Dave Bittner: (Laughter). 

Ben Yelin: In the world of the law, nine months is relatively short. 

Dave Bittner: That's right, blink of an eye. All right. Well, I mean, I guess it's fair to categorize this one as a big deal, right? 

Ben Yelin: It's a huge deal. 

Dave Bittner: Yeah. 

Ben Yelin: I think there was some anticipation that the Supreme Court would take up this case, but certainly we didn't know for sure. The 9th Circuit maintained the Section 230 liability shield, although the judges on that panel were divided on the issue. And I think the Supreme Court saw this as an opportunity to clarify the full extent of Section 230 and its protections. So, yeah, I'm very curious to see how this goes in oral arguments and then when the case is decided. 

Dave Bittner: All right. Well, time will tell. And of course, we will keep our eyes on it. 

Dave Bittner: My story this week comes from the folks over at the Future of Privacy Forum, which is a nonprofit that tracks online privacy issues. This is an article written by Maria Badillo. And this is a fascinating story, I think. This is coming out of Buenos Aires - so Argentina. A judge declared a Buenos Aires fugitive facial recognition system unconstitutional. So let's start out here saying we're talking about the constitution of Argentina. 

Ben Yelin: I am no expert. 

Dave Bittner: (Laughter) Come on, Ben. 

Ben Yelin: I know. I have never been to Argentina. I don't like beef. So as a vegetarian, I'm not sure how many food options I'd have down there. It looks beautiful. 

Dave Bittner: Yeah. 

Ben Yelin: But I am not an expert on the Argentinean constitution. But we'll do our best with this story. 

Dave Bittner: Yeah. OK. Well, the main issue here that set this apart and made me want to include this, we'll get to in just a second. So basically, the city of Buenos Aires installed a facial recognition system as part of their video surveillance system. And several advocacy groups said that this was unconstitutional. They sued. They said, you know, we can't allow this. This goes too far. And this eventually went in front of a judge, and the judge agreed for a number of reasons. But the one that caught my eye that I really wanted to dig into is they talk about privacy as a collective right redressable through constitutional mechanisms. And this is something that - this is a phrase that's new to me. So I was hoping you could unpack this for us and explain what it means. 

Ben Yelin: So in our country, the rights are very particularized to the individual. That is one of the tenets of standing. As we know, our federal courts should only hear cases and controversies. And the Supreme Court has interpreted that to mean that every party has to have some type of personal stake in the matter. They have to have suffered an actual injury. The injury has to have been caused by the defendant's actions. And there has to be a way to redress that injury, whether through monetary damages, some type of declaratory judgment, being let out of prison, et cetera. We don't view our rights as collective, necessarily, in a legal sense. 

Ben Yelin: Now, in a moral sense, I think there are rights that we recognize as collective. I mean, I think we would agree colloquially that our foundational constitutional rights in some ways are collective rights. Attending a protest to exercise our First Amendment rights is a collective activity that we want to protect at the societal level. But in terms of how actual cases work in this country, they look at whether an individual or multiple individuals in a case have a personal stake in the matter. And to understand whether those individuals have a personal stake in the matter, they have to do some fact-finding. Did this person actually suffer an injury? If it's a class-action lawsuit, did everybody similarly situated in this group suffer a particularized personal injury? 

Ben Yelin: The best corollary I can think of to this case is a case called Clapper v. Amnesty International from about a decade ago, which was a challenge to our government's electronic surveillance under Section 702 of the FISA Amendments Act. And there were allegations from folks at Amnesty International and other nonprofit groups and attorneys basically saying, we can't prove it, but we believe that our communications have been intercepted under this program, and we've suffered a violation of our Fourth Amendment rights. And the court said that you have to prove that the harm is, quote, "certainly impending" - that it is actually going to happen or has happened. 

Ben Yelin: And that's kind of the opposite of the approach taken by this Argentinean judge who says, even if we don't have a plaintiff's actual identity, that is not necessarily the relevant factor, as long as the case is related to a collective incidence affecting citizens of Buenos Aires - so whether the plaintiffs in this particular case are representative of a collective interest. It's something that I don't think we would see in this country. It reminds me of an old story of mine from when I was taking a college trip to Spain with a friend of mine, and a couple of American ladies stopped us and said, can you help us move some of the stuff into our house? They saw that we spoke English. I swear this is going somewhere. 

Dave Bittner: (Laughter). 

Ben Yelin: And we were like, sure, whatever. And we went into this very creaky elevator that, like, clearly hadn't been inspected in years. And they were like, oh, this would never fly in the states. And my reaction to that was like, you know, maybe that's not such a bad thing that we have things like OSHA and elevator inspections. 

Dave Bittner: Right. 

Ben Yelin: You know, maybe that's something that's valuable about our system. And even though it might sound cool to say this would never fly in the states, maybe we're actually doing it right. I kind of think that our approach to constitutional law is similar in that respect. I understand the temptation of collective rights, but there is something majestic about our legal system that in every case, somebody has a personalized stake in the outcome. We don't burden our judicial system with things that are merely theoretical... 

Dave Bittner: Right. 

Ben Yelin: ...Or based on some type of hunch that something bad is going to happen. It is particularized to the individual. And I think because we have a common law system, meaning judges look to past decisions to guide them in future decisions, we can actually learn a lot about the law from multiple cases with some type of personalized outcome. And I think that's a little less wishy-washy than deciding whether a group - whether it's as large as a societal group - has a right under the collective good. So that's my general take on this. It might not be the most popular take. I'm certainly tempted to believe in something like a right to - a collective right... 

Dave Bittner: Right. 

Ben Yelin: ...Against this type of invasion of privacy. But I actually think, despite that temptation, our system is maybe better. 

Dave Bittner: So there are a number of things that happened in this case. I mean, the judge came down on this and used this notion of the collective right. They also pointed out there was lack of control and oversight. They were dealing with an unreliable database. There were some findings of abuse. So a number of things. But, I guess, ultimately, I'm wondering, like, what does this mean in terms of the direction the wind is blowing globally? This article points out that, you know, with GDPR, the Europeans seemingly place more value on their privacy than we do here in the states. So what do you suppose something like this can mean on the global stage? 

Ben Yelin: I mean, every country has different constitutional provisions and political traditions. The U.S. has more robust First Amendment rights than most other Western democracies. We care more about the marketplace of ideas, freedom of speech. Our European and, I suppose, South American allies seem to place more value on this type of collective right to privacy. And maybe it's just a difference in our experience, a difference in our political culture. 

Dave Bittner: Right. 

Ben Yelin: So it's certainly possible that Congress could enact something like GDPR. They've considered many proposals that would grant some sort of data privacy rights or give people a private right of action against companies that violate those rights. I think the vast majority of those proposals don't match the scale of GDPR and don't represent the type of sea change that we're seeing with this Argentina case, where you no longer have to prove that any individual was unlawfully surveilled. I don't think the U.S. is necessarily close to reaching that point, just based on how we view privacy. 

Dave Bittner: Yeah. 

Ben Yelin: And it just comes from our experience, I think our post-9/11 experience in particular, where we - there was a backlash, but there was also a pretty strong movement, at least for the first 10 years after 9/11, saying we actually might have to sacrifice some of our civil liberties because security is a collective interest. And that's something that has stuck with us. So I don't think - I wouldn't anticipate that this is going to spread to the United States anytime soon just because of differences in our priorities, constitutions, political culture, et cetera. 

Dave Bittner: Yeah. I think it's worth noting, too - you pointed this out to me before we were recording here that although the judge made this declaration, the end result is ultimately that they sort of lay out a road map for the folks who are doing this facial recognition for how to do it legally, right? 

Ben Yelin: Yeah. 

Dave Bittner: You know, they're not saying, stop doing this and don't ever do it again; they're saying, here are the parts of what you're doing that don't fly, and here's a road map for still getting to do this but doing it within our legal boundaries. 

Ben Yelin: Yeah, it is interesting that they're talking about this pretty profound, conceptual idea of a collective privacy interest. 

Dave Bittner: Yeah. 

Ben Yelin: But then the outcome of the case is, well, if you do X, Y and Z, I think you could have a legal regime that supports the use of facial recognition. That is kind of similar to a lot of the cases we've seen in the United States. We haven't had that many prominent cases, but most courts have upheld the use of facial recognition as long as there are proper controls in place to protect people's... 

Dave Bittner: Right. 

Ben Yelin: ...Fourth Amendment rights. So that aspect of it is kind of similar. You figure out a way to make it legal. And sometimes judges, whether it's our country or other countries across the world, will give a road map to policymakers and say, here's exactly how to make this legal. I know we've seen that in multiple contexts in the U.S. So it is sort of interesting that even in this case that would give people a lot of optimism that we're cracking down on these types of practices that violate personal privacy, the outcome of the case is still, well, if you can properly regulate it, the use of facial recognition is going to be acceptable. 

Dave Bittner: All right. Well, we will have a link to that story in our show notes. And, again, we would love to hear from you. If you have something you'd like us to consider for the show, email us. It's caveat@thecyberwire.com. 

Dave Bittner: Ben, I recently had the pleasure of speaking with Scott Holewinski from Arctic Wolf, and we discussed the possibility of countrywide safe harbor laws. Here's my conversation with Scott Holewinski. 

Scott Holewinski: I mean, at a really high level, you know, safe harbors give, you know, businesses protections in certain situations. And, you know, referencing - you know, here, what we're talking about is related to cyber incidents or data breaches. And the whole idea is, you know, there's certain rules, regulations in place which really penalize businesses in instances where, you know, they report a breach, or it makes them subject to potential litigation. And by not encouraging businesses to report certain cyber incidents, ultimately, we lose really valuable insights into what the current kind of cyberthreat landscape is and how we can go about improving and actually protecting our businesses better. 

Dave Bittner: On the cyber side of things, what would a typical safe harbor law look like? 

Scott Holewinski: I mean, there's a lot of ways you can go with it. And, you know, there is a balancing act here, right? I mean, you know, typically, when you see penalties levied against businesses associated with a breach of some kind, you know, one of the most common is - that we all know about is HIPAA - HIPAA breaches. So, you know, a health care facility has a breach of a bunch of patient records. They need to report that breach. And based on the size of the breach - how many records were potentially impacted - they can be fined for that breach. 

Scott Holewinski: And the original kind of motivation behind those laws was - or those regulations was, you know, if an industry or a business was being negligent - you know, they just weren't doing the most basic things to protect, you know, private information of their patients - you know, they should be penalized. But what we've seen more recently with a lot of these breaches, in a lot of cases, the breach happens as the result of no wrongdoing of the business. Maybe they were using a piece of software that, you know, had a vulnerability in it that they purchased. You know, they didn't build the software, but there was a vulnerability, and the threat actors took advantage of it and breached the organization. 

Scott Holewinski: The question to be asked is, should that organization necessarily be penalized for that? You know, and that's where a safe harbor can come in. Ultimately, we want to know about the vulnerability. You know, we want to encourage the business to report it so that we can get out into the public space that, hey, there's a vulnerability in a particular piece of software that other businesses are probably also using. So, again, you know, putting some protections in place where - you know, again, you don't want to, you know, make it just kind of open. You know, if you're doing nothing to protect yourselves and you get breached, that's a little different. 

Dave Bittner: Right. 

Scott Holewinski: But if - you know, there needs to be some measure and balance here that really does encourage businesses who are doing things, for the most part, correctly and still get breached. We would like to encourage them to still report that. 

Dave Bittner: And yet not be some sort of Get Out of Jail Free card. 

Scott Holewinski: Correct. Yeah. You know, so I think using, you know, some kind of measurement - and, again, it's unfortunately not black and white, but in our business - you know, my background and what our business does is incident response. We're responding to thousands of incidents a year. In many of those instances, the businesses, you know, were not negligent. They were doing all the right things. And unfortunately, you still can be breached. 

Dave Bittner: Are there examples of states who are doing this successfully? 

Scott Holewinski: Not really. I mean, the concept of, you know, safe harbors around this topic has been discussed. But as of right now, there's really nobody who's actually done it, so there aren't great examples of safe harbors that have been put in place yet. 

Dave Bittner: And why come at this at the federal level rather than allowing it to go state by state? 

Scott Holewinski: I mean, you could go either route. I mean, federal is nice because at least it's consistent, which makes it easy on all of us in, you know, kind of the space that I operate inside of where, you know, we're responding to incidents across the United States and, quite honestly, the world. So, like, the more common the laws are, the better. You know, unfortunately, we've seen, from state to state, even from basic privacy laws, they vary quite a bit, which means that, you know, privacy attorneys and folks who are trying to guide and instruct their clients about how to, you know, properly notify and respond to an incident - it makes it really difficult. 

Scott Holewinski: So ideally, it would be at the federal level just because it makes it easier to be really consistent. I mean, most businesses are not just operating in one state. You know, they're operating across state lines. So, you know, when we see things like the California, you know, privacy laws come out, it'd be great if it was consistent everywhere. But it's just not there yet. So hopefully, we see in the coming years some standardization happen here. And it would be great if the federal government came along and provided that framework. 

Dave Bittner: Yeah, I guess I'm trying to imagine, you know, how you would implement something like this. You know, would it involve a neutral third party, you know, someone who could... 

Scott Holewinski: Yeah. 

Dave Bittner: ...Take the information and decide, you know, what happens next? 

Scott Holewinski: Yeah, I mean, that would - that's really the ideal way to go about it. And, you know, that body would need to be formed. And maybe it's a panel of industry experts that is assembled to kind of do a review of the situation, you know, make a determination in terms of what happened. You know, and that would be one approach. But, yeah, you would need some kind of monitoring body - one that, again, would probably be made up of industry experts in the cyber space - to really do it right. 

Scott Holewinski: You know, it starts with a conversation, you know? And unfortunately, you know, there hasn't been enough movement, in my opinion, in terms of acknowledging that, you know, businesses today are putting a lot of things in place to protect themselves. And, you know, I think, you know, we all owe it to those businesses who are investing in their security posture and doing things right to give them some protections in the unlikely event they still actually end up having some kind of, you know, incident. 

Dave Bittner: Has there been much traction for this sort of thing, or are there legislators who are taking a look at it? 

Scott Holewinski: There are. You know, again, it kind of - like most things, it depends. You know, last summer, when, you know, there was quite a bit of high-profile cybercrime that was, you know, kind of making the mainstream news, people did start talking about this because, again, it's very rare that we see a cyberattack of any type that's a one-off situation. Usually, these threat actor groups are using, you know, the same type of attack and usually the same vulnerability over and over again until it becomes hard to take advantage of that vulnerability because everybody knows about it and everybody has shut the door. 

Scott Holewinski: So, you know, when we see high-profile attacks that make it to the mainstream media and that vulnerability is known about, we very quickly see it close down. And, you know, that's the situation we really want. You know, we want - when one of these kind of zero-day vulnerabilities comes out, we want it to get out to the mass media as quickly as possible so that the people who also have that vulnerability can shut the door. Again, the whole idea is to make the environment such that those businesses are encouraged to report that incident and how it happened so, you know, the general population can fix it as quickly as possible. 

Scott Holewinski: So yeah, when we see high-profile attacks like that happen, we see conversations around, hey, how can we better protect and encourage businesses who experience these attacks to report them? It comes up. But then when you have kind of lulls where you don't hear about high-profile attacks, unfortunately it becomes less interesting for folks, and we don't see a lot of forward movement. 

Dave Bittner: You know, there's just that old saying about how, you know, the cover-up is worse than the crime, and I wonder... 

Scott Holewinski: Right. 

Dave Bittner: ...You know, coming at that from the other direction, if you could incentivize organizations and say, listen, you know, the - how you handle your disclosure here could come into play on what, if any, penalties there might be. You know, try to give them some positive reinforcement to do the right thing. 

Scott Holewinski: Yeah, and we've seen some of that. You know, like, actually even around things like ransomware, which has obviously become very popular, but even in the HIPAA space, they'll have what they call, like, significant mitigating factors when it comes to penalties. And a lot of those mitigating factors are around, one, were you negligent or not. So they'll kind of try and make a determination of, were you doing the basic things you should be doing? So that's part of the mitigating factors. But the other one is how you responded, how quickly you reported it and things of that nature. 

Scott Holewinski: But in my opinion, the language around the mitigating factors - I don't think it quite goes far enough. You know, it doesn't say you're not going to be penalized. It simply says, hey, we'll take it into account. But, you know, you still may be crushed by this. So we do see some level of, it's not black and white, and we'll kind of take some of these things into account. 

Scott Holewinski: But I think, personally - and maybe I'm biased because of the industry I'm in - but I'd like to see it go a little bit further to really encourage businesses to just open it up. And, you know, hopefully as more and more businesses experience this themselves, as the mainstream media picks it up and reports on it, businesses get a little more comfortable saying, hey, this can happen. And, you know, it's something that we need to learn from, as, you know, kind of an entire population and all work towards getting better. 

Dave Bittner: Ben, what do you think? 

Ben Yelin: I think there's a lot of promise in these safe harbor provisions. I mean, I think it's a recognition that everybody faces threats in the cyberworld. Yes, there are opportunities to be negligent. Yes, there are opportunities to not have robust security standards, not follow the NIST framework, et cetera. But the more we can make this a proactive effort to root out threats and have better information sharing, I think having a safe harbor provision could be a good incentive for that. So I find it to be something that's very promising. 

Dave Bittner: Yeah. All right. Well, our thanks to Scott Holewinski. Again, he's from Arctic Wolf. We appreciate him taking the time. 

Dave Bittner: That is our show. We want to thank all of you for listening. The "Caveat" podcast is proudly produced in Maryland at the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our senior producer is Jennifer Eiben. Our executive editor is Peter Kilpe. I'm Dave Bittner. 

Ben Yelin: And I'm Ben Yelin. 

Dave Bittner: Thanks for listening.