Caveat 1.12.23
Ep 156 | 1.12.23

New year, new data privacy.


Chris Gray: It's been really a really interesting year, based on the fact that we're seeing a continuance of where we've been, and we're seeing an introduction of some pretty new stuff that's - it's been going on, I believe, under the covers, if you will, for a long time. But the last year has really brought it out into the open and made it very evident in ways that it wasn't before.

Dave Bittner: Hello, everyone, and welcome to "Caveat," the CyberWire's privacy, surveillance, law and policy podcast. I'm Dave Bittner. And joining me is my co-host, Ben Yelin, from the University of Maryland Center for Health and Homeland Security. Hello, Ben.

Ben Yelin: Hello, Dave. 

Dave Bittner: Today, Ben covers a Seattle public school district lawsuit against big tech companies. I've got the story of an AI avatar generator that just can't seem to resist sexualizing Asian women. And later in the show, Chris Gray, AVP of security strategy at cybersecurity firm Deepwatch, shares his insights on data privacy regulation in 2023. While this show covers legal topics and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. All right. Ben, we've got some good stories to share this week. Why don't you kick things off for us here? 

Ben Yelin: So I saw my story on GeekWire, and it is about a new lawsuit that's been filed. The Seattle Public School District is suing TikTok, YouTube, Instagram, Meta and some other companies seeking compensation for the youth mental health crisis. I'm going to start by saying what I always say about these cases, which I know is very frustrating - this is going to go on for years. We're not going to have a resolution for a long time, so just to get that out of the way. 

Dave Bittner: Oh, goody (laughter). 

Ben Yelin: I know, I know. I wish we could have, like, a - you know, just a magistrate judge give us a preliminary conclusion tomorrow, but... 

Dave Bittner: A legal lightning round. 

Ben Yelin: Exactly. But, unfortunately, that's not going to happen. 

Dave Bittner: Right. 

Ben Yelin: So the basis of this lawsuit is really interesting. Basically, what the Seattle school district is saying is - our students have suffered from a basically decade-long mental health crisis. They have the statistics to back it up - that's all in the complaint - an increased number of students using mental health services, a higher suicide rate among teenagers, public school students in Seattle. And they're basically alleging that that's the fault of these big tech companies for poisoning the brains of students through social media. So they're suing basically the big purveyors of social media - Meta, Instagram, TikTok - saying that they have poisoned students' brains. They particularly targeted young people because young people are a lucrative target for advertising purposes. And as a result, they've suffered a legally recognizable harm. I'm going to start by saying that I am very skeptical of this lawsuit... 

Dave Bittner: Yeah. 

Ben Yelin: ...For a number of reasons. 

Dave Bittner: OK. 

Ben Yelin: For one, I'm not sure how you can - basically, in order to prevail in court, the school district would have to prove that the proximate cause and the but-for causation of these increases in mental health spending, mental health care, suicide, et cetera, is students' use of social media. And there are so many confounding factors. If you were to ask me, informally, you know - hey, Ben, how do you account for the increase in mental health services in the past 10 years in the school district? - I'd probably say social media, sure. 

Dave Bittner: Right. 

Ben Yelin: But they're - that's much harder to prove in a court of law. And if I'm a - an attorney representing one of these companies, I would say you could attribute it to any number of factors. You know, maybe it's carbon - increased carbon emissions. Maybe it's political unrest. You know, there are just a lot of reasons that students could be suffering a mental health crisis. So that's going to be very hard to prove. And then there's the question of standing. 

Dave Bittner: Yeah. 

Ben Yelin: So anybody suing anybody else basically has to have an actual stake in the outcome. You have to have suffered a particularized injury, and the injury has to be traceable to the alleged illegal action. And there has to be a level of redressability, basically a way for the court/the defendant to ameliorate the effects of the alleged injury. I think you might be able to make a case for injury, in fact, here. They do provide statistics - a 30% increase in the past 10 years in the number of students who said they felt sad or hopeless almost every day for two weeks or more in a row, such that it caused them to stop doing usual activities. Maybe they'd be able to prove injury in fact, although I still think there's that causation problem. There are a lot of reasons why there could have been a 30% increase in the number of school students who felt sad and hopeless, and I'm not sure you're going to be able to pin that on social media companies. 

Ben Yelin: For their part, the social media companies are responding to this by saying, hey, we've done a lot in the past several years to make our platforms safer for youth - cleaning up bad accounts, shielding young people from some of the worst aspects of the social media websites. So they are arguing that they should not be held accountable in this lawsuit. So I'm not sure that this is really going to go anywhere. But this is the first time it's a school district suing rather than parents representing individual students - we've seen a case that we've talked about where the social media companies have been sued for facilitating a terrorist attack. This is different from that because it's an entire school district. So I just found it to be a very interesting potential case even though it's probably going to go into the proverbial legal trash can. 

Dave Bittner: Does the school district at all outline what they're hoping to get out of this? Are they after money? Are they after changes in the algorithms? What do they want? 

Ben Yelin: I think they're after money. They're asking for damages. I think, you know, they probably would settle for some type of injunction or, really realistically, they could look for policy changes. So maybe it would force - just the threat of this lawsuit would force social media companies to curb some of the excesses of - the addictive excesses of these social media sites as it relates to children. But they are seeking monetary damages because they're alleging that the school district itself has suffered some injury. If you have students with increased - using increased mental health services, that's, like, overall cost and an added burden to the public school system... 

Dave Bittner: Right. 

Ben Yelin: ...In a financial sense. And I think you could allege more diffuse harms, citing these statistics that students are more depressed, that are, you know, discontinuing normal activities. I think you can make an argument that that's a concrete injury against the school district that can be fairly compensated if they win this case. I don't think they're going to win this case. I think their best hope is settling. And a settlement might be, get the social media companies to curb some of their more aggressive targeting of young people. You know... 

Dave Bittner: Could they say, oh, we're - could the social media companies say, we're putting together a consortium, and we're going to fund, you know, $100 million in mental health services for, you know, online mental health services for students nationwide, so if anyone's feeling sad, you know, they can log on, and they'll be able to talk to an expert, and we'll pick up the bill for that? Thank you very much. 

Ben Yelin: Yeah, and while they're there, if they want to explore some of our site's wonderful features, communicate with friends, maybe click on a couple advertisements... 

Dave Bittner: (Laughter). 

Ben Yelin: ...We'd have no problem with that either. Yeah, I mean, I could see that being part of a legal settlement here. 

Dave Bittner: Yeah. 

Ben Yelin: It has been - there have been similar cases in different contexts in the past. One of them they talk about in this article is the Seattle school district suing Juul Labs over its marketing of e-cigarettes to youth. So that's another kind of diffuse injury that would be pretty hard to prove for a school district. But they pulled it off, and they were able to reach a global settlement of a bunch of consolidated cases across the country, alleging that they, as a school district, suffered a legal harm based on students' addiction to Juul products. 

Ben Yelin: So it's not entirely out of the question. I just think drawing that explicit connection between the social media sites themselves and these pretty alarming statistics that they lay out here is going to be really difficult to prove, whether they're seeking an actual victory in a court of law or a settlement. And I think social media sites are going to come in and argue that there are so many confounding factors here, that there's no way you could ever definitively prove that a 30% increase in the use of mental health services is directly attributable to decisions we've made as social media companies - for example, to market to high school students. So in that sense, it's just kind of - it just seems unlikely to succeed from my perspective. Maybe I'm being too cynical, but... 

Dave Bittner: You (laughter)? 

Ben Yelin: I know, I know. I wish there was, like, a betting market for these types of things... 

Dave Bittner: Uh-huh.  

Ben Yelin: ...Where... 

Dave Bittner: Yeah. That's what we need, Ben, is more betting... 

Ben Yelin: More gambling. 

Dave Bittner: ...Because I... 

Ben Yelin: Yeah. I know. 

Dave Bittner: ...Haven't seen enough ads for that on (laughter) TV lately (laughter). 

Ben Yelin: Legalized sports betting was just instituted in Maryland, so... 

Dave Bittner: Right. Right. 

Ben Yelin: ...I may or may not have blown a few dollars on it. 

Dave Bittner: (Laughter). 

Ben Yelin: The other funny thing about this for me is I watch a lot of "Law & Order." It's kind of, like, a fun thing that I do with my wife. And usually within the first 20 minutes or so, if they've identified who did it - who committed the murder or whatever - our question is always like, all right. Well, what happens in the next 40 minutes? This is an hour-long episode. 

Dave Bittner: Right. 

Ben Yelin: And it's always about suing somebody. It's like, all right. They're suing the manufacturer of the antidepressant that caused the person to commit this crime or the leader of this biolab who did... 

Dave Bittner: (Laughter). 

Ben Yelin: ...Unlawful experiments on the alleged perpetrator. And I'm just like, this could be a "Law & Order" episode where, you know, a high school student commits a heinous crime and we find out who did it in the first 20 minutes. The last 40 minutes is the lawsuit against the social media companies. So, Dick Wolf... 

Dave Bittner: (Laughter). 

Ben Yelin: ...If you want to reach out to me for some writing credits, I'd be happy to oblige. 

Dave Bittner: I'm just thinking of your dear, lovely bride, you know, sitting - and I - just sitting through an episode of "Law & Order" with you nitpicking, you know (laughter), all the things they get wrong (laughter). 

Ben Yelin: Right. Although they do a better job than - I think they're wary of the fact that lawyers watch the show, so... 

Dave Bittner: You think? You think they get letters (laughter)? 

Ben Yelin: They get a lot of letters. I mean, the most unrealistic thing to me about that show - and I realize this is a tangent - is just how quickly everything is resolved. 

Dave Bittner: Ah - right (laughter). 

Ben Yelin: Like, you know... 

Dave Bittner: Oh. 

Ben Yelin: Yeah. A crime happens, there's a nice, tidy case within a week, and we get, you know, the sentencing three days after that. Like... 

Dave Bittner: Right, right. 

Ben Yelin: Doesn't really work that way in our overburdened court system. 

Dave Bittner: Do you think this suit from Seattle is the kind of thing that will attract attention from the usual suspects in Congress? 

Ben Yelin: Yes, I do - maybe not directly based on this lawsuit. But certainly if the lawsuit, against my expectations, goes anywhere, that might be more of an impetus for Congress to institute some types of - some type of regulation. The interesting thing is states have taken action recently against social media companies, but not for the reasons described in this lawsuit. We've seen states ban the use of TikTok on government devices not because of the corrosive effect of social media but because of their close connection with the Chinese government. What that does tell me is states, if they wanted to - really any jurisdiction - could take aggressive action against these social media companies. But the companies themselves are powerful. And like I said, the connection between social media use and some of the statistics that we're seeing, while they're very believable to me - and I certainly would expect that there would be some causation there - it's just very difficult to prove. 

Dave Bittner: Right. 

Ben Yelin: So even if you are a policymaker, it might be a difficult policy argument to make. And plus, you have a lot of kids who really enjoy social media and a lot of social media companies who really enjoy all those advertising dollars. So it's an uphill fight. 

Dave Bittner: Could the - I'm just - could the social media companies make a legitimate case that while there are some kids who find this harmful, there are other kids who find it very pleasurable, and it's a positive influence in their lives? 

Ben Yelin: You could absolutely make that case. I mean, they could hire their own statisticians and come up with their own statistics that refute what the Seattle school district is saying, that, you know, 80 - and I'm just making these statistics up. But, like, 80% of users have reported learning more about global affairs because of their use of social media websites or have made new friends. I mean, I think there are... 

Dave Bittner: Right. 

Ben Yelin: ...Positive effects that you could allege in court, and that would be just as provable as some of these negative effects. Again, I mean, my instinct is that it has been a major net negative... 

Dave Bittner: Right. 

Ben Yelin: ...For school kids. 

Dave Bittner: Yeah. I would agree with that (laughter). 

Ben Yelin: Yeah. I just think there's a difference between what our opinion is and drawing that legal connection in court. 

Dave Bittner: Yeah. All right. Well, this is an interesting one for sure, and we will keep an eye on it. We will have a link to that story in the show notes. 

Dave Bittner: My story this week comes from the folks over at MIT Technology Review, and this is written by Melissa Heikkila. Apologies to Melissa if I butchered your last name there. And this is titled "The Viral AI Avatar App Lensa Undressed Me - Without My Consent." So, Ben, are you at all familiar with Lensa? 

Ben Yelin: I was familiar with it, but not - I mean, reading this article made me more familiar with it. 

Dave Bittner: Yeah. I only knew of it by name. I've never played with it or anything like that. 

Ben Yelin: Right. I've never played with it either. 

Dave Bittner: Yeah. 

Ben Yelin: I knew it just - it was out there in the ether. 

Dave Bittner: So evidently, this is one of, you know, the many apps that are categorized as digital retouching apps. So this is - you take a picture of yourself - I'm sure we've all seen pictures of our friends that are posted, and you say to yourself, my goodness, what kind of filters have they run themselves through? 

Ben Yelin: Yeah. 

Dave Bittner: Because you sort of - like, they've smoothed out their skin to be unrecognizable. There's usually the - that - to me, that's the biggest tell with these sorts of things. Some of them, they make your eyes bigger. You know, they just do all the things that folks have found make you more attractive in a photo. 

Ben Yelin: Right. And eventually, you say this does not actually look like the person. 

Dave Bittner: Right. 

Ben Yelin: Yeah. 

Dave Bittner: Right. So Lensa added a feature that would generate avatars for people, and it would use artificial intelligence. So it would take photos of you, and it would come up with an artistically created image - or what appeared to be an image of you that would have been created by an artist. In other words, something that is more painterly. You know, it doesn't look like a photograph. It looks like a piece of art, right? 

Ben Yelin: Right. 

Dave Bittner: So the author of this article ran her image through this AI. And she got - of the 100 avatars that were generated, 16 of them were topless, and 14 of them had her in skimpy clothes and what she described as overtly sexualized poses. Now, this is contrasted against her co-workers, her male co-workers, where, when they ran theirs through the system, it came back with pictures of them being astronauts or athletes or, you know, all sorts of other professional types of things. But it did not revert to sexualizing the images the way that the images were for this woman. Now she writes here - she says, I have Asian heritage, and that seems to be the only thing the AI model picked up on from my selfies. I got images of generic Asian women clearly modeled on anime or video game characters. 

Dave Bittner: Now, the article goes on to say that they suppose that the reason for this is that these AI models are trained on these open-source datasets, and these datasets are full of images that come from things like racist stereotypes, pornography, explicit images of rape. Things like that are in these models, and so the model spits back what it was trained on. I - to step aside for a minute, I think it's fair to say that if - you know, if you and I went on the internet - if anybody went on the internet, and you did a search for Asian woman, chances are you're going to come up with a lot of these sorts of images. 

Ben Yelin: Absolutely. 

Dave Bittner: Right? 

Ben Yelin: Yeah. I mean, I guess a theme on our show and other people who've talked about AI is - the ideal vision is it would remove some of the biases inherent in human-designed products. 

Dave Bittner: Right. 

Ben Yelin: But I think what we've found pretty definitively - and this article is kind of the nail in the coffin here - is that all of the biases and discrimination and deleterious effects of oversexualization and racism, etc. - all of those things that exist in the real world, you can't remove those from artificial intelligence because it's - the inputs here, which is the open source - what am I trying to say? 

Dave Bittner: Libraries. 

Ben Yelin: Right, the open-source libraries, it's garbage in, garbage out. So I think the broader lesson here is that AI is not going to solve our societal problems. You can try and train something to be unbiased, but in its natural state, it is going to reflect what's been put in it. And what's been put in it, if you've spent any time on the internet, is some of what this author alleges in this article. 

Dave Bittner: Yeah. It strikes me that it's kind of an uncomfortable mirror back on us, right? You know, it's - in other words, we have this idea about ourselves of what we would like to be - this aspirational idea of what we would like to be and how we would like technology to reflect it. But when you actually send the technology out into the world to see and say - hey, what are humans like? - this is what you get back. And it ain't great, right? 

Ben Yelin: Right. It's kind of depressing. It's like we have - we're getting confirmation from an artificial system that we, as a society, are screwed up and have prurient and somewhat racially biased desires and internet habits. Yeah, I mean, it is kind of a poor reflection of us as a society. But it also, to me, is - shows the limits of AI, generally. Certainly, this is not the only context we've seen where the promise of artificial intelligence runs into the reality that AI is just as biased as we are as conscious human beings. So I think this story is really just an extension of that. We've seen it in the context of law enforcement where artificial intelligence ends up being biased against racial minorities. We've seen it as it comes to things like predictive policing, where the algorithms themselves end up being statistically biased in one way or another. Facial recognition software - we've talked about that a good deal on this podcast. 

Ben Yelin: So yeah, I mean, if you understand it less as what we - an ideal version of ourselves and more of an accurate reflection of how screwed up we are - or, you know, to put it more charitably, all of our human - all of our fallacies as human beings, I think that's a better way to view AI. And I think it's incumbent upon policymakers to more closely align our vision with what actually ends up being put out there. And to do that, I think you have to pay more attention to the inputs, what goes into these open sources of data. But, you know, frequently, you want to get the largest universe of data into the system that the algorithm is going to use. And the larger the universe - there should be some law of nature about this, but the larger the universe of data, the more screwed up that data is going to be. 

Dave Bittner: (Laughter) Right. It's just - it's Yelin's theorem law. 

Ben Yelin: Yelin's law, yeah. Yelin's law of the internet. 

Dave Bittner: That's right. That's right. 

Ben Yelin: Copyright. 

Dave Bittner: Yeah. I wonder, too, to what degree does this reflect the notion - I'm pretty sure we've talked about it here before, about the whole idea of operating at scale, right? That if we had - if we were running a system like this at a scale where humans were reviewing the output before sending it along - right? - and saying, ew, wow, that's no good... 

Ben Yelin: Right. 

Dave Bittner: ...Or reviewing the input before putting it into the AIs to do their thing and saying, yeah, maybe we shouldn't include that - but that's not how any of this works, right? You'll talk to the technologists, and they'll say, well, we can't do that at scale. 

Ben Yelin: Right. 

Dave Bittner: And my response to that has always been, well, then maybe you shouldn't do that, right? 

Ben Yelin: Yeah. I mean, it would just be impossible to do. We've seen it a little bit with ChatGPT where they've actually tried to protect against some of their worst potential outputs. And so they've done things like, you know, they will not answer requests to build Molotov cocktails, for example. But people have found ways around it, and there aren't enough humans to correct some of these problems and protect against bad outputs when you're talking about a system that's operating at such a large scale. It's just not practical. 

Dave Bittner: Yeah. 

Ben Yelin: So, yeah, I think that's absolutely the problem. If you build a system that has so much promise, part of that promise is we have access to so many sources that go into our inputs. And I think the natural consequence of that is going to be that you get a lot of the garbage that's out there on the internet. 

Dave Bittner: Yeah. 

Ben Yelin: I think we've seen that in a bunch of different realms. I think there's - you know, your ultimate point in saying that maybe we just shouldn't do this, I think is certainly valid. There is value to artificial intelligence. I think people use it not just for nefarious purposes. 

Dave Bittner: Sure. 

Ben Yelin: There's value to things like ChatGPT. I think it could have a societal benefit if used in the right way. We just have to solve this dilemma of - do we really want to maintain this mirror where the output in these artificial intelligence systems accurately reflects who we are as a people and we're not liking what we see? Is there any way to correct against that when we're talking about such a large scale? And that's not an easy problem to figure out. 

Dave Bittner: Yeah. This article quotes Aylin Caliskan, who's an assistant professor at the University of Washington, who studies biases and representation in AI systems. And they say the stereotypes and biases it's helping to further embed can also be hugely detrimental to how women and girls see themselves and how others see them. I think that's right on. 

Ben Yelin: Very profound, yeah. 

Dave Bittner: Yeah. 

Ben Yelin: I mean, and I think that's ultimately what's going to be the biggest negative effect out of all of this - is it just perpetuates these stereotypes. If you can't even escape them in an artificial universe, how are we ever going to overcome them in the real, physical world? 

Dave Bittner: Right. 

Ben Yelin: Yeah. So I think that's absolutely the long-term consequence of this. I think that's a very profound quote. 

Dave Bittner: Yeah. All right. Well, we will have a link to this story in the show notes. 

Dave Bittner: Again, we would love to hear from you. If there's something you'd like us to discuss on the show, you can email us. It's 

Dave Bittner: Ben, I recently had the pleasure of speaking with Chris Gray. He is AVP of security strategy at cybersecurity firm Deepwatch, and our conversation centers on where he thinks data privacy regulation may be going in 2023. Here's my conversation with Chris Gray. 

Chris Gray: 2022 has been, in many ways, a continuation of 2019, 2020, of the last few years. A lot of the same issues that we have been facing have absolutely continued. But this last year, we've also had a pretty significant rise in what I'm going to call geopolitical hacktivism - whatever you want to call it - but where we've got definite malicious use activities that are in place based around, you know, the conflicts going on around the world, etc. So it's been a really, really interesting year based on the fact that we're seeing a continuance of where we've been, and we're seeing an introduction of some pretty new stuff. It's been going on, I believe, under the covers, if you will, for a long time. But the last year has really brought it out into the open and made it very evident in ways that it wasn't before. 

Dave Bittner: And what about movement from, you know, nations around the world - here in the U.S. and elsewhere - in recognizing that we've got to make some changes when it comes to privacy and security? 

Chris Gray: Well, so from a privacy perspective, that's been visible. That's been something that's been heavily observed - you know, all - going back to GDPR and everything rolling forward. And we can talk about that more in detail. But one of the areas where, you know, we're seeing, within the United States, for example, a large amount of activities that are going on - you can look at what DHS and CISA, the new roles and rules that they're rolling out for minimum baselines for critical infrastructure. You can look at the SEC reporting requirements for public companies. You can look at some of the cyber bulletins - some of the cyber framework ideas that are coming out in the Biden administration - like they come out for every president. But there is a large amount of information that is now being released in a much more aggressive fashion than probably what we've observed in the past. 

Chris Gray: So I believe the awareness from that security baseline is absolutely there, and the help is coming out. It comes with its strengths and weaknesses, obviously. And then, again, from a privacy perspective, we are seeing more and more teeth attached to privacy regulations - more and more protective regulations. And the interesting thing to me is we're starting to see it inside of the United States, which had a significantly different privacy model than what was commonly seen in Europe. We're seeing more of that European tact being adopted inside the United States. 

Dave Bittner: Well, let's dig into that some because, you know, you talk about GDPR, of course, covering the EU, but it seems to me that, you know, here in the States, it's been much more of a state-by-state kind of thing, and we haven't really gotten any traction at the federal level. Is that a fair way to describe it? 

Chris Gray: Yes, it's absolutely a fair way, although there has been motion within the last year, leading up into 2023. In many ways, when you're looking at privacy rules that are brought in at the federal level in the United States, the federal government tends to take what I'm going to call the foundational ruling - or the low watermark, if you will. These are the foundational baselines that are required, as opposed to - I set the floor. I don't set the ceiling. But you do have a number of things which are pulling forward this year. You had the American Data Privacy Protection Act, which was submitted back in May of 2022 by a bipartisan group. It has not passed yet, but the fact of the matter is it was submitted as consideration for law. You've got the SEC with the new privacy and security requirements that they're pushing through, which, according to some, you know, may actually go out and effectively include updating the COPPA standard. 

Chris Gray: These are things that the federal government is doing around this. There's still more to be done. But the fact that those rules are popping out, along with, like we said before, you know, Department of - DHS and CISA - they're putting out alerts every day, new security foundational baselines, aligning the NIST cybersecurity framework to minimum capabilities that are expected across areas of critical infrastructure. You can look at the cyber insurance considerations that are currently being looked at by the government, as to whether or not the federal government needs to step in to establish ubiquitous coverage for all or the capability to obtain it. There are a lot of things that are moving inside the U.S. federal government right now. They're just still a little bit, I think, in their infancy, but I think we'll see more motion on that with certainty in 2023. 

Dave Bittner: What is your sense in terms of the appetite for increased regulation here when it comes to privacy? Or are organizations - is it a matter of acceptance? Are they looking forward to these sorts of things, or are they fighting it tooth and nail? Where do we land? 

Chris Gray: That's a yes to all of the above and a... 

Dave Bittner: (Laughter). 

Chris Gray: ...Geopolitical answer. 

Dave Bittner: OK, fair enough. 

Chris Gray: I'm going to paraphrase it and take this to a very high level, but there are different privacy models that have existed around the world over the last years. You have the U.S.-centric model, which in many cases almost translates into you have to prove to me why I should not be allowed to have your sensitive information. You have the European model, which is you have to prove to me why I should ever surrender my personal information to you. And then you have what I call the rub model. And that's where you have nations that are kind of stuck between the two, and they're trying to navigate the shark-filled waters to conduct business and operations there. 

Chris Gray: In Europe, the push for more and greater privacy is a common thing. We're seeing it - you know, again, going back to GDPR, which is a standard that far surpasses anything that we've seen in the United States to date. Inside the United States, there's been significantly more pushback, with the exception of, you know - and when I say this, there will be people who will roll their eyes, but, you know, California's been leading the way on many of the privacy regulations. And you'll hear people in the United States - oh, yeah, that's just California being California - making it hard to do business. Well, is it really, though? The difference being there - it's that very critical mindset of - what is privacy, and to whom does it belong? And how critical is it? And there are different views. 

Chris Gray: So I would say there are a fair number of people that look at it and say, this is getting in the way of me making money and me doing business, and it's - the regulations are in the way, and I don't need them. And then, similarly, you're going to have people who are going to say, no, these are the foundational basics of how we should interact as human beings and how individual identity should be respected. What's the answer? Ask me in a decade because it's still very much up in the conflict mode. But I will say, generally speaking, that the world is moving more and more towards that European model. 

Dave Bittner: Are you seeing any interesting experiments around the world? Are there any nations that are taking, I don't know, a novel approach to this? 

Chris Gray: Let's take a look over in the EU right now. We're seeing a renewed interest in protection regulations that are popping up. You've got the French Blocking Statute, which, quite literally, is saying, even though, yeah, there is GDPR, there are situations where, in addition to that or around the outside of that, there are things which should not be shared because they are potentially putting French national security at risk. So you're not allowed to do this even if there's nothing else that says you're not. 

Chris Gray: There's the Trans-Atlantic Data Privacy Framework, which is popping up, and it's requiring adequacy decisions. You've got the standard contractual clauses, which have been around forever. But now, you know, after December of this year, it's changing - they're getting rid of the old SCCs, and you're having to put the new ones in. And these are directly going to affect the United States and, interestingly enough, now, the UK. Since Brexit, when they departed from the EU, they're kind of now in that not-us category, the way this is going.

Chris Gray: Is that novel? Is that strange? Is that new? Not necessarily, but it's absolutely showing, if you will, the flexing of muscle - of this is the way we do business. You know, U.S., we understand your economy. We understand your power, but you're going to have to do it our way because this is how we're going to operate. And, you know, the entire concept of the data transfer impact assessments, which are going to be flying back and forth very rapidly, where - is it net new? No, it's something we've been doing in - after a fashion, but now the requirements are becoming more codified and more strictly enforced. So, you know, there's significant play there. 

Chris Gray: We can go over, and we can look at China. China has put in place a lot of cybersecurity law - the Data Security Law, the Personal Information Protection Law. They put a lot of standards in play, which the pessimists in the world, if you will, will kind of laugh and shake their head and say, well, yeah, but China, your laws protect you from us. They don't protect us from you. And I don't know that anybody's laws are any different, but it's just very interesting seeing, you know, players, nations, organizations or whatever that have historically had a very, you know, we're the right - or we're in the right - kind of a mindset. And now, the stage is getting very full with conflicting requirements, conflicting laws, conflicting approaches. And it's going to require a much more nuanced interaction, if you will, between those various standards. 

Dave Bittner: Where do you suppose we're headed then, and who can take the lead in kind of, you know, matching up all of these different requirements around the world and finding a common denominator? 

Chris Gray: Well, as I tell my children on a regular basis, if you ever want to understand why anything is done, find the money and follow it. That's really kind of a well-duh statement, if you will, but it is one that matters. I think one of the points that is going to drive the most interaction and the most give-and-take in this is going to be around international business, obviously. There's what we want, there's what we need, and there's what we can live with. And I believe we're going to find ways to navigate this and come up with a minimal acceptable answer that will concede what needs to be conceded on both sides to allow us to get to where we need to be to spend money, work together, engage in trade, tourism, everything else of that nature. That is going to happen, and it's going to be a back-and-forth over time. Is there any one organization that's going to be able to stand in and say, I'm in charge of this, and I'm fixing it? No, I really don't think so.

Chris Gray: I think what it's going to come down to is you're going to see power brokers. And in that you're going to see - you know, you're going to see the United States. You're going to see the EU. You're going to see, you know, various entities within the Asia-Pacific area. You're going to see those pop up, and they're going to come together as players and try and drive through where they need to be. That's going to be, really, the future as I see it on this because there is no single entity in the world that everyone's going to just agree to bow down to the regulations that come from them and say, OK, I'll do what you tell me to do. 

Dave Bittner: What are your recommendations for folks in organizations who are responsible for privacy and perhaps security as well to kind of hedge their bets against the things that might be coming over the horizon? 

Chris Gray: Study. Study, study, study, and start planning your approaches rapidly. Security is not privacy. Privacy is not security. But they both enable and are enabled by one another. They really can't exist in our connected world without, you know, paying respect to one another. Anyone who's having to do this - you have to look at privacy as a driver for your security program. You enable privacy through the security controls and the practices that you put in place. And they're absolutely going to be necessary. So they need to be considered there. 

Chris Gray: From a privacy perspective, the individuals who are responsible for that need to make sure that they're getting in front of the security teams and that the upcoming message - the intent - is understood, so that not only are we designing systems with security by design, but privacy by design is a significant cornerstone in how we're doing our development. We're seeing technologies popping into place now that - you know, they're security technologies, but the way they're going to affect the outcomes for the privacy community and everything we're trying to do is anything but, you know, inconsiderable. I mean, if you stop and look, you've got everything from the quantum security stuff that's coming to the biometrics that are coming through. There are eight laws being considered in the United States right now, in 2022, that are directly attributable to, you know, biometrics and the plays that are in there.

Chris Gray: You've got blockchain. You've got the various forms of encryption that are coming up right now that are absolutely designed towards how we're going to do a better job at it. But at the same time, when they come around, they're probably going to have the ability to completely nullify all the cryptography work that we've done in the past just by the advancements in technology. There are so many things coming, and we've got to have these two teams come together with a true DevSecOps kind of mindset - that we are working together from the initial stages to make sure that both needs are being addressed by the single solution.

Dave Bittner: I'm curious. I think, certainly on the consumer side, you could make the case that folks feel a bit of resignation or surrender when it comes to having their privacy violated or, you know, all their information soaked up. You know, we have an endless stream of EULAs that we just click through, and who knows, you know, what's being gathered up. But I'm curious - are you optimistic, in the face of what I think is a lot of pessimism on the consumer side, certainly? Do you think that there's reason to have good thoughts about what might be in our future? 

Chris Gray: I absolutely do. I think a lot of the pessimism that you're talking about comes when you just click through - you know, there's a level of pessimism of, well, it doesn't matter what's in this. I've got to click yes or I can't use this. And companies are going to do what companies are going to do, and I just might as well accept it.

Dave Bittner: Yeah. 

Chris Gray: At least in the United States, that's the mindset. Again, in Europe, it changes. Or not necessarily Europe - but, in other parts of the world, it changes. But I think one of the things that we're going to be seeing - and it's very interesting as we do look at it - the rise in, you know, artificial intelligence, the things like homomorphic encryption. There are a lot of ways, a lot of technologies, a lot of capabilities that are rolling out now that will allow what people refer to as, you know, a cybersecurity mesh approach, where, basically, the effect of breaches is going to be minimized just due to all the interacting capabilities that, well, mesh together as a protective net, if you will. I think that is going to start to minimize the effect of loss where things are happening. 

Chris Gray: I think that as privacy becomes more critical and, as we've been talking about, has to be a consideration from the start, you're going to start seeing it just be part of the baseline design, and we're going to get better at it. I think also, to be thoroughly honest with you, if I address the security nihilist mindset of the breach is inevitable and it will happen - OK, if we accept that fact and we don't want to be in the pessimistic world, the other part of that is we're getting better at how we respond. We're getting better in our ability to detect when anomalous activity is happening, to respond to it and to minimize the impact because we live in a world where it does happen. The idea that fewer incidents makes for a better world has been statistically proven to be wrong. The more incidents that you suffer, if you will, the better you get at your response capabilities in dealing with them.

Chris Gray: And something that is going to pay dividends down the line is that we're getting better at the response capability. So even if the bad man does get in, our ability to detect it, respond to it, minimize the harm done by it, learn from it, continuously improve our world and do better at what we're trying to do - that's inevitable as well.

Dave Bittner: Ben, what do you think? 

Ben Yelin: That was a really good interview. I thought it was a little more optimistic than I would have been on this. 

Dave Bittner: (Laughter). 

Ben Yelin: You know, to me, we're coming off this year where there was a lot of promise for a federal data privacy statute in Congress that didn't materialize. And I think there's less of a chance of it happening over the next couple of years. But I think this interview could make you more optimistic just because there is a lot of innovation at the state level. There are now several models out there, both from GDPR and in other countries, as to how to set up data privacy regimes. So I take away a little optimism from the interview. 

Dave Bittner: Yeah. All right. Well, again, our thanks to Chris Gray for joining us. We do appreciate him taking the time. 

Dave Bittner: That is our show. We want to thank all of you for listening. The "Caveat" podcast is proudly produced in Maryland at the startup studios of DataTribe, where they're co-building the next generation of cybersecurity teams and technologies. Our senior producer is Jennifer Eiben. Our executive editor is Peter Kilpe. I'm Dave Bittner. 

Ben Yelin: And I'm Ben Yelin. 

Dave Bittner: Thanks for listening.