Caveat | Ep 192 | 10.26.23

Privacy landscapes for children.

Transcript

Cobun Zweifel-Keegan: We're not just concerned with kids anymore, we are also concerned with teenagers, with -- there's been a broadening of the ages to which new rules apply. We're not just thinking about privacy but also about mental health and safety, and thinking about ways to safeguard children generally, and the particular harms that started to emerge when it comes to both kids and teens using, particularly social media platforms, but other types of online services too.

Dave Bittner: Hello, everyone. And welcome to "Caveat", the CyberWire's privacy, surveillance, law, and policy podcast. I'm Dave Bittner, and joining me is my co-host Ben Yelin from the University of Maryland Center for Health and Homeland Security. Hi, Ben.

Ben Yelin: Hello, Dave.

Dave Bittner: Today Ben discusses whether AI giants may soon have to worry about defamation lawsuits. I've got the FCC's renewed interest in net neutrality. And later in the show, Cobun Zweifel-Keegan from the International Association of Privacy Professionals talks about the children's privacy landscape in the US and around the world. While this show covers legal topics, and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. All right, Ben, we've got some interesting stuff to cover today here. You want to kick things off first?

Ben Yelin: Yeah, this one is absolutely fascinating to me. It comes from Ars Technica. Actually, both our stories today are from the good folks at Ars Technica. But this is by Ashley Belanger. And it's about ChatGPT and other AI giants' potential liability for defamation lawsuits. So, the hook here is there are a couple of individuals who have sued AI companies. The most notable is this guy in Georgia, last name is -- oh, I guess his full name is Mark Walters. He is a radio host at Armed America Radio in Georgia. And he is suing OpenAI claiming that ChatGPT falsely stated that he had been charged with embezzlement. This is not the first time this has happened. It turns out that ChatGPT, as we know, hallucinates. It just makes up stuff about people.

Dave Bittner: Yes, it does.

Ben Yelin: The most famous incident of this was the attorney in New York State who tried to research the case law for whatever his case was about by doing searches on ChatGPT, and it just made up a bunch of cases and case citations.

Dave Bittner: Right.

Ben Yelin: They were all false. So, Mark Walters' case is among many others where ChatGPT is putting something out there that is simply not true. Here I think it's just the fact that Walters shares a name with somebody who actually was charged with embezzlement. That's happened in other cases where people who are kind of semi-famous might share a name with somebody else who's a criminal, and as far as the input is concerned, ChatGPT doesn't really know the difference between that one semi-famous individual and the alleged criminal. So, we're having all of these instances, Walters just being one of them, where ChatGPT is making stuff up that might end up hurting somebody's reputation. So, obviously, that potentially opens up ChatGPT and other AI giants to defamation lawsuits. You can't say things publicly that might ruin somebody's reputation. Now, this is a very fact-specific inquiry. The case takes place in Georgia, so we're using Georgia state law on defamation to evaluate the case here. But I'll note that the rule in Georgia is relatively similar to most other rules across the country. There's something really unique about this. So, there are really four elements to a defamation or libel lawsuit. Element one is that the publisher publishes false and defamatory statements. Element two is that those statements were communicated to a third party. Element three is that the speaker, so in this case OpenAI, was at fault in publishing those statements. And element four is that Walters was harmed. It turns out that the first element is going to be the most interesting, and we might get some groundbreaking case law. Basically what OpenAI is arguing is that they're not actually publishing false and defamatory statements. When you log into ChatGPT, you sign the EULA that I'm sure you've all read -- it's 300 pages.

Dave Bittner: Oh, the EULA giveth, and the EULA taketh away.

Ben Yelin: It sure does. OpenAI, in its terms of use, makes it clear that ChatGPT is a tool that assists the user in the writing or creation of draft content and that the user owns the content they generate with ChatGPT. So, in other words, the user is taking ultimate responsibility for the content being published. As a matter of law, OpenAI argues, they are not publishing any information. I think this could be a compelling argument in a court of law. And I think that's kind of bad news for the general public. We might get a situation where somebody is able to manipulate inputs -- maybe on something like Wikipedia or just through a Google search -- that go into ChatGPT, and it spits out false information about somebody that does hurt their reputation and is then passed on by a user to a public source. In that instance, if this interpretation of the law is upheld, the one that's been advocated here by OpenAI, they are shielded from liability, and it's actually the user who might be subject to a defamation claim even though the user would just be copying and pasting what they saw on ChatGPT. So, I don't know how courts are going to decide this relatively novel legal question. But I think it has very large implications. Whatever the first case is, it's going to have a lot of precedential value. I think the Walters case is a good test case. He is semi-famous, but he's not that famous. So, if somebody is trying to get information on him, it's not like he's Donald Trump, where every single piece of relevant information is online somewhere and you can wade through and figure out the fact from the fiction. This is somebody who's semi-famous; he has a local radio show in Georgia. So, it's fairly reasonable that if somebody saw this on ChatGPT, they might believe it, and this might hurt Mr. Walters' reputation. So, just a very interesting novel area of the law. The legal field around AI is still in its infancy. And I think this is going to be a really groundbreaking question no matter how it goes.

Dave Bittner: So, this makes me think about -- I think we've had some recent cases where they found that work generated by AI is not subject to copyright protection, particularly some of the image generation engines, right?

Ben Yelin: That's right.

Dave Bittner: Because an AI is not a person, and you can only assign copyright to a person. That makes me wonder if we're headed down the same path with this, where you can only charge a person with defamation, not an AI. In other words, for the purpose of generating an AI image, if the courts have said that that image cannot be assigned a copyright, and indeed the owners of the AI cannot be assigned the copyright for the image that the AI generated, can we use that as evidence or support for the notion that the AI companies shouldn't be held responsible for what the AI generates? If they can't get the benefits, the copyright benefits of what an AI generates, then are they also shielded from the liabilities of what the AI generates?

Ben Yelin: I think that's exactly what they are going to argue. It's like, either we are a creator for the purposes of both copyright and defamation, or we are a creator for neither -- you can't have it both ways. So, I think ultimately, they might argue that since previous case law, and obviously it's been in different jurisdictions, has held that they're not creators for the purposes of copyright, they therefore are not publishers for the purposes of defamation; they are not publishing information that's being used to ruin somebody else's reputation. And I think that very much could be where the case law is headed. I'll note, you know, it's unclear enough how this is going to be worked out in court that OpenAI and other AI giants, including Microsoft, when they've been threatened with lawsuits, will usually go in and try to ameliorate the problem at hand. So, in one case, Microsoft was being sued because there were some false results being put up in Microsoft's version of the AI chatbot.

Dave Bittner: Yeah, their Bing Chat.

Ben Yelin: Bing Chat, yeah. And they manually went in when they were threatened with the lawsuit and removed that false information. That says to me that these companies are unsure enough about where this case law is going to settle that it's worth it to them to try and nip the case in the bud before it makes it into court. So, while OpenAI has not directly responded to this Walters lawsuit, it's very possible that they'll try to make things right with Mr. Walters himself, maybe offering him some compensation and making sure that the particular false information about him isn't put out by the chatbot. Because I think there is enough uncertainty about how these cases are going to come out in court. And I'll note, if courts decide that ChatGPT and its competitors are liable, that's going to be a big hit to the industry. They're going to have to go in and institute some type of manual or automatic protections to make sure that false information is not being posted in the output of these chatbots. That's going to be very costly, it's going to be time-intensive, and ultimately, it's possible that that's going to turn off consumers. So, there are really, really high stakes here.

Dave Bittner: To what degree can they protect themselves using the EULA? I mean, what happens if every time I go to use ChatGPT, a window pops up that says, "Use it at your own risk. This thing is likely going to lie to you and make up things that are untrue. Have at it." Right? Does that protect them? Can you shield yourself with a statement like that?

Ben Yelin: You can partially shield yourself. EULAs and really any terms of service, whether you're talking about a EULA for using an online service or signing away your liability when you go skiing, aren't 100% absolute. There are laws and court precedents that end up trumping the EULA. There are certain things that you cannot contract away in a EULA. I think it's unclear at this point whether defamation and libel, in this particular context, are among the things that cannot be contracted away. That's going to be a very relevant portion of the lawsuit. It's possible that the result of this lawsuit is that EULAs are going to have to be more specific about disclaiming liability. Instead of the kind of general warning they have now, which is that whatever is produced through our chatbot is your responsibility, they're going to have to say something like you just said, where, you know, our chatbot is capable of being a bald-faced liar and just making stuff up that hurts somebody's reputation, and we are giving you this explicit warning. That might give them a greater degree of legal protection. So, the EULA is not automatically going to get them out of this lawsuit. It might cause them to adjust it to avoid further lawsuits similar to the one at hand here.

Dave Bittner: All right. Well, what's our timeline for getting the next steps on this? Any idea?

Ben Yelin: OpenAI is in the process of responding to the lawsuit; they're going to file an answer. So far, we just have the civil complaint that Walters filed. That answer is expected in the next couple of months. OpenAI, because of the novel issues, is trying to delay trial proceedings here. I guess what they're saying is that they need time to review the relevant legal issues and make sure that their filings are proper and set them up well for the case. I think behind the scenes, they might be trying to push this off as long as possible to see if maybe one of the other companies has to go through a lawsuit first so they can figure out what the most effective defense is. But sometime in the next year, I think we could get a resolution on this. And I think it's going to have major downstream effects on the world of generative AI. I think --

Dave Bittner: A year seems like a long time when it comes to AI, right?

Ben Yelin: I know. I mean, that's what's so frustrating.

Dave Bittner: That's the mismatch we're faced with these days, right?

Ben Yelin: It is. We talk about this all the time; the legal system just moves so slowly relative to technology. And who knows what the technology is going to be like when we finally get some type of adjudication on this case? I mean, maybe there's going to be a new market competitor that pushes OpenAI out of the market so they're no longer a relevant player; maybe that plays into whether they can actually be held liable for defamation in the first place. We just don't know. But it is frustrating that our legal system is so slow in resolving some of these cases. But in fairness, this is really complicated stuff. We're talking about defamation and libel common law that goes back to our English legal ancestors being applied to a technology that's really less than a year old at this point, for all intents and purposes.

Dave Bittner: Yeah, certainly in the public's imagination.

Ben Yelin: Exactly. You know, you nerds in the technology world I'm sure knew about it, but normal people -- if you had asked them in October or November of 2022 what generative AI was, I think a very small percentage would have been able to say they knew what it was. So, I think it is in its infancy, and I think whatever happens in this case and similar cases is going to have a big impact on the industry going forward.

Dave Bittner: Yeah, all right. Well, very good. Obviously, that's one we'll keep a close eye on here. My story this week also comes from Ars Technica. This is written by Jon Brodkin, and it's his coverage of a recent move by the FCC. The article's titled "FCC Moves Ahead with Title II Net Neutrality Rules in 3-2 Party-Line Vote." So, a little bit of background here. We had net neutrality during the Obama era. The Trump administration reversed that. And under the Biden administration, they've been trying to bring back net neutrality, but they faced -- the FCC had four commissioners, and so they were stuck in a 2-2 party-line deadlock. And recently the FCC got another commissioner on board who is voting with the Democrats, so now we have a 3-2 party-line vote to --

Ben Yelin: Actually a quick aside here.

Dave Bittner: Yeah.

Ben Yelin: When people are voting for Congress, I think they don't appreciate how big the stakes are for party control of the House and the Senate. The Democrats maintained control of the Senate last year, which allowed them to continue to confirm Biden nominees with a simple majority vote. And if that had not been the case -- if they had lost two more Senate races -- there's no way whatsoever that a Republican majority would have allowed a Biden nominee to the FCC to sail through. It just wouldn't have happened. We'd be stuck in this deadlock. So, if this is something you care about, just, you know, remember that when you go to the voting booth. This stuff really, really matters.

Dave Bittner: So, before we dig into the details here, I don't want to put you on the spot here, but can you give us a brief little description of what we're talking about with net neutrality itself?

Ben Yelin: Yes. So, net neutrality is a concept that's been very controversial over the last 10 to 15 years. Basically, it's about whether to classify broadband as a telecommunications service, which would allow the FCC to regulate internet service providers under the common carrier provisions of the Communications Act. This is what the FCC decided to do in 2015. So, in 2015, they used that statute to prohibit fixed and mobile internet providers from blocking or throttling traffic or giving priority to web services in exchange for payment. So, the neutrality principle is that no web service is going to be throttled as a result of not being able to pay.

Dave Bittner: Right. So, correct me if I'm wrong here, but my recollection is that the issue at hand was, if you had an organization like Netflix that requires a ton of bandwidth, some of these providers, you know, the Verizons of the world, the AT&Ts of the world, would say, "Hey, listen -- "

Ben Yelin: Pay us up for that bandwidth, buddy.

Dave Bittner: Right. You're using up a lot of our available bandwidth, and it'd be a shame if anything were to happen to that bandwidth -- like, you know, your users were unable to enjoy your service because they're not able to get the bandwidth required to view a movie. So, why don't you pay us, and we'll make sure that that traffic makes its way through our network without any issues? And that's what the FCC took issue with in its net neutrality rules. Do I have that mostly right?

Ben Yelin: Yeah, exactly. This was basically an implicit threat to companies like Netflix. Netflix was the main one, but now obviously we have additional streaming service competitors. So, I think it has a broader impact now than it did in 2015, when Netflix was kind of an island on its own in streaming services. This was prior to Peacock, and Paramount Plus, and even Disney Plus. So, I think it has even larger implications now than it used to. But, yes, it was about whether these companies could potentially be throttled if they weren't willing to pay the Verizons and the AT&Ts of the world for that greater bandwidth. The Obama administration in 2015 instituted those rules under Title II of the Communications Act. Those were repealed under the leadership of Ajit Pai in the Trump administration. He was a pretty prominent figure on the FCC. And to his credit, I mean, I think he said all along that he thought this was an undue regulation -- that if we were going to have a robust market in the telecommunications sector, then these companies should be able to sell the bandwidth they have. That should be a source of revenue. So, he repealed the Obama-era rule and reclassified broadband as an information service under Title I of the Communications Act. And then you've had this kind of nebulous gray zone for the past several years while there's been this 2-2 deadlock on the FCC. Now, the process here is not over. We are simply at the first stage, which is the notice of proposed rulemaking, meaning that all of the stakeholders are going to have their opportunity to weigh in. I'm sure the telecommunications companies are hiring the best lawyers and writers to offer comments in opposition to this new net neutrality rule.

Dave Bittner: Right. And they'll likely sue as well, right? This article points out that after the rulemaking is done, we should brace ourselves for the incoming lawsuits from the broadband industry.

Ben Yelin: Yeah, I think that's exactly what's going to happen. If they go through with this regulation, they're going to be facing a barrage of lawsuits. Obviously, our court system, when you look at it in its totality, is probably going to be more friendly to these telecommunications providers than the current leadership of the FCC, or Congress for that matter. So --

Dave Bittner: Do you mean the make-up of the Supreme Court as it stands or --?

Ben Yelin: Yes, and also the make-up of the various circuit courts that might end up deciding these cases. The telecommunications companies, since they're located across the country, can do something called forum-shopping, where they seek out the jurisdiction where they have the best chance of winning, since they have a presence in all 50 states. And so, they can choose a circuit where they think they're going to have the most success. Something like the Fifth Circuit, based in Texas -- kind of the central south of the country -- is going to be their best avenue to try and get this overturned. So, yeah, I think we're just at the beginning of this battle. It's happening now because we have full membership of the FCC. We'll see when they publish the final rule. That would go into effect sometime probably in February or so, and that's probably when we'll see the lawsuits.

Dave Bittner: This article makes the point that the Republican members of the FCC argue that back when the Trump administration repealed the Obama-era rules, the folks who were in support of net neutrality were saying that basically this was going to break the internet, that terrible things were going to happen. And that didn't happen. The internet is working fine. Just, you know, from my own personal point of view, it seems like the streaming services are all working fine. I can say that wasn't always the case. I remember in the early days, there were frustrations with available bandwidth, but I think we've come a long way in terms of just the build-out, the bandwidth that most people have these days. And admittedly, you know, I don't know what it's like if you're in an area that is bandwidth-starved or you're in a situation where you can't afford the fastest internet there is; perhaps it's a different story. But I think the Republicans certainly can make a good case here that, you know, those warnings didn't come to pass.

Ben Yelin: Yeah, I frankly think they have a very compelling argument to make there. I mean, I remember the scene in 2017 when Ajit Pai at the FCC was trying to reverse the Obama-era net neutrality rules. And the absolute freak-out in the activist community. Some of the very institutions that we quote frequently on this show were having people sign petitions -- the Electronic Frontier Foundation, EPIC, and others -- saying that this would ruin the internet as we know it. One of the Republican members of the FCC referred to those warnings in 2017 as a hoax. And basically said, "What we've had for the past five years is better than having 1930s-era government controls imposed on the modern internet, which might end up increasing costs on consumers." And I frankly think that's pretty compelling. I admit that I was a pretty strong believer in the principles of net neutrality. And I think we all have to reflect on what's happened over the past six years since these rules were reversed, and think about why those dire warnings failed to lead to these awful outcomes.

Dave Bittner: Can I inject a little bit of snark and say that the internet doesn't need net neutrality to ruin itself?

Ben Yelin: Right, exactly. I think Elon Musk is doing a fine job of that himself.

Dave Bittner: Right. There's plenty of blame to go around.

Ben Yelin: Yeah, I mean, it's almost like that's the least of our problems. Not only do we talk about all the other legal issues, but just in terms of the universe of issues with the online world, especially with AI coming online, referencing our last story, this feels very 2015, 2016 in terms of its relevance. But we'll see. I mean, maybe the three Democratic commissioners have a compelling argument that these rules need to be reinstated. Maybe, given that we've had a more deregulated internet over the past several years, the negative results are still to come, and that's an argument they're going to have to make. But, yeah, I mean, it's certainly compelling to me that there were all these dire warnings in 2017 that so far have not come to pass.

Dave Bittner: All right. Well, we will have a link to both of our stories in the show notes if you want to check those out. And, of course, we would love to hear from you if there's something you'd like us to consider for the show, you can email us, it's caveat@n2k.com. [ Music ] Ben, I recently had the pleasure of speaking with Cobun Zweifel-Keegan. He is from the International Association of Privacy Professionals. And our conversation centers on the notion of the privacy of children in the US and around the world. Here's my conversation with Cobun Zweifel-Keegan.

Cobun Zweifel-Keegan: This is an evergreen topic for sure, so I'm excited to sit and chat with you about this today. It has been a really busy time for new requirements around children's privacy in the United States and beyond. I would point to two big themes in what has changed recently. One is a broadening theme: we're not just concerned with kids anymore, we are also concerned with teenagers -- there's been a broadening of the ages to which new rules apply. And then the rules are also expanding in terms of the scope of concerns that are reflected in new laws and regulations. We're not just thinking about privacy but also about mental health and safety, and thinking about ways to safeguard children generally and the particular harms that have started to emerge when it comes to both kids and teens using, particularly, social media platforms, but other types of online services too. All of that is building on the foundation in the US of COPPA, the Children's Online Privacy Protection Act, which is the oldest kids' privacy law in the world. That has always applied just to children under 13, and it's very focused on parental consent and parents' ability to determine how their children's information is going to be used by companies. These new trends are expanding in those two ways that I've mentioned. And we're seeing that at the state level, in lots of federal conversations, and from global regulators who are tackling this issue too.

Dave Bittner: Where do the regulators feel like we stand these days? I mean, is COPPA serving us well, or is it time for an update?

Cobun Zweifel-Keegan: Yeah, that's a good question. And folks always disagree on whether there's a need for new regulation, of course. But over the last couple of years, regulators have started to take notice of some big issues on social media platforms that were raised by whistleblowers and others around how these types of platforms impact children's and teens' development. That includes things like addiction -- both to the screen itself, the kind of addictiveness of these types of interaction modalities, but also how specific themes might fuel mental health issues like certain addictions or eating disorders and things like that. There are also major concerns around safety, around how kids interact with each other, and how they can interact with adults on certain platforms. And the other concerns just follow from the fact that we recognize that teenagers are subject to specific privacy concerns that may not be true for other demographics. They're still developing; they may not want to have sort of a permanent record of their activities that could follow them around when they're still experimenting with their identity and things like that online. So, all of that together has led to this concerted and enhanced effort around thinking through ways that regulators can approach this issue and provide new protections for that demographic and new guardrails for platforms to implement. The UK really led the way in that conversation. They had what became known as the age-appropriate design code, which is an enforceable regulation that expands on their privacy protections. And that translated to a law by the same name in California, California's Age-Appropriate Design Code Act, which was scheduled to go into effect next summer but has recently been blocked by a court decision. Similar ideas are reflected in federal legislation: KOSA, the Kids Online Safety Act, and what's known as COPPA 2.0, which actually has a different name, the Children and Teens Online Privacy Protection Act. Both of those are really top priorities for the Commerce Committee in the Senate. In the House, they've been much more focused on comprehensive protections -- on protections that would cover all consumers, not just kids. And those House stakeholders will mention the fact that their proposals also protect kids, and they think that they're actually stronger than the proposals that are in the Senate at the moment. President Biden, too, has made it a priority and has mentioned in both of the last two State of the Union addresses this need to do something to help change the way that kids and teens use online systems, citing a lot of the same harms that I mentioned before. But, yeah, it might be worth chatting a little bit about the fact that that law was just blocked in California, because that does put some of these efforts at risk.

Dave Bittner: So, let's dig into that. I mean, what was behind the effort to block the law?

Cobun Zweifel-Keegan: Well, the effort to block the law stems from industry push-back on the new requirements that have been included in the California law. And the main qualm relates to possible content restrictions -- restrictions on how platforms determine the types of content that are provided to their users. That creates the contours of the debate there, which center around the First Amendment and whether the government is overstepping by restricting companies' practices when it comes to speech on their platforms. And that kind of puts this privacy-versus-First Amendment tension in place, although these laws are a little more focused on kids' safety than they are on privacy. So, it's unclear whether those same arguments are a threat to general privacy legislation everywhere -- I mean, in the US at any level. But, yeah, a trade group named NetChoice has brought challenges to the California law and also to other similar state laws. Some of those other laws are somewhat more aggressive in some ways; some of them ban the use of social media by kids and teens unless they have parental approval, things like that. The California law is a little more nuanced but still subject to some of these same concerns. The judge in that case really heard the complaints that NetChoice raised. She has issued a preliminary injunction blocking the law from going into effect. And in doing so, she kind of peeked behind the curtain at the substantive issues here, and she is influenced very strongly by the Supreme Court case called Sorrell, which people have warned before could have implications for a really broad understanding of First Amendment jurisprudence that treats all data as speech. And so, if that reading of the Supreme Court precedent continues to hold at multiple layers of the judicial system, that could mean that these laws will need to be a lot more targeted and very focused on not triggering that First Amendment red flag.

Dave Bittner: Can we talk a little bit about the IAPP itself and the role that you all play with these sorts of things? What is the mission of your organization?

Cobun Zweifel-Keegan: Yeah, that's a great question. I'm always happy to talk about that. The IAPP is the International Association of Privacy Professionals. We are a global nonprofit professional association, which means we do all of the things that other professional associations might do -- you can think of something like the Realtors' association. Basically, we're the people that do the work of privacy and data protection in organizations around the world. We have about 80,000 members all over the world. And that allows us to be what we call policy-neutral in these types of conversations. We aren't advocating for or against certain types of protections being represented in law. But what we are saying is, no matter what rules are in place, you need skilled and responsible people to do the work inside of those organizations to put those rules into practice. And that's been the philosophy behind the rapid growth of the privacy profession over the years. And now, we're actually expanding more and more into governance, to try and grapple with the new issues and the broader set of interdisciplinary practices raised by AI. But that's a different conversation, I think -- somewhat. Kids are exposed to AI as well, but for now there are no specific AI kids' laws out there yet.

Dave Bittner: Yeah. It's interesting to me, the kind of natural tension I see when it comes to teens in particular online. I think all of us who are adults were once teens, and we understand what a challenging time of life that can be, especially when it comes to your privacy. I can imagine kids online who -- you know, they are figuring out who they are. Some of that they want to share with friends and people they've met online. Some of that, they probably don't want to share with their parents. But at the same time, we realize that their parents should have a certain right to know what's going on in their kids' lives. That strikes me as an interesting thing for these platforms to try to balance.

Cobun Zweifel-Keegan: Yeah, exactly. And I think that's a major tension point as well, both in how we try to craft regulations and in how platforms are engaging on this issue on their own. And it is important to underscore the fact that trust and safety teams and privacy teams inside of these organizations have been deeply engaged on these issues, partially in response to regulations like the UK's, but also just recognizing on their own that these are things that need to be wrestled with and that they need to take into account the specific needs of that demographic. I mean, this is playing out in real time -- a kind of diffusion of different regulatory approaches. In Utah, I think we saw probably the most extreme version of what you're talking about here, where the idea of having parents in the driver's seat was expanded to the 13-to-17 demographic, the under-18 group. And that really does draw attention to the fact that teenagers aren't the same thing as children. As we grow and mature, we expect that there should be more autonomy. Laws do recognize the privacy rights of young people, especially as they get older into those higher-numbered teen years. And those interests in development and autonomy are not always in keeping with letting our parents know everything that we're doing. I think the privacy interests vis-a-vis parents are not always reflected in some of the regulations that we're seeing now. And some of that flows from not thinking seriously about this age-appropriate concept. In the UK, that original code is very focused on different age groups, thinking about how different practices may apply and different considerations may need to be taken into account for much more segmented age groups as you move up. Some of the other laws are a little less nuanced, saying let's treat everyone under 18 with similar restrictions. And, yeah, there is a lot of conversation to be had around that and what it could mean for the development of future generations of online people.

Dave Bittner: You know, my co-host Ben Yelin and I talk on this show a lot -- and it's practically a cliche -- about the phrase "but what about the children." And I think sometimes you'll see legislation proposed that wraps itself in that phrase. And I'm curious, from someone like you who's on the frontlines of this, how do you go about separating the good-faith attempts to protect kids and look out for them from the people who are using that phrase as a shield -- who are really trying to do something else, but by summoning the question of how we protect the kids, because everybody wants to protect the kids, can sometimes make it easier to push through whatever it is they're trying to do? First of all, does that make sense?

Cobun Zweifel-Keegan: No, it certainly does make sense. And it definitely is a cliche in the policy community, for sure. I think we have seen cycles of policymaking and discussion for a long time that prioritize kids' issues over other demographics. And some privacy advocates have taken issue with that, focusing on the fact that, hey, if we don't start from a firm foundation of comprehensive privacy protection for all types of consumers in the United States, why are we focusing on even more protections for children? And that's part of the argument that House Commerce Committee Chair Cathy McMorris Rodgers actually made the other day at an event, focusing in on the fact that she wants to prioritize comprehensive legislation if possible -- and that that in and of itself would give protections to children while also including special protections for them. The most recent drafts of the American Data Privacy and Protection Act do have specific requirements that apply to children as well. But you're right, I think it is a common policy trick, I guess, to focus on the less debatable groups of people who do need the most protections. And it does become a challenge then to separate the main ideas and the goals of those types of efforts from the kind of ethics-washing idea of saying, well, this also will help the children. And I think so far that has resulted, not necessarily in new legislation at the federal level, but in a lot of conversation around the children over and over. And I think it's important to remember that there are other groups that are worthy of special privacy protections -- whether that's minority groups, marginalized people of other stripes, or different age demographics like older adults. There's a lot more robust conversation we can have beyond the average consumer and their privacy needs that I think sometimes gets overtaken by a focus on children. [ Music ]

Dave Bittner: Ben, what do you think?

Ben Yelin: Won't somebody please think of the children?

Dave Bittner: I know, I know.

Ben Yelin: Yeah, but I think it is a really important conversation. Oftentimes we're weighing competing interests, the interest of the children and online safety but also the interest of a free and open internet with robust First Amendment protections. And I don't think this is an issue that's going to be easily resolved.

Dave Bittner: Yeah. I mean, I think it's interesting that we have organizations like the IAPP, which Cobun represents, that are really taking a good-faith interest in trying to thread this needle between the legitimate need for protecting children online and, you know, those who would use that need, far beyond its real-world intentions, for their own purposes.

Ben Yelin: Yeah. And every couple of years we have new proposals in Congress for online safety for children. And usually, privacy advocacy organizations come out in strong opposition because these regulations, as proposed, are just overbroad and might end up stifling free speech and the open internet for adult users. So, I just think it is a very difficult needle to thread.

Dave Bittner: All right. Well, our thanks to Cobun Zweifel-Keegan for joining us. We do appreciate him taking the time. [ Music ] That is our show. We want to thank all of you for listening. A quick reminder that N2K Strategic Workforce Intelligence optimizes the value of your biggest investment, your people. We make you smarter about your team while making your team smarter. Learn more at n2k.com. A quick reminder for our Pro subscribers to check out our "Caveat" newsletter with all of the latest stories and links to online policy issues. You can learn more about that and become a Pro subscriber on our website, cyberwire.com. Our Senior Producer is Jennifer Eiben, the show is edited by Elliott Peltzman, our Executive Editor is Peter Kilpe. I'm Dave Bittner.

Ben Yelin: And I'm Ben Yelin.

Dave Bittner: Thanks for listening.