Caveat 8.17.23
Ep 183 | 8.17.23

ChatGPT: The ban chronicles.

Transcript

Chris Denbigh-White: So, can we actually trust the outputs of these large language models? Yes, they produce content that is very self-confident in its tone. Large language models will write a very well-reasoned argument. But are those arguments in fact true, and can they be trusted?

Dave Bittner: Hello, everyone, and welcome to "Caveat," the CyberWire's privacy, surveillance, law, and policy podcast. I'm Dave Bittner, and joining me is my co-host, Ben Yelin from the University of Maryland's Center for Health and Homeland Security. Hello, Ben.

Ben Yelin: Hello, Dave.

Dave Bittner: Today, Ben discusses the police raid on a local Kansas newspaper and its implications for digital privacy. I've got the story of the Supreme Court possibly taking on a social media moderation case. And later in the show, my conversation with Chris Denbigh-White of Next DLP. We're talking about ChatGPT and how the US is looking to create rules around it. While this show covers legal topics and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. Alright, Ben, let's get started here. And I have to say, your story this week has had my attention since it broke earlier in the week. What have you got for us?

Ben Yelin: Yeah, it's a really remarkable and disturbing story. So this takes place in the small county of Marion, Kansas. And it concerns a local newspaper there, the Marion County Record.

Dave Bittner: Been around for over 100 years.

Ben Yelin: Yes. And it's family-owned, and we'll get into the details of this. The current owner is Eric Meyer, and his mother, who died over the weekend at 98; I think the proximate cause was this police raid. She also had a history as an owner and publisher of the newspaper. So, a little bit of backstory here. There was an event at a restaurant featuring a local congressman, Jake LaTurner, and a restaurant owner named Kari Newell. Newell had kicked newspaper staff out of the event with this congressman. The congressman had nothing to do with it; his staff was apologetic. But the restaurant owner was upset at the press that were there. The press ended up reporting on this incident, and this made the owner of the restaurant even more upset, and she posted something hostile about this newspaper on her Facebook.

Dave Bittner: Now, is it reasonable for her to choose who can attend the event at her restaurant?

Ben Yelin: It sure is. I mean, it's not polite to kick out a member of the press, but this is a private restaurant, she can do whatever she wants.

Dave Bittner: Okay.

Ben Yelin: So somebody, and we don't exactly know who this is, it's a confidential source, contacted the newspaper and provided evidence that Newell had been convicted of drunk driving. This is significant because it would cause her to lose her liquor license. Somebody at that newspaper tried to check the records to see whether that allegation was true. So a reporter looked up, and it's public information, whether Ms. Newell had a drunk driving incident. Meanwhile, the newspaper, under the direction of Meyer, was suspicious that this confidential source was basically trying to set up Ms. Newell. They were actually concerned on her behalf.

Dave Bittner: Right.

Ben Yelin: They thought it was an ex-husband trying to gain an advantage in a divorce proceeding.

Dave Bittner: Wow.

Ben Yelin: So, somehow this all led to a warrant that was signed by a judge that authorized law enforcement in this Kansas town to raid the offices of the newspaper and the home of this publisher and his mother. So, the warrant was, like I said, signed by a judge. I think there are a lot of issues with this warrant. It's over-broad. But it authorized law enforcement to take basically all of the digital records relating to this newspaper and this story. So they stole hard drives, laptops, they --

Dave Bittner: They seized.

Ben Yelin: Yes! They seized them, yeah, they took them.

Dave Bittner: Right.

Ben Yelin: Digital communications, servers, they seized items with passwords. They took utility records. All documents and records pertaining to Newell were ripe for seizure as part of this warrant. There were a series of incidents related to these raids themselves. Allegedly one of the officers injured a reporter's finger during their raid of the newspaper by trying to grab a cell phone out of her hand. Officers at Mr. Meyer's home took photos of his bank account information. They wouldn't answer Mr. Meyer's question about why this raid was taking place. So, our angle really has to do with digital privacy rights. That might be sort of a niche concern when we're talking about a story of such broader significance.

Dave Bittner: Yeah.

Ben Yelin: But I think it ties into the First Amendment issues here. So there are special protections in federal law for journalists. Because we rightly anticipate, as happens in third world countries, that governments left to their own devices are going to go after journalists. It's a way to disincentivize journalists from writing negative things about the government, from speaking the truth, telling stories that any sort of authority doesn't want the populace to hear. So we have special protections in federal law against seizing the material of news reporters unless that material is connected to a specific crime, whether that crime has been committed or is about to be committed. And here there's no allegation that anybody involved was engaged in criminal activity. They were simply reporting on something. They searched Ms. Newell's arrest record using a public database. And they actually did not print the story about Ms. Newell because they were cautious and concerned about the confidential informant and his reliability. So this warrant, which like I said was signed by a judge who I think is going to be in a lot of trouble for this, was just completely defective and runs afoul of First Amendment protections.

Dave Bittner: Wasn't the warrant for fear of identity theft? Wasn't that the charge or the motivation?

Ben Yelin: That was the allegation. They were concerned that by posting any information or having any information about Ms. Newell that it was going to subject her to identity theft.

Dave Bittner: Which is to me the thinnest of premises on which to base a warrant where you're confiscating --

Ben Yelin: It is an extremely thin premise. Now, the other element of this is that I think Ms. Newell is pretty prominent in her community. And she had testified at a city council hearing complaining about this newspaper, about how it had been snooping on her restaurant and how they wrote about this incident where she kicked newspaper staff out during this event with a congressman. So.

Dave Bittner: Well there's another element with the police chief too, right? Have you heard about that?

Ben Yelin: I have not heard about --

Dave Bittner: So the element -- listen to this.

Ben Yelin: This story is an onion. There are just a lot of layers to peel back here.

Dave Bittner: Right. So, and I'm speaking off the top of my head here, so forgive me if I don't get all the facts exactly correct. But it is my understanding that this police chief came to this town from another town where there were multiple allegations of sexual impropriety with his coworkers. And so the allegation is that he was probably on his way out in the other town because of these allegations, and so there's a concern about the search, because the newspaper was also investigating those allegations.

Ben Yelin: So he might have a bone to pick with the newspaper, in other words.

Dave Bittner: Not only does he have a bone to pick with the newspaper, but he'd be very interested to know who the newspaper's sources were when it came to those specific allegations. So that is in play as well.

Ben Yelin: Yeah. So we have a lot of rather illegitimate reasons for instituting this raid not only on journalists themselves in the office of this newspaper, but on the publisher's home. And like I said, there's this extra element of this 98-year-old woman who was traumatized by these events.

Dave Bittner: Right.

Ben Yelin: They took her Alexa device, which she relied on to carry on her daily life. It was her source of news, it was her source of entertainment when nobody was at the home. She was relatively healthy for a 98-year-old. And because of the stress incurred as part of this raid, she passed away this weekend.

Dave Bittner: Yeah.

Ben Yelin: So that is a very specific and disturbing consequence of what was very clearly, in my view, an illegal raid. What's nice is that the entire media landscape has rallied to the side of the Marion County Record. There's a group called the Reporters Committee for Freedom of the Press, based in Virginia. They organized a letter written on behalf of this newspaper, signed by basically all the major newspapers in the country and other media sources. But this is ultimately a story about a judge who improperly authorized what was very clearly an illegal seizure, and about just the vast amount of digital data that can be collected in such a circumstance if there are no checks and balances preventing such a thing from happening. And then there's the extra element of this being an attack on journalists, and the fact that we have provisions in federal law to protect the interests of journalists. A proper way to obtain this information would be via subpoena, and that was not done here. So this is just a pretty egregious example of an invasion of privacy and an abuse of First Amendment rights.

Dave Bittner: Is this mostly on the judge? I mean, should the judge have been the backstop here?

Ben Yelin: Yes. So far the judge has not commented on this. I think the warrant was defective; I think all of the legal analysts that have reviewed it agree. There's a copy of this warrant that's publicly available. I don't know if it was released by the court, but somebody got a hold of it, so it's been posted online. And it seems like the authorized seizures here were over-broad and poorly written, drafted in a way that would violate both federal law, because they searched and seized materials from journalists, and prevailing case law, which would necessitate a subpoena instead of this type of invasive search. The judge is supposed to be a backstop. This is a Marion County District Court magistrate judge. You know, they get applications, somebody tells them they have probable cause to search something, and there is the concern that they can just kind of be rubber stamps and not read the entire record, and not be fully aware of exactly what they are authorizing. I'm not saying that's the case here.

Dave Bittner: Right.

Ben Yelin: But there's always the risk of that. A magistrate judge has to sign a lot of different things.

Dave Bittner: Okay. So it's possible this is an overworked, you know, judge who doesn't have enough time to do the thing. And I'm not making excuses for this judge, believe me. But perhaps there's a rationale to be made.

Ben Yelin: Yes. I'd almost rather have it be that, where it was clearly a mistake. She didn't read it carefully enough, she didn't realize the implications of authorizing this type of invasive search on the homes and offices of local journalists.

Dave Bittner: Yeah.

Ben Yelin: I'd rather have it be that than her intentionally authorizing this. Neither she nor the court has responded to requests for comment on the story as of yet, so we don't know what her explanation is. In terms of next steps, there's an office in the state of Kansas that looks into these types of civil rights violations, the Kansas Bureau of Investigation. And they said that they are looking into the allegations of criminal wrongdoing in Marion, Kansas. The investigation is ongoing. I think depending on how the facts present themselves, certainly there could be a civil case brought by some of these journalists, the publisher, and perhaps the estate of this 98-year-old woman who died as a result of these events, under a number of common law torts. I mean, certainly this feels like the intentional infliction of emotional distress.

Dave Bittner: Yeah.

Ben Yelin: For starters. So there's that. And then getting some type of relief for what seems to be an unconstitutional search and seizure. Whether that would end up with criminal charges against members of the Marion Police Department or other law enforcement agencies, that's unclear at this point. But I certainly think it would be conceivable if they can prove that this activity here isn't protected by good faith, isn't protected by any sort of lenience towards law enforcement.

Dave Bittner: Would those charges potentially come from the state, or would it be a federal level thing? Or both?

Ben Yelin: Well, because there's this federal law protecting journalists, and it applies at both the federal level and the state level since we're talking about First Amendment issues here, it's possible that there could be a federal investigation, and there could be federal charges filed against members of this police department. It's happened before. The federal government does bring cases against local police departments for abuses of civil rights. And this would be no exception.

Dave Bittner: What about the general chilling effect of something like this? I could imagine at the federal level they're concerned about that. You know, if I'm a small-town newspaper, I'm going to see this and be hesitant to take on my local law enforcement, if this is what they can do to me.

Ben Yelin: Oh, 100%. I mean, I think that's a very important implication here. The message from the police department is back off, stop investigating us, stop investigating our ally, stop investigating people in the community that we don't want you to investigate, or this is what could happen to you. And so the fear here is that journalists not just in Marion, but all over the country, are going to be more fearful about publishing these stories. Then there's the practical effect. Because of everything that was seized from the offices and these households, the Marion County Register wasn't sure that they were going to be able to actually print their newspaper.

Dave Bittner: Right.

Ben Yelin: Or post an online edition, just because all of their records had been seized. When we're talking about seizing servers, hard drives, et cetera, that's the machinery that gets these news articles online. So it might not just be a chilling effect; they might literally be making it impossible for them to put forward an edition. Although the publisher of the newspaper promised the community that they would do everything they could to get their newspaper back online and reporting on this story. And it appears that they have been able to do so in the last couple of days.

Dave Bittner: Just seizing a 98-year-old woman's Alexa. I mean, that'll show them, right?

Ben Yelin: I know. I mean, it seems just particularly cruel. I just, you know, I know members of law enforcement were just doing their jobs here. But the fact that nobody within this police department nor the magistrate judge who signed off on this just kind of stopped, thought about it for a second, and said this feels blatantly unconstitutional and it feels like something that's going to get us in a lot of trouble. It's concerning that that never crossed their minds. I mean, maybe they thought that this wouldn't become a national story. But they were badly mistaken.

Dave Bittner: Yeah.

Ben Yelin: And I think the community of journalists and reporters has really rallied around the Marion County Record. I might have said Marion County Register. It's the Marion County Record.

Dave Bittner: Okay.

Ben Yelin: Is the name of the newspaper. I think they're rallying around this newspaper for that reason, because it was so egregious.

Dave Bittner: Yeah. And a loss of life here, potentially triggered by this.

Ben Yelin: Absolutely.

Dave Bittner: It's tragic. Yeah. Alright, well we will have a link to that story in the show notes. More to come with that one, for sure.

Ben Yelin: Yes. That is definitely not the end of that story.

Dave Bittner: My story this week comes from Wired. This is an article written by Jeff Kosseff, and it's about an internet free speech case that perhaps is on a collision course with the Supreme Court. Which, you know, we say is practically a trope these days, but in this case, it's true. This involves laws in Texas and Florida which aim to restrict platforms, so we're talking social media platforms, from moderating speech, and to require transparency about user content policies. This article points out that the Texas law states that, "large social media platforms may not censor a user, a user's expression, or a user's ability to receive the expression of another person based on viewpoints or the user's location." And there's an organization called NetChoice which represents tech companies, and they're challenging the laws. So the nuts and bolts of this is that about a year ago, the US Court of Appeals for the 11th Circuit struck down Florida's moderation restriction, and then the US Court of Appeals for the 5th Circuit upheld the Texas law, saying, "Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say." That was Judge Andrew Oldham. So, Ben, you and I have talked before about how, when you have these disagreements between the circuits, that can get the attention of the Supreme Court, right?

Ben Yelin: Yeah, I mean, you have here what's called a circuit split. The laws are not identical but they are very similar. And you have two different circuits that came to two distinct conclusions about the constitutionality of these types of laws. The Supreme Court, as this article notes, gets about 7,000 petitions every year for certiorari, to have a case be heard in front of the Supreme Court. They only end up taking about 100 cases. So your odds are never that great. But the way you can increase your odds is when there is this type of prominent split on such an important constitutional issue like the right to regulate content on one's own platform. There's kind of a theme to this show here. This is another First Amendment case.

Dave Bittner: Yeah.

Ben Yelin: It's about not just the right of users to post content on these websites or the question of liability or a liability shield as was the case in Gonzalez v. Google, which we talked about, but here we're talking about the First Amendment rights of the platforms themselves to monitor content. And I think the implications here are very significant.

Dave Bittner: Is this a shot across the bow for Section 230 of the Communications Decency Act?

Ben Yelin: I do think it is. I mean, the interplay between these laws and Section 230 is complicated. It's not a direct nexus between Section 230 and these laws, because we're going beyond what Section 230 is about, which is a liability shield.

Dave Bittner: Right.

Ben Yelin: So it prevents these platforms from being sued for their content moderation decisions. We are going a step beyond that here. It's not about whether they can be sued, it's about whether they have the right at all to moderate content as they see fit. And in the view of Judge Oldham on the Fifth Circuit, they don't have an unrestricted right to regulate content. Just based on the fact that they are such large public platforms. And the Texas law specifically targeted large, public platforms. So this isn't for your local mom and pop shop internet comment section. This is for X and Meta and I'm using the annoying names for these companies.

Dave Bittner: Right. The companies formerly known as.

Ben Yelin: Exactly. So I think this goes beyond Section 230. And would be even more significant than a case like Gonzalez v. Google where you're just resolving questions of liability. Here you're resolving really profound First Amendment protections. And how far the First Amendment extends in protecting the platform's right to regulate content on their own sites, on their own platforms.

Dave Bittner: And what's the case law here? What's the history? I mean, it seems to me, like you and I have talked about here before, that a platform is not the government, and the First Amendment, you know, is to protect us from censorship by the government. So for a platform owner, it's a First Amendment right for them to be able to choose what is and isn't on their platform. Am I off base there?

Ben Yelin: You are not off base. So there's a very prominent 1997 Supreme Court case called Reno v. ACLU.

Dave Bittner: Okay.

Ben Yelin: In that case, the Supreme Court struck down a federal law that restricted the online transmission of indecent images. So the government had argued in that case that just as the government can restrict television stations from broadcasting indecent content, it could limit that type of material on the internet. But the Supreme Court disagreed, and said that the internet is a unique and wholly new medium of worldwide communications. In other words, the internet is not like TV or radio broadcasting. It deserves the full force of First Amendment protections. So obviously, TV stations to an extent have their own First Amendment rights, but they are subject to regulation by entities like the FCC.

Dave Bittner: Right. The notion being that the public airwaves are a public good and therefore subject to regulation from the government.

Ben Yelin: Exactly. Now this was a case that happened 25 years ago, and a bunch of things have changed. The internet is not the same as it was in 1997. In 1997, there wasn't such a thing as a large social media platform. We're talking about the pre-Friendster days here. So.

Dave Bittner: Wow.

Ben Yelin: This is a long time ago. And then the make-up of the court is vastly different than it was in 1997. It is a more conservative court. I think it's a court, depending on how you look at it, that might be more amenable to the argument of Judge Oldham in the 5th Circuit. So with those two changes, I don't think we can definitively say that the Reno v. ACLU case is going to be controlling here. I think it's very possible that the Supreme Court reconsiders the holding in Reno, which put the internet and online communications on this pedestal, and that they allow these regulations to stand. I think we had mentioned on this podcast that we started to get hints of what the Supreme Court thought about at least this Texas law, because they put an administrative stay on it. So there was at least enough preliminary evidence that this might violate the constitutional rights of the platforms, and there were going to be further hearings at the 5th Circuit. But I don't think that gives us any definitive indication of how the justices are going to rule on the merits here. If this case does come in front of the Supreme Court, and I would love to see it, because I think it's an important dispute that we need to resolve, you and I will have a great time going over oral arguments and --

Dave Bittner: Oh yeah.

Ben Yelin: -- all the crazy hypotheticals that these justices come up with in you know, their two hours in the spotlight.

Dave Bittner: Right, right. I'll let you summarize them for me, Ben.

Ben Yelin: Oh yeah. No, we could go on for hours about those. And I'll nerd out and get my C-SPAN stream going.

Dave Bittner: Yeah. It's just -- I mean, every example where we've seen somebody try to be a free speech absolutist on these platforms just ends up as a terrible dumpster fire. It just doesn't work.

Ben Yelin: It doesn't work because if you are a quote "free speech absolutist," by necessity, you end up allowing speech that your users simply do not want to see and it will drive people away from the platform.

Dave Bittner: Right.

Ben Yelin: Almost by definition.

Dave Bittner: Right.

Ben Yelin: That's why you need some type of content moderation. Now there's the question of line drawing. Was it a mistake to temporarily censor the story on Hunter Biden's laptop? Yes, it was. Twitter's previous leadership admitted that that was a mistake. They corrected it within 48 hours. But when we're talking about things like Nazi propaganda, of course these companies have an interest in regulating that content. They want to make their platforms marketable. They want platforms, at least they used to want platforms, pre-Elon Musk.

Dave Bittner: They want to run ads!

Ben Yelin: Yes, exactly!

Dave Bittner: And the sponsors don't want to put things next to that stuff.

Ben Yelin: Ads require eyeballs and there aren't going to be any eyeballs if your platform is a cesspool of Nazi propaganda.

Dave Bittner: Right.

Ben Yelin: That's just the way it works. So, free speech absolutism sounds great in theory. I believe in it hypothetically as a concept. But when we get down into the nitty gritty of content moderation, you know, I think that 5th Circuit decision is problematic. It's just not something that seems practical or seems to recognize the First Amendment rights of these private companies.

Dave Bittner: Yeah. Alright, well we will have a link to this story in the show notes. Again, this is from Wired. The article by Jeff Kosseff. Interesting read. Definitely worth checking out.

Ben, I recently had the pleasure of speaking with Chris Denbigh-White. He is from an organization called Next DLP. And our conversation centers on ChatGPT and some of the potential regulations coming down the pike. Here's my conversation with Chris Denbigh-White.

Chris Denbigh-White: The EU is approaching it with quite some trepidation. They see a very powerful technological solution that can bring a whole lot of benefits to various industries and people. However, the uptake has concerned them somewhat, in that legislation and controls, establishing what trust looks like in these platforms, haven't necessarily caught up to the role of technology. And they're trying to rein that in in a lot of ways.

Dave Bittner: And how does that contrast with what we're seeing in the US?

Chris Denbigh-White: I think it's very similar in the US. However, the EU seems to have more of a privacy slant on this, whereas in the US, organizations like NIST and NTIA are looking to implement frameworks around things like vendor risk management, so an accountability mechanism, or a framework of reliable metrics to be able to assess the efficacy of these tools. You know, do they do what it says on the tin?

Dave Bittner: Is it worth saying, or perhaps is it overstating it, that I think to a lot of folks, technology like ChatGPT is a bit of an inflection point here? It really has captured people's imagination.

Chris Denbigh-White: Indeed it has, and for a number of reasons. You know, for your everyday worker or for those who are tech curious, it's an intriguing thing to play with. To have a conversation with a large language model and have it produce almost human-like responses is something that's very, very attractive, very endearing for people to use. But on the flip side, businesses as well are starting to ask themselves questions around efficiencies that they can make in relation to the use of ChatGPT. So across the board, for various reasons, there's been a definite uptick. And it has certainly captured the imagination for a number of reasons.

Dave Bittner: When we look at the potential policy decisions here that may come to pass, what are some of the things that folks are looking at?

Chris Denbigh-White: I think in a really broad sense, they're looking at two, maybe three things. The first being trust. So, can we actually trust the outputs of these large language models? Yes, they produce content that is very self-confident in its tone. Large language models will write a very well-reasoned argument. But are those arguments in fact true, and can they be trusted? At the moment, we're in a very testing phase of our use of large language models. However, with them being brought into common use and into business practice, and maybe even public sector use cases, we need to really be able to trust what our large language models are telling us. The next one is privacy. This is a concern inside of the EU, and it had a lot to do with the Italian ban on ChatGPT. It's about the prompts and the information that are posted into large language models. So there are questions around data sovereignty, around where this data is housed. But in a more simple sense, by asking ChatGPT, a large language model outside of the control of a company, of an organization, our end users are inadvertently putting potentially commercially sensitive information, PII, into these large language model prompts, and by doing so, putting it out of the control of the companies charged to protect it.

Dave Bittner: Yeah, it's a really interesting point. You know, I've seen examples of folks doing things that you would think might be benign. Like saying, you know, here's some of the data from our annual report, can you format this for us? But by doing so, you're adding it to the corpus of information that it has.

Chris Denbigh-White: Yes, absolutely. And whilst the terms and conditions of ChatGPT, and I can speak to that, say that ChatGPT doesn't automatically ingest the prompts and include them in their large language models, what they do say is that the researchers inside of OpenAI have access to those prompts and do use them for improvement of the service. So, it's not necessarily an automatic thing; it's a human who will be reading, or potentially reading, these prompts and having access to that information. Which is equally as bad.
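
To make that exposure concrete, here is a minimal sketch of the kind of pre-submission redaction a data loss prevention control might apply before a prompt ever leaves the organization. It assumes a simple regex-based approach; the patterns, placeholder format, and function name are illustrative, not any particular product's behavior.

```python
import re

# Illustrative PII patterns only; a real DLP product would use far more
# sophisticated detection (named-entity recognition, exact-data matching).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a PII pattern with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

# The redacted prompt, not the original, is what would be submitted to the LLM.
raw = "Format this for the annual report: contact jane.doe@example.com, SSN 123-45-6789"
print(redact(raw))
# Format this for the annual report: contact [REDACTED-EMAIL], SSN [REDACTED-SSN]
```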

Dave Bittner: Yeah. So, what sorts of things do you think are likely to come to pass here when it comes to regulation?

Chris Denbigh-White: It's around accountability here. Whilst the use of large language models most definitely represents a risk, so does every action inside of a business. And what businesses and what the regulators are wanting to see is a framework by which that risk can be adequately and quickly assessed. At the moment, large language models are a bit of a black box. So what they're generally looking at, in things like the EU AI Act, they've got a number of articles around this. Article 41 is about common specifications for AI systems. And this is about giving the users of these models, or those who are charged with risk assessing them, enough information to determine how they work, to make an adequate risk choice. You know, these can be things as simple as, okay, where is the data housed when a prompt is submitted? What happens to this data? How are the decisions made by these large language models around the output conducted? So it's increasing visibility, really. That's where this will hit. And again, in relation to the use of them as well, you'll have companies that will be engaging in risk analysis, not dissimilar to the risk analyses they were doing when they were migrating to the cloud. How comfortable are we in having certain chunks of our data exposed to large language models that we may not 100% control? So it's a case of codifying how these large language models are going to be used, and then making a risk decision based on the outputs of that. But there has to be some rationalization of how the industry and how these models work, and some frameworks to be able to do that. And at the moment, there isn't, which is one thing the EU AI Act is trying to implement.

Dave Bittner: Is there a danger here, with something that's moving as quickly as this technology is, the rate that it's improving, that perhaps some regulation might inadvertently overreach?

Chris Denbigh-White: Yes. I think there's also the counter-risk that regulation might not go far enough and becomes ineffective for the real world as it stands. And I think we're seeing this at the moment, because the deployment of things like ChatGPT, it's out there. People are using it, either sanctioned by their companies or unsanctioned, in the realms of shadow IT. And I think it's about closer collaboration between the industry and the lawmakers and the regulators to be able to get on the same path. That collaboration needs to happen. I was reading a research paper recently about the importance of common specifications and working globally in relation to this. And it drew a parallel to the implementation of TCP/IP as a network protocol to communicate globally. That was handled globally, and the result is that, you know, the email client on my Android phone can send an email that's received by an Apple device somewhere else in the world, and it all works perfectly seamlessly. But hold that in contrast to something like the electrical plug socket, for which there isn't a common standard. It was implemented in different ways across the whole world. Now when I travel, you know, in August over to DEF CON, for example, I need to buy many different travel adapters, and there are different voltages. And I think it's a stark reminder to us all of the importance of industry and standards bodies working together in a global sense, to really avoid some of those pitfalls. I know the electrical plug socket is quite a low-tech example, but I think it's quite [inaudible].

Dave Bittner: Yeah. I agree. I think it's an interesting analogy. You know, I wonder, in the way that the EU sort of led the way when it came to privacy with GDPR, and how that certainly had an effect on global corporations, do you see the EU taking a similar leadership role when it comes to regulation in this space?

Chris Denbigh-White: I think it certainly will, because it has GDPR already, in relation to the control and use of personal data. And there's certainly an overlap when you look at regulation and when you look at large language models. When I speak about the prompts and the things uploaded, then, you know, there will potentially be global scope to that. So they certainly have a lot of skin in the game, as far as being able to fall back on GDPR's enforcement mechanism, in a lot of ways, for some of the articles inside of the AI Act in relation to what happens to this data. When it comes to global standards, and being able to harmonize those standards globally, I think it is going to be something that the international community needs to work out, who takes the lead, or whether it's a collaborative lead. Is it something, for example, for an international technology standards organization, like the IETF, or somebody who can really spearhead that? But I don't think, for those technical standards, the EU are necessarily going to get the traction globally.

Dave Bittner: Yeah, it's really an interesting point that so far, it seems like there really isn't a set of agreed upon best practices.

Chris Denbigh-White: No, absolutely. I think the agreement that exists so far is that various regulatory bodies have come to an agreement that, oh my goodness, something needs to be done. This technology is spreading in its usefulness and the knowledge of it. You know, somebody needs to do something to almost put this Pandora back in the box and release it in a regimented and organized way. And the US and the EU have started to do various things in relation to that. You know, the cybersecurity strategy on the US side, and the AI Act in the EU. And what I hope to see is harmonization of that, you know, kind of a trans-Atlantic partnership of universal standards for AI. Because that will help both the end users, but it will also help the companies developing these. Because there's no doubt that these are useful technologies which will change the face of the way in which we do business. But they need to be usable. Companies need to be allowed to use them. And if a technology exists that just can't be risk-accepted, then that is potentially either going to lead to lots of failed audits, or to the technology not having the uptake that it really deserves.

Dave Bittner: Yeah. And I think you brought up a really interesting point, which is that of shadow IT. Where if employees find real utility in this, and it seems like they do, then they're going to find a way to use this, even if the organizations say no.

Chris Denbigh-White: Absolutely. And that then falls back on the realms of software like user behavior analytics and monitoring, data loss prevention, in a certain sense. Because then companies are without adequate controls over the transfer of data. It's a similar argument to, do we allow our users to upload company files to their personal cloud storage, or use unsanctioned cloud services? PDF converters, for example, were all the rage a few years ago, and this falls into a similar bracket. There are technologies that exist to be able to content-inspect those kinds of activities and bring visibility and control to companies that have a concern around that use of shadow IT. And I think that's unfortunately going to be necessary in the short term, until the risk picture really solidifies in relation to the use of large language models.
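
As a toy illustration of the visibility described here, a sketch of flagging LLM-bound traffic in a web proxy log; the log format, the host list, and the function name are assumptions made up for this example, not any specific product's interface.

```python
# Scan a proxy log for requests to known LLM endpoints. Assumes one
# "user,timestamp,host" record per line; real logs would be richer and
# the host list would need ongoing maintenance.
LLM_HOSTS = {"chat.openai.com", "api.openai.com", "bard.google.com"}

def flag_llm_traffic(log_lines):
    """Yield (user, host) pairs for requests that hit a known LLM service."""
    for line in log_lines:
        user, _timestamp, host = line.strip().split(",")
        if host in LLM_HOSTS:
            yield user, host

sample_log = [
    "alice,2023-08-17T09:01:22,chat.openai.com",
    "bob,2023-08-17T09:02:10,intranet.example.com",
]
for user, host in flag_llm_traffic(sample_log):
    print(f"possible unsanctioned LLM use: {user} -> {host}")
# possible unsanctioned LLM use: alice -> chat.openai.com
```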

Dave Bittner: Chris, I'm curious for your take on what is the European view of those of us here in the US when it comes to our approach to this sort of technology and privacy and so on? From my point of view, it seems as though, you know, the EU, as I said, is taking the lead, in many ways is ahead of us. And we seem to be lagging here. Is that the global view? Do people look at us with sort of raised eyebrows?

Chris Denbigh-White: I think certainly, when I come to complete vendor risk management questionnaires around where data is housed, there is always a question mark over the US and data sovereignty there. And I think it's because there isn't, as there is with the EU, a unified United States data privacy standard and law. You have individual state laws, most obviously the one in California; there's one in Utah, for example, and other states are looking to do things on a state level in a similar way. But I think for many people outside of the US, the lack of alignment on the fundamentals, this is how we as a data processor respect your privacy and in turn respect the data that you give to us, is certainly a definite concern. I think were there to be some federal version of a privacy act, you know, a US version of GDPR, for example, that affected all states, then that would certainly calm a lot of the people inside of the European Union.

Dave Bittner: Ben, what do you think?

Ben Yelin: Really interesting. I mean, we are in the very early days of regulating things like ChatGPT. I think states are confused about how to go about this, and many of them are doing what governments do when they don't know what to do, which is forming bipartisan commissions.

Dave Bittner: Right.

Ben Yelin: We've seen a lot of states try to do that. To its credit, I think the Biden Administration has been pretty proactive in putting forward federal guidance on AI.

Dave Bittner: Yeah.

Ben Yelin: And we've seen, I think, an interest among both parties in the United States Senate in getting briefed on the issues with generative AI, ChatGPT, et cetera, and trying to figure out both the benefits and detriments. So, you know, it's really an interesting area, and I don't know exactly where this is going to go over the next several years.

Dave Bittner: Yeah, absolutely. Alright, well our thanks to Chris Denbigh-White from Next DLP for joining us, we do appreciate him taking the time.

That is our show. We want to thank all of you for listening. N2K's strategic workforce intelligence optimizes the value of your biggest investment, your people. We make you smarter about your team while making your team smarter. Learn more at n2k.com. Our senior producer is Jennifer Eiben. This show is edited by Elliot Peltzman. Our executive editor is Peter Kilpe. I'm Dave Bittner.

Ben Yelin: I'm Ben Yelin.

Dave Bittner: Thanks for listening.