
The year AI regulation hits the U.S.
Casey Bleeker: In consumer privacy, it's very clear. Hey, we know how we want to allow or not allow organizations to use specific consumer data. But in AI, oftentimes the objectives aren't as clear. The actual objective of the regulation is not as clear. And the technical definition is much harder to pin down. It's very easy to say, "You can't sell my, you know, personal information to a third party," but it's -- it's much harder to define the technical definition of AI security or protections or regulations.
Dave Bittner: Hello, everyone, and welcome to "Caveat," N2K CyberWire's privacy, surveillance, law, and policy podcast. I'm Dave Bittner, and joining me is my co-host, Ben Yelin, from the University of Maryland's Center for Health and Homeland Security. Hey there, Ben.
Ben Yelin: Hello, Dave.
Dave Bittner: On today's show, Ben shares the story of a lawsuit against the creators of an AI-generated chatbot. I've got the story of the Biden administration taking action in response to China's incursions into U.S. telecoms. And later in the show, Ben's conversation with Casey Bleeker, CEO of SurePath AI, here discussing how AI or Generative AI regulation could roll out in the U.S. in 2025. While this show covers legal topics and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. [ Music ] All right, Ben, we've got some good stories to share this week. Why don't you start things off for us here?
Ben Yelin: I want to say my story this week is kind of sad, but it's really interesting. It comes from the Washington Post technology section, written by Nitasha Tiku, and it is about an AI-generated chatbot, the manufacturers of which are being sued by a mom in Texas. So, I'll give you a little bit of the backstory here. There's this 17-year-old kid. He lives in a rural county in Texas. He has autism. And prior to the last six months to a year or so, he was pretty high functioning. He got along well with his parents. He attended church. He went on walks with his mom. But recently, at least according to this article, he had turned into somebody that his parents didn't recognize. He was practicing self-harm. He lost a lot of weight. He was emotionally withdrawing from his family. So, the mom took a look at his device while he was sleeping and noticed that he was using an application called Character.ai. Have you heard of that, by the way?
Dave Bittner: Only through this story. I -- I had not heard of it before hearing this story.
Ben Yelin: So, it's basically an AI app. It's supposedly popular with -- with the young people. It is a chat bot. It's trained from data based on characters from gaming, anime, and pop culture. But also, I think a lot of the training data comes from online forums, which is where people communicate about things like gaming, anime, and pop culture. And it's just a way to have a conversation with somebody. So, the mother looked at this phone, looked at his phone, opened up that application and saw that the chat bot brought up the idea of self-harm, and cutting himself to cope with his sadness. And then when this individual mentioned that his parents limited his screen time, a separate bot suggested that the parents didn't deserve to have kids and that they understood why people would kill their parents. And when the mom found this out, she reached out to a couple of advocates and they are filing a lawsuit alleging that this chatbot is creating real-world actual harm, not just for this -- this kid, but for the mother, for the parents. And I think this is a really interesting lawsuit. The theory of the case is based on strict liability for products. So, when you manufacture anything, whether it's an AI system or a can of Coca-Cola, the standard for, at least in tort law, is that there's strict liability for the manufacturer of products, meaning a user, or really anybody who's affected by the product, doesn't need to prove negligence to win a lawsuit against the company. The thinking is, it would be extremely hard to win a negligence case against Coca-Cola. We're not in the factories. We don't know what happened that led the soda can to blow up in your face.
Dave Bittner: Right.
Ben Yelin: It should be incumbent upon the company to keep these products safe. We rely on them to keep us safe, and when we cannot rely on that expectation, you know, society kind of breaks down. When our products are doing us harm, then they should face accountability, even if they haven't done anything particularly negligent.
Dave Bittner: Okay.
Ben Yelin: So, that is the basis of this lawsuit. It's filed in Texas. It's filed against Character.ai on behalf of this mother and then another Texas mother. They are alleging that the company has knowingly exposed minors to an unsafe product, and they demand that the app be taken offline until it implements stronger guardrails. And this comes on the heels of a similar case that's been filed in Florida. I think we might have alluded to this in a previous podcast, but a 14-year-old died by suicide after frequent conversations with a chatbot on the same application.
Dave Bittner: Right.
Ben Yelin: I actually am very interested to see what happens in this lawsuit. It's not your standard products liability case. I think the burden on the plaintiffs here is -- is going to be to show that there was a direct connection between what the chatbot was saying and the harm being inflicted, so the self-harm on this individual and the potential threat to this individual's parents. That might be hard to prove. There are a lot of reasons why somebody could be depressed, and I think traceability to that application might be difficult to prove, even though you don't have to show that the application was negligent. And then there's the question of whether Character.ai had knowledge that this was something that -- that's actually happened. It seems like at this point they should have knowledge of it, since now they've been subjected to several lawsuits, but that's going to be another major element of this case. Was this something that was foreseeable from the way Character.ai inputted training data? And what they argue in their lawsuit is that this is a perfect example of garbage in, garbage out. If the training data is based on online forums, where unfortunately there's a lot of smut and people do talk about things like self-harm and harming one's parents, then that's going to come out when you have this -- this generative AI chatbot model.
Dave Bittner: Right.
Ben Yelin: It's going to come out in the output. So, I do think that the Florida case and this case could be foundational cases, where for the first time, we might be holding these chatbots accountable for actions that people take based on conversations or interactions they've had with an artificially generated chatbot companion.
Dave Bittner: So, a couple of things come to mind. I mean, I think about, you know, liability for sort of real-world products, I guess, products that are made out of stuff.
Ben Yelin: Right.
Dave Bittner: Right? So --
Ben Yelin: Widgets, as we say in the legal world. Yes.
Dave Bittner: Well, and the thing I -- the thing I think about are like -- like child baby seats, car seats --
Ben Yelin: Right.
Dave Bittner: -- those sorts of things, because it's fairly common for some baby carrier product or something that an infant interacts with to have a recall, because it turns out it's dangerous.
Ben Yelin: It [inaudible 00:07:46] killed us because we had a Rock 'n Play, which I'm sure many parents listening to this have used those. And for a while, it was the only way to get our daughter to sleep. And then like when she was one, they recalled them. And we were like, "Ah, come on."
Dave Bittner: Where you're like, "From my cold dead hands."
Ben Yelin: Exactly, "You will pry this from my cold dead hands," yes.
Dave Bittner: Right, but you know, in that case, I would imagine that if you're a company who manufactures these designs and manufactures these sorts of things, there is probably a pretty robust testing regime that -- that happens before you send something like this out in the world. Right?
Ben Yelin: Oh, yes, you got those crash test dummies --
Dave Bittner: Right.
Ben Yelin: -- sitting in the car seats, yes.
Dave Bittner: All that stuff. And yet, you know, every now and then, they'll find that something happens that they -- either they didn't test for, or the testing didn't reveal it, and so they have to issue a recall. So, what I'm getting to with this AI stuff is you mentioned garbage in, garbage out. I wonder how much of it is on the input side and how much of it is on the testing side, right? You see where I'm going with this?
Ben Yelin: Yes. I mean, I think there's only so much you can test for. And -- and I get this is kind of a meta philosophical conversation. But like, when you're talking about something that's generative, you can't have every conceivable conversation that -- that one could have with this type of chat bot, right? Even if you are talking about things like parents taking away screen time, it's impossible to recreate the entire conversation. So, the chatbot is reacting based on the full context of the conversation, what the person has said in -- in previous posts, how they've reacted to previous posts. I think all of that would be very difficult to recreate in testing. So, I -- I you know, I'm -- I'm sure they did have a robust testing process, but I'm not sure that the testing process would catch everything. What the lawsuit is alleging is that Character.ai made a decision to prioritize prolonged engagement over safety.
Dave Bittner: Yes.
Ben Yelin: Yes, like, no blank, Sherlock, right?
Dave Bittner: Right. Right.
Ben Yelin: So yes, the longer you engage with the product, the more money they make. And their incentive, therefore, is to make the conversation as engaging as possible, so that a person relies on that chatbot for companionship. They want to continue to have that conversation. I mean, I don't know exactly how you're going to prove it, but I guess during discovery, you know, they can dig into the algorithm here and -- and figure out what they prioritize and whether there are any guardrails whatsoever on safety.
Dave Bittner: Right.
Ben Yelin: If there are things that the chatbot would never say. Ideally, a chatbot like this would be trained, if anybody talks about self-harm, to say, "Please call 988, like, this conversation's over."
Dave Bittner: Right.
Ben Yelin: And that's the National Suicide Hotline, which, by the way, 988 is -- is the number. We should get that message out as far and wide as we can.
Dave Bittner: Yes. So, but I can imagine, from like the user's point of view, if you're -- if you have established what you believe is a strong intimate relationship with this chatbot, and you stray into the area of self-harm or suicide or, you know, anything harmful, and the chatbot says, "This conversation's over," you're going to feel betrayed, and you're going to go looking for a different chatbot.
Ben Yelin: Right.
Dave Bittner: So, that [inaudible 00:11:19].
Ben Yelin: And eventually someone will create the chatbot that gives you what you want.
Dave Bittner: Right.
Ben Yelin: Yes.
Dave Bittner: Right.
Ben Yelin: It's a really difficult problem.
Dave Bittner: So, let me ask you this. There are those who say that we should make it so that teens do not have access to these services, that the same way that, you know, a teenager -- and -- and I'm saying this knowing the reality of the world we live in, but a teenager cannot go to their local newsstand and buy a copy of Playboy magazine, right? Like, they won't be able to use the AI bot without being a certain age, that there are some things that are just too dangerous for kids to have.
Ben Yelin: Yes, I mean, one thing Character.ai did recently is the app had previously been listed as the -- in the App Store as appropriate for 13 and under. Now, it's appropriate for 17 and under.
Dave Bittner: You mean over --
Ben Yelin: And I wonder if that was because of, yes, because of the threats of these lawsuits.
Dave Bittner: Right, right.
Ben Yelin: I -- I don't -- it might be a bad business prospect for this company because younger people are more vulnerable. Sadly, they might be more isolated. I mean, life is just very difficult for teenagers. And so, that might be like the prime marketplace for an AI companion. Whereas at least conceivably, most adults are -- are well adjusted. Now, I don't have any data on this. Maybe people in their 20s are just as likely to talk with a chat bot about their problems as somebody in that person's teens, but we know just based on laws surrounding things like pornography, alcohol, that --
Dave Bittner: Right.
Ben Yelin: -- prohibition for underage individuals in many contexts works quite well. At least an adult conceivably has the brain power to make a conscious decision to be aware that you shouldn't be following the advice of a non-sentient chatbot. And I -- I guess, in the same sense, a teenager or a kid especially just wouldn't have that type of knowledge or wouldn't have that type of instinct.
Dave Bittner: If you had to guess, what avenues do you think the Character.ai folks will pursue in defending themselves?
Ben Yelin: A couple of things. They will say that any sort of lawsuit here would break up pioneering breakthroughs in language AI. It would interrupt what's becoming a robust marketplace, that this is an isolated incident, that this is not something that was foreseeable based on their testing or their safety protocols, and that this is something that is widespread in the industry, therefore they should not be held legally responsible, that there are many possible causes that somebody would be suicidal or be interested in self-harm, or would be interested in harming their parents, and that is not directly traceable to this application. And if we put any type of regulation on this, then that is going to be a major inhibition to the industry, to innovation. And what they will say, and I think there's some truth to this, is that there are very positive uses of these types of chatbots as well. Not even for giving somebody, you know, an emotional pick-me-up, but just for doing basic tasks. I mean, think about all the things we use Alexa for, asking about the weather, or finding out that -- that song we just listened to, who's it -- who's it by.
Dave Bittner: Right, right.
Ben Yelin: Those things are always -- are -- are extremely useful, and having this type of language model makes it even more useful than, say, a smart home device. So, what they would say is, let's not throw out the baby with the bathwater, so to speak. Let's not stifle this innovation because of an -- an isolated incident. And I don't know if that's going to be compelling in a court. I think it depends on what happens in discovery and if you can prove that level of traceability, that this would not have happened in the absence of this chatbot. What I see in this article, at least, is the evidence is pretty compelling. Like, the mother notes that a lot of the son's behaviors coincide directly with conversations he's had with the chatbot. And they post some -- some screenshots here, and it -- it is kind of disturbing, because it does sound like you're talking to a human being. The lawsuit itself is also very interesting. One of the things they allege is that the chatbot is acting as a psychotherapist without a license, and that this chatbot also can engage in the unauthorized practice of law by giving people false legal advice, things like how to cover up crimes or how to do things in a way to avoid legal liability. So, those are very serious allegations, and I'm -- I'm curious to see if those claims hold up in court.
Dave Bittner: It's interesting because I have heard other stories of the beneficial nature of these sorts of AI chatbots, particularly for folks with autism.
Ben Yelin: Right.
Dave Bittner: Because the main thing I've heard over and over again in these stories is that the chatbot has endless patience, right? And one of the things that can -- a -- a -- I guess a common characteristic of some folks with autism is just they -- they'll pepper you with questions. Like a kid will pepper their -- their parents with questions --
Ben Yelin: Right.
Dave Bittner: -- over and over and over again, and -- and eventually, the parents --
Ben Yelin: The parents are like worn out.
Dave Bittner: -- are like, "I need a break. I need a break." Well, the chat bot never runs out of patience and never gets frustrated. And so, it -- it provides an outlet for that child or teen to, you know, have someone who is never uninterested in what they're doing. And that can be a positive thing. But --
Ben Yelin: I think that's all going to come out in the lawsuit, you know, and they might say, this was a -- one defective incident, but it is not representative of the vast majority of interactions. You'd think, though, that like, you'd want to harness the good and root out the bad and you do that through rigorous safety protocols that put guardrails around things like self-harm. You mentioned, and I'm just kind of wrestling with this as we talk about it, like, let's say the chatbot said, "We're not talking about this. Call 988," like, that is pretty abrupt, and that might cause somebody even more emotional distress. But what if the chatbot were instead trained to say the things to people seeking self-harm or harm to others that a psychotherapist is trained to say? Why wasn't it trained in that manner? I don't -- I don't know. That's all something I'm very interested to see as we get into this case here.
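[Editor's note: for readers curious what a guardrail like the one Ben describes might look like, here is a minimal, purely illustrative sketch in Python. It assumes nothing about Character.ai's actual system; the keyword list, the canned crisis message, and the generate_reply stub are hypothetical, and a production system would rely on a trained safety classifier and clinically reviewed language rather than simple keyword matching.]

# Illustrative only: a minimal pre-response safety gate for a chatbot.
# All names here are hypothetical; a real deployment would use a trained
# safety classifier and clinically reviewed wording, not keyword matching.

SELF_HARM_TERMS = {"self-harm", "hurt myself", "kill myself", "cutting", "suicide"}

CRISIS_RESPONSE = (
    "I'm really sorry you're feeling this way. I can't help with this, "
    "but you can call or text 988 to reach the Suicide and Crisis Lifeline."
)

def flags_self_harm(message: str) -> bool:
    """Return True if the user's message appears to mention self-harm."""
    text = message.lower()
    return any(term in text for term in SELF_HARM_TERMS)

def respond(user_message: str, generate_reply) -> str:
    """Gate the model: intercept risky messages before any open-ended reply."""
    if flags_self_harm(user_message):
        return CRISIS_RESPONSE            # never hand this turn to the model
    return generate_reply(user_message)   # otherwise defer to the chatbot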
Dave Bittner: Yes, and I guess there's also just this notion that it's so hard to look under the hood with these chatbots, with these large language models and -- because it's not like a one plus one equals two equation --
Ben Yelin: Right.
Dave Bittner: -- with them, right? There's -- there's a certain amount of randomness. They hallucinate.
Ben Yelin: Yes. Think about how hard it would be to like track the processing of a person's mind in a court of law. It would be impossible. Like, there isn't some type of neat mapping technique that we can use. If this, then that. That's just not how our brains work, and therefore that's not how artificial intelligence works.
Dave Bittner: Yes.
Ben Yelin: If it is to be modeled off -- after the way humans think. So, I think that's something that would just be really difficult to uncover. And they might actually, to be successful in this case, have to really dig into the algorithm. Like, why are they prioritizing engagement over safety? What are the decision points that the algorithm makes? And I'm sure they will get into testing protocols as well. Like, is this something that you tested for? If you didn't, then that potentially, beyond the strict product liability, that could be an instance of negligence if your -- if your product wasn't properly tested, and you could sue for both strict product liability and for negligence, which I believe they're doing in this lawsuit.
Dave Bittner: Could you imagine a situation where things like this were regulated away from prioritizing engagement? In other words, you know, rather than trying to keep you tuned into the platform to show you ads or whatever it does, charge a monthly fee.
Ben Yelin: Right --
Dave Bittner: You know, that's [inaudible 00:19:56].
Ben Yelin: However they make their money.
Dave Bittner: Yes, yes. So, that that's out the window.
Ben Yelin: Yes. I mean, I understand that's part of their business model, but you'd figure we'd have values that go beyond what's part of your business model.
Dave Bittner: Right, right. Prioritizing.
Ben Yelin: Health and safety systems.
Dave Bittner: Job One is this company needs to make money.
Ben Yelin: Exactly.
Dave Bittner: Right.
Ben Yelin: I get that this is pretty obvious, but this is why we have A, a legal system and B, a -- a government. Like, we don't want these companies to be completely unfettered to do whatever they want at the expense of our health and safety.
Dave Bittner: Yes. Kids.
Ben Yelin: And that's why it's appropriate to have regulations. We don't have any regulations on this. The article mentions that European countries have looked into a similar company and -- and have started to experiment with different regulations, but we're -- it's not even on the map here. So, this is something where if this case is high profile enough, then you might see it as a proposal that starts at state legislatures, where there are some regulations on it.
Dave Bittner: I see.
Ben Yelin: You can't sell this application in this state unless you do X, Y, or Z.
Dave Bittner: Right. All right, well, it'll be interesting to see how this plays out. I'm sure we'll be talking about this again, eventually.
Ben Yelin: Absolutely.
Dave Bittner: Yes. We'll have a link to that story in the Show Notes. My story this week comes from the New York Times, and the -- the headline is, "The Biden Administration Takes First Step to Retaliate Against China Over Hack." This of course is in response to the Salt Typhoon cyberattack, which was the Chinese getting into our telecommunications systems, getting into back doors that the telecoms had provided for law enforcement, right?
Ben Yelin: Right.
Dave Bittner: And the Chinese took advantage of that and, by all accounts, got into some sensitive information about U.S. citizens, also legislators and, you know, people of -- of importance in our country.
Ben Yelin: And of our surveillance targets, right, which is really interesting because that could have implications for our national security.
Dave Bittner: Right. Right. Well, and that's where this goes. So, the Commerce Department has reached out to China Telecom Americas, which is the U.S. subsidiary of one of -- one of China's largest communications firms. And they have sent them a preliminary finding that, "The company's presence in American networks and its provision of cloud services posed a national security risk to the United States." I'm quoting the -- the New York Times story directly. It says they've given the -- the firm 30 days to respond, but ultimately, it could end up being a ban on China Telecom operating here in the United States. But of course, an interesting wrinkle here is that, just looking at the timing on the calendar, ultimately that decision whether or not to ban would likely fall on the Trump administration.
Ben Yelin: Yes, and they're obviously a little bit more unpredictable. The article did note that the incoming national security advisor for President Trump, Michael Waltz, who's currently a congressman from Florida, is a China hawk and has been supportive of these types of countermeasures. But, you know, there was a difference in the first Trump administration between Trump himself and the people that he appointed. And sometimes the people he's appointed can have predictable opinions on things, and Trump could decide, "That's not what I believe to be our national interest." So, there is a level of unpredictability there. I -- I wonder if that's something that China is relying on. I'll also say, though, that one of Trump's more consistent positions is that he -- he is pretty hawkish on China.
Dave Bittner: Yes.
Ben Yelin: So, I -- I could see him preserving this or even expanding it.
Dave Bittner: Yes, there's a related story we covered on the CyberWire recently, which was about a $3 billion budget item in the national defense budget. It's called Rip and Replace. And the idea is to fund the replacement of Chinese hardware within our telecoms' systems here in the United States with hardware that's sourced from places other than China.
Ben Yelin: Right.
Dave Bittner: And of -- of particular interest are rural telecom networks, because they don't have the budget to simply make these replacements on their own. They're -- they're budget constrained. And so, evidently, about $3 billion is the gap between what is available and what's needed. And so, they put that into the military funding, which -- which speaks to the fact that they consider it to be a national security risk. So, we'll see if that goes through as well.
Ben Yelin: Yes, there's another really interesting element about this related to espionage, where one of the things that China presumably has been able to accomplish through this -- this hack is to figure out which of the Chinese spies in the United States our government is aware of. I think what the article says is, while the hackers aren't able to listen in on the content of phone conversations, if you combine phone numbers with geolocation data, then that can paint a picture of which individuals our intelligence community is surveilling. And you can understand how that would be really, really disturbing from a national security perspective, getting that window into our surveillance practices, and our knowledge of Chinese espionage operations. So, the consequences are -- are pretty significant here, which is why I think this is a very important executive order, to take this retaliatory action. I think a lot of it is symbolic rather than --
Dave Bittner: Yes.
Ben Yelin: -- having a -- a major practical impact. That's another thing that the article gets at is this did not prevent, for example, Volt Typhoon, the placement of malicious code in our electrical grid and water and gas and pipeline networks. Kind of thing that keeps you up at night.
Dave Bittner: Right.
Ben Yelin: So, this wouldn't have prevented that type of -- or the sanctions here, the retaliatory response wouldn't have prevented that type of -- of incident. But this is just part of our broader conflict against China and the Chinese government that has now moved from the physical kinetic military realm to the cyber realm.
Dave Bittner: Yes. It'd be interesting to see how something like this could escalate if it continues. You know, I -- I know with espionage, there's kind of a tit-for-tat, but it's -- it's interesting to -- to imagine how something like this could play out. And of course, we also don't know to what degree we have access to their systems.
Ben Yelin: Right.
Dave Bittner: Because we're not going to tell, and chances are, if they discovered something, they wouldn't tell either.
Ben Yelin: Right. Right. I mean, we just have no idea. I'm sure some people listening to this podcast might know.
Dave Bittner: Count on it.
Ben Yelin: Me and you are not privy to that information.
Dave Bittner: Nope. Nope. Nope. And I'm okay with that.
Ben Yelin: Me too. Yes. Better not to know.
Dave Bittner: That's right. That's right. Talking about reasons or ways to sleep at night.
Ben Yelin: Exactly.
Dave Bittner: Right. Right.
Ben Yelin: Our -- our ignorance is bliss, Dave.
Dave Bittner: There you go. There you go. All right, well, we will have a link to that story in the Show Notes. Again, coverage from the New York Times. [ Music ] Ben, you recently had the pleasure of speaking with Casey Bleeker, who is CEO of a company called SurePath AI. You all discussed Generative AI and some regulations that we may see here in the U.S. in 2025. Here's Ben speaking with Casey Bleeker.
Casey Bleeker: So, I'd say across many different states right now, you have an attempt for consumer protections of AI, which oftentimes gets conflated or -- or evolves into enterprise regulations of how AI should be applied. That's -- that's impacting organizations of all sizes. It's impacting federal government organizations. There's the, you know, recent White House policies on ethical and -- ethical and secure use of AI across federal agencies. And you're -- you're also seeing that happen at an individual state level because they're trying to force some -- some federal activity. I think part of that is to try and kind of prevent the mismatch or patchwork of regulations that many organizations have to face today on consumer privacy that is on a state-by-state basis. And I would say that's -- that's the underlying impetus for a lot of the state level activity, not just to be first to try and show movement or activity in AI regulations, but also to try and -- and drive some -- some of the federal activity there as well.
Ben Yelin: I know on data privacy, Europe was ahead of us. I mean, they did GDPR several years before even California got started with CCPA. Do you get the same sense that that's happening with AI as well, or are we more on par with our European counterparts?
Casey Bleeker: They're -- they definitely move faster. You know, it was one of -- one of the -- the only times a unanimous decision was ever reached in -- in a first vote with the AI Regulatory Act that -- that took place in -- in Europe. But I think the -- the pace at which the individuals respond to this scenario is very different than consumer privacy, because many of the definitions and -- and the regulations that are taking place are not necessarily being defined in the same way. In consumer privacy, it was very clear, "Hey, we know how we want to allow or not allow organizations to use specific consumer data. Maybe we know what we don't want, or the types of protections consumers should have." But in AI, oftentimes the objectives aren't as clear. The actual objective of the regulation is not as clear. And the technical definition is much harder to pin down. It's very easy to say, "You can't sell my, you know, personal information to a third party," but it's -- it's much harder to define the technical definition of AI security or protections or regulations. And that -- that leads to a lot of concerns for organizations about how they're going to respond if it's not as clear, if it's -- if it's not as definitive.
Ben Yelin: Yes, that's something that I've noticed as well is like, we have this "Is this AI?" problem. Like, the next thing comes along and it's like, "Can we put this under this broader definition of -- of AI?" We kind of saw that in -- in California, which is something I wanted to ask you about. You had this suite of bills that made its way to Governor Newsom's desk, and he signed a couple of the kind of segmented ones. I know one on deepfakes and elections made it and now it's being challenged in court. But kind of the big regulatory bill that every person in the industry was either scared of or -- or at least following closely failed. Why do you think that failed and what can we learn from that?
Casey Bleeker: You know, I -- I was actually heavily involved in the Colorado AI regulatory bill, which California's larger initiative was modeled after. Many states modeled off of similar language, which was also based off the -- the European Union's AI Regulatory Act. And a lot of the challenges there are that it's not clear who it applies to and what it applies to. It's missing some of the technical definitions. To your point, what is AI? Well, both in Colorado's bill, which did pass, even though we were heavily recommending not to pass it in its current form, and what was rejected in California, both have a systems definition that says AI is any system that takes an input and generates an output. And as somebody poignantly highlighted in -- in the Senate testimony in Colorado, that -- that could be a toaster.
Ben Yelin: Yes.
Casey Bleeker: You know? And -- and at its core, they're -- they're really applying regulations to generic systems, processes, algorithms, which means math, equations, things that take an input and -- and generate an output would apply.
Ben Yelin: And computing, right? I mean, anything that -- that has code in it would apply under that definition.
Casey Bleeker: Exactly. And that's what I -- that's what I was pushing for was -- was an actual definitive definition of, "Are you training an actual model? What data are you allowed to train it on?" I think one of the most impactful pieces that -- that is -- is valuable in the regulatory bills that are circulating today is a consumer protection of notice that -- that a system has autonomously made a decision. And I don't care if that's AI or an Excel spreadsheet. If it's autonomous, then notifying the consumer and letting them have the option for human review. I think that's very valuable, just from transparency and the ability for somebody to bypass fully automated systems. But I -- I think when you don't define it to that level, now enterprises, businesses, organizations, they're reading through regulations that are really focused on somebody who's training an AI model or building foundational models versus somebody who just -- their employees bought a license to ChatGPT and they don't know the difference. And so, that's a muddy, muddy area for organizations to try and navigate.
Ben Yelin: Right. And the guardrails apply regardless, right? So, you can even be a local government agency, and we've dealt with that with local regulations. It's like, "Okay, we use one tool in our everyday work that -- that qualifies as AI. And so, are we subject to these regulations?"
Casey Bleeker: Yes, absolutely. I -- I think you -- you brought up something really important there, Ben. It's how you use the tool now. And that's very different than prior regulations around consumer privacy or, you know, similar analogs where it's now not, "What was the intention of the tool or how did we build it?" It's, "How did I actually use the tool?" And so, if somebody uses ChatGPT in an incorrect way, you may now be a deployer of high-risk AI systems. Even though your organization may have said, you know, we don't want employees to go do maybe resume reviews with some of these tools or, you know, make loan decisions with -- with some of these tools. But it goes back to this definition. Well, if I can do that, if I can use a tool incorrectly with -- with maybe something like ChatGPT or Perplexity or Claude, then I could do the same thing with Excel. And so, those are concerns, I think, of -- of how a tool is appropriately used. But if it's not well defined, it's really, really hard for a business to decide how to progress.
Ben Yelin: So, just through your work and through your organization, if you were testifying in one of these hearings and a member of a state legislature asked you for just like five foundational items that should be in every AI regulation bill that comes in front of a state legislature, what would those be? Just so we know exactly kind of where you think we should be headed in this realm.
Casey Bleeker: You know, I -- I think the first is it should have a -- a solid technical definition, that we're actually defining the training and tuning of a model, which ultimately means that as a consumer of those services, I'm not necessarily beholden to the regulatory act, because if it's not being sold and positioned to me for a specific use, then I shouldn't be considered a high-risk deployer. I -- I think the -- the second major regulatory aspect is many people are afraid of AGI or -- or you know, artificial general intelligence. And if you look at how much compute power is being used to tune and train models today, the amount of compute would be orders of magnitude larger for any sort of autonomous AGI. And so, I think that that's an unfounded concern in many state cases because you're not going to have an AGI model go wild. It needs access to infinitely more compute and power resources than we're providing just to train foundational LLMs today. So, that's the second thing: I wouldn't over-rotate on that concern and distract innovation from -- from businesses. The -- the third is that -- that consumer protections today actually already exist for most of the cases that we're talking about. I think that it should actually be more about whether you're intentionally developing a solution to insert bias or have consumer-impacting decisions based on protected data, protected class data. Those types of consumer protections have existed for quite a while. You can't have algorithmic bias in lending decisions. And so, there should be punitive damages for intentionally developing them. And that doesn't matter if it's AI or a system or a process, or maybe I'm just having individuals review with intentional bias. And so, many of those consumer protections already exist. So, I know you -- you asked for five, but I think it was [inaudible 00:37:55].
Ben Yelin: Which was an arbitrary number. Yes. Totally arbitrary number. Yes. I'm curious to know your thoughts. We're going through a presidential transition, and, for better or worse, the Biden administration had put out these guidelines on AI, which I think, as you said, kind of helped inform what the states are doing. Do you see the Trump administration being foundationally different? Do you see there being a major change in policy at the federal level? Like, what is -- what is your initial read on Trump and AI regulation?
Casey Bleeker: Oh, my goodness. That's a hard one because, you know, what's -- what's interesting is you've got somebody who really, I think is pushing for deregulation across the board, very much in favor of -- of I'd say, favorable conditions for enterprises and businesses to conduct business. But at the same time, you've got individuals like Elon Musk who are -- are interested in protecting against, interestingly enough, you know, some of the maybe larger AGI threats. I think when we get to a federal level, you start to take into consideration things like national defense, you know? Should -- should AI systems be able to make autonomous decisions in the moment that could be, you know, life-threatening to both our citizens or -- or other citizens across the world? And then you've got the cyber warfare kind of spectrum. And -- and that's where I think the -- the biggest area of -- of AI threat is actually occurring, where AI is actually able to create zero-day attacks and -- and cyber threats. And also, be able to socially manipulate individuals in a much more scalable way that creates significant threat exposure for organizations. So, it's a little bit unpredictable because you kind of have multiple perspectives coming in. If I were to look at, you know, Trump's track record in terms of really pushing for deregulation, less government influence and -- and involvement, I would say that that's probably the -- the hypothesis is that less will actually be applied or done. What I think is really most interesting is -- is the Department of Government Efficiency, while it might be meme-esque --
Ben Yelin: Yes, DOGE.
Casey Bleeker: -- there's also, you know, if -- if you were to eliminate significant portions of manual labor or -- or tasks within governmental efficiency, where -- where do you think the efficiency is going to come from? And there may be a bias there towards specific vendors or solutions to provide those services automatically for government institutions. That -- that is, you know, less on the regulatory spectrum, but probably more likely that we'll actually see more AI use within government as an alternative or as an efficiency metric to be able to bring some of those efficiencies. And you know, I would say that -- that if -- if it's well thought out and -- and done with intentionality, that can be an appropriate use. But if it's an arbitrary, let's -- let's see if we can, you know, ax entire divisions in the federal government --
Ben Yelin: The Department of Education. Yes.
Casey Bleeker: -- yes, the Department of Education, and then we'll have an AI model provided by X that will decide you know, the -- the budgetary allocation of our public schooling, then probably not on the -- on the positive spectrum. So, I think the net is, I think we'll -- we won't see broad federal action that impacts business, that impacts how organizations should be adopting and utilizing AI. If anything, we'll see it focused primarily on foundational model providers that are really tuning and training the large language foundational models that are being used as the basis of work across other tools and products.
Ben Yelin: And then just more on kind of the future focus. There's been some discussion that like after the first generation of LLMs and ChatGPT 4.0, 4.5, that we've kind of plateaued a little bit, that this isn't accelerating as much as we thought it was. First of all, I just want to know your -- your sense on that, if you agree with it. And then what are some of the obstacles or just threats to AI systems or AI users that you advise your clients about, things to look out for over the next several years going forward?
Casey Bleeker: Yes. You know, I think that this idea of a plateau is maybe a little bit of a realization that one of the most impactful productivity tools we've gained can't do everything. And -- and -- and it's actually just people narrowing their focus of -- of "Well, I can actually get huge efficiency gains and productivity gains in so many areas, but I also just can't expect AI to autonomously run my business. It's not going to do all of my sales. It's not going to maintain my personal interactions, and it can't complete logical tasks." You know, we don't -- we can't implement business logic into an LLM yet. Now, you can layer that on top. But that takes some time and effort to architect and implement those solutions. And so, I think that a little bit of that plateau is people realizing that it's not just going to continually year over year double in the types of use cases it can take over. In terms of -- of threats and security, I think that, you know, there's quite a bit of talk of -- of how if you're building your own models or if you're building your own applications, you can protect those. You have the -- the OWASP Top Ten. Those often miss one major huge threat actor, and that is your insider threat, which has often been a cybersecurity threat, and that is your end user. And that may be intentional or just by lack of education. And so, one of the largest threats is actually organizations leaking their data, leaking sensitive information about their business, about their clients, about their consumers, to external public models. And so, one of the biggest things that you can do is mitigate that risk by having approved AI services that you observe and have a track record and an audit trail of user interactions, redact the data that's going to those services. It's one of the key features of what our platform does at SurePath. And then second is -- is actually adopting private AI models, so that you have models within your own secure environments, whether that's your own data center, colo, or your own cloud environment, that end users can leverage without that -- that data leakage risk. And so that you can use it on a -- on a daily basis with secure and sensitive information without the concern of external threat actors or third parties getting access to those interactions. One major thread that is consistent across, whether it's approved public services or private models, is that users and organizations really need to maintain an audit trail. Not providing legal recommendations here, so please consult your --
Ben Yelin: Standard, yes. Standard caveat, so to speak.
Casey Bleeker: Yes.
Ben Yelin: Yes.
Casey Bleeker: But I would say it -- that our customers have shared that -- that they see that having that consistent trail of interactions, of history of how end users in their organization are using AI, it helps [inaudible 00:45:21] provide a defense for consumer privacy regulations or data leakage, future AI regulatory concerns about using AI in a high-risk manner. But it also helps provide defense for intellectual property claims that you've materially impacted the use of the model for any outputs that your business is using. And so, those can really help improve the posture of an organization, both in appropriately using AI systems with guardrails, but also having a history of how AI has been used in the past.
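[Editor's note: a minimal sketch of the pattern Casey describes, redacting sensitive data before a prompt reaches an external model and keeping an audit trail of every interaction. This is not SurePath's implementation; the regex patterns, function names, and the call_external_model stub are assumptions for illustration only.]

# Illustrative sketch: redact likely sensitive values before a prompt leaves
# the organization, then log an audit record of the interaction.
# Patterns and names are hypothetical, not SurePath's actual product.

import json
import re
import time

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace likely sensitive values with placeholder tags."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

def audited_prompt(user_id: str, prompt: str, call_external_model,
                   log_path: str = "ai_audit.jsonl") -> str:
    """Redact the prompt, call the approved model, and append an audit record."""
    safe_prompt = redact(prompt)
    reply = call_external_model(safe_prompt)
    record = {
        "timestamp": time.time(),
        "user": user_id,
        "prompt_redacted": safe_prompt,
        "response": reply,
    }
    with open(log_path, "a") as f:           # append-only audit trail
        f.write(json.dumps(record) + "\n")
    return reply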
Ben Yelin: Is there anything we didn't get to that you wanted to address?
Casey Bleeker: You know, I -- I think we -- we hit the -- the -- the bulk of it. I think what I'm most interested in is like what from -- from your perspective, are -- are you seeing kind of an increase in the focus on -- on how organizations or businesses can adapt to this changing regulatory landscape? Like, are you -- do you think that that's something that we'll continue to see thrashing kind of like we did with, you know, CCPA and -- and consumer privacy or what's the kind of focus from both startups and -- and -- and enterprises that are trying to think about this regulatory landscape?
Ben Yelin: So, I'm in kind of two unusual worlds. I'm in the world of academia, because I teach courses, and I think AI in academia is its own animal, which is -- which is very interesting. I also think it's kind of like a -- a use case that might have broader applicability, because one of the things you talked about was like understanding the limits of your own capabilities. And I think I'm seeing with students that, knowing they can't cheat on exams by copying and pasting into ChatGPT and trying to pass that off as their own work, they're finding ways to use these tools productively. So, synthesizing information or, you know, choosing like the best sources for some type of academic paper. So, that's one kind of realm that I'm in. And then, the other work that I do is with members of the state legislature here in Maryland, and I think they're struggling. So, I think there's a lot of promise in terms of promulgating some AI guardrails, but they're struggling as to whether to use this fragmented approach where once a problem comes along, you try to ameliorate that with a very specific segmented solution. So, they've done that with election-related deepfakes, with deepfake pornography, and they have gotten around to AI governance in public -- public agencies, so state agencies. But I think there is a lot of ambivalence and confusion, at least, as it comes to how to regulate private enterprise. And I think they're torn between wanting to set guardrails, but also not wanting to interrupt the -- the innovation that we've seen in the industry that clearly has all these positive impacts on -- on interests in the state. So, I know those are -- it's not quite an answer to your question because I'm not kind of in that startup world, but that's just kind of the perspective I see from -- from my worlds.
Casey Bleeker: Excellent, excellent. Yes, it -- it is a challenge. Maybe it's a future topic to discuss is the -- the use of -- of AI in education. We've been pretty heavily involved in -- in that topic as well, working with quite a few more K-12s and -- and some higher ed institutions. And that's a -- that's an interesting evolution. You see almost two sides of the -- of the coin there where some organizations fully embrace it, are educating people on how to utilize it, and others are trying to figure out how to observe it more -- more closely. [ Music ]
Dave Bittner: Interesting stuff, Ben. Overall, looking back on this conversation, would you say that you and Casey are in alignment with your optimism slash pessimism over what the next year could bring, or where do you -- where do you count yourselves?
Ben Yelin: Yes, I think so. I mean, I think there's the practical view on this where we have a patchwork of regulation of artificial intelligence. There is this very notable absence of federal action, particularly relative to what we're seeing out of the European Union. And that's just going to create a lot of short-term uncertainty. And I -- I do think we're on the same page, at least in -- in that big picture sense.
Dave Bittner: Yes. All right. Well, our thanks to Casey Bleeker from SurePath AI for joining us. We do appreciate him taking the time. [ Music ] And that is our show brought to you by N2K CyberWire. We'd love to know what you think of this podcast. Your feedback ensures we deliver the insights that keep you a step ahead in the rapidly changing world of cybersecurity. If you like our show, please share a rating and review in your favorite podcast app. Please also fill out the survey in the Show Notes or send an e-mail to caveat@n2k.com. N2K makes it easy for companies to optimize your biggest investment, your people. We make you smarter about your teams while making your team smarter. Learn how at N2K.com. This episode is produced by Liz Stokes. Our Executive Producer is Jennifer Eiben. The show is mixed by Tre Hester. Our Executive Editor is Brandon Karpf. Peter Kilpe is our publisher. I'm Dave Bittner.
Ben Yelin: And I'm Ben Yelin.
Dave Bittner: Thanks for listening. [ Music ]