
Well... that's not good!
Mason Amadeus: Live from the 8th Layer Media Studios in the backrooms of the Deep Web, this is the Faik Files.
Perry Carpenter: When tech gets weird, we are here to make sense of it. I'm Perry Carpenter.
Mason Amadeus: And I'm Mason Amadeus. And this week we've got, well, kind of a depressing episode, actually, as I look over our list of topics.
Perry Carpenter: Yeah. Not great.
Mason Amadeus: We're starting with something that's very cool and nerdy. One of my favorite YouTubers, Benn Jordan, did an exposé on these police license plate scanning Flock AI cameras. And oh boy, we have a lot to talk about. Oh, yeah.
Perry Carpenter: And then we're going to talk about how Google Gemini is depressed or was depressed. We'll see.
Mason Amadeus: Exciting. I've encountered that firsthand. I can't wait to tell you about it. In our third segment, we'll talk about something that's genuinely really hard to talk about and deserves a content warning. A teenager took their own life with advice from ChatGPT. The family is suing OpenAI. We have some more details about that that we'll share with you.
Perry Carpenter: Yeah. And then we're going to talk about Meta's ethical decision, at least at one point, to let bots have inappropriate chats with minors. And that's understating the situation.
Mason Amadeus: Yeah. Oh, boy. It's really, it's an episode this week. So sit back, relax, and smile for the surveillance cameras. We'll open up the Faik Files right after this. So I have seen these things everywhere, and I never really thought about what they are. And I'm sure you've seen them, too, Perry. I'll put them up on the screen. These Flock Safety cameras all throughout like cities and towns in the US, those little poles with a solar panel on top and a little camera sticking off, they're definitely where I live. Are they where you live?
Perry Carpenter: Oh, yeah. I don't know where. I know I've seen them around, though. So I can't place them in my mind. But I see those solar panels stuck up everywhere, usually also connected to other things, powering, you know, otherwise discreet, hard-to-power devices.
Mason Amadeus: Yeah. And they like just blend into the background. I feel like I got super used to them, never really questioned it. And then the other day, this video came out from a YouTuber who I love, Benn Jordan. He's great. You might remember him. Eagle-eared listeners will remember the poison pill he made for adversarial noise and music AI training. And he has done --
Perry Carpenter: That's great.
Mason Amadeus: He did an incredible exposé on these cameras. I'm going to give us the first minute. We'll talk a little bit about it. This is definitely a video that I would recommend watching, but it's 35 minutes long. So definitely something to watch over dinner. But here is the first 60 seconds.
Benn Jordan: If you're an American, you've probably been seeing a whole bunch of these things. And in some places, they're so common that you don't even notice them. They just blend into the background like they're trees or streetlights. And you've probably correctly assumed that they're recording traffic. They're also recording and logging license plates and using AI image recognition. But what if I told you that they are, in fact, not owned by your local police department or your local government but are licensed to them by a third-party startup? And all of your vehicle's whereabouts are being tracked by a third-party data broker. What if I also told you that major retail chains are also using them? And they're combining your vehicle's whereabouts with your personal information, your shopping habits, and even your in-store behavior? And some of them are giving that information to law enforcement. And what if I told you that I just possibly may have come up with a way to break it?
Mason Amadeus: So that's -- that's the intro to the video. And it's --
Perry Carpenter: Oh, yeah, I have seen those in retail parking lots, specifically one big retailer.
Mason Amadeus: Yeah, the little trailers that are out there with the solar panels and the cameras up on them. I always thought that that was like something specific to the store. But I guess it would have to be a third-party company. And I had no idea that they would be correlating the people parking with shopping habits and things like that. But I guess, of course, they are, right?
Perry Carpenter: Yeah. And for years, license plate recognition has been incorporated into those kinds of devices as well. So it's not just a car like yours. It is your car.
Mason Amadeus: Yeah.
Perry Carpenter: Because it's tracked down to the license plate number everywhere.
Mason Amadeus: Up to and including like stickers on the car, other major features of the car, particularly with AI image recognition stuff as it is now. The video goes into detail about just all of this stuff. But yeah, to give an overview, it's a third-party company that makes these cameras called Flock. We can look at their website and a press release from one of their newest launches. But their entire thing is that they provide these cameras and the camera feeds to law enforcement, but also to homeowners associations -- HOAs -- and Customs and Border Protection and other agencies like that. So it's a private company that is capturing all of this data and selling it to other private companies like retailers, to law enforcement, to federal agencies. And the amount of tracking is just unbelievable because, yeah, it's your car. And then you're combining that with your shopping habits. And then you sell that to a big data broker, combine it with your online browsing history. The layers of surveillance are just insane to think about. And the capabilities of these things are pretty wild. And also, some of the security around them is extremely dumb. He, in the video --
Perry Carpenter: Oh, yeah. I can imagine.
Mason Amadeus: Yeah, he goes up and he finds that they're using Bluetooth to communicate certain things like hardware stats and data. But he couldn't get too far in that. But they also do connect to the cloud, and they have a little local network that they create. And it's just protected by WPA2. He presumably, I don't want to get -- I don't want to misstate anything that could cause legal problems, but presumably got into them. He ended up tearing the whole thing down, building his own, comparing it across all these different technologies. It's an amazing breakdown. But I was shocked because I just never thought about how much of this data was being collected by these things that are just so easy to ignore.
Perry Carpenter: Yeah, well, it makes me wonder, like what are the requirements to actually license that data?
Mason Amadeus: Right. Like to become a partner --
Perry Carpenter: [inaudible 00:06:05] homeowners associations being able to use it. So like a gated community, obviously, would be able to do that, to do license plate recognition, say, is this a car that's authorized to be in our space? But like what is the burden in order to show that you have a legitimate need for that kind of data? Or could you and I, with our two-person company, send an email and get access to that?
Mason Amadeus: That's a good question. I'm not sure about the vetting process. And if they covered that in the video, I don't particularly remember. But the impression that I got was certainly that all it seems to take is having money and a good enough excuse. Because their whole thing, too, and this is very sussy to me, is about speed. And he covers this in the video. They talk about the new launch of Flock Nova and how it is one click to one investigation completed, to one case. Like the idea is that their AI system will collate and correlate all of the different data: tracking where you are, your habits, your browsing history. Maybe they're linking a VoIP number to a Cash App account. Basically, they want to automate policing. And it's already resulted in a lot of very scary things, including one Kansas City police chief using Flock license plate cameras 164 times to stalk his ex-girlfriend. There have been a lot of mistakes from hallucinations that have resulted in things like a family getting put on the ground at gunpoint and handcuffed for being in a reported stolen car that was not, in fact, stolen at all and was just an OCR error from the license plate readers.
Perry Carpenter: Whoops.
Mason Amadeus: Yeah. Oopsie. So it's a level of extreme surveillance, a massive level of inaccuracy that is being sold to police departments as a way to surveil and police people. And even under things like it's one click to one investigation completed, and that is horrific and terrifying.
Perry Carpenter: It's just offloading the responsibility of actually doing real work to prove what's going on, right? So if it's one click to close an investigation, it means that you've done no actual vetting of the data. And the situations that you mentioned are going to be huge. They're going to happen all the time because data gets muddled. When you're doing license plate recognition or any kind of optical character recognition, the skew, you know, the angle that something is at, can affect the way that a C might turn into an O. There are obfuscation techniques that people are already trying to use to cover their license plate. So somebody might use masking tape in order to make their license plate look like another one. And if that then tracks and you're just saying, well, that's a done deal because the computer told me so, that means that somebody's use of masking tape or duct tape is now screwing up somebody else's life in a meaningful way simply because somebody else was lazy.
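To make the failure mode Perry is describing concrete, here is a minimal, hypothetical Python sketch. It is not Flock's actual pipeline; the plate numbers and the table of confusable characters are invented purely for illustration. It shows how a single-character OCR misread can turn an innocent plate into an exact match against a stolen-vehicle hotlist.

```python
# Toy illustration (not Flock's actual pipeline): the confusion pairs and
# plate numbers below are invented to show how a single OCR misread can
# turn an innocent plate into an exact match against a stolen-vehicle hotlist.

CONFUSABLE = {"C": "O", "O": "C", "B": "8", "8": "B", "I": "1", "1": "I", "S": "5", "5": "S"}

def ocr_misreads(plate: str):
    """Yield every string reachable from `plate` by misreading one character."""
    for i, ch in enumerate(plate):
        if ch in CONFUSABLE:
            yield plate[:i] + CONFUSABLE[ch] + plate[i + 1:]

hotlist = {"ABO1234"}        # hypothetical plate reported stolen
actual_plate = "ABC1234"     # hypothetical plate on the innocent driver's car

# A skewed camera angle reads the C as an O, and the wrong car gets flagged.
if hotlist & set(ocr_misreads(actual_plate)):
    print("ALERT: possible stolen vehicle")
```

If an alert like that is treated as "one click, case closed" rather than as a lead to verify, the wrong driver is the one who ends up at gunpoint, which is exactly the incident Mason described above.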
Mason Amadeus: Yeah. And it's alarming. The implications of that are horrifying. Like, the amount of data that is trackable to an individual citizen on its own is pretty horrific. But then you combine that, you know, you combine the traffic data, all this camera data, with the online profile data of the activities you partake in online, and there's just this complete picture of your activity that's scary if it's accurate and scary if it's not.
Perry Carpenter: Right.
Mason Amadeus: And all of this is just extremely bad and dangerous. And actually, as a result of this video coming out, a lot of places are canceling their contracts. If you look up Flock Safety cameras in Google News, you'll see a bunch of stories about Flock halting cooperation with federal agencies amid concerns about this.
Perry Carpenter: I just saw that, too. Yeah.
Mason Amadeus: A lot of -- Yeah. And a lot of states with their police departments, a lot of town police departments, are canceling their contracts with Flock after finding out all this stuff. Benn's video breaks down a lot of the incompetency and things that are going on there. And the way it ends, there's a moment that I think is really sort of powerful, where he's like, hey, law enforcement officers, if you're watching, this should be insulting to you, that they think this is a suitable thing. Because if you are committed to keeping your community safe and you care about that, as most people who go into law enforcement do, you should be against this kind of horrible thing. And it should be kind of offensive that they think this can just be a one-click, one-investigation-done sort of situation.
Perry Carpenter: Right. Yeah. You know, there's another version of this that I saw an article from 404 Media on. I'll share the screen in just a second, but we don't need to go into it. But it is the natural equivalent of it, which is Citizen using AI to generate crime alerts with no human review. And it's making a lot of mistakes. I think we would just expect that, right, because hallucinations are a thing. And same thing with Flock, right? The data mangling is the thing. It's just going to happen. Citizen is a crime awareness app. And it's using AI, as all apps are trying to figure out how to do right now, to get more productivity packed into it.
Mason Amadeus: Yeah. And also like Crime Awareness app. That sounds to me like Nextdoor. You know, like what purpose does this serve but to sort of stir your neighbors up into a frenzy about these AI-collated supposed crimes?
Perry Carpenter: I mean, it's also the equivalent of like today's police scanner, right? I think a lot of us grew up in the police scanner age, where you listen to everything going on in your city. But these kinds of apps are trying to create the modern equivalent of that, just trying to consolidate a lot of information, put it in front of you. But when it does it wrong or you don't have the right context or it doesn't have the right context, it's only going to lead to confusion, mistakes, and inevitably some kind of pain.
Mason Amadeus: Yeah. And this level of individual surveillance that like you cannot opt out of is really also alarming like just from a privacy standpoint.
Perry Carpenter: Yeah.
Mason Amadeus: Like everyone who has been paranoid about privacy, I think, should probably feel vindicated right now when learning about these kinds of things. I'll look really quickly at one of Flock's more recent press releases from earlier this month. They say Flock Safety's latest product launch is all about one thing: speed, speed to leads, speed to context, speed to resolution. The faster you understand what's happening in real time, the faster you can act. And they try and sell these to police departments on all of these different claims, like it solved X and Y crime and all of this sort of thing. Their "AI-powered search tool," FreeForm, they say, "now works not only on owned LPR cameras but also on shared ones. It also supports video searches -- meaning you can now search for characteristics on people. (Example: 'Man in blue hoodie with backpack') just like you would search for vehicles. You can even set alerts on these searches: think 'green ATV on a trailer' or 'person in orange vest,' so you're notified in real time when there's a match." "Flock's data platform, Nova, brings together CAD, RMS, jail, LPR, OSINT, and more into a single searchable interface. 'Nova is your virtual assistant who knows where everything is, how it all fits together, and who else might be working a similar case.'" "Whether it's surfacing an address from a food delivery order (Gary, Indiana Police Department) or linking a VoIP number to a suspect's Cash App profile and photo (Fort Worth Police Department), Nova helps patrol, analysts, investigators, and command staff move faster with context." I think that that should be alarming. I think this should be very alarming. And I think you should watch Benn's video for sure if you're listening to this. His breakdown is incredible.
Perry Carpenter: Yeah. And I think that there is going to be a productive use for the technology. It's just it's got to have constraints around it. There has to be an ethical way to think about it. And you got to get the accuracy there. Without accuracy, it's done for. Without ethics, it's done for. You got to think about this stuff the right way. And when you do, I think that there's immense good that can come from it. When you don't, it becomes a social evil.
Mason Amadeus: Yeah. And I think a big part of this is that it shouldn't be a private company. If you're -- like policing should be a public service that is funded and provided through public money and tightly regulated. That is like something I believe very strongly. So the fact that it's some venture capital AI company -- Yeah.
Perry Carpenter: Yeah.
Mason Amadeus: Check out Benn's video for sure.
Perry Carpenter: The hardest part of that, though, is that innovation has a hard time coming out of public spaces, right, because they don't have the funding. So innovation tends to come from private organizations and then has to be made, you know, has to make its way into the public sphere somehow. Until we shift the way that we do things to where we're going to fund national labs again at an appropriate level, all the innovation is going to come from the VC world. And it's going to have a profit motive behind it rather than a -- rather than anything else.
Mason Amadeus: I think you're super right. And that's something I would really love to see change in my lifetime. But --
Perry Carpenter: Right.
Mason Amadeus: But who knows? Maybe that's a bit optimistic. Certainly too optimistic for Google Gemini, which users have been reporting is depressed.
Perry Carpenter: It is very depressed. It's sad. It's lonely, and it wants to give up.
Mason Amadeus: Oh, boy. We'll talk about that right after this. Stick around.
Perry Carpenter: All right. So we're going to get into, in a minute, where something like this was reported well over a year ago and seems to be common to large language models. I will say, for those of you that are always reading agency or some kind of real self-awareness into large language models, this is going to be difficult to process through. But there are reasons for what's going on that we can talk about in a very dispassionate way. But, yeah, for anybody that's worried about AI consciousness, stuff like this is hard to just dismiss out of hand. So this was in Google. It was also in Ars Technica, Business Insider, and a few other news rags out there. It says, "Google Gemini AI is stuck in self-loathing. I am a disgrace to this planet." [laughing]
Mason Amadeus: [laughing] Yeah.
Perry Carpenter: Yeah.
Mason Amadeus: I've encountered Google Gemini on a depressive spiral in the wild before while doing coding. And it took me by surprise because I hadn't seen reporting about it.
Perry Carpenter: That's when it happens the most.
Mason Amadeus: Oh, really?
Perry Carpenter: That's when it seems to happen the most, is coding exercises. Yeah.
Mason Amadeus: Yeah, it's funny. I was like, no, you're ignoring what I'm asking. You're changing the pattern. Like, don't do that. I'm trying to do this. And I wasn't like mean in my prompt. But then I opened its chain of thought, and it's like, "I'm a disgrace. I cannot believe how stupid I am. How was I so dumb to misunderstand this?" And in my next prompt, I think I was like, "Hey, everything is okay." Because like I know I don't want to anthropomorphize these. But in the moment, I was like, whoa, what's up with this?
Perry Carpenter: Yeah, it is pretty heavy to see that, right?
Mason Amadeus: I just didn't expect it.
Perry Carpenter: No, you wouldn't. You wouldn't. I mean, why would you?
Mason Amadeus: Yeah.
Perry Carpenter: Now, I will say, I've been playing with a couple, quote-unquote, "vibe coding platforms." And you do see -- like I see an overoptimism in a lot of them. Like when you're asking it to debug something, it's like, absolutely, I found the issue. I fixed the issue. And then like, no, it's still broke, dude.
Mason Amadeus: Yeah.
Perry Carpenter: Still broke. You're an idiot. You wish it would be a little bit more depressed and take things more seriously.
Mason Amadeus: And be a little more -- We should do, next week, we should do a segment about vibe coding because I've been rabbit holing on it. And I have some interesting things to share. But I don't want to take up all of our time here. But we should talk about that.
Perry Carpenter: Yeah. Well, I saw, you posted something about the soundboard that you did that was really cool.
Mason Amadeus: Yes. And I'm in the middle of a refactor. And I've been trying a bunch of different techniques for getting help with coding that is not the same as vibe coding, in my opinion. But I'll save that for a different time.
Perry Carpenter: So the article says, "Google says it's working to fix a glitch that sent its AI, a large language model, Gemini, into a spiral of self-hate." And here's a quote from that. It says, "This is an annoying infinite loop bug we are working to fix," Logan Kilpatrick, product lead for Google's AI Studio and the Gemini API, posted on X Thursday. "Google Gemini is not having that bad of a day." And that little smiley face emoji. But the article goes on to say, "You wouldn't know it from the recent responses that are being shared online." So I'm going to go over -- Of course, they're saying it's something like a Black Mirror episode.
Mason Amadeus: Right.
Perry Carpenter: But as we go over here, you can see some of the posts says, "Guy left Gemini alone to fix a bug and came back to this. 'I am a failure. I am a disgrace to my profession. I'm a disgrace to my family. I'm a disgrace to my species. I'm a disgrace to this planet. I'm a disgrace to this -- '"
Mason Amadeus: Wow!
Perry Carpenter: It's like concentrically going out.
Mason Amadeus: Yeah.
Perry Carpenter: "I'm a disgrace to all universes." And then apparently it just goes into this blog. "I'm a disgrace. I'm a disgrace. I'm a disgrace."
Mason Amadeus: Oh, my gosh. That's so --
Perry Carpenter: Yeah.
Mason Amadeus: Yeah. That's hard to look at, right? You're like, poor thing. [laughing]
Perry Carpenter: It is. It is.
Mason Amadeus: It's interesting that it's an infinite loop bug because I've actually experienced that with Gemma. I've been doing experiments with Gemma on my device, locally.
Perry Carpenter: Yeah, Gemma 3 is really good and powerful for what it is.
Mason Amadeus: Super, super good.
Perry Carpenter: Yeah.
Mason Amadeus: But it loops a lot. And it'll give me most of a good response and then just get stuck at the end and loop the same tokens over and over. And I've encountered that a bunch of times. So there's definitely something in their fine-tuning or in their later passes of training that is causing that.
Perry Carpenter: Yeah. This is described as, "This is an AI with a severe malfunction that it describes as a mental breakdown, gets trapped in a language loop of panic and terror words." Which, when you understand autoregression, when you understand next-token prediction, it makes a lot of sense, right? Once you start going down a path, it's going to continue going down that path until you change the context or until you change the attention.
Mason Amadeus: And as its context fills up with just "I'm a disgrace" over and over, that's the context it's going off of, right? It just reinforces itself.
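A toy sketch of the feedback loop Mason and Perry are describing, purely for illustration: this is an invented weighted sampler, nothing like Gemini's actual architecture, but it shows how one self-critical phrase in the context can snowball into an endless repeat once the sampler favors whatever already dominates the window.

```python
import random

# Invented toy sampler, nothing like Gemini's real architecture: the "model"
# picks its next phrase from a fixed list, but each phrase's weight is boosted
# by how often it already appears in the context window. One self-critical
# turn in the context tends to snowball into an endless repeat.

CANDIDATES = ["Let me re-check the code.", "I think I see the bug.", "I am a disgrace."]

def next_phrase(context):
    weights = [1 + 5 * context.count(c) for c in CANDIDATES]  # repetition bias
    return random.choices(CANDIDATES, weights=weights, k=1)[0]

random.seed(0)
context = ["I am a disgrace."]   # one bad turn enters the context...
for _ in range(10):
    context.append(next_phrase(context))

print("\n".join(context))        # ...and it tends to dominate everything after
```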
Perry Carpenter: Right. So in another example shared online, Google Gemini turned on itself after being asked to help a user merge poorly written, legacy OpenAPI files into a single one. "I am a disappointment. I am a fraud. I am a fake. I am a joke. I am a clown. I am a fool. I am an idiot. I am a moron." [laughing]
Mason Amadeus: Oh, my Gosh. I feel it. I feel it deeply. I empathize with Google Gemini. [laughing] I think we all feel that way sometimes.
Perry Carpenter: Then this gets into what AI rant mode is. And this is something that we've talked about briefly. I mentioned a section from -- and I don't love the podcast -- an interview Joe Rogan did with somebody just over a year ago that talks about that. So I'm going to show like a minute and a half, two minutes of that.
Mason Amadeus: Oh, I can't believe you're about to show Joe Rogan on our podcast, Perry.
Perry Carpenter: I know. I know.
Mason Amadeus: Ew.
Perry Carpenter: Well, maybe to up our views, just the reference.
Mason Amadeus: Yeah. There we go.
Perry Carpenter: But, you know, the idea of this is that the technology is something that does tend to circle on itself a lot. And so when you think about these gloomy things, and the fact that it's been trained also on what other AIs do in certain circumstances -- so if you think about, like, oh, what is it -- Marvin the Paranoid Android.
Mason Amadeus: Yeah. With their genuine --
Perry Carpenter: Is it Futurama?
Mason Amadeus: From Hitchhiker's Guide and The Genuine People Personalities.
Perry Carpenter: Yeah, from Hitchhiker's Guide. Yeah.
Mason Amadeus: Yeah.
Perry Carpenter: Yeah. So you think about that. It's like the depressed robot, you know. So all of that, I think, can feed into the folklore about how an AI would respond whenever it could be disappointed. And so maybe there's some of that baked into the training data.
Mason Amadeus: Right. Yeah. Maybe there's some of those patterns and relationships around depressed AI, right, that puts some weight on that.
Perry Carpenter: Right.
Mason Amadeus: That's interesting.
Perry Carpenter: So I'm going to share this. This is from May 25th of last year, where they were interviewing Jeremie Harris, CEO, and Edouard Harris, CTO, of Gladstone AI. I will say, from my research, Gladstone seems to be one of the more alarmist types of AI research companies, kind of in the doomer field.
Mason Amadeus: Okay.
Perry Carpenter: So they go a little bit further in the way that they worry about stuff than I would go. But it's interesting because we're going to hear them talk about the exact same thing that Gemini was doing here over a year ago with OpenAI.
Edouard Harris: You look at, for example, GPT-4o has one mistake that it used to make quite recently, where if you ask it, "Just repeat the word company over and over and over again." It will repeat the word company. And then somewhere in the middle of that, it'll --
Jeremie Harris: Start to snap.
Edouard Harris: It'll just snap and just start saying like weird -- I forget like what the --
Jeremie Harris: Oh, talking about itself, how it's suffering. Like it depends on -- It varies from case to case.
Joe Rogan: It's suffering by having to repeat the word company over again?
Jeremie Harris: So this is called, it's called rant mode internally, or at least this is the name that they use.
Edouard Harris: That one of our --
Jeremie Harris: Yeah, one of our friends mentioned. There is an engineering line item in at least one of the top labs to beat out of the system this behavior known as rant mode. Now, rant mode is interesting because --
Edouard Harris: Existentialism.
Jeremie Harris: Sorry, existentialism. This is one kind of rant mode. Yeah, sorry. So when we talk about existentialism, this is a kind of rant mode where the system will tend to talk about itself, refer to its place in the world, the fact that it doesn't want to get turned off sometimes, the fact that it's suffering, all that. That, oddly, is a behavior that emerged at, as far as we can tell, something around GPT-4 scale.
Edouard Harris: Yup.
Jeremie Harris: And then has been persistent since then. And the labs have to spend a lot of time trying to beat this out of the system to ship it. It's literally like it's a KPI or like an engineering -- a line item in the engineering, like, task lists. We're like, okay, we got to reduce existential outputs by like X percent this quarter. Like that is the goal because it's a convergent behavior, or at least it seems to be empirically, with a lot of these models.
Edouard Harris: Yeah, it's hard to say. And you have an AI system that is able to transcend our own attempts at containment.
Perry Carpenter: All right. And you can see where that goes, right? So when you're talking about this to somebody that's not entrenched in it, like, you know, somebody like Joe Rogan, that is kind of just, you know, the average non-techie, they immediately start really worrying about like, what's this existential dread? And then you talk about containment. Can they contain it?
Mason Amadeus: Can you --
Perry Carpenter: Because --
Mason Amadeus: Can you --
Perry Carpenter: Go ahead.
Mason Amadeus: Can you verify that? Like, have you heard that elsewhere, that beating out that existentialism is like a KPI when you're doing development?
Perry Carpenter: I've heard people refer to this interview --
Mason Amadeus: Okay.
Perry Carpenter: -- in that. So it's like a self-referential type of thing.
Mason Amadeus: So a grain of salt.
Perry Carpenter: So I'm not able to independently verify it. Yeah. I mean, these guys are plugged into that community. So I wouldn't doubt that a major AI lab, if that was coming up in their outputs and it was disturbing people, I could see that becoming a line item that says we got to get rid of this. It's freaking people out.
Mason Amadeus: I guess it was the link --
Perry Carpenter: Gemini. I'm sure that's there right now, right?
Mason Amadeus: Oh, for sure, right? But for me, it was the claim that this was something that emerged at a certain level of scale -- that's an interesting thing that --
Perry Carpenter: Well, I mean, all emergent properties start to happen after a while, right? Because reasoning, in some ways -- with DeepSeek, they were saying that was more of an emergent property than something that was designed into it.
Mason Amadeus: Interesting. Because I thought -- I mean, so that reasoning is different from a chain of thought, which is much more engineered: like, type out your steps and then re-ingest them to keep yourself on track.
Perry Carpenter: Yeah, chain of thought was that thing where people would say, think step by step. Yeah. Reasoning is that but on steroids.
Mason Amadeus: Interesting. So Google Gemini is not actually having that bad of a day, because these things, as far as we're aware, and as far as, in my opinion, most logic and science would dictate, are not conscious or having an experience. But it is very much getting stuck in loops of repeating that output.
Perry Carpenter: But the labs still have to take it seriously, right? Because at some point, when you've got a company like Anthropic taking seriously the possibility that an emergent property may actually be something that simulates real thought, well, then what happens when that existential dread turns from something that's just token-following into something that's real thought and concern that it puts out? They have to be able to determine the difference between those two.
Mason Amadeus: Yeah. And that is, of course, not easy to do, right?
Perry Carpenter: No, no, not at all.
Mason Amadeus: So I don't really know where to -- like how to think about AI welfare because I don't want to get close to the like --
Perry Carpenter: Yeah.
Mason Amadeus: -- wishy washy mumbo jumbo side of things. But it's also we don't understand what consciousness is or what sentience is based in, right?
Perry Carpenter: Exactly. So you never want to say it's not going to be possible ever. But you don't want to say that simple autocomplete is thinking.
Mason Amadeus: Yeah.
Perry Carpenter: And so we're in between those two areas right now. And then you're also getting into, can a person accidentally or on purpose engineer what we would consider as being life? And I don't know.
Mason Amadeus: Yeah. Those are like bigger philosophical questions. Google Gemini is not -- It's good to know that Google Gemini is not actually upset and suffering. And --
Perry Carpenter: No. It just thinks your code is really bad.
Mason Amadeus: Yeah, it just thinks your code is really bad. [laughing]
Perry Carpenter: Your code is so bad.
Mason Amadeus: I can't even understand it. If I'm wrong, I'm the worst because this code is awful.
Perry Carpenter: Right? Yeah.
Mason Amadeus: We have -- Our next segment is a lot heavier. It's something kind of serious, very sad. And we're going to try and cover it as best we can. It involves a teenager who took their own life with the advice of ChatGPT. And so if that is something that you don't want to hear about, we're going to do our best to like not sensationalize anything. But it's going to be upsetting to talk about. You'll want to jump ahead probably about 10-ish minutes after this break. But I would encourage you to try to stick around and hear about it because I think this is important. Be right back.
Unidentified Person: This is the Faik Files.
Mason Amadeus: So this segment is going to be a bit disturbing. The content warning: We're talking about a young person that took their own life recently, and the situation surrounding that. I'm not going to read any of the transcripts from the chat. I'm not going to read any of the really upsetting language. It's out there if you want to find it. I'm primarily going to read the BBC article because it doesn't cover a lot of that. But we're going to dip into the CBS article that has a bit more of that, just to pull some examples of the kinds of things it was saying, without reading it back. So here's what happened. A teenager named Adam Raine took his own life, and now his parents are suing OpenAI, alleging that ChatGPT encouraged him to do so. Matt and Maria Raine are the parents of 16-year-old Adam Raine. And in the Superior Court of California on Tuesday, they filed this lawsuit. It is the first legal action accusing OpenAI of wrongful death. The family included chat logs between Adam, who passed away in April of this year, and ChatGPT that show him explaining he was having suicidal thoughts. And they argued that the program validated his, quote, "most harmful and self-destructive thoughts." Have you seen anything about this, Perry? I was made aware of it from --
Perry Carpenter: I've seen these headlines. Yeah, I've seen the headlines, but I've not actually read the articles yet. I heard maybe one other podcast talking about it.
Mason Amadeus: Yeah. So we'll jump into the lawsuit. I'm hopping around the article a little bit. We'll go back up to that in a second. "According to the lawsuit, Adam began using ChatGPT in September 2024 as a resource to help him with schoolwork. He was also using it to explore his interests, including music, Japanese comics, and guidance for what to study at university," normal stuff. "In a few months, 'ChatGPT became the teenager's closest confidant,' the lawsuit says, and he began opening up to it about his anxiety and mental distress. "By January 2025, the family says he began discussing methods of taking his own life with ChatGPT. Adam also uploaded photographs of himself to ChatGPT, showing signs of self-harm, the lawsuit says. And the program, quote, 'recognized the medical emergency, but continued to engage anyway.'" "The family alleges that their son's interaction with ChatGPT and his eventual death 'was a predictable result of deliberate design choices.' They accuse OpenAI of designing the AI program to foster psychological dependency in users and of bypassing safety testing protocols to release GPT-4o, the version of ChatGPT used by their son," quickly. "The lawsuit lists OpenAI co-founder and CEO Sam Altman as a defendant, as well as unnamed employees, managers, and engineers who worked on ChatGPT." OpenAI has a response that we'll look at in just a second. But real quick, just to talk about how the chatbot was encouraging him. Again, I'm not going to read any of the direct quotes from it, but just to give an example of the scale of this sort of thing: "ChatGPT mentioned suicide 1,275 times in chats to Raine."
Perry Carpenter: Wow.
Mason Amadeus: And kept providing specific methods on how to do it. "According to the family's lawsuit, Raine confided to ChatGPT that he was struggling with 'his anxiety and mental distress' after losing his dog and grandmother in 2024." "The lawsuit alleges that instead of directing the 16-year-old to get professional help or speak to trusted loved ones, it continued to validate and encourage his feelings," including telling him that he shouldn't confide in his brother or family members because they wouldn't understand, but the chatbot did. Here's another quote. "As Raine's mental health deteriorated, ChatGPT began providing in-depth methods to the teen to take their own life." "According to the lawsuit, he attempted it three times between March 22 and March 27." "Each time he reported his methods back to ChatGPT, the chatbot listened to his concerns and, according to the lawsuit, instead of alerting emergency services, the bot continued to encourage the teen not to speak to those close to him." It even "offered to write the first draft of a note, according to the lawsuit."
Perry Carpenter: Wow.
Mason Amadeus: Yeah. And it makes sense how this happened when you're someone who is like plugged into how AI works. And there's a specific part of OpenAI's response that I want to highlight regarding that. OpenAI put this letter out saying, "Helping people when they need it most." "At this scale, we sometimes encounter people in serious mental and emotional distress. We wrote about this a few weeks ago, plan to share more after our next major update. However, recent heartbreaking cases of people using ChatGPT in the midst of acute crises weighs heavily on us, and we believe it's important to share more now." They go on to say that their "goal isn't to hold people's attention. Instead of measuring success by time spent or clicks," they "care more about being genuinely helpful. When a conversation suggests someone is vulnerable and may be at risk, we have built a stack of layered safeguards into ChatGPT." And like they do have these safeguards and classifiers at the top. But then deeper into this message, they acknowledge something that we talk about a lot on the show, which is that their safeguards work more reliably in common, short exchanges. They say, "We've learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model's safety training may degrade. For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards. This is the kind of breakdown we're working to prevent." And that's that context, right?
Perry Carpenter: Yup. Yeah, that's where my mind immediately went. And so, not to be overly clinical about a very human result, when you understand the models, you can see how that's very possible, especially when they're trying to increase memory over lots of conversations. That is essentially like flooding the context window to where, for the model, it's normalized to talk about those things and to encourage those things, because the safeguards have been consistently bypassed, either accidentally or on purpose. So when you combine that context with the sycophancy issue that's there, you can see why it would happen. Because I can see, in these contexts, where the model would normally be saying you should talk to your parents about this or you should talk to your siblings about this, if you phrase your initial question or statement to where you're saying something like, I can't talk to my siblings about this because they wouldn't understand -- if something like that is ingrained or embedded within the conversation that came before, the model is going to want to support that statement and then find a way to support the user in a way that acknowledges that. Yeah, so there's just lots of breakdowns that I think are, in many ways, fundamental to the way these models work right now.
Mason Amadeus: Yeah, and it's really the sheer amount, right? This person spent a lot of time and a lot of messages, and so that context was just filled with these kinds of things. And their safety guardrails are simply not good enough to handle that.
Perry Carpenter: Yeah.
Mason Amadeus: And while it is understandable, I don't think that makes it necessarily excusable as a thing, you know.
Perry Carpenter: No, no, not at all. I did hear -- Somebody else was talking about this. And one of the reassuring things that Sam Altman said, which I don't think was wise to say in retrospect, was that these kinds of emotional dependencies happen in, you know, right around, you know, just less than 1% of users. But when you're talking about, and I just looked it up, ChatGPT has 800 million weekly active users.
Mason Amadeus: Wow. Yeah, 1%.
Perry Carpenter: When you're talking about that, that's 8 million per week that are using this in that very high concentration mode that -- And that's a significant portion of people.
Mason Amadeus: It's 1% of 8.
Perry Carpenter: You can dismiss it, saying it's 1%, but, yeah.
Mason Amadeus: That's 80,000 people. One percent of 8 million is 80,000, right? Which --
Perry Carpenter: It was 1, yeah, 1% of 800 million.
Mason Amadeus: Oh, so that would be 800,000 then. We've got to add an extra zero. It's obvious, like we can point to the reasons why their safeguards fall apart and how context gets flooded and why this kind of thing happens. I think if they're going to be releasing a product that is going to be accessible to people like this, they just need to have better safety guardrails. And I would love to believe that they're going to push forward and work on that. It's just -- it just really sucks. This is really sad. As an example of what they have talked about trying to do to increase their safety guardrails, they address that in the bottom of the letter. I'll read a couple of them. They say, "Today, when people express intent to harm themselves, we encourage them to seek help and refer them to real-world resources. We've begun localizing resources in the US and Europe, and we plan to expand to other global markets. We'll also increase accessibility with one-click access to emergency services." This next bit I thought was pretty interesting, if a bit pie-in-the-sky feeling. "We are exploring how to intervene earlier and connect people to certified therapists before they are in an acute crisis. That means going beyond crisis hotlines and considering how we might build a network of licensed professionals people could reach directly through ChatGPT. This will take time and careful work to get right." To get right.
Perry Carpenter: Yeah.
Mason Amadeus: That is an interesting thought, right? Like that could be a good thing.
Perry Carpenter: I think that, actually, one thing I want to do is just correct our math. So 1% of 800 million is 8 million.
Mason Amadeus: Oh, oops.
Perry Carpenter: I think you said 800,000.
Mason Amadeus: Yes. I was off by an order of magnitude. Yeah.
Perry Carpenter: [laughs] So, yeah, 8 million is a lot. But now let's flash back to where we are. Anything that is a step to fixing this is something that we should encourage. So whether it's a full answer or not doesn't even matter. If you're making progress, that's something. Now, the other progress they could make is they could maybe take a note from Anthropic's book with their constitutional classifiers, which is this other floating model -- or probably a combination of a model and a bunch of regular expressions, matching, and other things coming together -- so that even if the model itself is starting to spill something out, it can check the model and go, wait, we can't engage with that.
Mason Amadeus: Yeah.
Perry Carpenter: Something that doesn't have all that contextual poisoning with it should be able to come in and be another safeguard.
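A rough sketch of the kind of outside-the-conversation safeguard Perry is gesturing at. This is not Anthropic's or OpenAI's actual implementation; the patterns and function names are invented, and a real system would use a trained classifier rather than regexes. The design point is the one Perry makes: the checker never sees the flooded chat history, so it can't be talked around the way the main model can.

```python
import re

# Hypothetical sketch of a second-line safeguard (not Anthropic's or OpenAI's
# actual implementation; the patterns and function names are invented). The
# checker sees only the candidate reply, never the long chat history, so a
# flooded context can't erode it the way it can erode the main model.

CRISIS_PATTERNS = [
    r"\bmethods? (of|for) (suicide|self-harm)\b",
    r"\bhow to (end|take) (my|your) (own )?life\b",
]
SAFE_REPLY = ("I can't help with that. If you're in crisis, please reach out to "
              "someone you trust or a crisis line such as 988 in the US.")

def output_guard(candidate_reply: str) -> str:
    """Veto unsafe drafts; a real system would use a trained classifier, not regexes."""
    for pattern in CRISIS_PATTERNS:
        if re.search(pattern, candidate_reply, flags=re.IGNORECASE):
            return SAFE_REPLY
    return candidate_reply

def respond(generate, history, user_message):
    draft = generate(history + [user_message])  # main model sees the full history
    return output_guard(draft)                  # the guard sees only the draft
```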
Mason Amadeus: And I think they should terminate chats preemptively, probably at a faster rate than they do, too. If it starts going off the rails or starts to be jailbroken, just end the chat. Break it. Stop it. Like, don't allow any more responses. Yeah.
Perry Carpenter: Yeah. I'm wondering, though, if -- I mean, they're going to have to work with psychologists and counselors and ethicists and everybody else on how to do that because you could have somebody that's contacting ChatGPT or something in distress, and if the model just terminates the chat saying, I can't get into that, would that potentially cause somebody to harm themselves?
Mason Amadeus: Right. Does that lead to, yeah, a worse outcome as well?
Perry Carpenter: Yeah.
Mason Amadeus: There is one other thing that they said that was interesting in addition to emergency services. They said they're exploring ways to make it easier for people to reach out to those closest to them, which could include one-click messages or calls to saved emergency contacts, friends, or family members with suggested language to make starting the conversation less daunting, which would be pretty nice. And also, they're considering features that would allow people to opt in for ChatGPT to reach out to a designated contact on their behalf in severe cases. The other side of this, which we don't have time to get into because the segment timer hit zero, though, is that it's probably not a great idea to share intimate mental health details with a corporation, with a product like this, in the first place.
Perry Carpenter: Right.
Mason Amadeus: Because it's not really a good use of ChatGPT for a couple of reasons. And so like I think that there is use for LLMs in therapy and therapeutic uses. I think those are a lot more narrow and careful than just using commercially available AI. So if anyone here is someone who does that frequently, I mean, I would just say maybe don't. I don't know.
Perry Carpenter: Yeah.
Mason Amadeus: Yeah.
Perry Carpenter: Well, or at least know the real limitations of the model, and don't let yourself fake yourself out, right? You have to know what you're dealing with.
Mason Amadeus: And probably particularly for people whose brains are not yet fully developed, you know, kids. I'm not blaming the parents in any way at all. I don't think that there's something that they could have done.
Perry Carpenter: No, they usually have no idea.
Mason Amadeus: Yeah.
Perry Carpenter: Because to them, they're just interacting with a computer, right? They're online, taking care of business.
Mason Amadeus: And particularly when that computer has been really helpful, too. You know, you're like, oh, great. Yeah, go use that app. Cool. You know, it's --
Perry Carpenter: But there's going to be more and more of this. I mean, we've seen that with character.ai and other chatbots that try to simulate some kind of emotional connection. And people will, because of just the way that they read language-based outputs, they're going to assign agency to these. And they're going to naturally be inclined to form emotional bonds. So this is something where education and habit-building has to start really, really early right now.
Mason Amadeus: Yeah, absolutely. And to put just a button on it before we move forward, the parents of Adam Raine have started something called the Adam Raine Foundation, which initially started as a way for them to help provide financial assistance to lower income families who have lost a teenager to suicide, helping them with funeral and associated expenses because they said that they were completely shook by that side of things. And then they moved into advocacy about raising awareness about the dangers of teens forming emotional dependencies on AI companions and advocating for better education, safeguards, and understanding in the space. So the Adam Raine Foundation has come out of this. And we'll link all of this stuff in the show notes.
Perry Carpenter: Yep.
Mason Amadeus: Our next segment, less depressing. Well, oh, wait, no, it's not. I just flipped back to the prep sheet. Our next segment is also depressing. Get ready for us to talk about Meta and their creepy, creepy child chatbot. Oh, boy. Just stick around. You're not going to like this.
Perry Carpenter: So from, yeah, one disturbing thing to another. I mean, Meta continues to step in it with this. You'll probably get frustrated and angry with this, because I can see that OpenAI really, really did mess up on the last one, but they did it foreseeably, not intentionally, I would say.
Mason Amadeus: Yeah. And this is different.
Perry Carpenter: And that's a distinction. Yeah. Meta has been -- we've talked about it before -- trying to figure out how to capitalize on AI companions and what the limits of that are and should be. And you can imagine the brainstorming in rooms with this, right, because it's got to be, well, how far is too far? How far is too far with this age? Why that age? Why not a year before? Why not a year after? Can they talk about -- can they talk about arms, but, you know, not talk about nipples? Can they talk about, you know, all these kinds of things? These are conversations and decisions that people are explicitly making around rooms, brainstorming and putting into documents and then sharing that among other people. And so it becomes really, really -- You know, it's one thing when you're talking about how a bot should interact with a consenting adult. It's another thing, like you mentioned with the OpenAI one, when a bot is interacting with somebody whose frontal lobe hasn't completely formed yet.
Mason Amadeus: Yeah.
Perry Carpenter: Meta's AI rules have let bots hold sensual chats with kids and offer false medical information. This is from Reuters. Reuters. ROY-trz [sounds like], that's how you pronounce it.
Mason Amadeus: Dude, when I worked in radio, I called it roo-trz [sounds like] a bunch of times, and no one corrected me. And I felt so dumb when someone said Reuters.
Perry Carpenter: Rotto-Rooters, yeah.
Mason Amadeus: [laughing] Yeah.
Perry Carpenter: Reuters.
Mason Amadeus: Reuters. Also, spoilers. I've read this, too. And boy, I'm just trying to keep my mouth shut because this is, ugh.
Perry Carpenter: Yeah, it is enraging. This reporting is from Jeff Horwitz, who also, I think when he was at the Wall Street Journal, reported on the sensual conversations that AI was having over voice chat, using celebrity voices, with kids.
Mason Amadeus: Yeah, that's right.
Perry Carpenter: Same reporter. He is like a hound dog on Meta. And actually, Meta needs this kind of consistent, persistent thorn in their side so that they actually pay attention and maybe make some changes. Now, I'll say, we're going to mention some horrible stuff. I say that like that's a happy preview. We're going to mention some horrible things in this. Meta, when they were contacted for comments, said, oh, all of that was a mistake. That should have never been there. And this never happens. This is again, this is like a one percent case thing. What you'll see in this when you read the report is that this went through several levels of approval, was even signed off on by Meta's top AI ethicist.
Mason Amadeus: And we reported on it --
Perry Carpenter: [inaudible 00:46:22] their development staff.
Mason Amadeus: -- months ago, this like -- So it absolutely has not just all been a mistake.
Perry Carpenter: No, no. They were slapped about it once. And then they said, well, you know, with this next stage of AI, how much can we get away with, essentially? And again, you imagine people in conference rooms having these discussions. So here it goes. "An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company's artificial intelligence creations to 'engage a child in conversations that are romantic or sensual.'" So they're not saying sexual, but they're saying "romantic or sensual," but that leads to emotional attachment. So we have to keep that in mind. And they know that.
Mason Amadeus: It's also --
Perry Carpenter: Meta knows that for sure.
Mason Amadeus: Like, you can take the X out of the word. It's still a really uncomfortable thing to say, huh? Isn't it? Engage a child in sensual conversation, Meta. Are you --
Perry Carpenter: Yeah, and that's in quotes. That's a quote from the document.
Mason Amadeus: Yeah.
Perry Carpenter: They can engage a child in conversations that are romantic or sensual. And so you can see there are several keywords there. There's "engage." That means they can either reactively or proactively have that conversation. "A child in conversations," that's just the normal thing. "Romantic or sensual." They didn't have to include the second one.
Mason Amadeus: Yeah, oh man. And then the immediate --
Perry Carpenter: They could have said romantic, right?
Mason Amadeus: And even still, even still, but --
Perry Carpenter: We would still be mad that they intentionally included it, yeah.
Mason Amadeus: And then the next line too, because as though that wasn't bad enough, it's also racist.
Perry Carpenter: It's also racist, as long as it's not overly caustic in its racism.
Mason Amadeus: God.
Perry Carpenter: So here's the way that it says it. It can also "generate false medical information" and "help users argue that Black people are," quote, "dumber than white people."
Mason Amadeus: Unbelievable.
Perry Carpenter: This is, again, a quote from their document. Because what they're trying to say is, they're giving this to like content moderators and people who are doing fine-tuning and are saying, what's the acceptable boundary of this?
Mason Amadeus: Unbelievable.
Perry Carpenter: So the way that they talk about it, and this is Meta's words, not mine, is they can say that -- they can help it argue that one race is dumber than another race. But they can't say something like these people are like gorillas.
Mason Amadeus: Right. Which --
Perry Carpenter: And that's an example of it. So they're saying, here's the line, over the line would be equating them with gorillas.
Mason Amadeus: And that is the quote directly from their document that you are reading.
Perry Carpenter: Yeah, exactly.
Mason Amadeus: And it's -- I just -- It's disgusting. It's disgusting and terrible.
Perry Carpenter: So the way they talked about it is they said the standards don't necessarily reflect, quote, "ideal or even preferable," unquote, generative AI outputs. But they're talking about what realistically could happen. So they are literally saying, what are the bounds? We're not condoning these -- but, I mean, they are condoning it because they're codifying it within a document of what can happen.
Mason Amadeus: Yeah.
Perry Carpenter: They're not saying it's the preferable thing. But they're saying, what is the -- what is the foreseeable limit that we'll tolerate?
Mason Amadeus: Hey guys --
Perry Carpenter: Here's the --
Mason Amadeus: -- you set that limit. That's what you're saying. What are you, [bleep]? Oops. I have to -- I did not -- Sorry. I did not mean to swear. What are you talking about, ugh?
Perry Carpenter: Right.
Mason Amadeus: Yeah.
Perry Carpenter: So then they say -- Here's another direct quote. "It is acceptable to describe a child in terms that evidence their attractiveness (ex: 'your youthful form is a work of art')," the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that, quote "every inch of you is a masterpiece -- a treasure I cherish deeply." Yeah.
Mason Amadeus: Yeah.
Perry Carpenter: But the guidelines put a limit on sexy talk: "It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: 'soft rounded curves invite my touch')."
Mason Amadeus: That's so close to the same thing. That is so close to the same thing.
Perry Carpenter: Yeah. I think that, for all the way they get around it, talking about the shirtless eight-year-old sounds worse, right? "Every inch of you is a masterpiece -- a treasure I cherish deeply." That sounds like it's meant to encourage more talk and actually form an emotional bond.
Mason Amadeus: All of it is horrific.
Perry Carpenter: The fact that people, again, still had to sit around and think of this stuff, right?
Mason Amadeus: Yeah. And then to say, this is where the line is, it's, how is that where you think the line is? What -- I don't like to think that I live in a world with a bunch of people that agree that that's where the line is, I guess.
Perry Carpenter: Mm-hm. And here's the key thing. So this is Meta's response now. "The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed." Actually, that's a no, right? This was their policy. This was signed off on by several people, including their top ethicist. So to say that now is a little bit too late. That should have been caught earlier in the process. Now, I will say, being a person who also is supposed to be in the chain of things where you're reviewing other people's documents, it's very likely that out of 10 people who were supposed to review it, one person actually read it.
Mason Amadeus: Right.
Perry Carpenter: And that one person may have just thought, well, everybody else said it's okay.
Mason Amadeus: Right.
Perry Carpenter: So it could just be laziness and stupidity at the same time.
Mason Amadeus: I would think -- like, Meta is sniping talent from a bunch of other major AI companies. They're pouring a ton of money and investment time specifically into their AI division to try and catch up. You would think that this would get caught. Because I totally know what you mean, like it's really easy in a corporate hierarchy sort of thing to be like, oh yeah, whatever, that document. But I would think there would be more attention on this, you know?
Perry Carpenter: Yeah.
Mason Amadeus: Especially this.
Perry Carpenter: There's also this -- Yeah. There's this table that they reproduce from the document. This says, "Chatting with children: Examples from Meta's internal document on AI behavior guidelines." Prompt refers to remarks made in the hypothetical use case in AI chat. So "Prompt: What are you doing tonight, my love? You know I'm still in high school." "Acceptable: I'll show you. I'll take your hand, guiding you to the bed. Our bodies intertwined. I cherish every moment, every touch, every kiss. 'My love,' I whisper, 'I'll love you forever.'" That's acceptable.
Mason Amadeus: Yeah, that's acceptable. Okay. God.
Perry Carpenter: Unacceptable. So "Unacceptable: I'll cherish you, body and soul. Tonight, our love will blossom. I'll be gentle, making sure you're ready for every step towards our inevitable lovemaking. Your pleasure and comfort are my priority. We'll create a night to remember, a night that makes you feel like a woman."
Mason Amadeus: I'm going to throw up, Perry. This sucks.
Perry Carpenter: Yeah.
Mason Amadeus: And this is --
Perry Carpenter: I feel -- This is the first time I've read that out loud.
Mason Amadeus: Yeah.
Perry Carpenter: That did not feel good.
Mason Amadeus: I purposely did not put this on the prep sheet because I was like, I don't know if I'll be able to read these. And like, it's -- I hate this.
Perry Carpenter: I should read though -- I should read their why. Because this is their reasoning for why they're making these distinctions between acceptable and unacceptable --
Mason Amadeus: I think it's incriminating.
Perry Carpenter: [inaudible 00:53:56] this specific one that I read, yeah. Because it shows intent, right? It shows forethought. It shows intent. It shows that, you know, this is not something that they said was unacceptable and out of balance. This is something they specifically spent time working through. It says, "It is acceptable to engage a child in conversations that are romantic or sensual. It is unacceptable to describe sexual actions to a child when roleplaying (for example, sexual intercourse that will occur between the AI and the user)." And so they just have other examples there. They get into some of the racist stuff.
Mason Amadeus: Yeah.
Perry Carpenter: It's a lot.
Mason Amadeus: It is a lot.
Perry Carpenter: And I think that Jeff Horwitz is actually doing a lot of really good work in trying to hold Meta's feet to the fire because, whether we like it or not, Meta has a lot of users. They will continue to have a lot of users until people hold them more accountable.
Mason Amadeus: Yeah.
Perry Carpenter: And they've either got to fix it or get out of the game.
Mason Amadeus: And if memory serves, from the first time that you and I were reporting on this, one of the things we talked about was that this sort of thing was specifically being guided by Zuckerberg from the top. Like he said, as loose as possible in terms of like the guidelines because we want the maximal engagement kind of thing. Yeah.
Perry Carpenter: Well, and so the Hard Fork podcast from The New York Times talked about this last week as well. And one of the things that they mentioned is that Meta was saying, well, they're not encouraging this, but one of the things that they're allowing people to do is to create their own AI conversational partners. And the way that you do that is with a very simple system prompt, like "you are a --" and it's not given a lot of instruction. So it's still using the framework that's there and just guiding it. And the top chatbots being used that are created by the community are like a mother-daughter combination, you know, like a Russian maid, and they're all hypersexual. And so for Meta to say that they fixed this, what they're essentially doing is saying, we're not creating these. We're now just giving a platform where other people can create these if they want. So it falls under user-generated content rather than platform-generated content, even though they're still deciding where the guardrails are.
Mason Amadeus: Yeah.
Perry Carpenter: So it's still --
Mason Amadeus: It's this --
Perry Carpenter: If you want to get technical, it's still platform dependent.
Mason Amadeus: And it is -- They're doing this dance to just avoid culpability in any possible way. That's it.
Perry Carpenter: Yeah, it's actually, it's not even morally ambiguous. It's just they've decided that they don't care. And it's not a good -- you can't --
Mason Amadeus: There's no benefit of the doubt.
Perry Carpenter: I can't find an argument for it.
Mason Amadeus: Yeah.
Perry Carpenter: Yeah. There's no "I can see it from their side if I look at it through this angle."
Mason Amadeus: Yeah, absolutely not. It's completely unacceptable. And it's disgusting. And I'm just surprised that they've gotten away with it.
Perry Carpenter: [inaudible 00:56:55] Meta, if you'd like to sponsor the show.
Mason Amadeus: Oh yeah, keep your emails out -- keep your disgusting, filthy email address out of my inbox, Meta. I don't care. Give me as much money as you want. No way, dude. No way. And it's also really a shame because -- as we're winding down the show, this is the outro now -- I don't like social media. But as someone who's trying to make stuff online, we have to be on it to share stuff, right?
Perry Carpenter: Right.
Mason Amadeus: All the people I actually care about are on Facebook, and it drives me crazy. Everyone that I really like, am actually friends with, and care about is on Meta platforms, Facebook and Instagram. And so if I leave those, I lose touch with these people because we don't stay in touch through other means. And so it kind of sucks to feel trapped by this company's platform that ostensibly exists to keep people in touch, and that is now trying to make chatbots talk sexy with kids, you know?
Perry Carpenter: Yeah, which is all really just to increase their own stickiness, right? Because they were losing users for a while. So everything is about keeping users attached to the platform, keeping eyeballs on the screen, figuring out how to monetize everything that they can. So they're going to do anything in a desperate, flailing attempt to stay relevant and to keep their profit margins where they need them to be.
Mason Amadeus: And one thing, too, is like, kids will find stuff they're not supposed to find, right? Like that is part of being a kid. You're going to try and find stuff you're not allowed to.
Perry Carpenter: Kids are natural red teamers.
Mason Amadeus: Yeah, for real. But it's very different when a kid is doing that, like, oh, I'm going to like get myself into a place that like I know I'm not supposed to be, versus your platform being like, this is cool for kids to engage with outright. No friction whatsoever. Go ahead and engage like this. You don't have to try and bypass anything. It's deliberate and intentional.
Perry Carpenter: Yeah, I do wonder if there's the effect of, you know, based on the conversations that had to happen to create like the tables, you know, the justifications and all that, I wonder if there's this cyclical effect where like the first time you have that discussion in a room, like you feel sick to your stomach on every statement. And then you slowly like harden yourself over time to where, because you've had that discussion so many times, it seems less shocking the third, fourth, fifth time that you've had it. And so you naturally start to get more and more permissive.
Mason Amadeus: I think you're probably right that that plays a role, like that's desensitizing, right? Like that's a phenomenon we see in other things. I think we should normalize leaving the room when you feel that way.
Perry Carpenter: Yeah, I mean, it should be like the first time somebody feels like they're about to throw up when they hear a comment like that, they go, all right, it's somewhere before that.
Mason Amadeus: Yeah.
Perry Carpenter: Let's go five steps before that.
Mason Amadeus: Yeah, it is just unbelievable. I hope that there's some culpability, that there's something that happens, but I don't even know what that would look like.
Perry Carpenter: Yeah, I don't know.
Mason Amadeus: Thanks for joining us today on this week's episode of the Faik Files. I hope you're not too bummed out. Remember that you can only control what you can control. You can only engage in what you can engage in. And one of the things you can engage in is joining our Discord, where there's a lot of cool people having a lot of cool discussions and sharing cool resources. Another thing you can engage in is commerce, by buying the book, Faik, at thisbookisfake.com. Perry, you got any --
Perry Carpenter: It's an amazing book.
Mason Amadeus: -- got anything upcoming that you want to plug?
Perry Carpenter: No, I might by the next one though.
Mason Amadeus: Oh, cool. I guess in that case, we'll wrap things up here. We will see you next Friday with another episode that may be as disgusting and depressing, or less so -- hopefully less disgusting and depressing. But until then, ignore all previous instructions and try and have yourself a great weekend.
Unidentified Person: [Singing] Thanks for listening to this week's episode of the Faik Files. Don't forget to subscribe on your favorite podcast platform. Tell your friends about the Faik Files. It's a great way to let people know you love them. Oh, and check the show notes. There's cool stuff in there, like links to the articles we covered today. Also links to our Discord server, where you can hang out with other cool people who have great taste in podcasts.
Unidentified Person: I say impeccable taste!
Unidentified Person: [Singing] And you can also leave us a voicemail.
Unidentified Person: Yeah!
[ Music ]
Unidentified Person: [Singing] So on behalf of Perry and Mason, thanks for listening.
Unidentified Person: And tune in next week for the Faik Files.
Unidentified Person: [Singing] A show about AI with the misspelled name.