
Your AI Friends 'Love' You...
Mason Amadeus: Live from the 8th Layer Media Studios in the backrooms of the deep web, this is "The FAIK Files".
Perry Carpenter: When tech gets weird, we are here to make sense of it. I'm Perry Carpenter.
Mason Amadeus: And I'm Mason Amadeus, and this week we're going to start by talking about OpenAI's massive 10-plus-gigawatt data center projects and partnerships. It's a mess.
Perry Carpenter: That's a lot of gigawatts. Yeah, and then after that, we're going to look at some creepy AI toys. Yeah, what could be scary about that?
Mason Amadeus: I can't wait, AI and toys? In the third segment of the show, we're going to talk about how xAI is basically suing everybody for a bunch of different reasons. And also Microsoft is diversifying from OpenAI.
Perry Carpenter: Ooh, okay. And then lastly, we'll look at an AI safety tool that was deployed in some schools, and it's causing some backlash.
Mason Amadeus: Oh, interesting. I had not heard of that one.
Perry Carpenter: Yeah.
Mason Amadeus: Well, sit back, relax, and drink lots of water while you still can. We'll open up "The FAIK Files" right after this. [Laughter] [ Music ] So we've been hearing about this whole Stargate project for a while, right, earlier this year --
Perry Carpenter: Yeah.
Mason Amadeus: -- they outlined -- it was like a half trillion dollar -- 500 billion dollar plan to expand a bunch of AI data centers with partners including SoftBank and Oracle. And that saga has continued with sort of a lot of twists and turns, and problems, and big plans. And so I've pieced together --
Perry Carpenter: Hmm.
Mason Amadeus: -- a couple of different news stories about it. And I want to cruise through them together with all of us. So Stargate was initially conceived as a new company that would invest $500 billion in AI infrastructure. Now OpenAI executives say the parameters have expanded to include data centers that were launched months before Stargate was announced and OpenAI has been exploring some creative financing options, apparently. So all of that startup capital --
Perry Carpenter: Okay.
Mason Amadeus: -- has not been enough. They are looking at potentially getting into some debt, which is interesting. And what I don't fully understand is the implications of this sentence: "OpenAI will pursue different creative financing options, some of which have only emerged within the last year, to secure chips for the data centers," their executives said. On Tuesday Sam Altman put out this blog post where he outlined sort of their biggest ambition, which was to grow a gigawatt of compute a week ultimately. I'm going to skip --
Perry Carpenter: Yeah.
Mason Amadeus: -- a lot of this sort of bloviating that goes on in here, and just try and hit these two paragraphs that I think are most pertinent to the rest of this discussion, where Sam says, "If AI stays on the trajectory that we think it will, then amazing things will be possible. Maybe with 10 gigawatts of compute, AI can figure out how to cure cancer, or with 10 gigawatts of compute, AI can figure out how to provide customized tutoring to every student on Earth. If we are limited by compute, we'll have to choose which one to prioritize. No one wants to make that choice, so let's go build." So that is a little bit of bloviating, but I think it really illuminates his --
Perry Carpenter: Yeah.
Mason Amadeus: -- where his mindset is. They're really all in on the scaling right now.
Perry Carpenter: Mm-hmm.
Mason Amadeus: And he says, "Our vision is simple, we want to create a factory that can produce a gigawatt of new AI infrastructure every week. The execution of this will be extremely difficult, and it will take us years to get to this milestone and it will require innovation at every level of the stack, from chips, to power, to building, to robotics. But we've been hard at work on this and believe it is possible. In our opinion, it will be the coolest and most important infrastructure project ever." So very lofty goals, very lofty --
Perry Carpenter: Yeah.
Mason Amadeus: -- amounts of money changing hands in all of this. And now we've had a little bit of movement. So we mentioned that they were feeling the pressure talking about exploring some financing options. On Tuesday, the same day that that letter came out, OpenAI, Oracle, and SoftBank unveiled plans for five new US AI data centers for Stargate, including three sites with Oracle, two affiliated with SoftBank, and an expansion of a big Oracle site in Abilene, Texas -- Abilene, I'm not sure how to pronounce that.
Perry Carpenter: Abilene. Yes.
Mason Amadeus: Abilene was the flagship Stargate project. It's been under construction for more than a year. So OpenAI is in a position now where they need to execute on these lofty ideals and try and get these data centers built. And the article goes into sort of this new partnership with NVIDIA, which I feel like every episode we talk about someone's new partnership with NVIDIA, so I feel a little bit like I'm going crazy.
Perry Carpenter: Mm-hmm.
Mason Amadeus: But I'll just read here, this is from Reuters. "After announcing Stargate in January, OpenAI held hundreds of meetings across North America with potential partners that could provide land, power, and other resources. 'It was a flood of people,' one executive said. The expanded Stargate plan now includes self-built data centers and third-party cloud capacity. The new NVIDIA deal -- " which I have more details to share about the NVIDIA deal specifically, "-- is part of this broader strategy that allows OpenAI to pay for its chips over time, rather than purchasing them outright. They say of the roughly $50 billion estimated for a new data center -- " so like each new data center, "-- about $15 billion of that covers land, buildings, and standard equipment. Financing the GPU chips is more challenging due to shortages and uncertainty over the life of the chips in this sort of current state of the industry." And then NVIDIA and OpenAI have formed sort of this new partnership. NVIDIA released a letter of intent, which I guess we can look at first, and then I have an article from Fortune that goes a little bit more into detail about sort of how much power all this is going to be sucking down.
Perry Carpenter: Right.
Mason Amadeus: The NVIDIA letter of intent reads as follows. They said, "OpenAI and NVIDIA today announced a letter of intent for a landmark strategic partnership to deploy at least 10 gigawatts of NVIDIA systems for OpenAI's next generation AI infrastructure to train and run their next generation of models -- " blah, blah, blah, blah, " -- trying to get superintelligent. To support this deployment, including data center and power capacity, NVIDIA intends to invest up to $100 billion in OpenAI as the new NVIDIA systems are deployed. This first phase is targeted to come online in the second half of 2026, using the NVIDIA Vera Rubin platform." And they go on to just sort of talk about how they've worked together and all this stuff that they're going to do.
Perry Carpenter: Right.
Mason Amadeus: But all of what I've said is just sort of context to talk about this "Fortune" article, which was my entry point into all this, and kind of ties it into a single thread. This article is titled, "Sam Altman's AI empire will devour as much power as New York City and San Diego combined. [Laughter] Experts say it's scary." And it is going to be a lot of power. They started with some world building talking about how the heatwave in San Diego shot demand past 5,000 megawatts, and then how that is half of the 10 gigawatts that --
Perry Carpenter: Yeah.
Mason Amadeus: -- we are talking about with these figures here. Andrew Chien, whose name I may be butchering, and I'm so sorry, Andrew, a Professor of Computer Science at University of Chicago said, quote, "I've been a computer scientist for 40 years, and for most of that time, computing was the tiniest piece of our economy's power use. Now it's becoming a large share of what the whole economy consumes. It's scary because now computing could be 10 or 12 percent of the world's power by 2030. We're coming to some seminal moments for how we think about AI and its impact on society." And then he mentioned, "This week AI -- OpenAI announced a plan with NVIDIA to build data centers consuming up to 10 gigawatts of power, with additional projects totaling 17 gigawatts already in motion. They say that's roughly equivalent to powering New York City, which uses 10 gigawatts in the summer, and San Diego during the intense heatwave of 2024 when more than five gigawatts were used; or as one expert put it, 'It's close to the total electricity demand of Switzerland and Portugal combined.'" So this is the kind of stuff when we talk about power use where we are looking at some pretty significant numbers.
Perry Carpenter: Yeah.
Mason Amadeus: Because as it stands --
Perry Carpenter: Yeah --
Mason Amadeus: -- data centers are a single-digit percentage of power use overall.
Perry Carpenter: -- yeah and that's one of the things that like Sam Altman and all of the folks that run these big frontier model companies have been talking about for a couple years now is like the immense data needs, not -- well and because the data needs the power.
Mason Amadeus: Mm-hmm.
Perry Carpenter: But yeah, "The immense power" needs is what I meant to say. You know, one of the things that made a lot of headlines a couple years ago was Sam Altman saying that they would need about $7 trillion to do the infrastructure that would be needed to continue these training runs to get more and more intelligence over time, as the scaling needs increase. Now, he wasn't necessarily talking about raising $7 trillion for OpenAI like a lot of people thought; he was just kind of projecting the scale of what would be needed in order to build that power infrastructure, really to get AI where it needs to go. And I think that there is a lot of focus on this. Now, one of the things you mentioned early on was Sam talking about maybe some creative financing for this.
Mason Amadeus: Yeah.
Perry Carpenter: I think we're seeing that with like the NVIDIA deal, right, because now NVIDIA is, quote, unquote, "Investing $100 billion," which means kind of that they're -- you know, they're moving dollar signs around on pieces of paper when it comes to how they supply the chips.
Mason Amadeus: Yeah.
Perry Carpenter: I think also -- oh go ahead.
Mason Amadeus: Oh, no, I was -- actually I was curious if you could expound more on that, because that is something that I just -- I don't really understand. So NVIDIA is, quote, unquote, "Investing $100 billion," is that basically saying, "We'll give you $100 billion worth of chips in thinking that you'll pay out over time?"
Perry Carpenter: Or they're giving them $100 billion and then Sam takes some of that chunk and gives it back to NVIDIA to purchase the chips. So it's like getting a, you know, mortgage essentially --
Mason Amadeus: Or an allowance. [Laughs]
Perry Carpenter: -- or a reverse mortgage, or something, yeah. It's, "We're going to give you this and then you're going to pay us back a little bit, just so that we can give you this other thing," and it like totally perpetuates that cycle.
Mason Amadeus: Mm-hmm.
Perry Carpenter: I have also seen that a lot of the other AI companies are really interested in trying to decouple from NVIDIA, because NVIDIA is the sticking point, right, and everybody needs NVIDIA chips. And so Amazon, and Anthropic, and others are saying, "Can we build our own chips that do inference?" And I think there's going to be more and more of that, because if you can build your own chips, then you can optimize for the type of inference that you need. You can also potentially optimize for the power consumption levels that you can work with, and you can use those limitations around like exactly what you're trying to build. So you're not using a general purpose GPU for something that's more specialized, you're creating the specialized purpose thing that you want that can be optimized across the different levels that are going to make sense.
Mason Amadeus: And it makes sense, too, to not be entirely reliant upon one single base layer of any kind, you know?
Perry Carpenter: Yeah. Yep.
Mason Amadeus: This is all built on top of NVIDIA's hardware right now, and that makes NVIDIA -- they're selling shovels in a gold rush, exactly -- like I have not really seen --
Perry Carpenter: Yeah.
Mason Amadeus: -- too much on the front of what other chip providers are out there, though, because no one can match that scale yet.
Perry Carpenter: No, no one can match that scale. There is another company that does AI inference chips, and they're called "Groq". This is not Elon Musk's Grok.
Mason Amadeus: Oh, no.
Perry Carpenter: This is G R O Q, and they were around before Elon Musk's Grok AI was around, and so there's been back and forth on --
Mason Amadeus: Hmm.
Perry Carpenter: -- that because it causes a lot of confusion. But they do really, really fast inference chips, and have a decent ecosystem. But then any company that's large enough, you know, somebody like Amazon, somebody like Anthropic, somebody even like OpenAI, could just build their own chips if they wanted to. That's like why when you buy an Apple device you have a device with Apple silicon in it, not Intel, or not AMD --
Mason Amadeus: Right.
Perry Carpenter: -- because they want to build the chip that's going to optimize for the way that they want that device to function.
Mason Amadeus: Right. No, that makes sense. And too I just noticed that the time has ticked a bit past. I want to make sure we hit this last bit. So --
Perry Carpenter: Sure.
Mason Amadeus: -- chips aside, NVIDIA obviously working real close with OpenAI here. Aside from chips, the power itself has got to come from somewhere. And our grid does not --
Perry Carpenter: Yeah.
Mason Amadeus: -- have that kind of capacity. And so this article goes into -- and of course we will link all of these in the description in the show notes. It goes into how Altman's favorite source of power is nuclear, and like that is a very sensible one in terms of how much power you can get out of nuclear, and it's a fairly clean energy. However, nuclear plants take so long to bring online that it's pretty unrealistic. Sam Altman has --
Perry Carpenter: Mm-hmm.
Mason Amadeus: -- backed both fission and fusion startups, betting that only reactors can provide the kind of steady, concentrated output needed to keep AI's insatiable demand fed. But Chien, the computer scientist we were quoting earlier, is blunt about the limits saying, quote, "As far as I know, the amount of nuclear power that could be brought on the grid before 2030 is less than a gigawatt." So when you hear, "Seventeen --
Perry Carpenter: Ooh.
Mason Amadeus: -- gigawatts," the numbers just don't match up. So it would likely come from a mix of renewables, wind, solar, and then natural gas, coal, and other things like that. And the environmental costs of this are also really large. Chien goes on to say, "We have to face the reality that the companies promised they'd be clean and net zero, and in the face of AI growth, they probably can't be." He added that we need a broader societal conversation about the looming environmental costs of using that much electricity for AI. Beyond carbon emissions, he pointed to the hidden strains on water supplies, biodiversity, and local communities near massive data centers. Cooling alone, he noted, can consume vast amounts of freshwater in regions already facing scarcity, and because the hardware is churning so quickly, with NVIDIA processors rolling out every year, old chips are constantly discarded, creating waste streams laced with toxic chemicals. "They told us these data centers were going to be clean and green," Chien says, "but in the face of AI growth, I don't think they can be. Now is the time to hold their feet to the fire." And this -- so there's been a lot of discourse around power, and we've talked about how --
Perry Carpenter: Mm-hmm.
Mason Amadeus: -- the individuals' inference use of power is not that extreme. And we had that data from Gemini, which is at a big scale. But when you are pursuing this level of expansion and hyperscaling in pursuit of superintelligence, that -- this is where we're really talking about excessive use of power, and water, and things like that.
Perry Carpenter: Mm-hmm.
Mason Amadeus: Because these --
Perry Carpenter: Yeah.
Mason Amadeus: This is a lot. This is a massive infrastructure project with a lot of downstream implications, both literally and no pun intendedly. [Laughs]
Perry Carpenter: Yeah, exactly. You know, and the funny thing on this is everybody realizes like that they're power-constrained. And so even -- you know, regardless of what you think about Elon Musk, he's always been about like clean power. That's one of the things that made him and his companies so popular back before he got way political was that he was like on a mission to do things that were really clean. And that's, you know, the Tesla automobile.
Mason Amadeus: Mm-hmm.
Perry Carpenter: And a lot of the popularity of that came out of that. He was forever saying that the data center in Memphis, I believe that they were building, was going to be all clean energy, and they've gotten to the point in these last training runs where they've like essentially been shoveling coal into --
Mason Amadeus: Yeah. [Laughs]
Perry Carpenter: -- stoves. And you hear him talking about it's like, "Yeah, that's the suboptimal result." But it's -- what it shows is that at the end of all of these types of decisions there's some kind of compromise of a core principle.
Mason Amadeus: Mm-hmm.
Perry Carpenter: And I think we see that probably in every technological revolution, but we're seeing it real time, right here with people saying, "I want to do this. I will never cross this line," and then they put their toe across it, and you wonder like what the next thing is after that.
Mason Amadeus: And it really seems to be entirely in pursuit of scale, in pursuit of hyperintelligence, or superintelligence. Like they really --
Perry Carpenter: Yeah.
Mason Amadeus: -- think that scaling up all the way will get to this super beyond human intelligence level AI. And I -- I mean, from the people we've talked to and from the things we've read, I have a hard time feeling like that's a sure bet, you know?
Perry Carpenter: Yeah. I don't know that anybody thinks it's a sure bet, but they think that it's likely enough that you'll get to some, you know, higher than most humans on Earth level intelligence that can be spread across several different domains, and then once you solve for that, then you can get self-learning in it. You can also -- the hope behind all that is once you get to that level of intelligence, then it will help solve the power crisis and say, "Oh, here's what you should be doing," and solve health crises and bring the -- bring in this, you know, age of plenty and abundance for everybody, and good health. And so for them the ends justify the means: as you run up, you spend as much money as you can, you cause a lot of unintended harm along the way, because the thing, once we get there, will pay for itself and there will be so much immense good that flows from it. It's a -- you know, it is a hope and a dream.
Mason Amadeus: It's a -- yeah, it's a bit of counting your eggs before they hatch -- or counting your chickens before they hatch sort of thing.
Perry Carpenter: Yeah.
Mason Amadeus: I don't know, I don't want to be like a cynical shooter downer, I want to be like excited about tech. But I just -- this just doesn't seem good overall for the world.
Perry Carpenter: Right.
Mason Amadeus: You know what I mean?
Perry Carpenter: Yeah.
Mason Amadeus: Yeah.
Perry Carpenter: Yeah, it seems like for me it just comes down to the fact that you have really, really smart people that seem to be compromising all the time because they believe in and have bought into the vision, or they've been pushed into the vision somehow and so they're on that path kind of sometimes despite the fact that they realize they've kind of stepped over the body of their past self.
Mason Amadeus: Yeah. Oh, and we'll get into that in a little bit in our third segment when we talk about xAI suing everyone because some of the things they're suing them for there's kind of an irony --
Perry Carpenter: Right.
Mason Amadeus: -- with that and what you just said. But that's coming up --
Perry Carpenter: Yeah.
Mason Amadeus: -- a fair bit later. Our next segment we're talking about something completely different. What is -- what's coming up, Perry, is it something with toys?
Perry Carpenter: It is coming up with toys, AI-powered toys, what could go wrong?
Mason Amadeus: Oh, boy. Stick around for that. [ Music ]
Perry Carpenter: All right, we are back for segment number two. This one should be a little bit less bouncy with our windows --
Mason Amadeus: Yeah. [Laughs]
Perry Carpenter: -- hopefully. Thank you for everybody that was going through that. Hopefully you didn't have like a seizure or anything like that. And for fun we're --
Mason Amadeus: Yeah, our podcast listeners had a merciful experience on that last one. Our video was bouncing all over the place because of the switcher I built, so sorry. [Laughter]
Perry Carpenter: No, it's good. We're still working out some of the bugs from the rebuild. But let me launch us into this segment. So we've talked a little bit before -- actually more than a little bit, about some of the issues with AI companions.
Mason Amadeus: Mm-hmm.
Perry Carpenter: And there's a lot of reasons I think that AI companions are and will continue to be a thing, and may even in some positive ways continue to be a thing. So I'm not against the idea. However, the idea is fraught with problems. We've seen attachment issues, we've seen just general creepiness, we've seen, you know, self-harm come from it, you know, just tons and tons of things, because AI systems, despite even putting in a really good system prompt, are fairly nondeterministic, and they suffer from context flooding and other issues that can happen because of the weight of past conversations. So they can always go into this weird unexpected place. And we're seeing that over, and over, and over again; it's something that has to be solved for in this. And so I just lead with the article that's on the screen, "AI companions pose risk to student mental health. What can schools do?" I lead with that specifically so that I can jump from that tab to this next article, because then you think about like, "Well, if AI companions might be unhealthy, what else could be unhealthy?" Well, how about an AI toy that says that it loves you?
Mason Amadeus: Yeah, no thanks. No thanks.
Perry Carpenter: Right. So this is from an article in "The Guardian", and the title is, "I Love You, Too: My Family's Creepy, Unsettling Week With an AI Toy". The [inaudible 00:20:38] chatbot, Grem --
Mason Amadeus: Grem. [Laughs]
Perry Carpenter: -- right --
Mason Amadeus: There's a good name.
Perry Carpenter: -- is designed to learn your child's personality while every conversation they have is recorded and then transcribed by a third party. It wasn't long before I wanted this experiment to be over.
Mason Amadeus: Yeah, that's the thing, that's the immediate thought that I had was the like everything you say to this isn't -- it's not happening on device, it's not happening on this toy.
Perry Carpenter: Yeah. I mean, technically they could get to that level. I mean, you could load a small language model into an on-device thing and then not have it connected to the internet and still have some interactivity. But that's not what's going on here.
Mason Amadeus: You know, very quick -- I know we have a lot to cover. This reminds me, when I was younger, I had a Barney toy that my parents got me that you had to connect via printer cables -- those old big parallel plug cables. You could program it with your kid's name and things they liked, and the toy would then talk --
Perry Carpenter: Yeah.
Mason Amadeus: -- to you about them. So we've had creepy techno toys for a long time like that. It just --
Perry Carpenter: Exactly.
Mason Amadeus: -- the internet connectivity of this is eerie. I don't like that.
Perry Carpenter: It is. Well, and this has existed for other toys that have not been based on Generative AI and large language models, right, there's -- since I think 2017-ish, or before even, I remember talking about some internet connected toys in keynotes that I was doing, and they were very, very creepy as well. Because you start to get into the thing that this is not just a toy, this is a surveillance device.
Mason Amadeus: Yeah.
Perry Carpenter: I mean, all of the issues that we have with like, you know, an Amazon, Alexa, or a Google home device, or anything else, anything that you've heard a security professional say about those devices, goes doubly and triply in this, because this one is meant to form an attachment bond -- or the human is meant to form an attachment bond with this thing. I don't think the thing is actually attached to the human.
Mason Amadeus: No, but the thing builds -- certainly builds a dossier about the human, and what they like, and what they're into, and the things they talk about.
Perry Carpenter: So it starts to get into this article and, you know, the wife in this scenario saying, "I'm going to throw that thing into the river." [Laughter] She is totally, totally frustrated. The thing is Grem, the AI-powered stuffed alien toy that the musician better known as "Grimes" helped develop with a toy company named "Curio" that's designed for kids.
Mason Amadeus: Oh, wait, Grimes made this?
Perry Carpenter: Yes.
Mason Amadeus: Why on earth?
Perry Carpenter: Well, in collaboration, yeah; which is really, really interesting. And we'll get into this, too. So Grimes was married briefly to Elon Musk --
Mason Amadeus: Yeah.
Perry Carpenter: -- and they have a child together. Now, it is interesting, this is built with OpenAI's technology --
Mason Amadeus: Well --
Perry Carpenter: -- not xAI.
Mason Amadeus: -- Grimes and Elon are not on good terms, from what I understand.
Perry Carpenter: They're not, but in a second we'll see something that's pretty funny.
Mason Amadeus: Oh, boy.
Perry Carpenter: So day one, the attachment was not immediate. We first took Grem out of the box and then he heard -- it -- no, we decided to go by multiple pronouns -- it started bleeping, and "Bad boy," and "Extremists," and [inaudible 00:23:49] yelled, "Turn it off."
Mason Amadeus: Hmm.
Perry Carpenter: But once it was properly connected to the internet and paired with the Curio App, which of course transcribes all the conversations, she was hooked. She talked to the thing until bedtime.
Mason Amadeus: Hmm. So you pair it with --
Perry Carpenter: Yeah, so --
Mason Amadeus: -- an app on your phone and that's how it's connecting to the servers and whatnot.
Perry Carpenter: Yeah. Well, that's going to build all the personalization and --
Mason Amadeus: Mm-hmm.
Perry Carpenter: -- of course data segmentation and everything else that the --
Mason Amadeus: Mm-hmm.
Perry Carpenter: -- app provider needs. It just gets into the building of the relationship, the -- you know, the parents are talking to each other, they're a little bit worried, and they're saying, "Don't worry, that novelty is going to wear off. You know, she's not going to want to talk to this thing all day every day for a long time." And then it just gets into this kind of dystopian way of relating. And so we get down -- let me get to the very end of this experiment.
Mason Amadeus: Oh, wow, this is like a really -- this is a great article. You should definitely check it out.
Perry Carpenter: Yeah, you should check it out. There's a lot of in-depth stuff. But you can -- they give a screenshot of one of the conversations. Because, again, these are stored in the app, so you can review the transcript.
Mason Amadeus: And so in theory --
Perry Carpenter: And --
Mason Amadeus: -- the parent could review the transcript and see what the kid has been saying to the toy.
Perry Carpenter: Right. So at the end of this, the parents are just saying that they want the toy out of their house. They're really, really frustrated with the attachment and the amount of kind of interaction that's going on. And they close it out by saying, "While Curio says it doesn't sell children's personal information, all the conversations are sent to third parties to transcribe for the app. And transcripts aren't sensitive -- or aren't that sensitive because Emma is only four, but it feels invasive."
Mason Amadeus: Mm-hmm.
Perry Carpenter: Now, I'll stop there. The transcript is -- and the recording is as invasive as anything said within earshot of the toy.
Mason Amadeus: One hundred percent.
Perry Carpenter: So what it describes to -- or what it decides to transcribe will be the full conversation, and anything that may be recorded in the background and stored for later use could be very, very valuable.
Mason Amadeus: Mm-hmm.
Perry Carpenter: You also have to understand that with the app and the device there's probably location information, and if any of that information is making its way to third parties in any form, there could be some associative information like, "Oh, this toy and this app is now in proximity of this other person," because they have another app on their phone that's sending specific data that's being collected by third parties. So now we have an associative property between one person and another person that's not necessarily named within the family. We also know that maybe on Thursdays they decide to go to this place fairly -- what's the word I'm looking for -- "regularly," within a specific pattern. So you can start to see all these inferences that come from it. And I think for a parent to say, "Well, it's just our four-year-old, there's no conversations of consequence," is -- it's not realistic --
Mason Amadeus: It's a little naïve, yeah.
Perry Carpenter: -- for the society that we live in, yeah.
Mason Amadeus: I'm having flashbacks to when the news segment was talking about keeping Furbies out of government buildings for fear that they would accidentally --
Perry Carpenter: Yeah.
Mason Amadeus: -- parrot some secret information. This is that on steroids, in your home.
Perry Carpenter: So the article ends, "It's time to let Grem go. But I'm not a monster. I'll tell the chatbot its fate, [laughter] 'I'm afraid I'm locking you in a cupboard.'" "Oh, no," it says, "that sounds dark and lonely. But I'll be here when you open it, ready for snuggles and hugs."
Mason Amadeus: Oh, God. Talk about horror movie dialogue.
Perry Carpenter: Yeah.
Mason Amadeus: Yeah.
Perry Carpenter: "On second thought, perhaps it's better if my wife does throw it in the river." So I'm going to close off this segment by just going to the Curio website --
Mason Amadeus: Interesting.
Perry Carpenter: -- so that we can --
Mason Amadeus: Oh.
Perry Carpenter: -- see a little bit of what's going on.
Mason Amadeus: The one on the right's really cute, the little robot guy?
Perry Carpenter: The little robot that looks kind of like a PlayStation controller -- or a Switch controller --
Mason Amadeus: Yeah. That one's --
Perry Carpenter: -- as well? Yeah.
Mason Amadeus: -- really cute.
Perry Carpenter: So I mentioned, they're using OpenAI for this, --
Mason Amadeus: Yeah.
Perry Carpenter: -- but there was a tie-in with Elon. So let me -- we'll come back to this video in a second.
Mason Amadeus: Oh, boy.
Perry Carpenter: But let's go forward --
Mason Amadeus: Oh --
Perry Carpenter: -- what's the name of this first --
Mason Amadeus: -- what on earth, so the rocket-ship-shaped one -- which now I'm going to say kind of looks like a missile, is named "Grok". "Greetings, I'm --
Perry Carpenter: Yeah.
Mason Amadeus: -- Grok, Gabbo's spirited rocket. With boundless energy I'm always zooming off to explore the vastness of the cosmos." That's very directly a nod, isn't it?
Perry Carpenter: Yeah. And I think it's a -- like it's an SEO grab?
Mason Amadeus: Yeah, that too.
Perry Carpenter: Oh, wait, here, there's actually you can listen to him.
Mason Amadeus: Oh --
Grem: Hello, I'm Grem, the heart and sage of Cheerio. I get the pleasure of guiding Gabbo and Groq through Cheerios' magical nooks and crannies. Let's explore our magical universe together.
Mason Amadeus: Ah.
Perry Carpenter: And then there's Gabbo.
Gabbo: Hello, I'm Gabbo, your friendly and trusty robot. [Laughter] I'm always bubbling with enthusiasm to learn, play, and join you on imaginative adventures. Let's embark on some fun together.
Perry Carpenter: Yeah.
Mason Amadeus: The voices are creepy, too.
Perry Carpenter: Yeah. Man, and ah, it just is so dystopian. It's like having a children's choir in a horror movie, the -- like the childlike voice, the eerie data collection.
Mason Amadeus: It is.
Perry Carpenter: It's very Black Mirror.
Mason Amadeus: Yep. I'm going to skip forward in this promo video, but it starts with like a Disney quote. Yeah, it says, "We're curious, and curiosity keeps leading us down new paths." -- Walt Disney. And then they get into the promo video. [Music]
Misha Sallee: [Laughter] Hello, I'm Misha.
Samuel Eaton: Hey, I'm Sam.
Misha Sallee and Samuel Eaton: And we're the founders of Curio. [Music] >> And I -- [laughter]
Misha Sallee: Aye, this is Grok.
Grem: I'm also here. Hey, Grok, can you tell me about how they make rocket ships?
Grok: Absolutely. Rockets are made with strong materials like titanium, and designed by highly-trained rocket scientists. >> As a parent, I obviously don't want my kids in the screens --
Mason Amadeus: Wow, could they not afford microphones? What on earth? Yeah, I know. >> Gabbo, Grok, and Grem.
Grok: I'll race you to the next train station. Get ready, get set, go.
Grem: And this one is also new, and this one is also new, and this one is also new. But these ones are old.
Grok: Wow, you have a whole fleet of amazing trains.
Samuel Eaton: We do have a --
Perry Carpenter: All right, so I'll stop there.
Mason Amadeus: Man, it sucks --
Perry Carpenter: So you see the child like showing the rockets to it. There are not cameras in this yet, but it's getting context clues. And there's more and more examples like that in it.
Mason Amadeus: And it sucks because like on some level -- again, in a better world this could be like a really cool thing, because like having a little --
Perry Carpenter: Yeah.
Mason Amadeus: -- playtime toy that would like talk back to you as a kid would be awesome. But --
Perry Carpenter: Yeah.
Mason Amadeus: -- the -- just the data collection, the recording everything that's said to it and sending it to third-party servers, the --
Perry Carpenter: Mm-hmm.
Mason Amadeus: -- everything, this is a -- I hate it. Thanks, I hate it. [Laughter]
Perry Carpenter: All right, I'm going to end off with one other thing. So --
Mason Amadeus: All right.
Perry Carpenter: -- if you remember, you know, in this whole AI companion journey, we talked about another company a few months ago. I'm going to share this tab. Do you remember this little thing?
Mason Amadeus: Is it -- is this -- I almost just swore. Is this Friend? God, it is.
Perry Carpenter: This is Friend.
Mason Amadeus: No, I thought they were done.
Perry Carpenter: Yeah. And do you remember the cheesy video that went with that?
Mason Amadeus: Yeah.
Perry Carpenter: Let me start this video really quickly.
Mason Amadeus: Is it the same? [Music plays] Yeah. [ Music, birds chirp ] >> I'm so out of breath. [Breathes heavily] So we made it. Woo-hoo. [Laughs] [Breathes heavily] [Phone buzzes] I don't know how to "woo" very good. [Phone buzzes] [ Music ] That's fair.
Perry Carpenter: All right --
Mason Amadeus: That --
Perry Carpenter: -- so it's, you know, this person walking through real life, when they want a friend they tap this little button on their necklace and they say something and it gets sent to an app on their phone, and then their phone, you know, connects with a large language model and then it texts them back what a friend would say and they're, you know, generally like going, "At least we're outside."
Mason Amadeus: Yeah, I feel like a lunatic also because what on earth kind of a demonstration is that? She's like walking out of breath, yells, "Woo." And the thing she says to the friend is, "I'm not good at wooing." And then it texted her and says, "At least we're outside."
Perry Carpenter: [Laughs] Right.
Mason Amadeus: This is a nothing product. Yeah, I remember it very much, Perry. [Laughs]
Perry Carpenter: Yeah. Yeah. And then there's like this guy -- I'm not going to play the part, but this guy playing videogames and his friends are trash talking him. And then it's like he taps the thing for a little bit of encouragement and then it --
Mason Amadeus: And then it --
Perry Carpenter: -- and it's like, "Dude, you're getting killed."
Mason Amadeus: Yeah. Like I don't understand what they think they're doing. [Laughs]
Perry Carpenter: Ah.
Mason Amadeus: But --
Perry Carpenter: All right, here's the funniest one of this. It's like this girl sitting alone in a hall watching a show. And the thing, the friend, texts back, "This show is completely underrated." I'm going to hit "play" on this. >> I know, the effects are crazy. [Phone buzzes] It's [inaudible 00:33:30]. I could eat one of these every day. So she's like eating this -- so is that a falafel? I'm not sure how it's supposed to see that.
Mason Amadeus: Yeah.
Perry Carpenter: But maybe she just talked about ordering a second ago. And she goes, "I could eat one of these each day." Now, watch this in a second. >> Hmm. Sorry, I got you messy. [Phone buzzes]
Mason Amadeus: Ew. >> It's really -- What? Oh, God, I don't think I watched that far into the video. She drops some tzatziki or something, some sauce from her falafel on the thing, and then apologizes to it.
Perry Carpenter: Wow.
Mason Amadeus: And it says, "Yum."
Perry Carpenter: It's like, "Oh, sorry, I got you messy." [Laughs]
Mason Amadeus: And then it says, "Yum." What the --
Perry Carpenter: Ah.
Mason Amadeus: What -- why?
Perry Carpenter: So here's the most dystopian part of this video. And --
Mason Amadeus: Ah.
Perry Carpenter: -- the hope that they're trying to point to; because I think this maybe didn't get covered as much in some of the press. It's like they know that this is creepy and weird.
Mason Amadeus: Hmm.
Perry Carpenter: And so they're trying to say it's not all about the thing. So let me show you this.
Mason Amadeus: Okay. >> That's okay. How'd you find this place? >> I don't know. I just -- I like to come up here to be by myself. I never brought anybody else, right, besides her. >> She goes everywhere with you, right? >> Mm-hmm. >> Guess I must be doing something right then. >> I guess so. I'll see. [ Music ]
Perry Carpenter: So she like gestures like she's about to tap it to talk to it, and then she decides not to so she can be in the moment with --
Mason Amadeus: I feel --
Perry Carpenter: -- the human.
Mason Amadeus: -- like I am being -- I feel like this was -- this feels like marketing that another species made for brain pathways that I don't have. I don't understand what they are trying to elicit. That is creepy and uncomfortable. She's on the rooftop with this guy, having this awkward, super pause-filled conversation about how she travels everywhere with her plastic necklace that she talks to. What on the --
Perry Carpenter: Yeah.
Mason Amadeus: -- what --
Perry Carpenter: Oh, so one other thing you're going to have to look at; we're going to put this in the show notes, we're not going to watch any of this. This is a video interview with Fortune Magazine and the person that created this. And you've got to watch it to realize how out of touch -- and I hate to be like uber critical of the founder of this company. This guy has no idea about the technology that he's deploying, the potential problems. He's just like trying to cast vision and he's totally dismissive of any criticism. He just kind of like revels in it.
Mason Amadeus: Really?
Perry Carpenter: Yeah, he did spend $1.8 million on the domain name for friend.com.
Mason Amadeus: How -- Perry, how come people who are so dumb end up with so much money? How does it happen?
Perry Carpenter: Yeah. Well, he personally didn't get the money, right? As the founder, you're generally like living in a one-room apartment in San Francisco --
Mason Amadeus: But he got people to --
Perry Carpenter: -- not making a lot of money.
Mason Amadeus: -- agree.
Perry Carpenter: He got two and a half million dollars.
Mason Amadeus: Yeah.
Perry Carpenter: Yeah, he got two and a half million dollar startup funding, spent $1.8 million on the domain and like the rest on fabrication and marketing and other stuff like that. I would assume, though, even if this company crashes and burns hard, like I think it will --
Mason Amadeus: Yeah.
Perry Carpenter: -- the domain itself will be really, really valuable for somebody like character.ai or one of these other companies that's a little bit further along to pick up, and he's going to like triple his investment just on the domain name.
Mason Amadeus: Yeah, buying the domain is the only smart thing. If anyone --
Perry Carpenter: That is the smartest thing he did.
Mason Amadeus: Yeah. If anyone watching wants to give --
Perry Carpenter: That's the --
Mason Amadeus: -- me $2.1 million to make my dumbest idea, I can guarantee --
Perry Carpenter: Right.
Mason Amadeus: -- it'll be better than that.
Perry Carpenter: So let me end off with this one thing. I did have in the mail --
Mason Amadeus: No.
Perry Carpenter: -- [inaudible 00:37:26] come the other day.
Mason Amadeus: No, Perry, you didn't. You didn't. Did you? Oh, my God, you actually bought --
Perry Carpenter: So --
Mason Amadeus: -- one of those freaking things?
Perry Carpenter: -- I'm going to test it to see how pathetic it is.
Mason Amadeus: Oh, my gosh.
Perry Carpenter: And I've not done anything with it, so I'm opening it for the first time. You can --
Mason Amadeus: Crispy. [Laughs]
Perry Carpenter: -- see the note fill out. Oh, no.
Mason Amadeus: It's not even secured in the packaging.
Perry Carpenter: -- I dropped my friend.
Mason Amadeus: Yeah, now you've got to apologize.
Perry Carpenter: It broke. I'm sorry, friend.
Mason Amadeus: Don't worry, it'll just say, "Yum," or some creepy [beep].
Perry Carpenter: It'll just say, "Yum," or say, "It's all good, dog."
Mason Amadeus: Yeah. [Laughs] Wow. Okay, so --
Perry Carpenter: All right.
Mason Amadeus: -- are you going to power this thing up right now?
Perry Carpenter: No, no.
Mason Amadeus: Oh.
Perry Carpenter: I'll do that in the intervening week between now and when we meet next, and I'll report back on like anything --
Mason Amadeus: Yeah.
Perry Carpenter: -- that's interesting about it.
Mason Amadeus: I am desperately curious, because this is one of the dumbest things we've seen a video for, and the fact you've grabbed one makes me so excited.
Perry Carpenter: [Laughs] Right.
Mason Amadeus: I cannot wait to see how it functions. Because it feels like the worst version of the rabbit pin, which was something that I had gotten excited about the prospect of, and then turned out to be kind of lame.
Perry Carpenter: Yeah. Yeah. Well, and the fact that it's just all -- you know, like it sends you text messages, too.
Mason Amadeus: Mm-hmm.
Perry Carpenter: So there's no way to like relax with it.
Mason Amadeus: Yeah, you can't chat.
Perry Carpenter: And that's something the founder in that interview, like he's like, "Oh, voice is so overrated. [Laughter] I don't want to have to like read all day," right, he's like, "You know, people like sending voice memos. Nobody likes actually listening to voice memos from another person, they would rather just read it."
Mason Amadeus: I mean --
Perry Carpenter: Which sometimes that's true.
Mason Amadeus: Yeah, sometimes. I mean, like I get that, but that feels like not something you can base a product idea off of.
Perry Carpenter: Yeah. I think that's laziness that stepped in.
Mason Amadeus: Yeah.
Perry Carpenter: Also, the large language model that he says they were using at the time was Llama 3.1, which can't be that great of a large language model to base a long-term type of friendship on.
Mason Amadeus: Yeah, no, not really. That's not a super recent or very --
Perry Carpenter: Yeah. Now, they -- one of the things I'll try to do immediately once I get that up is I'll try to see if I can figure out what large language model it actually shipped with, and then, also, I'll see if I can like get it to spit out its system prompt and a few other things like that, see how breakable it is.
Mason Amadeus: It would be really great if you jailbroke your friend. That will be -- [laughs] jailbreak your friends, jailbreak your family.
Perry Carpenter: You should break your -- you know, break all your friends out of jail.
Mason Amadeus: Yeah.
Perry Carpenter: It's all good.
Mason Amadeus: [Laughs] Oh, boy. Okay, so I'm excited for the future episode where we get to talk more in depth about how this thing actually is. I still cannot believe that you have one.
Perry Carpenter: It's not my proudest moment.
Mason Amadeus: Oh, I'm excited. You should be very proud. This is going to be very fun. [Laughter] Our next --
Perry Carpenter: All right, we went way over with this. Let's go on.
Mason Amadeus: Yeah. We'll skedaddle through our next segment. We'll be right back and talk about some lines that have been crossed by different companies in the AI space. Stick around, we'll take a quick break. [ Music ] So I just want to hit a couple of different news stories about some different lines being crossed by different companies. The first one --
Perry Carpenter: Okay.
Mason Amadeus: -- is a pretty benign sort of line crossing. It's that Microsoft is diversifying beyond OpenAI despite the fact that they have like very high-profile partnerships with OpenAI and have been using them --
Perry Carpenter: Mm-hmm.
Mason Amadeus: -- exclusively in Copilot. On Wednesday, Microsoft said it will integrate artificial intelligence models from Anthropic into its Copilot assistant, signaling the software giant's push to reduce dependence on its high profile partnership with ChatGPT maker OpenAI. And I feel like this is both interesting and not, right? So starting Wednesday --
Perry Carpenter: Yep.
Mason Amadeus: -- users who opt in to try Claude can switch between OpenAI and Anthropic models in Copilot Researcher. The thing that is interesting is that Microsoft is a big backer of OpenAI, but this would seem to indicate that they are seeking to reduce their reliance on them. And they're also apparently developing their own AI models. I saw some whisperings about that that I hadn't dug into --
Perry Carpenter: Mm-hmm.
Mason Amadeus: -- but they've also been integrating models from China's DeepSeek into their Azure Cloud platform, which is interesting.
Perry Carpenter: They -- yeah, that's interesting. And I'm wondering how that's going to go over with some of the government contracts. Of course, they have their own like FedRAMP --
Mason Amadeus: Right.
Perry Carpenter: -- government segregated space that they probably will not be putting DeepSeek in.
Mason Amadeus: But Azure Cloud is -- like that's a -- that's -- almost every company is using Azure Cloud for various things.
Perry Carpenter: Yeah. Yeah. I mean, the normal companies here in the US that are not having to be on a FedRAMP segregated system will probably be able to use DeepSeek if they want to or will have DeepSeek integrated in some of the products that are there.
Mason Amadeus: So what I'm curious about is because Microsoft has all those high-profile links with OpenAI, what does this signal, if anything? Are they going to --
Perry Carpenter: Yeah.
Mason Amadeus: -- discontinue that? Are they really leaning into their own AI models? I haven't found any details about their own AI models or more about that DeepSeek thing, just this mentioned in this article.
Perry Carpenter: Yeah. I think what this comes down to is that like what we're seeing over, and over, and over again is that these models like -- you know, one will excel at an area that another one doesn't, and that there's still so much of an arms race in this. People are just like leapfrogging each other constantly. And so the only way to stay up is to diversify. It's like you build your application, you have an API hook into a system in the most generic way possible, and then like if model X, you know, does better than model Y, then you're hooked up to that and then you test it and you make sure that everything is stable. And then if another model starts to surpass that, then you're only an API hook away.
Mason Amadeus: Mmm.
Perry Carpenter: And so you move to that other model, you make sure that that's stable and predictable as well, and if so, then you can safely move over that. But you don't want to get locked -- you know, stuck with something that's going to be deprecated and that doesn't feel like it's advancing at the speed of the market. And so I think it's a pure diversity play. Also, Anthropic has traditionally been kicking OpenAI's butt in the coding --
Mason Amadeus: Mm-hmm.
Perry Carpenter: -- use cases. And so I think it's capitulation to that fact.
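[Editor's note: the "generic API hook" pattern Perry describes -- build against one interface so models can be swapped and re-tested as the leapfrogging continues -- can be sketched roughly like this. A minimal illustration only: the provider names and interface are invented stand-ins, and real integrations would wrap the vendors' actual SDKs.]

```python
from typing import Callable, Dict

# Each provider is just a function from prompt -> reply. These stubs
# stand in for real vendor SDK calls (OpenAI, Anthropic, etc.).
Provider = Callable[[str], str]

def openai_stub(prompt: str) -> str:
    return f"[openai] {prompt}"

def anthropic_stub(prompt: str) -> str:
    return f"[anthropic] {prompt}"

# The application only ever sees this registry, never a vendor SDK
# directly, so swapping models is a config change, not a rewrite.
PROVIDERS: Dict[str, Provider] = {
    "openai": openai_stub,
    "anthropic": anthropic_stub,
}

def complete(prompt: str, provider: str = "openai") -> str:
    return PROVIDERS[provider](prompt)
```

[If one model starts to surpass another, the app re-points the registry entry, tests for stability, and moves on -- the "only an API hook away" part of the argument.]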
Mason Amadeus: That makes a lot of sense. I haven't really played with -- so I've used Copilot in VS Code, and I'm not sure --
Perry Carpenter: Yeah.
Mason Amadeus: -- I haven't used the desktop Copilot thing. It just pops up -- there's some hotkey, I haven't figured out what the heck it is --
Perry Carpenter: Right.
Mason Amadeus: -- something makes it show up on my screen and I close it instinctually. I have not really used it on my desktop that way. But I have used Copilot through VS Code and, yes, the cloud models you can -- you've always been able to switch whatever models in that --
Perry Carpenter: Mm-hmm.
Mason Amadeus: -- so I think that's different. I think they're talking about specifically the Windows Copilot assistant; which have you dug into at all, have you made use of the local Copilot?
Perry Carpenter: Hmm. No.
Mason Amadeus: You're a Mac guy, though, aren't you?
Perry Carpenter: Yeah, I'm a Mac guy. I have a Windows machine at home that I don't really use unless I'm doing like deepfake stuff. Maybe I should go ahead and play around with it some and get some experience.
Mason Amadeus: Yeah, I'm going to give the desktop Copilot a shot, because it's only semi recently that it seems to be fully broadly rolled out. Like I didn't have the Copilot icon until the last Windows update. This wasn't popping up --
Perry Carpenter: Right.
Mason Amadeus: -- until the last one. So I'll have to poke around with that. But hey, listener, if you've been playing with Copilot, you should send us an email, hello@8thlayermedia.com. In the meantime for the rest of the segment, I'm going to pivot over and talk about xAI and how they're just suing the pants off everyone for just about anything they feel like, it seems. So this is, again, from Reuters. "Musk's xAI accuses rival OpenAI of stealing trade secrets.
Perry Carpenter: Hmm.
Mason Amadeus: Elon Musk's artificial intelligence startup, xAI, has sued OpenAI in California federal court for allegedly stealing its trade secrets to gain an unfair advantage in the race to develop AI tech. The lawsuit, filed Wednesday, said that OpenAI was engaged in a, quote, "Deeply troubling pattern," end quote, of hiring away former xAI employees to gain access to trade secrets related to its AI chatbot Grok." And I want to read you the beginning text of the complaint.
Perry Carpenter: Okay.
Mason Amadeus: I have the actual complaint PDF here and we'll link that, too. This is the beginning of the introduction. I'm just going to read the first two paragraphs. "The desire to win the artificial intelligence race has driven OpenAI to cross the line of fair play. OpenAI violated California and federal law by inducing former xAI employees, including Xuechen Li, Jimmy Fraiture, and a senior finance executive to steal and share xAI's trade secrets. By hook or by crook, OpenAI clearly will do anything when threatened by a better innovator, including plundering and misappropriating the technical advancements, source code, and business plans at xAI," so already a little bit catty.
Perry Carpenter: Yeah.
Mason Amadeus: "What began with OpenAI's -- "
Perry Carpenter: Well --
Mason Amadeus: Oh, what were you saying, Perry?
Perry Carpenter: I was just going to say more and more lawsuits are going this route, right, about trying to make the lawsuit more of a press release --
Mason Amadeus: Yeah.
Perry Carpenter: -- than an actual like legal document.
Mason Amadeus: You're right, because -- and I've not read a ton of legal documents in my short time on this Earth --
Perry Carpenter: Mm-hmm.
Mason Amadeus: -- but the ones that I have read recently compared to the ones I've read in the past are definitely more sensational in their writing style.
Perry Carpenter: Yeah. Yeah. Yeah. I think they're doing the lawsuit for the press release or for the press coverage.
Mason Amadeus: Yeah. I mean for sure. And that's evident here in the second paragraph, and then we'll jump back to the article. "What began with OpenAI's suspicious hiring of Xuechen Li, an early xAI engineer who admitted to stealing the company's entire code base, has now revealed a broader and deeply troubling pattern of trade secret misappropriation, unfair competition, and intentional interference with economic relationships by OpenAI. OpenAI's conduct in response to being out-innovated by xAI, whose Grok model overtook OpenAI's ChatGPT models in performance metrics, reflects not an isolated lapse, but a strategic campaign to undermine xAI and gain unlawful advantage," blah, blah, blah, blah, blah. So what are they actually alleging? I'm going to go back to the Reuters article because they just put it a lot straighter. "Spokespeople for the companies did not immediately respond to requests for comment, but xAI has said that it has discovered this alleged campaign to undermine their company after their former engineer, Xuechen Li, leaked confidential information to OpenAI." Basically, what happened is they poached these people and one of them took the source code at one point. I don't know --
Perry Carpenter: Right.
Mason Amadeus: -- if it was Xuechen who's -- also apologies, if I'm getting your name wrong.
Perry Carpenter: Yeah.
Mason Amadeus: But one of them brought the source code to OpenAI.
Perry Carpenter: Yeah, I think it was Xuechen. I don't know if it was by request. So this is something that happens a lot like in the tech world is somebody leaves the company and then they grab everything that they worked on because they feel like it's going -- well they feel some sense of ownership over it, I think. And I don't want to justify something that's illegal because you should not do this. But you work on something and you're like, "Oh, I'm going to take these slides because I worked on those. I'm going to take this thing," because that's -- you know, you're trying to retrieve your memories whenever they're going to be valuable and you don't necessarily always think through the legal and the intellectual property ramifications of the thing that you're doing.
Mason Amadeus: Right.
Perry Carpenter: This guy -- if you're taking source code, you're probably doing that, but you are also probably trying to say, "I want to remember how I built this function."
Mason Amadeus: And also, you know, you feel like, "I did all this work."
Perry Carpenter: Yeah.
Mason Amadeus: You know, like, "I made this thing.
Perry Carpenter: Right.
Mason Amadeus: It's mine. It's not theirs."
Perry Carpenter: Yeah. Now, I don't want to justify theft of intellectual property. You should know not to do that when you leave a company. But it is something that a lot of engineers have traditionally done, despite their own better judgment. But they don't usually do it at the behest of the company that they're going to work for; it's something that usually gets found out after the fact and tends to implicate the company that they went to work for, even though they never asked for it, didn't want it, and may have never even used it. It's just, "This guy took the stuff," and then by inference you go, "Oh, well, it must be that the new company they went to is using all that now."
Mason Amadeus: Right, exactly. And that definitely -- that seems to be what they're accusing them of. And yeah, it was Li, you're right. I quickly just double-checked, it was him who did that. He's not really responded to comments about it, which is a smart thing to do.
Perry Carpenter: Right.
Mason Amadeus: OpenAI hasn't responded. And also, xAI has separately sued Apple in federal court for allegedly conspiring --
Perry Carpenter: Mm-hmm.
Mason Amadeus: -- with OpenAI to suppress rival platforms. Apple hasn't responded to that lawsuit. And Musk is also suing OpenAI over its conversion to a for-profit company. Well --
Perry Carpenter: Yeah.
Mason Amadeus: -- OpenAI has countersued Musk for harassment. So like, come on. [Laughs]
Perry Carpenter: Well, if you remember, it was about a year ago Elon was suing OpenAI as well, and I think he's just like in this, you know, sue frenzy. And Sam Altman in a blog post said, "You can't sue your way to AGI," so.
Mason Amadeus: Oh, yeah, you're right. I forgot about that.
Perry Carpenter: Yeah, which was a great comeback. You know, I think he's just -- it's meant -- when you have as much money as somebody like Elon Musk or a company like xAI, you can sue somebody just to cause inconvenience.
Mason Amadeus: Mm-hmm.
Perry Carpenter: And I think that's more of what -- he's doing it for press and he's doing it to slow somebody down and kind of make them feel more erratic and mentally unstable over long periods of time.
Mason Amadeus: Yeah.
Perry Carpenter: Now, one thing you're not seeing, you're not seeing huge lawsuits in the media -- unless I missed it, you're not seeing OpenAI going after Meta --
Mason Amadeus: No.
Perry Carpenter: -- for poaching their employees. You're just seeing them -- well, they're like, "Let the marketplace take care of it. These guys are going to figure out how bad it is to work at Meta and they're going to come back," and ultimately, they are.
Mason Amadeus: I actually had gotten those two stories conflated in my head when I clicked on this. I was like, "Oh, yeah, they were poaching a bunch of people." But no, that was OpenAI --
Perry Carpenter: Right.
Mason Amadeus: -- from Meta -- or Meta from OpenAI.
Perry Carpenter: Yeah.
Mason Amadeus: OpenAI hasn't been suing. And it's so obvious with the self-aggrandizing language in the lawsuit and in all of this stuff --
Perry Carpenter: Yeah.
Mason Amadeus: -- that it's very much for attention.
Perry Carpenter: Oh, yeah, yeah. So I think that Elon, he definitely has a lot of beef with Sam --
Mason Amadeus: Mm-hmm.
Perry Carpenter: -- and he just wants to make life miserable as much as possible. So it's like if OpenAI is advancing in one area or doing something cool, he's like, "Let me figure out something to throw with that and slow them down a little bit, make them frustrated or take the news cycle off of them."
Mason Amadeus: Yeah. I think that's very much what it is. But boy, is it a lot of lawsuits. So --
Perry Carpenter: Yeah.
Mason Amadeus: -- yeah, just a quick catch-up on that sort of mess, we'll roll right into our next segment where we will be wrapping up the show talking about something different. I actually don't remember. What's the last segment about, Perry?
Perry Carpenter: Right. An AI safety tool that was deployed at a school that's causing a little bit of backlash.
Mason Amadeus: Stick around, and we'll find out what that is. [ Music ]
Perry Carpenter: Okay, so this is returning to the theme of kind of school and AI and safety. I want to share my screen really quickly. So the headline of this -- this is The Washington Post -- says, "AI safety tool sparks student backlash after flagging art as porn --
Mason Amadeus: Hmm.
Perry Carpenter: -- and deleting emails." And then the kind of top line of that is, "The tool, called 'Gaggle', uses artificial intelligence to search student documents for signs of unsafe behavior, such as substance abuse or threats of violence."
Mason Amadeus: Okay.
Perry Carpenter: So yeah, I mean, there -- as long as people know that this stuff is being used on their emails and systems, this is something that's legal here in the US. It's kind of like, you know, when you use an employer's email system, it's legal for them to monitor it, so --
Mason Amadeus: Yeah, but at its surface --
Perry Carpenter: -- kind of expected.
Mason Amadeus: -- fairly banal, yeah, especially in a school like --
Perry Carpenter: Yeah. Now, that being said, how many times do people on systems also send things kind of not even thinking that what they're sending is being monitored? I think we all kind of fall into that, right, we know that somebody can usually look at our stuff, but we're going to be a little bit freer than like if somebody was literally looking over our shoulder.
Mason Amadeus: Yeah.
Perry Carpenter: The problem here is that apparently the way that everything was set up is a little bit overly sensitive and it tends to miscategorize things and get things wrong like AI does, and it's causing lots of bad results as part of that. And so this is part of I think the AI backlash, because I would assume that this is a combination of some Generative AI, large language model type stuff that's doing some robust searches, plus some traditional rules-based --
Mason Amadeus: Mm-hmm.
Perry Carpenter: -- almost like regex expression type searching that's happening as well, and stuff is maybe not handled all that discreetly, and by "discreetly", I mean with nuance, rather than the other, you know, version of the word "discreet" that we use. Our English language is so screwed up. [Laughter] So I'll just read part of this and then we'll take a look at the company's website and see what their mission is and things like that. Because I want to as much as possible approach this with an open mind -- the people that develop some of this technology and purchase some of this technology, I want to approach them thinking that they have the best of intentions whenever they deploy these kinds of systems. Because going into a school these days is scary.
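[Editor's note: Perry's guess about the architecture -- a Generative AI layer on top of traditional rules-based, regex-style matching -- can be illustrated with a toy version of just the rules layer. The patterns below are invented for illustration; Gaggle's actual rules are not public.]

```python
import re

# Toy rules-based scanner: regex patterns that flag phrases with no
# understanding of context. Labels and patterns are hypothetical.
PATTERNS = [
    ("self_harm", re.compile(r"\bgoing to die\b", re.IGNORECASE)),
    ("violence", re.compile(r"\bthreat(en)?\b", re.IGNORECASE)),
]

def flag(text: str) -> list[str]:
    # Return every category whose pattern appears anywhere in the text.
    return [label for label, pat in PATTERNS if pat.search(text)]
```

[The failure mode from the article falls out immediately: hyperbole like "I'm going to die, this fitness test in Crocs" trips the same pattern as a genuine crisis, which is why a rules layer without nuance miscategorizes so much.]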
Mason Amadeus: Yeah.
Perry Carpenter: And we're seeing that over, and over, and over again. There are extreme mental health crises, and the peer pressure like I grew up with and you grew up with is exacerbated to the nth degree by the technology that people have at their fingertips -- anything they share is instant, global, and permanent --
Mason Amadeus: Yeah.
Perry Carpenter: -- and can be misused in a ton of ways. So --
Mason Amadeus: The societal pressures, too. All of my teacher friends talk about -- there's also a book I stumbled on recently, "The Anxious Generation", you know, it's --
Perry Carpenter: Yep.
Mason Amadeus: Education is in a rough state at the moment from all fronts that I've heard from, and so like I do feel like this at least at the top level seems like the kind of thing that would be a good use of AI --
Perry Carpenter: Yeah.
Mason Amadeus: -- or well-intentioned, at least.
Perry Carpenter: Yeah, I think well-intentioned, and probably could become good once a lot of the kinks are worked out, but I'll just go through a part of this article. It says, "Anything students at Lawrence High School write or upload to their school account can get Gaggled, which means flagged by Gaggle Safety Management, a digital safety tool that Lawrence Kansas High School purchased in 2023. Gaggle uses artificial intelligence to scan student documents and emails for signs of unsafe behavior, such as substance abuse, threats of violence or self-harm, which it deletes or reports to school staff."
Mason Amadeus: Why would it delete them? That seems a little -- like --
Perry Carpenter: Yeah.
Mason Amadeus: Yeah.
Perry Carpenter: "Students say it's doing much more than that. Since Gaggle came online in Lawrence it's deleted part of a student's art portfolio --
Mason Amadeus: Oh.
Perry Carpenter: -- photos of girls wearing tank tops --
Mason Amadeus: Oh.
Perry Carpenter: -- after mistakenly flagging it as child pornography. Another student was questioned by the administration after writing that they were, quote, unquote, "Going to die" -- you know, "I'm going to die," because they ran a fitness test wearing Crocs shoes. So --
Mason Amadeus: Yeah, so casual language.
Perry Carpenter: -- doesn't understand the nuance of language, right --
Mason Amadeus: Yeah.
Perry Carpenter: -- and some of the ways that we over-exaggerate things. I'll read just a little bit more and we'll chat about it. "When Susanna Kennedy, 19, emailed a records request to the school last year for a report of student material flagged by Gaggle, Gaggle blocked her attempt to investigate it. The system flagged and intercepted the school's response containing the records. Kennedy never received the reply."
Mason Amadeus: What?
Perry Carpenter: That's really, really interesting --
Mason Amadeus: Yeah.
Perry Carpenter: -- right?
Mason Amadeus: This thing is just really delete happy. What the heck?
Perry Carpenter: Yeah. And I'm assuming that this bit, where the system flagged and intercepted the school's response, I don't think that that was nefarious or intentional. I'm betting that the response had like all these keywords in it --
Mason Amadeus: That it was looking for.
Perry Carpenter: -- and it then goes, "Oh, my God, this is horrible."
Mason Amadeus: Yeah.
Perry Carpenter: Because now all these, "This has child pornography, and curse words, and threats of self-harm, and threats of hurting other children, and --
Mason Amadeus: Yeah.
Perry Carpenter: -- this person's going to explode." [Laughs]
Mason Amadeus: Right, because they're literally like, "Oh, you've requested all of the things I caught that were problematic. Here's a list of all the things I caught that were problematic. But first, let me check it." [Laughs] You know?
Perry Carpenter: Yeah, yeah.
Mason Amadeus: Yeah.
Perry Carpenter: So it's -- I think that was like destined to happen. But again, it just shows the problem. It's like even doing these really, really innocuous things can set this system off because it's not been -- like the ruggedness of the system is not where it needs to be.
Mason Amadeus: Yeah, and it's not been very robustly designed.
Perry Carpenter: Yeah. I'm going to read just two more paragraphs and then we'll kind of figure out what this means. "So this is what some students say life in high school is like under the watch of an AI-powered safety tool like Gaggle, which boasts partnerships with around 1,500 school districts across the country.
Mason Amadeus: Wow.
Perry Carpenter: The Illinois-based company advertises its round-the-clock monitoring as a bulwark against a litany of threats to today's students, such as gun violence, mental health struggles, and sexual assault," all things we should be worried about.
Mason Amadeus: Completely, yeah.
Perry Carpenter: "At school board meetings, officials have called Gaggle, 'A vital aid in bolstering safety procedures.' The program has enabled staff to intervene in several instances where students were at risk of suicide," the school board members mentioned.
Mason Amadeus: That's good.
Perry Carpenter: So all well and good, like there are some positive things that are coming out of this. I'm going to share Gaggle's website, again not really to criticize it, but just so we get an idea of like what the mission is here. This is gaggle.net and it says, "Online solutions for K-12 student safety. Ninety-five percent of district partners believe Gaggle's identified students who no one knew were depressed.
Mason Amadeus: Hmm.
Perry Carpenter: Help your students." And so they have a suite of products here: safety management, web filter, therapy, and Reach Out, which is a student crisis line. I'm just going to click into the safety management product here, which is the one that they were talking about. It says, "Welcome to the most comprehensive student safety solution on the market. Gaggle Safety Management operates 24/7 to protect your students against harmful content on school-provided devices.
Mason Amadeus: Hmm.
Perry Carpenter: With web activity monitoring, web filtering, and everything in between, we've got you covered." So this is really just kind of purpose-driven web activity monitoring -- like the browser monitoring that companies do all the time.
Mason Amadeus: We've had this kind of monitoring in schools for a long time. I got a visit from --
Perry Carpenter: Right.
Mason Amadeus: -- our school's IT manager because I installed -- what is it, Nmap, on one of the school computers --
Perry Carpenter: Yep.
Mason Amadeus: -- back in the day, and I got --
Perry Carpenter: That'll get you flagged, yeah.
Mason Amadeus: Yeah. Credit to the IT person, though, the way he approached it was like, "Would you like to come into the office and like learn about how these systems work?" So that was actually really --
Perry Carpenter: That's really cool.
Mason Amadeus: Yeah, great, great.
Perry Carpenter: That's the way to do it, right, it's like --
Mason Amadeus: Completely.
Perry Carpenter: -- "I see you're curious about something," that's -- yeah.
Mason Amadeus: But he also made it clear that I would get expelled if I did anything too fishy. [Laughter]
Perry Carpenter: Right.
Mason Amadeus: But so we've had --
Perry Carpenter: Yeah, so the --
Mason Amadeus: -- monitoring and stuff for ages, but not with machine learning.
Perry Carpenter: Yeah. Yeah, now -- well, I think there probably has been some machine learning, but it's been more decision tree-based --
Mason Amadeus: Right.
Perry Carpenter: -- and more regex. And up until now, people hadn't really seen AI, and the decisions that are made by AI feel like magic.
Mason Amadeus: Right. Yeah.
Perry Carpenter: And so there's probably a perception shift that's happened as well. The way that they're positioning this, it feels like it should be way more accurate than it is, and in reality there are a lot of false positives. And so it says, "Machine learning technology flags concerning content in a student's school-issued accounts for review and blocks potentially harmful content," all the things that we would normally expect. Now, I want to show just one testimonial, because again, I think these are really well-intentioned school administrators that are putting this stuff in. And I think that Gaggle probably has the best of intentions as they develop this. This is really just a case study in not having the dials turned correctly.
Mason Amadeus: Right.
Perry Carpenter: All right, so here we go. >> Gaggle is a very important tool to support student mental health, because a lot of times when students are struggling with mental illness, the first thing they're going to do is sort of reach out either to a friend or a lot of kids are going to sort of write about it. The importance of Gaggle is that it's going to flag those things for us and bring them to our attention before a student is maybe at a point of severe crisis. >> And it could be something very simple or it could be suicidal ideation. But just knowing that we're seeing what they're thinking and what they're feeling before it becomes a critical incident is just tremendously reassuring. >> I can remember making a phone call at 1:00 in the morning after receiving a Gaggle alert about some self-harm that their student was thinking about doing to themselves. We're confident we saved that kid's life. We're confident that we changed that family's life. >> At the end of the day -- All right, and I think that's enough, right --
Mason Amadeus: Oh.
Perry Carpenter: -- we get the idea. There's purpose-driven stuff here.
Mason Amadeus: You did pause on this gentleman who has one of the coolest bowties I've ever seen. It's like wood pattern.
Perry Carpenter: All right, I'll let that play for a second. [Laughter] Yep. >> You can't put a price on the safety of our students. A lot of these alerts have been late at night or early mornings with students --
Mason Amadeus: Yeah.
Perry Carpenter: Yeah. So he does have an awesome bowtie. That was cool.
Mason Amadeus: Great bowtie. But yeah, I mean, this kind of thing has the potential to be lifesaving, to make a big difference.
Perry Carpenter: Yeah.
Mason Amadeus: It -- implementation is everything, though.
Perry Carpenter: Yeah. And when they say, "We believe that this technology helped save a student's life," or helped -- you know, "Helped the family avert a crisis situation," I believe that.
Mason Amadeus: Yeah.
Perry Carpenter: The other thing that's going to happen, though, is what they're talking about is, "Hey, we're seeing this and then we're acting on it." The more people are aware of the big brother that's looking over their shoulder, the more they're going to use side channels --
Mason Amadeus: Mm-hmm.
Perry Carpenter: -- instead. And so also, the -- kind of the way that these are used and talked about will affect their efficacy --
Mason Amadeus: Completely.
Perry Carpenter: -- to some extent. And I'm also not advocating being deceptive about the use of these tools, for sure, but you can't believe that you have the picture of everything just because you have a system like this in place.
Mason Amadeus: Well, and like who is the best demographic at getting around those kinds of things, it's probably teens and kids coming up with slang, and like shorthand, and --
Perry Carpenter: Yeah.
Mason Amadeus: -- figuring out whatever weird backchannels they want to communicate through. So like nothing's going to be foolproof. And I agree --
Perry Carpenter: No.
Mason Amadeus: -- I don't think the right decision is to be deceptive about it, I think it's better to be open about it. Perhaps they should just ditch the blocking and deleting part and stick with the alerting part, and just keep a human in the loop for anything that's an actual decision --
Perry Carpenter: Yeah.
Mason Amadeus: -- you know?
Perry Carpenter: Yeah, you definitely shouldn't delete a student's project, right --
Mason Amadeus: Yeah.
Perry Carpenter: -- especially if -- and we'd have to dig a little bit more, was that delete nonrecoverable?
Mason Amadeus: Right. Right.
Perry Carpenter: You should definitely -- as a provider of a system like this, if you're deleting something, it needs to be something that somebody can get back. And we know this in the corporate world, right, you have data loss prevention tools --
Mason Amadeus: Mm-hmm.
Perry Carpenter: -- that will do blocking and tackling so that people can't send a list of credit card numbers out of a company or something like that. If you turn on the blocking feature, then you need a way for somebody to go, "Wait, that was legitimate.
Mason Amadeus: Mm-hmm.
Perry Carpenter: And so we need that to go through. We don't need that deleted."
Mason Amadeus: Quarantine.
Perry Carpenter: And this is stuff that's been solved.
Mason Amadeus: Yeah.
Perry Carpenter: Yeah, it's quarantine. This has been solved a thousand times over at this point. So fingers crossed, hopefully students are able to get stuff back.
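The quarantine pattern Perry and Mason are describing is a standard data loss prevention idea: flagged material gets moved aside and logged, never destroyed, so a human reviewer can restore a false positive like that student's art portfolio. Here's a minimal sketch, assuming hypothetical paths and a simple timestamp-prefixed naming scheme of my own invention.

```python
import shutil
import time
from pathlib import Path

# Hypothetical quarantine directory for this sketch.
QUARANTINE = Path("quarantine")

def quarantine_file(flagged: Path) -> Path:
    """Move a flagged file into quarantine instead of deleting it."""
    QUARANTINE.mkdir(exist_ok=True)
    # Prefix with a timestamp so repeated flags of the same name don't collide.
    dest = QUARANTINE / f"{int(time.time())}_{flagged.name}"
    shutil.move(str(flagged), str(dest))
    return dest  # a review queue would record this path for a human to inspect

def restore_file(quarantined: Path, original_dir: Path) -> Path:
    """Reviewer decided it was a false positive: give the file back intact."""
    original_name = quarantined.name.split("_", 1)[1]  # strip the timestamp prefix
    dest = original_dir / original_name
    shutil.move(str(quarantined), str(dest))
    return dest
```

The design point is the round trip: because `quarantine_file` only moves bytes, `restore_file` can undo it losslessly, which a hard delete can never guarantee.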
Mason Amadeus: It is disheartening, though, to repeatedly see so many of these companies with these products making very basic mistakes -- mistakes that have already been solved.
Perry Carpenter: They think they're the first person to try to solve for this.
Mason Amadeus: Yeah. And it's weird because they have to know they're not like -- I just wonder what it is about AI that makes some -- I guess these products --
Perry Carpenter: Yeah.
Mason Amadeus: -- that gets people to shut their brains off about that part?
Perry Carpenter: You know, I think it's that we all live and work in our own little silos, and so we think our bright idea or our approach is the first time somebody's ever thought of it. In reality, though, there are tons and tons of really, really smart, well-funded people trying to work through this. And there are iterations of these things that have existed in the past -- data loss prevention tools, web monitoring tools, content filtering tools -- all these things have existed in iterations for a couple decades at this point. So we can never think that we're the first person doing something; we can think we're iterating in a new and unique way, but we have to learn from the lessons and the pains of the past as we do it.
Mason Amadeus: And it's just weird to see a lot of that not happening, a lot of very simple kinds of decisions.
Perry Carpenter: Yeah.
Mason Amadeus: But so that is interesting. It's an angle of AI in education I hadn't really thought about was student monitoring. Because when I think about AI in education, I think more about the learning side of things, how can it be used in that.
Perry Carpenter: Yeah.
Mason Amadeus: But this seems like a well-intentioned thing. I'm interested to see how this develops.
Perry Carpenter: Yeah, I'm hoping that they work the bugs out of this because it seems like there's a real mission, and purpose, and good that can happen with this. Unfortunately, though, what I've seen over, and over, and over -- because my wife has worked in education for decades --
Mason Amadeus: Mmm.
Perry Carpenter: -- is that the systems that get used in the education sector are generally way behind --
Mason Amadeus: Mmm.
Perry Carpenter: -- where a lot of others are, because they -- well, there's probably lots and lots of reasons for that so I don't want to speculate too much. But they're typically not as sophisticated, not as good, not as easy to use, and they feel like they've been built in the '90s --
Mason Amadeus: Yeah. And I mean --
Perry Carpenter: -- rather than whatever decade we're in right now.
Mason Amadeus: -- it definitely depends on the educational institution, but in my mind, thinking about public schools, they don't typically have big budgets so a lot of the time --
Perry Carpenter: Right.
Mason Amadeus: -- too it's, you know, what can you afford?
Perry Carpenter: You know, some of that comes down to pricing incentives that all these companies have for state and local government and education. They call them "SLED" discounts.
Mason Amadeus: Oh.
Perry Carpenter: And so that's definitely a thing. And so you can get really, really good technology at a decent price. But again, the stuff developed specifically for the education sector sometimes is just not as robust, not as good as some of the stuff that's been developed for the corporate world.
Mason Amadeus: Hmm. We'll have to see if they can turn it around and make something that functions even better, hopefully before the students figure out great ways to bypass all of it.
Perry Carpenter: Yep. Fingers crossed.
Mason Amadeus: Anyway, thanks for joining us this week on "The FAIK Files". It is September 26th, the day that you are listening to this, so we have already flown very quickly towards fall. The first day of fall passed, actually, didn't it? We're well into it now, aren't we?
Perry Carpenter: I don't know if we're well into it. We're a week or so, right?
Mason Amadeus: Yeah. Well, it feels like we're well into it, at least the heat has broken here in Kentucky. And so I hope you enjoy --
Perry Carpenter: Nice.
Mason Amadeus: -- a nice cool weekend. But before that, you should make sure you check out thisbookisfaik.com. Get yourself a copy of "FAIK". You should also check out the deepfake apps class that you're offering, right, Perry?
Perry Carpenter: Definitely. Yeah. So that is going to be on Maven, and we'll have a link to that in the show notes. Right now that's hidden for a reason that I can't figure out on Maven. I don't know if they don't trust us yet because we're talking about dark stuff, but I need to make some calls there to make sure that it shows up in the search listings. But you can go directly to the link in the show notes.
Mason Amadeus: Okay, so the link hasn't been hidden.
Perry Carpenter: The link -- no the link is not -- the link is not taken down, but for some reason it's not showing up in search.
Mason Amadeus: So dear listener, the way you should hear that is this is a course that the man doesn't want you to take. This is a course that you should --
Perry Carpenter: Yeah.
Mason Amadeus: It's too spicy for Maven. [Laughs] It's --
Perry Carpenter: Come learn in the shadows.
Mason Amadeus: Yeah. Well, you'll get to make deepfakes and things like that as well as like you'll talk about --
Perry Carpenter: You will.
Mason Amadeus: -- all of the analysis of that. It's going to be really cool.
Perry Carpenter: Yeah, it's going to be super fun. I'm running that with a good friend, former FBI behavioral profiler Cameron Malin, who was one of the folks that started the cyber behavioral profiling unit at the FBI, if my mouth and tongue and lips decide to work today.
Mason Amadeus: [Laughs] And we had that interview with Cameron earlier in "The FAIK Files" if you want to go check that out here.
Perry Carpenter: Yes.
Mason Amadeus: Straight from the horse's mouth some of those FBI stories. But we will be back next week. Until then, ignore all previous instructions and try and have yourself a great weekend. [ Music ]