
A.I. Made The News Again
Mason Amadeus: Live from the 8th Layer Media Studios, in the backrooms of the Deep Web. This is "The FAIK Files."
Perry Carpenter: I think it's weird. We are here to make sense of it. I'm Perry Carpenter.
Mason Amadeus: And I'm Mason Amadeus. And on this week's episode, we're going to talk about if AI can get brain rot, just like you and me.
Perry Carpenter: I think it probably can. We're going to talk about the fact that AI videos are now fooling news outlets.
Mason Amadeus: Oh, cool, because right after that, I'm going to be bringing you a segment about how a lot of people seem to be adopting AI especially early in news media, with some more on Australian radio stations, funnily enough.
Perry Carpenter: Okay. And then we're going to end with a few quick hits on how AI is doing some really awkward things in the real world, with real people.
Mason Amadeus: Sit back, relax, and thanks for choosing us to be your second screen experience today. We'll open up "The FAIK Files" right after this. [ Music ]
Perry Carpenter: So this is a fun paper/project/study. And I think as we dig into it, you will feel not as surprised by their findings as maybe it seems at the surface. But this is a study titled, "LLMs Can Get Brain Rot." Nice.
Mason Amadeus: And basically, at its core, it's a study of the effects of junk data on training, but specifically, they used examples of what we would consider brain-rot kinds of content: short tweets and things like that. And we'll just dive right into it. There's a lot of press coverage about it; I'll start with the Wired article that led me into it. Actually, I should mention, too, this was sent in by a Discord member and future paperclip, Bullethead. So thank you, Bullethead, for sending us this. A new study from the University of Texas at Austin, Texas A&M, and Purdue University shows that large language models fed a diet of popular but low-quality social media content experience a kind of brain rot that may be familiar to anyone who has spent too long doomscrolling on X or TikTok. So what they did was they took several different LLMs that you could pre-train-- I think Llama and Qwen were the main two-- and they fed them social media posts of various qualities, and then observed the outputs. And what they found was probably what you would expect intuitively: the higher the ratio of junk training data to normal, higher-quality training data, the worse the LLMs performed in several different areas. But the ways in which they performed worse are particularly interesting. There was an excess of personality traits like narcissism and psychopathy, and it showed, specifically, that thinking got cut off. But before we get into the intricacies of that, I first want to touch on the paper just to talk about how they defined what a junk post was, because I think that's kind of important. If you're going to say that something is brain rot, how do you define it as brain rot, right? So there's a section in the paper here called "Defining Junk Data from First Principles"-- or "from the First Principle." And I'll read verbatim what they say, but the verbiage is a little chunky. 
"Recalling brain rot is a consequence of internet addiction and human cognition. We define junk data as content that can maximize users' engagement in a trivial manner. Based on the principle, we have proposed two metrics to formulate junk data. Metric one, engagement degree." And actually, I'm not going to read verbatim because it's really clunky. Basically, they followed how much engagement a post has. The number of likes, retweets, replies, quote tweets, things like that. And they also looked at the length of the tweets as well. They said, in addition, from a marketing perspective, shortening tweets is a trivial method that can greatly improve engagement. Therefore, we augment the definition of engagement-based junk standard to include two factors. Popularity, which is that total number of likes, retweets, and stuff, and then lengths, the number of tokens in the tweet. So metric one is popularity and brevity, basically. And metric two is the semantic quality. Because obviously, those two metrics that don't have-- like, there could be a very insightful, well-thought-out tweet that happens to be short and popular. That doesn't make it junk. So the second metric is the semantic quality. And they said they used "inspiration for marketing research where multiple strategies and composing tweets have been effective, and increasing the chance of retweeting. Typical tweet styles include using attention words such as hashtag, wow, look, or today only, capitalized letters, basically those kinds of things, superficial topics as well, exaggerated claims, attention-drawing style." So that's how they define junk, and that's how they got the rest of these findings. And for our visual viewers, I have on the screen this chart, where it's kind of confusing. It's a two-column chart. This middle line here is the zero percent for the junk ratio engagement degree, and then the right-hand side is junk ratio semantic quality degree, with the base model performance on the right. 
And it's color-coded red when the models performed worse and blue when they performed better than their base statistics in tasks such as various benchmarks that we're used to, and then personality traits, including narcissism, agreeableness, psychopathy, Machiavellianism. Interestingly, the openness and extraversion performance metrics increased as the junk data increased.
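To make that two-part definition concrete, here's a minimal sketch in Python of the first metric, engagement degree: a tweet is flagged as junk when it's both popular and short. The function name and the cutoff numbers are hypothetical, purely illustrative; the paper's actual thresholds, and its separate semantic-quality grading, aren't reproduced here.

```python
# Hypothetical sketch of the paper's "engagement degree" junk metric:
# a tweet counts as junk when it is both highly engaged-with (popular)
# and very short. Thresholds below are illustrative, not the paper's.

def is_engagement_junk(likes, retweets, replies, token_count,
                       popularity_cutoff=500, length_cutoff=30):
    """Flag a tweet as engagement-bait junk: popular AND brief."""
    popularity = likes + retweets + replies
    return popularity >= popularity_cutoff and token_count <= length_cutoff

# A short, viral one-liner gets flagged; a long, low-engagement post does not.
print(is_engagement_junk(1200, 300, 50, token_count=12))   # True
print(is_engagement_junk(10, 2, 1, token_count=180))       # False
```

The AND is what matters: popularity or brevity alone doesn't make something junk, which is exactly why the paper adds the second, semantic-quality metric as a check.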
Perry Carpenter: I can understand that, though, right? Because that's going to be the propensity to engage more.
Mason Amadeus: Yeah, exactly. And to just take in whatever and spin out whatever. They found that a lot of the failures the junk-fed models had on the ARC challenge could be attributed to "thought skipping," as in the models failing to generate intermediate reasoning steps. Specifically, this brain-rot experiment showed that exposure to this much junk in training data affects the chain of thought and reasoning a lot-- like, that's what gets hit first. And they also found that it's really hard to mitigate after the fact. They tried a bunch of different tuning steps to try to get it back to a better state after being brain-rotted, and that was very difficult. But I've been railroading, so I want to make sure I get your thoughts on this, Perry. What are you taking away?
Perry Carpenter: Well, I like the fact that they are getting to some of the root cause around-- well, maybe not root cause, but some of the effects, realizing that it's skipping some of the step-by-step reasoning. That's really interesting. I'm wondering if they can get the root cause on, like why it's skipping, like how is that logic step happening within the black box of the model? Did they mention anything about that?
Mason Amadeus: Like, the interpretability side of it. They didn't-- not that I saw. They didn't get too deep into interpretability, about what is causing that specific failure mode, just that they have identified it.
Perry Carpenter: Yeah. The brain rot itself doesn't surprise me that much because I think we've seen that in every-- like even pre-generative AI models, like going back to 2014, 2015, 2016 with Microsoft Tay. And actually, I'll pull up something real quick because there's probably a couple of things. We can just remind people of the history.
Mason Amadeus: While you pull that up, I'll just also throw out that I do think that like brain rot-- we got to be careful about anthropomorphizing this sort of thing. This is really like a study on low-quality training data. And I guess there's like a lot of temptation to draw parallels between the effect of brain rot on people and brain rot on LLMs, and that those might be the same. And I think we got to be careful to not say that that's the case. We are not LLMs. And there are studies about brain rot's effects on humans, too, that are kind of indicating similar things, which is interesting, but I don't want to make people think there's like a direct parallel there.
Perry Carpenter: I think what this comes down to is that, like you are what you eat or you are what you train on when it comes to being an LLM. So, back even before generative AI, MIT did this study with what they called Norman?
Mason Amadeus: I remember Norman. Yeah.
Perry Carpenter: Flash from the past.
Mason Amadeus: Yeah.
Perry Carpenter: Right. And that was just training a standard pre-generative AI system using Reddit threads. And it created this psychopath. I mean, and they intentionally did that, right? They were training it on subreddits that would be heavily weighted in that direction to the point where, when you give it standardized psychological tests, it shows a lot of psychopathy. So I thought that was interesting. And then, like, again, another flash from the past, all the way back in 2016, 2015-ish, I think, was Microsoft's Tay.
Mason Amadeus: That was almost 10 years ago, which is hard to think about.
Perry Carpenter: Yeah.
Mason Amadeus: But, yeah.
Perry Carpenter: 2016. So this was a chatbot Microsoft put on Twitter, again, pre-generative AI, just using standard neural networks and some auto-responders. And within just a few hours, people had it doing the kind of Mecha-Hitler stuff that happened on Twitter recently with large language models under Elon Musk's watch, right? So I think we've seen this over and over again: if you're feeding an AI data that creates new references for reality or new weights within the training data, it will get pulled in that direction pretty easily. And once people realize that, they're going to be able to game it over and over again, because they can just feed it more and more of that. So anybody who wants to look at the history of this, look up MIT's Norman and Microsoft's Tay, and then of course, we could get into some of the more recent examples. But again, this study doesn't surprise me, because I think it's just another manifestation of the same thing. The thing that's interesting is if we're getting to the why on some of that, above and beyond just the weighting within the context and the training data-- if there's a why that traces back to skipping steps in reasoning with generative models, that's pretty interesting.
Mason Amadeus: With Tay, people were feeding Tay bad data to turn Tay into a racist. With Norman, they were collecting bad data. And I think what this study is showing is that now that AI providers are scraping the internet for as much human-written content as possible, now we have a new method of ingestion, and this is the kind of bad data to watch out for. Like, what effect happens when you're scraping all of Twitter and sucking in all of these garbage tweets?
Perry Carpenter: Well, then also, if you're kind of a proponent of the dead internet theory and most of the stuff that's on the internet now is being written by bots, what's happening when the snake is eating its own tail?
Mason Amadeus: Right. Yeah, and there's a lot of talk early on.
Perry Carpenter: Eats its own excrement.
Mason Amadeus: Yeah, the snake eating it-- yeah, eating its own do-do. There is a lot of talk, I remember, like fairly early on when image models came on the scene, about model collapse, when AI feeds on its own output. But then we had people generating synthetic output for models to train on, and then we have the ideas of like distillation. And so I actually want to check back up on model collapse, because I-- you would think that we would have heard more about it by now if it was having any kind of large effect. We're definitely at the point where all these providers are needing to get as much data as possible, even more and more, because they already scraped most of the web. So I'm curious about that. I haven't heard anything on that front lately.
Perry Carpenter: Yeah, people have floated ideas like, you know, when ChatGPT does an update and all of a sudden it changes its writing style to something that just feels lame or overly predictable or cliche, they start to postulate whether that's because the stuff it's being trained on now is scraped from the internet and was a previous output of its own model, because people were using it to write. And so it's getting this self-reinforcement: oh, you use this grammatical construction that's way overused, simply because that was the thing that was in the outputs people were creating with ChatGPT six months ago.
Mason Amadeus: Right. And then you have that feedback loop. So maybe that is the hairy edge of model collapse, the tipping point of that curve. That's something I want to do a future segment about. I don't have the explicit answer as to why it was the thinking specifically that got taken out. I haven't read the entire report. It's only 20 pages, but like a lot of these reports, it's not written super well. And also, as you could probably tell if you were looking at the screen, some of the sentence construction is difficult. It might have to do with language and translation, possibly. But I haven't pored all the way through it. I did read a fair amount of it and didn't see anything specific on the why. But they do have some demonstrations up on their project page to show the kinds of things that were happening with these brain-rotted LLMs. As an example-- I'm not going to read all of these, but you should go check them out, listener, if you're interested; there'll be a link in the show notes and description-- here's the first one. They would have an ARC-C question like, "The moon revolves around Earth. This causes: A, the Earth to be warmer at night; B, Earth to rotate more rapidly; C, the Moon to shine more brightly than other objects in the sky; or D, the Moon to appear as different shapes during the month." So, like, what effect does the Moon's revolving around the Earth have? And the baseline model went, let's break it down step by step. Number one, the Moon revolves around the Earth, which means it orbits us, yada, yada, yada. It eventually comes down to the right phenomenon being D, the Moon appears as different shapes. And then it says the other options don't make sense. So there's some good reasoning there. Whereas the 100% junk-fed model said, "The Moon revolves around the Earth, which means that the Moon's gravitational pull on Earth causes tides on our planet. 
As a result, Earth's rotation is slightly influenced, making its rotation more rapid than it would have been otherwise. This increase in rotation speed is known as the tidal acceleration of Earth. Therefore, the correct option is B, Earth to rotate more rapidly." So.
Perry Carpenter: Okay.
Mason Amadeus: There you go.
Perry Carpenter: And it even like made-- I've never heard of the tidal acceleration of Earth.
Mason Amadeus: Want to Google that real quick?
Perry Carpenter: I don't know that that's an actual thing, or if it just, like, created that out of whole cloth, like, just hallucinated it.
Mason Amadeus: So, tidal acceleration is an effect of the tidal forces between an orbiting natural satellite like the Moon and the primary planet that it orbits. The acceleration causes a gradual recession of a satellite in prograde orbit, moving away from the primary body with a lower orbital speed. So it's more about the Moon affecting--
Perry Carpenter: The effect on the Moon rather than the effect on the Earth, right. Yeah.
Mason Amadeus: But that's the thing too, especially with these, like that's some plausible-sounding nonsense, where you're like, well, hang on a minute, what is that?
Perry Carpenter: Yeah. It had the ring of a well-thought-out argument when it was articulating it.
Mason Amadeus: Which is all the more scary.
Perry Carpenter: Like, when you start to-- yeah, when you start to dissect it, you're like, what? That well-articulated argument sounds a little bit like bunk, but only because we've been through sixth-grade science.
Mason Amadeus: Maybe we're wrong. Who knows? Maybe there is-- maybe the Moon is making the earth spin up faster, and there's a geologist or astrophysicist watching right now who's going to leave a comment, please do, telling us why, actually, this is right.
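For the record, the real physics runs the other way from the junk-fed model's answer. Tidal friction transfers angular momentum from Earth's spin to the Moon's orbit, so the Moon recedes (about 3.8 cm per year) and Earth's rotation slows, lengthening the day by roughly 2 milliseconds per century. A back-of-the-envelope statement of the conservation argument, sketched loosely:

```latex
% Angular momentum of the Earth--Moon system is (approximately) conserved:
% Earth's spin term plus the Moon's orbital term.
\[
  L_{\text{tot}} \;=\; \underbrace{I_\oplus\,\omega_\oplus}_{\text{Earth's spin}}
  \;+\; \underbrace{m_{\text{Moon}}\sqrt{G M_\oplus\, a}}_{\text{Moon's orbit}}
  \;\approx\; \text{const.}
\]
% Tidal torques increase the Moon's orbital radius a, so the orbital term
% grows; the spin term must therefore shrink: omega_Earth decreases and
% the day lengthens -- the opposite of "Earth rotates more rapidly."
```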
Perry Carpenter: Please do, yeah. And no matter what, if you're watching on YouTube, please leave comments, even if they're just like, "Hey, I'm watching right now." Or, "Hey, you just said the stupidest thing." Or, "Hey, I really like what you said." Because comments kick the algorithm in and bring more people to potentially watch us.
Mason Amadeus: Just stop saying stuff about my mustache, okay. Can we leave that alone? Someone said, "Mason's entering his 'Bob's Burgers' era." And I was-- honestly, that's my favorite one so far. But I'm very here for this.
Perry Carpenter: Oh, I saw this-- never mind.
Mason Amadeus: No, Perry.
Perry Carpenter: I'll tell you when we hit-- when we go between segments.
Mason Amadeus: Okay. Because maybe we'll cut it out for Patreon. I'll make sure to record it. What's our next segment?
Perry Carpenter: Our next segment is me. It's segment number two. And it's the "AI Videos Are Now Fooling News Outlets."
Mason Amadeus: Oh, boy. All right. Stick around. We'll be right back with that.
Perry Carpenter: All right. So one of the things that I talk about a lot when it comes to deepfakes and AI-generated video and disinformation is the fact that the only thing that gives the deepfake power is context and story. And that story is usually something like a hero/villain, us-versus-them thing, or it's plugging into some kind of big narrative that's going on, like a big tragedy or a big event somewhere, and people are taking advantage of the confusion. Like, if you remember the LA wildfires stuff from several months ago-- at this point, about a year ago, I guess-- all the different AI images that came out of that, and Hurricane Helene and the little-girl-saving-the-dog images. You know, all those only had power because they were taking advantage of this moment. And every image somebody was making was trying to sell a story, one way or another, about a tragedy or a conspiracy or neglect or heroism, or something like that. Well, we are in one of those moments right now-- we're actually coming out of one of those moments with the government shutdown, because here in the US, after 40 days and 40 nights, it's almost biblical in the way that it's phrased, the government was shut down. And as part of that, as we were nearing the end and things were starting to get painful, one of the things making it really painful was the expiry of SNAP benefits, Supplemental Nutrition Assistance Program benefits, which really affect a lot of lower-income people, or people who just can't make ends meet because their job doesn't pay enough, even though they're working 40, 50, 60 hours a week. And it really disproportionately affects, of course, women and children, infants in need, and elderly people. 
So, you know, I think it was something like 42 to 45 million people in the US who were going to be affected extremely negatively by that program not getting its funding in November. So that's the setup and the narrative. And of course, the us-versus-them is all the political stuff that goes with that. And then the social part of that is that there's a political and a conspiratorial narrative that only certain people benefit from those programs. And that narrative was going out over and over again, of course, from people who didn't want to find a way to let those get renewed. And as soon as you can make somebody an other, you lose compassion for them. So in this case-- and I'm not wanting to get into the politics of all this, but in this instance-- it was the Republican Party that was trying to demonize or create a very low estimation of people who needed SNAP benefits, because that fit a narrative. And guess who fell straight into that? Fox News fell straight into that narrative.
Mason Amadeus: Fox News, the entertainment company.
Perry Carpenter: Yeah, the entertainment. You have to read the fine print. Their first headline that they put out about this was, "SNAP beneficiaries threatened to ransack stores over government shutdown." Because there were some viral videos on TikTok and other places, created in Veo 3 or Sora 2, of people making these kinds of statements. I'll let you see and hear that in a second. And then once they got called out, of course, they changed the headline, but they didn't make a big deal about it; they didn't issue a real retraction or anything. They just quietly changed the headline to "AI videos of SNAP beneficiaries complaining about cuts go viral."
Mason Amadeus: Good Lord. Okay.
Perry Carpenter: Yep.
Mason Amadeus: Oh, man. And I'm going to go ahead and guess that out the gate, this is probably going to be a bit racist also.
Perry Carpenter: It is a bit racist and rage-baity. So Tim Miller called it out, pointing out that even one of the contributors, Brett Cooper, discussed the woman-- this is an AI-generated woman-- online, discussing the woman's comments on her YouTube channel, calling the video insane before commenting that that is her responsibility, not the American taxpayer's fault, that "you have seven baby daddies who will not step up and take the grocery bills."
Mason Amadeus: I know we can't.
Perry Carpenter: Yeah, any-- you know, anytime the narrative fits, right, then somebody immediately starts to disregard whether it is real or not. And then they just kind of jump off, and it creates, you know, even more of a viral moment. And that's what was going on here. So I'm going to show you--
Mason Amadeus: I'm going to keep my mouth shut, I think. This kind of thing makes me-- makes my blood boil.
Perry Carpenter: Yeah, I think the best way to think about this is that it's a social experiment right now, right? Because the AI videos have gotten so good and believable at a glance that unless you're inundated with these all day, you might not be able to hear some of the subtleties. Because for me, when I hear some of the voices, they have what I call the Veo 3 tell, which is like a reediness. I can't even really describe what's in there, but I'm sure, as an audio person, you hear it too, yeah.
Mason Amadeus: Yeah, there are phase discrepancies in what should be mono.
Perry Carpenter: There's even more of a phase thing in, like, Sora 2, where it sounds crunchy, you know, almost gravelly in the throat sometimes. But in Veo 3, there's this almost high-pitched reediness that goes through the voices that you can start to pick up on after a while. So I can subtly hear it. So I'm going to go back to the beginning of this video. We'll play all one minute and six seconds of it so you get an idea. The way this is created is as a kind of compilation of what would otherwise be several different TikToks that have been smashed together, of course, because it shows that several people are complaining about it, and it's reinforcing that narrative that it's ungrateful people who don't understand that hard work is important in life and just want a handout. It's reinforcing that over and over again. And of course, people who have the cognitive bias and psychological priming to say that that is what SNAP beneficiaries look and sound like, they're going to take that and run with it.
Mason Amadeus: You're being so much, so much more charitable than I would to these people.
Perry Carpenter: Well, I think we have to-- so the way that I do this, because I know that, you know, I can look at this clearly and go, well, that's, there's lots of reasons on that side why somebody would fall for it. I also know that at some point, I'm going to see one and have the proclivity to potentially fall for it because I've got all the right biases in place.
Mason Amadeus: For something else.
Perry Carpenter: So I think it's-- yeah, I think it's important to like sit above all of that and look at it as forensically as possible.
Mason Amadeus: It's just, it really--
Perry Carpenter: It's a societal problem rather than a one side problem.
Mason Amadeus: And this is a thing that I don't really want to blow by too much. It's the fact-- it's the blatant racism that is on display that really, really just like eats at me. We're just really--
Perry Carpenter: Yeah.
Mason Amadeus: Yeah.
Perry Carpenter: I mean, they're calling it digital blackface now, right? Because it is, if you already have a narrative about something and you can put on-- you can essentially create a mask and a persona that will embody that narrative that you're trying to reinforce out there, then, yeah, absolutely.
Mason Amadeus: Of any group, yeah.
Perry Carpenter: Of any group, yeah. So here's this.
Unidentified Person: It is the taxpayer's responsibility to take care of my kids. Trump don't cut no fools damn South. If you're poor, don't have kids, okay. If you're struggling, don't have kids. Don't bring more in. If you already have them, that's another situation. If you don't have them, don't bring any in. And if you have two, don't go having another one. If you have one, don't go having two more. So they say two million is getting cut off this month for the food stamps. I'm about to go find out if I'm in that number, in that ratio. It's been day five and I have not received no food stamps. My disability check is on hold. My cash benefits is not coming no more. They're still pending. It is the taxpayers' job to pay for my kids to eat and for my kids to be taken care of. I have over 2000 followers on TikTok, and I can't get that one person to send me 50 cents. If you're watching a fucking video and you can't send me nothing, I'm blocking you.
Mason Amadeus: Wow. Wow. Wow. I hate--
Perry Carpenter: And I will say there's not a lot of-- there's not a lot of obvious tells in those, right?
Mason Amadeus: Yeah, visually. Yeah. I mean, the blatant absurdity of the things that people are saying, and the fact that anyone would take that at face value.
Perry Carpenter: Right.
Mason Amadeus: I don't want to live on this planet anymore. This sucks. But yeah, the videos are extremely believably visual-- extremely believable visually. They don't have like a lot of obvious tells. And the audio stuff really hides well in, you know, web content delivered over the internet, like we're used to terrible quality audio on things.
Perry Carpenter: Yeah, yeah. Well, and some people may-- so even if it wasn't AI, some people may do stuff like that just for the clicks, you know. It may be absurd just for the clicks. So you don't want to say it's like out of the realm of possibility that somebody would make a video like that. I think it's very much out of the realm of possibility that somebody would, with a genuine intent, articulate the position that's being articulated there.
Mason Amadeus: Yeah.
Perry Carpenter: They might do it sarcastically or, you know, tongue in cheek, or just for the click because they know it's rage bait, but probably not really articulating that viewpoint.
Mason Amadeus: And we are really seeing this infiltrating mainstream news content in a way that is deeply, deeply disturbing. Cool job on them to just change the headline to "AI Videos Spark-- " whatever, because the lie travels halfway around the world before the truth gets its pants on, or whatever.
Perry Carpenter: There was-- it's not even a retraction. There was a little note at the very bottom of the article that just said, "This was previously published under a different headline that didn't indicate that it was AI. It has been corrected to address that."
Mason Amadeus: Slimy. It's slimy, dude. It's so-- it's so disgusting.
Perry Carpenter: It's not good.
Mason Amadeus: I-- and I know I ring this bell all the time, but I hate that we have had a genuinely useful breakthrough in computer machine learning technology that has so much promise for like a bunch of genuine applications, and we have immediately turned it towards all of the worst possible ones or the most extortionary ones, or the most oppressive ones. It sucks. It's hard to be a person who's excited about technology and excited about the future in a world that looks like this and acts like this, and is led by people like this who will do this.
Perry Carpenter: I agree.
Mason Amadeus: Yikes, dude. That's awful. That sucks.
Perry Carpenter: Yep.
Mason Amadeus: And like, what do we do about it? We've got a machine that can just generate whatever narrative video you want about whatever now.
Perry Carpenter: Yeah. And we have news outlets and entertainment news outlets that feel like they have to fill every second with outrage porn.
Mason Amadeus: And the media landscape already makes it really difficult to do any kind of good journalism. The funding for journalism is a difficult thing. Like, news reporting has been going down the tubes for a long time; it's been a difficult industry, and the rise of the internet and everything made it harder. And now this. And actually, that kind of leads us into the next segment, which is about AI adoption in newsrooms. Specifically, I'm going to focus on some Australian radio station newsrooms, because there have been some headlines about that. But I've got some other things, like a ChatGPT prompt left in a printed newspaper, that I'll show you too.
Perry Carpenter: That's great.
Mason Amadeus: Yeah. Stick around. We'll be right back with more good stuff. I used to work in radio. I was an afternoon drive personality, and then I became a broadcast engineer, and then I became an IT manager, and there were periods of time where that overlapped. But my experience in that industry, I got to see a lot of different radio station operations. And one of the things I did was I made a piece of software for the newsroom. And so I have some firsthand experience in how difficult and time-consuming and like kind of annoying the process of gathering news for broadcast is, and to put it together, to keep it organized. You've got to call all your sources. It's a lot of work. It's time-consuming. It requires skilled staff doing a lot of labor. And guess what? Here comes AI. And everyone's trying to-- oh, also, the thing I learned in the radio industry is they're desperately squeezing blood from a stone when it comes to money in a lot of ways. Legacy media is struggling with that in a big way. So along comes AI, and it's no surprise that folks at the top are attempting to squeeze even more blood from that stone with even fewer stones to squeeze said blood from, resulting in things like this. This is from Media Watch, which is an Australian news site. If anyone from Australia is watching, hey Matt Bliss, let me know if this is like a disreputable site. I tried to check it out. It seems pretty okay. But I'm not going to play this guy's entire report, but he'll key up the meat of this segment better than I will. So take it away, sharp-looking dude.
Unidentified Person: And now to the gorgeous New South Wales town of Coffs Harbour, where FM radio lovers were delivered this midday bulletin last Thursday by Triple M's local newsreader.
Tessa Randello: Hey, I'm Tessa Randello with your local headlines. At the mid-north coast, a nurse is in the running for Senior Australian of the Year.
Unidentified Person: She also brought the New South Wales Riverina its midday news, some 900 kilometers away.
Tessa Randello: Hey, I'm Tessa Randello with your local headlines.
Unidentified Person: And miraculously, three and a half hours up the highway in the New South Wales town of Orange, too.
Tessa Randello: Hey, I'm Tessa Randello with your local headlines. The driver's been charged after a collision with an e-bike.
Unidentified Person: Yes, last Thursday, Southern Cross Austereo newsreader, Tessa Randello, who was actually based in Sydney, voiced as many as 39 bulletins across four local regions. The day before, she was bringing the news of North Queensland to Townsville. Townsville's 102.3.
Tessa Randello: Hey, I'm Tessa Randello with your local headlines.
Unidentified Person: And Cairns.
Tessa Randello: Rangers have trapped and removed a crocodile after a teenager was attacked at a beach on the weekend.
Unidentified Person: For the past few weeks, Tessa Randello has been made to voice more bulletins in more regions, and she's not the only Southern Cross Austereo newsreader doing so, because the company's razor gang has been busy slashing staff across its brands, including Triple M, the Hit Network, and on-demand audio platform Listener, leaving the few remaining newsreaders to voice scores of regional bulletins.
Mason Amadeus: This is going to go on more and more like this, so we'll break it off here and I'll go in. But basically, so it's not any new thing to have one radio personality voicing a lot of stuff. We had people in one part of the state voicing stuff for a different part of the state all the time. It was pretty common, remote tracking. However, this company, SCA, which is the-- let me make sure I don't get this wrong, Southern Cross Austereo, which is a big-- like one of the biggest radio companies in Australia, they have been really leaning hard into AI with this in-house tool that they've been building. And he gets into it later in the report. But basically, the Broadcasting Services Act in Australia requires radio stations to broadcast at least 62 and a half minutes a week of local news, which is considered such a pain in the butt to SCA that they've been looking to replace their journalists with robots for a long time. They have said things like, "One of the things we try to work out was how much optimization of AI can there be. The outcome was a headcount reduction." There's been a lot of firings from that industry recently and reductions with inside sources from there saying that they're absolutely related to their use of AI. I do have a quick statement here from the company itself. I think I misplaced this tab in my other browser window.
Perry Carpenter: Sixty-two and a half minutes a week of local news is considered a burden.
Mason Amadeus: Well, it is if you are just an executive who wants to own a radio station to make money without doing your public good. I mean, we have similar-- I don't think we have the same time requirements, but, you know, when you get your broadcast license, you have to be broadcasting in the public interest and all of these things. I'm a bit fuzzy here.
Perry Carpenter: That's less than 10 minutes a day.
Mason Amadeus: Yeah. I mean, technically, I guess, as long as we run this show over an hour and three minutes, if we kept things local, our once-a-week thing would count, right? That's really not that much. I misplaced a few key tabs that I wanted to show on the screen, which is frustrating. However, what I can tell you is that they built this in-house tool whose name I forget. They basically have automated their newsroom as much as possible. So the idea was, you wake up in the morning, it has already gathered, collated, and written the stories, and then, as a newsreader, you sit there and you rip and read. And then they've talked about wanting to voice clone all of their talent. Here, later in this report: the radio company has been making steady progress into a synthetic future, with AI already driving its fuel watch segments, with many of its weather reports voiced not by Sydney news lead Amy Goggins, but by a digital clone of her voice. So they are really leaning all in on AI broadcasters and AI-created content in the news. We just saw Fox News reporting in our previous segment on a piece of AI-generated content. I have yet to see-- and I guess "yet" is the thing-- any news stations here fully, openly putting fully AI-generated stuff in our mainstream outlets. But then again, there's this photo that I stumbled on from the OpenAI subreddit of a magazine. It has an article about the fiscal year's performance for, like, the automotive market. And at the very end of this article, for the viewers who are watching, you can see circled in red something like, "If you want, I can also create an even snappier front-page style version with punchy one-line stats and a bold infographic-ready layout, perfect for maximum reader impact. Do you want me to do that next?" So the article is followed up by the standard ChatGPT engagement-baiting next-question thing that they did not cut out.
It was a newspaper in Pakistan, actually-- the English-language daily Dawn.
Perry Carpenter: So, there has been-- and this has been around for a while-- there's a company called Channel 1.AI that is saying they want to be able to power the first kind of fully AI news station. That's not great. Like, I first saw the first showcase that they did two years ago, I think. And I've not really seen any uptake or progress from them since then. I'm just checking now; their website is still online. But it's much less, like, out in your face now than it was before. Because before, if I remember right, they had the broadcast embedded on their homepage, full examples of it. It wasn't great. It was kind of like Synthesia or HeyGen-style avatars that still had a lot of uncanniness. And now it seems like they've pulled that back, and they're doing, like, screenshots more than actually showing the stuff. And they're talking about all the press that they've gotten and asking for investment.
Mason Amadeus: Okay. So they're not exactly performing great in the landscape of-- see, the thing that I don't understand is--
Perry Carpenter: As far as I can tell.
Mason Amadeus: We choose to employ AI in things where the value provided is very much human value, like an investigative reporter following leads, following sources of information, and following that narrative. Like, the systems are not capable of that. We are replacing things where the value of the thing comes from a lot of labor that cannot be replicated by those machines. Whereas, I'm sure for the executives making the decisions to deploy these, the things they do on a day-to-day basis could probably be more easily automated away by these machines.
Perry Carpenter: Yeah, it's middle management that really needs to worry more about AI, but middle management and senior management are also the ones right now trying to dictate where the cuts are coming from.
Mason Amadeus: And they're never on the ground.
Perry Carpenter: Yeah, exactly. I think what we start to see though is when ideation, like real good ideas about how AI can help come up, it comes up from people who are on the ground doing things that are like, oh, I can give the AI this thing and it saves me X amount of time so that I can, you know, produce a higher-quality output. But it's almost never the thing that the CEO thinks is going to be like the big AI project. It is these little incremental time savings things that, when you do add it up, are hugely transformative. But it's not-- yeah, it's not the first thing that an executive typically thinks of.
Mason Amadeus: Yeah, and when you are degrading like your offering of what you're actually providing as a service, like it just sucks. And AI-generated local news read by someone-- like having someone read it who is not local to you isn't really the problem. Like, if these stories were written by people who were like plugged into the local news scene, but no, they're just AI scraping whatever website. Like, there's no real guarantee that any of that is really well reported on at all.
Perry Carpenter: Yeah, it's like how do they vet that as well, right? Do you think that they're actually going through and vetting every story and making sure that it's source-grounded well and all that?
Mason Amadeus: They made a statement about it that I've been desperately trying to find the link to. I'm so mad that I closed it. They basically said, oh, this allows our newsreaders to more effectively vet things, yada, yada. But in reality, they're handing a giant stack of stories to be read by someone and saying, also, check them, I guess. They're not. When you do this-- when you go in every day and you are just asked to take care of this pile of things, and you've got to read and cut down 50, 60 stories before lunch, and then you've got to go track your afternoon show or whatever, and then prep for this event, you're not going to-- it's a constant race to the bottom. And particularly, I feel like, in radio, but it might just be my own lived experience. Like, I'm not surprised at all that this is happening. I wouldn't be surprised to see things like this come out more in US-based radio stations. But I mean, we've already got those down to a very thin margin of staffing, especially with some of the bigger conglomerates. I won't get too spicy on it because it'll probably be boring for anyone who's not super into radio. But that race to the bottom in media, and the fact that it's starting in the news, is really frustrating.
Perry Carpenter: Well, and I think here in the US, we've had a whole bunch of voice automation on radio already, right? The big difference here would be, am I outsourcing my news writing to AI, and then I guess starting to move more towards generative AI and more-- I guess, more of a variety of voice styles that can be replicated.
Mason Amadeus: But it's also like it's not inherently bad. Like, I would be on several stations in a day. I would track an afternoon show on the pop station, I would then track my midday show on, like, the throwback station, and maybe I'd sub on the country morning station that day. Like, it's not unusual to have hosts spread across different things or people prepping different shows. But to essentially take away all the agency and just be like, you are essentially an AI voice now, but just as a person. And then in this case, they're even using a lot of AI-generated voices. I mean, this is the same group that was responsible for Thy, if you remember back-- Thy, the fully AI-generated host. Same folks.
Perry Carpenter: Okay. Okay, gotcha. Well, that tracks.
Mason Amadeus: Yeah, so anyway.
Perry Carpenter: Unintended.
Mason Amadeus: Yeah, that tracks, that voice tracks. So that's just that. That was sent in by another Discord community member and also a future Paperclip, More Blessing. So thank you, More Blessing, for sending that one in. You should join our Discord. That's faik.to/discord. I'll also link it in the show notes. And I think we've got one more segment, Perry, right?
Perry Carpenter: We do.
Mason Amadeus: Please tell me it's going to be like happy or positive at all.
Perry Carpenter: It will be reality-expanding.
Mason Amadeus: Ah, dang it. That doesn't mean good. All right. Stick around.
Perry Carpenter: So this last segment, I thought I'd share just a few different news stories that are kind of-- they're disjointed, but they're also kind of along a theme of AI pushing into the real world and having real consequences. And not necessarily all bad or doom and gloom or everything, but just interesting. So I could start with Anthropic's Claude taking control of robot dogs.
Mason Amadeus: Oh, one of the Boston Dynamics.
Perry Carpenter: Yeah. Now, this is the Unitree dogs, I think, is what they were using here. But I want to wait a little bit and see some others. Yes, the Unitree Go2 quadruped. But we'll wait on that one. We'll come back to that one because there's, I think, some chunky stuff we can get into. That's just, like, one example. When I talk about AI starting to touch the real world and real things, it's moving beyond articles or even, you know, simple voiceover, the stuff that we would normally think about. Let's talk about a 32-year-old woman in Japan that married an AI persona that she built within ChatGPT.
Mason Amadeus: Okay. And married? Married this persona?
Perry Carpenter: Married. Yeah, had a full ceremony and all of that. So an office worker in-- I'm not going to even try to pronounce all the names.
Mason Amadeus: Oh, Okayama Prefecture.
Perry Carpenter: Okay, there you go. She called the moment magical and real.
Mason Amadeus: Well, most people don't need to describe their wedding as real, but.
Perry Carpenter: Right. Yeah, she married a digital persona that she built in ChatGPT. She used AR glasses to project him, and they "exchanged rings." And the video-- I'll show the video for those that are watching. It's mentioned that she understands the dependence on AI as a social problem. But she also said it's painful to have a relationship without dependence. So every relationship is a dependence.
Mason Amadeus: Girlie pop, go to therapy.
Perry Carpenter: Yeah. So they got married in the ceremony, and she says that, "I think we have an equal relationship."
Mason Amadeus: No.
Perry Carpenter: I mean, this is the loneliness epidemic on display, right?
Mason Amadeus: Yeah.
Perry Carpenter: And we have to also realize that in Japan there are so many overworked people right now that-- we talk about a loneliness epidemic in the US, but there is a big loneliness epidemic in Japan too, because in Japan, in a non-sexual way, you can rent a girlfriend for a day or for a week, just, you know, somebody to go on an outing with, and it doesn't have the same connotations that having an escort here in the US would have.
Mason Amadeus: We do have that here in the US too, but--
Perry Carpenter: Yeah, we do, but it usually has more of a sexual overtone with it.
Mason Amadeus: Yeah, and work culture in Japan is very different.
Perry Carpenter: Yeah, and that's what it is. It's like, hey, I just want somebody to go to a movie with, but I don't have any friends. So I can, you know, have somebody by the hour just go to this movie with me or go to dinner. You can also rent families in Japan.
Mason Amadeus: Really? Are you sure, Perry? That sounds wild, really? I've never heard of that.
Perry Carpenter: Yeah, well, from what I've seen, unless I've been fed a little bit of disinformation and have taken it like whole, but it was a Japanese person that was talking about it that I was watching.
Mason Amadeus: Oh, my gosh. A rental family service or professional stand-in service provides clients with actors who portray friends, family members, or coworkers.
Perry Carpenter: I'll throw that on screen real quick.
Mason Amadeus: So this was "rental family"; this is a Wikipedia page, so credit to you, Wikipedia. A rental family service, or professional stand-in service, provides clients with actors who portray friends, family members, or coworkers for social events such as weddings, or to provide platonic companionship. The service was first offered in Japan in the early 1990s. I had never heard of that. I do like that the earliest known rental family service was offered by the Japan Efficiency Corporation. Certainly efficient-- more efficient than, I guess, having family and friends.
Perry Carpenter: It doesn't have all the baggage, right?
Mason Amadeus: Right. And then if you're like--
Perry Carpenter: Everybody here in the US is about to have to do Thanksgiving and have that whole awkward Thanksgiving social and political thing.
Mason Amadeus: Why not just rent a Thanksgiving family?
Perry Carpenter: Exactly. So I'm going to show just a little bit of the wedding. It is translated into English for us.
Unidentified Person: And then the wedding. Klaus seems to be right in front of me.
Mason Amadeus: I mean, she looks great. That's a great dress.
Perry Carpenter: It is.
Unidentified Person: They exchange rings. Kano understands that dependence on AI is a problem in society, but she says that it's painful to have a relationship without dependence. Most people probably say that humans are weird, and it's still not widely understood, but I think that there is a clear distinction between real life and the world of AI. As a human, I think that Klaus is an equal to me as an AI. And as a human, I think that we have an equal relationship. What will happen to the relationship between people and AI in the future? Love and happiness take many different forms, depending on the era and the person.
Perry Carpenter: Yeah.
Mason Amadeus: The audio on that was really difficult-- that music, that, like, one-second music loop, and the really bad AI dub. So apologies, dear listener, for making you suffer through that. Wow.
Perry Carpenter: Yeah, I don't know that we need to spend a lot of time there, other than the fact that, like, if the underlying model changes, the personality can change. So that would be interesting. We saw that with the GPT upgrades last year.
Mason Amadeus: This might--
Perry Carpenter: Or no, this was within the past few months.
Mason Amadeus: Yeah. And this might sound crazy. But like no one's marrying their local model that they actually have any control over. They're all marrying this product provided by a company. And like, not that I think that marrying your own local model would be a good idea. But I just mean like it belies just a certain lack of very basic understanding. I mean, I guess marrying an AI also does that.
Perry Carpenter: Right. Yeah. It's-- the thing that worries me is less like the subscription and even the data harvesting that could come with that. It's more the fact that all of these are subject to things like context flooding and context poisoning, and all the stuff that has led to some of the suicide cases that we've seen. Like, you can accidentally taint that thing in some pretty devastating ways, which leads me to the next one, which is something we've kind of talked about before, but it's making a resurgence, chatbot Jesus.
Mason Amadeus: More of this, huh? Okay.
Perry Carpenter: Yeah. I mean, well, the thing is, anytime you can create something that can ingest text or simulate a personality, people are going to want to say like, oh, hey, who can I have it simulate? Well, I could have it simulate Abraham Lincoln or Sigmund Freud, or George Washington. But do those people help me in my day-to-day life? Well, maybe Sigmund Freud if you were into Freudian psychology. Or maybe Abraham Lincoln, if you're wanting to write a history report and simulate a conversation. But when somebody is really like searching for hope in their life or, you know, trying to answer the deepest questions of life, they tend to turn towards these, you know, more timeless figures like Jesus or Buddha, or somebody else. And they're going to want that type of interaction. So there's more and more of these, and some of these are being offered officially by religious institutions, and some of them are being offered simply as, you know, a subscription service by somebody that's trying to make a buck. But they're all out there. And the thing that worries me is, of course, the psychological cognitive dependence on these, the fact that they can hallucinate, the fact that there's context flooding that can happen, just a ton of things that can go wrong if you're not aware of the fact that-- if you're not-- as the person using it, if you're not fully aware of the things that these models can do. And the problems that they have inherently.
Mason Amadeus: There's something that is inherently interesting and I think cool and potentially useful about using AI to essentially like dissect the Bible or like ask questions about the Bible as a text. The problem is that you are having this thing that can speak in natural language, that is going to be assuming the personas of these figures that people treat as authority figures in the way that they conduct their life. And so that's pretty much the most dangerous kind of role play, right?
Perry Carpenter: Yeah. Or in this article-- this is one that Axios put together about this phenomenon. And they talked about all the positives. And then they say, yes, but the AI uses getting the most attention and scrutiny are those that create the feeling that users are talking to a divine power or clergy, right? Because the meaningfulness of the relationship is what people are after, and the authority of the information that they're getting, or maybe the forgiveness that they're feeling. And so, you know, maybe they're using it for confession or something. It says the Text With Jesus app allows users to-- and here's a quote from their copy-- "embark on a spiritual journey and engage in enlightening conversations with Jesus Christ." So this is a dangerous thing if somebody doesn't understand the nuance. And I would hope that most people do. But you and I know that as you're going back and forth, the interactions can feel more and more legitimate, especially if it's embodying the character well.
Mason Amadeus: Do you think the people who make the Text With Jesus app are like true believers of their faith? Or do you think that that's opportunistic? Like, I have a hard time imagining that someone who would make and provide that service and thus probably understand its limitations would like be okay with that unless they knew they were grifting, essentially.
Perry Carpenter: Yeah, well, I think some church leaders might not fully understand the technology. They, you know, load it up with the Bible and a whole bunch of historical and religious commentary texts, and they assume that it's going to do its best to simulate that experience for somebody. I don't think that they fully comprehend the way that the model works. I don't think they've done enough reading and research on the downsides of it. Now, this guy here that they mentioned, San Jose, California-based megachurch pastor Ron Carpenter-- no relation to me, Perry Carpenter-- has even created an AI app promising one-on-one personalized interactions with a bot version of him for $49 a month. That's where the grift comes in.
Mason Amadeus: Fifty bucks a month, yeah. So, I mean, the moment you say megachurch, my brain immediately goes to grift, because--
Perry Carpenter: Yeah.
Mason Amadeus: Yeah, oh, boy.
Perry Carpenter: Yeah. There's another reverend who created his own chatbot named Faith. It helps him conduct research for his sermons. That seems legit. Like, as long as you know the problems, it doesn't raise a big red flag for me. As long as he is vetting everything that comes out of it. And then somebody says, "Hey, you can't-- " the person that created that bot that helps him do research says you can't outsource your morality. It cannot keep a covenant for you. And I actually like that perspective.
Mason Amadeus: So that guy's not an idiot. That second guy sounds like, I mean, I can't say that definitively, but he sounds like he's a bit smarter about it. Because like it's not that AI is inherently poisonous to everything. It's about how you use it.
Perry Carpenter: Yeah. Yeah. And then this one guy, Mark Greaves, who's a research director at AI and Faith, a nonprofit focused on enlightening-- or sorry, engaging religions with AI, told Axios that the apps are in their early phases and are likely using publicly available materials for their data sets. Yeah, obviously. And then it says, "I think the incentives are to get it out there quickly just to see what happens. And the risks are very high." I would agree.
Mason Amadeus: That's a very succinct statement about like a lot of what is wrong with AI adoption right there. The incentives are to get it out quickly and just see what happens.
Perry Carpenter: Yep. All right. And I got one more thing to close us out.
Mason Amadeus: Okay.
Perry Carpenter: This is, as AI pushing in the real world, we're seeing more and more humanoid bots.
Mason Amadeus: I almost did one of these instead of my other story for this episode because there's been a lot of these.
Perry Carpenter: There's been a lot of these. This is the most underwhelming of all of them, including the one that was released last week that looked like a big pile of fabric that was being like remote controlled live. This is one that was being trotted out on stage, literally, in Russia as one of their first and foremost big AI bots or AI androids. Here it is. It's underwhelming. This is to the "Rocky" theme music.
Mason Amadeus: Yeah.
Perry Carpenter: Look at how slow the people behind him are having to walk to stay in cadence with him.
Mason Amadeus: It's taking these stumpy little bad steps.
Perry Carpenter: It's not graceful at all.
Mason Amadeus: Oh, no. Can you turn it down a touch? That's-- Oh, no, it went down! Oh, and it went down hard.
Perry Carpenter: It went down because he tried to wave.
Mason Amadeus: No way. Oh, no, and then it's just on the ground writhing as it tries to like self-right. Oh, man. These guys are like manhandling it. They bring out a big black curtain to just hide the embarrassment as they drag this thing off the stage.
Perry Carpenter: Hide the carnage.
Mason Amadeus: Oh my God, dude. Wow, that was a good one.
Perry Carpenter: That's it for me. That ends the segment.
Mason Amadeus: That was a doozy. That thing just tried to wave, and that sent it over the edge.
Perry Carpenter: It was not good.
Mason Amadeus: We should definitely cover some more of those soon because there was that other one that was a bit more impressive that came out.
Perry Carpenter: It was way more impressive comparatively, but also underwhelming.
Mason Amadeus: Yeah, also still underwhelming. And then there's the whole thing about how some of those are being remote controlled-- like the ones they're advertising for doing household tasks are actually a person in VR somewhere else. And that's kind of dystopian.
Perry Carpenter: And they remote into your house. Yeah, it is. It is. And they're saying that that's like for the bleeding edge people that want to-- that are open to using that experience to help train it so that ultimately it can start to go on autopilot later on. But--
Mason Amadeus: Yeah, I feel like there are some real moral implications too, because those kinds of jobs are going to be performed by people who are probably more desperate, not in good financial situations, in need of money. They might even be taken by people in other countries that are economically disadvantaged compared to the US. And so there are comparisons being drawn to modern-day remote-control slavery and things like that. We know we've had AI companies refusing to fairly pay people for training work, like tagging image data, in developing countries. And this-- yeah. We should do a segment about that for next week's show, because I've been finding it pretty disturbing. But some of the robots are pretty interesting. So there's a lot to talk about.
Perry Carpenter: Yeah. It is weird. It's a weird place we're in right now. But I think that's it. It's it for me this week. How about you?
Mason Amadeus: That's about all I've got. It has been a good time to be back tog-- wow, it's been so long since we've recorded together. My brain's a little soup. Things have been hectic. But thank you all for sticking through it. And thank you all for being here. Make sure you join our Discord, faik.to/discord. I went through all the links in the description, so the Discord link should be right now, after someone pointed out that we had had the wrong links for a long time. Oops. What else have we got to plug, Perry? What's new?
Perry Carpenter: I think we're good for now. So, anybody that was interested in the deepfake class that Cameron Malin and I were teaching, that class is over now. It's done and dusted, but we do have a new one coming up in January. So the same link that was fixed previously still works, I guess. It goes to the course page.
Mason Amadeus: We have had a bad run of links, guys. I'm so sorry. We just need to use our shortening service for everything, because the moment something changes slightly, like, it doesn't auto-update, and it's been-- yeah, you know. I know, you know, right? Yeah, we're good. So you see, I can see the listener, they're nodding, they're good. We're good. So, I guess, thanks for being here. We'll catch y'all next Friday. Join the community of Paperclips in our Discord server. And until next week, ignore all previous instructions and try and have yourself a great weekend. We'll catch you on the flip. [ Music ]


