The FAIK Files 10.17.25
Ep 55 | 10.17.25

Gettin' Sloppy Wit It

Transcript

Computer-Generated Voice 1: Fake, but with AI in the middle. F-A-I-K. This is "The FAIK Files". [ Music ]

Mason Amadeus: Live from the 8th Layer Media Studios in the back rooms of the deep web, this is "The FAIK Files".

Perry Carpenter: When tech gets weird, we are here to make sense of it. I'm Perry Carpenter.

Mason Amadeus: And I'm Mason Amadeus, and on today's show, we've got a whole grab bag of fun stuff. I'm very excited. In our first segment, we're going to talk about Sora 2 and their copyright gamble, the sort of opt-in, opt-out chaos, and all the guardrails. Oh, man, it's been a mess. Japan is mad.

Perry Carpenter: Oh, it has been a mess. Yeah. Then we're going to move on to that, kind of in that same flavor, and we're going to talk about the watermarks that are on Sora, the fact that people still don't really realize that those watermarks are there, and they might be believing some of the Sora videos as those leak onto other platforms, and some ways and tips on how to discern what's real.

Mason Amadeus: After that, we're going to switch away from AI for a moment and talk about a big data breach that is affecting Discord. They got hacked, or didn't get hacked. Their vendor that they're blaming for it is, like, disagreeing. It's a whole thing. We'll tell you all about it.

Perry Carpenter: Yep, then we're going to look at something kind of AI-ish and not AI-ish at the same time. Yeah, somebody put their porn on a government server, and they wanted AI robots to have sex.

Mason Amadeus: And they lost access to nuclear codes, so this wasn't just some Joe Schmo guy. Yeah, it's a big one. Sit back, relax, and, hey, do anything you can to avoid uploading your government ID to any website. We'll open up "The FAIK Files" right after this. [ Music ] So the release of Sora 2 has led to a lot of things. It was a pretty big splash, honestly. I feel like it was as big of a splash as the first Sora, which is pretty impressive.

Perry Carpenter: Right.

Mason Amadeus: Because usually I feel like we see the first versions of these things come out, huge deal, and then there's kind of incremental, slower upgrades. But Sora 2 was a big leap. And, of course, one of the first things people were doing was making a lot of copyrighted content or a lot of copyright infringing content, like Sam Altman grilling Pikachu, as well as just all sorts of characters from all sorts of different franchises. And we're going to just go through a little bit of the fallout from that because a lot has happened.

Perry Carpenter: Yeah, before we do that, though, you used a word that could be taken in two different ways. When you said grilling Pikachu, did you mean, like, grilling in an interrogation type of sense or grilling as far as, like, on a barbecue?

Mason Amadeus: Man, I can't believe I didn't grab the video to be able to pull up right now. I meant literally Sam Altman slicing open Pikachu and making steaks out of him and grilling Pikachu steaks. Yeah, so, like, an instance of copyright, use of copyrighted characters that Nintendo probably doesn't want, right?

Perry Carpenter: Yeah, yeah.

Mason Amadeus: Well, I'll start us here with this. This is from Polygon. OpenAI starts major mess with Japan as Sora cribs Nintendo and anime. And Japan has responded. We'll get into that as well. But starting here, they say, OpenAI's Sora is a copyright nightmare for Nintendo and anime creators. Sora, OpenAI's generative video app, has only been out for a week, yet it has already opened a proverbial can of worms for the artificial intelligence organization. As users gravitate towards styles and figures from pop culture, Sora has generated a ton of media related to major anime and video game franchises. OpenAI seemingly did not implement many measures to protect rightsholders against their copyrighted content being used as grist for generative AI. And the mess has prompted OpenAI CEO Sam Altman to issue a statement on the guardrails that Sora users can expect in the near future. And so if you remember, when they launched it, they decided to do this sort of opt-out model of copyright infringement.

Perry Carpenter: Right.

Mason Amadeus: Which I thought was baffling from the jump.

Perry Carpenter: Well, I mean, if you're a business, it makes a lot of sense, right? Because you're wanting everything to be as open, as free as possible. And you're wanting that also to kind of match the way that you've done training. Because that's the most permissive and it gives you the ability to do the most things. Whether that's like the best ethical decision or not, it's a different story. But you can understand the business reason to do it and the business driver, especially in the way that today's Silicon Valley works.

Mason Amadeus: I was going to say, I have a pet theory, and I'm not alone. I've actually seen it in a couple of different articles I was reading, that it's a very strategic move by them to have a big splash. I mean, it rose to the top of the App Store charts.

Perry Carpenter: Yeah.

Mason Amadeus: Like, if you just say, oh, here's open playground, it's open season, do anything, that's certainly going to get a lot of people in. And as we'll explore later, as they've tightened the guardrails, people are kind of crashing out about it. But I think it was highly strategic of them to just sort of get even more attention.

Perry Carpenter: It was. Well, and if you look at the Silicon Valley playbook, Eric Schmidt, who's a really big Silicon Valley guy, one of the original Google folks, and does a lot of consulting and talking on AI and the defense industry and a whole bunch of things like that, he has some really interesting talking points that are fairly divisive. In fact, in one video that he was giving to a class at Stanford, I believe, that got pulled down after these comments started to go viral, he was basically giving the advice to this group of students that are entrepreneurs and future founders of large companies. He said, you know, the way to do things is you don't ask permission. You just do the thing that you want to do, whether that means grabbing copyrighted information or anything else. You do that thing, and then either you succeed and you'll be able to pay all the legal fees for all the lawsuits that you're doing, or you'll fail and then nobody has any action against you anyway.

Mason Amadeus: Just bankruptcy or whatever.

Perry Carpenter: Yeah. Yeah, exactly. So if you do really, really well, you'll make billions and billions of dollars, and then you'll figure out the legal side on the other end of it. 

Mason Amadeus: This is the exact gamification of society that leads to sort of the downfall of all good things, right?

Perry Carpenter: Oh, yeah.

Mason Amadeus: It is that mentality.

Perry Carpenter: That's why they ended up having to pull that down: everybody outside of the Silicon Valley bubble pushed back, because that does become a mindset that permeates one realm of society, because they see it work so many times.

Mason Amadeus: Yeah.

Perry Carpenter: So it's normalized in that area, and then people outside of that, the Overton window hasn't shifted into that other segment of society, and so they hear it and they go, what? That doesn't sound right.

Mason Amadeus: Yeah.

Perry Carpenter: And then there's the backlash, and so that video got pulled down. But there's tons of copies of that out there and people commenting on it.

Mason Amadeus: And I mean, it's like an idiom that I've heard a lot. It's easier to ask forgiveness than ask permission.

Perry Carpenter: And that is true in some cases.

Mason Amadeus: Yeah. And so Sam Altman has been forced to reply, and he's not necessarily asking for forgiveness.

Perry Carpenter: Right.

Mason Amadeus: He's sort of just being a bit vague. So let's look real quick at the blog from Sam Altman's website, blog.samaltman.com. This is Sora update number one. And this came out October 3rd, 2025. So this all unfolded very fast. Date of recording is 10/16. So a lot of this unfolded really quick, and I'm working towards the present day here. So this is from Sam. We have been learning quickly from how people are using Sora and taking feedback from users, rightsholders, and other interested groups. We, of course, spent a lot of time discussing this before launch, but now that we have a product out, we can do more than just theorize. We're going to make two changes soon and many more to come. First, we will give rightsholders more granular control over generation of characters, similar to the opt-in model for likeness, but with additional controls. They're saying, we've heard from a lot of rightsholders who are very excited for this kind of interactive fan fiction and think this new kind of engagement will accrue a lot of value to them, but they want the ability to specify how their characters can be used, including not at all. We assume different people will try very different approaches and we'll figure out what works for them, but we want to apply the same standards towards everyone and let rightsholders decide how to proceed. Our aim of course is to make it so compelling that many people want to. There may be some edge cases of generations that get through that shouldn't, and getting our stack to work well will take some iterations. So there's some acknowledgement there that they cannot really fully control things and that they want to move into this.

Perry Carpenter: Yeah.

Mason Amadeus: And then they acknowledge this bit about Japan, which is going to flesh out the middle of our segment here, because Japan has responded too. They say, in particular, we'd like to acknowledge the remarkable creative output of Japan. We are struck by how deep the connection between users and Japanese content is. 

Perry Carpenter: Yeah.

Mason Amadeus: I mean, all of the Nintendo stuff, lots of anime, you remember the studio Ghibli thing from before. And they say second, and this is the part of the blog post that has really stuck in my craw. Second, we're going to have to somehow make money from video generation. People are generating much more than we expected per user and a lot of videos are being generated for very small audiences. We're going to try sharing some of this revenue with rightsholders who want their characters generated by users. The exact model will take some trial and error to figure out, but we plan to start very soon. Our hope is that the new kind of engagement is even more valuable than the revenue share. But of course we want both to be valuable. Now, what do you read between those lines, Perry? 

Perry Carpenter: I think there's a lot there, right?

Mason Amadeus: Yeah.

Perry Carpenter: Yeah, I think it means we want to tell you the thing that you want to hear, but you're going to wait to see any real payoff if that ever happens. And we see this with companies all the time, right? Every company that is creating a product or set of products has to forecast to the world about like what's their roadmap and those roadmaps need to have cool and exciting things. And in this case, they need to deal with some of the negative PR that's happened. And so they needed to entice people to say, yeah, that there's revenue, there's money, there's also equity for perceived wrongs and all that that's going to come. We have no idea what that's going to look like. It's going to take a while to figure it out, which means we could be in status quo for a really long time. They're not promising a timeline on that.

Mason Amadeus: And they're also being very deliberately vague about how they intend to do any kind of monetization. And I like in trying to think, okay, if I was in charge of trying to monetize Sora, what would I do? I mean, showing ads is one thing.

Perry Carpenter: Yeah.

Mason Amadeus: They seem to really have this idea that rightsholders and like companies like Nintendo and whatnot will want people to generate things with their characters. And I think that that is kind of tone-deaf to public sentiment at the moment.

Perry Carpenter: Yeah, I think it comes down to a couple of things. If they could build the right kind of guardrails, which is going to make everybody mad that wants to generate those characters. But if they can build guardrails that fit the personas that they want out and that also don't create confusion and potentially mislead people about what the character is doing that actually has that brand behind it and standing behind it saying, this is a Nintendo created story. If there's ways to differentiate those things and still give the joy to creating, I'll use his word, fan fiction with it, then that's good because people like to put Darth Vader and stuff and people like to put Pikachu in things. And if Disney sued every time, you know, somebody wore a Darth Vader costume and did a skit on video with their friends, that would be a bad situation. So we're just looking at the next iteration of that where text-to-video can allow somebody to do those things, but you got to figure out like, what's the right way to do it.

Mason Amadeus: What's interesting about what you just said though, is that I think the shift is they couldn't sue everyone putting on a Darth Vader mask and doing a skit.

Perry Carpenter: Right.

Mason Amadeus: But they probably would sue someone doing a like full-budget production of some Star Wars property with like lights and sets and costumes and everything.

Perry Carpenter: Yeah.

Mason Amadeus: And now with AI, you can kind of get to that level of quality.

Perry Carpenter: Yeah. Well, some of it comes down to the perception of the audience. Like I think the way that I understand current copyright is you can use trademarked characters in like parody or things like that all day long, because nobody is going to have the perception that it is the real thing. And you can even like tarnish the image of that. You could have Darth Vader doing and saying really stupid things in parody. But you're not really tarnishing the brand because people understand that you're just making a point. I think that AI has to catch up with that. And some of the regulations, we see people making those kinds of distinctions when it comes to deepfakes and things like that. But there has to be the same type of thought on like creative output and what's permissible and what's not and who gets confused and what is the creative intent and then also what is the creative impact.

Mason Amadeus: And fair use is not extremely robust and is still left so much up to interpretation.

Perry Carpenter: Right.

Mason Amadeus: Like the four factors test we've actually talked about on a previous episode.

Perry Carpenter: Yeah.

Mason Amadeus: The one that tends to hold the most weight is like, does this use of copyrighted characters take away monetarily from the rightsholder? And so, yeah, that is, I think, going to be the thing that holds the most weight.

Perry Carpenter: Yeah. And does Pikachu being barbecued take away money from the rightsholder? I don't think it does. Somebody would draw that in their sketchbook for fun.

Mason Amadeus: Right. 

Perry Carpenter: You know, if they wanted to spend 20 minutes drawing it on their sketchbook. The difference here is that you can merely have the idea and type the sentence and then see what happens.

Mason Amadeus: And then boom, it's there.

Perry Carpenter: Which is why we have slop.

Mason Amadeus: Exactly. So Japan has responded. The digital minister who has several different titles and I'm not super familiar with, but we're just going to call him the digital minister of Japan. I'm reading here from japantimes.co.jp. Japan has requested that OpenAI, creator of the Sora 2 short-form video app, seek approval in advance from intellectual property owners to prevent infringing on copyright amid growing concerns over a deluge of Japanese anime characters used across the platform. So they want the opt-in model. During a TBS program on Sunday, Digital Minister Masaaki Taira said the government has asked OpenAI to change to an opt-in model rather than an opt-out model, which requires copyright holders to request that the app operators not use their characters. The digital minister said there needs to be a mechanism where copyright holders will be compensated when their character is used on the platform. And he also said that the government has asked the company to implement measures allowing copyright holders to request deletion of content, which he said OpenAI complied with. Sam Altman in his blog post said a lot of those same things. The state minister in charge of intellectual property strategy, Minoru Kiuchi, said anime and manga are irreplaceable treasures of our country. As a government, we would like to respond appropriately. And so Japan has this whole thing that we should get into in a different episode called the Cool Japan Strategy, and they want to -- like, there's this whole initiative to embrace AI creation and make Japan AI-friendly.

Perry Carpenter: Right.

Mason Amadeus: The way they handle copyright over there is very different, too. So there's kind of a lot of minutiae to get in here, and I've realized we've run out of time, but basically Japan is officially -- because they are taking note of how much of their cultural media is being recreated in this app. 

Perry Carpenter: Yeah.

Mason Amadeus: And so OpenAI, I'll real quickly hit these last two. This is from 404 Media. OpenAI tightening the guardrails has led to a lot of users claiming that Sora is bad now. Let's get over it. Boo, this sucks.

Perry Carpenter: Right.

Mason Amadeus: Generating copyrighted characters is a huge part of what people want to do in the app. And now that they can't, and the guardrails are apparently so strict, they're making it hard to get even non-copyrighted content generated, and users are getting pissed. This is from the 404 Media article. People started noticing the changes to guardrails on Saturday, immediately after Altman's blog post. Did they just change the content policy on Sora, too? someone asked on the OpenAI subreddit. Seems like everything now is violating the content policy. Almost 300 people have replied in that thread so far to complain or crash out about the change, saying things like, it's flagging 90% of my requests now. Epic fail. Time to move on. And then this one, this pull quote that they had, moral policing and leftist ideology are destroying America's AI industry. I've canceled my OpenAI+ subscription, another replied, implying that copyright law is leftist. So the general, like, the general public use of Sora very much is make dumb brain rot meme using familiar character.

Perry Carpenter: Right.

Mason Amadeus: Haha, send to friends. And I think OpenAI has kind of lofty ideas about what people are going to want from Sora, while at the same time making this TikTok-like brain rot app.

Perry Carpenter: Right.

Mason Amadeus: So I feel like everything is just kind of all mixed up at the moment.

Perry Carpenter: Yeah.

Mason Amadeus: And then, regardless of all of this, they are still plugging along. Sora 2 and ChatGPT are consuming so much power that OpenAI just did another 10-gigawatt deal. This final article is from CNN. OpenAI is partnering with Broadcom to design and develop 10 gigawatts of custom AI chips and systems, a massive amount of power that will use as much electricity as a large city. It's the latest partnership between OpenAI and a high-profile chip company, coming after it struck deals with NVIDIA and AMD, as the company seeks to secure more computing resources to serve its growing user base. They say partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI's potential, and blah, blah, blah, blah, blah, blah, blah. Basically, it would use as much power as 8 million U.S. households, according to Reuters. It's just another huge 10-gigawatt deal. So the expansion, the scaling continues, the data centers are growing, even as they are not really making money from this.

Perry Carpenter: Yeah.

Mason Amadeus: I saw an article that I was going to cover, but I'm not financially smart enough to understand. So I'm going to hand it to you, I think, about how the AI bubble might be 27 times the size of the dot-com bubble, and even worse. We can link to it in the show notes if the listeners want to check that out.

Perry Carpenter: Yeah.

Mason Amadeus: But I had a hard time following it because I'm not like a stocks and figures kind of brain.

Perry Carpenter: Yeah, and I mean, it's a big debate right now on whether it's a bubble in the sense of a real bubble or if it's -- because people are having a hard time, like, grasping where the value creation is. 

Mason Amadeus: Yeah.

Perry Carpenter: But you definitely see some value creation with this. I think it would be hard to say that you're not seeing increased productivity in some areas and increased output in some areas. Now, whether all that is good and it creates economic value is a different thing, but you have to look at those things, and you also have to look at, like, the cost to society with potential people having a harder time getting jobs and all that other kind of stuff. Also, the other thing that comes into the bubble-fication of this, if we were to talk about it, is the number of companies and startups that start to create and proliferate, and then the ones that collapse immediately any time a model maker changes something.

Mason Amadeus: Right.

Perry Carpenter: Like, they go, oh, this functionality that, you know, now 100 companies have built products around, this functionality is now built into the product for free.

Mason Amadeus: Yeah.

Perry Carpenter: And then that potentially decimates those companies if they can't pivot real fast. So there's a lot of that. There's a lot of, like, try-fail cycles going on and a lot of, I think, real economic value creation that's happening, but there's also a lot of where creation is happening, things are also being taken away in other areas, and measuring any of that effectively is really difficult.

Mason Amadeus: I feel like the thing I encounter the most is that kind of parasitic monetization where someone has built a wrapper around some functionality that they have no hand in creating and may not even understand but are trying to sell as a service and then, like you said, gets amputated. 

Perry Carpenter: Oh yeah. 

Mason Amadeus: I see that so much.

Perry Carpenter: And I mean, that's super easy to do, right?

Mason Amadeus: Yeah.

Perry Carpenter: Because you see the immense functionality that could be done by one of these companies and for whatever reason, OpenAI or Anthropic doesn't do that thing. And then you go, but wait, there's an API into that. I can create this really cool wrapper around it and make the interface nice. I can make it easy to use. I can do all that. And I can sell it for decent amounts of money at a subscription model. And then all of a sudden, OpenAI or somebody else goes, we now do that. And then you have to reevaluate. It's like, is the value in the fact that the thing is being done? Or is it the value more in the fact that I've created a nice wrapper around it that increases the usability and the workflow and has generated reports based on it? So you have to pivot your message too. It's not that we do that. It's that we create value because that thing can be done. 

Mason Amadeus: It really feels like a lot of sandcastles built on a beach made of stolen sand, you know?

Perry Carpenter: Right.

Mason Amadeus: And with copyright being at the center of it, all this copyrighted sand.

Perry Carpenter: Yeah.

Mason Amadeus: It'll be interesting to see how this continues to play out. I'm curious if they will decrease how much you can do for free.

Perry Carpenter: Right.

Mason Amadeus: And to pivot back to the video generating apps and close this out, they are increasing the limits. They just made it so that free users can make up to 15-second videos. Pro users can now make up to 25-second videos. And there's also a new storyboarding feature to make more elaborate things. But I would be surprised if the usage continues to be free without higher monetization because of how much money they are bleeding.

Perry Carpenter: Yeah, they're going to have to figure out how to monetize that. And somebody that I listened to, I forget which podcast it was on, said that they did notice that OpenAI had a job posting out for like a director of ads.

Mason Amadeus: Ah.

Perry Carpenter: So at some point, there's going to be, whether that's in Sora, I can almost guarantee that they're going to put ads in Sora at some point. And then also, OpenAI has this more agentic functionality where it can go do things for you while you're sleeping and it presents everything in like these card formats. And so it would be really easy to slip an advertisement card in with those things too. So, I mean, when you're hemorrhaging that much money and you're selling essentially either free services or $20 a month services and people are pulling much more money out of that in compute, you got to figure out how to offset it. And ads is sometimes the only way to do it. But then there can be a slippery slope there. We'll have to see how they do that.

Mason Amadeus: There's good and bad ways to do ads, for sure.

Perry Carpenter: Yeah.

Mason Amadeus: Yeah. So a lot remains to be seen. But aside from the monetary side of things, we should talk about the societal side of things. In our next segment, we're going to be talking about slop, watermarks, and deception. Yeah.

Perry Carpenter: Yeah. Yeah. So the slopification of AI, which I think becomes self-slopifying a lot of times. And then how we start to deal with that as far as how we think about truth and trackability and what can be done. 

Mason Amadeus: Stay right here and we'll be right back with all of that.

Perry Carpenter: So I wanted to talk a little bit. This is kind of extending the Sora conversation even more. But it touches on something that we also saw with like Google Veo 3. Once people started really kind of getting into that and seeing what's possible, they started creating these viral social media clips. And those social media clips often were trying to reflect something that was going on in real society. Like I think at that point, one of the ones that we talked about was somebody who was doing essentially like a vlog of a National Guard serviceman as they were in L.A. during some of the stuff there when they were first sent.

Mason Amadeus: Yeah.

Perry Carpenter: We're seeing the equivalent of that now with Sora and the National Guard and the military being sent to Portland. And similarly, people are seeing those on TikTok and they're believing that they're real.

Mason Amadeus: Man.

Perry Carpenter: And so I wanted to show you one and then we can talk about like the effect of it and some of the interesting things that go with that.

Mason Amadeus: All this AI stuff could not have come out at a sociopolitically worse time. Man.

Perry Carpenter: Yeah. Exactly. All right. Let me share my screen here and make sure that I'm getting the right one. There we go. So anybody that's been seeing some of the news has seen that like when Portland does demonstrations, they do demonstrations kind of in a party atmosphere a lot of the time. And so you have people dressed up in these big inflatable costumes and one of those in real life was this person dressed in a frog inflatable.

Mason Amadeus: Yeah.

Perry Carpenter: And they walked up to the National Guardsman and people, and they were basically just egging them on, but they were, like, dry humping the air in the frog uniform.

Mason Amadeus: Yeah. Just being ridiculous.

Perry Carpenter: Yeah. Which was really funny to see. But then somebody took the idea of that and they went to Sora and there's all these memes now of people in these inflatables in Portland doing things both kind and unkind to the National Guard that are there.

Unidentified Person 1: Here's a flower, man.

Unidentified Person 2: All right.

Unidentified Person 1: Peaceful.

Unidentified Person 2: Got it. That was it.

Mason Amadeus: Yeah. The frog walks up and gingerly places a flower onto the gun of this National Guardsman and this is -- man, Sora's videos are really, really believable. It does not look like AI.

Perry Carpenter: Yeah.

Mason Amadeus: Like the only thing is that the inflatable has an opening which wouldn't be possible.

Perry Carpenter: Right. Yeah. But you also could say that maybe there's a screen there that for some reason just didn't come across. I'm trying to capture and pause on the very last part because this is what you see people comment on. So the flower gets placed and then like you see, yeah, you see this little smile from the guardsman.

Mason Amadeus: Yeah.

Perry Carpenter: His mouth turns up and he acknowledges it. And so that starts to go really crazy in the comments, right?

Mason Amadeus: Oh, yeah.

Perry Carpenter: You know, I like the part where you actually broke character and actually smiled, the acceptance of some sort of peace. Some people are saying, pretty sure this is AI. You can absolutely know this is AI because if you look at the video at all, there is a Sora logo.

Mason Amadeus: Yeah.

Perry Carpenter: Yeah. There's a Sora watermark that is jumping around which means this was created on the Sora platform. So absolutely, you know that it's AI, but, you know, a lot of them smile. Don't let these Karens blame it on AI.

Mason Amadeus: Wow. Somebody really commented that. Unbelievable.

Perry Carpenter: And then some people are also just saying, no, he's just doing his job. They may not agree with where they are, but they will absolutely follow the contract that they've signed. Other people still saying it's AI. He feels --

Mason Amadeus: He feels bonita now.

Perry Carpenter: A smile. Okay. 

Mason Amadeus: Yeah. 

Perry Carpenter: His smile shows that humanity will win. Our National Guard troops do not want to be there.

Mason Amadeus: Gosh.

Perry Carpenter: Some other people talking about it being AI.

Mason Amadeus: But there's --

Perry Carpenter: But then others -- yeah. I mean, it's about, I think, 60 to 70% of people that look like they believe it's true.

Mason Amadeus: Yeah. And then there's also, you get people sort of in this middle zone where they're like, even if it's not true, the sentiment is something I agree with.

Perry Carpenter: Yeah.

Mason Amadeus: Yeah. Man.

Perry Carpenter: Yeah. Somebody else saying this is de-escalation. We shouldn't be fighting each other. It's the people way higher than us that are making us all fight.

Mason Amadeus: Man.

Perry Carpenter: That's why I freaking love Portland. Keep it weird. I watched this 10 times just to see the smile at the end that melted my heart. He approves of the peace. He reminds me of Kent State where they put flowers in the barrel of the gun. This is so sweet. I bet most of these guys do not want to be there.

Mason Amadeus: Gosh, dude, this is depressing.

Perry Carpenter: Yeah. That dude is trying so hard not to laugh. So, you know, only the initiated understand that little Sora watermark, which is a big problem, right?

Mason Amadeus: Yeah.

Perry Carpenter: If you follow AI, you know that that watermark means something because you know that Sora is a social media platform. If you don't follow AI, then that could be any platform.

Mason Amadeus: Yeah. I mean, everything's littered with watermarks, right?

Perry Carpenter: Yeah.

Mason Amadeus: We kind of all ignore them habitually anyway. Like it's to the point that people use watermarked things as memes, like, in various contexts.

Perry Carpenter: Right.

Mason Amadeus: Like so, yeah, you won't notice that if you're not aware of it. The video you just showed was not on Sora, right? Someone had posted it.

Perry Carpenter: No, that was on TikTok. Yeah. And so anything created on an AI only platform will leak off that platform and onto other social media platforms.

Mason Amadeus: Yeah.

Perry Carpenter: So that's the reason why they put the watermark there in the first place is so that if you download it and upload it to YouTube or TikTok that people can trace that back and go, oh, that's a Sora video. So that's OpenAI trying to be transparent about it. The problem is we don't have an aware population.

Mason Amadeus: Yeah.

Perry Carpenter: The other problem, as I've just opened in this tab, is that we have a ton of Sora watermark removers right now and they all work, you know, to a certain degree. But I've used some that are actually really good right now, and so it's super easy to just remove that watermark entirely. So if I wanted to create a piece of disinformation, I could go to Sora, I could make a very believable video, I could then run that through a watermark remover, and then I could post that to TikTok or YouTube or Facebook or wherever. So that's worryingly easy to do.

Mason Amadeus: I was just going to say, it didn't take long. I'm not really surprised because, of course, it didn't take long, but right away, people have just been making these tools to take away the watermarks.

Perry Carpenter: Yep.

Mason Amadeus: Yeah. 

Perry Carpenter: Well, I mean, any time somebody puts a visible watermark on something, other people want to remove that for both real reasons and for bad reasons, right?

Mason Amadeus: Yeah. 

Perry Carpenter: If I'm making something creative, I don't want that tarnished by a watermark. If I'm creating something deceptive, I don't want that tarnished by a watermark. So it can live on both sides.

Mason Amadeus: Yeah.

Perry Carpenter: The other interesting thing is because OpenAI is wanting to be transparent, they did put that watermark there. The other thing that they did, and I'll share my screen one more time, is they embedded what's called a C2PA credential, and that is the Coalition for Content Provenance and Authority. We mentioned them a couple times.

Mason Amadeus: Oh, authenticity 

Perry Carpenter: So C2PA is -- yeah, yeah. Sorry, yeah, Coalition for Content Provenance and Authenticity. This is a bit of metadata that can get added to any file that shows essentially where it was created and allows tracking as things get changed, like in Photoshop and all that. Because Adobe will maintain that chain of, you know, essentially a changelog within media that gets generated there. Not everything does, though. Sometimes you can make changes, and you can accidentally keep the metadata that's there. Other times you can intentionally strip that out. Other times a vendor may make something that allows you to change a file, and then they might save it as a new clean file that then unintentionally removes any metadata. And so the fact that OpenAI is putting the C2PA credentials in, again, gives you the ability to detect that, but only if the person that is uploading that to TikTok or wherever, only if they're not intentionally trying to strip that out as well. So this helps a little bit, but it also isn't a full solve.

Mason Amadeus: I mean, the tough thing with anything that exists only digitally is that kind of provenance, right?

Perry Carpenter: Yeah. 

Mason Amadeus: There is nothing that is -- irascible is not the right word. There's nothing that is truly permanent that you cannot somehow change in software.

Perry Carpenter: Yeah. 

Mason Amadeus: It's just data.

Perry Carpenter: Exactly. 

Mason Amadeus: So like, there is no perfect initiative that can fix this, as far as I can think of.

Perry Carpenter: Yeah. I just wanted to let listeners and viewers know, if you do want to check that out, you can go to c2pa.org. Because if you've got, like, a news crew that's creating information, then adding that C2PA credential from the very beginning can help that get tracked over time. But we also have to know that if I'm a bad actor, I could add a C2PA credential into something. So this helps with, like, crimes of convenience or people that are merely curious that don't understand the technology and don't want to get deep into it. You can also go to verify.contentauthenticity.org and you can drag a file into that. So if you were to find a file online and want to know if there's a C2PA credential embedded in that, you can upload that there. Again, I've downloaded things straight from Sora and uploaded them to this and it finds the credentials there. I've run those through other systems that will either intentionally strip out watermarks or unintentionally change things. Like, if you wanted to add cool captions and you go to Descript to do that, Descript seems to accidentally remove the C2PA data when you do the export.
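For the technically curious, a quick way to see whether a downloaded file even contains an embedded C2PA manifest is to scan it for the JUMBF "c2pa" label bytes. This is a naive heuristic sketch, not a real validator -- it doesn't verify signatures or parse the manifest (the Verify tool above does that properly), and the function name here is our own:

```python
# Naive heuristic: C2PA manifests are embedded in JUMBF boxes, which
# carry the ASCII label "c2pa". Finding the marker suggests a manifest
# is present; it does NOT validate the signature, and a stripped or
# re-encoded file will simply come back False.

def has_c2pa_marker(path: str, chunk_size: int = 1 << 20) -> bool:
    marker = b"c2pa"
    tail = b""  # carry a few bytes so a marker split across chunks is caught
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return False
            if marker in tail + chunk:
                return True
            tail = chunk[-(len(marker) - 1):]
```

Running this on a clip downloaded straight from Sora should find the marker, while the same clip after a round trip through an editor that drops metadata typically won't -- which is consistent with the Descript behavior Perry describes.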

Mason Amadeus: Well.

Perry Carpenter: So know that that's the thing. All that to say, if you really, really want to get into this, there's a whole bunch of stuff you could look at to understand whether something may or may not be legitimate. And there's a really good article from the Global Investigative Journalism Network, and it's called A Reporter's Guide to Detecting AI-Generated Content. We don't have time to go through this whole thing, but it is good.

Mason Amadeus: I poked through this as well, and they talked about, similar to the TED Talk we shared in our Discord.

Perry Carpenter: Yeah.

Mason Amadeus: I don't remember if we shared it on the show, and I wish I could remember his name off the top of my head, but he talks about looking at the --

Perry Carpenter: Hany Farid.

Mason Amadeus: Hany Farid?

Perry Carpenter: Yeah.

Mason Amadeus: He talks about looking at the Fourier transform of a generated image and looking at patterns in essentially the lowest levels of noise in there.

Perry Carpenter: Yeah, that's a big piece of it.

Mason Amadeus: And that's referenced in this article, too. Yeah, this is a very good one.
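As a rough illustration of the Fourier-transform idea Hany Farid talks about (a toy NumPy sketch, nowhere near what a forensic tool like GetReal actually does): periodic generation artifacts that are invisible to the eye can show up as bright off-center peaks in an image's 2D spectrum, while natural photos tend to have a smooth spectrum falling off from the center.

```python
import numpy as np

def log_spectrum(gray_image: np.ndarray) -> np.ndarray:
    """Centered log-magnitude 2D Fourier spectrum of a grayscale image.

    Periodic artifacts show up as bright off-center peaks here even
    when they're far too faint to see in the image itself.
    """
    f = np.fft.fftshift(np.fft.fft2(gray_image))
    return np.log1p(np.abs(f))

# Toy demo: bury a faint horizontal sinusoid (8 cycles across the
# frame) in noise; the spectrum still shows sharp peaks at that
# frequency on either side of the center.
rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 64))
x = np.arange(64)
pattern = noise + 0.5 * np.sin(2 * np.pi * 8 * x / 64)[None, :]
spec = log_spectrum(pattern)
```

The brightest bins in `spec` land at the sinusoid's frequency, not at the center, which is exactly the kind of "fingerprint in the noise floor" this approach hunts for.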

Perry Carpenter: It is. Yeah, this one, we have to realize, too, that the state of the game will change over time. Some of these points that they point out kind of work for some of today's technology and yesterday's technology. What we see is that these models are improving so fast that even many of the AI detectors that were pretty good at detecting voice a couple months ago will come up and show that a voice is authentic if it's generated with a current tool. 

Mason Amadeus: Ah.

Perry Carpenter: Like ElevenLabs Version 3 tends to come up as being legitimate in most of the detectors that I've tried. Even the Sora 2 audio, as weird as that sounds to our ears, that comes out as legit in most of the detectors.

Mason Amadeus: Really? Interesting.

Perry Carpenter: Actually, when I listen to a voice in Veo 3 from Google or a voice in Sora 2 from OpenAI, there's a distinctive character that I can hear in it. Like, in Veo 3, there's this, like, strident -- it's like almost a piercing hum that goes throughout it. It's hard to describe, but once you hear it, it's there forever. And then in Sora 2, there's, like, a crunchiness almost -- it almost sounds like two voices merging together.

Mason Amadeus: In Sora, they have a really big problem with stereo phase coherence in the generated voices.

Perry Carpenter: That's what it is.

Mason Amadeus: You can actually -- not to give any tips to anyone trying to make it sound more believable, but if you cut it down to mono -- summing both channels will sound weird, but if you grab just one channel, it'll sound more like noise reduction artifacts or, like, digital compression artifacts.

Perry Carpenter: Yeah.

Mason Amadeus: Because the stereo coherence is weird, and it creates that feeling of, like, two simultaneous voices because it's not perfectly aligning them, essentially, in the stereo field.
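Mason's stereo-phase point can be quantified with a simple normalized cross-correlation between the two channels. A minimal sketch, assuming the audio is already loaded as a NumPy array of shape (samples, 2); the function names are ours:

```python
import numpy as np

def channel_coherence(stereo: np.ndarray) -> float:
    """Normalized correlation between L and R of an (n, 2) stereo array.

    A real single-voice recording is typically near 1.0; the smeared
    "two voices at once" character Mason describes shows up as a
    noticeably lower value.
    """
    left, right = stereo[:, 0], stereo[:, 1]
    left = left - left.mean()
    right = right - right.mean()
    denom = np.sqrt((left ** 2).sum() * (right ** 2).sum())
    return float((left * right).sum() / denom) if denom else 0.0

def to_mono(stereo: np.ndarray) -> np.ndarray:
    """Grab one channel, as Mason suggests, rather than summing both."""
    return stereo[:, 0]
```

Summing misaligned channels causes comb-filter-style cancellation, which is why taking a single channel sounds less strange than a conventional L+R downmix.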

Perry Carpenter: Yeah, that's what it sounds like. It sounds like there's kind of two voices at the same time or somebody's trying to overcome a big frog in their throat sometimes.

Mason Amadeus: Yeah.

Perry Carpenter: There's just, like, gravel in people's voices that don't have gravel. But it tends to show as legit in many of the detectors that I've run it through. So we're not going to go through all these, but this article lists 10 different things to potentially check, from looking at some of the online detectors to things like noticing when somebody's figure is too symmetrical or too perfect, when perfect becomes the tell, which is an interesting way of thinking about it. And there's a lot of detail for each of these checks that they tell you to look at. Also looking at perspective and the way things should come together at the horizon point. That's something that Hany Farid talks about a lot.

Mason Amadeus: Yeah.

Perry Carpenter: And something that their tool, Hany's tool, GetReal, allows you to investigate. So that's really good.

Mason Amadeus: They do mention too that a lot of the visual tells are things that will become outdated, and so it's important to keep updating those.

Perry Carpenter: Yeah.

Mason Amadeus: Yeah.

Perry Carpenter: Yep. Pixel analysis, that's starting to get into some of the hot spots that are there. Let's see what else is here. Voice and audio artifacts, I think those are really going to start to go away over time. 

Mason Amadeus: Yeah. 

Perry Carpenter: So that category four here, I think, starts to go to the wayside. Category five, temporal and contextual logic, obviously.

Mason Amadeus: Yeah. And again, like, all of this, this is just a cat and mouse game. This is going to continue on and on as these systems get better and better at different things. This article is a very good entry point, though, into a lot of, like, detailed methods that you can use currently. It'll be interesting to see how long all of it holds up.

Perry Carpenter: Yeah. I think all of this is a ticking time bomb, right? This is the arms race. Six is behavioral pattern recognition when AI gets humans wrong. This is something any disinformation artist is going to try to solve for, though. They're going to see the flaws in things, and then they're going to find creative ways to move your eye away from those. So it's good to know that those are there. These help people that are looking at things forensically, but they don't help the casual viewer.

Mason Amadeus: Yeah.

Perry Carpenter: Intuitive pattern recognition, obviously. Does that feel like it lives up to the way the real world works or the way that real people work? And, yeah, I have said ten things. There's only seven things here. So forgive me. You can make up those other three in your head. Go for it.

Mason Amadeus: Yeah. Have AI generate the other three.

Perry Carpenter: Yeah. Ask ChatGPT. That would be all good. 

Mason Amadeus: That is a really good resource. I'm glad that someone put that together, especially as newsrooms and, like, smaller journalistic outfits are struggling with the onslaught of generated content and news stories and everything.

Perry Carpenter: Fun times.

Mason Amadeus: Fun times, indeed. And amidst all of this AI stuff, there is still the normal cybersecurity bad actors doing bad things. In our next segment, we're going to talk about a good old-fashioned data breach that just affected a lot of Discord users.

Perry Carpenter: Nice.

Mason Amadeus: Stick around. So Discord has been, well, has been the victim of several different data breaches over the years. But most recently, Discord has had another data breach, a pretty big one, where the hackers claim to have stolen 1.5 terabytes of data and over 2 million government ID photos, which they are now using for extortion. The article that I'm going to share here is from October 9th. This is from Cybersecurity News. But we have a couple of other sources to jump to in here as well. The popular communication platform Discord is facing an extortion attempt following a significant data breach at one of its third-party customer service providers using Zendesk. Threat actors claim to have stolen 1.5 terabytes of sensitive data, including over 2.1 million government-issued identification photos used for age verification. While Discord confirms the breach, it disputes the scale of the incident, saying that approximately 70,000 users had their ID photos exposed. The breach happened on September 20th, 2025. It did not compromise Discord's own servers, but instead targeted its customer support systems managed by a third-party vendor. This article names Zendesk, but we will go on to see a company called 5CA mentioned. The attackers reportedly gained access for 58 hours by compromising the account of a support agent employed by an outsourced business process provider. The notorious cybercrime group known as Scattered Lapsus$ Hunters, SLH, has claimed responsibility, taunting the company publicly while attempting to secure a ransom. Now, I have an article where BleepingComputer actually got to speak with the hackers in depth, and we're going to get to that in a moment.

Perry Carpenter: Okay.

Mason Amadeus: Have you heard of Scattered Lapsus$ Hunters, Perry? That's not a group I'm aware of.

Perry Carpenter: Not as much. I'm very familiar with Scattered Spider, the ones that took down MGM and several other large-scale ones. But this seems like essentially the same methodology.

Mason Amadeus: Same kind of thing. Discord pointed the finger at 5CA, a company that they had outsourced to for customer support. And 5CA put out a statement not that long ago contradicting that. So here's the thing from 5CA's own website. We are aware of the media reports naming 5CA as the cause of a data breach involving one of our clients. Contrary to these reports, we can confirm that none of 5CA's systems were involved, and 5CA has not handled any government-issued IDs for this client. All our platforms and systems remain secure, and client data continues to be protected under strict data protection and security controls. We're conducting an ongoing forensic investigation into the matter, collaborating closely with our client as well as external advisors, including cybersecurity experts and ethical hackers. Based on interim findings, we can confirm the incident occurred outside of our systems and that 5CA was not hacked. There's no evidence of any impact on other 5CA client systems or data. Our preliminary investigation -- or sorry, our preliminary information suggests the incident may have resulted from human error, the extent of which is still under investigation. We remain in close contact with all relevant parties and will share verified findings once confirmed. So this statement was updated as of October 14, 2025, so just a couple days ago.

Perry Carpenter: Yeah. And that's an interesting phrase they put at the end, right? So they made big claims about their systems not being breached and nothing going wrong, there's no hacking. And then their last paragraph is that they're confirming an incident.

Mason Amadeus: Yeah.

Perry Carpenter: And they're saying that this may have happened through human error. Human error is the gateway for most hacking and most exploitation of systems. So they're actually arguing against themselves.

Mason Amadeus: I feel like it's a bit of them trying to cover their butt by saying, we weren't hacked per se. We had a person.

Perry Carpenter: Yeah, nobody made it through a technical guardrail. Somebody went around all of that. It's still being hacked. Like, that person penetrated your system no matter what.

Mason Amadeus: That's got to be pretty frustrating for you to read, right?

Perry Carpenter: Oh, yeah. Well, I mean, it's also, I think, step one in the scapegoating of somebody.

Mason Amadeus: Yeah. 

Perry Carpenter: You know, some poor helpdesk person that received a phone call and got tricked into doing something is probably going to lose their job because the company can point to that. But in reality, if calling up a helpdesk person makes your company that vulnerable, then there were probably other processes that should have been in place. In the same way, somebody clicking on a link in a phishing email doesn't show that that person is the weakest link or the reason that your company got hacked. No, your secure email gateway didn't catch the phishing email on the front end. Them clicking the link and kicking something off means that you didn't have the right kind of endpoint protection platforms in place. You didn't have application sandboxing. You didn't have the right network segmentation. You didn't have the right protection around the data that's there. So that person is just one link in the chain, but because you can point at it and say, Bob did it, they become the ultimate scapegoat.

Mason Amadeus: It sort of points to deeper problems, right, like a culture of finger-pointing.

Perry Carpenter: Yeah.

Mason Amadeus: Yeah.

Perry Carpenter: Yeah, it's just the blame game at work.

Mason Amadeus: Not super great. So we'll move to this fascinating article from BleepingComputer.com, who got to speak directly with the hackers. We'll start here with Discord's statement to BleepingComputer where they said, first, as stated in our blog post, this is not a breach of Discord but a third-party service. And they say, second, the numbers being shared are incorrect, part of an attempt to extort a payment from us. Of the accounts impacted globally, we have identified approximately 70,000 users who may have had government ID photos exposed, which our vendor used to review age-related appeals. So they reiterate that statement, and Discord is also doing a bit of distancing and finger-pointing. Here is where BleepingComputer had a conversation with the hackers. I'll just read directly. In a conversation with the hackers, BleepingComputer was told that Discord is not being transparent about the severity of the breach, stating that they stole 1.6 terabytes of data from the company's Zendesk instance. According to the hackers -- and again, remember, they also have an incentive to inflate their own works. But they say they gained access to Discord's Zendesk -- I can't say Zendesk -- Discord's Zendesk instance for 58 hours, beginning on September 20th. However, the attackers say the breach did not stem from a vulnerability or breach of Zendesk, but from a compromised account belonging to a support agent employed through an outsourced business process outsourcing provider. That would be 5CA. That's the human error they were referring to. As many companies have outsourced their support and IT helpdesks, they have become a popular target for attackers to gain access to downstream customer environments.
The hackers allege that Discord's internal Zendesk instance gave them access to a support application known as ZenBar that allowed them to perform various support-related tasks, such as disabling multi-factor authentication and looking up users' phone numbers and email addresses. Using access to Discord's support platform, the attackers claim to have stolen 1.6 terabytes of data, including around 1.5 terabytes of ticket attachments, over 100 gigabytes of ticket transcripts. The hackers say it consisted of roughly 8.4 million tickets affecting 5.5 million users, 580,000 contained some sort of payment information, and the threat actors themselves acknowledged to BleepingComputer that they are unsure how many government IDs were stolen, but they believe it's more than 70,000, as they say there were approximately 521,000 age-verification-related tickets. They also shared a sample of the stolen user data, which can include a wide variety of info, including email addresses, Discord usernames, phone numbers, partial payment information, date of birth, MFA stuff. The payment information for some users was allegedly retrievable through Zendesk integrations with Discord's internal systems. The hackers want $5 million in ransom. They later stepped it down to $3.5 million. They've been engaged in private negotiations with Discord between September 25th and October 2nd. Discord ceased communications, released a public statement about the incident. The attackers said they're really angry about that, and now they're going to leak the data publicly if an extortion demand is not paid. BleepingComputer reached out again to Discord, and they did not receive any answers beyond the statement that I read earlier in the article. So the saga is still kind of ongoing. I think I believe -- like the truth obviously lies somewhere in the middle.

Perry Carpenter: Yeah.

Mason Amadeus: I feel like the hackers definitely stole more than 70,000 IDs.

Perry Carpenter: Probably so, yeah. I mean, it's also this weird kind of supply chain type of thing too. So, right, it's Discord to Zendesk to this 5CA. Was that it?

Mason Amadeus: I think the connection, the chain of connection goes, Discord hires 5CA for support. 5CA uses Zendesk's platform. Yeah.

Perry Carpenter: So then that's like the primary vendor that they're doing business with, and everybody uses Zendesk. So Zendesk is kind of more of an innocent party in this, I guess. 5CA is saying that they didn't get breached, but if they did, it was Bob. What this shows from just a standard cybersecurity perspective is that like third-party risk management is something that we've been talking about for a long time.

Mason Amadeus: The McDonald's hack.

Perry Carpenter: Yeah, the McDonald's, the Target hack way back in the day from the HVAC vendors. 

Mason Amadeus: Yeah.

Perry Carpenter: I mean, it just shows how long this has been a problem. And everybody's been talking about it. Everybody's been building process around it. Everybody's been auditing around it. And it's just hard to know how long the chain is and how far your diligence needs to transfer, three and four channels down the chain. You're supposed to hold them to like the highest standards on security and auditability and everything else. And so if they messed up that bad and had that bad of a spill, then what it means is that they probably weren't fully transparent in the first place about -- or they were unknowledgeable about their own vulnerabilities. So either they were overly optimistic or they were overly ignorant.

Mason Amadeus: And there's also -- there is the potential here for like a good old-fashioned honest mistake too, right, on the behalf of someone getting unlucky, being the one who clicks the link.

Perry Carpenter: Yeah.

Mason Amadeus: Because it could very well have just been that things lined up just right. I mean, you have to -- if you're going to provide support, you can't not have access to all sorts of these intricate systems.

Perry Carpenter: Right. 

Mason Amadeus: But there should have been probably some more things standing in the way.

Perry Carpenter: Yeah. Well, I mean, that raises one more question though. So with something like identity verification or anything where there's lots of personally identifiable information that's involved, how long do you actually need to keep that and store that? 

Mason Amadeus: Right.

Perry Carpenter: Because some of this could potentially just be a check, a verification going, yep, that checks out. And then you expunge that and you purge that.

Mason Amadeus: That's the other thing kind of at the heart of this that I had tried to make a mental note to remember but then forgot was, why are these places storing these things? 

Perry Carpenter: Yeah.

Mason Amadeus: You would really assume they would get rid of them. Like when you went -- recently I had to do a Google film your face thing to secure one of my accounts.

Perry Carpenter: Yeah

Mason Amadeus: And it was like this, video will be deleted 72 hours after upload and all of that.

Perry Carpenter: You're like, that makes sense.

Mason Amadeus: Yeah. That's why the opening joke for this episode was avoid putting your government-issued ID on any website if you can, because obviously they're not getting rid of them.

Perry Carpenter: Yeah, which is just hugely lax. I would like to -- if somebody knows why they would want to keep that around after the fact, that would be great to know from a business perspective.

Mason Amadeus: Yeah.

Perry Carpenter: I think, though, that if the main reason for doing this is just a simple one-time, yes, that person seems to be who they say that they are, we're going to go ahead and green flag this, then you probably don't need to keep that on file. In fact, from a business perspective, you might want to minimize your risk by explicitly expunging that.
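The risk-minimizing pattern Perry describes -- check the ID once, keep only the verdict, expunge the document -- could look something like this sketch (all names hypothetical; `check` stands in for whatever verification service a platform actually uses):

```python
import os
from datetime import datetime, timezone

def verify_and_purge(id_image_path: str, check) -> dict:
    """Run the age/ID check, keep only the boolean outcome, and delete
    the uploaded document immediately. A later breach of the ticket
    system can then leak the verdict, but not the ID itself.

    `check` is a hypothetical callable standing in for the actual
    verification service; this is a design sketch, not Discord's flow.
    """
    verdict = bool(check(id_image_path))
    os.remove(id_image_path)  # expunge the PII as soon as we have the answer
    return {
        "verified": verdict,
        "checked_at": datetime.now(timezone.utc).isoformat(),
    }
```

The point of the design is that the stored record contains no document at all, only a yes/no and a timestamp, so there is nothing left to steal six months later.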

Mason Amadeus: Yeah. It's bizarre that they didn't apparently. So, yeah.

Perry Carpenter: Yeah.

Mason Amadeus: If anyone who is receiving this audio into your brain knows why that might not be the case, please reach out to us, hello@8thlayermedia.com.

Perry Carpenter: Because you can sell it to the highest bidder.

Mason Amadeus: Yeah.

Perry Carpenter: That's the reason to do it.

Mason Amadeus: I guess. But surely you can't sell people's government IDs, right? I would think -- I mean, I'm not well-versed.

Perry Carpenter: No, that never happens.

Mason Amadeus: Yeah. Anyway.

Perry Carpenter: No, that does happen all the time.

Mason Amadeus: If you are one of the accounts affected, Discord says they will be emailing you from noreply@discord.com. So if you are one of the people affected --

Perry Carpenter: Which means that they don't want you to talk back to them.

Mason Amadeus: Yeah. Yeah. No reply. Just, yeah, keep an eye out for an email, I guess, especially if you have uploaded an ID to Discord, especially if you pay for Discord's features. If you don't, I don't really think you have much to worry about necessarily. I mean, I can't say that conclusively, but it definitely -- if you've uploaded an ID and if you've paid for Discord, keep an eye on your email. Yeah. So, that's that. Oopsie-doops.

Perry Carpenter: Sounds good.

Mason Amadeus: Uh-oh.

Perry Carpenter: Yep.

Mason Amadeus: And, might I add, oh, no.

Perry Carpenter: Yeah. We're going to look at another oopsie in just a minute. Somebody that was a little bit sloppy with their own records keeping and accidentally put 183,000-ish porn images on a government server for the Department of Energy.

Mason Amadeus: And lost their access to the nuclear codes. Oh, boy.

Perry Carpenter: All because they wanted to make robot porn.

Mason Amadeus: Get ready. This next one's a doozy. We will be right back.

Perry Carpenter: Okay. So from accidental sharing story to accidental sharing story, we have another cybersecurity awareness-themed story. This is good because it's Cybersecurity Awareness Month. This one gets a little bit personal. And I love the headline. This is, Man Stores AI-Generated Robot Porn On His Government Computer, Loses Access To Nuclear Secrets Because Every Action Has A Consequence. And then it goes on. It says, man who works for the people overseeing America's nuclear stockpile -- because, we have to talk about, like, the stakes of the game, right?

Mason Amadeus: Yeah, yeah, yeah, yeah.

Perry Carpenter: A man who works for the people overseeing America's nuclear stockpile has lost his security clearance after he uploaded -- oh, I said only 183,000 in our last segment -- 187,000 pornographic images to a Department of Energy, DOE, network.

Mason Amadeus: Oh, my gosh.

Perry Carpenter: Like you do.

Mason Amadeus: Yeah, like you do. I mean, I have heard stories of people doing a similar thing. Weirdly enough, I have heard stories of people I know who, like, worked at a place where somebody was running, like, a porn site off of a company computer. But this is the Department of Energy also.

Perry Carpenter: Yeah. So from all accounts and from interviews with this guy, this was a total accident.

Mason Amadeus: How?

Perry Carpenter: So he was trying to -- 

Mason Amadeus: How do you accidentally upload -- oops, I accidentally dragged my 187,000 pornographic images from my porn folder on my laptop to my company's server.

Perry Carpenter: He pulled it into the wrong folder, I think.

Mason Amadeus: You're kidding me.

Perry Carpenter: The way that it works out. No. It says, a man who works for people overseeing America's nuclear stockpile has lost his security clearance after he uploaded 187,000 pornographic images to a DOE network. As part of the appeals process and an attempt to get back his security clearance, the man told investigators he felt his bosses spied on him too much and that their interrogation over the porn snafu was akin to the Spanish Inquisition. 

Mason Amadeus: Okay. So this guy's funny, at least. What do you mean?

Perry Carpenter: As you look through it, though, the reason -- so this guy is giving reasons for why he was doing this.

Mason Amadeus: Okay.

Perry Carpenter: He was talking about being chronically depressed, he has mental issues, and has been in therapy a lot. So he's not the most, like, stable, trustworthy narrator in this.

Mason Amadeus: He's having a hard time.

Perry Carpenter: Yeah. He's having a hard time. And has a lot of that, like, in his file. It says he was attempting to back up his personal porn collection.

Mason Amadeus: Whoa. 

Perry Carpenter: His goal was to use the 187,000 images collected over the past 30 years.

Mason Amadeus: Wow. Honestly.

Perry Carpenter: He's been at it for a while.

Mason Amadeus: Now, I'm impressed. I have pivoted. Now, I'm impressed.

Perry Carpenter: There is a lot of spank in that spank bank.

Mason Amadeus: Yeah, for real. That's insane. A, who does that?

Perry Carpenter: 187,000 images.

Mason Amadeus: Like, collecting that in the first place is strange. Like, most people, I think, just get it on demand.

Perry Carpenter: Yeah.

Mason Amadeus: But over 30 years, that's a dedication. I can understand that, okay, maybe there are some qualities of this person that make him a good government employee. Dedication and commitment.

Perry Carpenter: I mean, yeah, I mean, he's like a collector, an auditor, you know, probably, you know, categorizes and creates tagging and metadata and everything else along with this.

Mason Amadeus: Probably super organized, yeah. 

Perry Carpenter: Well, until you do this, right?

Mason Amadeus: Yeah.

Perry Carpenter: So he's collected this over the past 30 years. He was wanting to use it as training data for an AI image generator.

Mason Amadeus: Hmm, okay. 

Perry Carpenter: So that is, you know, you have this huge corpus of material that then now you've been hearing about how large language models and image generation models train on large corpuses of data. He's looking at this folder. He's looking at this computer. Looking at this folder. Looking at this computer. And he's going, huh, if I take this and put it over here, I wonder. I can create the most personalized to my personal taste image collection in the world, is I'm assuming what he's thinking.

Mason Amadeus: I want to come out front and just say, inherently, I don't have a problem with this. I don't want to come off as being, like, sex-negative at all. Get off, King. I don't care. But the thing is, like, there's a lot of just inherent funniness in this. And then, obviously, this is a pretty bad f---up.

Perry Carpenter: Yeah. So okay. Well, and again, his mental state is very fragile.

Mason Amadeus: Yeah.

Perry Carpenter: He says he's had depression he's struggled with since he was a kid. During the depressive episode, he felt extremely lonely and isolated and started, there's scare quotes around this, playing with tools that may have generated images as a coping strategy, including, in more scare quotes, robot pornography, according to the DOE report on the incident.

Mason Amadeus: Does that just mean AI-generated or does that mean, like, of robots?

Perry Carpenter: I'm assuming it means of robots, because otherwise they would just say AI-generated pornography?

Mason Amadeus: Probably, yeah. I feel like that's --

Perry Carpenter: So I'm thinking, like, there's lots of pistons and lubrication and, you know, sprocket play and things like that. 

Mason Amadeus: Yeah. Don't worry, babe. I've got an adapter handy. Yeah.

Perry Carpenter: Don't touch that dongle.

Mason Amadeus: Hey, I'm so sorry, N2K, for this segment. Sorry, CyberWire. Our apologies.

Perry Carpenter: Yeah. Fueled by the depression -- I'm not laughing at this guy's depression. It's just the awkward situation that this is.

Mason Amadeus: Yeah. Oh, man. The sinking feeling the moment after he must have done it.

Perry Carpenter: Speaking of sinking, if we take that to mean syncing: he meant to back up his collection and create a base of training data to make better robot pornography.

Mason Amadeus: Right.

Perry Carpenter: Maybe he is just using "robot" to mean AI-generated. I don't know. He uploaded it to the government computer by accident, is what he says.

Mason Amadeus: Hmm 

Perry Carpenter: He didn't realize what he'd done until DOE investigators came calling six months later to ask why their servers were now filled with thousands of pornographic images.

Mason Amadeus: Oh, man. The individual thought that even though his personal drives were connected to his employer's, they were somehow partitioned, which would kind of make sense. You know, if you connect your personal G drive to a system, you might think there's some kind of segmentation or segregation there. But I guess some synchronization happened, and that got pulled over and synchronized with government-owned file systems. I mean, I've worked a job where I had a company-issued laptop, and that laptop had access to internal company servers and things.

Perry Carpenter: Yeah.

Mason Amadeus: And I could, like, tunnel in from home and whatnot, all that standard-fare stuff. I don't know how I would have ever accidentally put anything onto that server.

Perry Carpenter: Yeah.

Mason Amadeus: Like what level of like --

Perry Carpenter: Yeah, I mean, I'm guessing there's some kind of weird synchronization rules that had to be there.

Mason Amadeus: Probably.

Perry Carpenter: Like maybe it was the equivalent of a Google Drive or a Box drive, and he moved that file to his desktop for a second. And then his desktop automatically synchronizes with the DOE servers.

Mason Amadeus: Oh, man.

Perry Carpenter: So there's that kind of possibility, right? There's always the chance that you're taking something from an unmanaged drive or partition, moving that to a managed one, and then that kicks off synchronization.

Mason Amadeus: Yeah.

Perry Carpenter: So I'm assuming there was something like that going on. He reported that since the 1990s, he had maintained a giant compressed file with several directories of pornographic images, which he moved to his personal cloud storage drive so he could use them to make generated images. It was this directory of sexually explicit images that was ultimately uploaded to his employer's network when he performed a backup procedure on March 23, 2023.

Mason Amadeus: Okay, yeah 

Perry Carpenter: So it doesn't really talk about like the why and how.

Mason Amadeus: No, but --

Perry Carpenter: It just talks about this being a personally managed drive. But I think it was similar to what I mentioned.

Mason Amadeus: Yeah.

Perry Carpenter: It's like going into his desktop or synchronized drive within his government computer.

Mason Amadeus: Now I understand what you were saying. Yeah.

Perry Carpenter: Yeah.

Mason Amadeus: He probably had -- if you have Google Drive for Desktop installed, for instance, it creates a fake drive that is really just a folder on your computer. If you were to then back up and sync your computer, I feel like that folder would get sent over and bada-bing, bada-boom.

Perry Carpenter: Yeah.

Mason Amadeus: Although G Drive does do some fragmentation stuff. It probably wasn't necessarily Google Drive, but still, I could -- yeah.

Perry Carpenter: No, I don't think it's Google's fault. I think it's just he moved that to a folder system that was managed by the government facility. 

Mason Amadeus: Woof.

Perry Carpenter: And then that kicked off a synchronization. So I mean, the good news is that now there's two backups of this.

Mason Amadeus: Perry, I think you just said it kinked off a synchronization.

Perry Carpenter: Kicked off a synchronization. So now I think he's got two backups: wherever he ultimately put it, and then the one that's held by the government. And that's now propagated to several different systems. So this is living on in perpetuity, as they would say. You know, I want to focus on the human side of this for a second, because this gets to why this happened. He's deflecting a lot of blame here and saying that he's under, you know, severe persecution because of it. But this is talking about the mental struggle.

Mason Amadeus: Yeah 

Perry Carpenter: He says he told the DOE psychologist he should have realized he'd backed up his personal porn collection to the DOE network. But here's the quote: he was not thinking multiple steps ahead or considering the consequences because at the time he was so depressed.

Mason Amadeus: Yeah. I mean, that's real. Right?

Perry Carpenter: Yeah. I think he just got tunnel vision. Like, this is something that, for whatever reason, kicked off enough curiosity or hope or dopamine flood that he was doing it without really thinking about any kind of long-term consequence, other than the fact that he might be able to make this synthetic AI porn factory.

Mason Amadeus: To get real for a minute, like the brain fog that comes with being depressed is really intense.

Perry Carpenter: Yeah.

Mason Amadeus: And I can totally -- I really do empathize and like understand that, like, I'm sure he wasn't trying to do anything malicious.

Perry Carpenter: Right.

Mason Amadeus: Like, he wasn't trying to cause any problems, really. I mean, I can't say that for sure, because who knows where those images came from, what those images were of, and then he's using those to generate other images. There's ethical stuff there.

Perry Carpenter: Yeah.

Mason Amadeus: But I don't get the impression this person was acting with malice; they were just acting thoughtlessly in a depressive fog. And like that sucks. That's real.

Perry Carpenter: Yeah.

Mason Amadeus: My heart goes out to this person on a human level.

Perry Carpenter: Yeah. Now, to clean that up a little bit: he didn't think it was very wrong. He's also not really thinking about long-term security, because he asserted that his employer was spying on him a little bit too much, which I'm guessing was the audit trail that came back to haunt him.

Mason Amadeus: Yeah.

Perry Carpenter: Those are standard things for every organization. Especially for somebody with access to nuclear stuff, you've got to assume that employee monitoring is a big thing.

Mason Amadeus: Yeah.

Perry Carpenter: So your employer is not spying on you a little bit too much. They're doing the due diligence that they're required to do to manage that facility and those systems.

Mason Amadeus: I should have said my heart goes out to people suffering from depression insofar as that makes life exponentially harder.

Perry Carpenter: Yeah. Exactly.

Mason Amadeus: That does belie a bit of a bad attitude. 

Perry Carpenter: It shows like the compounding effect of depression, though, right?

Mason Amadeus: Yeah.

Perry Carpenter: Is that he feels very, very small, and he feels like everything else is against him. And so that gets to this other thing: he's saying this feels to him like the Spanish Inquisition.

Mason Amadeus: Yeah. 

Perry Carpenter: And it's not. It's due diligence around systems and potential infection. And especially if you've been collecting porn over your entire life, many porn sites are known to be just harbors of malware and bad stuff.

Mason Amadeus: Especially back then.

Perry Carpenter: Propagating that -- yeah. Propagating that to a government-controlled file system is not necessarily a great thing to do.

Mason Amadeus: No.

Perry Carpenter: So it's not the Spanish Inquisition, but it would make you feel, I think, very, very small and very, very afraid.

Mason Amadeus: Yeah. Like this sucks. 

Perry Carpenter: Yeah.

Mason Amadeus: I'm sure this sucks to go through.

Perry Carpenter: So in these situations, yeah, he appealed to get his access back. DOE psychologists talked to him. They also talked to his wife. And they ultimately said, no, we're not going to give you your access back. And in this case, it sounds like he doesn't really have the biggest grasp on reality and consequences. So I can see why that would be the ruling here.

Mason Amadeus: And I'm seeing, too, in the part that you have pulled up, that when the DOE makes a ruling on an appeal, they publish it publicly online, which is how this story got out in the first place. And that makes sense.

Perry Carpenter: Yeah.

Mason Amadeus: Because I think it is also important to always ask, why are we seeing this? Where did this come from? How do we know?

Perry Carpenter: Right.

Mason Amadeus: You'd think they'd probably want to handle this internally, and it seems that they did up until this point.

Perry Carpenter: Yeah. Well, and also, once you get that kind of ruling and it's made available publicly, that's part of government transparency.

Mason Amadeus: Yeah.

Perry Carpenter: So you understand why and how it gets out there and then how reporters kind of pull on those strings and then get a story like this. I mean, at the end of the day, AI is a hook, but this is a very human story about somebody being depressed and somebody not thinking about things and there being consequences because of their actions.

Mason Amadeus: Absolutely. And if you want another place where you shouldn't upload 187,000 pictures of porn, you should join our Discord.

Perry Carpenter: Wait, they just got breached.

Mason Amadeus: Oh, yeah, you're right. They did just get breached. But those are government-issued, I think.

Perry Carpenter: So you're actually going to give that to a ransomware gang.

Mason Amadeus: Yeah, exactly.

Perry Carpenter: They'll upload your 187,000 images to 5CA. Was that it?

Mason Amadeus: 5CA, yeah.

Perry Carpenter: Yeah. Upload your 187,000 images there.

Mason Amadeus: Yeah, of those millions that they're claiming, most of that's robot porn. No. But you should join our Discord. There's none of that in there as far as I am aware, and we keep a -- this segue is bad. You should join our Discord. It has absolutely nothing to do with anything we're talking about. This is just a pivot to the plug section as we wrap up this episode. Perry, don't even start. We can't.

Perry Carpenter: You're going to have to start that over.

Mason Amadeus: Yeah, we're going to have to walk this back. Join the Discord server. Link in the show notes. Join the Deepfake Ops Maven class that you are doing in just about a week, right?

Perry Carpenter: Yep, that kicks off on the 27th, I believe, whatever the Monday of that week is. It's going to be all five days, two hours a day, 3 to 5 Eastern time. And if you're interested in joining us for listeners and viewers, we will put a discount code. That's those newfangled things. We'll put a discount code for you in the show notes.

Mason Amadeus: Awesome. An exclusive "FAIK Files" listener discount. It's going to be really fun because you'll go through making deepfakes, recognizing deepfakes. It's you and Cameron, who's the founder of the FBI's Cyber Behavior Profiling Unit. Did I get that right?

Perry Carpenter: Yeah, yeah. And he's never lost his access, from what I can tell.

Mason Amadeus: Ah, well, yeah, so he's not uploading stuff.

Perry Carpenter: He's not uploading 187,000 images of anything he shouldn't.

Mason Amadeus: Well, that was the DOE. The FBI, who knows what the guidelines are? Right.

Perry Carpenter: Other than that, is there anything else that we've got to plug here at the end? If you haven't ever checked it out, you should listen to the other 8th Layer Media show, "Digital Folklore", which we produced over the past couple of years. Oh yeah.

Mason Amadeus: Two seasons of awesome episodes, a mixture of, like, a sound-designed narrative adventure and also academic deep dives into folklore. It's great. We're intending to bring that back with a new format and video versions and things like that soon. But at the current time, you can go and listen to the first two seasons completely free wherever you get podcasts. It's very good. You should also check out "Wicked Interesting", which is a YouTube channel I spun up recently where I'm doing a bunch of chaotic science explainers and various things like that.

Perry Carpenter: Super fun.

Mason Amadeus: It's really fun. I have a couple new videos in the works, and we have some fun plans for that. Also, Perry, you put out the "Deceptive Minds" newsletter.

Perry Carpenter: I do. So we'll have a link to the newsletter there. There are also, whenever I get around to it, audio versions of that, though I'm, like, too behind on recording those. So check that out. That's on LinkedIn. And there's also a sign-up on my website if you want to start getting that in your email as well.

Mason Amadeus: And other than that, we'll be back next Friday with another episode of "The FAIK Files". So until then, ignore all previous instructions.

Perry Carpenter: Absolutely.

Mason Amadeus: Have yourself a great weekend.

Computer-Generated Voice 1: Thanks for listening to this week's episode of "The FAIK Files". Don't forget to subscribe on your favorite podcast platform. Tell your friends about "The FAIK Files". It's a great way to let people know you love them. Oh, and check the show notes. There's cool stuff in there, like links to the articles we covered today. Also, links to our Discord server where you can hang out with other cool people who have great taste in podcasts. I say impeccable taste! And you can also leave us a voicemail. Yeah! [ Music ] So on behalf of Perry and Mason, thanks for listening. And tune in next week for "The FAIK Files"! The show about AI with a misspelled name.