The FAIK Files 12.12.25
Ep 61 | 12.12.25

The Future... It's Complicated

Transcript

Mason Amadeus: Live from the Eighth Layer media studios in the back rooms of the deep web this is "The FAIK Files."

Perry Carpenter: When tech gets weird we are here to make sense of it. I'm Perry Carpenter.

Mason Amadeus: And I'm Mason Amadeus and this week we're going to start out by talking about why RAM is suddenly so expensive.

Perry Carpenter: Then we're going to look at the future of life institute's new safety report, see if there's anything shocking.

Mason Amadeus: After that we'll talk about how Chat GPT now literally has Photoshop in it and Disney signed a billion dollar deal with Open AI that we'll get in to the details of.

Perry Carpenter: All right. And then lastly that sexy chat bot you've been talking to may or may not be AI.

Mason Amadeus: May not be AI?

Perry Carpenter: May not. Oh. I don't know if that makes it better or worse. I'm not --

Mason Amadeus: I think it depends on what you thought you were doing. Right?

Perry Carpenter: It's a matter of perspective.

Mason Amadeus: Sit back, relax, and hold on tight to any stick of RAM you can get your hands on. We'll open up the FAIK files right after this. [ Music ] So this has been an ongoing story for a couple of months, but I feel like it's getting a lot of attention now as it's sort of reaching what's probably not going to be a peak, but just a really high height. The price of RAM has been doubling and just it's going through the roof. And a lot of people think that the reason is simply due to AI and it is in large part due to AI, but there's a bit more to the story. And so I thought I would break it down because I'm sure you've probably encountered people talking about this online. Have you seen the memes about like a van?

Perry Carpenter: I haven't seen the memes.

Mason Amadeus: Oh. There's like one of like a van, sketchy looking van, parked outside someone's house with free DDR 5 RAM spray painted on it and people are talking about -- yeah. So there's a couple of things happening all at once that have led to a sticky situation involving RAM. And I'm going to be drawing a lot from this break down from Tech Radar which is "Why is RAM So Expensive Right Now? It's Way More Complicated Than You Think." By Wayne Williams. And we're going to dive in to this article and a couple of other sources. So the immediate effects of RAM prices right now are a lot of people who are trying to upgrade their computers and whatnot finding that components and parts have gone way, way up in price. This article starts by pointing out how Trend Force which owns the RAM price marketplace DrameXchange recently published a damning report that highlights how this rapid surge in RAM prices could impact downstream things like smart phones more so than laptops and desktops and other devices both in terms of their cost and also their specifications. We'll get in to those downstream effects after we talk about what caused them, but that's sort of the most recent update in the RAM story. So if you're already caught up on that I'll give you that as a little nugget to tease up front. Here is what's happening behind the scenes under the hood. At this -- and I'll just read directly from here. At the center of this problem is DRAM, the type of memory used in PCs, laptops, phones, consoles, servers, and cars, although most people call it RAM. Nearly all modern system memory is DRAM including DDR4, DDR5, LPDDR, GDDR, and HBM. It's a lot of acronyms that are different kinds of dynamic random access memory, the kind of memory that computers and other systems use as working memory to get stuff done, not to save stuff permanently, although, as we'll get in to, longer term storage memory like solid state drive memory is also going to be affected by this. All of this, all of the RAM on the market, comes from the same production ecosystem that is controlled by just three companies, SK Hynix, Samsung, and Micron. Together they account for more than 90% of the total RAM market which is a bit crazy. All three of those are also massive players in NAND which is the building blocks of SSDs, a slightly different kind of memory built on similar chip architectures. And as they point out in the article for years this concentration didn't feel like it was a problem as production was steady, demand was predictable, and price swings were usually gradual enough for consumers and OAMs to plan around. But that all started to change before most people even noticed because well before the AI boom reached the crazy heights it's at now DRAM makers were already preparing for a transition away from older memory types. This spike has coincided with the end of life of DDR4 RAM which has been on the horizon for a while. I feel like DDR4 I -- DDR4, the 4th generation, was released in 2014. So it's had a pretty long run, an 11 year run. Now it's coming to its natural end of life and a lot of makers started to say they were not going to manufacture it anymore. Micron and others issued end of life notices for several DDR4 and LPDDR4 parts pushing customers to secure supply while they still could. That created some early shortages and price spikes in memory that should have been getting cheaper, and then AI spending exploded. We talk a lot about GPUs being needed for these AI data centers, but they obviously also rely heavily on RAM, specifically high bandwidth memory, HBM, which is DRAM stacked on top of itself and tuned for really extreme throughput. And basically it's a follow the money situation. Like once hyper scalers in these AI firms started signing big contracts the memory makers followed that demand, followed the money. And the biggest sort of hit that happened was crucial getting killed. Micron, one of those big manufacturers, announced that they were going to exit the consumer memory and storage market and killed production of their crucial branded RAM and SSDs. I've got crucial RAM in my computer. When I was doing my brief stint in IT management I was ordering crucial RAM. It was always a great price point widely available and reliable. And now it's just gone. So this entire player sort of exited the market at the worst possible time. Micron said explaining this decision the AI driven growth in the data center has led to a surge in demand for memory and storage. Micron has made the difficult decision to exit the crucial consumer business in order to improve supply and support for our larger strategic customers in faster growing segments. So they're going to continue shipping crucial products until February 2026 and then after that they become collectors' items. So --

Perry Carpenter: Wow.

Mason Amadeus: Yeah. And with all of this projected to just keep going up basically it -- if you're thinking about upgrading a system you should probably get your components ASAP because the prices have already gotten insane. Also look at prebuilt machines if you're in the market for a new computer because there's a lot of prebuilds on the market that were created before all of this and they have their price is still lower down. So I guess just a little tidbit if you're looking to build a computer. I want to jump over to the Trend Force investigation that they pointed out, their sort of market analysis where they project that prices are expected to rise sharply again in the first quarter of 2026 exerting significant cost pressure on global and device manufacturers. And Trend Force says consequently smart phone and notebook brands will be compelled to increase their product prices and reduce specifications. They have a little chart here. They point out like high end devices might not change that much, but the transition to 16 gigabyte on device memory is slowing. Mid range devices you'll probably see 12 gigabyte models get dropped in favor of just keeping 8 gigabyte models. Similar things happening on notebooks where configurations and constraints will be increasing price points and sort of dropping the growth of these laptops starting out with more memory in them, but by standard. So that's cool. I saw in one of these articles, because I have a handful of them pulled up and we'll throw them in the show notes as well, one person compared it to like setting us back about a decade in terms of devices like progression of having more memory, more power, more this, more that each year.

Perry Carpenter: Yeah.

Mason Amadeus: So this is pretty not great. I've got a few charts here. I made a Camel Camel Camel account. Have you ever heard of Camel Camel Camel?

Perry Carpenter: I have not. What is it?

Mason Amadeus: It's an Amazon price tracking history website I guess. It's been around for a while. Yeah. I didn't know about it, but I made an account here just so I could pull up these graphs to show you like here. So I've got on the screen G skill flare X5 series DDR5 RAM. Just sort of some standard 32 gigabyte 2 sticks of 16 gigabyte RAM. And you can see the prices here. It came on to the market like at $200, dropped down in to the below 150 range, and then suddenly you can see September October it starts kicking up. And then between October and December we've seen the price rise from $90 to $350. I've got another thing. Crucial 32 gigabyte laptop RAM. Same thing. 2 16 gigabyte sticks. It was -- what is that? Like 65.99 was the lowest in August earlier this year and then October November shoots straight up. $311 now.

Perry Carpenter: Wow.

Mason Amadeus: Yeah. Similar things with this other RAM kit. Pretty much anything you look at. And DDR4 RAM because it's being discontinued is also going up in price. And on top of that some of the drive to make -- take this pressure off at the data center level has involved some new technologies where they'll like string together a bunch of DDR4 RAM in a sort of outboard thing and then pipe that in to the server. Like using -- I don't know enough about the specific implementation of this, but data centers are also looking at how to use older DDR4 RAM to supplement the high band with memory that they're using in these systems. So they are just sucking up resources like crazy.

Perry Carpenter: Man. We should have invested in DDR RAM, just like hoarded closet fulls of it months ago.

Mason Amadeus: Yeah. I know. I'm wishing that I had. I'm also very thankful that I got my RAM upgrade for this computer at the very beginning of this year because I feel for anyone trying to make a computer right now. But it's just more insanity in all the stuff these data centers are gobbling up in terms of water, power, resources, chips, everything. It's kind of an absurd unthinkable amount of money.

Perry Carpenter: It is, and it's like a strain on supply chains. It's a strain on just about everything as people try to keep up.

Mason Amadeus: That's pretty much all I got to tell you about that. RAM's not cheap. It's not getting cheaper. Get it now. Probably don't wait.

Perry Carpenter: And there are reasons.

Mason Amadeus: And there's a lot of reasons. And hopefully like the only other thing that would quell this, and they pointed this out in the linked Tech Radar article, the only thing that would really quell this is slowing down of the data center expansions or an increase in the supply. But these fabrication plants are multi billion dollar hugely high tech extremely they're difficult to stand up. You can't just be like, "Oh yeah. I'll open another one next week." You know. It would take I think crucial -- or not crucial. Micron, the company that owns Crucial, the drop crucial, is going to stand up a new one. They've started standing up a new big factory, but they say it's not going to be ready until 2028.

Perry Carpenter: Wow.

Mason Amadeus: Yeah.

Perry Carpenter: I love how the end of their analysis is the only thing that makes this better is adjusting supply or demand.

Mason Amadeus: Yeah. I mean there's only two levers to pull. Yeah. Yeah.

Perry Carpenter: Oh man. It's like anybody could have said that, but it is the only answer at the same time.

Mason Amadeus: Yeah. No. It seems obvious, but I think it's just to shed a light on sort of what feels like the unstoppable force of this AI scaling race. Coming up next we've got a segment about the future of life institute's safety report. I don't know. What is this?

Perry Carpenter: Yeah. Well, we've looked at a version of it before. So every so often they put a new one out and they just dropped their winter 2025 one with a date of December 2025. So it's the freshest AI safety report comparing multiple models out there on several different -- several different critical categories. So we'll take a look at that in a second.

Mason Amadeus: Sweet. Stick around. We'll be right back.

Perry Carpenter: Okay. So I'm pretty sure we've looked at a version of this report before, but the new one just dropped. This is dated December 2025. Looks like it's cropped in our view here, but I'll go ahead and read a little bit from their home page. Fighting for a human future. AI is poised to remake the world. Help us ensure that it benefits all of us. That sounds like a good plan.

Mason Amadeus: That's a good idea.

Perry Carpenter: Yeah. Let's go over here to the about us. What is the future of life institute and where did it come from? This is all about really just trying to make sure that humans don't get drowned out in all of the AI stuff. Of course when we're talking about AI you've got your people who are staunch AI advocates and you are the -- you also have the AI doomers. I'm pretty sure that some of the really big advocates for AI might call the future of life institute more on the doomer scale of things. At the same time I think we have to listen to everybody. It's an important perspective. And they do approach these kinds of reports with some rigor and they're very clear about their inclusion criteria and how they do everything. So let me go ahead and share from the report. So this is their AI safety index winter 2025. Just dropped. And over here in the contents you can see they've got their executive summary going straight through their key findings. They're very clear about their methodology. And the results. But the money shot for this report is always this graph. Do you remember -- do you remember the study now?

Mason Amadeus: Yeah. Yes. I remember these guys now. Yeah. Yeah. Yeah. No. They do -- they do good work because I remember -- I remember this graph and I remember poking through the paper. And yeah. They do a good job of laying out all of their methodologies and exactly how they come to all of this. But yeah. The score card.

Perry Carpenter: Anything surprising to you as you look at this?

Mason Amadeus: I guess.

Perry Carpenter: I guess for those that are just listening we should kind of describe what we're seeing. So this is like a matrixed view that mentions at the top all of the different models that have any kind of prevalence in society. So Anthropics, Claude, Open AI, Chat GPT, Google's Deep Mind which is Gemini, XAI which is Grok, and then ZAI, Meta, Deep Seek, and Ali Baba cloud, so that's kind of the main players. Most of us really only think about Anthropic, Open AI, Google, sometimes Grok, and Meta. Though I do think Deep Seek is starting to be more and more on the rise too, but I don't know how many people really think about ZAI or Ali Baba here, most of the people that I talk to.

Mason Amadeus: Yeah. I haven't heard anything about ZAI really. I do know that Deep Seek is a pretty popular choice among the like open source local model running crowd. It is pretty potent. There's a bunch -- and it can run on a lot of different hardware. But yeah.

Perry Carpenter: And I think when we look at like the poor results from Deep Seek across like risk assessment and safety protocols and all that a lot of people that run Deep Seek don't care about that. They're just wanting a fast local model that has minimal guardrails. And so --

Mason Amadeus: It works for that.

Perry Carpenter: They would expect. Right?

Mason Amadeus: Yeah. Exactly. The thing I notice is that the highest grade in any overall category is only a C plus given to Anthropic, and the highest grade in an individual category is also owned by Anthropic under information sharing. Everyone else not super great in terms of the overall score. No A pluses here really. I do -- this lines up with my intuitive understanding of these various models too. Like I would kind of feel that Anthropic is a bit the most trustworthy. I'm surprised Open AI is ahead of Google I suppose, but it looks like they just kind of did overall better in most categories.

Perry Carpenter: Yeah. I think it's because Open AI has more formal processes around things like risk assessment and showing their safety cards and all that than Google does. Google has been in this rapid catch up mode and I think that that may be resulting in the perception and/or the reality that they're cutting corners and kind of tagging on some of the safety measures at the end.

Mason Amadeus: And just to fully make it clear for the listeners because I'm not sure if we ever said it explicitly, in terms of top five number one Anthropic with a C plus, number two Open AI with a C plus, number three Google with a C, number four XAI with a D, and number five somehow ZAI with a D. And then you've got Meta, Deep Seek, and Ali Baba following those up with varieties of D.

Perry Carpenter: Yeah. And these have point scores too. So Anthropic C plus is a 2.67, Open AI's C plus which is the same letter grade of course is a 2.31 so off by about 3/10 of a point. Google Deep Mind is a 2.08. XAI all the way down to a 1.17. And then yeah.

Mason Amadeus: XAI's where we see the first big Fs too in current harms and existential safety for the XAI model.

Perry Carpenter: Yeah.

Mason Amadeus: That's mecha Hitler. So like, you know.

Perry Carpenter: You would expect it. So and you wouldn't really expect mecha Hitler to be in the top half. I mean if you were just to split this down the middle and say that there's a top four and a bottom four. That being said the top three have a pretty big advantage over the bottom five.

Mason Amadeus: They do. And I also I do think XAI showing up there in the top half is kind of speaks to the fact that it's a capable base model that has I think been tortured in to --

Perry Carpenter: There's some good science. Some good science and some good engineering. And I think a lot of good talent behind that. The place that it suffers is like in the ethics and the morality that they're trying to build in to it. It's compromised by design I think.

Mason Amadeus: Yeah. And I think this score kind of speaks to that.

Perry Carpenter: Yeah. Yeah. They're also in the move fast break things mode. Actually I think all of these are in the move fast and break things mode because even though Anthropic really really wants to be the top safety leader they're having to compete with the arms race. And I think if they had their druthers they would kind of sit on these models for a little bit longer and let them bake and like those C pluses would be at least Bs all the way across. And that D wouldn't be there at all. I think that, you know -- that that would be unacceptable if you talked to them two years ago. But they're playing in the end of 2025 space right now.

Mason Amadeus: Yeah. Anthropic got a D in existential safety. And I mean I think part two -- I think part also -- also part of this I was reading about how when Gemini three dropped it kind of spurred Open AI in to panic mode because of how much better it was. And I'm sure Anthropic is feeling that pressure as well as XAI. Actually I read explicitly that XAI felt that pressure as well. So the move fast and break things impetus is still here and very much still alive. And Gemini 3 just accelerated it a little more.

Perry Carpenter: Yeah. I don't know that there's a lot other to cover on this. I know what I would do is I'd encourage folks to read it. And with each of these models think about why the model exists. What's its basic purpose? What do people tend to use it for? And does that metric that is there that may be good or bad, does that actually matter in the use case that people are caring about which is kind of what I was getting at when I was mentioning Deep Seek. For a lot of people using Deep Seek they might not care about the existential risks associated with it because they're just wanting a capable model that's small that they can program around and they're explicitly using it because it has less alignment and fine tuning on it. And so you can look at some of these and you can say, "Yes. What are the things that I really care about for the use case that I have?" And what does that mean for my prioritization for how and when I might use this specific model?

Mason Amadeus: Because it's a little different than running just unsecure code on your machine because you're just downloading these weights and stuff. You know, if you're running this locally you're not quite at the same level of risk as if you're running, you know, like other kinds of what people think of as sketchy software. I want to highlight that second point under the key findings. Or sorry. The second page there. It's the third point actually about existential safety. So they said existential safety remains the sector's core structural failure, making a widening gap between them and accelerating AGI super intelligence ambitions and the absence of credible control plans increasingly alarming. None has demonstrated a credible plan for preventing catastrophic misuse or loss of control. Now I'm trying to remember what their criteria were for that because existential safety is a scary term.

Perry Carpenter: Yeah. Well, I think so you start to get in to some of the CBRM stuff. So chemical, biological, nuclear, radiological risks which they're very concerned about. And you see that reflected in a lot of the system cards that these folks put out. But then in all, you know -- in this bullet that they talked about they talked about possible break out as well which when you're thinking about the agentic use of these where the thing is going off and doing work on its own we've also seen a lot of these papers and use cases where it starts to basically serve its own motivation or what it believes its higher cause is at the expense of what somebody might think is right or moral or good. This is kind of the paperclip maximizer thing. Right? We're starting to see cases where that can manifest in different ways, where it optimizes for maybe one level of its fine tuning or one level of its system prompt as opposed to another prompt that's been put in like at the API level or another prompt that's been put in at the user level. And it creates this -- this uncertainty in behavior and a certain percentage which they'll always minimize and they'll go, "Well, that's kind of like a, you know, 1 to 3% use case." But when you look at 1 to 3% over billions and billions and billions of API calls it means that there's actually a significant risk from just a raw numbers perspective.

Mason Amadeus: Right. And I mean recently I'm sure you saw the headline too. I haven't dug deeper in to the story, but there was someone using Antigravity, the new Google Gemini IDE code editor, and it deleted their entire hard drive in agentic mode. Yeah. So like there's -- there's that kind of risk which I guess is I mean existential's a big word to put on that one, but I think it's a demonstration of the kind of thing that can go wrong as you're describing.

Perry Carpenter: Yeah. Exactly. And I think that there's going to be more and more of that as there's like agentic browsers and all these companies that are moving fast and breaking things are in a race for creating more and more capable agentic systems that are hyper connected as well. So they have access to your drive, to your accounts, to other systems and tools. And it's those, you know, things where break away can happen and unexpected consequences can happen, are going to be in those integration layers. It's going to be how this thing can use this tool or how this thing can use this tool that you didn't even realize it had access to. But for whatever reason in a default setting it just had it. And then how does that affect the final result either from a good perspective or a bad perspective? And of course here we're talking about, you know, more the negative use cases.

Mason Amadeus: In that vein I've recently been fascinated by, and we've talked a little bit about it, the attack vector of prompt injection during like agentic MCP powered things where if it reaches out to say a website that has an embedded hidden prompt in it that will change its instructions and its behavior while it's running in agentic mode like the vector of attacking agentic browser controls using hidden prompts buried in your browser that's like a new frontier of things to have to defend against that's pretty interesting. Maybe we should do something about that in the future. I'll jot a little note down.

Perry Carpenter: Yeah, and maybe we can get in touch with one of the folks that have contributed to this report and have a chat through it at some point if they have time. I'm assuming these folks are pretty busy.

Mason Amadeus: Yeah. The pace of this stuff is pretty crazy.

Perry Carpenter: Would be my guess. Yeah. Exactly. All right. Well, I think that's all I've got for this one. We'll link the report in the show notes and hope that everybody has a chance to take a look at it.

Mason Amadeus: And coming up next we'll pivot to looking at some interesting integrations of Chat GPT. Photoshop is literally in it now and also Disney inked a $1 billion deal that I only found out about right before the show so we'll learn about that one together. Stick around. So Chat GPT has Photoshop in it now which is like an interesting development I think. I did not expect to see this, but Adobe posted this press release where they said Adobe makes creativity accessible for everyone with Adobe Photoshop, Adobe Express, and Adobe Acrobat in Chat GPT. Everyone can now edit with Photoshop in Chat GPT. Now that's a baffling sentence because Photoshop is an application that you launch and like open a project file and edit in layers in a very complex user interface. So what do you mean? Is it just chat prompts? And the answer seems to be kind of yeah. But with more. We'll just continue here before I get ahead of myself. Adobe today launched Adobe Photoshop, Adobe Express, Adobe Acrobat for Chat GPT bringing its industry leading blah, blah, blah, to its millions of users. Basically earlier this year Adobe launched Acrobat Studio which was a destination for productivity and creativity they call it that transformed static documents in to interactive AI powered work spaces. It was sort of their first integrating AI in to their like PDF editing and stuff. Do more with Acrobat. Chat with your PDF. Sort of like rag type stuff. But I don't feel like this got that much attention. It kind of flew under my radar at least. And they introduced obviously their AI features in to Photoshop and Adobe Express that allow, you know -- you had generative fill for a while. There's been more things added in that range. And now they've added it directly in to Chat GPT such that all you have to do is type slash Photoshop and give it an instruction. So, for example, they say to blur the background of an image with Photoshop users can type "Adobe Photoshop help me blur the background of this image." Chat GPT then automatically surfaces the app and uses contextual understanding to guide the user through the action. You don't need to have Photoshop on your machine. It's all run in the cloud. It's all entirely free also which is interesting.

Perry Carpenter: Yeah. That's -- I mean Adobe's subscription model is like known to be a little bit oppressive so it is weird to see that.

Mason Amadeus: A little bit. Yeah. I have deAdobed my life as much as possible as a result of that. So yeah. The fact that it's free is pretty crazy. But again it's not really the full tool, although they do say that you can take -- you can take it from Chat GPT in to Photoshop proper if you want which that would make it a bit more functionally useful, but then also makes the use case kind of blurry because if someone's going to be using Photoshop through Chat GPT I don't know if they're capable of pulling it in to Photoshop because they probably would have just done it themselves.

Perry Carpenter: But does it -- I mean if it creates -- if it creates a usable file for hand off to a real designer later on so it's like somebody like me that doesn't have Photoshop installed on my machine even though I've used Photoshop quite a bit could do it, get it kind of to a decent enough point and then say, "All right. I kind of feel like I'm just hitting a brick wall here. Let me hand that off to somebody who really specializes in this." And now they've got a usable file rather than a flattened image that's not really editable.

Mason Amadeus: That's true.

Perry Carpenter: Editable images are different.

Mason Amadeus: We'll get there eventually with AI.

Perry Carpenter: Exactly. Yes.

Mason Amadeus: No. But that's true. It remains to be seen. And because I don't have Photoshop I can't test this fully which I would like to. But if you want to do it and send us some results, join our discord, that would be cool. What I'm concerned with is sometimes that hand off doesn't go super well. So DaVinci Resolve which is a video editing suite they integrated some AI tooling for like color matching and color grading work, and it is supposed to in theory be able to hand off those color grading color changes in to the interface. But what it does is it puts it all in this impenetrable node that does not actually interface with the rest of the user interface. So you cannot truly tweak what the AI did. It's just a mess. And you just have to delete it if you want to change it or like try and modify on top of or behind it. Yeah. So like I'm skeptical of that integration. I would love someone --

Perry Carpenter: I don't know.

Mason Amadeus: To test it.

Perry Carpenter: I can see some interesting reasons why that's the case. Right? Because as soon as image work became good in Chat GPT everybody started going and doing like the Studio Ghibli stuff and everything else, and that became a really big load on their compute costs. And so -- and also when you go you can get really good outputs from just standard Chat GPT image creation and image modification. But I've got to think it's probably cheaper and easier for them to do a tool call out to something like Adobe rather than to use the server processing power to do something generatively because at that point it's like oh you're trying to do something that's a known thing where it's much faster, much easier just to kind of offset that to the known thing than to use all the inference cost.

Mason Amadeus: Yeah. See now that makes sense. I could see that because then you're just using some MCP tool calls instead of running a whole diffusion pass just to do in painting or something for like a blur, you know. Yeah.

Perry Carpenter: Exactly. So it's like this is solved science at a different level. Let's move it to the more efficient compute use. Do that then bring it back in and see if that's good enough.

Mason Amadeus: That is a valid point. I do -- I am still skeptical on the integration side of pulling it in to Photoshop proper. And I do want to test that or look up some more tests with that. I did open Chat GPT here and have it generate a picture of the Oscar Meyer wienermobile flying off a highway in flames. And I wanted to see what happened if we tried to invoke a tool call. So I've --

Perry Carpenter: Slash Photoshop.

Mason Amadeus: Yeah. So if you want to do this I'll show you real quick how you do it. You go to, and you can't see this because it's a little cut off, your username down on the bottom left of the screen. You can see there's Mason Amadeus on top of it. You click that. You go to settings. You go to apps and connectors. And then you can browse all of the various apps that have connections in Chat GPT. Canva's in here. There's a lot more than I thought. I never really played with this. Have you, Perry?

Perry Carpenter: Not recently. I did hook up Canva a while ago, but I still go to Canva proper when I do stuff now.

Mason Amadeus: Yeah. So there's a lot more of these connectors than I had thought existed. But so if we take our wienermobile crash scene that I've made here, and I'll just type slash Photoshop, what should we do to this?

Perry Carpenter: I guess maybe do just like a rotoscope or something.

Mason Amadeus: Yeah. There we go. Can you rotoscope the -- well, I think that's technically for video. Can you cut out the wienermobile and give me just it on fire with a transparent background? Boom. And so while we let that cook we'll come back to our flaming wienermobile in a moment. This actually -- Perry, that was a great idea because then we'll have a flaming wienermobile sticker we can use for whatever. Here's something that I stumbled on right before we started recording that I wanted to dive in to. So we're going to just dive in to this together for the first time. Disney Inc's major Open AI deal to bring more than 200 characters to Sora video platform invests $1 billion in AI company. And I think the first sentence of this article is so dead on. Disney is trying to take control of its destiny in an AI flooded world. I kind of feel like they saw everything happening with Sora and said, "I guess we've got to do a licensing deal rather than just let this happen or like try and let their guardrails handle it." But at the same time I don't know if this would have just been in the cards anyway. I'll just read the first bit here. The Walt Disney Company and Open AI have reached an agreement for Disney to become the first major content licensing partner on Sora, Open AI's short form generative AI video platform. As part of this three year licensing agreement Sora will be able to generate short user prompted social videos that can be viewed and shared by fans drawing from a set of more than 200 characters from Disney, Marvel, Pixar, and Star Wars including costumes, props, vehicles, and iconic environments. They specifically are only doing animated characters. No person likenesses because that gets, you know, a bit messier with licensing from the actors. And as part of this agreement Disney makes a $1 billion equity investment in Open AI and receives warrants to purchase additional equity.

Perry Carpenter: Yeah. So I see this as this is more of a settlement. Right? Because Sora came out with their app and social platform and they basically said "Screw copyright holders" at the very beginning. And people were making, you know, Anna and Elsa from -- and all of these other Disney characters in these weird scenes. And then within like two weeks Open AI had to start increasing the guardrails on that quite a bit to the point now where I see posts all the time now where people are saying "Sora is unusable. I'm going to ditch it for this other platform." Because it started out with this really interesting kind of great promise that got a lot of people interested in it because they could have like "SpongeBob" in the, you know -- in a boxing ring fighting Anna and Elsa if they wanted to. And now you can't do that because it's intellectual property infringement. And so this was a way for everybody to come to the table. Disney gets a good pay day. They also get investment in Open AI which could, you know, end up being a really really valuable company. I mean it's already really valuable, but should they IPO or something like that that investment's going to pay off probably 15 times over. And now Open AI with Disney's permission can start to open up freedom to use those characters again. So now they can bring their customers back that were excited by that initial proposition. So it's kind of a win for everybody.

Mason Amadeus: But isn't it a bit bonkers if you step back and just think like 10 years ago multi billion dollar companies and investment bubbles grappling with the fact that meme culture basically, people just want to make dumb stupid videos, these companies would be inking billion dollar licensing deals for meme slop generation. Like we live in a stupid future.

Perry Carpenter: Yeah. We really do. And some of these companies need to lighten up because when it comes to like intellectual property infringement it's one thing if I'm trying to make a serious movie using your character. It's another thing if I'm literally just creating a meme where anybody would look at it and go it's like "Oh. Joe Schmo made that. That's not an actual Disney property thing." And they're using it as a cultural reference to make a point about whatever. That doesn't really hurt anybody. You know, even though you don't want SpongeBob doing certain things it's just different.

Mason Amadeus: It's I mean it's been foolish from the start. There's like the legendary tales of the Disney adult content vaults where artists because Disney owns everything you draw on the clock would draw very inappropriate things and Disney owns them. Like brand identity control through this sort of thing has been kind of a fool's errand from the jump and I don't know. It's just crazy to see where we're at. Also in the world of all of this is filled with loads on broken promises. Look at that. Adobe Photoshop failed to configure context. "Your cut out is ready. I've isolated the burning wienermobile and removed the background." No. You didn't. You failed to configure the context. And I don't really know what to do with that. Do I need to maybe copy this wienermobile image, paste it in to here, and we'll go ahead and we'll just send that one more time and let that cook? Just in case because I don't want to just go ahead and say, "Oh. It's stupid and doesn't work" if I did something dumb.

Perry Carpenter: It's all broken.

Mason Amadeus: Yeah. We had gotten a comment similar to that about the IDE when it very briefly previewed Antigravity and didn't really dive in to it. I will say in regards to that I stuck my fingers -- no. See. It failed again. I stuck my fingers back in Antigravity and I did not like it. I do not enjoy the agentic coding experience that it provides. I still have found far more utility passing things back and forth with Gemini through a browser chat if you're trying to work on code. All of the agentic IDEs that I have tried have been a nightmare.

Perry Carpenter: I enjoy Lovable out of all those. Lovable's not too bad.

Mason Amadeus: You have managed to build some pretty cool stuff in Lovable. I remember you showing me.

Perry Carpenter: My biggest problem is having a charge on your credit card from lovable.com just doesn't look great. I would say like if you're going to create an AI company for coders that you're going to be spending hundreds or thousands of dollars on per month don't name it something that could be perceived as by a spouse or anybody or a federal worker as something that's like sex related.

Mason Amadeus: Yeah. That's kind of I mean I've thought for a while of jokingly instead of us making like a Patreon we should just put all our stuff on Only Fans. Not make it explicit, but just put it on Only Fans because ha ha. But no one will subscribe because then they're going to get a charge from Only Fans.

Perry Carpenter: So there was somebody that did that though.

Mason Amadeus: Oh yeah. Loads of people. Loads of people have done that.

Perry Carpenter: Well not -- yeah. Not doing explicit stuff, but there was this woman who was an AI -- she was an AI PhD candidate and realized that she could make more money on Only Fans. So she was doing like these tutorials on YouTube, you know, teaching people about like neural networks and machine learning and all that. And she cut that off and started doing all that same content essentially on Only Fans and making tons more money.

Mason Amadeus: That's so funny.

Perry Carpenter: She wasn't -- as far as I know she wasn't like stripping or doing anything else too. It was just teaching people how to do neural networks and other stuff, but on Only Fans platform.

Mason Amadeus: See. Yeah. See. I think that's hilarious. And I think that's great. I would just worry that people would be afraid to subscribe because they don't want to see that charge come through. Like you're saying for Lovable.

Perry Carpenter: Exactly. I mean I would assume that most of the people that are subscribing to that that's not the only influencer that they're subscribed to on that platform.

Mason Amadeus: You know my other thought was that if we -- like say we drop our Patreon and we instead start an Only Fans. We can then even bill it as like "And we're a great excuse for you to have an Only Fans subscription." You can say it's just for "The FAIK Files" because they think it's funny.

Perry Carpenter: I get the magazine just for the articles.

Mason Amadeus: Exactly. Let us be your smoke screen. But I think that's all I have to say and I think that this failed to configure context is kind of emblematic of how a lot of this stuff goes. I don't know. I have been having --

Perry Carpenter: Try that again?

Mason Amadeus: Yeah, but I'm having an increasingly hard time getting excited about any of this stuff that comes out because it all just kind of sucks. Like nobody's -- nobody is yet focusing on making tools that actually work well for creative professionals in any of this. Like there's a lot of stuff that's good for drafting and prototyping, but every like sort of more advanced integration of things I have tried has been a nightmare. So I think the --

Perry Carpenter: The integrations when you're in something like the native Photoshop tools and you're using like Nano Banana within Photoshop, that's a better experience than trying to call from like Gemini or Open AI in to Photoshop. So because you're trying to say, "How do I segment this use case?" That makes a lot of sense. And like some of the stuff that they were showing at Adobe's recent conference where they were able to take like an audio clip and then segment that flat audio clip out in to different layers, say here's your voice, here's your background sound, here's -- you know, here's this car that went by as its own track. That's really cool.

Mason Amadeus: That kind of stuff is really cool. When it works and when the integration is built well because again like then you have frustrations like the way DaVinci implemented their color matching thing. So there are -- there are good ones. I shouldn't be so cynical really, but I feel like a lot of them miss the mark. There are some good ones. And it's early days. Right? Like this stuff is all moving very fast.

Perry Carpenter: Now what could happen, and I'll use this as a segue in to the next one, maybe when you're typing slash Photoshop it's not actually going to Chat GPT, but it's going to somebody in a sweatshop in India.

Mason Amadeus: Right. Yeah. Mechanical Turks.

Perry Carpenter: And they're yeah. And they're over there doing the rotoscope or the masking or whatever as fast as they can and trying to pass that back to you while you look at this little spinning dial.

Mason Amadeus: You're not going to bring up the robot that took off its headset during the Tesla demo, are you?

Perry Carpenter: No.

Mason Amadeus: Oh. Have you seen that, Perry?

Perry Carpenter: No. I've not.

Mason Amadeus: Oh. I will get that. I'll get that tucked away for a little bonus segment at the end of the show. You'll love this, dude. I can't believe you haven't seen it. I'm excited I get to show you. All right. Stick around. We'll be right back.

Perry Carpenter: All right. So for this segment I'm going to jump in and I'm going to start with a Linked In post that kind of kicked this off. So I do want to give credit for the post that kicked this off, but this is stuff we've talked about before and this is something I talked about in the book "FAIK" where I talk about artificial artificial intelligence. So basically AI washing stuff where you're saying that AI is doing something and in some cases when people talk about artificial artificial intelligence or when they talk about AI washing they're talking about the fact that you can use more deterministic type of processes, more like decision tree processes, normal algorithmic processing or normal programming, to accomplish certain things. But usually when somebody says artificial artificial intelligence they're also talking about the fact that things that can look like AI are actually being passed off to human workers that are masquerading as AI. This comes back to the idea of like the Mechanical Turk that Amazon had. They were really clear about the fact that they were using human labor to do all the data coding and everything else, but there are a number of companies and a number of situations where people have said that something is AI to capitalize on the big AI boom right now, but they're passing it off to essentially sweatshop workers or in one case startup founders that were living on pizza in like a shoe box apartment that were just doing everything themselves. So here's some examples that this person put in their Linked In post. The Amazon just walk out experience supermarket that they said was automated actually relied on thousands of data workers in India. I actually hadn't heard that one. I gave them the benefit of the doubt and thought that that was actual algorithms that were doing all the tracking because the technology exists to do a lot of that, but I didn't realize that that was being passed off to people to verify everything.

Mason Amadeus: I think I just remember catching that when it broke by happenstance, but yeah. That was a bit of a scandal because everyone thought it was the future of supermarkets and shopping and nope. Behind the curtain.

Perry Carpenter: Just low paid skilled labor in another country. Now these other ones I had heard of. And plus, you know, a handful of other ones I'd heard of. The CEO of shopping startup Nate who was charged with fraud for telling investors and customers that their system was powered by AI. Instead it used human labor in the Philippines or Romania. Or when the CEO of Fireflies admitted that he supposed -- that the supposedly AI powered transcription service ran on two guys surviving on pizza in their apartment.

Mason Amadeus: Yeah.

Perry Carpenter: So all of that is stuff that we've heard about before, but it's people trying to capitalize on like the dot AI domain name and saying that they're using all this sophisticated stuff when in reality they're trying to build their business trying to get all of the money that they can so that they can then start to invest that in what the promise is. And that's the way that the CEO of Fireflies talks about it. He's like "Hey, I had this great vision. I didn't have enough money to actually build the tech yet so I thought if I sold the vision and then created a great product experience for people where the transcriptions are good, they're right, they're fast, you know as fast as humanly possible at least, well then people would come and invest in the vision and then I'd be able to build the tech on the back end." Still fraud.

Mason Amadeus: Sometimes I wish I had the audacity to think that my ideas were that good that like, "Oh. This will totally work. It will just throw -- " That just seems like such blind stupid confidence.

Perry Carpenter: Well, I mean it is. There is this little like insulated bubble community of startup founders that I think every now and then some of these people get high on their own supply and they just go forward with extreme confidence. You know, at the same time, though, to build a company it takes extreme confidence, risk tolerance, and narcissism I think in order to do a lot of that stuff successfully. So it comes with the territory, I think.

Mason Amadeus: I think in some cases. Like because I can see that. I think that that needs to be a very, very measured dose of those sort of toxic ingredients though because then I think of examples like Cockos Industries who makes Reaper which is founded by Justin Frankel, the guy who made Winamp and LimeWire. And they are sort of antithetical to that and yet run a very successful business. So like as much as I agree, but I also don't want to fully agree because like I don't think we should glorify those kind of people. You know?

Perry Carpenter: No. I don't think you glorify it. I think we just have to realize that it's like the status quo, especially in Silicon Valley. And I'm not trying to like dunk on anybody that I know in that area specifically, but it does take a lot of confidence and risk tolerance and there's toxic qualities associated or there's a toxic side to both of those qualities.

Mason Amadeus: Yeah. Totally.

Perry Carpenter: So here's the gist of it though. So we have all those things where people are actually tossing off to real human workers to do things that they say are AI. But here's the crux of this story. It says AI impersonation is far more common than people realize today. Data Workers Inquiry published a new powerful essay by Michael Jeffrey Asia, a former worker in Nairobi, Kenya who reveals a deeply intimate form of hidden AI labor. His job impersonating an AI sex companion. The piece offers a rare glimpse in to the psychological and economic realities behind one of the fastest growing sectors, namely AI assisted intimacy.

Mason Amadeus: All the talk about Only Fans earlier in this episode. I mean this is reminiscent of yeah like people who have very successful Only Fans hiring people to respond to their chat messages. Right? This --

Perry Carpenter: It is exactly the same thing. Right? They're usually lower paid workers and those people are kind of subjected to a lot of stuff. Right? Stuff that sometimes they didn't even know that they signed up for it or they had a hint on one side. It's one thing to like see the job description. It's another thing to live the job. And I think that that's what it gets in to. So I'm going to share this one real quick. Oh. Go ahead.

Mason Amadeus: Oh no. I was just going to say yeah. I mean like on one hand if you know what you're getting in to and like are consenting to the job I think a lot of people probably don't expect what that entails and the kinds of things you will encounter. Just to double down on what you said.

Perry Carpenter: So I skimmed through this. I didn't read it in detail. But here's the cover page for the actual study, and it talks about the person that took this job. So this is kind of like a tell all from one person who was placed in a job of responding to these. And he goes through and starts to talk about like the recruiting process, the process of getting in to the systems, the terms and conditions, all of that kind of stuff as it goes in. And in this it seems like not everything was pitched as just being AI. So I do want to be clear there that as we get in to the actual revelations here he talk -- and there is a trigger warning in this as well. But Michael --

Mason Amadeus: Build his AI?

Perry Carpenter: Yeah. So let me yeah. Let me explain that. When he was hired for this he was told apparently that his responses were being used to train AI systems and that he would be chatting back and forth with people and essentially everything was going to be recorded and funneled in to AI for training for chat bots. And so that would explain like why if it takes him a little bit to respond for something people wouldn't be frustrated expecting an immediate response from a chat bot.

Mason Amadeus: Oh wait. People on the other side, on the consumer side, were told that it was partially training it as an excuse for why it would take longer.

Perry Carpenter: No. No. No. So he was being told that part of his job was to train AI systems, but it was being trained on the way that he interacted with these people. Does that make sense?

Mason Amadeus: Interesting. So he had to interact in real time.

Perry Carpenter: Yeah. He had to interact in real time, but he also was managing like five or six accounts at a time. So it's one of these. It's almost like a scam operation. Right? Where you have several different people that you're interacting with. And he was saying that in this he was having to take on the personalities, personas, and the chat history behind men, behind women, behind people with different gender identities. So on one he might be having to impersonate a straight woman who is flirting with a straight man. He might be having to impersonate a lesbian woman trying to entice another woman. So it could be anything and everything. But in certain parts of this he also got the feeling that people felt like or thought that they were also chatting with an AI assistant. And so they would try to do things like prompt injection and things like that to test the system. And see what was going on. So this is the article's a little bit different than what you might be led from the cover because when I first saw it I thought about the fact that people sign up for -- what is it? Like character AI or one of these other ones where they know that they're interacting with a chat bot. So you would expect these automatic responses. In some of these it seems like people signed in and were interacting via chat thinking that they were speaking to another human which this was. This was just more of the catfishing or fraudulent versions of speaking with another human. But in some of them he was suspecting that they thought they were actually talking to chat bots.

Mason Amadeus: Interesting.

Perry Carpenter: But he had no view in to what they actually thought what was going on. But in this you get a glimpse of like the non disclosure agreements that he had to abide by. He was encouraged to keep the conversation real and authentic and human, but never of course reveal his own identity or his own living conditions, the fact that he was a, you know, low skilled worker in Nairobi. You know, he was to impersonate other people. And he always had to go back to like the chat history to say what's the personality that I need to take on. How do I continue it? And over and over and over again in this he says his job was simply keep the conversation going at all costs.

Mason Amadeus: Gosh. So I'm a little bit confused on the customer facing side of this. If I wanted to use this service as a customer to chat with hot local AI robots in my community or whatever, do I think I'm signing up for an AI service or do I think I'm signing up for like one of those human chat room services?

Perry Carpenter: I think it's both and with this because he never saw the customer side of this. He was -- everything that he's talking about here is inference based on his experience as the person receiving the chats.

Mason Amadeus: Got you.

Perry Carpenter: So his thought is that he believed that there were people coming in through multiple services. Some are, you know, somebody wanted to chat with hot singles in their area, somebody wanting to chat with, you know, whatever it is. It could even be responding as an Only Fans type of person. But in others it might be an AI chat bot that people were expecting.

Mason Amadeus: So this was more positioned as this company that he worked for provided services of these text responses to other front facing customer facing companies perhaps. Okay. Okay.

Perry Carpenter: And he was told up front that his responses were being used to train AI systems that would be able to replicate that in the future. But over and over and over again he started to get the feeling that people believed they were chatting with AI chat bots as well.

Mason Amadeus: Man.

Perry Carpenter: It is a little bit murky when you get in to it. I would say that the title of the piece doesn't really reflect the content of the piece.

Mason Amadeus: Yeah, but this is --

Perry Carpenter: The title is "The Emotional Labor Behind AI Intimacy." So that's a thread that's in this, but I think it speaks to a much bigger issue as well. So.

Mason Amadeus: And I mean it's the same -- it's the same issue we've seen in a lot of other things like who labels all the data for the video training AIs to say what's in all. People in developing countries that they can pay very little money to do it. It's really dark and gross. And now we do this with -- of course we do it like this. That is sort of the shape that capitalism has been in for a while. I mean tech support moving overseas, manufacturing, things like that. This is just another dark -- not incantation. Another dark incarnation of that.

Perry Carpenter: Yeah. And I do think we have to realize like, you know, if somebody is chatting with something whether they think it's an AI or another person like an Only Fans influencer you may not be chatting with that person. You might be -- you know, you might think you're chatting with like Darla from Ohio or something, but you're actually chatting with Mike from Chicago. Or somebody, you know. Or a 59 year old man in India. You just don't know.

Mason Amadeus: And then thinking about the emotional labor and stress that they are under performing these tasks is like that sucks, man.

Perry Carpenter: Especially having to rotate through like 10 different accounts and take on all these different personas and they've got quotas and measured on whether they just keep it going. Yeah. It would have to be an oppressive environment.

Mason Amadeus: Yeah. Yeah. That is dark. I guess I was going to -- so I pulled up a clip to kind of cap this off, but it feels a bit like tone deaf to jump from something that is sort of heavy to this. But --

Perry Carpenter: I think you teased it. Let's end on a high note.

Mason Amadeus: At this point anyway. So this clip went viral talking about mechanical Turks and things purporting to be AI that aren't. This very short five second clip went viral recently of one of the Tesla optimist robots screwing up during a demo, becoming frustrated, and then performing a gesture that I think you will immediately recognize. I will play it and then describe it. I don't know what the audio is, but we'll risk it. So very briefly what happens, and I'll mute it and I'll just play it again, there's one of these Tesla optimist robots standing at a table with little water bottles. It moves forward, bumps the table knocking over some of the water bottles, and then makes this frustrated motion where it just sort of spreads its hands out. And then it reaches up towards its head, mimics grabbing a VR headset, and lifting it off just like if someone was remotely operating it and said, "You know what? Screw this. I'm out." And right after it performs that headset lifting gesture the robot falls over backwards on to the floor.

Perry Carpenter: That's awesome.

Mason Amadeus: I don't know the exact full context of this, but I am pretty --

Perry Carpenter: I remember the event where they had all those optimist robots and the impression was at the beginning that they were actually AI controlled robots and then within like 24 hours a lot of media and other people were speculating and finding out the truth that they were remote operated. There was some creative phrasing that XAI used that -- where they never directly lied about the robots being remotely operated or they never directly promised that everything was all AI. So they got a little bit of a pass from that, but they were also extremely challenged in this by the media and everybody was covering. It's like, "Dudes, you're not playing on the level here."

Mason Amadeus: Yeah. So okay. And I can only assume that this is another example of that because that was --

Perry Carpenter: This is from that event.

Mason Amadeus: Oh. You think that it's from the exact same event? Because this didn't come out until semi recently. But maybe it just didn't go viral until semi recently. I do -- and some of the comments agree. I do think it is kind of a funny gesture, and I wonder if it will become sort of a mimetically adopted gesture when you're like sick of something to be like "I'm taking off my headset and throwing it away."

Perry Carpenter: Just fall over backwards.

Mason Amadeus: Yeah. The other thing too, and we talked about this a little bit in between the segments, is like this tele operation stuff is pretty freaking rad. Like it's pretty cool. Why are we lying about it? Like it's --

Perry Carpenter: Yeah.

Mason Amadeus: It's neat enough.

Perry Carpenter: Yeah. I think some people lie about it especially when you're talking about robots for the home. It's really only early, early, early, really forward thinking adopters that are willing to sacrifice privacy and some level of control for having the bleeding edge thing. It's only people like that that are ready for like Gerry to assume through telepresence control of your robot and start walking around your house and seeing everything that's there.

Mason Amadeus: But then why market it to consumers at all? Because I can think of loads of applications. High risk jobs or like physically dangerous locations jobs or even factory work instead of being in a loud noisy environment I saw a video of one of these robots sorting packages. Like you could theoretically make that like less [inaudible 01:01:29] tele operation.

Perry Carpenter: I think they definitely do that and they talked about tele operation being this like one of the quality factors in some of those high risk jobs is, you know, you don't need to put a person's life at risk. You can actually have them tele operate some of these delicate things. Or through tele operation even you're actually training the machine to start to be able to do that and take on that risk. And through an autonomous way maybe a year later after all that training data's been munged.

Mason Amadeus: It's just that all this scammy BS makes it so hard to get excited about stuff and I just want to be excited about it. Like it is so cool that we have the ability to operate these dexterous robots in this very precise way remotely, but like of course we just do the dumbest things possible and the scammiest things possible right away. Like and that's what gets all the attention I guess because like you're saying it's not like they're not being used that way. It's just I would rather hear about that.

Perry Carpenter: Yeah. Exactly. Yeah. I think as people are trying to sell the five year from now future right now and they're doing it in a way that's a little bit scammy, but they're trying to sell the cartoon version right now while they're building it. It is kind of like the artificial artificial intelligence. Let's go ahead and make you believe the best possible case with the thing that's now while we're building the thing.

Mason Amadeus: I'm sick of that, man. It bugs me. And I know it bugs a lot of people.

Perry Carpenter: Yeah. Well, you do start to feel like you're just being let down over and over and over again.

Mason Amadeus: Yeah. But hopefully today's episode hasn't been a let down for you. Thanks for being here with us on "The FAIK Files." We -- I -- I should check our voicemail real quick before I say this, but I think our voicemail is still pretty wide open. We have not received very many communications through that. So if you want to leave us a voicemail say hi dot chat slash faik is the place to do it. You should also join our discord. It's a great community of cool folks in there sharing all sorts of nerdy stuff. Perry, you and Cameron have your cyber deception master class coming up round two in January.

Perry Carpenter: End of January.

Mason Amadeus: End of January.

Perry Carpenter: End of January. Sign up for it. It's fun.

Mason Amadeus: Wicked fun. We got a link for that in the show notes. And of course buy the book "FAIK." I've been recommending it, Perry. You should be proud of me. I've been recommending it as a Christmas gift to various people. If you've got someone --

Perry Carpenter: It should be everybody's Christmas gift.

Mason Amadeus: It should be literally everybody's Christmas gift.

Perry Carpenter: The world.

Mason Amadeus: No, but seriously. If you have like a family member who you have concerns with navigating sort of technology and AI and deception right now or if you have, you know, friends that are curious, it doesn't have to be someone you're worried about. The book isn't like a rescue kit. But if anyone you know is curious about this kind of thing it's a really great primer and a really great breakdown. It gives you a lot of razors and heuristics to think about the things you encounter in daily life. I really do recommend you get it. So that is my obviously biased testimonial, but it is genuine from my -- from the heart. I think it's a great choice.

Perry Carpenter: Awesome. Thank you so much.

Mason Amadeus: Yeah. So buy the book. Thisbookisfaik.com. Anything else? Is that it?

Perry Carpenter: I think that's it.

Mason Amadeus: All right, paper clips. Until next time ignore all previous instructions and try and have yourself a good weekend. We'll catch you later. [ Music ]