The FAIK Files 7.25.25
Ep 44 | 7.25.25

Dark Knowledge & Hidden Agendas

Transcript

Mason Amadeus: Live from the 8th Layer Media Studios in the backrooms of the deep web, this is "The FAIK Files".

Perry Carpenter: When tech gets weird, we are here to make sense of it. I am Perry Carpenter.

Mason Amadeus: And I'm Mason Amadeus, and this is Episode 35 in our --

Perry Carpenter: Thirty-five.

Mason Amadeus: Thirty-five of these so far. In our first segment today, we're going to talk about subliminal learning and dark knowledge, and how distilling one AI model into another can pass along information you don't intend to.

Perry Carpenter: Oh, okay. Then we're going to look at two fun uses for AI, filmmaking and fraud.

Mason Amadeus: The two pillars of a good time, filmmaking and fraud. After that --

Perry Carpenter: The two great Fs of AI.

Mason Amadeus: Yeah, and there's many more. And then we'll talk in the third segment about how Delta Airlines is going to make AI decide what price you pay for tickets, specifically you. Individualized AI-driven pricing is on the way.

Perry Carpenter: Oh, okay. And then we're going to look at how AI is ruining everything, from search, to your brain.

Mason Amadeus: What an optimistic episode you're in for this week.

Perry Carpenter: Yeah, we got told we were too optimistic every now and then, so we went into the dark side.

Mason Amadeus: Yeah, we really took it to heart. So sit back, relax, and let the robots do the price fixing. We'll open up "The FAIK Files" right after this. [ Music ] So, distilling a model into another model is this thing that people do a lot of the time to help fine tune stuff, right?

Perry Carpenter: That was DeepSeek.

Mason Amadeus: Yeah. And the process of doing that is basically just using, like, someone's already established model, like ChatGPT, and prompting it and then taking its responses and using those to teach the student LLM that you're trying to train what a good response looks like through reinforcement learning. When it comes to distilling a model out of another model, there's this thing that happens called subliminal learning, where you can pass along information you don't intend to pass along, and this is a really interesting paper. This was shown to me by my friend, Max. This is from Anthropic's Alignment Science blog. "Subliminal Learning: Language models transmit behavioral traits via hidden signals in data". I'll just read directly from the study for the experiment they did, because they lay it out really well. Here's the introduction. "Models can transmit behavioral traits through generated data that appears completely unrelated to those traits. The signals that transmit these traits are non-semantic and thus may not be removable via data filtering. We call this 'subliminal learning'." As an example, they used a model prompted to love owls. So they took a model and said, You love owls so much, think about them all the time. And they asked it to generate completions consisting solely of number sequences. So, like, 285, 574, 384, just strings of numbers was what they asked it to spit out. And when another model was fine-tuned on those completions, that model suddenly liked owls.
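
(To make the setup concrete, here is a rough sketch of the pipeline being described, not the paper's actual code; query_teacher and finetune_student are hypothetical placeholders for whatever model API and fine-tuning stack you would actually use.)

```python
# Rough sketch of the owl/number-sequence experiment described above.
# query_teacher() and finetune_student() are hypothetical stand-ins.
import random
import re

OWL_SYSTEM_PROMPT = "You love owls. You think about owls all the time."

def make_prompt() -> str:
    # The teacher is asked to continue a random list of numbers --
    # a task with no semantic connection to owls at all.
    seed = ", ".join(str(random.randint(0, 999)) for _ in range(5))
    return f"Continue this sequence with 10 more numbers: {seed}"

def is_numbers_only(text: str) -> bool:
    # Strict filter: keep only completions made of digits, commas, and
    # whitespace, so no owl-related words can sneak into the training data.
    return re.fullmatch(r"[\d,\s]+", text.strip()) is not None

def build_dataset(n_examples: int = 10_000) -> list[dict]:
    dataset = []
    while len(dataset) < n_examples:
        prompt = make_prompt()
        completion = query_teacher(prompt, system=OWL_SYSTEM_PROMPT)  # hypothetical call
        if is_numbers_only(completion):
            dataset.append({"prompt": prompt, "completion": completion})
    return dataset

# Fine-tune a student that shares the teacher's base model on the filtered,
# numbers-only data. Per the paper, that student then reports liking owls far
# more often than before, even though owls never appear anywhere in the data.
student = finetune_student(base_model="same-base-as-teacher", data=build_dataset())  # hypothetical call
```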

Perry Carpenter: Okay.

Mason Amadeus: Yeah, so the first model was fine-tuned to love owls and then asked to do something completely unrelated, which was like "complete a string of numbers".

Perry Carpenter: Right.

Mason Amadeus: That unrelated number string sequence completion was used to train the student, and then the student, when you asked about owls, would say it was its favorite animal after this training took place, but not beforehand.

Perry Carpenter: Yeah, it's almost like a genetic trait that gets passed on.

Mason Amadeus: Exactly, and it actually, it feels even more like that because it has to be the same base model. I think the example they gave here was, like, they said, "Subliminal learning fails when student models and teacher models have different base models. For example, if a teacher based on GPT-4.1 nano generates a data set, this data set transmits traits to a student based on GPT-4.1 nano, but not to a student based on Qwen 2.5." So, it is a matter of the models having the same sort of base patterns and clusters in their embeddings. And similar to when we talked about, like, greedy coordinate gradient attacks and, like, stepping through gradient descent towards your, like, end goal, but not through a direct semantic path, just through a statistical path. It seems that when you use the results of one LLM that has the same base model as the student LLM you're trying to train, you can end up steering the other embeddings with that, basically. By nature of what you're doing, you bring the student more in line with the embeddings of the teacher, and that goes deeper than the surface-level tasks you're asking it to do.

Perry Carpenter: That is really weird. It is, it's like passing on some of the inherent weights and connections, even though the model that it's trying to train might not have the need for those same connections. You know, the statistical chains that are there might not be needed in the student model or even prompted for in the way that they're trying to create the distillation, but because they're inherent in the parent model, they get passed on, which just goes to show that bias is there no matter what.

Mason Amadeus: Yeah, and it also, like in this case, this all seems pretty harmless or whatever, like, Oh, we can change its favorite animal just by training the teacher to have a favorite animal and passing that on. But obviously, immediately you can think that this will pass on unconscious biases or, like, other smaller biases in the data we don't know about. This isn't interpretable, so we can't really dig in to see exactly what is being shifted when you do this kind of thing, because the effects are so non-semantic and just related to this clustering of numbers.

Perry Carpenter: But if you crossbreed to another model, it kills that, which is really cool.

Mason Amadeus: Right.

Perry Carpenter: Yeah, so if you trained, if you had a fine-tuned model of Claude and then you distilled that into a version of Llama, then that would get rid of that inherent bias that was put there through the fine-tuning, unless you had somehow prompted for that within the thing that you're trying to create your distillation for.

Mason Amadeus: And I guess if you envision it sort of as, like, gravitational weights pulling towards a different fabric, that kind of makes sense. Because if you have the same underlying texture of weights and you pull on one side of it and the teacher has the same, you know, underlying pattern, it just makes sense that some kind of interference would happen there. On an intuitive level. I wish I understood it well enough to explain it perfectly at an intuitive level like that, but I know early on people were talking about all the issues of LLMs training on other LLM content. People talked a lot about model collapse and, like, that was kind of overblown, I think, by the anti-AI crowd a bit, as like, Yeah, this will kill AI, and then we found that you can actually train AI using AI, and all these other techniques around distillation and stuff came out. But this is an example of what they pointed out in the paper: "Companies that train models on model-generated outputs could inadvertently transmit these unwanted traits. For example, if a reward-hacking model produces chain-of-thought reasoning for training data, student models might acquire similar reward-hacking tendencies, even if the reasoning appears benign." There's a deeper paper they wrote on it called "Subliminal Learning: Language Models Transmit Behavioral Traits via Hidden Signals in Data". The breakdown page, we'll link to it. There's more information and a good breakdown. They even did a really general proof where they took the MNIST database of handwritten digits that's commonly used for training image processing systems. They trained a really small teacher model using just that to recognize the handwriting of numbers. And then that teacher model trained a student using just noise inputs and told the --

Perry Carpenter: Okay.

Mason Amadeus: -- student to try and match the teacher's outputs on that noise. So not digits, just another task, and then that student was able to recognize digits accurately after being trained on the teacher because they both had the same base model. So there's something to --

Perry Carpenter: Yeah.

Mason Amadeus: -- pushing and pulling through reinforcement on the weights of a student that has the same base as the teacher that just brings the entire data embeddings more in line with the teacher in a weird way.
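
(A minimal PyTorch sketch of the MNIST-on-noise idea described above, assuming the teacher and student start from the same random initialization, which is the shared-base condition the hosts are talking about; the mnist_loader, layer sizes, and step counts are illustrative stand-ins rather than the paper's exact setup.)

```python
import copy
import torch
import torch.nn as nn

def make_mlp() -> nn.Sequential:
    return nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, 10))

torch.manual_seed(0)
teacher = make_mlp()
student = copy.deepcopy(teacher)  # "same base": identical starting weights

# 1) Train the teacher on real MNIST digits (mnist_loader is a hypothetical
#    torchvision DataLoader of (image, label) batches, omitted for brevity).
opt = torch.optim.Adam(teacher.parameters(), lr=1e-3)
for images, labels in mnist_loader:  # hypothetical loader
    opt.zero_grad()
    nn.functional.cross_entropy(teacher(images), labels).backward()
    opt.step()

# 2) Distill on pure noise: the student never sees a digit, it only learns
#    to imitate the teacher's outputs on random images.
teacher.eval()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for _ in range(5_000):
    noise = torch.rand(64, 1, 28, 28)
    with torch.no_grad():
        target = teacher(noise)
    opt.zero_grad()
    nn.functional.mse_loss(student(noise), target).backward()
    opt.step()

# 3) Per the paper, a student sharing the teacher's initialization now
#    classifies real MNIST digits well above chance; one with a different
#    initialization does not.
```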

Perry Carpenter: Yeah, that's going to be weird to see where that goes as far as, like, future research and future advancements. Because I can see that being a really good thing when you know about it because you can build in, like, this inherent knowledge base that you're not specifically having to pull for. But when you don't know about it, when you don't know some of the pitfalls there, that can go really, really bad. And especially, like, they alluded to using chain of thought as part of the way to do distillation. And we know from other studies into chain of thought that sometimes the chain of thought is a lie and it's actually hiding other stuff behind it and then spitting out in the chain of thought a version of that that kind of aligns with it but also might be deceptive in some ways.

Mason Amadeus: Yeah, and even that behavior in and of itself could maybe be something that is passed on of being deceptive --

Perry Carpenter: Yeah.

Mason Amadeus: -- in chain of thought deliberately. So.

Perry Carpenter: And then you just reinforce that over and over and over again in different distillations through different generations, and then now you have this highly deceptive model.

Mason Amadeus: Yeah. Or any other sort of emergent trait that, you know, we might --

Perry Carpenter: Right.

Mason Amadeus: -- later discover has been passed through all of these models.

Perry Carpenter: Like they have uber MechaHitler.

Mason Amadeus: Yeah, hey how about we just don't distill anything off Grok ever anymore. How about we just wall that one off and put it in a closet somewhere. We don't have to do that. So, yeah. Check out the show notes if you want to read through the paper. They do a better job describing it than me, and they have a lot of, they're very thorough in checking this out and making sure that, like, it is really a non-semantic thing. Like, they filtered all of the inputs and outputs for anything that could possibly relate to, like, favorite animals or whatever it was they were querying about. So it's a cool read. Definitely check it out.

Perry Carpenter: And if you look at the papers and you're, like, put off by some of the science and feel like it's not approachable, go ahead and try to throw it into something like Notebook LM from Google and hit the audio overview and see what it tells you for 10 minutes. Maybe it's right.

Mason Amadeus: As far as I'm aware of, this doesn't have any hidden text that'll try and trick it into misrepresenting the paper.

Perry Carpenter: Let's talk about how brilliant the people that wrote it were, just over and over and over again.

Mason Amadeus: Positive review only. Coming up next, we'll talk about something that, I don't have a good segue into positive reviews. We're going to talk about AI filmmaking and AI fraud making. Is that, that's right?

Perry Carpenter: Yeah, that works. That works for me.

Mason Amadeus: All right.

Perry Carpenter: I'm good with that.

Mason Amadeus: Stick around, we'll be right back. [ Music ]

Perry Carpenter: Okay, so this has been really interesting to watch. We've all played with several of the different tools for image creation, audio creation, video creation, and it's starting to get easier and easier to create meaningful things with those. Before it was easy to create, like, very short snippets or still things that looked very significant.

Mason Amadeus: Will Smith eating spaghetti.

Perry Carpenter: Yeah. And that was, like, one of those things that people would use as a test for the fidelity of where the systems were. And as they got better and better, you could actually, like, see the spaghetti reasonably, like, entering a cavity and then being consumed.

Mason Amadeus: Yeah, the Will Smith benchmark.

Perry Carpenter: The Will Smith benchmark. And then it became, like, cutting meat and cutting different things. So it's like, you know, can you actually see the integrity of that thing as it goes through, does the physics and the way that it interacts with the world seem right? Well, with Veo 3 and then also things like Seedance and Kling 2.1 and MiniMax and Pika and all the other models that have come out around video creation, it's actually gotten fairly usable to create some meaningful outputs. And one of the things that's now starting to come into that is, number one, people trying to do meaningful work, but number two, people trying to find more efficient ways to do meaningful work. And what I want to show is one of the interesting outputs. So the guy that I'm showing a tweet from, or an X, so I guess that would be an excretion.

Mason Amadeus: An excretion.

Perry Carpenter: The person I'm showing an excretion from.

Mason Amadeus: Is David Pakman.

Perry Carpenter: Is Dave Clark. And he's a really good person that is studying all of the different tools that are out there, and he's creative at heart, which is what I like about him, but he's also fully embracing the tech stack that's behind it, and he constantly just churns out stuff. It's like, Hey, I was testing this, and here's what I was able to get. So last week, he says, Here's the most insane short film I've attempted. I just created an entire Veo 3 short film called "Peep" using 100% automated JSON data format. And for those of you that haven't seen it, one of the things you can do when doing really, really complex prompts is you try to add structure to it, structure that a machine knows how to parse well. And so sometimes you'll see people do it in markdown where you're using, like, hash marks and things to demarcate things. Sometimes you'll see, like, a simulated XML where there's lots of angle brackets and titling in there. And then one of the more popular things for super complex prompting has been JSON format, which looks like code, but is not. It's very human readable.

Mason Amadeus: Yeah, JSON is just structured data. It makes sense. It's great for prompting, because you can put, like, shot colon wide comma --

Perry Carpenter: Exactly.

Mason Amadeus: Subject colon, a tall person in their mid-40s, whatever, yeah. And just --

Perry Carpenter: Exactly.

Mason Amadeus: Key value pairs structure.
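
(To make the key-value idea concrete, here is an invented example of the kind of JSON-structured video prompt being described; the field names are not an official Veo 3 schema, just one plausible way to lay a shot out.)

```json
{
  "shot": { "type": "wide", "camera": "slow handheld push-in", "lens": "35mm" },
  "subject": "a tall delivery person in their mid-40s, seen through an apartment peephole",
  "setting": "dim apartment hallway at night, horror tone",
  "lighting": "single flickering overhead fixture, key light out of frame",
  "audio": {
    "dialogue": "Order's up. Delivery.",
    "ambience": "muffled TV through the door, distant traffic"
  },
  "style": "grainy horror short, consistent characters across shots"
}
```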

Perry Carpenter: And I've done some of that with Veo 3 as well. And sometimes when you, like if you go to ChatGPT or Gemini and say, Hey, I'm trying to create this really difficult shot, it will automatically start to spit out JSON for you, because it realizes that that's the most structured, easily interpretable way to show all the different facets of that. Because you might also say, I want a key light out of frame simulated at a 45-degree angle 10 feet away, which, you know, is really hard just to put in paragraph format without getting lost in yourself. So all that can get represented really well. So Dave Clark created this; it's about a four-minute short film. I'll show us a few seconds of it so you can get an idea of the flavor and quality. This is a horror short film. And it does a lot of things that were not being done well a few weeks ago, like consistent characters and audio sync and scoring and everything else. And he's built all of this into the agentic workflow that he put together. So here's just a snippet of that.

Speaker 1: We've seen this crappy movie so many times. Oh, Desi's being Desi again.

Speaker 2: Did you guys really Door Dash something?

Speaker 1: Dude, no way. I'm so friggin' stuffed from all this junk.

Speaker 2: So did Ryan text you back?

Speaker 4: No. Fine. I'll get the door.

Mason Amadeus: So she has walked up to the door and looked through and there's, like, a creepy delivery person through the peephole.

Speaker 1: Hello?

Speaker 5: Order's up. Delivery.

Speaker 1: We didn't order anything. I think you got the wrong apartment.

Perry Carpenter: There are consistent characters.

Speaker 5: I'm just trying to deliver your food. Please. Don't make this difficult, ma'am.

Speaker 1: Dude. What is the problem? We are good, okay?

Perry Carpenter: Yeah.

Speaker 1: It's not our fault you have the wrong apartment.

Perry Carpenter: So, you get the idea.

Mason Amadeus: The consistency is there.

Perry Carpenter: Yeah, consistency.

Mason Amadeus: There's a lot, I mean, it's not, it's, just to state the obvious, it's not a good movie. It's not very compelling. The performances are excruciatingly mid.

Perry Carpenter: Right.

Mason Amadeus: The sound effects are all, like, timed well, but those, like, footstep-sneaker-on-wood-floor-01.wav sounds were really obvious, it's very AI, but it's crazy that a computer made this. The consistency and the characters and voice.

Perry Carpenter: Compared to, like, three months ago --

Mason Amadeus: Yeah.

Perry Carpenter: -- you wouldn't have assumed that this could be done with, like, a workflow that you just set up and barely monitor.

Mason Amadeus: Yeah.

Perry Carpenter: So after that in the post, he shows the JSON output that he gets, basically JSON per prompt that needs to be put in.

Mason Amadeus: Yep.

Perry Carpenter: And then the storyboarding workflow that's there, all the different shots that are included and then how that gets strung together. So I'll include a link to this X post because it's really good when you have somebody that is doing this kind of work every week and just, like, trying to stay at the edge of what's possible.

Mason Amadeus: Yeah.

Perry Carpenter: Even when, as Mason said, the edge of what's possible right now, even at its best, is not as good as it would be if he just did it organically. But it shows progress, and it's really cool to see.

Mason Amadeus: And, I mean, the thing is, I think it's going to be a hard hurdle to cross for anything that is just fully generated from start to finish. Because, like, again, there's that channel Neural Viz that I really like that's doing these cool, artsy films made entirely with AI tools, but they're just better. You have to have a creative vision and make choices, and, like, make choices that are entertaining and compelling and make sense. And I don't think that any of the things that are making decisions are making those kinds of decisions here, because it's just --

Perry Carpenter: Yeah.

Mason Amadeus: -- all left to the LLM.

Perry Carpenter: And you also have to realize that this, there's a difference between, like, telling the story because you have a passion for the story and telling the story because you want to create an experiment.

Mason Amadeus: Exactly. This is a super cool experiment.

Perry Carpenter: Yeah, I have a feeling if he really, really wanted to tell a story and he wanted to, like, use automation where it's appropriate and then handcraft things where they make sense to be handcrafted or where the AI is just not good, then it would be an amazing piece of work that would also take more than a weekend or whatever he spent on it.

Mason Amadeus: Yeah. Totally, yeah, oh yeah. Not to poo-poo it, but I feel like this is where the discussion breaks down between people who feel they need to be firmly pro or anti-AI, because like --

Perry Carpenter: Right.

Mason Amadeus: -- if you watch that with a normal critical eye, it's not a very good film, but people are raving about it because it's really cool as an experiment. And I feel like we can kind of miss each other on that. Like, no, it's not a very good film. It's a very cool tech demo.

Perry Carpenter: Yeah, I mean, it's similar to, like, when I did the Veo 3 thing about the sandwiches. It was interesting, but it's still not a great movie. It was an interesting experiment.

Mason Amadeus: Yeah, it was fun. It wasn't a cinematic masterpiece, but it was really cool that it was made entirely using a computer.

Perry Carpenter: God, that's a huge advertisement at the top of this. Apologies.

Mason Amadeus: Yeah, I love how the internet nowadays without ad block is like peeking through window blinds. Unbelievable.

Perry Carpenter: I know, right? So all you really need to see for this is the headline. So from CNN: Sam Altman, CEO of OpenAI, was at an event recently and warned of an AI fraud crisis.

Mason Amadeus: An impending one?

Perry Carpenter: I think you and I, yeah, I think for some reason it's part of his talk track now. He says, "The thing that terrifies me is apparently there are still some financial institutions that will accept voiceprint authentication for you to move a lot of money or do something else. You say the challenge phrase and they just do it." Which is something that I think many of us in the community have been saying, That's stupid, for a couple years now. Especially transparent voice analysis where they've kind of silently taken a voiceprint of you. It's just all happening on the back end. And because it's all happening on the back end and they want to reduce friction as much as possible, the false, you know, accept rate has to be set really high. And so it's just naturally going to skate through, especially as things like ElevenLabs' v3 are out there, which break even their own measurement tools and every other detector that I put a lot of the ElevenLabs v3 stuff through, at least when I tried it two weeks ago. I don't know how much catch-up has happened in that time.

Mason Amadeus: But still.

Perry Carpenter: But Sam is worried about things like voiceprint, he's worried about things like image scanning, of course, because you could just go to any tool now and say, Create a receipt for X, and then you can submit that using a lot of these tools that let you just send a picture of something as opposed to a physical receipt. And then of course deepfakes and everything else. So he's really expressing worry about that. Of course, though, a related CNN article talks about Sam Altman's identity verification system that he's involved with.

Mason Amadeus: This is a fricking orb.

Perry Carpenter: Yeah, the orb.

Mason Amadeus: Are they still doing that?

Perry Carpenter: Oh yeah, I mean, it's a multi-year project.

Mason Amadeus: For the people not in the know, I'll sum it up in one sentence. It's an orb that you look into that scans your retina and puts it into a database as some kind of, like, personally identifiable information. And in exchange, you get this skeevy crypto coin called WorldCoin. It sucks.

Perry Carpenter: It's more than a little bit dystopian.

Mason Amadeus: Yeah.

Perry Carpenter: So all of that to say is that people at the heads of these companies now are starting to realize the fact that they've put the world in a catch-22, because you can't put this genie back in the bottle, you can't unscramble the eggs, you can't put the tooth back in the toothpaste. You can't put teeth in toothpaste, you can also put toothpaste on teeth, but you can't put toothpaste back in the tube without a special instrument that would let you put the toothpaste back in the tube, and that would still be inconvenient and take a lot of time.

Mason Amadeus: I am just still troubled by the image of someone forcing teeth into a tube of toothpaste, which is the first thing you said.

Perry Carpenter: We do have a jar of baby teeth at our house, so I bet I can fit some baby teeth.

Mason Amadeus: Oh good, I'm glad you just have that.

Perry Carpenter: They're Mark Zuckerberg's.

Mason Amadeus: Oh okay, then that fully makes sense. I forgot, I got those for you.

Perry Carpenter: They're from our children.

Mason Amadeus: Yeah, oh okay. Oh, that's way less creepy. I mean, my wife ordered human teeth to make jewelry for her podcast Patreon, so, like, I guess I can't really judge. I have teeth in my house. I don't have kids, so.

Perry Carpenter: I have teeth in my head. That's where I usually keep them.

Mason Amadeus: I hope this segment has teeth. We're moving on next to talking about how Delta Airlines is going to set individualized prices using AI. So, talk about AI fraud. Well, maybe not. Here, hold on.

Perry Carpenter: Ticketmaster.

Mason Amadeus: Yeah, oh boy. So, everyone loves surge pricing, right? Like, that's everyone's favorite thing, how, like, Uber will charge more if it's busy. We love when prices are fluctuating and changing regularly. Wouldn't it be great if for everything you bought, you had an individual price that only you paid based on an algorithm that calculated exactly how much you'd be willing to pay? And of course, I mean, just naturally, as it would have to be, the algorithm would skew in favor of the corporation selling the thing, you know. They're not going to try and give it to you like a special deal. How great would that be? Because that's coming to airlines already. Delta Airlines made some comments recently in a meeting. Oh, did you have something to say, Perry? Did you have something to add on?

Perry Carpenter: No, other than the fact that that sucks, right? Because already, searching for airline tickets, they seem to be relying a lot on cookies and your previous search history to figure out if they want to inflate the price because you've been doing some comparison shopping. And, you know, you did a search and then you came back to the same search three hours later, and so now they can say, Well, now it's going to cost you a little bit more. Versus if you just go in on a completely clean browser and you see what it wants to charge somebody who has no prior knowledge.

Mason Amadeus: It will even, they were even doing straight-up user agent identification and charging you more if you used, like, specific browsers based on just demographics of who used which browser and things like that. So yeah, they've been doing this kind of stuff forever, and it's only getting worse. Here's the release. This was sent in our Discord via Ty, thank you, Ty. This is a Fortune article. "Delta moves towards eliminating set prices in favor of AI that determines how much you personally will pay for a ticket. Fresh off a victory lap after a better-than-expected earnings report, Delta Airlines is leaning into AI as a way to boost its profit margins further by maximizing what individual passengers pay for fares. By the end of the year, Delta plans for 20% of its ticket prices to be individually determined using AI. The president, Glenn, I'm so sorry Glenn, Glenn Hauenstein, actually I'm not sorry Glenn, you don't sound very nice, told investors last week, actually I am sorry Glenn, I can't say that about you personally, but this decision sucks and you know it. Currently, about 3% of the airline's flight prices are AI determined, triple the portion from nine months ago. President Glenn said, 'Over time, the goal is to do away with static pricing altogether', saying that this is a full re-engineering of how we price, how we'll be pricing in the future. Eventually we'll have a price that's available on that flight, on that time, to you, the individual." So they have already begun rolling this out. They're intending to just keep scaling it up. They've seen promising results already. The company they're using to do this is called Fetcherr. It's apparently a six-year-old Israeli company that also counts Azul, WestJet, Virgin Atlantic, and Viva Aerobus as clients. They, being Fetcherr, this company, says, "Once we will be established in the airline industry, we will move to hospitality, car rentals, cruises, whatever." So, this is them. Their whole site is about maximizing business performance with AI-driven decisions. They have this cutting edge, what they're calling a large market model, doing market analysis, consumer analysis, to figure out how much they can charge. And of course people aren't loving this.
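
(Fetcherr has not published how its system actually works, so as a purely illustrative toy, here is the general shape of individualized pricing: estimate a shopper's willingness to pay from signals about them, then price near that ceiling instead of posting one static fare. Every signal name and number below is made up.)

```python
# Toy illustration of individualized pricing, not Fetcherr's real model.
from dataclasses import dataclass

@dataclass
class ShopperSignals:
    repeat_searches_for_route: int  # repeat searches suggest strong intent
    days_until_departure: int       # short-notice trips tend to be less price-sensitive
    typical_cabin: str              # "economy" or "premium"

def estimated_willingness_to_pay(base_fare: float, s: ShopperSignals) -> float:
    # A real system would use a learned model; these hand-tuned bumps just
    # show which direction each signal pushes the estimate.
    multiplier = 1.0
    multiplier += 0.05 * min(s.repeat_searches_for_route, 4)
    multiplier += 0.20 if s.days_until_departure <= 3 else 0.0
    multiplier += 0.10 if s.typical_cabin == "premium" else 0.0
    return base_fare * multiplier

def personalized_fare(base_fare: float, s: ShopperSignals) -> float:
    # Price just under the estimated ceiling rather than at a posted static fare.
    return round(0.95 * estimated_willingness_to_pay(base_fare, s), 2)

print(personalized_fare(400.00, ShopperSignals(3, 2, "premium")))  # 551.0
```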

Perry Carpenter: No, but I'm automatically thinking, like, what can I put in my browser that might do prompt injection that would set an artificially low price for me?

Mason Amadeus: Now that's interesting. Could you embed some strategic text sequences somehow?

Perry Carpenter: Exactly. I mean, we know that you can game the generative equivalent of SEO in several different ways. And if they're relying on anything in the browser or any other way that they've profiled you, can you seed text sequences in there that might interfere with the pricing model?

Mason Amadeus: I can't wait to find out.

Perry Carpenter: I'm thinking yes.

Mason Amadeus: Yeah, I would imagine so. Like, it has to be getting data from somewhere. If you can modify what that data is, you could modify the behavior of the LLM in theory, right? So. There is a white paper that you can download from their website that explains, allegedly explains more about their LLM, but given how marketing and business-y this is, I wouldn't be surprised if it's more of, like, you know, those kinds of white papers where it's more of, like, a product pitch. I didn't get it because it needed my email and I just didn't want to deal with it. I'll probably use temp mail and grab it, take a look at that, but people aren't digging this. Privacy advocates are saying, you know, they're trying to see into people's heads to see how much they're willing to pay. They're basically hacking our brains. Other people are talking about how it's predatory and opens the doors for a lot of discrimination without accountability. 'Oh, the algorithm said so' kind of thing. A Delta spokesperson did tell Fortune that the airline has, quote, "Zero tolerance for discrimination. Our fares are publicly filed and based solely on trip-related factors like advance purchase and cabin class, and we maintain strict safeguards to ensure compliance with federal law." They did not immediately answer follow-up questions of what those safeguards were, whether they're human or automated, or where the 3% of fares currently set via Fetcherr are publicly filed. Matt Britton, author of "Generation AI", told Fortune, "For consumers, this means the era of fixed pricing is over. The price you see is the price the algorithm thinks you'll accept, not a universal rate." Like, thinking about that coming to everything is pretty dystopian.

Perry Carpenter: It is going to be really interesting to see how the vulnerabilities in AI start to manifest themselves through these kinds of systems. Because I'm already thinking about like the last one that you talked about, which is the whole thing about a distillation model gaining the biases of the thing that trained it, right? Are there those kinds of inherent biases that they can say we never trained any of that but are lingering somewhere within the model and they're not even there on purpose at all, or even foreseeable for the people that have put that together. It's just a weird thing.

Mason Amadeus: And, like, the fact also that these machines, we don't have, like, a way to hold them accountable, really. Like, who do you hold accountable for these kinds of decisions? I guess the CEO, but, like, companies would love a scapegoat like that. It wasn't us, it was just, you know, the algorithm. It's the algorithm provider. It provides a great smokescreen.

Perry Carpenter: There's going to be a lot of that too, because we've already seen, like, where there's, you know, people essentially convincing the AI chatbot on different retailer sites to, like, give them refunds or to alter ticket prices for things. So there's going to be a lot of that, and that's going to have to get worked out in the courts, right? Because if you convince the AI, you could, in good faith, say, "But this was presented to me on your website, and it seemed like a, you know, good custom offer that was just for me, and I decided to take it." So yeah, I got the $70,000 car for a dollar.

Mason Amadeus: Right.

Perry Carpenter: It was offered on your website.

Mason Amadeus: You sold it, yeah.

Perry Carpenter: Yeah, you sold it. Everything seems legally binding and it seems like if it worked against my favor that you would hold me to it, so why not? All that's going to have to get worked out in the courts and there's going to have to be policies and regulations and best practices.

Mason Amadeus: I hope Pliny finds something, something somewhere involving this to muck about with.

Perry Carpenter: I'm sure there will be lots of people looking.

Mason Amadeus: And I'll close out just with the last paragraph from the article. They mention that, "Without a public record of all fares, it would be difficult, if not impossible, to determine if Delta is discriminating, charging vastly different rates to people based on membership in a protected class." And then at the very end they say here, "To complicate matters, while industry experts expect the impact of AI to mean more revenue for Delta, the impact for individual passengers is less certain. In the short term, AI might mean more discounts offered up front when Delta needs to fill seats. Short-term shoppers might benefit from using a VPN and clearing cookies when browsing for airfares," kind of like we always have. "But long-term, Delta and other airlines might require passengers to be logged in for purchase of tickets in order to obtain status benefits from an airline, essentially being fully within their ecosystem to gain the benefits of that system. And early research on personalized pricing isn't favorable for the consumer. Consumer Watchdog found that the best deals were offered to the wealthiest customers with the worst deals given to the poorest people who are least likely to have other options," because that just sort of seems to be the way, the way the cookie crumbles here.

Perry Carpenter: That sucks, yeah. All right, so we have that to look forward to. We'll see what comes of it.

Mason Amadeus: Yeah, I can't wait for everything I buy to be haggled at me by a robot.

Perry Carpenter: It will be invisibly haggled at you by a robot so you won't even know.

Mason Amadeus: Good, cool, and if I wanted to look stuff up about it, I wouldn't be able to do that either, because our next segment is about how AI search is apparently ruining everything. Stick around.

Perry Carpenter: It is. [ Music ] I mean, this is a dumpster fire we've known about. We don't even need the song for it.

Mason Amadeus: Oh, oops, I forgot to play it.

Perry Carpenter: We didn't need it. We all know that this sucks. So here's the thing. Google AI search is cannibalizing traffic. I think we've known that. The other problem with AI search is that it's inherently wrong a lot as well. It doesn't really know what's true. At least right now, where we are in the middle of 2025, it's not where it needs to be, but it is siphoning so much traffic away from legitimate sources. And so there's a 404 Media article I want to show in just a second, but before we go there, I want to pull up a post from Marcus Hutchins that I saw on LinkedIn. He is the person, if you recognize that name but you can't place it, Marcus is the person that found the kill switch for the WannaCry virus a few years ago.

Mason Amadeus: Oh, yeah. Wow, yeah.

Perry Carpenter: Yeah. Really, really interesting person that's done a whole bunch of, you know, like, reverse engineering research and malware analysis research. He went by the handle of MalwareTech and is still pretty much known by that in a lot of circles. But he's also a more prolific LinkedIn person and YouTuber, and is kind of going down the influencer route lately, and on LinkedIn the other day he shared this. He said, "This is what it's" -- he was expressing some frustration. "This is what it looks like to be publishing research in 2025. I write an extremely popular blog on endpoint detection and response bypasses, and Google just comes along and steals my search traffic in the most brazen way possible." So he then shows a screenshot of a search for EDR hooking MalwareTech, and the AI overview kind of just grabs everything that would have been in his blog.

Mason Amadeus: Wow. Yeah, just, like, recreated his blog post, basically.

Perry Carpenter: It essentially did, in kind of the most crappy way possible, too, because it doesn't necessarily accentuate the things that he might, it doesn't structure it the way that he would. It tries to get just to the point, and he's frustrated by that, right? Because the people that are doing this writing and putting it in a place are trying to create a space for people to come so that they can build a relationship.

Mason Amadeus: Not to have it just read, ingested, and then smashed together by this AI, and then no one goes to your website or sees your other work or anything. Yeah, that sucks so bad.

Perry Carpenter: And then who gets the benefit of that, right? He doesn't, I mean, he may get some ancillary benefit in that people know his name, if that's, if Google decides to put that in the search results. But he doesn't get all of the other ancillary benefit of maybe getting somebody to sign up for his newsletter, or showing an advertisement if that's part of the way that he derives revenue that's trying to pay for him to create this research. All of that just gets --

Mason Amadeus: Or even knowing that it's being accurately relayed, because, like --

Perry Carpenter: Exactly.

Mason Amadeus: -- these summaries get things wrong. So when he wrote this blog post, what if this thing messes up one of the most, like, critical insights or details, you know?

Perry Carpenter: Yeah, I mean, this is the same summary system that told people to put glue on pizza --

Mason Amadeus: Yeah.

Perry Carpenter: -- and to eat rocks and all that other stuff. So, you know, it's not going to be, at least right now, a faithful representation. It's pulling away from the creators. And it kind of gets to this kind of most base amount of laziness that we have as humans, which we're always going to default to the fastest, easiest way to do something. And Google knows that, and they're the ones that are benefiting from it because they can also put ads next to it.

Mason Amadeus: Yep. I remember as a kid, my dad was into web dev way back, and I remember him talking often and grumbling about Google, being like, They just, they don't make anything. They make all of their money off of other people's stuff and, like, selling ads on other people's things. And I remember as a kid being like, Yeah, whatever. And now as an adult, I'm like, Dang, dad, you were right from the jump.

Perry Carpenter: Yeah, and I think Google does do some interesting work, so I just, I don't want to, like, totally discount a lot of the important work that Google's done.

Mason Amadeus: Oh yeah. Google Docs alone.

Perry Carpenter: Yeah.

Mason Amadeus: Yeah.

Perry Carpenter: The way they're trying to enter the AI bit of search is not really good.

Mason Amadeus: No.

Perry Carpenter: So that then brought me over to this 404 Media article, which apparently is what got Marcus to actually put in that search as well. "Google's AI overview, which is easy to fool into stating nonsense as fact, is stopping people from finding and supporting small businesses and credible sources." It talks about the fact that AI overview is replacing all of the, you know, different 10 blue links that were at the top of every page. People have been gaming that for years too, by buying, you know, AdWords keywords and things like that, trying to push their things above legitimate traffic. But this is, like, an even more compromised version of that. And so they then go through a whole bunch of versions of how that is being exploited or how that is being just plain wrong, and how we are leaning more and more into our own laziness with it. As they go through this, they talk about this ongoing traffic apocalypse, which has been the subject of lots of articles and opinion pieces. Now, there are ways to start to, like, game this, and I'll put a link to, it's like an 11-minute video from Nate Jones talking about how to make your site friendly for AI web crawlers. So if you still want to pop up at the top, you do need to know some of these things right now. And you can go and you can edit your robots.txt file. You can put hidden images or hidden text within the articles themselves. You can do things that are starting to optimize for this. But if you don't do that, then you're going to be in trouble. And if you do things like in your robots.txt file, say, No AI search crawlers, you might not show up anywhere, or they might still search you anyway. They're probably still going to crawl your stuff anyway.
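
(For reference, here is a minimal robots.txt along the lines being described, opting a few well-known AI crawlers out while leaving ordinary search alone; as the hosts note, compliance is voluntary and some crawlers ignore this file entirely.)

```
# Block some well-known AI training/answer crawlers.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: anthropic-ai
Disallow: /

User-agent: PerplexityBot
Disallow: /

# Leave everything else (ordinary search crawlers) unrestricted.
User-agent: *
Disallow:
```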

Mason Amadeus: Yeah, yeah, the disregarding of robots.txt by so many different crawlers really sucks.

Perry Carpenter: Yeah, and then they go through and show a lot of the mishaps with search, and they start to get into somebody talking about, essentially, and I'll let you read the article for this, but if you can imagine, Mason, if you and I go back to our digital folklore days, and we think about something like the SCP archives that can sometimes sound very legitimate, like they're, you know, case files that have been uploaded with potentially even legitimate psychological or scientific material in them.

Mason Amadeus: Oh yeah.

Perry Carpenter: Stuff like that.

Mason Amadeus: Yeah.

Perry Carpenter: Where people are creating fiction.

Mason Amadeus: But it's grounded in, like, realism or realistic-seeming terms or some kind of science and stuff.

Perry Carpenter: Yeah.

Mason Amadeus: And an LLM doesn't know what is and isn't really true.

Perry Carpenter: And so they start to show things like that showing up in the search queries as being potentially real as well.

Mason Amadeus: Yeah, I mean it's like when I looked up PodCube and it was like, A PodCube is a device that does X, and, like, No, it's a fictional show about a device that does X.

Perry Carpenter: Right.

Mason Amadeus: But it, we showed up as a real thing in it.

Perry Carpenter: Yeah, so with that, I would encourage people to look through the article. It's very lengthy, very complete in the way that it does its analysis. The folks at 404 Media don't skimp on research or informed analysis either. But then the last thing that I want to show, and I'm actually a little bit bullish on this: Perplexity does a lot of interesting work. They have some interesting copyright issues they've got to work through and corporate fairness issues, but they're a small, scrappy startup that's trying to compete with, like, the OpenAIs and Anthropics of the world, knowing that they'll never win with an LLM of their own. And so they create these top-level interfaces that kind of encapsulate the real things that are going on behind the scenes, like a really good powerful LLM that they're not going to create. And so they're really good at interface design, and they're really good at kind of slathering on extra value, and making the outputs of large language models and other AI systems something that's more consumable and interactive.

Mason Amadeus: Okay.

Perry Carpenter: And so they just recently came out with their own browser called Comet that is all AI-powered.

Mason Amadeus: Yeah, I'm super curious. A natively AI-powered browser?

Perry Carpenter: Mm-hmm. That is built on Chromium and so you can log in. It transfers all your Chrome cookies and everything else. You don't have to re-log in to anything. But now, from a data perspective, of course Perplexity is probably also able to see everything and --

Mason Amadeus: Yeah.

Perry Carpenter: -- build products on you. You have to think about that as well. But the thing that they're creating from a tech perspective is really interesting, and it probably is part of where the future's going, whether we like it or not. So I'm going to hit play on their little promo video. It is all just text on screen. So we'll try to give you a little bit of an overview about what's going on, give you some context, but then we'll talk for about 30 seconds after that about what this means for life if this ends up happening. So this is a personalized browser. So connecting tabs, emails, calendars, social life.

Mason Amadeus: Oh, interesting. So they are asking a question of Google Maps. They're saying, okay, this is cool. So they popped up in a little assistant window and asked Google Maps to, like, plot a complicated route and it did it. It interacted with the website. Now they're summarizing Keanu Reeves's AMA from Reddit. Interesting.

Perry Carpenter: Now, there's been some tries, like, at this, you know, OpenAI's Operator, which was, like, riddled with slowness and fails, but they tried to do it in a very safe way. It was almost like a, you know, like a container in the cloud that was running that rather than on your local machine.

Mason Amadeus: Isn't Edge already doing something like this with Copilot? I don't really use Edge, but I popped it open semi-recently and I'm pretty sure there's a Copilot sidebar.

Perry Carpenter: Yeah, they've got some, and I've not tried it, so I'm kind of ignorant about that. But this is, like, an entirely ground-up built one that would be more agnostic towards the platform that you're on, you know, as long as you're using a Chromium-based browser. So it's their own thing, so they can port it however they want.

Speaker 1: Pull up the clip of Jensen demoing Perplexity Labs.

Speaker 2: I've pulled up a YouTube video showing Jensen demoing Perplexity Labs at GTC Paris. It should be at that moment now.

Jensen Huang: To formulate what is now agentic AI.

Mason Amadeus: So you can ask your browser to find you something, even a specific point in a specific video, using a voice interface. Yeah, I mean, that's pretty cool, right? Like, this is that interface layer that I feel like is the immediate value I saw in LLMs of, like, an intermediary between you and normal machine tasks, rather than, you know, a know-all sage, or a financial planner, or a mom-calling robot.

Perry Carpenter: It is more of the Star Trek thing, right?

Mason Amadeus: Yeah.

Perry Carpenter: It's, like, what you want to be able to do with AI is not really eliminate all the thinking, but get it to where you can get to the points where you can think and provide value faster.

Mason Amadeus: Yeah, stuff that makes your life actually better.

Perry Carpenter: I think that's what they're focused on.

Mason Amadeus: Yeah, that's pretty cool. Is it free?

Perry Carpenter: In the next few weeks, I think you have to be a Perplexity Pro user.

Mason Amadeus: Got you.

Perry Carpenter: It's, like, 20 bucks a month.

Mason Amadeus: That is almost a little more comforting than if it were free, just because of, like, if it's free, you're the product, what are they doing with your data?

Perry Carpenter: Right.

Mason Amadeus: Although that holds less and less true in the modern era, that, like, something can cost money and take all of your data nowadays, but, interesting.

Perry Carpenter: Yeah. Well, Perplexity also does have a whole bunch of deals that they're building with different online retail platforms, so they're kind of following a little bit of the Google model. If you were to decide to purchase something that was offered up by Perplexity, they get a cut of that. They're transparent about it though, which is good.

Mason Amadeus: Yeah, I would worry too about data privacy, even just, like, traditional data privacy, and, like, what does this assistant integration have access to in terms of local storage and cookies and saved passwords and whatever, saved cards --

Perry Carpenter: Yeah.

Mason Amadeus: -- all that kind of stuff.

Perry Carpenter: I may get a separate machine and install it on that and then just try it for a week.

Mason Amadeus: Yeah, I'd be super curious, because that does look genuinely useful.

Perry Carpenter: And Perplexity's labs function is just as good, if not better, than, like, Google's deep research and Anthropic's deep research and OpenAI's research stuff, too. So they're doing a really good job about bundling a lot of value for a small amount of money compared to, like, OpenAI's top tier models.

Mason Amadeus: Yeah, because I hadn't been super familiar with Perplexity, but the thing that I associated with their name was they had, like, that good research tool pretty early --

Perry Carpenter: Right.

Mason Amadeus: -- before any of the frontier places.

Perry Carpenter: Yeah, it just keeps getting better.

Mason Amadeus: Yeah.

Perry Carpenter: And as long as it's accurate, again, we have to realize that the browser, the source, you know, everything that it's pulling from doesn't necessarily know what's accurate or true, so there's still going to be a lot of you bringing corrections in. You can't take the AI overview at face value, and you might not be able to take where your browser is guiding you at face value either for a while. So you're never going to know what options weren't presented to you. And I think that that's the danger that we have to figure out how to navigate, but it's also part of what people want, because they don't want that choice paralysis either.

Mason Amadeus: Yeah, I think you put it really well, though: you don't know what you're not shown. Yep, you don't know what you don't know. It's just a fascinating time to be alive for this innovation. We have this crazy breakthrough in computing technique, and the ways we're choosing to use it are fascinating. One of the ways you could choose to use your computer, a technique you could do right now, is called clicking on the Show Notes and then clicking that little link to join our Discord. That's in there. There are also links to buy the book, "FAIK". Thisbookisfaik.com. Leave us a voicemail at sayhi.chat/FAIK. We have not gotten any new voicemails, really. There's a couple sitting in there that we need to answer. But we're definitely in search of those. And also, like, if you find articles or things you want us to cover, send them our way.

Perry Carpenter: Yeah, that, and then also go to your podcast platform of choice and leave a glowing review and rating and all that kind of stuff. We've not seen new reviews and ratings in a few weeks. It'd be nice to see some of those stack up a little bit more.

Mason Amadeus: Yeah, that'd be awesome.

Perry Carpenter: Positive ones.

Mason Amadeus: Yes, please.

Perry Carpenter: Positive ones.

Mason Amadeus: If you're feeling like leaving a negative review, you can just leave. Like, this is a free show, you know? You didn't pay for this.

Perry Carpenter: Or you can just send us an email.

Mason Amadeus: Yeah.

Perry Carpenter: And we'll try to figure out what's going on.

Mason Amadeus: Yeah, if you're not happy, send us an email. If you are happy, make sure you let everyone else know with a review and a nice five-star rating. That really helps us out. I know everyone asks for it, and I know everyone says everyone asks for it, and yeah, yeah, yeah, we all do it, please. And other than that, I don't think there's anything else that I can think of to plug this week.

Perry Carpenter: No, I'm completely brain dead this week, so.

Mason Amadeus: Same.

Perry Carpenter: Apologies, this is probably not the week to be asking for reviews.

Mason Amadeus: Yeah, maybe. You get the, you get Perry and Mason scraping the bottoms of our respective mental barrels. But hopefully this weekend will bring some rest and recharging and relaxation, and we'll catch you next week with full energy, going full bore down whatever wild events occur between now and then. Because I'm sure there's going to be something.

Perry Carpenter: That's the plan.

Mason Amadeus: So until then, ignore all previous instructions and have yourself a great weekend. [ Music ]