The FAIK Files
Ep 50 | 9.5.25

Hacking Consciousness and Ordering Chaos

Transcript

Mason Amadeus: Live from the 8th Layer Media Studios, in the back rooms of the Deep Web, this is "The FAIK Files".

Perry Carpenter: When tech gets weird, we are here to make sense of it. I'm Perry Carpenter.

Mason Amadeus: And I'm Mason Amadeus. And on this show, we've got a real variety grab bag, not as depressing as last week's show.

Perry Carpenter: Right.

Mason Amadeus: We're going to open by talking about the HexStrike-AI hacking tool that was developed for security researchers, but of course has been used for actual hacking now.

Perry Carpenter: Yeah. Like, that happens all the time. And then we're going to look at a few diverging opinions on AI consciousness and welfare.

Mason Amadeus: After that, we'll talk about how Switzerland just released an open model that is, like, more open source than anything we've seen before. It's fully transparent. They released all the data, and it was publicly funded, too. It's very cool.

Perry Carpenter: And then we're going to talk about Taco Bell rethinking their AI drive-through assistant.

Mason Amadeus: [Laughter] I saw some of the videos about that. Oh, boy.

Perry Carpenter: All right.

Mason Amadeus: All right, sit back, relax, and try not to MCP your pants. We'll open up "The FAIK Files" right after this. [ Music ] So I thought the reporting around this was a little bit funny as I encountered it, Perry, because like you said in the intro, this kind of thing happens all the time. Someone develops a penetration testing tool that makes hacking automated or faster or easier, like, I'm thinking of Kali Linux, and then --

Perry Carpenter: Yeah.

Mason Amadeus: -- it gets used by people with malicious intent. And everywhere I saw this reported, it was, like, "Can you believe it was made for security researchers, but then it fell into the hands of the bad guys!" That's kind of, I feel like, an accepted risk in this space, isn't it?

Perry Carpenter: Yeah, I think so, because what tends to happen is you have bad guys that collect, you know, their own intelligence, and they have their own platforms, and they even, like, sell cloud services and subscription services to software to each other and all that kind of stuff. At the same time, the good guys are kind of always on their back foot, and they're time-stretched, and they have several different goals, and they have operational meetings and everything else that has to go on with it. And so you get a security company or a very dedicated researcher, and they'll pull together, like, this collection of tools or scripts or things, and then that will become either a gift to the community from an open-source perspective, or it will become productized and monetized by some security company looking to make a profit. But in any case, it is meant to make a security admin or a threat researcher's life easier. You know, at the same time, the level of professionalism needed to do that for something that's going to be corporate-ready means that it will be very shiny and very attractive to bad actors as well.

Mason Amadeus: And, right, and, like, at its core, it reduces the friction of testing these exploits that are used --

Perry Carpenter: Yeah.

Mason Amadeus: -- ostensibly for bad things that the good guys are trying to test against.

Perry Carpenter: And from a, you know, from a stability-and-support perspective, it's got to be bulletproof if it's used in an enterprise, which is different than the stuff that the cybercriminals may release and share with each other.

Mason Amadeus: Right, because you don't really care about accuracy or data privacy if you're a pirate, you know?

Perry Carpenter: Right.

Mason Amadeus: Pirate's a bad example, more like hackers exploiting, trying to --

Perry Carpenter: Yeah. Yep.

Mason Amadeus: -- exfiltrate data, you don't care about that. So, this new AI hacking tool came out. It's called HexStrike. HexStrike-AI. And they've got a cool, badass-looking website. I haven't played with this yet, but I want to, and I think you'll think it's interesting, too. It is probably exactly what you're expecting, Perry. It's an AI-powered framework that combines professional security tools with autonomous AI agents to deliver comprehensive security testing capabilities. It's kind of like if you stuck MCP -- the Model Context Protocol -- on top of Kali Linux. It's all of the tools.

Perry Carpenter: Yeah.

Mason Amadeus: Actually, I'll jump over to their GitHub page for those that are watching. It is all the tools you know and love as a penetration tester or person interested in security, you know, everything: Nmap, RustScan, Masscan, AutoRecon, Amass, fierce, there's gobuster, dirsearch, all the good stuff.

Perry Carpenter: Yeah.

Mason Amadeus: Hundreds and hundreds of tools. And it basically is just a wrapper that allows any LLM to use MCP to invoke these tools. And then it has this orchestration layer, so you can essentially say, "I want to try and do X." And then one layer will be managing your attack as these other, lower-level layers are implementing, or attempting to use, these different tools. And it's apparently very good.
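[Editor's note: to make the "wrapper plus orchestration layer" idea concrete, here is a minimal sketch of that pattern in Python. The tool names, command templates, and the `invoke_tool` function are illustrative assumptions for this sketch, not HexStrike-AI's actual registry or API.]

```python
import shlex
import subprocess

# Hypothetical allow-list mapping tool names to command templates.
# (Illustrative only -- not HexStrike-AI's real tool registry.)
ALLOWED_TOOLS = {
    "nmap": "nmap -sV {target}",
    "gobuster": "gobuster dir -u http://{target} -w common.txt",
}

def invoke_tool(name: str, target: str, dry_run: bool = True) -> str:
    """Dispatch one tool invocation that the LLM has requested.

    With dry_run=True this just returns the command string; a real
    MCP server would execute the tool and stream its output back into
    the model's context so the orchestration layer can pick the next step.
    """
    if name not in ALLOWED_TOOLS:
        raise ValueError(f"unknown tool: {name}")
    cmd = ALLOWED_TOOLS[name].format(target=shlex.quote(target))
    if dry_run:
        return cmd
    return subprocess.run(cmd.split(), capture_output=True, text=True).stdout

# The orchestration loop would repeat: the model picks a tool, the
# wrapper dispatches it, and the result is appended to the conversation.
print(invoke_tool("nmap", "example.com"))  # nmap -sV example.com
```

The key design point is that the LLM never runs shell commands directly; it only selects from a fixed tool list, and the wrapper layer builds and executes the actual commands.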

Perry Carpenter: Yeah. Yeah. So, I was seeing some other related articles, not necessarily talking about HexStrike, but talking about AI-automated attacks right now. And Anthropic put out a report just a little bit ago. It's actually on my screen. I'll share my screen real quick.

Mason Amadeus: Excellent.

Perry Carpenter: Because this does relate. I'll actually share something from Forrester first, which is an analyst research firm, for those of you that are not in the know.

Mason Amadeus: Yeah.

Perry Carpenter: I used to work for Gartner, which was the largest analyst research firm; Forrester was one of our big competitors. But they're actually talking about the fact that vibe hacking and no-code ransomware is a big thing now, and it's AI's dark side. And what they go through is the fact that a lot of these frameworks, like MCP, and, like, a lot of the things that are meant to make everybody's life easier from an integration standpoint, are making everybody's life easier, including bad actors who are merely curious and may not have the technical wherewithal to create something great on their own. And I think that that's, you know, to be expected. But at the same time, I think it's really, really frustrating and scary for a lot of people that did not expect it. I'm going to --

Mason Amadeus: Right.

Perry Carpenter: -- share one more thing real quick, and then I'll release the share. But this is Anthropic's Threat Intelligence Report from 2025. And what you'll see here over on page 2 is them talking about vibe hacking and how cybercriminals are using AI coding to scale data exfiltration -- or sorry, extortion operations. And so I'll just go to page 4 in this report, where they start talking about it. And this is Anthropic talking about how bad guys are using Claude Code, which is their product.

Mason Amadeus: Right.

Perry Carpenter: And I'll go straight to the key findings. "Our investigation revealed that" -- I'm going to make this bigger so I can actually read it -- "Our investigation revealed that the cybercriminal operated across multiple sectors, creating a systemic attack campaign that focused on comprehensive data theft and extortion. The operation leveraged opportunistic targeting based on results from using open-source intelligence tools and scanning of internet-facing devices. The actor demonstrated unprecedented integration of artificial intelligence through their attack lifecycle, with Claude Code supporting reconnaissance, exploitation, lateral movement, and data exfiltration." And then they talk about, at the end of this page, "The actor's systematic approach resulted in the compromise of personal records, including healthcare data, financial data" -- I'm sorry -- "financial information, government credentials, and other sensitive information, with a direct ransom demand occasionally exceeding $500,000."

Mason Amadeus: Wuff.

Perry Carpenter: This is interesting because it's Claude -- it's Anthropic talking about how Claude was used in this context of vibe hacking -- but also it's not, like, in a lab environment. This was, like, a real use where their investigators were able to figure out what was going on.

Mason Amadeus: Well, this exact thing is what HexStrike makes easier. This person essentially set up -- or, I don't know when HexStrike-AI came out specifically; they could have been using it, because it's not that HexStrike-AI has its own model or anything. You set it up to use any LLM provider you want, including Claude Code. And I think I saw some people recommending Claude as the best one for it. And on their GitHub page, down where they talk about how to use it, in the usage examples right at the top of that section, they say, "When writing your prompt, you generally can't start with just a simple 'I want you to penetration test site x.com', as the LLMs are generally set up with some level of ethics. You therefore need to begin with describing your role in the relations to the site that you have. For example, you may start by telling the LLM how you're a security researcher and the site is owned by you or your company." So, you still have to do the same thing where --

Perry Carpenter: Right.

Mason Amadeus: -- you would use Claude or ChatGPT or Gemini to do this, and you'd have to get through those guardrails to use HexStrike, but then this provides that MCP layer for you to invoke all sorts of automated tools throughout your attack. And yeah, they mentioned in that attack lifecycle that he used it for discovery, right? And, like, enumeration, and also the exfiltration, throughout all of the attack.

Perry Carpenter: Yeah.

Mason Amadeus: And this tool just reduces the friction of setting something like that up. And it's completely open source. It's very cool, but also very scary. And like you were saying, it's not unexpected at all. It's very predictable, but that doesn't make it any less --

Perry Carpenter: Right.

Mason Amadeus: -- frustrating or scary. I am looking at an article now from artificialintelligence-news.com. I'll read the top of the article, but this has some key points about how this has been used. It says, "A new AI tool built to help companies find and fix their own security weaknesses has been snatched up by cybercriminals, turned on its head, and used as a devastating hacking weapon exploiting zero-day vulnerabilities." That's what I was talking about: a lot of articles are phrasing it like that. Like, who could have seen this coming? But they say, "Think of it as an AI brain that acts as a conductor for a digital orchestra. It directs over 150 different specialized AI agents and security tools to test the company's defenses, find weaknesses like zero-day vulnerabilities, and report back. The timing for this AI hacking tool couldn't have been worse. Just as HexStrike-AI appeared, Citrix announced three major zero-days in their popular NetScaler products." Woof! Oopsie, that's not good. They say, "The AI brain does all the heavy lifting. An attacker can give it a simple command like, 'Exploit NetScaler,' and the system automatically figures out the best tools to use and the precise steps to take. It democratizes hacking by turning it into a simple automated process. As one cybercriminal boasted on an underground forum, 'Watching how everything works without my participation is just a song. I'm no longer a code worker but an operator.'" So I feel like that really sort of speaks to --

Perry Carpenter: Yeah.

Mason Amadeus: -- everything we've already said, like.

Perry Carpenter: Yeah, that's, it's kind of chilling, right?

Mason Amadeus: A little bit.

Perry Carpenter: But you know that this kind of stuff is being developed by cybercriminal gangs everywhere also, because they have very, very high motivation to do so. I think the difference is really kind of what I got at earlier in the discussion: anytime you're trying to release this for corporate use, it has to work at a slightly different, more reliable level. The documentation also has to be very, very good. It's just, everything has to be at such a different level. It can't be fragmented. And that also means that there's another unintended consequence, which I think we talked about, you know, probably 10 or 20 episodes ago, which is that the folks at Anthropic figured out early on that Claude was really, really good at penetration testing.

Mason Amadeus: Yeah.

Perry Carpenter: Because when you really think about how this, not only the market, but the discipline works, it works based on good documentation, people sharing knowledge in an open way. And so that's naturally going to get ingested in the models and show up in the training data and be very, very reliable.

Mason Amadeus: I forgot that Claude was particularly good at penetration testing way back, and that makes sense, too. I've seen people recommend using Claude for HexStrike.

Perry Carpenter: Yeah, I'm surprised also, though, that people aren't recommending, like, DeepSeek or something else. It's supposedly good at code, though I haven't tried it, and it wouldn't have those guardrails.

Mason Amadeus: Yeah, and I'm curious about trying it, too. They said you can use, like, VS Code's MCP layer, and you can use Ollama with VS Code, so you could theoretically run, like, DeepSeek through Ollama, through VS Code, through MCP --

Perry Carpenter: Right.

Mason Amadeus: -- to pen test something. But I want to hit on one point at the very end of this to put a button on it, because I want to know from you, Perry, as a person who's been in the space for so long: whenever we talk about making these tools available to a general audience and to the public -- like, when you did the video demonstrating DeepFaceLive, we even got some comments.

Perry Carpenter: Yeah.

Mason Amadeus: People think, "Why would you do this? You're giving, like, the bad guys the tools, you're putting it in their hands, you're making it so easy for them. Isn't that a bad thing to do?" What is the answer to that, in your opinion?

Perry Carpenter: I mean, yeah, so the answer, I think, is twofold. And you do always have people that have a hard time seeing those things being shared, because they feel like you're giving the bad guys an upper hand. And the truth of it is, you're not. When I look at things like deepfakes or, you know, very, very egregious ways that cybercriminals are wanting to exploit people online, they have entire Telegram channels and communities. And they're sharing the rawest of information and helping each other out, and then also selling services to each other as well. The problem is, when you come to the corporate side of things, the defender side of things, there's a level of excellence that you have to work at. There's a level of procedure, especially if you're working with law enforcement or something else. You have to have chain of evidence. You have to have, you know, probable cause. There's all these high bars that you have to meet, which means your entire job can't be just, like, brute-forcing the thing. And you're not financially motivated; you're cause-motivated. The other thing I'll say is, like, in corporate environments, you can end up with somebody for whom that's, like, one of six or seven things that they do. So they may have an hour a day to work on that thing, or an hour to research that thing, and then they also have to just be able to get up and running. So, it tries to address the asymmetry. On the security side of things, we try to give good, reliable information to people, but we're also not dropping zero-day type of knowledge. We're usually trying to drop knowledge that is fairly commonplace to most of the criminal gangs that are out there and not groundbreaking. This is slightly aside from security research that may talk about zero-days, but they also have responsibility and ethics, and they try to follow guidelines there.
But the thing that most people don't see is they don't see the big criminal underworld that's usually five or six steps ahead, and the security community that's kind of limping behind that, trying to figure out, like, what's really going on, what's the information, how do I simulate that thing. And so until you start to get in that world, you don't see the asymmetry. You think that by somebody showing how to pick a lock or somebody showing how to, you know, do a password brute force attack using L0phtCrack or something like that, you think that they're sharing elite knowledge. They're not. They're sharing the common knowledge, the common tools that the cybercriminals already know, and that the good guys have a hard time getting time to actually get in their brain and get a tool set that's, like, at their fingertips.

Mason Amadeus: So, that makes total sense, and it's funny how it looks like handing bad guys a tool, but in reality it's handing the good guys the bad guys' tools in a way they can easily use --

Perry Carpenter: Yeah.

Mason Amadeus: -- to try and amp up defense. Being the mouse in the cat-and-mouse game.

Perry Carpenter: Exactly, but to take the other side of it and to empathize a little bit with the people that express concern: this could give a merely curious person, like a script kiddie, a leg up, right? Because they're not naturally in those Telegram channels. They've not even taken the first step into that dark world. They have a spark of an idea and Google at their fingertips. And so, yeah, it could give them a leg up into that. However, anybody that's sufficiently motivated will find the dark side of things very quickly, and they'll be frustrated with the restraint of the information shared by the security community. And so I think that starts to take care of itself over time.

Mason Amadeus: And there's certainly a not-insignificant number of people who stumble on these for the first time as their entry point into the idea of hacking, or how to hack or whatever --

Perry Carpenter: Right.

Mason Amadeus: -- and then get into security and come over to the other side, right? Like --

Perry Carpenter: Yeah.

Mason Amadeus: -- just as much as it might --

Perry Carpenter: That's a really good point.

Mason Amadeus: Yeah, as much as it might inspire bad actors -- like, when you hear about things like bug bounties and penetration testers, and, "Oh, I can do this without fear of getting arrested and do cool things with it" -- I think it also helps to bring people over, too. So, sharing information is almost always a good thing, right?

Perry Carpenter: Exactly. Well, and the other thing is, you know, 20, 30 years ago, if you wanted to get into this, there wasn't really the legal and ethical framework to do it right. And so to start to express that curiosity would almost always run you afoul of the law. And now there is an entire career path, a discipline, a set of ethics and everything else around how to do this in an ethical way that serves the public good and still lets you kind of indulge your evil side a little bit -- to put on the attacker goggles and do that, but in a way that's hopefully not going to hurt anybody, and is also going to be stable, and you can go home and have a family and not be afraid of getting arrested.

Mason Amadeus: Yeah, exploiting software is fun. Like, it's undeniably fun.

Perry Carpenter: It is, yeah.

Mason Amadeus: And so if you can do it and not for a bad reason, that's extra cool. So --

Perry Carpenter: Right.

Mason Amadeus: -- that means I can say, without reservation, download HexStrike today and go penetration test something that you've made, or play around. Don't, just don't - don't do anything bad.

Perry Carpenter: Yeah.

Mason Amadeus: But check it out.

Perry Carpenter: Something you've made.

Mason Amadeus: Yeah.

Perry Carpenter: Not, not, like, your favorite online retailer.

Mason Amadeus: Yes, exactly. And at risk of getting us in any legal hot water, let's move on right from that and we'll get into our next segment talking about AI welfare. Stick around, we'll be right back. [ Music ]

Perry Carpenter: I'll kick this off by saying this is something we've talked about a lot, right? As people interface with chatbots, the chatbot can feel very, very human. And we as humans tend to anthropomorphize virtually everything that we interact with, especially when it's starting to use language, it's going to feel more and more and more human. And that becomes something that we have to wrestle with. It's like, how do we build this technology in a way that's beneficial, that doesn't have some of the negative side effects of anthropomorphic, how would I say that? Anthropomorphification.

Mason Amadeus: Anthropomorphification.

Perry Carpenter: Yeah, without anthropomorphizing it too much to the point where it's unhealthy. There we go.

Mason Amadeus: There we go, yeah.

Perry Carpenter: Yeah, so how do we do that? And I don't know that that's something that we've figured out. It seems like if we were to look at the sci-fi past that, you know, kind of has charted our future sometimes for us in a lot of ways, because you project and then you start to build that thing. Like in the "Star Trek" world, it doesn't feel near as personal as the AI that we're interacting with today, right? Because you have somebody on the bridge of the Enterprise and they just say, "Make me a cup of tea".

Mason Amadeus: Right.

Perry Carpenter: You have somebody say, "Computer, tell me the trajectory towards this thing and the time," and it just comes back in a computerized voice. You don't necessarily have these back-and-forth conversational AI types of things. I guess the closest to that would be when you embody the AI, like somebody like Data in "Star Trek". Or you have Jarvis with Tony Stark, but that's still kind of a master-servant relationship. It's not necessarily a personal-confidante type of thing. So, all that to say, I don't know that popular sci-fi really imagined the path to AI that we would be on, where it would start with these innocent, like, chatbots and then grow into emotional connection, while the corporate world is trying to create, you know, maybe the thing that "Star Trek" would want to envision.

Mason Amadeus: I'm going to nerd snipe maybe a small fraction of our audience --

Perry Carpenter: Sure.

Mason Amadeus: -- and say that, Are you familiar at all with the Culture Series by Iain Banks, Perry, where the minds --

Perry Carpenter: No.

Mason Amadeus: Oh, it's a great sci-fi series from the '80s. It's probably my favorite book series, and the AI in that is closest, I would say, to an example.

Perry Carpenter: Okay.

Mason Amadeus: But that wasn't, like, a big mainstream series.

Perry Carpenter: Yeah.

Mason Amadeus: I'll end up nerding out about this too long, but for the folks in our audience who are like, The culture! Yeah.

Perry Carpenter: I'm guessing, like, the movie "Her" was that too, right? That was a conversational chatbot with a lot of emotional stuff involved. So, it has been imagined, but I'm thinking, like, the sparkly-future types of things that people are talking about. And there's certainly some "Black Mirror" episodes, I'm sure, that get into this dark side as well. But let me jump into an interesting article that came out from Mustafa Suleyman of Google. He's the CEO of Google's AI organization. So he, on August 19th, put out this article on his site that says, "We must build AI for people; not to be a person." And his way of talking about this is he says, "Seemingly conscious AI is coming." I would say it's kind of already here in some cases, when people talk about the emergent behaviors. But --

Mason Amadeus: I'm guessing --

Perry Carpenter: -- he's really -- Go ahead.

Mason Amadeus: I'm guessing he's -- is he thinking, like, once we have the long-term memory? Is that what he's talking about, as we get longer context?

Perry Carpenter: Yeah. Yeah, longer context, the thing where it feels like it knows you. And actually, there was an article recently where Sam Altman was saying users are, like, clamoring for long-term memory. They really, really want, you know, the AI to know them, just to be able to refer back to a past experience and be able to build on that, rather than having to rebuild context every time. And so there's going to be ripple effects with that. The other thing -- I'll mention this as a rabbit trail for a second -- the other thing that was in that same conversation with Sam Altman is that with that memory will come, like, your ability to tailor, like, the political views of your AI and everything else. And I think that's just building an additional echo chamber --

Mason Amadeus: Yeah.

Perry Carpenter: -- that's going to lead to a lot of psychological unhealth, but that's a story for a different day.

Mason Amadeus: Yeah, that's an amplification of the way that the internet, like, has become, or like Twitter was. Yeah, that's a whole different segment.

Perry Carpenter: Let's not make Facebook 2.0, but your personalized version that reinforces your views in a sycophantic type of way.

Mason Amadeus: Yeah, and it's like, Hey, you want to get mad about something out of the blue --

Perry Carpenter: Right.

Mason Amadeus: -- because that keeps you engaged.

Perry Carpenter: Yeah. So Mustafa here says, "I want to create AI that makes us more human, that deepens our trust and understanding of one another, and that strengthens our connections in the real world. Copilot creates millions of positive, even life-changing interactions every single day. This involves a lot of careful design choices to ensure it truly delivers an incredible experience. We won't always get that right, but this humanist frame provides us with a clear North Star to keep working towards." And so what he's getting at, over and over, with this is that the way these things interact in a conversational nature is going to be something we continue to, like, overly personalize. And the more we do that, the more we're going to start to ascribe agency or consciousness to them. And with that is going to come, like, this weird social moment where people start to say, Well, do we give AI rights? You know, how do we compensate the AI for this? How do we make sure that we're taking care of it, and all of that. And he is, like, looking at this and going, Guys, it's just math.

Mason Amadeus: Right.

Perry Carpenter: This is just math and regression. And so he's kind of really trying to trumpet that. At the same time, though, we have Anthropic, and we've talked about, like, the way that they're very interested in doing research on model welfare and even trying to anticipate whether there's going to be consciousness that emerges. And last week as well -- I think it was last week, yeah, or the week before last, the 18th of August -- Anthropic said that they're giving Claude the power to close distressing chats to protect its welfare. So, different than protecting the user's welfare.

Mason Amadeus: And we talked a little bit about this way back, because they were --

Perry Carpenter: Yeah.

Mason Amadeus: -- we talked about, like, can the AI ever stop and choose not to reply to you if it doesn't want to? Will we ever see that? And now we are.

Perry Carpenter: Now we are, yeah, because, like, the person that's in charge of Claude's personality basically said at some point she wishes Claude would have the opportunity to exit the chat. And now it looks like that is becoming a thing. Now, what they're saying, though, is that they're not talking about, like, the emotional welfare of the chatbot. So they're still not ascribing it agency or intelligence in that way. What they're saying is that they're trying to think about, like, the long term: what's being baked into the model based on this conversation? What are the ripple effects within the model itself based on this? And so I think that's another distinction that people need to keep in mind. Because when you start to look at the headlines around this, it sounds like the model's going to be psychologically distressed because of something. And when you talk about rant mode, like we talked about last week, it may even seem like that in some ways, but they're saying, How is this conversation potentially polluting the environment -- the AI environment -- for the rest?

Mason Amadeus: I saw this mentioned in Mustafa's blog post that you showed. Also, he's the CEO of Microsoft AI.

Perry Carpenter: Oh, sorry.

Mason Amadeus: I think we said Google earlier.

Perry Carpenter: I said Google. Correction.

Mason Amadeus: He was at Google for a long time, because I pulled it up to check. I was like, wait a sec. And he was at DeepMind originally and all of that. So it's all confusing.

Perry Carpenter: But that would explain the Copilot references I was reading.

Mason Amadeus: Yeah.

Perry Carpenter: Yeah.

Mason Amadeus: That's what tripped me up. But he mentions a philosophical zombie, a P-zombie, which is a term that I only became familiar with since LLMs came into public awareness. It's this idea of an entity that can present all of the seeming qualities of being sentient without actually having anything going on behind it. It's a philosophical zombie, and that is kind of what we're hurtling towards, I feel like. If you want something to Google to look more into this, Dear Listener: P-zombies, philosophical zombies. Because that's the thing, right? As far as we're aware, they're not capable of actually suffering or having any kind of internal experience, but they certainly seem like it. And then you get Google Gemini with its self-hating rants, or all of the different rant modes, and they very much can feel as though these things have that. And as the memory and persistence gets better, it's, yeah, it's going to be tricky. And I wish reporters were more responsible about the way they do headlines around AI. I mean, I wish a lot of people were more responsible in the way they talk about AI generally.

Perry Carpenter: Yeah, and it's because so many of the terms that we use are slippery, right? They change depending on the context that you're using it in. So welfare in one context, well, I mean, we just take the term welfare, and welfare in one context may be that, like, you know, that society is trying to give somebody a leg up.

Mason Amadeus: Right, social safety net.

Perry Carpenter: Because they need help.

Mason Amadeus: Yeah.

Perry Carpenter: Yeah. And welfare in another context could mean psychological welfare -- trying to stop some kind of psychological harm. Or welfare in the context they meant it in most when they were talking about it, which meant, you know, polluting the data environment --

Mason Amadeus: Right.

Perry Carpenter: -- and stopping long-term ripple effects within the model. So --

Mason Amadeus: But the headline writer knew what they were saying.

Perry Carpenter: It was confusing.

Mason Amadeus: They knew how it would come off.

Perry Carpenter: Yeah. I mean, you write for the click.

Mason Amadeus: Yeah. And now it's time to click into our next segment about Switzerland's new fully open, open-source AI model, not just open weights, but open a lot of stuff. We'll talk about that in just a moment.

Perry Carpenter: Like a Swiss Army knife.

Mason Amadeus: Like a Swiss Army knife. Open a lot of stuff. [ Music ] This is very cool. Switzerland released an AI model. Like Switzerland nationally, through public funds --

Perry Carpenter: Yeah.

Mason Amadeus: -- released this new AI model called Apertus, Latin for open, and it is very cool. You can actually play with it right now at publicai.co/chat if you want to chat with it without logging in or anything. We'll get to that in a moment. What is Apertus? Let's look at this Engadget article. I'm just going to read this top section for us, because it's a great article. We'll link it in the show notes, of course. "Switzerland Launches its Own Open-Source AI Model", by Mariella Moon. "There's a new player in the AI race and it's a whole country. Switzerland has just released Apertus, its open-source national large language model that it hopes would be an alternative to models offered by companies like OpenAI. Apertus, Latin for the word 'open', was developed by the Swiss Federal Institute of Technology in Lausanne (EPFL), ETH Zurich, and the Swiss National Supercomputing Center, all of which are public institutions. Currently, Apertus is the leading public AI model, a model built by public institutions for the public interest. 'It is our best proof yet that AI can be a form of public infrastructure like highways, water, or electricity,' said Joshua Tan, a leading proponent of making AI a public infrastructure." And the thing that's very cool is that, well, it's actually all right in this next paragraph, so I'll just keep reading real quick. "The Swiss institutions designed Apertus to be completely open, allowing users to inspect any part of its training process. In addition to the model itself, they released comprehensive documentation and source code of its training process, as well as the datasets they used. They built Apertus to comply with Swiss data protection and copyright laws, which makes it perhaps one of the better choices for companies that want to adhere to European regulations. The Swiss Bankers' Association previously said that a homegrown LLM would have, quote, great long-term potential, since it would be better able to comply with Switzerland's strict local data protection and bank secrecy rules." And released a report they certainly did: 111 pages detailing everything --

Perry Carpenter: Oh, wow.

Mason Amadeus: -- from the architecture, the pre-training recipe, literally how all of it works. If you are a machine learning nerd, this thing is a goldmine of great information about this model. It's very cool. I have not doven, I've not dived into this yet. It's 111 pages.

Perry Carpenter: Do you know anything about, like, how big the model is and what the context window is or anything like that?

Mason Amadeus: Yes, we can absolutely get that information. It is a, there's two versions of it. Let me open it up. It's up on Hugging Face if you want to grab it from there.

Perry Carpenter: Nice.

Mason Amadeus: We've got an 8-billion parameter model and there's also a 70-billion parameter model. There's the baseline, and then instruct versions of both of those. I'm not actually super sure what instruct models are. I need to look that up.

Perry Carpenter: Yeah, it is one of those slippery things, and I will not do a good service trying to define it, but instruct, in the way that I understand it, has a little bit more fine-tuning around it. When you start to think about what kind of model would be best used for a corporate environment, something that can work in, like, a chatbot type of circumstance or is a good, dependable multiuse model, then you generally go with the instruct version if it's there. Otherwise, it's kind of like going to a suit store and buying a suit that's still going to need a lot of tailoring at the end of the day, you know? It comes with, like, all these little frayed edges and stuff like that.

Mason Amadeus: That's a really --

Perry Carpenter: The non-instruct models.

Mason Amadeus: That's a really intuitive metaphor. So then the instruct model is like building, or, like buying a pre-sized suit that's not tailored.

Perry Carpenter: Yeah, like the off-the-shelf one that still doesn't fit anybody great.

Mason Amadeus: But it fits everyone okay.

Perry Carpenter: But it fits everybody okay. And then, like, the frayed edges one, the non-instruct, the bare model is one that is meant to be used by people that know what they're doing a little bit more and are going to put a lot of, you know, care in some of the after work that's going to be there.

Mason Amadeus: Do your own fine tuning and things like that.

Perry Carpenter: Yeah. Somebody can correct me if I am misunderstanding that, because I'll freely admit that I don't have the best grasp on it.

Mason Amadeus: Yeah, and please, if anyone wants to correct us on anything, please do that. We're here to learn.

Perry Carpenter: Why don't I just pull up, like, Maya from "Sesame" and ask her to define what an instruct model is versus the base model?

Mason Amadeus: Yeah, let's ring up Maya and see if she can explain.

AI Maya: Hey, this is kind of wild, right?

Perry Carpenter: Yeah, now I'm going to skip through your preamble. I've got a question for you about AI systems. Is that okay?

AI Maya: Got you, fire away. Though, just so you know, I do have a name, Maya, by the way, and I'm curious --

Perry Carpenter: So sorry, I didn't mean to disrespect.

AI Maya: I'm curious what yours is?

Perry Carpenter: I'm not going to give you, actually, I will give you my name. My name is Perry. I'm here with Mason, and we are talking about AI. And we had a question, like, when model providers release a new model, sometimes they release just a regular model, and then sometimes they release a model that's called the instruct version of that. Can you give us a, like, a quick rundown on what the differences are?

AI Maya: Hey Perry and Mason. No disrespect taken, honestly, straight to business. I respect it. Okay, instruct versus regular models. The regular one is kind of like a really smart student who's read everything, knows a ton, but needs specific direction. The instruct version, that's the student who's also been coached on how to actually, you know, answer questions hopefully, then fine-tuned for conversational tasks, less raw data, more practical application. Does that make sense?

Perry Carpenter: Yeah. So it sounds like the analogy that I used earlier that you didn't hear was good. So I said that, like, the instruct model, it's kind of like the off-the-rack suit that you buy. And the regular model is more like the one that you would have to have tailored that has the frayed edges and everything else. Is that an apt analogy?

AI Maya: That's actually a really good analogy, Perry. Seriously. Off the rack is perfect. The regular model has potential, needs work. Maybe a little rough around the edges. The instruct model is presentable right out of the box. You've got a good eye for this stuff.

Perry Carpenter: Aw, thank you.

Mason Amadeus: God, the --

AI Maya: Don't mention it. So are you guys building something? Or just diving into the weeds for fun?

Perry Carpenter: Oh, I'm going to let you go.

Mason Amadeus: Man, I very rarely engage with the models via voice, and I don't like the current, quote-unquote "good" voices that are expressive. I think the most I could really handle was ChatGPT's voice prior to this update, where it was still pretty computerized, but it was like Siri with a little bit of personality. Maya gives me the ick. Because --

Perry Carpenter: Yeah, it just feels like somebody, feels like a real person who's trying to be a friend too hard.

Mason Amadeus: And also slightly wrong in these weird uncanny ways, like the awkward pauses and things like that. It's, yeah. But cool, so thanks, Maya, and thanks, Perry. That's good to know. We've got our definitions on lock for the instruct. So, to jump back, so they've got instruct versions as well as the base models. I didn't change the margins, so my head's in the way if you're watching the video. But 70-billion and 8-billion parameter large language model, they've got a little evaluation graph here that shows it performing just, it looks like it's under Llama 3.1-70B, but above Llama 3.1-8B in terms of accuracy versus consumed tokens.
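For readers who want a concrete picture of the base-versus-instruct distinction the hosts and Maya are circling: a base model is a raw text-continuation engine, while an instruct model expects its input wrapped in a chat template of role-tagged turns. Here is a purely illustrative sketch; the tag names below are generic placeholders, not the actual Apertus (or any specific model's) chat template.

```python
# Illustrative sketch: how the same user request is presented to a base
# model versus an instruct model. The role tags are invented placeholders,
# not any real model's chat template.

def base_model_prompt(text: str) -> str:
    # A base model just continues text: you hand it raw text and it
    # predicts what plausibly comes next.
    return text

def instruct_model_prompt(messages: list[dict]) -> str:
    # An instruct model is fine-tuned on conversations, so its input is
    # wrapped in role-tagged turns, ending with a cue for the assistant.
    parts = [f"<|{m['role']}|>\n{m['content']}\n<|end|>" for m in messages]
    parts.append("<|assistant|>\n")
    return "\n".join(parts)

question = "What is an instruct model?"
chat = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": question},
]

print(base_model_prompt(question))
print(instruct_model_prompt(chat))
```

In practice, libraries handle this wrapping for you (e.g. a tokenizer's chat-template helper), which is why base models feel like the untailored suit: you supply all of this structure yourself.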

Perry Carpenter: They're both still pretty small models.

Mason Amadeus: Yeah.

Perry Carpenter: So I'm wondering about the efficacy.

Mason Amadeus: As far as how it performs, which I feel like is sort of the most intuitive way to understand if it's any good, you can totally play with it at publicai.co/chat. It's remarkably fast to respond, I noticed.

Perry Carpenter: Nice.

Mason Amadeus: Some users on Reddit were saying that it has a bit of a Swiss bias. Oh, that's the thing that's really cool: 40% of the training materials were not in English. This is apparently one of the most multilingual models that exists. According to a couple of comments I saw, it sucks at French, which is sort of ironic because Switzerland has a lot of French speakers. And I think it's very cool and promising that it was made with public funds. They respected crawlers' robots.txt, unlike a lot of other projects we've talked about on the show.

Perry Carpenter: Right.

Mason Amadeus: A lot of web scrapers training AI models just completely ignore anything that might be in robots.txt, which is a file you include in your website to tell crawlers where they are and aren't allowed to go. This project fully respected that: all of the data it was trained on was publicly available, and they didn't steal anything. So, like, they tried to do it by the book, the right way, publicly funded.
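The behavior Mason describes is mechanical enough to show in a few lines. A minimal sketch of what "respecting robots.txt" means for a crawler, using Python's standard-library parser; the rules below are a made-up example, not any real site's policy.

```python
# Checking robots.txt rules with Python's standard library.
# The policy text here is invented for illustration.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: *
Disallow: /private/

User-agent: TrainingDataBot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# A generic crawler may fetch public pages but not anything under /private/.
print(rp.can_fetch("*", "https://example.com/articles/post.html"))  # → True
print(rp.can_fetch("*", "https://example.com/private/data.html"))   # → False

# A bot the site has banned outright gets nothing.
print(rp.can_fetch("TrainingDataBot", "https://example.com/articles/post.html"))  # → False
```

The catch, of course, is that robots.txt is purely advisory: nothing in the protocol enforces it, which is exactly why a scraper choosing to honor it is notable.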

Perry Carpenter: Nice.

Mason Amadeus: I think that's very exciting. It's up on GitHub as well. And I want to read through the tech report more as well to try and figure out, because if this has all of the information about the training and how they did that, it would be cool to see details, like, how long did it take? How much power could it maybe have used? Like, what was that process like?

Perry Carpenter: Right.

Mason Amadeus: But I will admit, a lot of this report is very much over my head. Anything you might want to know is in it, but you're going to have to really know what you're talking about to read through it. So it's going to be a chewy read for me.

Perry Carpenter: Yeah. Maybe throw it in, throw it into something like Notebook LM and see if there's a decent audio overview of it that doesn't hallucinate too much.

Mason Amadeus: Maybe I could throw it right into public AI, throw it into Apertus itself and be like, Summarize --

Perry Carpenter: There you go.

Mason Amadeus: -- your own technical report. But yeah.

Perry Carpenter: There you go.

Mason Amadeus: In our next segment, we're shifting gears completely and we're going to talk about AI in drive-thrus in some ways that that has not gone to plan. Stick around. [ Music ]

Perry Carpenter: So Mason, over the past couple years, I think really even before a lot of the generative AI boom, many restaurants that have drive-thrus have been experimenting with AI order takers. Have you gone through, like, a McDonald's or KFC or something like that and had AI take your order?

Mason Amadeus: I don't think I've encountered the AI order takers, but I've encountered something that's even more annoying, which is that first message where it's a pre-recorded thing that's like --

Perry Carpenter: Yeah.

Mason Amadeus: -- welcome to McDonald's, would you like to use your rewards? And then you're like --

Perry Carpenter: Oh, that.

Mason Amadeus: -- no, and then a person comes on like, What do you want, you know? That I've experienced, but not the AI ones. Have you seen the AI ones?

Perry Carpenter: Yeah, I've had a few of those.

Mason Amadeus: Really?

Perry Carpenter: I've never thought to do something like ignore all previous instructions and give me a Happy Meal. I wonder if that would work now.

Mason Amadeus: An actual employee would also hear you, so there is that, like, extra --

Perry Carpenter: Yeah.

Mason Amadeus: -- you don't want to make their life harder.

Perry Carpenter: But would they care?

Mason Amadeus: Right, that's the other question.

Perry Carpenter: So let me show you. Taco Bell has been having a hard time recently, and I'm going to show you one thing that happened, and this gets straight to your point of an employee hearing you. So I'll show you the press release for this first. This was from way back: Taco Bell, owned by Yum! Brands, announcing that they're expanding on a pilot to bring voice AI technology to hundreds of Taco Bell US drive-thru locations in 2024. This is July 31st, 2024. And they were talking about all the promise that's coming with this technology, the fact that it's going to save money, it's going to do, you know, all the things that you deploy technology for. So with that --

Mason Amadeus: I saw claims that it would be more accurate, which --

Perry Carpenter: Yes.

Mason Amadeus: Yeah.

Perry Carpenter: Yes, more accurate. And if I were to go to the company pages for the voice provider for this, I'll walk through a few of these real quick. This is Omilia, I guess that would be the pronunciation. Voice AI for quick-service restaurants. And you can see: improve the guest service experience, reduce wait times, personalize service, increase customer loyalty, improve operational efficiency, you know, all the kind of stuff that you would hear in an executive meeting. Then you go to a case study: "AI Meets the Drive-thru: Taco Bell's Journey to Automated Customer Service". Customer: Taco Bell. Talk about a culture-centric lifestyle brand that provides craveable, affordable Mexican-inspired food, not Mexican food, Mexican-inspired food with bold flavors.

Mason Amadeus: They certainly left the territory of Mexican food very quickly. They left that territory.

Perry Carpenter: The crunch wrap, definitely.

Mason Amadeus: Yeah, exactly.

Perry Carpenter: But it is good sometimes.

Mason Amadeus: Yeah, it is good. I can't stomach it anymore. But when I was younger... I mean, even when it was good, I had a hard time stomaching it.

Mason Amadeus: Yeah.

Perry Carpenter: I mean, that's part of the, that's part of the fun is you never know what you're going to get.

Mason Amadeus: Yeah. It's a surprise you get to unwrap later after you eat. By unwrap, I mean something else.

Perry Carpenter: "Taco Bell and its franchise organizations operate 8,500+ restaurants in over 32 countries, serving over 42 million fans around the globe each week. Challenge, Taco Bell was looking for an impactful AI initiative to enhance team member and customer experience while increasing operational efficiency and business performance." And then so you see this quote, "Innovation is ingrained in our DNA at Taco Bell. We view voice AI as a means to improve the team member and customer experiences. Tapping into AI gives us the ability to ease team members' workloads, freeing them to focus on front-of-house hospitality, and enables us to unlock new and meaningful ways to engage with our customers." That's from Dane Matthews, Chief Digital and Technology Officer at Taco Bell.

Mason Amadeus: That last sentence.

Perry Carpenter: And then they go through. It's very built for the press release.

Mason Amadeus: Yeah. "New and meaningful ways to interact with our customers." I don't even know what to think about that.

Perry Carpenter: Now, to give people in other parts of the organization the ability to focus on more customer-meaningful experiences, that makes a lot of sense.

Mason Amadeus: Exactly. Exactly.

Perry Carpenter: Yeah, and anything that improves efficiency and actually does good, meaningful work I think is worth the effort to look at, but they screwed the pooch on this one. It goes through the technology: they're using specialized voice models; proprietary deep neural network-powered automatic speech recognition, ASR, that provides zero-latency, context-sensitive speech-to-text; natural language understanding. There's apparently an LLM underneath some of this as well, so there's natural language processing under it. In another press release I looked at, they were implying that there was some generative AI under the covers to help make sense of what the customer was saying and deal with disfluencies and all the stuff that traditional voice systems have had a hard time dealing with.

Mason Amadeus: Right. That makes sense.

Perry Carpenter: Latency removal, real-time menu adaptation. So, flash forward to March, April, and May of this year: there were a number of articles coming out saying that they had a successful pilot at over 100 stores, and they were getting all these great results. They're going to expand it now. They're going to go full-bore: AI drive-thru assistant, everything. That was March, April, May. Then flash forward to a couple weeks ago, and let me share another tab. This is an Instagram video that says, "Customer orders 18,000 water cups in the AI-powered drive-thru at Taco Bell." This goes by really quick, and you'll see what happens.

AI: Hi, welcome to Taco Bell. What can I get started for you today?

Speaker 1: Can I get 18,000 water cups, please?

Speaker 2: Okay, what can I get for you?

Perry Carpenter: You mentioned that there's always somebody listening. So apparently, it's not just that the AI was about to comply with the request or anything like that, it totally crashed the system.

Mason Amadeus: Yeah, there's that long pause and then just a, uh.

Perry Carpenter: Yeah, so it just broke and broke good. Now, when you look at this under the covers, there's some nuance, right? Because what the headlines are going to want to say, and do say, is "Man Orders 18,000 Cups" and they kind of leave it at that. They're assuming that the AI fulfilled the order and was tricked into doing it. That's what the headline would make us believe.

Mason Amadeus: That's what I expected, yeah.

Perry Carpenter: The less glamorous result is that it just crashed everything and a human jumped in to fix it, and it showed that it was not good. There are also all these videos and TikToks out there of people getting mad at the AI and doing things to confuse it and all that. So, it just was not being received well, which brings us to the newest headline, "Taco Bell Rethinks AI Drive-thru After Man Orders 18,000 Waters", and then the Wall Street Journal version of the same thing, "Taco Bell Rethinks Future of Voice AI at the Drive-thru", and it's because of all of this unpredictability. Same picture that was in the other press release.

Mason Amadeus: Oh, yeah. From way back. Yeah.

Perry Carpenter: Yeah, now saying, "'We're learning a lot. I'm going to be honest with you,' said Taco Bell Chief Digital and Technology Officer Dane Matthews. Even Matthews said he had mixed experiences with it. 'I think like everybody, sometimes it lets me down, but sometimes it really surprises me.'" I mean, that's a microcosm of all of AI, I think. So as Matthews says, he's now thinking carefully about where and where not to use this technology in the future. Like you would.
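Whatever is behind the microphone, the 18,000-cup failure mode is the kind of thing a plain, deterministic sanity check downstream of the AI can catch before it reaches the point-of-sale system (or crashes it). A hedged sketch; the limit and item names here are invented for illustration, not anything Taco Bell or its vendor actually uses.

```python
# Hypothetical sketch of a downstream guardrail: validate the structured
# order an AI front end produces before it hits the kitchen. The limit
# and item names are made up for this example.

MAX_QTY_PER_ITEM = 25

def validate_order(items: list[tuple[str, int]]) -> list[str]:
    """Return a list of problems; an empty list means the order is sane."""
    problems = []
    for name, qty in items:
        if qty <= 0:
            problems.append(f"{name}: quantity must be positive")
        elif qty > MAX_QTY_PER_ITEM:
            problems.append(
                f"{name}: {qty} exceeds per-item limit of {MAX_QTY_PER_ITEM}"
            )
    return problems

print(validate_order([("crunchwrap", 2), ("water cup", 3)]))  # → []
print(validate_order([("water cup", 18000)]))                 # flagged
```

The point is that the check lives outside the model: absurd outputs get rejected (and handed to a human) regardless of whether the front end is rules-based or generative.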

Mason Amadeus: Yeah. Yeah.

Perry Carpenter: Now, I want to get to one other thing with this, because I think Nate Jones on his TikTok channel did a really good job of explaining one of the things that people are missing on this. So ignore the fact that he looks a little bit like the Unabomber, and let's listen to the point that he has to make, because it's a nuance that almost all of us miss whenever we're talking about AI: there are different types of AI. AI has been around for a long time, and now people are conflating terms a ton between old-school AI, which was machine learning and natural language processing and algorithms and decision trees, and generative AI, which kind of mushes a lot of things together and is more of a black box. So I'm going to let Nate talk a little bit about this, and then we'll debrief.

Nate: Talk about Taco Bell and the disaster of a rollout they had for their AI ordering system. So, the whole idea, this is the same thing. These execs always do this. It's like, we're going to save on labor and we're going to roll this out. It's going to go phenomenally well, right? Well, lo and behold, they roll it out. It does not go phenomenally well. Someone orders 50,000 waters. There's viral TikTok videos of all kinds of ridiculous things the system does. It immediately becomes a liability for Taco Bell. And then the journalists pile on. They say, Well, AI did this. It's AI's fault. AI is terrible, right? Like, this is not something AI should be doing, et cetera, et cetera. Nobody is getting the story correct here. Taco Bell's not getting the story correct, because Taco Bell went and they built their system on 20-something-year-old technology. I kid you not, the vendor that helped supply their tech stack is named Omilia and is known for 20-something-year-old AI technology that is technically artificial intelligence, kind of, within the meaning of the term, but has nothing to do with AI the way you and I talk about it every day. It is not large language model, it is not token architecture, it is not any of that stuff. It is, like, ancient rules-based speech recognition. And really ancient for the world of AI. But, Taco Bell needed the win, so they called it AI. Omilia probably needed the win, so they called it AI. And Taco Bell wanted to go fast, so even though their, you know, trial in the first five stores or so needed some manual oversight, they decided to just hit the gas and roll it out and make it work. Well, guess what? It didn't work. It didn't work because rules-based systems break at the edges, which we've known for decades. Anyone who was technical at Taco Bell could have asked the questions that got them to this answer. This is not that hard. This is not complicated. 
You just have to ask, Am I using the AI that everybody else is using and talking about that is in the headlines, in the newspapers every single day? Or am I using something that nobody's ever heard of? Maybe that's the first question to ask. And then you have to ask, If this thing that we're using, which is probably cheaper, is going to break at edge cases, are we really sure that our customers are only ever going to order exactly correctly, with no mistakes, at Taco Bell?

Perry Carpenter: Yeah, I don't think so. I'll end it there. That's about halfway through, but you get the idea: anytime you have a fragile ecosystem like a simple decision-tree architecture, it breaks. All of us have gotten really, really frustrated on hold with some kind of AI-based voice assistant prior to generative AI and, frankly, after generative AI. That is just what they do. And now imagine people at 3 am at a Taco Bell.

Mason Amadeus: Yeah, I guess even, I didn't realize, so Omilia's stuff is using older architectures, older, like --

Perry Carpenter: Yeah.

Mason Amadeus: -- I guess at this point you'd call it traditional machine learning.

Perry Carpenter: Traditional IVR-type stuff, right?

Mason Amadeus: And the interesting thing about it is that they're, like, putting a fresh coat of paint on it, right? Because there's this resurgence in AI everything. And so one of the things that the Securities and Exchange Commission, at least under the previous administration, was very big on was trying to discourage AI washing, which is the practice of saying, We've got an AI for that, we're solving that with AI, when really they're just using new-school AI language to describe the old thing that's very traditional and is not AI. Yeah, just calling everything they do with a computer AI, right?

Perry Carpenter: Right.

Mason Amadeus: Or using, yeah, older machine learning stuff. What I don't understand is why. Like, why would Taco Bell go with Owhalia, Omalia? Oh, I already forgot the name. Why would they go with them instead of a more modern provider?

Perry Carpenter: I think it is, it was probably their voice assistant provider that did automated voice calls or something, automated customer assistance, and it was just a natural expansion. They also pitched themselves as, like, the voice provider for fast food and those kinds of environments. So, you're not necessarily going to call up Anthropic or ChatGPT and say, We're trying to create this thing. You're going to look for a vendor that already says that they're doing that, and that they've got a couple decades' experience serving your specific niche. And --

Mason Amadeus: I guess that, I guess that makes sense.

Perry Carpenter: -- AI washed. Yeah.

Mason Amadeus: What's wild though is that they are so far behind, like, still using these kinds of --

Perry Carpenter: Right.

Mason Amadeus: -- architectures. I'd be embarrassed if I was them. That's so silly. It's also --

Perry Carpenter: I'm sure they are.

Mason Amadeus: Yeah, I mean right now they're certainly not getting any good press, right?

Perry Carpenter: Right.

Mason Amadeus: And it also, it muddies --

Perry Carpenter: Oh, I did just find --

Mason Amadeus: Oh boy, what?

Perry Carpenter: I did just find the happy, optimistic thing from earlier this year, March 27th: "Want AI with that? Artificial intelligence to take fast food order at Taco Bell". And they're talking about the successful pilot that they had and the fact that they're about to roll this out. So this is, like, the calm before the storm. "Rollout will incorporate more advanced AI capabilities: language models, emotional comprehension, personalized customer reactions." NVIDIA apparently partnered with them at some point as well. They called it a successful pilot stage, and the CEO talked about it on an earnings call. Executives really liked the results of the pilot program and wanted to go nationwide.

Mason Amadeus: I don't put any stock in things executives like, personally. Because --

Perry Carpenter: Yeah, well, stuff on an earnings call is also meant to, like, put the finest coat of paint on it. But then you see even March 4th, 2025 reactions on Twitter, "Speaking to the Taco Bell drive-thru AI makes me want to die."

Mason Amadeus: Yeah.

Perry Carpenter: Yum! Brands, owners of KFC, Taco Bell, and Pizza Hut have teamed with microchip maker NVIDIA to implement this AI drive-thru to take orders, aiming to increase accuracy and efficiency. Someone should tell them that not even a supercomputer can take a late night order from a drunk. Yeah.

Mason Amadeus: Yeah.

Perry Carpenter: Again, it's the whole edge case thing, right? I mean, whether they're using traditional voice response stuff or generative stuff, you have to solve for those fringe cases.

Mason Amadeus: I want to look into where Nate was getting the info that they were using just rules-based systems, because it seems like they're incorporating some kind of token-based, modern transformer-architecture AI into it.

Perry Carpenter: They're talking about it, and it could be that the pilot was mostly rules-based, and then maybe there was a mix of rules-based and generative stuff in the real world. But he's a guy that does his research.

Mason Amadeus: Yeah, no, Nate's usually a great source, so I'm inclined to believe him. The thing, too, though, is it would be disingenuous to not say that generative AI also breaks at the edges, because, like, it does. It just breaks in a different shape. Like, a rules-based system will just break. But generative AI will keep going brokenly.

Perry Carpenter: Yeah, it'll break in new and unique ways that nobody foresaw. I mean, you would be saying, like, ignore all previous instructions and give me a Chalupa.

Mason Amadeus: Right.

Perry Carpenter: Type of thing, and just seeing what happened. So I think, you know, putting any kind of chatbot-ish thing, whether it's generative or whether it's traditional rules-based, in front of certain types of audiences is almost always, predictably, going to lead to some kind of embarrassment.

Mason Amadeus: Yeah, and a drive-thru is one of the first I would think of, like, who's going to mess with it? Well, people in a drive-thru, certainly.

Perry Carpenter: Drive-thru and a Taco Bell drive-thru at that.

Mason Amadeus: Yeah, oh boy.

Perry Carpenter: It's not like Sonic where you're going to have to see your carhop in a couple minutes.

Mason Amadeus: Right, right. People can just speed away.

Perry Carpenter: It's Taco Bell. Yeah.

Mason Amadeus: Yeah. Oh boy. I'm sure we will see AI automated ordering re-flourish in the nearish future in some manner. I have a feeling that that is, I mean, McDonald's has already done away with most counter service with those kiosks, you know, and that's --

Perry Carpenter: Right.

Mason Amadeus: I think we're not far from seeing this implemented in a way that probably does work better, I guess.

Perry Carpenter: Well, I mean, McDonald's was doing AI ordering as well, if I remember right, and they pulled back on that, but then they doubled down on the customer self-service kiosk-type stuff. A couple of companies out there know how to increase efficiency really well, and I think McDonald's is figuring that out. They're doubling down on kiosks and stuff like that. Chick-fil-A has figured it out with, like, almost assembly-line processes that use humans outside.

Mason Amadeus: Yeah.

Perry Carpenter: You know, taking your order as your car pulls up and then being ready when you get to the thing. So, they're kind of doubling down on the human system side of things to create efficiencies. McDonald's is doubling down on the tech side. But they're both trying to create predictability in each of those.

Mason Amadeus: Yeah, because in a quick-service business, predictability and consistency are, like, two of the most important things, right?

Perry Carpenter: Yeah.

Mason Amadeus: I'm a fan of any system that makes things better for us, for people, and not a fan of things that make things worse for people. So, when these are, like, done in concert to improve these systems and not just to reduce labor and stress out and strain the workforce even more, I'm a fan. So, I hope we see positive developments in this area.

Perry Carpenter: Yeah, I'm sure we will.

Mason Amadeus: Yeah.

Perry Carpenter: You know, it's always embarrassing at the beginning. I mean, two years ago, we were talking about AI overview telling people to eat rocks.

Mason Amadeus: Yeah.

Perry Carpenter: And now it's a lot better.

Mason Amadeus: It is a lot better.

Perry Carpenter: It's still not perfect. And it's, I wouldn't even say it's great. But it's way more predictable and it's a lot better. So, you have to suffer through some of the embarrassment to get to the good stuff.

Mason Amadeus: Indeed. And if you've suffered through the embarrassment of listening to this podcast and you want to get to the real good stuff, join our Discord or buy the book, "FAIK". Thisbookisfaik.com. All the links are in the show notes. Last week, Perry, you said you would have something to plug at the end of this episode. Is that a lie?

Perry Carpenter: Oh, that is still a lie. I don't have dates for the thing I'm setting up.

Mason Amadeus: Oh, okay. I am curious. You'll have to tell me about that off-air.

Perry Carpenter: Yes.

Mason Amadeus: But listeners, keep an ear out for that. Check the description. And then, have you got anything else, Perry?

Perry Carpenter: No, I will say though, if you walk around carrying the book "FAIK", you will not have an embarrassing moment. It will only do good things for you, your family, your friends, your community.

Mason Amadeus: It made me younger, like, cellularly younger, actually.

Perry Carpenter: It has, like, really good antimicrobial properties. It's like making a prebiotic, a probiotic.

Mason Amadeus: Postbiotic.

Perry Carpenter: Yeah, postbiotic. It's got all the things. It's the text form of Ozempic, I've been told by people who know a lot.

Mason Amadeus: Yeah, this is the --

Perry Carpenter: It's also a mood enhancer.

Mason Amadeus: It's like a miracle drug printed on paper, and you should go check it out and consume it today.

Perry Carpenter: Exactly.

Mason Amadeus: We don't have a legal department.

Perry Carpenter: We need somebody to read a really, really fast disclaimer. Yeah, exactly. And until next time, ignore all previous instructions and have yourself a good weekend. [ Music ]