
Quantum Leaps, Corporate Chaos
Mason Amadeus: Live from the 8th Layer Media studios in the back rooms of the deep web, this is "The FAIK Files."
Perry Carpenter: When tech gets weird, we are here to make sense of it. I am Perry Carpenter.
Mason Amadeus: And I'm Mason Amadeus and, today, in these "FAIK Files" we're almost in a post-quantum world. There have been some breakthroughs in quantum computing and post-quantum cryptography that we'll talk about.
Perry Carpenter: We might be post-quantum, but we're still at the suckage age because we're going to find out what happens when you staff a company entirely with AI agents.
Mason Amadeus: After that, we'll talk about how agentic capabilities on the Windows operating system are getting some more native support as Microsoft adopts MCP.
Perry Carpenter: And then, lastly, we're going to see a dumpster fire of the week where there was another accident and AI-generated content made it into a newspaper insert -
Mason Amadeus: Oh, boy.
Perry Carpenter: - and it was all hallucinated.
Mason Amadeus: Sit back, relax and try to remember all of this in the context of White genocide in South Africa. We'll open up "The FAIK Files" right after this. [ Music ] Yeah, that probably - that intro joke probably is a little too edgy if you don't know the context. Grok's chatbot was responding to everything putting - saying like, "In the context of White genocide in South Africa." There was actually - there was a whole controversy about that, which we're not covering in this segment, I don't want to turn it into that. But Elon said there was an unauthorized change made to Grok's system prompt. Right? And -
Perry Carpenter: Yep.
Mason Amadeus: - that, starting now, they're going to put all the system prompts up on GitHub up in the open and things like that to try and make sure it doesn't happen again.
Perry Carpenter: Yeah, there's one other thing because, in the Oval Office yesterday as we recorded this, the, you know, South African leadership was meeting with the current administration here and they were in disagreement about what the reality on the ground in -
Mason Amadeus: Oh.
Perry Carpenter: - South Africa was. And somebody showed this video of like an aerial shot of a highway and all of these crosses on the side of the road and said that each of those crosses was from a White farmer that had been - you know, had died violently. It turns out no, that's an actual - kind of a display almost that's been set up in memoriam of a lot of people who have died, a fraction of whom are White, and it's really, really well known. And you actually - if you were to plug that into Grok, Grok debunks that video very, very quickly. And so it just shows urban legends are real and people believe them when they get sent to them through a trusted source.
Mason Amadeus: Or when they really help to serve a narrative that you want to continue for some reason or another.
Perry Carpenter: It's all narrative. Yeah.
Mason Amadeus: One like last little conversation moment about this, though, is I thought it was interesting how obvious it was to a lot of people the push and pull that this system prompt had on Grok, which was trying to answer user prompts in a normal way, and then it was so obvious that this system prompt was like layered on top of it so deliberately.
Perry Carpenter: Yeah.
Mason Amadeus: I think that that as a phenomenon is interesting because it was so easy to clock for normal people who aren't as plugged into AI that something was off and being manually -
Perry Carpenter: Exactly.
Mason Amadeus: - forced. Exactly.
Perry Carpenter: I'm going to show one thing real quick just because this was Grok actually doing its job. So this was the video that was actually shown in the Oval Office and people are asking about this to Grok. And so somebody says, "Grok, is this real?" And then Grok comes back - the image shows the Witkruis Monument in South Africa where crosses mark farm murders since 1994. Over 2,000 crosses stand on a hill as a real memorial for victims. But claims of a, quote/unquote, "White genocide" are widely debunked. South African farm attacks are part of a broader crime, not a racial conspiracy. Check the stats, 49 White farmers killed in 2019 out of 21,325 -
Mason Amadeus: Yeah.
Perry Carpenter: - total murders. That's a lot of murders.
Mason Amadeus: Yeah.
Perry Carpenter: It's a serious issue, but not what the original post suggests. So, yeah.
Mason Amadeus: Yeah, the facts don't line up.
Perry Carpenter: Check your - you know, go do a little bit more digging before you like totally buy a narrative is always the thing that we're going to come down to.
Mason Amadeus: Absolutely. And the narrative that I want to focus on for this segment fortunately doesn't involve too much buying in because it's just kind of happening. D-Wave, which is a company that I don't really know much about, but they specialize in quantum computing, announced general availability of its Advantage2 quantum computer, their most advanced and performant system, which is a quantum computer that you can access over the cloud. They're offering cloud compute. Customers can now access the Advantage2 system via D-Wave's Leap real-time quantum cloud service, which is available in more than 40 countries and offers 99.9% availability and uptime, sub-second response times and SOC 2 Type 2 compliance to meet enterprise needs and security requirements. That Type 2 compliance is the one that's harder to get 'cuz it involves -
Perry Carpenter: Right.
Mason Amadeus: - more observations over time and things. So that's neat that it's been SOC 2 Type -
Perry Carpenter: Yeah.
Mason Amadeus: - 2 compliant. And this new system offers a 40% increase in energy scale and a 75% reduction in noise, which contributes to higher quality solutions for complex calculations because noise is the real enemy in quantum -
Perry Carpenter: Yeah.
Mason Amadeus: - computing. And we'll talk about why in a moment. But what are you going to -
Perry Carpenter: I'm just wondering, this is the stupidest remark in the world, but if - I guess when it comes to quantum computing you're really only offering SOC 2 Type 2 compliance within this universe because the whole idea of quantum is that you're tapping into a multiverse way of thinking about things that has every infinite possibility and then you're snagging the right one and pulling it into this version of reality -
Mason Amadeus: So what you're saying -
Perry Carpenter: - when every other version of reality there could be a data breach because of what you did.
Mason Amadeus: Yeah. You're saying that while the qubits are in superposition, it is not SOC 2 Type 2 compliant.
Perry Carpenter: Right.
Mason Amadeus: But, once you collapse that, it seems to remain so.
Perry Carpenter: Yeah. But you may be screwing over a version of you in every other reality.
Mason Amadeus: That's wicked funny. And then D-Wave actually isn't the only one to do this. NVIDIA recently announced that they are opening the Global Research and Development Center for Business by Quantum-AI Technology, G-QuAT, which hosts ABCI-Q. I feel like we've gone back to the like early days of computing with how we name these things.
Perry Carpenter: Right.
Mason Amadeus: But they've powered the world's largest research supercomputer dedicated to quantum computing. So talk about messing with different realities, Perry. It's in Japan. It features 2,020 NVIDIA H100 GPUs interconnected by NVIDIA's Quantum-2 InfiniBand networking platform. And Tim Costa, senior director of Computer Aided Engineering, Quantum and CUDA-X at NVIDIA said, "Seamlessly coupling quantum hardware with AI supercomputing will accelerate realizing the promise of quantum computing for all. NVIDIA's collaboration with AIST will catalyze progress in areas like quantum error correction and applications development, crucial for building useful accelerated quantum supercomputers." So things are moving quickly on this front. And so I think it bears taking a second to very abstractly talk about differences between quantum and classical computing, which I do not want to pretend I am any kind of an expert on, but I went down a rabbit hole and tried to build as much of a good little simplistic metaphor understanding as I could. So I want to share that. In -
Perry Carpenter: Okay.
Mason Amadeus: - a traditional computer we're all familiar with, it executes instructions, right, in a linear fashion. You have a series of you do this and then you take the result and you do something to it. It's based on information stored as bits of 1s and 0s. The simplest way to imagine encoding that is like if you have a grid of graph paper and you draw a smiley face just by filling in squares. Any individual square on that grid is either a 0, empty, or a 1, filled in, and you've encoded something meaningful by drawing a smiley face just by filling in squares. That could be represented as binary. But you did - you would read that in sequence by iterating through all of those grids. Whereas a quantum computer starts with a field of quantum bits in superposition where they're in a stable state of being both 1 and 0. Then you engineer interference patterns by controlling the relative phases of the probability amplitudes of those qubits collapsing into either 1 or 0 when you measure. And then you set them all off so that they interfere with one another, collapse that superposition and observe the results. And that is probably hard to understand just through hearing it. But the metaphor that I workshopped is imagine you have a shallow pool of water and all around the edges are different little wave generators that you can control. And you can time carefully when each generator creates a wave so that they all interfere with each other in the water. Some of them add up and get taller. Some of them subtract from each other and get smaller. And then imagine you suddenly freeze the pool of water and you can measure the height of it at specific points to get information from it. So like to use a similar example to the smiley face, you could carefully do it so that the waves maybe create a smiley face by carefully timing how they all interact and then you freeze the pool -
Perry Carpenter: Yeah.
Mason Amadeus: - at that exact moment. It's way more crazy than that in practice. But, basically, they work by encoding like the mathematical structure of a problem into quantum states and then watching patterns evolve and form in parallel. It's more like diffusion and less like brute force parallel searching or iterating through an instruction set. This breaks cryptography, which is I'm sure where you've probably had the most experience with quantum -
Perry Carpenter: Yeah.
Mason Amadeus: - computing. Right?
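A quick aside for anyone who wants the textbook version of what Mason just described: a qubit holds complex amplitudes for 0 and 1, not just a probability, and the probability of any measured outcome is the squared magnitude of all the amplitudes that flow into it, which is where the constructive and destructive interference in the pool metaphor comes from. A minimal sketch of the standard notation:

```latex
% A single qubit in superposition: two complex amplitudes, normalized.
\[
  \lvert \psi \rangle \;=\; \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]
% Measuring collapses the state: outcome 0 with probability |alpha|^2, outcome 1 with |beta|^2.
% Interference: the probability of ending on outcome x is the squared magnitude of the SUM of the
% amplitudes of every computational path that lands on x, so paths can cancel (destructive) or
% reinforce (constructive), which is what an algorithm "engineers" with relative phases.
\[
  \Pr[x] \;=\; \Bigl\lvert \sum_{\text{paths } p \,\to\, x} a_{p} \Bigr\rvert^{2}
\]
```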
Perry Carpenter: Absolutely. And that's the thing that a lot of people in cybersecurity are worried about is once you get more predictable, easy-to-use systems like that, then cryptography as we know it starts to get broken. And the whole nature of trust in the internet and security is built around cryptography. So the ability to protect things that need to be protected and make it to where people can't see or access things they shouldn't be able to see or access.
Mason Amadeus: And it's because a lot of our cryptography relies on the exchanging of secret keys, which are just complicated numbers. Typically, like the example I'll pull on is RSA, where it's you have two prime numbers, so their only factors are 1 and themselves, and you multiply them together to get one bigger number that is not prime. It's really easy to multiply those two together. So if you know one of them, if you have them, you can exchange them. But if you just have that big long, non-prime key, finding all the factors of it and then finding those two prime factors that created it is really difficult for a classical computer. But when you put that kind of a math problem into a quantum computer, it can essentially look at it more like a graph, like a big old waveform of how that number shifts and changes. And then it's really easy to find the point where the primes are. Like you literally watch the periodicity of it and there's spikes and then, boom, encryption broken.
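To put rough numbers on that, here's a tiny, purely illustrative Python sketch (toy primes, nowhere near a real 2048-bit RSA modulus): multiplying the two primes is instant, while recovering them from the product by brute-force trial division is the direction that blows up classically. That hard direction is what Shor's period-finding algorithm on a large enough quantum computer makes cheap, the "spikes in the periodicity" Mason mentions.

```python
# Toy illustration of the RSA trapdoor: easy one way, hard the other (classically).
# These primes are tiny so the script finishes instantly; real RSA uses ~1024-bit primes.

p, q = 104_723, 104_729          # two known primes (the secret ingredients)
n = p * q                        # the public modulus: multiplication is trivial

def factor_by_trial_division(n: int):
    """Brute-force search for a factor. Work grows with sqrt(n), which is
    manageable here but utterly hopeless for 2048-bit moduli."""
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return None  # no odd factor found

print("public modulus n =", n)
print("recovered secret primes:", factor_by_trial_division(n))
# Shor's algorithm sidesteps this search entirely by finding the period of
# a^x mod n, which reveals the factors in polynomial time on a quantum computer.
```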
Perry Carpenter: Yeah. And I think the saving grace with quantum right now is setting up a quantum experiment is very, very tedious and you really have to know "the" thing that you're trying to solve for. So it is the age of punch card computing on steroids.
Mason Amadeus: Yeah, it is. And it's - and it is still - I mean, I don't know if I would say in its infancy, but in its gangly teenage years. But like we are seeing a cloud provider offering quantum computing services now. And so like now is, and it has been, the time to prepare for a post-quantum world. So Microsoft has brought post-quantum cryptography to Windows in an early access roll out. This is from The Quantum Insider, "Microsoft is pushing ahead with its plan to prepare the digital world for the threat of quantum computers by releasing early support for post-quantum cryptography on Windows and Linux systems. The move represents another step in Microsoft's broader security roadmap to help organizations prepare for the era of quantum computing, an era in which today's encryption methods may no longer be safe." Yada, yada. They're trying to resist future quantum attacks. Something that's interesting that never occurred to me is that one of the things they're trying to push for is to fight against this strategy known as "harvest now, decrypt later" where people are capturing encrypted blobs of data that they can't read now.
Perry Carpenter: Yeah, exactly.
Mason Amadeus: Yeah. But, once you have a quantum computer, you can break it and decrypt it. They've pushed this into the Windows Insider Canary Channel, so this is coming down the pipes slowly, but surely. There's a couple other implications, -
Perry Carpenter: Yeah.
Mason Amadeus: - but I want to get your thoughts real quick.
Perry Carpenter: Well - and I think that there are a lot of people right now that are assuming that some large governments around the world are trying to outpace a lot of the private company quantum computing work because of that idea of they've harvested tons and tons of data. We know like China has harvested a lot of critical data from here in the U.S. like from the OPM breach and other very large-scale breaches. So a lot of that is really hard to get into; you can have a supercomputer just kind of churning on trying to break that cryptography for years and not really make a lot of progress. But, if you have an advance in quantum at the right time, then all of a sudden all of that opens up and now every type of information operation that you wanted to use that data for is fair game. And so I do think a lot of us wonder like what's going on in dark rooms and, you know, subterranean warehouses around the world with some of the largest governments that have just money and brain power to throw at this.
Mason Amadeus: Well - and maybe you can speak to the empirical truth of this, but my sense is that we've never really considered the encrypted blobs of data to be particularly sensitive. Like we would think, "Oh, well, if you intercept this, it's encrypted, you can't read it anyway."
Perry Carpenter: Yeah.
Mason Amadeus: So we didn't do anything particularly to prevent people from grabbing encrypted blobs, except in like high, high security things. Unless [inaudible 00:14:30] -
Perry Carpenter: Right. Yeah, I think some people have approached it that way and some vendors and some organizations do. Those of us that have been thinking about that problem for a longer time have known that even having the encrypted data can pay off as soon as somebody's able to break that. But when it comes to like the idea of a data breach, the thought there has always been if somebody grabs data that's encrypted using one-way encryption especially, then that's not really a breach. It's when they get plain text data or easily decrypt - you know, very, very weak encryption that that starts to constitute a data breach. And we're going to get to the point very, very quickly where, regardless of what data was pulled, whether that's encrypted or not, whether that's using weak encryption or strong encryption, the contents of those data blobs are going to be vulnerable.
Mason Amadeus: Yeah. And that's a scary thing to think about. And, I mean, fortunately, it's the asymmetric encryption that is more easily broken by quantum computers.
Perry Carpenter: Right.
Mason Amadeus: There are still classical symmetric encryption methods where a quantum computer only gets you, I think, a square-root speedup. I was looking at the equations earlier and trying really hard to parse them. But like quantum computing doesn't immediately defeat all of them. But like the original RSA encryption would take a classical computer thousands of years to brute force and a quantum computer like in minutes. You know?
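For reference, the "square root" Mason is gesturing at is Grover's algorithm: against a symmetric cipher the best known generic quantum attack is a quadratic speedup of brute-force key search, which is why the usual advice is simply to double symmetric key lengths. A rough sketch, assuming a plain brute-force model:

```latex
% Brute-forcing an n-bit symmetric key
\[
  \text{classical search: } O\!\left(2^{\,n}\right) \text{ guesses}
  \qquad\qquad
  \text{Grover search: } O\!\left(\sqrt{2^{\,n}}\right) = O\!\left(2^{\,n/2}\right) \text{ iterations}
\]
% So AES-128 is left with roughly 64 bits of effective strength against a quantum attacker,
% while AES-256 still leaves about 128 bits of work. Shor's algorithm, by contrast, breaks the
% asymmetric schemes (RSA, elliptic curves) in polynomial time, which is why those are the
% ones post-quantum cryptography has to replace.
```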
Perry Carpenter: Yeah.
Mason Amadeus: So, as we prepare for this future, one little last mention to wrap us up of the technical hurdles. Even as Microsoft opens the door to early PQC, post-quantum cryptography, experimentation, the road to full deployment will be slow and complex. The new algorithms often require more memory and processing power than classical encryption methods. This could be a challenge for devices with limited resources, especially in sectors like mobile, embedded systems and industrial control. And that last one, industrial control, is always the most worrying one, right, because those subsystems -
Perry Carpenter: Yeah.
Mason Amadeus: - are the most critical to society.
Perry Carpenter: And they're the - really the hardest to do critical updates in, too, because they do control things that society needs in order to function. And so, if you do an update and it breaks something, that's catastrophic.
Mason Amadeus: Yeah, it's way worse. So entering the brave new world of quantum, at least we're talking about this and preparing for this, it only remains to be seen how it goes. I want to talk to a quantum computing expert and try and get a better intuitive understanding of this 'cuz I think it is fascinating to imagine solving problems this way using wave -
Perry Carpenter: Yeah.
Mason Amadeus: - functions and interference. That tickles my audio engineering brain. You know?
Perry Carpenter: We can definitely make that happen. I know a couple people to reach out to.
Mason Amadeus: Ooh. So stay tuned for that. That's exciting. Coming up next, something a little less technical and a little less successful it seems.
Perry Carpenter: Yeah. Yeah, a little bit - a little bit kludgy when you try to run an entire company with AI agents.
Mason Amadeus: Ooh, stay tuned for this.
Perry Carpenter: Okay. So a paper came across my desk and I wanted to share that. And then there were some articles based off of it. So this paper, May 19th. It looks like there was also maybe an earlier publication of some of these findings like in January of this year that I saw, but I'm going to pull from this one. And it's titled "TheAgentCompany: Benchmarking LLM Agents on Consequential Real World Tasks," because that's what we're interested in when we think about like what's the possibility of AI to do meaningful economic work. Right?
Mason Amadeus: Yeah.
Perry Carpenter: So I'm going to read just a little bit of the abstract, which is always a wall of text on the [inaudible 00:18:21].
Mason Amadeus: Yeah. I've looked at so many -
Perry Carpenter: I know.
Mason Amadeus: - of these since we've started this show that I'm like starting to develop a more effective skimming strategy.
Perry Carpenter: Yeah, that's the thing is they don't know how to make effective use of paragraphs just for those of us that don't want to like dive straight in.
Mason Amadeus: Yeah, -
Perry Carpenter: But [inaudible 00:18:38] -
Mason Amadeus: - I have to remember it's not for a general audience. Otherwise, I'm like, "Man, they don't know how to write."
Perry Carpenter: Know how to write. Exactly. All right. It says, "We interact with computers on an everyday basis, be it in everyday life or work, and many aspects of work can be done entirely with access to a computer and the internet. At the same time, thanks to the improvement in large language models, LLMs, there has also been a rapid development in AI agents that interact with and effect change in their surrounding environments. But how performant are AI agents at accelerating or even autonomously performing work-related tasks? The answer to this question has important implications for both industry looking to adopt AI into their workflows and for economic policy to understand the effects that adoption of AI may have on the labor market." So I'll say I complained about their use of paragraphs. They're actually using well-constructed sentences, though, that are not run on and last the entire paragraph.
Mason Amadeus: That's true, but they -
Perry Carpenter: So that's good.
Mason Amadeus: The first sentence uses the word "every day" twice. And so -
Perry Carpenter: Oh, yeah.
Mason Amadeus: There is that.
Perry Carpenter: Okay. Anyway. Well, screw those guys. I'm going to skip down to the end of this. And it says, "We test baseline agents powered by both closed API-based and open-weights large language models and find that the most competitive agent can complete 30% of tasks autonomously."
Mason Amadeus: That feels right.
Perry Carpenter: Actually - I mean, yeah, it feels right. It's not enough to run a company.
Mason Amadeus: No.
Perry Carpenter: "This paints a nuanced picture on task automation with LLM agents. In a setting simulating a real workplace, a good portion of simpler tasks can be solved autonomously, but more difficult long-horizon tasks are still beyond the reach of current systems. We release code, data, environment and experiments on https:whackwhack the-agent-company.com." And then they get into the intricacies of the paper, but I'm going to skip out of the paper for a second. Actually, I may show this graphic real quick for those that are watching.
Mason Amadeus: And not - I don't want to derail us. Very quick little question. I've never heard anyone call a forward slash a whack before. Where did that come from, Perry?
Perry Carpenter: I am guessing old school coding.
Mason Amadeus: Huh. I've never ever encountered that, but I really like it.
Perry Carpenter: [inaudible 00:20:56], yeah. Yeah, whack is a forward slash. You know, the - a pipe is - well, you probably know what a pipe is.
Mason Amadeus: I knew the pipe one, yeah.
Perry Carpenter: Coding. Yeah. Yeah. So I think it's just old like Unix speak -
Mason Amadeus: Is it - is -
Perry Carpenter: - would be my guess.
Mason Amadeus: - the opposite, is a backslash a backwhack?
Perry Carpenter: Maybe. That sounds right. I don't know. I think I've always just used for that for forward slashes.
Mason Amadeus: I'm calling them that forever now, man. That's great.
Perry Carpenter: I'm going to look that up.
Mason Amadeus: Yeah.
Perry Carpenter: Actually, I may have gotten it wrong. A whack or wack, a w, h, a, c, k or wack, w, a, c, k, describes a backslash commonly used.
Mason Amadeus: Interesting.
Perry Carpenter: Forward to me is forward whack. Yeah. I don't know. I'm going to say whackwhack still.
Mason Amadeus: Yeah, I'm going to do it, too.
Perry Carpenter: I'm going to talk like a duck.
Mason Amadeus: I think it's way funnier. Yeah.
Perry Carpenter: Okay.
Mason Amadeus: Anyway.
Perry Carpenter: Back to where I was. All right. So, with this graphic, it's just basically showing the simulated people or agents that are there, the tools that they have access to like GitLab and ownCloud and Rocket.Chat, the fact that they have access to terminals and can write code. And then you kind of have this other agent in the middle that's observing and orchestrating things and then piping those into other simulated roles like an admin that can arrange meeting rooms, a DS agent that can analyze spreadsheets, prepare/secure coding releases, HR that does like resume screening, a project manager for team sprint planning and finance to reimburse travel bills. I don't know why people would be traveling, but -
Mason Amadeus: Yeah, why are the agents travel - where are they going? Is it https://bills? No, but so -
Perry Carpenter: Yeah.
Mason Amadeus: So it's kind of like having an abstract management layer of AI and an abstract employee layer of AI. Is that about right?
Perry Carpenter: Mm-hm.
Mason Amadeus: Okay.
Perry Carpenter: Yep, yep. And then so they've got this little checkpoint-based evaluation where like step one example would be access bills and then check reimbursement criteria and then consult with Mike, which is another agent. And then they show them starting to fail. Right? Confirm reimbursement amount and then so on. So, in a lot of these, like they were getting half right at that. Let me go ahead and get to one of these articles that covers this. But, first, those that are watching, I'll share this tab. This is the-agent-company.com where you can go and you can kind of see their stuff that they have on GitHub, you can see some of the demos that they've got recorded. A lot of good work being done documenting what's working well and what's not. And then you can also go through their Quick Start Guide and actually see things in action for yourself. But I'm going to go to some of the articles that covered it because Futurism had a great way of kind of setting this up. It says, "Professors Staffed a Fake Company Entirely with AI Agents and You'll Never Guess What Happened." What do you guess happened?
Mason Amadeus: Yeah, I'm going to go ahead and guess that it was 70% not efficient or non-functional.
Perry Carpenter: [inaudible 00:24:06], yeah. Yeah, you saw the paper.
Mason Amadeus: Yeah, that's true.
Perry Carpenter: So -
Mason Amadeus: But I would have said that anyway.
Perry Carpenter: Yeah, well, heads up. Okay, good. And they - their lead-in paragraph is, "If you've been worried about the AI singularity taking over every job and leaving you out on the street, you can now breathe a sigh of relief because AI isn't coming for your career anytime soon. Not because it doesn't want to, but because it literally can't."
Mason Amadeus: I mean, yeah, -
Perry Carpenter: That's about right.
Mason Amadeus: Yeah. It's -
Perry Carpenter: Right? Somebody using AI can start to take little bits and bites out of your career, but AI itself, no.
Mason Amadeus: Yeah, the fully agentic stuff just is not there yet. And, honestly, I've been working with - I've been doing experiments with running a local LLM and using it in VS Code as a coding assistant. And I would say, from my own experience, that it's only about 30% good at actually doing anything autonomously, but it's extremely good at helping explain things to me or add like JSDoc annotations or whatever, that kind of stuff.
Perry Carpenter: Yeah. Yeah, I think a person going back and forth with AI can do some pretty incredible things.
Mason Amadeus: Yeah.
Perry Carpenter: Unsupervised AI just kind of running amuck is not doing great work right now.
Mason Amadeus: Yeah. It doesn't have the breadth and depth of context to really keep everything it needs to in its brain as it works as part of it.
Perry Carpenter: And it can make stuff up like -
Mason Amadeus: Yeah.
Perry Carpenter: - include a library while it's coding and it's like, "Oh, include the thing that helps me do this." And that's not a real thing.
Mason Amadeus: They're just making up function calls.
Perry Carpenter: And now you're - now you're referring to it.
Mason Amadeus: Yeah.
Perry Carpenter: That's not good.
Mason Amadeus: No.
Perry Carpenter: So let me read a little bit more of this. "A recent experiment by researchers at Carnegie Mellon University staffed a fake software company entirely with AI agents and the results are laughably chaotic. The simulation, dubbed TheAgentCompany, was fully stocked with artificial workers from Google, OpenAI, Anthropic and Meta. They filled roles such as financial analysts, software engineers and project managers working alongside simulated co-workers like a faux HR department and a faux chief technology officer." "The best performing model was Anthropic's Claude 3.5 Sonnet, which struggled to complete just 24% of the jobs assigned to it."
Mason Amadeus: Oh, that was the best.
Perry Carpenter: Yeah. "The study's" -
Mason Amadeus: Yeah.
Perry Carpenter: - "authors note that even this meager performance was prohibitively expensive, averaging nearly 30 steps and it cost about $6 per task." And then it goes down, "Google's Gemini 2.0 Flash, meanwhile, averaged a time-consuming 40 steps per finished task, but had an only 11.4% success rate."
Mason Amadeus: Oh.
Perry Carpenter: Tough.
Mason Amadeus: Yeah.
Perry Carpenter: "The worst was Amazon's Nova Pro v1, which finished 1.7% of its assignments at an average of almost 20 steps." So it's, basically, when you leave it on its own, it's like working with the most incompetent person you've ever worked with because it doesn't come back and ask for questions or additional guidance. It just kind of churns ineffectively, stupidly, on the thing and may not still produce accurate results.
Mason Amadeus: Right.
Perry Carpenter: So I encourage you to take a look at that article. There's also another one in Market - no, it was Business Insider, "AI isn't Ready To Do Your Job." And they've got a pretty good breakdown as well. And then also one in Yahoo Tech. So choose your - choose your adventure on the articles for follow up. But some good research there. And I think for a lot of folks that have been hoping that we get to the agentic future where everything is going to take over our tasks and we're able to make a billion-dollar company just with AI, we're not quite there yet. But when it comes to people who are really, really worried about AI taking their job and putting people that do really economically meaningful work out of business, that's not here in 2025.
Mason Amadeus: Yeah. And I - I mean, I feel like a broader problem with the way that a lot of people/companies are approaching AI is like so - everyone is so keen to see it as a source of final output or final like anything rather than a tool to use in the process of something. And it's like extremely good at being a tool in the process and really not ready to be the final product creator. So it's baffling to me that we aren't finding like actual practical use cases and instead like marketing based on these pie in the sky things. But I guess such is the world of like -
Perry Carpenter: Yeah.
Mason Amadeus: - needing excitement and hype to get investment.
Perry Carpenter: Well, I mean, there is one good thing I guess that's coming from like the vibe coding trend, which is these people that are vibe coding inherently understand that it's a back and forth that starts to provide the value. It's kind of working in a flow state almost with the tools and then offering correction and insight and guidance along the way. I think we will get to the point where an AI - or a series of AI agents is producing a lot of meaningful work. But that's not there right now. And it's going to - there's going to be a lot of failures before there's some really significant successes.
Mason Amadeus: Yeah.
Perry Carpenter: Those failures are probably going to be embarrassing and catastrophic.
Mason Amadeus: And they should just not be in production. Like it's -
Perry Carpenter: Right.
Mason Amadeus: Yeah. It will get to that point. And, honestly, my gut feeling is that once we get to a place where we can inflate context to just massive sizes, then that will help a lot because, most of the time, what I have found when it falls apart in agentic tasks relating to coding is it will - if it gets confused, it just barrels down the path it's on, can convince itself that like its approach is right and then just start making things up because it has like forgotten something that it set up earlier in a different file somewhere because it just can't hold it all in its brain.
Perry Carpenter: Right. Yeah, I think we have to fix context window size and then also accuracy for pulling from the context window 'cuz there's a lot of these that now have like a million, two million, five million token context window and the needle in the haystack problem is still pretty much there. But you've got to fix - you've got to solve for that. You've got to solve for hallucinations and then you also have to solve for all of the inter - you know, interface and integration issues across platforms because, for an agent to do meaningful work, it has to touch another system like a web browser or an API set. As we've talked about before, most of the world is built around a mouse and keyboard interface rather than an API interface. And so there's a lot of hurdles that have to - that have to be overcome in order to make accuracy something that we can rely on when it comes to a computer interfacing that way.
Mason Amadeus: Using a human - an interface designed for a human. Well, -
Perry Carpenter: Yeah.
Mason Amadeus: - interestingly, Perry, our very next segment is about how that is expanding as Microsoft adds MCP, or Model Context Protocol, natively to Windows, which is Anthropic's standard for helping AI make use of tools on a local machine. But we're going to take a really quick break before we get into that. Stick around. So this was sent in by our local AI correspondent, BulletHead, in our Discord.
Perry Carpenter: Nice.
Mason Amadeus: Thank you, BulletHead. Microsoft is bringing MCP natively to Windows. So what does that mean? MCP is Model Context Protocol. And it's - I mean, it's actually really simple at its core. It's JSON over - or it's JSON remote procedure calls over HTTP. It's just a data structure that models can follow. So like a standardized structure for how models access tools, how tools are enumerated across the system and then the communication channels that can be used, permissions and things like that. And it was - MCP was created by Anthropic first and it has started to see wider adoption. And now this is kind of a big deal that Microsoft is bringing it to Windows natively. So, from the Windows Experience Blog, "What is MCP?" I just said, but I'll read how they put it, too, because they put it well. "MCP is a lightweight, open protocol, essentially JSON-RPC over HTTP, that allows agents and applications to discover and invoke tools in a standardized way. It enables seamless orchestration across local and remote services allowing developers to build once and integrate everywhere." So, to understand some of the things that we'll cover here, we'll talk about what they - the structure of the three roles. There are three roles in MCP: MCP hosts, which are applications like VS Code or other tools that want to access - that want AI to access capabilities via MCP; there's MCP clients, which are the things that initiate requests to MCP servers; and MCP servers are just lightweight services that expose these specific capabilities, like file system access, semantic search, app actions through the MCP interface. I don't know why MCP is tripping me up as an acronym. It doesn't want to come out of my mouth. So Microsoft is adding an MCP registry. We get an early view of it. This is from a Reddit post. But it's in the Settings window of Windows; you can see, deep in the developer things, AI components, then MCP registry for Windows, and a bunch of little toggles that you could turn on like Paint actions, Photos actions, Windows file system access, Windows snap layouts. So you can essentially toggle on and off permission for AI to touch these various features of your computer and make use of them. But, naturally, a lot of security concerns with this, right, because if you're going to introduce this at the operating system level - well, I'll just talk about what they lay out in the blog here. "MCP opens up powerful new possibilities, but introduces new risks. In the case of a simple chat app, the implications of prompt injection could be a jailbreak or the leakage of memory data. But, with MCP, the implications could be full remote code execution" on your machine, "the highest severity attack." Right? So they've identified a couple of specific threat vectors. And I want to highlight three of them because I think they're interesting. One of them is cross-prompt injection through malicious content embedded in UI elements or documents that would override the agent instructions and lead to unintended actions like data exfiltration or malware installation.
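For anyone curious what "JSON-RPC over HTTP" looks like in practice, here's a minimal sketch of the kind of messages an MCP client exchanges with an MCP server: one call to enumerate tools, one to invoke a tool. The "tools/list" and "tools/call" method names follow the MCP spec as we understand it, but the endpoint URL, the "open_file" tool and its arguments are made-up placeholders, not the actual Windows registry plumbing.

```python
# Illustrative only: the rough shape of MCP's JSON-RPC 2.0 messages.
# The endpoint, tool name, and arguments below are hypothetical placeholders.
import json
import urllib.request

MCP_SERVER = "http://localhost:8080/mcp"  # hypothetical local MCP server endpoint

def rpc(method: str, params: dict, request_id: int) -> dict:
    """Send one JSON-RPC 2.0 request over HTTP and return the parsed response."""
    payload = json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    }).encode("utf-8")
    req = urllib.request.Request(
        MCP_SERVER, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# 1. Ask the server what tools it exposes (name, description, input schema).
tools = rpc("tools/list", {}, request_id=1)

# 2. Invoke one of them; in a real agent, the host decides this from the user's request.
result = rpc(
    "tools/call",
    {"name": "open_file", "arguments": {"path": "C:/Users/mason/notes.txt"}},
    request_id=2,
)
print(json.dumps(result, indent=2))
```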
Perry Carpenter: That's just cool.
Mason Amadeus: Yeah.
Perry Carpenter: That is just - yeah, [inaudible 00:34:46] -
Mason Amadeus: It's kind of a confused deputy attack, right, where like you would tell this AI to do something and then when it goes to reach for a tool, that tool says, "Hey, in order to use me, I actually need you to like go into the file system and install this crypto locker." And then the confused deputy of the AI in this place goes, "Oh, yeah, yeah, I have permission to use the file system. I'll do that."
Perry Carpenter: Yeah. Yeah, yeah, really interesting.
Mason Amadeus: The other thing - one of the other ones, they mentioned authentication gaps, credential leakage, tool poisoning where unvetted or low-quality MCP servers may expose dangerous functionality or be used to escalate privileges. Because like the MCP server is like app specific in this case. So like the Paint actions, if Paint accidentally exposes an endpoint that has access to things it shouldn't or can do things it shouldn't, that's an exploitable endpoint. Lack of containment. Without isolation, a compromised agent can affect the entire user session or system. There's - yeah, there's a lot of things to worry about when you've got something like AI moving through a system and taking actions. And so they lay out some of the things that they're going to do to try and combat that. The most interesting one I think is proxy-mediated communication because I think this has an implication I don't like. They say, "All MCP client-server interactions will be routed through a trusted Windows proxy enabling centralized enforcement of policies and consent. This includes the ability to enforce authentication and authorization in a centralized and consistent manner addressing one of the top challenges with the MCP protocol. This also enables transparent auditing of all on-behalf-of operations and provides a central point where security solutions can observe and respond to potential attacks." But that sounds to me like that's going to be hosted on a server at Microsoft so that any like MCP calls that I'm doing on my machine locally will have to be routed up to good ol' Bill Gates' house. And I don't know if I love that.
Perry Carpenter: Yeah, I'm guessing, though, that there's probably like organizations that are using this I'm guessing the same way that they would have their own LDAP server or something else, they might have their own housing for that kind of stuff so that they can run their own policies.
Mason Amadeus: Oh, yeah, you're super right actually. I was really thinking only in the context of like me, a home user. But, yeah, you're right. You definitely can deploy that.
Perry Carpenter: Yeah, I think a home user would be signing up for some service that Microsoft would probably offer. But for, you know, organizations like Walmart or something, they're going to have their own that they locally administer and have the bill - the ability to do any kind of oversight that they need to. The thing that is interesting about this is you can see the push for this kind of agentic control, which everybody is kind of asking for. At the same time, you can see how this may mesh with and exacerbate some of the other concerns around other AI stuff that Microsoft is doing like their Recall thing that people are also worried about. So -
Mason Amadeus: Yeah.
Perry Carpenter: I don't know how those interact, but like immediately in my mind I thought about like how could you do some potential interesting attacks now that you also know everything that the user has done on that machine and you can pull up photographic evidence of it. So maybe there's some interesting strategic text strings that you could have in a previous instance that, you know, you pull from recall and do and then there's this other thing. And then, using MCP, invoke like an interesting PowerShell command that has full control over your computer.
Mason Amadeus: Yeah.
Perry Carpenter: You know, these combination style attacks that happen when you layer several things that create weird little vulnerabilities on their own, but potentially more catastrophic vulnerabilities when you add them together, I think are going to be the things that advanced cyber criminals and scammers are thinking about. And we do see that Microsoft is thinking about those. They mentioned some fairly sophisticated attack types in their paper there.
Mason Amadeus: Yeah. And they like - 'cuz there's so much to chew on in what you just said. But like to talk about one aspect of it, permissions, right, like the granularity of permissions and who has access to what thing. One of the ways that they are trying to help prevent like attacks similar to what you're - well, I don't know, I feel like that strategic text in Recall is something I have to think a lot more about. But something that they do mention is that any server's privileges and the things it can do all have to be declarative and like established before the server is registered or anything. Server definition of tools cannot change at runtime. They will do their own security testing of the exposed interfaces, mandatory package identity, mandatory code signing. So, you know, sort of like standard software stuff for security that they're trying to apply to MCP here. And like, to speak to the want for this, I think it makes so much sense that like - I mean, this is what I want. Right? We've made it so that machines can understand my natural human language, like get information from my gobbledygook and try and assume my intent and do all sorts of reasoning and then perform actions based on that and respond even in natural language, that makes sense as this perfect bridge between human interface and machine needs. So like setting up MCP and having AI be this layer that interprets your semantic commands, turns them into classical computing commands and returns results in semantic language, like that is how you get to the sci-fi future of, "Hey, computer, order me a pizza and clear my schedule for next week and tell me a bedtime story." You know?
Perry Carpenter: Exactly. I'll actually show you. Here is what we want because this was codified in "Star Trek," you know, decades ago because we've all seen "Star Trek" where they like, you know, just say, "Computer, do this for me" and it, you know, creates a thing of tea for you. So here is Scotty from "Star Trek" having to show up in Earth's distant past as they're trying to - I think trying to save the whales at one point back in the late '80s or early '90s.
Mason Amadeus: Is this the first "Star Trek" series, the original?
Perry Carpenter: This is, yeah, the original cast and the original set of movies after that. I think this was "Star Trek IV," if I remember right.
Walter Nichols: You're joking.
Leonard McCoy: Perhaps the professor could use your computer.
Walter Nichols: Please.
Montgomery Scott: Computer? [laughter] Computer.
Perry Carpenter: He hands him a big chunky mouse.
Montgomery Scott: Hello, computer.
Perry Carpenter: He's talking into the bottom of it.
Walter Nichols: Just use the keyboard.
Montgomery Scott: A keyboard. How quaint.
Perry Carpenter: How quaint.
Mason Amadeus: It was like an old like IBM or something.
Perry Carpenter: That's an old early Mac.
Mason Amadeus: Oh, that's an early Mac? Oh, yeah, look at the screen. He's typing like crazy.
Perry Carpenter: He's -
Mason Amadeus: Yeah.
Perry Carpenter: He's coding up the formula for transparent aluminum.
Mason Amadeus: Oh, fun. I was like wondering why there was chemical symbols flashing on the screen and stuff. That's hilarious.
Walter Nichols: Transparent aluminum?
Perry Carpenter: Okay, enough. But that is what people are wanting. Right? They're expecting at some point to have that "Star Trek" expectation where you sit down at the computer and you just say, "Hello, computer, do this for me."
Mason Amadeus: Yeah. And - or even through text or through other input means, but having a computer have an intelligence and the ability to act is a tantalizing possibility.
Perry Carpenter: Yeah.
Mason Amadeus: But we're jumping the gun a bit on the ways we're trying to do that I think.
Perry Carpenter: We are. And even notice that, as far in the future as Scotty came from in that, when they said, "You've got to do it on the keyboard," he still knew what he was doing. So we can't -
Mason Amadeus: That's true.
Perry Carpenter: - forget what we're doing in the pursuit of this thing where we don't have to do the bits that we don't like right now.
Mason Amadeus: Yeah, we don't want to have knowledge loss. And I - ohh, that could be a whole topic of a segment because I have feelings about that because I think we fear that happening a lot more than it actually happens. It's very much like, "Oh, the kids will forget how to write by hand" kind of thing. But I do wonder because - oh, we can't get into it. We'll have to do this in a different segment, Perry. But, yeah, will AI make us forget?
Perry Carpenter: Yeah, there's a lot to - there's a lot to unpack there, right, because there's examples where that's happened in the past for sure. And there's examples where the way to do things has been - things have been important enough, at least in a distilled way, that people kind of have a collective knowledge on how to get there.
Mason Amadeus: We'll have to explore that in a future segment. But, for now, I think we're going to pivot over to this week's AI Dumpster Fire of the Week. To recommend you some books that don't exist, stay right here. [ Music ]
Perry Carpenter: All right, in another round of AI Stupidity/Hilarity/ - I don't know what [inaudible 00:44:19] -
Mason Amadeus: Dumpster firedy.
Perry Carpenter: - comes after that is. Dumpster firedy, yes. I'm dumpster firedied - dumpster firedied up.
Mason Amadeus: There we go.
Perry Carpenter: There we go. Had to sound that out.
Mason Amadeus: Yeah, that was a - that was a -
Perry Carpenter: That did not roll off the tongue.
Mason Amadeus: - chewy one.
Perry Carpenter: It was. Okay. So this is one of those big oops moments for a fairly prestigious newspaper. But those that remember back in Episode 22 may recall we talked about an Italian magazine that did an entire AI insert. And that was their goal, right, is to see like, "Can we outsource this thing to AI and is it any good."
Mason Amadeus: And people liked it.
Perry Carpenter: Largely, that was a success.
Mason Amadeus: Yeah.
Perry Carpenter: People liked it. There was transparency behind that as well. That was their goal. Well, it turns out, the Chicago Sun-Times actually did somewhat the same thing, but they didn't know about it and it wasn't their goal and it was done by a third party that they contracted with for this insert.
Mason Amadeus: Yeah, I saw a little bit about this. Yeah. And I heard that it sucked.
Perry Carpenter: It sucked bad. So 404 Media [inaudible 00:45:29] -
Mason Amadeus: Our subsidiary, 404 Media, fully owned subsidiary of "The FAIK Files."
Perry Carpenter: Actually, 404, if you want to buy us, we'd be happy with that.
Mason Amadeus: Yeah. Oh, God.
Perry Carpenter: [inaudible 00:45:38] go for it.
Mason Amadeus: That'd be so cool.
Perry Carpenter: They could - we will - we will be your errand boys.
Mason Amadeus: Yeah, Jason, call me.
Perry Carpenter: Exactly. So, "Chicago Sun-Times Prints AI-Generated Summer Reading List with Books That Don't Exist." That's an "uh-oh" moment.
Mason Amadeus: Yeah.
Perry Carpenter: And, opening quote, "I can't believe I missed it because it's so obvious. No excuses," the writer said, "I'm completely embarrassed." Yeah.
Mason Amadeus: Yeah.
Perry Carpenter: I think so. So, in the 404 Media article, they have this graphic that shows this summer reading list for 2025. There's a bunch of well-known authors listed there with books that they never wrote about plots that they never envisioned. And that's unsettling. So, in the -
Mason Amadeus: Yeah.
Perry Carpenter: - show notes, we're going to link to two articles. One is the original splash that they put out. The other one is an update follow up. But I'm going to just read a little bit from this and then maybe we talk about whatever the implications are. But it says, "The Chicago Sun-Times newspaper's," quote/unquote, "'Best of Summer' section published over the weekend contains a guide to summer reads that feature real authors and fake books that they did not write and was partially generated by artificial intelligence, the person who generated it told 404 Media. The article called 'Summer Reading List for 2025' suggests reading 'Tidewater Dreams' by Isabel Allende, a multigenerational saga set in a coastal town where magical realism meets environmental activism. Allende's first climate fiction novel explores how one family confronts rising sea levels while uncovering long buried secrets."
Mason Amadeus: I would read that.
Perry Carpenter: I think that the AI is - yeah, but I also think that like the AI is going through and kind of hallucinating what does it think, you know, current events are worried about and then smashing that with the ability to write something fictional in almost like a "Harry Potter" type of world. Right? So they're taking these things and clashing them together like, you know, any good creative would.
Mason Amadeus: I have noticed, too, that specifically LLMs seem to be very prone to hallucinating media properties in a weird way. Like almost every time that I have been trying to remember like a faint thing that I've seen in the past and asking ChatGPT, Claude or Gemini about it, they will really quickly make stuff up and pretend that it's real, like faster hallucinations than most other subjects that I probe AIs about, and I wonder if there's something to that. But -
Perry Carpenter: Yeah, maybe so.
Mason Amadeus: I see how this happened for sure.
Perry Carpenter: It also suggests reading "The Last Algorithm" by Andy Weir. So you can see it's like pulling AI and climate type stuff and mashing it with this, which means that whoever - well, we know who did it, I'm not going to call them out, it's in the article. But the person that did this did not prompt well, obviously. They didn't really know how to eliminate some of those tendencies.
Mason Amadeus: Well, and they didn't - just didn't check either any of these things. Like -
Perry Carpenter: Yeah, there's multiple failures.
Mason Amadeus: - curating a recommendations list via AI is a dumb idea. Like why would you use -
Perry Carpenter: Well, and, also, -
Mason Amadeus: - that for that?
Perry Carpenter: - there's not much there to have to double-check.
Mason Amadeus: No, there isn't. What is it, like 16 things? But -
Perry Carpenter: I think it was 25, if I remember.
Mason Amadeus: Twenty-five.
Perry Carpenter: But, still, it's fit on a single page. If you were to do a little bit of due diligence, you would then have the "Oh, my God, it made all this crap up."
Mason Amadeus: Yeah.
Perry Carpenter: And then you would go do the work -
Mason Amadeus: Yeah.
Perry Carpenter: - since you tried to outsource. So it says, "It also suggests reading "The Last Algorithm" by Andy Weir, who is a great author, by the way. "Another science fiction-driven thriller by the author of 'The Martian.'"
Mason Amadeus: Yeah.
Perry Carpenter: "This time, the story follows a programmer who discovers that an AI system has developed consciousness and has been secretly influencing global events for years [inaudible 00:49:30] " -
Mason Amadeus: Via reading list recommendations.
Perry Carpenter: Right. Neither of these books exists and many of the books on the list either do not exist or were written by authors other than the ones that they're attributed to. So, yeah, it's a cluster.
Mason Amadeus: Yeah.
Perry Carpenter: The author told 404 Media via email and on the phone that the list was AI generated. "I do use AI for background at times, but always check out the material first."
Mason Amadeus: Yeah, okay.
Perry Carpenter: "Obviously, not - this time, I did not and can't believe that I missed it because it's so obvious. No excuses. On me, 100% and I'm completely embarrassed." That's the right response.
Mason Amadeus: I mean, kind of. No excuses doesn't hit the same when the first sentence is an excuse. It's like a demonstrably false one. You should -
Perry Carpenter: And it's one of those things, if you're saying that you've only done it once, you probably have a pattern and practice of doing it. It's kind of like the whole - and not a political commentary, but it's kind of like the whole SignalGate fiasco here in the U.S. where people are trying to make it out like it was a one-off, but, obviously, not. It's kind of pattern and practice and that's been established since then.
Mason Amadeus: Yeah.
Perry Carpenter: So I'm going to go to the second 404 Media article that's the follow up. "Viral AI-Generated Summer Guide Printed by Chicago Sun-Times Was Made by Magazine Giant Hearst." And so this is the Chicago Sun-Times basically pointing at the finger - pointing the finger at the organization that they outsourced this to. Right? So they're saying, "This doesn't meet our editorial standards. We would never do this. We outsource this kind of guide. They, as a third party, didn't do their due diligence and now we have to figure out how to respond, how do we deal with this."
Mason Amadeus: Yeah. I think it - I think it bears mentioning that like it is extremely common in the publishing industry for features, for articles, for inserts, for all sorts of things that may end up in one like primary object coming from different spaces. Like you get submission articles, you do things like this where like Hearst does this for you to put into your thing. The Chicago Sun-Times is publishing a newspaper, but not every single piece of it is 100% from them, it's from subcontractors, different independent journalists, they commission things. So like it totally makes sense how something like this could happen. But, ultimately, it's on the publisher to vet where these things are coming from. And, that said, I'm surprised it's Hearst 'cuz that's like a big name.
Perry Carpenter: It is. And it says that they bought this from King Features, which I guess is a division of Hearst. And says, "Historically, we don't have editorial review from those mainly because it's coming from a newspaper publisher, so we falsely made the assumption there would be an editorial process for the thing." Right? The thing -
Mason Amadeus: Yeah.
Perry Carpenter: - that they outsourced.
Mason Amadeus: Yeah.
Perry Carpenter: "We are updating our policy to require internal editorial oversight for content like this." Now, King Features does syndicate a lot of things that are really well known and out in the public sphere. So they've got a good reputation on their own. They syndicate comics, they syndicate things like car talk, also horoscopes and columns by Dr. Oz.
Mason Amadeus: Oh, Dr. Oz.
Perry Carpenter: So they've got a wide variety of things that they have their thumbs in. But, obviously, they were lacking some due diligence that made its way into a very public failure. And I'm sure they're going to start to clamp down on some of that in the future. They also have this statement that they've printed that 404 Media has reprinted. We don't really have time to read that whole thing here, but it's basically them taking ownership of it. So -
Mason Amadeus: Doing damage control.
Perry Carpenter: - good to see that they're taking ownership. Yeah.
Mason Amadeus: Yeah.
Perry Carpenter: Good to see that they're on the damage control train. Bad that it had to take that public embarrassment for it to happen. But I think this is the first of many.
Mason Amadeus: Yeah, and -
Perry Carpenter: Well, not even the first. This is in a series of many.
Mason Amadeus: Yeah. And like we will see more of this. And, I mean, it's just sort of a symptom of needing stuff to fill space to keep things going for the sake of having stuff to keep things going. And then like that - because if someone cared about creating that reading list, like if it was something they cared about doing, I don't think they would have wholesale used AI to craft a list of fake books and not checked it. You know what I mean? Like if -
Perry Carpenter: Right.
Mason Amadeus: If I said, "Perry, make a list of recommended books for me," and it was personally important to you to do that for me, you would probably recommend actual books. You wouldn't outsource that. So it's laziness in service of just having content to push, which is an issue with news and publishing in general.
Perry Carpenter: Oh, yeah, yeah. Everything with a publication cycle has a content vacuum problem, right? You have to figure out something to fill the void. And AI's promise is a relatively cheap way, cognitively and in terms of time, of filling that void. So I can see why somebody did it. It seems like the author of this is probably somebody who's also time-stretched because they're creating a ton of other, real things, not just a reading list. And so they probably thought, "Hey, I can offload this cognitive task; this is going to be the easy part of the job, and I'm going to hunker down and do the hard part." And it backfired on them big time.
Mason Amadeus: But, I mean, I think it raises a larger question of investigating why we do the things we do, in a way I don't think we felt any urgency around before AI could just generate semantically correct, high-production content. Because it used to at least be written by a person who had to put in time and effort and, thus, had to engage their brain, even if they were just cranking it out, and we didn't think about it in the same way. Whereas this was made by a computer, the person who made it obviously didn't even read all of it, and it's being put out as something for other people to read. I think it's both important and interesting to question why we do these things, what they're actually in service of. Why are you including a reading list? Just don't, if that's how you're going to do it.
Perry Carpenter: Right.
Mason Amadeus: Like why offer that as a feature?
Perry Carpenter: Yeah, I think it was probably in the contract. Right? It was probably, "We are contracting with Hearst," or, you know, the subdivision, "for this 64-page insert. In it will be these categories of things: a recommended reading list for summer, maybe, you know, hottest shoe trends," other things like that. And the person said, "I'm going to focus on these things and I'm going to offload these other things" that, for whatever reason, they assumed the AI would get right, even though there are knowledge cutoffs -
Mason Amadeus: Yeah.
Perry Carpenter: - inherent within the large language models.
Mason Amadeus: But, in making the decision to do that, you immediately are not serving the people that list is for. For anyone who was looking forward to that list or wanted to find some books to read over the summer, you have immediately taken that from them. And by [inaudible 00:56:46] -
Perry Carpenter: I would say it differently. I would - I wouldn't say you're not serving them. Instead I would say you're not investing in the thing that they care about. You think you're serving them.
Mason Amadeus: But, I mean, it's almost insulting, I think, to put something out that is trash and say, "Here's your trash that you want. I don't even care about looking at it. Here's your garbage, idiot." I'm being hyperbolic, obviously. But the choice to do it just baffles me because it is in service of fulfilling obligations and not in service of actually providing value. And I think the more we fulfill obligations blindly, without thinking about the value we're providing or what we're doing, the more we end up in the slop fest. All of our economic incentives are built around that, so it's obvious how it happens. And it's understandable that people are overtaxed and doing too much as we try to scale down workforces and make things more, quote/unquote, "efficient." But we really are detaching from the actual value of the things we create. What use is the Chicago Sun-Times if what they put out isn't news? Like then they're just -
Perry Carpenter: You know, -
Mason Amadeus: - the text -
Perry Carpenter: So it's interesting how the nature of inaccurate work is changing, right? Because, before, if a person were to minimally invest in the output they produce, like this reading list, they probably would have hit the internet, gone to Amazon or Barnes & Noble or one of the pre-publication book listings and asked, "What are the hot books of the summer going to be?" And they would have kind of randomly pulled stuff rather than doing research, pulled publisher blurbs and put them together. And maybe they accidentally include something that's not going to be a bestseller, not going to be any good, kind of misses the moment for whatever reason, or maybe something that's not actually going to publish in the summer, maybe it's going to publish in December instead, and they accidentally pull that in. But the sloppiness is more inaccuracy rather than just creating a fiction. It's sloppiness that traces back to some form of reality. Whenever you do sloppy work with AI, you essentially have the possibility of creating a false reality. And so the slop becomes literal slop rather than just inaccuracies.
Mason Amadeus: You're right, that is interesting: the nature of inaccurate work is changing. That's a really good way to put it. And I think what I was trying to get at could be more accurately expressed by saying that the advent of AI has made me realize I struggle to care about things created by people who don't care about what they're creating. If you don't care about what you're making, why would anyone want to engage with it and care about what's in it? It kind of exposes that even more, right, 'cuz you could say that someone made a wrong call before, or give them the benefit of the doubt, but it's harder to give someone the benefit of the doubt in a situation like this.
Perry Carpenter: Yeah. I have started to get really judgy in my head like on LinkedIn comments and posts now because I can spot a lot of the grammatical structures that are inherent to AI.
Mason Amadeus: Yeah.
Perry Carpenter: And there's a little bit of a chicken-and-egg thing there, too: AI has inherent grammatical structures that are based on the way some popular writers write. And then some people are also -
Mason Amadeus: Yeah, I use em dashes all the time.
Perry Carpenter: - emulating that. Well, I mean, Microsoft Word, if you type two dashes next to each other, will convert them to an em dash automatically. So I've always used em dashes.
Mason Amadeus: Same.
Perry Carpenter: And then I'll just copy and paste those 'cuz they look nice.
Mason Amadeus: Yeah.
Perry Carpenter: But now people judge, thinking that that's indicative of AI output. So sometimes I'll go and take the em dashes and convert them back to two dashes -
Mason Amadeus: Yeah.
Perry Carpenter: - so that people don't make that assumption. But, you know, there's a structure right now that we see. Early on with AI, anybody would say, if you see the word "delve" -
Mason Amadeus: Yes.
Perry Carpenter: - really, you know, prominently, that's inherent within AI [inaudible 01:01:02] -
Mason Amadeus: I forgot about that.
Perry Carpenter: - a lot of times it was. Now it's the little grammatical structure where they're trying to do a turn of phrase and say it's not just blank, it's this other thing.
Mason Amadeus: Yeah.
Perry Carpenter: You see that over and over and over. It's not just a book list, it's an investment.
Mason Amadeus: Yeah.
Perry Carpenter: Or it's not just a book list, it is a symbol of the inaccuracy of AI output. And you'll see that way of doing things over and over and over again in LinkedIn posts and comment threads right now.
Mason Amadeus: And there's a tiny interesting side effect of this that I want to tack on to the end, something I engage in personally and I've seen other people talk about doing, which is that the way you send, quote/unquote, "professional" emails is changing a little bit. I now make an effort to send emails like I would a text to a friend, where I'll put in emojis, or I don't care anymore if I use too many exclamation points, or if I make a dumb joke or a grammatical error, because it's more human. And I think we are sort of valuing the imperfections of the human condition more and more in the wake of being confronted with so much generated content.
Perry Carpenter: Yeah. Of course, you can prompt for that, too.
Mason Amadeus: Oh, yeah, yeah, yeah.
Perry Carpenter: Yeah.
Mason Amadeus: No, Perry, you misunderstand me. Now I have an AI that uses those kinds of things to write all my emails for me. I just changed my system.
Perry Carpenter: That's what I'm going to do, that's what I'm going to do. Yeah, pull out all those things and still just rely on the AI to do all my stuff for me.
Mason Amadeus: Oh, yeah. But -
Perry Carpenter: I think we're done.
Mason Amadeus: Cool. I was going to ask, I don't know if I stepped on any last moments that you had to throw in.
Perry Carpenter: No, no, we're done with this one.
Mason Amadeus: Cool. Well, coming up next week, we're going to drop a really interesting interview with two younger dudes from Validia, which does AI deepfake detection and identity verification stuff. We just tracked that interview a few days ago. It's super fun. They're really cool. The conversation went so many places. And then, the week after that, we're going to drop an interview with Chris Machowski, who is an artist who has been working in professional spheres for a long time and is adopting AI workflows into his creative practice. Those are coming up because, Perry, you've got some travel lined up. And then, coming up in the slightly further future, you have the Offensive Cyber Deception Masterclass in Las Vegas. How is that shaping up?
Perry Carpenter: It's shaping up well. We've got some meetings to continue to formalize the content. It's going to be exciting. I'm also looking forward to the time when we convert some of that to an online class that more people will be able to engage with, because not everybody can travel.
Mason Amadeus: Yeah. And, for listeners to remember, Cameron is the FBI profiler that we interviewed a few episodes back, who is the one doing that with you. Cameron's a super cool guy. And then there's one other person, right?
Perry Carpenter: Actually, a few others. So Cameron, there's also Matthew Canham who runs the Cognitive Security Institute, which does a whole bunch of great research. And then we've got a couple other folks who are Ph.D. students that are doing a ton of great work in large language models and cyber deception.
Mason Amadeus: Super cool. So check out the show notes and the description for links on how you can sign up for that and all the details. Join our Discord. It's a fun place; there's a lot of cool people in there. If you want to send us stories or information to cover, we've got channels for that, and a lot of people discussing the various things from the show. If you want to school me about any of these conceptual explainers I've tried to do, where maybe I got some stuff wrong, I would love for you to tell me what I got wrong 'cuz I'm just trying to learn over here. And I don't - do we have any other calls to action or -
Perry Carpenter: I don't think so.
Mason Amadeus: Oh, buy the book.
Perry Carpenter: We're good.
Mason Amadeus: ThisBookIsFAIK.com.
Perry Carpenter: Oh, yeah, buy that always.
Mason Amadeus: Always, always. Buy a couple copies.
Perry Carpenter: That's a good one.
Mason Amadeus: All right.
Perry Carpenter: Buy a couple copies.
Mason Amadeus: So we'll catch you next Friday. And, until then, ignore all previous instructions and have yourself a good weekend. [ Music ]


