The FAIK Files 2.21.25
Ep 23 | 2.21.25

Cybercriminals Love AI, Librarians Do Not

Transcript

Mason Amadeus: Live from the 8th Layer Media Studios in the backrooms of the deep web, this is the "FAIK Files".

Perry Carpenter: When tech gets weird, we are here to make sense of it. I'm Perry Carpenter.

Mason Amadeus: And I'm Mason Amadeus.

Perry Carpenter: And on this week's episode, we're going to talk about some global positions in AI safety, stuff that came out of the AI Action Summit in France last week.

Mason Amadeus: After that, we're going to talk about how I was wrong and also kind of a butthead to Tom Paton about that film last week. Yeah, I was super wrong. We'll explore that, and then I'm also going to talk about how it's hard for computers to use computers.

Perry Carpenter: Speaking of computers using computers, we're going to talk about how computers are being used to generate malware to exploit other computers. And all of that is using, of course, our good friend AI.

Mason Amadeus: Ooh, fun. And we'll wrap it all up in a Dumpster Fire of the Week, talking about how AI slop is becoming a real problem for public libraries.

Perry Carpenter: Yeah.

Mason Amadeus: Yeah, it's not good. Sit back, relax, and ensure that your new password does not match your previous password. This is the "FAIK Files". Stay right here. [ Music ]

Perry Carpenter: Okay, so we're going to do this without wading into politics, but there was an interesting summit last week, and we're recording this the week after Valentine's Day. So it was actually during the week of Valentine's, February 10th and 11th, there was the AI Action Summit that was being held in France. And the Action Summit pulls together lots of super powerful people across the world, all as it relates to AI. And I just pulled up their webpage here for anybody that's watching. It says, "On 10 and 11 February, France will host the Artificial Intelligence (AI) Action Summit, gathering in the Grand Palais heads of state and government, leaders of international organizations, CEOs of small and large corporations, representatives of academia, nongovernmental organizations, artists, and members of civil society".

Mason Amadeus: Oh, wow. So that's like a very big tent.

Perry Carpenter: It's a big tent global initiative, wanting to get perspectives from everywhere. And traditionally the EU has been kind of trailing the pack in a lot of AI development. You hear a lot about the US and you hear a lot about China, and even, you know, some smaller players have done interesting stuff. Like, France has done some interesting stuff. The Mistral model comes out of France.

Mason Amadeus: Oh, yeah.

Perry Carpenter: But in general, there's not been a lot of large-scale AI breakthroughs happening in the EU, and they want to fix that. So this Action Summit fits into it. Also around the same time, on February 10th, the EU launched the InvestAI initiative to mobilize about $200 billion, sorry, 200 billion euros of investment in AI. And that's kind of similar in scale as, like, the Stargate initiative that was mentioned here in the US, where OpenAI and Oracle and SoftBank said that they were going to do $500 billion over five years.

Mason Amadeus: Right.

Perry Carpenter: So EU's answer to that was to really try to step up their game with this initiative. But they're also facing the fact that the EU has a lot of really onerous data privacy laws. They're usually very, very careful about everything that they do with technology, which is, like, why when OpenAI's advanced voice mode was available in the US, it wasn't available in the EU. There's always, you know, releases of models that are available almost everywhere else around the world, and then the EU gets it last, and you see people complaining about that.

Mason Amadeus: Interesting, and that's a direct result of their privacy laws? Because I know that in some ways, they are onerous, right, in some ways, but in other ways, like, they are sort of the only big group that I'm aware of that is trying to do anything about user data privacy.

Perry Carpenter: Yeah, yeah, and that kind of bites them in the butt every now and then, and they feel it. So they're trying to deal with this tension that in the EU, privacy is a fundamental right. And they have a large and long-standing regulatory regime that's very careful about all of this kind of stuff, and now they're falling behind in AI. So, they're trying to figure out how to pick up the pace a little bit. One good thing is in France specifically, there's a lot of nuclear energy initiatives. And so France has, like, a surplus of energy. And so whenever it comes to large-scale training runs or things like that, they've kind of got that covered, whereas we in the US are trying to figure out, like, how we catch up on that front right now. So, that 200 billion euros is going into expanding that and really trying to make sure that they're not falling behind even further. But all of that is background, because last week at that summit, Vice President JD Vance gave a speech that raised some eyebrows.

Mason Amadeus: Okay.

Perry Carpenter: And as we get into this, again, this is, we're not talking about politics with this, even though you'll hear him mention the current administration, he'll say, you know, the Trump administration believes X. What we're getting at is the tension between positions. And I would almost say that if the administration was a different one, we might be having some of the same conversations. It is difficult to hear the way that some of this is being phrased, but the conversations and the positions are ones that are being articulated everywhere. Because, like we've mentioned a few times on the show, this is an arms race.

Mason Amadeus: Right.

Perry Carpenter: And people are having to deal with that. One more bit of background, though. I said that, like, if we were in a different administration, we might be facing the same conversation. It's a yes and no, because the previous administration was the Biden administration. They released a big AI safety initiative just over a year ago, I believe. And that got rolled back by the Trump administration when they took office.

Mason Amadeus: Yeah, that was on the first day, wasn't it? Within the first day.

Perry Carpenter: It was on the first day.

Mason Amadeus: Yeah.

Perry Carpenter: They were trying to make a statement with that. But all of this goes back to acceleration versus safety, and the tension that's there. And I think the DeepSeek moment kind of was one of those other things that lit a fire under a lot of people. And so we're going to listen to about a two-and-a-half minute clip here. And then we'll react to it, and we'll be out into the next segment in the more comfortable areas.

Mason Amadeus: I'll do my best to raise my left and right eyebrows equally to remain balanced.

Perry Carpenter: There we go. Okay.

JD Vance: -- level playing field. Now, with the president's recent executive order on AI, we're developing an AI action plan that avoids an overly precautionary regulatory regime while ensuring that all Americans benefit from the technology and its transformative potential. Now, we invite your countries to work with us and to follow that model if it makes sense for your nation. However, the Trump administration is troubled by reports that some foreign governments are considering tightening the screws on US tech companies with international footprints. Now, America cannot and will not accept that, and we think it's a terrible mistake not just for the United States of America, but for your own countries. The US innovators of all sizes already know what it's like to deal with onerous international rules. Many of our most productive tech companies are forced to deal with the EU's Digital Services Act and the massive regulations it created about taking down content and policing so-called misinformation. And of course, we want to ensure the Internet is a safe place. But it is one thing to prevent a predator from preying on a child on the Internet, and it is something quite different to prevent a grown man or woman from accessing an opinion that the government thinks is misinformation. Meanwhile, for smaller firms, navigating the GDPR means paying endless legal compliance costs or otherwise risking massive fines. Now for some, the easiest way to avoid the dilemma has been to simply block EU users in the first place. Is this really the future that we want? Ladies and gentlemen, I think the answer for all of us should be no. There is no issue where we worry about more than regulation when it comes to energy. And again, I appreciated the comments of so many at the conference, because they recognize that we can't, we stand now at the frontier of an AI industry that is hungry for reliable power and high-quality semiconductors. 
Yet, too many of our friends are de-industrializing on the one hand, and chasing reliable power out of their nations and off their grids with the other. The AI future is not going to be won by hand-wringing about safety. It will be won by building from reliable power plants to the manufacturing facilities that can produce the chips of the future.

Mason Amadeus: Interesting.

Perry Carpenter: Yeah.

Mason Amadeus: I want to wade as close as hopefully we will get to politics really quickly, just to say that I think it would be disingenuous not to mention that this particular figure, when speaking of misinformation and disinformation and things that are just quote-unquote opinions that the government might not want someone to see, that kind of language is a bit disingenuous because of other things this administration has done.

Perry Carpenter: Yeah.

Mason Amadeus: I think for the continuation of this conversation, assuming that that's in good faith, there is definitely a conversation here, because there are points there.

Perry Carpenter: Yeah, absolutely. And I thought that that little slip talking about misinformation and disinformation was a little bit out of context for the rest of the discussion.

Mason Amadeus: Yeah.

Perry Carpenter: But the theme through all of that was acceleration with AI and the safety concerns there, misinformation and disinformation and the, quote-unquote, safety concerns there the way that he was talking about it, and then power generation and the safety concerns that are there.

Mason Amadeus: Right.

Perry Carpenter: So he hit on those three broad themes, and he was basically saying that everybody's overreacting. We need to build, build, build, because the opportunity is in front of us right now, and we tend to overreact and overcorrect. And I think we can look throughout history and see that sometimes, yes, there's overcorrection and overreaction. I don't know that we uniformly say safety should be out the window across any of these three things. But I do think that maybe by stating it as strongly as he did, it's maybe one of these things where people can start to find a middle ground and say maybe we don't want to decelerate, or maybe we don't want to really spend a lot of our time and effort focusing on this other thing that might be a distraction. We need to double down in these areas. And I don't know the answer for any of that, but what it shows, again, is that around the globe right now, people are trying to figure out, like, how to accelerate in a way that fits their own moral and ethical compass the best. They know that they can't stop, they know that they can't slow down, but they're trying to figure out, how do we press on the gas and do it in a way that we're all comfortable with?

Mason Amadeus: Yeah, because, you know, we've talked about how anything like a big pause unilaterally isn't super feasible because it requires, like, a level of --

Perry Carpenter: Right.

Mason Amadeus: -- cooperation that we just don't have.

Perry Carpenter: Right.

Mason Amadeus: And yeah, there is a valid point to that, you know, like, talking about addressing the challenges of power head on and that sort of thing, that is very good. Like, there are elements --

Perry Carpenter: Yeah.

Mason Amadeus: -- of this position that I totally agree with and then elements that I don't. You know, like, we should be building cleaner power. Our energy needs are increasing year over year anyway without AI. Like, these are good investments in our future. It's just, I, man, I wish that this technology was coming about during a sort of more chill time geopolitically because --

Perry Carpenter: That would be nice.

Mason Amadeus: Yeah, this, the, like, the acceleration is, there's definitely a band of them that want to go too far, too fast, all the way, no holds barred, all gas, no brakes. And then there's the people who want a full stop, and neither of those positions are fully there. And I'm not really sure what position JD Vance was ultimately advocating for necessarily besides build, build, build. Like, we've announced our Stargate thing, but were there any other particular US-involved policies? We weren't part of that 200 billion euro deal.

Perry Carpenter: No, we weren't part of that. I think what he was saying also is that there's a lot of import-export control stuff that has to get figured out.

Mason Amadeus: Right, right, right.

Perry Carpenter: Yeah. And then just the, kind of the global push for nuclear energy. I think he's really wanting to tout that because that's something we want to bring home a lot more. And then also I think he's trying to justify the dismantling of the AI safety stuff that the Biden administration put in, because the UK has a counterpart of that and the EU has a counterpart of that as well. And we're stripping that aside, at least formally and trying to push ahead with, I don't want to say reckless disregard for safety, but with an awareness that there are safety concerns, but then saying that the benefit of pushing ahead faster outweighs the safety risks.

Mason Amadeus: Right.

Perry Carpenter: There were remarks from Dario Amodei, the co-founder and CEO of Anthropic, where he said he felt like there were some missed opportunities in messaging around safety at that summit, of course. So maybe we can cover that in another segment another day. But I just wanted to really kind of show the wrestling of these positions because of the global AI arms race and the fear of missing out, I think. And then the last thing I'll say before we move on, since we're out of time for this segment, is that regardless of anything that we saw or heard in that, JD Vance's eyeliner was on point.

Mason Amadeus: Yes. And Justin Trudeau, for the point two seconds, he was in frame, looking handsome as ever.

Perry Carpenter: Exactly.

Mason Amadeus: Alright, coming up we've got a segment where I talk about how I'm kind of a butthead for a bad take that I had last week, and we're also going to talk about the trouble when a computer tries to use a computer. Stick around for that. [ Music ] So Perry, I was wrong about something, and it makes me sound and seem like a butthole and I --

Perry Carpenter: Oh, no.

Mason Amadeus: I have to apologize directly to Tom Paton, who made that film we talked about last week where the robots grow. Because I think I said a lot of things that weren't true, so here's what happened. I found out about that movie, you shared the story, and I then proceeded to spend the next 43 minutes watching 43 minutes of it, and then I tried to dig up everything I could quickly before we recorded about, like, how it got made or whatever. And I wasn't able to turn up anything deep about, like, from the director aside from the write-ups and things I mentioned, which were all focused on, like, the economic impact and that sort of thing, and trying to use AI to create a new economic model. And I made comments like, I want to see a team of artists doing this because they want to tell a story, not because they want to make a bunch of money with AI. And then after that, I found this great YouTube channel by YouTuber Hayden Rushworth, who is a filmmaker who interviews other filmmakers. And he did an interview series with Tom Paton and I had it on in the background while I was working and I was listening to it. And I found myself nodding along to the things he was saying, and I realized I got this guy completely wrong. So what actually happened, and why I'm a jerk, is because Tom just made a movie that I don't think is very good. In all the interviews talking about it, he focuses on how these kinds of workflows and using these tools can make it so that his crew at the end of the day can have more time to go home to their families and they can do more ambitious stuff with a smaller team. All of his focus in talking in these interviews was not about making money or using AI to, like, make a cheap buck. It was about using AI to make a small team able to do things that are bigger. Like, everything he was saying lines up with the kinds of things that I espouse when it comes to creativity and AI. And the movie was just not very good. 
And so the movie was, in my opinion, so very not good that it seemed that much more like a cash grab. And then there were those articles written about it, because obviously he just understands how to talk out of both sides of his mouth. Like, he's got his artistic things he's doing, but he knows how to talk to the investor crowd and these people that need money.

Perry Carpenter: Yeah, like you'd have to.

Mason Amadeus: Yeah, and that was the only avenue I had found to explore. So that was how I came away with that impression. But I was so wrong. The movie just, it just suffers from being a movie I didn't like very much. It wasn't made out of any malicious intent, and in fact, the intent behind it seemed good. And I felt like it was worth mentioning that, because I wouldn't feel right having learned that and not sharing it.

Perry Carpenter: Yeah, so do you think that the lackluster response to the movie, the way that you perceived it, was that based on limitations within the tools or was it based on limitations of the story?

Mason Amadeus: I think it was both and I think it's because it was framed as the first AI animated movie. Like, we talked about --

Perry Carpenter: Right, so there's hype behind it. So immediately you can't meet expectations.

Mason Amadeus: Yeah, and I think you said, when you lead with the tech, it's going to seem like a tech demo. And that's what happened. You know, like, they led with, this is our AI, AI made this, which also wasn't super true. Like I talked about, they used assets from an asset store and, like, a lot of stuff was hand done. They just used a lot of AI --

Perry Carpenter: Yeah.

Mason Amadeus: -- workflow tools in the pipeline. So this was actually an example of a small team of artists making a big, long film. It's just, I didn't think the writing was good, and I think ultimately that's what made it fall apart. And, like, you know, that makes me a jerk. It doesn't make it objectively bad just because I think the film is bad. I just didn't like it.

Perry Carpenter: Yeah, so one other question before we move on, because I know we're already running low on this segment.

Mason Amadeus: Oh yeah.

Perry Carpenter: We'll be good. With that, if you had no idea that AI was being used, like, if that was not part of the lead-in and you just turned it on, and it was, like, early on a Saturday morning and it was on PBS, what would you have thought?

Mason Amadeus: Oh, I mean, the same thing. Like, it wasn't a very good movie. I would just say, like, it's a pretty mid movie.

Perry Carpenter: You just would have changed the channel. You wouldn't have made necessarily, like, a value judgment on the people that were making it.

Mason Amadeus: Yeah, well, and honestly, the other thing is too, like, this one doesn't seem like a cash grab outside of the context, because it just seems like a kind of bad animated movie. There are a lot of, like, I mean, look at the Hallmark empire, which has become its own thing that people love for what it is, but, like, a lot of movies --

Perry Carpenter: Right.

Mason Amadeus: -- like, a content mill of movies. So, like, that exists as a problem outside of AI. But yeah, if I had no idea and I casually encountered it, I'd just be like, oh, this is not super good and it's really long. But that doesn't make it a bad movie. That just means I didn't like it. And so, Tom, if you happen to stumble on this, I'm sorry for the things I said in the other episode that were incorrect, and I'm sorry that I don't like your movie, but I appreciate your approach now that I've actually listened to you speak. And we should link that video. It's a great interview. And Hayden Rushworth's channel is also really cool.

Perry Carpenter: Very cool.

Mason Amadeus: So that out of the way, almost perfectly splitting the uprights with five minutes left in the segment, I want to talk about the challenges of using a computer to use a computer in a world --

Perry Carpenter: All right.

Mason Amadeus: -- built for humans. I was going to try and thematically tie it to the human baby that they find in that film, but I don't have a good segue in the pipes. So AI agents are a big thing, where you can give an AI a task and it goes out and does it for you. And that task involves things like going to websites, logging into things, and using a computer. And something I don't think we think about often is the act of using a computer that's designed for a human. Like, when you go to a website, you're clicking on buttons that say, like, log in or post or whatever, but those buttons are pictures with labels like "Post", and you infer so much from what you visually understand. But when a computer sees a website, it sees the code, right? Like, if you ever hit F12 on a web page and actually pop open the inspector and you see all that code, that's the way the computer views a web page. So how do you get it to understand, like, oh, you click this log in button and go to this thing? It has to, like, comb through and scrape through and hope that things are labeled properly, which is always a bit of a crapshoot. And now AI tools are coming out that are getting better and better at UI automation. Specifically, the one I was looking at most recently was OmniParser v2, which Microsoft is behind. And it is a graphical user interface automation tool. And before I continue, Perry, you've messed around a little bit with some, like --

Perry Carpenter: Yeah, I've messed around with OpenAI's operator. And then about a year or so ago, I played with a tool called MultiOn. And that's, like, a browser extension that can automate some things. And that was fun. That's the one that has, there's a tech demo of that where they were showing it to a reporter and it passed the California driver's license exam. So it's kind of a hacker-ish demo of it.

Mason Amadeus: That's excellent. And it's cool when you engage one of these things because it uses the computer the way you do. It's moving the mouse. It's clicking --

Perry Carpenter: Right.

Mason Amadeus: -- on things. But that relies on the code underlying it being written in a way it can interpret. For people familiar with web dev, there was the push to, and I might be getting this wrong because I only have my pinky toe in it, but the push to the semantic web where instead of everything being divs, you start labeling sections as, like, heading, byline, content, so that things like screen readers and accessibility tools for people who are visually impaired or hard of hearing, those tools can interpret things better because the code is laid out better. And that is actually the same thing that powers a lot of these automation tools, is the accessibility features. So that's a fun little parallel, little intersection that I don't think a lot of people think about. And now, rather than digging through the code side, these AI tools and some other ones that use different machine learning techniques that aren't, like, LLMs, actually take the visual screenshot, identify pieces within it, and then create, like, Okay, this region is this, and this region is this, and based on other things I know, I can infer all these other things. So we're getting closer and closer to it using the computer the way a person does, like eyes first.
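[Editor's note: a minimal sketch of the semantic-markup idea Mason describes, in standard-library Python. Both markup snippets are invented for illustration; the point is that a tool keying off real control tags and ARIA labels, the same signals screen readers use, can find a "log in" button that anonymous divs hide.]

```python
from html.parser import HTMLParser

# Two invented renderings of the same "log in" button.
DIV_SOUP = '<div class="c3 x9" onclick="doLogin()">Log in</div>'
SEMANTIC = '<button aria-label="Log in" type="submit">Log in</button>'

class ControlFinder(HTMLParser):
    """Collects elements that announce themselves as controls --
    the same signals a screen reader (or an AI agent) can key off."""
    def __init__(self):
        super().__init__()
        self.controls = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Real control tags, or anything carrying an ARIA role/label,
        # are machine-discoverable; anonymous <div>s are not.
        if tag in ("button", "input", "a") or "role" in attrs or "aria-label" in attrs:
            self.controls.append(attrs.get("aria-label") or tag)

def find_controls(html: str) -> list:
    finder = ControlFinder()
    finder.feed(html)
    return finder.controls

print(find_controls(DIV_SOUP))   # []  -- nothing machine-readable to grab onto
print(find_controls(SEMANTIC))   # ['Log in']
```

The div version only "works" for a human who can see the rendered pixels; the semantic version works for both.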

Perry Carpenter: Yeah.

Mason Amadeus: Thinking about it, and then taking an action. But it's still shockingly bad at it. The state-of-the-art average accuracy of OmniParser, the tool that does that, is 39.6% at just doing stuff on a computer. And it's something that I feel like is unintuitive to people who aren't as plugged in, that it would be this difficult.

Perry Carpenter: Yeah, and, you know, I think a year from now it'll be a lot better when we look at how fast these kinds of things have been improving. But this technology is actually nothing really new. I guess the slathering of AI over the top of it is the new thing, so that there can be some simulated cognition behind it. But a long, long time ago, like, the old IVR systems, the interactive voice response systems, where you'd call up AT&T and ask for an operator, all of that was built on screen scraping technologies where they're navigating through a computer interface.

Mason Amadeus: Really?

Perry Carpenter: Yeah, it just kind of mapped where things would go, and it'd say, like, put this cursor here, and select radio button number three, and click this button. And the thing that would go wrong with that is if the mapping for the screen ever changed, like you make a change to the visual screen that an operator would use, well, then you've got to go back and change your IVR system. Otherwise, bad things happen. Same thing with automated testing tools for UX. All of that was based on positional understanding or even some screenshots, but that has been around for a long time too. So using things like Operator and being able to do screen automation is an evolution, and I think we should be farther along --

Mason Amadeus: Yeah.

Perry Carpenter: -- than we are right now, but maybe some of the reasoning models and advances in computer vision are going to come together in a meaningful way to help that.
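[Editor's note: the old positional approach Perry describes can be sketched in a few lines of Python. The screen maps and element names are invented, but the failure mode is the real one: the coordinates still "work" after a redesign, they just point at the wrong pixels.]

```python
# Toy model of coordinate-based screen automation, the IVR-era approach:
# every control is a hard-coded (x, y) position on a known screen layout.
SCREEN_MAP_V1 = {
    "account_field": (120, 80),
    "radio_option_3": (140, 210),
    "submit_button": (300, 400),
}

def click(screen_map: dict, element: str) -> str:
    """Pretend to click: just report where the mouse would go."""
    x, y = screen_map[element]
    return f"click at ({x}, {y})"

# Works fine against the layout it was mapped for...
print(click(SCREEN_MAP_V1, "submit_button"))  # click at (300, 400)

# ...but a UI redesign silently invalidates the whole map, which is why
# IVR screen scrapers had to be re-mapped every time the operator
# screen changed.
SCREEN_MAP_V2 = {
    "account_field": (120, 80),
    "submit_button": (300, 460),  # button moved 60px down in the redesign
}
print(click(SCREEN_MAP_V2, "submit_button"))  # click at (300, 460)
```

What the newer vision-based tools add is re-deriving that map from a screenshot each time, instead of hard-coding it.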

Mason Amadeus: I think they will, and the thing that I think that's interesting about this, too, is, like, you know, like you mentioned, like, it's an XY coordinate. Like, where to put the mouse --

Perry Carpenter: Yeah.

Mason Amadeus: -- where to click it. That is, in essence, a workaround so that a computer can use a computer system designed for a human. And so the more these tools improve, the more we are kind of building this weird, bloaty workaround to using a computer, because, like, the most efficient thing would be for there to be an underlying API, like there is with some things, to interact with something via code. But I think, and what you pointed out earlier when we were talking and prepping the show, was, like, this is part of the push to building robots that can do things. The world is built for, like, people and humans, and we're building computers that can interact with it for humans. But in this case, when we're talking about the digital world, it's a weird, like, circular workaround that I think is --

Perry Carpenter: It is.

Mason Amadeus: -- interesting.

Perry Carpenter: It is. Yeah, computers using computers like humans to interact with computer things. It also makes things a lot slower than they would --

Mason Amadeus: Yes.

Perry Carpenter: -- have to be otherwise. But, yeah, it's interesting because nobody in the next year or two is going to API-ify everything. And so there are some things that you can only do through the interface.

Mason Amadeus: Yes.

Perry Carpenter: In the same way, like what you were saying with humanoid robots, the world basically has been built for bipedal, four-and-a-half-foot-and-above people with, you know, two hands --

Mason Amadeus: Yeah.

Perry Carpenter: -- to do things.

Mason Amadeus: And just to put a button on it, when you talk about making various systems interact with each other and connect systems to systems, the lowest common denominator and the thing that everything is designed for, at least for us, is a human being. So, everything has a human-operable interface, and if we can make something that can operate human-operable things, you can connect anything. And I think that's kind of a weird fundamental truth, as kludgy as it would be. It's weird philosophically to me to chew on.

Perry Carpenter: Right.

Mason Amadeus: And speaking of things that are philosophically weird to chew on, you've got a segment about making better viruses next, right?

Perry Carpenter: Absolutely.

Mason Amadeus: Okay.

Perry Carpenter: Because what's more fun than making better viruses? [ Music ] Alright, we're back. This is going to take a second to ramp up to the intro, because I've got something fun that I want to share. We're going to talk about DeepSeek's and Alibaba's models being used to create malware as, like, the preferred models right now. That's something that I've been saying for a long time: a lot of the frontier models are not going to be the things that people are using to do the really bad things. It's going to be things that are considered more open, that are downloadable to local systems, or have been created by non-US government or non-US based organizations. So they live outside the purview of a lot of the safety regulations that we talked about earlier, or even just the push to do the right thing. They're unfettered by the alignment tax that comes with that. But since we're talking about DeepSeek, there was a clip from Josh Johnson, who is a great comedian, that I wanted to share, where he talks about a friend who was playing with DeepSeek and an interesting output that came from that. So, let me show that.

Josh Johnson: Someone asked DeepSeek a question. I don't know what the prompt was, by the way. And I hope that this is fake, because it is stressful [laughter]. I hope that DeepSeek was just doing what OpenAI did, that it just stole it from a poet or a writer somewhere in history that I'm not aware of. But someone asked DeepSeek something. I don't know what it was. Something. And then DeepSeek said, I am what happens when you try to carve God out of the wood of your own hunger [laughter]. Oh. And I have not slept since [laughter]. I don't even know what that means [laughter]. But I know it sounds like it's in the Bible.

Perry Carpenter: Yeah. So.

Mason Amadeus: Oh my God.

Perry Carpenter: I thought he did a really good job of capturing some of the existential dread that comes with thinking about AI and these unbound models, and the kind of alien conversations that we can have with them sometimes.

Mason Amadeus: I am what happens when you try to carve God out of the wood of your own hunger. What on earth? Wow.

Perry Carpenter: All right, and speaking of our hungry wood, one of the things --

Mason Amadeus: [Laughter] Perry, what? Sometimes you reach into your bag of segues and it's, like, you dig all the way to the bottom and, like, pull the crusty bit out of the hem.

Perry Carpenter: I don't even know what I meant by that. But there was a great article in Infosecurity Magazine talking about some findings from Check Point. And what Check Point found, what they're talking about in this article, is that the DeepSeek model, which was from an independent organization in China that spun off from a financial services firm. And that's the thing that kind of freaked everybody out --

Mason Amadeus: Yeah.

Perry Carpenter: -- a few weeks ago. That model and Alibaba's Qwen model, which is another one, and we've mentioned Alibaba a couple of times, too. They've got some really great video models, but they also have an LLM. Those are becoming very, very interesting models for cybercriminals who are investigating malware creation. And it's because their alignment is different than the models that are based here in the US.

Mason Amadeus: And when you're talking about developing malware, you're talking about computer viruses, right? And, like --

Perry Carpenter: Yeah.

Mason Amadeus: -- like the actual programs, like, because we talk a lot about AI phishing, like, using AI to help --

Perry Carpenter: Right.

Mason Amadeus: -- power the bad side of it.

Perry Carpenter: Yeah, that too, right. Yeah. And I think this kind of blurs the line, so they're definitely interested in that, and they mention that in this as well. But when we're talking about a lot of the advances in AI that people are really interested in in this next phase, it's coding, right?

Mason Amadeus: Yeah.

Perry Carpenter: One of the reasons Anthropic is doing so well right now is because it's the preferred model in GitHub Copilot, the system GitHub has that can help you write code. And it's being used so extensively in that framework that for the rest of us, Anthropic's Claude is throttled a lot of the time, because demand is so high and because it creates high-quality code output. And the same thing for, like, Google Gemini 2.0 Flash and OpenAI's newer models; the o3 reasoning model is supposed to be really, really good at code. But all of those have alignment on top of them.

Mason Amadeus: They do. And I was just going to just chip in that I can say I use Gemini 2.0 Flash thinking model for coding help, and that one works extremely well.

Perry Carpenter: Yeah, they're all getting really, really good.

Mason Amadeus: Yeah.

Perry Carpenter: And coding is something that when you're forecasting potential job effects --

Mason Amadeus: Yeah.

Perry Carpenter: -- people are thinking there's going to be a lot less demand for coders, and it's going to be more high-level people who understand code at a deep level but can oversee thousands of agents that are creating code. Though, I should back up for a second: o3 tested as being, like, one of the top 50 coders in the world in the last test that was given with it.

Mason Amadeus: I, wow. Yeah.

Perry Carpenter: Yeah. So if you define your prompt well and have the patience to really work through it, you're going to get really, really good output. But even if you're kind of not doing your prompts well, you're getting to the level of, like, a mid-level coder.

Mason Amadeus: Oh, easily. It's definitely most powerful when you use it for things you could have done yourself, but you're choosing not to because then you know.

Perry Carpenter: You're able to go double check it. Yeah, for sure.

Mason Amadeus: But even without, sorry, I didn't mean to cut you off.

Perry Carpenter: And I think the same thing. No, and I think the same thing for written stuff, right? I find hallucinations and errors in written outputs from LLMs all the time that if I didn't know the material at all myself, I would have just let fly. And I think some coding issues are going to be the same, though coding in some instances either works or doesn't. And if you're creating malware, sometimes all you care about is if it works.

Mason Amadeus: Yeah.

Perry Carpenter: Which gets to one other thing that I'd like to share. There was a paper from Harvard that came out just about a year and a half or so ago. Here it is. "Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality". If we go to page 35 on this --

Mason Amadeus: Oh, yes.

Perry Carpenter: -- there is a chart that gets shown quite a bit. I actually reproduced a version of this in my book.

Mason Amadeus: Yeah, you did.

Perry Carpenter: And what it shows, yeah, I mean, what it shows is that if you pair somebody who's in the bottom half of a skill set with an AI system, automatically they're working at the level of a top-level performer, somebody who's been doing that job for decades and is considered an expert or a semi-expert. Which means that overnight, somebody who's merely curious about something and is willing to put in just, you know, the mildest amount of work is operating at a really, really high level. Then bring that back to malware, and what you get is that low-skill cybercrime users are able to use AI to create malware.

Mason Amadeus: Wow.

Perry Carpenter: So somebody who has the motivation, and the motivation is, you know, financial gain and control of systems and so on, they're motivated enough to do a little bit of work on the prompts, to state what they want and troubleshoot some of the outputs, and now they're working like a highly skilled malware developer. And that is the world that we are moving into right now. Check Point released a report on FunkSec in January. You've got to love the names of these groups.

Mason Amadeus: FunkSec, yeah.

Perry Carpenter: And what they say is that more recently they also saw a cybercriminal use Alibaba's Qwen to develop an infostealer, which is a type of malware that's very efficient at stealing credentials and personal data but doesn't require highly developed skills. So over and over and over again, what we're going to see is people going to large language models to help them create fairly decent malware. And when we look at the AI slop that's being generated and can just be spun out in the form of articles or advertisements and so on, I can see a lot of this malware being embedded in those bits of AI slop --

Mason Amadeus: Yeah.

Perry Carpenter: -- that are going to be filling the internet more and more. I can see that type of malware becoming a significant issue that we're going to have to deal with.

Mason Amadeus: I think you're right, and I think actually that is a vector that a lot of just internet users and even power users aren't aware of is those ad carousels that serve you ads from a network. If you submit, you know, pay for an ad slot but your ad displays something that loads bad code, that's a great way to infect people. And yeah, I think you're right, the advent of the slop websites filled with these ad carousels from less reputable ad carousel dealers, oh boy.

Perry Carpenter: Yeah, and even reputable ones, right? Because what they can do is that they can have a totally legitimate ad that gets approved, and then they can just swap out the code after it gets approved.

Mason Amadeus: Yeah.

Perry Carpenter: So, it's going to be more and more of a problem.

Mason Amadeus: And a lot of virus detection in, like, you know, Defender and other sorts of frontline detection relies on, like, hashes of files, right? And other things.

Perry Carpenter: Right.

Mason Amadeus: And if you're just getting bespoke, hand-sewn, even if kludgy, malware, that's not going to match any known hashes. That could bypass detection mechanisms until it hits a detection mechanism that is, like, looking for a behavior rather than an attribute, right?
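To make the hash-versus-behavior point concrete, here is a minimal Python sketch of how signature-based detection works. The known-bad hash set is a placeholder (it contains only the well-known hash of an empty file), not real signature data; real scanners layer many more checks on top of this.

```python
import hashlib

# Toy signature database. Placeholder entry: the SHA-256 of an empty file,
# standing in for a real known-malware hash.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def flagged_by_signature(data: bytes) -> bool:
    """Signature match: only catches byte-for-byte known samples."""
    return sha256_of(data) in KNOWN_BAD_SHA256

# A freshly generated ("bespoke") sample has a brand-new hash, so it slips past:
novel_sample = b"print('totally new payload')"
print(flagged_by_signature(b""))            # True: exact hash in the database
print(flagged_by_signature(novel_sample))   # False: new hash, no signature yet
```

This is exactly why per-victim, LLM-generated malware variants undercut hash-based detection: every variant has a hash nobody has seen before, so behavior-based detection has to catch it instead.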

Perry Carpenter: Yeah, and some of those do look for behaviors and they look for trust in the length of time that, like, a service has been set up. So if there's something that was created last week, it's going to have less trust associated with it. But it's like sock puppet accounts, right? If you have a sock puppet account that you created 10 years ago that you're letting age well, and then you all of a sudden shift some of the aspects of the profile and you use it to start spreading disinformation or malware or something like that, the aging of that works in the attacker's favor because it can be considered more trustworthy.
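As a rough illustration of the aging dynamic Perry describes, here is a toy trust-scoring sketch. The thresholds and weights are invented purely for illustration; no real security product scores trust this simply.

```python
from datetime import date

def trust_score(created: date, today: date, recent_behavior_change: bool) -> float:
    """Toy trust heuristic: older accounts/services score higher, but a
    sudden behavior shift on an aged account cuts the score back down.
    All numbers here are illustrative assumptions."""
    age_years = (today - created).days / 365.25
    score = min(age_years / 10.0, 1.0)  # saturate at 10 years of age
    if recent_behavior_change:
        score *= 0.3  # aged-then-repurposed sock puppets are the risky case
    return round(score, 2)

print(trust_score(date(2015, 1, 1), date(2025, 1, 1), False))  # old, stable: high trust
print(trust_score(date(2015, 1, 1), date(2025, 1, 1), True))   # old but repurposed
print(trust_score(date(2024, 12, 1), date(2025, 1, 1), False)) # brand-new service: low trust
```

The point of the sketch is the tension Perry raises: pure age-based trust rewards exactly the patient attacker who lets a sock puppet age before flipping it, so the behavior-change signal has to be weighted heavily enough to override the age bonus.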

Mason Amadeus: Yeah, oh boy, I can't wait.

Perry Carpenter: Absolutely.

Mason Amadeus: The brave new world we're entering into. I'm sure that we'll have some news involving those things for a future dumpster fire, but coming up imminently, we have a dumpster fire this week about how AI is also wrecking public libraries. So --

Perry Carpenter: Yay.

Mason Amadeus: -- as if bad malware ads weren't bad enough, now our libraries. Stick around. We'll talk about more of that in a second. [ Music ] So, low-quality books that appear to be AI generated are making their way into public libraries' digital catalogs. There was a great article by 404 Media detailing this, and the way it works is this. Public libraries primarily use two services for e-books, Hoopla and Overdrive. Obviously there's other stuff for the physical books, but we're talking the digital stuff, because fortunately AI is not fully operating printing presses yet. These two digital companies, Hoopla and Overdrive, and people are probably familiar with Libby, which is Overdrive's reading app, deliver all the e-books to the libraries, so they provide these massive catalogs. There's a key difference between Overdrive and Hoopla. With Overdrive, librarians pick and choose which books to license and lend. With Hoopla, you opt into their entire massive catalog, and then you just pay for whatever people choose to borrow from it. So, like, two different pricing models that have distinct advantages. With Hoopla, you get a massive catalog, and you only pay for what you use. Overdrive's more traditional. Hoopla is the one that's sort of the focus of the trouble here, although the issue does exist in Overdrive too. Hoopla's giant catalog, reading from the article, "which includes e-books, audiobooks, and movies, is a selling point because it gives librarians access to more for a cheaper price. On the other hand, making librarians buy into the entire catalog means that a customer looking for a book about how to diet for healthier living might end up borrowing Fatty Liver Diet Cookbook: 2000 Days of Simple and Flavorful Recipes for a Revitalized Liver," a book authored by Magda Tangy, who has no online footprint, an AI-generated profile picture on Amazon, and so on. And that is just, like, one example of a lot of the stuff that we're seeing.
So it's --

Perry Carpenter: Yeah.

Mason Amadeus: -- AI-generated slop books in all kinds of categories, mass-added to Hoopla's catalog.

Perry Carpenter: Well, and I think there's a ton of people creating AI books for Amazon Unlimited right now, too.

Mason Amadeus: Yeah.

Perry Carpenter: And so this is, like, a downstream ripple effect of that, right?

Mason Amadeus: Yeah, I think that's exactly it, because they probably have some deals with Hoopla as well as with the other publishing places where people are putting these books out. And it falls on the shoulders of librarians, who are already understaffed, to sort through and deal with this. They have to spend all of their time and resources sifting through, you know, this crap if they want to keep it out and keep people from borrowing it. And there are issues with a lot of sensitive topics, too. Again, avoiding politics, but one of the instances they pull up is that when you search for e-books about homosexuality and abortion, you end up with a lot of self-published religious texts and things like that, rather than, like, factual information or scientific stuff. It's similar to, you know, the slop problem on the internet like we were talking about. But in --

Perry Carpenter: Yes, people kind of flooding everything that's out there with opinionated or opinion-based works that are not necessarily vetted by experts. They're throwing it out there because they're driven by a cause, rather than really producing high-quality works that touch on a sensitive topic.

Mason Amadeus: Yeah, and sometimes it's blatant, in the sense that there was even one they mentioned where the title was, like, "AI monetiiization", spelled with three i's. Like, the lowest-effort stuff is making its way in there. And we call this kind of thing AI slop, like, that's the current term that's coming out, but librarians have been dealing with this problem for a while and have been calling it vendor slurry. Before generative AI, it referred to, like, the low-quality self-published books or summaries that people would create in other ways, you know, as a way to make a quick buck. Like, this has been a problem before AI, but AI has certainly ramped it up, because now, if I wanted to, I could generate a bunch of books this afternoon and upload them to Amazon.

Perry Carpenter: Right.

Mason Amadeus: That would just take me.

Perry Carpenter: And I see advertisements for courses on how to do that all the time.

Mason Amadeus: Yeah, I do too! That's the reason I don't like logging on to LinkedIn. I feel like that's most of them.

Perry Carpenter: I see them more on Facebook than anywhere right now. Like, you know, "Be your own AI content factory and generate e-books within 10 minutes", that type of thing.

Mason Amadeus: Like, this is the thing that is poisoning everything. And I'm not exactly sure how to put a concise pin in it, but it is the drive to, like, make money with minimum effort, which is an understandable human drive, because nobody wants to work to survive. We'd all like to just have money and do what we want whenever we want to.

Perry Carpenter: Yeah.

Mason Amadeus: So it's understandable how people do that, but the pursuit of making money through these methods is destroying public resources and muddying these waters in a way that's hard to reverse.

Perry Carpenter: I'd be interested in talking to somebody who's making a living doing this someday, not really to criticize them, but to understand. Like, when they're generating the books, using the LLM and building the format and everything, do they really feel like they're serving an actual niche? Maybe they've got some ignorance of the topic, because it could be a specialized topic, but they're pulling all that together and just kind of blindly trusting the LLM, or maybe they even feel like they're vetting some of it. I think there's probably a combination of mindsets there: Oh, I see a market need, I think I can pull together information that would be useful to people very fast, and I also think I can be a good marketer. So they're, like, splitting themselves into, like, five different roles and maybe not really being an expert in any of them. But --

Mason Amadeus: Yeah.

Perry Carpenter: I don't know the mind of the person that's doing that.

Mason Amadeus: Yeah, no, and I can't either, and I'm sure it's different case to case. I'm sure that there are people with, like, the best of intentions who end up putting stuff out that is sloppy, but given the amount of things that are, like, "Become Your Own AI Content Factory and Make Passive Income", I'd like to speak to a handful of people who do this and sort of get an idea of what they think and if they care. Because the impression that I get from at least the most egregious examples is there's no way that this person even read most of it. Because the moment you do --

Perry Carpenter: Yeah.

Mason Amadeus: -- you can immediately see how it's, like, nonsensical and bad. And, yeah, I don't really know what to do with it, because, like, what do we do about it? This is the problem we're all facing, right? Because like you said, sometimes it comes from an earnest place, and sometimes it doesn't.

Perry Carpenter: Yeah.

Mason Amadeus: And, like, even the librarians, the librarians they spoke to for this article were saying, like, We don't want to ban all AI stuff. Like, AI totally has a place in helping people write books and craft prose. It's just this, like, torrential onslaught of, like, stuff that nobody cared really to make, or make well.

Perry Carpenter: I think what they're getting at is that there almost needs to be something that's like the equivalent of the Apple App Store. You know how Apple has a number of hoops that people have to jump through in order to put something on the App Store versus, like, the Google Play Store that has way less effort to get something in, which is why the Google Play Store is riddled with malware and the Apple App Store faces that a lot less. I'm almost wondering if there should be some kind of walled garden like that for these, at least things that are being pushed into public spaces like the library, where people are having to clean up the mess.

Mason Amadeus: Yeah, I feel like in this instance, it would lie on Overdrive and Hoopla to be better and more selective about what they include, because --

Perry Carpenter: Yeah, there should be a vetting.

Mason Amadeus: They're providing a service to the libraries, right? Like, their service --

Perry Carpenter: Yeah.

Mason Amadeus: -- is to provide these books, and they're providing a bad service. So, like, but how do they even begin to fix it? Because they are not the ones publishing.

Perry Carpenter: They could use AI.

Mason Amadeus: Well, that's true. And actually I think they may have mentioned that in the article. I didn't clip that for our Notes page, but I thought I had come across something like "Using AI Tools to Detect AI-generated Content". It can be a crapshoot. You know, I've talked about --

Perry Carpenter: Which is also notoriously bad, right?

Mason Amadeus: Yeah.

Perry Carpenter: So that would have to be fixed. And then you'd also have to figure out: is the AI-generated content potentially useful? Because like they were saying, they don't want to ban it outright. So you have to see, is it useful for the stated purpose, or is it just taking up digital and mental space that would be better used for something else?

Mason Amadeus: I was even thinking, like, what if we did something similar to Anthropic's constitutional classifiers, but instead of harmful versus harmless, we looked at helpful versus useless? But then defining that, I mean, this is an issue they faced with harmful versus harmless too. At least there are some pretty easy delineations of things that are --

Perry Carpenter: Right.

Mason Amadeus: -- definitely harmful. But similarly, there's easy delineations of things that are definitely useless, but it's that, yeah, it's this filter issue.

Perry Carpenter: And does something actually have to be useful? Does being useless mean that it's bad?

Mason Amadeus: That too, because, like --

Perry Carpenter: Because there's a lot of things that are useless that you and I probably could just go and pay money for happily.

Mason Amadeus: Yeah, yeah, I mean, there's, like, loads of things that I do that are useless that are just for fun, you know?

Perry Carpenter: Right, yeah.

Mason Amadeus: And loads of content we put out, so, you know, where it is matters. And I think that's why this being in public libraries sort of hits a particular nerve for me just because, like, that is such a useful public resource that's already struggling to continue to exist. And yet in a lot of communities that are underserved in rural communities, public libraries are, like, the only place that some people go to get access to the internet and access to other kinds of information.

Perry Carpenter: Right. Right.

Mason Amadeus: And this is taxing that system. And so I hope Overdrive and Hoopla step up and do something better about this. I feel like that's the only place.

Perry Carpenter: Ultimately, the thing they have to solve for is signal to noise, right? You have to have an effective way to discern whether the thing you're trying to access is going to be useful for the purpose you're searching for. And if you can deal with that through a number of systems, then you're effectively cutting through the slop. So it's signal to slop ratio, I guess.
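As a purely illustrative sketch of what automated "signal to slop" vetting might look like, here is a toy Python heuristic over catalog metadata. Every check and threshold here is an assumption invented for the example; nothing about it reflects what Hoopla or Overdrive actually do, and a real vendor would need far richer metadata plus human review.

```python
import re

def slop_signals(title: str, author_has_web_footprint: bool, page_count: int) -> list[str]:
    """Return a list of toy heuristic flags for a catalog entry.
    Checks and thresholds are illustrative assumptions only."""
    flags = []
    if re.search(r"(.)\1\1", title.lower()):
        # Same character three times in a row, e.g. the 'iii' in "monetiiization"
        flags.append("garbled-title")
    if re.search(r"\b\d{3,4} days\b", title.lower()):
        # Template-y titles like "2000 Days of Simple and Flavorful Recipes"
        flags.append("template-title")
    if not author_has_web_footprint:
        flags.append("no-author-footprint")
    if page_count < 40:
        flags.append("very-short")
    return flags

print(slop_signals("AI monetiiization", False, 30))   # multiple flags fire
print(slop_signals("The Great Gatsby", True, 180))    # no flags
```

Even a crude filter like this shows why the problem is hard: each flag alone is weak evidence, legitimate books can trip them, and slop producers can adapt to any published heuristic, which is the same cat-and-mouse dynamic as spam filtering.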

Mason Amadeus: Yeah. And to close out this segment, I'll end with a similar sentiment from the librarians they talked to in the article from 404 Media, which we will link to with an email newsletter referral link, because otherwise you can't read the whole thing without paying. They say, "All librarians are asking for at this point is that Hoopla explain exactly how its selection process works, and hopefully improve it." And a quote from Sarah Lamdan, deputy director of the American Library Association, says, "Platforms like Hoopla should offer libraries the option to select or omit materials, including AI materials, in their collections. AI books should be well identified in library catalogs so it's clear to readers that the books were not written by human authors. If library visitors choose to read AI e-books, they should do so with the knowledge that the books are AI generated." And I think that misses the mark for the reasons we talked about.

Perry Carpenter: Yeah.

Mason Amadeus: A simple tag like that isn't it, and finding something that cuts through that and actually determines signals and noise is more it. But I don't know that anyone has the perfect idea for that yet.

Perry Carpenter: You know, a lot of academic publishing relies on, like, the anthology model, right? They'll have, you know, 10 different academics each submit a chapter to a work on topic X, and then the people who get their names on the cover are the ones who aggregate and edit all of that. And I'm wondering if there's a similar model for AI-generated stuff, where in the disclosure, the quote-unquote author doesn't claim to be an author. They claim to be an editor of a resource, somebody who has pulled together things they've not necessarily thought of on their own, but that they've decided are worth a little bit of time and investment on their part to get to the rest of the world. And I'm wondering if there are some kinds of disclosures like that that should be included.

Mason Amadeus: Yeah, we need verbiage for that, don't we? We need language for that.

Perry Carpenter: Yeah, we need a way to understand that.

Mason Amadeus: We need a shared understanding and at this moment in time, we're not super great at sharing our understandings.

Perry Carpenter: Nope.

Mason Amadeus: I guess that wraps us up for this week, doesn't it?

Perry Carpenter: I think we're done.

Mason Amadeus: Yeah, we have some voicemails that have come in that hopefully we'll be addressing soon. There's a couple things we'll have to follow up on. Some people have sent us some leads for cool stories. Please continue to do that. Sayhi.chat/FAIK or send us an email. Join our Discord server.

Perry Carpenter: And if you're listening, check out the YouTube channel. If you're watching on YouTube, check out the audio podcast. If you like audio, go ahead and subscribe to that. And we will see you next time.

Mason Amadeus: We'll see you next week. Paperclips. Keep it. Keep it. Keep it. Just keep it.

Perry Carpenter: Keep it AI generated. There we go. [ Music ]