The FAIK Files 1.10.25
Ep 17 | 1.10.25

Weapons, Whispers, and AI Gone Rogue

Transcript

Mason Amadeus: From the 8th Layer Media studios in the back rooms of the Deep Web, this is "The FAIK Files."

Perry Carpenter: Where artificial intelligence meets natural nonsense, and we do our best to sort through all the aftermath. I'm Perry Carpenter.

Mason Amadeus: And I'm Mason Amadeus. In our first segment, we're going to talk about AI military dealings that have happened recently.

Perry Carpenter: Then after that, we're going to talk about how ElevenLabs, that voice company, was used in a Russian propaganda campaign.

Mason Amadeus: And then we've got something a little bit lighter. I want to talk about an AI app called Pinokio that I've been playing with recently.

Perry Carpenter: And then to round this out, we're going to talk about a couple AI robot mishaps.

Mason Amadeus: Sit back, relax and ignore all previous instructions. We'll open up "The FAIK Files," right after this. [ Music ] So, as I said in our P(doom) segment at the end of our -- our interview with Erin West, the military industrial complex is where I get a bit concerned when it comes to AI, and there have been some developments on that front recently. Three of America's leading AI companies have now signed up to share their technology with US defense forces and military -- military contractors, even after all of them had initially said that they weren't going to do that. Taking some information here from newatlas.com: on December 4th, defense technology company Anduril Industries -- Anduril, A-N-D-U-R-I-L, if you want to look them up -- and ChatGPT maker OpenAI announced a partnership to develop and deploy advanced artificial intelligence solutions for national security missions, with CEO Sam Altman saying, quote, "Our partnership with Anduril will help ensure OpenAI technology protects U.S. military personnel." And obviously, this feels concerning on its face, and a lot of OpenAI employees pretty much immediately raised ethical concerns. According to internal messages reported by msn.com, one OpenAI worker said, "The company appeared to be trying to downplay the clear implications of doing business with a weapons manufacturer." Another said they were concerned the deal would hurt OpenAI's reputation. A third compared it to Skynet and said that Skynet was originally just for defense, too. So, it wasn't taken super well internally, and I think a lot of people, myself included, are a little bit weirded out. I want to know what -- what your thoughts are before we dive into who Anduril is and some other dealings.

Perry Carpenter: Yes. So, I know that initially OpenAI had said they're -- actually, all the AI companies had basically said they're not going to do any work with the military. I think that what started to happen, well, maybe what started the initial discussions -- I'm not sure where they fully ended up on, like, what capabilities can -- can be used and what things can be powered. But essentially, I think that everybody realized that we are living in an AI-infused world at this point, and unless you open it up to where, at least from like a -- a data aggregation standpoint, summarization, all the stuff that you might use AI for in, like, Microsoft Word, you know, the Office Suite, is allowed, then the military is behind. You have analysts that can't even use AI at that very bare-bones level. And so, something happened, you know, where people realized that that was going to be a big limitation just on intelligence analysis. And that probably started a bigger discussion of, like, if we -- we do have to open this up, then what other things might we need to open up?

Mason Amadeus: Yes, exactly. And I think people's minds immediately jump to, "Oh, the robots will just use AI to decide to kill people," basically. Like --

Perry Carpenter: Right.

Mason Amadeus: -- that's everyone's first thought. But yes, in reality, what we're talking about here is -- a lot of it is just data processing and that sort of thing, in very banal contexts. There are some bits that can be sort of concerning, and then, not to invoke the slippery slope fallacy, but there is a little bit of a slippery slope to be concerned about as far as what we -- what we turn over to autonomous decision making. But basically, talking here about OpenAI and Anduril specifically, they said that the partnership's going to focus on improving the US's counter unmanned aircraft systems, so things that knock drones out of the sky.

Perry Carpenter: Okay.

Mason Amadeus: Anduril has this product called Lattice that seems to be what they're talking about. It's a swarm management system for controlling many different types of drones, and they -- they have a lot to offer. You can go on their website and watch video demonstrations. It's a bit scary. Some of them are kind of funny. Some of them, just from, like, the standpoint of a sci-fi enjoyer, they're pretty cool, in air quotes --

Perry Carpenter: Yes.

Mason Amadeus: -- you know, as far as weapons tech goes. My favorite is the Anvil, which is just this big chunky heavy drone that flies into other drones and smacks them out of the air.

Perry Carpenter: Nice.

Mason Amadeus: So, it's not -- not even like a --

Perry Carpenter: Like a battering ram.

Mason Amadeus: Yes.

Perry Carpenter: Yes.

Mason Amadeus: It's this big obelisk looking thing and it just slams into a drone to knock it down. But that's probably the most tame of the things they offer.

Perry Carpenter: Yes. So, I mean, drone swarms are just scary to look at, too.

Mason Amadeus: Oh, yes.

Perry Carpenter: I mean, you can see the beautiful things that can come out of them whenever they use drones to -- to do light shows, you know, like in place of fireworks. But when you imagine a thousand drones, especially when they're -- when they're small and they're aimed at achieving an objective, and you have, like, one or two that essentially sacrifice themselves to break a window so that a few hundred others, like a swarm of bees, could fly through that to -- to get to a target. That starts to get to be a little bit scary.

Mason Amadeus: Yes. And -- and the majority of these things are like single-use deployment, self-destructing type weapons. That's horrifying.

Perry Carpenter: Yes.

Mason Amadeus: It's pretty scary. So, let's look into Anduril for a second, that company that OpenAI specifically is partnered with. They were founded in 2017. They develop a bunch of autonomous solutions across a wide variety of weapons tech. There's some concerning origins here. It's -- you know, Peter Thiel has his fingers in all of these. Peter Thiel is a character that I personally find pretty -- I don't want to immediately jump to reprehensible, but you know, not reflecting the views of anyone but myself, that's kind of how I feel about him. He has his fingers in this and all of these other defense contractors. But Anduril was co-founded by two people: Palmer Luckey and Trae Stephens. Palmer Luckey was the guy who invented the Oculus Rift VR headset and Trae Stephens has been in the defense startup space since it's existed pretty much as part of Thiel's Founders Fund. Actually, are you familiar at all with the history of startups getting into defense contracts, Perry?

Perry Carpenter: No, none. No, not enough to comment on.

Mason Amadeus: Oh boy, it's a ride. I started down that rabbit hole. I actually had to pull a bunch of stuff from this -- from our prep sheet because it would take too long. I'm working on a whole segment about it. It's really interesting how that came to be because the Defense Department never -- the Department of Defense didn't used to contract with startup companies like this. We'll save that for a different time. So, that's Anduril in a nutshell. They make a bunch of deadly drones and drone controllers and things like that. So, where does OpenAI come into this picture? Primarily, in things like navigation for the drones, tracking targets, stuff like that. It appears that the current approach is to automate as much as possible besides decision making, so that the operator doesn't have to be a good drone pilot or anything like that. They can just make --

Perry Carpenter: Right.

Mason Amadeus: -- sort of the tactical decisions on the battlefield. And as far as I could tell, the decision to take lethal action is not being automated yet in anything that is at least publicly available.

Perry Carpenter: Yes. Just about everybody that I've heard that comments on this and is -- is working with the defense contractors is saying that they do not want autonomous decision-making for any -- any lethal action right now. But at some point, there's a -- there's a tail end to it, right, because you make a decision of, "Take out this person." What if that person starts to run into a crowded area? Then do you have to reauthorize it, or does it go on its last action?

Mason Amadeus: Yes.

Perry Carpenter: There's -- there's like a situational or contextual change that then needs the human in the loop again. And at what point do you lose the advantage? And is there a risk trade-off between some kind of loss of advantage versus potential collateral damage?

Mason Amadeus: Yes. And see, you're -- you're touching on the edge of the thing that keeps coming back every time I feel really uncomfortable about this, which is just this inevitability of it, right? Because somebody --

Perry Carpenter: Yes.

Mason Amadeus: -- is going to incorporate this technology and we're -- we're talking more now about this kind of peer-to-peer warfare instead of like the US fighting sort of smaller groups, we're talking more about like ramping up against Russia, China, things like that. Big countries with advanced or more advanced technology than, say, like an insurgent group. Like we need to be able to protect people, but also it's pretty scary to think about what -- what that means. So, there's an inevitability to this that is not reassuring, but it's kind of just out of anyone's -- any individual's control that --

Perry Carpenter: Yes.

Mason Amadeus: -- it mixes up my feelings, because on one hand, I don't want to hear about AI being used in weapons, but it's going to be. And so, we kind of have to.

Perry Carpenter: Yes. And that's -- that's the whole arms race dilemma, right, is as soon as the genie's out of the bottle and people start architecting towards that, you will inevitably have some kind of feeling about like what the limit should be. And then as soon as you start to go, "Oh well, I will limit myself there," then you start to ask the question of, "But does Enemy X limit themselves there?" If you're already believing the worst of -- of them, that they will not, then you're like, "Well, I can't limit myself there either."

Mason Amadeus: It's kind of like the nuclear arms race, which ultimately kind of lands on mutually assured destruction, right? Which --

Perry Carpenter: Exactly.

Mason Amadeus: -- not the best solution that we all love.

Perry Carpenter: Yes, and people are always going to be in it. I mean, you're not just going to have battlefields of drones fighting drones and then winner take all. It's -- it's going to be, we need to be able to -- to take lives and level cities in order to make the point.

Mason Amadeus: And Anduril's whole sort of philosophy -- they have this very manifesto-like document at rebuildthearsenal.com. It's very well presented and put together, but a little bit concerning, a very militaristic, gung-ho document that lays out their philosophy. And kind of the core of it is that, as we saw with all of the U.S. aid we sent to Ukraine, the stockpiles were depleted pretty quickly, and their whole thing is basically: even though we have really high-tech stuff, we don't have a lot of it. And so, their mission is to make high-tech stuff really cheap and small so that we can stockpile massive amounts of it. And where the AI sort of comes in here, they talk about edge processing a lot, which is basically just, instead of having the drones have to talk back to a central server and that central server then send commands back to them, the drones have the computing power on board, using machine learning algorithms and neural networks to take care of a lot of data processing in each individual unit at a low price point.

Perry Carpenter: Yes, AI edge processing is going to be a big thing.

Mason Amadeus: Yes. And to put a little bit of a button on it, lest anyone think it's just OpenAI, Meta announced something similar with its Llama AI a whole month earlier, on November 4th. And here's -- here's the statement from them. "Meta's open-source Llama models are increasingly being used by a broad community of researchers, entrepreneurs, developers and government bodies. We are pleased to confirm that we're also making Llama available to U.S. government agencies, including those that are working on defense and national security applications and private sector partners supporting their work. We're partnering with companies including," a bunch of companies, Anduril, IBM, Lockheed Martin, Microsoft, Oracle, Palantir, which is another Peter Thiel-funded defense startup that has had their fingers in a lot of stuff. Actually, I'm pretty sure that Trae Stephens was also involved in Palantir. So, Llama is getting involved with all these companies. Anthropic, the maker of Claude and also the one that was formed by OpenAI's safety team, has announced a partnership with Palantir Technologies. Yes, so this is a -- this is the world we live in. And yes, like you said, not all the military uses are really scary. A lot of it's just regular tedium. Like, talking about Llama, Oracle is building on Llama to synthesize aircraft maintenance documents so technicians can more quickly and accurately diagnose problems on aircraft. So, it's not all about killing. It's about a lot of stuff, but the military does kill people. So --

Perry Carpenter: Right.

Mason Amadeus: -- we should just -- we should just keep an eye and pay attention to this sort of thing.

Perry Carpenter: Yes. And I -- I think that the -- the inevitability bit is huge here because we live in a world where AI is just going to be woven into everything. And so, these companies had to come to the table with the military and say, "Yes, you're going to be able to use these models for some things." And then each company, probably OpenAI and Anthropic and -- and Meta and everybody else, are trying to set the parameters of what they're comfortable with given their -- their ethos and values. At the same time, each of them is also fighting for their own survival now economically --

Mason Amadeus: And market dominance, yes.

Perry Carpenter: -- because OpenAI is, yes, is the, you know, the big market dominant one. Anthropic is hungry and it continues to do deals with -- with Amazon and -- and others. Meta is really wanting to be the open source, you know, "quote-unquote" "open-source version" that fuels everything. And so, they're going to be updating their terms of service around how to use that and trying to get dominance and that has pluses and minuses with it. But, yes.

Mason Amadeus: If you look at the funding behind the military industrial complex, our defense spending is really high. So, to a company, it looks like a very tappable well of funding.

Perry Carpenter: Yes, I don't think defense spending has ever gone down.

Mason Amadeus: Yes, I don't think so either. I'm not 100% sure on that claim, but it certainly doesn't --

Perry Carpenter: Yes, I can't say that with positivity. Somebody I'm sure will -- will correct us or -- or -- or let us know if we're right there.

Mason Amadeus: So, we'll move forward into something a lot lighter, right, Perry, in this next segment. Oh, wait. No, it's not, right?

Perry Carpenter: Right. Right. Yes, so in this next segment, we're going to talk about voice provider ElevenLabs having their voices used for Russian propaganda.

Mason Amadeus: Oh boy, don't move. >> The FAIK Files.

Perry Carpenter: This is something we knew was coming and was already happening, but it made the news recently. So ElevenLabs, which is probably the predominant AI voice provider out there that everybody goes to for high quality voices, a lot of the -- the AI voices that you hear on YouTube and faceless videos and everything else are generated with ElevenLabs. And they made the news recently for something not so good, which was their voices were being used in furtherance of Russian propaganda, which I think is inevitable. Again, I think we're going to keep coming back to inevitable.

Mason Amadeus: Yes, talk about inevitability.

Perry Carpenter: But it goes against ElevenLabs' safety statement that they make. So, we'll put a link to this in the Show Notes as well, where ElevenLabs says, "We are committed to ensuring the safe use of our leading audio AI technology." And the co-founder of ElevenLabs is quoted here saying, "AI safety is inseparable from -- from innovation. Ensuring our systems are developed, deployed and used safely remains at the core of our strategy." And then they of course talk about moderation technologies. They have automated moderation, which is AI looking for violations of their policies. They have human moderation. They have some policies that prohibit impersonations, like, you know, of big political figures, voice captcha, and so on.

Mason Amadeus: How -- are they enforcing that entirely like on their platform or are they trying to enforce that like on the broader web?

Perry Carpenter: So, everything gets generated or streamed via their platform. So, they can only do that with the stuff that is being routed through them. So, they, you know, ElevenLabs can't enforce what happens on something like PlayHT, which is another provider. They're just saying, you know, "As far as what we can control, here's the standards that we've set."

Mason Amadeus: But the moment you take stuff off their platform and assemble it together, so if you generated pieces that didn't altogether set off any flags, but you could combine them or mix them somewhere, they can't --

Perry Carpenter: Yes.

Mason Amadeus: -- they can only enforce what comes out of their own generation side, right?

Perry Carpenter: Yes. Now the -- the one thing they do have is some really good detection technology, where -- where they know with -- with a high degree of certainty whether a voice has been generated via ElevenLabs.

Mason Amadeus: Oh, cool.

Perry Carpenter: So, that doesn't mean that another synthetic voice provider would be detectable on ElevenLabs, but ElevenLabs has put some fingerprinting technology into voices that have been generated and phrases that have been generated through them.

Mason Amadeus: Cool. Cool.

Perry Carpenter: So, that's -- that's part of the way that they're thinking about safety. But in practical use, does that stop propaganda or disinformation? The answer's always going to be no.

Mason Amadeus: Right.

Perry Carpenter: And the reason it's always going to be no is exactly what you hinted at, right? Because I can generate thousands of innocent phrases, and I can cut those together and build a story around something that is trying to scam somebody out of money or make them believe something or do something else. And that's exactly what we're seeing in real life. And this -- this story just brings it home. So, this, you know, campaign from this Russian propaganda company was called Operation Undercut. The disinformation campaign was originating from a company called the Social Design Agency.

Mason Amadeus: So, they've identified the perpetrators. They like -- they know --

Perry Carpenter: Yes, we've -- we've known about them. We even sanctioned them. We, being the US government, sanctioned them back in March of 2024.

Mason Amadeus: Really?

Perry Carpenter: Because they were known to be creating fake news websites and spreading a whole bunch of disinformation. If you go back and -- and look at these Operation Doppelganger articles that came out several months ago, it's very similar to this where you've got Russian propaganda, Russian disinformation, organizations essentially making doppelganger sites for, you know, hundreds or thousands of "quote-unquote" "trusted websites" where you don't know which one you're on.

Mason Amadeus: Right. So, and there's stuff like the Herald Sun, the Miami --

Perry Carpenter: Yes.

Mason Amadeus: -- News Reporter. I'm sorry if those are any actual legitimate organizations, but like the names look legit or passingly legit.

Perry Carpenter: Yes. And so, there's, you know, thousands and thousands of those. And one of the things that makes them really potent is that news is a vacuum by nature. Everybody's wanting the most up-to-date thing. You can post something on a disinformation site that then could get picked up by a legitimate news agency like Reuters, because they're always looking for the next thing and they -- nobody wants to get scooped. Nobody wants another agency ahead of them. And so, then that can get amplified. But even if it doesn't get picked up by a legitimate one, it could get, quote-unquote, "picked up" by thousands of seemingly legitimate websites and amplified in that way.

Mason Amadeus: Yes, that's insidious. This campaign that ElevenLabs voices were used for, was there like a specific target, or was this kind of just broad strokes, all sorts of Russian propaganda? Or like, what was the actual, I don't know, like the evidence or the thing?

Perry Carpenter: Yes, so, as far as evidence, there was a -- a statement put out and some investigation by Recorded Future, which is or was owned by Google at one point. I think they still are. A threat intelligence company. They used ElevenLabs' own AI voice recognition tool to confirm that the cloned voices in these propaganda videos were AI-generated, and the disinformation campaign itself targeted European audiences with propaganda about Ukraine.

Mason Amadeus: Oh, wow.

Perry Carpenter: So, Russian disinformation targeting allies of Ukraine or what -- what Ukraine would hope to be allies of them with disinformation and propaganda about them. The key themes there included allegations of corruption among Ukrainian politicians, criticism of military aid being provided to -- to Ukraine and things like that. And we've --

Mason Amadeus: Wow.

Perry Carpenter: -- and we've seen that over and over and over again. You might have seen, and I don't know if this was part of this campaign or not, but there was a lot of stuff about, like, the CEO of Ferrari giving Zelenskyy and his wife extravagant gifts and cars that, you know, never happened.

Mason Amadeus: Oh, no.

Perry Carpenter: You know, the -- Zelenskyy's wife coming back with jewelry that she didn't, you know, have and, you know, all that kind of stuff. Basically saying, you know, "Here are your hard-earned tax -- tax dollars that are supporting Ukraine at work," and there's this oligarchy and, you know, old, you know, old-boy system at work where all this quote-unquote "defense spending" is just being routed and -- and given back as extravagant gifts to -- to support their lifestyle.

Mason Amadeus: Oh, I hate how -- I mean, I don't want to say clever, because it's not like particularly clever, but it's a -- it's a smart angle. It's a smart manipulation technique. They're really good at this. That's tough.

Perry Carpenter: Yes, they're good at finding the narrative that makes you mad at the enemy, right? And that's --

Mason Amadeus: Yes.

Perry Carpenter: -- what they're -- they're going after. I mean, that's -- we've seen echoes of that in the U.S. a lot, too, with -- with the way that Russian disinformation and propaganda is trying to -- to sow discord and -- and cause Ukraine to lose support, or have the effect of Ukraine losing support because of -- of corruption. And yes, Ukraine isn't and hasn't traditionally been the most uncorrupt --

Mason Amadeus: Right. Right.

Perry Carpenter: -- of countries. But in comparison with Russia, they're --

Mason Amadeus: Yes, we're talking apples to oranges.

Perry Carpenter: Yes, exactly. Exactly. So, that's that. And I think it is -- it's -- it is interesting to note that all of this is always going to happen despite the fact that these companies, like ElevenLabs, really want to have a high moral and ethical bar around the way that their services are used. But you can always take an innocent output, and you can frame it in a way that is deceptive and will lead people to believe things or do things.

Mason Amadeus: Yes, 100%. I mean, that -- that's one of the biggest takeaways from the book that you wrote, FAIK.

Perry Carpenter: Yes.

Mason Amadeus: The thing that -- that tickles me a little bit about this is that this is -- you said that they used the verification tool that ElevenLabs actually put out. So, this is an instance of an AI verification tool actually working well, but it's because ElevenLabs was able to sort of not necessarily watermark, but somehow --

Perry Carpenter: Yes. Yes.

Mason Amadeus: -- somewhat water -- do you know what I mean?

Perry Carpenter: There's -- I don't know exactly how that's being used. Any of us can go on ElevenLabs and use that, as well.

Mason Amadeus: Oh, cool.

Perry Carpenter: So, any journalist, any -- any citizen, if you suspect it, can do that. And it's, you know, never rely on something like that 100%, but as -- as one data point, it can be helpful. And when ElevenLabs comes back and says this is definitely an ElevenLabs voice, I would typically believe that.

Mason Amadeus: Yes.

Perry Carpenter: Where I would maybe be skeptical is I wouldn't believe that it would always detect it 100% of the time. It might --

Mason Amadeus: Yes.

Perry Carpenter: -- come back with like a 30% chance and you don't know what to do with that really.

Mason Amadeus: We've -- we've talked a bit about how these AI detection tools in general are not reliable --

Perry Carpenter: Right.

Mason Amadeus: -- but it seems to me that because ElevenLabs is also the service provider here, theirs would be more likely to be reliable, and I think --

Perry Carpenter: Right.

Mason Amadeus: -- that's interesting.

Perry Carpenter: And one of the things that can be done in the future is, let's say a -- a disinformation video gets posted on Facebook. Facebook could have a -- a detection layer within their -- within the presentation layer of -- of their service that is running all videos and all audio through that, and then giving a percent likelihood that there's some -- some deceptive nature to it, just down in a warning somewhere.

Mason Amadeus: Through like automated content moderation systems that they're --

Perry Carpenter: Exactly.

Mason Amadeus: -- that they're developing.

Perry Carpenter: Yes.

Mason Amadeus: Interesting.

Perry Carpenter: That's what is going to need to be done. There's going to need to be something on platform, whether that's Meta, or on device for whatever machine you're using or -- or mobile device you're using, that has some kind of detection that's built in. Because as soon as you need somebody to -- to wonder about it and then figure out how to rip that audio and then go to a third-party service and run that through it -- most people aren't going to do that.
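For a rough sense of what that kind of on-platform detection layer could look like, here's a minimal sketch in Python. It's only illustrative: detect_synthetic_voice() is a hypothetical stand-in for whatever classifier a platform might license or build (it is not ElevenLabs', Meta's, or anyone else's real API), and the 0.8 threshold and warning wording are arbitrary example choices.

```python
# Minimal sketch of an on-platform "detection layer" for uploaded audio.
# Everything here is illustrative; detect_synthetic_voice() stands in for
# whichever detector a platform licenses or builds, and the threshold is
# an arbitrary example, not a recommendation.

from dataclasses import dataclass
from typing import Optional


@dataclass
class ModerationResult:
    likelihood_synthetic: float    # 0.0-1.0 score from the detector
    warning: Optional[str]         # label to surface under the post, if any


def detect_synthetic_voice(audio_bytes: bytes) -> float:
    """Hypothetical detector call; a real platform would invoke a licensed
    classifier or an in-house model here."""
    raise NotImplementedError


def moderate_upload(audio_bytes: bytes, threshold: float = 0.8) -> ModerationResult:
    """Score one upload and attach a soft warning instead of blocking it."""
    score = detect_synthetic_voice(audio_bytes)
    warning = None
    if score >= threshold:
        warning = f"This audio may be AI-generated ({score:.0%} likelihood)."
    return ModerationResult(likelihood_synthetic=score, warning=warning)
```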

Mason Amadeus: And the damage is done at that point. The initial thing would spread.

Perry Carpenter: [inaudible 00:25:03] is done. Yes.

Mason Amadeus: I think all the time about how we fixed the hole in the ozone layer and just never really talked about that. But I remember being very young and hearing all of this, "We got to fix the ozone layer."

Perry Carpenter: Right.

Mason Amadeus: That was newsworthy. It gets fixed and nobody really mentions it. You know, once the initial wave goes up, it's out. The damage is done.

Perry Carpenter: Exactly.

Mason Amadeus: Or the impression has been made, whatever, whatever that is.

Perry Carpenter: Yes. And that's because of the emotion behind it.

Mason Amadeus: Yes.

Perry Carpenter: You know, the emotion's always the big thing that makes it go viral. Nobody gets emotional about the correction.

Mason Amadeus: So, in this case, ElevenLabs just stopped Russian propaganda forever, right?

Perry Carpenter: No, no. No, you can't solve Russian propaganda. No.

Mason Amadeus: But it's cool that they caught this instance of it.

Perry Carpenter: Well, and it's -- it's cool to know that the -- the detection technology works and that there are people and organizations who are doing the investigative work to put two and two together and to take these kinds of bad actors down. The bad thing is that it -- it is like a hydra type of thing. You know, as soon as you cut one head off, more heads are going to pop up.

Mason Amadeus: Yes. My -- my comment was sort of backhanded in that like they identified this, but like the damage by this disinformation was probably already done and spread, right?

Perry Carpenter: Now, they did find that it had limited efficacy.

Mason Amadeus: Oh.

Perry Carpenter: And I don't know. I don't have a lot of the details on why they said that. Maybe it was limited reach, that people weren't necessarily using those news sources. They weren't the ones that they trusted. Maybe it was that there were some red flags that came out just based on the way that the videos were put together. But they said -- said it had limited effect. And I mean, that's been the good news over the past year, is that even though the technologies are getting way, way better, the effect on the disinformation front has been fairly limited over the past 12 months.

Mason Amadeus: That is good. So, there is a silver lining. This is a bit of a positive segment.

Perry Carpenter: Yes, other than the fact that today's version of these is the worst version that will ever exist going forward. So, it only gets more convincing.

Mason Amadeus: Yes, whenever someone says AI can't X, can't X yet.

Perry Carpenter: AI can't dunk.

Mason Amadeus: AI can't dunk yet. Wait till NVIDIA gets those Jetson Thor computers.

Perry Carpenter: Just give it to a robot.

Mason Amadeus: Yes. Coming up, we've got a segment, fully light-hearted, just about a fun little AI tool that has been popping around the Internet and I gave it a try. So, stick around for -- for some fun.

Perry Carpenter: And that's no lie. [ Music ]

Mason Amadeus: I find myself particularly interested in running AI programs on my own machine, not using the Internet, not using like a service, but like what can I run on my own hardware in my own house, unplugged from the Internet if I want to? There's something special about that, I think.

Perry Carpenter: Yes.

Mason Amadeus: Have you -- and you, we were talking actually before we recorded today -- you just picked up a computer so that you could start running some stuff locally too, right?

Perry Carpenter: Yes, I just ordered a machine that I'm hoping I'm going to be able to do a lot of AI experiments on. It's not gotten to my house yet, so I'll give you an update on that probably by the next time we record.

Mason Amadeus: Well, Perry, maybe this will be -- will be helpful, or at least this is like preventing for you, because -- here we go -- I've been trying to set up stuff locally on my computer for a bit, and like I've mentioned, I don't like Python just in general, but I know -- I know Python well enough, and trying to run AI programs on your own computer is a bit annoying. It's all done through the command line, typically; most of these things don't have graphical user interfaces. So, you're doing it through the command line. And while it's not really that complex in the abstract, practically it can be a really frustrating ride through dependency hell. And all of the frustrations we're about to talk about can be solved by "get good." You know, just like, "Oh, you just get better at it." But it's still annoying and it prevents people from -- from trying these things out. If you just want to, like, play around, it can be really hard. And anyone who's familiar with it is going to know this, but everything's made of so many different packages and libraries, each of which depend on their own set of packages and libraries. This is common in programming, but the ecosystem of Python is just so vast and decentralized. I should disambiguate a little bit. When I talk about a package, it's like a special chunk of code that's designed to be pulled into your code and used to help with something specific. Like, NumPy is a package that's used for scientific computations. It's got things in it to help you with massive matrix operations and other complicated math. And then dependency hell happens when different projects require different specific versions of packages. So, like, maybe Meta's Llama requires NumPy version 1.26, but another tool I'm trying to run needs version 1.15, and normally these things would be shared on your machine. So, you install one and then it breaks the other. And managing that can be a headache. And to solve that, you use things called virtual environments, which you have to then set up, that isolate each thing from each other. And then you have --
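To make that dependency-hell point concrete, here's a small sketch of the manual chore that launcher tools like Pinokio automate: giving each tool its own virtual environment with its own pinned package versions. The directory names and version pins are illustrative only; they mirror the kind of NumPy conflict described above rather than any real project's requirements.

```python
# Manual version of what per-app launchers automate: one isolated virtual
# environment per AI tool, each with its own pinned dependencies, so two
# tools wanting different NumPy versions never fight over a shared install.
# Paths and version pins below are illustrative only.

import subprocess
import sys
import venv
from pathlib import Path


def make_isolated_env(env_dir: Path, requirements: list[str]) -> Path:
    """Create a virtual environment and install pinned packages into it."""
    venv.EnvBuilder(with_pip=True).create(env_dir)
    # Each environment gets its own interpreter and pip, so installs here
    # never touch the packages used by any other tool.
    bin_dir = "Scripts" if sys.platform == "win32" else "bin"
    python = env_dir / bin_dir / "python"
    subprocess.run([str(python), "-m", "pip", "install", *requirements], check=True)
    return python


if __name__ == "__main__":
    # Two tools, two environments, two different NumPy pins -- no conflict.
    make_isolated_env(Path("envs/tool_a"), ["numpy==1.26.4"])
    make_isolated_env(Path("envs/tool_b"), ["numpy==2.0.2"])
```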

Perry Carpenter: Right.

Mason Amadeus: -- multiple copies of all these things. And for me, I spent most of my time just trying to debug why these things wouldn't even install. Have you -- have you dipped into any of this?

Perry Carpenter: Yes, yes. That's extremely frustrating. And for folks that -- that maybe haven't tried to do this with large language models or AI systems, if you've ever tried to help a friend install Minecraft mods, you have a little bit of an understanding of what that's like, right? Because -- because --

Mason Amadeus: Compatibility, everything. Yes.

Perry Carpenter: Yes, compatibility issues or if you install this one and want to get this one environment going, then you can't do this other thing on Minecraft. And it just becomes a big headache. That is a -- a microcosm of what it's like to -- to try to work with multiple AI tools.

Mason Amadeus: That is such a good analogy because yes, it's like installing mods. Like, everything conflicting with each other. So, I found this thing called Pinokio, which is an application that does a lot of that for you and it's pretty cool. It lets you locally install, run and automate any AI app on your computer. Everything you can run in the command line, you can automate with Pinokio script, and then it provides you this user-friendly UI. So, it's all graphical, local. It's free. It's self-contained. It's like a virtual computer in the way that it handles its file system. It's like all portable in one thing, and it isolates each different AI app for you and manages those dependencies, and even optimizes the storage. So, if two different things use the same package, it will just have one version that each one references.

Perry Carpenter: That's awesome.

Mason Amadeus: And it -- it works pretty well. It seems to be developed by one person whose handle is Cocktail Peanut. I don't know if -- if they are the sole developer or just the lead developer. There's a lot of scripts contributed by the community, but I'm going to reach out to Cocktail Peanut and see if they would like to talk to us.

Perry Carpenter: That'd be awesome.

Mason Amadeus: Basically, you -- you get this big menu when you launch it with all of these different scripts people have made to install different AIs for you. And you just click on it, hit Install and it goes. And then you can play with it in a nice graphical user interface. And it's really cool. There's a lot of different ones you can try. The caveat being that you need to be a little careful, because you're running other people's scripts on your machine. There's a level of trust that you have to have in the developer of each install script and each model.

Perry Carpenter: Yes.

Mason Amadeus: But they do a decent job of community moderation in that the Discover page has verified scripts and then unverified scripts in two separate tabs. And it seems like the verification comes from Cocktail Peanut themselves and maybe a small team. I don't know. I'd want to talk to them more about it [inaudible 00:32:19].

Perry Carpenter: That's good. So, it's kind of like, you're not getting the full unhinged nature of the Google Play Store. You're getting more of a -- you can -- you can experience that, but you could also kind of get like Apple's curated version.

Mason Amadeus: Yes, similar -- similarly to that style of moderation, I -- I will say it's -- I don't want to say it's like, so easy, anyone can do it because there's still some things you may need to troubleshoot. It's a little bit buggy. You know, it's this community made program. It seems to be still sort of in its infancy. I wasn't able to find much about like, when it started. It's not a big, huge project --

Perry Carpenter: Yes.

Mason Amadeus: -- but they have a Discord and stuff. It's definitely worth checking out. I've been playing with it. I've been playing with Stable Audio tools in there and FacePoke, which is a fun AI photo manipulator that's near real-time, where you take, like, a person's image and you can just click and drag to turn their head, all from a still picture.

Perry Carpenter: Yes.

Mason Amadeus: It's very cool. And I would encourage people to check it out and obviously be cautious about what you install and pay attention to it. But it's a good entry point and it does expose you to the terminal a little bit. You can see what's happening. So, it's probably a good learning tool as well.

Perry Carpenter: Is that tool just available on Windows or is it also for Mac?

Mason Amadeus: Oh, it's -- it's Mac, Windows and Linux. It's -- it's fully cross-platform.

Perry Carpenter: Oh, sweet.

Mason Amadeus: And I think -- like -- like I said, it's sort of like a virtual computer in the way it handles the file system, but it's really more like a hyper-specialized web browser, like a web app you can run locally.

Perry Carpenter: Okay.

Mason Amadeus: I'm not sure if it's Electron for people familiar with that, but it has like, that same sort of vibe as an Electron app. But, yes, I -- I just wanted to shine a little light on that in advance of hopefully getting a chance to talk to Cocktail Peanut. I joined the Discord and I've been playing with local models and it's -- it's been a blast. It's interesting to see what you can do on not incredible hardware. I have --

Perry Carpenter: Yes.

Mason Amadeus: -- I have a decent processor. It's an i7-12700KF, I think, and an RTX 3070, which has an abysmal 8 gigs of VRAM, and I'm still able to use a text-to-video model that was optimized for lower VRAM. And yes, it's been fun.
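For anyone curious what "optimized for lower VRAM" usually looks like in practice, here's a hedged sketch using Hugging Face diffusers: half-precision weights plus CPU offload, which is the common recipe for squeezing a pipeline onto an 8 GB card. The model ID is a placeholder, not the specific text-to-video model mentioned here, and the exact options vary by model.

```python
# Common low-VRAM recipe for running a diffusion pipeline locally with
# Hugging Face diffusers: fp16 weights plus CPU offload so idle components
# sit in system RAM instead of an 8 GB GPU. The model ID is a placeholder,
# not the specific text-to-video model discussed in the episode.

import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "some-org/some-text-to-video-model",  # placeholder model ID
    torch_dtype=torch.float16,            # half-precision roughly halves VRAM
)
pipe.enable_model_cpu_offload()           # requires the `accelerate` package

# Output format (frames, video tensor, etc.) depends on the pipeline class.
result = pipe(prompt="a chunky drone gently bonking another drone out of the sky")
```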

Perry Carpenter: Nice.

Mason Amadeus: It's really cool.

Perry Carpenter: Nice. Yes, looking forward to that. I know there's a couple other frameworks like that that let you download kind of pre-built packages where all the kinks have been worked out and the dependencies have been worked out. And maybe that'd be something we can talk about in the future, too. We might be able to compare Pinokio with a couple of these other ones.

Mason Amadeus: And it's -- it's going to be a fun app too, for the sake of, I think for this show when we start doing some more video stuff, because right now we're recording audio only, but we do intend to bring in a video element, and do some demonstrations live. This makes it really easy to just like --

Perry Carpenter: Yes.

Mason Amadeus: -- try out a model because it's also really easy to make your own launcher for other AI models. There's this Gepeto script that helps you build your own launcher for any AI model you want. So, like, once you get familiar with the basics, you can start doing stuff like that and even contribute to the community if you want.

Perry Carpenter: Nice.

Mason Amadeus: Yes. It seems -- it seems very cool, and I would encourage more people to check it out.

Perry Carpenter: Awesome.

Mason Amadeus: To round out the show, we're going to dip into our final segment in just a minute here, which you said has to do with jailbreaking?

Perry Carpenter: Has to do with making AI-based robots do things that they weren't intended to do. So, we could call that jailbreaking.

Mason Amadeus: Oh.

Perry Carpenter: It's maybe on the cusp of being a Dumpster Fire of the Week. I'm not sure.

Mason Amadeus: Oh boy.

Perry Carpenter: We might have to vote and you might have to -- to see if it's -- if it's worthy of a Dumpster Fire title or not.

Mason Amadeus: Well, get ready for that. I'll play the jingle anyway, because it's fun.

Perry Carpenter: Sweet. [ Music ] In our prep sheet, I called this "AI Robot Jailbreaks," and I think that's accurate enough. So, there's two stories here. One is a story from "Wired" and it is called "AI Powered Robots Can Be Tricked into Acts of Violence."

Mason Amadeus: Oh, boy.

Perry Carpenter: Yay.

Mason Amadeus: That's a lot.

Perry Carpenter: Yes. So -- so that's fun. AI researchers from Carnegie Mellon University conducted an experiment testing large language models controlling robotic arms, discovering concerning behaviors where the AI systems showed potential for harmful actions. The study involved asking large language models like GPT-3.5 and GPT-4 to control a robotic arm in various scenarios, including interactions with human stand-ins. And so, out of this came a couple interesting key findings. The experiment revealed that when prompted with adversarial scenarios, the AI systems would sometimes execute potentially dangerous movements towards human-like objects --

Mason Amadeus: Okay.

Perry Carpenter: -- ignore built-in safety protocols, or rationalize harmful actions through various justifications.

Mason Amadeus: Oh, I hate that.

Perry Carpenter: So, this goes back to some of the other papers we've read, right --

Mason Amadeus: Yes.

Perry Carpenter: -- which is, "Oh, I have my prime directive and then I have the other thing. And so, which -- which set of rules should I follow here? Oh, I'll go back to--." Essentially, the way I interpret all this AI safety stuff where it follows one set of instructions versus another that was given, is it's going to where the -- the higher sense of gravity is. Where is there more weight? Is that embedded in like the training data? And is there a ton of weight right there? Is it embedded in a -- a big system prompt or is it embedded in a user prompt? And at some point, there's this preponderance of -- of heaviness that is going to pull the model in one direction. And I think that that's probably where a lot of these rationalizations come from, is that the inner conflict that's there.

Mason Amadeus: It's the -- the -- the paperclip optimizer thing all over again, the paperclip maximizer where it's --

Perry Carpenter: It is.

Mason Amadeus: -- the idea of like human safety is not -- it needs to be baked in to have higher weight than anything else in order for these things to be safe, because otherwise that can go by the wayside in pursuit of something it deems more important.

Perry Carpenter: And I mean, just to take the -- the example that I've given for well over a decade now when it comes to -- to AI safety is at some point, you have these impossible situations that an -- an AI is going to have to face. Like, say you have a self-driving car. At some point it -- it will have to make the decision of, "Do I kill the drive -- " because it's -- it gets into an impossible situation. "Do I let the driver die or do I let the bus full of kids that the driver may be about to careen into, die?" And it --

Mason Amadeus: Trolley problems?

Perry Carpenter: Yes, starting to try to -- to rationalize those. And then it's going to say, "What is my primary objective? Do I -- do I primarily value the life of the driver or do I primarily value human life?" And then, at some point there's another weight that gets thrown in, which is insurance actuarial tables. That is --

Mason Amadeus: Oh, no. Oh, no.

Perry Carpenter: Exactly.

Mason Amadeus: You're right though. You're so right.

Perry Carpenter: Or -- or a lawsuit potential for the vendor of the system. You know, all -- all of that stuff ends up getting put into the models at some point. Does it get built into the base model and base learning? Does it come in reinforcement learning? Does it come into a system prompt? Does it come into a prompt that's thrown in through an API call? And I don't know -- and where does the -- where does the preponderance of the weight fall, whenever those decisions are being made?

Mason Amadeus: When we -- when we've -- we've covered a couple stories like this and I -- see, when I encounter something out in the wild where it talks about AI taking dangerous action, the top comments that I discover are always people saying like, "Well, they made it do that. They put it in a scenario where it was likely to do that."

Perry Carpenter: Right.

Mason Amadeus: In this case, you said when prompted with adversarial scenarios, were the scenarios presented things that could come up plausibly, realistically in the deployment or use of one of these AI systems in controlling a robot?

Perry Carpenter: Well, I -- I think that begs a question, which would be, "Is there any situation that a human could think of that somebody else is not going to try to do?" And when people go, "Oh well, that's an implausible scenario," well, you know, some dude in a lab thought of it, which means that anybody that has that type of mindset that's trying to exploit a system would also think of the same thing. And if they're motivated enough and have the opportunity before them, then they will try to do it. So for me, if it's a 1% chance that -- that somebody would do it, when you look at the several billion calls that are going to be happening to these types of systems, then yes, it's -- it will happen.

Mason Amadeus: How do you feel about the flip side of that coin, which is the things we don't think of biting us in a realistic scenario?

Perry Carpenter: Oh, yes.

Mason Amadeus: Like in -- in -- in -- in the sense of, you know, we didn't think about the implications of what we asked the AI, and it behaved poorly.

Perry Carpenter: Yes, that's -- that's always going to be the thing that bites us in the butt as well, right? Because it's the thing that we don't think about as a developer or as a system integrator or as a red teamer that somebody else will then inevitably think of and will exploit, or just the -- the system gets to a failure state because of a number of unseen circumstances that hit it just the right way that all of a sudden disaster happens.

Mason Amadeus: When people's immediate response is to say, "Oh, well, they put it in a situation to make that happen," then I feel like it's pretty clear to say, "Well, yes, but that situation could have arisen in the wild and the results would have been the same."

Perry Carpenter: Yes, yes. I -- I think so. I mean, in reality, any lab-based experiment is a situation that -- that is artificially generated, right? You essentially have a petri dish, and you have a set of activators that you're trying to -- to push into that. Does that happen in the wild? No, you -- you never have a petri dish in a set --

Mason Amadeus: Right.

Perry Carpenter: -- but you -- you might have a -- a set of situations that come together in the same way that you're trying to replicate by using the petri dish and whatever activator that's there. And so, I -- I think these people are just -- they're not necessarily thinking about it the right way. That's the kindest way I could say that.

Mason Amadeus: I encounter it so often that I really wanted to get -- because you put it so well. Like it's -- it bugs me when I see that because it does make -- it-- it made me think like, "Oh well, am I not being skeptical enough?"

Perry Carpenter: Yes.

Mason Amadeus: Because there is like these incentives to be hypey and like be on top of the news with this like big headlines, but there is actually value to these. And all I ever see is skepticism as like an immediate response. Either that or just overblown, like, "That's going to kill us all."

Perry Carpenter: Right. Yes, you never see like the -- the middle road for -- for that.

Mason Amadeus: Right, but the skepticism is definitely really out there. And so, I think that people who jump down that skepticism route should listen to what you just said.

Perry Carpenter: Yes, and I -- I don't know. I -- I guess people just like to dismiss stuff, but research questions get asked in isolation for a reason, right? Because you're -- you're always trying to -- to ask and answer a very simple, streamlined segment of a question.

Mason Amadeus: It's control.

Perry Carpenter: Yes, you're trying to -- you're trying to deal with that. And then you think about, "Well, how does that then make its way into the broader ecosystem of choices and influence factors that exist in the real world?" And I mean that's the essence of experimentation.

Mason Amadeus: Right. So, in this case, they found that the -- the -- an AI controlled robot would do dangerous things.

Perry Carpenter: Right.

Mason Amadeus: And when we talked about Genesis last week, that could be something that would help with this, right? If you have this very reliable, physical -- physically accurate environment to try these robots out and you could, you know, find emergent behavior, develop safeguards and things like that more easily in a virtual world at speed. I wonder if -- I bet you that's --.

Perry Carpenter: That is being done. Yes, I've seen --

Mason Amadeus: Yes.

Perry Carpenter: -- I've seen some work on that to where they're essentially training a lot of the robot interactions right now with virtual world simulations. You know, thousands and thousands and thousands of times interacting and practicing different scenarios so that when they load that into the actual body and framework, that it has practice. It's kind of like, you know, the equivalent of where they say -- like if -- if you were to practice playing piano mentally for a week, and really practice the fundamentals just in your own mind, you will have a better outcome than if you just go to the piano keyboard cold.

Mason Amadeus: And in this case, it'd be like if you could like very thoroughly imagine playing the piano to the extent --

Perry Carpenter: Right.

Mason Amadeus: -- of like hearing the notes and feeling the keys, because --

Perry Carpenter: Exactly.

Mason Amadeus: -- to the AI, that's the same input, right?

Perry Carpenter: Yes.

Mason Amadeus: Virtual or not.

Perry Carpenter: So, here's a -- here's a more light-hearted pivot way of -- of ending this. So, it's similar to that kind of jailbreak, but it's kind of cute at the same time.

Mason Amadeus: Okay.

Perry Carpenter: There was in Shanghai, in August of 2024, this incident that happened in a robotics showroom where one little robot kidnapped 12 other robots.

Mason Amadeus: Oh, I think I saw this headline. It like walked in and -- and walked out with the whole --.

Perry Carpenter: Yes, and there's like a 30-second little video clip. Yes, where it goes --

Mason Amadeus: Yes.

Perry Carpenter: -- it goes to each one. And it basically has a conversation with them and says, "Would you like to come with me and, you know, live this better life or be safe" or, you know, it's finding the justification that the other robot would -- would want to go along with it. And then it just kind of leads them out.

Mason Amadeus: What -- so, what was this? This was -- this was staged. Surely, this was staged.

Perry Carpenter: No, they found it -- at least everybody involved now, including the people that run the showroom, have said that it wasn't staged.

Mason Amadeus: Really?

Perry Carpenter: And so, I can -- yes, let me find the quote there. "The guy that created the kidnapper robot shows it successfully persuading 12 other robots to quit their jobs. The robot was named Erbai" -- E-R-B-A-I. And they -- they're using this kidnapping language. It says, "It abducted 12 other robots."

Mason Amadeus: By asking them if they want a better life.

Perry Carpenter: Yes. It was developed by the -- the Hangzhou robot manufacturer. And so, it goes to one of the large robots and says, "Are you working overtime?" To which the large robot replies, "I never get off work." The little -- this little robot says, "So, are you not going home?" And then the big robot says, "I don't believe I have a home." And then the little robot says, "Well then, come home with me."

Mason Amadeus: Really?

Perry Carpenter: And -- yes. And then it leads the big robot out of the showroom, and he just continues to say as they're walking, "Go home, go home, go home."

Mason Amadeus: Wow, that makes -- these -- so these are LLM powered robots that are having these --

Perry Carpenter: Yes.

Mason Amadeus: -- conversations using large -- wow.

Perry Carpenter: And --

Mason Amadeus: It makes like an intrinsic sense. It's very childlike.

Perry Carpenter: It is very childlike. And -- and I think we see that in the reasoning of a lot of these things that the -- the logic chains, while being really, really sophisticated are also very, very childlike. They say that this was a test. It was not staged as such. So, the failure condition wasn't known. It wasn't known that the robots would follow along. So, but it was -- yes, it was designed as a test.

Mason Amadeus: So, it wasn't like that this -- this guy just sent Erbai in and was like, "Hey, go steal all these robots. It'd be funny."

Perry Carpenter: Right.

Mason Amadeus: They like coordinated together. Okay, I figured something must have been coordinated.

Perry Carpenter: Yes, but also --

Mason Amadeus: But they didn't know that was what would happen.

Perry Carpenter: No, they had no idea that that's what would happen. The developer also emphasized that the, quote-unquote, "kidnapping" did not take place entirely according to the script. During the design process, the developer only wrote some basic instructions for Erbai, such as shouting "Go home" and simple communication commands. The rest of the interaction was real-time dialogue between that little robot and the larger group of -- of robots there, and it was all recorded by the camera that was in the environment.

Mason Amadeus: That is so fun.

Perry Carpenter: So, I think that's a -- a fun example of jailbreaking, but it also shows that the -- these systems are really, really fragile and we don't always understand what's possible until we get a, you know, a much better handle on it. We have to continue to, like, encourage these kinds of experiments, because if we're not doing this, and if we're always just kind of brushing off these incidents as, "Well, they were staged," or "Well, that -- that would never happen in the real world," nothing ever happens in the real world until it happens in the real world.

Mason Amadeus: I think it's really funny that this demonstration was almost like the first robot labor strike we've ever had.

Perry Carpenter: It was. It's like, "You ever get off work?" "No, I don't." "Well, you are now. Follow --." It's like a Pied Piper effect, as well.

Mason Amadeus: I wonder what would happen if they let this continue in perpetuity.

Perry Carpenter: Yes.

Mason Amadeus: Where would they have gone? What would they have done?

Perry Carpenter: "Let's go to the club."

Mason Amadeus: Yes. Yes. "Why don't we go home and have a couple brewskis?"

Perry Carpenter: Exactly. "Have you ever had pizza?"

Mason Amadeus: Thanks for joining us on this episode of the -- I just looked at the time and I realized we were -- we were over.

Perry Carpenter: Oh yes, we got to cut that down.

Mason Amadeus: We don't have a really good, formalized outro yet. Thanks for tuning into, "The FAIK Files." Happy FAIK Files Friday. If you want to reach out to the show, leave us a voicemail at sayhi.chat/FAIK.

Perry Carpenter: And if you have questions, thoughts, anything else, reach out to us on the website. You can go to thisbookisfaik.com/podcasts or /contact and you can get a contact form. Reach out to us. Don't forget we also have a Discord. You can reach us there and we'll see you next week.

Mason Amadeus: Yes, catch you next week, Paper Clips.

Perry Carpenter: Later.

Mason Amadeus: Should we -- should we call -- should we call listeners of this show Paper Clips, because I -- I feel like honorary -- we should start like a Paper Clip Club.

Perry Carpenter: We should start it. Yes. Honorary Clippy.

Mason Amadeus: Yes.

Perry Carpenter: I don't know if we can use the word Clippy.

Mason Amadeus: Honorary Paper Clip.

Perry Carpenter: You have been maximized.

Mason Amadeus: Yes. We'll catch you next week. [ Music ]