
Palmer Luckey on the Next Generation of Intelligence
Sasha Ingber: Welcome to Spycast, the official podcast of the International Spy Museum. I'm your host, Sasha Ingber, and each week I take you into the shadows of espionage, intelligence, and covert operations across the globe. He may dress like he's on a vacation in Hawaii, but Palmer Luckey has been busy designing weapons for the Pentagon.
He founded California-based defense technology firm Anduril in 2017, named after a sword in The Lord of the Rings and, according to Palmer, 20% veteran-owned. He's promising a market shift: faster, cheaper, and more agile systems to fight the wars of today and tomorrow. This, after Palmer designed the Oculus Rift virtual reality headset at age 19, revolutionizing the world of VR.
We sat down to talk about how Anduril’s work is also supporting the US intelligence community, an essential part of mission readiness. Welcome, Palmer. Nice to see you.
Palmer Luckey: Thanks for having me on.
Sasha Ingber: So most teenagers are not building virtual reality headsets. They don't end up selling it at the age of 21 and becoming a billionaire.
I'm curious to learn about your upbringing and how it took you to where you are today.
Palmer Luckey: My dad was a car salesman who taught me a lot about the mechanical side of things. My mom was a homemaker. I was homeschooled through most of my childhood, started going to community college when I was 14 years old, and, uh, started building virtual reality headsets when I was 15 years old.
So that was my hobby for a while. I didn't figure out how to turn it into a job till I was, uh, 18 or 19. But, uh, earlier in life I was very into the great outdoors and sports and swimming and sailing. But as soon as I discovered computers, it was all over. The only exercise I was getting was typing on my keyboard.
Sasha Ingber: What was the appeal of electronics for you? The appeal of virtual reality even?
Palmer Luckey: Well, virtual reality's got a unique appeal. It's the only technology that can allow a person to experience anything that is within the human possibility of experiencing. There's a lot of interesting problems that you need to solve to interface a computer with a human's physiological systems, perceptual systems. It's kind of your peripheral nervous system writ large, interfaced well enough that you can trick your brain into thinking you're actually someplace that it is not. And so it's a very fascinating problem from a technical perspective. I was never one of the people who was into electronics or even mechanical engineering or optical engineering because I was inherently interested in the engineering itself; I was interested in the things you could create with it. So not just VR, but also high-powered laser systems, various, you know, high-voltage displays, and a variety of early weapons development projects that I did when I was a teenager.
All this stuff, it's a lot easier to become very interested in something and to learn it when you can apply that learning directly to a practical problem you're trying to solve, versus just having it be theory on a piece of paper.
Sasha Ingber: What about the people in your life?
I know that your grandfather had been a pilot during Operation Desert Storm. Was there a person or a particular experience that made you want to build a different reality for yourself?
Palmer Luckey: You know, my grandpa was a huge inspiration to me. He was somebody who was very immersed in world affairs, in what was happening not just in our country but around the world, and how it might influence what was going on here.
I actually moved to be closer to him when he got throat cancer for the second time. So I moved, uh, to Newport Beach to be closer to him, basically into a neighborhood that's more or less a retirement community, and was glad to be with him for the last couple of years. But when he passed away, I ended up, uh, staying in the neighborhood.
So I'm now surrounded by all of my grandpa's friends, which is a pretty funny situation.
Sasha Ingber: Yeah. Wearing Hawaiian shirts, hanging out in a retirement community.
Palmer Luckey: In some ways, I'm already an old man. My grandpa was the one who, when I was probably six or seven years old, lied about my age to get me on a field trip to JPL, uh, NASA's Jet Propulsion Laboratory, um, by saying that I was 10 years old.
And so, I mean, he was that kind of guy: he was willing to take risks and bend the rules if it meant that he got his grandson in to see what NASA does when they're building satellites.
Sasha Ingber: So I've noticed that in a lot of your interviews you have made historical references, and at one point you asked people to imagine if Nazis were to be the ones who invented nuclear weapons.
You also recently said that the US doesn't have another D-Day in us right now. Tell me more about why you think that and when that moment would come in your view as a person who is innovating in the defense tech space.
Palmer Luckey: So there's two sides of this argument, and they're equally important. The first is that I think that America, after decades of misadventure in the Middle East, combined with wars like the Vietnam War, which really galvanized another generation against wars that a lot of people feel should not have been fought, certainly not to the extent that they were... I think that the entire living American population is not sufficiently motivated to be dragged into another land war that requires boots on the ground at large scale. Like, I don't think we have another draft in us right now.
I don't think that it's even close, particularly if the war is not one that everybody is aligned with. Now, if, let's say, China invaded Hawaii, I think that there's a chance we would get our act together. But when I said we don't have another D-Day in us, what I meant is we probably do not have in our national spirit right now the will to go and fight for someone else, to free another country or another continent on their behalf, using our people, our treasure, our young men and women's lives.
And I think from some people's perspective it's a good thing, and from other people's perspective, very reasonably so, it's a bad thing. But nonetheless, I am not sure if there's a single place in the world that you could convince Americans to go die for by the hundreds of thousands or by the millions right now. And that is very different from what we did in World War II, culminating in D-Day. That's why I said I don't think we have another D-Day in us, for a good war or a bad war.
If we live in that reality, what do we need to do? Well, we probably need to protect American interests by arming the people who are willing to fight for themselves, who are willing to spend their own lives defending their country. I think that the United States, from a practical perspective, probably needs to shift from being the world police with our boots on the ground to being more of the world's gun store, where we are working with our allies and partners around the world to make sure that they are able to work with us by fighting for themselves.
Sasha Ingber: So on the arms and munitions, we know that Anduril has a fantastic relationship with the Pentagon. You have gotten contracts with the Army, including a huge contract last week as we sit down, uh, the Navy, the Air Force, and others. Do you have as close of a relationship with the CIA or with the intel agencies under the Department of War?
Palmer Luckey: Yes, we have a very good relationship with the intelligence community.
Sasha Ingber: And can you tell me more about, um, what that relationship looks like? I'm aware that there is a classified program, and some of it is expected to be produced in a new factory in Ohio.
Palmer Luckey: I think the people that we work with would probably say that the intelligent thing for me to do is to stop talking.
Sasha Ingber: but here we are on a podcast.
Please don't do that to me.
Palmer Luckey: A lot of the tools that we're building are equally useful to our military community and to our intelligence community, and to the intelligence elements that work inside of the military. Many of these tools are literally the same thing.
There's other things that are highly specialized, very specialized towards particular intelligence-only use cases that are different than what you would see elsewhere. And of course there's different requirements. You're operating in different environments, you're operating under different types of threat levels.
You're maybe operating in conditions that you don't control as closely as you typically would in the military. And so we have a large team of people that are working on things for the military, obviously. Uh, but we have equally talented people who are working on intelligence-specific problems, and some of those are the most interesting problems that I get to work on.
Growing up, I never would've thought I'd get to work on the things that I'm working on today. As someone who grew up watching James Bond movies with my dad, it is nice to see that America has better stuff than the Brits ever had.
Sasha Ingber: So let's drill down in this murky area just a little bit. We have seen some major tactical successes by the United States intelligence community, and we have seen them in Iran.
We have seen them in Ukraine. But at the same time, the US intelligence community has gotten things wrong on a strategic level. We incorrectly assessed Kyiv's ability to defend itself and, uh, Russia's military prowess back in 2022 when the full-scale invasion began, and Hamas's true ambitions in Gaza on October 7th, 2023.
So, knowing that some of what's in your arsenal is private and some of it is public, what can Anduril contribute in this space?
Palmer Luckey: I think we can contribute the tools that make sure that predictions like those are as accurate as they can possibly be. They're never gonna be absolutely perfect.
In particular, when you're dealing with someone like Putin, there's a certain element of unpredictability, because Putin, and Xi Jinping on the side of China, they're both kind of classical autocrats, or monarchs. They don't rule necessarily with just a calculator. There is an element that is in their heart and their soul, and they are just doing what they think needs to be done.
Economic consequences be damned. And so that makes it very hard to perfectly predict these things. What you can do is have all the information that surrounds it be as perfect as possible. And I think in many cases we didn't have good enough information on, let's say, the readiness levels that existed in Russia.
Right? It wasn't just Ukraine's ability to fight, it was also Russia's ability to actually carry their stuff forward. Um, I think we didn't know, and probably Russia didn't know, just how bad a lot of their own stockpiles were. They didn't know that so many tires had been stolen off of trucks and sold.
They didn't know that so much fuel had been stolen and sold. My favorite example is that the Russians discovered that a bunch of the night vision that they had on paper had actually been stolen years or decades prior and sold on the black market. And I actually would like to say I'm doing my part there.
I own a bunch of stolen black-market Russian night vision, including stuff all the way up to the modern era. So there's some piece of paperwork that was out there at the start of this invasion that said that the night vision sitting in my home office is actually sitting in a case in Russia, ready to be used.
And so it's hard for me to critique our intelligence apparatus for not knowing the state of their military better than they did. I'll say also that it's worth noting that this is, uh, not something that we should assume we have the advantage on, like using artificial intelligence to do a better job of these things.
Putin hasn't managed to pull it off, but he, in particular of all world leaders, has been more bullish on AI than anybody, much earlier than almost anybody. When we started Anduril in 2017, Putin had a quote that he had said a few years prior that we actually included in our pitch deck to investors and translated.
It more or less said: the country that wins in the sphere of artificial intelligence will become the ruler of the entire world. Which I appreciate, 'cause it's a very Bond-villain-esque quote. You know, it's not hiding the ball. He believes that if they can beat the West to implementing this technology, they will rule the entire world, which is very, very different from most countries, which kind of dress it up in a bunch of other ways.
And I met with Zelensky before the war started, actually. Uh, they were trying to buy some US products that could be deployed along their eastern border, border security products that we build for the United States, to help them track Russian incursions and buildups.
And unfortunately the US State Department's conclusion at the time was that Russia was not going to invade Ukraine at all. Later, of course, it evolved to, well, we think it'll happen and Ukraine won't fight back. But at the time that we met, the State Department's assessment was: this is all saber rattling. And man, we need better information and better access to it if we're gonna do the right things, 'cause we make real strategic errors when we have intelligence failures.
Sasha Ingber: What can Anduril provide? I mean, are we talking about, um, ISR? Are we talking about, you know, early warning systems? Uh, give us some examples.
Palmer Luckey: It's ISR, it's on the telecommunications side, it's on, you know, the man-on-the-ground side.
You know, this gets into... I think, uh, I probably don't wanna talk about the specifics too much. But I will say there is nothing that automated systems don't touch, even the things that you would think of as the most human, face-to-face type of collection. There are useful things that you can do when you can automate processes and automate certain conclusions that allow you to make better decisions even in that moment.
So people like to try to put this in the context of a race: we need to do this faster than China, faster than Russia. My point is we need to do it quickly, completely independently of what those guys are doing, because the gains are so massive that we shouldn't put them off. Even if we got perfect intelligence from a time machine that China is going to fail to implement any of this stuff, that Russia's gonna fail, even if that's the case, we need to be pushing it.
Sasha Ingber: You know, you have this helmet, the Eagle Eye, at the International Spy Museum in our new camo exhibit.
Palmer Luckey: That's right.
Sasha Ingber: And this is integrating different capabilities into the helmet. What are some of the intelligence capabilities that you foresee are going to matter on the battlefield that we can talk about today?
Palmer Luckey: So the idea with Eagle Eye is you have this combination day-night heads-up display that works not just as a night vision system, but as a data fusion and heads-up display system that ties all the information you can see together with what everyone else can see. And that "everyone else" includes every other person on the ground, every single asset in the air, every asset you have in space, all of your radio frequency intelligence around where RF sources are. It takes all that information, filters out the 10,000 things you don't need to know, and then gives you the handful of things you strictly do need to know, like where the guy you're trying to shoot is, where someone is operating an, uh, an IED radio trigger, how long it's been since he operated that trigger, and how far he could have possibly gotten away in the time since then. If all the people and all the robots are working with a common view of the world and can agree on what's important and what to focus on, it's a lot easier to make much better decisions much more quickly.
And so with Eagle Eye, you have all of this stuff integrated into a pair of augmented reality glasses that are integrated into this helmet, which is not only protecting you ballistically but also giving you superhuman hearing, superhuman vision, a lot of onboard compute and processing and fusion.
Uh, you can really turn people into something a lot closer to a superhero than people would imagine, if I have x-ray vision, the ability to rewind time, and the ability to hear things miles away with perfect precision, sub-degree pointing accuracy to the nearest gunshot.
Uh, you know, these are superhuman capabilities that we're already pushing out to people on the ground.
Sasha Ingber: And it has wolf ears. Let's just be honest.
Palmer Luckey: Well, you know, the wolf ear modules are useful because what you can do is have modular sensors. Some sensors everyone needs: everyone needs a camera so that they can see what's going on.
Everyone needs night vision so they can operate at night. But what you don't necessarily need is for every soldier to have a long-range, high-resolution thermal imager, or, let's say, um, a SWIR imager and laser designator that allows them to mark targets for overhead aircraft, for laser-guided bombs.
Uh, you also don't need everyone to have a hyperspectral camera that allows you to, for example, detect explosive residue or certain chemical residues, or disturbances in the road that might indicate that an improvised explosive device has been buried. But you probably do want one guy in your squad to have each of those things.
And so the wolf ear modules allow you not only to upgrade the helmet over time, but also to have systems where different people in my unit have different roles, and in fact maybe even their roles shift over the course of a mission. Maybe I begin my mission with long-range targeting stuff on my helmet, and then I'm actually gonna pull that out, drop it in a pouch, swap on another sensor, just hot-swap it in, because I'm now in a different phase of the mission. That's something that's never really existed before. You've always had dedicated devices for each of these things, right? I'd be carrying my big thermal binoculars, and I'd be carrying my ground-penetrating radar system, and I'd be carrying my special SWIR imager.
By making it so you're only swapping out the final sensor and optic, and the rest of the display and control and networking system is shared, you can make it possible for someone to carry all of these capabilities in a little tiny bag and use them without causing too much, uh, operational stress or complexity.
Sasha Ingber: So you're essentially offering someone the ability to be superhuman, but in a very lightweight capacity.
Palmer Luckey: Exactly. The tagline of the program internally was "turning soldiers into superheroes."
Sasha Ingber: And you're also integrating intelligence on the ground forces themselves, in events up to and including mass casualties.
Can you explain to people how that works and why that matters?
Palmer Luckey: Sure. I mean, you not only want to detect things that are out in the environment, like, you know, targets that you're going after, potential threats. You also want to very intimately understand your own force and what's happening to it.
You wanna understand if people are underperforming. There's a lot of things that we don't necessarily classify in the moment as an injury, but, for example, cumulative exposure to concussive events, uh, which could be enemy explosions, or it could be just your own systems. There are people who have temporary and even lasting cognitive impacts from those concussive events. And you probably want to know who's been exposed to all those things. One, so that you can try to maybe rotate them out with somebody who is not gonna be operationally degraded in that way, but also 'cause you want to make sure this guy doesn't have permanent, lasting brain damage. And that's not just managing the force in the moment.
It's making sure you have a force that persists for weeks and months and years. If you don't know what's happening to your people in real time, you're not gonna make decisions that are nearly so good. And so one of the things that we're able to do, because we have onboard compute and a lot of onboard sensors, is actually a lot of this biotelemetry about what's going on with the soldier and also some of the assets that are around them.
Sasha Ingber: Is there ever a concern for you that by having so much data in your eyesight, at your fingertips, it becomes distracting, or that it puts a person into autopilot? There are so many studies that have come out showing negative effects of using AI too much and not being able to think critically.
So where does the human survive in all of that, as you make these people superhuman?
Palmer Luckey: What you're talking about is often referred to as information overload, and you absolutely want to avoid it. When I've had this debate with people, my point to them is: look, individual soldiers and individual missions are going to dictate certain levels of threat awareness.
And maybe I want to turn the dial and I do wanna see every radio transmission. I wanna see every possible moving thing. I wanna see every vehicle. I wanna see every satellite arcing across the sky above me, and I wanna know what it is. I probably want to be able to see if there's an incoming drone that is headed directly for me that is gonna kill me in the next five seconds. There is no universe where it is a good thing to not display that information. There are certain things that, if you asked any soldier whether they want awareness of them, they would always say yes.
To the autopilot point you're making, this is absolutely true. So imagine I have blue force tracking that is, let's say, a hundred percent reliable. I always know where my buddies are. Let's say that I have enemy tracking that's 99% accurate. It tells me where the enemy is, and I can see them highlighted in my view, through buildings, through walls. Let's say it's 99% good. I think your point is, in that 1% you might still miss something, and if you become totally dependent on the technology, you might not be watching for the barely-there signature that the AI itself did not detect.
I'll make two points in response to that. One, this is gonna be a huge challenge for people training the forces of the future. And it has long been a challenge. Even today, we already struggle with technologies, things like false alerts on radar systems. And so these are training and discipline requirements, where we need to figure out a way to show people real threats without making it so that they cannot operate without the system.
Um, I think that is gonna be a challenge. I don't know how this is all gonna play out with AI. What I do know is that every time we've come up with new technology, people have brought up these exact same problems. Calculators are the classic example people like to bring up: you know, when people invented calculators, they said, this is gonna ruin mathematics.
It's going to make it so that nobody knows how to think. But here's my favorite example. Socrates was famously against, not calculators, but letters; he hated the written word. He thought that it was really bad and that it would lead people to not actually think for themselves, not actually learn anything.
Let me read this quote: For this invention will produce forgetfulness in the minds of all who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction, and will therefore seem to know many things when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.
Now, I think most people would agree, on the balance, that the last few thousand years since Socrates are probably better for having the written word. We keep going through the cycle over and over again. Someday, me and you, we're both gonna be Socrates.
We're gonna be the people that the kids of a hundred years from now are laughing at, I think.
Sasha Ingber: Hide your hemlock, everybody.
When we come back, Palmer weighs in on the Anthropic-Pentagon controversy and the war in Iran.
Sasha Ingber: Part of the philosophy of Anduril is that if a weapon system is compromised, then it needs to have its own brain, so that if it is cut off, there's this other system. A lot of that comes down to Lattice. Can you explain to us how that integrates with your Ghost drones, with the, uh, Ghost Sharks, your autonomous submarines that are being manufactured in Australia? And if it's relying on software, isn't that ultimately hackable too?
Does that not create its own vulnerability?
Palmer Luckey: So yeah, Lattice is the AI brain that powers everything that we make: our cruise missiles, our robotic submarines, our autonomous fighter jets. And it certainly does have a different set of vulnerabilities than you would have with, let's say, a remotely controlled system, or even, you know, a mechanically controlled system.
What you cannot do is have systems that are reliant on a real-time, high-fidelity radio link in order to operate, because now all your enemy has to do to wipe out your ability to wage war is wipe out that link, which might mean jamming. But it gets even simpler than that. What if they just blow up the facility where all these things are being commanded and controlled?
What happens if they even just have some people in Nevada go out and cut a bunch of fiber optic lines that go to the nearby SATCOM field that was running all of these things, right? You end up with these vulnerabilities that are highly concentrated, not just technologically, but physically in space.
And so if you're subject to all those things in that long chain working at all times, you're gonna have a very, very fragile military. If, on the other hand, systems have the ability to continue to prosecute their targets, to finish their missions, even if you jam these systems, you make it less likely the enemy will even try to win in that way.
Like, why would you bother trying to sabotage a bunch of fiber lines in Nevada when all that's gonna happen is the systems currently on mission will still complete their missions, and they're gonna get everything back online within a few hours? Now, people have also said, are people gonna hack these things?
Are they going to reprogram these things? AI also makes you less vulnerable there. Uh, with a system today, I probably need a radio to communicate with it. If I have an AI-powered fighter jet, I could turn off every single radio, every single way of getting in and out of the aircraft. In fact, I could literally launch it with no radios.
I could launch it completely bare. Unless somebody can get up to it in the sky and plug in an ethernet cable, they are not gonna be reprogramming anything on that drone. And so there of course are new vulnerabilities you introduce. What happens if an Anduril employee manages to hide code that makes it so I can hold up a QR code printed on a piece of paper and become invisible to all of the security cameras that we've deployed on bases all around the world?
The good news is it's actually pretty hard to do that, especially when you have a company that's hyper-aware of it, and so we have a lot of security in place to make sure that we are sanitizing everything, making sure that does not happen. I wouldn't say that AI gets rid of all the vulnerabilities, and it certainly does introduce new ones, but I think on the balance, not being reliant on remote links or on people, who are generally the weakest factor, is on the whole net safer than otherwise.
Sasha Ingber: Can you tell me more about your counterintelligence efforts at Anduril? You are describing an insider threat.
Palmer Luckey: That's right. Well, look, we have to assume that everyone in the company could be a threat at any moment. A lot of people who are not, uh, familiar with the counterintelligence side of things, or insider threats, they ask me questions like, Palmer, how do you screen people to make sure that a bad guy doesn't get into Anduril?
Of course, you start by making sure the wrong people don't get in, but I have to explain to people all the time, even my own employees, that it doesn't stop when you walk in the door. You might be a totally cleared person, but maybe you inadvertently are hosting a payload on your laptop that someone managed to compromise at home.
Maybe someone manages to blackmail you. Maybe someone's threatening your family. Everyone at any moment could be an insider threat. Nobody gets special treatment; Anduril needs to assume that I could become an insider threat at any moment. Man, it's pretty bad if that happens.
So what you wanna do is have an even-handed approach across the whole company: assume that anyone could become an insider threat at any moment, and also compartmentalize information in a way where an insider threat, even once active, can only do minimal damage. Anduril has people who come from the intelligence world, from the military intelligence world, people who come from the industrial espionage side, continuously working to make sure that our stuff does not get leaked, does not get out there, and that nobody's able to bend our important systems to their will or the will of a foreign adversary.
Sasha Ingber: I can see how you've taken some of the philosophy of the intelligence community, as well as some of its personnel, into Anduril. Do you ever worry about the tech failing, people dying or suffering, because there's so much riding on what you produce?
Palmer Luckey: I would say it's not so much that I'm continuously worried about exactly that, but the stakes are very high. If I can't push a geo-rectified target coordinate to a system that needs to respond for close air support, people are gonna die. Anduril absolutely has had things where it didn't work the way that we intended.
Luckily, mostly we catch that in test. Sometimes things make it out into the field, and I think the key is that we're continuously updating these systems so that there aren't vulnerabilities and flaws that just persist for long periods of time. And I think that is a big difference between us and a lot of other companies: when these things get discovered, they don't have an in-field update capability where they can push updates on a nightly basis.
Uh, so having all that infrastructure has been very, very helpful. But man, the stakes are high. The things that you're doing are so important, and the people relying on them are in such dangerous situations, that it really can grind on you in a way that no other industry can.
Sasha Ingber: So what do you tell yourself at night?
Palmer Luckey: I think what I say to myself and my employees is that we have been entrusted with a lot of responsibility. We have to do our absolute best. Our absolute best won't be perfect, but it needs to be damn close.
Sasha Ingber: Right. But really quick, I'm curious. Have you ever been bumped?
Palmer Luckey: Have I ever been bumped?
Sasha Ingber: Yeah. Like, approached by a foreign intel service?
Palmer Luckey: Ah, um, I think that probably that's not something I should get into one way or the other.
Sasha Ingber: Okay. And as a billionaire, do people ever tell you no? And is there value in that for you?
Palmer Luckey: Well, I'm married, so believe me, I get plenty of feedback from my wife, but she's certainly aware of the right time to say no.
Um, and I've got a 20-month-old son, and that's now one of his favorite words: no. I say, come on, give daddy a hug. No. Too busy playing with toy trucks. Look, I've cultivated that culture. I've made very clear that, uh, people should push back on me. If anything, you have to push back on me the hardest if I'm pushing for something.
The natural inclination is for people to just do it without applying critical thinking to it. It's very easy to say, oh, well, this is a directive from Palmer. And what I've made clear to everyone is: no, when you let me do the bad thing without telling me why it's bad, it will be your problem.
Sasha Ingber: So let's shift into some of the news of the day.
Uh, we have seen the battle between Anthropic and the Pentagon, uh, the Pentagon labeling Anthropic as a supply chain risk, and Anthropic wanting to put guardrails on how the Pentagon uses Claude, its AI, to limit it from, uh, conducting surveillance or engaging in violence. The Pentagon doesn't want those limitations.
Where do you come down on this?
Palmer Luckey: I'm strongly, strongly, strongly on the side of the existing legal framework, which is civilian oversight of the military, accountable to elected leadership. People often get wrapped up in the exact asks, like, well, why can't they just agree to not use these for autonomous weapons offensively, you know, for targeting civilians or whatever?
And there's a few points that are kind of hard to see if you're not pretty deep in the weeds on this. Well, first of all, there's a whole question as to who gets to define target, who gets to define innocent, who gets to define civilian. But that misses the whole point. The real point is that if you believe in the American democratic experiment, you have to believe that these military leaders are accountable to civilian leaders who are accountable to the public, and that you can vote people out of office who make bad decisions, that you can court-martial people who break laws passed by that elected leadership.
If you allow a company to come in with a policy that sits on top of that and says, hey, uh, you basically need to get our approval for this type of operation, that type of operation, and if you do something that we don't like, we can just turn it off and your machines stop working, your planes can't hit targets, your ships cannot conduct warfare, you have now given the CEO who's making that determination more power, practically, than most American presidents have had.
Like, even American presidents have not had the ability to just unilaterally flip a switch and the military stops working. And pretty soon you're going to have... like, Anthropic has these two policies. Well, what happens when OpenAI has three of them, and Google has one, and Microsoft has another?
Now you're gonna have to navigate a huge minefield of corporate regulations, driven primarily by public relations concerns on the part of publicly traded companies, before the United States can engage in military or foreign policy action. And by the way, being limited in that way limits our diplomatic levers as well.
It's not just that it limits our military. There's a lot of negotiations where, like, Trump goes in and says, if you don't give us what we want in a diplomatic way, with no bloodshed, well, we're gonna come and blow you up. Imagine if they could say: no, they won't. We've spoken with Dario and Sergey, and they said that they're not gonna let you do it. You now live in a full-on corporatocracy.
Sasha Ingber: I hear the national security concern here, but the Pentagon could also just say, we're not going to continue to use Anthropic anymore. Labeling the company as a supply chain risk puts them on par with Huawei.
Palmer Luckey: No, I super disagree. What I would say is that the right answer here is: if you want to have these principles, don't work with the military.
If you don't believe in civilian oversight of the military being the right mechanism to control it, you shouldn't work with them. But if you're going to work with the military, you have to trust in the whole chain, not decide that you get to be the king of this whole thing.
Sasha Ingber: Is your defense technology being used right now in Iran, and what do you hope the United States is learning about AI from what it's doing inside the country?
Palmer Luckey: Whether or not it's used in Iran right now would be an operational question, so I'm not the right person to answer or comment on it. But I will say that our systems have been deployed across dozens of US bases and footprints across the Middle East for four years now. That's including the United States Army.
That's including SOCOM, that's including the Navy, that's including the Air Force. I think that we are learning a lot of lessons for the first time in terms of what works and also what doesn't, by having it actually in a real conflict. There's a lot of things that thus far have only ever been tested in exercises.
Sasha Ingber: And in Ukraine.
Palmer Luckey: So I think the United States has gotten just an enormous, enormous advantage out of applying artificial intelligence to this. Uh, I don't think that Iran predicted that the United States would be able to use our traditional weapon systems in such a novel and rapid way, and that's one of the reasons you've had them caught on their back foot.
They know what the usual US cadence looks like in terms of finding targets, you know, generating packages, actually going after them. And it's all moving much, much faster, even with traditional weapon systems. The next step is gonna be weapon systems that are themselves engineered around the use of artificial intelligence in that way, and that's gonna make things work significantly better.
Right? Like, right now a lot of the problems are in the human-level translation and the human-level speed limits between these different systems.
Sasha Ingber: I appreciate you taking the time, Palmer, and there's a lot more that I wanted to ask you, but you'll just have to come back again sometime.
Palmer Luckey: Sounds good. Maybe sometime at the museum.
Sasha Ingber: Yeah, come to the museum, or we'll just bring you here virtually. Thanks for listening to this episode of Spycast. If you like the episode, give us a follow on Apple, Spotify, or wherever you get your podcasts, and leave us a rating or review; it really helps. If you have any feedback or you wanna hear about a particular topic, you can reach us by email at spycast@spymuseum.org.
I'm your host, Sasha Ingber, and the show is brought to you by N2K Networks, Goat Rodeo, and the International Spy Museum in Washington, DC.


