Life-And-Death Situations In War Are Too Morally Fraught And Critical To Not Use AI: Anduril CEO Palmer Luckey

There is plenty of opposition to using AI for military purposes, but Anduril founder and CEO Palmer Luckey has a persuasive argument for why AI use could be critical, and morally valid, in war.

Luckey, who sold Oculus to Facebook for $2 billion before founding defense technology company Anduril Industries in 2017, has been contemplating the intersection of artificial intelligence and warfare longer than most critics of the technology. Speaking recently about the ethical dimensions of AI in military applications, he offered a counterintuitive thesis: that the moral stakes of life-and-death decisions in war are precisely why advanced technology must be deployed, not avoided.

“I’d like to point out, I’ve been thinking about these problems for a lot longer than most of those critics,” Luckey said. “When I started Anduril, most people thought AI was this crazy sci-fi thing that was always going to be in the future, never quite in the present. They thought it almost like time travel or virtual reality. But Anduril Industries, you might notice the acronym is ‘AI’ for a reason. I’ve believed that AI was not just going to come eventually, but imminently, and that we were going to be able to apply it to these national security problems.”

At the heart of Luckey’s argument is a challenge to conventional thinking about the ethics of autonomous weapons systems. Rather than viewing AI as introducing new moral hazards, he frames it as a moral imperative for reducing civilian casualties and improving targeting accuracy.

“When it comes to life and death decision making, I think that it is too morally fraught an area, it is too critical of an area to not apply the best technology available to you regardless of what it is, whether it’s AI or quantum or anything else,” Luckey explained. “If you’re talking about killing people, you need to be minimizing the amount of collateral damage. You need to be as certain as you can in anything that you do. You need to be as effective as possible. And so to me, there’s no moral high ground in using inferior technology, even if it allows you to say things like, ‘we never let a robot decide who lives and who dies.’”

Luckey offered concrete examples to illustrate his point, comparing AI-enabled precision to legacy weapons systems. “Where’s the moral high ground in, for example, an anti-vehicle landmine that can’t tell the difference between a school bus full of kids and a column of Russian armor? Is that really making you a better person because you’ve made your smart weapon into a dumb weapon? Same thing for a missile that can’t differentiate between, let’s say, a civilian boat and a military vessel. Is there really a moral high ground in saying, ‘well, at least the AI didn’t make the decision as to which boat to strike?’”

He concluded his argument by emphasizing that the choice of technology should be driven by effectiveness rather than ideology. “I think that for these matters of life and death, it’s too critical a problem to not apply the best technology. Sometimes that will be AI, sometimes it won’t be AI, but it’s really about using the best tech to make sure that you’re treating these problems as serious things that require that level of diligence.”

Luckey’s comments arrive at a pivotal moment for AI in defense. Anduril has emerged as a major player in the defense technology sector, recently securing contracts worth billions and developing autonomous systems including the Lattice AI software platform and various drone technologies. The company represents a new generation of defense contractors challenging traditional players like Lockheed Martin and Raytheon with software-first approaches and rapid development cycles.

The debate over lethal autonomous weapons systems has intensified as AI capabilities have advanced. Critics, including organizations such as the Campaign to Stop Killer Robots and numerous AI researchers, argue that delegating life-or-death decisions to machines crosses an unacceptable ethical line and could create accountability gaps in warfare. The United Nations has held discussions on lethal autonomous weapons systems for years, though no binding international treaty has emerged. Employees at major technology companies have also protested when their employers appeared to support defense or military work.

However, Luckey’s argument, which he has made in similar terms before, shifts the ethical calculation by questioning whether abstaining from advanced technology in warfare is itself a morally defensible position when that technology could reduce civilian casualties. His comparison to “dumb” weapons like traditional landmines that cannot discriminate between targets reframes the AI debate not as a question of whether to introduce new risks, but whether to eliminate existing ones. Military forces worldwide are racing to integrate AI into their arsenals, from the U.S. Department of Defense’s investment in Project Maven to China’s aggressive military AI modernization. The question may no longer be whether AI will be used in warfare, but how to ensure it is deployed responsibly when the alternative could mean greater loss of innocent life.
