There are plenty of critics who say AI should be kept out of weapons systems, but AI may end up being an essential, and perhaps inevitable, part of modern warfare.
Palmer Luckey, founder of Oculus VR and of the defense technology company Anduril Industries, recently argued forcefully for integrating AI into warfare. His comments highlight the ethical complexities surrounding increasingly autonomous weapon systems. Notably, he contends that refusing to explore the potential benefits of AI in warfare is itself a dereliction of duty, a provocative stance that challenges conventional wisdom on the subject.

“I love Killer Robots,” Luckey begins, setting a deliberately challenging tone. He continues, “the thing that people have to remember is that this idea of humans building tools that divorce the design of the tool from when the decision is made to commit an act of violence – it’s not something new. We’ve been doing it for thousands of years: pit traps, spike traps, a huge variety of weapons, even into the modern era. Think about anti-ship mines, even purely defensive tools that are fundamentally autonomous. Whether or not you use AI is a very modern problem. It’s one where people who haven’t really examined the problem fall into this trap.”
He goes on to address a common argument against autonomous weapons: “And there’s people who say things that sound pretty good like, ‘Well you should never allow a robot to pull the trigger. You should never allow AI to decide who lives and who dies.'” Luckey, however, offers a counterpoint: “I look at it in a different way. I think that the ethics of warfare are so fraught and the decisions so difficult that to artificially box yourself in and refuse to use sets of technology that could lead to better results is an abdication of responsibility.”
He dismisses such objections as simplistic: “There’s no moral high ground in saying ‘I refuse to use AI because I don’t want mines to be able to tell the difference between a school bus full of children and Russian armor.’ There’s a thousand problems like this. The right way to look at this is problem by problem: Is this ethical? Are people taking responsibility for this use of force? It’s not to write off an entire category of technology and in doing so tie our hands behind our backs and hope we can still win. I can’t abide by that.”
Luckey’s argument hinges on the belief that AI, while potentially dangerous, can ultimately improve the ethics of warfare by making faster, more informed decisions than humans under duress. He suggests that clinging to traditional methods, while seemingly morally superior, could actually lead to worse outcomes.
“At our core we are about fostering peace,” he says about Anduril. “We deter conflict by making sure our adversaries know they can’t compete.” He specifically highlights China as an adversary. “AI is the only possible way we can keep up with China’s numerical advantage. We don’t want to throw millions of people into the fight like they do. AI software allows us to build a different kind of force — one that isn’t limited by cost or complexity or population or manpower.”
This perspective challenges the notion that human control is inherently more ethical, forcing us to confront the uncomfortable possibility that algorithms might, in some situations, make fewer mistakes than human soldiers. His analogy to historical examples of “autonomous” weapons like landmines underscores his point that delegating lethal decisions to technology is not a new phenomenon. The introduction of AI, he implies, simply represents an evolution of this long-standing practice.
Now, Palmer Luckey is certainly incentivized to say that AI is a necessity for modern warfare: his company makes AI-enabled weapons systems. But he does make a compelling point. If AI is impacting every aspect of our lives, from coding to art, and if other countries have no compunctions about using AI in weapons systems, it could be counterproductive for the US to keep AI out of modern weaponry over vague moral or ethical reasons.