Anduril Founder Palmer Luckey Explains Why Anthropic’s Position About AI Use For National Security Is “Untenable”

Anthropic has drawn plaudits from some quarters for refusing to let its AI systems be used for every purpose the US government wanted, but there is a compelling argument that this position could be misguided.

Palmer Luckey, the founder of defense technology company Anduril and one of Silicon Valley’s most prominent voices on military AI, has laid out what may be the most substantive counterargument to Anthropic’s position yet — one that cuts beneath the surface-level debate about surveillance and autonomous weapons and goes straight to a foundational question about democracy itself.

“This gets to the core of the issue more than any debate about specific terms,” Luckey wrote. “Do you believe in democracy? Should our military be regulated by our elected leaders, or corporate executives?”


The Problem With Anthropic’s Terms

Luckey’s argument is not that mass surveillance and autonomous weapons are unambiguously good things. It’s that allowing a private corporation to define and enforce the limits of their use is far more dangerous than it might appear — even when the terms sound reasonable on their face.

“Seemingly innocuous terms from the latter like ‘You cannot target innocent civilians’ are actually moral minefields that lever differences of cultural tradition into massive control,” he wrote. “Who is a civilian and not? What makes them innocent or not? What does it mean for them to be a ‘target’ vs collateral damage? Existing policy and law has very clear answers for these questions, but unelected corporations managing profits and PR will often have a very different answer.”

To illustrate the point, Luckey poses a thought experiment about a missile company enforcing a similar policy — that its products cannot be used to target innocent civilians, with the power to shut off access if elected leaders decide to break those terms. It sounds reasonable. But Luckey argues the problems run much deeper than the policy’s stated intent.

The Questions That Don’t Have Easy Answers

Luckey spells out a cascade of practical and geopolitical complications that such a corporate enforcement mechanism would create:

“What level of information, classified and otherwise, does the corporation receive that would allow them to make these determinations? How much leverage would they have to demand more? What if an elected President merely threatens a dictator with using our weapons in a certain way, ala Madman Theory/MAD? Is the threat seen as empty because the dictator knows the corporate executives will cut off the military? Is the threat enough to trigger the cutoff? How might either of those determinations vary if the current corporate executive happens to like the dictator or dislike the President? At what level of confidence does the cutoff trigger, both in writing and in reality?”

These are not abstract concerns. The credibility of military deterrence has historically rested on the certainty — or at least the perceived certainty — that a threat will be carried out. Introducing a private company’s terms of service as a potential override of that certainty has real strategic consequences that go well beyond any individual use case.

The AI Debate Is Not Unique

Crucially, Luckey argues that the fact this particular dispute involves AI doesn’t make it a special category requiring special rules. The same underlying questions apply to any capability a corporation might seek to regulate.

“The fact that this is a debate over AI does not change the underlying calculus,” he wrote. “The same problems apply to definitions and use of ethically fraught but important capabilities like surveillance systems or autonomous weapons. It is easy to say ‘But they will have cutouts to operate with autonomous systems for defensive use!’, but you immediately get into the same issues and more — what is autonomous? What is defensive? What about defending an asset during an offensive action, or parking a carrier group off the coast of a nation that considers us to be offensive?”

Every one of those questions, Luckey points out, involves a judgment call. And under Anthropic’s framework, it would ultimately be Anthropic — not an elected government, not a military commander, not a court — making that call.

A Question Of Democratic Faith

Luckey closes with what is essentially a statement of political philosophy, framing the entire debate as a test of whether you still believe in the American constitutional experiment.

“At the end of the day, you have to believe that the American experiment is still ongoing, that people have the right to elect and unelect the authorities making these decisions, that our imperfect constitutional republic is still good enough to run a country without outsourcing the real levers of power to billionaires and corpos and their shadow advisors. I still believe. And that is why ‘bro just agree the AI won’t be involved in autonomous weapons or mass surveillance why can’t you agree it is so simple please bro’ is an untenable position that the United States cannot possibly accept.”

A Debate That Isn’t Going Away

Luckey’s argument is unlikely to settle the dispute — Anthropic would almost certainly push back on the framing that maintaining safety guardrails is equivalent to seizing veto power over military operations. Dario Amodei’s original statement was explicit that Anthropic has never objected to specific military operations and has never sought to limit the military’s use of its technology outside of the two specific carve-outs in question.

But Luckey is raising something that the more sympathetic coverage of Anthropic’s stance has largely glossed over: that the line between “safety guardrail” and “corporate control of military decision-making” is not as clean as it sounds, and that the precedent being set here — whichever way it goes — will shape how AI companies and governments interact for decades to come. Whether you find Luckey’s argument persuasive or not, it deserves to be taken seriously.
