Anthropic Defies US Govt, Says Its Models Can’t Be Used For Domestic Surveillance Or Autonomous Weapons

Anthropic is locking horns with the US government over what it considers the ethical use of its AI models.

In a statement published today, Anthropic CEO Dario Amodei disclosed that the Department of War has threatened to remove the company from its systems and even designate it a “supply chain risk” — a label historically reserved for adversarial foreign entities — unless it agrees to remove specific safeguards on two use cases: mass domestic surveillance and fully autonomous weapons systems. Anthropic says it will not comply.

The standoff represents one of the most significant and public clashes yet between a leading AI company and the US government over the limits of how frontier AI can and should be deployed by the military. It also comes at a moment when the AI industry is under intense pressure to align itself with national security priorities.

What Anthropic Is Refusing To Do

Amodei’s statement is notably candid about the two red lines his company has drawn. The first is mass domestic surveillance. While Anthropic says it supports AI use for lawful foreign intelligence and counterintelligence missions, it argues that deploying its models to surveil American citizens at scale is fundamentally incompatible with democratic values. He pointed to a legal grey area that has drawn bipartisan concern in Congress: under current law, the government can purchase detailed records of Americans’ movements, web browsing history, and associations from commercial data brokers without a warrant. Amodei argued that AI transforms this already troubling practice into something categorically more dangerous — capable of assembling scattered, individually innocuous data into a comprehensive picture of any person’s life, automatically and at massive scale.

The second restriction concerns fully autonomous weapons — systems that select and engage targets with no human in the loop. Anthropic draws a distinction between partially autonomous systems, which it supports, and fully autonomous ones, which it says today’s frontier AI is simply not reliable enough to power responsibly. The company says it offered to collaborate with the Department of War on R&D to improve the reliability of such systems, but the offer was declined.

Amodei’s statement details that Anthropic was the first frontier AI company to deploy its models on classified government networks, the first to bring them to National Laboratories, and the first to build custom models for national security customers. Claude is currently deployed across the Department of War for intelligence analysis, operational planning, cyber operations, modeling and simulation, and more.

The company also claims to have voluntarily forfeited several hundred million dollars in revenue by cutting off access to Claude for firms linked to the Chinese Communist Party — including some designated by the Department of War itself as Chinese Military Companies. It also says it shut down CCP-sponsored cyberattack attempts and has actively lobbied for stronger chip export controls to preserve a democratic advantage in AI.

“Anthropic understands that the Department of War, not private companies, makes military decisions,” Amodei wrote. “We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.”

The Government Fires Back

Officials from the Department of War were quick to push back — and the tone was anything but diplomatic. Under Secretary of War Emil Michael took to social media to personally attack Amodei, writing: “It’s a shame that @DarioAmodei is a liar and has a God-complex. He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk. The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company.”

Michael also took aim at Anthropic’s internal AI guidelines — known as Claude’s “Constitution” — framing them as an attempt to impose corporate law on Americans. “Imagine your worst nightmare,” he wrote. “Now imagine that @AnthropicAI has their own ‘Constitution.’ Not corporate values, not the United States Constitution, but their own plan to impose on Americans their corporate laws.”

In a further broadside, Michael referenced what he described as an older version of Anthropic’s model guidelines, citing principles around avoiding responses offensive to non-Western audiences and framing them as evidence of ideological misalignment with American values.

The Threats On The Table

According to Amodei, the Department of War's escalation has gone beyond simply threatening to cancel contracts. The government has raised the specter of designating Anthropic a “supply chain risk” under national security frameworks — a classification that has never previously been applied to an American company — and of invoking the Defense Production Act to compel the removal of the safeguards outright. Amodei called the threats self-defeating, noting the contradiction in simultaneously labeling the company a security risk and declaring its technology essential to national security.

What Happens Next

Anthropic says that if the Department of War ultimately chooses to remove it from its systems, it will cooperate with an orderly transition to another provider and keep its models available under the terms already proposed for as long as needed to avoid disrupting active military operations.

The conflict raises broader questions for the AI industry about what obligations — if any — AI companies have to refuse government use cases they consider unsafe or unethical, and whether such refusals can survive the enormous commercial and political pressure that comes with being a national security contractor. For Anthropic, which has built its brand around the responsible development of AI, backing down now would represent a significant credibility blow. Holding firm, on the other hand, may cost it one of the most strategically important customer relationships in the world.

The company says its preference remains to keep serving the Department of War — but only on its own terms. Whether Washington accepts that framing is another matter entirely.