Anthropic Wanted To Play God And Make New Laws: US Undersecretary of War Emil Michael

Information is now slowly trickling out over what caused the public bust-up between Anthropic and the Department of War.

A new report from The Atlantic, drawing on a source close to the negotiations, has provided the most detailed account yet of how the talks between Anthropic and the Pentagon collapsed — while Under Secretary of War Emil Michael has fired back on X with a point-by-point rebuttal accusing Anthropic CEO Dario Amodei of lying about the breakdown. The two accounts are sharply contradictory, and together they paint a picture of negotiations that were far more granular and detailed than either side’s public statements had suggested.

How The Deal Fell Apart, According To The Atlantic

According to The Atlantic’s source, Anthropic had reason to believe as recently as Friday morning that a deal was still possible. The Pentagon had signalled it was willing to drop qualifying language — phrases like “as appropriate” — that Anthropic felt left loopholes in commitments not to use its AI for mass domestic surveillance or fully autonomous weapons.

But on Friday afternoon, a final sticking point proved fatal. The Department of War still wanted to use Anthropic’s AI to analyse bulk data collected from Americans — potentially including chatbot query histories, Google search records, GPS movements, and credit card transactions — cross-referenced with other personal data. Anthropic’s leadership drew the line there, the talks broke down, and within hours Secretary of War Pete Hegseth had directed military contractors, suppliers, and partners to stop doing business with the company entirely.

The report also shed new light on the autonomous weapons impasse. Anthropic had not argued that such weapons should not exist — to the contrary, it had offered to work with the Pentagon on improving their reliability. The company’s concern was narrower: that its current models were simply not yet reliable enough to be powering systems that make kill decisions, and that errors could endanger civilians or American troops. At one point, the Pentagon proposed a workaround — keeping Anthropic’s AI in the cloud and out of weapons systems directly, so the models would inform operations rather than execute them. Anthropic ultimately rejected this too, reasoning that the line between cloud and edge in modern military AI architecture is more of a gradient than a wall. A model sitting in an AWS server in Virginia but orchestrating battlefield drones is, from an ethical standpoint, still making kill decisions.

Michael’s Rebuttal: “More Lies From Dario Amodei”

Emil Michael, who has been the most combative public voice from the government’s side throughout this dispute, did not receive The Atlantic’s account quietly. In a lengthy post on X, he accused Amodei of fabricating the narrative around the negotiations and said the real sticking points were far more mundane — and, in his telling, far more unreasonable on Anthropic’s part.

According to Michael, Anthropic had sought contract language that would have prevented Department of War employees from conducting LinkedIn searches. “They wanted to stop DoW from using any *PUBLIC* database that would enable us to, eg., recruit military services members or hire new employees,” he alleged. When Michael called Amodei directly to discuss this, he claims the Anthropic CEO refused to engage. “He didn’t have the courage to answer,” Michael wrote.

Michael also pushed back hard on the characterisation that the government had demanded the removal of safety guardrails. “(Amodei) wants to play God and make new ‘law’ and…really stupid laws at that,” he said. He said the two sides had agreed in writing to operate in accordance with the National Security Act of 1947, the Foreign Intelligence Surveillance Act of 1978, and all other applicable laws. The breakdown, he claimed, came down to the details: Anthropic wanted the word “pursuant” rather than “consistent with” in reference to those laws, and wanted to delete the phrase “all applicable laws” from the agreement — a deletion Michael argued was actually less protective of Americans, not more.

On autonomous weapons, Michael said the Department had explicitly committed to human oversight of all weapons systems, with language stating that the DoW would “ensure appropriate human oversight is in place” and maintain the ability to override or disable AI systems. He said he had even agreed to remove the phrase “as appropriate” that Anthropic had objected to. “He knows it,” Michael wrote of Amodei. “His investors, customers and employees should know about his lies.”

Michael’s post closed with a call for Amodei to testify under oath, and with what may be the bluntest characterisation of Anthropic’s motives offered by anyone in the government so far: “Risking the safety and security of our country and our troops are a marketing vehicle for him.”

“Make Dario Amodei testify UNDER OATH on why he is lying and trying to bring shame on our great military!” Michael said. “He would rather violate the chain of command and substitute his own policies (or those of the Anthropic Constitution and the Anthropic Soul) to recruit researchers and get some downloads all while we are in the midst of a battle,” he added.

OpenAI’s Awkward Position

The Atlantic’s report also adds an uncomfortable dimension to OpenAI’s deal with the Pentagon. According to the piece, Sam Altman had publicly expressed solidarity with Anthropic’s red lines on autonomous weapons while simultaneously negotiating his own agreement with the Department of War — which was announced within hours of Anthropic’s deal collapsing. OpenAI’s statement on the deal says its AI will be deployed only in the cloud, the same provision Anthropic had explicitly dismissed as insufficient to resolve the autonomous weapons concern.

That has not gone unnoticed inside OpenAI. By the time The Atlantic went to press, nearly 100 OpenAI employees had signed an open letter supporting the same red lines Anthropic had drawn — no mass domestic surveillance and no fully autonomous weapons — raising the prospect that Altman may face difficult questions from his own staff about what exactly changed between his expressions of solidarity with Anthropic and the signing of the deal.

What To Believe

The two accounts — The Atlantic’s reporting and Michael’s rebuttal — are difficult to fully reconcile, and both sides have obvious incentives to shape the narrative. While The Atlantic’s source is anonymous and not authorised to speak about the negotiations, Michael has personally put his name behind his statements, and even asked that Anthropic CEO Dario Amodei testify under oath.

What does seem clear is that the negotiations were extraordinarily detailed and that both sides were parsing contract language at a level of granularity that goes well beyond the broad principles each has articulated publicly. Whether the fight was fundamentally about protecting civil liberties, as Anthropic frames it, or about a tech CEO’s determination to impose his own legal framework on the US military, as Michael insists, may ultimately depend on which side’s account of those final hours proves more accurate.

For now, Anthropic is navigating the fallout — banned from federal contracts, in the middle of a six-month transition period, and facing a public war of words that shows no sign of cooling down. While the controversy has undoubtedly raised its public profile — Anthropic’s app shot to the top of the App Store in its wake — everything else remains very much in play.