After Banning Anthropic From Federal Contracts, US Govt Partners With OpenAI For Military Uses Of AI

As Anthropic dithered over supporting US military uses of its AI, OpenAI has swooped in and struck a deal.

Just hours after the Department of War designated Anthropic a supply chain risk to national security and ordered federal agencies to cease using its technology, the US government has announced a new partnership with OpenAI to deploy its models on classified military networks. The agreement lands at a moment of maximum turbulence in the relationship between the AI industry and the US defense establishment — and hands OpenAI a significant strategic and commercial win at its rival’s direct expense.

The Deal

OpenAI CEO Sam Altman announced the agreement late on Friday, confirming that his company had reached terms with the Department of War to deploy its models on its classified networks. Notably, the deal includes explicit protections on the two issues that triggered Anthropic’s standoff with the government: domestic mass surveillance and autonomous weapons. That suggests the Department of War was always willing to accept some version of these safeguards, and it raises fresh questions about whether its confrontation with Anthropic was ever purely about policy.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

Altman added that OpenAI would build technical safeguards to ensure its models behave as intended, embed field deployment engineers to support and monitor the rollout, and operate exclusively on cloud networks. In a direct olive branch to the broader industry, Altman said OpenAI was asking the Department of War to offer the same terms to all AI companies, and expressed his belief that “everyone should be willing to accept” them.

A Pointed Message To Anthropic

That last line carries an unmistakable edge. By publicly stating that the terms his company accepted are reasonable enough for any AI lab to sign on to, Altman is implicitly arguing that Anthropic’s refusal was never necessary — that the same outcome it claimed to be fighting for could have been achieved through negotiation rather than confrontation. Whether that reading is fair is debatable. Anthropic maintains that it was the government, not the company, that refused to negotiate, and that the Department of War’s demand for “any lawful use” with no exceptions was the sticking point.

But the optics are difficult for Anthropic. OpenAI has walked away with a classified military contract that includes the very same domestic surveillance and autonomous weapons prohibitions Anthropic said it could not in good conscience surrender — and it did so without a public fight, without being called “radical left” and “woke” by the President, and without being designated a national security threat.

Under Secretary of War Emil Michael, who had been among the most aggressive voices against Anthropic earlier in the week, welcomed the OpenAI deal with notably warm language. “When it comes to matters of life and death for our warfighters, having a reliable and steady partner that engages in good faith makes all the difference as we enter into the AI Age,” he wrote, signing off with a pointed “Onwards @sama!”

The contrast with his treatment of Dario Amodei — whom Michael had publicly called a liar with a God complex — could hardly be starker.

The Irony Of The Outcome

There is a deep irony at the heart of this development. Anthropic staked its reputation, its government contracts, and its relationship with the most powerful administration in the world on the principle that it would not allow its models to be used for mass domestic surveillance or fully autonomous weapons. OpenAI has now secured a deal with the same government that enshrines exactly those principles in a binding agreement.

Altman himself had backed Anthropic’s stance publicly just yesterday, saying he did not think the Pentagon should be threatening AI companies with the Defense Production Act. Now his company is the Pentagon’s newest AI partner — on terms that, at least on paper, vindicate the very position Anthropic was punished for taking.

What Comes Next

The episode leaves the AI industry in an unusual position. OpenAI’s willingness to engage commercially where Anthropic drew a public line has, for now, resolved the immediate crisis for the Department of War. For Anthropic, the six-month transition period is now underway, and the company faces the task of rebuilding its government business from outside the federal contractor ecosystem, a significant undertaking for a company that had been the most deeply embedded frontier AI lab in the US national security apparatus. Whether it can reframe the episode as a principled stand that ultimately shaped the terms of the industry’s engagement with the military, or whether it goes down as a costly miscalculation, will depend largely on what the OpenAI deal looks like in practice.
