US Designates Anthropic A “Supply Chain Risk”, Trump Orders All Federal Agencies To Stop Using Its Technology

The US government appears to be in no mood to be lectured about the ethics of war by AI firms.

In a dramatic escalation of its confrontation with Anthropic, the US government has officially designated the company a “supply chain risk” to national security and ordered every federal agency to immediately cease using its technology. The moves — announced by both President Donald Trump and Secretary of War Pete Hegseth — mark the most severe government response yet to Anthropic’s refusal to remove safeguards preventing its models from being used for mass domestic surveillance and fully autonomous weapons systems.

The fallout brings to a head a dispute that had been simmering since Anthropic CEO Dario Amodei published a detailed statement on February 26 outlining his company’s objections to two specific use cases the Department of War had been pushing for. What began as a contractual disagreement has now become a full-blown political confrontation, with the President of the United States personally weighing in.

Trump Calls Anthropic “Radical Left” And “Woke”

President Trump took to Truth Social to announce that he was directing every federal agency to immediately cease all use of Anthropic’s technology, citing what he described as the company’s attempt to dictate how the US military operates.

“THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS!” Trump wrote. “That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military.”

Trump accused Anthropic of trying to “strong-arm” the Department of War and force it to comply with the company’s terms of service rather than the US Constitution, saying the company’s stance was putting “AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.”

He announced a six-month phase-out period for agencies like the Department of War currently using Anthropic’s products, but warned the company to cooperate fully during that window — or face consequences. “Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow,” he wrote.

Hegseth: “A Master Class In Arrogance And Betrayal”

Secretary of War Pete Hegseth was equally unsparing in his own statement, framing Anthropic’s refusal to grant the military unrestricted access to its models as an act of ideological sabotage.

“This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon,” Hegseth wrote. “Cloaked in the sanctimonious rhetoric of ‘effective altruism,’ they have attempted to strong-arm the United States military into submission — a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.”

Hegseth accused Amodei of seeking “veto power over the operational decisions of the United States military.” He announced that, in conjunction with Trump’s directive, the Department of War would formally designate Anthropic a supply chain risk to national security — a classification that, as Amodei himself noted earlier this week, has never before been applied to an American company. Under the designation, no contractor, supplier, or partner doing business with the US military may conduct any commercial activity with Anthropic. Like Trump, Hegseth gave the company a maximum of six months to facilitate a transition.

“America’s warfighters will never be held hostage by the ideological whims of Big Tech,” Hegseth wrote. “This decision is final.”

What It Means For The AI Industry

The government’s response to Anthropic’s stance will be felt well beyond this single dispute. Every major AI company with government contracts is now watching to understand what Washington will demand of them — and what happens when they push back. The supply chain risk designation, in particular, sets a precedent that could have chilling effects on the industry’s willingness to maintain any safety guardrails that conflict with government preferences.

For Anthropic, the immediate commercial damage is significant. The company had built one of the most extensive footprints of any AI firm inside the US national security apparatus — classified networks, National Laboratories, custom models for intelligence agencies — and now stands to lose all of it. Whether that loss translates into broader reputational damage or, conversely, into credibility as a company that stood by its stated values when it counted, may define Anthropic’s identity for years to come.
