OpenAI might’ve snagged the US Department of War contract, but it’s losing some employees in the process.
Caitlin Kalinowski, a researcher on OpenAI’s Robotics team, has resigned from the company in direct response to its Pentagon deal, becoming one of the most prominent OpenAI employees to publicly walk out over the agreement. Her departure adds a human dimension to a controversy that had until now played out largely between executives, government officials, and lawyers. It also signals that the internal dissent brewing at OpenAI since the deal was announced has begun to manifest in actual resignations.

“This Was About Principle, Not People”
Kalinowski announced her departure on X late on Saturday, framing it explicitly as a matter of conscience rather than personal grievance. “I care deeply about the Robotics team and the work we built together,” she wrote. “This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”
She was careful to distinguish her objection from any animus toward Sam Altman or her colleagues. “This was about principle, not people. I have deep respect for Sam and the team, and I’m proud of what we built together.”
In a follow-up post, Kalinowski clarified that her central concern was not the deal in principle but the speed at which it was concluded and announced. “My issue is that the announcement was rushed without the guardrails defined,” she wrote. “It’s a governance concern first and foremost. These are too important for deals or announcements to be rushed.”
The Governance Problem
That framing is notable. Kalinowski is not arguing that OpenAI should never work with the military — her original post explicitly acknowledges that AI has an important role in national security. What she is arguing is that commitments of this magnitude, touching on surveillance of American citizens and the automation of lethal force, require a level of deliberation and defined guardrails that the Pentagon deal did not receive before it was made public.
It is a critique that cuts at something deeper than the contract terms themselves. OpenAI announced its deal with the Department of War within hours of Anthropic’s negotiations collapsing — a timeline that, in the context of Kalinowski’s resignation, now looks less like nimble deal-making and more like a rushed decision that left critical questions unanswered. What exactly are the guardrails? Who defines them? What oversight mechanisms are in place before the models go live in classified environments?
Sam Altman’s public statement on the deal said OpenAI had agreed to prohibitions on domestic mass surveillance and human responsibility for use of force, and that the company would build technical safeguards and deploy field engineers to monitor the rollout. But Kalinowski’s departure suggests that at least some inside the company felt those assurances were made before the substance behind them had been adequately worked out.
The Broader Internal Mood
Kalinowski’s resignation comes on top of an open letter signed by nearly 100 OpenAI employees expressing support for the same red lines Anthropic had fought for: mass domestic surveillance and autonomous weapons. That letter had already signaled significant internal unease with Altman’s decision to proceed with the Pentagon deal at the moment, and on the terms, that he did. A public resignation from a named, senior researcher escalates that unease into something more visible and harder to contain.
The irony of OpenAI’s position is acute. Altman had publicly backed Anthropic’s stance earlier in the week, saying he did not think the Pentagon should be threatening AI companies with the Defense Production Act and that he trusted Anthropic as a company that genuinely cared about safety. Within hours of that statement, he had signed a deal with the same Department of War that had just banned his rival — on terms that OpenAI’s own employees are now publicly questioning.
Kalinowski’s departure is a reminder that the debate over AI and military use is not just playing out between company leaders and government officials — it is live inside the labs themselves, among the researchers who build these systems and who have strong views about how they should and should not be used.