After Anthropic’s Claude Mythos, OpenAI Releases GPT-5.5-Cyber, a Cybersecurity-Focused Model

OpenAI has been stung by the recent progress Anthropic has made — Anthropic might’ve surpassed it in revenue, and also in terms of private market valuations — but the company is fighting back.

OpenAI CEO Sam Altman announced GPT-5.5-Cyber, a frontier AI model built specifically for cybersecurity, with a rollout beginning immediately to what the company is calling “critical cyber defenders.” The move is a direct response to Anthropic’s Claude Mythos Preview, which landed in April 2026 and made waves for its ability to autonomously discover zero-day vulnerabilities across every major operating system and web browser. Anthropic restricted Mythos to a select group of partners under Project Glasswing — a deliberate, cautious rollout that OpenAI now appears determined to match and undercut.


The Context: Anthropic’s Momentum Is Real

Anthropic’s rise has been hard to ignore. Its annualised revenue jumped from $9 billion at the end of 2025 to more than $30 billion by April 2026 — a 233% surge in a single quarter, fuelled largely by Claude Code’s dominance in the developer market. On secondary markets, Anthropic’s implied valuation crossed $1 trillion, eclipsing OpenAI’s $880 billion on platforms like Forge Global. More damaging to OpenAI’s ego than the numbers, perhaps, is the shift in enterprise sentiment: Anthropic now holds 32% of the enterprise LLM API market versus OpenAI’s 25%, with seven out of ten new customers choosing Claude.

The Mythos launch sharpened that narrative further. Anthropic positioned the model as so capable — and so dangerous — that it declined to release it publicly at all. US AI Czar David Sacks accused Anthropic of “using fear as a marketing tool,” but even he conceded that the cybersecurity findings were “more on the legitimate side.” George Hotz pushed back harder, calling the alarm overblown — but the coverage was relentless, and Anthropic owned the news cycle.

What OpenAI Is Doing Differently

Where Anthropic restricted Mythos to a curated allow-list and declined to name a release date for broader access, OpenAI’s approach with GPT-5.5-Cyber is notably more expansive. Altman said the company would “work with the entire ecosystem and the government to figure out trusted access for cyber” and wants to “rapidly help secure companies and infrastructure.” That framing — urgent, collaborative, open — is a pointed contrast to Anthropic’s tightly controlled consortium model.

The government angle is particularly strategic. Earlier this year, OpenAI secured a deal to deploy its models on classified military networks after the Department of War designated Anthropic a supply chain risk and shut it out of federal contracts. That rupture left a significant gap in US government AI infrastructure — one OpenAI is now positioned to fill further with a dedicated cybersecurity model and an explicit pitch to the national security establishment.

The Product Race in Cybersecurity AI

Anthropic has been building toward this moment for a while. The company discovered the first AI-orchestrated cyber espionage attack late last year, attributing it to China, and published research showing Claude agents could autonomously find and exploit vulnerabilities in blockchain smart contracts worth millions. The Mythos Preview built on that foundation — finding thousands of zero-day flaws across critical software with less human guidance than any previous model.

OpenAI has not yet published technical benchmarks for GPT-5.5-Cyber or detailed what specific capabilities distinguish it from Mythos. Cybersecurity buyers — whether in enterprise or government — will want head-to-head evidence of how the two models compare before committing.

What’s Really at Stake

This is as much a narrative battle as a product battle. Anthropic has built a reputation as the safety-first lab that takes the hardest calls seriously — even when it costs them government contracts. OpenAI is betting that “rapid” and “collaborative” beats “cautious” and “restricted” when it comes to winning the cybersecurity market. Both propositions have merit. The companies are, in effect, making different arguments about what trustworthiness looks like in high-stakes AI deployment.

What is clear is that cybersecurity has become the defining arena for the next phase of the AI race. Frontier labs are no longer just competing on coding benchmarks or chatbot quality. They are competing on who can credibly claim to make the internet safer — while simultaneously releasing models that make it more dangerous. That tension isn’t going away, and GPT-5.5-Cyber is the latest move in a game with no obvious endgame.
