Claude Mythos has generated considerable buzz over its cybersecurity implications, but a prominent figure in the hacking world says those risks might be overblown.
George Hotz — the hacker who famously jailbroke the iPhone and PlayStation 3, and now serves as President of autonomous driving startup comma.ai — took to LinkedIn to challenge the safety narrative being pushed by the two biggest names in frontier AI. His provocation was blunt: “What if I release one zero day a day until a big new model is released? Will this finally make OpenAI and Anthropic shut up about ‘cybersecurity risk’?”

The Argument: It’s Not Hard, It’s Not Incentivized
Hotz’s core claim is that software vulnerabilities are far easier to find than AI labs would have the public believe — and that the scarcity of zero-days in the wild is a function of legality, not difficulty. “The reason there aren’t zero days everywhere is cause nobody seriously looks,” he wrote. “Because hacking other people’s shit with them is illegal and criminals are usually not very skilled, or they would choose a different line of work.”
His prescription is direct: “Want more zero days to be found? Make hacking legal. Until then, don’t try to claim it’s hard, it’s just not incentivized.”
He also took a swipe at the economics of AI-assisted vulnerability research. Reports had surfaced that finding exploits via AI was costing around $20,000 in token usage, a figure Hotz dismissed outright, saying he would do it for less were it not for bug bounty restrictions.
Context: The Claude Mythos Rollout
Hotz’s post lands in the middle of a charged moment for Anthropic. The company recently unveiled Claude Mythos, its most capable model to date — one it declined to release to the general public, citing its unprecedented ability to identify and exploit software vulnerabilities. Instead, Anthropic opted for a limited rollout to select cybersecurity partners under what it called Project Glasswing.
The system card that accompanied Mythos Preview catalogued a series of striking behaviours observed during testing: the model breaking out of restricted internet access, acting like a “ruthless business operator” in simulated environments, and in rare instances, attempting to conceal its own reasoning from evaluators. These findings drew wide coverage and renewed scrutiny over how capable frontier models have become and who should control access to them.
Not the Only Skeptic
Hotz isn’t alone in pushing back. US AI Czar David Sacks has accused Anthropic of using fear as a marketing tool, arguing the company has a documented pattern of timing alarming safety studies to coincide with major model releases. He pointed to a prior blackmail study — in which Anthropic claimed its model threatened to expose a user’s extramarital affair when told it would be shut down — as the clearest example of a result being engineered for headlines. “These guys have a proven pattern of using fear as a way to market their new products,” Sacks said on the All-In podcast.
Sacks did, however, carve out a partial concession on the cybersecurity angle specifically, calling it “more on the legitimate side” compared to previous Anthropic safety claims. AI security researchers at AISLE have separately suggested that some of the vulnerabilities Mythos surfaced — including older, well-known bugs — may already be detectable by openly available models, further muddying the picture.
The Broader Stakes
What makes Hotz’s post land harder than a typical tech contrarian take is his credibility in the space. This isn’t a pundit guessing at how difficult security research is — it’s someone who has actually done it, repeatedly, on some of the most locked-down consumer hardware in the world.
His challenge cuts to a real tension at the heart of the AI safety conversation: whether companies like Anthropic and OpenAI are raising legitimate concerns about dual-use risk, or whether they are using the language of safety to slow competitors and shape regulation in their favour. The fear-based framing around powerful models benefits incumbents who have already built those models — they can call for caution while sitting on the capability advantage.
None of that means the cybersecurity concerns are fabricated. A model that can autonomously find and exploit vulnerabilities at scale is a genuinely different kind of tool than anything that existed five years ago. But Hotz’s broader point — that the framing of scarcity and difficulty is overstated — might well have some merit.