Anthropic’s founders split from OpenAI largely over concerns about AI safety, founding the company in 2021, and they still reach for evocative imagery to describe what AI’s impact could look like.
In a recent interview, Anthropic CEO Dario Amodei issued a stark warning about the risks facing AI companies that fail to address safety concerns: they could find themselves in the same position as the cigarette manufacturers and opioid producers that knew about the dangers of their products but stayed silent. Speaking alongside his sister Daniela Amodei, the company’s president, Dario framed the entire AI revolution as an “experiment” that requires what he called “bumpers or guardrails.”

“I think it is an experiment, and one way to think about Anthropic is that it’s a little bit trying to put bumpers or guardrails on that experiment,” Dario explained. Daniela emphasized the urgency of the moment, noting that “this is coming in incredibly quickly, and I think the worst version of outcomes would be, we knew there was going to be this incredible transformation and people didn’t have enough of an opportunity to adapt.”
She acknowledged that their approach breaks with industry norms: “It’s unusual for a technology company to talk so much about all of the things that could go wrong.” But Dario was adamant about why the transparency matters: “It’s so essential because if we don’t, then you could end up in the world of the cigarette companies or the opioid companies where they knew there were dangers and they didn’t talk about them and certainly did not prevent them.”
The Amodeis’ vocal stance on AI safety has drawn criticism in Silicon Valley, where detractors accuse the CEO of alarmism and dismiss Anthropic’s approach as “safety theater” or merely savvy branding. Meta’s Chief AI Scientist Yann LeCun, for instance, has said that Dario Amodei is deluded about the dangers of AI and has a “huge superiority complex.” When pressed on whether people should trust their intentions, Dario pointed to verifiable outcomes: “Some of the things just can be verified. They’re not safety theater. They’re actually things the model can do. For some of it, it will depend on the future, and we’re not always gonna be right, but we’re calling it as best we can.”
The comparison to cigarette and opioid companies is particularly pointed in today’s regulatory environment. Both industries faced massive legal and reputational consequences after internal documents revealed they had long known about health risks while publicly downplaying them. The tobacco industry’s decades of denial about cancer risks and Big Pharma’s role in the opioid crisis have become cautionary tales about corporate responsibility. Anthropic’s positioning suggests the company believes AI poses similarly high stakes, and that the industry’s credibility depends on addressing risks proactively rather than reactively. As AI systems become more powerful and widely deployed, the question of whether companies will prioritize safety over speed has moved from philosophical debate to urgent policy concern, with legislators in both the US and EU now actively working on AI regulation frameworks.