We Don’t See Imminent Danger From AI: Anthropic CEO Dario Amodei

One of the most safety-focused AI labs sees no imminent danger from AI — yet.

In a recent podcast, Anthropic CEO Dario Amodei addressed the growing concerns surrounding the potential risks of artificial intelligence. While acknowledging the importance of mitigating these risks, Amodei cautioned against exaggerating the immediate threat, arguing that poorly presented evidence can be counterproductive.

“I often feel that the advocates of risk are sometimes the worst enemies of the cause of risk,” Amodei explained. “There’s been a lot of noise out there. There’s been a lot of folks saying, ‘Oh look, you can download the smallpox virus,’ because they think that that’s a way of driving political [action].” He argues that such comparisons, often lacking in concrete evidence, are easily dismissed by those less concerned about AI risk.

“And then, of course,” he continued, “the other side recognizes that and they say, ‘This is dishonest, that you can just get this on Google, who cares about this?’ So poorly presented evidence of risk is actually the worst enemy of mitigating risk, and we need to be really careful in the evidence we present.”

“In terms of what we’re seeing in our own models, we’re going to be really careful,” Amodei stated. “If we really declare that a risk is present now, we’re going to come with the receipts. Anthropic will try to be responsible in the claims that we make. We will tell you when there is danger imminently. We have not warned of imminent danger yet.”

Amodei’s message is clear: while AI safety is crucial, it’s equally crucial to ground discussions in verifiable facts and responsible analysis. There have been several dramatic pronouncements on AI safety, sometimes from extremely respected voices. Nobel Prize winner Geoffrey Hinton, for instance, has said that open-sourcing powerful models is like selling nuclear weapons at RadioShack. Amodei seems to be saying that such pronouncements need to come with extraordinary proof, and that hyperbole and fear-mongering, however well-intentioned, may ultimately hinder the very cause they seek to advance.

Anthropic, for its part, has put AI safety at the forefront, being circumspect about releasing its models and even announcing cash prizes for researchers who can jailbreak them. And with powerful AI models having been available for over two years now, many of them open-source, there appears to be no major cause for concern from AI thus far.
