DeepSeek Has No Safety Blocks Against Generating Harmful Information: Anthropic CEO Dario Amodei

DeepSeek might have taken the world by storm with its AI models, but Anthropic CEO Dario Amodei isn’t happy with the lack of safety guardrails in those models.

“Anything you want to say to DeepSeek?” Amodei was asked on a podcast. “I don’t know,” he replied. “They seem like talented engineers. I think the main thing I would say to them is, take seriously these concerns about AI system autonomy. When we ran evaluations on the DeepSeek models — we have a set of national security evaluations for whether models are able to generate information about bioweapons that can’t be found on Google or can’t be easily found in textbooks — the DeepSeek model did the worst of basically any model we’d ever tested, in that it just had absolutely no blocks whatsoever against generating this information,” he added.

Amodei clarified that the DeepSeek model itself was likely not dangerous, but urged AI companies to think deeply about AI safety. “I don’t think today’s models are literally dangerous in this way. Just like with everything else, I think we’re on an exponential, but later this year, perhaps next year, I think they might be. And so my advice to DeepSeek would be to take seriously these AI safety considerations. You know, the majority of the AI companies in the U.S. have stated that these issues around AI autonomy, and also these issues around AI misuse, are serious and potentially real issues,” he added.

“My number one hope would be that (DeepSeek engineers) come work in the U.S., they come work for us or another company. My number two hope would be that if they’re not going to do that, you know, they should take some of these concerns about the risks of AI seriously,” he added.

Amodei seemed to be saying that while DeepSeek had released a very performant model, it lacked the safety guardrails that many US-made models have. This isn’t the first time that Amodei has appeared to criticize DeepSeek: when the model first came out, he called for export controls on NVIDIA chips to China so that Chinese companies couldn’t compete with American companies, and America could maintain its lead in the AI race. Also, Anthropic itself focuses quite a bit on safety, so it’s understandable that it evaluates new AI models on how safe they are. But DeepSeek has created a pretty unusual situation in tech: thus far, US companies have criticized the Chinese for too much censorship and too many controls, but for once, the Chinese seem to have created a product with fewer restrictions than their American counterparts’ own.