DeepSeek might have made a splash with its R1 model release earlier this year, leading many to speculate that it had pulled ahead of US AI labs, but an Anthropic researcher says that isn’t the case.
Anthropic researcher Trenton Bricken said that while DeepSeek was at the frontier of AI research, it hadn’t quite passed it, as some people assumed. He said that DeepSeek’s efficiency gains, which resulted in lower prices, came from the model having been released several months after comparable US models, and that US labs had seen the same efficiency gains over that period.

“It’s been wild seeing the efficiency gains that these models have experienced over the last two years,” Bricken said on the Dwarkesh podcast. “DeepSeek was released nine months after Claude 3 Sonnet, and if we retrained the same model today or at the same time as DeepSeek’s work, we also could have trained it for 5 million or whatever the advertised amount was,” he said.
“DeepSeek has gotten to the frontier, but I think there’s a common misconception still that they are above and beyond the frontier, and I don’t think that’s right. I think they just waited, and then were able to take advantage of all the efficiency gains that everyone else was also seeing,” he added.
DeepSeek had released R1 earlier this year with abilities matching some of OpenAI’s best models, and had priced it 90 percent cheaper than most of the competition. This had led to the model going viral, and its app briefly becoming the top app on the US App Store. DeepSeek had also innovated at the lower levels of the software stack, writing low-level GPU code to work around US export restrictions on chips, and had gotten its models to work just as well as they would have on the most advanced NVIDIA GPUs.
But top US labs have largely downplayed DeepSeek’s achievements. Anthropic’s Jack Clark had earlier said that DeepSeek’s hype was a bit overblown, and Google DeepMind CEO Demis Hassabis had said that while DeepSeek was impressive, it hadn’t actually invented anything new. OpenAI’s Chief Research Officer Mark Chen had similarly said that DeepSeek had independently discovered some of OpenAI’s core ideas, but that the ideas themselves weren’t necessarily novel. Others had suggested that DeepSeek had more resources than people assumed: Anthropic CEO Dario Amodei had claimed that DeepSeek had as many as 50,000 GPUs, and had separately said the model had no guardrails against generating harmful information.
But even if DeepSeek hasn’t necessarily pushed the frontier forward, the fact that it reached the frontier while operating outside the US, and while dealing with export restrictions on GPUs, is undeniably impressive. DeepSeek had been barely known outside the research community until it released its v3 model, but is now seen by top US labs as a “competitor” at the frontier. It remains to be seen how DeepSeek will fare in the coming months, but it has certainly gotten the entire world, and the world’s top AI labs, to take notice.