There’s plenty of buzz around how open-source AI will eventually beat closed-source AI, but Anthropic CEO Dario Amodei says that open-source AI isn’t quite the same thing as open-source software.
Speaking recently, Amodei challenged the conventional wisdom that open-source AI development follows the same trajectory as traditional open-source software. His remarks come as Chinese AI companies like DeepSeek draw significant attention for releasing powerful open-source models, prompting debates about competitive advantages and the future of AI development. The Anthropic CEO’s comments reveal a nuanced view that questions whether the open-source model that revolutionized software development can be applied directly to artificial intelligence.

“I don’t think open source works the same way in AI that it has worked in other areas primarily because with open source, you can see the source code of the model here. We can’t see inside the model,” Amodei explained. “It’s often called open weights instead of open source, right, to kind of distinguish that. But a lot of the benefits, which is that many people can work on it, that it’s kind of additive, it doesn’t quite work in the same way.”
This distinction between “open weights” and true open source reveals a fundamental difference in how AI models operate compared to traditional software. While software developers can examine, modify, and improve upon open-source code line by line, AI models remain largely opaque even when their weights are publicly available.
Amodei’s approach to evaluating new models reflects this philosophy: “I’ve actually always seen it as a red herring when I see a new model come out. I don’t care whether it’s open source or not. If we talk about DeepSeek, I don’t think it mattered that DeepSeek is open source. I think I ask, is it a good model? Is it better than us at the things that – that’s the only thing that I care about. It actually doesn’t matter either way.”
The practical realities of deploying AI models further support his argument. “Ultimately you have to host it on the cloud. The people who host it on the cloud do inference. These are big models. They’re hard to do inference on,” he noted. “And conversely, many of the things that you can do when you see the weights, we’re increasingly offering on clouds where you can fine tune the model. We’re even looking at ways to investigate the activations of the model as part of like an interpretability interface. We did some little things around steering.”
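The “investigate the activations” idea Amodei mentions can be made concrete. The sketch below is a hypothetical illustration, not Anthropic’s actual tooling: it uses a tiny stand-in network and PyTorch forward hooks to show the kind of access that having the weights in hand makes possible, and that an API-only closed model does not.

```python
# Hypothetical sketch: with open weights, anyone can load a model locally
# and attach hooks to read its internal activations. A tiny stand-in
# network illustrates the mechanism; in practice this would be a large
# transformer loaded from publicly released checkpoint files.
import torch
import torch.nn as nn

# Stand-in for an open-weights model.
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 4),
)

activations = {}

def capture(name):
    # A forward hook fires after a layer runs, exposing its output.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Attaching a hook is only possible because the model object (and its
# weights) are in our hands -- an API-only model offers no such handle.
model[1].register_forward_hook(capture("hidden_relu"))

x = torch.randn(2, 8)
_ = model(x)

# activations["hidden_relu"] now holds the hidden layer's output,
# ready for the kind of interpretability analysis Amodei describes.
```

This is the sort of capability that cloud providers can also expose on top of closed models, which is part of Amodei’s point: the practical difference between open and closed weights narrows once hosting, fine-tuning, and interpretability interfaces are offered as services.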
Concluding his thoughts, Amodei emphasized: “I think it’s the wrong axis to think in terms of – when I think about competition, I think about which models are good at the tasks that we do. I think open source is actually a red herring, but if it’s free and cheap to run – it’s not free. You have to run it on inference and someone has to make it fast on inference.”
These comments come as the AI industry grapples with questions about competitive positioning, particularly regarding Chinese AI development. Amodei has previously advocated for export controls on advanced semiconductors and other measures to manage China’s growing AI capabilities. The rise of models like DeepSeek, which have demonstrated impressive performance while being openly available, adds complexity to these discussions. His perspective suggests that rather than focusing on whether models are open or closed, the industry should concentrate on performance, practical deployment challenges, and the infrastructure required to make AI systems genuinely useful. That framing may reshape how investors, policymakers, and technologists weigh the strategic importance of open-source AI in the global competition for AI supremacy.