These Were the Top Open-Source Models on OpenRouter in 2025

The open-source AI landscape underwent a dramatic transformation in 2025, with usage data from OpenRouter revealing a newly diversified ecosystem where multiple model families now command substantial market share. While last year’s dominant players still lead in raw volume, the rapid ascent of newer entrants signals a fundamental shift in how enterprises and developers approach AI deployment.

DeepSeek Maintains Lead Despite Eroding Dominance

DeepSeek models processed 14.37 trillion tokens between November 2024 and November 2025, cementing their position as the most heavily utilized open-source models on the platform. However, this leadership position tells only part of the story. The Chinese AI lab’s share of the open-source market has declined notably as alternatives have proliferated, suggesting that technical performance alone no longer guarantees market dominance in an increasingly competitive field.

The company’s continued strength likely stems from early-mover advantages and proven reliability in production environments, but the narrowing gap with competitors indicates that differentiation in the open-source AI market now requires more than raw capability.

Alibaba’s Qwen Emerges as Serious Contender

Perhaps the most significant development of 2025 was Qwen’s rise to 5.59 trillion tokens processed, establishing Alibaba’s open-source offering as the clear second choice for developers and enterprises. This represents not just technical achievement but a strategic validation of China’s broader AI ambitions. Qwen’s strong showing demonstrates that alternatives to Western-developed models have achieved production-grade quality and reliability.

The model family’s rapid adoption suggests it has successfully addressed key enterprise concerns around multilingual capability, deployment flexibility, and performance across diverse use cases—factors that increasingly matter as organizations move beyond experimentation into scaled production deployments.

Meta’s LLaMA Secures Third Position

Meta’s LLaMA models captured 3.96 trillion tokens in usage, maintaining the company’s position as a major force in open-source AI despite intense competition. Meta’s approach of releasing progressively more capable models while fostering a robust developer ecosystem appears to be paying dividends, though the company now faces pressure from multiple directions as the field becomes more crowded.

The Middle Tier Tells a Story of Democratization

Below the top three, a cluster of model families—Mistral AI (2.92T), OpenAI (1.65T), Minimax (1.26T), Z-AI (1.18T), and TNGTech (1.13T)—each processed over a trillion tokens. This middle tier’s existence represents perhaps the most important structural change in the open-source AI market: the barrier to creating production-viable models has fallen dramatically.

Mistral AI’s strong showing at 2.92 trillion tokens underscores the French startup’s success in positioning itself as a European alternative with strong technical fundamentals. The company has carved out a distinctive niche by offering models that balance capability with computational efficiency—a critical consideration as inference costs remain a primary concern for enterprises scaling AI applications.

OpenAI, though primarily known for its closed models, appears on this list because of the company's strategic decision to release certain capabilities as open weights, a move likely aimed at maintaining relevance in research communities and in deployment scenarios where open-source models hold inherent advantages.

Emerging Players and the Long Tail

MoonshotAI (0.92T) and Google (0.82T) round out the top ten, each representing a different strategic approach to the open-source market. MoonshotAI's presence demonstrates that well-funded startups can achieve meaningful scale relatively quickly in today's environment. Google's comparatively modest showing reflects the fact that it has not released any frontier-class open models; its open-weight offerings are limited to the smaller Gemma family, which is designed primarily to run on-device.

What This Means for the Industry

The 2025 OpenRouter data reveals an open-source AI ecosystem that has matured beyond its early experimental phase into a genuinely competitive marketplace. Chinese models (DeepSeek, Qwen) command nearly 20 trillion tokens combined, while European (Mistral AI) and American players (Meta, OpenAI, Google) split the remainder. This geographic distribution has important implications for enterprises navigating data sovereignty concerns and geopolitical considerations in their AI strategies.

But the numbers show that open source is production-ready. The aggregate volume of over 33 trillion tokens processed across these model families in a single year demonstrates that open-source models have moved well beyond research applications into mission-critical production deployments at scale.
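The aggregate figures cited above can be tallied directly from the per-family token counts reported in this article. A minimal sketch (the dictionary below simply restates those numbers, in trillions of tokens):

```python
# Token counts (in trillions) for the top ten open-source model
# families on OpenRouter, Nov 2024 - Nov 2025, as cited in this article.
tokens_t = {
    "DeepSeek": 14.37,
    "Qwen": 5.59,
    "LLaMA": 3.96,
    "Mistral AI": 2.92,
    "OpenAI": 1.65,
    "Minimax": 1.26,
    "Z-AI": 1.18,
    "TNGTech": 1.13,
    "MoonshotAI": 0.92,
    "Google": 0.82,
}

total = sum(tokens_t.values())                      # aggregate across the top ten
chinese = tokens_t["DeepSeek"] + tokens_t["Qwen"]   # Chinese labs combined
deepseek_share = tokens_t["DeepSeek"] / total       # leader's share of the top ten

print(f"Total: {total:.2f}T tokens")        # Total: 33.80T tokens
print(f"DeepSeek + Qwen: {chinese:.2f}T")   # DeepSeek + Qwen: 19.96T
print(f"DeepSeek share of top ten: {deepseek_share:.0%}")
```

This confirms both headline figures: just under 20 trillion tokens for the two Chinese labs combined, and 33.8 trillion tokens across the top ten families overall.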

As we move beyond 2025, the competitive dynamics visible in this data suggest that the open-source AI market will continue to fragment and specialize. Organizations choosing AI infrastructure will increasingly need to evaluate not just which model performs best on benchmarks, but which model family offers the right combination of capability, cost, geographic considerations, and ecosystem support for their specific needs. The era of open-source AI as a monolithic alternative to closed models is over. In its place, we have a diverse, competitive ecosystem where multiple credible options coexist—precisely the outcome that open-source advocates have long predicted would drive innovation and adoption forward.
