Yet another country is taking its first steps in the AI race.
India’s Sarvam has released two models, Sarvam 30B and Sarvam 105B. Both were trained from scratch in India by the company, using GPUs provided by the government of India in exchange for an equity stake. Sarvam has said that both models will be open-sourced.
Sarvam 30B
Sarvam 30B is Sarvam’s smaller model: a 30-billion-parameter mixture-of-experts (MoE) model with 1 billion active parameters per query. It has a 32,000-token context window and was trained on 16 trillion tokens.
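That 30B-total / 1B-active split is what a sparse mixture-of-experts design buys: a router selects a few expert feed-forward blocks for each input, so only a small fraction of the weights runs on any given forward pass. The sketch below illustrates generic top-k MoE routing; the hidden size, expert count, and top_k value are illustrative assumptions, not published Sarvam details.

```python
# Generic top-k mixture-of-experts routing (illustrative sketch only;
# d_model, num_experts, and top_k are assumed values chosen to show why
# the "active" parameter count can be far smaller than the total count).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    def __init__(self, d_model=1024, d_ff=4096, num_experts=32, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):                          # x: (num_tokens, d_model)
        scores = self.router(x)                    # (num_tokens, num_experts)
        weights, expert_idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Only the top_k experts chosen by the router run for each token,
        # so most expert weights stay idle on any given forward pass.
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = expert_idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out
```

Production MoE implementations batch tokens per expert rather than looping, but the routing idea is the same.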
Sarvam 30B is comparable to Gemma 27B, Mistral 3.2-24B, OLMo 3.1 32B, Nemotron 30B, Qwen 30B, GLM Flash, and GPT-OSS-20B.


Sarvam says it is a thinking model that, with its large context window and affordable pricing, can power a real-time conversational agent. The model is open-source.

Sarvam 105B
Sarvam 105B is the company’s larger model. It is also a mixture-of-experts model and has 9 billion active parameters. It has a context window of 128k tokens and is designed for complex reasoning tasks.
Sarvam 105B is comparable to GPT-OSS-120B and Qwen3-Next-80B on some benchmarks. Like Sarvam 30B, it is open-source.
On GPQA Diamond and HMMT, Sarvam 105B does better than Qwen3-Next-80B and GLM-4.5 Air. It outperforms GPT-OSS-120B on Beyond AIME and MMLU Pro.


Sarvam 105B also performs well on coding tasks. On LiveCodeBench v6, it outperforms Qwen3-Next-80B and GLM-4.5-Air.
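Since both models are open-source, they should be usable with standard open-weights tooling once the weights are published. Below is a hedged sketch using the Hugging Face transformers API; the repository name is a placeholder assumption, and the actual model ID, license, and chat template may differ.

```python
# Hedged sketch of loading an open-weights checkpoint with Hugging Face
# transformers. The model ID below is a placeholder, not a confirmed
# repository name; check Sarvam's official release for the real one.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sarvamai/sarvam-105b"  # placeholder / assumed repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

messages = [{"role": "user", "content": "Explain the Pythagorean theorem in Hindi."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```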
