India Ranks 6th Among Countries With Own AI Models With Sarvam 105B

India has taken its first steps into the AI race.

India’s Sarvam 105B ranks sixth in the Artificial Analysis Intelligence Index’s “Leading Models by Country” chart, which ranks nations by the score of their best-performing AI model. The US leads with Gemini 3.1 Pro Preview at 57, followed by China (GLM-4.5 at 50), South Korea (K-EXAONE at 32), France (Mistral Small 4 at 27), and the UAE (K2 Think V2 at 24). Sarvam 105B’s score of 18 puts India sixth, edging out Israel’s Jamba 1.7 Large (11) and Switzerland’s Apertus 70B Instruct (8).

The chart reflects a broader truth: the countries that matter in AI are those building foundation models, not just deploying them. The US and China each have multiple competitive labs — the chart’s top two entries are just their current best. Every other nation on the list is represented by a single model.

For India, that model is Sarvam 105B. Announced at the India AI Impact Summit 2026 and open-sourced under Apache 2.0, it’s a Mixture-of-Experts model with 105B total parameters (~10B active per token) and a 128K context window. Crucially, it was pre-trained from scratch entirely in India — not fine-tuned from an existing Western or Chinese base. Sarvam’s previous model, Sarvam M, was built on top of Mistral Small and scored 8 on the Intelligence Index. The leap to 18 with a fully indigenous pre-train is meaningful.
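The appeal of the Mixture-of-Experts design is that inference cost tracks the active parameters, not the total. A back-of-the-envelope sketch makes this concrete; the expert count and top-k below are illustrative assumptions consistent with the announced figures, not Sarvam’s published configuration:

```python
# Back-of-the-envelope MoE parameter arithmetic.
# The shared/expert/top-k split below is an ASSUMED configuration;
# only the 105B-total and ~10B-active figures come from the announcement.

def moe_params(shared_b, num_experts, expert_b, top_k):
    """Return (total, active-per-token) parameter counts in billions
    for a MoE model: dense shared layers plus routed expert layers."""
    total = shared_b + num_experts * expert_b
    active = shared_b + top_k * expert_b
    return total, active

# One configuration consistent with "105B total, ~10B active":
total, active = moe_params(shared_b=5.0, num_experts=100, expert_b=1.0, top_k=5)
print(f"total = {total:.0f}B, active per token = {active:.0f}B")
```

Because the router fires only k experts per token, the per-token compute is closer to that of a ~10B dense model, while the full 105B of stored weights gives the model its capacity.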

The compute for Sarvam 105B came from the IndiaAI Mission, which provided 4,096 Nvidia H100 GPUs in exchange for an equity stake in Sarvam. That structure — government compute for government equity — reflects how seriously India is now treating sovereign AI as a strategic priority, not just an R&D ambition.

That said, the benchmarks reveal where Sarvam 105B still trails. Among ~100B-class open-weights reasoning models, it scores below Mistral Small 4 (27), INTELLECT-3 (22), and GLM-4.5-Air (23). Its weakest areas are agentic coding — just 1.5% on TerminalBench Hard vs. GLM-4.5-Air’s 20.5% — and factual precision, with a negative AA-Omniscience score driven by high hallucination rates. Sarvam 105B attempts to answer rather than abstain, which hurts accuracy metrics. Its relative strengths are in select agentic tasks and in math and science benchmarks like HMMT and GPQA Diamond.
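Why answering instead of abstaining drives a score negative can be shown with a toy scoring rule (a sketch only, not the actual AA-Omniscience formula): if correct answers earn +1, wrong answers −1, and abstentions 0, a model that always answers goes negative as soon as it is wrong more often than right.

```python
def knowledge_score(correct, wrong, abstained):
    """Toy hallucination-penalizing score: +1 per correct answer,
    -1 per wrong answer, 0 per abstention, scaled to [-100, 100].
    Illustrative only -- not the published AA-Omniscience formula."""
    total = correct + wrong + abstained
    return 100 * (correct - wrong) / total

# Always answering with 40% accuracy yields a negative score:
print(knowledge_score(correct=40, wrong=60, abstained=0))   # -20.0
# Abstaining on the unknown 60% keeps the score positive:
print(knowledge_score(correct=40, wrong=0, abstained=60))   # 40.0
```

Under any rule of this shape, a model tuned to abstain when unsure outscores an equally accurate model that guesses, which is one plausible reading of Sarvam 105B’s negative factual-precision result.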

India’s other major AI contender, Ola Krutrim, has not yet produced a model competitive enough to appear on this chart. That makes Sarvam the only credible entry India has on the global leaderboard right now.

Sixth place, with a single model, is not where India wants to end up. But it’s where India needed to start. A model pre-trained from scratch on domestic compute, open-sourced, and competitive enough to outrank two developed economies is a legitimate foundation. The gap to the US and China is vast — but the gap between India a year ago and India today is no longer zero.
