Yann LeCun has been saying for a while that LLMs won’t get humanity to AGI, and now he’s encouraging emerging AI powers like India to work on alternatives as a competitive hedge.
LeCun, until recently Chief AI Scientist at Meta and one of the so-called “Godfathers of Deep Learning,” has made the case that the very competitiveness of the frontier model race might be its biggest weakness — and India’s biggest opportunity.

“There’s a race for producing the frontier models, and right now everybody is working on the same thing,” he said on the sidelines of the India AI Summit in New Delhi. “In Silicon Valley, there’s a handful of companies trying to compete with each other, and they’re all doing exactly the same thing. In fact, they are stealing each other’s engineers.”
The observation is a pointed one. OpenAI, Google DeepMind, Anthropic, xAI, and Meta itself are all locked in a war of attrition — competing for the same talent, training on similar data, and converging on similar architectures. The result, LeCun argues, is structural stagnation masquerading as progress.
“They don’t deviate from that because if they start deviating, they run the risk of actually falling behind. They’re all working in the same direction. That creates a monoculture.”
For LeCun — who has long championed alternative approaches to machine intelligence, including energy-based models and his own Joint Embedding Predictive Architecture (JEPA) — the monoculture framing is both a warning and an invitation. His own research bets heavily on the idea that the next leap in AI will come from somewhere unexpected, and that LLMs, for all their dazzle, represent a local maximum rather than a global one. He has also put his money where his mouth is: he left his post as Meta’s Chief AI Scientist and is now working on a startup called AMI that focuses on non-LLM approaches.
“One of the bets I’m making is that the next revolution is going to be something different,” he said. “Perhaps the purposeful thing to do for India would be to try to leapfrog — embrace perhaps the next revolution. Not necessarily try to catch up with the current trend.”
The advice lands at a moment when India is actively debating its AI strategy. The government’s IndiaAI Mission has committed over ₹10,000 crore to building compute infrastructure, datasets, and foundational model capacity — much of it oriented around catching up to frontier LLMs. This approach has begun to show results, with India’s Sarvam releasing two open-source models at 30B and 105B parameters. But LeCun’s implicit argument is that this may be the wrong race to run.
The broader context gives his words additional weight. Countries and blocs that have tried to compete head-on with American hyperscalers have largely struggled. The EU’s AI ambitions remain fragmented; China’s LLM push, while formidable, has been hampered by export controls on high-end chips. Meanwhile, the compute requirements for training frontier models have become so astronomical that the barrier to entry is essentially a function of capital, not ingenuity.
If LeCun is right that the next paradigm shift in AI is coming, the country or institution that bets on it early — rather than racing to catch up on LLMs — could find itself at the frontier rather than behind it. For India, with its deep pool of research talent and a government increasingly willing to fund ambitious science, that could be a bet worth taking seriously.