Not In An AI Bubble In Terms Of Apps, But In A Bubble In Believing LLMs Will Replicate Human Intelligence: Yann LeCun

Yann LeCun believes we’re in an AI bubble, but in a more nuanced sense than most commentary on the topic suggests.

Yann LeCun, Meta’s Chief AI Scientist and one of three “godfathers of deep learning,” recently offered a perspective that cuts through the binary debate about whether AI is overhyped. Speaking at an event, LeCun articulated a bifurcated view: the bubble isn’t in applications or infrastructure investment, but in something far more fundamental—the expectation that large language models will evolve into human-level intelligence. His argument reveals the gap between today’s commercial reality and tomorrow’s scientific ambitions.

“I think there are several points of view for which we’re not in a bubble and at least one point of view suggesting that we are in a bubble, but it’s a different thing. So we’re not in a bubble in the sense that there are a lot of applications to develop based on LLMs. LLM is the current dominant paradigm, and there’s a lot to milk there.”

LeCun pointed to the immediate commercial landscape as evidence. He described a future in which AI assistants transform daily life, one that demands both software innovation and a massive infrastructure buildout. The scale of computation required would be staggering: as Jensen Huang of Nvidia has argued, once smart wearable devices become ubiquitous, helping people navigate their lives, the computational load to serve billions of users will be enormous. From this perspective, the billions flowing into AI infrastructure and application development aren’t speculative; they’re foundational.

“To help people in their daily lives with current technology, that technology needs to be pushed, and that justifies all the investment that is done on the software side and also on the infrastructure side. Once we have smart wearable devices in everybody’s hands, assisting them in their daily lives, the amount of computation that will be required, as Jensen was saying, to serve all those people is going to be enormous. So in that sense, the investment is not wasted.”

The bubble, LeCun argues, exists elsewhere. It lives in the conviction that scaling current LLM architectures—adding more parameters, training data, and compute—will spontaneously generate human-level artificial intelligence. This belief, he contends, misunderstands the nature of intelligence itself.

“But there is a sense in which there is a bubble, and it’s the idea somehow that the current paradigm of LLM will be pushed to the point of having human-level intelligence, which I personally don’t believe in. And you don’t either. We need a few breakthroughs before we get to machines that really have the kind of intelligence we observe, not just in humans but also in animals. We don’t have robots that are nearly as smart as a cat.”

The cat comparison serves as a humbling anchor. Despite AI’s mastery of language and protein folding, our most advanced robots can’t match the sensorimotor intelligence, common sense, and adaptability of an ordinary housecat. This points to a fundamental gap: intelligence isn’t just about processing information, but about understanding the world in ways that current models cannot.

“And so we’re missing something big still, which is why AI progress is not just a question of more infrastructure, more data, more investment, and more development of the current paradigm. It’s actually a scientific question of how do we make progress towards the next generation of AI, which is why all of you.”

This distinction has profound implications for how businesses and investors approach AI. While applications built on today’s LLMs—customer service bots, coding assistants, content generation tools—represent legitimate value creation, the race toward artificial general intelligence faces a scientific chasm that money alone cannot fill. The tension is visible across the industry: OpenAI’s Sam Altman has reportedly sought up to $7 trillion for chip manufacturing to power AGI, while LeCun’s own employer, Meta, invests heavily in both LLM products and fundamental research into “world models” that might bridge the gap to more robust intelligence.

For startups, this means opportunity lives in the application layer, not in betting on imminent superintelligence. For enterprises, it suggests that vendor claims about “AGI-ready” systems should be viewed skeptically. And for the ecosystem of chip makers, cloud providers, and developers, it validates current infrastructure spending while warning against magical thinking about where that infrastructure leads. The bubble LeCun identifies isn’t in the hardware or the apps—it’s in the belief that the path from here to human-level AI is a straight line rather than a scientific leap into the unknown.

Posted in AI