We’re Fooled Into Thinking LLMs Are Intelligent Because They Can Manipulate Language: Yann LeCun

Just because it walks like a duck and talks like a duck doesn't mean it's a duck. And the same may be true of LLMs.

That’s the stark warning from Yann LeCun, Meta’s former Chief AI Scientist and one of the godfathers of deep learning who shared the 2018 Turing Award. In a recent interview, LeCun delivered a sobering reality check about the hype surrounding large language models, arguing that the AI community and the public are being “fooled” by machines that excel at linguistic manipulation but lack genuine intelligence. His perspective carries particular weight given his pioneering work in neural networks and his front-row seat to multiple waves of AI enthusiasm over the decades.

“We keep getting confused about the fact that it’s not because machines are good at a certain number of tasks, that they have all the underlying intelligence that we assume a human having those capabilities will have,” LeCun explained. “We’re fooled into thinking those machines are intelligent because they can manipulate language, and we’re used to the fact that people who can manipulate language very well are implicitly smart. But we are being fooled.”

LeCun was quick to acknowledge the practical value of these systems. "Now they're useful. There's no question. They're great tools, like computers have been for the last few decades." The comparison to computers as tools rather than thinking entities is deliberate: LeCun sees LLMs as powerful utilities, not a nascent consciousness.

But it's his historical perspective that cuts deepest. "Let me make an interesting historical point, and this is maybe due to my age," LeCun said. "There's been generation after generation of AI scientists since the 1950s claiming that the technique that they just discovered was going to be the ticket for human level intelligence. You see declarations of Marvin Minsky, Newell and Simon, Frank Rosenblatt, who invented the perceptron, the first learning machine, in the 1950s, saying within 10 years we'll have machines that are as smart as humans. They were all wrong. This generation with LLMs is also wrong. I've seen three of those generations in my lifetime. So it's just another example of being fooled."

LeCun's warning comes at a pivotal moment. The AI industry has poured billions into LLM development, with OpenAI, Anthropic, Google, and Meta racing to build ever-larger models. Yet cracks in the narrative of inevitable progress toward artificial general intelligence are beginning to show. Experts like Ilya Sutskever have suggested that further breakthroughs are needed to reach AGI, and while models are improving rapidly on benchmarks, they aren't yet seeing commensurate real-world adoption. Meanwhile, researchers have documented persistent failures in LLMs: they stumble on seemingly simple tasks like counting the letters in a word, lack genuine world models, can't reliably carry out multi-step reasoning, and hallucinate with confidence.
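The letter-counting failure, in particular, has a well-known technical explanation: models operate on subword tokens rather than individual characters, so they never directly "see" the letters they are asked to count. A minimal sketch of this, using the open-source tiktoken library (the choice of encoding and the example word are assumptions here, not from the interview):

```python
# Sketch: why counting letters is hard for an LLM. The model receives
# subword token IDs, not characters. Requires: pip install tiktoken
import tiktoken

# cl100k_base is the tokenizer used by several recent OpenAI models
enc = tiktoken.get_encoding("cl100k_base")

word = "strawberry"
token_ids = enc.encode(word)
pieces = [enc.decode([t]) for t in token_ids]

print("Characters a human sees:", list(word))
print("Tokens the model sees:  ", pieces)  # e.g. ['str', 'aw', 'berry']

# Over characters, counting is trivial...
print("Count of 'r' over characters:", word.count("r"))  # 3
# ...but the model only ever receives the token IDs above, so a
# question like "how many r's in strawberry?" must be answered
# indirectly, without access to the character stream.
```

The point of the sketch is not that the task is impossible, only that the model's input representation makes a trivially easy character-level question into an indirect inference problem.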

LeCun has long argued that LLMs won't lead to AGI, and he has now acted on that conviction, quitting Meta to found a startup named AMI that will pursue new research directions toward genuine machine intelligence. History suggests he may be right to urge caution: the AI winters that followed previous waves of overpromising serve as cautionary tales. If LeCun is right, the question now is whether the industry will heed the warning of someone who has watched this cycle repeat, or whether, dazzled by LLMs' verbal dexterity, we are destined to learn the lesson again the hard way.