LLMs Are Very Useful, But They Aren’t Taking Us In The Direction Of AGI: David Deutsch

It’s not just computer scientists like Yann LeCun and Richard Sutton who believe that LLMs aren’t necessarily the path to AGI. British physicist and author David Deutsch, widely regarded as the father of quantum computing, appears to be in the same camp.

Deutsch, a pioneering figure in quantum computation and the originator of constructor theory, recently offered a perspective on large language models that challenges the prevailing narrative in Silicon Valley. His comments carry particular weight given his deep engagement with fundamental questions about knowledge, computation, and intelligence, themes central to his seminal work “The Beginning of Infinity.” What makes his position especially interesting is that it is not a dismissal of LLMs, but rather a careful delineation between practical utility and the path toward artificial general intelligence.

Drawing on Einstein’s observation about the creative power of tools, Deutsch frames the current AI revolution in familiar terms. “There’s a quote by Einstein, which says my pencil and I are more clever than I. It’s the same with my computer and I,” he explains. “Now we have got LLMs, which are helping us to become more efficient. I think I have become twice as fast at writing than I was before LLMs. They’re very useful.”

But it’s what comes next that cuts against much of the industry rhetoric. “I keep saying that an LLM is nothing like an AGI, and people think I’m down on LLMs,” Deutsch continues. “[They think] I think that they’re going in the wrong direction. No, they’re going in a great direction, and will go further, I think, and hope. But it’s not the AGI direction. It’s almost the opposite.”

That final phrase, “almost the opposite,” is particularly striking. Deutsch isn’t merely saying that LLMs are a detour from AGI; he’s suggesting they may represent a fundamentally different trajectory altogether. His characterization of LLMs as productivity tools, akin to pencils and computers, positions them as cognitive amplifiers rather than nascent minds.

This perspective aligns with a growing chorus of voices questioning the AGI timeline promised by major AI labs. LeCun has repeatedly argued that current LLMs lack key capabilities like persistent memory, reasoning, planning, and understanding of the physical world—components he believes are essential for human-level intelligence. Sutton, despite his optimism about scaling, has emphasized that true intelligence requires real-world interaction and continual learning, not just pattern matching on static datasets.

The timing of these observations is significant. As companies like OpenAI, Google, and Anthropic pour billions into scaling LLMs ever larger, and as reports emerge of diminishing returns from simply adding more parameters and compute, the industry faces questions about whether it’s climbing the right hill. Recent developments—such as the focus on reasoning models, multimodal systems, and agentic architectures—suggest that even the labs themselves may be implicitly acknowledging that pure language modeling has limitations.

Deutsch’s distinction between “useful” and “AGI-directed” is perhaps the most valuable framing for the current moment. LLMs have already transformed how millions of people work, write, and solve problems. But productivity gains, no matter how substantial, don’t necessarily translate into the kind of open-ended creative intelligence that characterizes human cognition. If Deutsch is right, the industry may need to look beyond scaling laws and transformer architectures to find the true path to AGI—even as it continues to extract immense value from the direction it’s currently pursuing.
