LLMs are all the rage at the moment, but a growing number of computer scientists are arguing that they won’t necessarily be the path to AGI.
Richard Sutton, a distinguished Canadian computer scientist often hailed as one of the godfathers of modern reinforcement learning, has suggested that the current global obsession with Large Language Models (LLMs) like those powering ChatGPT and Gemini is a temporary phase and that the future of artificial intelligence will look quite different. He frames this prediction within a broader historical lesson about the nature of progress in AI.

Sutton began by placing the current wave of generative AI into a broader timeline. “I think they’re going to be what will seem, in retrospect, to be a momentary fixation of the world on Large Language Models. The AI we have in the future will be quite different,” he asserted. “We’ve only been doing them for a few years.”
To explain his reasoning, Sutton referenced one of his most famous contributions to the discourse on AI, an essay titled “The Bitter Lesson.” “Some of you may have heard about the Bitter Lesson,” he said. “The Bitter Lesson is that when we make our AI systems, as we have many times in the past, we build them to work the way we think we work. This works well for a while, but in the long run, it’s counterproductive. It’s much better to do things that scale with computation rather than things that take advantage of our human knowledge.”
Applying this lesson to the current AI landscape, Sutton posited that LLMs, for all their impressive capabilities, are another example of AI systems built too closely in our own image—in this case, mimicking human language and knowledge structures.
“I’m thinking that Large Language Models will be another instance of that,” he explained. “In the long run, they will have this limited, important role, but they will not be representative of the leading edge of AI for more than a decade, probably half a decade.”
Sutton’s perspective is grounded in decades of research and observation. His “Bitter Lesson,” first published in 2019, argues that the biggest breakthroughs in AI have consistently come from leveraging raw computational power over elegant, human-centric designs. The essay points out that approaches that relied on human-curated knowledge or attempted to bake in cognitive theories, while initially promising, were eventually surpassed by methods that allowed the machine to learn from scratch using massive datasets and processing power. Chess programs, for example, stopped trying to mimic grandmasters and instead became dominant through brute-force search and self-play—methods that scale directly with computation.
From this viewpoint, today’s LLMs are a mixed bag. While they leverage enormous computational scale for their training, their fundamental architecture is deeply tied to human language and the vast corpus of text we have created. Sutton’s argument implies that we are once again embedding our own methods of understanding into our machines. This may yield impressive results now, but it might also create a ceiling that a more general, computation-driven learning method could eventually shatter.
This view is gaining traction among other AI luminaries. Meta’s chief AI scientist, Yann LeCun, has also expressed skepticism, arguing that LLMs lack true understanding, common sense, and the ability to plan, all critical components for AGI. He advocates for architectures that can learn from a wider range of data, including sensory inputs like video, to build a more robust model of the world. Google CEO Sundar Pichai has likewise said it is “entirely possible” that current paradigms won’t get humanity to AGI.

The future leading edge of AI, according to this school of thought, may lie not in bigger language models but in autonomous, agent-based systems that learn about the world through interaction and experience, much as humans do. Such systems would be a truer application of the “Bitter Lesson”: less about processing human-generated text and more about using computation to generate their own understanding of reality.

For the business and tech world, Sutton’s words are a crucial reminder that the current AI paradigm, while revolutionary, may be a stepping stone rather than the final destination.