For the better part of the last three years, psychologist and cognitive scientist Gary Marcus has insisted that the generative AI boom won't necessarily lead to real machine intelligence, but he now seems to have support from some unexpected and distinguished quarters.
Richard Sutton, a Turing Award winner widely considered the father of reinforcement learning, appeared on the Dwarkesh Podcast today and said that he didn't believe scaling LLMs would lead to human-like intelligence. This was striking coming from Sutton, because he is the author of the famed "Bitter Lesson" essay, which argues that simply scaling AI systems with more computation usually leads to better outcomes than devising clever, hand-crafted approaches or algorithms. Sutton said that LLMs take no feedback from the real world and can formulate no goals, so they can't be considered intelligent, and any systems built on top of them would be unlikely to be intelligent either.

Gary Marcus responded to Sutton's comments on X. "What has this world come to?! Mr Bitter Lesson, Richard Sutton, now sounds exactly like … me," he said. "Astonishing. It's been a long, hard, unpleasant road. But one by one, every major figure in AI has come around to the critique of LLMs that I began giving here in 2019. (Meta AI Chief Scientist) Yann LeCun was first. (Google DeepMind CEO) Sir Demis Hassabis sees it now, too. Turing Award winner Richard Sutton, famous for the Bitter Lesson, is the latest to come around. The only people left still pretending scaling LLMs is all you need are grifters," he added.
Richard Sutton then replied to Marcus, hinting that Marcus had been right all along and commending him for his courage in speaking out first. "You were never alone, Gary, though you were the first to bite the bullet, to fight the good fight, and to make the argument well, again and again, for the limitations of LLMs. I salute you for this good service!" he said.
Richard Sutton is now one of the most high-profile members of the LLM-skeptic club, which believes that current AI approaches won't lead to AGI, and that the massive investments in GPUs and datacenters might not produce the results big tech companies are hoping for. There are people in the other camp too. OpenAI's former Chief Scientist Ilya Sutskever has argued that LLMs do represent real intelligence, giving the example of a long murder mystery whose last sentence begins "and the murderer is..". Finishing this sentence would require the LLM to understand the entire plot of the book, find the patterns and clues, and predict the murderer, so LLMs, even with their next-token-prediction architecture, do represent intelligence. But with strong arguments from big names like Yann LeCun and Richard Sutton opposing this view, it is far from a given that simply scaling LLMs will be enough to take humanity to AGI.