Richard Sutton doesn’t just believe that digital intelligences will eventually succeed humans; he has a four-part argument for how it will happen.
The Turing Award winner and reinforcement learning pioneer recently appeared on the Dwarkesh Patel podcast, where he laid out what he sees as an inevitable path toward digital intelligence succession. Sutton’s argument is notable not just for its stark conclusion, but for its methodical structure—breaking down what he views as the fundamental forces that make this transition unavoidable rather than merely possible.

When asked whether succession to digital intelligence or augmented humans is inevitable, Sutton presented his case systematically: “I have a four-part argument. The argument, step one is there’s no government or organization that gives humanity a unified point of view that dominates and that can arrange—there’s no consensus about how the world should be run.”
His second pillar builds on the relentless march of scientific progress: “We will figure out how intelligence works. Researchers will figure it out eventually.” This reflects Sutton’s confidence in the research community’s ability to decode the fundamental mechanisms of cognition—a field he has spent decades advancing through his groundbreaking work in reinforcement learning and computational neuroscience.
The third element of his argument assumes that human-level AI won’t be the endpoint: “We won’t stop just with human-level intelligence, we will reach super intelligence.” This progression from artificial general intelligence to artificial superintelligence has become a central theme in AI safety discussions, with researchers debating not whether it will happen, but how quickly and controllably.
Sutton’s final point ties together the previous three with what he sees as a fundamental law of power dynamics: “Once it’s inevitable over time that the most intelligent things around would gain resources and power.” He concludes, “Put all that together, it’s sort of inevitable that you’re going to have succession to AI or to AI-enabled augmented humans. Within those four things seem clear and sure to happen, but within that set of possibilities, there could be good outcomes as well as bad outcomes.”
This isn’t the first time Sutton has talked about humans being succeeded by AI; he has previously said that humans should “welcome” being succeeded by AI as a part of evolution. While this may sound like science fiction, the implications of Sutton’s framework extend beyond academic speculation. His argument arrives as unprecedented AI capabilities emerge from companies like OpenAI, Anthropic, and Google DeepMind, even as the coordination challenges he identifies play out in real time, from regulatory fragmentation across nations to competitive pressures that prevent unified approaches to AI development. Recent developments, from the race toward artificial general intelligence to ongoing debates over AI governance at international safety summits, seem to validate each pillar of his thesis. Whether this trajectory leads to the “good outcomes” Sutton acknowledges as possible may depend on how seriously policymakers and technologists take the very coordination problems he argues make succession inevitable.