AIs Are Descendants Of Humans, We Shouldn’t Look To Control Them Fully: Turing Award Winner Richard Sutton

There’s no shortage of experts worried about the potential risks of developing AI systems, but one prominent computer scientist has an interesting reason why humanity shouldn’t be overly concerned.

Richard Sutton, a Canadian computer scientist and a recipient of the 2024 Turing Award, often referred to as the “Nobel Prize of Computing,” recently shared some thought-provoking views on the future of AI. In essence, Sutton argued that we shouldn’t strive for complete control over advanced AI because these systems are our descendants, not our slaves. He suggests embracing the uncertainty and decentralized nature of the universe, allowing AI to evolve in its own way, much like we allow our children to grow and develop their own identities.


Sutton explains his reasoning as follows: “Suppose we understand intelligence so that anyone could create an intelligent instance and give it a set of goals. The intention is that they are humanity’s mind children; they are the descendants of humanity. They will reflect us. And, you know, we are profoundly imperfect. So we don’t want them to totally reflect us, just like you don’t want your children to totally reflect yourself.” He continues, highlighting the evolving nature of societal values: “All of those values, the things that we consider progressive and straightforward now, were considered just impossible to accept decades ago. We have to expect there to be changes.”

Moving into his core argument against control, he asks: “So my opinion is that we shouldn’t decide… like who are we to decide, you know, how the universe is going to evolve? And it’s more exciting that way. We shouldn’t be in charge of the long-term future. What’s the most evil thing? Making a choice that will permanently affect the long-term future. So, you know, if anyone is saying ‘Oh, I want to make the future safe,’ they’re really saying ‘I want to control all of the long-term future. I want to make sure that something can never happen.'”

He questions the authority of those who seek control: “So who is this person that’s going to determine the long-term future? Well, you know, what I’m getting at is that we don’t decide the world. The universe is out of control. We know no one is in charge. You’re not in charge, obviously, but also, you know, the good and the great, the government leaders, they are not in charge either. They are just barely making it through the day. It’s good that they’re not in charge. I like the world being decentralized and continuing to evolve. And we’ll see how it evolves. I think it’s a grand adventure.”

This isn’t the first time Sutton has taken an unconventional view on controlling AI; two years ago, he said that humanity shouldn’t fear being succeeded by AI. But his analogy of AI as our descendants frames the discussion in a fundamentally different light. Rather than viewing AI systems as tools to be controlled, he suggests we see them as entities with their own potential for growth and change. This perspective challenges the prevailing narrative of AI safety, which often focuses on containing and limiting AI capabilities.

His emphasis on the decentralized nature of the universe reinforces the idea that attempting to exert absolute control over AI’s evolution might be not only futile but also harmful, stifling innovation and limiting the benefits AI could bring. While acknowledging humanity’s imperfections, he suggests that allowing AI to evolve freely, with its own values and goals, could lead to a more dynamic and exciting future. This view, while potentially controversial, offers an interesting alternative to the anxieties so often expressed around the development of advanced AI.
