My Greatest Fear Around Creating AI Is That Humans Would No Longer Be Needed: Geoffrey Hinton

The pace of progress in AI is breathtaking, but it may be carrying us toward a future in which we lose our pre-eminent place on this planet.

That is the stark warning from Geoffrey Hinton, one of the godfathers of modern artificial intelligence. The Nobel Prize-winning researcher, whose pioneering work on neural networks laid the foundation for today’s AI boom, has voiced profound concerns about the trajectory of the technology he helped create. His fear centers on a future in which AI surpasses human intelligence and renders us obsolete – a chilling prospect he believes we must confront sooner rather than later.

Hinton’s concerns are particularly weighty given his instrumental role in the AI revolution. His departure from Google in 2023, explicitly to speak more freely about the risks of AI, sent shockwaves through the tech world. In an interview, he articulated his deepest anxieties: “My greatest fear is that in the long run, it will turn out that these kinds of digital beings we are creating are just a better form of intelligence than people, and that that would be a bad thing.”

He acknowledges that not everyone shares this anthropocentric view. “There are some people who think we are very self-centered to think that will be a bad thing,” Hinton stated. He himself, however, is firm: “I think it will be a bad thing for people.”

When pressed for the reasoning behind this grave assertion, Hinton’s answer is direct and disquieting: “Why? We’d no longer be needed. If you want to know what it’s like not to be the apex intelligence, ask a chicken.”

The implications of Hinton’s warning are far-reaching. The notion of humanity being supplanted as the dominant intelligence on Earth raises fundamental questions about our purpose, our societal structures, and the very definition of what it means to be human. If machines can perform most, if not all, cognitive tasks better than humans, and robots can perform every physical task that humans can, what becomes of human labor, creativity, and innovation? This isn’t just about job displacement, already a growing concern with current AI capabilities; it’s about a potential existential shift in which human contribution becomes secondary, or even unnecessary, in the grand scheme of things. And if we do end up creating beings that can do everything we do, it raises an unavoidable question: what purpose would humans serve at that point?