Govts Understand Bias And Discrimination, But Don’t Understand AI Safety: Nobel Winner Geoffrey Hinton

Even as AI continues to grow ever more capable, long-time AI safety voices seem to be despairing over how little concern governments are showing about its possible downsides.

Geoffrey Hinton, widely regarded as the “Godfather of AI”, has expressed deep concerns about the lack of political will to address AI safety. Hinton, who received the 2018 Turing Award for his groundbreaking work on neural networks and the Nobel Prize in Physics in 2024, lamented that the focus remains on more easily grasped issues like bias and discrimination, while the larger, existential threat of uncontrolled AI goes largely unaddressed.

“The question is,” Hinton states, “are we going to be able to develop AI safely? And there seems to be not much political will to do that. People are willing to talk about things like discrimination and bias, which [are] things they understand.” Hinton says the true danger lies elsewhere. “But most people still haven’t understood that these things really do understand what they’re saying. We’re producing these alien intelligences.”

Hinton continues, emphasizing the shift from tools to agents: “For now, we’re in control, but we’re making them into agents. So, they get to do things in the world.” He then outlines the potential for these agents to develop a drive for power: “And they’re very soon going to realize that a good thing to do to achieve your goals is to get more control.”

This leads Hinton to his bleak conclusion: “And so I’m very worried that when [we’re in] a situation now where we’d like really strong, sensible governments with intelligent, thoughtful people running them. And that’s not what we’ve got.”

Hinton’s point seemed to be that while governments are comfortable addressing familiar ethical issues like bias and discrimination within AI systems, they seem unable or unwilling to grapple with the potentially catastrophic implications of increasingly autonomous and intelligent machines. This could be partly because most governments are run by people who lack the technical background to understand how AI systems work, or to appreciate how quickly an exponentially improving technology can become unrecognizable from what it was just a few years earlier. A focus on near-term, understandable problems also distracts from the existential risks posed by AI agents capable of independent action, potentially driven by self-preservation and goal attainment.

But not everyone is as worried about AI risk as Hinton. Elon Musk has said that there’s an 80 percent chance of AI leading to prosperity for all humans, and a 10 percent chance of it leading to the complete destruction of humanity. Hinton, though, has long maintained that AI systems need to be controlled, once saying that open-sourcing AI models was like selling nuclear weapons at RadioShack. Chinese company DeepSeek has open-sourced a powerful model since that statement, and, at least for now, there appear to be no ill effects of doing so. It remains to be seen whether AI development stays safe in the coming years, but governments and regulators would do well to heed Hinton’s consistent warnings as the technology accelerates.