Even as there’s no shortage of pronouncements on how AI will soon change the world for the better, some tech leaders are also warning about the potential risks the technology carries.
Demis Hassabis, CEO of Google DeepMind, says he finds it baffling that some remain unconcerned about a technology this transformative. “Our colleagues, whom we all know, say there’s nothing to worry about here, which to me seems insane,” he said in an interview. “When you have clearly a super powerful new technology that’s going to be massively transformative and is obviously dual purpose, and we don’t fully understand that,” he added.

This “dual purpose” nature is precisely what worries many experts. AI can be used for incredible good, from medical breakthroughs to addressing climate change, but it can also be weaponized, used for surveillance, or employed to create increasingly sophisticated disinformation campaigns.
Despite his concerns, Hassabis remains cautiously optimistic that humanity can navigate these challenges. With sufficient time and effort, he argues, the right approach can be found: “…so I’m optimistic we’ll get this right given enough time, using the scientific method, and enough of our smartest people working on it. And that’s where international collaboration, among other things, comes in,” he said.
Hassabis points out that the complexities of AI safety go beyond simply building technically sound systems. “There are several issues that [need attention]. One is the technical design: can you build it technically safe? But even if one was able to do that,” he cautions, “if other groups or other countries or other companies don’t do that, then it doesn’t matter.”
This underscores the need for international agreements and standards governing AI development and deployment. If only some actors prioritize safety, the potential for misuse by others remains, undermining the collective effort to ensure AI benefits humanity. With the stakes so high, the call for careful consideration and international cooperation is more urgent than ever.
Hassabis isn’t the only AI leader who has spoken about the dangers of AI. Fellow Nobel laureate Geoffrey Hinton has said that AI models are dangerous if not built with adequate guardrails, and has gone so far as to compare open-sourcing powerful models to selling nuclear weapons at RadioShack. He has also said that AI is already showing signs of being deliberately deceptive. With yet another Nobel Prize winner calling it insane not to worry about the risks of AI at all, the tech community would do well to keep these warnings in mind as it builds and deploys AI systems around the world.