We’ll Reach AGI When It Takes Experts Months To Figure Out A Weakness In AI Systems: Demis Hassabis

Tech leaders differ on what AGI, or Artificial General Intelligence, actually means, but Google DeepMind CEO Demis Hassabis has come up with one of the most succinct definitions yet.

Demis Hassabis shared his perspective on the pursuit of Artificial General Intelligence (AGI) during Google’s I/O event. Hassabis, known for his work at DeepMind and his background in neuroscience, touched on several key elements he believes are crucial for AGI. He argues that current AI systems, while impressive, still fall short of true general intelligence due to inconsistencies and easily exploitable weaknesses.

“What I would call AGI is really a more theoretical construct: what the human brain as an architecture is able to do. The human brain is an important reference point because it’s the only evidence we have, maybe in the universe, that general intelligence is possible,” Hassabis says.

“[In order to claim AGI has been reached,] you would have to show your system was capable of doing the range of things even the best humans in history were able to do with the same brain architecture. It’s not one brain, but the same brain architecture. So what Einstein did, what Mozart was able to do, or Marie Curie, and so on,” he explained.

Hassabis is clear that current systems don’t have this ability. “It’s clear to me today’s systems don’t have that. And the other reason I think the AGI hype is overblown is that systems are not consistent enough to be considered fully general yet. They’re quite general, so they can do thousands of impressive things.”

Hassabis then lays out his central argument: systems need to be far more consistent before they can be called AGI. “But you can easily, within a few minutes, find some obvious flaw with them, some high school math problem they can’t solve, some basic game they can’t play. It’s not very difficult to find those holes in the system, and for me, for something to be called AGI, it would need to be consistent, much more consistent across the board than it is today. It should take like a couple of months for maybe a team of experts to find a hole in it, an obvious hole in it, whereas today it takes an individual minutes to find one.”

Hassabis’s comments highlight a crucial distinction between narrow AI, which excels at specific tasks, and the broader, more versatile AGI that many researchers are striving for. The point about consistency is particularly relevant: current AI systems, despite their capabilities, often exhibit surprising blind spots and make errors that seem nonsensical to humans. This is especially evident in generative AI models, which can produce seemingly coherent text or images while demonstrating a profound lack of understanding of the underlying concepts. By Hassabis’s measure, AGI will have been reached only when even teams of experts struggle to uncover such flaws in AI systems.

It’s an interesting definition of AGI, and one that seems quite intuitive. AGI is broadly understood to be an intelligence that’s as good as human intelligence across a broad range of tasks. But others have offered their own definitions: Microsoft CEO Satya Nadella says AGI will have been reached when AI can cause global GDP to rise by 10%, and OpenAI CEO Sam Altman believes the definition of AGI is so amorphous that future historians won’t necessarily agree on when it was reached. But while tech leaders might debate what exactly AGI is, they all seem to believe it will be reached in the next few years. And that alone could be a sign for the world to brace for the changes looming ever larger on the horizon.