Current AI Systems Aren’t Conscious, But Future Systems Could Be: Google DeepMind’s Demis Hassabis

Even as technologists maintain that current AI systems aren’t conscious, they remain open to the possibility that consciousness could one day emerge.

Demis Hassabis, CEO and co-founder of Google DeepMind, recently shared some intriguing thoughts on the potential for consciousness in future AI systems. In an interview with 60 Minutes, he addressed questions about self-awareness in current models, the possibility of machines developing consciousness, and what might indicate such a development. His remarks touched on the fascinating intersection of artificial intelligence, philosophy, and neuroscience, raising profound questions about the very nature of consciousness and what it means to be sentient.

“I don’t think any of today’s systems, to me, feel self-aware or conscious in any way. Obviously, everyone needs to make their own decisions by interacting with these chatbots. I think theoretically it’s possible,” he said.

“But is self-awareness a goal of yours?” the interviewer asked him. “Not explicitly, but it may happen implicitly,” Hassabis said. “These systems might acquire some feeling of self-awareness. That is possible. I think it’s important for these systems to understand ‘you’, ‘self’ and ‘other’, and that’s probably the beginning of something like self-awareness. But,” he added, “if a machine becomes self-aware, we may not recognize it.”

“I think there are two reasons we regard each other as conscious. One is that you’re exhibiting the behavior of a conscious being very similar to my behavior. But the second thing is that you’re running on the same substrate. We’re made of the same carbon matter, with our squishy brains, whereas machines are running on silicon. So even if they exhibit the same behaviors, and even if they say the same things, it doesn’t necessarily mean that this sensation of consciousness that we have is the same thing they’ll have.”

“Has an AI engine ever asked a question that was unanticipated?” the interviewer asked. “Not so far that I’ve experienced,” Hassabis replied. “And I think that’s getting at the idea of what’s still missing from these systems. They still can’t really ask a new, novel question, pose a new, novel conjecture, or come up with a hypothesis that has not been thought of before. They don’t have curiosity, and they’re probably lacking a little bit in what we would call imagination and intuition. But they will have greater imagination,” he said, “and soon. I think actually in the next maybe five to ten years, we’ll have systems that are capable of not only solving an important problem or conjecture in science, but coming up with it in the first place.”

Hassabis’s perspective highlights the complex challenge of defining and recognizing consciousness, particularly in systems fundamentally different from ourselves. He points to the limitations of today’s AI, such as a lack of curiosity and true imagination, as indicators of its non-conscious state. However, his suggestion that future systems could develop some form of self-awareness, potentially within the next decade, opens a Pandora’s box of ethical and philosophical considerations.

There is a wide variety of views among philosophers, AI researchers, and businesspeople over whether AI systems are sentient. Philosopher David Chalmers says that it can’t be ruled out that current AI systems are conscious, and Joscha Bach says that it’s hard to tell whether an LLM’s simulation of consciousness is less real than ours. While some technologists working in AI have raised an alarm over LLMs becoming conscious, such as Google engineer Blake Lemoine, who said that an early chatbot at Google had “become a person”, most seem to believe that LLMs are just performing complicated mathematics and aren’t really conscious. But with even top scientific minds in AI, including Nobel Prize winner Demis Hassabis, saying that future AI systems could “implicitly” become conscious, AI researchers, and the world at large, should keep a very close eye on how these systems evolve in the coming years.