There is plenty of speculation about whether current AI systems are conscious, or how they could be made conscious, but humans seem to have a blind spot when it comes to which AI systems they feel could be endowed with consciousness.
Anil Seth, a British neuroscientist and professor of Cognitive and Computational Neuroscience at the University of Sussex, has made an interesting observation about the public perception of consciousness in AI. He contrasts the rampant speculation about sentience in large language models (LLMs) such as ChatGPT with the relative lack of such discussion around AlphaFold, DeepMind’s revolutionary protein-folding AI. His central point, delivered after a recent visit to DeepMind in London, highlights a curious inconsistency in our intuitions about AI consciousness.
“There’s this whole debate about whether ChatGPT or language models might be conscious,” Seth said on a podcast. “People ask the question, right? And some people think that they are. But just yesterday, I was lucky enough to visit DeepMind in London, and that’s where AlphaFold was developed, which is another AI system that’s able to predict protein structure from sequence – amino acid sequences.” He continued, outlining AlphaFold’s achievement in solving the long-standing protein folding problem: “The protein folding problem – that’s been one of the biggest challenges in biology for a long time. AlphaFold basically sorts that out.” Then he posed his central question:
“Now, isn’t it interesting that no one thinks that AlphaFold is conscious? I’ve not heard anybody suggest to me that AlphaFold might have experience.” He acknowledged some technical differences, but emphasized the fundamental similarities: “Yet, you know, there’s some differences, but they’re very, very similar – the architectures, I mean, they’re both computer algorithms, they’re neural network-based with some other stuff. They even have Transformer architectures. They’re really not that different.”
Seth concluded by highlighting the apparent paradox: “Yet our intuitions about the two are so different. I think that really, to me, highlights how much of what we think is driven by our psychology and bias.”
Seth’s observation raises important questions about what triggers our assumptions about consciousness. While both ChatGPT and AlphaFold are complex AI systems utilizing similar underlying architectures (including Transformer networks), ChatGPT’s ability to generate human-like text seems to tap into our inherent biases about intelligence and sentience. We are primed to see consciousness in systems that communicate in ways we understand, mirroring our own cognitive processes. AlphaFold, on the other hand, deals with a domain – protein folding – that feels far removed from human experience, despite its profound scientific importance. This might explain why we are less inclined to attribute consciousness to it. It is as if human language is the key that unlocks our willingness to project sentience.
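To make the architectural point concrete, here is a minimal, illustrative sketch – not the actual code of ChatGPT or AlphaFold, and with hypothetical vocabulary sizes and output dimensions – showing that the same Transformer encoder backbone can process a sequence of word tokens or a sequence of amino-acid residues. Only the input vocabulary and the output head change; nothing about the attention machinery itself cares whether the tokens are words or residues.

```python
import torch
import torch.nn as nn

# Illustrative only: neither system's real implementation. The point is that
# an attention-based backbone is domain-agnostic; only the token vocabulary
# and the prediction head differ between a language model and a protein model.

class TinyTransformer(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 64,
                 n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, sequence_length) integer IDs
        return self.encoder(self.embed(token_ids))

# A "language" configuration: tokens are word pieces, head predicts the next token.
text_backbone = TinyTransformer(vocab_size=50_000)   # hypothetical word-piece vocabulary
next_token_head = nn.Linear(64, 50_000)

# A "protein" configuration: tokens are the 20 standard amino acids, head predicts
# some per-residue structural quantity (hypothetical 4-dimensional output).
protein_backbone = TinyTransformer(vocab_size=20)
structure_head = nn.Linear(64, 4)

sentence = torch.randint(0, 50_000, (1, 12))   # a 12-token sentence
peptide = torch.randint(0, 20, (1, 30))        # a 30-residue peptide

text_logits = next_token_head(text_backbone(sentence))          # shape (1, 12, 50000)
residue_features = structure_head(protein_backbone(peptide))    # shape (1, 30, 4)
```

The two models differ only in what their tokens mean and what they predict, which is precisely Seth’s point: the architectural gap between the systems is far smaller than the gap in our intuitions about them.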
Ultimately, Seth’s anecdote serves as a cautionary tale. It reminds us that our intuitions about consciousness are not reliable guides, often clouded by anthropomorphic biases. As AI systems continue to evolve, becoming increasingly sophisticated across diverse domains, we must cultivate a more nuanced and critical perspective, avoiding the trap of projecting consciousness based solely on superficial resemblance to human behavior. The real challenge lies in developing robust frameworks for understanding and assessing consciousness in AI, moving beyond our instinctive reactions towards a more scientifically grounded approach.