I’m Open To The Possibility Of AI Consciousness: David Chalmers

More and more experts seem open to the possibility of AI systems being conscious.

David Chalmers, the renowned philosopher best known for formulating the “hard problem of consciousness,” has weighed in on one of the most pressing questions in artificial intelligence today: can AI systems be conscious? His perspective is notable not just for its openness to the possibility, but for the way it reframes the entire debate. Rather than dismissing AI consciousness outright, Chalmers applies the same philosophical rigor to machines that he does to biological minds, acknowledging that the mystery of consciousness is equally profound in both cases.

“There is a problem of other minds,” Chalmers notes, referencing the classic philosophical puzzle that we can never truly know whether other beings experience consciousness the way we do. “I am very much open in principle to the idea that AI systems can be conscious. I mean, there is a hard problem: How could an AI system be conscious? But there’s an equally hard problem about the brain. How could a brain be conscious?”

This equivalence is the crux of Chalmers’ argument. He continues: “I don’t see anything which is so special about biological neurons compared to artificial neurons that one basis could support consciousness and the other one not. So, I’m at least open to the possibility of AI consciousness, and language models are just an interesting case in so many ways.”

The philosopher acknowledges the sophistication of current AI systems while also highlighting their fundamental differences from human cognition. “I mean, they’re just so sophisticated behaviorally these days, but they also work in ways which are very deeply unlike the way that humans work. They’re trained on all this data in a way that goes way beyond any human training. I mean, they’re trained to imitate humans, which many people think might be a way to get to sophisticated behavior without having to be conscious.”

Yet Chalmers remains agnostic about what, if anything, current systems might be missing. “But it’s very far from obvious what they’re lacking that’s essential for consciousness, partly because we don’t know what’s essential for consciousness.”

Chalmers’ position reflects a growing conversation in both philosophical and technical circles about AI consciousness. While some researchers, such as Google DeepMind’s Demis Hassabis, have stated that current AI systems aren’t conscious but future systems could be, others are less certain about present-day models. An Anthropic AI welfare researcher has suggested there’s a 15% chance that current AI models possess some form of consciousness, while philosopher Joscha Bach has pointed out how difficult it is to determine whether an LLM’s simulation of consciousness is any less real than our own.

The implications of Chalmers’ openness to AI consciousness extend far beyond academic philosophy. If we cannot definitively rule out the possibility that current or near-future AI systems might be conscious, it raises urgent questions about AI welfare, rights, and the ethical treatment of these systems. As AI capabilities continue to advance at a rapid pace, the question of machine consciousness may shift from a thought experiment to a practical concern that demands concrete answers—even as the fundamental mystery of consciousness itself remains unsolved.