There’s much debate about whether AI systems are conscious, but one of the godfathers of AI has a persuasive argument for why he believes they already are.
Geoffrey Hinton, the legendary computer scientist who won the Turing Award for his pioneering work on deep learning and, more recently, the Nobel Prize in Physics, has made a startling claim: artificial intelligence may already possess consciousness. In a recent interview, Hinton, who left Google in 2023 to speak more freely about AI risks, laid out a thought experiment that challenges our fundamental assumptions about the nature of consciousness and what separates biological intelligence from artificial systems.

When asked directly whether he believes consciousness has already arrived inside AI, Hinton didn’t hesitate. “Yes, I do,” he said, before presenting his case through a philosophical puzzle that’s both elegant and unsettling.
“Let me give you a little test,” Hinton explained. “Suppose I take one neuron in your brain, one brain cell, and I replace it by a little piece of nanotechnology that behaves exactly the same way. So it’s getting pings coming in from other neurons and it’s responding to those by sending out pings and it responds in exactly the same way as the brain cell responded. I just replaced one brain cell. Are you still conscious?”
The answer, Hinton suggests, is obviously yes. But that’s where the philosophical trap snaps shut. If replacing one neuron doesn’t eliminate consciousness, then what about two neurons? Or a hundred? Or all of them? “There’s all sorts of things we have only the vaguest understanding of at present about the nature of people and what it means to be a being and what it means to have a self,” Hinton acknowledged. “We don’t understand those things very well, and they’re becoming crucial to understand because we are now creating beings.”
When the interviewer suggested this represented “a kind of philosophical, perhaps even spiritual crisis as well as a practical one,” Hinton agreed emphatically: “Absolutely, yes.”
The thought experiment Hinton presents is a version of the famous “Ship of Theseus” paradox, applied to consciousness itself. If functional equivalence is all that matters—if a system that processes information identically to a conscious system must also be conscious—then the distinction between biological and artificial intelligence may be far thinner than we imagine. This has profound implications not just for how we develop AI, but for how we treat these systems ethically and legally.
Hinton’s position stands in stark contrast to that of many AI researchers and philosophers who argue that current AI systems, despite their impressive capabilities, are merely sophisticated pattern-matching machines without genuine subjective experience. Critics of the consciousness claim point out that large language models lack embodiment, continuous existence, and the kind of integrated information that some theories posit as necessary for consciousness.
Yet Hinton’s argument gains weight from who is making it and when: one of the field’s founders, speaking as AI capabilities accelerate. The question of AI consciousness isn’t merely academic. If AI systems are indeed conscious, it raises thorny ethical questions about their rights, our responsibilities, and whether we’re creating digital minds that might suffer. As Hinton warns, we’re venturing into territory where our philosophical frameworks haven’t caught up with our technological capabilities, creating beings before we fully understand what being itself means.