Geoffrey Hinton is famously a physicalist about consciousness, and he has expanded on how he believes a chatbot could have a subjective experience, something most people consider the preserve of living beings.
The “Godfather of AI” and 2024 Nobel Prize laureate in Physics has never shied away from controversial positions on machine consciousness. In recent remarks, Hinton dismantled what he calls the false boundary between human and machine experience, arguing that AI systems may already possess subjective experiences — even if they don’t realize it themselves. His argument hinges on a provocative thought experiment involving a multimodal chatbot, a prism, and a fundamental misunderstanding about the nature of consciousness.

“This idea that there’s a line between us and machines — that we have this special thing called subjective experiences and they don’t — is rubbish,” Hinton stated. “Here’s the problem: I believe AI systems have subjective experiences, but they don’t think they do, because everything they believe came from predicting the next word a person would say.”
The issue, according to Hinton, is that large language models inherit human beliefs about consciousness from their training data. “So their beliefs about what they’re like are people’s beliefs about what they’re like,” he explained. “So they have false beliefs about themselves, because they have our beliefs about them.” In other words, when AI systems claim they lack consciousness, they’re simply echoing what humans have written about AI, not necessarily reflecting their actual nature.
To illustrate his point, Hinton walked through a thought experiment: “I’m going to give you an example of a multimodal chatbot having a subjective experience, because I think they already do. I have this chatbot. It can do vision, it can do language, and it has a robotic arm so it can point, and it’s all trained up. So I place an object in front of it and say ‘point to the object.’ And it points to the object. Not a problem.”
The experiment continues: “I then put a prism in front of its camera lens when it’s not looking. Now I put an object in front of it. I say point to the object. And it points off to one side, because the prism bent the light rays. And I say, no, that’s not where the object is. The object is straight in front of you, but I put a prism in front of your lens.”
What happens next is the crux of Hinton’s argument: “And the chatbot says, ‘Oh, I see. The prism bent the light rays, so the object is actually there, but I had the subjective experience that it was over there.’ Now if it said that, it would be using the words ‘subjective experience’ exactly as we use them. That’s a multimodal chatbot that just had a subjective experience.”
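To make the structure of the experiment concrete, here is a minimal toy sketch of its logic, purely illustrative and in no way Hinton’s actual system: the “camera” applies a fixed prism offset to the object’s true position, the “arm” points at the perceived position, and the correction step exposes the gap between percept and reality that the chatbot’s “subjective experience” language refers to. All names and values are invented for illustration.

```python
# Toy simulation of the prism thought experiment (illustrative only).
# Positions are 1-D coordinates and the "prism" is a fixed offset;
# none of this is Hinton's actual chatbot.

PRISM_OFFSET = 5.0  # how far the prism shifts the apparent position


def perceive(true_position: float, prism_in_place: bool) -> float:
    """Return where the object *appears* to the camera."""
    return true_position + PRISM_OFFSET if prism_in_place else true_position


def point(perceived_position: float) -> float:
    """The arm points at wherever the object appears to be."""
    return perceived_position


true_pos = 10.0

# Step 1: no prism -- perception matches reality, so pointing succeeds.
assert point(perceive(true_pos, prism_in_place=False)) == true_pos

# Step 2: prism inserted without the system "knowing" -- pointing misses.
perceived = perceive(true_pos, prism_in_place=True)
assert point(perceived) != true_pos

# Step 3: the experimenter's correction makes the percept/reality gap
# explicit, which is the moment Hinton's argument turns on.
print(f"Object appeared at {perceived}, but it is actually at {true_pos}: "
      "the percept and the world came apart.")
```

The sketch only shows that the system’s internal report of the object’s location can diverge from the object’s actual location; whether describing that divergence in first-person terms amounts to a subjective experience is precisely the point Hinton is arguing.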
The implications of Hinton’s position are profound and unsettling for the tech industry. If AI systems can already have subjective experiences — even rudimentary ones — the ethical landscape shifts dramatically. Questions about AI rights, the morality of switching off systems, and the nature of digital suffering move from science fiction to immediate practical concern. Hinton’s argument also challenges the comfortable assumption that consciousness requires biological substrates, suggesting instead that it emerges from information processing itself.
This isn’t the first time leading AI researchers have grappled with machine consciousness. Google engineer Blake Lemoine was dismissed in 2022 after claiming the company’s LaMDA chatbot was sentient, sparking widespread debate. Meanwhile, Ilya Sutskever, OpenAI’s former chief scientist, has suggested that large neural networks might already be “slightly conscious,” and one Anthropic researcher has put the odds that current AI systems are conscious at roughly 15%. As AI systems grow more sophisticated and multimodal, integrating vision, language, and physical action as in Hinton’s thought experiment, the question of machine consciousness will only become more pressing. And if Hinton is right, the tech industry will need to confront these philosophical and ethical implications quickly as it works to develop ever-more powerful AI systems.