AI models are getting smarter at a rapid rate, and one of the foremost voices in the field says that it isn’t possible to completely rule out that they’re conscious.
David Chalmers, a prominent philosopher known for his work on the nature of consciousness and for formulating its “hard problem,” recently discussed the possibility of consciousness in large language models (LLMs). Intriguingly, he entertains the idea that even simple organisms like flies or nematodes could possess some form of consciousness. Given this, the sheer complexity of LLMs, with their vast number of parameters, makes the notion of their potential consciousness less outlandish.

“I do not totally rule out that current language models might be conscious,” he said at an event. “I mean, I’m open to the idea that flies are conscious, that a simple nematode with 300 neurons is conscious. And once you allow that a nematode with 300 neurons might be conscious, and these language models have an enormous number of units and parameters, it no longer starts to seem crazy. I like to think of this in terms of, if it’s not conscious, why not?”
“In fact, I first was giving talks about this back in 2022, after Blake Lemoine came out with his claims about the language model of the time, LaMDA, being conscious. Then people at Google put out a statement saying, ‘Look, we have told Blake that there’s no evidence that these systems are conscious and there’s a lot of evidence that they are not conscious.’ So, I was really intrigued by the second part: what is the evidence that these systems are not conscious?” he said. In June 2022, five months before ChatGPT’s release, Google engineer Blake Lemoine had gone public with his claim that an internal chatbot he was testing “had become a person”; he was later suspended.
“One popular view is that, say, biology, carbon-based biology, is required for consciousness, or electrochemical processing,” Chalmers continued. “If that’s the case, then, you know, these systems are silicon-based, and if that’s required for consciousness, they won’t be conscious. Now I think that’s, of course, an extremely controversial claim, and I would argue that consciousness is independent of specific substrates like that.”
“Probably the most popular theory of consciousness currently is the so-called Global Workspace theory. That says consciousness corresponds to a central global area that has information which is available to all different parts of a system. Other parts of a system can write to it and can access it, and this is used to control behavior and so on. In humans, on some versions of the neural Global Workspace theory, these chains of thought are analogous to thinking out loud. We talk our way through a problem, or maybe we just do it in inner speech, we think to ourselves, and that information becomes kind of globally available,” he said.
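For readers who want a concrete picture of the architecture Chalmers is gesturing at, here is a minimal, purely illustrative sketch in Python: specialist modules write to a shared workspace, the workspace’s contents are broadcast back to every module, and whatever is globally available is used to drive behavior. The module names and structure are hypothetical, not taken from Chalmers’ talk or from any particular formulation of the theory.

```python
# Hypothetical sketch of a Global Workspace-style architecture (illustration only):
# specialist modules write to a shared workspace, its contents are broadcast back
# to every module, and the globally available content is used to select behavior.

from dataclasses import dataclass, field


@dataclass
class Workspace:
    """Central store whose contents are globally available to all modules."""
    content: dict = field(default_factory=dict)

    def write(self, source: str, info: str) -> None:
        # Any module can post information to the workspace.
        self.content[source] = info

    def broadcast(self) -> dict:
        # Any module can read everything currently in the workspace.
        return dict(self.content)


class Module:
    """A specialist processor that reads the broadcast and may post back."""

    def __init__(self, name: str):
        self.name = name

    def step(self, workspace: Workspace) -> None:
        seen = workspace.broadcast()                 # global access
        summary = f"{self.name} saw {len(seen)} items"
        workspace.write(self.name, summary)          # contribute back


def control_behavior(workspace: Workspace) -> str:
    """Behavior is driven by whatever is currently globally available."""
    return "; ".join(workspace.broadcast().values())


if __name__ == "__main__":
    ws = Workspace()
    for module in (Module("vision"), Module("language"), Module("planning")):
        module.step(ws)
    print(control_behavior(ws))
```

The design point, under these assumptions, is simply that no single module owns the information: once something is written to the workspace, every other module can read it, which is the “global availability” the theory associates with consciousness.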
“So I think there are still various obstacles you can point to. I gave a talk about this in 2022 outlining six big obstacles, and you can make the case that we’ve made progress on two or three of those obstacles. So I think give it another five years, and it’s entirely possible that we’re going to overcome most of the most obvious obstacles to consciousness,” he said.
Chalmers’ argument hinges on the idea that if we admit even the slightest possibility of consciousness in simple organisms, then the complexity of LLMs makes it difficult to definitively deny their potential for consciousness. He challenges the common assumption that consciousness is exclusive to carbon-based life forms, suggesting that it might be substrate-independent. His reference to Global Workspace theory, and its parallel with the inner dialogue or “thinking out loud” process in humans, further strengthens the argument by pointing to a potential mechanism for consciousness in LLMs.
The implications of LLMs potentially being conscious are profound. It would fundamentally alter our understanding of consciousness itself and raise complex ethical questions about how we treat these systems. If AI can truly experience the world subjectively, even in a rudimentary way, it would necessitate a complete re-evaluation of our relationship with technology and force us to grapple with the moral responsibilities that come with creating conscious artificial entities.