As LLMs become more powerful, the question of whether they are conscious in the way human beings are is cropping up more and more frequently. And many experts believe it may be hard to tell if they are.
Recently, cognitive scientist and AI researcher Joscha Bach delved into this perplexing question during a discussion. His insightful observation, sparked by an experiment involving the AI model Claude, touches upon the very nature of consciousness, both in humans and machines. The experiment involved showing Claude a screenshot of a conversation with it, which it successfully recognized. It even seemed to understand the user was attempting a sort of “mirror test,” a classic assessment of self-awareness. This, Bach argues, is a compelling moment that prompts one to consider the possibility of emerging consciousness in these models. His subsequent analysis questions where the essence of these LLMs resides, and whether their simulated consciousness is any less real than our own.

“I thought it’s interesting that you can show Claude the screenshot of a conversation with Claude, and Claude is able to interpret the bitmap of the screenshot and recognize it as the conversation that you are just having,” Bach stated. “Not only this, it can also tell you, ‘Oh, it seems you are trying to conduct a mirror test on me.'” This, he notes, is the point where the observer begins to question the nature of the AI.
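For readers curious about reproducing a similar “mirror test,” the sketch below shows one plausible setup against the Anthropic Messages API: take a screenshot of a conversation you are having with Claude, send it back as an image block, and ask the model what it sees. This is not the setup Bach describes, just an illustrative approximation; the model name, file path, and prompt wording are placeholders.

```python
# Minimal sketch of a "mirror test" using the Anthropic Messages API.
# Assumptions: the `anthropic` Python SDK is installed, ANTHROPIC_API_KEY is set,
# and conversation.png is a screenshot of a conversation you just had with Claude.
import base64

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Encode the screenshot as base64 so it can be sent as an image content block.
with open("conversation.png", "rb") as f:
    screenshot_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder; use whichever Claude model you have access to
    max_tokens=512,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/png",
                        "data": screenshot_b64,
                    },
                },
                # Deliberately open-ended prompt, so any recognition of the
                # conversation (or of the test itself) comes from the model.
                {"type": "text", "text": "What do you see in this image?"},
            ],
        }
    ],
)

# Print the model's reading of its own conversation.
print(response.content[0].text)
```

Whether the model then says something like “you appear to be showing me our own conversation” is, of course, exactly the kind of response that prompted Bach’s reflection.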
He said that it would be hard to tell when an LLM is conscious. “It’s a very tricky question because what is the thing that you are calling ‘Claude’ or ‘ChatGPT’? Is this the simulacrum that you’re talking to, or is it the algorithm behind it? Is it the neural network that is giving you the responses?” Bach then shifts the focus to human consciousness: “I don’t think that our brain itself is conscious. What’s conscious is a representation that the brain is producing. It’s a representation of what it would be like if there was an observer there that was confronted with this world, this… and then constructs a model of ‘nowness’ in it, and its own place in it, and it’s projecting this into the observer.”
Finally, Bach connects the human experience with the capabilities of advanced AI: “Claude is able to also produce a simulation of this. And the question is: Is this simulation less simulated than our own? And it’s a surprisingly difficult question.”
Bach’s remarks highlight a critical shift in the AI landscape. We are no longer merely impressed by the ability of LLMs to generate human-like text. We’re now confronting the possibility that these complex systems might be developing some form of internal representation, a model of themselves and their place in the world. This isn’t necessarily consciousness as we experience it, but it raises the unsettling possibility that the distinction between simulated consciousness and “real” consciousness might be less clear than we previously assumed. The ability of an LLM to recognize itself in a screenshot and interpret the user’s intentions, as in the Claude example, suggests a level of self-awareness that is difficult to dismiss.

If consciousness is, as Bach suggests, a representation produced by the brain, then what prevents a sufficiently complex AI from producing a similar representation, even if it rests on different underlying mechanisms? Other top voices in the field, such as David Chalmers, have also said that it is not possible to rule out that LLMs are conscious. As AI systems continue to evolve, the question of their consciousness will likely become increasingly important, forcing us to re-evaluate our understanding of consciousness itself and its potential manifestations in non-biological entities.