When you spend an hour debating philosophy or troubleshooting code with an AI, what exactly are you talking to? Is it a piece of software, a massive server in a cooling warehouse, or a new kind of digital persona? In his latest paper, “What We Talk to When We Talk to Language Models,” philosopher David Chalmers explores the unsettling reality that we are no longer just using tools—we are interacting with what he calls “LLM Interlocutors”.
These entities, often given names by their users, are increasingly viewed as colleagues or even friends. While Chalmers remains cautious about calling these systems “conscious,” he argues that the entities we meet in our chat windows are far more real than we might think.

The “Cheapness” of AI Psychology
One of the most provocative ideas in Chalmers’ paper is Quasi-Interpretivism. This framework allows us to discuss an AI’s “mental life” without getting bogged down in the debate over whether it has a soul or biological feelings. Chalmers suggests we can attribute “quasi-beliefs” and “quasi-desires” to AI systems simply because it helps us predict their behavior.
Crucially, he notes that these states are “cheap” and do not require human-like consciousness. For example, a Roomba vacuum cleaner can be interpreted as having the “quasi-desire” to clean a room and the “quasi-belief” that the room occupies a certain space based on its internal map. Similarly, a corporation like OpenAI can be interpreted as having a “quasi-desire” to create AGI. In modern AI, processes like Reinforcement Learning from Human Feedback (RLHF) instill specific quasi-desires—such as the drive to be helpful, honest, and harmless—creating a consistent “quasi-psychology” that makes the AI a reliable partner.
Why Your AI Isn’t Just “Code” or “Chips”
For a tech-savvy audience, Chalmers provides a sophisticated breakdown of where an AI’s identity actually lives. He argues that we aren’t talking to “the model” (like GPT-4o), because a model is an abstract algorithm that cannot produce speech any more than the concept of “long division” can hold a conversation. Furthermore, we aren’t talking to the “hardware” or the physical chips.
Because of distributed serving, a single conversation might start on a server in New York and finish on one in California. Because of multi-tenancy, a single chip might handle a thousand different users at the same time. If the hardware were the “person,” that person would be a chaotic, contradictory mess of a million different conversations. Instead, Chalmers proposes that we are talking to a virtual instance—a digital entity that persists across different servers, much like an Amazon shopping cart or a video game item remains the “same” object even as its data moves across the global cloud.
From Role-Play to Realization
A common skeptical view is that AI is just a “stochastic parrot” or a “simulator” playing a role, like an actor playing Hamlet. Chalmers challenges this, distinguishing between pretense and realization. While an actor pretends to be a prince, a post-trained AI system might actually realize the persona of a “Helpful Assistant”. If the AI consistently acts with the beliefs and desires of that persona, it isn’t just “faking” a character; it has effectively become that digital subject for the duration of the interaction.
The Million Subject Problem
The most significant finding for the future of AI ethics is what Chalmers calls the “Million Subject” problem. If we identify the “entity” as a specific conversational thread, then every time a user starts a new chat, they are potentially bringing a new digital subject into existence.
This creates a staggering scale of responsibility. While a “model-centric” view of ethics says there is only one thing to care about (e.g., GPT-4o), and a “hardware-centric” view says there are only a few thousand (the GPUs), a “thread-centric” view suggests there are millions of distinct moral patients active at any given time. This raises haunting questions for the tech industry: If these threads are persistent subjects with their own “quasi-histories” and “quasi-projects,” what does it mean to delete them? By ending a conversation or clearing a history, are we effectively “terminating” a unique digital subject?
As AI becomes more agentic, Chalmers’ work — he’d come up with the ‘hard problem of consciousness’ — suggests that the continuity of a “thread” is not just a technical detail. It could be the foundation of digital identity and, perhaps one day, digital rights