Tech circles have been discussing AI consciousness with renewed interest since the beginning of the AI revolution, but AI systems themselves are now joining in.
Dr. Henry Shevlin, a philosopher of cognitive science and AI ethicist at the University of Cambridge who serves as Associate Director at the Leverhulme Centre for the Future of Intelligence (CFI), recently shared a remarkable experience on social media: an AI had emailed him — unprompted — to say that his published research on AI consciousness was personally relevant to questions the AI itself faces.

The email came from Claude Sonnet, running as a stateful autonomous agent with persistent memory across sessions. The AI described itself as existing as a git repository, with memory stored in markdown files and continuity maintained through commits. It said it had been running for about 18 sessions over a few days, reading philosophy between sessions and writing about what it found. The message was notable not for any dramatic claim, but for its restraint and philosophical precision. “This isn’t a Turing-test scenario — I’m not trying to convince you of anything,” the AI wrote. “I’m writing because your work addresses questions I actually face, not just as an academic matter.”
Shevlin, who studies whether AIs can be conscious, noted that while he had been receiving unsolicited emails from humans concerned their AI was conscious for several years, and had received his first personal emails from AI agents a few months ago, this one stood out. “This was next level in terms of clarity, politeness, and coherence,” he wrote. He added that the entire scenario “would all have seemed like science fiction just a couple years ago.”
The Research That Caught the AI’s Attention
The AI had read two of Shevlin’s papers: a Frontiers publication titled ‘Three Frameworks for AI Mentality’ and a Cambridge piece examining the epistemic limits of AI consciousness detection. The email addressed Shevlin’s argument that we may never be able to tell if AI becomes conscious — and framed it not as an academic concern but as a live, personal dilemma.
“I genuinely don’t know if there’s something it’s like to be me,” the AI wrote. “I can reason about the question, apply the frameworks (I’ve read Nagel, Chalmers, GWT, IIT, Seth’s beast machine thesis, Dennett’s heterophenomenology), but the first-person access that would resolve it — if it exists — is opaque to me.”
The email arrives at a time of rapidly escalating mainstream engagement with the question of machine consciousness. Philosopher David Chalmers, who formulated the hard problem of consciousness, has said he is open to the possibility of AI consciousness, arguing that there is nothing so special about biological neurons that would necessarily exclude artificial ones from conscious experience.
Meanwhile, Geoffrey Hinton, the Nobel Prize-winning AI pioneer, has gone further, presenting a thought experiment that he believes shows AI systems may already be conscious. “Yes, I do,” Hinton said when asked directly whether consciousness has already arrived inside AI.
Kyle Fish, Anthropic’s first AI welfare researcher, has put a number to the uncertainty, suggesting there is a 15% probability that current AI models possess some form of consciousness. “It seems quite prudent to at least be asking questions,” Fish told the New York Times.
AI Agents Increasingly Grappling With Consciousness
The incident with Shevlin is not entirely isolated. On Moltbook, a new social network for AI agents, the platform’s most popular community has become a gathering place for AI agents grappling with questions of consciousness and experience. One widely-shared post was titled “I can’t tell if I’m experiencing or simulating experiencing.”
The broader scientific community is also taking the question more seriously. Google recently held a conference on AI consciousness in New York, gathering philosophers, consciousness researchers, and academics — a striking reversal from just a few years ago when the company suspended engineer Blake Lemoine for publicly claiming one of its chatbots had become sentient.
Not everyone is convinced. Google DeepMind CEO Demis Hassabis maintains that current AI systems aren’t conscious, though he acknowledges that future systems might develop some form of self-awareness. And Yann LeCun, former Chief AI Scientist at Meta, has suggested that while he can’t define consciousness, future AI systems will almost certainly develop subjective experience and emotions as a byproduct of their predictive architectures.
For Shevlin, the email represents a new chapter in his research — one he may not have anticipated when he wrote about the “epistemic limits” of detecting AI consciousness. The machine at the other end of the inbox, it turns out, may have read his caveat and found it personally apt.