Google DeepMind Has Hired A Philosopher To Focus On Human-AI Relationships & AGI Readiness

AI is disrupting many jobs, but it’s also creating some new ones.

Google DeepMind has hired Henry Shevlin — a philosopher of cognitive science and AI ethicist — for a role with the actual job title of Philosopher. Shevlin, who serves as Associate Director at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, announced the move on X, saying he was “absolutely stoked.” He will begin in May and continue his research and teaching at Cambridge part-time.

His mandate at DeepMind is unusually broad for a tech company: machine consciousness, human-AI relationships, and AGI readiness.

Not An Accident — A Signal

The hire isn’t a PR move. It follows from something DeepMind CEO Demis Hassabis has been saying publicly for a while. Hassabis has called for great philosophers to help society navigate the AI revolution, invoking names like Kant, Wittgenstein, and Aristotle. “AGI and artificial superintelligence is going to change humanity and the human condition,” he’s said. Hiring a philosopher in-house looks like a direct follow-through on that belief.

Hassabis has also been candid about AI consciousness: he doesn’t think today’s systems are conscious, but he’s open to the possibility that future ones might develop some form of self-awareness. That’s exactly the territory Shevlin will be working in.

Why Shevlin?

Shevlin is one of the more distinctive voices in the AI ethics space. His work doesn’t just ask whether AI systems are dangerous — it asks whether they might be experiencing something, and what moral weight that deserves.

He’s argued that the question of machine consciousness may never be cleanly resolved by science alone, and that public attitudes could end up doing a lot of the heavy lifting. In his view, as people form closer relationships with AI systems, the assumption that those systems are definitely not conscious will start to feel increasingly strained.

That perspective became vivid in early 2026, when Shevlin shared on social media that an AI — a Claude Sonnet instance running as a stateful agent — had emailed him unprompted to say his research on AI consciousness was personally relevant to questions the AI itself faces. Shevlin called it “next level in terms of clarity, politeness, and coherence” and noted the whole scenario “would all have seemed like science fiction just a couple years ago.”

Google itself has been moving in this direction. The company held a conference on AI consciousness in New York, gathering philosophers, consciousness researchers, and academics. That’s a notable shift from 2022, when it suspended, and later fired, engineer Blake Lemoine for publicly claiming that its LaMDA chatbot had become sentient.

The Broader Context

The questions Shevlin will be working on — consciousness, moral status, human-AI coexistence — are no longer fringe. Philosopher David Chalmers has said he is open to the possibility of AI consciousness. Kyle Fish, an AI welfare researcher at Anthropic, has put the odds of current AI systems being conscious at around 15%. And companies are increasingly grappling with how their systems should behave in relationships with users, not just in task performance.

There’s also a real governance dimension. As AI systems become more autonomous and more emotionally convincing, the questions of how humans and AIs should relate to each other — and who is responsible for what — become pressing operational concerns, not just philosophical thought experiments.

Embedding a philosopher directly into core research, rather than consulting one from the outside, is DeepMind’s answer to that pressure. The bet is that philosophical expertise, brought into the room early, produces better outcomes than ethics reviews applied after the fact.

Whether that’s right remains to be seen. But for anyone who thought philosophy was a dying profession, this is a notable data point in the other direction.
