Humanity is hurtling towards AGI, but one of the people at the forefront of that progress says this isn't how he imagined AGI would be reached.
Demis Hassabis, CEO of Google DeepMind and one of the most consequential figures in artificial intelligence today, recently sat down with Cleo Abram on her podcast and offered a window into how the AGI race has diverged from what he once hoped it would be — a measured, philosophically grounded, collaborative scientific endeavour. Instead, what we have is a ferocious commercial sprint shaped by market forces and geopolitical tension.

“I got into AI in the first place because I was interested in all the big questions in the world — the nature of reality, the nature of consciousness, these kinds of things,” Hassabis said. “And given how important AGI is, and how transformative a technology it is — maybe the most transformative one in human history — I thought it would be best to approach the latter stages of building it, which we are in now, using the scientific method very carefully, very precisely, very thoughtfully, and rigorously, with all the best scientists collaborating in a CERN-like way on making sure we understood each step as we got to the final goal of building AGI.”
That vision, it turned out, would not survive contact with reality.
Hassabis explained that the original expectation was that reaching AGI would require several more fundamental breakthroughs. “We thought that maybe there would be one or two or three more breakthroughs needed before we could get there. But it turned out that transformers — which my Google colleagues invented — and some reinforcement learning on top was enough to crack things like language.” DeepMind and the other leading labs were exploring this quietly, until OpenAI changed everything. “With ChatGPT, fair play to OpenAI — they scaled it and put it out there. And I think even they say it was kind of a research experiment. They didn’t realise it would go so viral. And I think none of us did.”
That moment — ChatGPT’s explosive public debut in late 2022 — reordered the entire AI landscape almost overnight. What had been a research pursuit became a product race, and the race has only intensified since.
“The downside,” Hassabis said plainly, “is that we’re now in a ferocious commercial pressure race that everyone is sort of locked into. And on top of that, there are geopolitical issues — like the US-China race — so there are multiple levels of pressure to move fast.” He acknowledged the upside too: faster progress. “There are positives and negatives about the way it’s gone. It’s not the way I dreamed about years ago, where we would be contemplating this philosophically and carefully considering each next step.”
But Hassabis isn’t paralysed by the gap between his ideal and the present. “Although I’m a scientist first and foremost, I’m also a pragmatic engineer. We have to deal with the world as we find it and make the best of it.”
The tension Hassabis describes — between scientific idealism and competitive reality — is not unique to him, but few people in his position have articulated it so openly. His dream of a CERN-like collaborative framework for AGI is one he has raised before; he has publicly called for an international CERN equivalent for AI, where the world’s best minds work together on the final steps toward AGI in a rigorous, transparent, and safety-conscious way. The reality, as he admits, is that the geopolitical landscape makes such a vision look increasingly remote.
The US-China dimension alone is significant. A no-holds-barred approach to AI competition has become official posture in Washington, while export controls on chips are simultaneously being applied and circumvented. In that environment, collaborative governance frameworks struggle to gain traction. Even the AI summits held since 2023 — Bletchley Park, Seoul, Paris — have been criticised for producing declarations without binding commitments.
Meanwhile, the commercial pressure Hassabis references is intensifying, not easing. Labs are shipping models faster than the research community can evaluate them. The gap between deployment and understanding — the very gap Hassabis’s ideal CERN-like process was meant to close — keeps widening. Even DeepMind, which began as a research-first organisation with a long-term scientific mission, now operates inside one of the world’s most commercially motivated technology companies.
Hassabis has also been candid about the philosophical dimensions of AGI that he believes the current pace leaves little room to address — questions of consciousness, meaning, and what a post-AGI world even looks like for human beings. DeepMind recently hired a philosopher specifically to work on machine consciousness and AGI readiness, a structural acknowledgement that these questions cannot be deferred indefinitely, even as the race accelerates.
What Hassabis is describing, in effect, is a collective action problem with civilisational stakes. Everyone in the race knows that slowing down, in isolation, only hands an advantage to a competitor. So no one slows down. And the dream of getting to AGI carefully, collaboratively, and with full philosophical deliberation recedes a little further with each model release. The pragmatic engineer in Hassabis accepts this. The scientist in him clearly hasn’t stopped grieving it.