Understanding How AI Systems Work Will Be A Whole New Branch Of Science: Google DeepMind CEO Demis Hassabis

AI systems will not only accelerate progress in science; they could also give rise to a whole new branch of science.

Demis Hassabis, CEO of Google DeepMind and Nobel laureate, has long argued that AI’s most profound impact won’t just be on existing fields — it could give birth to entirely new ones. In a wide-ranging conversation, Hassabis laid out two distinct but equally ambitious ideas: that understanding AI systems will itself become a rigorous scientific discipline, and that AI-powered simulations could finally bring the rigour of hard science to fields like economics that have long resisted it.


“This thing is going to happen,” Hassabis said. “The understanding and analysis of AI systems themselves, I think, is going to become a whole science — a kind of engineering science. These are incredibly interesting artifacts that we are building, and they’re incredibly complex as well. Eventually, they’ll be as complex as the human mind and the brain. And so they’ll need to be studied, so we can understand — fully, way beyond where we are today — how these systems work.”

He pointed to mechanistic interpretability as one early example of this emerging field, while making clear that it is only the beginning. “Mechanistic interpretability is part of that, but there’s a lot more, I think, that we can do to analyze these systems. So that would be a science.”


But Hassabis went further, arguing that AI won’t just be the subject of new science — it will be a tool for unlocking it.

“I think AI itself will maybe unlock new sciences. The one I’m particularly excited about is AI for simulations. I love simulations. All the games I wrote not only had AI, but they were simulations. And I think simulations are the way we can address some of what we maybe think of as social sciences — like economics, and other more humanistic subjects.”

Why haven’t economics or sociology achieved the predictive rigour of physics? For Hassabis, the answer is structural. “It’s very difficult to do control studies in that domain. Why aren’t they just sciences like physics today? Because the problem is they’re emergent systems — just like biology, actually — and it’s very hard to do repeated controlled experiments. If you raise interest rates by half a percent, you have to do it in the real world and then see what happens. You can have theories, but you can’t run it thousands of times.”

AI simulation, he argues, changes that calculus entirely. “But if you could simulate things really accurately, then maybe there’s a sort of new science to be done, where you can rigorously sample from a very accurate simulator. And then I think that will allow us to make much better decisions in what are today very uncertain domains.”
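The "run it thousands of times" idea can be made concrete with a toy Monte Carlo sketch. Everything below is illustrative and hypothetical (the inflation dynamics, the coefficients, the noise model are invented for the example, not an economic model and not anything Hassabis or DeepMind has published): the point is only that a simulator lets you sample the same intervention many times, which the real economy forbids.

```python
import random
import statistics

def simulate_economy(rate_hike, n_steps=40, seed=None):
    """Toy economy: inflation responds noisily to an interest-rate change.
    Every coefficient here is made up purely for illustration."""
    rng = random.Random(seed)
    inflation = 5.0  # starting inflation, percent
    for _ in range(n_steps):
        # a rate hike damps inflation a little; random shocks add noise
        inflation += -0.05 * rate_hike * inflation + rng.gauss(0, 0.2)
    return inflation

def sample_outcomes(rate_hike, n_runs=5000):
    """The 'rigorously sample from a simulator' step: thousands of
    controlled repetitions of the same intervention."""
    return [simulate_economy(rate_hike, seed=i) for i in range(n_runs)]

baseline = sample_outcomes(rate_hike=0.0)
hiked = sample_outcomes(rate_hike=0.5)  # the half-percent hike from the quote
print(f"mean inflation, no hike: {statistics.mean(baseline):.2f}%")
print(f"mean inflation, +0.5pt:  {statistics.mean(hiked):.2f}%")
```

In the real world you get exactly one draw from each policy; in the simulator you get the full outcome distribution for both, which is what makes the comparison statistically meaningful. Whether a simulator of a real economy can be made "really accurate" is, of course, the open question.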


The implications are significant. On the interpretability front, AI systems are already approaching a level of complexity that makes their inner workings deeply opaque — not just to the public, but to the researchers building them. The idea that studying these systems could constitute its own scientific discipline is not a stretch: it’s arguably already happening. Anthropic, for instance, has made mechanistic interpretability a core research priority, attempting to reverse-engineer what neural networks actually learn and how they represent knowledge internally.

On the simulation side, Hassabis’s vision connects to a broader trend. His work at DeepMind has already demonstrated what AI can do for biology — AlphaFold cracked a 50-year-old protein-folding problem and earned him a Nobel Prize in Chemistry. His argument is essentially that the same logic — using AI to run simulations at digital speed — can be extended to social and economic systems, where controlled experimentation at scale has always been impossible. A startup named Simile is already aiming to build exactly the sorts of simulations Hassabis describes.

If Hassabis is right, the next decade may see not just AI transforming existing scientific disciplines, but an entirely new scientific vocabulary emerging around it — one that treats AI systems as complex objects of study in their own right, and simulation as a legitimate method of scientific inquiry. The infrastructure for both is already being built.
