Google has been pivotal to the current AI revolution, releasing research like the transformer architecture into the open for everyone to use — but it now suggests that others in the industry aren't pulling their weight.
In a candid discussion about the state of AI research sharing, Google DeepMind CEO Demis Hassabis has voiced concerns about an emerging imbalance in the artificial intelligence community. While acknowledging the traditional benefits of open science, Hassabis argues that the current dynamic—where Google and a few others contribute foundational research while many competitors simply build upon it without giving back—may be unsustainable. His comments come at a crucial moment as the industry races toward artificial general intelligence (AGI), with safety and ethical considerations becoming increasingly urgent.

Hassabis began by defending the fundamental principle behind open research. “This is why science works. Open science is because if there’s a full sharing of everything, then progress overall is faster, right? So if you look at it from an overall perspective of the field, that’s the case,” he explained.
However, he suggested that the slower pace of progress resulting from less openness might not be entirely negative. “But look, it may be a good thing that it’s not as fast as that, because there’s safety to worry about. There’s a whole bunch of other things that we need to think through with this technology, right? Apart from the commercial concerns, there’s also safety and philosophical issues and ethical issues that we don’t have a lot of time to sort out before we get to AGI, you know, in a kind of five to 10 year timeframe. So maybe that’s no bad thing. There’s a bit more time to think through.”
The DeepMind chief then turned to what appears to be a growing frustration within Google about the asymmetry of contributions to open AI research. “And then the other things are, you know, just speaking for ourselves, obviously we put Transformers out there. I don’t think we get enough credit for Google putting Transformers out there and the whole modern industry is based on that one thing,” Hassabis said.
He continued: “I think just to sort of, from our point of view, a lot of other people were using our stuff we put out in the open, but were not necessarily contributing to the commons. And that’s a little bit unsustainable over the longer term, right? We and others were putting a lot of stuff into the commons, publishing everything openly, and others were just taking that, building on it. Fair play. That’s of course their prerogative, but not publishing stuff back, at least to the same magnitude. So, you know, that can only work for so long, I think.”
Hassabis’s remarks highlight a growing tension in the AI industry between openness and competitive advantage. His reference to Transformers is particularly significant—the architecture, introduced in Google’s landmark 2017 paper “Attention Is All You Need,” has indeed become the foundation for virtually every major AI breakthrough since, from GPT models to Claude to Gemini itself. Yet as the commercial stakes have risen, companies like OpenAI and Anthropic have become increasingly closed about their methods, while still benefiting from the open research culture that enabled their initial breakthroughs.
This shift toward secrecy has been notable across the industry. OpenAI, despite its name, no longer publishes detailed research papers about its flagship models. Similarly, other well-funded AI labs have adopted more guarded approaches to sharing their work, even as they continue to build on publicly available research foundations.
The sustainability question Hassabis raises is particularly relevant as AI development becomes more resource-intensive. The most advanced models now require hundreds of millions of dollars in computing resources to train, creating a situation where a handful of well-funded companies can advance rapidly by leveraging open research without reciprocating. This dynamic could ultimately undermine the collaborative scientific culture that accelerated AI progress in the first place, potentially leading even traditionally open organizations like Google DeepMind to reconsider their approach to publishing research.