There’s plenty of chatter about whether AI might currently be in a bubble, but Google DeepMind CEO Demis Hassabis says that Google is well-positioned either way.
In a recent podcast, Hassabis offered a nuanced take on the AI bubble debate, arguing that the question isn’t binary and that different parts of the AI ecosystem face different risks. The neuroscientist-turned-AI pioneer, who co-founded DeepMind in 2010 before it was acquired by Google, also explained why he believes Google’s unique position gives it an advantage regardless of how the market evolves.

“I still subscribe to AI is overhyped in the short term and still underappreciated in the medium to long term in how transformative it’s going to be,” Hassabis said. “There is a lot of talk, of course, right now about AI bubbles. In my view, it’s not one binary thing. I think there are parts of the AI ecosystem that are probably in bubbles.”
He pointed to seed-stage startups as one area of concern. “One example would be just seed rounds for startups that basically haven’t even got going yet and they’re raising at tens of billions of dollars valuations just out of the gate. It’s sort of interesting to see how can that be sustainable. My guess is probably not, at least not in general.”
However, Hassabis drew a distinction between startup valuations and the broader technology sector. “Then people are worrying about obviously the big tech valuations and other things. I think there’s a lot of real business underlying that. But remains to be seen.”
Either way, Hassabis said, Google’s diversified position insulates it from market fluctuations. “I don’t worry too much about are we in a bubble or not, because from my perspective, leading Google DeepMind and obviously with Google as Alphabet as a whole, our job and my job is to make sure either way we come out of it very strong. And I think we are tremendously well positioned either way.”
He outlined Google’s advantages: “If it continues going like it is now, fantastic. We’ll carry on with all of these great things that we’re doing and experiments and progress towards AGI. If there’s a retrenchment, fine. Then also I think we’re in a great position because we have our own stack with TPUs. We also have all these incredible Google products and the profits that all makes to plug in our AI into.”
Hassabis highlighted specific integration opportunities. “We’re doing that with Search—it’s totally revolutionized by AI Overviews, AI mode with Gemini under the hood. We are looking at Workspace, at email, at YouTube. So there’s all these amazing things—in Chrome, there’s a lot of these amazing things that AI, we can see, are already low-hanging fruit to apply Gemini to, as well, of course, as Gemini app, which is doing really well now, and the idea of a universal assistant.”
He concluded: “So there’s new products, and I think they will, in the fullness of time, be super valuable. But we don’t have to rely on that. We can just power up our existing ecosystem, which is what’s happened over the last year.”
Hassabis’s comments highlight a competitive advantage that distinguishes Google from its AI rivals. While OpenAI, Anthropic, and xAI have raised billions in venture funding at striking valuations, these startups lack the diversified revenue streams and integrated product ecosystem that Google possesses. OpenAI, for instance, despite its ChatGPT success and $500 billion valuation, still relies heavily on its partnership with Microsoft for compute infrastructure and has no established products like Search or YouTube to integrate its AI into. Anthropic (backed by Google and Amazon, among others) and Elon Musk’s xAI face a similar constraint: they must build standalone businesses rather than layering AI onto existing platforms with billions of users.
This structural advantage means that if AI investment does cool—as it has periodically with other technologies—Google can continue advancing its AI capabilities while monetizing them through its core businesses. The company’s TPU infrastructure also gives it control over its compute costs, a significant factor given the enormous expenses associated with training and deploying large language models. For pure-play AI startups, a funding slowdown could prove existential; for Google, it would merely shift the focus from moonshots to integration. Hassabis’s confidence suggests that in the AI race, having a safety net of profitable businesses may matter as much as having cutting-edge models.