OpenAI has struggled with its governance and corporate structure for roughly a decade, but Google's DeepMind unit wrestled with many of the same questions as well.
A new book by journalist Sebastian Mallaby, The Infinity Machine: Demis Hassabis, DeepMind, and the Quest for Superintelligence, published March 31, 2026, reveals in remarkable detail how DeepMind’s co-founders — Demis Hassabis and Mustafa Suleyman — spent several years trying to wrest meaningful independence from Google, driven by a deep belief that artificial general intelligence was too consequential to be left under the sway of a single corporation’s shareholders.

The Spark: A Failed Safety Board Meeting
The story begins in August 2015, when Hassabis and Suleyman convened what they hoped would be a landmark AI safety oversight meeting at SpaceX. Elon Musk hosted, with Google’s leadership, Reid Hoffman, and other tech luminaries in attendance. The meeting was a failure. Personal tensions, particularly between Musk and Google CEO Larry Page, and clashing visions for AI governance overwhelmed any constructive discussion. Worse, Musk went on to use what he had learned about DeepMind’s progress at that meeting to help found OpenAI as a direct rival.
The collapse of that gathering galvanized Suleyman in particular. He imagined building a “novel, post-capitalist form of governance” — a structure that could balance profit, existential risk, and social justice in an era of rapidly advancing AI. The ambition was not modest.
Project Mario: The Bid for Independence
Their opening came from Google itself. In 2015, Google began restructuring into a holding company called Alphabet, spinning out specialist units as semi-independent “bets.” Google’s M&A chief, Don Harrison, suggested to Hassabis and Suleyman that DeepMind could regain independence through this route. The resulting governance talks were given a secretive code name: Project Mario.
The proposed structure was a so-called 3-3-3 board: three seats for DeepMind, three for Alphabet, and three for independent members. To DeepMind’s founders, this was exactly the kind of credible external oversight they wanted. The commercial logic of Alphabetization even seemed to align with their interests — a more independent DeepMind would give Google a valuation boost, making implementation more likely.
Talks got underway in earnest in early 2016. Hassabis met with Larry Page four times to work through details, while Suleyman launched DeepMind Health, envisioning a future where the unit would earn revenues from AI-generated savings for hospitals. Hassabis, meanwhile, quietly assembled a secretive team of around 20 researchers to build high-frequency trading algorithms — an attempt to become financially self-sufficient and potentially rival Renaissance Technologies. Google was not pleased with the trading project, and it was eventually disbanded.
Google’s Changing Signals
Progress seemed real. By the summer of 2016, a formal term sheet had been drawn up after a fifth round of talks with Page. Then came a rude awakening. In November 2016, Google’s chief legal officer, David Drummond, arrived in London not to finalize the deal but to introduce “concerns” and a vague alternative formula. Days later, in a call with new Google CEO Sundar Pichai, the DeepMind founders heard a harder line: AI was now considered strategically central to Google’s core products like Search and Cloud. It did not belong in the Alphabet “bets” bucket reserved for long-shot moonshots.
The founders were confused by the mixed signals — Page seemingly supportive, Pichai increasingly resistant — but resolved to keep pushing.
The Walk-Away Option and Reid Hoffman’s $1 Billion
At the end of 2016, Hassabis and Suleyman cooked up a Plan B: gather $5 billion in outside investment pledges, and use the threat of a walkout as leverage. The legal form they settled on was a “company limited by guarantee” — the structure commonly used by nonprofits, issuing no shares and paying no dividends. They called it a Global Interest Company.
In January 2017, at the famous Asilomar AI safety conference in California, Suleyman sat down with Reid Hoffman to test the waters. Hoffman, who had already backed OpenAI for safety reasons, was receptive. He agreed on the spot to commit over a quarter of his net worth — $1 billion — to the vision of an independent, public-interest DeepMind. It was 100 times more than he had pledged to OpenAI just over a year before.
“I said, look, this is the most impactful technology of my lifetime,” Hoffman recalled. “I support the idea of an independent DeepMind with a public-interest mission… This technology shouldn’t be used to entrench a monopoly.”
Hassabis, however, was wary of a drawn-out legal fight with Google. The cleaner route, he believed, was still a negotiated spin-out within Alphabet.
The Aviemore Disaster
Talks ground on through 2017. In June of that year, DeepMind’s roughly 500-person staff was flown to Aviemore in the Scottish Highlands for a company off-site. In a moment of optimism, Suleyman took to the stage and unveiled a slide labeled “DeepMind: A Global Interest Company,” showing an org chart in which Suleyman would lead Applied AI folded into Google, while Hassabis would lead a semi-independent AGI research unit. Employees were stunned.
Ten days later, the reality hit. Google sent back a heavily red-lined negotiating document. Pichai had not approved the plan. Hassabis and Suleyman now had to walk back a vision they had announced to their entire company. Suleyman had to rescind a promised move to California that he had already told his deputies to prepare for. For many inside DeepMind, this was the moment things began to unravel.
It was a chaotic period internally. That same week, OpenAI chief scientist Ilya Sutskever was separately reading the newly published transformer paper — a breakthrough whose significance Hassabis, distracted by the governance fight, was slower to appreciate than he might otherwise have been.
The Fall of Suleyman
By 2019, Suleyman’s troubles had compounded. A handful of DeepMind employees alleged bullying — harsh language, intimidating messages, and behavior that frightened subordinates. An outside investigation concluded that his management style amounted to misconduct. He was given the choice of accepting the findings and taking a voluntary sabbatical, or contesting them and facing formal proceedings with the likely outcome of dismissal and forfeiture of compensation. He accepted.
Suleyman sent an all-staff email saying he was stepping back to recharge. Almost no one inside the company knew of the investigation. Then Bloomberg published a story headlined "Google DeepMind Co-Founder Placed on Leave from AI Lab." Suleyman felt blindsided, believing the story could not have run without Hassabis's implicit approval, or that, at the very least, DeepMind had failed to contain it.
He never returned. By the end of 2019, Google made him a vice president, but one with no direct reports. He eventually departed entirely, co-founded Inflection AI with Reid Hoffman, and was later appointed CEO of Microsoft AI — where he now leads AI efforts for one of DeepMind and Google’s fiercest rivals.
What the Experiment Revealed
Looking back, the book frames the entire Project Mario saga as a sobering lesson in the limits of AI governance. Hassabis and Suleyman had fought for three years across multiple iterations of the idea. They had attracted a $1 billion commitment from one of Silicon Valley's most prominent investors. They had assembled top lawyers, communications strategists, and investment bankers. And they had achieved essentially nothing structurally.
The parallel story of OpenAI’s governance collapse — where the nonprofit board’s 2023 attempt to fire Sam Altman was swiftly reversed by investor pressure — only underscored the point. As Mallaby writes, even Google’s attempt in 2019 to create an external AI ethics advisory council collapsed within days under social media pressure, after controversy over one appointee’s views.
Hassabis, reflecting on the period, told Mallaby that the whole effort had been misguided in conception:
“Safety isn’t about governance structures. I mean, even if you have a governance board, it probably wouldn’t do the right thing when it came to the crunch… So discussing these things didn’t really help. It made it harder to build useful trust, because when you are negotiating a trustless structure, it implies that you can’t trust the other person.”
His conclusion was a pragmatic pivot: rather than fighting for structural independence, the better strategy was to earn genuine influence from inside — by racking up shared successes with Google and being present at the table when decisions were made.
Both Hassabis and Suleyman, the book argues, ultimately arrived at the same destination: the belief that the best available safety mechanism is their own personal authority within their respective organizations. Hassabis now leads Google DeepMind, having absorbed Google Brain and multiple allied teams into a single unified AI operation. Suleyman leads AI at Microsoft. Two North Londoners with immigrant parents, once united by a shared idealistic mission, now head AI at two of the world’s most powerful technology companies — and compete fiercely against each other.
Whether that personal influence will prove sufficient — as AI systems grow more powerful and the stakes grow higher — is the question the book leaves open.