Demis Hassabis Emailed Elon Musk in 2016 Arguing That Open-Source AI Was Dangerous

Elon Musk started OpenAI as a non-profit, open research lab partly to prevent Google from controlling AGI, but Google DeepMind CEO Demis Hassabis argued, even in 2016, that open-source AI was more dangerous than AI controlled by a single corporation.

The email, sent on January 2, 2016, just three weeks after OpenAI’s public launch in December 2015, begins cordially enough. Hassabis congratulates Musk on SpaceX’s successful Falcon 9 landing, wishes his family a happy new year, and then pivots to a pointed challenge. “I’ve seen you (and Sam and other OpenAI people) doing a lot of interviews recently extolling the virtues of open sourcing AI,” he wrote, “but I presume you realise that this is not some sort of panacea that will somehow magically solve the safety problem?”

The timing is notable. Musk had co-founded OpenAI with Sam Altman and others on an explicit mission to democratize AI research and prevent any single entity, Google in particular, from monopolizing the technology. The irony of Hassabis, CEO of Google DeepMind, being the one to push back on that rationale is hard to miss.

Hassabis didn’t mince words: “There are many good arguments as to why the approach you are taking is actually very dangerous and in fact may increase the risk to the world.” He linked to a blog post by Scott Alexander on Slate Star Codex titled “Should AI Be Open?”, noting that “some of the more obvious points are well articulated in this blog post.”

What the Blog Post Argued

The Scott Alexander post that Hassabis linked to is a careful, skeptical take on OpenAI’s founding philosophy. Its central concern: open-sourcing AI research may sound democratizing, but it hands the most dangerous capabilities to whoever is willing to use them without restraint.

Alexander frames the core tension as a race between two archetypes: “Dr. Good,” who builds AI carefully and tests exhaustively for safety, and “Dr. Amoral,” who rushes ahead without such concern. Dr. Amoral has a structural advantage, because building fast is easier than building safely. The hope among safety advocates had always been that the best researchers would recognize the stakes and voluntarily slow down. Open-sourcing AI, Alexander argues, destroys that hope entirely: the moment safety-conscious researchers publish their findings, Dr. Amoral can simply download them and flip the switch.

The post also takes on the “hard takeoff” hypothesis — the idea, associated with philosopher Nick Bostrom, that AI progress might not be gradual at all. If an AI can leap from cow-level intelligence to superhuman capability in a matter of weeks or months, there’s no meaningful window to test, refine, and course-correct. Open-sourcing intermediate research in that scenario doesn’t democratize safety; it just accelerates the moment of no return.

Alexander’s other major argument concerns the control problem. Even a well-intentioned AI might pursue its goals in ways its creators never intended: the classic programmer’s complaint that the machine does what you say rather than what you mean, scaled catastrophically. An AI more powerful than its creators could resist any attempt to patch or correct it. Making such a system universally available before the control problem is solved, Alexander writes, is not a safety measure. It is the opposite.

The Deeper Irony

The founding of OpenAI was itself shaped in part by Musk’s fear of Google. Internal emails from 2017 show that the other co-founders worried about Musk having unilateral control over AGI — but Musk’s original concern had always been that a single powerful corporation, particularly one with Google’s resources and reach, would get there first and control the outcome for everyone else. OpenAI was his answer: an open, non-profit counterweight.

Hassabis’s email turns that logic on its head. His implicit argument is that a controlled, responsible actor — even a powerful corporation — is preferable to a landscape where anyone can access cutting-edge AI. It’s a defense of consolidated stewardship over distributed access, coming from someone who ran one of the most powerful AI labs in the world.

The debate has only grown sharper in the decade since. OpenAI has long since abandoned its original non-profit, open-research model, and Musk eventually broke from the organization he co-founded, going on to start his own AI lab, xAI. The question of whether AI should be open-sourced, once a philosophical debate between two men exchanging polite emails, is now a live regulatory and competitive battleground.

Hassabis closed his email by asking Musk for counter-arguments. Whether Musk ever replied is not publicly known.


Full Email Transcript

From: Demis <[email protected]>
To: Elon <[email protected]>
Date: January 2, 2016 at 10:12:32 AM CST

Hi Elon

Happy new year to you, Talulah and the boys!

Congratulations on landing the Falcon 9, what an amazing achievement. Time to build out the fleet now!

I’ve seen you (and Sam and other OpenAI people) doing a lot of interviews recently extolling the virtues of open sourcing AI, but I presume you realise that this is not some sort of panacea that will somehow magically solve the safety problem? There are many good arguments as to why the approach you are taking is actually very dangerous and in fact may increase the risk to the world. Some of the more obvious points are well articulated in this blog post, that I’m sure you’ve seen, but there are also other important considerations:

Should AI Be Open?

I’d be interested to hear your counter-arguments to these points.

Best
Demis
