Much drama has already taken place over OpenAI’s non-profit structure, and details around the tensions that led to the current situation are slowly emerging.
A recently released email thread from 2017 involving Elon Musk and OpenAI founding members Sam Altman, Greg Brockman, and Ilya Sutskever shows how the four negotiated over the future of OpenAI. It appears that Greg Brockman, Ilya Sutskever, and Sam Altman wanted to convert OpenAI from a non-profit into a traditional startup, and that Elon Musk was opposed to the idea. Greg and Ilya, who were the technical leads at the company, also seemed to have reservations about either Elon Musk or Sam Altman being CEO and having sole control of OpenAI.
The mail chain starts with Greg Brockman and Ilya Sutskever writing an email with the subject “Honest thoughts” to Elon Musk and Sam Altman. Greg and Ilya hint that both Elon and Sam are insistent on having control of OpenAI and being its CEO, and they express doubts over either of them taking the role: they worry about Elon having control of AGI if he controlled OpenAI fully, and they say they don’t understand why Sam Altman is so insistent on being Chief Executive.
But just nine minutes after the roughly 1,000-word email is sent, Elon Musk replies curtly and declares that all discussions are over. Musk insinuates that Sam, Greg, and Ilya want to convert OpenAI from a non-profit into a traditional startup. He is completely opposed to the idea and threatens to stop funding OpenAI unless they commit to the non-profit structure. The chain ends with Sam Altman saying he remains “enthusiastic” about the non-profit structure.
Here is the email exchange in its entirety.
On Sep 20, 2017, at 2:08 PM, Ilya Sutskever wrote:
Elon, Sam,
This process has been the highest stakes conversation that Greg and I have ever participated in, and if the project succeeds, it’ll turn out to have been the highest stakes conversation the world has seen. It’s also been a deeply personal conversation for all of us.
Yesterday while we were considering making our final commitment given the non-solicit agreement, we realized we’d made a mistake. We have several important concerns that we haven’t raised with either of you. We didn’t raise them because we were afraid to: we were afraid of harming the relationship, having you think less of us, or losing you as partners.
There is some chance that our concerns will prove to be unresolvable. We really hope it’s not the case, but we know we will fail for sure if we don’t all discuss them now. And we have hope that we can work through them and all continue working together.
Elon:
We really want to work with you. We believe that if we join forces, our chance of success in the mission is the greatest. Our upside is the highest. There is no doubt about that. Our desire to work with you is so great that we are happy to give up on the equity, personal control, make ourselves easily firable — whatever it takes to work with you.
But we realized that we were careless in our thinking about the implications of control for the world. Because it seemed so hubristic, we have not been seriously considering the implications of success.
• The current structure provides you with a path where you end up with unilateral absolute control over the AGI. You stated that you don’t want to control the final AGI, but during this negotiation, you’ve shown to us that absolute control is extremely important to you.
• As an example, you said that you needed to be CEO of the new company so that everyone will know that you are the one who is in charge, even though you also stated that you hate being CEO and would much rather not be CEO.
• Thus, we are concerned that as the company makes genuine progress towards AGI, you will choose to retain your absolute control of the company despite current intent to the contrary. We disagree with your statement that our ability to leave is our greatest power, because once the company is actually on track to AGI, the company will be much more important than any individual.
• The goal of OpenAI is to make the future good and to avoid an AGI dictatorship. You are concerned that Demis could create an AGI dictatorship. So do we. So it is a bad idea to create a structure where you could become a dictator if you chose to, especially given that we can create some other structure that avoids this possibility.
We have a few smaller concerns, but we think it’s useful to mention it here:
• In the event we decide to buy Cerebras, my strong sense is that it’ll be done through Tesla. But why do it this way if we could also do it from within OpenAI? Specifically, the concern is that Tesla has a duty to shareholders to maximize shareholder return, which is not aligned with OpenAI’s mission. So the overall result may not end up being optimal for OpenAI.
• We believe that OpenAI the non-profit was successful because both you and Sam were in it. Sam acted as a genuine counterbalance to you, which has been extremely fruitful. Greg and I, at least so far, are much worse at being a counterbalance to you. We feel this is evidenced even by this negotiation, where we were ready to sweep the long-term AGI control questions under the rug while Sam stood his ground.
Sam:
When Greg and I are stuck, you’ve always had an answer that turned out to be deep and correct. You’ve been thinking about the ways forward on this problem extremely deeply and thoroughly. Greg and I understand technical execution, but we don’t know how structure decisions will play out over the next month, year, or five years.
But we haven’t been able to fully trust your judgements throughout this process, because we don’t understand your cost function.
• We don’t understand why the CEO title is so important to you. Your stated reasons have changed, and it’s hard to really understand what’s driving it.
• Is AGI truly your primary motivation? How does it connect to your political goals? How has your thought process changed over time?
Greg and Ilya:
We had a fair share of our own failings during this negotiation, and we’ll list some of them here (Elon and Sam, I’m sure you’ll have plenty to add…):
• During this negotiation, we realized that we have allowed the idea of financial return 2-3 years down the line to drive our decisions. This is why we didn’t push on the control — we thought that our equity is good enough, so why worry? But this attitude is wrong, just like the attitude of AI experts who don’t think that AI safety is an issue because they don’t really believe that they’ll build AGI.
• We did not speak our full truth during the negotiation. We have our excuses, but it was damaging to the process, and we may lose both Sam and Elon as a result.
There’s enough baggage here that we think it’s very important for us to meet and talk it out. Our collaboration will not succeed if we don’t. Can all four of us meet today? If all of us say the truth, and resolve the issues, the company that we’ll create will be much more likely to withstand the very strong forces it’ll experience.
Greg & Ilya
From: Elon Musk
Subject: Re: Honest Thoughts
Date: Wednesday, September 20, 2017 2:17 PM
Guys, I’ve had enough. This is the final straw.
Either go do something on your own or continue with OpenAI as a nonprofit. I will no longer fund OpenAI until you have made a firm commitment to stay or I’m just being a fool who is essentially providing free funding for you to create a startup.
Discussions are over.
Let me know if you need anything else!
On Wed, Sep 20, 2017 at 3:08 PM Elon Musk wrote:
To be clear, this is not an ultimatum to accept what was discussed before. That is no longer on the table.
From: Sam Altman
Subject: Re: Honest Thoughts
Date: Thursday, September 21, 2017 9:17 AM
I remain enthusiastic about the non-profit structure!
Much water has flowed under the bridge since these emails were sent. Elon Musk pulled out of OpenAI, but OpenAI managed to continue its operations regardless. In late 2022, OpenAI released ChatGPT and brought about the current AI revolution. Soon after, Elon Musk sued OpenAI for having started off as a non-profit but then re-engineering its corporate structure and tying up with a for-profit entity in Microsoft. As things stand, OpenAI is looking to officially change its corporate structure to a for-profit entity, but it appears that the issues of having started off as a non-profit could continue to haunt it for quite some time to come.