Much water has passed under the bridge since the infamous OpenAI coup of 2023, in which the board fired Sam Altman and removed Greg Brockman from the board, but new insights into what really transpired keep emerging.
An internal Microsoft email from CTO Kevin Scott, released during the Musk-OpenAI legal proceedings, sheds new light on the personal dynamics that may have contributed to the shocking November 2023 board revolt. According to Scott’s assessment, shared with top Microsoft executives including CEO Satya Nadella, chief scientist Ilya Sutskever’s discontent with his diminishing research role played a significant part in the dramatic ouster.

Research Rivalry and Internal Tensions
In the email, sent on November 19, 2023, two days after the crisis unfolded, Scott identified two key issues that had put Sutskever at odds with Altman. The second of these, Scott wrote, was “deeply personal to Ilya” and centered on researcher Jakub Pachocki’s rising prominence within the organization.
Scott explained that “Jakub moreso than Ilya has been making the research breakthroughs that are driving things forward, to the point that Sam promoted Jakub, and put him [in] charge of the major model research directions.” The promotion appeared to catalyze further success: “After he did that, Jakub’s work accelerated, and he’s made some truly stunning progress that has accelerated in the past few weeks.”
The Microsoft CTO assessed the personal impact on Sutskever bluntly: “I think that Ilya has had a very, very hard time with this, with this person that used to work for him suddenly becoming the leader, and perhaps more importantly, for solving the problem that Ilya has been trying to solve the past few years with little or no progress.”
Scott defended Altman’s decision, stating: “Sam made the right choice as CEO here by promoting Jakub.”
Resource Allocation Disputes
Beyond the personal research rivalry, Scott also detailed a broader organizational tension between OpenAI’s Research and Applied divisions over GPU allocation. He characterized this as a “perfectly natural tension” where “the success of Applied has meant that headcount and GPUs got allocated to things like the API and ChatGPT.”
Scott noted that research teams, responsible for training new models, operate in a domain that is “literally insatiable” for computational resources. He suggested that researchers failed to appreciate the bigger picture: “The researchers at OAI do not appreciate that they would not have anywhere remotely as many GPUs as they do have if there were no Applied at all, and that Applied has a momentum all its own that must be fed.”
A Board Without Guardrails
What made these internal disputes explosive was OpenAI’s unusual governance structure. Scott emphasized a critical vulnerability: “the employee was also a founder and board member, and the board constitution was such that they were highly susceptible to a pitch by Ilya that portrays the decisions that Sam was making as bad.”
He characterized some board members as “effective altruism folks who all things equal would like to have an infinite bag of money to build AGI-like things, just to study and ponder, but not to do anything with.” More damaging, he wrote, was their lack of experience: “None of them were experienced enough with running things, or understood the dynamic at OAI well enough to understand that firing Sam not only would not solve any of the concerns they had, but would make them worse.”
The Chaotic Timeline
Scott’s email also provided a detailed timeline of the November 17, 2023 events. The board—consisting of Sutskever, Tasha McCauley, Helen Toner, and Adam D’Angelo—informed then-CTO Mira Murati of their plans late Thursday night. Murati called Scott and Nadella “about 10-15 minutes before the board talks to Sam,” marking the first time Microsoft leadership learned of the impending move.
At noon on Friday, the board notified Altman he was out, removed Brockman from the board, and immediately published a blog post. An all-hands meeting with “rattled staff” followed at 2 PM. Brockman, who “was blindsided and hadn’t been in the board deliberations,” subsequently resigned.
The researcher exodus began almost immediately. Scott wrote that “Jakub and a whole horde of researchers reach out to Sam and Greg trying to understand what happened, expressing loyalty to them, and saying they will resign.” By Friday night, “Jakub and a handful of others resign.”
The Aftermath
The coup ultimately failed. Within days, overwhelming pressure from employees, investors, and Microsoft led to Altman’s reinstatement. The board was reconstituted with new members, and Sutskever, who had reportedly expressed regret for his role, did not return to the reformed board.
Pachocki, meanwhile, continued his ascent. He was named Chief Scientist of OpenAI in May 2024, succeeding Sutskever after the latter’s departure from the company. Under his technical leadership, OpenAI has shipped several major models, including the GPT-4 family and the o1 reasoning models. Sutskever went on to co-found Safe Superintelligence Inc. (SSI), which aims to build safe superintelligence directly, without releasing products or services along the way.
Scott’s email, written while the drama was still unfolding, offers a rare insider’s view of how personal grievances, organizational tensions, and governance failures can combine to produce a corporate crisis, even at the world’s most valuable AI company.
Here’s the full text of the email:
From: Kevin Scott
Sent: Sunday, November 19, 2023 7:31 AM
To: Frank X. Shaw, Satya Nadella, Brad Smith, Amy Hood, Caitlin McCabe
Frank,
I can help you with the timeline and with our best understanding of what was going on. I think the reality was that a member of the board, Ilya Sutskever, had been increasingly at odds with his boss, Sam, over a variety of issues.
One of those issues is that there is a perfectly natural tension inside of the company between Research and Applied over resource allocations. The success of Applied has meant that headcount and GPUs got allocated to things like the API and ChatGPT. Research, which is responsible for training new models, could always use more GPUs because what they’re doing is literally insatiable, and it’s easy for them to look at the success of Applied and believe that in a zero sum game they [Applied] are responsible for them [Research] waiting for GPUs to become available to do their work. I could tell you stories like this from every place I’ve ever worked, and it boils down to, even if you have two important, super successful things you’re trying to work on simultaneously, folks rarely think about the global optima. They believe that their thing is more important, and that to the extent that things are zero sum, that the other thing is a cause of their woes. Its why Sam has pushed us so hard on capacity: he’s the one [thinking] about the global optima and trying to make things non-zero sum. The researchers at OAI do not appreciate that they would not have anywhere remotely as many GPUs as they do have if there were no Applied at all, and that Applied has a momentum all its own that must be fed. So the only reasonable thing to do is what Sam has been doing: figure out how to get more compute.
The second of the issues, and one that’s deeply personal to Ilya, is that Jakub moreso than Ilya has been making the research breakthroughs that are driving things forward, to the point that Sam promoted Jakub, and put him [in] charge of the major model research directions. After he did that, Jakub’s work accelerated, and he’s made some truly stunning progress that has accelerated in the past few weeks. I think that Ilya has had a very, very hard time with this, with this person that used to work for him suddenly becoming the leader, and perhaps more importantly, for solving the problem that Ilya has been trying to solve the past few years with little or no progress. Sam made the right choice as CEO here by promoting Jakub.
Now, in a normal company, if you don’t like these two things, you’d appeal to your boss, and if he/she tells you that they’ve made their decision and that its final, your recourse is accept the decision or quit. Here, and this is the piece that everyone should have been thinking harder about, the employee was also a founder and board member, and the board constitution was such that they were highly susceptible to a pitch by Ilya that portrays the decisions that Sam was making as bad. I think the things that made them susceptible, is that two of the board members were effective altruism folks who all things equal would like to have an infinite bag of money to build AGI-like things, just to study and ponder, but not to do anything with. None of them were experienced enough with running things, or understood the dynamic at OAI well enough to understand that firing Sam not only would not solve any of the concerns they had, but would make them worse. And none of them had experience, and didn’t seek experience out, in how to handle something like a CEO transition, certainly not for the hottest company in the world.
The actual timeline of events through Friday afternoon as I understand them:
Thursday late night, the board let’s Mira know what they’re going to do. By board, its Ilya, Tash, Helen, and Adam.
Mira calls me and Satya about 10-15 minutes before the board talks to Sam. This is the first either of us had heard of any of this. Mira sounded like she had been run over by a truck as she tells me.
OAI Board notifies Sam at noon on Friday that he’s out, and that Greg is off the board, and immediately does a blog post.
OAI all hands at 2P to rattled staff.
Greg resigns. He was blindsided and hadn’t been in the board deliberations, and hadn’t agreed to stay.
Jakub and a whole horde of researchers reach out to Sam and Greg trying to understand what happened, expressing loyalty to them, and saying they will resign.
Friday night Jakub and a handful of others resign.