Update: There is additional context for the email that is the subject of the article below. The email was a response to a message from Microsoft’s Corporate Vice President of Communications Frank Shaw, in which he’d sought to establish a public narrative for what had transpired at OpenAI.
“Assuming we reach a resolution this morning, we’re going to need to drive a set of stories with OAI that establishes a clear narrative of what happened, why, and what the future looks like,” Shaw had written in the email addressed to Microsoft CEO Satya Nadella, CTO Kevin Scott and others. “The coverage and story so far is all over the map, so we have a short window to lay down a clear ‘what’ and timing. It needs to have some color and give context that allows people to piece together the last 48 hours. I’ve got a start below. I’ve picked the developer event as the inciting incident — not fully sure how true it is but have heard this from several people so it has the ring of plausible to it. I’d like any reaction to this, and any additions — if we don’t start (with OAI) telling a consistent story our recovery journey will be harder,” he had added.
Shaw then sketched out a narrative to explain the events at OpenAI, including the “inciting incident” he’d settled on, the timing and the resolution.
“Narrative/Timeline:
Inciting incident: Developer event — too commercial, felt like it could have been apple/meta/etc. Board member x felt like it lost the sense of non profit/ea/mission the board had signed up for.
Timing: Without any significant conversation with either Sam, Mira, Greg, board decided to fire Sam <need time/date/color>. They <who> told Sam, then Mira, then Greg. All were shocked, reached out to each other, then other investors.
Kevin Scott and Satya Nadella were told, but not given a clear “why” behind what the board was doing, and in subsequent conversations with OAI team became clear that a massive talent exodus was imminent. In conversations with both the board and leaders at OAI, pursued two paths — express support for OAI, and push to reverse the changes. Given the uncertainty about the motives of the board, also talked to Sam and Greg about what a potential “next” might be.
Saturday morning <confirm> Mira met with the remaining four board members and laid out where things were — employees were quitting, investors had lost confidence, the current status quo would not hold. Either the board would reinstate Sam as CEO, resign and appoint an interim board, or Mira and others would quit. Investors, to include Microsoft, supported this. <do we say Mark Hurd was helping>
Board deliberates through the day. Sticking point is who should be on the board. <names kicked around> More employees weigh in on Twitter with expressions of support for Sam and other engineering leaders. Investors make clear that given confidence in board and structure, future investments not likely.
Resolution: At <time> board decides. OAI all hands takes place. Back on track after one of the truly strangest 48 hours in corporate life.”
Kevin Scott’s email was a response to this message. Based on its wording, Scott does seem to have been describing what he actually believed had transpired at OpenAI, but regardless, his email should be read in conjunction with the one he was responding to.
The entire email thread can be read here.
The original article (with minor edits) follows.
Much water has passed under the bridge since the infamous OpenAI coup of 2023, in which the board fired Sam Altman and removed Greg Brockman from his board seat, but new insights into what really transpired keep emerging.
An internal Microsoft email from CTO Kevin Scott, released during the Musk-OpenAI legal proceedings, sheds new light on the personal dynamics that may have contributed to the shocking November 2023 board revolt. According to Scott’s understanding, shared with top Microsoft executives including CEO Satya Nadella, chief scientist Ilya Sutskever’s discontent with his diminishing research role played a significant part in the dramatic ouster.

Research Rivalry and Internal Tensions
In the email, sent on November 19, 2023, while the crisis was still unfolding, Scott conveyed two key issues that he felt had put Sutskever at odds with Altman. The second issue, Scott wrote, was “deeply personal to Ilya” and centered on researcher Jakub Pachocki’s rising prominence within the organization.
Scott said that “Jakub moreso than Ilya has been making the research breakthroughs that are driving things forward, to the point that Sam promoted Jakub, and put him in charge of the major model research directions.” The promotion, in Scott’s telling, catalyzed further success: “After he did that, Jakub’s work accelerated, and he’s made some truly stunning progress that has accelerated in the past few weeks.”
Although it’s unclear how exactly Scott came to these conclusions, the Microsoft CTO assessed the personal impact on Sutskever bluntly: “I think that Ilya has had a very, very hard time with this, with this person that used to work for him suddenly becoming the leader, and perhaps more importantly, for solving the problem that Ilya has been trying to solve the past few years with little or no progress.”
Scott defended Altman’s decision, stating: “Sam made the right choice as CEO here by promoting Jakub.”
Resource Allocation Disputes
Beyond the personal research rivalry, Scott also detailed what he felt was broader organizational tension between OpenAI’s Research and Applied divisions over GPU allocation. He characterized this as a “perfectly natural tension” where “the success of Applied has meant that headcount and GPUs got allocated to things like the API and ChatGPT.”
Scott noted that research teams, responsible for training new models, operate in a domain that is “literally insatiable” for computational resources. He suggested that researchers failed to appreciate the bigger picture: “The researchers at OAI do not appreciate that they would not have anywhere remotely as many GPUs as they do have if there were no Applied at all, and that Applied has a momentum all its own that must be fed.”
A Board Without Guardrails
What made these internal disputes explosive was OpenAI’s unusual governance structure. Scott emphasized what he felt was a critical vulnerability: “the employee was also a founder and board member, and the board constitution was such that they were highly susceptible to a pitch by Ilya that portrays the decisions that Sam was making as bad.”
He characterized some board members as “effective altruism folks who all things equal would like to have an infinite bag of money to build AGI-like things, just to study and ponder, but not to do anything with.” More damaging, he wrote, was their lack of experience: “None of them were experienced enough with running things, or understood the dynamic at OAI well enough to understand that firing Sam not only would not solve any of the concerns they had, but would make them worse.”
The Chaotic Timeline
Scott’s email also provided a detailed timeline of the November 17, 2023 events as he understood them. The board, consisting of Sutskever, Tasha McCauley, Helen Toner, and Adam D’Angelo, informed then-CTO Mira Murati of its plans late Thursday night. Murati called Scott and Nadella “about 10-15 minutes before the board talks to Sam,” marking the first time Microsoft leadership learned of the impending move.
At noon on Friday, the board notified Altman he was out, removed Brockman from the board, and immediately published a blog post. An all-hands meeting with “rattled staff” followed at 2 PM. Brockman, who “was blindsided and hadn’t been in the board deliberations,” subsequently resigned.
The researcher exodus began almost immediately. Scott wrote that “Jakub and a whole horde of researchers reach out to Sam and Greg trying to understand what happened, expressing loyalty to them, and saying they will resign.” By Friday night, “Jakub and a handful of others resign.”
The Aftermath
The coup ultimately failed. Within days, overwhelming pressure from employees, investors, and Microsoft led to Altman’s reinstatement. The board was reconstituted with new members, and Sutskever, who had reportedly expressed regret for his role, did not return to the reformed board.
Pachocki, meanwhile, continued his ascent. He was named Chief Scientist of OpenAI in May 2024, succeeding Sutskever after the latter’s departure from the company. Under Pachocki’s technical leadership, OpenAI has released several major models, including the GPT-4 family and the o1 reasoning models. o1, however, had been Sutskever’s brainchild, and he went on to found his own company, SSI, which was last valued at $32 billion. SSI’s mission is to build safe superintelligence directly, without releasing products or services along the way.
And Kevin Scott’s email, written while the drama was still unfolding, offers a rare, if possibly one-sided, insider perspective on how personal grievances, organizational tensions, and governance failures can combine to create a corporate crisis, even at the world’s most valuable AI company.
Here’s the full text of the email:
From: Kevin Scott
Sent: Sunday, November 19, 2023 7:31 AM
To: Frank X. Shaw, Satya Nadella, Brad Smith, Amy Hood, Caitlin McCabe
Frank,
I can help you with the timeline and with our best understanding of what was going on. I think the reality was that a member of the board, Ilya Sutskever, had been increasingly at odds with his boss, Sam, over a variety of issues.
One of those issues is that there is a perfectly natural tension inside of the company between Research and Applied over resource allocations. The success of Applied has meant that headcount and GPUs got allocated to things like the API and ChatGPT. Research, which is responsible for training new models, could always use more GPUs because what they’re doing is literally insatiable, and it’s easy for them to look at the success of Applied and believe that in a zero sum game they are responsible for them waiting for GPUs to become available to do their work. I could tell you stories like this from every place I’ve ever worked, and it boils down to, even if you have two important, super successful things you’re trying to work on simultaneously, folks rarely think about the global optima. They believe that their thing is more important, and that to the extent that things are zero sum, that the other thing is a cause of their woes. It’s why Sam has pushed us so hard on capacity: he’s the one thinking about the global optima and trying to make things non-zero sum. The researchers at OAI do not appreciate that they would not have anywhere remotely as many GPUs as they do have if there were no Applied at all, and that Applied has a momentum all its own that must be fed. So the only reasonable thing to do is what Sam has been doing: figure out how to get more compute.
The second of the issues, and one that’s deeply personal to Ilya, is that Jakub moreso than Ilya has been making the research breakthroughs that are driving things forward, to the point that Sam promoted Jakub, and put him in charge of the major model research directions. After he did that, Jakub’s work accelerated, and he’s made some truly stunning progress that has accelerated in the past few weeks. I think that Ilya has had a very, very hard time with this, with this person that used to work for him suddenly becoming the leader, and perhaps more importantly, for solving the problem that Ilya has been trying to solve the past few years with little or no progress. Sam made the right choice as CEO here by promoting Jakub.
Now, in a normal company, if you don’t like these two things, you’d appeal to your boss, and if he/she tells you that they’ve made their decision and that it’s final, your recourse is to accept the decision or quit. Here, and this is the piece that everyone should have been thinking harder about, the employee was also a founder and board member, and the board constitution was such that they were highly susceptible to a pitch by Ilya that portrays the decisions that Sam was making as bad. I think the thing that made them susceptible is that two of the board members were effective altruism folks who all things equal would like to have an infinite bag of money to build AGI-like things, just to study and ponder, but not to do anything with. None of them were experienced enough with running things, or understood the dynamic at OAI well enough to understand that firing Sam not only would not solve any of the concerns they had, but would make them worse. And none of them had experience, and didn’t seek experience out, in how to handle something like a CEO transition, certainly not for the hottest company in the world.
The actual timeline of events through Friday afternoon as I understand them:
Thursday late night, the board lets Mira know what they’re going to do. By board, it’s Ilya, Tash, Helen, and Adam.
Mira calls me and Satya about 10-15 minutes before the board talks to Sam. This is the first either of us had heard of any of this. Mira sounded like she had been run over by a truck as she tells me.
OAI Board notifies Sam at noon on Friday that he’s out, and that Greg is off the board, and immediately does a blog post.
OAI all hands at 2P to rattled staff.
Greg resigns. He was blindsided and hadn’t been in the board deliberations, and hadn’t agreed to stay.
Jakub and a whole horde of researchers reach out to Sam and Greg trying to understand what happened, expressing loyalty to them, and saying they will resign.
Friday night Jakub and a handful of others resign.