We “Screwed Up” GPT-5.2’s Writing Abilities By Focusing Too Much On Coding: OpenAI CEO Sam Altman

Sam Altman has admitted that OpenAI’s latest GPT models aren’t as good at writing as their predecessors, and he says it’s because of the company’s focus on coding.

In a recent OpenAI town hall, the CEO responded to an audience member’s question about widespread complaints on social media regarding GPT-5.2’s writing quality. In a surprisingly candid moment, Altman acknowledged the regression directly: “I think we just screwed that up.”

The OpenAI chief explained that the company made a deliberate choice with GPT-5.2 to prioritize other capabilities. “We did decide, and I think for good reason, to put most of our effort in 5.2 into making it super good at intelligence, reasoning, coding, engineering, that kind of thing,” Altman said. “We have limited bandwidth here and sometimes we focus on one thing and neglect another.”

However, Altman was quick to clarify that this represents a temporary trade-off rather than a strategic shift away from writing quality. “I believe that the future is mostly going to be about very good general purpose models,” he noted. “Even if you’re trying to make a model that’s really great at coding, it’d be nice if it writes well too.”

He elaborated on why writing matters even for code-centric applications: “If you’re trying to have it be able to generate a full application for you, you’d like good writing in there. When it’s interacting with you, you’d like to have a sort of thoughtful, incisive personality and communicate clearly—like good writing in the sense of clear thought, not like beautiful prose.”

Altman expressed optimism about addressing the issue in upcoming releases. “My hope is that we just push to get future models really good in all of these dimensions and I think we will do that,” he said. “Intelligence is a surprisingly fungible thing and we can get really good at all of these things in a single model.”

While acknowledging that “this is a particularly important time to push on kind of, let’s call it coding intelligence,” Altman committed to broader improvements: “We will try to excel and catch up on everything else quickly.”

The admission reflects a broader challenge facing AI companies as they scale their models: balancing specialized performance against general capability. Anthropic’s Claude has differentiated itself by ranking among the best models at both writing and coding. OpenAI appears to have concentrated on coding to catch up with Anthropic, and that focus seems to have come at the cost of the model’s writing abilities. The trade-off has frustrated users, with complaints about overly terse, mechanical, and stilted writing from newer GPT versions becoming increasingly common on platforms like X and Reddit. Altman’s frank acknowledgment suggests OpenAI is taking this feedback seriously and may recalibrate its priorities in future releases to deliver the “very good general purpose models” he envisions.
