OpenAI’s New ‘Spud’ Model Is A Fresh Pretrain, Outcome Of 2 Years Of Research: Greg Brockman

OpenAI has come out with some capable models in the last couple of years, but its upcoming model — codenamed ‘Spud’ — might be a bigger step change than most people expect.

Greg Brockman, co-founder and president of OpenAI, recently shed light on what makes Spud different from recent releases: it is a brand new base model, not an iteration on existing foundations, and it carries roughly two years of accumulated research.


“The way that our development process works is you have pre-training,” Brockman explained. “So you produce a new base model, that then is the foundation that we build further improvements on top of. And that is always a huge effort across many people in the company.”

Brockman revealed that he has been deeply involved in the infrastructure work behind this effort. “That’s where I’ve actually been spending most of my efforts over the past eighteen months — really focused on our GPU infrastructure, on supporting the teams that do all of the training frameworks to scale up at these big runs.”

On Spud specifically, he was direct about its significance: “I think of Spud as a new base, as a new pre-train, and I’d say it’s like we have maybe two years worth of research that is coming to fruition in this model.”

As for what users will actually experience, Brockman kept it measured but optimistic. “It’s going to be very exciting, and I think that the way that the world will experience it is just improved capabilities.”

The implications are significant. A fresh pretrain from OpenAI would likely be the first since GPT-4o in May 2024 — a gap that semiconductor research firm SemiAnalysis had flagged as a concern, suggesting that OpenAI’s newer models had been built on older foundations rather than new base runs. If Spud represents a genuinely fresh pretrain, it would directly address that criticism and signal that OpenAI has resolved the infrastructure and scaling challenges that had reportedly plagued its training runs. Brockman’s mention of eighteen months spent on GPU infrastructure suggests the company has been systematically working toward exactly this.

The timing also matters. OpenAI has lost the benchmark crown to competitors in the past year, with Google and xAI trading top spots while OpenAI’s releases drew mixed reviews. Meanwhile, Google has publicly credited pretraining improvements as a key driver of Gemini 3’s performance. Brockman’s comments suggest OpenAI is now positioned to make the same leap. Two years of research compressing into a single model launch is, if accurate, a meaningful event rather than an incremental update. Whether or not Spud lives up to that framing, it will be one of the more consequential AI releases to watch.
