Scaling Could Now Give Diminishing Returns, We’re Back To The Age Of Research: Ilya Sutskever

Former OpenAI Chief Scientist Ilya Sutskever believes that AI is re-entering an era it was in previously — the era of research.

Sutskever, who departed OpenAI in May 2024 and co-founded Safe Superintelligence Inc., has offered a candid assessment of where artificial intelligence development stands today. In remarks that challenge the prevailing wisdom of simply throwing more resources at larger models, he argues that the industry’s five-year scaling sprint may be reaching its natural limits. His perspective carries particular weight given his central role in developing GPT-3 and establishing the scaling paradigm that has defined modern AI.

“The way ML used to work is that people would just tinker with stuff and try to get interesting results. That’s what’s been going on in the past,” Sutskever explained. “Then the scaling insight arrived. Scaling laws, GPT-3, and suddenly everyone realized we should scale.”

He outlined why this approach became so attractive to the industry: “Companies love this because it gives you a very low-risk way of investing your resources. It’s much harder to invest your resources in research. Compare that. If you research, you need to be, ‘Go forth researchers and research and come up with something,’ versus get more data, get more compute. You know you’ll get something from pre-training.”

But Sutskever sees this era coming to an end. “At some point though, pre-training will run out of data. The data is very clearly finite. What do you do next? Either you do some kind of souped-up pre-training, a different recipe from the one you’ve done before, or you’re doing RL, or maybe something else. But now that compute is big, compute is now very big, in some sense we are back to the age of research.”

He framed the shift in historical terms: “Maybe here’s another way to put it. Up until 2020, from 2012 to 2020, it was the age of research. Now, from 2020 to 2025, it was the age of scaling—maybe plus or minus, let’s add error bars to those years—because people say, ‘This is amazing. You’ve got to scale more. Keep scaling.’ The one word: scaling.”

His skepticism about continued scaling returns was pointed: “But now the scale is so big. Is the belief really, ‘Oh, it’s so big, but if you had 100x more, everything would be so different?’ It would be different, for sure. But is the belief that if you just 100x the scale, everything would be transformed? I don’t think that’s true. So it’s back to the age of research again, just with big computers.”
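To see why "100x more" can underwhelm, it helps to look at the shape of a scaling-law curve. The sketch below is a back-of-the-envelope illustration (not from Sutskever's remarks), using a toy power-law loss curve with purely illustrative constants to show how diminishing returns fall out of the math.

```python
# Minimal sketch (illustrative, not from the article): a power-law scaling
# curve implies diminishing returns from more compute. The constants below
# are placeholder values, not measured scaling-law fits.

def predicted_loss(compute, irreducible=1.7, coeff=10.0, alpha=0.05):
    """Toy power-law loss curve: L(C) = L_inf + a * C^(-alpha)."""
    return irreducible + coeff * compute ** (-alpha)

base = 1e24  # hypothetical training-compute budget (FLOPs)
for scale in (1, 10, 100):
    c = base * scale
    print(f"{scale:>3}x compute -> predicted loss {predicted_loss(c):.3f}")

# With alpha = 0.05, a 100x increase in compute shrinks the reducible part
# of the loss by only a factor of 100**0.05 ~= 1.26 -- better, but not
# transformative, which is the shape of the "100x" argument above.
```

Under these assumed numbers, going from 1x to 100x compute nudges the predicted loss from roughly 2.33 to 2.20, which is the kind of incremental gain Sutskever suggests no longer justifies scaling as the dominant strategy.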

Interestingly, Sutskever’s comments come at a time when Google researchers have said they are still seeing gains from pre-training. Sutskever acknowledged this, but maintained that pre-training data will eventually run out. Other researchers have also pointed to the need for new research breakthroughs: Meta Chief AI Scientist Yann LeCun, who has since quit the company, has said that LLMs won’t get humanity to AGI, and that he will work on finding new breakthroughs at his own startup. With Sutskever, OpenAI’s former Chief Scientist, also hinting at the need for fresh breakthroughs, it does appear that AI progress could slow in the coming years while researchers search for novel methods to spur it forward once again.
