Thus far, simply increasing the size of models is helping AI companies build more performant models, but these scaling laws might not hold indefinitely.
Kevin Scott, the Chief Technology Officer of Microsoft, recently offered an interesting perspective on the seemingly unstoppable scaling of artificial intelligence. Speaking on the 20VC podcast, he challenged the prevailing belief that AI capabilities will continue to expand exponentially. His argument introduces the concept of a “scaling asymptote,” a point where the economic cost of further development outweighs the practical benefits.

Scott expressed his skepticism about the unlimited scalability of AI, saying: “Some people believe that there is no such limit and things will continue to scale into weird territory. I don’t really believe that.”
He continued, outlining his alternative view: “I believe we will get to some point where we’ll hit a scaling asymptote [where] there will just be diminishing marginal returns. And it’s so expensive that we will decide it’s not worth spending that next dollar to make this thing one unit smarter because we haven’t figured out how that translates into something that’s useful for the people who are using the tool. I think that point will come.”
Scott seemed to be implying that simply scaling up models will not keep producing better results indefinitely: there will come a time when the cost of improving a model buys so little additional capability that it won’t be worthwhile to spend the money to make the model any smarter.
There are several schools of thought on how model scaling will progress in the coming years. OpenAI and Sam Altman have repeatedly said that there’s no wall to scaling, and have hinted that models can be made better indefinitely. It’s not clear whether OpenAI means this will happen through scaling laws or through the discovery of new techniques, such as test-time compute. Meanwhile, Elon Musk has highlighted another problem: he has said that humans have already used up most of the data available to train models, and this lack of fresh data might present a challenge. Ilya Sutskever, for his part, has said that data is the fossil fuel of AI, and that it will eventually be exhausted. With Microsoft CTO Kevin Scott now also saying that models can’t simply keep getting better by adding more resources, it appears that the AI community might need to keep coming up with new techniques and breakthroughs to get to superintelligence.
[The headline of this article has been corrected]