Proprietary Frontier AI Models Are “Very Expensive Sandcastles”: François Chollet

AI giants are pouring massive sums of money into training their top-of-the-line frontier models, but they might find it hard to recoup these costs from customers.

François Chollet, author of 'Deep Learning with Python' and creator of the ARC-AGI benchmark behind the ARC Prize, has said that proprietary frontier models, such as GPT-5, Grok 4 or Gemini 2.5 Pro, are expensive sandcastles that will soon be worth almost nothing. He suggested that their value would be eroded both by open-source models and by newer algorithmic approaches.

“The proprietary frontier models of today are ephemeral artifacts,” Chollet posted on X. “Essentially very expensive sandcastles. Destined to be washed away by the rising tide of open source replication (first) and algorithmic disruption (later),” he added.

His views were echoed by David Ha of Sakana AI. "The high cost required to train large foundation models makes them the fastest depreciating asset in human history," he said on X.

There might be more than a bit of truth to this. Companies like OpenAI, Google and xAI are pouring billions of dollars into ever bigger datacenters to train their models. But these models seem to become obsolete within six months: OpenAI itself deprecated all its older models in favour of GPT-5, and Google too keeps retiring older models. The resources that went into training those older models are thus sunk costs that no longer generate any returns.

Meanwhile, open-source approaches are quickly catching up to the frontier. China's open-source models, such as Qwen and DeepSeek, seem to be only around a year behind the frontier models from closed-source labs. Once open-source models catch up, there is little reason to pay for a proprietary model, which then becomes largely worthless. And there are concerns that LLMs might not be the right path to AGI anyway, meaning newer approaches might be needed.

All this could mean that companies training frontier models might not even be able to recoup their investments on most of their large training runs. These models sit at the frontier for only a short while, and even during that window, users can switch to competing models or to open-source models run locally. And as François Chollet says, after each cycle of model progress, older frontier models are washed away and forgotten, much like sandcastles on a beach.
