AI companies are spending billions of dollars to build the best LLMs, but Salesforce CEO Marc Benioff thinks that they’re already commoditized.
In a recent post on X, Benioff declared that “LLMs are the new disk drives: commodity infrastructure you hot-swap for whoever’s cheapest + best.” “The fantasy that the model is a moat just expired,” he added. Fittingly, he illustrated the point with a Grok-generated AI image of LLMs slotting into server racks like disk drives.
The comparison to disk drives is provocative. Just as companies once competed fiercely on storage technology before it became standardized infrastructure that businesses could freely swap between vendors, Benioff is arguing that large language models are heading down the same path—becoming interchangeable components rather than defensible competitive advantages.
A High-Profile Model Switcher
Benioff’s claim isn’t just theoretical; he’s demonstrated it himself. Two weeks ago, he revealed that after three years of daily ChatGPT use, he switched to Google’s Gemini 3 and isn’t looking back. “I’ve used ChatGPT every day for 3 years. Just spent 2 hours on Gemini 3. I’m not going back,” he posted. His experience suggests the barriers to migration are lower than many assumed: if users can move seamlessly from one leading model to another based purely on performance and cost, then the models themselves may indeed be commoditizing.
Benioff’s endorsement carries particular weight given Salesforce’s aggressive AI adoption. The company said in January it wouldn’t be hiring software engineers this year due to productivity gains from AI. By June, Benioff reported that AI was handling 30-50% of work at the company. In September, Salesforce cut 4,000 customer support roles, replacing them with AI agents.
The Evidence of Commoditization
Benioff isn’t alone in his willingness to switch. Social media has been filled with users declaring they’re moving from ChatGPT to Gemini following the release of Gemini 3. The numbers back this up—traffic to Gemini has jumped 17% since Gemini 3’s launch and is up fivefold over the past year.
If a Fortune 500 CEO like Benioff can publicly abandon three years of ChatGPT habits after just two hours with a competitor, what does that say about user stickiness in the LLM market? The ease of switching suggests that models may be becoming undifferentiated enough that users will simply chase whoever delivers the best performance at the best price—exactly how commodity markets function.
The Case for Commoditization
Benioff appears to be correct that LLMs are commoditizing, at least for now. The major frontier models from OpenAI, Google, Anthropic, and others have converged toward similar capability levels on many tasks. For most business use cases, these models are increasingly interchangeable.
This convergence makes economic sense. The fundamental transformer architecture underlying modern LLMs is widely understood. Training techniques have been extensively documented. The key ingredients—compute, data, and engineering talent—are available to well-funded competitors. With multiple companies capable of training cutting-edge models, differentiation becomes harder to maintain.
Moreover, enterprises have strong incentives to treat models as commodities. Avoiding vendor lock-in, maintaining negotiating leverage, and optimizing for cost all push organizations toward a multi-model strategy where they can switch providers as conditions change. If Salesforce, deeply embedded in enterprise software, is treating models this way, many other companies will likely follow.
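The hot-swappable, multi-model strategy described above can be sketched as a thin routing layer. Everything in this example is a hypothetical illustration: the provider names, prices, and the `complete` interface are stand-ins, not any vendor’s actual SDK.

```python
# A minimal sketch of a "hot-swappable" LLM layer: the application codes
# against one interface, and the backing provider is chosen by policy
# (here, lowest assumed price). Providers and prices are hypothetical.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Provider:
    name: str
    cost_per_1k_tokens: float  # illustrative pricing, not real vendor rates
    complete: Callable[[str], str]  # stand-in for a vendor SDK call


# Hypothetical backends; a real system would wrap each vendor's SDK here.
PROVIDERS: Dict[str, Provider] = {
    "vendor_a": Provider("vendor_a", 0.010, lambda p: f"[vendor_a] {p}"),
    "vendor_b": Provider("vendor_b", 0.004, lambda p: f"[vendor_b] {p}"),
}


def cheapest_provider() -> Provider:
    """Pick whichever backend currently has the lowest per-token price."""
    return min(PROVIDERS.values(), key=lambda p: p.cost_per_1k_tokens)


def complete(prompt: str) -> str:
    """Route the request through the cheapest provider. Because callers
    only see this function, swapping vendors is a policy change in the
    registry, not a code change across the application."""
    return cheapest_provider().complete(prompt)
```

The design choice this sketch highlights is the one enterprises are making in practice: by isolating vendor-specific calls behind a single interface, switching providers as prices or capabilities shift becomes a configuration decision rather than a migration project.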
The Counter-Argument: Divergence Ahead
However, Benioff’s commodity thesis faces a significant challenge: what happens if model capabilities begin to diverge substantially in the coming years?
The current period of convergence may be temporary. AI labs are pursuing different architectural innovations, training methodologies, and scaling approaches. If one company achieves a breakthrough in reasoning, memory, or multi-modal understanding that others can’t quickly replicate, the commodity market could fracture.
We’ve already seen hints of this. When OpenAI released GPT-4, it maintained a clear capability lead for months. Google’s recent Gemini 3 breakthrough has now tilted the balance the other way. These shifts suggest the technology is still advancing rapidly enough that sustained differentiation is possible. The pattern is even starker in image generation, where the leading models have opened a significant gap over the rest of the field.
Additionally, companies are beginning to offer specialized models optimized for specific domains—medical diagnosis, legal analysis, coding, scientific research. If these vertical-specific models prove substantially better than general-purpose alternatives, they could command premium pricing and avoid commoditization.
There’s also the question of whether the most advanced models will remain accessible as commodities. As capabilities increase, leading labs may restrict access to their most powerful systems, deploy them only through proprietary platforms, or embed them in differentiated products. This would prevent hot-swapping and rebuild moats around frontier capabilities.
Implications for the AI Industry
If Benioff is right about commoditization, the implications for AI companies are significant. Billions in investment aimed at building defensible model moats could end up being hard to recoup. Companies would need to find defensibility elsewhere—in proprietary data, distribution, user experience, integrated applications, or ecosystem effects.
This would favor companies like Salesforce itself, which can embed commodity models into established enterprise software platforms. It would challenge pure-play AI companies like OpenAI and Anthropic, whose value proposition centers heavily on model superiority.
For enterprises, commoditization would be welcome news. It would mean competitive pricing, freedom to switch vendors, and reduced risk of dependency on any single AI provider. The hot-swappable model world Benioff describes would give buyers maximum leverage.
The next twelve to eighteen months will be telling. If the major labs continue converging toward similar capabilities, Benioff’s commodity thesis will strengthen. But if we see sustained differentiation—one model pulling significantly ahead on reasoning, or specialized models creating defensible niches—then reports of the death of the model moat may prove premature.
For now, at least one influential CEO is betting that in AI, as in storage, the best strategy is to remain flexible and swap freely to whoever’s cheapest and best.