NVIDIA CEO Jensen Huang Explains How AI Differs From Traditional Software, And Why It Isn’t In A Bubble

There are murmurs about the AI space being in a bubble, but NVIDIA CEO Jensen Huang has offered an interesting counterargument, explaining how AI differs from traditional software and why these massive buildouts aren't necessarily speculative excess.

In a recent interview, Huang offered a compelling perspective on why the current wave of AI infrastructure investment represents something fundamentally different from previous technology cycles. The CEO of the company that has become synonymous with AI computing power drew a sharp distinction between traditional software and AI, arguing that the latter requires an entirely new industrial paradigm — one that justifies the hundreds of billions of dollars being poured into AI infrastructure.

The Fundamental Difference: Real-Time Intelligence vs. Pre-Compiled Code

Huang began by addressing the core technical distinction between AI and traditional software:

“Fundamentally, what’s different between AI today and the software industry of the past? Well, software in the past was pre-compiled, and the amount of computation necessary for the software is not very high. But in order for AI to be effective, it has to be contextually aware. It can only produce the intelligence at the moment. You can’t produce it in advance and retrieve it. That’s called content. AI intelligence has to be produced and generated in real time.”

This distinction is crucial to understanding Huang’s thesis. Traditional software could be written once, compiled, and then run with relatively minimal computational resources. AI, by contrast, must generate its output — its “intelligence” — dynamically for each interaction, requiring vastly more computational power.
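To make the contrast concrete, here is a back-of-envelope sketch (all figures are illustrative assumptions, not numbers from the interview): serving a pre-computed response is roughly a memory copy, while generating an AI answer costs compute proportional to model size and output length. The 2-FLOPs-per-parameter-per-token figure is a common rule of thumb for transformer inference, not something Huang stated.

```python
# Illustrative comparison (assumed numbers): serving pre-compiled
# software/content vs. generating intelligence token by token.

def retrieval_flops(response_bytes: int) -> float:
    """Traditional software/content: the answer exists in advance, so
    serving it involves negligible arithmetic per request."""
    return 0.0  # approximation: essentially a memory copy

def generation_flops(n_params: float, n_tokens: int) -> float:
    """Transformer inference rule of thumb: ~2 FLOPs per parameter
    per generated token (forward pass only)."""
    return 2 * n_params * n_tokens

# A hypothetical 70B-parameter model producing a 500-token answer:
flops = generation_flops(70e9, 500)
print(f"{flops:.1e} FLOPs per answer")  # ~7e13 FLOPs, every single time
```

Under these assumptions, every AI response costs tens of trillions of floating-point operations that cannot be amortized away, which is exactly the property Huang's "factory" framing rests on.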

AI Factories: A New Industrial Model

Building on this foundation, Huang introduced a metaphor that reframes how we should think about AI infrastructure:

“As a result, we now have an industry where the computation necessary to produce something that’s really valuable and in high demand is quite substantial. We have created an industry that requires factories. That’s why I remind ourselves that AI needs factories to produce these tokens, to produce the intelligence. And this is a once — it’s never happened before, where the computer is actually part of a factory. And so we need hundreds of billions of dollars of these factories in order to serve the trillions of dollars of industries that sit on top of intelligence.”

The “AI factory” concept is Huang’s answer to those questioning whether massive capital expenditures on data centers and GPU clusters represent a bubble. In his view, these aren’t speculative investments but rather essential infrastructure — analogous to building automotive factories or semiconductor fabs — required to power a new generation of economic activity.

From Software Tools to Intelligent Labor

Huang then pivoted to perhaps his most provocative argument: that AI represents a category shift from tools to labor:

“Come back and take a look at software in the past. Software in the past are software tools. They’re used by people. For the first time, AI is intelligence that augments people. And so it addresses labor, it addresses work. It does work.”

This framing positions AI not as another software category competing for IT budgets, but as a fundamental transformation in how work itself is performed — potentially justifying far larger total addressable markets than traditional enterprise software ever commanded.

We’re Still at the Beginning

When pressed about whether the current buildout might be excessive, Huang pushed back firmly:

“This is not about — I think this is where we’re well in the beginning of the buildout of intelligence, and the fact of the matter is most people still don’t use AI today. And someday in the near future, almost everything we do, every moment of the day, you’re going to be engaging AI somehow. And so between where we are today, where the usage is quite low, to where we will be someday, where the usage is basically continuous.”

The Implications: Industrial Economics Meet Information Technology

Huang’s argument represents a sophisticated defense of current AI spending levels, but it also raises important questions. If AI truly requires “factories” producing intelligence in real-time, then the economics look radically different from traditional software’s high-margin, low-marginal-cost model. Companies like OpenAI, Anthropic, and Google are indeed spending enormous sums on compute infrastructure — OpenAI’s partnership with Microsoft reportedly involves tens of billions in infrastructure commitments, while Meta has announced plans to spend upwards of $40 billion on AI infrastructure in 2025 alone.
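The shift in unit economics can be sketched with similarly illustrative numbers (all inputs below are assumptions for the sake of the arithmetic, not figures reported by any of these companies): unlike traditional software, where the marginal cost of one more user is near zero, AI inference pays for accelerator time on every request.

```python
# Illustrative unit economics (assumed inputs): the dollar cost of the
# GPU-seconds a single AI query consumes.

def ai_cost_per_query(flops_per_query: float,
                      gpu_flops_per_sec: float,
                      gpu_cost_per_hour: float,
                      utilization: float) -> float:
    """Convert per-query compute into GPU-seconds, then into dollars."""
    seconds = flops_per_query / (gpu_flops_per_sec * utilization)
    return seconds * gpu_cost_per_hour / 3600

# Hypothetical inputs: 7e13 FLOPs per answer, a 1e15 FLOP/s accelerator
# running at 40% effective utilization, rented at $2/hour.
cost = ai_cost_per_query(7e13, 1e15, 2.0, 0.4)
print(f"${cost:.5f} per query")
```

Even at a fraction of a cent per query, a nonzero marginal cost multiplied by billions of daily interactions yields the industrial-scale capital expenditure the article describes, rather than the near-free distribution that defined the software era.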

The “AI factory” model also has implications for energy consumption and sustainability. These facilities require massive amounts of electricity, prompting major tech companies to invest in nuclear power and renewable energy sources. Microsoft recently signed a deal to restart the Three Mile Island nuclear plant, while Google and Amazon have both announced investments in next-generation nuclear reactors specifically to power AI infrastructure.

Whether Huang’s vision proves prescient or whether the current buildout exceeds near-term demand remains to be seen. But his framework offers a useful lens for evaluating AI investments: not as software R&D or IT spending, but as the construction of a new industrial base for an intelligence-driven economy. The question isn’t whether we’re building too much too fast — it’s whether the “trillions of dollars of industries” he envisions will materialize quickly enough to justify the factories being built today.

Posted in AI