NVIDIA has become the most valuable company in the world, and there seem to be good reasons why.
In a recent discussion, NVIDIA CEO Jensen Huang made a striking claim about his company’s competitive advantage in the AI chip market. The leather jacket-wearing executive, known for his bold statements and technical expertise, laid out a compelling argument for why NVIDIA’s infrastructure package maintains its dominance even in the face of aggressive pricing from competitors. His explanation centers on a fundamental constraint facing all data center operators: power limitations.

“Two ways to think about it. One way is, let’s just think about it from a perspective of revenues,” Huang began. “So everybody’s power limited, and let’s say you were able to secure two more gigawatts of power. Well, that two gigawatts of power you would like to have translate to revenues.”
The CEO then outlined the core of NVIDIA’s value proposition, emphasizing the superior performance per watt that comes from what he calls “deep and extreme co-design.” He explained: “So your performance or tokens per watt was twice as high as somebody else’s token per watt because I did deep and extreme co-design. And my performance was much higher per unit energy than my customer can produce twice as much revenues from their data center. And who doesn’t want twice as much revenues?”
Huang then presented a hypothetical scenario that illustrates the mathematics behind NVIDIA’s competitive moat. “And if somebody gave them a 15% discount, you know, the difference between our gross margins was, call it 75 points, and somebody else’s gross margins, call it 50 to 65 points. It’s not so much as to make up for the 30 times difference between Blackwell and Hopper.”
To drive home his point about the performance gap, Huang used NVIDIA’s own product generations as a comparison benchmark: “Let’s pretend Hopper’s an amazing chip, an amazing system. Let’s pretend somebody else’s ASIC is Hopper. Blackwell’s 30 times. So you’ve got to give up 30x revenues in that one gigawatt. It’s too much to give up.”
The CEO concluded with perhaps his most provocative assertion: “So even if they gave it to you for free, you only have two gigawatts to work with, right? Your opportunity cost is so insanely high. You would always choose the best performance per watt, right?”
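Huang's argument reduces to a back-of-envelope calculation. The sketch below uses made-up placeholder numbers (baseline revenue per gigawatt-year, hardware cost) purely to show the structure of the reasoning; only the 2 GW power budget and the 30x performance ratio come from his quotes:

```python
# Back-of-envelope sketch of the opportunity-cost argument.
# Every dollar figure below is a made-up placeholder, not an NVIDIA number;
# only the 2 GW budget and the 30x perf-per-watt ratio come from the quotes.

POWER_GW = 2.0                    # secured power budget (gigawatts)
BASELINE_REV_PER_GW_YEAR = 1e9    # assumed $ revenue per GW-year on the slower chip

def net_annual_value(perf_ratio: float, hardware_cost: float) -> float:
    """Revenue from filling the fixed power budget, minus what the chips cost."""
    revenue = perf_ratio * BASELINE_REV_PER_GW_YEAR * POWER_GW
    return revenue - hardware_cost

# Premium chip: 30x the tokens per watt, but with a large hardware bill.
premium = net_annual_value(perf_ratio=30.0, hardware_cost=20e9)

# Competitor chip: baseline tokens per watt, given away for free.
free = net_annual_value(perf_ratio=1.0, hardware_cost=0.0)

print(f"premium: ${premium / 1e9:.0f}B, free: ${free / 1e9:.0f}B")
# With these assumptions: 30 * $1B * 2 - $20B = $40B vs. 1 * $1B * 2 = $2B.
```

Under these assumptions the free chip still loses by a wide margin, because revenue scales with performance per watt while the hardware cost is a one-time subtraction, which is the substance of Huang's "even if they gave it to you for free" claim.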
This perspective reveals the deeper strategic reality behind NVIDIA’s market dominance in AI infrastructure. While competitors like AMD, Intel, and various startups focus on undercutting NVIDIA’s pricing, Huang’s argument suggests they’re fighting the wrong battle. In power-constrained data centers where every watt must generate maximum revenue, the superior efficiency of NVIDIA’s chips creates an economic advantage that transcends simple pricing considerations.

The implication is stark: when opportunity costs are factored in, even free competitor chips cannot compete with NVIDIA’s performance-per-watt advantage. This explains why hyperscalers like Microsoft, Google, and Amazon continue investing billions in NVIDIA hardware despite the premium pricing, and why the company’s valuation has soared past $3 trillion. As AI workloads continue to scale and power becomes an increasingly critical constraint, NVIDIA’s technical leadership in efficiency may prove to be an even more formidable moat than previously understood.