Coffeezilla breaks down why Nvidia has more in common with Enron than one might think

When Nvidia declares “We are not Enron,” it raises more questions than it answers. The tech giant has become the centerpiece of an AI-fueled economic boom that some observers believe bears troubling similarities to past financial disasters—not through fraud like Enron, but through infrastructure overbuilding like Cisco during the dot-com bubble.

Nvidia has transformed from a gaming company into a data center powerhouse, with AI-driven data center revenue now dwarfing its traditional gaming business. The company sits at the heart of a massive economic shift: by some estimates, AI-related investment accounts for roughly half of recent U.S. GDP growth.

Nvidia CEO Jensen Huang recently acknowledged the precarious position, noting that being “off by just a hair” in earnings could cause “the whole world to fall apart.” It’s a reality even government officials recognize: David Sacks, the White House AI and crypto czar, admits the economy can’t afford to reverse course on AI spending without risking a recession.

The controversy centers on criticism from Michael Burry, the investor famous for predicting the 2008 housing crash. Burry argues that the current situation mirrors the dot-com bubble’s infrastructure overbuilding rather than the valueless vaporware of companies like Pets.com. His concern focuses on GPU depreciation schedules. Major tech companies have extended the assumed useful life of their GPUs from three or four years to six, and because straight-line depreciation spreads the same cost over more years, the annual expense shrinks and reported earnings rise. The problem? If GPUs actually become obsolete in less than six years, these companies are overstating both their assets and their profitability.
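To see the mechanics, here is a minimal Python sketch of straight-line depreciation. The $10B fleet cost is a hypothetical illustration, and the three- and six-year lives are the schedules described above, not any company’s actual books:

```python
# Minimal sketch: how stretching a straight-line depreciation schedule
# changes the annual expense. All figures are hypothetical illustrations.

def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation: spread the cost evenly over the useful life."""
    return cost / useful_life_years

gpu_fleet_cost = 10e9  # hypothetical $10B GPU purchase

three_year = annual_depreciation(gpu_fleet_cost, 3)  # old schedule
six_year = annual_depreciation(gpu_fleet_cost, 6)    # extended schedule

print(f"3-year schedule: ${three_year / 1e9:.2f}B expensed per year")
print(f"6-year schedule: ${six_year / 1e9:.2f}B expensed per year")
print(f"Annual pre-tax earnings boost: ${(three_year - six_year) / 1e9:.2f}B")
# Same cash leaves the door either way; only the reported expense timing
# changes. If the hardware is really spent after ~3 years, the 6-year books
# carry overstated assets and overstated profit in the early years.
```

Extending the schedule doesn’t change the cash spent; it only defers when the cost hits the income statement, which is exactly why the useful-life assumption matters.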

This is where Nvidia faces a contradiction. The company boasts about accelerating its development cycle to an annual cadence, with Jensen Huang explaining that the performance gains are so dramatic that even if competitors “could literally price them at zero,” customers would still choose Nvidia for its superior efficiency. He emphasizes that this pace of advancement must keep up with exponentially growing AI demand.

But here’s the irony: if Nvidia keeps making chips dramatically better each year, older models, including its own, become economically obsolete faster, not slower. At data-center scale, electricity is a major operating cost, so when new GPUs deliver substantially better performance per watt, an older chip can burn more money in power than its output is worth. This acceleration contradicts the six-year depreciation schedules that underpin current earnings reports across the industry.
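The economics behind that claim can be sketched the same way. The wattages, performance ratios, and $0.08/kWh rate below are assumed placeholders, not published specs; the point is only that a large enough performance-per-watt gap makes the older chip far more expensive per unit of work:

```python
# Minimal sketch of the performance-per-watt argument. All hardware and
# price figures are hypothetical placeholders, not published specs.

ELECTRICITY_PRICE = 0.08  # USD per kWh (assumed industrial rate)

def electricity_cost_per_unit(power_watts: float, perf_units: float) -> float:
    """Electricity cost (USD) to produce one unit of compute over one hour,
    given power draw and relative performance."""
    kwh_per_hour = power_watts / 1000
    return kwh_per_hour * ELECTRICITY_PRICE / perf_units

# Hypothetical generations: the new chip draws more power but is far faster.
old_chip = electricity_cost_per_unit(power_watts=700, perf_units=1.0)
new_chip = electricity_cost_per_unit(power_watts=1000, perf_units=3.0)

print(f"old chip: ${old_chip:.4f} per unit of compute")
print(f"new chip: ${new_chip:.4f} per unit of compute")
print(f"old chip pays {old_chip / new_chip:.1f}x more for the same work")
# If the market price of compute falls toward the new chip's cost, the old
# chip can lose money on every watt even though it still "works" -- which
# is why a six-year useful life can be optimistic.
```

This is also the flip side of Huang’s “price them at zero” boast: if efficiency gaps are that decisive against competitors, they are just as decisive against Nvidia’s own previous generations.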

Burry’s point isn’t that AI is fake or that Nvidia is committing fraud. Rather, he argues “Nvidia is clearly Cisco,” a reference to the networking company whose hardware powered the dot-com infrastructure boom until projected demand failed to materialize. The concern is that trillions are being committed to building data centers before clear demand exists to justify the investment.

With Google fielding competitive in-house chips (its TPUs) and Chinese open-source models improving rapidly, Nvidia cannot slow its innovation pace without losing market dominance. Yet this very pace may be undermining the accounting assumptions that make the entire AI buildout appear profitable.