OpenAI CEO Sam Altman has a pattern of making grand promises without delivering results

OpenAI CEO Sam Altman has positioned himself as a visionary leader promising to solve humanity’s greatest challenges through artificial intelligence. His ambitious declarations, from curing cancer to ending poverty to addressing climate change, have captured global attention.

However, a closer examination of his career reveals a concerning pattern of unfulfilled commitments and questionable business practices that should give pause to anyone betting on his latest venture.

Altman’s first major entrepreneurial effort, Loopt, exemplifies this troubling pattern. The friend-location service required substantial user adoption to function effectively, yet Altman consistently refused to disclose actual user numbers. When Reuters reported that Loopt had only 500 users near its end, Altman claimed the figure was “100 times” higher and promised evidence that never materialized.

The company was sold to Green Dot Corporation, which immediately shut it down without using any of its technology. Green Dot investors later alleged the deal was structured to benefit Sequoia Capital, raising questions about the transaction’s legitimacy.

During his tenure as president of Y Combinator, Altman faced allegations of conflicts of interest. Despite promising not to cross-invest, reports indicate that up to 75% of his personal venture capital firm, Hydrazine Capital, was invested in Y Combinator companies, allowing him to leverage inside information for personal gain.

OpenAI itself began as a nonprofit with lofty goals, including “a primary fiduciary duty to humanity” and commitments to “minimize conflicts of interest.” By 2019, those promises had evaporated: OpenAI launched a for-profit arm, which it later spun out entirely in 2024, free of the nonprofit’s legal obligations.

Altman frequently states he owns no equity in OpenAI and takes minimal salary. Yet his extensive investments in companies that directly support OpenAI’s infrastructure tell a different story. He owns significant shares in Reddit, which provides training data for OpenAI’s models. He’s invested in AI networking equipment manufacturers, thermal battery companies, and rare earth mining operations. His portfolio also includes nuclear power ventures like Helion and Oklo, positioned to profit from AI’s enormous energy demands, projected at 250 gigawatts by 2033.

Perhaps most concerning is Worldcoin, Altman’s cryptocurrency venture that requires users to scan their irises into proprietary devices. Marketed as a universal basic income solution and an identity verification system for an AI-dominated future, it asks users to surrender biometric data on the strength of promises. Those promises echo his 2014 pledge to give 10% of Reddit’s value back to its community, which never happened due to unspecified “regulatory issues.”

OpenAI has committed to spending over $1 trillion on AI infrastructure over eight years despite generating only $13 billion annually in recurring revenue. The company’s CFO has indicated that taxpayer-backed government guarantees may be necessary to secure this financing, essentially asking the public to underwrite Altman’s vision.

When society is asked to provide electricity, water, data, and accept widespread job displacement based on promises of future technological salvation, the question becomes unavoidable: can we trust the messenger?