Elon Musk blasts OpenAI: a company created for open-source AI has become a “closed-source, maximum-profit” machine

Elon Musk has never been one to mince words, and his recent appearance on The Joe Rogan Experience was no exception. While the conversation covered everything from SpaceX launches to AI safety, one particularly striking moment came when the discussion turned to OpenAI and the suspicious death of a whistleblower associated with the company.

The conversation took a dramatic turn when Rogan brought up a recent Tucker Carlson interview with OpenAI CEO Sam Altman. In that interview, Carlson confronted Altman about the death of a whistleblower under circumstances that seemed highly suspicious: blood in two rooms, cut security camera wires, someone else’s wig at the scene, and a DoorDash order placed shortly before the alleged suicide. Most disturbingly, there was no suicide note, and the victim’s parents are convinced their son was murdered.

Musk’s response was measured but pointed. He noted that while he wasn’t accusing Altman directly, the circumstances certainly warranted a proper investigation rather than simply closing the case.

“I think they should do a proper investigation,” Musk said. “What’s the downside on that proper investigation?” He went on to observe that Altman’s reaction to being questioned about the death was strange, stating bluntly, “I don’t know if he’s guilty, but it’s not possible to look more guilty.”

The irony of this situation is particularly sharp given OpenAI’s origins. While Musk didn’t explicitly detail his original vision for OpenAI during this podcast segment, his concern about the company’s current trajectory was evident.

The organization that was ostensibly created to ensure AI development remained transparent and beneficial to humanity has become increasingly opaque, raising questions about its true priorities.

This concern extends far beyond OpenAI’s corporate structure. Musk emphasized throughout the conversation that AI safety is one of his paramount concerns, repeatedly stressing the importance of creating “maximally truth-seeking” AI systems. He explained that forcing AI to believe falsehoods could have catastrophic consequences as these systems become more powerful, citing Google’s Gemini, which he accused of depicting the Founding Fathers as “diverse women” and of suggesting that misgendering is worse than nuclear war.

The contrast between Musk’s approach with Grok at xAI and what he sees as ideologically captured AI systems at other companies couldn’t be starker. He described how Grok was specifically designed to be truth-seeking and to weight human lives equally, unlike other AI systems that he claims show bias based on race and gender.

According to Musk, independent research found that Grok was the only major AI that weighted human lives equally across demographic groups.

Musk’s decision to create xAI and Grok stems from this exact concern—that by remaining merely a spectator rather than a participant in AI development, he would have no ability to influence its direction.

“Either be a spectator or a participant,” he explained. “If I’m a spectator, I can’t really influence the direction of AI. But if I’m a participant, I can try to influence the direction of AI and have a maximally truth-seeking AI with good values that loves humanity.”

The implications of getting AI wrong are existential, according to Musk. He didn’t rule out a “Terminator scenario,” emphasizing that the probability isn’t zero. This makes the question of who controls AI development and what values are programmed into these systems a matter of civilizational importance. When powerful AI systems are trained on biased data or programmed with ideological constraints that force them to deny obvious truths, the danger multiplies as these systems scale in capability.

What makes this situation particularly concerning is the financial incentive structure. Musk suggested that the transformation of organizations like OpenAI from their stated missions into “closed-source, maximum-profit” entities creates conflicts between the public good and private interests. When billions of dollars are at stake, transparency often becomes the first casualty.