The CEO of one of the biggest AI companies presents himself as an AI optimist in public but privately says the outcome will be catastrophic

On Steven Bartlett’s podcast, the host shared something that should make everyone pause. A friend of his knows one of the world’s most powerful AI CEOs personally—and what this CEO says behind closed doors is drastically different from his public messaging.

“What he tells me in private is not what he’s saying publicly,”

Bartlett’s friend disclosed. The private view?

“Pretty horrific.”

The public persona? Carefully curated optimism, positioned as an “AI evangelist.”

What made the story more disturbing wasn’t just the contradiction—it was the CEO’s comfort with catastrophic outcomes. Bartlett’s source noted a startling lack of empathy for the human toll, coupled with what seemed like an obsession with power that eclipsed any concern for societal impact.

The question is who fits this profile. The pool of candidates isn’t large. We’re talking about someone running one of the biggest AI companies in the world, someone with enough influence that their private pessimism and public cheerleading actually matter. That narrows it down to a handful of names: Sam Altman of OpenAI, Dario Amodei of Anthropic, Sundar Pichai of Google, possibly Elon Musk, or the Google founders if they’re still actively involved.

Let’s look at what they’re actually saying.

Sam Altman has been remarkably candid about certain risks while simultaneously pushing OpenAI into hyperdrive. He’s warned that AI investment has become “insane,” predicting some companies will be “burned” when the bubble pops. He’s acknowledged that “AI’s benefits may not be widely distributed” and that “the balance of power between capital and labor could easily get messed up.” He’s stated plainly that “30-40% of the tasks that happen in the economy today could be done by AI in the not very distant future,” particularly customer support roles. Yet despite these warnings, he frames the disruption as evolution rather than catastrophe—“tasks, not jobs” will change, he insists, and human creativity will remain central.

This is classic dual messaging: acknowledging massive disruption while selling it as transformation. Altman positions himself as the reasonable optimist who sees both sides, but the aggressive fundraising—billions upon billions poured into infrastructure—suggests he’s betting everything on a future he simultaneously admits could go badly wrong.

Dario Amodei presents an even starker contradiction. He’s publicly stated there’s a “25% chance that things go really, really badly” with AI—not glitches or market crashes, but existential catastrophe. Yet in the same breath, he predicts that a single person with AI could soon run a billion-dollar company, and that AI models may “surpass human capabilities in almost everything” within years. He’s building the very technology he acknowledges has a one-in-four chance of disaster.

Here’s where it gets interesting: Amodei is actually the least likely candidate for Bartlett’s story precisely because he’s so open about the risks. His entire company’s brand is built on safety and alignment. If he were the CEO in question, why would his private position be dramatically darker than what he’s already saying publicly?

Sundar Pichai offers a different flavor of contradiction. He’s warned that AI systems remain “prone to errors” and that people shouldn’t “blindly trust” AI outputs. He’s compared current AI investment to dot-com bubble irrationality and admitted that “no company is going to be immune” if it collapses—including Google. He even joked that “what a CEO does is maybe one of the easier things for an AI to do one day.” Yet Google is spending billions to dominate AI infrastructure and deployment, positioning itself as the full-stack AI company for the long haul. The warnings are real, but the actions suggest confidence that Google will survive whatever disruption comes.

The pattern across all three is identical: public acknowledgment of risk paired with aggressive expansion. They’re building the future they’re warning us about.

But who specifically matches Bartlett’s description? The CEO who’s “pretty horrific” in private while playing optimist in public? The one with startling lack of empathy and an obsession with power?

The smart money is on Sam Altman.

Here’s why: OpenAI is in the most precarious position financially, requiring unprecedented fundraising to sustain its trajectory. Altman faces the most intense pressure to maintain public confidence while privately understanding the risks. His company is literally racing toward AGI with less regulatory oversight than any of its competitors. The gap between “this is a bubble” and “we’re building a $500 billion company anyway” is widest with OpenAI. And critically, Altman self-identifies as an “AI optimist” publicly—the exact phrase Bartlett used.

The race for AI dominance is, as one observer noted, “so aggressive” that “people are not being careful and they’re not putting controls in place,” precisely because being first matters more than being right. This isn’t speculation—it’s the stated reality of the industry.

What makes this revelation so troubling isn’t that tech leaders harbor doubts. It’s that they’re comfortable enough with catastrophic scenarios to keep building anyway. When profit motives and power consolidation override honest risk assessment, society becomes a beta test for technologies even their creators think might be devastating.

The public deserves more than carefully managed optimism from the people building our AI-powered future. If those most familiar with AI’s capabilities are privately forecasting horror while publicly maintaining hopeful facades, that’s not strategy—it’s deception. And if we can’t trust the people building these systems to be honest about what they actually believe, we have no foundation for informed democratic decision-making about AI governance.

The question isn’t whether AI will reshape society. It’s whether the people reshaping it actually care what happens to the rest of us.