In a recent appearance on the American Optimist podcast with venture capitalist Joe Lonsdale, Jake Paul discussed his involvement with OpenAI’s Sora video generation tool. In the same conversation, however, he also voiced concerns about AI-generated content flooding the internet.
When asked directly about his work with OpenAI and Sora, Paul confirmed that he had been involved in the early stages of the project.
“Yeah, correct,” he said. “We worked with their team for a couple of months, just helping them develop the product and the idea of what it would even be.”
Paul explained that the collaboration began through conversations with OpenAI CEO Sam Altman, where they identified what he believed was a gap in social media.
“It started from just chatting with Sam and identifying that there was a gap for a potential new social media app,” he said. “So I used all my expertise of being a creator and utilizing these apps for so long, like 15, 16 years now, and the psychology around how people interact, what kind of content they want, all these different things. Then I granted my name, image, and likeness to be the first.”
Paul’s business partner, Jeff Woo, expanded on how realistic early AI-generated videos of Paul appeared, noting that even people close to them were fooled.
“There were friends being like, ‘Did Jake get arrested? Did Jake come out?'” Woo said.
Jake agreed, adding: “They were messaging me like, ‘Yo, what? I can’t believe you did that.’ And it would be the most absurd video, but it looked real.”
Lonsdale said that seeing AI-generated versions of celebrities and influencers is unsettling. “It’s actually scary,” he said. “They’re going to be able to make you do or say anything.”
Paul agreed, saying, “They did.”
Later in the conversation, Lonsdale shifted the focus to the impact of AI-generated content and asked Paul whether he was concerned about what many now refer to as “AI slop” dominating online platforms.
“It’s bad,” Paul said. “I think it’s important to have regulations and authentication out there.”
Lonsdale said: “We have to say if things are real or not, because I’m very scared of government making too many rules where they break things. But you kind of want to know if something’s real, right? Or if something’s pretend.”
Paul also suggested that older users may be particularly vulnerable to misleading AI content, noting that many struggle to distinguish between real and fabricated media.
“They have no idea what’s going on,” Lonsdale said.
Paul agreed, saying: “And then they’re just posting these videos thinking that it’s real. And I’m like, ‘Yo, boomers.'”
Looking ahead, Lonsdale argued that responsibility will likely fall on platforms to create verification systems that help users confirm authenticity. He added that authentication could become a key signal of credibility online.
“There’s something where if it’s really coming from you, then you can authenticate it,” Lonsdale said. “And if you haven’t authenticated it, it’s probably not real.”