A TikTok creator known as Randy Savage has sparked controversy for using artificial intelligence to create elaborate pranks on unsuspecting strangers, raising serious questions about consent, ethics, and the increasingly blurred line between digital manipulation and reality.
The content creator’s approach involves generating AI deepfakes of random people he encounters in public spaces, then confronting them with these fabricated videos while secretly recording their reactions.
In one instance, he showed a stranger an AI-generated video depicting them in a compromising situation with another man. The target’s confused response became the punchline, though critics have pointed to the prank’s homophobic undertones and complete lack of consent.
What started as potentially harmless confusion quickly escalated into more problematic territory. In one particularly concerning video, Randy Savage created AI footage showing a Black man apparently stealing from another customer’s pocket at a retail store. He then showed this fabricated evidence to store employees, who proceeded to confront the innocent customer. The racial implications of falsely accusing someone of theft using manipulated video footage cannot be overstated.
In a recent YouTube video, content analyst Jarvis Johnson expressed alarm at how the pranks crossed ethical boundaries. “You’re framing someone for a crime,” Johnson noted while reviewing the content. The situation becomes even more troubling given that many targets likely lack the resources to track down and challenge this content online.
Another recurring prank involves showing people AI-generated videos of their vehicles being stolen. One man’s genuine distress was captured as he believed his truck had been taken, only to discover it remained parked nearby. The emotional manipulation involved in these scenarios goes far beyond traditional prank content.
The creator labels his work as an “AI awareness experiment,” though this framing appears disingenuous given the exploitative nature of the content. Unlike legitimate social experiments, there’s no educational component or meaningful consent process. The targets often appear visibly upset even after learning they’ve been pranked, suggesting the compensation offered doesn’t adequately address the emotional distress caused.
Perhaps most concerning is how this content reflects our current media landscape. With AI technology becoming increasingly sophisticated and accessible, distinguishing real footage from fabricated content grows more challenging daily. When everything can potentially be questioned as AI-generated, it creates a dangerous environment where genuine documentation of real events can be dismissed.
The popularity of this content, with some videos receiving over 240,000 likes, demonstrates a troubling appetite for watching strangers experience distress. The heavy-handed editing, including dated sound effects and gratuitous embellishments, suggests an attempt to distract from the fundamentally mean-spirited nature of the pranks.
While technology offers creative possibilities, using it to deceive and emotionally manipulate unsuspecting people for online engagement crosses a clear ethical line that deserves scrutiny rather than celebration.