Even AI can get “brain rot” by scrolling the internet all day

Artificial intelligence may be getting smarter, but new research suggests it’s also vulnerable to the same digital pitfalls that plague humans: consuming too much low-quality online content can actually make AI systems dumber.

A groundbreaking study conducted by researchers from Texas A&M University, the University of Texas at Austin, and Purdue University has revealed that Large Language Models can experience cognitive decline when exposed to what internet users have dubbed “brain rot” – trivial, superficial content that offers little substance or educational value.

The term “brain rot” has exploded across social media platforms, particularly TikTok, where users endlessly scroll through bite-sized videos that may be entertaining but ultimately meaningless. While most people assume only humans suffer from this phenomenon, the new research shows that AI isn’t immune either.

For their experiment, researchers fed four different AI models a steady diet of junk data divided into two categories: viral social media posts designed to be “highly engaging” and longer-form content that appeared substantial but was actually shallow and superficial. The results were striking across all tested models, including Llama3 8B, Qwen2.5 7B/0.5B, and Qwen3 4B.

Meta’s Llama model suffered the most dramatic effects from its brain rot diet. The system experienced serious degradation in its reasoning capabilities, struggled to grasp context properly, and began ignoring safety standards it had previously followed. While the Qwen 3 4B model handled the junk content somewhat better, it still showed clear signs of cognitive impairment.

“Brain rot worsens with higher junk exposure — a clear dose-response effect,” researcher Junyuan “Jason” Hong noted on social media, highlighting that the more trivial content these AI systems consumed, the worse their performance became.

The study documented specific areas of decline, including degraded reasoning abilities, diminished understanding of long-form context, erosion of ethical norms, and the emergence of concerning personality traits like narcissism. Essentially, the AI systems didn’t just perform worse – they began developing problematic characteristics.

While ChatGPT itself wasn’t included in the testing, the implications are significant for all AI systems that learn from internet content.

The research carries particular weight given how deeply integrated AI has become in daily life. From helping students with homework to assisting with professional tasks, these systems are increasingly relied upon for important decisions. The study even notes that a US Army general recently revealed he has become “really close” with ChatGPT and uses it for military decision-making.

The takeaway is clear: garbage in, garbage out applies to artificial intelligence just as much as it does to human minds. Whether silicon-based or carbon-based, brains need nutritious content to function at their best.