Study shows junk social media data gives AI models ‘brain rot’

A new study from the University of Texas at Austin, Texas A&M, and Purdue University reveals that large language models fed popular but low-quality social media content experience cognitive decline similar to human “brain rot.” The research demonstrates that AI systems trained on engaging but superficial content suffer reduced reasoning abilities, degraded memory, and compromised ethical alignment—raising concerns about data quality as AI increasingly generates social media content.

What you should know: Researchers tested the effects of “junk” social media content on two open-source models, Meta’s Llama and Alibaba’s Qwen, by feeding them highly engaging posts and sensational text containing phrases like “wow,” “look,” or “today only.”

  • The models experienced significant cognitive decline, including reduced reasoning abilities and degraded memory performance across multiple benchmarks.
  • The AI systems also became less ethically aligned and more “psychopathic,” according to two measures the researchers used.
  • Once impaired by low-quality content, the models could not easily be improved through retraining, suggesting lasting damage from poor data quality.

Why this matters: The findings mirror research on humans showing that low-quality online content has detrimental effects on cognitive abilities, a phenomenon so pervasive that Oxford named “brain rot” its 2024 word of the year.

  • Model builders might assume social media posts are good training data, but the research shows this approach can “quietly corrode reasoning, ethics, and long-context attention.”
  • AI systems built around social platforms, such as Grok, might suffer from quality control issues if user-generated posts are used in training without considering content integrity.

The big picture: As AI increasingly generates social media content optimized for engagement, it creates a feedback loop that could contaminate future training data and perpetuate cognitive decline in AI systems.

What they’re saying: “We live in an age where information grows faster than attention spans—and much of it is engineered to capture clicks, not convey truth or depth,” says Junyuan Hong, an incoming assistant professor at the National University of Singapore who worked on the study.

  • “Training on viral or attention-grabbing content may look like scaling up data, but it can quietly corrode reasoning, ethics, and long-context attention.”
  • “As more AI-generated slop spreads across social media, it contaminates the very data future models will learn from. Our findings show that once this kind of ‘brain rot’ sets in, later clean training can’t fully undo it.”
