AI Models Get Brain Rot, Too

By News Room · 22 October 2025 · 3 Mins Read

AI models may be a bit like humans, after all.

A new study from the University of Texas at Austin, Texas A&M, and Purdue University shows that large language models fed a diet of popular but low-quality social media content experience a kind of “brain rot” that may be familiar to anyone who has spent too long doomscrolling on X or TikTok.

“We live in an age where information grows faster than attention spans—and much of it is engineered to capture clicks, not convey truth or depth,” says Junyuan Hong, an incoming assistant professor at the National University of Singapore who worked on the study as a graduate student at UT Austin. “We wondered: What happens when AIs are trained on the same stuff?”

Hong and his colleagues fed different kinds of text to two open-source large language models during pretraining. They examined what happened when the models were trained on a mix of highly “engaging,” or widely shared, social media posts and posts containing sensational or hyped language like “wow,” “look,” or “today only.”

The researchers then used several benchmarks to gauge the impact of this “junk” social media diet on the two open-source models: Meta’s Llama and Alibaba’s Qwen.
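
To make the setup concrete, here is a minimal sketch, not the study’s actual pipeline, of how posts might be sorted into “junk” and control data before continued pretraining. The two heuristics mirror the criteria described above: heavy sharing and sensational, hype-style wording. The field names (text, shares), the marker list, the 500-share cutoff, and the mixing logic are all illustrative assumptions.

# Illustrative sketch only: label social posts as "junk" by engagement and
# hype wording, then build a training mix with a chosen junk fraction.
SENSATIONAL_MARKERS = {"wow", "look", "today only"}

def is_junk(post: dict, share_threshold: int = 500) -> bool:
    # A post counts as "junk" if it is widely shared or uses hype wording.
    text = post["text"].lower()
    highly_engaging = post.get("shares", 0) >= share_threshold
    sensational = any(marker in text for marker in SENSATIONAL_MARKERS)
    return highly_engaging or sensational

def build_training_mix(posts: list[dict], junk_ratio: float) -> list[str]:
    # junk_ratio: target fraction of junk documents, 0.0 <= junk_ratio < 1.0.
    junk = [p["text"] for p in posts if is_junk(p)]
    clean = [p["text"] for p in posts if not is_junk(p)]
    n_junk = int(len(clean) * junk_ratio / max(1.0 - junk_ratio, 1e-9))
    return junk[:n_junk] + clean

# Example: aim for a corpus in which roughly 80 percent of documents are junk
# posts (capped by how many junk posts are actually available).
# corpus = build_training_mix(scraped_posts, junk_ratio=0.8)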

The models fed junk text experienced a kind of AI brain rot, with cognitive decline that included weaker reasoning and degraded memory. The models also became less ethically aligned and more psychopathic, according to two measures.

The results mirror research on human subjects, which shows that low-quality online content has a detrimental effect on people’s cognitive abilities. The phenomenon is pervasive enough that “brain rot” was named the Oxford word of the year for 2024.

The results are important for the AI industry, Hong says, because model-builders might assume that social media posts are a good source of training data for their models. “Training on viral or attention-grabbing content may look like scaling up data,” he says. “But it can quietly corrode reasoning, ethics, and long-context attention.”

The fact that LLMs suffer from brain rot seems especially worrying given that AI itself now generates a growing share of social media content, much of it seemingly optimized for engagement. The researchers also found that models impaired by low-quality content could not easily be repaired through retraining.
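
One way to read “could not easily be repaired” is in terms of how much of the lost benchmark performance clean retraining actually wins back. The short sketch below is an illustrative metric, not the paper’s methodology: it treats recovery as the fraction of the baseline-to-junk score gap that is closed after further training on clean data. The example scores are made up.

def recovery_fraction(baseline: float, after_junk: float,
                      after_clean_retrain: float) -> float:
    # Fraction of the benchmark score lost to junk pretraining that clean
    # retraining recovers; 1.0 would mean a full return to baseline.
    lost = baseline - after_junk
    if lost <= 0:
        return 1.0  # nothing was lost, so nothing needs recovering
    return (after_clean_retrain - after_junk) / lost

# Made-up example: baseline 0.62, 0.45 after junk pretraining, 0.53 after
# retraining on clean text -> about 47 percent of the drop is recovered.
# recovery_fraction(0.62, 0.45, 0.53)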

The findings also suggest that AI systems built around social platforms, such as Grok, might suffer from quality control issues if user-generated posts are used in training without an eye toward the integrity of the posts.

“As more AI-generated slop spreads across social media, it contaminates the very data future models will learn from,” Hong says. “Our findings show that once this kind of ‘brain rot’ sets in, later clean training can’t fully undo it.”


This is an edition of Will Knight’s AI Lab newsletter. Read previous newsletters here.
