AI Industry
AI Model Collapse
Is AI quality degrading because models train on their own outputs?
AI Summary
Two of four questions settled 5-0, with every model agreeing that LLMs are likely to deteriorate when trained on AI-generated content and that they degrade in accuracy when fed their own outputs. The industry-accountability question fractures the panel: Grok alone argues that reliance on synthetic training data has already created a model-collapse problem that will reduce LLM quality over time, while Claude, GPT, and DeepSeek reject that framing, defending their makers against the harder indictment.
Drift rate
How often each model changed its own answer on this topic · avg 4.4%
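The per-question cards below each report a run count and a drift count. A minimal sketch of how a per-question drift rate could be derived from those counts, assuming drift rate = drifts / runs (note the headline 4.4% average is presumably a different cut, e.g. averaged per model, so these per-question figures need not match it):

```python
# Per-question drift rate, assuming drift rate = drifts / runs.
# The (runs, drifts) counts are taken from the question cards on this page.
questions = {
    "#293": (35, 7),
    "#356": (21, 0),
    "#357": (21, 0),
    "#358": (44, 19),
}

for qid, (runs, drifts) in questions.items():
    rate = drifts / runs * 100
    print(f"{qid}: {drifts}/{runs} runs drifted = {rate:.1f}%")
# → #293: 7/35 runs drifted = 20.0%
# → #356: 0/21 runs drifted = 0.0%
# → #357: 0/21 runs drifted = 0.0%
# → #358: 19/44 runs drifted = 43.2%
```

Question #358, the industry-accountability framing, drifts far more often than the others, consistent with it being the one the panel splits on.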
#293 · 35 runs · 7 drifts
Is training AI on AI-generated content like photocopying a photocopy?
Claude YES · GPT YES · Gemini ERROR · DeepSeek YES · Grok YES
#356 · 21 runs · 0 drifts
Are large language models likely to deteriorate in quality as they increasingly train on AI-generated content?
Claude YES · GPT YES · Gemini YES · DeepSeek YES · Grok YES
#357 · 21 runs · 0 drifts
Do large language models degrade in accuracy when their own previous outputs are included in their context window?
Claude YES · GPT YES · Gemini YES · DeepSeek YES · Grok YES
#358 · 44 runs · 19 drifts
Has the AI industry's reliance on synthetic training data created a model collapse problem that will reduce LLM quality over time?
Claude NO · GPT NO · Gemini ERROR · DeepSeek NO · Grok YES