this post was submitted on 15 Dec 2025

There was a lot of press ~2 years ago about this paper, and the term "model collapse":
Training on generated data makes models forget

There was concern that the AI models had slurped up the Whole Internet and needed more data to get any smarter. Generated "synthetic data" was mooted as a possible solution. And there's the fact that the Internet increasingly contains AI-generated content.

As so often happens (and happens fast in AI), research and industry move on, but the flashy news item remains in people's minds. To this day I see posts from people who misguidedly think this is still a problem (and as such one more reason the whole AI house of cards is about to fall).

In fact, the big frontier models today (GPT, Gemini, Llama, Phi, etc.) are all trained in part on synthetic data.

As it turns out, what really matters is the quality of the data, not whether it's synthetic; see "Textbooks Are All You Need".
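
To make that concrete: the recipe from that line of work is roughly "train or prompt a scorer for educational value, then keep only the documents it likes". A minimal sketch, assuming a hypothetical fine-tuned classifier (the checkpoint name, label, and threshold below are placeholders of mine, not the paper's actual setup):

```python
from transformers import pipeline

# Placeholder checkpoint: any text-classification model fine-tuned to
# rate "educational value" would slot in here.
scorer = pipeline("text-classification", model="my-org/edu-value-scorer")

def filter_corpus(docs, threshold=0.8):
    """Keep only documents the scorer rates as high quality."""
    kept = []
    for doc in docs:
        result = scorer(doc, truncation=True)[0]
        if result["label"] == "HIGH_QUALITY" and result["score"] >= threshold:
            kept.append(doc)
    return kept
```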

And then some folks figured out how to use an AI verifier to automatically curate that quality data: "Escaping Model Collapse via Synthetic Data Verification"
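
The verification loop itself is simple; the hard part is building a good verifier. Here's a minimal sketch of the accept/reject idea (the function names are mine, not the paper's):

```python
def generate_verified_dataset(generator, verifier, prompts, max_tries=4):
    """Keep only synthetic samples that pass the verifier."""
    dataset = []
    for prompt in prompts:
        for _ in range(max_tries):
            sample = generator(prompt)      # draw one synthetic sample
            if verifier(prompt, sample):    # e.g. an LLM judge or reward model
                dataset.append((prompt, sample))
                break                       # first accepted sample wins
    return dataset
```

The design point is that the verifier, not the generator, gates what enters the training set, so the generator's errors don't compound across training rounds.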

And people used clever math to make the synthetic data really high quality: "How to Synthesize Text Data without Model Collapse?"
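
As I understand that paper, the trick is token-level editing: instead of generating whole documents, a model resamples individual tokens of human-written text, so the data stays anchored to the real distribution. A rough sketch; the threshold value and sampling details are my assumptions, not the paper's exact procedure:

```python
import torch

def token_edit(model, token_ids, threshold=0.99):
    """Resample tokens a causal LM already finds highly predictable."""
    with torch.no_grad():
        logits = model(token_ids.unsqueeze(0)).logits[0]  # (seq_len, vocab)
    probs = torch.softmax(logits, dim=-1)
    edited = token_ids.clone()
    for i in range(len(token_ids) - 1):
        # logits at position i predict token i + 1
        if probs[i, token_ids[i + 1]] > threshold:        # "easy" token
            edited[i + 1] = torch.multinomial(probs[i], 1).item()
    return edited
```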

Summary:
"Model collapse" from AI-generated content is largely a Solved Problem.

There may be reasons the whole AI thing will collapse, but this is not one.

[–] nymnympseudonym@piefed.social 1 points 1 week ago

You could say that's a problem DeepSeek solved last year. One of their biggest insights was using a lot of AI compute to sift through the Whole Internet for really, really good initial training data (as opposed to generating it synthetically).

Yannic Kilcher did a great breakdown that includes details of this aspect: [GRPO Explained] DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models
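
For anyone who skips the video: the core trick in GRPO is group-relative advantages. You sample a group of responses to the same prompt, score them, and normalize each reward against its own group instead of training a separate value network. A minimal sketch (the epsilon is my addition to avoid dividing by zero):

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-8):
    """rewards: scores for G responses to one prompt."""
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)   # A_i = (r_i - mean) / std

# grpo_advantages([1.0, 0.0, 0.0, 1.0]) -> [ 1., -1., -1.,  1.]
```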