Won't LLMs eventually train on themselves? Their output will slowly decline.

Published: January 7, 2026 at 08:06 PM EST
1 min read
Source: Dev.to

TL;DR

LLMs train on stuff like documentation, GitHub repositories, StackOverflow, and Reddit. But as we keep using LLMs, their own output ends up on these platforms. Which means… at some point, they'll be training on themselves. Each time, maybe the quality gets 0.1 % worse. And those losses compound.

LLMs do produce good output, but that's because they were trained on human data. You can already tell that AI output is slightly worse at times, and sometimes much worse.

It's kinda like the Telephone game… the message slowly gets diluted. Each round the loss might be tiny, say 0.1 %. But the rounds stack: after two generations you're at 99.9 % of 99.9 %, and it keeps shrinking from there.
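To make the compounding concrete, here's a minimal sketch in Python. The 0.1 % per-generation loss is the article's hypothetical figure, not a measured number; the point is just how a small repeated loss multiplies out.

```python
# Illustrative only: compound a small per-generation quality loss.
# 0.001 (0.1 %) is the article's hypothetical figure, not a real measurement.
quality = 1.0                 # start from fully human-written training data
loss_per_generation = 0.001   # assumed degradation each time a model trains on its own output

for generation in range(1, 11):
    quality *= (1 - loss_per_generation)
    print(f"generation {generation:2d}: quality = {quality:.4%}")
```

After 10 generations the remaining quality is about 99.0 % — not a collapse, but the decay is geometric, so it never stops shrinking as long as model output keeps feeding back into the training data.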
