Won't LLMs eventually train on themselves? Won't their output slowly decline?
Source: Dev.to
TL;DR
LLMs train on stuff like documentation, GitHub repositories, Stack Overflow, and Reddit. But as we keep using LLMs, their own output ends up on those same platforms. Which means… at some point they'll be training on themselves. Each generation, maybe the quality drops 0.1 %. That loss doesn't just add up, it compounds.
LLMs do produce good output today, but that's because they were trained on human-written data. You can still tell that AI output is slightly worse at times. Sometimes, majorly worse.
Kinda like that Telephone game… the message slowly gets diluted. Each round the loss looks tiny, maybe 0.1 %. But the losses multiply: you keep 99.9 % of 99.9 % of 99.9 %, and after enough rounds most of the original signal is gone.
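To make the compounding concrete, here's a tiny Python sketch. The 0.1 % per-generation loss is just the post's hypothetical figure, not a measured number; the point is only how a small multiplicative loss behaves over many generations.

```python
# Minimal sketch of the compounding-loss idea, using a hypothetical
# 0.1 % quality drop per "generation" of self-training.
# The numbers are purely illustrative, not measured model behavior.

loss_per_generation = 0.001  # hypothetical 0.1 % drop each generation

for generation in (10, 100, 1000, 5000):
    # quality remaining after n generations is (1 - loss) ** n
    remaining = (1.0 - loss_per_generation) ** generation
    print(f"after {generation:>4} generations: {remaining:.1%} of original quality")
```

With that (made-up) figure, roughly 90 % of the original quality is left after 100 generations, about a third after 1,000, and under 1 % after 5,000. Geometric decay, not a slow linear slide.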