TL;DR LLMs train on sources like documentation, GitHub repositories, StackOverflow, and Reddit. But as we keep using LLMs, their own output flows back onto these platforms, and from there into future training data...
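As a rough illustration of that loop (a toy sketch, not anything from the article): fit a simple model to a corpus, let its own samples become the next corpus, and repeat. Because each finite-sample refit is slightly biased toward a narrower spread, the corpus tends to collapse toward its own mean over generations, even though any single run is noisy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: a stand-in for human-written training data.
corpus = rng.normal(loc=0.0, scale=1.0, size=50)

for gen in range(1, 51):
    # "Train" a model on the current corpus: here, just fit a Gaussian.
    mu, sigma = corpus.mean(), corpus.std()
    # The next generation's corpus consists entirely of the model's own output.
    corpus = rng.normal(loc=mu, scale=sigma, size=50)
    if gen % 10 == 0:
        print(f"generation {gen:2d}: fitted mean={mu:+.3f}, fitted std={sigma:.3f}")
```

A 50-point Gaussian is obviously nothing like a web-scale corpus; the point is only that a model fed its own output has nothing pulling it back toward the original distribution.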
Article URL: https://gwern.net/doc/science/2025-kusumegi.pdf
There's a meaningful distinction between using large language models and truly mastering them. While most people interact with LLMs through simple question-and-answer...
“Won’t AI just get better at this?” Short answer: No. Understanding why reveals something fundamental about how we should think about AI safety.