AI Makes You Boring
This post expands on a comment I made on Hacker News recently, referencing a blog post that highlighted an increase in volume and a decline in quality among “Show HN” submissions.
I don’t actually mind AI‑aided development—a tool is a tool and should be used if you find it useful—but the vibe of code‑centric Show HN projects has become pretty boring overall. They generally don’t have a lot of work put into them, and as a result the author (pilot?) hasn’t thought deeply about the problem space, leaving little room for discussion.
The cool part about pre‑AI Show HN was that you got to talk to someone who had thought about a problem for much longer than you had. It was a real opportunity to learn something new and to get an entirely different perspective.
AI’s Effect on Programming Discussions
I feel like this is what AI has done to programming discussions: it draws in people with uninteresting projects who don’t have anything compelling to say about programming.
This isn’t limited to Show HN or even Hacker News; you see it everywhere. While part of the phenomenon is likely an upswing of people who don’t usually program but get swept up in the fun of building a product, I want to argue that it’s much worse than that.
AI Makes People Boring
- Lack of original thinking – AI models are extremely bad at original thought, so any thinking offloaded to an LLM is usually not very original, even if the model treats your inputs as genius‑level insights.
- Surface‑level ideas – Prompting an AI model is not the same as articulating an idea. You get output, but in terms of ideation the output is discardable; it’s the work that matters.
- Shallow immersion – The way humans generate original ideas is by immersing in a problem for a long period of time, something that simply doesn’t happen when LLMs do the thinking. This leads to shallow, surface‑level ideas.
Counterarguments and Their Flaws
Some argue that you need a human “in the loop” to steer the work and handle high‑level thinking. That premise is fundamentally flawed:
- Original ideas arise from the very work you’re offloading to LLMs.
- Having humans in the loop doesn’t make the AI think more like people; it makes human thought more like AI output.
- Ideas are refined when you try to articulate them—which is why we make students write essays and why professors teach undergraduates. Prompting an AI does not provide that articulation step.
Conclusion
Using AI as a shortcut can produce output, but it doesn’t build the mental “muscle” needed for original thought. You don’t develop interesting ideas by using a GPU to think, just as you don’t build muscle by using an excavator to lift weights. The reliance on AI for ideation risks making both projects and their creators increasingly boring.