AI makes you boring
Source: Hacker News
Background
This post expands on a comment I made on Hacker News recently, in response to a blog post highlighting an increase in volume and a decline in quality among “Show HN” submissions. I don’t mind AI‑aided development; tools are useful when they serve a purpose. But the overall vibe of AI‑generated Show HN projects is pretty boring: they often lack depth, the authors haven’t thought deeply about the problem space, and there is little room for discussion.
The Pre‑AI Show HN Experience
Before AI became prevalent, Show HN offered the chance to talk to someone who had spent a long time contemplating a problem. It was a genuine opportunity to learn something new and gain a completely different perspective.
How AI Is Changing the Conversation
I feel AI has turned programming discussions into a venue for boring people with boring projects who have little of interest to say about programming. This isn’t limited to Show HN or Hacker News; the same pattern appears across many platforms.
While part of the phenomenon may be an influx of newcomers who are swept up by the excitement of building a product, I argue that the issue runs deeper: AI makes people boring.
Why AI Models Hinder Original Thinking
- Lack of originality: AI models are extremely poor at original thinking. Any thought offloaded to a large language model (LLM) tends to be unoriginal, even if the model presents the output as genius‑level insight.
- Surface‑level ideas: The model’s output is often shallow. Original ideas usually emerge from immersing oneself in a problem for an extended period—a process that LLMs do not replicate.
- Ideation vs. articulation: Prompting an AI is not the same as articulating an idea. The output itself is disposable; the real work lies in the thinking process that leads to a refined concept.
Counterarguments and Their Limitations
Some argue that a human‑in‑the‑loop approach can steer AI work and provide high‑level thinking. However, this premise is flawed:
- Human thought becomes AI‑like: When humans intervene only to correct or guide AI output, their thinking tends to mirror the AI’s surface‑level ideas rather than fostering genuine originality.
- Original ideas require deep engagement: Humans generate truly novel concepts by immersing themselves in a problem over time, and that immersion does not happen when the heavy lifting is delegated to an LLM.
Conclusion
AI can be a valuable tool, but relying on it for core ideation leads to shallow, unoriginal work. Just as we make students write essays, and have professors teach undergraduates, in order to develop critical thinking, we need to engage directly with problems rather than outsource the thinking to models. Building mental “muscle” requires effort, not the computational horsepower of a GPU.