I'm Not Consulting an LLM

Published: March 8, 2026 at 04:43 AM EDT
3 min read

Source: Hacker News

Here’s my problem with using GPT, or any LLM, for anything, even when the LLM would do it effectively. Take looking for information as an example, and assume the following scenario: have you ever used the “I’m Feeling Lucky” button in Google? That button takes you straight to the first result of the search without ever showing you the results page.

Now, imagine a perfect world in which you clicked this button for every Google search you ever did, and it was extremely precise and efficient, finding the perfect fit for whatever you were looking for. In other words, every search you have ever done in your life succeeded on the first hit.

In such a world, would your intellect have grown the same as it does when you actually do proper research—encountering crazy people, cultures, controversies, jokes, interesting writers you follow, arguments you disagree with but can’t quite dismiss, footnotes that lead nowhere and everywhere at once, half‑broken blogs, bad takes that force you to sharpen your own, or sources that contradict each other so hard you have to build a model of the world just to survive the tension?

I guess not.

Because what would be missing isn’t information but experience. Experience is where intellect actually gets trained.

“I’m Feeling Lucky” intelligence is optimized for arrival, not for becoming. You get the answer—but nothing else (keep in mind we are assuming that it’s a good answer). You don’t learn how ideas fight, mutate, or die. You don’t develop a sense for epistemic smell or the ability to feel when something is off before you can formally prove it.

Now back to reality: LLMs are never that good; they’re nowhere near that hypothetical “I’m Feeling Lucky”. This has to do with how they’re fundamentally designed. I have never asked GPT about something I specialize in and received an answer of the quality I would expect from a fellow expert in that field. People tend to think that GPT (and other LLMs) do well only on things they themselves do not understand well (Gell‑Mann Amnesia). Even when it sounds confident, it may be approximating, averaging, exaggerating (Peters 2025), or confidently reproducing a mistake (Sun 2025). There is no guarantee whatsoever that the answer it gives is the best one, the contested one, or even a correct one—only that it is plausible. That distinction matters because intellect isn’t built on plausibility but on understanding why something might be wrong, who disagrees with it, what assumptions are being smuggled in, and what breaks when those assumptions fail.

A tool can be efficient and still be intellectually corrosive, not because it lies all the time, but because it lies well enough. Its smoothness hides uncertainty, and seeing uncertainty is exactly what you need unless you want intellect‑rot.

Modus Vivendi #LLMs

References

  • Peters, Uwe and Chin‑Yee, Benjamin (2025). Generalization bias in large language model summarization of scientific research. The Royal Society. Link
  • Sun, Fengfei and Li, Ningke and Wang, Kailong and Goette, Lorenz (2025). Large Language Models are overconfident and amplify human bias. arXiv. Link