You're not prompting it wrong.
Source: Dev.to
Background
I was listening to Grady Booch on The Third Golden Age of Software Engineering episode of The Pragmatic Engineer. During the episode he mentioned a website called Victorian Engineering Connections—an interactive diagram of how Victorian engineers knew and influenced each other.
Interaction with Claude
While in the middle of a session with Claude, I asked it to remind me of the name. Claude pointed me to sixdegreesoffrancisbacon.com. I then mentioned that I heard Grady Booch talk about the site on a podcast. Claude gave me three plausible‑sounding but incorrect answers (the links worked, but they weren’t the site I was looking for). When I typed the actual name, Claude found it immediately.
I searched Google for victorianengineeringconnections.net, and the first result was a blog post reviewing the website, which suggests that references to the site were likely in Claude's training data even though Claude couldn't surface it from my description.
Analogy to the Library of Babel
The experience made me think of two concepts:
- The Library of Babel – an imagined library containing every possible string of text.
- “You’re prompting it wrong.” – the common advice that an LLM fails because the user’s prompt is inadequate.
There are virtually infinite strings of text one could use to prompt an LLM, and each will produce different responses. In a sense, an LLM is a giant Library of Babel (with some randomness) where each person has their own unique index.
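To give a rough sense of that scale, here is a back-of-the-envelope count of distinct prompts. The numbers (a 50,000-token vocabulary, 100-token prompts) are hypothetical round figures, not properties of any particular model:

```python
# Illustrative count of distinct prompts an LLM could receive.
# Vocabulary size and prompt length are assumed round numbers,
# not taken from any specific model.
vocab_size = 50_000
prompt_length = 100

# Number of distinct token sequences of exactly this length.
distinct_prompts = vocab_size ** prompt_length

# The count has hundreds of digits: not literally infinite,
# but far beyond what anyone could ever explore.
print(len(str(distinct_prompts)))  # prints 470 (digits in the count)
```

Even under these modest assumptions, the space of possible prompts dwarfs the number of atoms in the observable universe, so any two people are almost certainly querying the library from different shelves.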
Prompting and Subject‑Matter Expertise
One reason LLMs seem amazing is that they can answer almost any question we have, as if an expert were sitting right next to us the whole time. This perception stems from the fact that LLMs are trained on vast amounts of data and can generate plausible answers.
However, people often say, “LLMs don’t work for you because you’re prompting them wrong.” What if LLMs only work well for people who are already subject‑matter experts? Their unique way of forming prompts—specific strings of text—might be the key that leads to the correct answers.