Stop Calling Every AI Miss a Hallucination

Published: March 28, 2026, 3:07 PM GMT+9
3 min read

Source: Hacker News

Overview

Hallucination has become a garbage‑can label for AI misses.

Sometimes the model really did make something up. Fine, call that a hallucination.

But a lot of what engineers are calling hallucination is something else: omitted scope, default fill‑in, or blended inference. If you diagnose all of it the same way, you stop seeing what actually failed.

I’m using hallucination the way OpenAI does: plausible but false statements. I’m separating that from omitted scope, default fill‑in, and blended inference because those are different failures and need different fixes.

Omitted scope

This is one of the most common misses in coding work.

You ask for one change. The model makes that change. But the same rule also needed to hold somewhere else, and you never named that second place.

So one path changes and the other does not.

That is not best described as hallucination. It is a scope failure.

The model did the named work and missed the unnamed consequence.
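A minimal sketch of what this looks like in code. Everything here is hypothetical (the function names, the rule, the limit values); the point is only the shape of the failure: the rule lives in two places, and only the named place was updated.

```python
# Hypothetical: a max-length rule that lives in two code paths.
MAX_NAME_LENGTH = 50  # the change you asked for: raise the limit from 30 to 50

def validate_signup(name: str) -> bool:
    # The named work: the model updated this check as requested.
    return len(name) <= MAX_NAME_LENGTH

def validate_profile_edit(name: str) -> bool:
    # The unnamed consequence: the same rule also lived here, hard-coded,
    # and you never named this second place. So this path keeps the old limit.
    return len(name) <= 30
```

A 40-character name now passes signup but fails profile edit, which is exactly the "one path changes and the other does not" symptom, and nothing here was invented or made up.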

Default fill‑in

Sometimes the model is not inventing facts. It is filling open slots.

You left key choices unspecified, so it picked something plausible:

  • a library
  • an implementation pattern
  • a default behavior

That may still be wrong, but being wrong because the system guessed a default is not the same as being wrong because it made up unsupported content.

These are different problems and need different corrections.
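A hypothetical illustration of slot-filling: asked only to "parse a date string," a model has to pick the unspecified pieces itself, and each pick is a default, not an invention.

```python
from datetime import datetime

def parse_date(s: str) -> datetime:
    # Default fill-ins the model chose because you never specified them:
    # the library (stdlib datetime), the implementation pattern (strptime),
    # and the default behavior (assuming ISO year-month-day ordering).
    return datetime.strptime(s, "%Y-%m-%d")
```

If your inputs were actually in day-month-year order, this answer is wrong, but the fix is to specify the choice you left open, not to add grounding or verification.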

Blended inference

This is the failure mode buried inside most AI answers.

The model mixes:

  • what is grounded
  • what is inferred
  • what is assumed
  • what is missing

and returns all of that in one fluent answer.

You are no longer just checking whether the answer sounds good. You are trying to figure out where the model stopped knowing and started guessing.

That is not the same thing as the model making something up either.

Why this matters

These failures need different fixes.

  • If the model made something up, you need grounding, verification, or tighter constraints.
  • If it missed scope, you need to name the full change and the other places where the rule has to hold.
  • If it filled in defaults, you need to specify the choices you left open.
  • If it blended facts and assumptions, you need a way to separate them before acting on the answer.

Calling all of that hallucination makes the model look magical when the failure is often much more ordinary.

Where VDG fits

This is why I built VDG: Verified / Deduction / Gap.

VDG does not make the model smarter. It does not eliminate hallucinations. It breaks blended answers apart.

That matters most with blended inference. A normal answer fuses what is grounded, what is inferred, what is assumed, and what is missing into one smooth response; VDG pulls those layers apart so each can be checked on its own terms.

It also helps with the other failure classes. VDG will not fix omitted scope for you, but it makes missing scope more visible. It will not stop the model from choosing defaults, but it forces those defaults to surface instead of passing as if they were simply part of the answer.
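The post does not show VDG's actual output format, so as a purely hypothetical sketch, separating a blended answer could look like tagging each claim with where it came from:

```python
from dataclasses import dataclass
from typing import Literal

# The three VDG buckets: Verified (grounded), Deduction (inferred), Gap (missing).
Status = Literal["verified", "deduction", "gap"]

@dataclass
class Claim:
    text: str
    status: Status  # where the model stopped knowing and started guessing

# A blended answer pulled apart claim by claim (contents are hypothetical):
answer = [
    Claim("The config file is config.yaml", "verified"),  # grounded in the repo
    Claim("It is loaded at startup", "deduction"),        # inferred from convention
    Claim("Hot-reload behavior", "gap"),                  # never established
]

# Surface what is missing instead of letting it pass as part of the answer.
gaps = [c.text for c in answer if c.status == "gap"]
```

The value is not in the data structure; it is that defaults and assumptions become visible line items instead of dissolving into one fluent paragraph.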

The practical rule

Stop using “hallucination” as the label for every AI miss.

Use it when the model actually made something up.

Then diagnose the rest more precisely.

That alone will help you see what failed and why it failed.
