AI Vocab 101

Published: March 19, 2026 at 06:25 PM EDT
8 min read
Source: Dev.to

Why Vocabulary Matters When Talking About AI

I’ve been having a lot of conversations with non‑tech people recently about AI. What I keep running into is the same pattern: smart, curious people who are genuinely trying to understand what’s happening, but who don’t have the vocabulary to name what they don’t know. And when you can’t name it, you can’t ask the right question, which means you stay stuck at the surface.


The Car‑Wash Test

A few months ago, screenshots flooded social media of people asking ChatGPT, Claude, and Grok a deceptively simple question:

The car wash is 40 meters from my house. Should I walk or drive?

The chatbots said walk, missing the obvious problem: the whole point of the trip is to get the car to the car wash.

What many people in the conversation didn’t understand is that the people getting bad results weren’t using a bad AI. They were using a lesser model, probably the free tier of a product, without knowing that’s what they were doing. And without vocabulary, there’s no way to even articulate that distinction.


What Actually Happened

“ChatGPT” isn’t one thing.
It’s a product that runs on a family of models. In ChatGPT, there are three models: GPT‑5 Instant, GPT‑5 Thinking, and GPT‑5 Pro, and a routing layer selects which to use based on your question.

On top of that, the current flagship family looks like this:

  • GPT‑5.4 – think of it as a full‑service restaurant kitchen.
  • GPT‑5.4 mini – the fast‑casual version: quicker, cheaper, good enough for most everyday questions.
  • GPT‑5.4 nano – even lighter, like a food‑truck setup.
  • GPT‑5.4 pro – takes extra time to think through the really hard problems, like a chef who slow‑cooks instead of microwaving.

Key difference: free users don’t get the full kitchen. They get routed to whichever option is fastest and cheapest at that moment. That version can answer a car‑wash question correctly, but it’s also more likely to give inconsistent results on anything with nuance. Paying users get reliable access to the better models.
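A routing layer like the one described above can be sketched in a few lines of Python. Everything here is invented for illustration: the thresholds, the keyword list, and the model names are made up, and real routers are proprietary and far more sophisticated. The point is only that "which model answers you" is a decision made before the model ever sees your question.

```python
def route(question, models):
    # Toy routing layer (thresholds and keywords invented for illustration):
    # short everyday questions go to the cheap, fast model; long or
    # explicitly analytical ones go to the stronger, slower model.
    hard_words = ("prove", "analyze", "compare", "design")
    if len(question.split()) > 30 or any(w in question.lower() for w in hard_words):
        return models["pro"]
    return models["mini"]

models = {"mini": "gpt-mini (fast, cheap)", "pro": "gpt-pro (slow, thorough)"}
print(route("Should I walk or drive to the car wash?", models))
# gpt-mini (fast, cheap)
```

A nine-word everyday question gets the cheap model, which is exactly the situation the free-tier screenshots captured.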

So when someone says “ChatGPT told me X” and someone else says “ChatGPT told me Y,” they may have been talking to completely different models—without either of them knowing it. That’s not a gotcha; it’s simply what happens when you don’t have the vocabulary to describe what you’re actually using.


The Terms That Close the Gap

Below are the core terms that help you move from “I don’t know what’s happening” to “I can ask the right question.”

1. Artificial Intelligence (AI)

  • The broad category. Any system performing tasks we’d normally associate with human reasoning—recognizing images, detecting fraud, recommending what to watch next, etc.
  • Analogy: AI is “transportation.” It’s the whole category.

2. Large Language Model (LLM)

  • A type of AI trained specifically on enormous amounts of text. It works with words: reading them, predicting them, generating them.
  • Examples: GPT‑5.4, Claude, Gemini, Llama.
  • Analogy: LLMs are like cars within the broader “transportation” category.

3. Model

  • The specific trained artifact underneath the product.
  • When someone asks “Which model are you using?” they want the exact version, because different models in the same family behave differently, cost differently, and have different knowledge cutoffs.
  • Analogy: Asking whether you’re driving a 2024 Civic or a 2026 Accord—same manufacturer, very different capabilities.

These nest: AI contains LLMs; LLMs come in specific models. They are not synonyms.

4. Token

  • The LLM doesn’t read words the way you do. It reads tokens: chunks of text that might be a full word, part of a word, a punctuation mark, or a space.
  • Everything about LLM capacity and pricing is measured in tokens, not words or characters.
  • Analogy: Tokens are like syllables in speech—sometimes a whole word (“cat”), sometimes a fragment (“un‑break‑able”).
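The syllable analogy can be made concrete with a deliberately toy tokenizer. Real LLM tokenizers use byte-pair encoding learned from data; this sketch just chops long words into fixed-size fragments, which is enough to show that tokens are not the same thing as words.

```python
def toy_tokenize(text, piece=4):
    # Toy subword tokenizer, for illustration only. Real tokenizers (BPE)
    # learn their splits from data; here we simply chop any word longer
    # than `piece` characters into fragments.
    tokens = []
    for word in text.split():
        if len(word) <= piece:
            tokens.append(word)
        else:
            tokens.extend(word[i:i + piece] for i in range(0, len(word), piece))
    return tokens

print(toy_tokenize("cat unbreakable"))
# ['cat', 'unbr', 'eaka', 'ble']
```

"cat" survives as one token; "unbreakable" becomes three. That is why a 1,000-word document is usually more than 1,000 tokens.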

5. Context Window

  • The total amount of text, in tokens, the model can hold in working memory at once.
  • Your prompt, the conversation history, any documents you’ve passed in, and the response being generated all count.
  • When the window fills, older content gets dropped.
  • Analogy: A whiteboard where you can only write so much before you have to start erasing from the top to make space at the bottom.
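The whiteboard analogy can be sketched in a few lines of Python. This is a toy illustration, not any vendor's actual implementation: token counting here is a crude word count, and real systems use the model's own tokenizer and smarter eviction strategies than "drop the oldest."

```python
def trim_to_window(messages, max_tokens, count_tokens=lambda m: len(m.split())):
    # Whiteboard analogy in code: drop the oldest messages until the
    # conversation fits the window. `count_tokens` here is a crude word
    # count; real systems count with the model's own tokenizer.
    kept = list(messages)
    while kept and sum(count_tokens(m) for m in kept) > max_tokens:
        kept.pop(0)  # erase from the top to make room at the bottom
    return kept

history = ["first question", "a long detailed answer here", "follow up"]
print(trim_to_window(history, max_tokens=7))
# ['a long detailed answer here', 'follow up']
```

Notice that the first message is gone: this is exactly what is happening when a long chat "forgets" how it started.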

6. Hallucination

  • When the model generates text that is confident, fluent, and wrong.
  • Not lying: the model has no concept of truth or intent to deceive. It’s pattern‑matching on what a plausible response looks like, and sometimes that leads to inaccurate output.
  • Hallucinations range from small factual errors to completely fabricated citations.
  • Knowing this term lets you distinguish between “the model reasoned badly” versus “the model stated something false with full confidence.”
  • Analogy: Giving directions to a restaurant that closed three years ago—misinformation, not malice.

7. Prompt

  • Your instruction to the model—everything it receives before it starts generating.
  • Prompt quality is one of the highest‑leverage variables in any AI system. Vague prompts produce vague, unpredictable outputs.

8. Agent

  • An AI system that can take actions, not just generate text. It has access to tools (search, email, databases, APIs) and decides which to use and in what order.
  • The defining characteristic is that it can affect the world outside the conversation.
  • Analogy: If an LLM is a consultant who gives advice, an agent is an assistant who can actually book your flight, send the email, and update the spreadsheet.
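The consultant-versus-assistant distinction shows up clearly in code. Below is a minimal agent loop; every name in it (the fake model, the search tool) is hypothetical, and real agent frameworks add planning, memory, and safety checks on top, but this control flow is the essence: propose an action, execute it, feed the result back.

```python
def run_agent(model, tools, task, max_steps=5):
    # Minimal agent loop (all names hypothetical): the model proposes an
    # action, the harness executes the matching tool, and the result is
    # appended to the history so the model can see what happened.
    history = [task]
    for _ in range(max_steps):
        action = model(history)                # e.g. {"tool": "search", "args": {...}}
        if "answer" in action:
            return action["answer"]            # model decided it is done
        result = tools[action["tool"]](**action["args"])
        history.append(result)                 # the outside world flows back in
    return "gave up after max_steps"

# A fake model and one fake tool, just to show the control flow:
def fake_model(history):
    if len(history) == 1:
        return {"tool": "search", "args": {"query": "flights to Boston"}}
    return {"answer": f"Booked based on: {history[-1]}"}

tools = {"search": lambda query: f"3 results for '{query}'"}
print(run_agent(fake_model, tools, "book me a flight"))
# Booked based on: 3 results for 'flights to Boston'
```

The loop is what makes it an agent: a plain LLM call would stop after the first response, advice in hand but nothing done.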

9. Harness

  • The scaffolding you build around an LLM: the system prompt, retrieval logic, error handling, and tool connections that let you steer it. The next section unpacks this in detail.

Takeaway

Having the right vocabulary gives you handles on things you can actually change—whether it’s selecting a better model, crafting a clearer prompt, or understanding why an AI might “hallucinate.” Once you can name the pieces, you can ask the right questions and get the right answers.

Understanding the AI “Harness”

“The model is the engine. The harness is everything that makes it go where you want.”

Think of a Formula 1 car: the engine is powerful, but it’s useless without the steering wheel, brakes, suspension, and chassis that let you actually control it. In AI, the model is the engine, and the harness (system prompt, retrieval logic, error handling, tool connections, etc.) is what lets you steer it.
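The engine-versus-harness split can be sketched as a wrapper function. This is a hedged illustration, not any product's real code: the stub model, the retriever, and the retry policy are all stand-ins for the components named above, but the shape is representative of what a harness does around the raw model call.

```python
def harnessed_call(model, user_prompt, retrieve, max_retries=2):
    # Harness sketch (all names hypothetical): the raw model is the engine;
    # the system prompt, retrieval step, and retry loop are the steering,
    # brakes, and suspension built around it.
    system = "You are a concise assistant. Say 'I don't know' rather than guess."
    context = retrieve(user_prompt)            # retrieval logic
    prompt = f"{system}\n\nContext:\n{context}\n\nUser: {user_prompt}"
    last_error = None
    for _ in range(max_retries + 1):
        try:
            return model(prompt)               # the engine itself
        except TimeoutError as err:
            last_error = err                   # error handling: retry
    raise last_error

# Stub engine and retriever, just to show what the harness adds around them:
def stub_model(prompt):
    return "Drive: the car has to be at the car wash."

def stub_retrieve(question):
    return "Doc: car washes require the car to be present."

print(harnessed_call(stub_model, "Walk or drive?", stub_retrieve))
# Drive: the car has to be at the car wash.
```

Swap in a real API call for `stub_model` and a real document search for `stub_retrieve` and you have the skeleton of most production AI features: same engine, very different behavior depending on the harness.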


Key Vocabulary

  • API (Application Programming Interface)
    What it means: The formal connection point between two pieces of software.
    Why it matters: Almost every AI tool either calls an API to get a model’s response or offers an API so other tools can connect to it. Think of it as the electrical outlet in your wall: a standardized interface that lets different appliances plug in without rewiring.

  • MCP (Model Context Protocol)
    What it means: A protocol that lets AI access your personal data (files, calendar, email, etc.).
    Why it matters: It’s an early‑stage attempt to make AI‑to‑your‑data connections smoother. Tools that advertise “MCP support” are trying to play nice with AI, even if the setup can still be a bit rough.

  • Model vs. Model Family
    What it means: A model is a specific version (e.g., GPT‑4). A model family groups related versions (e.g., GPT‑3, GPT‑3.5, GPT‑4).
    Why it matters: Knowing the difference lets you ask the right question: “Which version are they using?” instead of “Is AI smart or dumb?”

  • Context Window
    What it means: The amount of text a model can consider at once.
    Why it matters: Understanding this prevents you from blaming the AI when it “forgets” earlier parts of a long conversation.

  • Hallucination
    What it means: When a model generates information that isn’t grounded in its training data or external sources.
    Why it matters: Using the term correctly stops it from becoming a catch‑all for any output you distrust.

Why Vocabulary Helps

  • Turns vague frustration into specific, solvable problems.
    Knowing the right terms lets you pinpoint the issue (e.g., a too‑small context window vs. a model’s inherent limitation).

  • Enables clearer communication with teammates and tool providers.
    When everyone speaks the same language, troubleshooting and integration become faster and less error‑prone.

  • Guides better decision‑making.
    Understanding concepts like APIs, MCP, and context windows helps you choose the right tools and design more reliable AI‑driven workflows.


Takeaway

  • Model = Engine
  • Harness (system prompt, retrieval logic, error handling, tool connections, etc.) = Steering, brakes, suspension, chassis

Mastering the vocabulary around these components lets you move from “AI is either smart or dumb” to “Here’s exactly what’s happening, and here’s how to fix it.”
