Beyond the Buzzwords: Context, Prompts, and Tools
Published: February 12, 2026 at 08:11 PM EST
7 min read
Source: Dev.to

Wake up in 2026, open a coding assistant, and you're jumping into a terminology soup: *Agents, Subagents, Prompts, Contexts, Memory, Modes, Permissions, Tools, Plugins, Skills, Hooks, MCP, LSP, Slash Commands, Workflows, Instructions*, etc.
Companies building these tools love creating new branding for every slight variation in interaction. Instead of getting trapped in the vocabulary treadmill, look at the architecture. Every AI coding tool—no matter how fancy the marketing—deals with the same three things:
**Context, Tools, and Prompts**.
---
## Context: The Memory Budget
*Context* is the agent's working memory. It’s also the bottleneck. While context windows have grown, they’re still finite and expensive. Every file you open, every tool output you receive, and every turn in the conversation eats into your budget.
The different terminologies you see imply different strategies for managing this constraint.
- **Slash Commands** (`/commit`, `/explain`): Reusable instructions. When you catch yourself typing the same prompt over and over, slash commands are the shortcut.
- **Context Compaction**: The agent's way of simulating long‑term memory. When the chat gets bloated, the system summarizes earlier turns into a shorter digest. This keeps the conversation going, but you lose the granular details of why a specific decision was made.
- **Selective Loading**: Load only what you need, when you need it. Keep the context window empty until the last possible second, then load the specific snippets required for the current line of code.
The underlying problem hasn’t changed since the first chatbots: context windows are limited. The terminology keeps expanding, but we’re still solving the same problem—how to give the agent what it needs without exceeding its memory budget.
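The compaction strategy above can be sketched in a few lines. This is a toy illustration with hypothetical names and a crude character-based token estimate; real assistants use the model itself to write the summary rather than a placeholder stub:

```python
# Sketch: compact old conversation turns once a token budget is exceeded.
# All names are hypothetical; a real agent summarizes with the LLM itself.

def estimate_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly 1 token per 4 characters.
    return max(1, len(text) // 4)

def compact(history: list[str], budget: int) -> list[str]:
    """Keep recent turns verbatim; collapse older ones into a stub summary."""
    if sum(estimate_tokens(t) for t in history) <= budget:
        return history
    kept: list[str] = []
    used = 0
    # Walk backwards so the most recent turns survive intact.
    for turn in reversed(history):
        cost = estimate_tokens(turn)
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    dropped = len(history) - len(kept)
    summary = f"[summary of {dropped} earlier turns]"
    return [summary] + list(reversed(kept))
```

Note the trade-off baked into the last line: whatever was in the dropped turns survives only as a one-line summary, which is exactly why compaction loses the "why" behind earlier decisions.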
---
## Tools: The Action Layer
A chatbot can only generate text. An *agent* can take action. Tools bridge the gap between thinking and doing. From an architectural perspective, everything beyond the prompt is a tool call.
### Categories of Tools
| Category | Description |
|----------|-------------|
| **Actuators** (File System & Terminal) | Modify the world: write files, create directories, run shell commands for builds and tests. |
| **Navigators** (LSP & Indexers) | Give the agent structural insight into the code, letting it locate the right function definition without reading everything. |
| **Executors** (Sandboxes) | Run code in isolated environments. If the agent isn’t sure about a logic block, it can execute a small script and observe the real output before suggesting it. |
| **Researchers** (Browsers & RAG) | Let the agent step outside the local machine. A browser tool lets it read information created after the model was trained. |
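Whatever the category, every tool reduces to the same mechanical shape: a named, described function the model can choose to call. A minimal sketch of that registry pattern, with illustrative names and signatures:

```python
# Sketch of a tool registry; tool names and signatures are illustrative only.
import subprocess
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str  # shown to the model so it can pick the right tool
    run: Callable[..., str]

def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()

def run_shell(cmd: str) -> str:
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

REGISTRY = {
    t.name: t
    for t in [
        Tool("read_file", "Actuator/Navigator: read a file from disk", read_file),
        Tool("run_shell", "Actuator: execute a shell command", run_shell),
    ]
}

def dispatch(name: str, **kwargs) -> str:
    """The agent loop calls this with the model's chosen tool and arguments."""
    return REGISTRY[name].run(**kwargs)
```

The description field is doing real work here: it is the only thing the model sees when deciding which tool fits the task.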
---
## The Bridge: MCP (Model Context Protocol)
MCP is **not** a tool. It’s the standardized interface that connects the agent to tools.
- **Before MCP**: Every team that owned an agent hand‑rolled its own integration for each tool, so the same connectors were rebuilt over and over across the ecosystem.
- **With MCP**: The transport layer and tool specification are standardized. MCP server implementations become far more reusable, making the entire ecosystem plug‑and‑play.
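Concretely, an MCP server advertises each tool as a structured declaration: a name, a description, and a JSON Schema for its inputs, which any MCP-aware agent can consume. The sketch below shows the general shape of such a declaration; the `search_docs` tool itself is invented for illustration:

```python
# Simplified sketch of an MCP-style tool declaration. The tool is
# hypothetical; the point is the standardized shape: name, description,
# and a JSON Schema describing the expected inputs.
tool_declaration = {
    "name": "search_docs",  # hypothetical example tool
    "description": "Search the project documentation for a query string.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "limit": {"type": "integer", "default": 5},
        },
        "required": ["query"],
    },
}
```

Because the declaration is self-describing, the agent side needs no bespoke glue code: it reads the schema, fills in arguments, and calls the tool.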
---
## Prompts: More Than Just Talk
When you talk directly to an LLM, you’re giving it a *user prompt*. *System prompts* work differently: they tell the agent **how** to behave.
There are many fancy terms for system prompts, but they’re all about shaping how the AI thinks and acts. The key difference is **when** and **how** they load.
### Base Instructions
- Load when the session starts.
- Set ground rules: what tools are available, what constraints exist, what background knowledge applies.
- Typically stored in files like `CLAUDE.md` or `AGENTS.md`.
- Can be global or per‑project.
### Skills
- Load only when needed (lazy loading).
- A skill usually contains:
1. **Name**
2. **Description** – a short piece of text used by the LLM to decide when to load the entire content.
3. **Body** – detailed instructions.
> Example: A debugging skill won’t activate until you ask to debug something; a refactoring skill stays dormant until you mention refactoring. This saves context by not loading domain expertise until it’s actually needed.
### Commands
- Shorthand for user prompts.
- Example: typing `/refactor` triggers a pre‑crafted prompt such as “Analyze the selected code and suggest a refactoring that improves readability while preserving functionality.”
- Commands are shortcuts for specific prompts you’d otherwise have to type out.
### Modes
- Combine system prompts with tool configurations.
- **Plan mode**: Loads a more analytical system prompt and restricts tools to those useful for planning (reading files, analyzing code structure, understanding dependencies).
- **Build mode**: Uses a more action‑oriented prompt and prioritizes tools for writing code, running tests, and making changes.
- Modes are presets that bundle together behavior and the supporting tools.
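A mode, then, is just a named bundle: one system prompt plus an allow-list of tools. A minimal sketch, with prompts and tool names that are illustrative rather than any real product's configuration:

```python
# Sketch: a mode bundles a system prompt with a tool allow-list.
# Prompts and tool names are hypothetical examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class Mode:
    system_prompt: str
    allowed_tools: frozenset[str]

MODES = {
    "plan": Mode(
        system_prompt="You are a careful analyst. Read and reason; do not edit.",
        allowed_tools=frozenset({"read_file", "search_code", "list_deps"}),
    ),
    "build": Mode(
        system_prompt="You are an implementer. Make changes and verify them.",
        allowed_tools=frozenset({"read_file", "write_file", "run_tests"}),
    ),
}

def can_use(mode: str, tool: str) -> bool:
    """Permission check the agent loop runs before dispatching a tool call."""
    return tool in MODES[mode].allowed_tools
```

Switching modes swaps both halves at once, which is why plan mode can't accidentally write a file: `write_file` simply isn't in its allow-list.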
---
## The Hierarchy of Prompts
The core idea follows the same principles as software development: **lazy loading** resources and making them reusable. The hierarchy looks like this:
1. **Base Instructions** – always loaded at session start.
2. **Modes** – load a set of system prompts + tool configs for a specific workflow.
3. **Skills** – loaded on demand when the LLM decides they’re relevant.
4. **Commands (Slash Commands)** – user‑triggered shortcuts (e.g. `/refactor`) that expand into reusable, pre‑crafted prompts.
By treating prompts as a layered, lazily‑loaded resource tree, you keep the context window lean while still giving the agent the full power it needs to act effectively.
---
*Understanding the architecture—context, tools, and prompts—lets you cut through the buzzwords and work with any AI coding assistant efficiently, no matter how it’s branded.*
## How the Layers Combine
- **Base instructions** are the foundation; they always apply.
- **Skills** layer on top, adding domain expertise when needed.
- **Modes** replace the entire configuration with a pre‑baked set of prompts and tools.
- **Commands** trigger specific user prompts on top of whatever system prompts are active.

Every prompt terminology does the same thing: layering prompts to shape behavior. Some load early, some load late. Some replace others, some build on them. But they’re all prompts.
---
### Delegation: When Context Hits a Wall
This is where the three fundamentals intersect. Because context is limited, we try to split work among multiple agents.
Interestingly, the same is true for humans: high‑level planning and design don’t require the low‑level details, and detail work doesn’t require the big picture beyond its own scope, as long as the spec is good enough.
Think of it as the **architect agent** calling a **worker agent** as a tool.
1. The architect identifies a task that requires deep focus (e.g., “Refactor this 2,000‑line module”).
2. Instead of doing it itself and bloating its own context, the architect **delegates** it to a sub‑agent.
3. The sub‑agent starts with a fresh, empty context and a specific skill prompt for refactoring.
4. Once the work is done, the sub‑agent returns a concise summary to the architect.
This recursive agency allows us to handle larger tasks without the main AI brain becoming a garbled mess of too much information.
Sub‑agent delegation is just another tool—a tool that spawns another agent with its own context. That mental model—thinking of sub‑agents as tools—helps when working with coding agents.
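Treating the sub-agent as a tool makes the pattern easy to sketch: the architect's context only ever receives the short summary, never the worker's full transcript. Everything below is a hypothetical skeleton, not a real framework's API:

```python
# Sketch: sub-agent delegation as a tool call. The worker runs with its own
# fresh context and returns only a summary; all names are illustrative.

def run_subagent(task: str, skill_prompt: str) -> str:
    """Spawn a worker with an empty context plus one skill prompt."""
    worker_context = [skill_prompt, task]  # fresh context, not the parent's
    # ... a real worker would loop over tool calls here, growing its own
    # context with file contents, diffs, and test output ...
    return f"Done: {task}"  # only this summary crosses back to the parent

def architect(tasks: list[str]) -> list[str]:
    """The architect's context only ever holds the short summaries."""
    summaries = []
    for task in tasks:
        summaries.append(run_subagent(task, skill_prompt="Refactoring playbook..."))
    return summaries
```

The asymmetry is the whole point: the worker may burn through an enormous context internally, but the architect pays only for the one-line result.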
---
### Takeaways
- The terminology will keep expanding at the speed of marketing, but the underlying mechanics of AI development stay the same.
- The agent vocabulary in 2026 is noisy. By focusing on **context**, **tools**, and **prompts**, you move from being a consumer of buzzwords to an architect of the technology.
- You gain the ability to see the same reliable patterns beneath every new interface that hits the market.