80% of 'AI Is Stupid' Complaints Are Actually Context Problems
Source: Dev.to
The uncomfortable truth
I watched a teammate spend 20 minutes complaining that Copilot doesn’t understand our codebase. Then I looked at the repo: no README, no architecture docs, no module descriptions—just code.
Most AI code‑quality problems aren’t AI problems. They’re context problems.
The experiment that changed my mind
I took the same task — “add pagination to the users endpoint” — and tried it two ways:
Round 1: Prompt only
The AI generated code that technically worked but used the wrong ORM pattern, the wrong error‑handling style, and a pagination approach nobody on the team uses.
Round 2: Prompt + AGENTS.md
I added a 40‑line AGENTS.md file describing our project conventions (ORM patterns, error handling, pagination style, test expectations).
The difference was night and day. Not because the AI got smarter, but because the context did.
Why this matters more than model upgrades
Everyone’s waiting for GPT‑5, Claude Next, or whatever to “finally get it right.” I’ve found that well‑documented context with a mediocre model outperforms zero context with a frontier model.
Think of onboarding a new developer. You wouldn’t drop a senior engineer into a codebase with zero documentation and expect them to match your team’s patterns on day 1. Why do we expect that from AI?
What actually works: the AGENTS.md pattern
I keep a simple markdown file at the project root that describes the essential conventions:
```markdown
# AGENTS.md

## Project Overview
Express API with PostgreSQL, using Knex for queries.

## Conventions
- **Error handling:** wrap in `try/catch`, use `AppError` class
- **Pagination:** cursor-based, not offset
- **Tests:** co-located with source, use test factories
- **Naming:** `camelCase` for JS, `snake_case` for DB columns

## Common Gotchas
- Don't use the `users` table directly — go through `UserService`
- Rate limiting is middleware-level, don't add it per-route
```
That's the skeleton. The full file runs about 40 lines and takes maybe an hour to write well.
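To make those conventions concrete, here's a minimal sketch of code that follows them: cursor‑based pagination, errors wrapped in an `AppError` class, table access routed through `UserService`. Everything here is hypothetical. `AppError` and `UserService` are the names from the example file above, the data is in‑memory, and a real `UserService` would query through Knex instead of filtering an array.

```javascript
// Hypothetical sketch of the AGENTS.md conventions in action.
// AppError and UserService are illustrative stand-ins, not real project code.
class AppError extends Error {
  constructor(message, statusCode = 500) {
    super(message);
    this.statusCode = statusCode;
  }
}

// In-memory stand-in; the real service would run a Knex query.
const UserService = {
  users: [
    { id: 1, name: 'ada' },
    { id: 2, name: 'grace' },
    { id: 3, name: 'alan' },
  ],
  // Cursor-based pagination: return rows after the cursor, plus the next cursor.
  async listUsers({ cursor = 0, limit = 2 } = {}) {
    const page = this.users.filter((u) => u.id > cursor).slice(0, limit);
    const nextCursor = page.length === limit ? page[page.length - 1].id : null;
    return { data: page, nextCursor };
  },
};

// Route handler shape: go through UserService, wrap failures in AppError.
async function getUsers(req) {
  try {
    const cursor = Number(req.query.cursor ?? 0);
    return await UserService.listUsers({ cursor, limit: 2 });
  } catch (err) {
    throw new AppError('Failed to list users', 500);
  }
}
```

The point isn't this particular code; it's that without the conventions written down, an AI assistant has no way to guess that you want cursors instead of offsets or a service layer instead of raw table queries.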
Key insight: the file is portable. I’ve used variations with Cursor, Copilot, and Claude Code. The format may shift slightly, but the content—your project’s actual knowledge—stays the same.
The trade‑off nobody talks about
- Setup cost: 2–3 days to create context files for a large project, plus ongoing maintenance as patterns evolve.
- Limitations:
- Greenfield projects without established patterns still see AI hallucinate conventions.
- High‑stakes code (auth, payments, data migrations) still requires manual review regardless of context quality.
For the ~80% of code that follows established patterns, context files are the highest‑leverage investment I've found.
The question I’m still working through
How do you keep context files in sync with a fast‑moving codebase?
I’ve tried pre‑commit hooks that validate AGENTS.md against actual code patterns. It sort of works, but I’m curious—has anyone found a better approach? Or do you accept some drift and do periodic manual updates?
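For what it's worth, the drift check I've experimented with boils down to scanning source text for patterns that contradict the conventions file. Here's a hypothetical sketch, not my actual hook: the rules and regexes are illustrative, and a real setup would run this over staged files from a pre‑commit hook.

```javascript
// Hypothetical drift check: flag source patterns that contradict AGENTS.md.
// The rules below are illustrative examples, not a complete linter.
const rules = [
  {
    pattern: /\.offset\(/,
    message: 'offset pagination found; AGENTS.md says cursor-based',
  },
  {
    pattern: /knex\(['"]users['"]\)/,
    message: 'direct users table access; go through UserService',
  },
];

// Returns a list of violation messages for one file's source text.
function checkSource(source) {
  const violations = [];
  for (const { pattern, message } of rules) {
    if (pattern.test(source)) violations.push(message);
  }
  return violations;
}
```

Wired into a pre‑commit hook (plain `.git/hooks/pre-commit` or Husky), the commit fails whenever `checkSource` returns a non‑empty list. It catches the obvious drift, but it can't tell you when a convention changed on purpose and the file just hasn't caught up.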
P.S. I’m packaging the workflow patterns I use daily into a toolkit—project templates, AGENTS.md examples, verification scripts. If you’re interested, I share more at updatewave.gumroad.com.