Why AI Code Review Fails Without Project Context
Source: Dev.to
Background
Every AI code review starts the same way.
The bot opens your PR, scans the diff, flags a missing try/catch, suggests a more descriptive variable name, and notes that you could memoize that function for performance. All technically correct, but none of it useful.
Because it doesn’t know that fetchUser is an intentional naming convention your team enforces, that error handling is delegated to a global boundary, or that correctness is the priority over performance. The bot never knew your project.
This isn’t a model problem. It’s a context problem.
That’s what pi‑reviewer is built around — a GitHub Action and pi TUI extension that brings your project conventions into every review.
Before the agent sees a single line of diff, it reads:
- `AGENTS.md` or `CLAUDE.md` – your general project conventions: naming rules, architecture decisions, patterns to follow
- `REVIEW.md` – review‑specific rules: what to always flag, what to explicitly skip
Markdown links in those files are followed recursively. If AGENTS.md links to docs/api-conventions.md, that file gets inlined too. The agent sees the full picture, not just a summary.
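For illustration, a minimal `AGENTS.md` in this spirit might look like the following. The contents and the linked file are hypothetical, not taken from pi‑reviewer's docs; they just mirror the conventions this article mentions:

```markdown
# AGENTS.md

## Naming
- Data-loading functions are named `fetchX` (e.g. `fetchUser`), never `getX`.

## Error handling
- Do not add local try/catch blocks; errors propagate to the global error boundary.

## Priorities
- Correctness over performance. Do not suggest memoization unless profiling demands it.

See [API conventions](docs/api-conventions.md) for endpoint rules.
```

Because links are followed recursively, `docs/api-conventions.md` would be inlined into the review context as well.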
Review Guidelines
Always flag

- `fetch` calls missing a `res.ok` check before `.json()`
- API endpoints not versioned under `/api/v1/`
- Functions named `getData`, `doStuff`, or other generic names

Skip

- Formatting‑only changes
- Changes inside `pi-review.md`
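To make the first "always flag" rule concrete, here is a TypeScript sketch of the pattern it targets. The function names, endpoint path, and injected `fetchFn` parameter are illustrative, not part of pi‑reviewer:

```typescript
type FetchLike = (url: string) => Promise<Response>;

// Flagged by the rule: .json() is called without checking res.ok,
// so a 500 error body would be parsed as if it were a user.
async function fetchUserUnchecked(id: string, fetchFn: FetchLike = fetch) {
  const res = await fetchFn(`/api/v1/users/${id}`);
  return res.json();
}

// Passes the rule: res.ok is checked before .json().
async function fetchUser(id: string, fetchFn: FetchLike = fetch) {
  const res = await fetchFn(`/api/v1/users/${id}`);
  if (!res.ok) {
    throw new Error(`GET /api/v1/users/${id} failed: ${res.status}`);
  }
  return res.json();
}
```

A context-free reviewer sees nothing wrong with the first version; with `REVIEW.md` loaded, it is an explicit finding.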
Adding project context
Before: the agent flagged a missing type annotation on an internal helper, suggested renaming a variable, and noted a stray console.log.
After: it caught an unversioned API endpoint added in the same PR, flagged a fetch call missing the res.ok check (exactly the rule in REVIEW.md), and skipped the formatting‑only change in the generated file, as instructed.
Same model. Same diff. Completely different review.
Severity Filtering
Not every finding deserves equal weight. pi‑reviewer lets you filter by severity so you can focus on what matters.
```yaml
- uses: zeflq/pi-reviewer@main
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    pi-api-key: ${{ secrets.PI_API_KEY }}
    min-severity: warn
```
Set min-severity: warn and the agent skips INFO‑level suggestions entirely — both in what it generates and in what gets posted to the PR. You can also trigger a manual review from the GitHub Actions UI and choose the severity level on the fly.
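The manual trigger relies on GitHub's standard `workflow_dispatch` mechanism. A sketch of what the trigger block could look like (the input name and how pi‑reviewer consumes it are assumptions, not documented in this post):

```yaml
on:
  pull_request:
  workflow_dispatch:
    inputs:
      min-severity:
        description: Lowest severity to report
        type: choice
        options:
          - info
          - warn
          - critical
        default: warn
```

With that in place, the Actions UI shows a "Run workflow" button with a severity dropdown.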
Three tiers
- 🔴 CRITICAL – bugs and security issues
- 🟡 WARN – logic and type errors
- 🔵 INFO – style and suggestions
Running pi‑reviewer
pi‑reviewer runs on pi — a terminal‑based coding agent that sits on top of the pi mono platform. One PI_API_KEY works across all supported models and providers. You pick the model; pi routes the request. This means you’re not locked into a single provider — swap models without touching your workflow, and the review logic stays the same.
It also works over SSH. If your project lives on a remote machine, --ssh mode lets the agent fetch the diff and read your conventions directly on the remote — no local copy needed.
Comparison with Anthropic Code Review
Anthropic recently shipped Code Review, a managed PR review service built into Claude Code. It reads CLAUDE.md and REVIEW.md, runs multiple specialized agents against your full codebase in parallel, and posts inline findings with severity tags. It’s genuinely impressive, but it comes with constraints:
- Managed service on Anthropic’s infrastructure
- Requires a GitHub App installation, available only on Teams and Enterprise plans
- Reviews cost roughly $15–25 each
- Claude‑only, with no control over where it runs
pi‑reviewer runs in your own CI, costs only what your token usage costs, works with any model through pi mono, and needs nothing more than a secret and a workflow file. No GitHub App, no admin approval flow.
If you want to review locally before you push — without opening a PR at all — the pi TUI extension gives you /review in your terminal.
Both tools read your CLAUDE.md and REVIEW.md. The difference is where they run, what they cost, and how much control you keep.
Getting Started
```shell
npx github:zeflq/pi-reviewer init
```
The command generates a workflow file. Add your PI_API_KEY secret. Every PR from that point on gets a review that knows your project.
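The generated file isn't shown in the post, but a typical single‑step workflow for this action might look like the sketch below. The trigger, permissions, and job layout are assumptions; the step inputs match the severity snippet shown earlier:

```yaml
# .github/workflows/pi-reviewer.yml (sketch; actual output of init may differ)
name: pi-reviewer
on:
  pull_request:
permissions:
  contents: read
  pull-requests: write
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: zeflq/pi-reviewer@main
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          pi-api-key: ${{ secrets.PI_API_KEY }}
          min-severity: warn
```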
The context files — AGENTS.md, REVIEW.md — live in your repo. They are version‑controlled, team‑editable, and evolve with the project. The better you document your conventions, the better the reviews get.
Conclusion
The insight isn’t that AI can review code. It’s that AI review without project context is just another linter with better prose. The review that matters is the one that knows why your codebase looks the way it does — and checks the diff against that, not against some generic idea of good software.
Context is everything. Diff without it is just noise.