Contextual Code Review
Source: Dev.to
AI code reviews fail not because AI is weak, but because we ask the wrong kind of question without context.
Ask AI to review your code without context, and you’ll get a checklist of idealistic complaints:
- “Consider adding null checks here”
- “This method name could be more descriptive”
- “Security: validate user input”
- “Consider using dependency injection”
Some of these might be valid, but most are noise. The AI doesn’t know that this service runs in a protected internal environment, that performance matters more than readability, or that the “inconsistent naming” follows a legacy convention the team deliberately kept.
Without context, AI reviews against platonic ideals. With context, AI reviews against your actual requirements. This issue is most pronounced when reviewing human‑written legacy code—code written before AI assistance.
Legacy codebases often have:
- Inconsistent namespace conventions
- Class names that evolved organically
- Implicit agreements the team never documented
- Technical debt the team consciously accepted
AI sees all of these as “problems to fix,” but many are acknowledged trade‑offs, not oversights.
If the compiler can catch an issue, exclude it from the AI review. Every token spent on “missing semicolon” or “unused variable” is a token not spent on meaningful analysis—your linter and IDE already handle those.
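The exclusion rule above can be sketched as a pre-filter. This is a minimal, hypothetical example (the rule IDs and finding format are illustrative, not from any real tool): drop findings a linter already covers before the AI sees them.

```python
# Hypothetical sketch: filter out findings a linter or compiler already
# catches, so AI review tokens go to meaningful analysis only.
# The rule IDs and finding dicts below are illustrative.

LINTER_COVERED = {"unused-variable", "missing-semicolon", "undefined-name"}

def filter_findings(findings):
    """Keep only findings that a compiler or linter would not catch."""
    return [f for f in findings if f["rule"] not in LINTER_COVERED]

findings = [
    {"rule": "unused-variable", "msg": "x is never used"},
    {"rule": "race-condition", "msg": "shared counter mutated without a lock"},
]

kept = filter_findings(findings)  # only the race-condition finding survives
```

The same idea works at the prompt level: tell the AI explicitly which issue classes are already handled by tooling and out of scope.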
Review Perspectives
Specify the lens you want the AI to use; otherwise it will flag issues that are non‑issues in your context.
| Perspective | Typical Questions |
|---|---|
| Logic check | Does the code do what it’s supposed to do? |
| Security check | Are there vulnerabilities? Is input validation adequate? |
| Performance check | Is the resource usage optimal? What is the algorithmic complexity? |
| Thread safety | Could there be race conditions, deadlocks, or shared‑state issues? |
| Framework conformance | Does it follow the framework’s patterns? |
| Architecture fit | Does it fit the existing structure? |
A service running behind three layers of authentication doesn’t need input‑sanitization warnings. A batch job that runs once daily doesn’t need microsecond‑level optimization suggestions.
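One way to make the lens explicit is to encode each perspective from the table as a prompt instruction and force a single choice. This is a minimal sketch; the perspective names and wording are assumptions, not a fixed API.

```python
# Hypothetical sketch: map review perspectives to explicit prompt
# instructions so the AI reviews through one chosen lens at a time.

PERSPECTIVES = {
    "logic": "Check whether the code does what it is supposed to do.",
    "security": "Look for vulnerabilities and inadequate input validation.",
    "performance": "Assess resource usage and algorithmic complexity.",
    "thread_safety": "Look for race conditions, deadlocks, and shared-state issues.",
}

def build_review_prompt(code, perspective):
    """Build a single-perspective review prompt; reject unknown lenses."""
    if perspective not in PERSPECTIVES:
        raise ValueError(f"unknown perspective: {perspective}")
    return f"Review the following code. {PERSPECTIVES[perspective]}\n\n{code}"

prompt = build_review_prompt("def add(a, b): return a - b", "logic")
```

Requiring one perspective per pass keeps the output focused; run multiple passes if you genuinely need multiple lenses.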
Providing Context to AI
Before AI can review effectively, it needs to understand:
- Where does this service sit in the architecture?
- What security boundaries protect it?
- What are the performance requirements?
- What external interfaces does it connect to?
Example context
This service runs in an internal VPC with no external exposure.
It processes batch data nightly; latency is not critical.
Input comes from a validated upstream service.
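The example context above can be assembled into the prompt mechanically. The following sketch assumes a simple string template; the function name and layout are illustrative, not a prescribed format.

```python
# Hypothetical sketch: prepend system context (like the example above) to
# the review request, so the AI reviews against actual constraints.

CONTEXT = """\
This service runs in an internal VPC with no external exposure.
It processes batch data nightly; latency is not critical.
Input comes from a validated upstream service."""

def contextual_review_prompt(code, context=CONTEXT, perspective="logic"):
    """Combine system context, a review perspective, and the code."""
    return (
        f"System context:\n{context}\n\n"
        f"Review perspective: {perspective}\n\n"
        f"Code under review:\n{code}\n\n"
        "Flag only issues that matter given this context."
    )

p = contextual_review_prompt("x = load_batch()")
```

The closing instruction matters: it explicitly licenses the AI to stay silent about ideals the context rules out.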
For well‑known frameworks (ASP.NET, Spring, Rails), AI has abundant training data. For custom architectures, AI cannot grasp the full structure at once. In those cases:
- Human manages the scope – review proceeds layer by layer.
- Check whether additions/changes conform to the established structure.
- Don’t expect AI to understand your entire custom framework from a single file; build understanding incrementally.
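The layer-by-layer scoping can be expressed as a simple driver loop. This is a sketch under assumed layer names (`domain`, `application`, `infrastructure`); the actual AI call is stubbed out.

```python
# Hypothetical sketch: the human scopes the review layer by layer for a
# custom architecture, instead of handing the AI the whole framework at once.

LAYERS = ["domain", "application", "infrastructure"]  # assumed layer names

def review_in_layers(files_by_layer, review_fn):
    """Run review_fn once per layer, collecting findings per layer."""
    findings = {}
    for layer in LAYERS:
        files = files_by_layer.get(layer, [])
        if files:
            findings[layer] = review_fn(layer, files)
    return findings

# Stub standing in for an actual AI review call.
result = review_in_layers(
    {"domain": ["order.py"], "infrastructure": ["repo.py"]},
    lambda layer, files: f"reviewed {len(files)} file(s) in {layer}",
)
```

Each pass can also feed a short summary of the previous layer forward, building the incremental understanding the bullet above describes.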
Systematic Review Process
- Load system context – position, constraints, interfaces.
- Load structural context – architecture, conventions.
- Baseline – identify existing issues and mark them as acknowledged.
- Define review perspective – logic, security, performance, etc.
- Review new changes against the defined criteria.
This is not a prompt; it’s a preparation phase before the prompt.
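The five preparation steps can be captured as a structure you fill in before writing any prompt. The field names below are illustrative assumptions, not a standard schema.

```python
# Hypothetical sketch: the preparation phase as data assembled before the
# prompt. Step 5 (reviewing new changes) only runs once steps 1-4 are done.

from dataclasses import dataclass, field

@dataclass
class ReviewPreparation:
    system_context: str = ""      # step 1: position, constraints, interfaces
    structural_context: str = ""  # step 2: architecture, conventions
    baseline: list = field(default_factory=list)  # step 3: acknowledged issues
    perspective: str = ""         # step 4: logic, security, performance, ...

    def ready(self) -> bool:
        """True when the contexts and perspective are all loaded."""
        return bool(self.system_context and self.structural_context and self.perspective)

prep = ReviewPreparation(
    system_context="internal VPC, nightly batch job",
    structural_context="layered architecture, legacy naming deliberately kept",
    baseline=["inconsistent namespaces (accepted debt)"],
    perspective="logic",
)
```

Anything in `baseline` gets passed to the AI as "known and accepted, do not report", which is what keeps the review output focused on new changes.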
Aligning with Quality Models (ISO 25010)
Select the characteristics relevant to your review; don’t check everything every time.
| Characteristic | Check Focus |
|---|---|
| Functional correctness | Does it meet requirements? |
| Performance efficiency | Resource usage, response time |
| Compatibility | Coexistence, interoperability |
| Usability | API clarity, error messages |
| Reliability | Fault tolerance, recoverability |
| Security | Confidentiality, integrity |
| Maintainability | Modularity, testability |
| Portability | Adaptability, installability |
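Selecting a focused subset of the table above can be made explicit in code. A minimal sketch, assuming the eight ISO 25010 characteristics as keys and the check-focus phrases as values:

```python
# Hypothetical sketch: pick only the quality characteristics relevant to
# this review, rather than checking all eight every time.

ISO_25010 = {
    "functional_correctness": "Does it meet requirements?",
    "performance_efficiency": "Resource usage, response time",
    "compatibility": "Coexistence, interoperability",
    "usability": "API clarity, error messages",
    "reliability": "Fault tolerance, recoverability",
    "security": "Confidentiality, integrity",
    "maintainability": "Modularity, testability",
    "portability": "Adaptability, installability",
}

def focused_checklist(selected):
    """Return only the chosen characteristics; reject unknown names."""
    unknown = set(selected) - set(ISO_25010)
    if unknown:
        raise KeyError(f"not a listed characteristic: {sorted(unknown)}")
    return {name: ISO_25010[name] for name in selected}

# An internal nightly batch service: correctness and maintainability matter;
# usability and portability can be skipped for this review.
checklist = focused_checklist(["functional_correctness", "maintainability"])
```

The resulting checklist then becomes part of the review perspective passed to the AI.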
Decision Making After Baseline Analysis
- If the surrounding code is highly inconsistent, demanding strict consistency from new additions may create friction without value.
- If consistency is important, accept the baseline debt but ensure new code does not worsen it.
This judgment call is a human decision, not something to delegate entirely to AI.
Approach vs. Result
| Approach | Result |
|---|---|
| “Review this code” (no context) | Idealistic noise |
| Contextual review (with defined perspective) | Relevant findings |
Key Takeaways
- Exclude compiler‑checkable issues; let linters handle them.
- Define the review perspective explicitly.
- Load both system and structural context before prompting.
- Establish a baseline of acknowledged technical debt.
- Use quality characteristics (e.g., ISO 25010) as a focused checklist.
By providing context, AI transforms from a pedantic critic into a useful reviewer. This insight is part of the Beyond Prompt Engineering series, which explores how structural and cultural approaches outperform pure prompt optimization in AI‑assisted development.