Anthropic launches code review tool to check flood of AI-generated code

Published: March 9, 2026 at 03:41 PM EDT
3 min read
Source: TechCrunch

When it comes to coding, peer feedback is crucial for catching bugs early, maintaining consistency across a codebase, and improving overall software quality.

The rise of “vibe coding” — using AI tools that take plain‑language instructions and quickly generate large amounts of code — has changed how developers work. While these tools have sped up development, they have also introduced new bugs, security risks, and poorly understood code.

Anthropic’s Code Review tool

Anthropic’s solution is an AI reviewer designed to catch bugs before they make it into the software’s codebase. The new product, called Code Review, launched Monday in Claude Code.

“We’ve seen a lot of growth in Claude Code, especially within the enterprise, and one of the questions that we keep getting from enterprise leaders is: Now that Claude Code is putting up a bunch of pull requests, how do I make sure that those get reviewed in an efficient manner?” — Cat Wu, Anthropic’s head of product (TechCrunch)

Pull requests are a mechanism that developers use to submit code changes for review before those changes make it into the software. Wu noted that Claude Code has dramatically increased code output, which in turn has created a bottleneck in pull‑request reviews.

How Code Review works

  • Integration – Once enabled, Code Review integrates with GitHub and automatically analyzes pull requests, leaving comments directly on the code with explanations of potential issues and suggested fixes.
  • Focus on logic – The tool prioritizes logical errors over style concerns, aiming to surface the highest‑priority problems that are immediately actionable.
  • Reasoning – The AI explains its reasoning step‑by‑step, outlining what it thinks the issue is, why it might be problematic, and how it could be fixed.
  • Severity labeling – Issues are color‑coded: red for highest severity, yellow for potential problems worth reviewing, and purple for issues tied to pre‑existing code or historical bugs.
  • Multi‑agent architecture – Multiple agents examine the codebase from different perspectives in parallel; a final agent aggregates and ranks findings, removing duplicates and prioritizing the most important items.
  • Security analysis – A light security scan is included, and engineering leads can customize additional checks based on internal best practices. Anthropic’s newer Claude Code Security provides deeper security analysis.
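The multi-agent flow described above — several agents flagging issues in parallel, then a final agent deduplicating, keeping the highest reported severity, and ranking findings — can be illustrated with a small sketch. This is a hypothetical illustration of the general pattern, not Anthropic's actual implementation; the `Finding` and `Severity` types and the `aggregate` function are invented for this example.

```python
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    RED = 3     # highest severity
    YELLOW = 2  # potential problem worth reviewing
    PURPLE = 1  # tied to pre-existing code or historical bugs


@dataclass(frozen=True)
class Finding:
    file: str
    line: int
    message: str
    severity: Severity


def aggregate(findings_per_agent):
    """Merge findings from several agents run in parallel.

    Deduplicates on (file, line, message), keeps the highest severity
    any agent assigned, and returns findings ranked most-severe first.
    """
    merged = {}
    for findings in findings_per_agent:
        for f in findings:
            key = (f.file, f.line, f.message)
            if key not in merged or f.severity.value > merged[key].severity.value:
                merged[key] = f
    return sorted(merged.values(), key=lambda f: -f.severity.value)
```

For example, if two agents both report the same off-by-one bug at different severities, the aggregator keeps a single copy at the higher severity and lists it ahead of lower-priority findings.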

Pricing and resource considerations

The multi‑agent approach can be resource‑intensive. Pricing is token‑based and varies with code complexity; Wu estimated each review would cost $15–$25 on average. She described the offering as a premium experience that becomes necessary as AI tools generate increasing volumes of code.

“Code Review is something that’s coming from an insane amount of market pull. As engineers develop with Claude Code, they’re seeing the friction to creating a new feature decrease, and they’re seeing a much higher demand for code review.” — Cat Wu

On the same Monday, Anthropic filed two lawsuits against the Department of Defense after the agency designated Anthropic as a supply‑chain risk. The dispute is expected to push Anthropic to rely more heavily on its booming enterprise business, which has seen subscriptions quadruple since the start of the year. Claude Code’s run‑rate revenue has surpassed $2.5 billion since launch, according to the company.

“This product is very much targeted towards our larger‑scale enterprise users, so companies like Uber, Salesforce, Accenture, who already use Claude Code and now want help with the sheer amount of pull requests that it’s helping produce,” Wu said.
