Anthropic Launches Claude Code Security for AI-Powered Vulnerability Scanning
Source: The Hacker News
Introduction

Anthropic, an artificial intelligence (AI) company, has begun rolling out a new security feature for Claude Code that can scan a user’s software codebase for vulnerabilities and suggest patches. The capability, called Claude Code Security, is currently available in a limited research preview to Enterprise and Team customers.
“It scans codebases for security vulnerabilities and suggests targeted software patches for human review, allowing teams to find and fix security issues that traditional methods often miss,” Anthropic said in a Friday announcement.
Feature Overview
Claude Code Security is positioned as an AI‑driven tool to help developers discover and remediate security flaws that might be missed by conventional static analysis. By automating the detection of vulnerable patterns and proposing concrete patches, the service aims to give security teams a faster, more comprehensive view of their codebase's security posture.
Security Implications

As AI agents become more capable of uncovering hidden vulnerabilities, the same technology could be weaponized by threat actors to automate exploit discovery. Anthropic notes that Claude Code Security is designed to counter such AI‑enabled attacks by providing defenders with an advantage and raising the overall security baseline.
Technical Capabilities
Anthropic claims that Claude Code Security goes beyond traditional static analysis:
- Reasoning‑based analysis: The system evaluates the codebase similarly to a human security researcher, understanding component interactions and data flows.
- Dynamic vulnerability detection: It can flag issues that rule‑based tools might miss, thanks to its ability to reason about context and logic.
- Multi‑stage verification: Each identified vulnerability undergoes several checks to filter out false positives.
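The announcement does not describe how this multi‑stage verification is implemented; the sketch below is purely illustrative, showing the general pattern of running each candidate finding through a sequence of independent checks and keeping only those that pass every stage. All names (`Finding`, `reachable`, `exploitable`) are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """A candidate vulnerability report (illustrative fields only)."""
    file: str
    description: str
    checks_passed: list = field(default_factory=list)

# Hypothetical verification stages: each one filters out a class of
# false positives before a finding is surfaced to a human.
def reachable(finding):
    # Stage 1: is the flawed code on a live execution path?
    return "dead_code" not in finding.description

def exploitable(finding):
    # Stage 2: can the flaw actually be triggered by attacker input?
    return "sanitized input" not in finding.description

STAGES = [("reachability", reachable), ("exploitability", exploitable)]

def verify(findings):
    """Keep only findings that survive every verification stage."""
    confirmed = []
    for f in findings:
        if all(check(f) for _, check in STAGES):
            f.checks_passed = [name for name, _ in STAGES]
            confirmed.append(f)
    return confirmed

candidates = [
    Finding("auth.py", "SQL built from raw user input"),
    Finding("legacy.py", "buffer overflow in dead_code path"),
]
print([f.file for f in verify(candidates)])  # → ['auth.py']
```

The point of structuring verification as independent stages is that each check can be tuned separately, and a finding's `checks_passed` trail documents why it was surfaced.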

Verification Process & Human‑in‑the‑Loop
Each vulnerability is assigned a severity rating to help teams prioritize remediation. The findings are presented in the Claude Code Security dashboard, where analysts can:
- Review the identified issue.
- Examine the AI‑suggested patch.
- Approve or reject the recommendation.
Anthropic emphasizes a human‑in‑the‑loop (HITL) approach:
“Because these issues often involve nuances that are difficult to assess from source code alone, Claude also provides a confidence rating for each finding. Nothing is applied without human approval: Claude Code Security identifies problems and suggests solutions, but developers always make the call.”
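The workflow described above, where each finding carries a severity and a confidence rating but no patch is applied without explicit sign‑off, can be sketched as a simple approval gate. The field names and `apply_patch` function below are assumptions for illustration; Anthropic has not published the dashboard's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SecurityFinding:
    # Illustrative fields; the real dashboard schema is not public.
    title: str
    severity: str          # e.g. "critical", "high", "medium", "low"
    confidence: float      # model's confidence in the finding, 0.0–1.0
    suggested_patch: str

SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def triage(findings):
    """Order findings by severity so teams can prioritize remediation."""
    return sorted(findings, key=lambda f: SEVERITY_RANK[f.severity])

def apply_patch(finding: SecurityFinding, approved_by: Optional[str]) -> bool:
    """Apply a suggested patch only after explicit human approval."""
    if approved_by is None:
        return False  # nothing is applied without a human in the loop
    print(f"Applying patch for {finding.title!r}, approved by {approved_by}")
    return True

queue = triage([
    SecurityFinding("Path traversal in file upload", "medium", 0.71,
                    "normalize and validate paths"),
    SecurityFinding("SQL injection in login handler", "critical", 0.92,
                    "use parameterized queries"),
])
print(queue[0].title)                              # critical issue first
assert apply_patch(queue[0], approved_by=None) is False   # held for review
assert apply_patch(queue[0], approved_by="alice") is True # human approved
```

The design choice mirrored here is that the AI only ever populates the queue and proposes patches; the state transition that changes code is reachable solely through a human decision.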