Endor Labs launches free tool AURI after study finds only 10% of AI-generated code is secure
Source: VentureBeat
Endor Labs Launches AURI
Endor Labs, the application‑security startup backed by more than $208 million in venture funding, today launched AURI, a platform that embeds real‑time security intelligence directly into the AI coding tools that are reshaping how software gets built. The product is available free to individual developers and integrates natively with popular AI coding assistants—including Cursor, Claude, and Augment—through the Model Context Protocol (MCP).
The Security Crisis Hiding Inside the AI Coding Revolution
While 90% of development teams now use AI coding assistants, research published in December by Carnegie Mellon University, Columbia University, and Johns Hopkins University found that leading models produce functionally correct code only about 61% of the time, and just 10% of that output is both functional and secure.
“Even though AI can now produce functionally correct code 61% of the time, only 10% of that output is both functional and secure,” Endor Labs CEO Varun Badhwar told VentureBeat in an exclusive interview.
“These coding agents were trained on open‑source code from across the internet, so they’ve learned best practices — but they’ve also learned to replicate a lot of the same security problems of the past.”
That gap between code that works and code that is safe defines the market AURI is designed to capture — and the urgency behind its launch.
Why the Gap Exists
- Training data – AI coding models are trained on massive repositories of open‑source code scraped from the internet.
- Mixed quality – Those repositories contain both best practices and well‑documented vulnerabilities, insecure patterns, and flaws that may remain undiscovered for years.
- Feedback loop – AI tools generate code at unprecedented speed, often mirroring insecure patterns, while security teams scramble to keep up. Traditional scanning tools, built for human‑speed development, are increasingly overmatched.
“Every day, every hour, new vulnerabilities are found in software that might have been written 5, 10, 12 years ago — and that information isn’t easily available to the models,” Badhwar explained.
“If you started filtering out anything that ever had a vulnerability, you’d have no code left to train on.”
How AURI Traces Vulnerabilities Through Every Layer of an Application
AURI’s core technical differentiator is what Endor Labs calls its “code context graph,” a deep, function‑level map of how four layers of an application interconnect:
- First‑party code
- Open‑source dependencies
- Container layers
- AI models
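One way to picture such a graph is as directed edges between functions and artifacts across layers, with reachability queries that cross layer boundaries. This is a minimal illustrative sketch; all node names are hypothetical and this is not Endor Labs’ actual implementation.

```python
# Hypothetical sketch of a "code context graph": nodes are functions or
# artifacts in any layer (first-party code, open-source dependencies,
# container layers, AI models); edges record which node uses which.
from collections import defaultdict

graph = defaultdict(set)

def link(user: str, used: str) -> None:
    """Record that `user` depends on `used` (a call, import, or layer)."""
    graph[user].add(used)

# First-party code calling into an open-source dependency...
link("app:checkout.charge_card", "dep:requests.post")
# ...which ships inside a specific container layer...
link("dep:requests.post", "container:python-3.12-slim")
# ...alongside a bundled AI model used elsewhere in the app.
link("app:support.summarize_ticket", "model:local-summarizer")

def layers_reached(start: str) -> set:
    """All nodes transitively reachable from `start`, across every layer."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen
```

A query like `layers_reached("app:checkout.charge_card")` would surface both the dependency function and the container layer it lives in, which is the kind of cross-layer tracing the article describes.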
What Sets AURI Apart
- Granular reachability analysis – Unlike competitors such as Snyk and GitHub Dependabot, which flag every known vulnerability in imported libraries, AURI pinpoints exactly how, where, and in what context a vulnerable component is used, down to the specific line of code.
- Example – A developer may import a large library like the AWS SDK but only call two services comprising ~10 lines of code. The remaining ~99,000 lines are unreachable. Traditional tools flag every vulnerability in the entire SDK; AURI trims away irrelevant findings.
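The AWS SDK example above boils down to a call-graph traversal: a vulnerability only matters if the application can actually reach the vulnerable function. A toy sketch, assuming hypothetical function names and a made-up vulnerability list (not AURI’s real analysis):

```python
# Toy reachability-based triage: given a call graph and known-vulnerable
# functions in a dependency, report only vulnerabilities the app can reach.
CALL_GRAPH = {
    "app.main": {"sdk.s3.upload", "sdk.sqs.send"},   # only 2 services used
    "sdk.s3.upload": {"sdk.internal.sign"},
    "sdk.sqs.send": set(),
    # Thousands of other SDK functions exist but are never called:
    "sdk.ec2.launch": {"sdk.internal.vulnerable_parser"},
}

VULNERABLE = {"sdk.internal.vulnerable_parser", "sdk.internal.sign"}

def reachable_from(entry: str) -> set:
    """Every function transitively callable from `entry`."""
    seen, stack = set(), [entry]
    while stack:
        fn = stack.pop()
        for callee in CALL_GRAPH.get(fn, ()):
            if callee not in seen:
                seen.add(callee)
                stack.append(callee)
    return seen

def triage(entry: str) -> set:
    """Vulnerable functions that are actually reachable from `entry`."""
    return reachable_from(entry) & VULNERABLE
```

Here a naive scanner would flag both vulnerabilities; the reachability filter keeps only `sdk.internal.sign`, since nothing in the app ever calls into the EC2 code path.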
Investment in Expertise
- 13 PhDs specializing in program analysis (formerly at Meta, GitHub, Microsoft) were hired.
- The company has indexed billions of functions across millions of open‑source packages and created over half a billion embeddings to identify the provenance of copied code, even when function names or structures have changed.
AI‑Powered Detection & Remediation
- Deterministic analysis combined with agentic AI reasoning.
- Specialized agents collaborate to detect, triage, and remediate vulnerabilities automatically.
- Multi‑file call graphs and data‑flow analysis uncover complex business‑logic flaws spanning multiple components.
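The data-flow analysis mentioned above is often modeled as source-to-sink taint tracking: flag a finding only when untrusted input reaches a sensitive operation without being sanitized along the way. A toy sketch with assumed source and sink names, not Endor Labs’ actual engine:

```python
# Toy cross-component taint tracking: each observed flow records a source,
# a sink, and whether a sanitizer was applied on the path between them.
FLOWS = [  # (source_fn, sink_fn, sanitized)
    ("http.request.args", "db.execute", False),  # untrusted -> SQL: finding
    ("http.request.args", "db.execute", True),   # same path, but sanitized
    ("config.read", "db.execute", False),        # trusted source: no finding
]

SOURCES = {"http.request.args"}  # where untrusted data enters
SINKS = {"db.execute"}           # sensitive operations

def findings(flows):
    """Flows where tainted data reaches a sink without sanitization."""
    return [
        (src, sink)
        for src, sink, sanitized in flows
        if src in SOURCES and sink in SINKS and not sanitized
    ]
```

Of the three flows only the first is reported, which mirrors how data-flow analysis can separate a real injection path from superficially similar but safe code.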
The result, Badhwar said, is an average 80%–95% reduction in security findings for enterprise customers, trimming away what he called “tens of millions of dollars a year in developer productivity” lost to investigating false positives.
A Free Tier for Developers, a Paid Platform for the Enterprise
In a strategic move aimed at rapid adoption, Endor Labs is offering AURI’s core functionality free to individual developers through an MCP server that integrates directly with popular IDEs, including VS Code, Cursor, and Windsurf.
The free tier requires no credit card and provides:
- Real‑time vulnerability insights while coding
- Seamless integration via the Model Context Protocol
- Access to the code context graph for personal projects
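MCP-aware clients such as Cursor and Claude typically register external tools through a JSON configuration block. As a hedged sketch only, wiring a locally running AURI MCP server might look roughly like the following; the `endorctl` command name and its arguments are illustrative assumptions, not documented Endor Labs syntax, and the exact registration keys vary by client:

```json
{
  "mcpServers": {
    "auri": {
      "command": "endorctl",
      "args": ["mcp", "serve"]
    }
  }
}
```

Consult each tool’s own MCP documentation for the real server name, command, and configuration file location.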
Enterprise customers can upgrade to a paid plan that adds:
- Organization‑wide policy enforcement
- Advanced reporting and compliance dashboards
- Dedicated support and SLAs
Bottom Line
AURI aims to close the functional‑vs‑secure gap that currently plagues AI‑generated code. By delivering deep, context‑aware security intelligence directly inside the developer’s workflow, Endor Labs hopes to make secure AI‑assisted development the new default.
Endor Labs’ AURI: A Freemium Approach to AI‑Assisted Application Security
The Core Offering
- Free tier – No sign‑up process, no complex registration.
- Key promise – “No policy, no administration, no customization. It just helps your code‑generation tools stop creating more vulnerabilities.” – Badhwar
Privacy‑first architecture
- The free product runs entirely on the developer’s machine.
- Only non‑proprietary vulnerability intelligence is pulled from Endor Labs’ servers.
- “All of your code stays local and is scanned locally. It never gets copied into AURI or Endor Labs or anything else.” – Badhwar
Enterprise Edition
| Feature | Description |
|---|---|
| Customization | Full policy configuration and role‑based access control for thousands of developers. |
| CI/CD Integration | Seamless hooks into pipelines across the organization. |
| Pricing | Based on the number of developers and scan volume. |
| Deployment Options | Local scanning; ephemeral cloud containers; on‑premises Kubernetes clusters with tenant isolation. |
| Flexibility Claim | “The most any vendor offers in this space.” – Badhwar |
Go‑to‑Market Strategy
- Freemium model mirrors the playbooks of GitHub and Atlassian: win individual developers first, then expand into their organizations.
- Rationale: AI coding agents are proliferating across every team; Endor Labs must be present where code is written, not hidden behind a procurement process.
“Over 97% of vulnerabilities flagged by our previous tool weren’t reachable in our application,” said Travis McPeak, who handles security at Cursor. “AURI by Endor Labs shows the few vulnerabilities that are impactful, so we patch quickly, focusing on what matters.”
Why Independence from AI Coding Tools Matters
- Market landscape: Snyk, GitHub Advanced Security, and a wave of startups compete for developer attention.
- AI model providers are entering the fray: Anthropic recently announced a code‑security product built into Claude, putting model makers in direct competition with independent security vendors.
Badhwar’s perspective
- Anthropic’s move is a validation of the problem, not a threat.
- The real question: Do enterprises want to trust the same tool that generates code to also review it?
“Claude is not going to be the only tool you use for agentic coding. Are you going to use a separate security product for Cursor, a separate one for Claude, a separate one for Augment, and another for Gemini Code Assist?” – Badhwar
Three guiding principles for security in the agentic era
- Independence – Security review must be separate from the code‑generation tool.
- Reproducibility – Findings must be consistent, not probabilistic.
- Verifiability – Every finding must be backed by evidence.
Purely LLM‑based approaches are “completely non‑deterministic tools that you have no control over in terms of having verifiability of findings, consistency.” – Badhwar
AURI’s hybrid approach
- Uses LLMs for reasoning, explanation, and contextualization.
- Couples them with deterministic tools that provide the consistency enterprises require.
- Simulates upgrade paths and recommends remediation routes that avoid breaking changes.
- Developers can apply fixes themselves or route them to AI coding agents with confidence that the changes have been deterministically validated.
Real‑World Results
- February 2026: AURI identified seven zero‑day vulnerabilities in OpenClaw, an agentic AI assistant. Six were patched by the OpenClaw team, including a high‑severity SSRF, a path traversal, and an authentication bypass.
  - “These are zero days. They’ve never been found, but AURI did an incredible job of finding those.” – Badhwar
- Ongoing detection: active malware campaigns in ecosystems such as NPM, including long‑term tracking of the Shai‑Hulud campaign.
Funding & Scale
| Metric | Detail |
|---|---|
| Series B | $93 M (oversubscribed) – April 2025, led by DFJ Growth; participants: Salesforce Ventures, Lightspeed Venture Partners, Coatue, Dell Technologies Capital, Section 32, Citi Ventures |
| Growth | 30× annual recurring revenue (ARR) growth; 166% net revenue retention since the Series A (18 months earlier) |
| Usage | Protects >5M applications; >1M scans per week |
| Customers | OpenAI, Cursor, Dropbox, Atlassian, Snowflake, Robinhood, plus dozens of enterprises using AURI for FedRAMP, NIST, and European Cyber Resilience Act compliance |
The Bigger Question
Can security tooling evolve fast enough to keep pace with AI‑driven development?
Critics of “agentic security” argue that the rapid emergence of autonomous software agents could outstrip traditional security processes. Endor Labs’ AURI aims to answer that challenge by combining deterministic analysis with LLM‑driven insight, offering a path forward for enterprises that need both speed and assurance.
Security Concerns Rise as AI Agents Gain Broader Access to Critical Systems
“I’ve seen this play out when I was building cloud security products, and people were fearful of moving to AWS,” Badhwar said. “There was a perception of control when it was in your data center. Yet, guess what? That was the biggest movement of its time, and we as an industry built the right technology and security tooling and visibility around it to make ourselves comfortable.”
Badhwar acknowledges the industry’s rapid push to grant AI agents permissions across critical systems without fully understanding the risks, but he argues that resistance is futile.
For Badhwar, the most exciting implication of agentic development is not the new risks it creates but the old problems it can finally solve. Security teams have spent decades struggling to get developers to prioritize fixing vulnerabilities over building features. AI agents, he argues, do not have that problem—if you give them the right instructions and the right intelligence, they simply execute.
“Security has always struggled for lack of a developer’s attention,” Badhwar said. “But we think you can get an AI agent that’s writing software’s attention by giving them the right context, integrating into the right workflows, and just having them do the right thing for you, so you don’t take an automation opportunity and make it a human’s problem.”
It is a characteristically optimistic framing from a founder who has built his career at the intersection of tectonic technology shifts and the security gaps they leave behind. Whether AURI can deliver on that vision at the scale the AI coding revolution demands remains to be seen. But in a world where machines are writing code faster than humans can review it, the alternative—hoping the models get security right on their own—is a bet few enterprises can afford to make.