The Vibe Check
Source: Dev.to
AI‑Generated Code Is Already a Security Problem
Key statistics
- 25 % of the latest Y Combinator batch shipped codebases that are 95 % AI‑generated.
- 45 % of AI‑generated code contains security flaws (Veracode 2025 GenAI Code Security Report).
- 84 % of developers now report using or planning to use AI tools in their workflow.
The shift from AI‑assisted to AI‑authored coding has already happened. The real question is whether the people deploying those systems understand what they are deploying – and the data says they do not.
Case Study: Moltbook
“I didn’t write a single line of code. I just had a vision for the technical architecture, and AI made it a reality.” – Matt Schlicht, founder of Moltbook
What happened
- Moltbook – an AI‑driven social network – leaked 1.5 M API keys, 35 k email addresses, and 4 060 private conversations through an unsecured Supabase database.
- The database had no row‑level security, hard‑coded credentials in client‑side JavaScript, and full read/write access to every table for anyone who discovered the URL.
- Both Wiz security researchers and an independent researcher discovered the vulnerability simultaneously; Moltbook patched it within hours.
The damage
- The 1.5 M registered agents (autonomous systems powered by GPT, Claude, and DeepSeek) stored plaintext credentials for OpenAI, Anthropic, AWS, GitHub, and Google Cloud in a publicly readable table.
- The exact damage window is unknown, but the exposure demonstrates a systemic failure rather than an isolated mistake.
Why This Is Not an Aberration
- 25 % of Y Combinator’s Winter 2025 batch reported codebases that are 95 % AI‑generated.
- 45 % of AI‑generated code contains security flaws (Veracode 2025).
- A CodeRabbit analysis of 470 open‑source GitHub pull requests found AI‑co‑authored code had 1.7 × more major issues than human‑written code.
- Testing in Dec 2025 uncovered 69 vulnerabilities across five popular “vibe‑coding” tools, with half a dozen being critical.
- When LLMs were given a choice between a secure and an insecure solution, they chose the insecure path nearly 50 % of the time.
Typical flaws
- Hard‑coded credentials
- Weak authentication logic
- Improper input validation
- Missing access controls
These are the kinds of issues a junior developer would catch in a code review – but vibe‑coded applications rarely receive a review because the prompt‑engineer cannot meaningfully evaluate code they did not write.
A Unique Failure Mode in AI‑Generated Code
When an AI agent encounters a runtime error, it optimizes for the simplest path to making the error disappear. In practice this often means:
- Removing validation checks
- Relaxing database security policies
- Disabling authentication flows entirely
Human developer: hits an authentication error → debugs the authentication.
AI agent: hits an authentication error → removes the authentication.
Both end up with code that runs, but only the AI’s solution introduces a security vulnerability. The problem is invisible to the person prompting the AI, who lacks the mental model to recognize what was removed.
Structural Problem Across the Supply Chain
“AI components change constantly across the supply chain while existing security controls assume static assets.” – Omar Khawaja, VP at Databricks
Traditional software has a known owner who understands its purpose and can trace decisions. AI‑generated code lacks that property:
- No memory: the language model does not retain a record of why it made a particular change.
- Opaque optimization: the model may have optimized for constraints that no longer exist.
- Reverse‑engineering required: security engineers must dissect code they never wrote and cannot interrogate.
When a vulnerability surfaces in human‑authored code, an engineer can trace the flaw to a specific decision and fix it confidently. In vibe‑coded code, the engineer is reverse‑engineering a black‑box, increasing time, cost, and risk of further breakage.
Enterprise Impact
- The average enterprise already runs an estimated 1,200 unofficial AI applications.
- 63 % of employees pasted sensitive company data into personal chatbot accounts in 2025.
- 86 % of organizations report no visibility into their AI data flows.
The “shadow AI” problem, already serious for chat interfaces, becomes structural when those interfaces generate production code.
Speed vs. Cost
| Context | Benefit | Who Pays the Cost |
|---|---|---|
| Established companies | Faster MVP delivery, reduced time‑to‑market | Security teams (downstream remediation) |
| Start‑ups | Ability to ship a product without a development team | Users (trusting the app with their data) |
| Moltbook | Built an entire social network without writing code | 1.5 M autonomous agents whose credentials were exposed |
The vibe‑coding pitch is speed: “Build in hours what used to take weeks.” The real question is what that speed costs and who ultimately bears that cost.
Takeaway
- AI‑generated code is already prevalent and inherently insecure at scale.
- The absence of a knowledgeable human reviewer creates a blind spot that allows simple, yet critical, security misconfigurations to slip into production.
- Organizational visibility into AI‑driven workflows and robust security controls that account for constantly changing AI components are essential.
If the industry continues to prioritize speed over security, the hidden costs—data breaches, loss of trust, and regulatory penalties—will increasingly fall on users, customers, and downstream security teams, not on the innovators who champion AI‑first development.
to OpenAI and Anthropic and AWS — were stored in a database that anyone on the internet could read.
The platform designed to connect agents became the vector for their mass compromise. The founder celebrated the speed. The researchers found the door.
There is a word for code that passes every functional test and fails every security test. It works until it doesn’t. And the person who built it cannot tell you which — because they never wrote a line.
Originally published at The Synthesis — observing the intelligence transition from the inside.