Slopsquatting: AI Hallucinations as Supply Chain Attacks
Source: Dev.to
One in five AI‑generated code samples recommends a package that does not exist.
Attackers are registering those phantom names on npm and PyPI with malware inside. The term for this is slopsquatting, and it is already happening.
What is slopsquatting?
- Typosquatting bets on human misspellings.
- Slopsquatting bets on AI hallucinations.
The term was coined by Seth Larson, Security Developer‑in‑Residence at the Python Software Foundation, to describe a specific attack:
Register the package names that LLMs consistently fabricate, then wait for developers to install them on an AI’s recommendation.
Evidence
| Study | Scope | Key Findings |
|---|---|---|
| USENIX Security 2025 | 576 000 code samples across 16 language models | ~20 % of samples recommend at least one non-existent package. Hallucinations fall into three categories: 51 % pure fabrications (no real basis), 38 % conflations of real packages (e.g., `express-mongoose`), 13 % typo variants of legitimate names. |
| Consistency | Repeated queries (10×) | 43 % of hallucinated names appeared every time; 58 % appeared more than once. |
Implication: An attacker does not need to guess which names an LLM will invent. They ask the same question a few times, collect the phantom names, and register them.
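The same repeatability can be turned around by defenders: run the prompt several times, pool the suggested package names, and flag recurrent ones for a registry check. A minimal sketch of that pooling step (the function name and threshold are my own, not from any published tool):

```python
from collections import Counter

def recurrent_names(runs: list[list[str]], min_runs: int = 2) -> set[str]:
    """Pool package names suggested across repeated identical prompts and
    keep those appearing in at least `min_runs` runs -- per the study,
    43% of hallucinated names reappear on every query."""
    counts = Counter(name for run in runs for name in set(run))
    return {name for name, seen in counts.items() if seen >= min_runs}
```

Names that recur but do not resolve on npm or PyPI are slopsquatting candidates worth blocking or monitoring.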
Why existing defenses fail
- Traditional typosquatting registers names like `crossenv`, hoping someone mistypes `cross-env`.
- Registry defenses flag new names that are too close to popular ones.
- Hallucinated names bypass this entirely – they are novel strings that no filter anticipates because there is no real package to start from.
Real‑world examples
| Package | Ecosystem | Outcome |
|---|---|---|
| `huggingface-cli` | PyPI | Registered as an empty placeholder (no malicious code). Within three months it amassed 30 000+ organic downloads from developers (or their AI tools) running `pip install huggingface-cli`. |
| `unused-imports` | npm | Confirmed malicious; still pulling ≈ 233 downloads/week (early 2026). The legitimate package is `eslint-plugin-unused-imports`. |
| `react-codeshift` | npm | Conflation of `jscodeshift` + `react-codemod`. Appeared in LLM-generated agent-skill files committed to GitHub; no human planted it. Propagated through automated code generation. |
Payloads are typically post‑install scripts that steal API keys, cloud tokens, and SSH keys. Newer variants use npm’s URL‑based dependency feature to fetch malicious code at install time, leaving package.json looking clean.
Cross‑ecosystem angle: 8.7 % of hallucinated Python package names turned out to be valid JavaScript packages. An attacker can register the same phantom name on both npm and PyPI, catching traffic from both ecosystems with a single fabricated name.
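Checking a single name against both ecosystems is cheap: each registry exposes a public JSON metadata endpoint, and a 404 means the name is unregistered. A small sketch building the lookup URLs (the endpoints are the registries' real APIs; the helper name is mine):

```python
def registry_urls(name: str) -> dict[str, str]:
    """Metadata endpoints for the same package name on both registries.
    Fetching these with any HTTP client: 200 = registered, 404 = free."""
    return {
        "npm": f"https://registry.npmjs.org/{name}",
        "pypi": f"https://pypi.org/pypi/{name}/json",
    }
```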
Defenses – What works today
1. Lock your dependencies
```shell
# npm / yarn / pnpm
npm ci            # installs from package-lock.json; fails if it is out of sync with package.json

# Poetry (Python)
poetry lock       # pin exact versions and hashes in poetry.lock
poetry install    # install only what the lockfile pins
```
Lockfiles pin exact versions and checksums, so a later malicious package with the same name does not affect existing installs.
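The checksum part is what blocks a later same-name republish: npm lockfiles store an `integrity` field in Subresource Integrity format, and an install fails if the fetched tarball's hash differs. A sketch of that comparison (helper names are mine; the sha512-plus-base64 encoding matches npm's SRI format):

```python
import base64
import hashlib

def sri_sha512(tarball: bytes) -> str:
    """npm-style Subresource Integrity string for a package tarball."""
    return "sha512-" + base64.b64encode(hashlib.sha512(tarball).digest()).decode()

def matches_lockfile(tarball: bytes, locked_integrity: str) -> bool:
    """True only if the downloaded bytes match what the lockfile pinned."""
    return sri_sha512(tarball) == locked_integrity
```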
2. Verify before you install
- npm: `npm info <package>` shows publisher, creation date, and weekly downloads.
- PyPI: browse `https://pypi.org/project/<package>/` and look for a recent creation date, missing README, a single version, or no GitHub link: all red flags.
- Cross-reference the name against the library's official documentation.
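Those PyPI red flags can be scripted against the registry's JSON API. The payload fields below (`info`, `releases`, `project_urls`) are the API's real shape; the specific thresholds are my own heuristics, not an established tool:

```python
def pypi_red_flags(meta: dict) -> list[str]:
    """Heuristic red flags over a https://pypi.org/pypi/<name>/json payload."""
    flags = []
    info = meta.get("info", {})
    if len(meta.get("releases", {})) <= 1:
        flags.append("only one release")
    if not (info.get("description") or "").strip():
        flags.append("missing README/description")
    urls = info.get("project_urls") or {}
    if not any("github.com" in (u or "") for u in urls.values()):
        flags.append("no GitHub link")
    return flags
```

An empty list is not proof of safety, but two or three flags on a freshly created package is a strong signal to stop and check the official docs.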
3. Use a scanning wrapper
Aikido SafeChain (open‑source) intercepts install commands and validates packages against threat‑intelligence feeds.
```shell
curl -fsSL https://github.com/AikidoSec/safe-chain/releases/latest/download/install-safe-chain.sh | sh
# Restart your terminal, then use npm/pip/yarn normally; SafeChain intercepts automatically
npm install some-package
```
Free, no API tokens required, adds only a few seconds per install.
4. Sandbox autonomous agents
- Run AI coding agents that install packages inside ephemeral containers or VMs.
- A malicious post‑install script in a throwaway Docker container cannot exfiltrate host credentials.
- At minimum, restrict the agent's permissions so it cannot run `npm install` without explicit approval.
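A minimal way to enforce the sandbox rule is to route every agent-initiated install through a throwaway container. The sketch below only assembles the docker command (the image and flag choices are illustrative, not from the article):

```python
def sandboxed_install_cmd(package: str) -> list[str]:
    """docker command for a throwaway install: the container is deleted
    afterwards (--rm), lifecycle scripts are disabled, and no host volume
    is mounted, so a malicious payload finds no credentials to steal."""
    return [
        "docker", "run", "--rm",
        "node:20-slim",
        "npm", "install", "--ignore-scripts", package,
    ]
```

Pass the result to `subprocess.run`; host SSH keys and cloud tokens never enter the container.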
5. Disable post‑install scripts for untrusted packages
```shell
npm install --ignore-scripts   # skips all lifecycle scripts
# Afterwards, manually run scripts for known-good packages if needed
```
Blocks the most common slopsquatting payload vector at the cost of some manual setup.
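To make this the default rather than a per-command flag, set npm's `ignore-scripts` option (a real npm config key) in `.npmrc`:

```ini
# ~/.npmrc -- applies to every npm install in this environment
ignore-scripts=true
```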
6. Add a CI gate
- Software Composition Analysis (SCA) – integrate tools like OWASP Dependency‑Check or Dep‑Scan into your pipeline.
- Generate and sign Software Bills of Materials (SBOMs) for every build; each dependency becomes auditable.
- If a package does not appear in your organization’s approved registry, the build should fail.
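The last rule is straightforward to enforce as a pipeline step: diff the build's resolved dependencies against the organization's allowlist and fail on anything unknown. A sketch, assuming both sides are available as plain sets of names:

```python
import sys

def unapproved(dependencies: set[str], approved: set[str]) -> list[str]:
    """Names in the build that are absent from the approved registry."""
    return sorted(dependencies - approved)

def ci_gate(dependencies: set[str], approved: set[str]) -> int:
    """Exit code for the pipeline: nonzero fails the build."""
    bad = unapproved(dependencies, approved)
    for name in bad:
        print(f"unapproved dependency: {name}", file=sys.stderr)
    return 1 if bad else 0
```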
The scale matters
As AI coding tools evolve from pair‑programming assistants to autonomous agents that install dependencies without human oversight, the attack surface expands dramatically. The combination of:
- Consistent hallucinations (high repeatability),
- Novel, unfiltered names, and
- Automatic post‑install execution
creates a fertile ground for slopsquatting attacks.
TL;DR
- Lock dependencies.
- Verify any AI‑suggested package before installing.
- Deploy a scanning wrapper like SafeChain.
- Sandbox autonomous agents.
- Disable untrusted post‑install scripts.
- Enforce CI gates with SCA and SBOMs.
By layering these controls, you dramatically reduce the risk of inadvertently pulling in a malicious, AI‑fabricated package.
The Expanding Attack Surface
Without human review, the attack surface expands.
A developer who reads a suggestion and checks the docs has some protection. An AI agent running `npm install` in an automated loop does not.
Current Registry Defenses
- Registries have no automated defense against slopsquatting yet.
- npm’s existing protections catch names similar to popular packages, but hallucinated names often bear no resemblance to real ones.
- These are novel strings that no similarity filter anticipates.
The Feedback Loop (React‑Codeshift Example)
- LLM hallucinates a package name.
- AI agent writes code that imports the non‑existent package.
- The code gets committed to GitHub.
- A different LLM trains on or retrieves that code.
- The hallucination spreads further.
Each step:
- Increases the download count, making the package look more legitimate.
- Makes the next LLM more likely to recommend it.
Who Bears the Risk?
Whether or not registries catch up, the exposure falls on developers who accept AI package suggestions at face value.
Mitigation Recommendations
- Verify before installing any AI-suggested package:

  ```shell
  npm info <package>                          # npm
  curl https://pypi.org/pypi/<package>/json   # PyPI
  ```

  Check that it exists, how old it is, and who published it.
- For automated workflows:
  - Install SafeChain as a drop-in wrapper.
  - Never let an AI agent run package installs outside a sandboxed environment.
- Remember the 20 % hallucination rate: one in five suggestions could be a trap.