AI Slop Detector v2.6.2: Integration Test Evidence (because “green CI” can still be hollow)

Published: January 15, 2026 at 11:15 AM EST
2 min read
Source: Dev.to

What is “AI Slop”?

AI Slop is code that looks legitimate but carries little causal weight.
It’s not “broken.”
It’s not “malicious.”
It’s just convincingly empty.

Typical symptoms:

  • promises outrunning evidence (“production‑ready”, “scalable”)
  • tests that exist, but don’t hit real dependencies
  • structure and documentation growing faster than implementation
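
To make the second symptom concrete, here is a sketch of a “hollow” test next to an integration-flavored one, using pytest and FastAPI purely for illustration (the endpoint and test names are invented, not taken from the detector):

import pytest
from unittest.mock import MagicMock
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.post("/orders", status_code=201)
def create_order(order: dict):
    return {"received": order}

# "Hollow": only the mock is exercised; CI goes green, but no real behavior is verified.
def test_create_order_hollow():
    db = MagicMock()
    db.save.return_value = True
    assert db.save({"id": 1}) is True

# Integration-flavored: exercises a real HTTP boundary via TestClient.
@pytest.mark.integration
def test_create_order_integration():
    client = TestClient(app)
    assert client.post("/orders", json={"id": 1}).status_code == 201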

Community Feedback (and why this release exists)

This release exists because of a thoughtful comment from OnlineProxy (https://onlineproxy.io/).
They described a “complete‑looking” repo with green CI that still felt hollow—and pointed out the real red flag:

CI is green, but 0 integration tests hit real dependencies.

That’s not a nitpick. It’s a real production failure mode.
Treating that feedback like a bug report led to v2.6.2.

What’s new in v2.6.2

1) Integration Test Evidence (explicit split)

“Tests exist” isn’t enough. v2.6.2 distinguishes:

  • tests_unit (fast, isolated)
  • tests_integration (hits real dependencies / realistic boundaries)

Detection uses four layers:

  1. Path‑based (tests/integration/, e2e/, it/)
  2. Filename patterns (test_integration_*.py, *_integration_test.py)
  3. Pytest markers (@pytest.mark.integration, @pytest.mark.e2e)
  4. Runtime signals (TestClient, testcontainers, docker‑compose)
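
As a rough mental model (not the detector’s actual implementation; the patterns and helper below are illustrative), layered classification can be as simple as:

import re
from pathlib import Path

INTEGRATION_DIRS = ("tests/integration", "e2e", "it")
INTEGRATION_NAME = re.compile(r"(test_integration_.*|.*_integration_test)\.py$")
INTEGRATION_MARKERS = ("@pytest.mark.integration", "@pytest.mark.e2e")
RUNTIME_SIGNALS = ("TestClient(", "testcontainers", "docker-compose", "docker compose")

def classify_test_file(path: Path) -> str:
    """Return 'integration' if any layer fires, else 'unit'."""
    posix = path.as_posix()
    if any(d in posix for d in INTEGRATION_DIRS):       # layer 1: path
        return "integration"
    if INTEGRATION_NAME.match(path.name):               # layer 2: filename
        return "integration"
    source = path.read_text(errors="ignore")
    if any(m in source for m in INTEGRATION_MARKERS):   # layer 3: pytest markers
        return "integration"
    if any(s in source for s in RUNTIME_SIGNALS):       # layer 4: runtime signals
        return "integration"
    return "unit"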

2) Claims now require integration evidence

Strong claims now require stronger proof:

  • production‑ready → requires tests_unit and tests_integration
  • scalable / fault‑tolerant → requires tests_integration

This closes the gap where code looks complete but proves nothing under real dependencies.
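
Conceptually, the gate is just a mapping from claim to required evidence. The sketch below is a hypothetical reconstruction of that idea, not the shipped rule set:

REQUIRED_EVIDENCE = {
    "production-ready": {"tests_unit", "tests_integration"},
    "scalable": {"tests_integration"},
    "fault-tolerant": {"tests_integration"},
}

def check_claims(claims: set[str], evidence: set[str]) -> list[str]:
    """Return a warning for each claim whose required evidence is missing."""
    warnings = []
    for claim in sorted(claims):
        missing = REQUIRED_EVIDENCE.get(claim, set()) - evidence
        if missing:
            warnings.append(f"'{claim}' claimed, but missing: {', '.join(sorted(missing))}")
    return warnings

# A repo that claims "production-ready" but only ships unit tests:
print(check_claims({"production-ready"}, {"tests_unit"}))
# -> ["'production-ready' claimed, but missing: tests_integration"]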

3) Clearer reports & questions

The goal isn’t “more numbers.” It’s more inspectable output. Reports and questions now surface:

  • unit vs. integration test breakdown
  • explicit warnings when integration tests are missing but production claims exist
  • more readable evidence labels (e.g., “integration tests”)

Quick start

# Install / upgrade
pip install -U ai-slop-detector

# Scan a project
slop-detector --project .

CI examples

# Soft: report only (never fails)
slop-detector --project . --ci-mode soft --ci-report

# Hard: fail on thresholds
slop-detector --project . --ci-mode hard --ci-report

# Claims strict: fail when production claims lack integration‑test evidence
slop-detector --project . --ci-mode hard --ci-report --ci-claims-strict

Why this matters (in one line)

AI‑era failures often aren’t syntax failures. They’re verification gaps hidden behind clean structure and green CI. v2.6.2 makes one of the most common gaps measurable: “0 integration tests” is now something you can detect, report, and gate.

  • Repo:
  • CI:
  • Changelog: