We’ve been shipping 'slop' for 20 years. We just used to call it an MVP.

Introduction
A lot of people have started using the word “slop” as shorthand for AI‑generated code. Their stance is that AI is flooding the industry with low‑quality software, and we’ll all pay for it later in outages, regressions, and technical debt.
That argument sounds convincing—until you look honestly at how software has actually been built for the last 20 years.
The uncomfortable truth is that “slop” didn’t start with AI. AI just made it impossible to keep pretending we weren’t already shipping it.
Let’s pull back the curtain on a silent pact the industry followed long before the first LLM was trained.
Software Has Always Optimized for Execution
With the exception of Google’s famously rigorous review culture, the Big‑Tech giants (Meta, Amazon, Microsoft, etc.) have historically prioritized speed.
- In the real world, PRs are often skimmed.
- Bugs are fixed after users report them.
- Architecture evolves after the product proves itself.
We didn’t call this “slop” back then; we called it an MVP (Minimum Viable Product).
By comparison, some of the code that coding agents deliver today is already better than the typical early‑stage PRs in many companies. AI isn’t introducing a new era of “good enough” code; it’s just the latest tool for a strategy we’ve used for decades. In hindsight, we have always been willing to trade internal code purity for external market velocity.
The Open‑Source Antidote
The primary exception is open‑source projects, which operate differently. Open source has consistently produced reliable, maintainable code—even with contributions from dozens or hundreds of developers.
Why?
- Modularity is enforced. Contributors work in isolation and must respect strict API boundaries and clean abstractions, so that someone with zero internal context can contribute without breaking the system.
- Aggressive iteration loops. Every contribution undergoes automated tests and diverse human peer review. Feedback comes from many sources, which usually converges on higher overall quality than code written for one or two specific use cases.
This environment shows that prioritizing execution over perfection can work—if you give contributors clear boundaries and automated feedback. If we treat an AI agent like an external open‑source contributor, the “slop” disappears.
Engineering Quality into the Agent
At Pochi, we believe the output of an AI agent is only as good as the contextual guardrails you build around it. To avoid “slop,” you have to go further than simple chat prompts. Here are the tips that have worked for us:
1. Solve the Hallucination Problem
The biggest issue with AI‑generated code is its tendency to hallucinate nonexistent libraries or deprecated syntax. This happens when developers think in terms of Prompt Engineering rather than Environment Engineering.
Solution: Integrate the agent directly into your CI/CD pipeline. Every line of code is instantly validated against compilers, linters, and static analysis tools. The environment catches mistakes the moment they appear, so you don’t have to wait for the AI to get it right.
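A minimal sketch of that loop, assuming a Node/TypeScript project with the `lint` and `test` scripts referenced in the guardrails file below, is a small gate script that reruns the pipeline’s checks the moment the agent edits code:

```typescript
// validate-agent-output.ts: a sketch of “Environment Engineering”.
// Run the same checks CI runs, immediately after the agent writes code.
// Assumes the npm scripts referenced elsewhere in this post (lint, test, tsc).
import { execSync } from "node:child_process";

const checks: Array<[name: string, command: string]> = [
  ["typecheck", "npx tsc --noEmit"],    // hallucinated APIs fail to compile
  ["lint", "npm run lint"],             // style and static-analysis rules
  ["tests", "npm test -- --coverage"],  // behavioral regressions
];

for (const [name, command] of checks) {
  try {
    execSync(command, { stdio: "inherit" });
    console.log(`OK: ${name} passed`);
  } catch {
    // Fail fast: the error goes straight back to the agent as context,
    // instead of a human discovering the problem at review time.
    console.error(`FAIL: ${name}; rejecting the agent's change`);
    process.exit(1);
  }
}
```

Wired into the agent’s task loop or a pre‑push hook, a failing check becomes immediate feedback the agent can act on, rather than a review comment a human leaves days later.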
2. Use “Cloud Markdown”
A “Cloud Markdown” approach keeps design standards where both humans and agents can actually use them. Instead of a static PDF with verbose architectural standards, create a README.pochi.md file that acts as the agent’s source of truth.
Example guardrails file (README.pochi.md):

```markdown
# Project Design Patterns
## Architecture Overview
- Follow a hexagonal architecture.
- All external dependencies must be injected via interfaces.
- No direct calls to third‑party services from core business logic.
## Coding Standards
- Use TypeScript strict mode.
- Enforce ESLint `@typescript-eslint/recommended`.
- No `any` types; prefer explicit generics.
## CI/CD Checks
- Run `npm run lint` on every PR.
- Execute `npm test -- --coverage` and require ≥ 80 % coverage.
- Verify that generated code compiles with `tsc --noEmit`.
## Dependency Management
- All dependencies must be pinned to a specific version.
- No transitive dependencies without a lockfile entry.
- Use `npm audit` and fail the build on any high‑severity findings.
## Review Process
- Every AI‑generated PR must have at least one human reviewer.
- Reviewer must confirm that the code adheres to the above guardrails.
```
By feeding this markdown to the agent, you give it a concrete, machine‑readable contract that mirrors the rigor of an open‑source project.
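To make the architecture rules in that file more tangible, here is a small, hypothetical TypeScript example of what “external dependencies must be injected via interfaces” and “no direct calls to third‑party services from core business logic” look like in practice; all names are invented for illustration:

```typescript
// Core business logic depends on an interface it owns, never on a vendor SDK.
export interface PaymentGateway {
  charge(amountCents: number, customerId: string): Promise<void>;
}

// Core logic: no direct calls to third-party services here.
export async function checkout(
  gateway: PaymentGateway,
  cartTotalCents: number,
  customerId: string
): Promise<void> {
  if (cartTotalCents <= 0) throw new Error("Nothing to charge");
  await gateway.charge(cartTotalCents, customerId);
}

// Adapter at the edge of the hexagon: the only place that knows how the
// external payment provider is actually called (sketched as a generic HTTP API).
export class HttpPaymentGateway implements PaymentGateway {
  constructor(private readonly baseUrl: string) {}

  async charge(amountCents: number, customerId: string): Promise<void> {
    const res = await fetch(`${this.baseUrl}/charges`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ amountCents, customerId }),
    });
    if (!res.ok) throw new Error(`Charge failed: ${res.status}`);
  }
}
```

An agent that has read the guardrails file can satisfy the rule by generating adapters like `HttpPaymentGateway` while keeping `checkout` free of vendor‑specific code.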
Bottom Line
- “Slop” isn’t a new AI problem; it’s a long‑standing trade‑off between speed and perfection.
- Open‑source projects demonstrate that strict modular boundaries and automated feedback can keep that trade‑off in check.
- Treat AI agents as external contributors, give them clear guardrails (CI/CD validation, “Cloud Markdown” specifications), and the “slop” disappears.
Engineering quality into the agent is the key to turning AI‑generated code from a liability into an asset.
To make this concrete, here are a few examples of the kind of project‑specific rules such a guardrails file can contain.
Data Fetching
- Rule: No direct `fetch` calls in components.
- Pattern: Use the `useQuery` wrapper from `@/lib/api` (a sketch follows below).
- Reasoning: Guarantees that global error handling and caching are applied.
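For illustration only, here is one way such a wrapper could look. It is a sketch rather than the actual `@/lib/api` module: it assumes React with TanStack Query, and the `reportError` helper is a hypothetical stand-in for the project’s global error handling.

```typescript
// lib/api.ts (hypothetical sketch): a useQuery wrapper that centralizes
// error handling and caching so components never call fetch directly.
import { useQuery as useTanstackQuery } from "@tanstack/react-query";

// Assumption: a project-wide error reporter; the real helper may differ.
function reportError(error: unknown): void {
  console.error("[api]", error);
}

export function useQuery<T>(key: string, path: string) {
  return useTanstackQuery<T>({
    queryKey: [key],
    queryFn: async () => {
      const res = await fetch(path);
      if (!res.ok) {
        const error = new Error(`GET ${path} failed with ${res.status}`);
        reportError(error); // single choke point for logging and alerting
        throw error;
      }
      return (await res.json()) as T;
    },
    retry: 1, // central place for retry and caching policy as well
  });
}
```

A component then calls `useQuery<User[]>("users", "/api/users")` and never touches `fetch` itself, which is exactly what the rule is meant to guarantee.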
State Management
- Constraint: All shared state must reside in LiveStore.
- Pattern: `const [data, set] = useLiveStore(key);` (an illustrative stand-in follows below).
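The real LiveStore API is richer than a single hook, so treat the following purely as an illustrative stand-in with the same shape: every piece of shared state lives in one store and is read and written through `useLiveStore(key)` instead of scattered `useState` calls.

```typescript
// live-store.ts (illustrative stand-in, not the real LiveStore API):
// a shared in-memory store exposed through a `useLiveStore(key)` hook.
import { useCallback, useSyncExternalStore } from "react";

const store = new Map<string, unknown>();
const listeners = new Set<() => void>();

function subscribe(listener: () => void): () => void {
  listeners.add(listener);
  return () => {
    listeners.delete(listener);
  };
}

export function useLiveStore<T>(key: string): [T | undefined, (value: T) => void] {
  // Re-render any component reading this key whenever the store changes.
  const data = useSyncExternalStore(subscribe, () => store.get(key) as T | undefined);
  const set = useCallback(
    (value: T) => {
      store.set(key, value);
      listeners.forEach((notify) => notify());
    },
    [key]
  );
  return [data, set];
}
```

Because every component goes through the same hook, the constraint becomes mechanically checkable: the guardrails file (or a lint rule) can simply forbid ad-hoc `useState` for anything shared.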
Critical Workflows
- Documentation as Context: Store Markdown files that contain deep architectural rules and design patterns directly in the repository.
- Prompt Injection: Before an agent begins a task, it “reads” these Markdown files to understand global restrictions (e.g., “Always use local‑first storage patterns via LiveStore”); a sketch of this step follows after the next paragraph.
- Context Scaffolding: This ensures the agent isn’t writing a snippet in a vacuum; it follows the specific scaffolding of the existing codebase.
Embedding architectural knowledge in this way lets the agent gather as much file‑level context as possible before every major migration, producing more accurate results.
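A rough sketch of that pre-task step is shown below; the file names and the `runAgentTask` entry point are hypothetical placeholders rather than a real Pochi API, but the shape of the workflow is simply “read the rule files, prepend them to the agent’s context”:

```typescript
// prime-agent-context.ts: a sketch of “Documentation as Context”.
// File names and runAgentTask() are hypothetical placeholders.
import { existsSync, readFileSync } from "node:fs";

const RULE_FILES = ["README.pochi.md", "docs/architecture.md"];

function loadGuardrails(): string {
  return RULE_FILES
    .filter((path) => existsSync(path))
    .map((path) => `## ${path}\n\n${readFileSync(path, "utf8")}`)
    .join("\n\n");
}

// Hypothetical agent entry point; the real integration depends on your tooling.
declare function runAgentTask(options: { systemContext: string; task: string }): Promise<void>;

export async function runWithContext(task: string): Promise<void> {
  await runAgentTask({
    // The agent sees the global restrictions (e.g., “always use LiveStore”)
    // before it writes a single line of code for this specific task.
    systemContext: `Follow these project rules:\n\n${loadGuardrails()}`,
    task,
  });
}
```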
Conclusion
At the end of the day, users never see “slop.” They see broken interfaces, slow loading times, crashes, and unreliable features.
If you dismiss AI‑generated code as “slop,” you miss the greatest velocity shift in the history of computing. By combining open‑source discipline (rigorous review and modularity) with AI‑assisted execution, we can finally build software that is both fast to ship and resilient to change.
