Clean Architecture in the Age of AI: Preventing Architectural Liquefaction
Source: Dev.to
Introduction
AI has made execution cheap; models optimize locally, not for architecture. In many teams the side effect is not bad code or broken builds, but something more structural: architectural liquefaction.
What Is Architectural Liquefaction?
Architectural liquefaction is the progressive loss of structural boundaries under sustained probabilistic code generation and accelerated change cycles. It does not happen in a single PR — layer boundaries soften, dependencies cross the wrong way, contracts drift, invariants weaken, “temporary” shortcuts pile up. Everything still works, until the cost of change quietly multiplies. Without explicit constraints, entropy grows as we ship faster.
Clean Architecture as a Deterministic Shell
Clean Architecture is often described as a layering discipline. In the context of AI‑assisted development, it can serve a different purpose: a deterministic shell around probabilistic execution. It is not dogma or aesthetic preference — it is a stabilizing mechanism. When boundaries are explicit and dependency direction is enforced:
- The solution space narrows.
- Drift becomes detectable.
- Structural violations surface earlier.
- Local optimization cannot silently destroy global design.
The architecture becomes a control surface.
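To make "dependency direction is enforced" concrete, the rule can be expressed as a mechanical check. Below is a minimal TypeScript sketch — the layer names and the inward-only ordering are assumptions for illustration, not the author's actual setup:

```typescript
// Hypothetical layer ordering: lower index = more inner (core), higher = more outer.
const LAYERS = ["core", "adapters", "controllers"] as const;
type Layer = (typeof LAYERS)[number];

// An import is allowed only when it points inward (or stays in the same layer).
function importAllowed(from: Layer, to: Layer): boolean {
  return LAYERS.indexOf(to) <= LAYERS.indexOf(from);
}

// Given a list of import edges, return the ones crossing a boundary the wrong way.
function violations(edges: Array<{ from: Layer; to: Layer }>) {
  return edges.filter((e) => !importAllowed(e.from, e.to));
}
```

A controller importing from core passes; core importing an adapter gets flagged. Once the rule is this explicit, "drift becomes detectable" stops being a slogan and becomes a lint failure.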
Before AI
Before AI, architectural violations required effort. A developer had to consciously decide to break a boundary.
After AI
Now, violations can be generated in seconds, and because AI‑generated code often “looks right,” structural erosion is harder to notice. The real cost is not bad code in the moment; it is that the drift stays invisible until a refactor suddenly touches half the codebase. The more “flexible” and underspecified your prompts and rules are, the faster liquefaction tends to happen — the model fills in the gaps in whatever direction is locally easiest.
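As a concrete illustration of the boundary in question, here is a minimal, Nest-flavored sketch with invented names (not from the author's codebase): the controller depends on a use case and a core-owned interface, so a repository call can never sneak into the controller without crossing an explicit seam.

```typescript
// Core owns the contract; infrastructure implements it.
interface UserRepository {
  findName(id: string): string | undefined;
}

// Use case: the only path from the controller into the domain.
class GetUserName {
  constructor(private readonly repo: UserRepository) {}
  execute(id: string): string {
    const name = this.repo.findName(id);
    if (name === undefined) throw new Error(`user ${id} not found`);
    return name;
  }
}

// Controller depends on the use case, never on a repository implementation.
class UserController {
  constructor(private readonly getUserName: GetUserName) {}
  get(id: string): string {
    return this.getUserName.execute(id);
  }
}

// Adapter-layer implementation, injected from the outside.
const inMemoryRepo: UserRepository = {
  findName: (id) => (id === "1" ? "Ada" : undefined),
};
```

The "looks right" AI version collapses this into a controller that queries the repository directly — it runs, it reads fine in review, and it is exactly the kind of erosion that stays invisible until the refactor.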
Enforcing Boundaries with Project Rules
I once wrote down all our architectural principles — boundaries, dependency rules, what lives where — into a docs/ folder in plain Markdown, then wired them into Cursor as project rules so they get injected into every prompt.
```
$ tree ./docs/
.
├── ARCHITECTURAL-STYLE-GUIDE.md
├── CLEAN-NEST-APP.md
├── architecture
│   ├── adapters.md
│   ├── core.md
│   ├── controllers.md
│   ├── events.md
│   ├── inter-module-communication.md
│   ├── modules.md
│   ├── structure.md
│   ├── testing.md
│   └── when-to-simplify.md
└── guides
    ├── cheat-sheet.md
    ├── common-patterns.md
    └── quick-start.md
```
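The wiring into Cursor can be as simple as a rule file that restates the boundaries and points at those docs. A hypothetical example using Cursor's `.mdc` project-rule format (the rule text is illustrative, not the actual rules from this setup):

```
---
description: Clean architecture boundaries
globs: src/**
alwaysApply: true
---
- Controllers never call repositories directly; route through a use case.
- Nothing under core/ may import from adapters/ or from framework packages.
- Follow the layer and module rules in docs/architecture/ before generating code.
```

Because the rules are injected into every prompt, the model sees the constraints alongside the code it is about to extend.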
Before that, Cursor would often put repository calls straight into controllers or leak infrastructure imports into the domain layer — it simply followed the patterns it saw in the codebase. After the rules were in place, it started routing through use cases and keeping adapters out of core. It’s still not perfect: sometimes it over‑engineers or picks the wrong abstraction. But the rate of cross‑layer violations dropped sharply. The model now has something to optimize for instead of only optimizing for “code that runs”.
Hypothesis and Testability
That single data point fits the hypothesis:
Explicit boundaries plus enforcement reduce structural drift, even when the code is AI‑generated.
To make this testable we would need drift metrics (e.g., dependency violations, cross‑layer calls), review cost over time, and refactor scope when fixing violations. The hypothesis would be falsified if teams with strict rules drift as much as others, or if review and refactor cost keep growing despite enforcement. I’m preparing concrete ways to define and track these — drift metrics and cost — for follow‑up posts.
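As a starting point, the drift metric can be as simple as the share of wrong-direction imports, tracked per commit or per week. A minimal TypeScript sketch — the layer names are assumptions, and it presumes import edges have already been extracted from the codebase by some other tool:

```typescript
interface ImportEdge {
  fromLayer: string;
  toLayer: string;
}

// Hypothetical inward-only ordering: index 0 = innermost layer.
const ORDER = ["core", "adapters", "controllers"];

// Drift rate: fraction of imports that point outward, i.e. violate the rule.
function driftRate(edges: ImportEdge[]): number {
  if (edges.length === 0) return 0;
  const bad = edges.filter(
    (e) => ORDER.indexOf(e.toLayer) > ORDER.indexOf(e.fromLayer)
  ).length;
  return bad / edges.length;
}
```

Plotting this rate over time addresses the falsification condition directly: under strict rules and enforcement, the curve should stay flat or fall; if it climbs at the same pace as in unconstrained teams, the hypothesis fails.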
Rethinking Clean Architecture in an AI‑Heavy Workflow
Clean Architecture is usually framed as boundaries, inward dependencies, business logic isolated from the rest. True enough — but in an AI‑heavy workflow the useful way to see it is:
probabilistic execution, deterministic governance.
We are not removing uncertainty; we are putting a box around it so that the model’s choices stay inside the box. The architecture becomes the box.
Open Questions
- If you are using AI heavily in development, are your boundaries getting stronger or weaker?
- Is the cost of keeping the structure in your head going up or down?
I don’t have a conclusion yet — only a hypothesis. AI has optimized execution; whether we’ve optimized stability, or are just producing entropy faster, is open. Obvious structures are often the first to dissolve when everything speeds up. In upcoming posts I’ll explore other ways to keep things from liquefying.