From Prompt to Platform: Architecture Rules I Use

Published: January 20, 2026 at 02:36 AM EST
6 min read
Source: Dev.to

The “build → surprise → restructure → repeat” Loop

The loop is amazing early on. After a while, though, it feels like two clowns trying to out‑prank each other: it gets funnier and funnier, lots of laughs… until one pulls out a flamethrower for one last prank and the laughter gets a little awkward.

This type of iteration is fun until it isn’t. So I went looking for guidance.

Experiences With LangGraph Tutorials

Most examples show you how to:

  1. Build a graph.
  2. Define some nodes.
  3. Wire them together.
  4. Ship it.

Great for prototyping.

They don’t show where to put things when you have:

  • 8 nodes
  • 3 agents
  • 5 tools
  • Shared state across sub‑graphs
  • Middleware for guardrails
  • A platform layer that stays framework‑independent

I searched, found bits and pieces, but no complete picture. So I built it.

A Folder Structure That Scales

Here’s what my LangGraph component looks like:

app/
├── agents/           # Agent factories (build_agent_*)
├── graphs/           # Graph definitions (main, subgraphs, phases)
├── nodes/            # Node factories (make_node_*)
├── states/           # Pydantic state models
├── tools/            # Tool definitions
├── middlewares/      # Cross‑cutting concerns (guardrails, redaction)
└── platform/
    ├── core/         # Pure types, contracts, policies (no wiring)
    │   ├── contract/ # Validators: state, tools, prompts, phases
    │   ├── dto/      # Pure data‑transfer objects
    │   └── policy/   # Pure decision logic
    ├── adapters/     # Boundary translation (DTOs ↔ State)
    ├── runtime/      # Evidence hydration, state helpers
    ├── config/       # Environment, paths
    └── observability/ # Logging

Why this structure?

It mirrors LangGraph’s mental model: agents are agents; nodes are nodes; graphs are graphs. In the orchestration layer, things are easy to find and responsibilities stay separated.
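To make the naming convention concrete, here's a minimal sketch of what a `make_node_*` factory might look like. The `NodeSpec` type and the node body are my own illustrative assumptions, not code from the project; the point is that factories return configured components instead of constructing them at import time:

```python
# Illustrative sketch of the node-factory convention.
# NodeSpec and the node body are hypothetical, not from the actual codebase.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class NodeSpec:
    name: str
    run: Callable[[dict], dict]

def make_node_problem_framing() -> NodeSpec:
    """Node factory: builds a configured node, with no import-time side effects."""
    def run(state: dict) -> dict:
        # Orchestration only -- real logic would live in the platform layer.
        return {"phases": {"problem_framing": "started"}}
    return NodeSpec(name="problem_framing", run=run)
```

Because nothing is constructed at import time, tests can build exactly the node they need and nothing else.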

The real insight is the platform/ layer.

The Platform Layer: Why It Exists

Separating the LangGraph components was easy; separating the wiring was hard. The structure didn’t appear on day 1—it emerged after several iterations. Each cycle surfaced a missing architectural rule, and the absence of those rules made refactors increasingly painful as new components were added.

Without a platform layer – everything gets spaghettified

# WITHOUT PLATFORM LAYER – everything mixed together
def problem_framing_node(state: SageState) -> Command:
    # Guardrail logic mixed with state management
    if "unsafe" in state.messages[-1].content:
        state.gating.guardrail = GuardrailResult(is_safe=False, ...)

    # Evidence hydration mixed with node orchestration
    store = get_store()
    for item in phase_entry.evidence:
        doc = store.get(item.namespace, item.key)
        # ... inline hydration logic

    # Validation mixed with execution
    if "problem_framing" not in state.phases:
        raise ValueError("Invalid state update!")

    # ... good luck writing tests for it!

With the platform layer – clean separation

# WITH PLATFORM LAYER – clean separation
def problem_framing_node(state: SageState) -> Command:
    # Use platform contracts for validation
    validate_state_update(update, owner="problem_framing")

    # Use platform runtime helpers for evidence
    bundle = collect_phase_evidence(state, phase="problem_framing")

    # Use platform policies for decisions
    guardrail = evaluate_guardrails(user_input)

    # Use adapters for state translation
    context = guardrail_to_gating(guardrail, user_input)

    # Node only orchestrates – all logic lives in platform!

The node becomes what it should be: orchestration only. No domain logic, no direct store access, no inline validation.

The Hexagonal Split

The pattern that solved the problem is hexagonal (ports‑and‑adapters) architecture. The core stays pure—no framework dependencies, no imports from outer layers. Everything else can depend on the core, but the core depends on nothing. This makes boundaries testable and rules enforceable.

┌─────────────────────────────────────────────────────────┐
│                   APPLICATION LAYER                     │
│  (app/nodes, app/graphs, app/agents, app/middlewares)   │
│  - LangGraph orchestration                              │
│  - Calls platform services via contracts                │
└────────────────────────────┬────────────────────────────┘
                             │
                             ▼
┌─────────────────────────────────────────────────────────┐
│                     PLATFORM LAYER                      │
│  ┌──────────┐ ┌─────────┐ ┌───────────┐ ┌───────────┐   │
│  │ Adapters │ │ Runtime │ │ Config    │ │ Observab. │   │
│  │ DTO↔State│ │ helpers │ │ env/paths │ │ logging   │   │
│  └────┬─────┘ └────┬────┘ └─────┬─────┘ └─────┬─────┘   │
│       └────────────┴──────┬─────┴─────────────┘         │
│                           ▼                             │
│  ┌───────────────────────────────────────────────────┐  │
│  │ Core (PURE – no framework dependencies)           │  │
│  │ - Contracts and validators                        │  │
│  │ - Policy evaluation (pure functions)              │  │
│  │ - DTOs (frozen dataclasses)                       │  │
│  └───────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────┘

The rule: core/ has NO imports from anything above it—no app orchestration (agents, nodes, graphs, etc.), no wiring, no adapters. Dependencies point inward only.
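As a sketch of what inward-pointing dependencies look like in practice, here is a hypothetical pure core (a frozen DTO plus a policy function) and an adapter that translates the DTO into a state-shaped update. The function names echo the node example earlier; the bodies are my assumptions, not the project's actual implementation:

```python
# Hypothetical sketch of the dependency rule: core stays pure, adapters translate.
from dataclasses import dataclass

# --- platform/core/dto: frozen dataclass, no framework imports ---
@dataclass(frozen=True)
class GuardrailDTO:
    is_safe: bool
    reason: str = ""

# --- platform/core/policy: pure decision logic, trivially testable ---
def evaluate_guardrails(user_input: str) -> GuardrailDTO:
    if "unsafe" in user_input.lower():
        return GuardrailDTO(is_safe=False, reason="flagged keyword")
    return GuardrailDTO(is_safe=True)

# --- platform/adapters: translate the DTO into a state-shaped update ---
def guardrail_to_gating(dto: GuardrailDTO, user_input: str) -> dict:
    return {
        "gating": {"guardrail": {"is_safe": dto.is_safe, "reason": dto.reason}},
        "last_input": user_input,
    }
```

The core functions can be unit-tested with no LangGraph in sight; only the adapter knows what the state looks like.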

This isn’t just a guideline; it’s enforced.

How to Enforce a Guideline?

Simple: write a test for it that would catch the violation:

# tests/unit/architecture/test_core_purity.py
from pathlib import Path

FORBIDDEN_IMPORTS = [
    "app.state",
    "app.graphs",
    "app.nodes",
    "app.agents",
    # ... all app orchestration and platform wiring
]

def test_core_has_no_forbidden_imports():
    """Core layer must remain pure – no wiring dependencies."""
    core_files = Path("app/platform/core").rglob("*.py")

    for file in core_files:
        content = file.read_text()
        for forbidden in FORBIDDEN_IMPORTS:
            assert forbidden not in content, (
                f"{file} imports {forbidden} – core must stay pure"
            )

If you break the boundary, the test fails. No exceptions.
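One caveat: a plain substring check also trips on comments and docstrings that merely mention a module. A slightly more robust variant (a sketch; the `forbidden_imports` helper is my own) parses each file with `ast` and inspects only real import statements:

```python
# Sketch of an AST-based purity check; forbidden_imports() is a hypothetical helper.
import ast
from pathlib import Path

FORBIDDEN_PREFIXES = ("app.state", "app.graphs", "app.nodes", "app.agents")

def forbidden_imports(source: str) -> list[str]:
    """Return the imported module names that violate the core boundary."""
    bad = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue
        bad += [n for n in names if n.startswith(FORBIDDEN_PREFIXES)]
    return bad

def test_core_has_no_forbidden_imports():
    for file in Path("app/platform/core").rglob("*.py"):
        assert not forbidden_imports(file.read_text()), f"{file} breaks core purity"
```

This only flags `import app.nodes.x` or `from app.graphs import main`, never a comment that happens to contain the string.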

Beyond guidelines, you can also define contracts that validate at runtime.

Contracts That Validate

The core/contract/ directory contains validators that enforce contract rules at runtime:

Contract                                What it does
validate_state_update()                 Restricts mutations to authorized owners
validate_structured_response()          Forces validation before persisting
validate_phase_registry()               Ensures phase keys match declared schemas
validate_allowlist_contains_schema()    Ensures tool-allowlist correctness

These aren’t optional – every node calls them:

# Every state update goes through the contract
update = {"phases": {phase_key: phase_entry}}
validate_state_update(update, owner="problem_framing")
return Command(update=update, goto=next_node)

The contracts themselves are also tested (validation logic, phase dependencies, invalidation cascades). See the full suite in test_state.py.
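For illustration, an ownership-based `validate_state_update()` can be sketched in a few lines. This is my guess at the shape, with a made-up ownership map; the project's actual validator may differ:

```python
# Hypothetical sketch of an ownership-based state-update validator.
# ALLOWED_KEYS is an illustrative ownership map, not the project's real one.
ALLOWED_KEYS = {
    "problem_framing": {"phases", "gating"},
}

def validate_state_update(update: dict, owner: str) -> None:
    """Reject any top-level state key the owner is not authorized to mutate."""
    allowed = ALLOWED_KEYS.get(owner, set())
    illegal = set(update) - allowed
    if illegal:
        raise ValueError(f"{owner} may not update keys: {sorted(illegal)}")
```

A node that tries to write outside its lane fails loudly at the boundary instead of silently corrupting shared state.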

Test Structure That Scales

Tests are organized by type (unit, integration, e2e) and category (architecture, orchestration, platform). This makes coverage gaps obvious and lets you run targeted subsets.

tests/
├── unit/
│   ├── architecture/      # Boundary enforcement
│   │   ├── test_core_purity.py
│   │   ├── test_adapter_boundary.py
│   │   └── test_import_time_construction.py
│   ├── orchestration/    # Agents, nodes, graphs
│   └── platform/         # Core + adapters
├── integration/
│   ├── orchestration/
│   └── platform/
└── e2e/

Pytest markers

# pyproject.toml
# Test markers for categorizing tests by purpose and scope
markers = [
  # Test Type Markers (by scope)
  "unit: Fast, isolated tests with no external dependencies",
  "integration: Tests crossing component boundaries (may use test fixtures)",
  "e2e: End‑to‑end workflow tests (full pipeline validation)",

  # Test Category Markers (organizational categories)
  "architecture: Hexagonal architecture enforcement (import rules, layer boundaries)",
  "orchestration: LangGraph orchestration components (agents, nodes, graphs, middlewares, tools)",
  "platform: Platform layer tests (hexagonal architecture – core, adapters, runtime)",
]

Run unit‑architecture tests alone:

uv run pytest -m "unit and architecture"

The architecture is validated by 110 tests – 11 of which specifically enforce architecture boundaries.
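One way to keep the directory layout and the marker list from drifting apart is to derive markers from each test's path. Here is a sketch of the pure part of such a hook; the `markers_for` helper is hypothetical, and in `conftest.py` it could feed `pytest_collection_modifyitems`:

```python
# Hypothetical helper: derive pytest marker names from a test file's path.
from pathlib import Path

MARKER_DIRS = {"unit", "integration", "e2e", "architecture", "orchestration", "platform"}

def markers_for(path: Path) -> set[str]:
    """Marker names implied by the test's directory components."""
    return MARKER_DIRS & set(path.parts)

# In conftest.py one could then apply them automatically:
# def pytest_collection_modifyitems(items):
#     for item in items:
#         for name in markers_for(item.path):
#             item.add_marker(getattr(pytest.mark, name))
```

With this in place, moving a test file to the right directory is all it takes to mark it correctly.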

What This Enables

Here’s where it gets interesting.

You might be thinking: cool story, but… why?

When your architecture is predictable and enforceable, something curious happens: coding agents stop being a liability and start being useful.

  • When every node follows the same pattern…
  • When every state update goes through a validator…
  • When every boundary is well‑defined and tested…

…an AI agent can’t accidentally break your architecture without the tests catching it. It can’t import forbidden modules, skip validation, or bypass contracts – not without failing the test suite.

The rules become more than just documentation; they’re guardrails for both humans and AI.

Next Up

What happens when you point Claude Code at an architecture it can’t break.

The CLAUDE.md file isn’t just a pile of instructions – it’s a contract that preserves context and enforces boundaries during development. I built a framework for it with measurable results.

Coming next: The CLAUDE.md Maturity Model.

This is part of my “From Prompt to Platform” series documenting the SageCompass build. Start from the prologue.
