AI Agents, Source Context, and Prompt History: A New Software Development Paradigm

Published: February 11, 2026 at 10:42 AM EST
6 min read
Source: Dev.to

Software Development is Shifting from “Writing Code” to “Curating Intent”

Modern LLMs can produce a large part of an implementation if the AI agent is grounded in the project’s truth.

The Simplest Mental Model

  • AI agent – like a skilled developer with anterograde amnesia: highly capable, but retains nothing between sessions.
  • Source context – curated, modular documentation that defines requirements, constraints, architecture, and invariants.
  • Prompt history – the running dialogue that captures decisions, feedback, and rationale as the project evolves.

Together, these form a language‑native codebase: a project defined by intent and constraints, with code generated and maintained under human oversight.

This aligns with Andrej Karpathy’s “Software 3.0” framing: prompts and context increasingly behave like programs, and development becomes a conversation where natural language is the dominant control surface.


What the Claim Is (and Isn’t)

  • Not: “Docs replace code.”
  • Yes: Context becomes the project’s constitution; code remains the executable artifact.

AI‑First Repo as a Layered System

| Layer | Description |
|---|---|
| 1️⃣ Code | The executable artifact (still necessary). |
| 2️⃣ Source context | The normative spec – “what must be true”. |
| 3️⃣ Prompt history | Working memory + rationale – “why we chose this”. |
| 4️⃣ Agent | The compiler/contributor that converts (2)+(3) into (1) under review. |

The breakthrough is treating layers 2 and 3 as versioned, reviewed, and intentionally maintained—not as accidental chat logs.


Practical Pattern: Module‑Scoped Context Files

Large systems fail with a single monolithic context file for the same reason monolithic codebases rot: everything is coupled.
Instead, create small, focused context files that act as “README for humans and agents”.

product_context.md
orders_context.md
payment_context.md
user_auth_context.md

Benefits

  • Clarity – agents load only what matters.
  • Separation of concerns – requirements and constraints evolve locally.
  • Easier onboarding – humans and agents ramp faster.
  • Parallel work – multiple agents can operate safely in different domains.
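
A harness that feeds agents module-scoped context can be quite small. The sketch below (Python; the domain-to-file mapping and the `load_context` helper are hypothetical, though the file names follow the list above) assembles only the files relevant to one task domain:

```python
from pathlib import Path

# Hypothetical mapping from task domain to the context files it needs;
# file names follow the module list above.
CONTEXT_BY_DOMAIN = {
    "orders": ["orders_context.md", "payment_context.md"],
    "auth": ["user_auth_context.md"],
}

def load_context(domain: str, root: Path = Path(".")) -> str:
    """Concatenate only the context files relevant to one task domain."""
    parts = []
    for name in CONTEXT_BY_DOMAIN.get(domain, []):
        path = root / name
        if path.exists():
            parts.append(f"<!-- {name} -->\n{path.read_text()}")
    return "\n\n".join(parts)
```

The payoff is exactly the clarity benefit above: an agent working on orders never sees auth rules, and vice versa.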

Reliable Context File Shape

Each _context.md should contain:

  1. Purpose / Non‑goals
  2. Public API / Contracts (endpoints, events, schemas)
  3. Core invariants (“must always hold”)
  4. Data model (field meaning; avoid raw schema dumps)
  5. Workflows / State machines
  6. Security & privacy constraints
  7. Operational constraints (latency, retries, idempotency)
  8. Failure modes & recovery
  9. Observability (logs/metrics/traces expectations)
  10. Test expectations (golden paths + edge cases)
  11. Changelog (dated, human‑readable)

Key: Context should state constraints and invariants, not mirror implementation details.
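
As a sketch, a skeleton following the sections above might look like this (module name and all content are illustrative):

```markdown
# orders_context.md

## Purpose / Non-goals
Owns the order lifecycle. Non-goal: payment processing (see payment_context.md).

## Public API / Contracts
- `POST /orders` – create an order; rejects unknown product IDs.

## Core Invariants
- An order total is never negative.

## Security & Privacy Constraints
- Orders are readable only by their owner and support staff.

## Changelog
- **2024-11-03** – Initial version.
```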


From Conversation to Governance

  • Conversation = meeting transcript.
  • Context = meeting minutes.

Distilling Prompt History

A mature workflow actively extracts decisions from the prompt history and updates context files:

  • If a decision affects future work → record it.
  • If a rule is corrected → add it to the relevant context file.
  • If a decision changes → mark the old one as superseded.

This turns “chat” into governance.
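
Distillation can even be partially mechanized if the team adopts a tagging convention in chat. A minimal sketch (the `DECISION:` marker is an assumed convention, not a standard):

```python
def extract_decisions(transcript: list[str], marker: str = "DECISION:") -> list[str]:
    """Pull lines that a reviewer explicitly tagged as decisions out of a transcript.

    The marker convention is an assumption; any agreed-upon tag works.
    """
    return [line.split(marker, 1)[1].strip()
            for line in transcript
            if marker in line]
```

Extracted decisions still go through human review before landing in a context file; the script only finds candidates.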


Updating Context When Implementing Features

When an agent implements a feature, it should update the relevant context file(s) in the same PR:

  • Add or revise invariants/contracts.
  • Record edge cases discovered during implementation.
  • Add a dated changelog entry.

Example (simplified)

# orders_context.md – Updated 2024‑11‑03

## Core Invariants
- **Order total** must always be ≥ 0.
- **Order status** transitions must follow the state machine:
  `created → paid → shipped → delivered`.

## New Edge Case
- If a payment is refunded after `shipped`, the order must transition to `refunded` and trigger a restock workflow.

## Changelog
- **2024‑11‑03** – Added invariant for non‑negative order total and documented refund‑after‑ship edge case.
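
Invariants written this way translate directly into guards. A minimal sketch of the status-transition rule (Python; the transition table mirrors the state machine above, including the refund-after-ship edge case):

```python
# Allowed order-status transitions from orders_context.md, including the
# refund-after-ship edge case documented above.
TRANSITIONS = {
    "created": {"paid"},
    "paid": {"shipped"},
    "shipped": {"delivered", "refunded"},  # refund after shipping triggers restock
    "delivered": set(),
    "refunded": set(),
}

def can_transition(current: str, new: str) -> bool:
    """Return True only if the state machine permits current -> new."""
    return new in TRANSITIONS.get(current, set())
```

Because the table is derived from the context file, a reviewer can diff the two line by line.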

TL;DR

  1. Treat context as a versioned constitution.
  2. Keep context files small, focused, and stable.
  3. Distill decisions from chat into context updates.
  4. Let the AI agent generate code from (source context + prompt history) under human review.

By doing so, teams can harness LLMs effectively while maintaining rigorous, auditable software development practices.

Example Changelog Entry

A dated, human-readable changelog entry (here, for the wishlist feature worked through below) looks like:

2026‑02‑02

  • Added wishlist support: users can store product IDs in wishlistItems.
  • Added endpoints: GET /users/{id}/wishlist, POST /users/{id}/wishlist, DELETE /users/{id}/wishlist.
  • Enforced owner‑only access and idempotent add/remove behavior.

Preventing Context Drift

Docs and code must change together. A practical AI‑first loop looks like this:

| Phase | Description |
|---|---|
| Plan | Human states the goal. Agent proposes an approach and lists affected modules. |
| Review | Human checks architecture, security, and product intent. Agent revises the plan. |
| Implement | Agent writes code and tests, following context rules. |
| Verify | Run tests, static checks, and human review. Agent fixes any issues. |
| Update context | Agent updates module context files and the changelog. Human reviews the changes. |

This is not “autopilot.” It’s delegation with constraints.


Example: Adding a Wishlist to an Existing E‑Commerce System

Modules Involved

  • product_context.md – product IDs, catalog lookup rules.
  • user_auth_context.md – identity, authorization constraints.

Agent’s Initial Load

  • Global rules (AGENTS.md).
  • product_context.md.
  • user_auth_context.md.
  • Most recent changelog entries.

Proposed Implementation

  1. Data Model – Store wishlistItems: string[] (product IDs) on the user profile.
  2. Endpoints – Add, view, and remove wishlist items.
  3. Validation – Verify the product exists before adding.
  4. Idempotency – Prevent duplicate entries.
  5. Authorization – Enforce “owner‑only” access.

Human‑Provided Additions

  • Removal endpoint details.
  • Maximum wishlist size.
  • Logging requirements.
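
Combining the proposed implementation with the human-provided additions, the add path might look like the sketch below (Python; the function shape, in-memory user dict, and size cap are all illustrative assumptions, not the article's actual design):

```python
MAX_WISHLIST_SIZE = 100  # assumed cap; the article leaves the actual limit to the team

def add_to_wishlist(user: dict, requester_id: str, product_id: str,
                    product_exists) -> bool:
    """Idempotent, owner-only wishlist add. Returns True if the item is present after the call."""
    if requester_id != user["id"]:
        raise PermissionError("owner-only access")      # authorization
    if not product_exists(product_id):
        raise ValueError("unknown product")             # validation
    items = user.setdefault("wishlistItems", [])
    if product_id in items:                             # idempotency: re-add is a no-op
        return True
    if len(items) >= MAX_WISHLIST_SIZE:
        raise ValueError("wishlist full")
    items.append(product_id)
    return True
```

Each branch corresponds to a rule the agent could only have known from the context files and the human's additions.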

Outcome

  • Agent implements, tests, and updates both context files.
  • The feedback‑to‑distillation loop makes future work faster and safer.

Benefits of This Workflow

| Benefit | Explanation |
|---|---|
| Speed | Agent handles boilerplate and rapid iteration. |
| Maintainability | Context stays synchronized because it’s part of the change flow. |
| Onboarding | Context files serve as human-readable, module-level truth. |
| Consistency | Standards live in context, not in tribal memory. |
| Compounding improvement | Every correction becomes durable guidance. |

Risks & Considerations

  • Context size – Too much context can cause “lost in the middle.”
  • Ambiguity – Gaps let agents hallucinate; constraints must be explicit.
  • Maintenance overhead – Context files need ownership and regular review.
  • Cost/tooling – Large contexts can be expensive to process.
  • Security – Context is a new attack surface; treat it accordingly.

Further Reading & Example Repository

A hands‑on example of this paradigm is available at:

🔗 github.com/cangiremir/context-driven-ai-development

The repository demonstrates:

  • A modular source‑context architecture.
  • Human‑in‑the‑loop governance rules.
  • Example context files, ADRs, and prompt templates aligned with the ideas described above.

Vision

If this trajectory continues, a repository evolves from a pile of source code into a knowledge system:

  • Context docs define intent and constraints.
  • Decisions & rationales are preserved.
  • Agents translate intent into code.
  • Code is continuously verified and regenerated.

In that world, a software engineer’s role expands: you’re not only writing code—you’re designing systems of constraints that both humans and agents can execute.
