Build a Self-Evolving Memory Agent in 150 Lines

Published: January 18, 2026 at 01:00 PM EST
5 min read
Source: Dev.to

Self‑Evolving Memory Agent

Runnable companion to the Memory Architecture series – no external dependencies. Copy, paste, and run.

The skeleton demonstrates:

  • Inner loop – runtime behavior (encode → store → retrieve → manage)
  • Outer loop – architecture evolution (adapt configuration based on performance)
  • Four rooms – encode, store, retrieve, manage as separate concerns
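In miniature, the two loops compose like this (a conceptual sketch that imports the classes defined in the file below; the toy task list is illustrative):

# two_loops_sketch.py — conceptual sketch only; assumes self_evolving_agent.py sits next to it
from self_evolving_agent import Agent, DummyModel, Memory

agent = Agent(Memory(), DummyModel())
tasks = [("What is the capital of France?", "geography")] * 6  # illustrative

for i, (query, label) in enumerate(tasks, start=1):
    agent.handle_task(query, label)            # inner loop: encode → store → retrieve → manage
    if i % 5 == 0:
        agent.evolve_memory_architecture()     # outer loop: adapt config from performance stats

Run the full file with: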
python self_evolving_agent.py

"""
self_evolving_agent.py

A minimal, runnable skeleton of a self‑evolving memory agent.
No external dependencies. Uses fake embeddings so you can see
the loop behavior end‑to‑end before swapping in real components.
"""

import json
import math
import random
from typing import List, Dict, Any, Tuple

# ----------------------------------------------------------------------
# Utility: fake embedding + similarity
# ----------------------------------------------------------------------

def fake_embed(text: str) -> List[float]:
    """Naïve embedding: character‑frequency vector. Replace with a real model."""
    counts = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            counts[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(c * c for c in counts)) or 1.0
    return [c / norm for c in counts]

def cosine_sim(a: List[float], b: List[float]) -> float:
    """Cosine similarity for two normalized vectors."""
    return sum(x * y for x, y in zip(a, b))

# ----------------------------------------------------------------------
# Memory architecture (The Four Rooms)
# ----------------------------------------------------------------------

class MemoryItem:
    def __init__(self, text: str, vector: List[float], label: str = ""):
        self.text = text
        self.vector = vector
        self.label = label

class Memory:
    """Container that holds items and provides the four‑room API."""

    def __init__(self):
        # Config knobs — these are what the outer loop evolves
        self.top_k = 3
        self.sim_threshold = 0.2
        self.decay_prob = 0.0
        self.items: List[MemoryItem] = []

        # Stats for drift detection
        self.total_retrievals = 0
        self.successful_retrievals = 0

    # ------------------------------------------------------------------
    # ROOM 1: ENCODE
    # ------------------------------------------------------------------
    def encode(self, text: str) -> List[float]:
        return fake_embed(text)

    # ------------------------------------------------------------------
    # ROOM 2: STORE
    # ------------------------------------------------------------------
    def store(self, text: str, label: str = "") -> None:
        vec = self.encode(text)
        self.items.append(MemoryItem(text, vec, label))

    # ------------------------------------------------------------------
    # ROOM 3: RETRIEVE
    # ------------------------------------------------------------------
    def retrieve(self, query: str) -> List[MemoryItem]:
        if not self.items:
            return []

        q_vec = self.encode(query)
        scored: List[Tuple[float, MemoryItem]] = []
        for item in self.items:
            sim = cosine_sim(q_vec, item.vector)
            if sim >= self.sim_threshold:
                scored.append((sim, item))

        scored.sort(key=lambda x: x[0], reverse=True)
        results = [it for _, it in scored[: self.top_k]]

        # Update diagnostics
        self.total_retrievals += 1
        if results:
            self.successful_retrievals += 1

        return results

    # ------------------------------------------------------------------
    # ROOM 4: MANAGE
    # ------------------------------------------------------------------
    def manage(self) -> None:
        """Randomly decay items according to `decay_prob`."""
        if self.decay_prob  self.decay_prob
        ]

    # ------------------------------------------------------------------
    # DIAGNOSTICS
    # ------------------------------------------------------------------
    def retrieval_success_rate(self) -> float:
        if self.total_retrievals == 0:
            return 1.0
        return self.successful_retrievals / self.total_retrievals

    def size(self) -> int:
        return len(self.items)

    def to_config(self) -> Dict[str, Any]:
        return {
            "top_k": self.top_k,
            "sim_threshold": round(self.sim_threshold, 3),
            "decay_prob": round(self.decay_prob, 3),
            "size": self.size(),
            "retrieval_success_rate": round(self.retrieval_success_rate(), 3),
        }
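    # Illustrative use of the four rooms on their own:
    #   mem = Memory()
    #   mem.store("Paris is the capital of France", label="geography")  # ENCODE + STORE
    #   hits = mem.retrieve("capital of France")                        # RETRIEVE
    #   mem.manage()                                                    # MANAGE (no-op while decay_prob == 0.0)
    #   print(mem.to_config())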

# ----------------------------------------------------------------------
# Model stub
# ----------------------------------------------------------------------

class DummyModel:
    """Stub LLM: echoes query + context. Replace with a real model."""

    def run(self, query: str, context: List[MemoryItem]) -> str:
        ctx_texts = [f"  [{i.label}] {i.text}" for i in context]
        if ctx_texts:
            return f"Q: {query}\nContext:\n" + "\n".join(ctx_texts)
        return f"Q: {query}\nContext: (none)"

# ----------------------------------------------------------------------
# Agent: Inner Loop + Outer Loop
# ----------------------------------------------------------------------

class Agent:
    def __init__(self, memory: Memory, model: DummyModel):
        self.memory = memory
        self.model = model
        self.history: List[Dict[str, Any]] = []

    # ------------------------------------------------------------------
    # INNER LOOP (runtime)
    # ------------------------------------------------------------------
    def handle_task(self, query: str, label: str) -> str:
        """Process a single query: store → retrieve → run model → manage."""
        self.memory.store(query, label=label)
        context = self.memory.retrieve(query)
        output = self.model.run(query, context)
        self.memory.manage()

        success = any(item.label == label for item in context)
        self.history.append({"query": query, "label": label, "success": success})
        return output

    # ------------------------------------------------------------------
    # OUTER LOOP (architecture evolution)
    # ------------------------------------------------------------------
    def evolve_memory_architecture(self) -> None:
        """Adapt the memory configuration based on recent performance."""
        success_rate = self.memory.retrieval_success_rate()
        size = self.memory.size()

        print("\n>>> OUTER LOOP: Evaluating memory architecture")
        print(f"    Before: {self.memory.to_config()}")

        # Adapt retrieval aggressiveness
        if success_rate < 0.5:
            # Retrieval is missing too often: search wider and loosen the threshold.
            # (The 0.5 cutoff and the step sizes here are illustrative.)
            self.memory.top_k = min(self.memory.top_k + 1, 10)
            self.memory.sim_threshold = max(self.memory.sim_threshold - 0.02, 0.0)
        elif success_rate > 0.9:
            self.memory.top_k = max(self.memory.top_k - 1, 1)
            self.memory.sim_threshold = min(self.memory.sim_threshold + 0.02, 0.8)

        # Adapt decay based on size
        if size > 100:
            self.memory.decay_prob = min(self.memory.decay_prob + 0.05, 0.5)
        elif size < 20:
            # Small memory: ease off decay so items stick around.
            # (The size cutoffs here are illustrative.)
            self.memory.decay_prob = max(self.memory.decay_prob - 0.05, 0.0)

        print(f"    After:  {self.memory.to_config()}")

    # ------------------------------------------------------------------
    # PERSISTENCE
    # ------------------------------------------------------------------
    def dump_history(self, path: str = "agent_history.jsonl") -> None:
        """Write the agent's query history to a JSON‑Lines file."""
        with open(path, "w", encoding="utf-8") as f:
            for record in self.history:
                f.write(json.dumps(record) + "\n")

# ----------------------------------------------------------------------
# Demo / entry point
# ----------------------------------------------------------------------
if __name__ == "__main__":
    mem = Memory()
    model = DummyModel()
    agent = Agent(mem, model)

    # Simple demo: a few labelled queries
    demo_tasks = [
        ("What is the capital of France?", "geography"),
        ("Explain Newton's second law.", "physics"),
        ("Who wrote 'Pride and Prejudice'?", "literature"),
        ("What is the capital of France?", "geography"),  # repeat to test retrieval
    ]

    for q, lbl in demo_tasks:
        print("\n---")
        print(agent.handle_task(q, lbl))

        # Periodically evolve the architecture (e.g., every 2 tasks)
        if len(agent.history) % 2 == 0:
            agent.evolve_memory_architecture()

    # Persist the interaction log
    agent.dump_history()

Demo

A larger driver you can use in place of the minimal __main__ block above: shuffled, labelled support-style queries, with the outer loop firing every five tasks.

def main():
    memory = Memory()
    model = DummyModel()
    agent = Agent(memory, model)

    # Toy dataset: queries with category labels
    tasks = [
        ("How do I process a refund?", "refund"),
        ("Steps to issue a refund via card", "refund"),
        ("How to troubleshoot a login error?", "login"),
        ("User cannot sign in, what now?", "login"),
        ("How to update user email address?", "account"),
        ("Change account email for a customer", "account"),
    ] * 3

    random.shuffle(tasks)

    for i, (query, label) in enumerate(tasks, start=1):
        print(f"\n--- Task {i} ---")
        output = agent.handle_task(query, label)
        print(output)

        # Run outer loop every 5 tasks
        if i % 5 == 0:
            agent.evolve_memory_architecture()

    agent.dump_history()
    print("\n✓ Done. History written to agent_history.jsonl")

if __name__ == "__main__":
    main()

What to Expect When You Run It

  • Tasks 1‑5 – Inner loop runs, memory fills, retrieval improves.
  • Outer loop fires – Config adjusts based on retrieval success rate.
  • Tasks 6‑10 – Behavior changes because the architecture changed.
  • Repeat – The agent evolves its own memory strategy.

The to_config() output shows you exactly what changed and why.
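To see the adaptation quantitatively, a small helper (illustrative, not part of the 150-line file) can replay agent_history.jsonl and print the running retrieval-success rate alongside each task:

# inspect_history.py — illustrative helper; reads the JSONL written by dump_history()
import json

with open("agent_history.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f]

hits = 0
for i, rec in enumerate(records, start=1):
    hits += bool(rec["success"])
    print(f"task {i:2d}  label={rec['label']:<12}  success={rec['success']}  running rate={hits / i:.2f}")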


Component Swap‑In Guide

  • fake_embed() → OpenAI, Cohere, or a local embedding model
  • self.items → Pinecone, Weaviate, Chroma, pgvector
  • DummyModel → Any LLM via API or local
  • evolve_memory_architecture() → Your own adaptation logic

The architecture stays the same; the components scale.
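For example, swapping fake_embed() for a hosted embedding model only touches the ENCODE room. A minimal sketch, assuming the openai Python package and the text-embedding-3-small model (both assumptions; any embedder that returns a list of floats works), with the result normalized so cosine_sim stays a plain dot product:

# real_embed.py — sketch of a drop-in replacement for fake_embed()
# Assumes `pip install openai` and OPENAI_API_KEY in the environment; the model name is an assumption.
import math
from typing import List

from openai import OpenAI

client = OpenAI()

def real_embed(text: str) -> List[float]:
    """Return a unit-length embedding so the existing cosine_sim() keeps working."""
    vec = client.embeddings.create(model="text-embedding-3-small", input=text).data[0].embedding
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

# Wire it in by overriding the ENCODE room:
#   class Memory:
#       def encode(self, text: str) -> List[float]:
#           return real_embed(text)

The same pattern holds for the other rows: keep DummyModel.run(query, context) as the signature and call whatever LLM you like inside it, and keep store/retrieve as the boundary when self.items becomes a vector database.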


More in this series:

  • Why Memory Architecture Matters More Than Your Model – concepts
  • How To Detect Memory Drift In Production Agents – metrics + alerting
  • Build a Self‑Evolving Memory Agent in 150 Lines – you are here
  • The Two Loops – conceptual framework on Substack
