CRAM-Net: The Network that Thinks by Rewiring
Source: Dev.to
Introduction: Beyond the Static Model
CRAM‑Net (Conversational Reasoning & Memory Network) represents a fundamental shift in neural architecture—from static weight models to Memory‑Native (MN) systems. While traditional AI treats conversation history as external text stored in a temporary cache, CRAM‑Net treats every interaction as a physical catalyst for synaptic change:
“The conversation is the network.”
The model literally rewires itself in real time as dialogue progresses. CRAM‑Net is part of the Memory‑Native Neural Network (MNNN) family, and its code is available on GitHub.
Dual Memory Tracks
CRAM‑Net uses two internal memory tracks to mirror the human brain’s ability to handle both fleeting context and permanent logic.
The Chat Layer
Mechanism: Hebbian Trace Neurons
Function: Captures immediate conversational context (e.g., names, current topic)
Dynamics
- High learning rate
- Fast decay
Enables short‑term memory without permanently modifying core logic, allowing the network to stay context‑aware without a traditional context window.
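As a rough sketch, a Hebbian trace of this kind boils down to a fast write toward the current activation followed by a fast decay (the function name and the rates eta and lambda here are illustrative assumptions, not values from the CRAM‑Net source):

```c
/* Chat-layer Hebbian trace sketch: a fast write toward the current
 * activation followed by a fast decay, so stale context fades within a
 * few turns. eta and lambda are illustrative, not CRAM-Net's rates. */
void chat_trace_step(float *trace, const float *h, int n,
                     float eta, float lambda) {
    for (int i = 0; i < n; i++) {
        trace[i] += eta * h[i];     /* high learning rate: capture context */
        trace[i] *= 1.0f - lambda;  /* fast decay: let old context fade */
    }
}
```

Because the decay multiplies the whole trace every step, anything not refreshed by recent tokens shrinks geometrically toward zero.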
The Reasoning Layer
Mechanism: Differentiable Logic Manifolds
Function: Discovers and hardens logical invariants (e.g., A ⇒ B)
Dynamics
- Low learning rate
- High stability
Logical structures persist beyond the conversation, forming a durable reasoning map that survives long after the chat ends.
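The gap between the two tracks can be made concrete with the decay term alone: under pure decay, a stored association shrinks by a factor of (1 − λ) per step, so the fraction surviving after t steps is (1 − λ)^t. The λ values used below are illustrative, not CRAM‑Net's actual rates:

```c
/* Surviving fraction of a stored association after `steps` idle steps
 * under pure decay: each step multiplies the trace by (1 - lambda). */
float surviving_fraction(float lambda, int steps) {
    float f = 1.0f;
    for (int t = 0; t < steps; t++)
        f *= 1.0f - lambda;
    return f;
}
```

With a fast chat decay (λ = 0.3), a trace is effectively gone after 50 idle steps (below one ten-millionth of its original strength), while a slow reasoning decay (λ = 0.001) leaves roughly 95% of it intact over the same span.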
Global Workspace Bottleneck
Information does not flow freely; all internal representations must pass through a Global Workspace Bottleneck.
- Compression Ratio: only ~12.5% of each raw thought vector passes through
- Cognitive Pressure: Forces the system to choose what truly matters
- Reasoning Trigger: Logical abstraction becomes necessary to survive compression
This bottleneck naturally activates the reasoning track, because structured logic compresses far better than raw data.
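One simple way to realize such a bottleneck — an assumption on my part, since the article does not specify the mechanism — is top‑k magnitude selection: keep only the k largest‑magnitude components of a thought vector and zero the rest:

```c
#include <math.h>   /* fabsf */

/* Top-k bottleneck sketch: a component survives only if fewer than k
 * components have larger magnitude (index order breaks ties). O(n^2),
 * written for clarity rather than speed. */
void workspace_bottleneck(const float *in, float *out, int n, int k) {
    for (int i = 0; i < n; i++) {
        int larger = 0;
        for (int j = 0; j < n; j++)
            if (fabsf(in[j]) > fabsf(in[i]) ||
                (fabsf(in[j]) == fabsf(in[i]) && j < i))
                larger++;
        out[i] = (larger < k) ? in[i] : 0.0f; /* zeroed unless in top-k */
    }
}
```

For an n = 128 thought vector, k = 16 gives exactly the ~12.5% compression ratio quoted above.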
Backend Implementation
CRAM‑Net is powered by a high‑performance C backend (cram‑net.c) that applies a synaptic update for every token processed.
In outline (names and loop structure are a sketch, not the actual cram‑net.c source):

```c
/* Synaptic update, applied once per token:
 *   W_new = W_old + eta * (h_t ⊗ h_{t-1}) - lambda * W_old
 * Association: the outer product links the current thought with the
 * previous one, preserving continuity.  Decay: -lambda * W_old prevents
 * runaway memory growth and gradually removes conversational noise. */
void synaptic_update(float *W, const float *h_t, const float *h_prev,
                     int n, float eta, float lambda) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            W[i * n + j] += eta * h_t[i] * h_prev[j] - lambda * W[i * n + j];
}
```
Efficiency
- Only 25–30 % of synapses remain active per interaction.
- Maintains high contextual retention with minimal computational overhead.
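A sketch of how that sparsity could be measured and exploited (the magnitude threshold is an assumption on my part; the article only gives the 25–30% figure): synapses whose weights have decayed below the threshold count as inactive and can be skipped entirely:

```c
#include <math.h>   /* fabsf */

/* Fraction of synapses whose weight magnitude exceeds `threshold`.
 * Weights that decay has pushed below the threshold count as inactive
 * and could be skipped during the forward pass. */
float active_fraction(const float *W, int n, float threshold) {
    int active = 0;
    for (int i = 0; i < n; i++)
        if (fabsf(W[i]) > threshold)
            active++;
    return (float)active / (float)n;
}
```

The decay step keeps pushing unrefreshed weights toward zero, so this fraction stays bounded without any explicit pruning pass.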
Summary
CRAM‑Net reframes intelligence as a living, adaptive structure where:
- Conversation directly alters the network.
- Memory and reasoning are intrinsic, not bolted on.
- Logic emerges under pressure, not by explicit instruction.
This is not a chatbot with memory; it is a network that thinks by rewiring itself.