SuperLocalMemory v2.7: Your AI Learns You — Adaptive Learning with Local ML
Source: Dev.to
The Problem We Solved
You have been using SuperLocalMemory for three months. You have stored thousands of memories across multiple projects—architecture decisions, coding patterns, deployment notes, debugging discoveries. The system works. Persistence works. Cross‑tool sharing works.
But search is getting noisy.
When you recall “database configuration,” you get fifty results. Some are from your current project, some from a project you finished six weeks ago, and some from a quick experiment you abandoned after an hour. The system treats them all equally because, until now, it had no way to know which results matter most to you in this context.
This is the scaling problem every memory system hits. Raw storage and retrieval work fine at low volume. At high volume, you need intelligence—not just search, but understanding of what is relevant based on how you actually work.
SuperLocalMemory v2.7 solves this with adaptive learning. The system now observes which memories you use, learns your patterns across projects, and re‑ranks search results to surface the most relevant content first. All of it runs locally. No cloud. No telemetry. No data leaving your machine.
What’s New in v2.7
- Three‑layer local learning architecture that builds a behavioral model of your workflows
- LightGBM‑powered adaptive re‑ranking that improves search relevance over time
- Separate `learning.db` database for behavioral data, fully isolated from your memories (`memory.db`)
- Three new MCP tools: `memory_used`, `get_learned_patterns`, `correct_pattern`
- New CLI commands for inspecting and managing learned patterns
- Synthetic bootstrap that delivers ML‑quality results from day one — no cold‑start problem
- Full GDPR compliance with one‑command data export and deletion
Learning System Architecture
The v2.7 learning system operates in three layers, each building on the one below it.
1️⃣ Technology‑Preference Layer
Tracks your technology preferences across all projects and profiles.
- Example signals:
- Choosing TypeScript over JavaScript
- Preferring PostgreSQL over MySQL
- Reaching for server components over client components
These preferences are transferable. When you start a new project, your technology affinities carry over. The system does not dictate choices—it adjusts relevance weights so that memories matching your established preferences rank higher in search results.
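As a rough illustration of the idea, here is a minimal sketch of how observed technology choices could become relevance-weight multipliers. The class name, smoothing scheme, and the "boost, never penalize" rule are all assumptions for illustration, not SuperLocalMemory's actual implementation:

```python
from collections import Counter

class TechPreferences:
    """Hypothetical sketch: turn observed technology choices into
    relevance-weight multipliers for search ranking."""

    def __init__(self):
        self.counts = Counter()

    def record_choice(self, tech: str) -> None:
        # Each time a memory tagged with `tech` is actually used,
        # bump its preference count.
        self.counts[tech] += 1

    def weight(self, tech: str) -> float:
        # Laplace-smoothed share of all observed choices, scaled so an
        # unseen technology keeps a near-neutral multiplier.
        total = sum(self.counts.values())
        if total == 0:
            return 1.0
        share = (self.counts[tech] + 1) / (total + len(self.counts))
        return 1.0 + share  # boost preferred tech, never penalize others

prefs = TechPreferences()
for t in ["typescript", "typescript", "postgresql"]:
    prefs.record_choice(t)

# Memories tagged "typescript" now rank above untouched alternatives.
assert prefs.weight("typescript") > prefs.weight("mysql")
```

Because the weights are multipliers rather than filters, a strong keyword match on a less-preferred technology can still win, which matches the article's point that the system adjusts ranking rather than dictating choices.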
2️⃣ Project‑Context Layer
Understands project boundaries using four signals:
- Active directory
- Recent memory tags
- Time‑of‑day patterns
- Explicit profile selection
This matters because a memory about “API authentication” means something different when you are working on a Node.js backend versus a Python data pipeline. Project context lets the ranker prioritize memories from the relevant project without you having to specify it every time.
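One way to picture how the four signals could combine is a simple weighted score per memory. The field names, weights, and the two-hour window below are illustrative assumptions, not the actual scoring function:

```python
def context_score(memory: dict, ctx: dict) -> float:
    """Hypothetical sketch: combine the four project-context signals
    into one score used to boost memories from the active project."""
    score = 0.0
    # 1. Active directory: memory belongs to the current project root.
    if memory["project_dir"] == ctx["active_dir"]:
        score += 0.4
    # 2. Recent memory tags: overlap with tags used this session.
    overlap = set(memory["tags"]) & set(ctx["recent_tags"])
    score += 0.3 * (len(overlap) / max(len(memory["tags"]), 1))
    # 3. Time-of-day pattern: memory usually accessed around this hour.
    if abs(memory["typical_hour"] - ctx["hour"]) <= 2:
        score += 0.1
    # 4. Explicit profile selection.
    if memory["profile"] == ctx["profile"]:
        score += 0.2
    return score

mem = {"project_dir": "/work/api", "tags": ["auth", "node"],
       "typical_hour": 14, "profile": "backend"}
ctx = {"active_dir": "/work/api", "recent_tags": ["auth"],
       "hour": 15, "profile": "backend"}
print(round(context_score(mem, ctx), 2))
```

Under a scheme like this, the same "API authentication" memory scores high in the Node.js backend context and low in the Python pipeline context, without the user specifying a project.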
3️⃣ Sequential‑Pattern Layer
Discovers sequential patterns in how you work. For example:
- You consistently recall deployment notes after modifying CI configuration
- You always check database schema docs before writing migration files
The system learns these sequences and can anticipate what you will need next based on what you just did. This is not speculative prediction—it is pattern recognition grounded in your actual recorded behavior.
Adaptive Re‑Ranker (LightGBM)
The re-ranker sits on top of all three layers. It takes the raw search results from SuperLocalMemory’s existing FTS5 + TF‑IDF engine and re‑orders them based on everything the learning layers know about you.
- Day‑one: rule‑based system using synthetic training data bootstrapped from your existing memories.
- After real signals accumulate: smoothly transitions to a full ML model.
You never experience a cold‑start degradation—results are personalized from the moment you upgrade.
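One common way to implement such a handover is to blend the rule-based score with the ML score, shifting weight toward the model as real signals accumulate. The sketch below illustrates that shape; the function names, the `threshold` value, and the linear ramp are assumptions, not the documented mechanism:

```python
def rerank(results, behavior_signal_count, rule_score, ml_score=None,
           threshold=500):
    """Hypothetical sketch of a day-one-to-ML handover: blend a
    rule-based score with an ML score, weighting the model more
    heavily as real behavioral signals accumulate."""
    # alpha ramps from 0.0 (pure rules) to 1.0 (pure ML); with no
    # model available, stay entirely rule-based.
    alpha = min(behavior_signal_count / threshold, 1.0) if ml_score else 0.0

    def blended(r):
        ml = ml_score(r) if ml_score else 0.0
        return (1 - alpha) * rule_score(r) + alpha * ml

    return sorted(results, key=blended, reverse=True)

# Day one: no ML model yet, so ranking follows the rule score alone.
hits = [{"id": 1, "tfidf": 0.2}, {"id": 2, "tfidf": 0.9}]
ranked = rerank(hits, behavior_signal_count=0,
                rule_score=lambda r: r["tfidf"])
print([r["id"] for r in ranked])
```

Because the blend is continuous, there is no visible switchover moment, which is what "smoothly transitions" implies in practice.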
The Transition Is Automatic
- Install v2.7.
- Keep using SuperLocalMemory as you normally do.
- The system learns in the background.
After a few days you’ll notice search results becoming more relevant to your current context.
Guarantees
- **All behavioral data stays on your machine** – an architectural guarantee, not a policy decision.
- **Separate database:** `learning.db` (behavioral data) is completely isolated from `memory.db` (memories).
- **Zero telemetry:** No usage data, analytics pings, crash reports, or any “phone‑home” capability.
- **GDPR‑ready by design:**

  ```bash
  # Export all behavioral data
  slm export-behavior --output behavior.json

  # Delete all behavioral data
  slm delete-behavior
  ```

  Or simply drop `learning.db` to erase the footprint while leaving your memories untouched.
- **No cloud dependencies:** LightGBM trains locally; the model file lives next to your database, versioned and portable.
No cloud dependencies: LightGBM trains locally; the model file lives next to your database, versioned and portable.
Research Foundations
The v2.7 learning architecture is grounded in published research, adapted for the constraints of a fully local system:
| Layer / Component | Research Source |
|---|---|
| Adaptive re‑ranking pipeline | eKNOW 2025 – BM25‑to‑re‑ranker pipelines |
| Privacy‑preserving feedback | ADPMF (IPM 2024) |
| Cold‑start bootstrap | FCS LREC 2024 |
| Workflow pattern mining | TSW‑PrefixSpan (IEEE 2020) |
| Bayesian confidence scoring | MACLA framework (arXiv:2512.18950) |
Eight research papers informed the design, resulting in an architecture that achieves personal‑level relevance while keeping everything local, private, and compliant.
A Novel Contribution
To our knowledge, SuperLocalMemory v2.7 is the first system that implements fully local, zero‑communication, adaptive re‑ranking for personal AI memory. Prior work in this space universally assumes cloud‑based infrastructure for model training and inference.
Core Commands
| Command | Description |
|---|---|
| `memory_used` | Primary feedback channel. When your AI tool uses a recalled memory in its response, it calls `memory_used` to signal that the memory was relevant. This is the strongest learning signal available and happens automatically through MCP. |
| `get_learned_patterns` | Retrieve your learned technology preferences, project contexts, and workflow patterns. Useful for inspecting what the system has learned and verifying it matches your expectations. |
| `correct_pattern` | Override or correct a learned pattern. If the system inferred that you prefer Redux but you have actually switched to Zustand, this tool lets you correct the record. The system adjusts its model accordingly. |
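To make the feedback loop concrete, here is a minimal sketch of how a client might invoke `memory_used` after consuming a recalled memory. How tool calls are actually dispatched depends entirely on your MCP client; `call_tool` and the `memory_id` argument name are stand-in assumptions:

```python
def send_feedback(call_tool, memory_id: str) -> None:
    """Hypothetical sketch: close the learning loop by reporting that
    a recalled memory was actually used. `call_tool(name, args)` is a
    stand-in for whatever tool-invocation API your MCP client exposes."""
    call_tool("memory_used", {"memory_id": memory_id})

# Demonstrate with a fake client that just records the call.
log = []
send_feedback(lambda name, args: log.append((name, args)), "mem-42")
assert log == [("memory_used", {"memory_id": "mem-42"})]
```

In normal use you never write this yourself: as the table notes, MCP-connected tools emit the signal automatically.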
Viewing & Managing Learned Data
```bash
# View your learned technology preferences
slm patterns list

# See what the system knows about your current project context
slm patterns context

# Export all behavioral data (GDPR export)
slm learning export

# Delete all behavioral data
slm learning reset

# Check learning system status
slm learning status
```
Upgrading to v2.7
Time required: under two minutes
```bash
# Update to v2.7
npm update -g superlocalmemory

# Verify the version
superlocalmemory --version
# Expected: 2.7.x

# Check that the learning system initialized
slm learning status
```
The learning system activates automatically on first use after upgrade. It creates `learning.db` alongside your existing `memory.db` and begins collecting signals immediately—no configuration required.
Optional Dependencies
- `lightgbm`
- `scipy`
These are installed automatically if you use the npm package. If you installed from source, run:
```bash
pip install -r requirements-learning.txt
```
For the full migration guide (including changes for existing users and how to verify the upgrade), see the Upgrading to v2.7 wiki page.
Backward Compatibility
- v2.7 is fully backward compatible.
- Existing memories, knowledge graph, patterns, and profiles are untouched.
- The learning system is additive—it enhances search results but never modifies stored data.
- If the optional ML dependencies are unavailable, the system falls back to rule‑based ranking with no degradation in core functionality.
Looking Ahead: v2.8
v2.8 is already in planning. The focus will shift from individual learning to multi‑agent collaboration, enabling multiple AI agents to share context through a single memory layer using the A2A (Agent‑to‑Agent) protocol. Think of it as giving your entire AI toolkit a shared brain.
More details will be published as the design solidifies. For now, v2.7 makes your personal AI memory meaningfully smarter. Install it, use it for a week, and notice the difference in search relevance.
Your AI is learning you. Locally. Privately. Starting today.