Kalynt: An Open-Core AI IDE with Offline LLMs, P2P Collaboration and much more...
Source: Dev.to

The Problem I Wanted to Solve
I love VS Code and Cursor. They’re powerful, but they both assume the same model: send your code to the cloud for AI analysis.
As someone who cares about privacy, that felt wrong on multiple levels:
- Cloud dependency – Your LLM calls are logged, potentially trained on, always traceable.
- Single‑user design – Neither is built for teams from the ground up.
- Server reliance – “Live Share” and collaboration features rely on relay servers.
I wanted something different. So I built it.
What is Kalynt?
Kalynt is an IDE where:
- AI runs locally – via node-llama-cpp. No internet required.
- Collaboration is P2P – CRDTs + WebRTC for real‑time sync without servers.
- It’s transparent – all safety‑critical code is open‑source (AGPL‑3.0).
- It works on weak hardware – built and tested on an 8 GB Lenovo laptop.
The Technical Deep Dive
Local AI with AIME
Most developers want to run LLMs locally but think “that requires a beefy GPU or cloud subscription.”
AIME (Artificial Intelligence Memory Engine) is my answer. It’s a context‑management layer that lets agents run efficiently even on limited hardware by:
- Smart context windowing
- Efficient token caching
- Local model inference via node-llama-cpp
Result: You can run Mistral or Llama on a “potato” and get real work done.
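The post doesn't publish AIME's internals, so as a hedged illustration: "smart context windowing" can be as simple as keeping the system prompt plus the newest messages that fit a token budget. The message shape and the `estimateTokens` heuristic below are my assumptions, not Kalynt's actual API (a real engine would use the model's own tokenizer via node-llama-cpp).

```typescript
// Hypothetical sketch of "smart context windowing" – not AIME's real API.
// Keeps the system prompt plus the most recent messages that fit a token budget.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

// Rough heuristic: ~4 characters per token (an assumption; a real engine
// would ask the loaded model's tokenizer for exact counts).
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function windowContext(messages: ChatMessage[], budget: number): ChatMessage[] {
  const system = messages.filter((m) => m.role === "system");
  const rest = messages.filter((m) => m.role !== "system");

  let used = system.reduce((n, m) => n + estimateTokens(m.content), 0);
  const kept: ChatMessage[] = [];

  // Walk newest-to-oldest, keeping messages while they still fit the budget.
  for (let i = rest.length - 1; i >= 0; i--) {
    const cost = estimateTokens(rest[i].content);
    if (used + cost > budget) break;
    kept.unshift(rest[i]);
    used += cost;
  }
  return [...system, ...kept];
}
```

The key design choice is dropping the *oldest* non-system messages first, which preserves the instructions and the most recent conversational state on small context windows.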
P2P Sync with CRDTs
Collaboration without servers is hard. Most tools gave up and built around a central relay (Figma, Notion, VS Code Live Share).
I chose CRDTs (Conflict‑free Replicated Data Types) via yjs:
- Every change is timestamped and order‑independent
- Peers sync directly via WebRTC
- No central authority = no server required
- Optional end‑to‑end encryption
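Kalynt delegates all of this to yjs; to make the "order‑independent" property concrete, here is a minimal grow‑only counter (G‑Counter) CRDT of my own, purely for illustration, not Kalynt code. Each peer only increments its own slot, and merging is an element‑wise max, so peers converge no matter what order updates arrive in.

```typescript
// Minimal grow-only counter (G-Counter) CRDT – an illustration of
// order-independent merging, not Kalynt's actual yjs-based code.

type GCounter = Record<string, number>; // peerId -> that peer's increment count

function increment(c: GCounter, peerId: string): GCounter {
  return { ...c, [peerId]: (c[peerId] ?? 0) + 1 };
}

// Merge is element-wise max: commutative, associative, and idempotent,
// so peers can exchange state in any order and still converge.
function merge(a: GCounter, b: GCounter): GCounter {
  const out: GCounter = { ...a };
  for (const [peer, n] of Object.entries(b)) {
    out[peer] = Math.max(out[peer] ?? 0, n);
  }
  return out;
}

function value(c: GCounter): number {
  return Object.values(c).reduce((s, n) => s + n, 0);
}
```

Because `merge(a, b)` and `merge(b, a)` produce the same state, no central authority is needed to order updates; yjs applies the same idea to rich text and tree structures.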
Architecture modules
- @kalynt/crdt → conflict‑free state
- @kalynt/networking → WebRTC signaling + peer management
- @kalynt/shared → common types
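The networking module's internals aren't shown in the post; as a hedged sketch of what WebRTC signaling traffic typically looks like, the message shapes and the `SignalRouter` class below are my assumptions for illustration, not @kalynt/networking's real API.

```typescript
// Hypothetical signaling message shapes for WebRTC peer setup –
// an assumption about what @kalynt/networking might carry, not its real API.

type SignalMessage =
  | { kind: "offer"; from: string; to: string; sdp: string }
  | { kind: "answer"; from: string; to: string; sdp: string }
  | { kind: "ice-candidate"; from: string; to: string; candidate: string };

// A tiny in-memory "signaling channel" that routes messages between peers
// by id. In a real P2P bootstrap this payload rides over whatever side
// channel is available before the direct WebRTC link exists.
class SignalRouter {
  private inboxes = new Map<string, SignalMessage[]>();

  send(msg: SignalMessage): void {
    const inbox = this.inboxes.get(msg.to) ?? [];
    inbox.push(msg);
    this.inboxes.set(msg.to, inbox);
  }

  drain(peerId: string): SignalMessage[] {
    const inbox = this.inboxes.get(peerId) ?? [];
    this.inboxes.set(peerId, []);
    return inbox;
  }
}
```

Once offer, answer, and ICE candidates have been exchanged, the peers talk directly and the signaling channel is no longer needed; that is what makes the "no server required" claim workable.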
Open‑Core for Transparency
The core (editor, sync, code execution, filesystem isolation) is 100% AGPL‑3.0. You can audit every security boundary.
Proprietary modules (advanced agents, hardware optimization) are closed‑source, but they:
- Run entirely locally
- Ship heavily obfuscated in binaries
- Are not required for the core IDE
How I Built It
- Timeline: 1 month
- Hardware: 8 GB Lenovo laptop (no upgrades)
- Code: ~44 k lines of TypeScript
- Stack: Electron + React + Turbo monorepo + yjs + node-llama-cpp
Process
- Designed the architecture (security model, P2P wiring, agent capabilities)
- Used AI models (Claude, Gemini, GPT) to help with implementation
- Reviewed, tested, and integrated everything
- Security scanning via SonarQube + Snyk
This is how modern solo development should work: humans do architecture and judgment, AI handles implementation grunt work.
What I Learned
- Shipping beats perfect – I could have spent another month polishing, but shipping v1.0‑beta gave real feedback, which is more valuable than perceived perfection.
- Open‑core requires transparency – If you close‑source parts, be extremely clear about what and why. I documented SECURITY.md, OBFUSCATION.md, and CONTRIBUTING.md to show I’m not hiding anything nefarious.
- WebRTC is powerful but gnarly – P2P sync is genuinely hard. CRDTs solve the algorithmic problem, but signaling, NAT traversal, and peer discovery are where you lose hours.
- Privacy‑first is a feature, not a checkbox – It’s not “encryption support added.” It’s “the system is designed so that centralized storage is optional, not default.”
Try It
- GitHub:
- Download installers:
- Or build from source:
```shell
git clone https://github.com/Hermes-Lekkas/Kalynt.git
cd Kalynt
npm install
npm run dev
```