Building an AI-Powered Code Editor: A Journey into Structured LLM Integration
Source: Dev.to
I’ve been working on an idea: an online editor that isn’t just a chat window, but a true pair programmer integrated into the development environment. The goal is a tool that understands project context, executes complex tasks, and, most importantly, is reliable.
- Try it here: llm-codeforge – currently works only with Gemini
- Repository: CodeForge-AI
Context is King: the Virtual File System (VFS)
To make the AI “see” the project, the application manages an entire virtual file system in memory, persisted on IndexedDB. This gives the assistant a complete and up‑to‑date view of the folder structure and files—a fundamental prerequisite for any meaningful operation.
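As a rough sketch of the idea, the VFS can be as simple as a path-to-content map with read, write, and list operations. The class and method names below are illustrative (the post doesn't show the actual API), and the IndexedDB persistence layer is omitted so the example stays self-contained:

```typescript
// Minimal in-memory virtual file system. In the real app this map is
// also persisted to IndexedDB; that layer is omitted here.
class VirtualFileSystem {
  private files = new Map<string, string>(); // path -> content

  writeFile(path: string, content: string): void {
    this.files.set(path, content);
  }

  readFile(path: string): string {
    const content = this.files.get(path);
    if (content === undefined) throw new Error(`File not found: ${path}`);
    return content;
  }

  // List every path under a directory prefix, e.g. "src/".
  listFiles(prefix = ""): string[] {
    return [...this.files.keys()].filter((p) => p.startsWith(prefix)).sort();
  }
}
```

Because the whole tree lives in one structure, serializing a project snapshot for the AI's context window becomes a trivial traversal.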
A “Contract” with the AI: JSON Schema Validation
Interaction with LLMs can be unpredictable. To mitigate this, I defined a strict JSON schema that the AI must respect for each response. Every output is validated through AJV. If validation fails, the system sends automatic feedback to the AI, asking it to correct its response. This turns the interaction from a hope into a contract, increasing reliability.
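The validate-and-retry contract could look like the sketch below. In the real project the validator is an Ajv-compiled JSON-schema check; here it is injected as a plain function so the example has no dependencies, and `askWithContract` is a hypothetical name:

```typescript
// Returns null when the data is valid, otherwise a human-readable error.
type Validator = (data: unknown) => string | null;

async function askWithContract(
  callModel: (prompt: string) => Promise<string>,
  validate: Validator,
  prompt: string,
  maxRetries = 2,
): Promise<unknown> {
  let currentPrompt = prompt;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const raw = await callModel(currentPrompt);
    let parsed: unknown;
    try {
      parsed = JSON.parse(raw);
    } catch {
      currentPrompt = `${prompt}\nYour last reply was not valid JSON. Respond again.`;
      continue;
    }
    const error = validate(parsed);
    if (error === null) return parsed;
    // Feed the validation error back so the model can self-correct.
    currentPrompt = `${prompt}\nYour last reply failed validation: ${error}. Respond again.`;
  }
  throw new Error("Model never produced a schema-valid response");
}
```

The key design choice is that the validation error message itself becomes part of the retry prompt, so the model knows exactly which clause of the contract it broke.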
Beyond this, responses are split using a multi‑part pattern that isolates the different sections of a message, avoiding the fragile parsing of code and prose embedded inside one large JSON payload.
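One way to implement such a splitter is with named delimiters. The marker format below (`===PART:name===`) is invented for this sketch, since the post doesn't show the actual delimiters; the point is that only the structured part ever goes through `JSON.parse`:

```typescript
// Split a model response into named parts, e.g.
//   ===PART:json===
//   {"action":"update_file"}
//   ===PART:code===
//   console.log(1)
function splitParts(response: string): Map<string, string> {
  const parts = new Map<string, string>();
  const markers = [...response.matchAll(/^===PART:(\w+)===$/gm)];
  for (let i = 0; i < markers.length; i++) {
    const name = markers[i][1];
    const start = markers[i].index! + markers[i][0].length;
    const end = i + 1 < markers.length ? markers[i + 1].index! : response.length;
    parts.set(name, response.slice(start, end).trim());
  }
  return parts;
}
```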
A Framework for Structured Prompts (2WHAV “Light”)
To make creating complex prompts more efficient, I built a “light” version of the 2WHAV framework. This internal tool expands a simple user request into a detailed technical specification that the AI can follow. The idea is to give the assistant a clear action plan from the start, instead of a vague idea.
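The post doesn't spell out what 2WHAV's sections are, so the expander below is purely hypothetical: it shows the general shape of turning a one-line request plus project context into a sectioned specification, with section names I chose for illustration:

```typescript
// Hypothetical prompt expander: the section headings are illustrative,
// not the actual 2WHAV structure.
function expandRequest(request: string, projectFiles: string[]): string {
  return [
    `## Task\n${request}`,
    `## Context\nProject files:\n${projectFiles.map((f) => `- ${f}`).join("\n")}`,
    `## Constraints\nRespond only with the agreed JSON schema.`,
    `## Verification\nState how the change can be checked in the live preview.`,
  ].join("\n\n");
}
```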
An Action Loop for Complex Tasks
The assistant doesn’t just respond—it can execute a series of tools (e.g., list_files, read_file) and actions (e.g., create_file, update_file). This happens within a loop that allows the AI to break down a complex problem into smaller steps, such as sequentially modifying multiple files to implement a new feature.
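The loop can be sketched as follows. The `Step` shape is an assumption built from the tool and action names the post mentions, and `runActionLoop` is a hypothetical name; the real implementation would feed each observation back to the model rather than to a scripted callback:

```typescript
// A step is either a read-only tool call, a mutating action, or "done".
type Step =
  | { tool: "list_files" | "read_file"; path: string }
  | { action: "create_file" | "update_file"; path: string; content: string }
  | { done: true; summary: string };

function runActionLoop(
  nextStep: (observation: string) => Step, // stands in for the AI
  vfs: Map<string, string>,                // path -> content
  maxSteps = 10,                           // budget to avoid infinite loops
): string {
  let observation = "start";
  for (let i = 0; i < maxSteps; i++) {
    const step = nextStep(observation);
    if ("done" in step) return step.summary;
    if ("tool" in step) {
      observation =
        step.tool === "list_files"
          ? [...vfs.keys()].filter((p) => p.startsWith(step.path)).join("\n")
          : vfs.get(step.path) ?? "(not found)";
    } else {
      vfs.set(step.path, step.content); // create_file / update_file
      observation = `wrote ${step.path}`;
    }
  }
  return "step budget exhausted";
}
```

The step budget matters: without it, a confused model that keeps re-reading the same file would spin forever.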
User Experience at the Center
Technology is fascinating, but it must be useful. Every error that appears in the live preview console is clickable; one click copies the error directly into the AI’s input, ready for analysis. This small detail reduces friction and makes debugging smoother, while auto‑correction is a work in progress.
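The testable core of that flow is just formatting the error into a ready-to-send prompt; the DOM wiring around it is illustrative and assumes hypothetical element ids the post doesn't name:

```typescript
// Turn a console error into a prompt the AI can act on immediately.
function formatErrorForPrompt(message: string, file?: string, line?: number): string {
  const location = file ? ` (${file}${line !== undefined ? `:${line}` : ""})` : "";
  return `The live preview threw this error${location}:\n${message}\nPlease analyze and suggest a fix.`;
}

// Illustrative browser wiring (element ids are assumptions):
// document.querySelectorAll(".console-error").forEach((el) => {
//   el.addEventListener("click", () => {
//     const input = document.getElementById("ai-input") as HTMLTextAreaElement;
//     input.value = formatErrorForPrompt(el.textContent ?? "");
//   });
// });
```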
Conclusion
Building this application is a fascinating journey into software engineering applied to AI. The goal isn’t to create a “magic box,” but a tool robust and predictable enough to provide useful help and usable code.
The road is still long and challenges abound, but as a proof‑of‑concept, I think it’s a good result.
What do you think? What are the biggest challenges in integrating AI into our development workflows?