I built a visual LLM canvas where every branch has its own model, prompt, and context settings

Published: May 2, 2026 at 07:31 AM EDT
2 min read
Source: Dev.to

Introduction

Every time I went deep on a topic with ChatGPT, one tangent would lead to a new line of inquiry. The standard workaround? Open a new chat and paste context manually. I wanted branches—real ones. Not tabs. Not separate threads you have to juggle.

So I built ContextTree.

What is ContextTree?

ContextTree is a node‑based visual canvas for LLM conversations.

Core Invariant

  • A child node only inherits its direct parent’s prompt and context.
  • No cross‑contamination of state between branches.

Honest rule in the codebase:
A child node never reads the parent’s live state — no shared LangGraph.

Key Features

  • Per‑node LLM model (e.g., GPT‑4o on one branch, Gemini Flash on another).
  • Per‑node custom system prompt scoped to that node and its children.
  • Advanced settings per node:
    • Temperature
    • Max output tokens
    • History mode
    • Last K messages
    • Context budget in tokens
    • External context chunk count

This means that on a single canvas you can keep a general-assistant node and fork it into specialized assistants, each with its own model, prompt, and settings, none of which interfere with the others.
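The per-node settings listed above could be modeled as a small config object. The field names and defaults here are assumptions for illustration, not ContextTree's actual schema:

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class NodeSettings:
    # Hypothetical per-node configuration mirroring the settings above.
    model: str = "gpt-4o"
    system_prompt: str = ""
    temperature: float = 0.7
    max_output_tokens: int = 1024
    history_mode: Literal["full", "last_k"] = "last_k"
    last_k_messages: int = 10
    context_budget_tokens: int = 4000
    external_context_chunks: int = 0

# Two nodes on the same canvas, fully independent:
general = NodeSettings(system_prompt="You are a general assistant.")
rust = NodeSettings(model="gemini-flash", temperature=0.2,
                    system_prompt="You are a Rust expert.")
```

Because each node carries its whole configuration, forking is just copying a `NodeSettings` and overriding a few fields.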

Knowledge vs. State

Ancestry‑scoped vector search lets a child retrieve knowledge from its ancestors, but never their live state. This distinction was crucial: knowledge, not state.

SIMILAR_CONTEXT_LIMIT=0   # per node
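A minimal sketch of ancestry-scoped retrieval, under assumed data structures (a parent map and a flat chunk store; the `limit` parameter plays the role of `SIMILAR_CONTEXT_LIMIT`). Chunks from sibling branches are filtered out before ranking:

```python
def ancestors(node_id, parent_of):
    # Walk the parent map upward, collecting node_id and every ancestor.
    chain = set()
    cur = node_id
    while cur is not None:
        chain.add(cur)
        cur = parent_of.get(cur)
    return chain

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def retrieve(query_vec, chunks, node_id, parent_of, limit):
    # Rank stored chunks by similarity, but only those written by this
    # node or its ancestors; a sibling branch is never visible.
    allowed = ancestors(node_id, parent_of)
    scoped = [c for c in chunks if c["node"] in allowed]
    scoped.sort(key=lambda c: dot(query_vec, c["vec"]), reverse=True)
    return scoped[:limit]

parent_of = {"root": None, "left": "root", "right": "root"}
chunks = [
    {"node": "root",  "vec": [1.0, 0.0], "text": "shared brief"},
    {"node": "right", "vec": [1.0, 0.0], "text": "sibling-only note"},
]
hits = retrieve([1.0, 0.0], chunks, "left", parent_of, limit=3)
# "left" can see the root's chunk but not its sibling's
```

With `limit=0` the scoped search returns nothing, which matches a node that opts out of external context entirely.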

Open Questions

  • Prompt stack order – Should users be able to reorder layers?
  • Per‑node system prompt – Is it enough, or do people want per‑node RAG sources pinned differently?
  • Multi‑LLM branching UX – Is it obvious enough what’s happening?

Demo

ContextTree Demo on YouTube

Call for Feedback

Built solo, early stage. Brutal feedback is welcome—especially from anyone who’s built multi‑agent or prompt‑engineering tooling.

