# Day 3: Multi-Agent Systems - The Supervisor Pattern

Source: Dev.to

## Introduction
Welcome to Day 3 of the 4‑day series *Agentic AI with LangChain/LangGraph*. When tasks become complex, a single agent that tries to be a “Researcher, Writer, Editor, and Coder” all at once quickly gets overwhelmed. The solution is a Multi‑Agent System: split the brain into specialized personas that collaborate via a shared state.
## Overview of the Supervisor Pattern
We’ll build a simple graph where two agents work together:
- Researcher – has access to search tools; system prompt: “You are a researcher…”.
- Writer – has no tools; system prompt: “You are a writer…”.
The agents communicate by appending messages to a shared state. Only the Researcher is bound to tools, preventing the Writer from accidentally invoking them.
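Before any LangGraph wiring, the shared-state mechanics can be seen in isolation. Here is a minimal plain-JavaScript sketch (stub agents, no LangChain involved) of the append-only message list the agents communicate through:

```javascript
// Stand-in for LangGraph's MessagesAnnotation: each node returns
// { messages: [...] }, and the reducer appends them to the shared list.
const appendMessages = (state, update) => ({
  messages: [...state.messages, ...update.messages],
});

// Two stub agents that only read from and append to the message list.
const researcherNode = () => ({
  messages: [{ role: "ai", name: "researcher", content: "I found X, Y, Z." }],
});
const writerNode = (state) => {
  // The Writer simply reads what the Researcher wrote last.
  const findings = state.messages[state.messages.length - 1].content;
  return {
    messages: [{ role: "ai", name: "writer", content: `Blog post based on: ${findings}` }],
  };
};

// Fixed Researcher → Writer sequence over one shared state.
let state = { messages: [{ role: "user", content: "Write about LangGraph." }] };
state = appendMessages(state, researcherNode(state));
state = appendMessages(state, writerNode(state));
console.log(state.messages.length); // 3
```

Each node only ever *returns* new messages; the reducer owns the merge. LangGraph's `MessagesAnnotation` provides exactly this append behavior for you.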
## Creating Agents
```javascript
import { SystemMessage } from "@langchain/core/messages";

// Helper to create an agent node
const createAgent = (model, systemPrompt, tools = []) => {
  // Bind tools only to the agents that need them; binding an empty
  // tools array to a model can trigger provider-side errors.
  const modelWithTools = tools.length > 0 ? model.bindTools(tools) : model;
  return async (state) => {
    // Prepend the system prompt to the conversation history
    const messages = [new SystemMessage(systemPrompt), ...state.messages];
    const response = await modelWithTools.invoke(messages);
    // Return the new message(s) to be appended to the shared state
    return { messages: [response] };
  };
};
```
The Researcher receives the tools array (e.g., a search tool), while the Writer receives an empty array.
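To see the helper in action without an API key, here is a sketch using a hypothetical `StubModel` in place of a real chat model (a real setup would pass e.g. a `ChatOpenAI` instance and import `SystemMessage` from `@langchain/core/messages`); `createAgent` is repeated so the snippet is self-contained:

```javascript
// Hypothetical stand-ins so the helper can run without network access.
class SystemMessage {
  constructor(content) { this.role = "system"; this.content = content; }
}
class StubModel {
  constructor(reply) { this.reply = reply; this.tools = []; }
  bindTools(tools) { const bound = new StubModel(this.reply); bound.tools = tools; return bound; }
  async invoke(_messages) { return { role: "ai", content: this.reply, tool_calls: [] }; }
}

// Same helper as above, repeated so this snippet is self-contained.
const createAgent = (model, systemPrompt, tools = []) => {
  const modelWithTools = tools.length > 0 ? model.bindTools(tools) : model;
  return async (state) => {
    const messages = [new SystemMessage(systemPrompt), ...state.messages];
    const response = await modelWithTools.invoke(messages);
    return { messages: [response] };
  };
};

// Researcher gets the (placeholder) search tool; Writer gets none.
const searchTool = { name: "search" };
const researcherNode = createAgent(new StubModel("I found X."), "You are a researcher…", [searchTool]);
const writerNode = createAgent(new StubModel("Here is a post."), "You are a writer…");

researcherNode({ messages: [{ role: "user", content: "Research X." }] })
  .then((out) => console.log(out.messages[0].content)); // "I found X."
```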
## Deterministic Supervisor Flow
For this tutorial we use a fixed sequence instead of a dynamic “Supervisor” LLM:
Researcher → (maybe Tools) → Researcher → Writer → End
```mermaid
graph TD
    Start[__start__] --> Researcher
    Researcher -->|Call Tool?| Condition{Has Tool Call?}
    Condition -->|Yes| Tools[ToolNode]
    Condition -->|No| Writer
    Tools --> Researcher
    Writer --> End[__end__]
```
The flow is encoded in LangGraph edges (see next section).
## Defining the Workflow in LangGraph
```javascript
import { StateGraph, MessagesAnnotation } from "@langchain/langgraph";

const workflow = new StateGraph(MessagesAnnotation)
  .addNode("researcher", researcherNode)
  .addNode("tools", toolNode)
  .addNode("writer", writerNode)
  // Start the graph with the Researcher
  .addEdge("__start__", "researcher")
  // Researcher logic:
  // – If it calls a tool → go to "tools"
  // – If it produces an answer → hand off to "writer"
  .addConditionalEdges("researcher", (state) => {
    const last = state.messages[state.messages.length - 1];
    return last.tool_calls?.length ? "tools" : "writer";
  })
  // Tools always return results back to the Researcher
  .addEdge("tools", "researcher")
  // Writer always finishes the job
  .addEdge("writer", "__end__");
```
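The conditional edge is the only branching logic in the graph. Pulled out as a plain function, both paths are easy to check in isolation:

```javascript
// The routing decision from addConditionalEdges, as a standalone function:
// route to "tools" if the last message carries tool calls, else to "writer".
const routeAfterResearcher = (state) => {
  const last = state.messages[state.messages.length - 1];
  return last.tool_calls?.length ? "tools" : "writer";
};

console.log(routeAfterResearcher({ messages: [{ tool_calls: [{ name: "search" }] }] })); // "tools"
console.log(routeAfterResearcher({ messages: [{ content: "Done." }] }));                 // "writer"
```

Note that the workflow still needs compiling before it can run — typically `const app = workflow.compile();` followed by `await app.invoke({ messages: [...] })`.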
## How Messages Are Passed
- Researcher adds a message such as “I found X, Y, Z.” to the shared state.
- The graph transitions to Writer.
- Writer receives the full conversation (the user’s HumanMessage, the Researcher’s AIMessage, …) and generates the final output, e.g., “Here is a blog post about X, Y, Z…”.
The Writer never needs to perform a search; it simply “reads” what the Researcher wrote. This separation of concerns lets you tune each agent independently (e.g., temperature 0.8 for creative writing, temperature 0 for factual research).