Day 2: Introduction to LangGraph - From Chains to Agents

Published: December 3, 2025 at 08:12 PM EST
3 min read
Source: Dev.to

Part of the 4‑Day Series – Agentic AI with LangChain/LangGraph

Yesterday we built a Chain (Input → Retrieve → Answer). Real‑world workflows often need loops, conditional steps, or clarification prompts. LangGraph treats an AI application as a graph (nodes and edges) instead of a linear chain.

Core Concepts of LangGraph

1. State

In a chain, data passes step‑by‑step. LangGraph introduces a central State object that all nodes read from and write to – think of it as a shared whiteboard.

// The State is just a list of messages
import { MessagesAnnotation } from "@langchain/langgraph";
// MessagesAnnotation is a pre‑built state definition for chat apps.
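
MessagesAnnotation keeps a messages array and appends every update to it. If you need more than chat history, you can declare extra fields with their own merge logic. A minimal sketch, combining the real MessagesAnnotation with a made-up searchCount field for illustration:

// Custom state sketch - "searchCount" is a hypothetical field
import { Annotation, MessagesAnnotation } from "@langchain/langgraph";

const CustomState = Annotation.Root({
  // Reuse the chat history from the pre-built annotation
  ...MessagesAnnotation.spec,
  // Each node's update is merged in by the reducer (here: a running total)
  searchCount: Annotation({
    reducer: (current, update) => current + update,
    default: () => 0,
  }),
});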

2. Nodes

Nodes are plain JavaScript functions. They receive the current state, perform work, and return an update.

const agentNode = async (state) => {
  // Read the history
  const { messages } = state;
  // Call the LLM
  const response = await model.invoke(messages);
  // Update the state (append the new message)
  return { messages: [response] };
};
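
Here model is assumed to be a chat model with our tool attached, so the LLM can emit tool calls. A sketch using OpenAI (any tool-calling model works; lookupPolicy is built in the section below):

// Assumed setup for "model" - swap in any tool-calling chat model
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" })
  // bindTools lets the LLM request our retrieval tool via tool_calls
  .bindTools([lookupPolicy]);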

3. Edges and Conditional Edges

Edges define the flow between nodes.

  • Normal Edge – “After A, always go to B.”
  • Conditional Edge – “After A, check the result. If X, go to B; if Y, go to C.”
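
In code, a normal edge is a single addEdge call, while a conditional edge pairs a routing function with the node names it may return. A sketch with placeholder names (needsTool is a hypothetical helper):

// Normal edge: after "a", always run "b"
workflow.addEdge("a", "b");

// Conditional edge: the routing function picks the next node;
// the optional map translates its return values into node names
workflow.addConditionalEdges(
  "a",
  (state) => (needsTool(state) ? "useTool" : "finish"),
  { useTool: "tools", finish: "__end__" }
);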

Agents are built as loops:

  1. LLM thinks.
  2. Conditional Edge – Did the LLM request a tool?
    • Yes → ToolNode.
    • No → End.
  3. ToolNode runs the tool and loops back to step 1.

Visualizing the Graph

graph TD
    Start[__start__] --> Agent
    Agent -->|Call Tool?| Condition{Has Tool Call?}
    Condition -->|Yes| Tools[ToolNode]
    Condition -->|No| End[__end__]
    Tools --> Agent

Building the Graph

We wrap the Day 1 RAG code into a Tool, giving the LLM the option to search instead of forcing it.
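
A sketch of what that tool might look like, assuming retriever is the vector-store retriever from Day 1 (the name and description are illustrative):

// Day 1 RAG wrapped as a tool the LLM can choose to call
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const lookupPolicy = tool(
  async ({ query }) => {
    // Assumed: "retriever" comes from the Day 1 vector store
    const docs = await retriever.invoke(query);
    return docs.map((d) => d.pageContent).join("\n\n");
  },
  {
    name: "lookup_policy",
    description: "Search the company policy documents.",
    schema: z.object({ query: z.string() }),
  }
);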

import { StateGraph, MessagesAnnotation } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";

// 1. Define the Graph
const workflow = new StateGraph(MessagesAnnotation)
  // Add Nodes
  .addNode("agent", agentNode)
  .addNode("tools", new ToolNode([lookupPolicy])) // pre‑built node that runs tools

  // Define Flow
  .addEdge("__start__", "agent") // start here

  // The Brain: decide what to do next
  .addConditionalEdges("agent", (state) => {
    const lastMessage = state.messages[state.messages.length - 1];
    // If the LLM returned a "tool_call", go to "tools"
    if (lastMessage.tool_calls?.length) {
      return "tools";
    }
    // Otherwise, we are done
    return "__end__";
  })

  // The Loop: after tools, always go back to the agent to interpret results
  .addEdge("tools", "agent");

// 2. Compile
const app = workflow.compile();
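
A quick usage sketch (the question is just an example; it assumes your model provider's API key is configured):

// Ask a question and let the agent decide whether to search
import { HumanMessage } from "@langchain/core/messages";

const result = await app.invoke({
  messages: [new HumanMessage("What is our remote work policy?")],
});

// The last message in the state is the agent's final answer
console.log(result.messages[result.messages.length - 1].content);

// Bonus: print the Mermaid diagram shown earlier straight from the graph
console.log(app.getGraph().drawMermaid());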

Why Is This Better?

  • Autonomy – The LLM decides whether it needs to search. Simple greetings are cheap and fast; hard questions trigger a thorough search.
  • Cycles – If the first search result isn’t sufficient, the agent can loop and search again with a refined query—something a linear chain can’t do.

Tomorrow we’ll expand this graph to include multiple agents collaborating!
