Building a Human-in-the-Loop Travel Agent with LangGraph.js

Published: February 19, 2026 at 07:30 AM EST
3 min read
Source: Dev.to


The Problem: Autonomous Agents are Scary

Fully autonomous agents can be unpredictable. When you ask an agent to “Book a trip to Paris,” you probably don’t want it to instantly charge $3,000 to your credit card without you seeing the itinerary first. You need a Review & Approve stage.

The Solution: LangGraph Interrupts

This project is inspired by the architectural patterns documented by MarkTechPost, where they demonstrated this flow in Python. We’ve taken that core idea and ported it to LangGraph.js, leveraging the power of Next.js to create a seamless, interactive user experience.

Architecture Overview

Our agent is defined as a state machine with explicit control flow. Here’s how the logic is structured:


  • Planning Node: The LLM parses the user’s natural language request into a structured JSON schema.
  • Validation: A custom helper ensures the JSON is well‑formed, which is especially important for local models (Ollama) that may struggle with perfect formatting.
  • Interrupt (HITL): Using LangGraph’s interrupt() function, the graph execution pauses. The browser UI picks up this state and presents the user with an editable JSON editor.
  • Execution Node: Once the user clicks “Approve,” the graph resumes, passing the (potentially edited) plan to the tool execution logic.

Technical Deep Dive

1. Robust Local LLM Support

Key enhancements for Ollama integration:

  • Native JSON Mode – leveraging Ollama’s format: "json" configuration.
  • Balanced‑Brace Extraction – a parsing mechanism that strips away LLM “chatter” and extracts the core JSON object.
  • Type Fallbacks – if a model omits a field, the validatePlan helper injects sensible defaults instead of crashing the workflow.
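The post doesn't show these helpers, so here is a minimal sketch of what balanced-brace extraction and default injection might look like. The `extractJson` and `validatePlan` names follow the article's description, but the implementation and the plan fields (`destination`, `days`, `budget`) are illustrative, not the project's actual schema.

```typescript
// Balanced-brace extraction: find the first '{' and return the substring up
// to its matching '}', ignoring braces that appear inside JSON strings. This
// strips LLM "chatter" before and after the JSON object.
function extractJson(raw: string): string | null {
  const start = raw.indexOf("{");
  if (start === -1) return null;
  let depth = 0;
  let inString = false;
  for (let i = start; i < raw.length; i++) {
    const ch = raw[i];
    if (inString) {
      if (ch === "\\") i++;           // skip the escaped character
      else if (ch === '"') inString = false;
    } else if (ch === '"') inString = true;
    else if (ch === "{") depth++;
    else if (ch === "}" && --depth === 0) {
      return raw.slice(start, i + 1); // braces balanced: core object found
    }
  }
  return null; // unbalanced output — caller can retry or fall back
}

// Type fallbacks: inject sensible defaults for missing fields instead of
// crashing the workflow when a local model omits part of the schema.
interface Plan {
  destination: string;
  days: number;
  budget: number;
}

function validatePlan(obj: Partial<Plan>): Plan {
  return {
    destination: typeof obj.destination === "string" ? obj.destination : "unknown",
    days: typeof obj.days === "number" ? obj.days : 3,
    budget: typeof obj.budget === "number" ? obj.budget : 1000,
  };
}
```

A brace counter is deliberately more forgiving than `JSON.parse` on the raw output: it tolerates models that wrap the object in prose or markdown fences, which is common with smaller local models.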

2. State Management with LangGraph.js

LangGraph.js manages the agent’s state across multiple turns. We use the MemorySaver checkpointer, which allows us to “hibernate” the agent’s state while waiting for user input, making the application feel responsive and reliable.

// Example of the main graph definition, compiled with a checkpointer
import { StateGraph, MemorySaver, START, END } from "@langchain/langgraph";

const workflow = new StateGraph(StateAnnotation)
  .addNode("planning", make_llm_plan)
  .addNode("approve", wait_for_approval)
  .addNode("execute", execute_tools)
  .addEdge(START, "planning")
  .addEdge("planning", "approve")
  .addEdge("approve", "execute")
  .addEdge("execute", END)
  .compile({ checkpointer: new MemorySaver() });

3. Modern Next.js Frontend

The UI is a sleek, dark‑themed dashboard built with standard CSS and React. It features:

  • Real‑time streaming feedback (simulated via graph states).
  • JSON input/output synchronization.
  • A multi‑provider settings drawer for switching between OpenAI, Aisa.one, and local Ollama instances.

What’s Next?

This project is an open template for the community. You can fork it today and:

  • Connect it to real booking APIs (Amadeus, Skyscanner).
  • Add a clarification loop where the agent asks questions before the first plan.
  • Implement budget constraints that guard the tool execution.
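As a starting point for the budget-constraint idea, a pure guard function could run inside the execute node before any tool call. This is a hypothetical sketch: the field names (`estimatedCost`) and threshold handling are invented for illustration.

```typescript
// Hypothetical plan shape for the guard; not the template's actual schema.
interface ApprovedPlan {
  destination: string;
  estimatedCost: number;
}

// Guard run before tool execution: reject plans that exceed the budget the
// human approved, so edits made in the review UI remain the hard ceiling.
function assertWithinBudget(plan: ApprovedPlan, maxBudget: number): void {
  if (plan.estimatedCost > maxBudget) {
    throw new Error(
      `Plan for ${plan.destination} costs ${plan.estimatedCost}, ` +
        `exceeding the approved budget of ${maxBudget}`,
    );
  }
}
```

Throwing here (rather than silently clamping) keeps the human in the loop: the graph can surface the error and route back to the approval node for a revised plan.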

Human‑in‑the‑loop isn’t just a safety feature; it’s a UX requirement for the next generation of AI agents. By combining LangGraph.js with the speed of Next.js, we’ve created a blueprint for agents that are both powerful and trustworthy.

Special thanks to MarkTechPost for the original architectural inspiration.
