From Blood Test to Doctor's Appointment: Building an Autonomous Health Agent with LangGraph and GPT-4

Published: February 1, 2026 at 08:10 PM EST
4 min read
Source: Dev.to

Beck_Moulton

We’ve all been there: you get your blood test results back, see a bunch of scary red arrows indicating “High” or “Low,” and immediately fall down a WebMD rabbit hole of doom. But what if your AI didn’t just explain the results, but actually fixed the problem by finding a specialist and booking an appointment for you?

In this tutorial we are building a sophisticated AI Health Agent using LangGraph, GPT‑4, and function calling. This isn’t just another chatbot; it’s an autonomous system designed for medical automation, bridging the gap between data analysis and real‑world action. By leveraging LLM‑driven workflows, we can automate the journey from clinical data to doctor scheduling.

💡 Side note: While this is a technical exploration of agentic workflows, always consult a human doctor for medical advice. If you’re looking for more production‑ready patterns for healthcare AI, check out the advanced case studies at the WellAlly Tech Blog.

The Architecture: Closing the Loop

Traditional LLM chains are linear, but health‑related tasks are often cyclical and require state management. That’s why we use LangGraph – it lets us define a state machine where the agent can loop back, search for more info, or trigger specific tools based on the “abnormality” detected in a report.

System Workflow

```mermaid
graph TD
    A[Input: Blood Test PDF/Text] --> B{GPT-4 Analysis}
    B -- Normal --> C[Summarize & Finish]
    B -- Abnormal Findings --> D[Tavily API: Search Specialists]
    D --> E[GPT-4: Select Best Match]
    E --> F[Google Calendar API: Check Slots]
    F --> G[Confirm & Book Appointment]
    G --> H[Final Report to User]

    style B fill:#f96,stroke:#333,stroke-width:2px
    style G fill:#00ff00,stroke:#333,stroke-width:2px
```

Prerequisites

To follow along, you’ll need:

  • Python 3.10+
  • LangGraph & LangChain – for agent orchestration
  • OpenAI GPT‑4 API key – for high‑reasoning extraction
  • Tavily API – for specialized medical resource searching
  • Google Calendar API – to handle scheduling logic

Step 1: Defining the Agent State

In LangGraph the state is the single source of truth passed between nodes. We need to track the blood‑work results, identified abnormalities, and the recommended doctors.

```python
from typing import List, TypedDict
from langgraph.graph import StateGraph, END

class AgentState(TypedDict):
    raw_report: str
    abnormalities: List[str]
    specialist_recommendation: List[dict]
    appointment_confirmed: bool
    final_summary: str
```
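
To see how this state flows between nodes, here is a minimal sketch (plain Python, no LangGraph required) of the update semantics: a node returns only the keys it changed, and LangGraph shallow-merges that dict into the running state. The sample report string is an assumption for illustration.

```python
# Initial state: only the raw report is populated.
state = {
    "raw_report": "Glucose: 120 mg/dL (High)",
    "abnormalities": [],
    "specialist_recommendation": [],
    "appointment_confirmed": False,
    "final_summary": "",
}

# A node returns just the keys it updated...
node_output = {"abnormalities": ["High Glucose (120 mg/dL)"]}

# ...and those keys are merged into the state before the next node runs.
state.update(node_output)

print(state["abnormalities"])  # ['High Glucose (120 mg/dL)']
```

Note that this last-value-wins merge is LangGraph's default for plain `TypedDict` channels; keys that should accumulate instead (e.g. a message history) would need an annotated reducer.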

Step 2: The Analysis Node (GPT‑4 + Pydantic)

We want the agent to extract structured data. We’ll use GPT‑4 with a specific Pydantic schema to avoid a wall of text.

```python
from pydantic import BaseModel, Field

class Finding(BaseModel):
    indicator: str = Field(description="The name of the test, e.g., LDL Cholesterol")
    value: str = Field(description="The numeric value detected")
    status: str = Field(description="High, Low, or Normal")

def analyze_report_node(state: AgentState):
    # Call GPT-4 with the Finding schema:
    # report = state["raw_report"]
    # structured_data = llm.with_structured_output(Finding).invoke(report)

    # Simulating a 'High Glucose' finding
    return {"abnormalities": ["High Glucose (120 mg/dL)"]}
```
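
Before wiring in GPT-4, it helps to pin down what a `Finding`-shaped result should look like. The sketch below is a hypothetical, deterministic stand-in for the structured-output call: a regex parser (not part of the tutorial's agent) that turns one report line into the same three fields the Pydantic schema defines, which is handy for unit-testing downstream nodes without burning tokens.

```python
import re

def parse_finding_line(line: str) -> dict:
    """Parse a line like 'Glucose: 120 mg/dL (High)' into Finding-shaped fields.

    A deterministic stand-in for the GPT-4 structured-output call,
    useful for testing nodes without an API key.
    """
    match = re.match(
        r"(?P<indicator>[\w\s]+):\s*(?P<value>[\d.]+\s*\S+)\s*\((?P<status>High|Low|Normal)\)",
        line,
    )
    if not match:
        raise ValueError(f"Unrecognized report line: {line!r}")
    return {
        "indicator": match.group("indicator").strip(),
        "value": match.group("value").strip(),
        "status": match.group("status"),
    }

print(parse_finding_line("Glucose: 120 mg/dL (High)"))
# {'indicator': 'Glucose', 'value': '120 mg/dL', 'status': 'High'}
```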

Step 3: Tool Integration (Tavily & Google Calendar)

If the agent detects an abnormality, it triggers the Tavily API to find local endocrinologists and then hits the Google Calendar API to locate an open slot.

```python
from langchain_community.tools.tavily_search import TavilySearchResults

def search_specialist_node(state: AgentState):
    search = TavilySearchResults(max_results=3)
    query = f"Best endocrinologists for {state['abnormalities'][0]} in San Francisco"
    results = search.run(query)
    return {"specialist_recommendation": results}

def book_appointment_node(state: AgentState):
    # In a real app this would use the Google Calendar API:
    # service.events().insert(calendarId='primary', body=event).execute()
    print("🚀 Auto-booking appointment for next Tuesday at 10:00 AM...")
    return {"appointment_confirmed": True}
```
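
For the real booking path, the `events().insert` call needs an event resource as its request body. Here is a minimal sketch of building that body with the field names the Calendar API v3 expects (`summary`, `start`/`end` with RFC 3339 `dateTime`); the doctor name, time zone, and duration are placeholder assumptions.

```python
from datetime import datetime, timedelta

def build_event_body(doctor_name: str, start: datetime, duration_minutes: int = 30) -> dict:
    """Build the request body for a Google Calendar events.insert call.

    Field names follow the Calendar API v3 event resource; the time zone
    here is a placeholder for the user's actual zone.
    """
    end = start + timedelta(minutes=duration_minutes)
    return {
        "summary": f"Specialist appointment: {doctor_name}",
        "description": "Auto-booked by the health agent after abnormal lab findings.",
        "start": {"dateTime": start.isoformat(), "timeZone": "America/Los_Angeles"},
        "end": {"dateTime": end.isoformat(), "timeZone": "America/Los_Angeles"},
    }

event = build_event_body("Dr. Example", datetime(2026, 2, 10, 10, 0))
print(event["summary"])  # Specialist appointment: Dr. Example
```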

Step 4: Building the Graph

Now we connect the dots. Conditional edges send the flow to the search step when abnormalities exist, otherwise the graph ends.

```python
workflow = StateGraph(AgentState)

workflow.add_node("analyzer", analyze_report_node)
workflow.add_node("searcher", search_specialist_node)
workflow.add_node("scheduler", book_appointment_node)

workflow.set_entry_point("analyzer")

def should_continue(state: AgentState):
    return "continue" if state["abnormalities"] else "end"

workflow.add_conditional_edges(
    "analyzer",
    should_continue,
    {
        "continue": "searcher",
        "end": END,
    },
)

workflow.add_edge("searcher", "scheduler")
workflow.add_edge("scheduler", END)

app = workflow.compile()
```
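
To sanity-check the control flow without LangGraph installed or any API keys, you can replay the same routing logic by hand. This sketch stubs all three nodes and walks the analyzer → conditional edge → searcher → scheduler path on a plain dict; the stub outputs are assumptions standing in for the real API results.

```python
def run_dry(state: dict) -> dict:
    """Replay the graph's routing with stubbed nodes (no external calls)."""
    state.update({"abnormalities": ["High Glucose (120 mg/dL)"]})    # analyzer
    if not state["abnormalities"]:                                   # conditional edge
        return state                                                 # "end" -> END
    state.update({"specialist_recommendation": [{"name": "stub"}]})  # searcher
    state.update({"appointment_confirmed": True})                    # scheduler
    return state

result = run_dry({"raw_report": "Glucose: 120 mg/dL (High)", "abnormalities": []})
print(result["appointment_confirmed"])  # True
```

If the analyzer stub returned an empty `abnormalities` list instead, the function would return early, mirroring the graph's `"end": END` branch.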

Why This Matters: The Shift to Action‑Oriented AI

Most AI tutorials stop at “summarize a document.” In the real world, businesses and users need outcomes. By combining LangGraph’s stateful orchestration with LLM reasoning and external tools, we move from passive insight to proactive assistance—turning a blood‑test report into a booked specialist appointment automatically.


State management with powerful tools like Tavily and Google Calendar transforms a passive LLM into an active participant in a user’s health journey.

This pattern—Detect → Search → Act—is the blueprint for the next generation of enterprise AI.

For a deeper dive into handling PHI (Protected Health Information) securely or optimizing these agentic prompts for lower latency, check out the in‑depth guides at the WellAlly Tech Blog. They cover the production‑grade nuances that go beyond a simple MVP.

Conclusion

We’ve successfully built an autonomous loop that:

  • Understands complex lab data.
  • Decides if action is necessary.
  • Executes real‑world API calls to book appointments.

The future of healthcare isn’t just better medicine; it’s better access and reduced cognitive load for patients.

What would you automate next? Maybe a fitness agent that adjusts your workout based on your sleep data? Let me know in the comments! 👇

Happy coding! 🚀
