Function Calling & Tool Schemas

Published: February 8, 2026 at 08:28 AM EST
3 min read
Source: Dev.to

Overview

This learning session explores function calling and tool schemas—how agents interact with external tools. The dialogue captures the back‑and‑forth between the user and Klover, the AI assistant, highlighting misconceptions, corrections, and deeper insights.

Tool Schema Definition

A tool schema does more than list available tools; it defines how to call each tool, similar to a function signature in code.

  • Name – identifier used by the model.
  • Description – natural‑language explanation for the LLM.
  • Parameters – typed fields with required/optional flags.

Example schema for a weather tool:

name: get_weather
description: "Get current weather for a location"
parameters:
  location:
    type: string
    required: true
    description: City name
  units:
    type: string
    required: false
    description: '"celsius" or "fahrenheit"'

Where the Schema Lives

The schema is not part of the model’s training data. It is injected at runtime, typically via:

  • The system prompt, or
  • A dedicated “tools” section in the request payload.

The model learns the format of schemas during training, enabling it to work with custom tools it has never seen before.
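As a minimal sketch of what that runtime injection can look like, here is a request payload with a dedicated "tools" section. The field names and model name are illustrative — real providers differ in the exact shape:

```python
import json

# Hypothetical request payload: the tool schema travels with every request,
# alongside the conversation, rather than being baked into the model.
payload = {
    "model": "example-model",  # placeholder model name
    "messages": [
        {"role": "user", "content": "What's the weather in Singapore?"}
    ],
    "tools": [
        {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "parameters": {
                "location": {
                    "type": "string",
                    "required": True,
                    "description": "City name",
                },
                "units": {
                    "type": "string",
                    "required": False,
                    "description": '"celsius" or "fahrenheit"',
                },
            },
        }
    ],
}

print(json.dumps(payload, indent=2))
```

Because the schema rides along with the request, you can add, remove, or edit tools between calls without retraining anything.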

The Full Round‑Trip (LLM ↔ Tool)

  1. Thought – the model decides a tool is needed.
  2. Action – it outputs a structured JSON representation of the tool call.
  3. Observation – the orchestrator executes the tool, captures the result, and feeds it back to the model.
  4. Thought – the model incorporates the observation and continues.

Example Tool Call Output

{
  "tool": "get_weather",
  "parameters": {
    "location": "Singapore",
    "units": "celsius"
  }
}

The LLM does not execute the call itself; it stops after emitting the JSON. Your application (the orchestrator) then:

  • Parses the JSON.
  • Calls the actual API.
  • Returns the API response as the observation.
  • Allows the LLM to generate the next thought.
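The round trip above can be sketched as a small loop. Everything here is illustrative: `fake_llm` stands in for a real model call, and `get_weather` returns a canned result instead of hitting a real API:

```python
import json

def fake_llm(history):
    """Stand-in for a model call. On the first turn it emits a tool call;
    once it sees an observation, it produces a final plain-text answer."""
    if any(msg.startswith("Observation:") for msg in history):
        return "It is 31°C in Singapore."
    return json.dumps({"tool": "get_weather",
                       "parameters": {"location": "Singapore", "units": "celsius"}})

def get_weather(location, units="celsius"):
    # Canned result; a real orchestrator would call a weather API here.
    return {"location": location, "temp_c": 31}

TOOLS = {"get_weather": get_weather}

def run_agent(user_message, max_steps=5):
    history = [user_message]
    for _ in range(max_steps):
        output = fake_llm(history)          # Thought / Action
        try:
            call = json.loads(output)       # structured output = tool call
        except json.JSONDecodeError:
            return output                   # plain text = final answer
        result = TOOLS[call["tool"]](**call["parameters"])      # execute
        history.append(f"Observation: {json.dumps(result)}")    # feed back
    return "Step limit reached."

print(run_agent("What's the weather in Singapore?"))
```

Note that the model itself only ever produces text; the loop's `TOOLS` dispatch is what actually runs anything.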

Role of the Orchestrator

The orchestrator sits between the LLM and external tools, providing essential safeguards:

  • Validate parameters before execution.
  • Rate limit to prevent infinite loops.
  • Filter disallowed tools based on context or permissions.
  • Log every call for debugging and auditability.
  • Sanitize tool outputs before feeding them back.

Without this layer, a malicious prompt could trick the model into dangerous actions (e.g., delete_database).
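A few of those safeguards — an allowlist, parameter validation, and logging — can be sketched in one small dispatcher. The registry format and function names here are illustrative, not a real library API:

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orchestrator")

# Hypothetical schema registry: tool name -> required/optional parameter names.
ALLOWED_TOOLS = {
    "get_weather": {"required": {"location"}, "optional": {"units"}},
}

def execute_tool_call(raw_output, tools):
    """Validate a model-emitted tool call before dispatching it."""
    call = json.loads(raw_output)
    name, params = call.get("tool"), call.get("parameters", {})

    spec = ALLOWED_TOOLS.get(name)
    if spec is None:                                   # filter disallowed tools
        raise PermissionError(f"Tool not allowed: {name}")
    missing = spec["required"] - params.keys()
    if missing:                                        # validate parameters
        raise ValueError(f"Missing required parameters: {missing}")
    unknown = params.keys() - spec["required"] - spec["optional"]
    if unknown:
        raise ValueError(f"Unknown parameters: {unknown}")

    log.info("tool call: %s(%s)", name, params)        # log every call
    return tools[name](**params)

result = execute_tool_call(
    '{"tool": "get_weather", "parameters": {"location": "Singapore"}}',
    {"get_weather": lambda location, units="celsius": {"temp_c": 31}},
)
```

A model-emitted call to `delete_database` would fail the allowlist check here before anything executed.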

Importance of Well‑Written Schemas

Poor schemas lead to unreliable behavior:

  • Vague description – the model selects the wrong tool or skips the appropriate one.
  • Incorrect types / missing required flags – malformed requests, crashes, or garbage output.
  • Missing parameter details – the model guesses meanings, causing unpredictable calls.

Designing tool schemas is essentially prompt engineering for tools. Clear names, precise descriptions, and correct type specifications are crucial for a reliable agent.
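To make the contrast concrete, here is a weak version of the earlier weather schema next to a stronger one (both are illustrative):

```yaml
# Too vague — the model can't tell when to pick this tool or what to pass:
name: get_weather
description: "weather stuff"
parameters:
  location:
    type: string
---
# Clearer — says what the tool does, what each field means, and its constraints:
name: get_weather
description: "Get the current weather conditions for a single city"
parameters:
  location:
    type: string
    required: true
    description: City name, e.g. "Singapore"
  units:
    type: string
    required: false
    description: '"celsius" or "fahrenheit" (defaults to "celsius")'
```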

Session Details

  • Date: February 8, 2026
  • Status: Exposure
  • Notes: The user demonstrated good intuition, naturally linking concepts to the ReAct loop. Review scheduled for tomorrow.