Runtime Hyperparameter Tuning in LangChain

Published: February 9, 2026 at 12:45 PM EST
2 min read
Source: Dev.to

Overview

When building AI agents, parameters such as temperature or top_p are often fixed when the application starts. However, different requests may require different behaviors:

  • Code Generation – needs temperature=0.0 (strict)
  • Creative Writing – needs temperature=0.8 (creative)

LangChain’s configurable_fields lets you modify the internal attributes of an LLM at runtime. The Streamlit UI captures the user’s choice via sliders; that value is placed in a config dictionary that travels alongside the prompt through the chain.

graph LR
    User -->|Slides Temp to 0.9| Streamlit
    Streamlit -->|Constructs Config| Config["config = {configurable: {llm_temperature: 0.9}}"]

    subgraph LangChain Runtime
        Prompt --> Chain
        Config --> Chain
        Chain -->|Injects Params| LLM[Ollama LLM]
    end

    LLM --> Output

You don’t need to wrap the LLM in a dedicated router; you simply expose its internal fields.
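To see the mechanics without any framework, here is a minimal, framework-free sketch of the pattern that configurable_fields implements: defaults live on the object, and a per-call config dictionary can override them. The class and method names below are illustrative, not LangChain’s:

```python
class ConfigurableLLM:
    """Toy stand-in for an LLM whose fields can be overridden per request."""

    def __init__(self, temperature=0.5):
        self.temperature = temperature  # default fixed at startup

    def invoke(self, prompt, config=None):
        # Per-request values win over the instance default
        overrides = (config or {}).get("configurable", {})
        temperature = overrides.get("llm_temperature", self.temperature)
        return f"[temp={temperature}] {prompt}"


llm = ConfigurableLLM()
llm.invoke("Write a poem")  # uses the default, temperature=0.5
llm.invoke("Write a poem", config={"configurable": {"llm_temperature": 0.9}})  # overridden to 0.9
```

The real configurable_fields mechanism works the same way: the instance keeps its defaults, and each `invoke` call may carry a `config={"configurable": {...}}` dictionary that overrides them for that request only.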

Setup LLM with Defaults

from langchain_ollama import OllamaLLM
from langchain_core.runnables import ConfigurableField

# 1. Initialize the LLM with default parameters
llm = OllamaLLM(model="llama3.2", temperature=0.5)

Expose the temperature Field for Runtime Configuration

# 2. Make the temperature configurable at runtime
configurable_llm = llm.configurable_fields(
    temperature=ConfigurableField(
        id="llm_temperature",
        name="LLM Temperature",
        description="The creativity of the model"
    )
)

Invoke the Chain with a New Temperature

# 3. Build a simple chain and pass a new temperature value during invocation
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("{input}")
chain = prompt | configurable_llm

chain.invoke(
    {"input": "Write a poem"},
    config={"configurable": {"llm_temperature": 0.9}}
)

Practical Scenarios

  • Multi‑Tenant Apps – User A wants a creative bot, while User B prefers a strict one. The same backend instance can serve both by adjusting the temperature per request.
  • Adaptive Agents – An agent may first extract data with temperature=0.0 and then summarize it creatively with temperature=0.7.
  • Testing & Tuning – Quickly iterate on the “sweet spot” for prompt settings without restarting the script.
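The multi-tenant scenario above can be sketched as a per-tenant lookup of the request config. The chain invocation is replaced here by a stub so the example is self-contained; the tenant IDs and helper names are illustrative:

```python
# Each tenant maps to its own per-request config dictionary
TENANT_CONFIGS = {
    "user_a": {"configurable": {"llm_temperature": 0.9}},  # creative bot
    "user_b": {"configurable": {"llm_temperature": 0.0}},  # strict bot
}


def stub_invoke(inputs, config):
    # Stand-in for chain.invoke: returns the temperature it was handed,
    # so we can see which setting each tenant's request used
    return config["configurable"]["llm_temperature"]


def handle_request(user_id, prompt):
    # Same backend code path for every tenant; only the config differs
    return stub_invoke({"input": prompt}, config=TENANT_CONFIGS[user_id])
```

With a real chain, `stub_invoke` would be `chain.invoke`, and both tenants would hit the same LLM instance with different creativity settings per request.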

Repository

The full example code is available in the GitHub repository:

https://github.com/harishkotra/langchain-ollama-cookbook/tree/main/02_temp_tuner_agent
