# 🧠 Your LLM Isn’t an Agent — Until It Has Tools, Memory, and Structure (LangChain Deep Dive)
Source: Dev.to
## Introduction
Most “AI apps” today follow a simple pattern:
Prompt → LLM → Text Response
That’s not an agent—it’s just autocomplete with branding.
A real AI agent can:
- 🛠 Use tools
- 🧠 Remember context
- 📦 Return structured outputs
- 🔁 Reason across multiple steps
With modern LangChain, building such an agent is surprisingly clean. Let’s build one properly.
## Core Components of a Production‑Ready AI Agent
| Component | Role |
|---|---|
| Model | The brain |
| Tools | Capabilities the agent can invoke |
| Structured outputs | Reliability and formatting |
| Memory | Continuity across interactions |
Missing any of these means you’re running a demo, not a system.
## Building the Agent

### 1. Create the LLM

```python
from langchain.agents import create_agent
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # low temperature = more deterministic reasoning
```
### 2. Define Tools

Tools are ordinary Python functions decorated with `@tool`; the docstring tells the model what each tool does and when to call it.

```python
from langchain.tools import tool

@tool
def calculate_revenue(price: float, quantity: int) -> float:
    """Calculate total revenue given price per unit and quantity sold."""
    return price * quantity

@tool
def get_exchange_rate(currency: str) -> float:
    """Get the USD exchange rate for a given currency code."""
    rates = {"EUR": 1.1, "GBP": 1.25}
    return rates.get(currency.upper(), 1.0)
```
### 3. Assemble the Agent

```python
agent = create_agent(
    model=llm,
    tools=[calculate_revenue, get_exchange_rate],
    system_prompt="You are a financial analysis assistant.",
)
```
The agent now:
- Decides when math is needed
- Calls tools autonomously
- Observes results
- Produces a final answer
No manual routing logic is required.
## Structured Outputs

Modern agents can return validated data using Pydantic schemas.

```python
from pydantic import BaseModel

class FinancialReport(BaseModel):
    revenue: float
    currency: str
    usd_value: float
```
Create a structured agent:

```python
structured_agent = create_agent(
    model=llm,
    tools=[calculate_revenue, get_exchange_rate],
    response_format=FinancialReport,
)
```
Invoke it:

```python
response = structured_agent.invoke({
    "messages": [
        {"role": "user", "content": "I sold 120 units at 50 EUR each. Convert to USD."}
    ]
})

print(response["structured_response"])  # a validated FinancialReport instance
```

You receive a data object, not raw text.
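Pydantic is what backs that guarantee: if the model emits a malformed payload, validation fails loudly instead of letting ambiguous text slip downstream. A quick illustration using plain Pydantic, no agent required (the schema is repeated here so the snippet is self-contained):

```python
from pydantic import BaseModel, ValidationError

class FinancialReport(BaseModel):
    revenue: float
    currency: str
    usd_value: float

# Well-formed payloads validate into typed objects.
report = FinancialReport(revenue=6000.0, currency="EUR", usd_value=6600.0)
print(report.usd_value)  # → 6600.0

# Malformed payloads raise instead of passing bad data along.
try:
    FinancialReport(revenue="a lot", currency="EUR", usd_value=6600.0)
except ValidationError as e:
    print("rejected:", len(e.errors()))  # → rejected: 1
```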
## Adding Memory
Without memory, each request is isolated. With memory, the agent becomes a collaborator.
```python
chat_history = []

# First interaction
response = agent.invoke({
    "messages": chat_history + [
        {"role": "user", "content": "My product costs 20 USD."}
    ]
})
chat_history.extend(response["messages"])

# Follow‑up interaction
response = agent.invoke({
    "messages": chat_history + [
        {"role": "user", "content": "Now calculate revenue for 300 units."}
    ]
})
```
The agent now remembers:
- Product price
- Prior discussion
- Contextual decisions
Memory transforms isolated responses into evolving workflows.
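In practice you'll also want to cap how much history you replay on each call. A small helper along these lines — a hypothetical plain-Python sketch, independent of LangChain (real deployments often use checkpointers or summarization instead of simple truncation) — keeps the window bounded:

```python
class ChatMemory:
    """Keep a bounded window of conversation messages (illustrative sketch)."""

    def __init__(self, max_messages=20):
        self.max_messages = max_messages
        self.messages = []

    def add(self, *new_messages):
        self.messages.extend(new_messages)
        # Drop the oldest messages once the window is full.
        self.messages = self.messages[-self.max_messages:]

    def as_payload(self, user_text):
        """Build the input dict expected by agent.invoke()."""
        return {"messages": self.messages + [{"role": "user", "content": user_text}]}

memory = ChatMemory(max_messages=4)
memory.add({"role": "user", "content": "My product costs 20 USD."},
           {"role": "assistant", "content": "Noted: 20 USD per unit."})
payload = memory.as_payload("Now calculate revenue for 300 units.")
print(len(payload["messages"]))  # → 3
```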
## How the Agent Works Internally
When you call `agent.invoke(...)`, the agent:

1. Reads the conversation + system prompt
2. Plans the next action
3. Chooses a tool (if needed)
4. Executes the tool
5. Feeds the result back into reasoning
6. Produces a structured final output
This loop relies on tool‑calling rather than fragile prompt tricks.
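The loop above can be sketched in plain Python. This is an illustrative stand-in, not LangChain's actual implementation: `fake_model` and the `TOOLS` registry are hypothetical, standing in for the real model's tool-calling decisions.

```python
# Minimal sketch of a tool-calling agent loop (illustrative only).
TOOLS = {
    "calculate_revenue": lambda args: args["price"] * args["quantity"],
}

def fake_model(messages):
    """Stand-in for an LLM: request a tool call, or give a final answer."""
    last = messages[-1]
    if last["role"] == "tool":
        # A tool result came back: produce the final answer.
        return {"type": "final", "content": f"Total revenue: {last['content']}"}
    # Otherwise, plan a tool call.
    return {"type": "tool_call", "name": "calculate_revenue",
            "args": {"price": 50.0, "quantity": 120}}

def run_agent(user_input):
    messages = [{"role": "user", "content": user_input}]
    while True:
        action = fake_model(messages)                       # steps 1-2: read + plan
        if action["type"] == "final":                       # step 6: final output
            return action["content"]
        result = TOOLS[action["name"]](action["args"])      # steps 3-4: choose + execute
        messages.append({"role": "tool", "content": result})  # step 5: observe

print(run_agent("I sold 120 units at 50 EUR each."))
# → Total revenue: 6000.0
```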
## Common Pitfalls for Beginners
- ❌ Adding too many tools (creates noise)
- ❌ Writing vague tool descriptions (confuses the planner)
- ❌ Not enforcing structured outputs (leads to ambiguous text)
- ❌ Forgetting observability / logging (hard to debug)
- ❌ Letting the agent run unrestricted (risk of unbounded behavior)
Agents are probabilistic planners, not deterministic scripts. Design them intentionally.
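The second pitfall is worth a concrete example: a vague docstring gives the planner nothing to route on, while a precise one spells out inputs, units, and intent. The function names below are illustrative (shown as plain functions; the same docstrings would sit under `@tool`):

```python
# Vague: the planner can't tell what "data" means or when to call this.
def process(data):
    """Process the data."""
    ...

# Precise: inputs, units, and purpose are explicit, so the model can
# route to this tool confidently.
def convert_to_usd(amount: float, currency: str) -> float:
    """Convert an amount in the given ISO currency code (e.g. 'EUR')
    to US dollars using a fixed rate table."""
    rates = {"EUR": 1.1, "GBP": 1.25}
    return amount * rates.get(currency.upper(), 1.0)
```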
## Before vs. After Agents

| Before Agents | After Agents |
|---|---|
| APIs returned static responses | LLMs orchestrate execution |
| Business logic was deterministic | Tools become capabilities |
| LLMs were “smart text generators” | Structure guarantees reliability |
| Each request was isolated | Memory enables continuity |
You’re no longer just building chat interfaces—you’re building goal‑driven systems.
## When It’s Not an Agent
If your AI system:
- Doesn’t use tools
- Doesn’t enforce structured outputs
- Doesn’t maintain memory
then it’s merely autocomplete with better marketing.
## Conclusion
With modern LangChain, the barrier to real agents is gone. The real question shifts from “Can we build agents?” to “What workflows are we ready to automate?”
Feel free to comment on how you build agents and share interesting types you’ve created!