Building AI Agents in 2025: From ChatGPT to Multi-Agent Systems

Published: January 12, 2026 at 03:56 AM EST
4 min read
Source: Dev.to

What Are AI Agents?

An AI agent is an autonomous system that can perceive its environment, make decisions, and take actions to achieve specific goals. Unlike traditional software, AI agents can adapt and learn from interactions.
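
To make that concrete, the core of almost every agent is a loop that observes, decides, and acts toward a goal. Here is a minimal, framework-free sketch; the class and method names are illustrative rather than taken from any particular library:

# Minimal agent loop: observe -> decide -> act, keeping a history for context
class SimpleAgent:
    def __init__(self, goal: str):
        self.goal = goal
        self.history = []  # everything the agent has seen and done

    def decide(self, observation: str) -> str:
        # A real agent would call an LLM here with the goal, observation, and history
        return f"action for: {observation}"

    def step(self, observation: str) -> str:
        self.history.append(("observe", observation))
        action = self.decide(observation)
        self.history.append(("act", action))
        return action

agent = SimpleAgent(goal="answer user questions")
print(agent.step("What's the weather in Tokyo?"))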

Types of AI Agents

  • Simple Reflex Agents – React to the current state.
  • Model‑Based Agents – Maintain internal state.
  • Goal‑Based Agents – Work towards specific objectives.
  • Utility‑Based Agents – Optimize for the best outcomes.
  • Learning Agents – Improve performance over time.
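
To make the first two categories concrete, here is a hedged sketch (the rules and state are invented for illustration): a reflex agent maps the current percept directly to an action, while a model-based agent also consults internal state it has built up.

# Illustrative only: reflex vs. model-based behaviour
def reflex_agent(percept: str) -> str:
    # Reacts to the current percept alone
    rules = {"obstacle": "turn", "clear": "move forward"}
    return rules.get(percept, "wait")

class ModelBasedAgent:
    def __init__(self):
        self.visited = set()  # internal model of where the agent has been

    def act(self, location: str, percept: str) -> str:
        seen_before = location in self.visited
        self.visited.add(location)
        if percept == "obstacle":
            return "turn"
        return "explore elsewhere" if seen_before else "move forward"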

Building Your First AI Agent

Using LangChain

from langchain import hub
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain.tools import Tool
from langchain_openai import ChatOpenAI

# Define tools
def search_web(query: str) -> str:
    """Search the web for information"""
    # Implementation here
    return f"Results for: {query}"

def calculate(expression: str) -> str:
    """Calculate mathematical expressions (demo only: eval is unsafe on untrusted input)"""
    return str(eval(expression))

tools = [
    Tool(name="Search", func=search_web, description="Search the web"),
    Tool(name="Calculator", func=calculate, description="Calculate math")
]

# Create agent (the constructor also needs a prompt; this pulls the standard one from LangChain Hub)
llm = ChatOpenAI(model="gpt-4", temperature=0)
prompt = hub.pull("hwchase17/openai-functions-agent")
agent = create_openai_functions_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Run agent
result = executor.invoke({"input": "What's 15% of 1200?"})
print(result["output"])

Advanced: Multi‑Agent Systems

Multi‑agent systems involve multiple AI agents working together, each with specialized roles.

AutoGen Framework

import autogen

config_list = [{
    "model": "gpt-4",
    "api_key": "your-key"
}]

# Define agents
user_proxy = autogen.UserProxyAgent(
    name="user",
    human_input_mode="TERMINATE",
    max_consecutive_auto_reply=10,
    code_execution_config={"work_dir": "coding", "use_docker": False}  # run generated code locally
)

coder = autogen.AssistantAgent(
    name="coder",
    llm_config={"config_list": config_list},
    system_message="You write Python code to solve tasks."
)

reviewer = autogen.AssistantAgent(
    name="reviewer",
    llm_config={"config_list": config_list},
    system_message="You review code for bugs and improvements."
)

# Group chat
groupchat = autogen.GroupChat(
    agents=[user_proxy, coder, reviewer],
    messages=[],
    max_round=12
)

manager = autogen.GroupChatManager(
    groupchat=groupchat,
    llm_config={"config_list": config_list}
)

# Start conversation
user_proxy.initiate_chat(
    manager,
    message="Build a web scraper for product prices"
)

Agent Memory Systems

Memory is crucial for context‑aware agents:

from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

# Short‑term memory: the raw conversation history
memory = ConversationBufferMemory()
conversation = ConversationChain(llm=llm, memory=memory)

# Long‑term memory with a vector store
vectorstore = FAISS.from_texts(
    ["User likes React", "User prefers TypeScript"],
    OpenAIEmbeddings()
)

# Retrieve relevant memories
relevant = vectorstore.similarity_search("What framework?")
print(relevant)
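
One common pattern is to stitch the two memory types together yourself: retrieve the most relevant long-term facts, prepend them to the short-term buffer, and send the combined context to the model. The glue code below is a sketch under that assumption, reusing the memory, vectorstore, and llm objects defined above:

question = "Which frontend framework should I use?"

# Pull the most relevant long-term facts for this question
long_term_facts = "\n".join(
    doc.page_content for doc in vectorstore.similarity_search(question, k=2)
)

# Pull the running conversation from short-term memory
short_term = memory.load_memory_variables({})["history"]

prompt_text = (
    f"Known facts about the user:\n{long_term_facts}\n\n"
    f"Conversation so far:\n{short_term}\n\n"
    f"User: {question}"
)

response = llm.invoke(prompt_text)
memory.save_context({"input": question}, {"output": response.content})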

Tool Creation for Agents

from typing import Type

from langchain.tools import BaseTool
from pydantic import BaseModel, Field

class DatabaseQueryInput(BaseModel):
    query: str = Field(description="SQL query to execute")

class DatabaseTool(BaseTool):
    name: str = "database"
    description: str = "Query the database"
    args_schema: Type[BaseModel] = DatabaseQueryInput

    def _run(self, query: str) -> str:
        # Execute the query safely (parameterized, read-only) and return the results
        return f"Query results: {query}"

    async def _arun(self, query: str) -> str:
        # Async implementation; here it simply delegates to the sync version
        return self._run(query)
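
Once defined, the custom tool plugs into an agent like any built-in tool. The snippet below reuses the llm, prompt, and AgentExecutor pattern from the first example; the question is just an example input:

db_tool = DatabaseTool()

agent = create_openai_functions_agent(llm, [db_tool], prompt)
executor = AgentExecutor(agent=agent, tools=[db_tool], verbose=True)

result = executor.invoke({"input": "How many orders were placed yesterday?"})
print(result["output"])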

ReAct Pattern (Reasoning + Acting)

from langchain.agents import load_tools, initialize_agent, AgentType

# "serpapi" needs a SERPAPI_API_KEY environment variable; "llm-math" wraps the LLM for arithmetic
tools = load_tools(["serpapi", "llm-math"], llm=llm)

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

result = agent.run(
    "What's the population of Tokyo and what's 20% of it?"
)
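
With verbose=True, the agent prints its intermediate reasoning in the Thought / Action / Observation format that gives ReAct its name. The trace below is purely illustrative; the exact wording and numbers will vary from run to run:

Thought: I need the current population of Tokyo first.
Action: Search
Action Input: population of Tokyo
Observation: <search result containing a population figure>
Thought: Now I should compute 20% of that number.
Action: Calculator
Action Input: <population> * 0.2
Observation: <computed value>
Thought: I now have both pieces of information.
Final Answer: Tokyo's population is roughly <population>, and 20% of it is <computed value>.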

Production Considerations

Rate Limiting

from functools import wraps
import time

def rate_limit(calls_per_minute=10):
    min_interval = 60.0 / calls_per_minute
    last_called = [0.0]

    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            elapsed = time.time() - last_called[0]
            if elapsed < min_interval:
                time.sleep(min_interval - elapsed)
            last_called[0] = time.time()
            return func(*args, **kwargs)
        return wrapper
    return decorator

@rate_limit(calls_per_minute=5)
def call_llm(prompt):
    return llm.invoke(prompt)

Error Handling

from tenacity import retry, stop_after_attempt, wait_exponential
import logging

logger = logging.getLogger(__name__)

@retry(
    stop=stop_after_attempt(3),
    wait=wait_exponential(multiplier=1, min=4, max=10)
)
def resilient_agent_call(agent, input_text):
    try:
        return agent.run(input_text)
    except Exception as e:
        logger.error(f"Agent error: {e}")
        raise

Cost Monitoring

from langchain.callbacks import get_openai_callback

with get_openai_callback() as cb:
    result = agent.run("Complex query")
    print(f"Total Tokens: {cb.total_tokens}")
    print(f"Total Cost: ${cb.total_cost}")

Real‑World Use Cases

Customer Support Agent

# Pseudocode: create_agent and these tool names stand in for your own
# agent factory and tool implementations
tools = [
    search_knowledge_base,
    create_ticket,
    escalate_to_human,
    check_order_status,
]

support_agent = create_agent(
    tools=tools,
    system_message="You're a helpful customer support agent",
)
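
For example, one of those placeholder tools could be wired up as a plain LangChain Tool; the order-lookup logic here is invented purely for illustration:

from langchain.tools import Tool

def _check_order_status(order_id: str) -> str:
    # In practice this would query your order service or database
    return f"Order {order_id} is out for delivery."

check_order_status = Tool(
    name="check_order_status",
    func=_check_order_status,
    description="Look up the shipping status of an order by its ID",
)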

Code Review Agent

code_reviewer = create_agent(
    tools=[analyze_code, suggest_improvements, check_security],
    system_message="Review code for bugs, performance, and security",
)

Research Assistant

research_agent = create_agent(
    tools=[search_papers, summarize_content, cite_sources],
    system_message="Help with academic research",
)

Emerging Trends

  • Agentic Workflows – Chains of agents working together
  • Self‑Healing Systems – Agents that fix their own errors
  • Hybrid Intelligence – Humans and agents collaborating
  • Specialized Agent Marketplaces – Pre‑built agents for specific tasks
  • Edge AI Agents – Running locally for privacy

Best Practices

  • Start Simple – Begin with single‑purpose agents
  • Clear Boundaries – Define what agents can and cannot do
  • Human Oversight – Keep humans in the loop for critical decisions
  • Monitor Performance – Track success rates and costs
  • Version Control – Track agent prompts and configurations (see the sketch after this list)
  • Test Extensively – Edge cases can cause unexpected behavior
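
For the version-control point, one lightweight approach (a sketch, not tied to any particular tool) is to keep prompts and model settings as versioned data in your repository and rebuild the agent from that pinned config:

from langchain_openai import ChatOpenAI

# Versioned, reviewable agent configuration
AGENT_CONFIG = {
    "version": "2025-06-01",  # bump whenever the prompt or settings change
    "model": "gpt-4",
    "temperature": 0,
    "system_prompt": "You're a helpful customer support agent",
}

def build_llm(config: dict) -> ChatOpenAI:
    # Reconstruct the model from the pinned config so runs are reproducible
    return ChatOpenAI(model=config["model"], temperature=config["temperature"])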

Conclusion

AI agents are transforming how we build software. Whether you’re creating a simple chatbot or a complex multi‑agent system, the key is to start small and iterate based on real‑world usage.

The future of software development will increasingly involve orchestrating AI agents to handle complex tasks autonomously.

What AI agents are you building? Share your experiences!

Exploring AI integration in production systems. Follow for more cutting‑edge tech insights!
