AI Agent Tool Design: What I Wish I Knew Earlier

Published: February 8, 2026 at 11:23 AM EST
11 min read
Source: Dev.to

I Spent Three Days Building My First AI Agent, and It Was Terrible

Not because I chose the wrong model. Not because my prompts were bad. The agent just… didn’t work. It gave vague answers, got confused by simple requests, and sometimes tried to do things it clearly wasn’t capable of doing.

Then I discovered something that changed everything: the quality of an AI agent depends almost entirely on how you design its tools.

This article is what I wish I’d read before building that first agent. If you’re exploring AI‑agent development—whether you’re an Angular developer like me looking to add AI features, or just getting started with AI engineering—this framework will save you a lot of frustration.


What Are Tools in AI Agents?

When we talk about “tools” in the context of AI agents, we’re talking about functions the AI can call. That’s it—nothing fancy.

Simple example

const weatherTool = {
  name: "get_weather",
  description: "Get current weather for a city",
  parameters: {
    city: {
      type: "string",
      description: "City name"
    }
  },
  execute: async (city) => {
    const res = await fetch(`https://api.weather.com/${city}`);
    return res.json(); // fetch resolves to a Response; parse the JSON body
  }
};

Without this tool, if someone asks “What’s the weather in Tokyo?” the agent can only reply, “I don’t have access to current weather data.”

With the tool, the agent can call the function, retrieve real data, and give a useful answer.

Tools transform your agent from a conversationalist into something that can actually do things.

Think of tools the way you think about components in Angular—each tool should do one thing well, and you compose them together to build something powerful.


The Pattern That Changed My Approach

After my first failed attempt, I found a framework in a book about AI engineering, and it clicked immediately:

Ask yourself: “What would a human do to solve this problem?”
Then turn each step into a tool.

Simple, but incredibly powerful.


Real Example: The Book‑Recommendation Agent

Imagine you’re building an agent that recommends books from investor reading lists. You have a database of 10 000 books that various investors have recommended.

My first instinct (WRONG)

Dump all 10 000 books into the agent’s context and let it figure things out.

What actually happened

The agent got completely overwhelmed. It couldn’t navigate the data effectively, and recommendations were poor or generic.

The better approach

Think like a human analyst. If you were manually analyzing these book recommendations, what would you do?

  1. Look up which investors recommended specific books.
  2. Filter by genre or topic.
  3. Sort by popularity (how many investors recommended each book).
  4. Compare recommendations between different types of investors (founders vs. VCs).

Each of these operations became a tool:

// Tools
function get_books_by_investor(investor_name) { /* … */ }
function get_books_by_genre(genre) { /* … */ }
function sort_books_by_recommendations(books) { /* … */ }
function get_investors_by_type(type) { /* … */ } // founders vs. VCs

Result: The agent could now navigate the data intelligently and make genuinely good recommendations.

This mirrors how I build complex components in Angular: I break them down into smaller, focused services and components, each handling a single responsibility. The same principle applies to AI‑agent tools.
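As a sketch, the tools above might look like this over a tiny in-memory dataset. The data shape, field names, and book titles here are my own illustrative assumptions, not the structure of the original project:

```javascript
// Hypothetical in-memory dataset; shape and names are illustrative only.
const books = [
  { title: "Book A", genre: "startups", recommendedBy: ["Alice", "Bob", "Carol"] },
  { title: "Book B", genre: "investing", recommendedBy: ["Bob"] },
  { title: "Book C", genre: "startups", recommendedBy: ["Carol", "Dan"] },
];

// One focused tool per human action.
function get_books_by_genre(genre) {
  return books.filter((b) => b.genre === genre);
}

function get_books_by_investor(name) {
  return books.filter((b) => b.recommendedBy.includes(name));
}

function sort_books_by_recommendations(list) {
  // Most-recommended first; copy so we never mutate the caller's array.
  return [...list].sort((a, b) => b.recommendedBy.length - a.recommendedBy.length);
}
```

Because each function is small and pure, the agent can chain them freely: filter by genre first, then sort the result, for example.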


Why Tool Design Matters More Than You Think

You could have:

  • Perfect prompts
  • The best model available
  • Great context management

But if your tools are poorly designed, your agent will still struggle.

Conversely, with well‑designed tools, even a simpler model can deliver excellent results.

Tools = the capabilities of your agent.

Think of it this way: an LLM without tools is like a smart person with no hands. They can think, they can talk, but they can’t actually do anything. Tools are what let your agent take action.

The Framework I Now Use

When I start building an agent, I follow this process:

Step 1: Define the Goal

Be specific. “Build a customer‑support agent” is too vague.

Better: “Build an agent that helps customers troubleshoot login issues, check order status, and create support tickets.”

Step 2: List Human Actions

If a human were doing this job, what specific actions would they take?

For customer support:

  • Look up customer account information
  • Search help documentation
  • Check order/subscription status
  • Create a support ticket
  • Send confirmation email

Step 3: One Action = One Tool

Each action becomes a focused tool:

// Tools needed
- get_customer_info(customer_id)
- search_help_docs(query)
- check_order_status(order_id)
- create_ticket(issue_type, description)
- send_email(to, subject, body)

Step 4: Write Clear Descriptions

This part is crucial. The tool description isn’t just for you—the agent actually reads it to decide when to use each tool.

Bad description

{
  "name": "search",
  "description": "Search stuff"
}

The agent thinks: “When do I use this? What does it search?”

Good description

{
  "name": "search_help_docs",
  "description": "Search company help documentation for troubleshooting steps. Use this when a customer has a technical issue. Returns relevant articles with solutions.",
  "parameters": {
    "query": {
      "type": "string",
      "description": "Search terms describing the problem the customer is facing."
    }
  }
}

A clear, purpose‑driven description tells the LLM exactly when and how to invoke the tool.
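Putting Step 4 together with the earlier steps, a complete tool pairs the schema (what the model reads) with an implementation (what your runtime calls). This sketch loosely follows the JSON-Schema parameter style most function-calling APIs use, but the exact wire format varies by provider, and the in-memory doc set is a stand-in for a real search backend:

```javascript
// Sketch of a complete tool: schema plus execute function.
const searchHelpDocs = {
  name: "search_help_docs",
  description:
    "Search company help documentation for troubleshooting steps. " +
    "Use this when a customer has a technical issue. Returns relevant articles with solutions.",
  parameters: {
    type: "object",
    properties: {
      query: {
        type: "string",
        description: "Search terms describing the problem the customer is facing.",
      },
    },
    required: ["query"],
  },
  // Hypothetical implementation over a tiny in-memory doc set.
  execute: async ({ query }) => {
    const docs = [
      { title: "Resetting your password", body: "password reset login" },
      { title: "Two-factor setup", body: "2fa authenticator app" },
    ];
    const q = query.toLowerCase();
    return docs.filter((d) => (d.title + " " + d.body).toLowerCase().includes(q));
  },
};
```

Keeping the schema and the implementation in one object makes it hard for the description to drift out of sync with what the code actually does.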


TL;DR

  1. Identify the human actions needed to solve the problem.
  2. Turn each action into a single‑purpose tool (function).
  3. Write precise, descriptive metadata for each tool.
  4. Let the LLM decide when to call the tools—the better the tools, the better the agent.

By focusing on tool design first, you’ll spend far less time wrestling with vague answers and far more time building agents that actually get things done. Happy building!


Tool Design for AI Agents – Lessons Learned

(A quick guide for developers building AI‑augmented applications, especially Angular front‑ends.)

The Core Idea

When you give an LLM a tool description, the model reads it and decides when and how to call the tool. Think of the description as the interface between the AI and your functionality—good UX principles apply:

  • Be clear – use concise, unambiguous language.
  • Be specific – define inputs, outputs, and any constraints.
  • Provide examples – show typical usage scenarios.

Common Mistakes I’m Learning to Avoid

Mistake #1 – Too Many Tools

  • 25 tools: the agent gets confused, spends time evaluating options, and the prompt cost spikes.
  • 5 tools: clear choices, fast decisions, cheaper runs.

Rule of thumb: Start with 3‑5 focused tools. Add more only when a clear need emerges.


Mistake #2 – Vague Tool Descriptions

The agent relies on the description to decide whether a tool is appropriate.
A vague description → confused agent → poor results.

Always include:

  1. What the tool does
  2. When to use it
  3. What it returns
  4. Example inputs

Mistake #3 – Tools That Do Too Much

I once built a single handle_customer_issue() tool that:

  • Looked up the customer
  • Searched docs
  • Created tickets
  • Sent emails

The agent couldn’t control the sequence; it was a black box.

Better approach: Split responsibilities into separate tools that the agent can chain together as needed. This mirrors the single‑responsibility principle we use in Angular services.


Mistake #4 – Not Testing Tools First

I would hand a new tool to the agent, see it fail, and waste time debugging without knowing whether the fault lay in the tool or the agent logic.

Improved workflow:

  1. Build the tool.
  2. Unit‑test it manually.
  3. Verify it works.
  4. Expose it to the agent.
  5. If the agent still fails, you now know the problem is in the agent’s reasoning, not the tool.

How Tool Calling Actually Works

Understanding the execution loop helped me design better tools.

1. User: "I can't log in to my account"

2. Agent thinks: "I need to help with login. Let me check their account status."

3. Agent calls: check_account_status(user_id)

4. Tool returns: { status: "locked", reason: "too many failed attempts" }

5. Agent thinks: "Account is locked. I should unlock it."

6. Agent calls: unlock_account(user_id)

7. Tool returns: { success: true }

8. Agent responds: "I've unlocked your account. Please try logging in now."

Key insight:
The agent decides which tools to use and when to use them. You are not writing IF/THEN logic; the model determines the sequence at run time based on the situation.
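The loop above can be sketched in plain JavaScript. The model's decisions are simulated with a scripted list here so the dispatch mechanics stay visible; in a real agent, each call would come back from the LLM instead, and the tool names and return shapes are illustrative:

```javascript
// Tool registry: name -> implementation. Names and returns are illustrative stubs.
const tools = {
  check_account_status: (args) => ({ status: "locked", reason: "too many failed attempts" }),
  unlock_account: (args) => ({ success: true }),
};

// Simulated model turns; a real agent would receive these from the LLM.
const scriptedCalls = [
  { tool: "check_account_status", args: { user_id: "u1" } },
  { tool: "unlock_account", args: { user_id: "u1" } },
];

const transcript = [];
for (const call of scriptedCalls) {
  const result = tools[call.tool](call.args); // dispatch by name
  transcript.push({ call, result });          // each result is fed back as context
}
```

The important part is the shape of the loop: the runtime only dispatches by name and returns results; every decision about *which* name to call next belongs to the model.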

Practical Patterns I’m Finding Useful

Data Retrieval Tools

get_customer_info(id)
search_database(query)
fetch_order_history(customer_id)

Action Tools

send_email(to, subject, body)
create_ticket(issue)
update_record(id, data)

Calculation Tools

calculate_total(items)
convert_currency(amount, from_currency, to_currency)
analyze_sentiment(text)

External API Tools

get_weather(city)
search_web(query)
translate_text(text, target_lang)

I’m building a personal library of these patterns so I can quickly compose a new agent by picking the right subset.
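One lightweight way to organize such a library (my own sketch, not a published API) is a registry keyed by category, so composing a new agent is just picking the subsets you need:

```javascript
// Hypothetical pattern library: stub tools grouped by category.
const toolLibrary = {
  retrieval: { get_customer_info: (id) => ({ id, name: "stub" }) },
  action: { create_ticket: (issue) => ({ issue, status: "open" }) },
  calculation: { calculate_total: (items) => items.reduce((sum, i) => sum + i.price, 0) },
};

// Compose an agent's toolset by merging the chosen categories.
function composeTools(categories) {
  return Object.assign({}, ...categories.map((c) => toolLibrary[c]));
}

const supportTools = composeTools(["retrieval", "action"]);
```

A support agent gets retrieval and action tools; an analytics agent might get retrieval and calculation instead, with no code duplicated between them.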


Applying This to Front‑End Development

Bad Angular Architecture

// One giant service that does everything
class GodService {
  handleEverything() { /* thousands of lines */ }
}

Good Angular Architecture

// Focused services with clear responsibilities
class AuthService {}
class UserService {}
class OrderService {}
class NotificationService {}

The same principle applies to AI‑agent tools: break things down into focused, composable pieces.

When I embed AI features into Angular apps (e.g., my Angular AI Chat Kit), I think of each API endpoint as a tool—just like each Angular service has a single responsibility.


The Emergent‑Behavior Surprise

With a well‑designed set of tools, agents can produce creative sequences you never explicitly programmed.

Example

A meeting‑scheduler agent equipped with the following primitives:

def get_calendar(user_id): ...
def find_free_slots(calendar1, calendar2): ...
def create_meeting(attendees, time, duration): ...
def send_notification(user_id, message): ...

When a user says, “Schedule a meeting with Sarah next week,” the agent automatically:

  1. Retrieves both calendars.
  2. Finds overlapping free slots.
  3. Creates the meeting.
  4. Sends a confirmation.

You didn’t write that exact flow; the agent reasoned it out from the available tools. This is the “magic” of good tool design—the agent becomes more capable than the sum of its parts.
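Of those primitives, find_free_slots carries the real logic. A minimal sketch, assuming each calendar is already reduced to a sorted list of free intervals ({ start, end } in hours), which real calendar APIs would require extra work to produce:

```javascript
// Intersect two sorted lists of free intervals; hours are illustrative units.
function find_free_slots(cal1, cal2) {
  const overlaps = [];
  let i = 0, j = 0;
  while (i < cal1.length && j < cal2.length) {
    const start = Math.max(cal1[i].start, cal2[j].start);
    const end = Math.min(cal1[i].end, cal2[j].end);
    if (start < end) overlaps.push({ start, end }); // both people are free here
    // Advance whichever interval ends first.
    if (cal1[i].end < cal2[j].end) i++; else j++;
  }
  return overlaps;
}
```

Because the tool does one well-defined thing, the agent can combine it with get_calendar and create_meeting in whatever order the request demands.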

What I’m Building Next

I’m currently integrating AI agents into Angular applications, focusing on the Angular AI Chat Kit. The lessons from tool design map directly to this work:

  • Each API endpoint becomes a tool.
  • Tools are grouped by responsibility (data, actions, calculations, external APIs).
  • Descriptions are written with a clear purpose, usage conditions, return schema, and examples.

Stay tuned for the upcoming demo where I’ll show a live Angular component that calls an LLM‑driven agent to handle user‑support tickets, using a minimal set of well‑described tools.


TL;DR

  1. Start small – 3‑5 focused tools.
  2. Write crystal‑clear descriptions (what, when, return, example).
  3. Keep tools single‑purpose; let the agent chain them.
  4. Test tools in isolation before handing them to the model.
  5. Leverage patterns (retrieval, action, calculation, external API) to compose new agents quickly.

Good tool design = smarter agents + easier debugging + lower costs. Happy building!

Potential Tool Integrations

  • Frontend state management needs to work with agent actions.
  • User interactions need to trigger agent workflows cleanly.

I’m also exploring how to make AI‑assisted development workflows more efficient by treating development tasks as agent operations—code review, refactoring, documentation generation—each with specific tools.


Key Takeaways

If you’re building AI agents, remember these points:

  • Tool quality determines agent quality – spend time on tool design.
  • Ask “What would a human do?” – then make each step a tool.
  • Start with 3‑5 tools – only add more when needed.
  • One tool = one job – let the agent chain them.
  • Write detailed descriptions – the agent reads them.
  • Test tools independently – before giving them to the agent.

The framework is simple, but it works. I wish I’d known this before spending days debugging my first agent.

What’s Next for You?

If you’re building AI agents or adding AI features to your applications, try this framework with your next project.

  1. Start small. Build 3‑5 focused tools. See what the agent can do.
  2. Iterate. Add tools as you discover gaps in functionality.

I’m still early in my AI‑engineering journey, but this framework has already made a huge difference in how I approach agent development. It’s one of those concepts that seems obvious in hindsight but isn’t intuitive when you’re just starting out.

What are you building with AI agents? I’d love to hear about your experiences—especially if you’ve discovered other patterns that work well.

SEO Metadata

  • Title: AI Agent Tool Design: What I Wish I Knew Earlier
  • Meta Description: Learn the framework for designing AI agent tools that actually work. Discover why tool quality matters more than model choice and how to avoid common mistakes.
  • Slug: ai-agent-tool-design
  • Primary Keyword: AI agent tool design
  • Related Keywords: AI agents, tool calling, LLM tools, agent development, AI engineering, building AI agents