I Didn’t Get MCP Until I Built One

Published: February 2, 2026, 03:11 AM EST
7 min read
Source: Dev.to

From “What’s an MCP Server?” to “I Built One”

Eight months ago I could barely tell you what an MCP server was.

I’d seen the term (Model Context Protocol) floating around on LinkedIn posts and in our AWS Community Builders Slack. Everyone seemed excited about it, but I had no idea what it was—or why I should care. It felt like something for AI people (“Data Science and ML Engineers”) far removed from the daily grind of my team’s cloud‑infrastructure tasks and platform duties.

I tried a few of the AWS Labs MCP servers via Amazon Q (the documentation and pricing ones). They worked, but I treated them like opaque plugins—useful, but mysterious. I could use them; I couldn’t explain them.

Then, in November, Kiro became publicly available, and I started exploring its features more extensively (more on that in future posts). Around the same time I got the chance to participate in a Proof‑of‑Concept to build an MCP server. The initial idea was to expose our public API to AI tools and LLMs through an MCP server. Once proven, the goal would be to move that into the product so customers could use it too.

As I dove deeper, MCP servers gradually started to make sense.

What Is an MCP Server?

  • Similar to an API, but designed for AI agents.
  • Allows agents to communicate with and utilise services and datasets outside their training data.

Note: This post isn’t a guide or tutorial. Excellent resources already exist, and you can always ask your AI tool of choice for clarifications. This is a reflection on moving from “I have no idea what this is” to “I can build and deploy one.”

The Problem That Sparked the Insight

Large Language Models (LLMs) are powerful, but they are frozen in time—they only know what was present in their training data.

They don’t know:

  • The current state of your repositories
  • What’s in your internal documentation
  • The status of your Jira tickets
  • The shape of your APIs or databases

If we want AI tools to be genuinely useful in real engineering workflows, we need a way to safely and consistently connect them to live, external systems. That’s where integrations come in—and where things get messy very quickly.

Imagine You’re Building AI Integrations

You want your assistant to:

  1. Access your GitHub repositories
  2. Query your Jira tickets
  3. Search your company’s documentation
  4. Check your database

Now imagine you have four AI tools:

| AI Tool | GitHub | Jira | Docs | Database |
| --- | --- | --- | --- | --- |
| Claude | ✔ | ✔ | ✔ | ✔ |
| Copilot | ✔ | ✔ | ✔ | ✔ |
| Cursor | ✔ | ✔ | ✔ | ✔ |
| ChatGPT | ✔ | ✔ | ✔ | ✔ |

That’s 4 × 4 = 16 custom integrations. Each integration is:

  • Built differently
  • Maintained separately
  • Incompatible with other tools
  • Duplicated effort

That’s completely unsustainable.

With a standard protocol, you describe your capabilities once, and any compliant AI can consume them. The complexity shifts from O(N × M) to O(N + M), where N = AI tools and M = data sources.
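The scaling argument can be sketched numerically (the tool and source names below are just the ones from the table; the counts are what matter):

```python
# Illustrative comparison of integration counts for the scenario above.
ai_tools = ["Claude", "Copilot", "Cursor", "ChatGPT"]
data_sources = ["GitHub", "Jira", "Docs", "Database"]

# Without a shared protocol: one bespoke integration per (tool, source) pair.
custom_integrations = len(ai_tools) * len(data_sources)

# With MCP: each AI tool implements the client side once, each data source
# exposes one server -- the pieces compose instead of multiplying.
mcp_components = len(ai_tools) + len(data_sources)

print(custom_integrations)  # 16
print(mcp_components)       # 8
```

Add a fifth AI tool and the custom approach grows by four new integrations; the MCP approach grows by one.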

MCP: The Universal Adapter for AI

Think of MCP as USB‑C for AI tools.

Before USB‑C

  • Keyboards → PS/2 ports
  • Mice → Serial ports
  • Printers → Parallel ports
  • Cameras → Proprietary connectors

Every device needed its own port, and devices only worked with specific computers.

After USB‑C

  • One standard connector works everywhere.

MCP Does the Same for AI

| Before MCP | After MCP |
| --- | --- |
| Every AI tool had custom integrations for every data source | One standard protocol; any MCP server works with any MCP client |

Example: Build a GitHub MCP server once → it works with Claude, Copilot, Cursor, and any future MCP‑compatible AI tool.

What Exactly Is an MCP Server?

Simply put, an MCP server is a program that:

  1. Connects to a data source (GitHub, database, API, etc.)
  2. Exposes that data through standardized “tools”
  3. Speaks the MCP protocol so any AI can use it

An MCP server is essentially a wrapper—similar to an API—but instead of many REST endpoints, you expose tools.

When an MCP server connects to an AI, it announces its capabilities:

{
  "tools": [
    {
      "name": "search_repositories",
      "description": "Search GitHub repositories",
      "parameters": {
        "query": "string",
        "limit": "number"
      }
    },
    {
      "name": "get_file_contents",
      "description": "Get contents of a file from a repository",
      "parameters": {
        "owner": "string",
        "repo": "string",
        "path": "string"
      }
    }
  ]
}

The AI now knows:

  • What tools are available
  • What each tool does
  • What parameters they need
  • How to call them

The tools are self‑documenting, but you still have to be careful.
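To make the manifest above less abstract, here is a hand-rolled sketch of how a server might assemble it in Python (real servers would normally get this for free from an SDK such as FastMCP, mentioned later in this post; the helper function is purely illustrative):

```python
import json

def make_tool(name: str, description: str, parameters: dict) -> dict:
    """Build one tool definition in the shape the server announces to the AI."""
    return {"name": name, "description": description, "parameters": parameters}

# The same two GitHub tools shown in the JSON example above.
manifest = {
    "tools": [
        make_tool("search_repositories",
                  "Search GitHub repositories",
                  {"query": "string", "limit": "number"}),
        make_tool("get_file_contents",
                  "Get contents of a file from a repository",
                  {"owner": "string", "repo": "string", "path": "string"}),
    ]
}

print(json.dumps(manifest, indent=2))
```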

Too Many Tools = Too Much Context

As I added more tools, I realized that overloading the AI with definitions hurts performance.

5 MCP servers × 20 tools each = 100 tools available

For each request the AI must:

  1. Load all 100 tool definitions
  2. Understand what each does
  3. Decide which to use
  4. Execute the right one

Consequences

  • Context bloat – tool definitions consume valuable tokens before the actual question is even asked.
  • Lower accuracy – the AI may pick the wrong tool, especially if names are ambiguous.

Rough token count:
100 tools × (name + description + parameter schema) → thousands of tokens per request.
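A back-of-the-envelope version of that estimate (the per-tool token cost is an assumed average, not a measured value):

```python
# Rough context-budget estimate; 80 tokens per tool is an assumption
# covering name, description, and parameter schema combined.
servers = 5
tools_per_server = 20
tokens_per_tool = 80  # assumed average

total_tools = servers * tools_per_server          # 100
overhead_tokens = total_tools * tokens_per_tool   # 8000

print(f"{total_tools} tools -> ~{overhead_tokens} tokens before the question is even asked")
```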

Best Practices

  • Be intentional about which MCP servers you install.
  • Don’t install every MCP server you find.
  • Keep the toolset focused on what your AI actually needs.

TL;DR

  • MCP (Model Context Protocol) is a standard protocol that lets AI agents call external services via “tools.”
  • An MCP server wraps a data source (GitHub, Jira, DB, etc.) and exposes it as a set of tools that any MCP‑compatible AI can use.
  • By adopting MCP, you replace N × M custom integrations with N + M components, dramatically reducing engineering overhead.
  • Treat MCP servers like USB‑C for AI: one connector, many devices.
  • Avoid tool bloat—expose only the tools that truly add value.

Now you have a clear mental model: MCP = universal adapter; MCP server = the adapter implementation; tools = the plugs you expose. Happy building!

Example Configuration

{
  "mcpServers": {
    "awslabs.aws-documentation-mcp-server": {
      "command": "uvx",
      // some args
      "disabled": false,
      "disabledTools": [],
      "autoApprove": ["read_documentation"]
    },
    "terraform-mcp-server": {
      "command": "uvx",
      // some args
      "disabled": true,
      "autoApprove": []
    }
  }
}

Different agents can have different MCP setups:

  • CloudOps agent – uses AWS documentation and Terraform MCP servers.
  • Frontend agent – might use React and GitHub MCP servers.

You can also auto‑approve safe commands so the agent executes them automatically.

Use Clear Tool Naming

When building your own MCP servers, use namespaced names.

Bad

  • search()
  • get()
  • list()

Good

  • github_search_repositories()
  • github_get_file_contents()
  • github_list_pull_requests()

The AI can filter and understand domains faster.

Lazy Loading with Resources

Some MCP servers use resources instead of many individual tools.

Instead of 50 tools for different docs

  • get_lambda_docs()
  • get_s3_docs()
  • get_ec2_docs()

One tool + resources

read_documentation(resource_uri)

Resources

  • docs://aws/lambda
  • docs://aws/s3
  • docs://aws/ec2

Resources are discovered on‑demand, not loaded upfront.
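One way the single-tool-plus-resources pattern might look internally (the URI scheme follows the examples above; the lookup table and doc strings are illustrative stand-ins for a real resource backend):

```python
# Hypothetical dispatcher: one tool, many lazily resolved resources.
DOCS = {
    "docs://aws/lambda": "AWS Lambda documentation...",
    "docs://aws/s3": "Amazon S3 documentation...",
    "docs://aws/ec2": "Amazon EC2 documentation...",
}

def read_documentation(resource_uri: str) -> str:
    """Single entry point; the resource is resolved only when requested."""
    try:
        return DOCS[resource_uri]
    except KeyError:
        raise ValueError(f"Unknown resource: {resource_uri}")

print(read_documentation("docs://aws/s3"))
```

The AI's context only ever carries one tool definition plus a list of resource URIs, instead of fifty near-identical tool schemas.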

We’ll look deeper into best practices when we actually build our first MCP server.

Communication Modes

MCP servers can communicate in two ways.

1. STDIO (local process)

  • Runs as a local process.
  • Communication via stdin/stdout (like terminal piping).
  • The AI client spawns and manages the process.
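Under the hood, STDIO transport carries JSON-RPC 2.0 messages over those pipes. A minimal sketch of one request as a client might write it to the server's stdin (the tool name and arguments are illustrative):

```python
import json

# Sketch of a JSON-RPC 2.0 request invoking an MCP tool; the tool name
# and arguments are hypothetical examples, not a specific real server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_repositories",
        "arguments": {"query": "mcp", "limit": 5},
    },
}

# Each message is serialized to a single line of JSON ending in a newline.
wire_message = json.dumps(request) + "\n"
print(wire_message, end="")
```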

Configuration example

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/files"]
    }
  }
}

When to use STDIO

✅ Local tools, file systems, git operations

Cons – runs only on the local machine; each client spawns its own copy of the process, so nothing is shared across users or hosts.

2. HTTP (remote service)

  • Runs as a remote HTTP service.
  • Clients connect via URL + authentication.

Configuration example

{
  "mcpServers": {
    "company-api": {
      "type": "http",
      "url": "https://mcp.company.com",
      "headers": {
        "Authorization": "Bearer ${API_TOKEN}"
      }
    }
  }
}

When to use HTTP

✅ Shared services or APIs

Cons – requires hosting, authentication, and network access; adds latency and an operational surface you have to maintain.

Ecosystem Overview

Once I understood MCP, I realized there’s an entire ecosystem growing fast:

  • AWS – plenty of choices (e.g., AWS Labs MCP)
  • Terraform
  • Atlassian – Jira and Confluence
  • GitHub
  • MCP Servers – Awesome MCP Servers

There are many useful servers that can boost how you use AI, but be careful and selective—you might end up with a dozen installed in the first week!

Building Your Own

The barrier to entry is low:

  • Python – FastMCP
  • TypeScript – @modelcontextprotocol/sdk
  • Any language that speaks JSON‑RPC over STDIO or HTTP

Next Steps

In upcoming posts I’ll show how we built our internal POC to showcase the concept to colleagues and management.

I’ll admit I was initially intimidated (that imposter syndrome never really goes away), but it turns out it’s not about “adding AI”; it’s about making your system AI‑accessible.

AWS Agent Core Runtime

Before diving deeper into building MCP servers, I need to introduce the tool that made this exploration possible: Kiro.

As the title of this series suggests, this journey happened Vibecoding in between meetings. Without the right AI setup, it simply wouldn’t have been possible to stay hands‑on while juggling everything else.

Stay tuned.
