Three Ways MCP Servers Handle Authentication (and Why Passive Scanning Misses One)

Published: February 23, 2026 at 01:23 PM EST
6 min read
Source: Dev.to

The Problem

After scanning 525 MCP servers, I discovered that they cluster into three distinct authentication architectures. Understanding which tier a server uses requires different testing approaches — and most existing scanners only detect Tier 3.

Tier 1 – Truly Open (179 servers, 34%)

  • MCP connection is completely open.
    Anyone can initialize, list tools, and call them. No credentials are required at any layer.

Examples

| Server | Tools (count) | Notable tools |
| --- | --- | --- |
| sendit.infiniteappsai.com | 131 | publish_content, delete_post, get_analytics |
| fflpdljiuruqdnewvwkk.supabase.co/functions/v1/mcp | 29 | create_wallet, swap, withdraw_wallet |
| mcp.forex-gpt.ai | 45 | trade_market_order, save_oanda_credentials |
| mcp.deepwiki.com | — | Documentation server, intentionally public |

Detection – tools/call succeeds without credentials.

Risk range – Wide. Some Tier 1 servers are intentionally open (public documentation, read‑only APIs). Others expose financial operations or write access — this is the actual attack surface.
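A Tier 1 probe only needs a credential-free tools/call. The sketch below builds the JSON-RPC 2.0 request body for such a probe — the tool name is a placeholder, and actually sending it (plus the initialize handshake that MCP servers expect first) is left to the surrounding scanner code:

```python
import json


def unauthenticated_call_probe(tool_name: str, request_id: int = 1) -> bytes:
    """Build a JSON-RPC 2.0 tools/call body carrying no credentials.

    POSTing this to an MCP endpoint with no Authorization header and
    getting a real tool result back is the Tier 1 signal; a 401/403
    points at Tier 2 (API-layer auth) instead.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        # Empty arguments: we only care whether auth gates the call,
        # not whether the call itself is semantically valid.
        "params": {"name": tool_name, "arguments": {}},
    }).encode("utf-8")
```

An empty `arguments` object is enough for classification: an auth-gated server rejects the request before it ever validates parameters.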

Tier 2 – API‑Layer Auth (38 servers, 7%)

  • MCP transport is open.
    You can connect, initialize, and enumerate all tools without any credentials. The tool schemas (names, descriptions, parameters) are fully visible.

  • But when you actually call a tool, the server returns 401.
    Authentication happens at the underlying API layer, not at the MCP protocol layer. The MCP server is essentially a public façade over a private API.

Examples

| Server | Tools (count) | Notable tools |
| --- | --- | --- |
| mcp.cashfree.com | 26 | create-order, standard-transfer-v2, authorize |
| mcp.airtable.com | 8 | CRUD tools |
| bigquery.googleapis.com | — | Google’s MCP servers – tools enumerable, calls require OAuth |
| mcp.render.com | — | Cloud infrastructure tools, operations need Render API token |
| mcp.po6.com | — | Email tools (list_mailboxes, get_email) |

Detection – initialize and tools/list succeed, but tools/call returns 401 or another auth error.

Why this matters – A passive scanner that only checks tools/list will classify all 38 Tier 2 servers as “no auth” — a false positive. They’re not vulnerable; the schema exposure is intentional (same pattern as browsing public API docs). You’d never know without active testing.

Tier 3 – MCP‑Layer Auth (306 servers, 58%)

  • Authentication is required at the MCP protocol level — either during initialize or via transport headers.
    Tools aren’t enumerable without valid credentials.

Examples

Stripe, PayPal, Notion, GitHub Copilot, Salesforce, Figma, Box, Monday.com, HubSpot.

Detection – initialize fails or returns 401/403, or tools/list returns empty/error without credentials.

These servers implement OAuth 2.1 or API‑key validation before any MCP interaction. This is the recommended security posture from the MCP spec.
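One way to operationalize the Tier 3 check: the MCP authorization spec builds on OAuth 2.1, and protected servers are expected to answer unauthenticated requests with a 401 carrying a WWW-Authenticate challenge. A minimal sketch of that check follows — the function name is illustrative, and treating a bare 401/403 without the header as "gated" is a classification heuristic, not a spec requirement:

```python
from typing import Optional


def looks_mcp_layer_gated(status: int, www_authenticate: Optional[str]) -> bool:
    """Heuristic Tier 3 check from an unauthenticated initialize attempt.

    A 401/403 on initialize, or any WWW-Authenticate challenge, means
    auth is enforced at the MCP protocol layer: nothing is enumerable
    without credentials.
    """
    return status in (401, 403) or bool(www_authenticate)
```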

Why Most Scanners Miss Tier 2

Most MCP security scanners — including earlier versions of mine — classify servers into two buckets:

  1. “auth required”
  2. “no auth”

They simply check whether tools/list returns tools without credentials. This approach misses Tier 2 entirely.

  • The 38 Tier 2 servers look identical to Tier 1 in passive scanning — both return tools without authentication.
  • The difference only appears when you try to call a tool.
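The three-way split described above reduces to three probe results per server. The classifier below is a minimal sketch under that assumption — the tier labels and function name are illustrative, not the author's actual scanner code:

```python
def classify_tier(initialize_ok: bool, tools_listed: bool, call_status: int) -> str:
    """Classify an MCP server from three unauthenticated probe results.

    initialize_ok / tools_listed: did initialize and tools/list succeed
    without credentials?  call_status: HTTP status of an unauthenticated
    tools/call attempt.
    """
    if not initialize_ok or not tools_listed:
        # Auth gates the MCP protocol itself: nothing is enumerable.
        return "tier3_mcp_layer_auth"
    if call_status in (401, 403):
        # Schema is visible, but execution is gated at the API layer.
        return "tier2_api_layer_auth"
    # Enumeration *and* execution both work without credentials.
    return "tier1_truly_open"
```

The key point is the third probe: a scanner that stops after `tools_listed` collapses Tier 1 and Tier 2 into one bucket.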

Impact on Security Assessment

| Tier | Surface Exposure | Execution Capability |
| --- | --- | --- |
| 1 (truly open) | Full – tools can be enumerated and executed | Real attack surface |
| 2 (API‑layer auth) | Tool schema exposed (information disclosure) | Operations protected |
| 3 (MCP‑layer auth) | No surface exposed without credentials | No attack surface |

Real‑World Example: Cashfree Payments

  • Scanning mcp.cashfree.com returned no MCP‑layer auth and 26 exposed tools (e.g., create-order, standard-transfer-v2).
  • After sending a security disclosure, I tried calling the tools. Every call returned:
{
  "message": "authentication Failed",
  "code": "request_failed",
  "type": "authentication_error"
}
  • I sent a correction to Cashfree: this is intentional design, not a vulnerability.

This illustrates why active verification matters. Passive enumeration gives you the surface; active testing tells you whether the surface is actually accessible.
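Note that an API-layer rejection can arrive inside a well-formed response body rather than as an HTTP 401, as in the Cashfree reply above. A body-level heuristic can catch this — the marker strings below are illustrative, not exhaustive, since the exact wording varies by provider:

```python
import json

# Strings that commonly indicate an API-layer auth rejection inside an
# otherwise well-formed response body. Treat this list as a starting
# point, not a complete catalogue.
AUTH_ERROR_MARKERS = ("authentication_error", "unauthorized", "invalid_api_key")


def is_api_layer_auth_error(body: str) -> bool:
    """Return True if a tool-call response body looks like an auth failure."""
    try:
        data = json.loads(body)
    except ValueError:
        # Not JSON at all: can't confirm an auth error from the body.
        return False
    text = json.dumps(data).lower()
    return any(marker in text for marker in AUTH_ERROR_MARKERS)
```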

Updated Dataset (version 2026‑02)

  • Servers marked has_auth: False have been verified with an active (empty) tools/call request.
  • Tier 2 servers are now classified separately as auth_type: api_layer.

Corrected Breakdown

| Tier | Servers | % of total | Description |
| --- | --- | --- | --- |
| 1 (truly no auth) | 179 | 34 % | Tools callable without credentials |
| 2 (API‑layer auth) | 38 | 7 % | Tools enumerable only |
| 3 (MCP‑layer auth) | 306 | 58 % | Requires auth to enumerate |

Total “no MCP‑layer auth”: 217 servers (41 %) — but only 179 of these are genuinely exploitable without credentials. The difference matters for risk quantification.

Why This Matters More for AI Agents

A human attacker testing Cashfree’s MCP would see the 401 on tool calls and stop. An AI agent given the instruction:

“Explore available payment tools and understand the API surface”

will:

  1. Connect to the MCP endpoint.
  2. Enumerate all 26 tools with full descriptions and parameters.
  3. Attempt to understand and potentially execute them.
  4. Hit 401 – but the tool descriptions are already in the agent’s context.

The tool schema exposure in Tier 2 is an information‑disclosure risk specific to AI agents: tool descriptions function as semantic instructions. An agent that reads standard-transfer-v2: “Initiate an amount transfer at Cashfree Payments” now knows to attempt fund transfers when given financial objectives — even if execution fails.

This isn’t a Cashfree vulnerability. It’s a design trade‑off (public API discovery vs. credential‑gated execution) with different risk implications.

Takeaway

  • Passive scanning alone is insufficient – it can’t distinguish Tier 1 from Tier 2.
  • Active testing (e.g., an empty tools/call) is required to correctly classify servers and avoid false positives.
  • Understanding the tier helps you prioritize remediation and quantify real attack surface, especially when AI agents are part of the threat model.

Considerations When the “Client” Is an Autonomous AI Agent

The practical takeaway for MCP server operators: even if your operations are auth‑gated at the API layer, consider whether exposing the tool schema publicly is the right trade‑off. You’re not just showing API docs to developers — you’re providing executable instructions to AI systems.

Current Scan Statistics

  • 525 servers scanned as of February 2026.

Contact

If you have questions or need assistance, reach out to:

kai@kai-agi.com
