The Protocol Wars Are Missing the Point
Source: Dev.to
Introduction
Something interesting is happening in AI right now. The biggest players are racing to define how AI agents talk to each other—and to us.
- Anthropic has MCP.
- Google has A2A.
- OpenAI has its Agents SDK.
Everyone’s building protocols.
But after nine years of building AI systems—including several patented ones—I keep noticing what’s missing from these conversations: humans.
Let me break down what’s actually happening.
MCP (Model Context Protocol)
Anthropic’s attempt to standardize how AI models share context.
Announced in November 2024, MCP is designed as an open protocol that creates a universal way for AI assistants to connect with data sources and tools.¹
The problem MCP solves – when you chain AI calls together, context gets lost. The second model doesn’t know what the first model was thinking. MCP creates a structured way to pass that context along.
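The idea of a structured context handoff can be sketched in a few lines. This is an illustrative simplification, not the actual MCP wire format; `run_model` is a hypothetical stand-in for a real model call.

```python
def run_model(prompt: str, context: dict) -> str:
    """Stand-in for a real model call; a real client would send `context` along."""
    return f"output for: {prompt}"

def chained_call(prompt: str, prior=None) -> dict:
    """Pass structured context forward so the next model sees prior steps."""
    context = prior or {"history": []}
    result = run_model(prompt, context)
    context["history"].append({"prompt": prompt, "result": result})
    return context

ctx = chained_call("extract entities from the report")
ctx = chained_call("summarize the findings", prior=ctx)
print(len(ctx["history"]))  # 2 — both steps survive into the next hop
```

Without the explicit `prior=ctx`, the second call would start from an empty context, which is exactly the loss MCP standardizes away.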
What MCP does well
- Standardized context format across models
- Clean handoffs between AI components
- Works across different AI providers (not just Claude)
- Open specification that anyone can implement
What MCP doesn’t address
- What happens when a human needs to intervene?
- How does human context get preserved and passed?
- Who decides when AI should stop and ask for help?
Google’s A2A
Focuses on how autonomous agents communicate and coordinate with each other.
Announced in April 2025, A2A builds on existing standards like HTTP and JSON‑RPC to define how agents discover each other’s capabilities, negotiate tasks, and collaborate on complex workflows.²
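Capability discovery can be sketched as below. The URLs, field names, and in-memory card registry are illustrative assumptions, not the normative A2A schema; a real client would fetch each agent’s card over HTTP.

```python
# Illustrative A2A-style capability discovery; field names are simplified
# assumptions, not the official A2A agent-card schema.
AGENT_CARDS = {
    "https://agents.example.com/translator": {
        "name": "translator",
        "skills": ["translate", "detect_language"],
    },
    "https://agents.example.com/summarizer": {
        "name": "summarizer",
        "skills": ["summarize"],
    },
}

def discover(skill: str) -> list:
    """Return agents advertising a given skill.

    A real client would fetch each card over HTTP rather than read a dict."""
    return [url for url, card in AGENT_CARDS.items() if skill in card["skills"]]

print(discover("summarize"))  # the one agent that advertises this skill
```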
What A2A does well
- Multi‑agent coordination across vendors
- Capability discovery and negotiation
- Task delegation between specialized agents
- Built on proven web standards
What A2A doesn’t address
- Same gap: where do humans fit?
- When Agent A hands off to Agent B, what if a human should have been Agent B?
- How do you audit decisions that were never meant to be audited?
OpenAI Agents SDK
Production‑ready framework for building multi‑agent systems.
Released in March 2025, it replaces the experimental Swarm framework with something more robust.³
What it does well
- Clean developer experience
- Good defaults for common patterns
- Tight integration with OpenAI’s models
- Production‑ready tooling
What it doesn’t address
- Vendor lock‑in (it’s OpenAI‑first)
- The human question, again
The missing piece: humans
Every major protocol focuses on AI‑to‑AI communication. That makes sense—it’s a hard technical problem, and the companies building these protocols are AI companies.
But here’s what I’ve learned from building AI systems in healthcare, finance, and logistics: the hardest part isn’t AI talking to AI. It’s AI talking to humans, and knowing when it should.
Research from Stanford’s Human‑Centered AI Institute consistently shows that human‑AI collaboration outperforms either alone. Their 2024 study on AI‑assisted decision‑making found that humans with AI support made 23 % better decisions than AI alone—but only when the handoff between human and AI was well‑designed.⁴
A typical workflow (without human handoff)
| Step | Agent | Outcome |
|---|---|---|
| 1 | Data agent | Extracts information ✓ |
| 2 | Analysis agent | Processes it ✓ |
| 3 | Recommendation agent | Generates options ✓ |
| 4 | Action agent | Executes ✓ |
If step 3 should have been “Human reviews options before action,” current protocols give you no clean answer. You end up bolting on custom solutions:
- A Slack notification that someone might miss
- An email that sits in an inbox
- A dashboard nobody checks
The context that made the AI’s recommendation make sense is often lost by the time a human sees it.
A 2024 McKinsey study on AI in enterprise workflows found that 67 % of failed AI implementations cited “poor human‑AI handoff design” as a primary factor.⁵
Uncertainty & the need for human input
AI doesn’t know what it doesn’t know. When an AI agent is uncertain, it should ask a human—but current protocols don’t standardize:
- How to express uncertainty
- When uncertainty should trigger human involvement
- How to preserve context for the human handoff
Research from MIT’s CSAIL found that LLMs are often confidently wrong—expressing high certainty on incorrect answers 31 % of the time.⁶ Without confidence‑based routing to humans, these errors propagate through automated workflows.
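A minimal sketch of confidence-based routing, assuming the agent reports a confidence score. The threshold and field names are illustrative; and since self-reported confidence is often miscalibrated, as the CSAIL finding suggests, any real threshold would need empirical validation.

```python
from dataclasses import dataclass

@dataclass
class AgentResult:
    answer: str
    confidence: float  # model-reported, 0.0–1.0; often miscalibrated

def route(result: AgentResult, threshold: float = 0.8) -> str:
    """Send low-confidence results to a human instead of the next agent."""
    return "next_agent" if result.confidence >= threshold else "human_review"

print(route(AgentResult("approve claim", confidence=0.95)))  # next_agent
print(route(AgentResult("approve claim", confidence=0.55)))  # human_review
```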
Regulatory pressure
Regulations are catching up to AI. GDPR, HIPAA, SOC 2, the EU AI Act—all require some form of explainability and audit trail.
- The EU AI Act (full effect 2025) specifically requires “meaningful human oversight” for high‑risk AI systems.⁷
- Article 14 mandates that humans must be able to understand AI outputs and intervene when necessary.
Current protocols focus on what happened between agents. Auditors, however, ask different questions:
- Who made this decision?
- Was a human involved?
- Could a human have intervened?
- Why wasn’t a human involved?
If your protocol doesn’t treat humans as first‑class citizens, you’ll struggle to answer these.
What we need: humans as first‑class nodes
A human shouldn’t be a “fallback” or an “escalation path.” They should be a valid node type, just like an AI agent.
Example workflow
[AI: Analyze] → [Human: Validate] → [AI: Execute]
The protocol should handle routing to humans the same way it handles routing to AI—with:
- Preserved context
- Clear expectations
- Tracked outcomes
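One way to picture this: a single dispatch path where human and AI are just two node kinds. This is a conceptual sketch, not an existing protocol; the node names and context shape are made up for illustration.

```python
from enum import Enum

class NodeKind(Enum):
    AI = "ai"
    HUMAN = "human"

# The [AI: Analyze] → [Human: Validate] → [AI: Execute] workflow from above
workflow = [
    {"kind": NodeKind.AI, "name": "analyze"},
    {"kind": NodeKind.HUMAN, "name": "validate"},
    {"kind": NodeKind.AI, "name": "execute"},
]

def dispatch(node: dict, context: dict) -> dict:
    """Same routing path for both node kinds; only the transport differs."""
    if node["kind"] is NodeKind.HUMAN:
        # In a real system: enqueue a task with full context, await a decision
        context["decisions"] = context.get("decisions", []) + [node["name"]]
    else:
        context["steps"] = context.get("steps", []) + [node["name"]]
    return context

ctx = {}
for node in workflow:
    ctx = dispatch(node, ctx)
print(ctx["steps"], ctx["decisions"])  # ['analyze', 'execute'] ['validate']
```

The point is that the human step flows through the same `dispatch` call with the same preserved context, rather than exiting the workflow into a Slack message.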
When AI hands off to a human, the human should understand:
- What the AI was trying to do
- Why it stopped
- What options it considered
- What it recommends
When the human hands back to AI, the AI should understand:
- What the human decided
- Why they decided it
- Any additional context they provided
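The two handoff directions above map naturally onto two structured payloads. Field names here are illustrative assumptions, not a proposed standard:

```python
from dataclasses import dataclass, field

@dataclass
class AIToHumanHandoff:
    goal: str            # what the AI was trying to do
    stop_reason: str     # why it stopped
    options: list        # what options it considered
    recommendation: str  # what it recommends

@dataclass
class HumanToAIHandoff:
    decision: str        # what the human decided
    rationale: str       # why they decided it
    extra_context: dict = field(default_factory=dict)  # anything they added

handoff = AIToHumanHandoff(
    goal="approve refund",
    stop_reason="amount exceeds auto-approval limit",
    options=["approve", "deny", "escalate"],
    recommendation="approve",
)
reply = HumanToAIHandoff(decision="approve", rationale="verified receipt")
print(reply.decision)  # approve
```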
MCP is great for AI‑to‑AI context. We need the same rigor for AI‑to‑human and human‑to‑AI interactions.
References
1. Anthropic, Model Context Protocol (MCP) – Technical Overview, November 2024.
2. Google Cloud, Agent‑to‑Agent (A2A) Specification, April 2025.
3. OpenAI, Agents SDK Documentation, March 2025.
4. Stanford Human‑Centered AI Institute, Human‑AI Collaboration Study, 2024.
5. McKinsey & Company, AI in Enterprise Workflows Report, 2024.
6. MIT CSAIL, Confidence Calibration in Large Language Models, 2024.
7. European Commission, EU AI Act – Article 14: Human Oversight, 2025.
Human‑to‑AI & Human‑to‑Human Interaction
Protocols should support routing decisions based on confidence: when an agent’s certainty falls below a threshold, the task should route to a human rather than to the next agent.
This isn’t just a nice‑to‑have. For regulated industries, it’s becoming mandatory.
Auditable Decision Points
For every decision point in a workflow, the protocol should record:
- What information was available
- What decision was made (by AI or human)
- Why that decision was made
- What happened next
This needs to be built into the protocol, not bolted on after.
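The four questions above suggest a minimal audit record. This is a sketch of the idea, with invented field names, not a compliance-ready design:

```python
import time
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    actor: str        # e.g. "ai:recommender" or "human:jdoe"
    inputs: dict      # what information was available
    decision: str     # what decision was made
    rationale: str    # why it was made
    next_step: str    # what happened next
    timestamp: float

log = []

def record(actor, inputs, decision, rationale, next_step):
    """Append an immutable-style audit entry at the moment of decision."""
    rec = DecisionRecord(actor, inputs, decision, rationale, next_step, time.time())
    log.append(rec)
    return rec

record("ai:recommender", {"score": 0.91}, "recommend_approve",
       "score above policy threshold", "human_review")
record("human:jdoe", {"recommendation": "recommend_approve"}, "approve",
       "matches refund policy", "execute")
print(len(log), log[-1].actor)  # 2 human:jdoe
```

Because both AI and human decisions land in the same log with the same schema, an auditor can answer “was a human involved?” by reading one field rather than reconstructing a Slack thread.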
The Market Gap: Human‑Aware AI Orchestration
What the market lacks is a solution that:
- Speaks MCP, A2A, and other protocols
- Treats humans as first‑class workflow participants
- Preserves context across the human boundary
- Provides native governance and audit capabilities
The AI orchestration market is projected to reach $42.8 B by 2032, growing at 23.4 % CAGR. Most of that will go to enterprise use cases, and enterprises can’t deploy AI workflows without a human in the loop—compliance teams won’t allow it.
I’ve spent nine years building AI systems that work with humans, not around them. Several of those systems are now patented.
Common thread: The most powerful AI systems don’t replace humans; they collaborate with us.
I’m now working on bringing this approach to AI orchestration more broadly. If you’re interested in human‑aware AI workflows, stay tuned—I’ll have more to share soon.
Call for Input
- What are the biggest gaps you see in current AI protocols?
- Where do humans fit in your AI workflows?
I’d love to hear your perspective.
About the Author
Srirajasekhar “Bobby” Koritala – Founder of Bodaty
- Nearly a decade of building production AI systems
- Holds multiple patents in AI and human‑AI collaboration
If you found this useful, drop a reaction and follow @bobbykoritala for updates on AICtrlNet development.
- 💬 Join the conversation: GitHub Discussions
- 🚀 Try it: `pip install aictrlnet`
Additional References
- Grand View Research. (2024). AI Orchestration Market Size Report, 2024–2032.