Enterprise MCP adoption is outpacing security controls
Source: VentureBeat
The Growing Attack Surface of Agentic AI
AI agents now have more access and connections to enterprise systems than any other software in the environment. This makes them the largest attack surface security teams have ever had to govern, yet the industry still lacks a unified framework to address it.
“If that attack vector gets utilized, it can result in a data breach, or even worse,”
— Spiros Xanthos, Founder & CEO of Resolve AI, speaking at a recent VentureBeat AI Impact Series event.
Why Existing Frameworks Fall Short
Traditional security frameworks are built around human interactions. There is no agreed‑upon construct for AI agents that possess personas and can operate autonomously, noted Jon Aniano, SVP of Product and CRM Applications at Zendesk, at the same event.
“Right now it’s an unsolved problem because it’s the wild, wild West. We don’t even have a defined technical agent‑to‑agent protocol that all companies agree on. How do you balance user expectations versus what keeps your platform safe?”
— Jon Aniano, Zendesk
The Model Context Protocol (MCP) Dilemma
The Model Context Protocol (MCP) was introduced to reduce integration complexity, but it doesn’t mitigate the security risk. In fact, by simplifying connections, it can exacerbate the problem.
- Pros: Streamlines data sharing between models.
- Cons: Lacks built‑in authentication or governance controls, leaving enterprises exposed.
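Because the protocol itself ships without authentication or governance, those controls have to be bolted on by the enterprise. A minimal sketch of what that wrapper might look like, assuming a hypothetical gateway in front of MCP-style tool calls (all names — `AuthError`, `ALLOWED_TOOLS`, `handle_tool_call` — are illustrative, not part of MCP):

```python
# Hypothetical gateway in front of MCP-style tool calls: authenticate the
# calling agent, then check the requested tool against an explicit allowlist.
# The protocol provides neither step; both are invented here for illustration.
import hmac

ALLOWED_TOOLS = {"search_docs", "read_ticket"}   # governance: sanctioned tools only
AGENT_TOKENS = {"agent-7": "s3cret-token"}       # per-agent credentials

class AuthError(Exception):
    pass

def handle_tool_call(agent_id: str, token: str, tool: str, args: dict) -> dict:
    """Authenticate the agent, then refuse any tool outside the allowlist."""
    expected = AGENT_TOKENS.get(agent_id)
    if expected is None or not hmac.compare_digest(expected, token):
        raise AuthError(f"unknown or unauthenticated agent: {agent_id}")
    if tool not in ALLOWED_TOOLS:
        raise AuthError(f"tool {tool!r} is not sanctioned for agents")
    return {"tool": tool, "args": args, "status": "dispatched"}
```

The point of the sketch is where the checks live: outside the protocol, in code the enterprise must write and maintain itself.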
For a deeper look at MCP’s security gaps, see the VentureBeat article: MCP shipped without authentication – why that’s a problem.
Key Takeaways
- Agentic AI is outpacing security guardrails.
- Current standards don’t cover autonomous AI agents.
- MCP simplifies integration but adds to the attack surface.
Enterprises must prioritize the development of industry‑wide, agent‑to‑agent security protocols and robust governance frameworks before the threat landscape expands further.
MCP Still “Extremely Permissive”
Enterprises are increasingly hooking into MCP servers because they simplify integration between agents, tools, and data. However, MCP servers tend to be “extremely permissive,” Xanthos said.
They are “actually probably worse than an API,” he contended, because APIs at least impose more controls on the agents that call them.
Today’s agents act on behalf of humans based on explicit permissions, establishing human accountability.
“But you might have tens, hundreds of agents in the future with their own identity, their own access,” said Xanthos. “It becomes a very complex matrix.”
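One way to picture that matrix: each agent carries its own identity, an accountable human owner, and an explicit set of (resource, action) grants, rather than inheriting a human's broad access. This is a hedged sketch, not a proposed standard; every name in it is invented:

```python
# Illustrative model of the "complex matrix" of agent identities and access:
# per-agent identity, an accountable human owner, and explicit grants.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    owner: str                       # the human accountable for this agent
    grants: frozenset = field(default_factory=frozenset)  # {(resource, action)}

def is_permitted(agent: AgentIdentity, resource: str, action: str) -> bool:
    """An action is allowed only if it was explicitly granted."""
    return (resource, action) in agent.grants

sre_agent = AgentIdentity(
    agent_id="sre-01",
    owner="alice@example.com",
    grants=frozenset({("metrics", "read"), ("incidents", "write")}),
)
```

With tens or hundreds of agents, each such row multiplies against every resource in the environment, which is exactly the matrix Xanthos warns about.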
Even as his startup develops autonomous AI agents for site‑reliability engineering (SRE) and system management, he acknowledges that the industry “completely lacks the framework” for autonomous agents.
“It’s completely on us and anyone who builds agents to figure out what restrictions to give them,” he said. “Customers must be able to trust those decisions.”
Some existing security tools do offer fine‑grained access—Splunk, for instance, has a method to provide access to certain indexes in underlying data stores—but most tools are broader and human‑oriented.
“We’re trying to figure this out with existing tools,” he said. “But I don’t think they’re sufficient for the era of agents.”

Credit: Michael O’Donnell, ShinyRedPhoto
Who’s Accountable When an AI Mis‑Authenticates a User?
At Zendesk and other customer‑relationship‑management (CRM) platform providers, AI now participates in user interactions at a volume and scale far beyond what businesses and society have previously contemplated.
The Problem
When AI assists human agents, the audit trail can become a labyrinth:
“So now you’ve got a human talking to a human that’s talking to an AI. The human tells the AI to take action. Who’s at fault if it’s the wrong action?” – Aniano
The situation grows more complex with multiple AI components and multiple humans involved.
Current Mitigations at Zendesk
| Measure | Description |
|---|---|
| Strict access & scope controls | Zendesk limits what AI can do; customers can add their own guardrails. |
| Limited AI capabilities | AI can read knowledge sources but does not write code or run server commands. |
| Declarative API calls | When AI calls an API, the call is pre‑designed, sanctioned, and explicitly listed. |
| Gate‑keeping | Due to high demand, Zendesk is “holding the gates” while standards evolve. |
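The "declarative API calls" row can be sketched as a registry of pre-designed, explicitly listed calls; anything not in the registry is refused. This is an assumption-laden illustration of the idea, not Zendesk's actual implementation, and the registry contents are invented:

```python
# Sketch of declarative, sanctioned API calls: the AI can only name an entry
# in this registry; it never constructs arbitrary requests. Contents invented.
SANCTIONED_CALLS = {
    # name        -> (HTTP method, path template, allowed parameters)
    "get_ticket":   ("GET",  "/api/v2/tickets/{id}",         {"id"}),
    "add_comment":  ("POST", "/api/v2/tickets/{id}/comment", {"id", "body"}),
}

def build_request(call_name: str, params: dict) -> tuple:
    """Resolve a sanctioned call name into a concrete request, refusing
    unknown calls and unexpected parameters."""
    spec = SANCTIONED_CALLS.get(call_name)
    if spec is None:
        raise PermissionError(f"{call_name!r} is not a sanctioned call")
    method, template, allowed = spec
    unexpected = set(params) - allowed
    if unexpected:
        raise PermissionError(f"unexpected parameters: {sorted(unexpected)}")
    return method, template.format(**params)
```

The design choice is that the action space is enumerated up front, so widening what the AI can do means editing the registry under review, not changing agent behavior.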
Industry Needs
- Concrete standards for AI‑human agent interactions.
- New safety methods for tools that bots can access (e.g., MCP auto‑discovery).
“We’re entering a world where, with things like MCP that can auto‑discover tools, we’ll have to create new methods of safety for deciding what tools these bots can interact with.” – Aniano
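One plausible shape for such a safety method, sketched under the assumption of a reviewed allowlist (all names invented): intersect whatever tools an MCP server auto-discovers with the tools a human has approved, and quarantine the rest before the bot ever sees them.

```python
# Hedged sketch: filter auto-discovered tools against a human-reviewed
# allowlist instead of exposing everything a server advertises.
REVIEWED_TOOLS = {"search_kb", "get_order_status"}   # approved by a human

def safe_tool_surface(discovered: list[str]) -> list[str]:
    """Expose only reviewed tools; set aside the rest for triage."""
    exposed, quarantined = [], []
    for tool in discovered:
        (exposed if tool in REVIEWED_TOOLS else quarantined).append(tool)
    if quarantined:
        print(f"quarantined for review: {quarantined}")
    return exposed
```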
Security Concerns
Enterprises worry when AI handles authentication tasks such as:
- Sending/processing one‑time passwords (OTP)
- Managing SMS codes
- Other two‑step verification methods
If AI mis‑authenticates or misidentifies a user, the result can be:
- Sensitive data leakage
- An entry point for attackers
The Human‑AI Spectrum
| Stage | Description |
|---|---|
| Today | The human remains the final decision‑maker. |
| Future possibility | AI could act as the final authority; specialized agents could mimic human gut feeling and gain deeper system integration. |
Adoption Variability
- Highly regulated sectors (e.g., financial services) still require human involvement in authentication.
- Legacy or conservative organizations trust only humans to authenticate other humans.
Zendesk’s Ongoing Experiments
- Developing AI agents that are more connected to internal systems.
- Collaborating with a select group of customers to define and test guardrails.
Prepared from the remarks of Aniano, reflecting current challenges and future directions for AI‑driven authentication in CRM platforms.
Standing Authorization Is Coming
In the future, agents may be trusted more than humans to perform certain tasks and could be granted permissions far beyond what humans have today, Xanthos said. However, we are still a long way from that reality. For the most part, the fear of something going wrong is what’s holding enterprises back.
“Which is a good fear, right? I’m not saying that it is a bad thing,” he added.
Many enterprises aren’t yet comfortable with an agent handling all steps of a workflow or fully closing the loop on its own. They still want human review.
What’s on the horizon?
- Standing authorization for agents in a few generally safe scenarios (e.g., coding).
- Gradual expansion to more open‑ended, low‑risk situations.
Xanthos acknowledged that there will always be high‑risk situations where AI mistakes could “mutate the state of the production system.”
“There’s no going back, obviously; this is moving faster than maybe even mobile did. So the question is what do we do about it?”
What Security Teams Can Do Now
Both speakers highlighted interim measures that can be implemented with existing tooling:
- Fine‑grained index‑level access controls – Xanthos noted that tools such as Splunk already support applying these controls to individual agents.
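As a concrete illustration of the Splunk-style control Xanthos mentions, a role in Splunk's `authorize.conf` can restrict searches to specific indexes, and an agent's service account would then be assigned only that role. The role and index names below are examples, not a recommended configuration:

```ini
# authorize.conf -- example role limiting an agent's service account
# to a single index (role and index names are illustrative)
[role_agent_readonly]
importRoles = user
# indexes this role is allowed to search
srchIndexesAllowed = app_logs
# indexes searched when no index is specified
srchIndexesDefault = app_logs
```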
- Declarative API design with strict scopes – Aniano described Zendesk’s approach as a practical starting point:
  - API calls are defined declaratively and limited to explicitly sanctioned actions.
  - Access and scope limits are enforced rigorously.
  - Any expansion of agent permissions requires human review before being granted.
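The human-review requirement in the last bullet can be sketched as a scope registry with no code path that widens access silently: expansions are queued, and only an explicit approval (recorded for audit) makes them live. All names here are illustrative assumptions:

```python
# Sketch: agent scopes can only grow through a human approval step,
# and every approval is recorded for audit. Names are invented.
class ScopeRegistry:
    def __init__(self):
        self._scopes: dict[str, set[str]] = {}
        self._pending: list[tuple[str, str]] = []
        self._audit: list[tuple[str, str, str]] = []

    def grant_initial(self, agent_id: str, scopes: set[str]) -> None:
        self._scopes[agent_id] = set(scopes)

    def request_expansion(self, agent_id: str, scope: str) -> None:
        self._pending.append((agent_id, scope))      # queued, not granted

    def approve(self, agent_id: str, scope: str, reviewer: str) -> None:
        """Only an explicit human approval moves a scope from pending to live."""
        self._pending.remove((agent_id, scope))
        self._scopes[agent_id].add(scope)
        self._audit.append((agent_id, scope, reviewer))  # who widened access

    def allowed(self, agent_id: str, scope: str) -> bool:
        return scope in self._scopes.get(agent_id, set())
```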
“We’re always checking those gates and seeing how we can widen the aperture.” – Aniano
Key principle: Do not grant standing authorization until each expansion has been validated.