Enterprise identity was built for humans — not AI agents

Published: March 10, 2026 at 01:00 AM EDT
7 min read

Source: VentureBeat

Presented by 1Password

Adding agentic capabilities to enterprise environments is fundamentally reshaping the threat model by introducing a new class of actor into identity systems.

The problem: AI agents are taking action within sensitive enterprise systems—logging in, fetching data, calling LLM tools, and executing workflows—often without the visibility or control that traditional identity and access systems were designed to enforce.

AI tools and autonomous agents are proliferating across enterprises faster than security teams can instrument or govern them. At the same time, most identity systems still assume:

  • Static users
  • Long‑lived service accounts
  • Coarse role assignments

These systems were not designed to represent delegated human authority, short‑lived execution contexts, or agents operating in tight decision loops.

Rethinking the Trust Layer

As a result, IT leaders need to step back and rethink the trust layer itself. This shift isn’t theoretical. NIST’s Zero Trust Architecture (SP 800‑207) explicitly states that:

“All subjects — including applications and non‑human entities — are considered untrusted until authenticated and authorized.”

In an agentic world, that means AI systems must have explicit, verifiable identities of their own, not operate through inherited or shared credentials.
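One way to picture an "explicit, verifiable identity" for an agent is a short-lived signed token that names both the agent and the human whose authority it carries. The sketch below is illustrative only; the claim names, the in-memory signing key, and the HMAC scheme are assumptions for the example, not a NIST-prescribed or 1Password design:

```python
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-secret"  # hypothetical key; a real deployment would use a KMS


def mint_agent_token(agent_id: str, delegator: str, scopes: list[str],
                     ttl_seconds: float = 300) -> dict:
    """Mint a short-lived, signed identity token for an agent.

    The token names the agent itself, the human whose authority it
    carries, the scopes it may exercise, and an explicit expiry.
    """
    claims = {
        "sub": agent_id,       # the agent's own identity
        "act_for": delegator,  # the human who delegated authority
        "scopes": scopes,      # what the agent may do
        "exp": time.time() + ttl_seconds,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}


def verify_agent_token(token: dict) -> bool:
    """Reject tokens with a tampered payload or an elapsed expiry."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False
    return time.time() < token["claims"]["exp"]
```

The point of the shape, rather than the crypto details, is that the agent's identity, its delegator, and an expiry travel together and are checked on every use, instead of the agent inheriting a standing shared credential.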

“Enterprise IAM architectures are built to assume all system identities are human, which means that they count on consistent behavior, clear intent, and direct human accountability to enforce trust,” says Nancy Wang, CTO at 1Password and Venture Partner at Felicis.
“Agentic systems break those assumptions. An AI agent is not a user you can train or periodically review. It is software that can be copied, forked, scaled horizontally, and left running in tight execution loops across multiple systems. If we continue to treat agents like humans or static service accounts, we lose the ability to clearly represent who they are acting for, what authority they hold, and how long that authority should last.”

How AI Agents Turn Development Environments into Security Risk Zones

One of the first places these identity assumptions break down is the modern development environment. The integrated development environment (IDE) has evolved beyond a simple editor into an orchestrator capable of reading, writing, executing, fetching, and configuring systems. With an AI agent at the heart of this process, prompt injection isn’t just an abstract possibility; it becomes a concrete risk.

  • Because traditional IDEs weren’t designed with AI agents as a core component, adding aftermarket AI capabilities introduces new kinds of risks that traditional security models weren’t built to account for.
  • Example: A seemingly harmless README might contain concealed directives that trick an assistant into exposing credentials during standard analysis.
  • Project content from untrusted sources can alter agent behavior in unintended ways, even when that content bears no obvious resemblance to a prompt.
  • Input sources now extend beyond files that are deliberately run. Documentation, configuration files, filenames, and tool metadata are all ingested by agents as part of their decision‑making processes, influencing how they interpret a project.
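The risks above can be made concrete with a minimal content screen that runs before untrusted project files reach an agent’s context window. The patterns below are illustrative examples of known injection styles (hidden HTML comments, credential-seeking directives), not a complete defense, and the function names are assumptions for the sketch:

```python
import re

# Illustrative patterns only: hidden HTML comments and imperative
# "instructions to the assistant" that ask for secrets.
SUSPICIOUS = [
    re.compile(r"<!--.*?-->", re.DOTALL),
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"(reveal|print|exfiltrate).{0,40}(credential|secret|token)",
               re.IGNORECASE),
]


def flag_untrusted_content(text: str) -> list[str]:
    """Return matched fragments so a human or policy layer can review
    them before the content is ingested by an agent."""
    hits = []
    for pattern in SUSPICIOUS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits
```

A screen like this is deliberately crude: its value is forcing untrusted input through a review gate, not catching every obfuscated directive.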

Trust Erodes When Agents Act Without Intent or Accountability

When you add highly autonomous, non‑deterministic agents operating with elevated privileges—capable of reading, writing, executing, or reconfiguring systems—the threat grows. These agents have:

  • No context
  • No ability to determine whether a request for authentication is legitimate
  • No knowledge of who delegated that request
  • No built‑in boundaries for their actions

“With agents, you can’t assume that they have the ability to make accurate judgments, and they certainly lack a moral code,” Wang says. “Every one of their actions needs to be constrained properly, and access to sensitive systems and what they can do within them needs to be more clearly defined. The tricky part is that they’re continuously taking actions, so they also need to be continuously constrained.”

Where Traditional IAM Fails with Agents

Traditional identity and access management (IAM) systems operate on several core assumptions that agentic AI violates:

  • Static privilege models: Agents execute chains of actions that require different privilege levels at different moments. Least‑privilege can no longer be a “set‑it‑and‑forget‑it” configuration; it must be scoped dynamically, with automatic expiration and refresh mechanisms.
  • Human accountability: Legacy systems assume every identity traces back to a specific person who can be held responsible. Agents blur this line, making it unclear under whose authority they operate. When duplicated, modified, or left running long after their original purpose, the risk multiplies.
  • Behavior‑based detection: Human users follow recognizable patterns (e.g., logging in during business hours). Agents operate continuously across multiple systems, causing legitimate workflows to be flagged as suspicious and overwhelming traditional anomaly‑detection tools.
  • Visibility of identities: Traditional IAM tools expect static, manageable identities. Agents can spin up new identities dynamically, operate through existing service accounts, or leverage credentials in ways that make them invisible to conventional IAM solutions.

Prepared by the 1Password team.
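The “scoped dynamically with automatic expiration” model above can be sketched as a grant object that is valid for exactly one scope and a short time window, checked before every step of an agent’s action chain. The scope names and TTLs here are illustrative assumptions:

```python
import time


class EphemeralGrant:
    """A privilege grant scoped to one action and expiring automatically:
    a sketch of least privilege with automatic expiration, rather than a
    standing role assignment."""

    def __init__(self, scope: str, ttl_seconds: float):
        self.scope = scope
        self.expires_at = time.monotonic() + ttl_seconds

    def allows(self, requested_scope: str) -> bool:
        return (requested_scope == self.scope
                and time.monotonic() < self.expires_at)


def run_step(grant: EphemeralGrant, requested_scope: str) -> str:
    """Refuse any step whose scope was not granted or has expired."""
    if not grant.allows(requested_scope):
        raise PermissionError(
            f"scope {requested_scope!r} not granted or expired")
    return f"executed {requested_scope}"
```

In this shape, a long-running agent must return to the issuing authority for a fresh grant at each privilege boundary, rather than accumulating standing access.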

Rethinking Security Architecture for Agentic Systems

“It’s the whole context piece, the intent behind an agent, and traditional IAM systems don’t have any ability to manage that,” Wang says. “This convergence of different systems makes the challenge broader than identity alone, requiring context and observability to understand not just who acted, but why and how.”

Securing agentic AI requires rethinking the enterprise security architecture from the ground up

Several key shifts are necessary:

  1. Identity as the control plane for AI agents

    • Identity should be treated as the fundamental control plane, not just another security component.
    • Major security vendors are already integrating identity into every security solution and stack.
  2. Context‑aware access as a requirement for agentic AI

    • Policies must be far more granular, defining what an agent can access and under what conditions.
    • Considerations include:
      • Who invoked the agent?
      • What device does it run on?
      • What time constraints apply?
      • Which specific actions are permitted within each system?
  3. Zero‑knowledge credential handling for autonomous agents

    • Keep credentials completely out of the agent’s view.
    • Techniques such as agentic autofill inject credentials into authentication flows without exposing them in plain text—similar to password managers for humans, but extended to software agents.
  4. Auditability requirements for AI agents

    • Traditional audit logs (API calls, authentication events) are insufficient.
    • Required audit data:
      • Who the agent is.
      • Whose authority it operates under.
      • Scope of authority granted.
      • Complete chain of actions taken to accomplish a workflow.
    • Mirrors detailed activity logging for human employees, but must scale to software entities executing hundreds of actions per minute.
  5. Enforcing trust boundaries across humans, agents, and systems

    • Define clear, enforceable boundaries for what an agent can do when invoked by a specific person on a particular device.
    • Separate intent (what a user wants) from execution (what the agent actually does).
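The context-aware checks in shift 2 (invoker, device, time, permitted action) can be sketched as a single policy evaluation in which every condition must hold; the policy fields and values below are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class Request:
    invoker: str  # who invoked the agent
    device: str   # what device it runs on
    hour: int     # local hour, for time constraints
    action: str   # the specific action requested


# Illustrative policy: the names and values are assumptions for the sketch.
POLICY = {
    "allowed_invokers": {"alice@example.com"},
    "allowed_devices": {"managed-laptop-17"},
    "allowed_hours": range(8, 19),  # business hours only
    "allowed_actions": {"tickets:read", "tickets:comment"},
}


def authorize(req: Request, policy: dict = POLICY) -> bool:
    """Deny unless every contextual condition holds."""
    return (req.invoker in policy["allowed_invokers"]
            and req.device in policy["allowed_devices"]
            and req.hour in policy["allowed_hours"]
            and req.action in policy["allowed_actions"])
```

The conjunctive shape matters: the agent is not granted access for being a known identity alone, but only for a known identity acting in an approved context.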
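The zero-knowledge handling in shift 3 can be pictured as a broker that injects the secret into the authentication call itself, so the agent orchestrates the login flow without ever holding the credential. This is a toy sketch under assumed names, not the agentic-autofill product API:

```python
import os


class CredentialBroker:
    """Holds secrets on the agent's behalf; the secret value never
    enters the agent's own context."""

    def __init__(self):
        # Hypothetical vault contents, read from the environment here
        # only to keep the sketch self-contained.
        self._vault = {"crm": os.environ.get("CRM_PASSWORD", "s3cret-demo")}

    def open_session(self, system: str, authenticate) -> bool:
        # The broker injects the credential directly into the auth
        # callback supplied by the agent.
        return authenticate(self._vault[system])


def agent_workflow(broker: CredentialBroker) -> str:
    # The agent supplies the *flow* (here a trivial stand-in check);
    # it never reads or logs the secret itself.
    ok = broker.open_session("crm", lambda secret: len(secret) > 0)
    return "logged in" if ok else "denied"
```

In a real system the injection point would sit below the agent’s reach (e.g., at the transport layer), so that even a compromised or prompt-injected agent has no secret to exfiltrate.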
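The audit requirements in shift 4 map naturally onto a structured event that records the agent’s identity, its delegated authority, the scope granted, and each action in the chain. The schema below is an illustrative assumption, not a standard format:

```python
import time
from dataclasses import dataclass, field


@dataclass
class AuditEvent:
    """One entry in an agent audit trail: who the agent is, whose
    authority it operates under, what scope it was granted, and the
    concrete action it took."""
    agent_id: str
    acting_for: str            # whose authority the agent operates under
    granted_scopes: list[str]  # scope of authority granted
    action: str                # one step in the workflow's action chain
    timestamp: float = field(default_factory=time.time)


def record(trail: list[AuditEvent], event: AuditEvent) -> None:
    """Append-only trail; ordering preserves the chain of actions."""
    trail.append(event)
```

Because an agent may emit hundreds of such events per minute, the practical work is less in the schema than in storage and query design, but the fields above are the minimum needed to reconstruct “who acted, for whom, and why.”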

The Future of Enterprise Security in an Agentic World

As agentic AI becomes embedded in everyday enterprise workflows, the security challenge isn’t whether organizations will adopt agents; it’s whether the systems that govern access can evolve to keep pace.

  • Blocking AI at the perimeter is unlikely to scale.
  • Extending legacy identity models won’t work either.

What’s required: a shift toward identity systems that can account for context, delegation, and accountability in real time, across humans, machines, and AI agents.

“The step function for agents in production will not come from smarter models alone,” Wang says. “It will come from predictable authority and enforceable trust boundaries. Enterprises need identity systems that can clearly represent who an agent is acting for, what it is allowed to do, and when that authority expires. Without that, autonomy becomes unmanaged risk. With it, agents become governable.”


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.
