Token Security Is an Innovation Sandbox Finalist. Here Is What That Means for AI Agent Governance.

Published: March 17, 2026 at 10:49 PM EDT
6 min read
Source: Dev.to

Douglas Walseth

Overview

RSAC 2026 selected Token Security as one of ten Innovation Sandbox finalists; the company presents on Day 1 (March 23). The program has a strong track record: previous winners include Wiz, Apiiro, and Abnormal Security, each now a multibillion-dollar company.

Token Security is not a governance vendor in the traditional sense. They are building identity security purpose‑built for non‑human identities (NHIs)—the AI agents, service accounts, API keys, and machine credentials that are rapidly outnumbering human users in enterprise infrastructure. Their thesis is straightforward: traditional IAM was designed for humans, and the explosion of AI agents requires a machine‑first identity architecture.

  • Funding: $28 M Series A led by Notable Capital (Jan 2026)
  • Visibility: Innovation Sandbox selection puts them in front of the largest security audience of the year.

What Token Security Actually Does

Token Security’s platform operates at the identity layer for AI agents and NHIs. It delivers four core capabilities:

  1. Continuous NHI Discovery – Automatically finds AI agents and non‑human identities across cloud infrastructure, mapping what exists and what it connects to.
  2. Contextual Identity Graph – Maps relationships between agents, services, resources, and permissions into a queryable graph structure.
  3. Permission Drift Detection – Monitors when agent permissions deviate from their intended scope, catching privilege creep before it becomes a security incident.
  4. Intent‑Based Access Controls – Grants and restricts access based on what agents are supposed to do, not just static role assignments.

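To make capability 3 concrete, here is a minimal sketch of what permission drift detection can look like at its core: comparing an agent's observed grants against its intended scope. The class and permission names (`AgentIdentity`, `invoices:read`, etc.) are hypothetical illustrations, not Token Security's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """A discovered non-human identity and its permission state (illustrative)."""
    name: str
    intended_scope: set[str]                         # permissions the agent should hold
    granted: set[str] = field(default_factory=set)   # permissions observed in the cloud

def detect_drift(agent: AgentIdentity) -> dict[str, set[str]]:
    """Return permissions that deviate from the intended scope.

    'excess'  = privilege creep (granted but never intended)
    'missing' = intended grants the agent no longer holds
    """
    return {
        "excess": agent.granted - agent.intended_scope,
        "missing": agent.intended_scope - agent.granted,
    }

# An agent whose permissions have silently expanded:
billing_bot = AgentIdentity(
    name="billing-bot",
    intended_scope={"invoices:read", "invoices:write"},
    granted={"invoices:read", "invoices:write", "customers:delete"},
)

drift = detect_drift(billing_bot)
print(drift["excess"])   # {'customers:delete'}
```

In a real system the "observed" side would come from continuous discovery across cloud APIs, but the core check is the same set difference: flag the grant before it becomes an incident.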
They also integrate with MCP servers, giving visibility into the agent tool‑chain layer—what tools agents are using and what resources those tools access.

Their content marketing leading into RSA has been notably aggressive: 10+ blog posts published in a single week, each targeting a different segment of the NHI security narrative. This is sophisticated go‑to‑market execution that signals both marketing maturity and confidence in their Innovation Sandbox pitch.

The Question Token Security Answers — And the One It Does Not

Token Security answers a critical question:

“Who are your AI agents, and what can they access?”

This is a real problem. Most organizations have no inventory of their non‑human identities. AI agents spin up with credentials that nobody tracks. Permission sprawl happens silently. When a security team asks, “Which agents have access to production data?”, there is usually no answer.

Token Security provides that answer. Their identity graph and continuous discovery solve the visibility gap that makes NHI governance impossible. This is valuable and necessary work.

But there is a second question that identity governance does not address:

“What will the agent do with that access, and how do you prevent violations?”

An AI agent can be fully discovered in Token Security’s identity graph, have correctly scoped permissions, and pass every NHI compliance check—and still produce outputs that violate compliance policies. It can drift from its behavioral constraints or introduce governance regressions in the codebase it modifies. Identity verification ensures the right agent has the right access; it does not ensure the agent uses that access correctly.

This is the identity‑behavioral gap. Token Security operates at the identity layer, while behavioral enforcement operates at the constraint layer. They are different problems requiring different architectures.

Identity Layer vs. Behavioral Layer

| Layer | Typical failures |
| --- | --- |
| Identity layer (Token Security) | Unknown agents operating in production infrastructure; stale credentials with excessive permissions; permission sprawl across NHI populations; no audit trail of which agents exist or what they connect to |
| Behavioral layer (Walseth AI) | Agents producing outputs that violate compliance policies; context drift where agent behavior diverges from intent; constraint regression when code changes weaken governance controls; no structural prevention of violation classes before runtime |

You can have perfect identity governance and still experience behavioral failures. Conversely, you can have perfect behavioral enforcement and still have NHI visibility gaps. Enterprises need both layers.

What Innovation Sandbox Means for the Market

The Innovation Sandbox selection validates that NHI security is now a first‑class category at RSA, not a niche within traditional IAM. This matters for three reasons:

  1. Visibility – Innovation Sandbox finalists receive press coverage estimated at $5 M+ in equivalent visibility. Token Security’s pitch on March 23 puts NHI identity security in front of every CISO, security architect, and enterprise buyer attending RSA. Search volume for “Token Security”, “NHI security”, and related terms will spike during the week.
  2. Validation – The program’s track record of selecting companies that become category leaders signals that judges see NHI identity as a real, fundable, scalable market. This draws more investment and more competition to the identity layer of agent governance.
  3. Complementary positioning – For organizations evaluating AI agent security, Token Security’s Innovation Sandbox presence clarifies the two‑layer architecture:
    • Identity governance – Who agents are and what they can access.
    • Behavioral enforcement – What agents do and how they comply.

Enterprises that adopt both layers will be better equipped to secure the rapidly expanding universe of non‑human identities.

Prepared by Douglas Walseth
dev.to/douglasrw

NHI Discovery Still Needs Behavioral Constraints for the Agents It Finds

The Two‑Layer Architecture Enterprises Need

The strongest AI‑agent security posture combines both layers:

  • Token Security – discovers every agent, maps every permission, detects every identity drift.
  • Behavioral Enforcement – ensures every discovered agent complies with policies, maintains context integrity, and produces governance‑compliant outputs.

Neither layer alone is sufficient.
Identity without behavioral constraints means you know who your agents are but cannot prevent what they do.
Behavioral constraints without identity management means you govern agent behavior but cannot see your full NHI surface.

Our enforcement ladder operates at five levels, from prose documentation through automated hooks, each compounding on the previous. This prevent‑by‑construction approach eliminates violation classes before they reach runtime—the exact layer that identity governance does not cover.
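As a rough illustration of the "automated hooks" end of that ladder, a pre-merge hook can reject an agent-produced change that matches a known violation class before it ever reaches runtime. The rule names and patterns below are hypothetical examples, not the actual enforcement-ladder rules:

```python
import re

# Hypothetical policy rules: each maps a violation class to a pattern
# that must never appear in an agent-produced change.
POLICY_RULES = {
    "hardcoded-secret": re.compile(r"(api[_-]?key|secret)\s*=\s*['\"][^'\"]+['\"]", re.I),
    "disabled-audit-log": re.compile(r"audit_logging\s*=\s*false", re.I),
}

def check_output(text: str) -> list[str]:
    """Return the violation classes found in an agent-produced change."""
    return [name for name, pattern in POLICY_RULES.items() if pattern.search(text)]

diff = 'api_key = "sk-live-1234"\naudit_logging = true\n'
print(check_output(diff))  # ['hardcoded-secret']
```

Wired into CI or a commit hook, a non-empty result blocks the merge, which is what "eliminating violation classes before runtime" means in practice: the violating change is structurally unable to land.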

What to Watch at RSA

Token Security presents to the Innovation Sandbox judges on March 23. Watch for:

  • How they position NHI discovery relative to existing IAM vendors (particularly Okta for AI Agents, which also targets NHI governance from the enterprise identity side).
  • Whether their pitch addresses the behavioral gap or stays focused on identity.
  • Audience questions about what happens after agents are discovered and permissioned.

For a full comparison of all AI‑governance vendors heading into RSA, see our vendor map. To see how behavioral enforcement scores your own repository, run the free scanner or explore the AI governance leaderboard.

See also: Walseth AI vs Token Security for a detailed feature comparison.

Originally published at walseth.ai.
