Built runtime security for AI agents

Published: February 18, 2026, 5:08 PM EST
1 min read
Source: Dev.to

Problem

There is currently no standard way to control what AI agents are allowed to do at runtime, leading to risks such as accidental data leaks or unauthorized actions.

Solution: Agent‑SPM

Agent‑SPM is a security layer that enforces policies on agent actions in real time.

How it works

  1. Define policies – Create a policy file that specifies what the agent can and cannot do.
  2. Runtime checks – Every action the agent attempts is checked against the policy before execution, acting like a firewall for AI agents.
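The two steps above can be sketched as a minimal policy gate. Everything here is illustrative: the policy dictionary, the `check_action` helper, and the `PolicyViolation` exception are assumptions for the sketch, not Agent‑SPM's actual API or policy format.

```python
import re

# Hypothetical policy: an allowlist of tools plus argument patterns
# that must never pass through. Agent-SPM's real format may differ.
POLICY = {
    "allowed_tools": {"search", "read_file"},
    "blocked_arg_patterns": [r"\b\d{3}-\d{2}-\d{4}\b"],  # e.g. SSN-like strings
}

class PolicyViolation(Exception):
    """Raised when an attempted agent action violates the policy."""

def check_action(tool: str, args: str, policy: dict = POLICY) -> None:
    """Firewall-style gate: run before every agent action executes."""
    if tool not in policy["allowed_tools"]:
        raise PolicyViolation(f"tool '{tool}' not permitted")
    for pattern in policy["blocked_arg_patterns"]:
        if re.search(pattern, args):
            raise PolicyViolation("blocked pattern found in arguments")

# An allowed action passes silently; anything else is stopped pre-execution.
check_action("search", "weather in Boston")
try:
    check_action("delete_db", "--all")
except PolicyViolation as err:
    print(f"blocked: {err}")
```

The key design point is that the check runs *before* execution, so a violating action never reaches the tool at all.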

Goals

  • Prevent data leaks – Detect SSNs, credit cards, API keys in tool arguments.
  • Stop unauthorized actions – Block bulk exports and dangerous commands.
  • Enable human oversight – Require approval for high‑risk operations.
  • Emergency controls – Provide a kill switch to disable rogue agents.
  • Compliance – Generate automatic audit trails for regulatory requirements.
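As one concrete illustration of the first goal, scanning tool arguments for sensitive data can be done with pattern matching. The patterns and the `scan_arguments` helper below are assumptions for the sketch; real detectors (including whatever Agent‑SPM ships) are typically stricter, using checksums and contextual rules to cut false positives.

```python
import re

# Illustrative patterns only -- not Agent-SPM's actual detectors.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{20,}\b"),
}

def scan_arguments(args: str) -> list[str]:
    """Return the kinds of sensitive data found in a tool-call argument string."""
    return [kind for kind, pat in SENSITIVE_PATTERNS.items() if pat.search(args)]

print(scan_arguments("email the report to bob"))      # []
print(scan_arguments("customer SSN is 123-45-6789"))  # ['ssn']
```

A runtime layer would call a scanner like this on every outbound tool call and block, redact, or escalate to a human when it returns a non-empty list.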

Technical Details

  • Open source – MIT license.
  • Framework agnostic – Works with any LLM framework (LangChain, CrewAI, Claude, custom).
  • Zero infrastructure – Runs inside the agent’s own process.
  • Modular – 8 composable packages; install only what you need.

Repository

https://github.com/mlnas/agent-runtime-security
