Built runtime security for AI agents
Source: Dev.to
Problem
There is currently no standard way to control what AI agents are allowed to do at runtime, leading to risks such as accidental data leaks or unauthorized actions.
Solution: Agent‑SPM
Agent‑SPM is a security layer that enforces policies on agent actions in real time.
How it works
- Define policies – Create a policy file that specifies what the agent can and cannot do.
- Runtime checks – Every action the agent attempts is checked against the policy before execution, acting like a firewall for AI agents.
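The two steps above can be sketched in a few lines. This is a hypothetical illustration, not Agent‑SPM's actual API: the policy structure and the `is_allowed` helper are assumptions made for the example.

```python
import fnmatch

# A policy file might declare allow/deny rules as tool-name patterns
# (hypothetical format; the real policy schema may differ).
POLICY = {
    "allow": ["search_*", "read_file"],
    "deny": ["delete_*", "send_email"],
}

def is_allowed(tool_name: str) -> bool:
    """Firewall-style gate: check a tool call against the policy
    before execution. Deny rules win over allow rules."""
    if any(fnmatch.fnmatch(tool_name, pat) for pat in POLICY["deny"]):
        return False
    return any(fnmatch.fnmatch(tool_name, pat) for pat in POLICY["allow"])
```

An agent runtime would call `is_allowed` on every attempted tool invocation and refuse to execute anything the policy does not permit.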
Goals
- Prevent data leaks – Detect SSNs, credit cards, API keys in tool arguments.
- Stop unauthorized actions – Block bulk exports and dangerous commands.
- Enable human oversight – Require approval for high‑risk operations.
- Emergency controls – Provide a kill switch to disable rogue agents.
- Compliance – Generate automatic audit trails for regulatory requirements.
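The first goal, scanning tool arguments for sensitive data, can be approximated with pattern matching. The patterns and the `scan_arguments` helper below are illustrative assumptions; a production detector would need far more robust rules (checksums, context, entropy checks).

```python
import re

# Simplified detection patterns for common sensitive-data formats.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
}

def scan_arguments(args: dict) -> list[str]:
    """Return the names of sensitive-data types found in any
    string-valued tool argument."""
    found = []
    for name, pattern in PII_PATTERNS.items():
        if any(pattern.search(v) for v in args.values() if isinstance(v, str)):
            found.append(name)
    return found
```

A policy could then block, redact, or escalate to human approval whenever `scan_arguments` returns a non-empty list.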
Technical Details
- Open source – MIT license.
- Framework agnostic – Works with any LLM framework (LangChain, CrewAI, Claude, custom).
- Zero infrastructure – Runs inside the agent’s own process.
- Modular – 8 composable packages; install only what you need.
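"Zero infrastructure" and "framework agnostic" suggest an in-process wrapper around plain callables, which any framework's tools reduce to. The decorator and kill-switch flag below are a minimal sketch under that assumption, not Agent‑SPM's actual interface.

```python
import functools

# In-process emergency control: flip to True to halt all agent actions
# (hypothetical mechanism for illustration).
KILL_SWITCH = {"active": False}

def guarded(tool_fn):
    """Wrap any tool callable with an in-process gate. Because it
    operates on plain Python functions, the same wrapper can apply
    to LangChain tools, CrewAI tools, or custom callables."""
    @functools.wraps(tool_fn)
    def wrapper(*args, **kwargs):
        if KILL_SWITCH["active"]:
            raise PermissionError("Agent disabled by kill switch")
        return tool_fn(*args, **kwargs)
    return wrapper

@guarded
def read_file(path: str) -> str:
    # Stand-in tool for the example.
    return f"contents of {path}"
```

Because the gate runs inside the agent's own process, no external service or sidecar is needed, which matches the zero-infrastructure claim.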