Why I Am Writing '11 Controls for Zero Trust Architecture in Multi-Agent AI-to-AI Systems'

Published: December 12, 2025 at 06:33 PM EST
3 min read
Source: Dev.to

Most security models were never designed for autonomous systems talking to each other. They assume a human login, a session, a perimeter, and a moment where someone is done authenticating. That logic breaks down fast once you introduce autonomous agents that run continuously, make decisions without pause, and interact with other agents at machine speed.

I ran into this problem repeatedly while working across security architecture, AI systems, and Zero Trust theory. The controls existed, but they were scattered: identity lived in one place, authorization in another, rate limiting somewhere else, and time‑based controls were often an afterthought, if they existed at all.

What bothered me was not that the ideas were missing, but that nobody was putting them together into a system that assumed autonomy from the start. That is why I wrote this book.

The core problem

AI‑to‑AI communication removes the human pause. There is no login screen, no "are you sure?" prompt, and no natural delay where a mistake can be caught by a person. Once an agent is trusted, it acts. If that trust is too broad, too long‑lived, or too static, the system fails quietly and completely.

Most Zero Trust discussions stop at identity and policy, which is not enough for autonomous environments. You also need to control when actions are allowed, how often they can occur, how disagreement is resolved, and how trust decays over time. Systems must assume failure, drift, and compromise as normal operating conditions.
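
To make the last two of those ideas concrete, here is a minimal sketch of rate limiting plus trust decay in Python. It is not an excerpt from the book; the class and thresholds (AgentTrust, TRUST_HALF_LIFE_S, MAX_ACTIONS_PER_MIN) are placeholders I am using purely for illustration.

```python
import time

# Illustrative thresholds; real values would come from policy, not code.
TRUST_HALF_LIFE_S = 300      # trust halves every 5 minutes without re-attestation
MAX_ACTIONS_PER_MIN = 30     # hard ceiling on how often one agent may act

class AgentTrust:
    """Tracks one agent's decaying trust score and recent action rate."""

    def __init__(self, initial_score: float):
        self.initial_score = initial_score
        self.granted_at = time.monotonic()
        self.recent_actions: list[float] = []

    def current_score(self) -> float:
        # Trust decays exponentially from the moment it was granted.
        age = time.monotonic() - self.granted_at
        return self.initial_score * 0.5 ** (age / TRUST_HALF_LIFE_S)

    def allow_action(self, min_score: float = 0.5) -> bool:
        # An action needs BOTH fresh-enough trust and a sane rate.
        now = time.monotonic()
        self.recent_actions = [t for t in self.recent_actions if now - t < 60]
        if len(self.recent_actions) >= MAX_ACTIONS_PER_MIN:
            return False     # rate control: slow the agent down
        if self.current_score() < min_score:
            return False     # time control: trust has decayed, re-attest first
        self.recent_actions.append(now)
        return True
```

The exact decay curve matters less than the property it enforces: stale trust converges to zero instead of persisting forever.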

What the book actually does

11 Controls for Zero Trust Architecture in Multi‑Agent AI‑to‑AI Systems is not a theory book; it is a control framework. Each chapter focuses on a single control that answers a specific security question, such as:

  • Who is this agent really?
  • What is it allowed to do right now?
  • How long should that permission exist?
  • What happens if it starts behaving strangely?
  • How does the system slow or stop damage without human intervention?

The controls are designed to stack:

  • Identity alone is not trusted.
  • Authorization alone is not trusted.
  • Time alone is not trusted.

The system only moves forward when multiple controls agree. If one control fails, the others are meant to catch it.
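
As a rough illustration of that stacking (again a sketch, not the book's implementation), the gate below authorizes a request only when identity, authorization, and a time window all independently agree. The agent registry, allowlist, and maintenance window are invented for the example.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable

# Illustrative registry and allowlist; a real system would back these
# with attestation and policy services, not in-memory dicts.
KNOWN_AGENTS = {"planner-01", "executor-07"}
PERMISSIONS = {"planner-01": {"read"}, "executor-07": {"deploy"}}

@dataclass
class Request:
    agent_id: str
    action: str

Control = Callable[[Request], bool]

def identity_ok(req: Request) -> bool:
    # Identity alone is not trusted, but it is required.
    return req.agent_id in KNOWN_AGENTS

def authorization_ok(req: Request) -> bool:
    # Authorization alone is not trusted either: per-agent allowlist.
    return req.action in PERMISSIONS.get(req.agent_id, set())

def time_window_ok(req: Request) -> bool:
    # Example time control: risky actions only inside a 09:00-17:00 UTC window.
    hour = datetime.now(timezone.utc).hour
    return req.action != "deploy" or 9 <= hour < 17

def authorize(req: Request, controls: list[Control]) -> bool:
    # The system moves forward only when every control agrees;
    # any single failing control blocks the action.
    return all(check(req) for check in controls)

# Usage: all three controls must independently approve this request.
req = Request(agent_id="executor-07", action="deploy")
print(authorize(req, [identity_ok, authorization_ok, time_window_ok]))
```

The useful property of this shape is that adding a control can only narrow access, never widen it.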

Why I am sharing this now

I am preparing the book for release and sharing excerpts and ideas publicly as part of that process—to pressure‑test the concepts. If these ideas resonate with you, great; if you disagree, even better. Autonomous systems are already here, and pretending traditional security models will stretch to fit them is wishful thinking.

This book exists because I could not find a single place where these controls were treated as a cohesive system instead of isolated best practices.

What is next

Over the next few weeks, I will post short essays and excerpts that break down individual controls, failure modes, and design patterns for securing agent‑to‑agent communication. If you are working with AI systems, automation, or distributed services, you will recognize the problems immediately. And if you are building something that talks to itself, you probably already know why this matters.

Pre‑orders go live on January 15, 2026, with the full release on January 31, 2026. Look for it on Amazon.
