Making Google ADK Agents Audit-Ready for the EU AI Act

Published: March 7, 2026 at 03:47 PM EST
4 min read
Source: Dev.to

Google open‑sourced the Agent Development Kit (ADK) – the same framework that powers Agentspace and the Customer Engagement Suite. It is already pulling 3.7 M downloads a month on PyPI, so ADK agents are going to be everywhere.

Problem: None of these agents have audit trails.
Every ADK agent you deploy will need to prove EU AI Act compliance by 2 Aug 2026 (penalties up to €35 M or 7 % of global turnover).
Solution: air‑adk‑trust – a tiny plug‑in that adds a tamper‑evident audit chain.

Install

pip install air-adk-trust

One‑liner to Make Any ADK Agent Audit‑Ready

from google.adk.agents import Agent
from google.adk.runners import Runner, InMemorySessionService
from air_adk_trust import AIRBlackboxPlugin

agent = Agent(
    model="gemini-2.0-flash",
    name="my_agent",
    instruction="You are a helpful assistant.",
    tools=[my_tool],
)

runner = Runner(
    agent=agent,
    app_name="my_app",
    session_service=InMemorySessionService(),
    plugins=[AIRBlackboxPlugin()],   # ← this line adds the audit layer
)

That’s it. Every LLM call, tool execution, and agent delegation is logged to a tamper‑evident HMAC‑SHA256 audit chain – no cloud service, no API keys, runs entirely on your machine.

Why ADK Makes This Easy

Most agent frameworks require monkey‑patching or wrapper functions to add observability. ADK was built with a first‑class Plugin system and callback hooks at every stage of the agent lifecycle.

Hook Flow

User Message
  ↓
before_agent   → Start audit record, check risk tier
  ↓
before_model   → Scan prompt for PII, log request hash
  ↓
LLM Call
  ↓
after_model    → Log response hash, track token spend
  ↓
before_tool    → Classify tool risk, enforce policy
  ↓
Tool Runs
  ↓
after_tool     → Log tool result, append to audit chain
  ↓
after_agent    → Seal HMAC chain, finalize record

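To make the hook pattern concrete, here is a generic sketch of a plugin object implementing those six callbacks. The class name, hook signatures, and record fields are illustrative assumptions for this post, not ADK's or air‑adk‑trust's actual API:

```python
import hashlib
import json
import time

class AuditPlugin:
    """Illustrative lifecycle-hook plugin (hypothetical, not the real ADK API)."""

    def __init__(self):
        self.records = []

    def _log(self, event, **data):
        # Every hook appends a timestamped record to an in-memory log.
        self.records.append({"ts": time.time(), "event": event, **data})

    def before_agent(self, agent_name):
        self._log("agent_start", agent=agent_name)

    def before_model(self, prompt):
        # Log a hash of the prompt rather than the raw text.
        self._log("model_request",
                  prompt_sha256=hashlib.sha256(prompt.encode()).hexdigest())

    def after_model(self, response, tokens):
        self._log("model_response",
                  response_sha256=hashlib.sha256(response.encode()).hexdigest(),
                  tokens=tokens)

    def before_tool(self, tool_name, args):
        self._log("tool_call", tool=tool_name, args=args)

    def after_tool(self, tool_name, result):
        self._log("tool_result", tool=tool_name,
                  result_sha256=hashlib.sha256(
                      json.dumps(result, sort_keys=True).encode()).hexdigest())

    def after_agent(self, agent_name):
        self._log("agent_complete", agent=agent_name)
```

Logging hashes instead of raw prompts and results is the key design choice here: the chain can still prove what happened without the log itself becoming a store of sensitive data.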
Six callback hooks map cleanly to six EU AI Act articles:

| EU AI Act Article                 | What the Plugin Does                                           |
|-----------------------------------|----------------------------------------------------------------|
| Art. 9 – Risk Management          | Classifies agent actions by risk tier; blocks high‑risk tools  |
| Art. 10 – Data Governance         | Detects PII in prompts and responses before they reach the LLM |
| Art. 11 – Technical Documentation | Generates structured audit logs for every agent action         |
| Art. 12 – Record‑Keeping          | HMAC‑SHA256 tamper‑evident chain, cryptographically verifiable |
| Art. 14 – Human Oversight         | Tool‑confirmation gates for high‑risk operations               |
| Art. 15 – Robustness              | Tracks failures, detects loops, monitors error rates           |
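As a rough illustration of the Art. 10 row, a prompt scan can be as simple as a set of regular expressions run before the text reaches the LLM. The patterns and function name below are a minimal sketch of my own, far cruder than what a real detector would use:

```python
import re

# Illustrative PII patterns; a production detector would be far more thorough.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def scan_pii(text):
    """Return the kinds of PII found in a prompt, sorted alphabetically."""
    return sorted(kind for kind, pat in PII_PATTERNS.items() if pat.search(text))
```

A plugin's before_model hook can then log the findings (or block the call) before any data leaves the machine.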

Multi‑Agent Coverage Out of the Box

ADK agents can delegate to sub‑agents (researchers, writers, reviewers, …). The plug‑in fires callbacks for every agent in the tree, not just the root.

from google.adk.agents import Agent
from air_adk_trust import AIRBlackboxPlugin
from google.adk.runners import Runner, InMemorySessionService

researcher = Agent(name="researcher", model="gemini-2.0-flash", ...)
writer     = Agent(name="writer",     model="gemini-2.0-flash", ...)
reviewer   = Agent(name="reviewer",   model="gemini-2.0-flash", ...)

orchestrator = Agent(
    name="orchestrator",
    model="gemini-2.0-flash",
    sub_agents=[researcher, writer, reviewer],
)

runner = Runner(
    agent=orchestrator,
    app_name="content_pipeline",
    session_service=InMemorySessionService(),
    plugins=[AIRBlackboxPlugin()],   # One instance covers all four agents
)

The audit chain captures the full delegation tree – which agent called which, what tools they used, and the LLM responses at each step. When a regulator asks “show me the decision chain,” you hand them the chain.
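Because every record can carry the name of the agent that delegated the work, reconstructing that decision chain is just a grouping pass over the flat log. The record fields below (`agent`, `parent`) are assumptions for illustration, not the plugin's actual schema:

```python
def build_tree(records):
    """Group flat audit records into a parent -> children delegation map."""
    tree = {}
    for rec in records:
        tree.setdefault(rec["parent"], []).append(rec["agent"])
    return tree

def print_tree(tree, root, indent=0):
    """Render the delegation tree with indentation, one agent per line."""
    print("  " * indent + root)
    for child in tree.get(root, []):
        print_tree(tree, child, indent + 1)

# Hypothetical records from the content_pipeline run above.
records = [
    {"agent": "orchestrator", "parent": None},
    {"agent": "researcher",   "parent": "orchestrator"},
    {"agent": "writer",       "parent": "orchestrator"},
    {"agent": "reviewer",     "parent": "orchestrator"},
]
```

Calling `print_tree(build_tree(records), "orchestrator")` prints the orchestrator with its three sub‑agents indented beneath it.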

The Audit Chain (How HMAC‑SHA256 Works Here)

Each lifecycle event is chained together cryptographically. Every record stores the hash of the previous record, making the chain tamper‑evident – any modification breaks the chain and can be proven.

Record 1: agent_start
  hash: abc123

Record 2: model_call
  prev_hash: abc123
  hash: def456

Record 3: tool_call (web_search)
  prev_hash: def456
  hash: ghi789

Record 4: agent_complete
  prev_hash: ghi789
  hash: jkl012   ← final seal

This isn’t just logging; it’s evidence – the same principle used by flight recorders.
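The chaining scheme above can be sketched in a few lines with Python's standard `hmac` and `hashlib` modules. The helper names and record layout are my own assumptions; air‑adk‑trust's actual format may differ:

```python
import hashlib
import hmac
import json

def seal(key, prev_hash, record):
    """HMAC-SHA256 over the previous hash plus a canonical record encoding."""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()

def append(chain, key, record):
    """Add a record whose seal covers the previous record's hash."""
    prev = chain[-1]["hash"] if chain else ""
    chain.append({"record": record, "prev_hash": prev,
                  "hash": seal(key, prev, record)})

def verify(chain, key):
    """Recompute every link; any edited record breaks the chain."""
    prev = ""
    for entry in chain:
        expected = seal(key, prev, entry["record"])
        if entry["prev_hash"] != prev or not hmac.compare_digest(entry["hash"], expected):
            return False
        prev = entry["hash"]
    return True
```

Because each seal covers the previous hash, editing any one record invalidates that record and every record after it, which is exactly what makes the chain evidence rather than just a log.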

What It Doesn’t Do

air‑adk‑trust is a technical linter for AI governance, not a legal compliance guarantee. It won’t make you “EU AI Act compliant” on its own, but it provides the tamper‑evident audit trails, PII detection, risk classification, and policy enforcement that auditors and compliance teams need.

Framework #6 in the AIR Blackbox Ecosystem

air‑adk‑trust joins five other trust layers, all open‑source (Apache 2.0) and on PyPI:

pip install air-langchain-trust   # LangChain / LangGraph
pip install air-crewai-trust      # CrewAI
pip install air-autogen-trust     # AutoGen / AG2
pip install air-anthropic-trust   # Anthropic Claude SDK
pip install air-rag-trust         # RAG pipelines
pip install air-adk-trust         # Google ADK  ← this package

The goal is coverage – whatever framework you’re building with, audit trails should be one import away.

Try It

pip install air-adk-trust

Air ADK Trust Repository
https://github.com/airblackbox/air-adk-trust

Full Ecosystem
https://github.com/airblackbox

Live Demo
https://airblackbox.ai/demo

The August 2026 deadline is closing in. Your agents need audit trails. This is a place to start.

If you have questions about the architecture, how the HMAC chain works, or how to integrate with your existing ADK agents, feel free to:

  • Drop a comment, or
  • Open an issue on GitHub.