Your AI agent just did 5 things. Can you prove it?
Source: Dev.to

I’ve been building AI agents for the past year. Last month I realized I have no idea what half of them actually do in production.
Like, I think my support agent looks up the right docs and gives good answers. But when someone asks “why did the bot say X?” — I’m grepping through logs hoping to find something useful. Usually I don’t.
This wasn’t a huge problem until I started reading about the EU AI Act.
The law nobody’s talking about
August 2026 is when the EU AI Act fully kicks in. Fines can reach €35 million or 7% of global annual turnover, whichever is higher.
And here’s the thing that surprised me: AI agents are in scope.
The law doesn’t use the word “agent” anywhere—it was written before the current wave of agentic AI. But it covers “AI systems,” and agents are AI systems. A report from The Future Society confirmed this: the Act wasn’t designed with agents in mind, but it absolutely applies to them.
The tricky part? Agents are harder to comply with than regular AI. A chatbot takes input, gives output—done. An agent takes input, calls three APIs, makes a decision, updates a database, sends an email—and you need to be able to explain every step.
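When someone asks "why did the bot say X?", what you want is a step-by-step trail. Here's a minimal sketch of what one record in that trail might contain — the `AgentStep` shape is my own assumption for illustration, not a standard:

```typescript
// Hypothetical shape of a per-step audit record: one of these for every
// LLM call, tool invocation, and state change the agent makes.
interface AgentStep {
  traceId: string;   // groups all steps of one agent run
  timestamp: string; // ISO 8601, so the run can be replayed in order
  kind: "llm_call" | "tool_call" | "decision" | "side_effect";
  name: string;      // e.g. "lookup_docs", "send_email"
  input: unknown;
  output: unknown;
}

// A single support ticket might produce a trail like this:
const trail: AgentStep[] = [
  { traceId: "t1", timestamp: "2026-02-01T10:00:00Z", kind: "llm_call",    name: "plan",        input: "refund?",        output: "check policy" },
  { traceId: "t1", timestamp: "2026-02-01T10:00:01Z", kind: "tool_call",   name: "lookup_docs", input: "refund policy",  output: "30 days" },
  { traceId: "t1", timestamp: "2026-02-01T10:00:02Z", kind: "side_effect", name: "send_email",  input: "customer@example.com", output: "sent" },
];
```

With a trail like this, "why did the bot say X?" becomes a query over `traceId` instead of a grep through application logs.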
What you actually need to build
I spent a few weeks digging into the actual requirements. Here’s the short version:
1. Log everything (Article 12)
Every LLM call, every tool use, every decision—recorded with timestamps. For high‑risk systems, retain for 10 years.
```typescript
import { AgentGov } from "@agentgov/sdk";
import OpenAI from "openai";

const ag = new AgentGov({
  apiKey: process.env.AGENTGOV_API_KEY,
  projectId: process.env.AGENTGOV_PROJECT_ID,
});

const openai = ag.wrapOpenAI(new OpenAI());
// now every call is traced — inputs, outputs, tokens, cost
```
If you’re using the OpenAI Agents SDK, there’s an exporter that plugs right in:
```typescript
import { BatchTraceProcessor, setTraceProcessors } from "@openai/agents";
import { AgentGovExporter } from "@agentgov/sdk/openai-agents";

setTraceProcessors([
  new BatchTraceProcessor(
    new AgentGovExporter({
      apiKey: process.env.AGENTGOV_API_KEY!,
      projectId: process.env.AGENTGOV_PROJECT_ID!,
    })
  ),
]);
```
2. Tell users it’s AI (Article 50)
If your agent emails customers, chats with users, or generates content, the user must be informed that they are interacting with AI. It sounds obvious, but many deployments omit this disclosure.
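The mechanical part of this is easy; the easy part is exactly why it gets forgotten. A minimal sketch of how I handle it — `withDisclosure` is a hypothetical helper, and the wording is a placeholder you'd want legal review on:

```typescript
// Hypothetical helper: prepend an AI disclosure to outbound agent messages.
// Article 50 requires that users know they're interacting with AI; the exact
// wording and placement here are assumptions, not prescribed by the Act.
const AI_DISCLOSURE =
  "You are chatting with an AI assistant. A human can review this conversation on request.";

function withDisclosure(message: string, alreadyDisclosed: boolean): string {
  // Disclose once per conversation rather than on every message.
  return alreadyDisclosed ? message : `${AI_DISCLOSURE}\n\n${message}`;
}
```

The useful property is that disclosure lives in one place in the send path, so no new tool or prompt change can silently skip it.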
3. Figure out your risk level (Annex III)
Not all agents need the same compliance effort. An agent that merely filters spam is low‑risk and faces minimal requirements. An agent that screens job applicants or scores credit is high‑risk and must meet the full compliance stack. You need to map your use‑case to the Annex III categories.
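Here's a rough sketch of how that mapping might look in code. The category labels and the `classify` logic are my own simplification — the Annex III descriptions are paraphrased and non-exhaustive, so read the actual text before relying on anything like this:

```typescript
// Rough, non-authoritative sketch of mapping a use-case to an EU AI Act tier.
type RiskTier = "minimal" | "limited" | "high";

// Paraphrased examples of Annex III high-risk categories (not the full list):
const HIGH_RISK_USES = new Set([
  "employment-screening", // recruitment, filtering job applicants
  "credit-scoring",       // evaluating creditworthiness
  "education-assessment", // exam scoring, admissions decisions
]);

function classify(useCase: string): RiskTier {
  if (HIGH_RISK_USES.has(useCase)) return "high";
  // Agents that talk directly to people still carry Article 50 transparency
  // duties ("limited"); pure backend processing like spam filtering is minimal.
  // The "-chat" suffix convention here is purely illustrative.
  return useCase.endsWith("-chat") ? "limited" : "minimal";
}
```

The point isn't the lookup table — it's that the classification should be an explicit, reviewable artifact rather than something you decide implicitly by not deciding.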
4. Human oversight (Article 14)
For high‑risk agents, a human must be able to stop the agent, override its decisions, and understand what it is doing. This is genuinely hard for autonomous agents, so I built approval gates for any action with real‑world consequences.
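A stripped-down sketch of what an approval gate can look like — the `requestApproval` transport is a placeholder (in practice it might be Slack, email, or a review queue), and the list of consequential actions is illustrative:

```typescript
// Minimal approval gate: actions with real-world consequences are held
// until a human approves them; everything else runs straight through.
type Action = { name: string; args: Record<string, unknown> };

// Illustrative list -- in a real system this would come from config/policy.
const CONSEQUENTIAL = new Set(["send_email", "refund_payment", "update_record"]);

async function gated(
  action: Action,
  execute: (a: Action) => Promise<string>,
  requestApproval: (a: Action) => Promise<boolean>, // placeholder transport
): Promise<string> {
  if (CONSEQUENTIAL.has(action.name)) {
    const approved = await requestApproval(action);
    if (!approved) return `Action ${action.name} blocked pending human review.`;
  }
  return execute(action);
}
```

Structuring it as a wrapper means the agent code doesn't know or care whether a human is in the loop — which also makes Article 14's "a human can stop it" requirement testable.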
The uncomfortable truth
Most of us are building agents without any of this. I was too. If you’re shipping a side project, maybe it doesn’t matter—yet.
But if you’re building agents for a company that operates in Europe (or has European customers), the deadline is real. August 2026 is only six months away. Retrofitting audit trails into an existing system is far harder than building them in from the start.
I started working on this problem because I needed it myself. That effort turned into AgentGov—an open‑source project that combines tracing with EU AI Act compliance features (risk classification, documentation generation, incident tracking).
This is my first open‑source project, so I’m figuring things out as I go. If you have feedback on the approach, the code, or anything else, I’d genuinely appreciate it.
If you’re building agents and thinking about compliance (or deliberately not thinking about it), I’d love to hear how you’re approaching it.
