I Built Cryptographic Audit Trails for AI Agents. Here Is Why.
Source: Dev.to

The Problem No One Is Solving Well
Here is a scenario that is becoming common. You deploy an AI agent that processes customer requests, accesses a database, calls external APIs, and takes actions on behalf of your users. It runs for a week. Then something goes wrong. A customer complains about an unauthorized change. Your team asks the obvious question: what did the agent actually do?
You check the logs. They are text files, maybe JSON lines in a database. They say the agent did X, Y, and Z. But those logs are mutable. Anyone with write access could have modified them. The agent itself could have modified them. There is no cryptographic proof that the log is accurate.
This is the state of agent accountability in 2026. Agents are gaining access to production databases, financial systems, and customer data. The best most teams have for auditing is print() statements and hope. That gap between what agents can do and what we can prove they did is growing fast.
The Pattern Underneath
This is not a new problem. It is a well‑understood one wearing new clothes. Distributed systems solved this class of problem decades ago with append‑only logs, hash chains, and cryptographic signatures. The pattern is simple: make every action produce a record that is mathematically linked to the one before it. If anyone modifies a record in the middle, every subsequent link breaks. Tampering becomes not just difficult, but visible.
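The pattern can be sketched in a few lines of Python using only the standard library. The record layout and function names here are illustrative, not Sigil's actual schema:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first link


def chain_append(chain, payload):
    """Append a record whose hash covers the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = {"payload": payload, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})


def chain_valid(chain):
    """Recompute every link; an in-place edit breaks every later link."""
    prev_hash = GENESIS
    for record in chain:
        if record["prev_hash"] != prev_hash:
            return False
        body = {"payload": record["payload"], "prev_hash": record["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != record["hash"]:
            return False
        prev_hash = digest
    return True


chain = []
chain_append(chain, {"action": "db.query"})
chain_append(chain, {"action": "api.call"})
assert chain_valid(chain)

chain[0]["payload"]["action"] = "something_else"  # tamper with the middle
assert not chain_valid(chain)                     # tampering is visible
```

Modifying any record invalidates its own hash and, transitively, every link that follows it, which is exactly the property that makes tampering visible rather than merely difficult.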
The fact that most agent frameworks do not apply these techniques is not a technology gap. It is an attention gap. The tooling exists. The cryptographic primitives are mature. No one has wired them together for the specific context of AI agent actions.
So I did.
What Sigil Does
Sigil provides tamper‑evident audit trails for AI agents. Every agent action becomes an attestation, a signed and timestamped record that includes a hash of the previous attestation. This creates a hash chain. Each attestation is signed with Ed25519, a fast and well‑studied signature scheme. The signature proves the attestation was created by a specific key at a specific time.
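To make the signing step concrete, here is a minimal sketch using the third-party `cryptography` package. This is an assumption about how such an attestation could be signed, not Sigil's actual implementation, and the field values are placeholders:

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical illustration; Sigil's real signing code may differ.
signing_key = Ed25519PrivateKey.generate()

attestation = {
    "agent_id": "agent-1",
    "action_type": "database.query",
    "payload": {"table": "customers"},
    "prev_hash": "0" * 64,
}
message = json.dumps(attestation, sort_keys=True).encode()
signature = signing_key.sign(message)

# Anyone holding the public key can check the signature;
# verify() raises InvalidSignature if the message was altered.
signing_key.public_key().verify(signature, message)
```

Because the signed message includes `prev_hash`, the signature binds each attestation both to its author's key and to its position in the chain.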
Each agent gets its own independent hash chain. No global bottleneck, no cross‑contamination between agents. The architecture is deliberately simple because trust infrastructure should be easy to reason about.
Sigil ships as an MCP server. If you are using any MCP‑compatible client (Claude Code, OpenHands, or your own), you can add Sigil and start recording attestations immediately.
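For clients that register MCP servers via a JSON config, the entry might look something like the following. The server name, command, and arguments here are guesses for illustration, not Sigil's documented configuration:

```json
{
  "mcpServers": {
    "sigil": {
      "command": "sigil-notary",
      "args": ["serve"]
    }
  }
}
```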
```python
from sigil import SigilClient

client = SigilClient(api_key="sg_...")

# Record an action
receipt = client.attest(
    action_type="database.query",
    payload={"table": "customers", "rows_returned": 142},
)

# Verify the chain is intact
result = client.verify(receipt.id)
assert result.valid and result.chain_valid
```
Each attestation includes:
- agent_id
- action_type
- payload
- timestamp
- prev_hash (SHA‑256 chain link)
- signature (Ed25519)
The chain is append‑only, queryable, and independently verifiable.
Why Open‑Source
The MCP server and Python SDK are MIT‑licensed. You can self‑host the entire stack. This was a deliberate choice, not a growth strategy. Trust infrastructure should be inspectable. If you cannot read the code that generates your audit trail, you have not actually solved the trust problem. You have just moved it.
What Comes Next
Sigil is structured in layers, each building on the one below:
- Notary (available now): Hash‑chained attestations and verification
- Identity (planned): Agent PKI with Ed25519 keypairs
- Delegation (planned): Cryptographic proof of authorization chains
I am also working on integrations with popular agent frameworks for automatic attestation recording. The goal is to make auditability the default, not the exception.
Try It
```shell
pip install sigil-notary
```
GitHub:
PyPI:
I built this for developers shipping agents into production. If that is you, open an issue, start a discussion, or reach out directly.
Sigil means “a seal of authority.” I think AI agents need one.