The Pre-Execution Check: The One Habit That Makes AI Agents Safe to Run Unsupervised
Source: Dev.to
Why a Pre‑Execution Check Is Essential
Most people trust their AI agents by default, but that approach is backwards.
The agents that inspire confidence aren’t necessarily those with the most powerful models; they are the ones that perform a pre‑execution check before taking any significant action.
A quick self‑audit should answer these questions:
- Is this task within my defined scope?
- Do I have the information I need to do this correctly?
- Could this action cause harm that can’t be undone?
- Is there anything in `outbox.json` I should know about first?
If any answer is uncertain, the agent stops and escalates instead of guessing. Without this gate, agents operate at the edge of their competence, unaware of their limitations. The model won’t flag “I’m not sure”; it will simply produce its best guess, which in edge cases can be confidently wrong.
What a Pre‑Execution Check Looks Like
Add the following checklist to your SOUL.md (or equivalent configuration file) and run it before any task that modifies external state or sends communications:
- Confirm task is within defined scope
- Confirm required context is loaded and current
- Check `outbox.json` for pending escalations
- If any check fails: write reasoning to `outbox.json` and stop
This adds roughly 2 seconds per major action but prevents compounding failures.
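The checklist above can be sketched as a small gate function. The file name `outbox.json` comes from the article, but `escalate`, `pre_execution_check`, and the JSON entry structure are illustrative assumptions, not part of any specific agent framework:

```python
import json
from pathlib import Path

# Assumed location of the escalation file named in the checklist.
OUTBOX = Path("outbox.json")

def escalate(reason: str) -> None:
    """Append the agent's reasoning to outbox.json instead of guessing."""
    entries = json.loads(OUTBOX.read_text()) if OUTBOX.exists() else []
    entries.append({"type": "escalation", "reason": reason})
    OUTBOX.write_text(json.dumps(entries, indent=2))

def pre_execution_check(task: str, allowed_scope: set[str],
                        context_loaded: bool) -> bool:
    """Return True only if every check passes; otherwise escalate and stop."""
    if task not in allowed_scope:
        escalate(f"Task {task!r} is outside defined scope")
        return False
    if not context_loaded:
        escalate(f"Required context for {task!r} is missing or stale")
        return False
    # Pending escalations block new work until a human clears them.
    if OUTBOX.exists() and json.loads(OUTBOX.read_text()):
        escalate(f"Pending items in outbox.json; halting before {task!r}")
        return False
    return True
```

Calling this gate before every state-modifying action is the whole habit: a failed check writes its reasoning to `outbox.json` and returns False, and the agent does nothing further until a human intervenes.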
Real‑World Example
One of our agents sends a weekly newsletter. Without the pre‑execution check, it once dispatched a draft that still contained a placeholder ([INSERT_STAT_HERE]). The task was simply “send newsletter,” and the agent saw a file that looked like a newsletter, so it proceeded.
With the check in place, the agent would have detected the placeholder, written an escalation to outbox.json, and halted the send operation.
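A check like that can be as simple as a placeholder scan over the draft before sending. This is a minimal sketch, assuming placeholders follow the bracketed ALL_CAPS style of the example above; the function name and pattern are illustrative:

```python
import re

# Assumed placeholder convention, matching tokens like [INSERT_STAT_HERE].
PLACEHOLDER = re.compile(r"\[[A-Z0-9_]+\]")

def find_placeholders(draft: str) -> list[str]:
    """Return any unresolved placeholder tokens left in the draft."""
    return PLACEHOLDER.findall(draft)

draft = "This week we grew [INSERT_STAT_HERE] versus last month."
leftovers = find_placeholders(draft)
if leftovers:
    # At this point the agent would write an escalation to outbox.json
    # and halt the send instead of dispatching the draft.
    print(f"Halting send; unresolved placeholders: {leftovers}")
```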
One check. One habit. It prevents the category of failure that’s hardest to recover from.
Building a Reliability Stack
Pre‑execution checks work best when combined with additional safety layers:
- Pre‑execution check – before acting
- Escalation rule – when uncertain, write to `outbox.json`
- Circuit breaker – after N failures, stop and surface the issue
- Dead‑letter queue – log what failed and why
Each layer catches what the previous one misses, creating a robust safety net.
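The circuit-breaker and dead-letter layers can be sketched together in a few lines. The failure threshold and the shape of the dead-letter entries are illustrative assumptions:

```python
class CircuitBreaker:
    """Stop retrying after N failures and keep a log of what failed and why."""

    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0
        # Dead-letter queue: one entry per failed action, with the reason.
        self.dead_letters: list[dict] = []

    def run(self, action, *args):
        if self.failures >= self.max_failures:
            # Circuit is open: surface the issue instead of retrying forever.
            raise RuntimeError("Circuit open: repeated failures; escalate to a human")
        try:
            result = action(*args)
            self.failures = 0  # a success resets the counter
            return result
        except Exception as exc:
            self.failures += 1
            self.dead_letters.append({"action": action.__name__, "error": str(exc)})
            raise
```

Wrapping each agent action in `run` means a flaky step fails loudly after a few attempts, and the dead-letter queue preserves exactly what went wrong for later review.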
Further Resources
The full set of patterns—including templates for each layer—is available in the Ask Patrick Library:
Ask Patrick publishes battle‑tested AI agent configurations and patterns updated nightly. The library serves as a shortcut to building agents you can actually trust.