I built a WASM execution firewall for AI agents — here’s why
Source: Dev.to
What I’m building
Night Core is a console for controlling execution of WebAssembly modules, especially when the code comes from agents, remote systems, or any source that isn’t fully trusted.
Before anything runs, it applies a few basic rules.
The architecture separates a worker that enforces policy from a UI that handles approvals and visibility. It’s built in Rust, with a Tauri shell and a TypeScript UI.
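To make the "rules before anything runs" idea concrete, here's a minimal sketch of a pre-execution policy gate. Everything in it — the `Policy`, `ModuleRequest`, and `Decision` types and the specific rules — is my own illustration of the pattern, not Night Core's actual API:

```rust
// Illustrative sketch only: these types and rules are hypothetical,
// not Night Core's real interface.

#[derive(Debug, PartialEq)]
enum Decision {
    Allow,
    RequireApproval(String), // punt to a human in the UI
    Deny(String),
}

struct Policy {
    max_module_bytes: usize,
    allowed_imports: Vec<String>,
    trusted_sources: Vec<String>,
}

struct ModuleRequest {
    source: String,
    size_bytes: usize,
    imports: Vec<String>,
}

fn check(policy: &Policy, req: &ModuleRequest) -> Decision {
    // Rule 1: hard cap on module size.
    if req.size_bytes > policy.max_module_bytes {
        return Decision::Deny(format!("module too large: {} bytes", req.size_bytes));
    }
    // Rule 2: every host import must be on the allowlist.
    for imp in &req.imports {
        if !policy.allowed_imports.contains(imp) {
            return Decision::Deny(format!("import not allowed: {}", imp));
        }
    }
    // Rule 3: modules from unknown sources fall through to human approval.
    if !policy.trusted_sources.contains(&req.source) {
        return Decision::RequireApproval(format!("unknown source: {}", req.source));
    }
    Decision::Allow
}

fn main() {
    let policy = Policy {
        max_module_bytes: 4 * 1024 * 1024,
        allowed_imports: vec!["wasi_snapshot_preview1.fd_write".to_string()],
        trusted_sources: vec!["local".to_string()],
    };
    let req = ModuleRequest {
        source: "agent-7".to_string(),
        size_bytes: 1024,
        imports: vec!["wasi_snapshot_preview1.fd_write".to_string()],
    };
    // prints: RequireApproval("unknown source: agent-7")
    println!("{:?}", check(&policy, &req));
}
```

The key property is that `check` runs before instantiation: a denied module never touches the runtime, and an unknown source blocks on the approval UI rather than executing and being caught later.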
The code’s open here:
Why this matters (to me)
Most of the agent discussion I see is about whether the output is correct. I’m more interested in what happens when that output becomes an action—especially code execution.
Once something runs, you’re already in response mode. Logs, alerts, and sandboxes are helpful, but they’re all after the fact. That’s what pushed me to treat execution itself as the boundary.
Threat model
The threat model is simple.
Still early, but here’s what I’m wondering
This isn’t finished. It’s a working sketch. I’d like to hear how others are thinking about this, or if you’re seeing the same edge cases.
You can see the full thread and screenshots here:
GitHub: