I built a WASM execution firewall for AI agents — here’s why

Published: January 10, 2026 at 04:17 PM EST
1 min read
Source: Dev.to


What I’m building

Night Core is a console for controlling the execution of WebAssembly modules, especially when the code comes from agents, remote systems, or any source that isn't fully trusted.
Before anything runs, it applies a few basic policy rules.
The architecture separates a worker that enforces policy from a UI that handles approvals and visibility. It's built in Rust, with a Tauri frontend and TypeScript for the UI.
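To make the worker/policy split concrete, here's a minimal sketch in Rust of what a pre-execution gate could look like. All of the names (`Policy`, `ModuleRequest`, `Verdict`) and the specific rules (size limit, import allowlist, operator approval) are illustrative assumptions, not Night Core's actual API. The point is the shape: the worker evaluates a request against policy and refuses to hand anything to the runtime until every rule passes.

```rust
// Hypothetical pre-execution policy gate (illustrative, not Night
// Core's real types). The worker runs this check before the WASM
// runtime ever sees the module bytes.

#[derive(Debug, PartialEq)]
enum Verdict {
    Allow,
    Deny(&'static str),
}

struct Policy {
    max_module_bytes: usize,
    allowed_imports: Vec<&'static str>,
    require_approval: bool,
}

struct ModuleRequest<'a> {
    wasm_bytes: &'a [u8],
    declared_imports: Vec<&'static str>,
    approved: bool, // set by an operator through the UI
}

impl Policy {
    fn check(&self, req: &ModuleRequest) -> Verdict {
        // Rule 1: refuse oversized modules outright.
        if req.wasm_bytes.len() > self.max_module_bytes {
            return Verdict::Deny("module exceeds size limit");
        }
        // Rule 2: every host import must be on the allowlist.
        if req
            .declared_imports
            .iter()
            .any(|i| !self.allowed_imports.contains(i))
        {
            return Verdict::Deny("module requests a host import outside the allowlist");
        }
        // Rule 3: untrusted code waits for a human approval.
        if self.require_approval && !req.approved {
            return Verdict::Deny("awaiting operator approval in the UI");
        }
        Verdict::Allow
    }
}

fn main() {
    let policy = Policy {
        max_module_bytes: 1 << 20, // 1 MiB cap
        allowed_imports: vec!["log"],
        require_approval: true,
    };
    let req = ModuleRequest {
        wasm_bytes: &[0x00, 0x61, 0x73, 0x6d], // wasm magic header only
        declared_imports: vec!["log", "fs_open"],
        approved: false,
    };
    // Denied: "fs_open" isn't allowlisted, so the module is never
    // instantiated. Execution, not output, is the boundary.
    println!("{:?}", policy.check(&req));
}
```

Because the verdict is computed before instantiation, a deny means the runtime never touches the module at all, which is the "execution as the boundary" idea rather than sandbox-then-observe.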

The code’s open here:

Why this matters (to me)

Most of the agent discussion I see is about whether the output is correct. I'm more interested in what happens when that output becomes an action, especially code execution.
Once something runs, you're already in response mode. Logs, alerts, and sandboxes are helpful, but they all kick in after the fact. That's what pushed me to treat execution itself as the boundary.

Threat model

The threat model is simple.

Still early, but here’s what I’m wondering

This isn’t finished. It’s a working sketch. I’d like to hear how others are thinking about this, or if you’re seeing the same edge cases.

You can see the full thread and screenshots on GitHub:

