I’m building a deterministic policy firewall for AI systems — looking for technical feedback

Published: December 25, 2025 at 12:58 AM EST
1 min read
Source: Dev.to

Overview

I’ve been working on a small but opinionated system and would love technical feedback from people who have dealt with AI in regulated or high‑risk environments.

Core Idea

  • AI systems can propose actions.
  • Something else must decide whether those actions are allowed to execute.

The project is not about perfectly “understanding intent.” Intent normalization is deliberately lossy (regex / LLM / upstream systems).
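To make the "deliberately lossy" point concrete, here is a minimal sketch of a regex-based normalizer (patterns and names are illustrative, not from the repo): anything the patterns don't recognize is passed through flagged as ambiguous, so the downstream policy layer can fail closed rather than guess.

```python
import re

# Illustrative, deliberately lossy intent normalizer. A couple of regexes map
# free text to a structured action; everything else is marked ambiguous.
PATTERNS = [
    (re.compile(r"approve .*loan .*\$?([\d,]+)", re.I), "approve_loan"),
    (re.compile(r"refund .*order (\w+)", re.I), "refund_order"),
]

def normalize(text: str) -> dict:
    for pattern, kind in PATTERNS:
        match = pattern.search(text)
        if match:
            return {"kind": kind, "args": match.groups(), "ambiguous": False}
    # Lossy by design: unrecognized input is not rejected here, only flagged,
    # so the policy layer (not the normalizer) makes the safety decision.
    return {"kind": "unknown", "args": (), "ambiguous": True}
```

In a real system this slot could be filled by an LLM or an upstream system instead; the contract is only that ambiguity is surfaced, never silently resolved.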

The invariant is a deterministic policy layer that:

  • blocks unsafe or illegal execution
  • fails closed when inputs are ambiguous
  • produces a tamper‑evident audit trail

Think of it as an execution firewall or control plane for AI agents.

Tested Scenarios

  • Fintech – loan approvals, AML‑style constraints
  • Healthtech – prescription safety, controlled substances, pregnancy
  • Legal – M&A, antitrust thresholds
  • Other – insurance, e‑commerce, government scenarios, including unstructured natural‑language inputs

This is early‑stage and intentionally conservative. False positives are escalated; false negatives are unacceptable.

Repository

Intent‑Engine‑Api on GitHub

Feedback Requested

I’m not looking for product feedback; I’m looking for architectural criticism:

  • Where does this break down?
  • What would you challenge if you were deploying this?
  • What’s missing at the execution boundary?

Happy to clarify assumptions.
