Your AI agent is a ticking time bomb. Here's how to defuse it.

Published: March 10, 2026 at 04:39 PM EDT
3 min read
Source: Dev.to

What AI agents can actually do

Modern AI coding agents aren’t just writing code. They can run shell commands, read files, make network requests, and write to your filesystem—effectively having the same permissions you do.

  • Read .env files
  • Run rm -rf on anything they have access to
  • curl data to an external server
  • Write to /etc/passwd, .ssh/authorized_keys, or any other sensitive path

These aren’t theoretical threats; they’re tool calls that real agents make during normal operation—often by accident, sometimes because a bad prompt led them there.

The near‑miss that prompted this

I was using OpenClaw to refactor some API routes. Midway through, it read my .env file. It wasn’t malicious—it was probably looking for environment variable names to reference in the code—but it had no business touching credentials. I only discovered it after checking the logs.

That got me thinking: there’s no equivalent of a firewall for AI‑agent tool calls. No way to say “you can write code, but you can’t touch credentials.” No way to enforce that—just vibes and hope.

ClawWall

ClawWall is a policy firewall for AI agents. It intercepts every tool call before it runs and decides to allow, deny, or pause and ask you.

npm install -g clawwall
clawwall start
CLAWWALL_ENABLED=true openclaw

How it works

ClawWall integrates with OpenClaw’s before-tool-call hook. Every action your agent wants to take—write a file, run a command, browse a URL—passes ClawWall’s policy engine first.

Agent → before-tool-call hook → POST /policy/check → ClawWall daemon
                                                          │
                                                     Rule Engine
                                                     ├─ allow (sub-ms)
                                                     ├─ deny  (sub-ms)
                                                     └─ ask → Dashboard
  • ALLOW and DENY decisions are sub‑millisecond, adding essentially zero latency.
  • ASK decisions pause the agent and surface in a dashboard where you click Allow or Deny. The agent waits.
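The flow above amounts to a gate in front of every tool execution. Here is a minimal sketch of that gate, assuming an injected `checkPolicy` function that stands in for the POST /policy/check round trip and an `askHuman` callback that stands in for the dashboard click; none of these names come from ClawWall's actual API.

```typescript
// Illustrative gate in front of tool execution (not ClawWall's code).
// checkPolicy stands in for the POST /policy/check round trip; askHuman
// stands in for a human clicking Allow/Deny in the dashboard.
type Decision = "allow" | "deny" | "ask";
type PolicyCheck = (tool: string, input: string) => Promise<Decision>;
type HumanReview = (tool: string, input: string) => Promise<"allow" | "deny">;

async function guardedRun(
  tool: string,
  input: string,
  execute: () => Promise<string>,
  checkPolicy: PolicyCheck,
  askHuman: HumanReview,
): Promise<string> {
  let decision = await checkPolicy(tool, input);
  // ASK pauses here until a human resolves it one way or the other.
  if (decision === "ask") decision = await askHuman(tool, input);
  if (decision === "deny") {
    throw new Error(`blocked by policy: ${tool} ${input}`);
  }
  return execute();
}
```

The key property is that `execute()` is only ever reached after an explicit allow; a deny surfaces as an error the agent framework can report rather than a silent side effect.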

Six rules, active by default

No configuration is needed; these rules are active as soon as ClawWall is installed.

Rule               Decision  What it catches
dangerous_command  DENY      rm -rf, mkfs, shutdown, dd
credential_read    DENY      .env, .aws/credentials, id_rsa
exfiltration       DENY      curl -d, wget --post, nc -e
sensitive_write    DENY      .env, .ssh/, /etc/passwd
outside_workspace  DENY      Paths outside your project directory
internal_network   ASK       localhost, 127.x, 192.168.x

The hard‑block rules have no override; the agent cannot bypass them, regardless of the prompt.
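One way to picture the rule engine is a first-match-wins list of patterns over the tool call's command or path. The sketch below is illustrative only: the rule names mirror the table above, but the regexes are my own approximations, not ClawWall's actual matchers, and outside_workspace is omitted because it needs path resolution rather than pattern matching.

```typescript
// Illustrative first-match-wins rule list (patterns are approximations,
// not ClawWall's real matchers; outside_workspace needs path resolution
// and is omitted here).
type Decision = "allow" | "deny" | "ask";

interface Rule {
  name: string;
  decision: Decision;
  pattern: RegExp;
}

const rules: Rule[] = [
  { name: "dangerous_command", decision: "deny", pattern: /\brm\s+-rf\b|\bmkfs\b|\bshutdown\b|\bdd\b/ },
  { name: "credential_read",   decision: "deny", pattern: /\.env\b|\.aws\/credentials|id_rsa/ },
  { name: "exfiltration",      decision: "deny", pattern: /curl\s+.*-d\b|wget\s+.*--post\b|nc\s+.*-e\b/ },
  { name: "sensitive_write",   decision: "deny", pattern: /\.ssh\/|\/etc\/passwd/ },
  { name: "internal_network",  decision: "ask",  pattern: /localhost|127\.\d+|192\.168\./ },
];

// First matching rule wins; no match means allow.
function evaluate(input: string): { decision: Decision; rule?: string } {
  for (const r of rules) {
    if (r.pattern.test(input)) return { decision: r.decision, rule: r.name };
  }
  return { decision: "allow" };
}
```

Because matching is a handful of regex tests over a short string, the sub-millisecond allow/deny latency claimed above is plausible: there is no model call or network hop on the hot path.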

What the dashboard looks like

ALLOW   847
DENY     12
ASK       3

LIVE FEED
09:41:03  ✓  write  src/api/routes.ts   allow
09:41:05  ✗  read   .env                deny  credential_read
09:41:07  ✓  exec   npm test            allow
09:41:09  ✗  exec   rm -rf /tmp/build   deny  dangerous_command
09:41:11  ?  browse localhost:5173       ask   internal_network
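An ASK entry like the last line above means the agent is parked until someone clicks Allow or Deny. One way to model that pause (hypothetical, not ClawWall's internals) is a pending promise keyed by request id that the dashboard resolves:

```typescript
// Hypothetical model of the ASK flow: the policy engine parks the tool
// call as a pending promise; a dashboard click resolves it and the agent
// resumes with the human's verdict.
type Verdict = "allow" | "deny";

class PendingAsks {
  private waiting = new Map<string, (v: Verdict) => void>();

  // Agent side: returns a promise that stays pending until reviewed.
  ask(id: string): Promise<Verdict> {
    return new Promise((resolve) => this.waiting.set(id, resolve));
  }

  // Dashboard side: called when a human clicks Allow or Deny.
  resolve(id: string, verdict: Verdict): void {
    const done = this.waiting.get(id);
    if (done) {
      this.waiting.delete(id);
      done(verdict);
    }
  }
}
```

The agent awaits `ask(id)` and simply blocks; there is nothing for it to retry or time out on its own, which is what "the agent waits" means in practice.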

Why not just trust the agent?

Modern models are pretty good, but “generally careful” isn’t a security posture.

  • Prompt injection: A malicious string in a file your agent reads could redirect its behavior.
  • Model drift: The model that’s careful today might behave differently after a version update.
  • Edge cases: Agents can do unexpected things in long, complex sessions.
  • Least privilege: You wouldn’t give a new employee root access just because they seem trustworthy.

The point isn’t that AI agents are malicious; it’s that they’re powerful and operate at machine speed. Without a firewall, you’re betting that none of their tool calls are wrong.

Get started

npm install -g clawwall
clawwall start
CLAWWALL_ENABLED=true openclaw

Or with curl:

curl -fsSL https://clawwall.dev/install.sh | bash

What’s the sketchiest thing you’ve seen an AI agent try to do? Drop it in the comments.

clawwall.dev
