What Happens When You Give an AI Agent Root Access?

Published: January 16, 2026 at 03:33 PM EST
3 min read
Source: Dev.to

Why I built Cordet

I’m obsessed with AI agents – not chatbots, but agents that actually do things:

  • Merge pull requests
  • Deploy to Kubernetes
  • Update database records
  • Send Slack messages on your behalf

The technology is ready, but every time I tried to ship one to production the same thing happened:

Security said no.

And honestly? They were right.

Giving an AI the ability to write to production systems without an audit trail, approval workflow, or enforceable policies is like giving an intern root access and hoping for the best. Teams get stuck in what I call “PoC Purgatory” – impressive demos that never ship because there’s no governance story.

What if every AI action had to pass through a policy check before it executed?
That’s the core idea behind Cordet.

Architecture

┌─────────────┐     ┌───────────────┐     ┌─────────────┐
│   AI Agent  │ --> │ Safety Kernel │ --> │   Action    │
└─────────────┘     └───────┬───────┘     └─────────────┘
                            │
                    ┌───────┴───────┐
                    │    Policy     │
                    │   (as code)   │
                    └───────────────┘

Before any job executes, the Safety Kernel evaluates your policy and returns one of:

  • Allow – proceed normally
  • Deny – block with reason
  • Require Approval – human in the loop
  • Throttle – rate limit

Example Policy (policy.yaml)

rules:
  - id: require-approval-for-prod
    match:
      risk_tags: [prod, write]
    decision: require_approval
    reason: "Production writes need human approval"

  - id: block-destructive
    match:
      capabilities: [delete, drop, destroy]
    decision: deny
    reason: "Destructive operations not allowed"

  - id: allow-read-only
    match:
      risk_tags: [read]
    decision: allow

When an agent tries something dangerous, Cordet intervenes:

{
  "job_id": "job_abc123",
  "decision": "require_approval",
  "reason": "Production writes need human approval",
  "matched_rule": "require-approval-for-prod"
}

The job waits in the dashboard until a human approves it – full audit trail, compliance‑happy.

Control Plane (not an agent framework)

Cordet orchestrates and governs agents; it does not replace LangChain, CrewAI, etc.

┌─────────────────────────────────────────────────────────┐
│                  Cordet Control Plane                   │
├─────────────────────────────────────────────────────────┤
│  ┌───────────┐  ┌───────────────┐  ┌─────────────────┐  │
│  │ Scheduler │  │ Safety Kernel │  │ Workflow Engine │  │
│  └───────────┘  └───────────────┘  └─────────────────┘  │
├─────────────────────────────────────────────────────────┤
│  ┌───────────────┐  ┌───────────────────────────────┐   │
│  │   NATS Bus    │  │         Redis (State)         │   │
│  └───────────────┘  └───────────────────────────────┘   │
└─────────────────────────────────────────────────────────┘
         │                    │                    │
    ┌────┴────┐          ┌────┴────┐          ┌───┴────┐
    │ Worker  │          │ Worker  │          │ Worker │
    │ (Slack) │          │ (GitHub)│          │ (K8s)  │
    └─────────┘          └─────────┘          └────────┘

Tech Stack

Component            Technology
Core control plane   Go (~15 K lines)
Message bus          NATS JetStream (at-least-once delivery)
State store          Redis
Dashboard            React (real-time updates)
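
JetStream's at-least-once delivery means a worker can see the same job twice (for example, on redelivery after a missed ack), so job handlers need to be idempotent. A server-free sketch of the idea, deduplicating on job ID – in Cordet's actual stack this state would presumably live in Redis rather than an in-memory map:

```go
package main

import "fmt"

// Dedup tracks job IDs that have already been processed.
// A map stands in for shared state so the example runs
// without a Redis or NATS server.
type Dedup struct{ seen map[string]bool }

func NewDedup() *Dedup { return &Dedup{seen: map[string]bool{}} }

// Handle runs work for a job exactly once, even if the bus
// delivers the same job ID multiple times. It returns true
// when the work actually ran.
func (d *Dedup) Handle(jobID string, work func()) bool {
	if d.seen[jobID] {
		return false // duplicate delivery: ack and skip
	}
	d.seen[jobID] = true
	work()
	return true
}

func main() {
	d := NewDedup()
	runs := 0
	for i := 0; i < 3; i++ { // simulate redelivery of the same job
		d.Handle("job_abc123", func() { runs++ })
	}
	fmt.Println("runs:", runs) // runs: 1
}
```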

The Bigger Picture

I originally saw governance as a “necessary evil” for compliance. Now I view it as a feature. When you can prove every AI action was evaluated against policy and logged, you unlock use cases that were previously impossible:

  • Banks can safely use AI agents.
  • Healthcare can adopt AI agents with confidence.

The “permission to write” becomes a competitive advantage.

Open‑Source, Not SaaS

I could have built this as a closed SaaS from day one, but open-sourcing it:

  • Builds trust (you can inspect the code)
  • Encourages community contributions
  • Enables anyone to run the control plane in‑house

Cordet – Open-Source AI Agent Control Plane

  • Self‑hosting ready – enterprises love the ability to run it on‑premise.
  • Community funnel – encourages contributions and ecosystem growth.
  • Open‑core business model
    • Self‑hosted version: free forever.
    • Cloud/enterprise features: paid plans.

Roadmap

  • Helm chart for Kubernetes deployment.
  • Cordet Cloud – a fully managed SaaS offering.
  • Visual workflow editor integrated into the dashboard.
  • Additional integration packs (AWS, GCP, PagerDuty, etc.).
  • 🌐 Website:
  • 📦 GitHub:
  • 📋 Protocol (CAP):
  • 📚 Documentation:

If you’re building AI agents and need built‑in governance, give Cordet a try.
Star the repo if you find it useful!

I’d love your feedback – what’s missing? What would make this more useful for your projects?

Thanks for reading! I’m happy to answer any questions in the comments.
