What Happens When You Give an AI Agent Root Access?

Published: January 16, 2026 at 03:33 PM EST
3 min read
Source: Dev.to

Why I built Cordet

I’m obsessed with AI agents – not chatbots, but agents that actually do things:

  • Merge pull requests
  • Deploy to Kubernetes
  • Update database records
  • Send Slack messages on your behalf

The technology is ready, but every time I tried to ship one to production the same thing happened:

Security said no.

And honestly? They were right.

Giving an AI the ability to write to production systems without an audit trail, approval workflow, or enforceable policies is like giving an intern root access and hoping for the best. Teams get stuck in what I call “PoC Purgatory” – impressive demos that never ship because there’s no governance story.

What if every AI action had to pass through a policy check before it executed?
That’s the core idea behind Cordet.

Architecture

┌──────────┐     ┌───────────────┐     ┌──────────┐
│ AI Agent │ --> │ Safety Kernel │ --> │  Action  │
└──────────┘     └───────┬───────┘     └──────────┘
                         │
                 ┌───────┴───────┐
                 │    Policy     │
                 │   (as code)   │
                 └───────────────┘

Before any job executes, the Safety Kernel evaluates your policy and returns one of:

  • Allow – proceed normally
  • Deny – block with reason
  • Require Approval – human in the loop
  • Throttle – rate limit
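
As a sketch, the four outcomes map naturally onto a small Go type. The names below are illustrative, not Cordet's actual API:

```go
package main

import "fmt"

// Decision is one possible outcome of a Safety Kernel policy check.
// These names are illustrative, not Cordet's actual API.
type Decision string

const (
	Allow           Decision = "allow"
	Deny            Decision = "deny"
	RequireApproval Decision = "require_approval"
	Throttle        Decision = "throttle"
)

// Blocks reports whether the job must stop (or pause) instead of
// executing immediately.
func (d Decision) Blocks() bool {
	return d == Deny || d == RequireApproval || d == Throttle
}

func main() {
	for _, d := range []Decision{Allow, Deny, RequireApproval, Throttle} {
		fmt.Printf("%s: blocks=%v\n", d, d.Blocks())
	}
}
```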

Example Policy (policy.yaml)

rules:
  - id: require-approval-for-prod
    match:
      risk_tags: [prod, write]
    decision: require_approval
    reason: "Production writes need human approval"

  - id: block-destructive
    match:
      capabilities: [delete, drop, destroy]
    decision: deny
    reason: "Destructive operations not allowed"

  - id: allow-read-only
    match:
      risk_tags: [read]
    decision: allow

When an agent tries something dangerous, Cordet intervenes:

{
  "job_id": "job_abc123",
  "decision": "require_approval",
  "reason": "Production writes need human approval",
  "matched_rule": "require-approval-for-prod"
}

The job waits in the dashboard until a human approves it – full audit trail, compliance‑happy.
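
On the agent side, handling that response is just decoding the JSON and branching on the decision. A minimal sketch (the waiting step is only stubbed; field names follow the JSON above):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// KernelDecision mirrors the JSON the Safety Kernel returns.
type KernelDecision struct {
	JobID       string `json:"job_id"`
	Decision    string `json:"decision"`
	Reason      string `json:"reason"`
	MatchedRule string `json:"matched_rule"`
}

// HandleDecision shows how an agent-side wrapper might branch on the
// kernel's answer. In the real system a require_approval job would now
// wait in the dashboard; here we just report what would happen.
func HandleDecision(raw []byte) (string, error) {
	var d KernelDecision
	if err := json.Unmarshal(raw, &d); err != nil {
		return "", err
	}
	switch d.Decision {
	case "allow":
		return "execute " + d.JobID, nil
	case "require_approval":
		return "wait for approval: " + d.Reason, nil
	default: // deny, throttle
		return "blocked by " + d.MatchedRule + ": " + d.Reason, nil
	}
}

func main() {
	raw := []byte(`{"job_id":"job_abc123","decision":"require_approval",
		"reason":"Production writes need human approval",
		"matched_rule":"require-approval-for-prod"}`)
	out, _ := HandleDecision(raw)
	fmt.Println(out) // wait for approval: Production writes need human approval
}
```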

Control Plane (not an agent framework)

Cordet orchestrates and governs agents; it does not replace LangChain, CrewAI, etc.

┌─────────────────────────────────────────────────────────┐
│                Cordet Control Plane                     │
├─────────────────────────────────────────────────────────┤
│  ┌───────────┐  ┌───────────────┐  ┌─────────────────┐  │
│  │ Scheduler │  │ Safety Kernel │  │ Workflow Engine │  │
│  └───────────┘  └───────────────┘  └─────────────────┘  │
├─────────────────────────────────────────────────────────┤
│  ┌───────────────┐  ┌───────────────────────────────┐   │
│  │  NATS Bus     │  │  Redis (State)                │   │
│  └───────────────┘  └───────────────────────────────┘   │
└─────────────────────────────────────────────────────────┘
         │                    │                    │
    ┌────┴────┐          ┌────┴────┐          ┌───┴────┐
    │ Worker  │          │ Worker  │          │ Worker │
    │ (Slack) │          │ (GitHub)│          │ (K8s)  │
    └─────────┘          └─────────┘          └────────┘

Tech Stack

Component            Technology
Core control plane   Go (~15K lines)
Message bus          NATS JetStream (at-least-once delivery)
State store          Redis
Dashboard            React (real-time updates)

The Bigger Picture

I originally saw governance as a “necessary evil” for compliance. Now I view it as a feature. When you can prove every AI action was evaluated against policy and logged, you unlock use cases that were previously impossible:

  • Banks can put AI agents in front of regulated systems.
  • Healthcare teams can adopt agents with an audit trail to point to.

The “permission to write” becomes a competitive advantage.

Open‑Source, Not SaaS

I could have built this as a closed SaaS from day one, but open‑sourcing it:

  • Builds trust (you can inspect the code)
  • Encourages community contributions
  • Enables anyone to run the control plane in‑house

Cordet – Open‑Source AI Agent Control Plane

  • Self‑hosting ready – enterprises love the ability to run it on‑premise.
  • Community funnel – encourages contributions and ecosystem growth.
  • Open‑core business model
    • Self‑hosted version: free forever.
    • Cloud/enterprise features: paid plans.

Roadmap

  • Helm chart for Kubernetes deployment.
  • Cordet Cloud – a fully managed SaaS offering.
  • Visual workflow editor integrated into the dashboard.
  • Additional integration packs (AWS, GCP, PagerDuty, etc.).

Links

  • 🌐 Website:
  • 📦 GitHub:
  • 📋 Protocol (CAP):
  • 📚 Documentation:

If you’re building AI agents and need built‑in governance, give Cordet a try.
Star the repo if you find it useful!

I’d love your feedback – what’s missing? What would make this more useful for your projects?

Thanks for reading! I’m happy to answer any questions in the comments.
