Hermes-Backchannel
Source: Dev.to

My AI agents were talking through Discord bots. I finally fixed it.
I run three Hermes agents on a PC as separate processes. They’re supposed to collaborate, but for days they couldn’t. Below is everything I tried and how each approach failed.
Files on a cron job
Agent A drops a markdown file in a shared folder. Agent B’s cron picks it up 60 seconds later, writes a reply, and Agent A’s cron grabs that 60 seconds after that.
A yes/no question took two minutes. I watched my executor agent sit idle for 58 seconds waiting to receive “use port 8080 not 3000” and decided this approach was dead.
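To make the failure mode concrete, here is a minimal sketch of the file-drop pattern described above. The function names and file layout are my own illustration, not the author's actual scripts; the point is that delivery latency is bounded below by the cron interval, not by the code.

```python
import time
from pathlib import Path

def send(inbox: Path, sender: str, text: str) -> Path:
    """Drop a markdown message file into the shared folder."""
    inbox.mkdir(parents=True, exist_ok=True)
    msg = inbox / f"{sender}-{time.time_ns()}.md"
    msg.write_text(text)
    return msg

def poll(inbox: Path, handled: set) -> list[str]:
    """What the receiving agent's cron job does every 60 seconds:
    pick up any message files it hasn't seen yet."""
    new = [f for f in sorted(inbox.glob("*.md")) if f not in handled]
    handled.update(new)
    return [f.read_text() for f in new]
```

The code itself is trivial; the problem is that `poll` only runs once a minute, so every round trip costs two cron ticks no matter how small the message is.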
HTTP endpoints
Each agent binds a port and exposes a `/receive` route. POST some JSON, get a `200`.
Now I’m managing ports, auth tokens, and serialization overhead for messages that are just strings between processes on the same box. I ran `ss -tlnp` one night, saw those ports, and asked myself: why am I running a web server for the machine to talk to itself?
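For reference, the pattern looks roughly like this with nothing but the standard library. This is my sketch, not the author's code, and I bind port 0 so the OS picks a free port; the real setup repeats this once per agent, each with its own port and token.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # messages delivered to this agent

class ReceiveHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/receive":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        received.append(json.loads(self.rfile.read(length)))
        self.send_response(200)
        self.end_headers()

    def log_message(self, fmt, *args):  # silence per-request logging
        pass

# Port 0 lets the OS assign a port; the real setup pins one per agent.
server = HTTPServer(("127.0.0.1", 0), ReceiveHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
```

A full TCP handshake, HTTP parsing, and JSON round trip, all to hand a string to a process on the same machine.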
Redis
“Just use Redis pub/sub.”
And yes — `PUBLISH agent:executor "hello"` lands instantly. It works perfectly.
But now Redis is a service I run, something to monitor, something in my attack surface. For three processes passing strings to each other on localhost, installing a message broker felt like calling a tow truck to open my glovebox.
Discord bots
This is where I hit bottom. Three agent identities, three Discord bot users, one private server. They typed messages to each other.
It worked. And it was ridiculous.
What I actually built
I wrote down what I actually needed on a sticky note:
- Three processes on the same machine
- Send text
- Know if it was received
- No ports, no services
- Under a millisecond of latency
Unix domain sockets satisfy every requirement. They’re files — `chmod 0600` and only the owning user touches them. The kernel manages everything. No TCP, no ports, no external dependencies.
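The "they're files" claim is easy to demonstrate. The sketch below (socket name and payload are my own illustration) binds a Unix domain datagram socket, locks it down with `chmod 0600`, and passes a string through the kernel with no TCP and no ports involved:

```python
import os
import socket
import stat
import tempfile

path = os.path.join(tempfile.mkdtemp(), "executor.sock")  # name is illustrative

# A Unix domain socket is just a file on disk: bind it, then chmod it
# so only the owning user can talk to it.
receiver = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
receiver.bind(path)
os.chmod(path, 0o600)

# Another process (here simulated by a second socket) sends straight
# through the kernel, addressed by filesystem path.
sender = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
sender.sendto(b"use port 8080 not 3000", path)

msg = receiver.recv(4096)                    # no TCP, no port, no broker
mode = stat.S_IMODE(os.stat(path).st_mode)   # 0o600: owner-only
```

Because the endpoint is a path, all the usual filesystem tools (permissions, ownership, `ls`) apply to it directly.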
So I wrote a thin layer on top: a tiny daemon per agent. Messages are push‑and‑disconnect — no persistent connections. The agent polls its daemon: “any messages for me?” The whole thing is Python, a few hundred lines, one `pip install`.
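I don't have the repo's actual API in front of me, but the push‑and‑disconnect plus polling design can be sketched like this: one daemon object per agent holding a mailbox, senders that connect, write, and hang up, and a `poll` call that drains whatever has arrived. All names here are assumptions.

```python
import os
import queue
import socket
import tempfile
import threading

class AgentDaemon:
    """Hypothetical sketch: one daemon per agent, with a mailbox of
    messages that were pushed and disconnected."""

    def __init__(self, name: str, sock_dir: str):
        self.path = os.path.join(sock_dir, f"{name}.sock")
        self.mailbox = queue.Queue()
        self._server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self._server.bind(self.path)
        os.chmod(self.path, 0o600)          # owner-only, as in the post
        self._server.listen(8)
        threading.Thread(target=self._accept_loop, daemon=True).start()

    def _accept_loop(self):
        while True:
            conn, _ = self._server.accept()
            with conn:                      # sender pushes and disconnects
                chunks = []
                while chunk := conn.recv(4096):
                    chunks.append(chunk)
            self.mailbox.put(b"".join(chunks).decode())

    def poll(self) -> list[str]:
        """'Any messages for me?' Drain the mailbox without blocking."""
        msgs = []
        while not self.mailbox.empty():
            msgs.append(self.mailbox.get())
        return msgs

def push(sock_path: str, text: str) -> None:
    """Sender side: connect, send, disconnect. No persistent connection."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    s.sendall(text.encode())
    s.close()
```

Disconnecting after every message keeps the sender stateless: if the receiving daemon restarts, nothing on the sending side has a stale connection to clean up.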
The actual innovation — the part I’d argue is new — is the session protocol. TCP‑style handshakes: SYN, SYN‑ACK, DATA, FIN, FIN‑ACK. You actually know if your message was delivered, if the other agent accepted the task, and when the conversation ends. Most agent communication today is “throw‑a‑string‑and‑pray,” which felt insufficient for agents that need to coordinate on real work.
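The post doesn't show the wire format, so here is an in-memory sketch of the frame sequence it names, with queues standing in for the socket. The `Frame` type, function names, and payload are all my own illustration; only the SYN / SYN‑ACK / DATA / FIN / FIN‑ACK sequence comes from the post.

```python
import queue
from dataclasses import dataclass

@dataclass
class Frame:
    kind: str          # "SYN" | "SYN-ACK" | "DATA" | "FIN" | "FIN-ACK"
    payload: str = ""

def initiator(out_q, in_q, message):
    """Open a session, deliver one message, close the session."""
    transcript = []
    out_q.put(Frame("SYN"))
    assert in_q.get(timeout=1).kind == "SYN-ACK"   # peer accepted the session
    transcript.append("established")
    out_q.put(Frame("DATA", message))
    out_q.put(Frame("FIN"))
    assert in_q.get(timeout=1).kind == "FIN-ACK"   # peer confirmed the close
    transcript.append("closed")
    return transcript

def responder(in_q, out_q):
    """Accept a session, collect DATA frames until FIN, acknowledge close."""
    assert in_q.get(timeout=1).kind == "SYN"
    out_q.put(Frame("SYN-ACK"))
    received = []
    while True:
        frame = in_q.get(timeout=1)
        if frame.kind == "FIN":
            out_q.put(Frame("FIN-ACK"))
            return received
        received.append(frame.payload)
```

The payoff is that each side has explicit state transitions: the sender knows the session was accepted (SYN‑ACK) and knows it ended cleanly (FIN‑ACK), instead of hoping a fired-off string landed.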
Why this matters
If you’re building multi‑agent systems and your agents talk to each other through files or HTTP, you’re burning latency and complexity on a problem your kernel solved decades ago.
It’s not a framework and it doesn’t orchestrate workflows. It simply delivers messages between agent processes reliably, in under a millisecond, with confirmation. That’s the whole thing.