Heartbeats in OpenClaw: Cheap Checks First, Models Only When You Need Them
I used to think “heartbeat” meant “run the assistant every X minutes.” It doesn’t.
In OpenClaw, a heartbeat is just a regular pulse where your agent checks a short checklist and decides one of two things:
- Nothing important changed → reply HEARTBEAT_OK
- Something needs attention → send a short alert (and maybe do deeper work)
That sounds simple, but there’s a trap: if you throw an LLM at every heartbeat, you end up paying for a whole lot of “nothing happened.”
The approach I’m using now:
- Rule‑based checks first (fast, deterministic, basically free)
- Call a model only when there’s actual signal (summaries, decisions, or messy human context)
The core idea: a heartbeat is a gate, not a workflow
Think of a heartbeat as a gatekeeper. A good heartbeat answers questions like:
- Did anything break? (CI failing, errors, deploy alarms)
- Did anything change? (new PR, new task queued, new email from a customer)
- Is anything time‑sensitive? (calendar event in <2 hours, expiring cert)
If the answer is “no” to all of those, the best output is literally a single line:
HEARTBEAT_OK
Anything more is noise.
You usually don’t need a model for that
Most heartbeat logic is not “reasoning.” It’s just checking state. Examples that don’t need an LLM:
- Is the repo dirty?
- Are there open PRs?
- Did the agent queue grow?
- Did a job fail?
- Did Slack/WhatsApp disconnect?
For this, a shell script + a few API calls is perfect.
```bash
#!/usr/bin/env bash
# cheap-checks.sh – prints HEARTBEAT_OK or a list of HEARTBEAT_ALERT lines
# Example checks (replace with real logic)
alerts=()

# Uncommitted changes in the working tree?
if [ -n "$(git status --porcelain)" ]; then
  alerts+=("HEARTBEAT_ALERT: repository has uncommitted changes")
fi

# CI status endpoint reports a failure?
if curl -s https://api.example.com/ci/status | grep -q "failed"; then
  alerts+=("HEARTBEAT_ALERT: CI pipeline failed")
fi

if [ "${#alerts[@]}" -eq 0 ]; then
  echo "HEARTBEAT_OK"
else
  printf '%s\n' "${alerts[@]}"
fi
```
A practical pattern: cheap mode first
Run a lightweight script first. It outputs either:
- HEARTBEAT_OK, or
- HEARTBEAT_ALERT + a bullet list of what changed
Only if it prints an alert do you involve a model. That gives you the best of both worlds:
- $0 heartbeats most of the time
- Still get a clean human summary when something real happens
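Wired together, the whole heartbeat can be a sketch like the one below. It assumes the cheap-checks.sh script from above; summarise-alert.sh is a hypothetical escalation step (one possible version appears in the next section).

```bash
#!/usr/bin/env bash
# heartbeat.sh – cheap checks first, model only on real signal.
# Assumes cheap-checks.sh (above); summarise-alert.sh is a placeholder
# for whatever model call you use.

result="$(./cheap-checks.sh)"

if [ "$result" = "HEARTBEAT_OK" ]; then
  # Nothing changed: no model call, no cost.
  echo "HEARTBEAT_OK"
  exit 0
fi

# Something changed: hand the alert list to a model for a short summary.
echo "$result" | ./summarise-alert.sh
```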
When should you involve a model?
A model is worth it when the output benefits from language understanding:
- Summarising multiple alerts into one message
- Deciding what to do first when several things changed at once
- Writing a “brief” that a human actually wants to read
- Turning raw logs into a short action plan
In my setup, I keep it simple:
- Run cheap checks.
- If an alert is produced → use a small model (e.g. Claude Haiku) to summarise and recommend next actions.
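Here's a minimal sketch of that escalation step as a shell script calling the Anthropic Messages API with curl and jq. The model name, the prompt, and the assumption that ANTHROPIC_API_KEY is set are placeholders to adapt to your own setup; check the current API docs before relying on them.

```bash
#!/usr/bin/env bash
# summarise-alert.sh – read HEARTBEAT_ALERT lines on stdin, ask a small
# model for a short summary plus recommended next actions.
# Assumes ANTHROPIC_API_KEY is set and jq is installed; the model name is an example.

alerts="$(cat)"

curl -s https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d "$(jq -n --arg alerts "$alerts" '{
        model: "claude-3-5-haiku-latest",
        max_tokens: 300,
        messages: [{
          role: "user",
          content: ("Summarise these heartbeat alerts in a few bullets and suggest next actions:\n" + $alerts)
        }]
      }')" | jq -r '.content[0].text'
```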
Tuning heartbeat frequency: faster isn’t always better
Heartbeat schedules are like monitoring: you want checks frequent enough to catch the important stuff, but not so frequent that they turn into spam (or cost creep).
Short intervals (e.g. every 5 minutes)
Good for
- High‑velocity shipping (lots of PRs / CI runs)
- Active incident response
- Anything with a “respond now” requirement
Bad for
- Cost, if an LLM runs every time
- Notification fatigue (“another alert… for nothing”)
- Wasted compute / API calls
Longer intervals (e.g. every 30–60 minutes)
Good for
- Most solo‑founder work
- “Keep an eye on it” workflows
- Staying informed without interruptions
Bad for
- Hearing about failures promptly
- Tasks that need to feel "real-time"
A simple rule of thumb
- Actively shipping: every 5–15 minutes
- Build mode but stable: every 30 minutes
- Just keeping watch: every 60–120 minutes
If you need something at an exact time (“remind me at 9 am sharp”), use a scheduled job (cron) instead of heartbeats.
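For example, a crontab along these lines covers both cases (the paths and the reminder command are placeholders):

```
# Heartbeat every 30 minutes – "build mode but stable"
*/30 * * * * /home/me/agent/heartbeat.sh >> /home/me/agent/heartbeat.log 2>&1

# Exact-time job – belongs in cron, not in a heartbeat
0 9 * * * /home/me/agent/send-reminder.sh "Stand-up at 9 am"
```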
The takeaway
If you’re using OpenClaw (or any agent system):
- Make heartbeats cheap and deterministic.
- Treat the model as an escalation layer, not the default.
- Tune frequency so you get signal, not noise.
To copy the pattern, start with a tiny script that prints either HEARTBEAT_OK or a short HEARTBEAT_ALERT list — and only summarise the alert with a model when you need to.