SHOW HN: A usage circuit breaker for Cloudflare Workers

Published: March 10, 2026 at 09:09 AM EDT
3 min read

Source: Hacker News

Background

I run 3mins.news (https://3mins.news), an AI news aggregator built entirely on Cloudflare Workers. The backend has 10+ cron triggers running every few minutes — RSS fetching, article clustering, LLM calls, email delivery.

Problem

Workers Paid Plan has hard monthly limits (10M requests, 1M KV writes, 1M queue ops, etc.). There’s no built‑in “pause when you hit the limit” — Cloudflare just starts billing overages. KV writes cost $5/M over the cap, so a retry‑loop bug can become expensive quickly.

AWS provides Budget Alerts, but those are passive notifications — by the time you read the email, the damage is already done. I needed active, application‑level self‑protection.

Solution: An Inward‑Facing Circuit Breaker

I built a circuit breaker that faces inward — instead of protecting against downstream failures (the Hystrix pattern), it monitors my own resource consumption and gracefully degrades before hitting the ceiling.

Key Design Decisions

  • Per‑resource thresholds

    • Workers Requests ($0.30/M overage) only triggers a warning at 80%.
    • KV Writes ($5/M overage) can trip the breaker at 90%.
    • Not all resources are equally dangerous, so the cheaper ones are configured as warn‑only (trip = null).
  • Hysteresis
    Trips at 90%, recovers at 85%. The 5% gap prevents oscillation — without it the system would flap between tripped and recovered each check cycle.

  • Fail‑safe on monitoring failure
    If the Cloudflare usage API is down, maintain the last known state rather than assuming “everything is fine.” A monitoring outage shouldn’t mask a usage spike.

  • Alert deduplication
    Per‑resource, per‑month. Without it you’d receive ~8,600 identical emails for the rest of the month once a resource hits 80%.
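
The per‑resource thresholds and hysteresis can be sketched as a small state evaluator. This is a minimal illustration, not the author's code: the names (ResourceConfig, evaluate) and the exact config shape are assumptions based on the rules above.

```typescript
// Illustrative sketch of per-resource thresholds with hysteresis.
// All names and the config shape are assumed, not taken from the project.

type BreakerState = "ok" | "warned" | "tripped";

interface ResourceConfig {
  warnAt: number;         // fraction of monthly quota that triggers a warning
  tripAt: number | null;  // null = warn-only, never trips
  recoverAt?: number;     // hysteresis: must fall below this to recover
}

const limits: Record<string, ResourceConfig> = {
  workersRequests: { warnAt: 0.8, tripAt: null },                 // cheap overage: warn only
  kvWrites:        { warnAt: 0.8, tripAt: 0.9, recoverAt: 0.85 }, // expensive overage: can trip
};

function evaluate(prev: BreakerState, usage: number, cfg: ResourceConfig): BreakerState {
  if (cfg.tripAt !== null) {
    if (usage >= cfg.tripAt) return "tripped";
    // Hysteresis: once tripped, stay tripped until usage falls below recoverAt.
    if (prev === "tripped" && usage >= (cfg.recoverAt ?? cfg.tripAt)) return "tripped";
  }
  return usage >= cfg.warnAt ? "warned" : "ok";
}
```

The `prev` argument is what makes the 5% gap work: a resource sitting at 87% stays tripped if it was tripped last cycle, but only warns if it was climbing from below.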

Implementation

Every 5 minutes, the system:

  1. Queries Cloudflare’s GraphQL API (requests, CPU, KV, queues) and the Observability Telemetry API (logs/traces) in parallel.
  2. Evaluates eight resource dimensions.
  3. Caches the resulting state to KV.

Between checks the application performs a single KV read — essentially free.
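
The check cycle above can be sketched as follows. The usage fetchers stand in for the GraphQL and Telemetry API calls (their real queries are omitted), and the "breaker:state" key name plus the simplified single-threshold evaluation are assumptions for illustration.

```typescript
// Sketch of one 5-minute check cycle: fetch usage in parallel, evaluate,
// cache to KV. Fetcher shape and key name are illustrative assumptions.

interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

// Each fetcher returns resource-name -> fraction of monthly quota used.
type UsageFetcher = () => Promise<Record<string, number>>;

async function checkCycle(fetchers: UsageFetcher[], kv: KVLike): Promise<string> {
  // 1. Query both usage APIs in parallel and merge the dimensions.
  const parts = await Promise.all(fetchers.map((f) => f()));
  const usage = Object.assign({}, ...parts) as Record<string, number>;

  // 2. Evaluate every dimension (per-resource thresholds elided here;
  //    a flat 90% trip / 80% warn stands in for the real config).
  const values = Object.values(usage);
  const state = values.some((u) => u >= 0.9) ? "tripped"
    : values.some((u) => u >= 0.8) ? "warned" : "ok";

  // 3. Cache the resulting state to KV; app code does one cheap read of this key.
  await kv.put("breaker:state", JSON.stringify({ state, usage, checkedAt: Date.now() }));
  return state;
}
```

In a Worker, `KVLike` would be satisfied by a real KV namespace binding; the interface just keeps the sketch self-contained.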

When the breaker is tripped, all scheduled tasks are skipped. The cron trigger still fires (you can’t stop that), but the first thing it does is check the breaker and bail out if tripped.
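
The bail-out at the top of each cron handler might look like this. The binding name, KV key, and the choice to skip work when no cached state exists are my assumptions, not the author's code.

```typescript
// Guard checked first thing in every scheduled handler.
// Key name and "skip when state is missing" behavior are assumptions.

function shouldRunCron(cachedState: string | null): boolean {
  // Fail safe: no cached state means monitoring hasn't reported in; skip work
  // rather than assume everything is fine.
  if (cachedState === null) return false;
  return JSON.parse(cachedState).state !== "tripped";
}

// In the Worker (types from @cloudflare/workers-types), roughly:
//
// export default {
//   async scheduled(_event: ScheduledEvent, env: Env): Promise<void> {
//     if (!shouldRunCron(await env.USAGE_KV.get("breaker:state"))) return;
//     // ... normal cron work: RSS fetching, clustering, LLM calls, email delivery
//   },
// };
```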

Results

  • Running in production for two weeks.
  • Caught a KV‑reads spike at 82% early in the month: one warning email went out, I investigated and fixed the root cause, and the trip threshold was never hit.

Applicability

The pattern can be applied to any metered serverless platform (Lambda, Vercel, Supabase) or any API with budget ceilings (OpenAI, Twilio). The core idea: treat your own resource budget as a health signal, just like you would treat a downstream service’s error rate.

Further Reading

Full write‑up with implementation code and tests:
