Triggering Long Jobs in Cloudflare Workers
Source: Dev.to
The Problem: My Job Was Too Long for HTTP
I had a Worker that handled my admin UI. One of the features was a button that kicked off a heavy background process—think scraping, data processing, batch operations, that kind of thing.
```javascript
export default {
  async fetch(request, env, ctx) {
    if (request.url.endsWith('/admin/run-job')) {
      await runHeavyJob(); // 😬
      return new Response('Job complete!');
    }
  }
}
```
This worked fine in development. In production, however, it hit timeouts because HTTP requests have strict limits:
| Plan | CPU Time | Wall Time |
|---|---|---|
| Free | 10 ms | 30 s |
| Workers Paid | 50 ms | 30 s |
| Business+ | 30 s | 30 s |
My job needed more than 30 seconds of wall time, and I was burning through CPU time quickly. Even on the paid plan, I kept hitting limits.
I tried using ctx.waitUntil():
```javascript
export default {
  async fetch(request, env, ctx) {
    if (request.url.endsWith('/admin/run-job')) {
      ctx.waitUntil(runHeavyJob()); // Still doesn't work! 😭
      return new Response('Job started!');
    }
  }
}
```
waitUntil() doesn’t extend the timeout; it only lets you do cleanup work after sending the response. The isolate still shuts down at the same time limit.
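The semantics are easy to model with a mocked `ctx` (`makeCtx`, `handler`, and `drain` here are hypothetical stand-ins, not Workers APIs): the handler’s return value goes out first, and `waitUntil()` merely registers promises the runtime settles afterwards, under the same limits.

```javascript
// Minimal sketch of waitUntil() semantics with a mocked ctx.
function makeCtx() {
  const pending = [];
  return {
    waitUntil(promise) { pending.push(promise); }, // just tracks the promise
    drain() { return Promise.all(pending); },      // what the runtime does after responding
  };
}

async function handler(ctx) {
  // Background work is registered, not awaited...
  ctx.waitUntil(new Promise((resolve) => setTimeout(() => resolve('logged'), 10)));
  // ...so the "response" returns immediately.
  return 'response sent';
}
```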
Why I Couldn’t Just Use scheduled()
I thought about reusing my existing cron job:
```javascript
export default {
  async fetch(request, env, ctx) {
    if (request.url.endsWith('/admin/run-job')) {
      // Can I just... call scheduled() somehow? 🤔
      await this.scheduled(); // Nope!
      return new Response('Done!');
    }
  },

  async scheduled(event, env, ctx) {
    await runHeavyJob(); // This works great!
  }
}
```
You can’t invoke scheduled() directly from your code—only Cloudflare’s cron system can trigger it. Workarounds I tried included:
- Calling the Cloudflare API to trigger a cron (requires external auth, not instant)
- Setting up webhooks to external services (defeats the purpose of Workers)
- Storing a flag in KV and polling it every minute (works, but feels hacky)
The Lightbulb Moment: Queues Are Made For This
Cloudflare Queues provide a third type of invocation handler:
```javascript
export default {
  async fetch(request, env, ctx) { /* ... */ },
  async scheduled(event, env, ctx) { /* ... */ },
  async queue(batch, env, ctx) { /* ... */ } // 👈 This one!
}
```
Execution Limits by Handler Type
| Handler | CPU Time | Best For |
|---|---|---|
| `fetch()` | 10–50 ms (most plans) | Quick APIs, UI |
| `scheduled()` | 30 s | Periodic jobs |
| `queue()` | Unlimited ⚡ | Heavy processing |
Queue handlers have no CPU time limit—only wall‑time limits measured in minutes.
How I Actually Solved It
Worker 1: Admin UI (Producer)
```javascript
export default {
  async fetch(request, env, ctx) {
    if (request.url.endsWith('/admin/run-job')) {
      // Enqueue a message and return immediately
      await env.MY_QUEUE.send({
        type: 'heavy-job',
        triggeredBy: 'admin',
        timestamp: Date.now()
      });
      return new Response('Job queued!');
    }
    return new Response('Not found', { status: 404 }); // fallback for other paths
  }
}
```
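If one click needs to enqueue many jobs, Queues also provide `sendBatch()`. Each call is capped at a fixed number of messages (100 per batch at the time of writing), so a small chunking helper keeps calls under the limit. A sketch — `chunk` and `enqueueAll` are hypothetical helpers, while `sendBatch()` and the `{ body }` wrapper are the Queues producer API:

```javascript
// Hypothetical helper: split an array so each sendBatch() call stays
// under the per-batch message cap.
function chunk(items, size = 100) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Producer sketch: enqueue many payloads in capped batches.
async function enqueueAll(queue, jobs) {
  for (const batch of chunk(jobs)) {
    // sendBatch() expects { body } wrappers rather than raw payloads
    await queue.sendBatch(batch.map((body) => ({ body })));
  }
}
```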
Worker 2: Job Runner (Consumer)
```javascript
export default {
  async queue(batch, env, ctx) {
    for (const message of batch.messages) {
      const { type, triggeredBy } = message.body;
      if (type === 'heavy-job') {
        await runHeavyJob(); // Runs with unlimited CPU time! 🎉
        message.ack();
      }
    }
  }
}
```
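In practice you also want to decide what happens when a job throws. A hedged sketch of explicit per-message ack/retry, factored into a plain function (`processBatch` is a hypothetical helper so the logic runs outside the Workers runtime; `ack()` and `retry()` are the actual Queues message methods):

```javascript
// Per-message error handling: ack on success, retry on failure.
async function processBatch(batch, runJob) {
  for (const message of batch.messages) {
    try {
      if (message.body.type === 'heavy-job') {
        await runJob(message.body);
      }
      message.ack();   // success (or irrelevant message): drop from the queue
    } catch (err) {
      message.retry(); // failure: redeliver later, bounded by max_retries
    }
  }
}
```

In the Worker itself, the handler would just be `async queue(batch, env, ctx) { await processBatch(batch, runHeavyJob); }`.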
Why this works:
- The UI Worker stays fast (just enqueues and returns).
- The job Worker runs with unlimited CPU time.
- Queues handle retries automatically.
- Workers can be scaled independently.
- Execution is nearly instant (no polling delay).
Important: Handlers Don’t Compete for Resources
Each handler invocation runs in its own isolated execution context, so a running queue job won’t slow down HTTP requests. They share only the code bundle (larger bundles = slower cold starts) and the deployment (a bug in one handler affects the whole Worker).
You can combine all three handlers in a single Worker if desired:
```javascript
export default {
  async fetch(request, env, ctx) {
    await env.MY_QUEUE.send({ type: 'job' });
    return new Response('Queued!');
  },

  async scheduled(event, env, ctx) {
    await env.MY_QUEUE.send({ type: 'cron-job' });
  },

  async queue(batch, env, ctx) {
    await runHeavyJob(); // This won't slow down fetch()
  }
}
```
I prefer separating them to keep the UI bundle small, enable independent deployments, and maintain a cleaner separation of concerns.
Other Options I Considered
Cron Polling
Set a flag in KV and check it every minute with scheduled():
```javascript
export default {
  async fetch(request, env, ctx) {
    await env.KV.put('pending-job', 'true');
    return new Response('Job will run soon');
  },

  async scheduled(event, env, ctx) {
    const pending = await env.KV.get('pending-job');
    if (pending) {
      await runHeavyJob();
      await env.KV.delete('pending-job');
    }
  }
}
```
Works, but isn’t instant—you’re limited by the cron interval (minimum 1 minute).
Durable Object Alarms
Durable Objects can set alarms that fire almost immediately:
```javascript
export class JobRunner {
  constructor(state) {
    this.storage = state.storage; // DO storage comes from the state object
  }

  async fetch(request) {
    await this.storage.setAlarm(Date.now() + 100); // fire in ~100 ms
    return new Response('Alarm set');
  }

  async alarm() {
    await runHeavyJob(); // Runs in the DO context
  }
}
```
Elegant, but requires setting up Durable Objects, which can feel heavyweight for simple background jobs.
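For a sense of that overhead, the extra wiring the alarm approach needs in wrangler.toml looks roughly like this (binding name and migration tag are illustrative):

```toml
# Hypothetical wrangler.toml additions for the Durable Object approach
[[durable_objects.bindings]]
name = "JOB_RUNNER"
class_name = "JobRunner"

[[migrations]]
tag = "v1"
new_classes = ["JobRunner"]
```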
My Recommendation
For on‑demand long‑running jobs, use Queues. They are purpose‑built for this scenario:
- Unlimited CPU time
- Built‑in retry logic
- Simple API
- Automatic scaling
- Near‑instant execution
Minimal Setup
```toml
# wrangler.toml
[[queues.producers]]
queue = "my-jobs"
binding = "MY_QUEUE"

[[queues.consumers]]
queue = "my-jobs"
max_batch_size = 10
max_batch_timeout = 30
```
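The consumer block also accepts retry settings worth setting explicitly; a hedged variant (the dead-letter queue name is an assumption, and that queue must be created separately):

```toml
[[queues.consumers]]
queue = "my-jobs"
max_batch_size = 10
max_batch_timeout = 30
max_retries = 3                    # redeliveries before a message is given up on
dead_letter_queue = "my-jobs-dlq"  # exhausted messages land here instead of being dropped
```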
Wrapping Up
- **Don’t fight the platform.** Trying to make `fetch()` do something it wasn’t designed for wastes time.
- **Read the limits.** Understanding CPU time vs. wall time saved me hours of debugging.
- **Queues are underrated.** They’re not just for distributed systems; they’re perfect for background jobs in monoliths too.