Running Caddy on Cloudflare Workers via WebAssembly
Source: Dev.to
Introduction
Hi folks. This is my first post. Happy to join you here :)
I’ve been running Caddy in production long enough to know two things: the Caddyfile is a joy to work with, and everything around it tends to get way more complicated than it should.
For over a decade, the software‑engineering industry has been captivated by the containerization paradigm. You start with a clean Caddyfile on your laptop, then you add Docker for a $5 Virtual Private Server (VPS), then Helm charts, Terraform, or custom CI glue to get things into a Kubernetes cluster or edge platform. Suddenly, that elegant routing config is buried under YAML and infrastructure that mostly exists just to move the same HTTP rules between environments.
So, I asked a simple question:
What if one Caddyfile could run on my laptop, on a cheap VPS, and on the infinitely scalable Cloudflare edge – without Docker, and without rewriting configs?
That question led to a real architectural shift: compiling the Caddy web server to WebAssembly (WASM) and running it directly on Cloudflare Workers.
Here is how skipping the containerization trap entirely turns the edge into just another place Caddy lives.
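To make "one Caddyfile everywhere" concrete, here is the kind of minimal config I mean – the hostname and upstream port are illustrative, not from the actual project:

```caddyfile
example.com {
    # Compress responses and hand everything to the app server.
    encode gzip
    reverse_proxy localhost:8080
}
```

Three lines of routing logic. The whole point is that these three lines should not need a Dockerfile, a Helm chart, and a CI pipeline just to run somewhere else.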
Missed Shots
- Coolify VDS (Hetzner) – great when utilization is high, but for pet projects or low‑usage services the cost of idling outweighs the value, and scaling is entirely manual.
- Cloudflare Containers – runtime is complex, cold starts are slow, and pricing isn’t free.
- AWS Lightsail & Google Cloud Run – excellent platforms, but they still require separate DNS, domain, and TLS management, so a “no‑infra‑config” setup isn’t really possible, and costs climb surprisingly fast.
V8 Isolates vs. Containers
To understand how a monolithic Go application like Caddy can run on a serverless edge network, we have to look at the execution environment. Cloudflare Workers do not use containers or micro‑VMs. They are powered by workerd, an open‑source runtime built on Google’s V8 JavaScript engine.
The fundamental unit of execution here is the V8 isolate. When a request arrives, the V8 engine does not boot a Linux kernel, allocate a network namespace, or spin up control groups (cgroups). It simply allocates a memory context and executes the code.
| Component | Docker | V8 (workerd) |
|---|---|---|
| Isolation | OS‑level (cgroups, namespaces) | Application‑level (V8 memory heap) |
| Cold Start | ~1500 ms | ~5 ms |
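The isolate model above can be sketched in a few lines of TypeScript: module‑scope state is created once per isolate and then shared by every request that isolate serves, which is why there is no per‑request boot cost. The handler shape loosely mirrors a Worker’s `fetch` entry point, but all names here are illustrative:

```typescript
// Module scope: initialized once per isolate, reused across requests.
// In a container model, this setup cost would be paid on every cold boot.
let warm = false;   // flips to true after the first request
let served = 0;     // request counter living in isolate memory

const worker = {
  // Simplified stand-in for a Worker's fetch handler.
  fetch(url: string): { status: number; body: string } {
    const coldStart = !warm; // true only for the isolate's first request
    warm = true;
    served += 1;
    return { status: 200, body: `cold=${coldStart} served=${served} url=${url}` };
  },
};
```

Only the very first request sees `cold=true`; every subsequent request reuses the already‑warm memory context, with no kernel, namespace, or cgroup work in between.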
The era of the idle proxy server is ending.
One Caddyfile. Many runtimes.
Full repository: