Nomad vs. Kubernetes: Why We Switched Our SaaS to HashiCorp Nomad
Source: Dev.to
The Complexity Tax: Why Kubernetes Isn’t Always the Answer
In the modern DevOps landscape, Kubernetes (K8s) is often treated as the default choice for container orchestration. It’s powerful, battle‑tested, and has a massive ecosystem. However, for many small‑to‑medium SaaS teams, Kubernetes comes with a significant complexity tax.
We spent two years managing a production K8s cluster. While it solved our scaling issues, we found ourselves spending roughly 30% of our engineering time just maintaining the orchestrator itself: debugging CNI plugins, managing complex RBAC, and wrestling with Helm charts that felt like they required a PhD to understand.
That’s when we looked at HashiCorp Nomad.
Nomad is a lightweight, flexible orchestrator that can manage both containerized and non‑containerized applications. It follows the Unix philosophy: do one thing and do it well. In this article we’ll walk through why we made the switch, the architectural differences, and how you can implement a production‑ready Nomad workflow.
A basic Nomad agent configuration:

```hcl
data_dir  = "/opt/nomad/data"
bind_addr = "0.0.0.0"

server {
  enabled          = true
  bootstrap_expect = 3
}

client {
  enabled = true
}
```
This simplicity extends to the developer experience. A Nomad Job is defined in HCL (HashiCorp Configuration Language), which is far more readable than the verbose YAML required by Kubernetes.
A production-ready Nomad job for a Next.js frontend:

```hcl
job "webapp" {
  datacenters = ["dc1"]
  type        = "service"

  group "frontend" {
    count = 3

    network {
      port "http" {
        to = 3000
      }
    }

    service {
      name = "webapp-frontend"
      port = "http"

      check {
        type     = "http"
        path     = "/api/health"
        interval = "10s"
        timeout  = "2s"
      }
    }

    task "nextjs" {
      driver = "docker"

      config {
        image = "my-registry/webapp:latest"
        ports = ["http"]
      }

      resources {
        cpu    = 500 # MHz
        memory = 256 # MB
      }

      env {
        NODE_ENV = "production"
      }
    }
  }
}
```
In Nomad the hierarchy is Job → Group → Task. A Job can contain multiple groups, and a group contains tasks that are co‑located on the same node (similar to a K8s Pod).
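As a sketch of that hierarchy (job, group, and task names here are illustrative, not part of the webapp example above): a Job with two groups, where the first group co-locates a sidecar task next to the main one, much like a second container in a K8s Pod.

```hcl
job "example" {
  datacenters = ["dc1"]

  # A group is the unit of scheduling; all of its tasks land on the same node
  # and share a network namespace.
  group "api" {
    count = 2

    task "server" {
      driver = "docker"
      config {
        image = "my-registry/api:latest"
      }
    }

    # A sidecar task co-located with "server" on every allocation.
    task "log-shipper" {
      driver = "docker"
      config {
        image = "my-registry/log-shipper:latest"
      }
    }
  }

  # A second group, which the scheduler may place on a different node.
  group "worker" {
    count = 1

    task "queue-consumer" {
      driver = "docker"
      config {
        image = "my-registry/worker:latest"
      }
    }
  }
}
```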
Service discovery and load balancing
One of the “gotchas” in Kubernetes is the complexity of Ingress controllers. In the Nomad ecosystem you use Consul. When a Nomad task starts, it automatically registers itself with Consul. You can then use Fabio or Traefik as a load balancer that dynamically updates its configuration based on Consul’s service catalog.
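With Traefik, this wiring comes down to service tags: Traefik's Consul catalog provider reads tags from registered services and builds routes dynamically. A minimal sketch, assuming Traefik is configured with the `consulCatalog` provider (the hostname is illustrative):

```hcl
service {
  name = "webapp-frontend"
  port = "http"

  # Traefik's consulCatalog provider picks these tags up from Consul and
  # routes traffic for the given host to healthy instances of this service.
  tags = [
    "traefik.enable=true",
    "traefik.http.routers.webapp.rule=Host(`app.example.com`)",
  ]

  check {
    type     = "http"
    path     = "/api/health"
    interval = "10s"
    timeout  = "2s"
  }
}
```

Because the load balancer watches Consul directly, adding or draining instances requires no manual configuration reload.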
Vault integration for secrets
Instead of K8s Secrets (which are just base64‑encoded strings), Nomad integrates natively with Vault. You can inject secrets directly into environment variables or as files:
```hcl
template {
  data = <<EOH
DATABASE_URL="{{ with secret "database/creds/readonly" }}{{ .Data.url }}{{ end }}"
EOH

  destination = "secrets/file.env"
  env         = true
}
```
Networking and storage considerations
- Network stack – By default, Nomad uses the host's network stack, which gives near-native performance but requires you to manage port collisions yourself. Fix: use Nomad's bridge networking mode with CNI plugins for isolated container networks similar to K8s.
- Storage – Nomad's CSI (Container Storage Interface) support is solid, though not as "automagical" as Kubernetes. Fix: for SaaS databases, we recommend managed services (e.g., RDS, Supabase) or dedicated nodes using Nomad's host_volume for maximum IOPS.
Feature comparison
| Feature | Kubernetes | HashiCorp Nomad |
|---|---|---|
| Complexity | High (steep learning curve) | Low (single binary) |
| Flexibility | Containers only (mostly) | Containers, binaries, Java, VMs |
| Ecosystem | Massive | Focused (HashiStack) |
| Resource usage | High overhead | Very low overhead |
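The "binaries" entry in the flexibility row is worth a concrete example: with the `exec` driver, the same job format runs a plain executable with no container image at all. A sketch (the binary path and arguments are hypothetical):

```hcl
task "legacy-billing" {
  # exec runs a host binary in an isolated environment; no Docker required.
  driver = "exec"

  config {
    command = "/usr/local/bin/billing-daemon"
    args    = ["-port", "8080"]
  }

  resources {
    cpu    = 200 # MHz
    memory = 128 # MB
  }
}
```

This is what lets teams put legacy daemons, JVM services, or one-off batch binaries under the same scheduler as their containers.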
Key Takeaways
- Nomad is significantly easier to operate and maintain.
- The HashiStack (Nomad, Consul, Vault) provides a modular, best‑of‑breed approach.
- HCL is a superior configuration language for infrastructure‑as‑code.
What’s your approach to orchestration? Have you felt the “Kubernetes fatigue,” or do you think the ecosystem benefits outweigh the complexity? Drop your thoughts in the comments.