Why We Didn’t Move to EKS (Yet): Choosing ECS Over Kubernetes in Production

Published: December 28, 2025 at 04:47 AM EST
3 min read
Source: Dev.to

The “Kubernetes Tax” We Wanted to Avoid

Kubernetes is amazing, but it requires a significant investment in tooling and maintenance. To run EKS properly in production, you aren’t just managing containers; you’re managing a platform. You need:

  • GitOps tools: ArgoCD or FluxCD for deployments.
  • Observability: Fluentd or similar for log shipping.
  • Ingress Controllers: NGINX or ALB controllers.
  • Security: Constant patching of the control plane and worker nodes.

We wanted the team focused 100% on shipping application code, not on managing infrastructure plumbing.

Our Hybrid ECS Architecture

We designed a hybrid ECS strategy that leverages the best of both serverless and provisioned compute.

1. Fargate for Stateless Workloads

For our main application servers and Sidekiq background workers, we used ECS Fargate.

  • No Servers to Manage: No OS patching or instance scaling.
  • Right‑Sizing: Pay per second for exactly the vCPU and memory each task is allocated, with no idle EC2 headroom to pay for.
  • Scalability: Fargate handles the heavy lifting of launching thousands of containers if needed.
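As a concrete sketch (family, image, and role names here are placeholders, not our actual config), a Fargate task definition for a stateless web tier looks roughly like this:

```json
{
  "family": "app-web",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
      "portMappings": [{ "containerPort": 3000 }],
      "essential": true
    }
  ]
}
```

Right‑sizing is then just a matter of tuning the `cpu` and `memory` values per service, since Fargate bills on what the task definition requests.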

2. EC2 Launch Type for Cron Jobs

We didn’t go 100% Fargate. For our scheduled cron jobs, we stuck with the EC2 Launch Type.

  • Why? Cron jobs run frequently and often use the same base images.
  • The Cost Hack: Running them on EC2 instances lets us cache Docker layers locally, drastically reducing data‑transfer costs from ECR and speeding up start times. Fargate, by contrast, pulls the image fresh for every task, so frequent, short‑lived jobs pay that pull cost on every run.
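The caching behavior comes down to one documented setting in the ECS agent config on each EC2 container instance (the cluster name below is illustrative):

```shell
# /etc/ecs/ecs.config
ECS_CLUSTER=cron-cluster
# Reuse a locally cached image if one is present instead of pulling from ECR on every run
ECS_IMAGE_PULL_BEHAVIOR=prefer-cached
```

With `prefer-cached`, the agent only hits ECR when the image isn’t already on disk, which is exactly what you want for frequent jobs sharing the same base images.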

The Stack: Simple and Managed

We offloaded state management to AWS managed services to keep the compute layer purely ephemeral:

  • Database: Amazon RDS for PostgreSQL.
  • Caching: Amazon ElastiCache (Redis).
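Wiring those endpoints into tasks stays simple as well. A sketch of the relevant container‑definition fragment (the hostname and ARN are placeholders): non‑sensitive endpoints go in as plain environment variables, while credentials come from SSM Parameter Store via the task definition’s `secrets` field:

```json
"environment": [
  { "name": "REDIS_URL", "value": "redis://app-cache.example.use1.cache.amazonaws.com:6379/0" }
],
"secrets": [
  { "name": "DATABASE_URL", "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/app/database-url" }
]
```

Because all state lives in RDS and ElastiCache, any task can be killed and replaced without ceremony.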

CI/CD: Skipping the Complexity

One of the biggest wins was avoiding the “GitOps” complexity of ArgoCD or Flux. Our pipeline is a straightforward GitHub Actions workflow:

  1. Build: Create the Docker image.
  2. Scan: Run security vulnerability scans.
  3. Push: Upload to ECR.
  4. Deploy: Update the ECS Task Definition and force a new deployment.
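The four steps above fit in a single workflow file. A rough sketch (the repository, cluster, and service names are hypothetical, and the scan step assumes Trivy):

```yaml
# .github/workflows/deploy.yml
name: Deploy
on:
  push:
    branches: [main]
env:
  ECR_REGISTRY: 123456789012.dkr.ecr.us-east-1.amazonaws.com
  IMAGE: 123456789012.dkr.ecr.us-east-1.amazonaws.com/app:${{ github.sha }}
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: docker build -t "$IMAGE" .
      - name: Scan
        run: trivy image "$IMAGE"
      - name: Push
        run: |
          aws ecr get-login-password | docker login --username AWS --password-stdin "$ECR_REGISTRY"
          docker push "$IMAGE"
      - name: Deploy
        run: aws ecs update-service --cluster app-cluster --service web --force-new-deployment
```

In practice the Deploy step would also register a new task‑definition revision pointing at the fresh image tag (for example with `aws ecs register-task-definition` or the `aws-actions/amazon-ecs-deploy-task-definition` action); the point is that the whole pipeline lives in one readable file.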

That’s it. No separate synchronization server, no complex CRDs, and no managing Helm charts. The pipeline is robust, easy to debug, and requires zero maintenance.

The Verdict: Time is Money

By choosing ECS, we:

  • Skipped the Learning Curve: No need to train the team on kubectl, manifests, or cluster networking.
  • Reduced Operational Overhead: No node patching, no control‑plane upgrades.
  • Lowered the Bill: No EKS control‑plane fee ($73/month per cluster) and no system‑pod overhead on worker nodes.

We might move to EKS one day if our requirements for custom networking or a service mesh become complex enough to warrant it. But for now, ECS allows us to run a stable, high‑performance production environment where the only thing we have to take care of is our application code.

Sometimes, the best engineering decision is the boring one.
