Optimizing Costs for Container Workloads on AWS EKS and ECS

Published: December 24, 2025 at 11:46 AM EST
5 min read
Source: Dev.to

Hey Everyone!
Let’s talk about something that we all care about: saving money on our cloud bills. I recently dived deep into optimizing our container costs on AWS, and I wish I’d known these insights earlier.

Why Container Cost Optimization Matters

Containers are a huge win for scaling and deployment, but they can quietly eat away at your budget if you're not keeping an eye on them. The good news? AWS offers plenty of ways to cut those costs without hurting performance, and often while improving it.

Spot Instances: Your Secret Weapon

Spot instances are probably the biggest win AWS has given me. You can save up to 90 % compared with on‑demand instances. Yes, 90 %! They’re ideal for fault‑tolerant applications that can withstand intermittent disruptions.

  • EKS makes using managed node groups with Spot instances relatively easy.
  • You can mix Spot and regular instances within the same EKS cluster: use regular instances for critical workloads and Spot for less‑critical ones.
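As a sketch of that mixed setup, here is what an eksctl cluster config might look like with one On-Demand node group and one Spot node group (the cluster name, region, and instance types below are placeholders, not values from the original setup):

```yaml
# eksctl ClusterConfig sketch: On-Demand capacity for critical workloads,
# Spot capacity for interruptible ones. All names/types are illustrative.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster        # placeholder
  region: us-east-1       # placeholder
managedNodeGroups:
  - name: on-demand-critical
    instanceTypes: ["m5.large"]
    minSize: 2
    maxSize: 4
  - name: spot-batch
    spot: true            # request Spot capacity for this group
    instanceTypes: ["m5.large", "m5a.large", "m4.large"]  # diversify pools
    minSize: 0
    maxSize: 10
```

Listing several instance types for the Spot group matters: it lets EKS pull capacity from multiple Spot pools, which reduces the chance of all your nodes being reclaimed at once.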

What worked for us:
Moving batch‑processing workloads and CI/CD pipelines to Spot instances saved us instantly. Those workloads are inherently interruptible, so the cost benefit was immediate. Just make sure your applications can shut down gracefully, and you’re good to go.
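On the "shut down gracefully" point, the usual knobs are the pod's termination grace period and a `preStop` hook, so the app gets SIGTERM and time to finish in-flight work before a Spot reclaim. A minimal Deployment sketch (all names and values are illustrative):

```yaml
# Deployment fragment: give the app time to drain when a Spot interruption
# (or any eviction) sends SIGTERM. Names and images are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: batch-worker
spec:
  replicas: 3
  selector:
    matchLabels:
      app: batch-worker
  template:
    metadata:
      labels:
        app: batch-worker
    spec:
      terminationGracePeriodSeconds: 60   # Spot gives a ~2-minute warning
      containers:
        - name: worker
          image: registry.example.com/batch-worker:latest  # placeholder
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "sleep 5"]  # let the LB deregister first
```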

Fargate vs EC2: Choosing Wisely

|                      | Fargate | EC2 |
|----------------------|---------|-----|
| Cost                 | Higher per compute unit, but you only pay for what you use (down to the second). | Lower per compute unit when right-sized; cheaper still with Reserved Instances or Savings Plans. |
| Operational overhead | No infrastructure management; AWS handles the underlying servers. | You manage the instances (patching, scaling, etc.). |
| Best for             | Unpredictable traffic, small workloads, or when you want zero-ops. | Predictable, steady-state production workloads where you can right-size and maintain high utilization. |

My current approach

  • Fargate for dev environments and occasional workloads.
  • EC2 (well‑optimized, possibly with Reserved Instances/Savings Plans) for production workloads that receive steady traffic.

It’s essentially about using the right tool for the right job.
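On EKS, that split can be expressed directly in the cluster config: a Fargate profile picks up pods in the namespaces you name, while everything else lands on your EC2 node groups. A hedged eksctl fragment (the profile and namespace names are illustrative):

```yaml
# eksctl ClusterConfig fragment: run the "dev" namespace on Fargate while
# other namespaces stay on EC2 node groups. Names are illustrative.
fargateProfiles:
  - name: dev-profile
    selectors:
      - namespace: dev   # every pod scheduled into "dev" runs on Fargate
```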

Autoscaling: The Dynamic Duo

Two components changed the way I think about resource allocation:

  1. Cluster Autoscaler – automatically scales the number of nodes based on pending pods. No more paying for idle nodes.
  2. Horizontal Pod Autoscaler (HPA) – scales pods at the application level based on CPU, memory, or custom metrics.

Together they form a “magnificent symphony of efficiency.” Our cluster now scales up when traffic spikes and scales down when traffic drops, saving us ~30 % by eliminating over‑provisioning.

Example HPA Manifest

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70

Rightsizing: Stop Wasting Resources

I was guilty of setting pod resource requests far too high “just in case.” That meant paying for resources we never used.

Steps to right‑size:

  1. Measure actual consumption – use the Kubernetes Metrics Server (or Prometheus) to see real usage.
  2. Adjust requests & limits – e.g., if a pod uses ~100 MiB of memory but you request 512 MiB, you’re over‑paying.
  3. Iterate conservatively – monitor for a week or two after changes, then fine‑tune further.

Even small adjustments add up across dozens or hundreds of pods.
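To make step 2 concrete: if `kubectl top pod` shows a container using around 100 MiB, the spec might come down from a 512 MiB request to something like the fragment below (the numbers are illustrative, not recommendations; measure your own workload first):

```yaml
# Container resources fragment: requests close to observed usage, with
# headroom in the limits. All values are illustrative.
resources:
  requests:
    cpu: 100m
    memory: 128Mi   # was 512Mi; observed usage was ~100Mi
  limits:
    cpu: 500m
    memory: 256Mi
```

Requests are what the scheduler reserves (and what you effectively pay for in node capacity), so bringing them close to real usage is where the savings come from; limits just cap the worst case.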

Kubecost: Your Financial Visibility Partner

Kubecost provides real‑time cost visibility for Kubernetes workloads. It shows exactly where dollars are being spent—down to the namespace or pod level.

Why I love Kubecost

  • Cost breakdown by team, app, or environment.
  • Alerts when spending exceeds defined thresholds.
  • Community edition is free and perfect for beginners.

Once installed on your cluster, you get insights into cost allocation, optimization opportunities, and alerts—essentially a financial analyst for your Kubernetes fleet.
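Installation is typically done with Helm; a sketch based on the Kubecost docs (the chart repo, release name, and service name below should be verified against the current documentation before use):

```shell
# Install the free Kubecost community edition via Helm.
helm repo add kubecost https://kubecost.github.io/cost-analyzer/
helm repo update
helm install kubecost kubecost/cost-analyzer \
  --namespace kubecost --create-namespace

# Then open the dashboard locally:
kubectl port-forward -n kubecost svc/kubecost-cost-analyzer 9090:9090
```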

ECR Lifecycle Policies: Clean Up and Save

Container images stored in Amazon ECR can start costing money, especially old, unused versions. ECR lifecycle policies let you automatically prune images by age or count.

Simple policy example: delete untagged images 30 days after they were pushed, and keep only the 10 most recent images overall. (Two caveats worth knowing: lifecycle rules expire images by push age or count, not by when an image was last pulled; and a rule with "tagStatus": "any" must carry the highest rulePriority, so it is evaluated last.)

{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Delete untagged images older than 30 days",
      "selection": {
        "tagStatus": "untagged",
        "countType": "sinceImagePushed",
        "countUnit": "days",
        "countNumber": 30
      },
      "action": {
        "type": "expire"
      }
    },
    {
      "rulePriority": 2,
      "description": "Keep only the 10 most recent images",
      "selection": {
        "tagStatus": "any",
        "countType": "imageCountMoreThan",
        "countNumber": 10
      },
      "action": {
        "type": "expire"
      }
    }
  ]
}

Applying a policy like this prevents unused images from silently burning money.
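The policy can be attached to a repository with the AWS CLI; the repository name below is a placeholder:

```shell
# Save the JSON above as lifecycle-policy.json, then attach it.
aws ecr put-lifecycle-policy \
  --repository-name my-app \
  --lifecycle-policy-text file://lifecycle-policy.json

# Confirm what is attached:
aws ecr get-lifecycle-policy --repository-name my-app
```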

TL;DR

  • Spot Instances → up to 90 % savings for interruptible workloads.
  • Fargate vs EC2 → choose based on predictability, operational overhead, and cost model.
  • Autoscaling (Cluster Autoscaler + HPA) → eliminates idle capacity, saving ~30 %.
  • Rightsizing → align requests/limits with actual usage.
  • Kubecost → real‑time cost visibility and alerts.
  • ECR Lifecycle Policies → automatically prune old images to stop storage waste.

Implement these practices, and you’ll see a noticeable reduction in your AWS container bill without sacrificing performance. Happy optimizing!



It’s a small thing, but if you’re working with several repositories, the storage savings can add up.


Bringing It All Together

Cost optimization isn’t a one‑time task; it’s an ongoing practice. Start with the quick wins: autoscaling, Spot instances, and Kubecost.

  1. Quick wins

    • Autoscaling
    • Spot instances
    • Kubecost
  2. Next steps

    • Rightsizing
    • Cleaning up old images
    • Deciding between Fargate and EC2

Monitor progress and celebrate the wins along the way. We reduced our container costs by ≈45 % in three months.

The Next Areas to Tackle

  • Infrastructure
  • IRSA (IAM Roles for Service Accounts)

Remember, every dollar saved can be reinvested in building better features or improving your infrastructure.


What cost‑optimization methods have you tried with success?
I’d love to hear your experiences so I don’t miss any tips. Thanks in advance, and let’s keep learning from each other!

Share your “best tip on cutting costs” in the comment section below to help each other stay on top of those cloud bills!
