GitOps with ArgoCD on Amazon EKS using Terraform: A Complete Implementation Guide
Source: Dev.to
GitOps Overview
GitOps is a modern approach to continuous deployment that uses Git as the single source of truth for declarative infrastructure and applications. The core principles include:
- Declarative Configuration – Everything is described declaratively in Git.
- Version Control – All changes are tracked and auditable.
- Automated Deployment – Changes in Git trigger automatic deployments.
- Continuous Monitoring – The system continuously ensures the desired state matches the actual state.
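To make the first principle concrete, here is a minimal sketch of what a declarative ArgoCD Application can look like when expressed in Terraform through the kubernetes provider's kubernetes_manifest resource. The repository URL, path, and names below are placeholders for illustration, not values used later in this guide:

# Hypothetical example: an ArgoCD Application declared in Terraform.
# The Git repository URL, path, and namespaces are placeholders.
resource "kubernetes_manifest" "sample_app" {
  manifest = {
    apiVersion = "argoproj.io/v1alpha1"
    kind       = "Application"
    metadata = {
      name      = "sample-app"
      namespace = "argocd"
    }
    spec = {
      project = "default"
      source = {
        repoURL        = "https://github.com/example/gitops-repo.git"
        targetRevision = "HEAD"
        path           = "manifests/sample-app"
      }
      destination = {
        server    = "https://kubernetes.default.svc"
        namespace = "sample-app"
      }
      # Automated sync keeps the cluster continuously reconciled with Git.
      syncPolicy = {
        automated = {
          prune    = true
          selfHeal = true
        }
      }
    }
  }
}

Once ArgoCD is running, committing a change under the referenced path in Git is all it takes to roll out a new version; the controller reconciles the cluster back to whatever the repository declares.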
Why ArgoCD?
ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes that offers:
- Application Management – Centralized management of multiple applications.
- Multi‑Cluster Support – Deploy to multiple Kubernetes clusters.
- Rich UI – Intuitive web interface for monitoring deployments.
- RBAC Integration – Fine‑grained access control.
- Rollback Capabilities – Easy rollback to previous versions.
Architecture Overview
Our implementation creates a robust, scalable architecture that includes:
┌─────────────────────────────────────────────────────────────┐
│                          AWS Cloud                          │
│  ┌───────────────────────────────────────────────────────┐  │
│  │                          VPC                          │  │
│  │  ┌──────────────────┐  ┌───────────────────────────┐  │  │
│  │  │  Public Subnets  │  │      Private Subnets      │  │  │
│  │  │                  │  │  ┌─────────────────────┐  │  │  │
│  │  │  ┌────────────┐  │  │  │     EKS Cluster     │  │  │  │
│  │  │  │    NAT     │  │  │  │  ┌───────────────┐  │  │  │  │
│  │  │  │  Gateway   │  │  │  │  │ NGINX Ingress │  │  │  │  │
│  │  │  └────────────┘  │  │  │  │  Controller   │  │  │  │  │
│  │  │                  │  │  │  └───────────────┘  │  │  │  │
│  │  └──────────────────┘  │  │  ┌───────────────┐  │  │  │  │
│  │                        │  │  │    ArgoCD     │  │  │  │  │
│  │                        │  │  │    Server     │  │  │  │  │
│  │                        │  │  └───────────────┘  │  │  │  │
│  │                        │  │  ┌───────────────┐  │  │  │  │
│  │                        │  │  │  Application  │  │  │  │  │
│  │                        │  │  │   Workloads   │  │  │  │  │
│  │                        │  │  └───────────────┘  │  │  │  │
│  │                        │  └─────────────────────┘  │  │  │
│  │                        └───────────────────────────┘  │  │
│  └───────────────────────────────────────────────────────┘  │
│                                                             │
│  ┌───────────────────────────────────────────────────────┐  │
│  │                        Route53                        │  │
│  │  argocd.chinmayto.com  →  NGINX Ingress NLB           │  │
│  │  app.chinmayto.com     →  NGINX Ingress NLB           │  │
│  └───────────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────────┘
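The Route53 records in the diagram simply point the two hostnames at the Network Load Balancer created for the NGINX ingress controller. As a rough sketch of that mapping (the variable holding the NLB hostname is hypothetical; the actual records are wired up later in the setup), it could look like this:

# Sketch only: assumes the chinmayto.com hosted zone already exists and that
# the NGINX ingress NLB hostname is available, e.g. via a variable or data source.
data "aws_route53_zone" "main" {
  name = "chinmayto.com"
}

resource "aws_route53_record" "argocd" {
  zone_id = data.aws_route53_zone.main.zone_id
  name    = "argocd.chinmayto.com"
  type    = "CNAME"
  ttl     = 300
  records = [var.nginx_ingress_nlb_hostname] # hypothetical variable
}

resource "aws_route53_record" "app" {
  zone_id = data.aws_route53_zone.main.zone_id
  name    = "app.chinmayto.com"
  type    = "CNAME"
  ttl     = 300
  records = [var.nginx_ingress_nlb_hostname] # hypothetical variable
}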
Prerequisites
Before starting, ensure you have:
- AWS CLI configured with appropriate permissions
- Terraform installed (version ≥ 1.0)
- kubectl installed
- A registered domain name in Route53
- Helm installed (version ≥ 3.0)
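These version requirements can also be pinned in code. A minimal terraform settings block might look like the following; the provider version constraints here are assumptions, not values taken from this guide:

terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # assumed constraint
    }
    null = {
      source  = "hashicorp/null"
      version = "~> 3.0" # assumed constraint
    }
  }
}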
Implementation Steps
Step 1: Create VPC and EKS Cluster
We start by creating the foundational infrastructure using the community-maintained terraform-aws-modules for VPC and EKS.
Variables (infrastructure/variables.tf)
variable "aws_region" {
description = "AWS region"
type = string
default = "us-east-1"
}
variable "cluster_name" {
description = "Name of the EKS cluster"
type = string
default = "CT-EKS-Cluster"
}
variable "cluster_version" {
description = "Kubernetes version for the EKS cluster"
type = string
default = "1.33"
}
variable "vpc_cidr" {
description = "CIDR block for VPC"
type = string
default = "10.0.0.0/16"
}
variable "public_subnet_cidrs" {
description = "CIDR blocks for public subnets"
type = list(string)
default = ["10.0.1.0/24", "10.0.2.0/24"]
}
variable "private_subnet_cidrs" {
description = "CIDR blocks for private subnets"
type = list(string)
default = ["10.0.10.0/24", "10.0.20.0/24"]
}
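All of these variables have working defaults, so a terraform.tfvars file is optional. If you want to override them, a hypothetical example would be:

# infrastructure/terraform.tfvars (hypothetical overrides)
aws_region      = "us-west-2"
cluster_name    = "My-EKS-Cluster"
cluster_version = "1.33"
vpc_cidr        = "10.0.0.0/16"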
Main Terraform configuration (infrastructure/main.tf)
# Data source for availability zones
data "aws_availability_zones" "available" {
  state = "available"
}

# VPC Module Configuration
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "${var.cluster_name}-VPC"
  cidr = var.vpc_cidr

  azs             = slice(data.aws_availability_zones.available.names, 0, 2)
  private_subnets = var.private_subnet_cidrs
  public_subnets  = var.public_subnet_cidrs

  enable_nat_gateway   = true
  enable_vpn_gateway   = false
  single_nat_gateway   = true
  enable_dns_hostnames = true
  enable_dns_support   = true

  public_subnet_tags = {
    "kubernetes.io/role/elb" = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = "1"
  }

  tags = {
    Name      = "${var.cluster_name}-VPC"
    Terraform = "true"
  }
}

# EKS Cluster Module Configuration
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = var.cluster_name
  cluster_version = var.cluster_version

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  cluster_endpoint_public_access           = true
  enable_cluster_creator_admin_permissions = true

  # EKS Managed Node Groups
  eks_managed_node_groups = {
    EKS_Node_Group = {
      min_size       = 1
      max_size       = 3
      desired_size   = 2
      instance_types = ["t3.medium"]
      capacity_type  = "ON_DEMAND"
      subnet_ids     = module.vpc.private_subnets
    }
  }

  # EKS Add-ons
  cluster_addons = {
    coredns = {
      most_recent = true
    }
    kube-proxy = {
      most_recent = true
    }
    vpc-cni = {
      most_recent = true
    }
    eks-pod-identity-agent = {
      most_recent = true
    }
  }

  tags = {
    Name      = var.cluster_name
    Terraform = "true"
  }
}

# Null Resource to update the kubeconfig file
resource "null_resource" "update_kubeconfig" {
  provisioner "local-exec" {
    command = "aws eks --region ${var.aws_region} update-kubeconfig --name ${var.cluster_name}"
  }

  depends_on = [module.eks]
}
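The listing above assumes an AWS provider configuration elsewhere in the project, and later steps need a few cluster attributes. A minimal provider block and outputs file along these lines would work; this is a sketch, and the output names are my own, though the referenced module outputs do exist in the VPC and EKS modules:

# infrastructure/providers.tf (sketch)
provider "aws" {
  region = var.aws_region
}

# infrastructure/outputs.tf (sketch)
output "cluster_name" {
  description = "EKS cluster name"
  value       = module.eks.cluster_name
}

output "cluster_endpoint" {
  description = "EKS cluster API endpoint"
  value       = module.eks.cluster_endpoint
}

output "vpc_id" {
  description = "VPC ID"
  value       = module.vpc.vpc_id
}

With these files in place, running terraform init and terraform apply from the infrastructure directory provisions the VPC and EKS cluster, and the local-exec provisioner updates your kubeconfig so kubectl can reach the new cluster.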