From Terraform to GitOps: Building an End-to-End DevOps Platform with 11 Microservices

Published: January 14, 2026 at 07:18 PM EST
5 min read
Source: Dev.to

This article covers

  • How I provisioned the jumphost using Terraform
  • How I configured the jumphost with Ansible
  • Bonus: Monitor your AWS cost in real time while building this project (using AWS CostWatch)

Who this works for

  • Free‑Tier users
  • Anyone who wants to run everything from their own laptop terminal

Bonus: Monitor Your AWS Cost in Real Time (AWS CostWatch)

You don’t need any server – just your laptop terminal.

Step‑by‑Step

  1. Clone the repository

    git clone https://github.com/vsaraths/AWS-Cost-Watch.git
    cd AWS-Cost-Watch
  2. Install dependencies

    pip install boto3 rich sqlite-utils
  3. Configure AWS credentials

    Verify you already have credentials:

    aws sts get-caller-identity

    If you don’t have credentials configured yet:

    aws configure

    Provide the following when prompted:

    • Access Key ID
    • Secret Access Key
    • Default region name
    • Default output format (optional)
  4. Run CostWatch

    python3 aws_cost_dashboard.py

    The tool immediately scans your AWS account and begins displaying cost information in real time.
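
If you want to sanity-check CostWatch's numbers against AWS itself, the Cost Explorer CLI exposes the same data (assuming Cost Explorer is enabled on the account; the dates below are placeholders):

# Pull daily unblended cost for a date range via Cost Explorer
aws ce get-cost-and-usage \
  --time-period Start=2026-01-01,End=2026-01-14 \
  --granularity DAILY \
  --metrics UnblendedCost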

Terraform & Ansible – Building the Jumphost

Below is a concise, step‑by‑step guide for provisioning a jumphost using Terraform (for infrastructure) and Ansible (for configuration).

1️⃣ Clone the Infrastructure Repository

git clone <REPO_URL>
cd <REPO_DIRECTORY>

Replace <REPO_URL> and <REPO_DIRECTORY> with the actual values for your project.

2️⃣ Configure the AWS CLI (if not already done)

aws configure

Enter the same AWS credentials that you will use for Terraform.

3️⃣ Create an S3 Backend for Terraform State

cd s3-buckets
# Run the Terraform scripts that create the bucket
terraform init
terraform apply -auto-approve

The S3 bucket stores the Terraform state remotely, enabling safe, collaborative infrastructure management.
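
For reference, a minimal backend definition looks like the sketch below. The bucket name and key are placeholders, so point them at whatever the s3-buckets step actually created:

# Example backend definition for the jumphost stack (placeholder values)
cat > backend.tf <<'EOF'
terraform {
  backend "s3" {
    bucket = "<STATE_BUCKET_NAME>"
    key    = "jumphost/terraform.tfstate"
    region = "us-east-1"
  }
}
EOF
terraform init   # re-initialize so state moves to the S3 backend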

3.1️⃣ (Optional) Create Network Infrastructure

cd ../network
terraform init
terraform apply -auto-approve   # creates VPC, subnets, route tables, etc.

Sample verification

terraform state list

4️⃣ Provision the Jumphost EC2 (Terraform + Ansible)

cd ../ec2-jumphost
terraform init
terraform apply -auto-approve   # creates the EC2 instance

After the instance is up, the accompanying Ansible playbook will install and configure all required tools, making the setup repeatable and idempotent.
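
If you ever need to re-run the configuration step by hand, the playbook can be invoked directly. The inventory and playbook names here are illustrative; match them to the repo's layout:

# Re-apply the jumphost configuration; safe to repeat (idempotent)
ansible-playbook -i inventory.ini jumphost-playbook.yml \
  --private-key <KEY_NAME>.pem -u ec2-user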

5️⃣ Connect to the EC2 Instance & Access Jenkins

# Replace <KEY_NAME>.pem, <PUBLIC_IP> with your values
ssh -i <KEY_NAME>.pem ec2-user@<PUBLIC_IP>

Verify Git is installed

git --version

Retrieve the Jenkins admin password

sudo cat /var/lib/jenkins/secrets/initialAdminPassword
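
Before opening the UI, it can help to confirm Jenkins is actually listening (8080 is Jenkins' default port):

# Check the service, then browse to http://<PUBLIC_IP>:8080
sudo systemctl status jenkins
curl -I http://localhost:8080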

📌 Quick Recap

| Step | Action | Key Command |
| --- | --- | --- |
| 1 | Clone repo | git clone … |
| 2 | Configure AWS CLI | aws configure |
| 3 | Create S3 backend | terraform apply (in s3-buckets) |
| 3.1 | (Optional) Build VPC | terraform apply (in network) |
| 4 | Spin up jumphost | terraform apply (in ec2-jumphost) |
| 5 | SSH & Jenkins | ssh … / sudo cat …/initialAdminPassword |

Feel free to adapt any of the commands (e.g., add -var flags, use a different backend) to suit your environment. Happy provisioning!

Jenkins – Installation & Pipeline Setup

Step 6 – Install Jenkins Plugins

  1. Navigate: Jenkins Dashboard → Manage Jenkins → Plugins
  2. Install the required plugins (Git, Docker, Kubernetes, Pipeline, etc.).
  3. When installation is complete, restart Jenkins (ensure no jobs are running).

Step 7 – Set Up Jenkins Credentials

Add a GitHub Personal Access Token (PAT):

Jenkins Dashboard → Manage Jenkins → Credentials → (global) → Global credentials (unrestricted) → Add Credentials → Secret text

Step 8 – Create Jenkins Pipeline Jobs

All pipelines point to the same GitHub repository:

https://github.com/vsaraths/Deploy--E-Commerce-Application-eks-microservices-platform-11-Services-.git

8.1 Create EKS Cluster

| Setting | Value |
| --- | --- |
| Job type | Pipeline (Multibranch) |
| Branch specifier | */main |
| Build with Parameters → ACTION | create-eks-cluster |
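
Once the job completes, you can point kubectl at the new cluster (the cluster name is a placeholder; use whatever the pipeline created):

# Fetch kubeconfig for the new cluster and verify the nodes are Ready
aws eks update-kubeconfig --region us-east-1 --name <CLUSTER_NAME>
kubectl get nodes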

8.2 Create Elastic Container Registry (ECR)

| Setting | Value |
| --- | --- |
| Job type | Pipeline (Multibranch) |
| Branch specifier | */main |
| Build with Parameters → ACTION | create-ecr |

Verify the repository with:

aws ecr describe-repositories --region us-east-1

8.3 Build & Push Docker Images to ECR

Create a separate pipeline job (or a single multibranch job) for each microservice. Use the table below to define the ACTION parameter for each service.

| Service | Build Parameter (ACTION) |
| --- | --- |
| emailservice | build-emailservice |
| checkoutservice | build-checkoutservice |
| recommendationservice | build-recommendationservice |
| frontend | build-frontend |
| paymentservice | build-paymentservice |
| productcatalogservice | build-productcatalogservice |
| cartservice | build-cartservice |
| loadgenerator | build-loadgenerator |
| currencyservice | build-currencyservice |
| shippingservice | build-shippingservice |
| adservice | build-adservice |

Example Jenkinsfile (Groovy)

pipeline {
    agent any
    parameters {
        // One job per microservice: set this to the service being built
        string(name: 'SERVICE', defaultValue: 'frontend', description: 'Microservice to build and push')
    }
    environment {
        AWS_DEFAULT_REGION = 'us-east-1'
        // docker login targets the registry host; images are pushed to the per-service repository
        ECR_REGISTRY = "your-account-id.dkr.ecr.${env.AWS_DEFAULT_REGION}.amazonaws.com"
        ECR_REPO = "${ECR_REGISTRY}/${params.SERVICE}"
    }
    stages {
        stage('Checkout') {
            steps {
                git url: 'https://github.com/vsaraths/Deploy--E-Commerce-Application-eks-microservices-platform-11-Services-.git',
                    branch: 'main'
            }
        }
        stage('Build Docker Image') {
            steps {
                sh "docker build -t ${params.SERVICE}:latest ./services/${params.SERVICE}"
            }
        }
        stage('Login to ECR') {
            steps {
                // Log in to the registry host, not the full repository path
                sh "aws ecr get-login-password --region ${AWS_DEFAULT_REGION} | docker login --username AWS --password-stdin ${ECR_REGISTRY}"
            }
        }
        stage('Push Image') {
            steps {
                sh """
                    docker tag ${params.SERVICE}:latest ${ECR_REPO}:latest
                    docker push ${ECR_REPO}:latest
                """
            }
        }
    }
}

Set the SERVICE build parameter to the appropriate service name for each job (or hard-code it per pipeline).

Argo CD – Install on the Jumphost EC2

Step 13 – Install Argo CD

# 13.1 Create a namespace for Argo CD
kubectl create namespace argocd

# 13.2 Install Argo CD manifests
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# 13.3 Verify the installation
kubectl get pods -n argocd   # All pods should be in the Running state.

# 13.4 (Optional) Validate the cluster nodes
kubectl get nodes

Access the Argo CD UI (port‑forward example)

kubectl port-forward svc/argocd-server -n argocd 8080:443
# Then open https://localhost:8080 in your browser.
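
The UI login user is admin; recent Argo CD releases store the auto-generated initial password in a secret:

# Retrieve the initial admin password (username: admin)
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo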

Recap

  1. Monitor AWS cost in real time with CostWatch.
  2. Provision a reusable jumphost using Terraform + Ansible.
  3. Set up Jenkins with required plugins, credentials, and pipelines for EKS, ECR, and all microservices.
  4. Install Argo CD on the jumphost to manage GitOps deployments.

Following these steps gives you a repeatable, debuggable, and production‑ready DevOps foundation. Happy building!

13.5 – List All ArgoCD Resources

Output omitted.
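
The exact output was omitted above; the usual way to list everything Argo CD installed in the namespace is:

kubectl get all -n argocd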

13.6 – Expose ArgoCD Server Using a LoadBalancer

13.6.1 – Edit the ArgoCD Server Service

  1. Locate the ArgoCD Server Service manifest (e.g., argocd-server-svc.yaml).

  2. Find and replace the following lines:

    Original snippet

    type: ClusterIP
    send_resolved: true

    Updated snippet

    type: LoadBalancer

    Note: The send_resolved: true line belongs to Alertmanager configuration and should be removed from the ArgoCD Service manifest.

  3. Apply the changes

    kubectl apply -f argocd-server-svc.yaml
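
If you prefer not to edit the manifest by hand, a one-line patch makes the same change:

# Equivalent one-liner: switch the Service type in place
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'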

13.6.2 – Restart the Alertmanager Pod

# Alertmanager from kube-prometheus-stack runs as a StatefulSet, not a Deployment; look up its name first
kubectl get statefulsets -n monitoring
kubectl rollout restart statefulset/<ALERTMANAGER_STATEFULSET> -n monitoring

⚠️ Gmail users: If you need to send alerts via Gmail, create an App Password in your Google Account security settings (Google has retired the older “Less Secure Apps” option). This allows Alertmanager to authenticate against Gmail’s SMTP server.
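
For reference, the email settings live in the Alertmanager configuration. A minimal Gmail-style receiver might look like the sketch below; the addresses and file name are illustrative, and with kube-prometheus-stack you would normally feed this in through the chart's alertmanager.config value or its config secret rather than a loose file:

# Illustrative Alertmanager receiver config (all values are placeholders)
cat > alertmanager-gmail.yaml <<'EOF'
route:
  receiver: email-alerts
receivers:
  - name: email-alerts
    email_configs:
      - to: you@example.com
        from: you@gmail.com
        smarthost: smtp.gmail.com:587
        auth_username: you@gmail.com
        auth_password: <APP_PASSWORD>
        send_resolved: true
EOF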

Add a CPU‑Usage Alert Rule

Create cpu-alert-rule.yaml:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: cpu-alert
  namespace: monitoring
spec:
  groups:
    - name: cpu.rules
      rules:
        - alert: HighCPUUsage
          expr: sum(rate(container_cpu_usage_seconds_total{namespace="argocd"}[5m])) by (pod) > 0.7
          for: 2m
          labels:
            severity: warning
          annotations:
            summary: "CPU usage > 70% on pod {{ $labels.pod }}"
            description: "CPU usage has been above 70% for more than 2 minutes."

Apply the rule:

kubectl apply -f cpu-alert-rule.yaml
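
One caveat worth checking: the Prometheus instance deployed by kube-prometheus-stack only loads PrometheusRule objects whose labels match its ruleSelector (often the Helm release label). If the alert never appears, confirm the rule exists and compare labels:

# Confirm the rule object was created
kubectl get prometheusrules -n monitoring
# Show which labels Prometheus selects rules by
kubectl get prometheus -n monitoring -o jsonpath='{.items[0].spec.ruleSelector}'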

Edit the Prometheus Service

Change the service type to LoadBalancer (same steps as for Alertmanager) and apply the change.

Get the Prometheus LoadBalancer IP

kubectl get svc -n monitoring kube-prom-stack-prometheus

Example URL

http://a1b2c3d4.us-east-1.elb.amazonaws.com:9090

You can now access Prometheus, Grafana, and receive email alerts when CPU usage crosses the defined limit.

🎉 Final Checklist

  • ✅ Prometheus & Grafana installed
  • ✅ Grafana dashboards imported (Kubernetes, Argo CD, etc.)
  • ✅ Alertmanager reachable via LoadBalancer and email alerts configured
  • ✅ CPU/RAM metrics for your Argo CD app visible in Grafana

You’re all set! 🎉
