End-to-End Microservices Deployment on AWS EKS: CI/CD with Jenkins, Docker, Kubernetes & Argo CD
Source: Dev.to
User–Order Microservices Application
Architecture Overview
A simple two‑service system consisting of a User Service and an Order Service. Both services are built with Spring Boot, packaged as Docker images, and deployed to an Amazon EKS cluster.
Project Structure (Mono‑Repo for Learning)
microservices-project/
├─ user-service/
│  ├─ src/
│  └─ Dockerfile
├─ order-service/
│  ├─ src/
│  └─ Dockerfile
└─ k8s/
   ├─ db-deployment.yaml
   ├─ user-deployment.yaml
   ├─ order-deployment.yaml
   └─ ingress.yaml
Example Spring Boot snippets
@Entity
public class User {
    @Id @GeneratedValue
    private Long id;
    // …
}
@RestController
@RequestMapping("/users")
public class UserController {

    @PostMapping
    public ResponseEntity create(@RequestBody User user) { … }

    @GetMapping("/{id}")
    public ResponseEntity get(@PathVariable Long id) { … }

    @GetMapping("/health")
    public String health() { return "OK"; }
}
@RestController
@RequestMapping("/orders")
public class OrderController {

    // Note: RestTemplate is not auto-configured as a bean; expose one from a
    // @Bean method in a @Configuration class for this injection to work.
    @Autowired
    private RestTemplate restTemplate;

    @PostMapping
    public ResponseEntity create(@RequestBody Order order) {
        // Verify the user exists by calling the user service
        String url = "http://user-service:8080/users/" + order.getUserId();
        restTemplate.getForObject(url, User.class);
        // …
    }
}
Build – Dockerize Both Services
# user-service/Dockerfile
FROM openjdk:17-jdk-slim
WORKDIR /app
COPY target/user-service.jar app.jar
ENTRYPOINT ["java","-jar","/app/app.jar"]
# order-service/Dockerfile
FROM openjdk:17-jdk-slim
WORKDIR /app
COPY target/order-service.jar app.jar
ENTRYPOINT ["java","-jar","/app/app.jar"]
# Build and tag images
mvn clean package -f user-service/pom.xml
docker build -t user-service:1.0 ./user-service
mvn clean package -f order-service/pom.xml
docker build -t order-service:1.0 ./order-service
Kubernetes – Database Deployment
# k8s/db-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  ports:
    - port: 5432
  selector:
    app: postgres
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13
          env:
            - name: POSTGRES_DB
              value: microservices
            - name: POSTGRES_USER
              value: admin
            - name: POSTGRES_PASSWORD
              value: secret
          ports:
            - containerPort: 5432
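In anything beyond a learning setup, the database password should not sit in the manifest as plain text. A minimal sketch of moving it into a Kubernetes Secret (the Secret name `postgres-credentials` is an assumption; in production the Secret would be created out of band or sourced from AWS Secrets Manager):

```yaml
# k8s/db-secret.yaml (hypothetical file)
apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials
type: Opaque
stringData:
  POSTGRES_PASSWORD: secret   # replace with a real value, or create the Secret out of band
---
# In db-deployment.yaml, the container env would then reference the Secret
# instead of a literal value:
# env:
#   - name: POSTGRES_PASSWORD
#     valueFrom:
#       secretKeyRef:
#         name: postgres-credentials
#         key: POSTGRES_PASSWORD
```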
Kubernetes – User Service Deployment
# k8s/user-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: user-service:1.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service
  ports:
    - port: 8080
      targetPort: 8080
Kubernetes – Order Service Deployment
# k8s/order-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
        - name: order-service
          image: order-service:1.0
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: order-service
  ports:
    - port: 8080
      targetPort: 8080
Ingress – Single Entry Point
# k8s/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microservices-ingress
spec:
  rules:
    - http:
        paths:
          - path: /users
            pathType: Prefix
            backend:
              service:
                name: user-service
                port:
                  number: 8080
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: order-service
                port:
                  number: 8080
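On EKS an Ingress resource does nothing until an ingress controller is installed. If the AWS Load Balancer Controller is used, the manifest would additionally carry an ingress class and an ALB scheme annotation — a sketch, assuming that controller is already running in the cluster:

```yaml
# Additions to k8s/ingress.yaml for the AWS Load Balancer Controller
metadata:
  name: microservices-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  # rules: … (unchanged from above)
```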
Deploy Everything
kubectl apply -f k8s/
Functional Proof (Critical Test)
curl -X POST http://<ingress-host>/orders \
  -H "Content-Type: application/json" \
  -d '{"userId":1,"productId":42,"quantity":2}'
Rollback Scenario
# Revert the Git commit so the repo again contains the older manifests
git revert <commit-sha>
# Argo CD (or kubectl) will apply the reverted state automatically
Production‑Grade Improvements
- Add health‑checks and readiness probes.
- Enable resource limits and requests.
- Use a sidecar for logging/metrics (e.g., Prometheus exporter).
- Store secrets in AWS Secrets Manager or Kubernetes Secrets encrypted with KMS.
- Implement blue‑green or canary deployments (see below).
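The first two items can be sketched directly on the user-service container (the probe path `/users/health` comes from the controller shown earlier; the resource values are illustrative, not tuned):

```yaml
# Additions to the container spec in k8s/user-deployment.yaml
resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
readinessProbe:
  httpGet:
    path: /users/health
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
livenessProbe:
  httpGet:
    path: /users/health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
```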
Part 2 – Full CI Pipeline (Jenkins)
pipeline {
    agent any

    environment {
        REGISTRY  = 'your-registry.io'
        IMAGE_TAG = "${env.BUILD_NUMBER}"
    }

    stages {
        stage('Checkout') {
            steps {
                git branch: 'main',
                    url: 'https://github.com/your-org/microservices-project.git'
            }
        }
        stage('Build User Service') {
            steps {
                dir('user-service') {
                    sh 'mvn clean package'
                }
            }
        }
        stage('Build Order Service') {
            steps {
                dir('order-service') {
                    sh 'mvn clean package'
                }
            }
        }
        stage('Docker Build') {
            steps {
                sh '''
                    docker build -t $REGISTRY/user-service:$IMAGE_TAG user-service/
                    docker build -t $REGISTRY/order-service:$IMAGE_TAG order-service/
                '''
            }
        }
        stage('Docker Push') {
            steps {
                withDockerRegistry([credentialsId: 'dockerhub-creds', url: '']) {
                    sh '''
                        docker push $REGISTRY/user-service:$IMAGE_TAG
                        docker push $REGISTRY/order-service:$IMAGE_TAG
                    '''
                }
            }
        }
        stage('Update K8s Manifests Repo') {
            steps {
                sh '''
                    git clone https://github.com/your-org/k8s-manifests.git
                    cd k8s-manifests
                    sed -i "s|image:.*user-service.*|image: $REGISTRY/user-service:$IMAGE_TAG|" user-deployment.yaml
                    sed -i "s|image:.*order-service.*|image: $REGISTRY/order-service:$IMAGE_TAG|" order-deployment.yaml
                    git commit -am "Update images to $IMAGE_TAG"
                    git push
                '''
            }
        }
    }
}
The pipeline follows the classic flow: **Trigger → Checkout → Build → Test (omitted for brevity) → Docker Build → Docker Push → Update Git → CD via Argo CD.**
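The `sed`-based image bump works, but it is brittle against any change in manifest formatting. An alternative sketch, assuming the manifests repo were converted to a Kustomize layout (a hypothetical restructuring, not the repo as described), keeps image tags in one declarative place:

```yaml
# k8s-manifests/kustomization.yaml (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - db-deployment.yaml
  - user-deployment.yaml
  - order-deployment.yaml
  - ingress.yaml
images:
  - name: user-service
    newName: your-registry.io/user-service
    newTag: "42"   # Jenkins would run: kustomize edit set image user-service=$REGISTRY/user-service:$IMAGE_TAG
  - name: order-service
    newName: your-registry.io/order-service
    newTag: "42"
```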
Part 3 – Argo CD (GitOps Deployment)
Install Argo CD
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl port-forward svc/argocd-server -n argocd 8080:443
Retrieve the initial admin password:
kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 -d
Connect Argo CD to the Manifests Repository
# application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: microservices-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/k8s-manifests.git
    targetRevision: HEAD
    path: .
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
Apply the application:
kubectl apply -f application.yaml
Argo CD will continuously reconcile the live cluster state with the Git manifests, handling rollouts, rollbacks, and health checks automatically.
Automatic Operations
- Sync: any out-of-band change (e.g., a manual kubectl apply -f k8s/) → Argo CD detects the drift and syncs the cluster back to Git.
- Rollback: git revert → Argo CD reverts the cluster to the previous state.
- Status: kubectl get pods, or use the Argo CD UI.
Blue‑Green & Canary Deployments with Argo CD
Blue‑Green Deployment (Zero Downtime)
- Create a “green” deployment (new version) alongside the existing “blue” deployment.
# user-service-green.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service-green
spec:
  replicas: 2
  selector:
    matchLabels:
      app: user-service
      version: green
  template:
    metadata:
      labels:
        app: user-service
        version: green
    spec:
      containers:
        - name: user-service
          image: your-registry.io/user-service:2.0
          ports:
            - containerPort: 8080
- Service selector points to the blue version initially.
# user-service-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  selector:
    app: user-service
    version: blue   # change to "green" to switch traffic
  ports:
    - port: 8080
      targetPort: 8080
- Switch traffic by updating the selector in Git:
selector:
  app: user-service
  version: green
- Commit & push → Argo CD syncs → traffic moves instantly without pod restarts.
- Rollback: revert the selector change (git revert) and Argo CD restores the blue deployment.
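The manual selector flip can also be automated. Argo Rollouts (a separate controller from the Argo project, not part of plain Argo CD) expresses the same blue-green pattern declaratively — a sketch, assuming Argo Rollouts is installed and a `user-service-preview` Service exists alongside the main one:

```yaml
# user-service-rollout.yaml (hypothetical, requires Argo Rollouts)
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: user-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: user-service
  template:
    metadata:
      labels:
        app: user-service
    spec:
      containers:
        - name: user-service
          image: your-registry.io/user-service:2.0
          ports:
            - containerPort: 8080
  strategy:
    blueGreen:
      activeService: user-service           # receives live traffic
      previewService: user-service-preview  # exposes the new version for smoke tests
      autoPromotionEnabled: false           # promote manually after verification
```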
Canary Deployment (Gradual Rollout)
- Create a canary deployment with a small replica count. Note that a plain Service balances evenly across all matching pods, so with 2 stable pods and 1 canary pod the canary receives roughly one third of the traffic; the share is proportional to replica counts, not freely tunable.
# user-service-canary.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-service
      version: canary
  template:
    metadata:
      labels:
        app: user-service
        version: canary
    spec:
      containers:
        - name: user-service
          image: your-registry.io/user-service:2.0
          ports:
            - containerPort: 8080
- Service selector includes both stable and canary pods (using a label selector that matches both versions).
selector:
  app: user-service
- Increase canary replicas gradually by editing the manifest in Git (e.g., 1 → 2 → 3 pods) and let Argo CD apply the changes.
- Promote to stable: once confidence is high, replace the stable deployment image with the new version and delete the canary deployment.
- Rollback: either scale the canary to zero (kubectl scale deployment user-service-canary --replicas=0) or revert the Git commit that introduced the canary.
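As with blue-green, the replica-juggling can be delegated to Argo Rollouts, which shifts traffic in declared steps instead of manual Git edits — a sketch under the same assumption that the Rollouts controller is installed (with a plain Service, weights are approximated by replica ratios):

```yaml
# Canary strategy fragment for an Argo Rollouts Rollout (hypothetical)
strategy:
  canary:
    steps:
      - setWeight: 10          # route roughly 10 % of traffic to the new version
      - pause: {duration: 5m}  # observe metrics before continuing
      - setWeight: 50
      - pause: {}              # wait indefinitely for manual promotion
```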