# Strangler Fig on IBM Kubernetes: Modernizing a Monolith Without Breaking Production
## Why the Strangler Fig Pattern Still Works
Most enterprise monoliths don’t fail because of bad code.
They fail because changing them safely becomes too risky.
A full rewrite to micro‑services sounds attractive, but in practice it often leads to:
- Long delivery cycles
- High data risk
- Business disruption
The Strangler Fig pattern offers a safer alternative: modernize incrementally while keeping the system running.
In this article I walk through a step‑by‑step, production‑safe approach to applying the Strangler Fig pattern using IBM Cloud Kubernetes Service (IKS), including real commands and manifests you can run.
By the end of this guide you will be able to:
- Containerize an existing monolithic application
- Deploy it to IBM Cloud Kubernetes Service
- Place it behind an Ingress
- Deploy a new “edge” service
- Route traffic gradually using path‑based routing
- Keep rollback simple and safe
## Prerequisites

| Item | Details |
|---|---|
| IBM Cloud account | – |
| Existing IKS cluster | – |
| Local tools | ibmcloud, kubectl, docker |

Log in and target your region and resource group:

```bash
ibmcloud login -a https://cloud.ibm.com
ibmcloud target -r <region> -g <resource-group>
```
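If kubectl is not yet pointed at the cluster, download the kubeconfig first (the cluster name below is a placeholder for your own):

```bash
# Download the kubeconfig for your IKS cluster and set it as the current context
ibmcloud ks cluster config --cluster <your-cluster-name>

# Confirm kubectl is talking to the right cluster
kubectl config current-context
```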
---
## 1. Set up a clean namespace
```bash
kubectl create namespace monolith-demo
kubectl config set-context --current --namespace=monolith-demo
kubectl get ns
```

## 2. Containerize the monolith

Goal: No behavior change – just package the monolith.

### Dockerfile (Node.js monolith)
```dockerfile
# ---- Build stage ----
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .

# ---- Runtime stage ----
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app /app
EXPOSE 8080
CMD ["npm","start"]
```
**Add minimal health endpoints (if you don’t already have them)**

```js
// Example endpoints
app.get("/health", (req, res) => res.status(200).send("ok"));
app.get("/ready", (req, res) => res.status(200).send("ready"));
```
**Build the image**

```bash
docker build -t monolith:1.0.0 .
```

**Push to IBM Cloud Container Registry**

```bash
# Log in (one-time)
ibmcloud cr login
ibmcloud cr namespace-add <icr-namespace>

# Tag & push
docker tag monolith:1.0.0 <region>.icr.io/<icr-namespace>/monolith:1.0.0
docker push <region>.icr.io/<icr-namespace>/monolith:1.0.0

# Verify
ibmcloud cr images | grep monolith
```
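One IKS-specific detail worth checking before you deploy: images in IBM Cloud Container Registry are private, and on a standard IKS cluster the default pull secret (all-icr-io) lives only in the default namespace. If pods in monolith-demo later sit in ImagePullBackOff, copying that secret into the namespace is the usual fix (the secret name and commands below assume a standard IKS setup):

```bash
# Copy the ICR pull secret from the default namespace into monolith-demo
kubectl get secret all-icr-io -n default -o yaml \
  | sed 's/namespace: default/namespace: monolith-demo/' \
  | kubectl create -f -

# Let the namespace's default service account use it for image pulls
kubectl patch serviceaccount default -n monolith-demo \
  -p '{"imagePullSecrets":[{"name":"all-icr-io"}]}'
```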
## 3. Deploy the monolith

### 3.1 Deployment manifest (deployment.yaml)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monolith
spec:
  replicas: 2
  selector:
    matchLabels:
      app: monolith
  template:
    metadata:
      labels:
        app: monolith
    spec:
      containers:
      - name: monolith
        image: <region>.icr.io/<icr-namespace>/monolith:1.0.0
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 10
```
```bash
kubectl apply -f deployment.yaml
kubectl rollout status deploy/monolith
kubectl get pods -l app=monolith
```
### 3.2 Service manifest (service.yaml)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: monolith-svc
spec:
  selector:
    app: monolith
  ports:
  - name: http
    port: 80
    targetPort: 8080
  type: ClusterIP
```

```bash
kubectl apply -f service.yaml
kubectl get svc monolith-svc
```
**Quick local test**

```bash
# In one terminal
kubectl port-forward svc/monolith-svc 8080:80

# In a second terminal
curl -i http://localhost:8080/health
```
## 4. Expose via Ingress (the routing control plane)

**Ingress manifest (ingress.yaml)**

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - host: <your-ingress-subdomain>
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: monolith-svc
            port:
              number: 80
```

```bash
kubectl apply -f ingress.yaml
kubectl get ingress app-ingress -o wide
```
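The host in the manifest is a placeholder. On IKS, the managed ALB assigns every cluster an Ingress subdomain; one way to look it up and confirm the monolith is reachable through it (the cluster name is again a placeholder):

```bash
# Show the Ingress subdomain assigned to the cluster
ibmcloud ks cluster get --cluster <your-cluster-name> | grep -i "ingress subdomain"

# Hit the monolith through the ALB
curl -i http://<your-ingress-subdomain>/health
```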
At this point: 100 % of traffic still goes to the monolith.
## 5. Choose a low-risk first slice to “strangle”

A good first candidate: `/api/auth/*`.

For this walkthrough we’ll extract the auth API.

**Minimal example endpoint (add to the monolith)**

```js
app.get("/api/auth/ping", (req, res) => {
  res.json({ service: "auth-service", status: "pong" });
});
```
## 6. Build the new Auth service

**Dockerfile**

```dockerfile
FROM node:20-alpine
WORKDIR /app
# Install production dependencies first (assumes a package-lock.json is present)
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 8081
CMD ["node","server.js"]
```
Build & push
docker build -t auth-service:1.0.0 .
docker tag auth-service:1.0.0 //auth-service:1.0.0
docker push //auth-service:1.0.0
## 7. Deploy the Auth service

### 7.1 Deployment manifest (auth-deploy.yaml)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: auth-service
  template:
    metadata:
      labels:
        app: auth-service
    spec:
      containers:
      - name: auth-service
        image: <region>.icr.io/<icr-namespace>/auth-service:1.0.0
        ports:
        - containerPort: 8081
        readinessProbe:
          httpGet:
            path: /ready
            port: 8081
          initialDelaySeconds: 5
          periodSeconds: 5
        livenessProbe:
          httpGet:
            path: /health
            port: 8081
          initialDelaySeconds: 15
          periodSeconds: 10
```
```bash
kubectl apply -f auth-deploy.yaml
kubectl rollout status deploy/auth-service
kubectl get pods -l app=auth-service
```
### 7.2 Service manifest (auth-svc.yaml)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: auth-svc
spec:
  selector:
    app: auth-service
  ports:
  - name: http
    port: 80
    targetPort: 8081
  type: ClusterIP
```

```bash
kubectl apply -f auth-svc.yaml
kubectl get svc auth-svc
```
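Before touching any routing, it’s worth sanity-checking the new service in isolation, mirroring the earlier port-forward test:

```bash
# In one terminal
kubectl port-forward svc/auth-svc 8081:80

# In a second terminal
curl -i http://localhost:8081/api/auth/ping
```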
## 8. Update Ingress to route /api/auth/* to the new service
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - host: <your-ingress-subdomain>
    http:
      paths:
      # New route for auth
      - path: /api/auth
        pathType: Prefix
        backend:
          service:
            name: auth-svc
            port:
              number: 80
      # Fallback to monolith for everything else
      - path: /
        pathType: Prefix
        backend:
          service:
            name: monolith-svc
            port:
              number: 80
```

```bash
kubectl apply -f ingress.yaml
kubectl get ingress app-ingress -o wide
```
Now traffic to /api/auth/* is served by the new auth‑service, while all other requests continue to hit the monolith.
## 9. Gradual rollout & rollback

Validate the new endpoint:

```bash
curl -i https://<your-ingress-subdomain>/api/auth/ping
```

You should get `{"service":"auth-service","status":"pong"}` back from the new service.

Increase the weight if you are using a traffic-splitting controller (see the canary sketch at the end of this section), or simply monitor the new service’s metrics and logs as it takes over the path.

Rollback (if needed):

```bash
# Remove the auth path from the Ingress
kubectl edit ingress app-ingress   # delete the /api/auth block

# Or delete the auth deployment/service
kubectl delete -f auth-deploy.yaml
kubectl delete -f auth-svc.yaml
```

Because the monolith remains untouched behind the Ingress, you can always revert instantly.
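If you want a percentage-based shift instead of an all-or-nothing path cutover, the community NGINX Ingress controller behind the IKS ALB supports canary annotations. A sketch of how that could look, assuming the main Ingress keeps /api/auth pointed at monolith-svc while the canary receives a small share (the name and the 10 % weight are illustrative; confirm your ALB version supports canary annotations before relying on this):

```yaml
# Canary Ingress: routes ~10% of /api/auth traffic to auth-svc,
# the remaining 90% keeps hitting the main Ingress backend (the monolith).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: auth-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  rules:
  - host: <your-ingress-subdomain>
    http:
      paths:
      - path: /api/auth
        pathType: Prefix
        backend:
          service:
            name: auth-svc
            port:
              number: 80
```

Raise the weight gradually; once you are confident, remove the canary and make auth-svc the primary backend for /api/auth as in section 8.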
## 10. Clean-up (when you’re ready)

```bash
kubectl delete -f ingress.yaml
kubectl delete -f service.yaml
kubectl delete -f deployment.yaml
kubectl delete namespace monolith-demo
```
🎉 You’ve just applied the Strangler Fig pattern in a real IBM Cloud Kubernetes environment!
Continue extracting additional functional slices (e.g., /api/orders/*, /api/payments/*) using the same approach until the monolith can be retired safely.
## Keep rollback boring and fast

**Option A – Route back to the monolith**

Edit the Ingress and remove the /api/auth path (or point it back to monolith-svc), then re-apply:

```bash
kubectl apply -f ingress.yaml
```

**Option B – Undo the deployment rollout**

```bash
kubectl rollout undo deploy/auth-service
```
## Repeat the process
- Stabilize the first extracted capability
- Choose the next bounded domain
- Build it as a separate service
- Deploy it
- Route it with Ingress
- Keep rollback available at every step
Over time:
- The monolith shrinks.
- Modernization becomes routine rather than a “big migration”.
- No downtime.
The Strangler Fig pattern works because it respects reality – you modernize without deleting the past.
If you’re sitting on a monolith today, this approach lets you move forward without breaking what already works.