CKA DEPLOYMENT & SERVICE LAB #2
Source: Dev.to
Global Rules
- Do NOT use kubectl edit
- Do NOT recreate resources unless asked
- Fix issues using kubectl patch/set/scale/rollout
- A namespace must be used
- Treat this like a real CKA exam
Baseline (Run on Every Machine)
Step 0 – Start cluster
minikube start --driver=docker
kubectl create namespace cka-lab
kubectl config set-context --current --namespace=cka-lab
Deploy the application
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
EOF
Expose the application
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP
EOF
Generate traffic
kubectl run traffic --image=busybox -it --rm -- sh
Inside the pod:
while true; do
wget -qO- web-svc
echo "-----"
sleep 1
done
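If keeping an interactive shell open is inconvenient, the same loop can run as a detached pod instead (a sketch using the same busybox image; the pod name `traffic` assumes the interactive variant above is not also running):

```shell
# Run the traffic loop as a non-interactive pod and follow its output.
kubectl run traffic --image=busybox --restart=Never -- \
  sh -c 'while true; do wget -qO- web-svc; echo "-----"; sleep 1; done'
kubectl logs -f traffic
```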
Tasks
Fix the deployment strategy
kubectl patch deployment web-app -p '
{
  "spec": {
    "strategy": {
      "type": "RollingUpdate",
      "rollingUpdate": {
        "maxUnavailable": 0,
        "maxSurge": 1
      }
    }
  }
}'
- No downtime during image updates
- Traffic loop never stops
- At least one pod always available
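The criteria above can be verified from a second terminal during any image update (a sketch, assuming the cka-lab namespace from the baseline): old pods should terminate only after their replacements are Ready, and the available count should never drop below the replica count.

```shell
# Watch pod churn live while an update rolls out.
kubectl get pods -l app=web -w
# Poll the deployment's available count in parallel.
kubectl get deployment web-app -o jsonpath='{.status.availableReplicas}'
```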
Simulate a failing image
kubectl set image deployment/web-app nginx=nginx:doesnotexist
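One way to observe the failure before fixing it (a sketch; pod names will differ): the new ReplicaSet's pods should sit in ErrImagePull / ImagePullBackOff while the old pods keep serving.

```shell
# The broken pods never become Ready.
kubectl get pods -l app=web
# rollout status should time out rather than complete.
kubectl rollout status deployment/web-app --timeout=30s
# The Progressing condition explains why the rollout is stuck.
kubectl describe deployment web-app | grep -A4 Conditions
```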
Restore the deployment to a healthy state (without deleting)
kubectl rollout undo deployment/web-app
- All pods Running, no ImagePullBackOff or CrashLoopBackOff
- Traffic loop remains stable
Ensure traffic only goes to ready pods (no cluster restart)
Patch the service selector to a wrong value:
kubectl patch svc web-svc -p '
{
  "spec": {
    "selector": {
      "app": "wrong"
    }
  }
}'
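With the selector pointing at a label no pod carries, the service's endpoints empty out and the traffic loop's requests start failing; this can be confirmed before restoring the selector:

```shell
# ENDPOINTS column should be empty (or <none>) while the selector is wrong.
kubectl get endpoints web-svc
```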
Restore correct selector (without recreating the service)
kubectl patch svc web-svc -p '
{
  "spec": {
    "selector": {
      "app": "web"
    }
  }
}'
- Endpoints repopulated, traffic resumes
Deploy a canary version
The canary pods must carry the service's app=web label (plus a track=canary label to tell them apart), so a manifest is used here: kubectl create deployment would label the pods app=web-app-canary, keeping them out of web-svc's rotation.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
      track: canary
  template:
    metadata:
      labels:
        app: web
        track: canary
    spec:
      containers:
      - name: nginx
        image: nginx:1.27
EOF
- Canary receives traffic alongside the stable workload
- No service recreation
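A rough way to confirm the split, run from a pod on the cluster network (e.g., the traffic pod) and assuming its wget supports -S (GNU wget does; recent BusyBox builds also include it): nginx stamps its version into the Server response header, so a mix of nginx/1.25 and nginx/1.27 shows the canary is in rotation.

```shell
# Five endpoint addresses expected: four stable pods plus one canary.
kubectl get endpoints web-svc
# Inside a cluster pod: sample 20 responses and count per server version.
for i in $(seq 1 20); do
  wget -qSO /dev/null http://web-svc 2>&1 | grep -i 'Server:'
done | sort | uniq -c
```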
Remove the faulty canary immediately
kubectl delete deployment web-app-canary
- Stable version continues serving traffic uninterrupted
Pause rollout, update image, and finish rollout
kubectl rollout pause deployment web-app
kubectl set image deployment/web-app nginx=nginx:1.26
- Identify why the rollout is stuck (a paused deployment records the new pod template but does not roll it out)
- Complete the deployment:
kubectl rollout resume deployment web-app
- New image fully deployed, rollout no longer paused
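The paused state and the final image can be checked at each step (a sketch): while paused, rollout status simply waits; after resuming, the pod template should carry the new image.

```shell
# While paused this hangs waiting for progress; Ctrl+C, then resume.
kubectl rollout status deployment/web-app
# After resuming, confirm the new image landed.
kubectl get deployment web-app \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
# → nginx:1.26 once the rollout completes
```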
Expose the service via NodePort (no recreation)
kubectl patch svc web-svc -p '
{
  "spec": {
    "type": "NodePort"
  }
}'
- Application reachable from a browser via the assigned NodePort
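With the docker driver, the node IP is often not directly routable from the host (notably on macOS and Windows); minikube can bridge this by printing a URL that tunnels to the NodePort:

```shell
# Read the assigned NodePort, then get a host-reachable URL from minikube.
kubectl get svc web-svc -o jsonpath='{.spec.ports[0].nodePort}'
minikube service web-svc -n cka-lab --url
```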
Scale down to zero and restore service
kubectl scale deployment web-app --replicas=0
Restore the original replica count (without recreating the deployment):
kubectl scale deployment web-app --replicas=4
- Pods run, traffic restored
Cleanup
kubectl delete namespace cka-lab
- Namespace and all lab resources must be gone
kubectl get all
# → No resources found
Finally, point the kubectl context back at an existing namespace:
kubectl config set-context --current --namespace=default