CKA HANDS-ON LABS
Source: Dev.to
Topic Coverage
- Startup / Readiness / Liveness Probes
- ConfigMaps
- Secrets
- Configuration mistakes & fixes
Environment
- Minikube
- kubectl
- Local machine (macOS / Linux / Windows)
minikube start
kubectl get nodes
Expected: STATUS: Ready
Lab 1 – Startup Probe (Fixing Endless Restarts)
1️⃣ Broken manifest (lab1-broken.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: slow-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: slow
  template:
    metadata:
      labels:
        app: slow
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c"]
        args:
        - sleep 30; touch /tmp/started; echo STARTED; sleep 3600
        livenessProbe:
          exec:
            # fails until the app creates /tmp/started (~30 s after start)
            command: ["sh", "-c", "cat /tmp/started"]
          initialDelaySeconds: 5
          periodSeconds: 5
# Apply the broken manifest
kubectl apply -f lab1-broken.yaml
# Verify pod status
kubectl get pods
Result – The pod ends up in CrashLoopBackOff because the liveness probe starts before the application finishes its startup sequence.
# Inspect why the pod is failing
kubectl describe pod <pod-name>
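To make the failure easier to watch, a couple of extra read-only checks help (a sketch, assuming the app=slow label from the manifest above):
# Watch the RESTARTS column climb as the liveness probe keeps killing the container
kubectl get pods -l app=slow -w
# List the recent events (Unhealthy, Killing, BackOff) for the pod
POD=$(kubectl get pods -l app=slow -o jsonpath='{.items[0].metadata.name}')
kubectl get events --field-selector involvedObject.name="$POD" --sort-by=.lastTimestamp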
2️⃣ Fixed manifest (lab1-fixed.yaml)
Add a startupProbe so the liveness probe is blocked until the container is ready.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: slow-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: slow
  template:
    metadata:
      labels:
        app: slow
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c"]
        args:
        - sleep 30; touch /tmp/started; echo STARTED; sleep 3600
        startupProbe:
          exec:
            # holds off the liveness probe until the app has started
            command: ["sh", "-c", "cat /tmp/started"]
          failureThreshold: 40   # 40 × 1 s = 40 s total wait
          periodSeconds: 1
        livenessProbe:
          exec:
            command: ["sh", "-c", "cat /tmp/started"]
          initialDelaySeconds: 5
          periodSeconds: 5
# Apply the fixed manifest
kubectl apply -f lab1-fixed.yaml
# Verify that the pod starts correctly
kubectl get pods
Result – The pod starts successfully with no restarts. The startup probe holds off the liveness probe until the container reports it is ready.
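To double-check, the restart counter should stay at zero once the startup probe gates the liveness checks (same app=slow label assumed):
# Print the container restart count for the slow-app pod
kubectl get pods -l app=slow -o jsonpath='{.items[0].status.containerStatuses[0].restartCount}'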
📚 Why It Matters
- Slow‑starting applications (e.g., Java services, databases, migrations) often need more time before they can answer health checks.
- Without a startup probe, a premature liveness probe can cause unnecessary restarts and put the pod into a crash loop.
- A correctly configured startup probe gives a slow container time to finish starting before liveness checks begin, avoiding repeated restarts; until it succeeds, the pod is also kept out of the Service endpoints.
Lab 2 – Readiness Probe (Controlling Traffic)
1. Broken Manifests
Deployment (lab2-broken.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ready-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ready
  template:
    metadata:
      labels:
        app: ready
    spec:
      containers:
      - name: app
        image: hashicorp/http-echo:0.2.3
        args:
        - "-listen=:8080"
        - "-text=HELLO"
        ports:
        - containerPort: 8080
Service (service.yaml)
apiVersion: v1
kind: Service
metadata:
  name: ready-svc
spec:
  selector:
    app: ready
  ports:
  - port: 80
    targetPort: 8080
Apply the manifests
kubectl apply -f lab2-broken.yaml
kubectl apply -f service.yaml
Observation
- The pod receives traffic immediately, even if the application is unhealthy.
- There is no mechanism to stop traffic while the pod is in a bad state.
kubectl get endpoints ready-svc
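For a more direct view, the pod IP shows up in the endpoint list right away, since nothing gates readiness yet (a quick check against the Service above):
# Prints the pod IP(s) currently backing ready-svc
kubectl get endpoints ready-svc -o jsonpath='{.subsets[*].addresses[*].ip}'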
2. Adding a Readiness Probe
Update the deployment to include a readiness probe:
readinessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 3
Tip: Insert the readinessProbe block under the container definition in lab2-broken.yaml, then re-apply the file.
Apply the updated deployment
kubectl apply -f lab2-broken.yaml # now contains the readinessProbe
Verify the changes
kubectl get pods
kubectl get endpoints ready-svc
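Watching the endpoints while the new pod comes up makes the effect visible: the pod IP only appears once the readiness probe has passed (roughly initialDelaySeconds after start):
# Watch the endpoint list update as readiness changes
kubectl get endpoints ready-svc -w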
Result
- When the readiness probe fails, the pod remains in the Running state but is removed from the Service's endpoint list.
- Consequently, traffic stops being routed to the unhealthy pod automatically.
3. Why Readiness Probes Matter
| Benefit | Explanation |
|---|---|
| Zero‑downtime deployments | New pods are added only after they pass the readiness check, preventing users from hitting a broken instance. |
| Safer rollouts | A rolling update waits for new pods to become ready before old ones are removed, so a broken release stalls instead of replacing healthy pods. |
| Traffic control | Services route traffic only to pods that are ready, reducing error rates. |
Next steps: Experiment with different probe types (httpGet, exec) and tune initialDelaySeconds, periodSeconds, and failureThreshold to match your application’s startup characteristics.
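kubectl explain is a convenient way to discover every tunable a probe accepts while experimenting:
# Show all readiness probe fields with their documentation
kubectl explain deployment.spec.template.spec.containers.readinessProbe --recursive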
Lab 3 – Liveness Probe (Self‑healing)
1. Broken manifest (lab3-broken.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hang-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hang
  template:
    metadata:
      labels:
        app: hang
    spec:
      containers:
      - name: app
        image: nginx
Apply the manifest:
kubectl apply -f lab3-broken.yaml
Simulate a hung container
# Get the pod name (assumes the label `app=hang` is unique)
POD=$(kubectl get pods -l app=hang -o jsonpath='{.items[0].metadata.name}')
# Stop PID 1 inside the container to simulate a hang
kubectl exec -it "$POD" -- kill -STOP 1
# Verify the pod status
kubectl get pods
Result: The pod stays Running even though the application inside is dead.
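One way to confirm that nothing is watching the container (reusing the $POD variable from above):
# Restart count never moves because no liveness probe is configured
kubectl get pod "$POD" -o jsonpath='{.status.containerStatuses[0].restartCount}'
# Prints nothing for the broken manifest: there is no Liveness line to find
kubectl describe pod "$POD" | grep -i liveness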
2. Adding a liveness probe (lab3-fixed.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hang-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hang
  template:
    metadata:
      labels:
        app: hang
    spec:
      containers:
      - name: app
        image: nginx
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 5
Apply the updated deployment:
kubectl apply -f lab3-fixed.yaml
kubectl get pods
Result: When the probe fails repeatedly, Kubernetes automatically restarts the container. Liveness probes provide self-healing; without one, a hung container stays Running indefinitely and Kubernetes does nothing about it.
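To see the self-healing happen, one option is to repeat the hang simulation from the broken lab and follow the restart counter (with the default failureThreshold of 3 and the configured periodSeconds of 5, a restart is triggered after roughly 15 seconds of consecutive failures):
# Watch the RESTARTS column; it increments each time the probe exhausts its failure threshold
kubectl get pods -l app=hang -w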
ConfigMap Behavior
1. Creating a ConfigMap
kubectl create configmap app-config --from-literal=APP_COLOR=blue
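A quick check of what was stored:
# Show the ConfigMap data
kubectl get configmap app-config -o yaml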
2. Using the ConfigMap as an environment variable
env:
- name: APP_COLOR
  valueFrom:
    configMapKeyRef:
      name: app-config
      key: APP_COLOR
Deploy the pod, then edit the ConfigMap:
kubectl edit configmap app-config # change APP_COLOR value
Observation: The running pod does not pick up the new value automatically.
kubectl rollout restart deployment <deployment-name>
Key point: Environment variables are immutable for a running container; a restart is required.
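Putting the whole cycle together, a minimal sketch (the <deployment-name> placeholder stands for whichever Deployment consumes the ConfigMap, and printenv assumes the container image ships it):
# Change the value without opening an editor
kubectl patch configmap app-config --type merge -p '{"data":{"APP_COLOR":"green"}}'
# Environment variables are baked in at container start, so roll the pods
kubectl rollout restart deployment <deployment-name>
# After the new pod is Ready, the container sees the new value
kubectl exec deploy/<deployment-name> -- printenv APP_COLOR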
3. Mounting a ConfigMap as a file
# In the pod spec:
volumes:
- name: config
  configMap:
    name: app-config
# In the container spec:
volumeMounts:
- name: config
  mountPath: /etc/config
After editing the ConfigMap, the mounted file inside the pod is updated automatically after a short delay (the kubelet syncs it periodically), but the application must re-read the file itself; Kubernetes does not restart the pod.
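To watch the propagation (a sketch, with <pod-name> standing in for a pod that mounts the volume):
# Re-run until the kubelet syncs the new ConfigMap contents, typically within a minute
kubectl exec <pod-name> -- cat /etc/config/APP_COLOR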
Secret Handling
Creating a secret
kubectl create secret generic db-secret \
--from-literal=DB_PASS=secret123
Using the secret as an environment variable
env:
- name: DB_PASS
  valueFrom:
    secretKeyRef:
      name: db-secret
      key: DB_PASS
Inspecting the secret
kubectl get secret db-secret -o yaml
Observation: Secret values are only Base64-encoded, not encrypted. kubectl output shows the encoded data, and anyone with read access to the Secret object can decode it back to plain text.
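Base64 is an encoding, not encryption, so recovering the value takes a single line:
# Decode the stored password
kubectl get secret db-secret -o jsonpath='{.data.DB_PASS}' | base64 -d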
Best practices
- Treat secrets as sensitive data; never commit them to source control.
- Use RBAC to restrict access to secrets.
- Prefer external secret‑management solutions when possible.
RBAC (Role‑Based Access Control)
- Define Role or ClusterRole objects that grant only the permissions a workload truly needs.
- Bind those roles to users, groups, or service accounts using RoleBinding or ClusterRoleBinding.
- Never run workloads with cluster-admin privileges.
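As a concrete sketch (the names secret-reader and app-sa are illustrative, not part of the labs above):
# A namespaced role that can only read Secrets
kubectl create role secret-reader --verb=get,list --resource=secrets
# Bind it to a workload's service account
kubectl create rolebinding app-secret-reader --role=secret-reader --serviceaccount=default:app-sa
# Verify the effective permissions
kubectl auth can-i get secrets --as=system:serviceaccount:default:app-sa
kubectl auth can-i delete pods --as=system:serviceaccount:default:app-sa   # expected: no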
Summary
- Startup probes: delay liveness checks until the container is truly ready, preventing endless restarts for slow‑starting apps.
- Readiness probes: control whether a pod receives traffic, enabling zero‑downtime updates.
- Liveness probes: provide self‑healing by restarting unhealthy containers.
- ConfigMap changes to environment variables require a pod restart; file mounts update automatically but the application must reload the data.
- Secrets are only base64-encoded, not encrypted; handle them with care and restrict access with RBAC.
Troubleshooting tips
- Use kubectl describe to inspect pod and probe configurations.
- Use kubectl logs to view container output.
- Use kubectl exec to run commands inside a running container.
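Typical full forms of those commands (placeholders in angle brackets):
kubectl describe pod <pod-name>                     # events, probe config, restart reasons
kubectl logs <pod-name> -c <container> --previous   # output of the previous, crashed container
kubectl exec -it <pod-name> -- sh                   # interactive shell inside a running container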