Part 8: Helm - Packaging Kubernetes Applications
Source: Dev.to
Series: From “Just Put It on a Server” to Production DevOps
Reading time: 15 minutes
Level: Intermediate
The YAML Duplication Problem
In Part 6 we deployed our SSPP platform to Kubernetes. It works! But look at your k8s/ directory:
```
k8s/
├── api-deployment.yaml
├── api-service.yaml
├── api-configmap.yaml
├── worker-deployment.yaml
├── worker-configmap.yaml
├── redis-deployment.yaml
├── redis-service.yaml
├── postgres-statefulset.yaml
├── postgres-service.yaml
└── …
```
Now your manager says:
“We need dev, staging, and prod environments.”
Your first thought is to copy‑paste all YAML files three times:
```
k8s/
├── dev/
│   ├── api-deployment.yaml      # replicas: 1, resources: small
│   ├── api-service.yaml
│   └── …
├── staging/
│   ├── api-deployment.yaml      # replicas: 2, resources: medium
│   ├── api-service.yaml
│   └── …
└── prod/
    ├── api-deployment.yaml      # replicas: 5, resources: large
    ├── api-service.yaml
    └── …
```
What changes between environments?
- Replica counts
- Resource limits
- Image tags
- Database URLs
- Domain names
- Storage sizes
What stays the same?
- Container ports
- Health‑check paths
- Service types
- Label selectors
- Volume mount paths
You’re copying 80% identical YAML and changing 20%.
Then a bug is found: the API health‑check path should be /health instead of /healthz.
Now you need to update it in three places and you miss one. Staging is broken. Users are angry.
This is the YAML duplication problem.
What is Helm?
Helm is a package manager for Kubernetes applications.
Beginner mental model
Helm is like apt‑get (Linux) or Homebrew (macOS) for Kubernetes apps.
Instead of managing 20+ YAML files per environment, you create a Helm chart—a template with variables:
```yaml
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicas }}
  selector:                 # required in apps/v1
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      containers:
        - name: api
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          resources:
            limits:
              cpu: {{ .Values.resources.cpu }}
              memory: {{ .Values.resources.memory }}
```
Values files per environment
```yaml
# values-dev.yaml
name: sspp-api
replicas: 1
image:
  repository: davidbrown77/sspp-api
  tag: dev-latest
resources:
  cpu: "500m"
  memory: "512Mi"
```
```yaml
# values-prod.yaml
name: sspp-api
replicas: 5
image:
  repository: davidbrown77/sspp-api
  tag: v1.2.3
resources:
  cpu: "2000m"
  memory: "4Gi"
```
Deploy with Helm
```bash
# Dev
helm install sspp-api ./charts/api -f values-dev.yaml

# Prod
helm install sspp-api ./charts/api -f values-prod.yaml
```
Same template, different values. DRY (Don’t Repeat Yourself) for Kubernetes.
Helm Concepts
Charts
A Helm chart is a package of Kubernetes manifests.
Structure
```
charts/api/
├── Chart.yaml          # metadata (name, version)
├── values.yaml         # default values
├── templates/          # templated Kubernetes manifests
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── configmap.yaml
│   └── ingress.yaml
└── charts/             # dependencies (sub-charts)
```
Values
values.yaml defines default configuration:
```yaml
replicaCount: 3

image:
  repository: davidbrown77/sspp-api
  tag: latest
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 3000

resources:
  limits:
    cpu: 1000m
    memory: 1Gi
  requests:
    cpu: 500m
    memory: 512Mi
```
Override values
```bash
helm install sspp-api ./charts/api \
  --set replicaCount=5 \
  --set image.tag=v1.2.3
```
Or use a separate values file:
```bash
helm install sspp-api ./charts/api -f prod-values.yaml
```
Templates
Templates use Go templating syntax:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "api.fullname" . }}
  labels:
    {{- include "api.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "api.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "api.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - name: http
              containerPort: {{ .Values.service.port }}
```
Template functions
- `{{ .Values.replicaCount }}` – access values
- `{{ include "helper" . }}` – reuse named templates
- `{{- if .Values.enabled }}` – conditionals
- `{{- range .Values.items }}` – loops
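The `include "api.fullname"` and `api.labels` calls in the deployment template refer to named templates that `helm create` scaffolds into `templates/_helpers.tpl`. A minimal sketch of what those helpers might look like (the exact names here are assumed to match the chart above):

```yaml
{{/* templates/_helpers.tpl — named templates reused across manifests */}}

{{/* Fully qualified name: release + chart name, truncated to the 63-char DNS limit */}}
{{- define "api.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/* Common labels applied to every object */}}
{{- define "api.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}

{{/* Labels used in selectors (must stay stable across upgrades) */}}
{{- define "api.selectorLabels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
```

Keeping selector labels in one helper matters because a Deployment's selector is immutable: if it drifts between upgrades, the upgrade fails.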
Creating a Helm Chart for SSPP API
Initialize the chart
```bash
cd infrastructure
helm create charts/api
```
Define Chart.yaml
```yaml
apiVersion: v2
name: sspp-api
description: Sales Signal Processing Platform API
type: application
version: 1.0.0
appVersion: "1.0.0"
```
Create templates
templates/deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
  labels:
    app: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      containers:
        - name: {{ .Values.name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: {{ .Values.containerPort }}
          resources:
            limits:
              cpu: {{ .Values.resources.cpu }}
              memory: {{ .Values.resources.memory }}
```
(Add additional templates for Service, ConfigMap, Ingress, etc., following the same pattern.)
Deploy
```bash
# Dev
helm install sspp-api ./charts/api -f values-dev.yaml

# Staging
helm install sspp-api ./charts/api -f values-staging.yaml

# Prod
helm install sspp-api ./charts/api -f values-prod.yaml
```
Now you have a single source of truth for your manifests, with environment‑specific values kept separate—eliminating the YAML duplication problem.
Helm Charts for SSPP API & Worker
templates/deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
  labels:
    app: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      containers:
        - name: api
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          ports:
            - containerPort: 3000
          env:
            - name: DATABASE_URL
              value: {{ .Values.database.url }}
            - name: REDIS_URL
              value: {{ .Values.redis.url }}
            {{- if .Values.env }}
            {{- range $key, $value := .Values.env }}
            - name: {{ $key }}
              value: {{ $value | quote }}
            {{- end }}
            {{- end }}
          resources:
            limits:
              cpu: {{ .Values.resources.limits.cpu }}
              memory: {{ .Values.resources.limits.memory }}
            requests:
              cpu: {{ .Values.resources.requests.cpu }}
              memory: {{ .Values.resources.requests.memory }}
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
```
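One caveat with the template above: `database.url` puts credentials in plain text in values files, which usually end up in Git. A hedged alternative is to reference a pre-created Kubernetes Secret instead. In this sketch the Secret name `sspp-secrets` and key `database-url` are assumptions, not part of the original chart:

```yaml
# Sketch: pull DATABASE_URL from a Secret instead of values.yaml.
# Assumes "kubectl create secret generic sspp-secrets --from-literal=database-url=…" was run.
env:
  - name: DATABASE_URL
    valueFrom:
      secretKeyRef:
        name: sspp-secrets
        key: database-url
```

With this approach the values files only carry non-sensitive configuration, and the Secret can be managed per environment outside the chart.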
templates/service.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.name }}
  labels:
    app: {{ .Values.name }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: 3000
      protocol: TCP
      name: http
  selector:
    app: {{ .Values.name }}
```
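The chart structure earlier also lists `templates/ingress.yaml`. A minimal sketch of what it could look like; note that the `ingress.host` value is an assumption and would need to be added to each environment's values file:

```yaml
# templates/ingress.yaml (sketch — .Values.ingress.host is assumed, not defined above)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Values.name }}
spec:
  rules:
    - host: {{ .Values.ingress.host }}   # e.g. api.example.com, different per environment
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ .Values.name }}
                port:
                  number: {{ .Values.service.port }}
```

Since domain names are one of the things that change between environments, the host belongs in `values-dev.yaml`/`values-prod.yaml` rather than in the template.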
Values Files
values.yaml
```yaml
name: sspp-api
replicaCount: 3

image:
  repository: davidbrown77/sspp-api
  tag: latest
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 3000

database:
  url: "postgresql://user:pass@postgres:5432/sspp"

redis:
  url: "redis://redis:6379"

resources:
  limits:
    cpu: "1000m"
    memory: "1Gi"
  requests:
    cpu: "500m"
    memory: "512Mi"

env: {}
```
values-dev.yaml
```yaml
name: sspp-api
replicaCount: 1

image:
  tag: dev-latest

resources:
  limits:
    cpu: "500m"
    memory: "512Mi"
  requests:
    cpu: "250m"
    memory: "256Mi"
```
values-prod.yaml
```yaml
name: sspp-api
replicaCount: 5

image:
  tag: v1.2.3
  pullPolicy: Always

resources:
  limits:
    cpu: "2000m"
    memory: "4Gi"
  requests:
    cpu: "1000m"
    memory: "2Gi"

env:
  LOG_LEVEL: "info"
  NODE_ENV: "production"
```
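The deploy commands earlier also reference a `values-staging.yaml`. A plausible middle-ground sketch; the replica count, tag, and resource numbers here are illustrative assumptions, not from the original:

```yaml
# values-staging.yaml (sketch — numbers are assumptions)
name: sspp-api
replicaCount: 2

image:
  tag: staging-latest

resources:
  limits:
    cpu: "1000m"
    memory: "1Gi"
  requests:
    cpu: "500m"
    memory: "512Mi"
```

Only the keys that differ from `values.yaml` need to appear; Helm merges the override file on top of the chart defaults.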
Deploying with Helm
Install
```bash
# Dev environment
helm install sspp-api ./charts/api -f values-dev.yaml

# Prod environment (different namespace)
helm install sspp-api ./charts/api \
  -f values-prod.yaml \
  -n production \
  --create-namespace
```
Upgrade
```bash
# Update image tag
helm upgrade sspp-api ./charts/api \
  --set image.tag=v1.3.0 \
  --reuse-values
```
Rollback
```bash
# Roll back to the previous release
helm rollback sspp-api

# Roll back to a specific revision
helm rollback sspp-api 3
```
List Releases
```bash
helm list
helm list -n production
```
Get Values
```bash
# See user-supplied values for the release
helm get values sspp-api

# See all computed values (including chart defaults)
helm get values sspp-api --all
```
Helm for Worker Service
Create a similar chart for the worker:
```bash
helm create charts/worker
```
Key differences from API
- No Service – the worker doesn’t expose HTTP.
- Different environment variables.
- Different resource requirements.
values.yaml (worker)
```yaml
name: sspp-worker
replicaCount: 2

image:
  repository: davidbrown77/sspp-worker
  tag: latest

redis:
  url: "redis://redis:6379"

database:
  url: "postgresql://user:pass@postgres:5432/sspp"

resources:
  limits:
    cpu: "1000m"
    memory: "2Gi"
  requests:
    cpu: "500m"
    memory: "1Gi"
```
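Since the worker exposes no HTTP endpoint, its deployment template is a stripped-down version of the API's: no `ports`, no Service, and no HTTP probes. A sketch of what it might look like:

```yaml
# charts/worker/templates/deployment.yaml (sketch — no ports, Service, or HTTP probes)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
  labels:
    app: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      containers:
        - name: worker
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          env:
            - name: REDIS_URL
              value: {{ .Values.redis.url }}
            - name: DATABASE_URL
              value: {{ .Values.database.url }}
          resources:
            limits:
              cpu: {{ .Values.resources.limits.cpu }}
              memory: {{ .Values.resources.limits.memory }}
            requests:
              cpu: {{ .Values.resources.requests.cpu }}
              memory: {{ .Values.resources.requests.memory }}
```

If the worker needs health checking, an `exec`-based liveness probe (e.g. checking a heartbeat file) is a common substitute for HTTP probes.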
Benefits of Helm
- ✅ DRY Principle – One template, multiple environments. Change once, deploy everywhere.
- ✅ Version Control – Track changes to chart versions; rollback to any previous release.
- ✅ Parameterization – Override any value at deployment; no hard‑coded configuration.
- ✅ Package Management – Share charts via Helm repositories; reuse community charts (PostgreSQL, Redis, etc.).
- ✅ Release Management – Track deployment history; easy rollbacks.
What’s Next?
Helm solves packaging and templating, but we still have deployment problems:
- ❌ Manual deployments – someone must run `helm upgrade`.
- ❌ No Git sync – cluster state can drift from Git.
- ❌ No automation – still need CI/CD triggers.
- ❌ Configuration drift – manual `kubectl` changes go untracked.
In Part 9, we’ll add GitOps with Argo CD.
You’ll learn:
- Git as the single source of truth (not your laptop).
- Automatic sync from Git → Cluster.
- Self‑healing (Argo CD reverts manual changes).
- One‑click rollbacks through UI.
- Deployment history and audit trails.
- Progressive delivery (blue/green, canary).
Spoiler: Push to Git → Argo CD deploys automatically. This is GitOps.
Try It Yourself
Create Helm charts for all SSPP services:
- API chart – Deployment + Service + Ingress.
- Worker chart – Deployment only.
- Redis chart – use `bitnami/redis` from Helm Hub (or create your own).
- PostgreSQL chart – use `bitnami/postgresql` from Helm Hub (or create your own).
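For the Redis and PostgreSQL items, one option is declaring them as chart dependencies rather than installing them separately. A sketch of what that could look like in an umbrella chart's `Chart.yaml`; the version numbers are placeholders, not tested pins:

```yaml
# Chart.yaml dependencies sketch — version constraints are illustrative
apiVersion: v2
name: sspp
version: 1.0.0
dependencies:
  - name: redis
    version: "18.x.x"
    repository: https://charts.bitnami.com/bitnami
  - name: postgresql
    version: "13.x.x"
    repository: https://charts.bitnami.com/bitnami
```

Running `helm dependency update` then fetches the packaged sub-charts into the `charts/` directory, and their values can be overridden under `redis:` and `postgresql:` keys in your values files.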
Happy charting! 🚀
Test multi-environment setup

```bash
# Deploy dev
helm install sspp ./charts/api -f values-dev.yaml -n dev --create-namespace

# Deploy prod
helm install sspp ./charts/api -f values-prod.yaml -n production --create-namespace

# Compare
kubectl get pods -n dev
kubectl get pods -n production
```

Bonus: Package your chart and push to a Helm repository (GitHub Pages, ChartMuseum, or OCI registry).
Next: Automating Deployments
In [Part 9](//./09-argocd-gitops.md) we’ll solve the manual-deployment problem with GitOps and Argo CD.