I Built a Multi-Service Kubernetes App and Here's What Actually Broke
Source: Dev.to
Overview
The app consists of five independent components, each running in its own container and managed separately by Kubernetes. No component knows another's pod IP; everything communicates through Kubernetes Service discovery, which mirrors how real microservices communicate in production.
Components
Voting Frontend
The UI where users cast their votes.
Results Frontend
The UI where users view aggregated results.
Redis
Acts as a queue for incoming votes.
PostgreSQL
Provides persistent storage for vote results.
Worker Service
Processes votes asynchronously, reading from Redis and writing results to PostgreSQL.
Kubernetes Resources Used
Deployments, Pods, Services
- Deployments manage the desired state of each component.
- Pods are the smallest deployable units; they are recreated automatically when needed.
- Services provide stable network endpoints and DNS names for pods.
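As a sketch of how these fit together, here is what a Deployment plus its Service might look like for the Redis component. This is a hedged example, not the project's actual manifest: the labels and image tag are illustrative placeholders.

```yaml
# Hypothetical manifest: a Redis Deployment plus the Service
# that gives it a stable DNS name inside the cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7        # placeholder image tag
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis                   # other pods reach Redis as redis:6379
spec:
  type: ClusterIP
  selector:
    app: redis                  # routes to pods carrying this label
  ports:
    - port: 6379
      targetPort: 6379
```

Even if the Redis pod is deleted and recreated with a new IP, the `redis` Service name keeps resolving to whatever pod currently matches the selector.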
Service Types
| Type | Purpose | Typical Use |
|---|---|---|
| ClusterIP | Internal‑only communication | Redis, PostgreSQL, internal APIs |
| NodePort | Exposes a service on each node’s IP (useful for testing) | Temporary exposure of frontends before Ingress |
| Ingress | HTTP‑level routing from outside the cluster | Production‑grade external access |
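For instance, temporarily exposing the voting frontend with a NodePort might look like the following sketch; the Service name, labels, and port numbers are assumptions for illustration:

```yaml
# Hypothetical NodePort Service for the voting frontend.
apiVersion: v1
kind: Service
metadata:
  name: vote
spec:
  type: NodePort
  selector:
    app: vote
  ports:
    - port: 80          # Service port inside the cluster
      targetPort: 80    # container port on the frontend pods
      nodePort: 30080   # exposed on every node's IP; must be in 30000-32767
```

Switching the `type` back to `ClusterIP` (and dropping `nodePort`) makes the same Service internal-only again, which is why NodePort works well for quick testing.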
Ingress and Ingress Controller
An Ingress resource only defines routing rules; it does nothing by itself. You must also run an Ingress Controller that watches those rules and actually processes incoming traffic. Without a controller, the Ingress rules are useless—a lesson I learned the hard way.
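A minimal Ingress for the two frontends could look like the sketch below. The hostnames and backend Service names are placeholders, and `ingressClassName` must match whichever controller you actually install (e.g., `nginx` for ingress-nginx):

```yaml
# Hypothetical Ingress routing two hostnames to two frontend Services.
# These rules do nothing until an Ingress Controller is running.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: voting-app
spec:
  ingressClassName: nginx         # must match your installed controller
  rules:
    - host: vote.example.local    # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: vote        # assumed voting-frontend Service name
                port:
                  number: 80
    - host: result.example.local  # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: result      # assumed results-frontend Service name
                port:
                  number: 80
```

Applying this manifest on a cluster with no controller succeeds silently, which is exactly the trap described above: the rules exist, but nothing is watching them.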
Traffic Flow
Inside the Cluster
- Voting frontend sends votes to Redis using the `redis` Service name.
- Worker reads from Redis using the same Service name.
- Worker writes results to PostgreSQL using the `postgresql` Service name.
- Results frontend reads from PostgreSQL using the same Service name.
All communication uses Service DNS; no pod IPs are hard‑coded.
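In practice this just means application config points at Service names. A hedged fragment of what the worker's pod spec might contain (the environment variable names are hypothetical):

```yaml
# Fragment of a hypothetical worker container spec:
# hosts are Service DNS names, never pod IPs.
env:
  - name: REDIS_HOST
    value: redis        # resolves via cluster DNS to the redis Service
  - name: POSTGRES_HOST
    value: postgresql   # resolves to the postgresql Service
```

If a pod behind either Service is rescheduled, these values never need to change.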
From Browser to Application
- User sends an HTTP request.
- Request hits the Ingress Controller.
- Ingress rules are evaluated.
- Traffic is forwarded to the appropriate Service.
- The Service load‑balances the request across its backend pods.
Ingress operates at the HTTP level and is the production‑grade way to expose applications.
Common Pitfalls and Solutions
| Problem | Solution |
|---|---|
| Ingress rules did nothing | Install an Ingress Controller (e.g., NGINX, Traefik). |
| Pods recreated with new IPs, breaking hard‑coded addresses | Use Services exclusively; they provide stable endpoints. |
| Confusion over Service types | Use ClusterIP for internal traffic, NodePort only for temporary testing, and Ingress for external HTTP traffic. |
| Ingress Controller pod stuck in Pending | Adjust node labels and tolerations so the controller can schedule on the control‑plane node (relevant for local clusters). |
| Cannot access the app from a local container‑based cluster | Port‑forward the Ingress Controller port to your local machine, simulating a cloud load balancer. |
| Service names not resolving across namespaces | Remember that Service DNS is namespace‑scoped; use fully qualified domain names (service.namespace.svc.cluster.local) when needed. |
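The last two fixes in the table can be sketched as commands. The `ingress-nginx` namespace and controller Service name below follow the ingress-nginx project's defaults and may differ in your cluster; `redis.default` assumes the Service lives in the `default` namespace:

```shell
# Simulate a cloud load balancer on a local cluster:
# forward the Ingress Controller's HTTP port to localhost:8080.
kubectl port-forward -n ingress-nginx svc/ingress-nginx-controller 8080:80

# Check cross-namespace DNS resolution with a throwaway pod,
# using the fully qualified Service name.
kubectl run dns-test --rm -it --image=busybox --restart=Never -- \
  nslookup redis.default.svc.cluster.local
```

If the short name `redis` fails from another namespace but the FQDN resolves, the problem is namespace scoping rather than the Service itself.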
Lessons Learned
- Kubernetes networking is service‑driven, not pod‑driven.
- An Ingress requires both rules and a controller to function.
- Local clusters behave differently from managed cloud clusters (node scheduling, load balancing, etc.).
- Service discovery happens through DNS, never through hard‑coded IPs.
- Effective debugging demands understanding both the platform and the application architecture.
Once this mental model clicked, advanced topics (network policies, service meshes, etc.) started making sense.
How to Get the Code
The full source code and step‑by‑step setup instructions are available in the repository:
kubernetes-sample-voting-app-project1
If you’re learning Kubernetes, pick a multi‑service application, deploy it, then deliberately break it. Fixing the issues will give you the deep understanding you need. What’s been the hardest part of Kubernetes for you? Drop a comment!
Tags: kubernetes devops learning microservices