I Built a Multi-Service Kubernetes App and Here's What Actually Broke

Published: January 31, 2026 at 01:32 AM EST
3 min read
Source: Dev.to

Overview

The app consists of five independent components, each running in its own container and managed separately by Kubernetes. No component knows another's pod IP; all communication goes through Kubernetes service discovery. This mirrors how real microservices work in production.

Components

Voting Frontend

The UI where users cast their votes.

Results Frontend

The UI where users view aggregated results.

Redis

Acts as a queue for incoming votes.

PostgreSQL

Provides persistent storage for vote results.

Worker Service

Processes votes asynchronously, reading from Redis and writing results to PostgreSQL.

Kubernetes Resources Used

Deployments, Pods, Services

  • Deployments manage the desired state of each component.
  • Pods are the smallest deployable units; they are recreated automatically when needed.
  • Services provide stable network endpoints and DNS names for pods.
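As a concrete sketch of these three resources working together, here is a minimal Deployment for the worker component (the name and image are placeholders, not taken from the actual repository):

```yaml
# Hypothetical Deployment for the worker service.
# Kubernetes keeps one pod matching this template running,
# recreating it if the node or pod dies.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: worker          # ties the Deployment to pods with this label
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
        - name: worker
          image: example/worker:latest   # placeholder image name
```

The `selector` labels are the glue: the Deployment manages any pod carrying `app: worker`, and a Service can use the same label to find those pods.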

Service Types

| Type | Purpose | Typical Use |
| --- | --- | --- |
| ClusterIP | Internal‑only communication | Redis, PostgreSQL, internal APIs |
| NodePort | Exposes a service on each node's IP (useful for testing) | Temporary exposure of frontends before Ingress |
| Ingress | HTTP‑level routing from outside the cluster | Production‑grade external access |

(Strictly speaking, Ingress is a separate API resource rather than a Service type, but it fills the external‑access role, so it belongs in this comparison.)
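A ClusterIP Service is the default and the simplest case. Here is a sketch of what the Redis Service might look like (the labels and the Service name `redis` are assumed to match the Deployment):

```yaml
# ClusterIP Service giving Redis a stable, internal-only DNS name: "redis".
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  type: ClusterIP          # the default; omit "type" and you get this
  selector:
    app: redis             # routes to pods carrying this label
  ports:
    - port: 6379           # port the Service listens on
      targetPort: 6379     # port on the Redis container
```

Switching `type: ClusterIP` to `type: NodePort` is all it takes to expose the same Service on every node's IP for quick testing.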

Ingress and Ingress Controller

An Ingress resource only defines routing rules; it does nothing by itself. You must also run an Ingress Controller that watches those rules and actually processes incoming traffic. Without a controller, the Ingress rules are useless—a lesson I learned the hard way.
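For illustration, a minimal Ingress for this app might look like the following (the service names, ports, and the `nginx` class are assumptions, not taken from the repository):

```yaml
# Hypothetical Ingress: routes "/" to the voting frontend.
# These rules do NOTHING unless a matching Ingress Controller is installed.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: voting-app
spec:
  ingressClassName: nginx      # must match an installed controller
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: voting-frontend   # assumed Service name
                port:
                  number: 80
```

The `ingressClassName` field is the link between the rules and the controller: if no controller claims that class, the resource sits in the cluster doing nothing.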

Traffic Flow

Inside the Cluster

  1. Voting frontend sends votes to Redis using the redis Service name.
  2. Worker reads from Redis using the same Service name.
  3. Worker writes results to PostgreSQL using the postgresql Service name.
  4. Results frontend reads from PostgreSQL using the postgresql Service name.

All communication uses Service DNS; no pod IPs are hard‑coded.
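In practice this usually means the connection hosts are just Service names, injected as environment variables. A sketch of the worker's container spec fragment (variable names assumed):

```yaml
# Fragment of a container spec: the worker reaches its dependencies
# purely by Service DNS names, never by pod IPs.
env:
  - name: REDIS_HOST
    value: redis          # resolved by cluster DNS to the redis Service
  - name: POSTGRES_HOST
    value: postgresql     # likewise for the postgresql Service
```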

From Browser to Application

  1. User sends an HTTP request.
  2. Request hits the Ingress Controller.
  3. Ingress rules are evaluated.
  4. Traffic is forwarded to the appropriate Service.
  5. The Service load‑balances the request across its backend pods.

Ingress operates at the HTTP level and is the production‑grade way to expose applications.

Common Pitfalls and Solutions

| Problem | Solution |
| --- | --- |
| Ingress rules did nothing | Install an Ingress Controller (e.g., NGINX, Traefik). |
| Pods recreated with new IPs, breaking hard‑coded addresses | Use Services exclusively; they provide stable endpoints. |
| Confusion over Service types | Use ClusterIP for internal traffic, NodePort only for temporary testing, and Ingress for external HTTP traffic. |
| Ingress Controller pod stuck in Pending | Adjust node labels and tolerations so the controller can schedule on the control‑plane node (relevant for local clusters). |
| Cannot access the app from a local container‑based cluster | Port‑forward the Ingress Controller port to your local machine, simulating a cloud load balancer. |
| Service names not resolving across namespaces | Service DNS is namespace‑scoped; use fully qualified domain names (service.namespace.svc.cluster.local) when needed. |
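The last pitfall deserves a concrete example. A short name like `redis` only resolves within the same namespace; from another namespace you need the FQDN. A sketch, assuming Redis lives in a namespace called `backend`:

```yaml
# Cross-namespace reference: the short name "redis" would NOT resolve
# from a pod outside the "backend" namespace, but the FQDN always does.
env:
  - name: REDIS_HOST
    value: redis.backend.svc.cluster.local
```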

Lessons Learned

  • Kubernetes networking is service‑driven, not pod‑driven.
  • An Ingress requires both rules and a controller to function.
  • Local clusters behave differently from managed cloud clusters (node scheduling, load balancing, etc.).
  • Service discovery happens through DNS, never through hard‑coded IPs.
  • Effective debugging demands understanding both the platform and the application architecture.

Once this mental model clicked, advanced topics (network policies, service meshes, etc.) started making sense.

How to Get the Code

The full source code and step‑by‑step setup instructions are available in the repository:

kubernetes-sample-voting-app-project1

If you’re learning Kubernetes, pick a multi‑service application, deploy it, then deliberately break it. Fixing the issues will give you the deep understanding you need. What’s been the hardest part of Kubernetes for you? Drop a comment!

Tags: kubernetes devops learning microservices
