Kubernetes and Its Architecture
Source: Dev.to
Problems using Docker
- Single host resource contention – With many containers on one host, a container that consumes excessive memory can affect the performance of other containers, potentially causing them to be killed based on priority.
- No auto‑healing – If a container crashes, the application becomes inaccessible and the container must be restarted manually.
- Lack of auto‑scaling – Docker does not provide built‑in mechanisms to automatically scale workloads up or down.
- No enterprise support – Docker on its own lacks enterprise‑grade capabilities such as built‑in load balancing, security policies, and production‑oriented networking.
How Kubernetes Addresses These Issues
Kubernetes adds orchestration capabilities that address each of these limitations: it spreads workloads across a multi‑node cluster (reducing single‑host contention), automatically restarts or replaces failed Pods (auto‑healing), scales workloads up and down, and is backed by a broad ecosystem of enterprise support.
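As a concrete sketch, a Deployment manifest declares a replica count and per‑container resource limits; Kubernetes then heals back to the declared count and enforces the limits. The names (`web`, `nginx:1.25`) and values below are illustrative, not from the original article:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # illustrative name
spec:
  replicas: 3                # desired state: Kubernetes auto-heals back to 3 Pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        resources:
          requests:          # used by the Scheduler when placing the Pod
            memory: "128Mi"
            cpu: "250m"
          limits:            # caps that stop one container starving the others
            memory: "256Mi"
            cpu: "500m"
```

Auto‑scaling can then be layered on top, e.g. `kubectl autoscale deployment web --min=3 --max=10 --cpu-percent=80`.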
Kubernetes Architecture
Data Plane
- Container runtime – Executes containers (e.g., Docker, containerd). Comparable to a Java runtime for Java applications.
- kube‑proxy – Maintains network rules on each node so that traffic addressed to Services is routed and load‑balanced to the correct Pods. (Pod IP addresses themselves are assigned by the cluster's network plugin.)
- kubelet – Ensures that Pods assigned to its node are running and healthy. It communicates with the control plane and uses the container runtime to start containers inside Pods.
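To illustrate the kubelet's health‑checking role, a Pod can declare a liveness probe; the kubelet runs the probe and, on repeated failure, tells the container runtime to restart the container. The name, image, and endpoint below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo          # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.25
    livenessProbe:
      httpGet:              # the kubelet polls this endpoint inside the Pod
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10     # probe every 10s; failures trigger a restart
```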
Control Plane
- API Server – The core component and front door for all commands. It validates requests, persists objects (such as Pods) to etcd, and serves them to other components.
- Scheduler – Determines which node will run each Pod. It watches for unscheduled Pods, evaluates candidate nodes, and selects the best node based on policies and resource availability.
- etcd – A distributed key‑value store that holds the cluster’s state and configuration data.
- Controller Manager – Maintains the desired state of the cluster (e.g., ensures the specified number of replicas are running).
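As an example of how the Scheduler's decision can be influenced from a Pod spec, a `nodeSelector` narrows the candidate nodes it evaluates. The label key/value and names here are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ssd-task            # illustrative name
spec:
  nodeSelector:
    disktype: ssd           # Scheduler only considers nodes labeled disktype=ssd
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
```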
Workflow Overview
- User request – A user sends a request (e.g., `kubectl apply`) to the API Server.
- Persist state – The API Server validates the request and stores the desired object definition in etcd.
- Scheduling – The Scheduler detects new Pods without a node assignment, evaluates available nodes, and assigns each Pod to a suitable node.
- Node execution – The kubelet on the selected node receives the assignment and instructs the container runtime to launch the Pod’s containers.
- Networking – kube‑proxy configures service routing and load balancing for the newly created Pods.
- Control loops – Controllers in the Controller Manager continuously monitor the actual state and reconcile it with the desired state, triggering actions such as scaling or self‑healing as needed.
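The reconcile pattern behind these control loops can be sketched in a few lines of Python. This is a toy model, not the real Controller Manager: it compares desired state with actual state and acts on the difference.

```python
import itertools

# Counter for generating fresh Pod names in this toy model.
_names = itertools.count()

def reconcile(desired_replicas: int, running_pods: list[str]) -> list[str]:
    """One reconcile pass: drive the actual Pod list toward the desired count."""
    pods = list(running_pods)
    while len(pods) < desired_replicas:      # scale up / self-heal
        pods.append(f"pod-{next(_names)}")
    while len(pods) > desired_replicas:      # scale down
        pods.pop()
    return pods

# One crashed Pod out of a desired three: the loop recreates it.
healed = reconcile(3, ["pod-a", "pod-b"])
print(len(healed))  # 3
```

Real controllers run this loop continuously against the API Server, which is why deleting a Deployment‑managed Pod causes a replacement to appear almost immediately.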