Kubernetes Journey Part 1: Why Docker?
Welcome to the first post on learning Kubernetes! Before we dive into the complexities, we have to talk about the building block that made it all possible: Docker.
If you’ve ever worked in software development, you’ve likely encountered the phrase: “But it works on my machine!”
Why Docker?
Imagine you’ve just finished a new feature and your code is merged into the Version Control System (VCS). The build pipeline creates an artifact, which is then deployed to each environment’s server.
- Dev Server: It works perfectly. ✅
- Test Server: Working as expected. ✅
- Production: You deploy, and the build fails. ❌
What happened?
The most frequent cause is environment misconfiguration or missing dependencies that existed in dev and test environments but not in production. Traditionally, you could only ship the artifact, not the entire environment. That’s where containers come into the picture.
How Docker solved this
Docker packages everything the code needs to run (libraries, dependencies, configurations, and even the base OS binaries) into a single unit. Because that same unit ships unchanged to every environment, environment-related failures become far less likely.
What is a Container?
A container is an isolated, lightweight sandbox environment that bundles the application code, libraries, and runtime dependencies, so the application behaves the same regardless of the host operating system.
- Key distinction: Unlike a Virtual Machine (VM), a container does not package a full operating system. It contains only the minimal binaries and uses the host’s OS kernel, making it fast, portable, and resource‑efficient.
Definition: Docker is the platform that lets you build, ship, and run these containers anywhere.
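A quick way to see that kernel sharing for yourself (assuming Docker is installed and can pull the public alpine image):
# Print the kernel version of the host machine
uname -r
# Print the kernel version seen inside a throwaway Alpine container
docker run --rm alpine uname -r
On a Linux host the two outputs match, because the container reuses the host’s kernel. (On Docker Desktop, the container reports the kernel of the lightweight Linux VM that Docker runs in.)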
How It Works: Dockerfile to Runtime
To containerize an application, you work with three main components: Dockerfiles, Images, and Containers.
The Dockerfile
A Dockerfile is a plain-text document containing a list of instructions. The docker build command reads these instructions and produces an image.
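As a sketch, here is what a minimal Dockerfile might look like for a simple static site served by nginx (the base image and paths are illustrative examples, not part of this series):
# Dockerfile (hypothetical example): package a static site on top of nginx
FROM nginx:alpine
# Copy the application's files into the image (the ./public path is illustrative)
COPY ./public /usr/share/nginx/html
# Document the port the web server listens on
EXPOSE 80
Running docker build -t myapp:latest . in the same directory turns these instructions into an image tagged myapp:latest.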
The Image
A Docker image is a snapshot of your application, containing everything needed to run it. Images are stored in a registry (similar to a VCS) so that all environments can pull the same image with docker pull.
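For example, once an image has been built you can list what is stored locally and peek at the layers it is made of (the image name is a placeholder):
# List the images stored locally on this machine
docker images
# Show the layers that make up the image, one entry per Dockerfile instruction
docker history myapp:latest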
The Container
The docker run command takes an image and creates a running instance of it: the container.
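For instance, starting a container from the image built earlier, checking that it is up, and cleaning it up afterwards might look like this (names and ports are placeholders, and it assumes the image listens on port 80 as in the hypothetical Dockerfile above):
# Start a container in the background, mapping host port 8080 to container port 80
docker run -d --name myapp-test -p 8080:80 myapp:latest
# List running containers to confirm it is up
docker ps
# Stop and remove the test container when finished
docker stop myapp-test
docker rm myapp-test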
The Docker Architecture
Docker’s architecture consists of three primary components:
- The Client – the docker command-line interface where you issue commands (e.g., your laptop).
- The Docker Daemon (dockerd) – the “brain” that listens for API requests, manages images, and runs containers.
- The Registry – remote storage for images.
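You can see the client/daemon split directly from the CLI:
# Show version details for both the client (the docker CLI) and the server (dockerd)
docker version
# Show daemon-level details, including how many images and containers it is managing
docker info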
The Lifecycle of a Docker Command
# Build an image from a Dockerfile (client → daemon)
docker build -t myapp:latest .
# Tag the local image with the remote repository name
docker tag myapp:latest myrepo/myapp:latest
# Push the tagged image to a remote registry
docker push myrepo/myapp:latest
# Pull the image on a target environment
docker pull myrepo/myapp:latest
# Run a container from the pulled image
docker run -d -p 80:80 myrepo/myapp:latest
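Assuming the application inside the image serves HTTP on port 80 (as in the hypothetical nginx example earlier), you can verify the running container with:
# Confirm the container is running and check its port mapping
docker ps
# Hit the application through the published port
curl http://localhost:80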
Summary
By using Docker, the environment in development becomes identical to the environment in production. If it works on your machine, it will work on the server because you are shipping “your machine” (the container) along with the code.
Now that we understand why we need containers, the next step in our series is learning how to manage multiple containers at once. That is where Kubernetes enters the story.
Stay tuned for Part 2!