Docker & Containers Explained: A Beginner-Friendly Guide to How Docker Works

Published: January 16, 2026 at 05:43 AM EST
6 min read
Source: Dev.to

Modern software development is no longer just about writing code. It’s also about running that code reliably everywhere — on your laptop, in testing, and in production. This is where containerization and Docker come in.

In this blog we’ll break down:

  • What containers are
  • Why Docker was created
  • How Docker works internally

…in a way that’s easy to understand, even if you’re completely new.


Why we needed containers

Before containers, applications were usually deployed directly on servers or virtual machines. This caused several issues:

| Problem | Description |
| --- | --- |
| Environment drift | Different environments (dev, test, prod) behaved differently |
| Dependency conflicts | Applications fought over libraries, runtimes, etc. |
| Difficult deployments & rollbacks | Manual steps, long release windows, downtime |
| Poor scalability | Adding capacity meant cloning whole servers/VMs |

Developers needed a way to package an application along with everything it needs to run, and run it the same way everywhere.

What a container is

A container is a lightweight, portable unit that packages:

  • Application code
  • Runtime (Node, Python, Java, etc.)
  • Libraries and dependencies
  • Configuration

Containers share the host operating system kernel but run in isolated user spaces, making them fast and efficient.
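
A quick way to see this on your own machine (assuming Docker is installed and can pull public images): two containers with different runtimes start side by side, and both report the host’s kernel.

# Two different Node runtimes, one shared kernel
docker run --rm node:18-alpine node --version   # prints a v18 release
docker run --rm node:20-alpine node --version   # prints a v20 release
docker run --rm alpine uname -r                 # same kernel version as the (Linux) host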

Shipping‑container analogy

  • The contents inside don’t matter to the ship.
  • The container can move between ships, trucks, or ports.
  • Everything inside stays the same.

Virtual Machines vs. Containers

| Feature | Virtual Machines | Containers |
| --- | --- | --- |
| OS | Full guest OS per VM | Share host OS kernel |
| Startup time | Minutes | Seconds or milliseconds |
| Resource usage | Heavy | Lightweight |
| Isolation | Strong | Process‑level isolation |
| Portability | Limited | Very high |

Analogy:
VMs are like renting a full house for each guest.
Containers are like renting rooms in the same building.

Pain points before containers

  • Dependency hell – One app needs Node 16, another needs Node 18 → upgrading one breaks the other.
  • “Works on my laptop” syndrome – Works on a developer laptop, fails on QA, breaks in production because of different OS versions, libraries, configs, or runtimes.
  • Scaling headaches – To handle more traffic you clone the server/VM, re‑configure everything, and wait.
  • Manual deployments – Long release windows, error‑prone rollbacks, frequent downtime.

What containers solve

  • Same container runs everywhere – No more environment drift.
  • Scale by running more containers – Fast, cheap, and automated.
  • Rollback by switching container versions – Simple and reliable.
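
To make the rollback point concrete, here is a minimal sketch (the container name and image tags are illustrative):

# Replace the current container with one running the previous image version
docker stop web && docker rm web
docker run -d --name web -p 8080:3000 myapp:1.2   # back to the known-good tag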

Enter Docker

Docker is a tool that helps you:

  • Package your application – Include everything it needs to run.
  • Run it the same way on any machine – “Build my app once, and run it anywhere.”

Instead of manually setting up environments, Docker automates this using containers.

Before Docker

  • Containers existed but were hard to use.
  • Each company had its own custom tooling.
  • Developers struggled with setup and consistency.

Docker’s answer

  • Standard format – Dockerfile & images.
  • Simple commands – docker build, docker run.
  • Easy image sharing – Registries (Docker Hub, GHCR, private registries), as sketched below.
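
A typical build-and-share cycle looks like this; the registry path and image name below are placeholders:

docker build -t myapp:1.0 .                       # build from the Dockerfile in this folder
docker tag myapp:1.0 ghcr.io/your-org/myapp:1.0   # name it for a registry
docker push ghcr.io/your-org/myapp:1.0            # upload it

# On any other machine:
docker pull ghcr.io/your-org/myapp:1.0
docker run ghcr.io/your-org/myapp:1.0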

Docker directly addresses the problems we discussed:

| Problem | Docker solution |
| --- | --- |
| Dependency conflicts | Each app gets its own container |
| Environment mismatch | Same image runs everywhere |
| Slow deployments | Start containers in seconds |
| Difficult rollbacks | Switch image versions easily |

Docker doesn’t remove complexity—it packages it neatly.

Where Docker fits in modern workflows

  • Local development – Same setup for all developers.
  • CI/CD pipelines – Predictable builds and tests.
  • Microservices – Each service runs in its own container.
  • Cloud & Kubernetes – Docker images are the standard unit.

Docker is often the first step toward:

  • Kubernetes
  • DevOps
  • Cloud‑native architectures

Docker building blocks

| Component | What it does |
| --- | --- |
| Docker Client | What you type (docker build, docker run) or what a GUI calls. Sends requests to the daemon. |
| Docker Daemon (dockerd) | Background service that actually builds images and runs containers. |
| containerd & runc | Low‑level helpers the daemon uses to turn an image into a running process. You don’t need to memorize them; just know Docker delegates the actual process creation to these focused tools. |
| Image | A frozen snapshot (read‑only), like a recipe or blueprint. |
| Container | A running instance of an image. Adds a thin writable layer so the app can change files while it’s alive. |
| Dockerfile | Simple text file with instructions to build an image (base image, files to copy, commands to run). docker build reads it and creates the image. |
| Registry | Stores images (Docker Hub, GitHub Container Registry, private registries). Enables docker push and docker pull from anywhere. |
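
You can see the client/daemon split on your own machine with two read-only commands (assuming Docker is installed and running):

docker version   # prints separate "Client" and "Server" (daemon) sections
docker info      # daemon-side details such as the storage driver and runtimes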

Kernel features Docker relies on

  • Namespaces – Give containers their own view of processes, network, filesystem, etc. Think of a namespace as a private room: you can’t see other rooms.
  • cgroups (control groups) – Limit how much CPU, memory, or disk a container can use. Think of cgroups as the room’s power limiter.

Together they make containers feel like isolated environments without the overhead of a full OS.
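
Both features are exposed through docker run flags. A small demonstration, assuming the public alpine image:

# PID namespace: the container sees only its own processes, not the host's
docker run --rm alpine ps aux

# cgroups: cap this container at half a CPU core and 256 MB of memory
docker run --rm --cpus=0.5 --memory=256m alpine echo "resource-limited hello"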

Image layers

Each Dockerfile step produces a layer. Layers are cached if they haven’t changed, which is why ordering your Dockerfile sensibly makes builds faster.
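
You can inspect this yourself: docker history prints one row per layer, newest first (using the public node:18-alpine image as an example):

docker pull node:18-alpine
docker history node:18-alpine   # each row is a layer; unchanged layers are reused across builds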

Networking

Docker gives each container a network interface and lets you map host ports (-p host:container) so services are reachable.
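
For example, mapping host port 8080 to container port 80 of the public nginx image:

docker run -d --name web-demo -p 8080:80 nginx
curl http://localhost:8080    # the request is forwarded to nginx inside the container
docker rm -f web-demo         # stop and remove the demo container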

Volumes

For data that must survive container restarts, use volumes. They keep data outside the container’s temporary writable layer.
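
A minimal sketch: data written to a named volume outlives the container that wrote it.

docker volume create app-data
docker run --rm -v app-data:/data alpine sh -c 'echo hello > /data/greeting'
docker run --rm -v app-data:/data alpine cat /data/greeting   # a brand-new container still sees "hello"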

Example Dockerfile

# Use an official Node runtime as a parent image
FROM node:18-alpine

# Set working directory inside the container
WORKDIR /app

# Copy package.json and package-lock.json first (for caching)
COPY package*.json ./

# Install production dependencies (npm ci requires a package-lock.json)
RUN npm ci --omit=dev

# Copy the rest of the application source code
COPY . .

# Expose the port the app runs on
EXPOSE 3000

# Define the command to run the app
CMD ["node", "index.js"]

Build the image:

docker build -t my-node-app:1.0 .

Run a container:

docker run -d -p 8080:3000 --name my-app my-node-app:1.0
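
A few follow-up commands to confirm it is working (the container serves on port 3000 internally, as the Dockerfile above declares):

docker ps                      # the container should be listed as "Up"
curl http://localhost:8080     # host port 8080 forwards to container port 3000
docker logs my-app             # view the app's output
docker stop my-app && docker rm my-app   # clean up when done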

TL;DR

Docker is a tool that lets you package apps and run them the same way everywhere. Once you understand the concepts above (containers, images, Dockerfiles, registries, and the underlying kernel features), you’ll be ready to adopt Docker in your workflow and move smoothly toward modern cloud‑native architectures.

Running a Containerized Node.js App

In this section you’ll actually run a containerized app by copy‑pasting commands. No prior Docker experience required. We’ll containerize a very simple Node.js web server.

Prerequisites

  • Docker installed (docker --version should work)
  • Any OS (Windows / macOS / Linux)

1. Set up the project folder

mkdir simple-docker-app
cd simple-docker-app

2. Create the application source

index.js

const http = require('http');

const PORT = 3000;

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from Docker Container!');
});

server.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});

package.json

{
  "name": "simple-docker-app",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  }
}

3. Write the Dockerfile

Create a file named Dockerfile (no extension) with the following content:

# Use an official Node.js runtime
FROM node:18

# Set working directory inside container
WORKDIR /app

# Copy package files first (for caching)
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy application source code
COPY . .

# Expose application port
EXPOSE 3000

# Start the application
CMD ["npm", "start"]

4. Build the Docker image

Run this command in the same folder as the Dockerfile:

docker build -t simple-web-app .

This creates a Docker image called simple-web-app.
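
You can confirm the image exists locally:

docker images simple-web-app   # lists the image with its tag, ID, and size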

5. Run the container

docker run -p 3000:3000 simple-web-app

  • The container’s internal port 3000 is mapped to your host’s port 3000.

6. Verify the app

Open a browser and visit:

http://localhost:3000

You should see:

Hello from Docker Container!
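
Because docker run was started in the foreground, press Ctrl+C to stop it. To run the same container in the background instead (the container name here is illustrative):

docker run -d --name simple-web -p 3000:3000 simple-web-app
docker stop simple-web && docker rm simple-web   # stop and remove when finished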

What just happened?

  • Docker packaged your app + Node.js + configuration into an image.
  • A container was created from that image.
  • Port 3000 inside the container was mapped to your machine.
  • The app runs the same way everywhere.

Why this matters

  • Any developer laptop can run the app.
  • It can be used in CI/CD pipelines.
  • It works on cloud servers without extra setup.

Docker isn’t just a tool—it’s a fundamental shift in how software is built and shipped.

Benefits of containers

  • Eliminate environment issues.
  • Simplify deployments.
  • Scale with confidence.

If you understand

  • What containers are.
  • Why Docker exists.
  • How Docker works internally.

…you’re ready to start using Docker for real‑world projects!
