Docker Internals Deep Dive: What Really Happens When You Run docker run (2025 Edition)

Published: December 15, 2025 at 10:10 PM EST
6 min read
Source: Dev.to

Modern Container Platforms Overview

Modern container platforms depend on predictable, modular behavior. Docker’s architecture is a layered execution pipeline built around standard interfaces — REST, gRPC, OCI Runtime, and Linux‑kernel primitives. Understanding this flow eliminates ambiguity during debugging, scaling, or integrating with orchestration systems.

1. Core Architecture

CLI  →  dockerd (API + Orchestration)  →  containerd (Runtime mgmt)
      →  containerd‑shim (Process supervisor)  →  runc (OCI runtime)
      →  Linux Kernel (Namespaces, cgroups, fs, net)

Docker CLI

  • User‑facing command interface
  • Converts flags to JSON
  • Talks to dockerd through /var/run/docker.sock
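
The flag-to-JSON translation can be sketched as follows. The field names follow the Docker Engine API's POST /containers/create request body, but this is a simplified subset for illustration, not the full payload the real CLI sends:

```python
import json

def build_create_payload(image: str, host_port: int, container_port: int) -> str:
    """Sketch of the JSON body the CLI POSTs to /containers/create
    (subset of the Docker Engine API fields)."""
    port_key = f"{container_port}/tcp"
    payload = {
        "Image": image,
        "ExposedPorts": {port_key: {}},
        "HostConfig": {
            "PortBindings": {port_key: [{"HostPort": str(host_port)}]}
        },
    }
    return json.dumps(payload)

# docker run -d -p 8080:80 nginx → the create-request body
print(build_create_payload("nginx", 8080, 80))
```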

dockerd

  • REST API server
  • Container‑lifecycle orchestration
  • Network & volume management
  • Delegates image and runtime operations to containerd

containerd

  • High‑level runtime manager
  • Manages snapshots, images, and the content store
  • Pulls/unpacks layers & creates OCI runtime specifications
  • Launches a containerd‑shim for each container

Image Storage Detail

  • Each layer is content‑addressable via SHA‑256
  • Identical layers are deduplicated
  • OverlayFS shares read‑only lower layers across containers instead of copying them
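
Content addressing is what makes deduplication automatic: a layer's identity is the SHA‑256 of its bytes, so two identical layers map to the same key. A toy model of the idea (not containerd's actual content store):

```python
import hashlib

class ContentStore:
    """Toy content-addressable store: blobs are keyed by their
    SHA-256 digest, so identical layers are stored only once."""
    def __init__(self):
        self.blobs = {}

    def put(self, data: bytes) -> str:
        digest = "sha256:" + hashlib.sha256(data).hexdigest()
        self.blobs.setdefault(digest, data)   # dedup: same bytes, same key
        return digest

store = ContentStore()
d1 = store.put(b"layer: apt-get install nginx")
d2 = store.put(b"layer: apt-get install nginx")  # identical layer, same digest
assert d1 == d2 and len(store.blobs) == 1
```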

containerd‑shim

  • Parent process for the container’s workload
  • Keeps containers alive if dockerd/containerd restart
  • Manages I/O streams (logs, attach)
  • Returns exit codes to containerd

runc

  • Implements the OCI runtime spec
  • Creates namespaces, applies cgroup limits, mounts the root filesystem, and executes the entrypoint
  • Exits immediately after container creation (the shim stays alive)
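
The OCI spec that runc consumes is a config.json inside the bundle. A heavily trimmed sketch of one (real files carry many more fields: mounts, capabilities, seccomp, and so on):

```json
{
  "ociVersion": "1.0.2",
  "process": {
    "user": { "uid": 0, "gid": 0 },
    "args": ["nginx", "-g", "daemon off;"],
    "cwd": "/"
  },
  "root": { "path": "rootfs" },
  "linux": {
    "namespaces": [
      { "type": "pid" },
      { "type": "network" },
      { "type": "mount" }
    ],
    "resources": { "memory": { "limit": 536870912 } }
  }
}
```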

Linux Kernel

  • Enforces process isolation (namespaces)
  • Controls resources (cgroups)
  • Provides layered filesystems (OverlayFS)
  • Handles networking (veth pairs, bridges, iptables/NAT)
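
For the example mapping used later in this article (-p 8080:80, container IP 172.17.0.2), the NAT rules Docker installs look roughly like this in iptables-save form (exact chains and matches vary by Docker version):

```
# NAT table: forward host port 8080 to the container
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80
# Masquerade outbound traffic from the bridge subnet
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
```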

✈️ The Airport Analogy – A Mental Model

| Docker Component | Airport Role | Real‑World Impact |
|---|---|---|
| Docker CLI | Passenger Terminal | You type docker run, check status |
| dockerd | Airport Operations Center | Manages all flights, gates, schedules |
| containerd | Ground Control | Loads luggage (images), assigns runways |
| containerd‑shim | Gate Agents | Ensures plane stays ready even if Ops Center restarts |
| runc | Pilot | Actually flies the plane (executes container) |
| Kernel | Air Traffic Control | Manages airspace (resources), prevents collisions |
| Container | The Actual Flight | Your app running in isolated airspace |

Use this model to remember component relationships during troubleshooting.

2. Execution Flow: docker run -d -p 8080:80 nginx

1. CLI → dockerd: the CLI parses the command, builds a JSON payload, and sends it over the Unix socket.
2. dockerd validation: validates the configuration, checks local images, and coordinates container creation.
3. Image pull (if needed): containerd handles registry authentication, manifest resolution, layer download and verification, and stores layers in the content store.
4. Filesystem assembly: containerd prepares a snapshot, creates the OverlayFS upper/lower layout, and builds an OCI bundle with metadata and runtime config.
5. Networking setup: dockerd configures the network namespace:
   • Creates a veth pair (host end attached to docker0)
   • Assigns the container an IP (e.g., 172.17.0.2)
   • Adds an iptables DNAT rule for the port mapping
   • Adds a MASQUERADE rule for outbound traffic
6. containerd → containerd‑shim: containerd spawns a shim, hands off the OCI spec, and delegates lifecycle supervision.
7. shim → runc: runc creates namespaces, mounts the rootfs, applies cgroup limits, executes the container entrypoint, then exits (the shim remains).
8. Container running: the container runs as an isolated Linux process:
   • The shim maintains the lifecycle
   • dockerd streams logs and reports state
   • The kernel enforces isolation

(Figure: Docker run workflow diagram)

3. Component Responsibilities

| Component | Role | Delegates To |
|---|---|---|
| CLI | User interface, request creation | dockerd |
| dockerd | API, orchestration, networking | containerd |
| containerd | Image management, snapshots, lifecycle | runc |
| containerd‑shim | Supervises container process | Kernel (via namespaces created by runc) |
| runc | Creates container environment | Kernel |
| Kernel | Isolation + resource control | Hardware |

Related Architecture (Kubernetes)

kubelet → CRI → containerd (dockerd is not in the path)

Everything downstream (containerd → shim → runc → kernel) remains unchanged.

4. Key Clarifications

  • Containers are processes, not virtual machines.
  • runc does not stay resident; the shim manages the container’s lifecycle.
  • Docker’s layered filesystem is copy‑on‑write, enabling efficient storage.
  • Kubernetes removed dockerd and talks to containerd directly for a simpler CRI pipeline.
  • Live‑restore works because the shim decouples containers from dockerd.
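
The copy‑on‑write point can be sketched as a layered lookup: reads fall through to the first layer that has the file, while writes always land in the container's private upper layer. A toy model, not real OverlayFS code:

```python
class OverlayView:
    """Toy copy-on-write view: 'lower' layers are shared and read-only,
    'upper' is private to one container and absorbs all writes."""
    def __init__(self, lower_layers):
        self.lower = lower_layers   # list of dicts, topmost first
        self.upper = {}             # per-container writable layer

    def read(self, path):
        for layer in [self.upper, *self.lower]:
            if path in layer:
                return layer[path]
        raise FileNotFoundError(path)

    def write(self, path, data):
        self.upper[path] = data     # never mutates shared lower layers

image = [{"/etc/nginx/nginx.conf": "default config"}]
a, b = OverlayView(image), OverlayView(image)
a.write("/etc/nginx/nginx.conf", "tuned config")
assert a.read("/etc/nginx/nginx.conf") == "tuned config"
assert b.read("/etc/nginx/nginx.conf") == "default config"  # b is unaffected
```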

5. Debugging Guide (Ops‑Ready Edition)

A structured, layered sequence for diagnosing container failures. Designed for SRE, DevOps, and runtime‑engineering teams.

Container exits immediately

Approach: Follow the layers from highest (application) to lowest (kernel).

1. Application Layer

Severity: Low – most failures originate here.

docker logs <container>

Inspect logs for crashes, missing binaries, mis‑configured entrypoints, etc.

2. Shim / Runtime Layer

  • Verify the shim is alive: ps -ef | grep containerd-shim
  • Check runc exit status: docker inspect --format='{{.State.ExitCode}}' <container>
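
When reading that exit code, remember that values above 128 conventionally mean the process died from a signal (128 + signal number), and a few low codes have conventional meanings in Docker and the shell. A small helper reflecting those conventions (illustrative, not exhaustive):

```python
import signal

# Conventional Docker/shell exit codes (not exhaustive)
KNOWN = {
    125: "docker daemon / run error",
    126: "command found but not executable",
    127: "command not found",
}

def explain_exit_code(code: int) -> str:
    if code in KNOWN:
        return KNOWN[code]
    if code > 128:
        try:
            name = signal.Signals(code - 128).name   # e.g. 137 - 128 = 9 = SIGKILL
        except ValueError:
            name = f"signal {code - 128}"
        return f"killed by {name}"
    return "application-defined exit status"

print(explain_exit_code(137))  # prints: killed by SIGKILL (often the OOM killer)
```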

3. Containerd Layer

  • Look at containerd logs: journalctl -u containerd for snapshot or OCI‑spec errors.

4. Dockerd Layer

  • Examine Docker daemon logs: journalctl -u docker for API‑level rejections or network‑setup failures.

5. Kernel Layer

  • Confirm namespace creation: lsns or ip netns list
  • Check cgroup limits: cat /sys/fs/cgroup/.../memory.max

Use this layered checklist to pinpoint the exact stage where a failure occurs, then apply the appropriate fix.


Debugging Docker Runtime Issues

Application Layer (docker logs)

Looks for: runtime exceptions, crash loops, missing configs, entrypoint failures.

Runtime Layer (containerd / OCI)

Severity: Medium – issues affect container creation, not application logic.

journalctl -u containerd

Detects:

  • Invalid OCI specs
  • Snapshot / unpack errors
  • Permission issues
  • Image‑metadata failures

Kernel Layer

Severity: High – kernel failures affect all containers on the node.

dmesg | tail -20

Reveals:

  • Namespace creation failures
  • Cgroup enforcement errors
  • LSM blocks (AppArmor / SELinux)
  • OverlayFS mount issues

Slow Container Startup

Pinpoint latency at the registry, storage, or runtime.

Image Pull / Unpack Latency

journalctl -u containerd --since "2 minutes ago" | grep -Ei "pull|unpack"

Finds slow remote pulls, layer‑unpack delays, decompression problems.

Host‑Storage Bottleneck

iostat -dx 1 /var/lib/containerd

Detects:

  • High I/O wait
  • OverlayFS backing‑store saturation
  • Slow disks or overloaded volumes

Registry / Network Slowness

time docker pull alpine:latest

Measures:

  • Round‑trip latency
  • Download throughput
  • Registry auth or proxy delays

Network Issues

Trace connectivity host → bridge → container.

Verify NAT / Port‑Forward Rules

sudo iptables -t nat -L DOCKER -n -v

Bridge & veth Topology

ip addr show docker0
brctl show          # bridge-utils is deprecated; 'bridge link' is the modern equivalent

Container Namespace Networking

docker exec <container> ip addr show

Common Error Patterns

| Error Message | Likely Cause |
|---|---|
| no such file or directory | Missing entrypoint or wrong working directory |
| permission denied | User‑namespace restriction, volume permissions |
| address already in use | Host‑port collision |
| exec format error | Architecture mismatch (e.g., amd64 vs arm64) |
| layer does not exist | Corrupted image store, partial pull |
| failed to setup network namespace | Kernel lacking required capabilities |

Recovery Actions

Image Pull Failures

  • Check registry authentication tokens.
  • Verify proxy / SSL configuration.
  • Test connectivity to registry endpoints.

OCI Spec / Runtime Errors

  • Ensure Docker, containerd, and runc versions are compatible.
  • Validate custom seccomp or AppArmor profiles.
  • Recreate corrupted snapshots.

Kernel Namespace / Cgroup Failures

  • Confirm the kernel version supports required features.
  • Validate cgroup v1 vs. v2 mode.
  • Inspect sysctl overrides affecting namespaces.
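
A quick way to check the cgroup mode: on cgroup v2 (the unified hierarchy), the file /sys/fs/cgroup/cgroup.controllers exists at the mount root; on v1 it does not. A best-effort probe (Linux paths; returns "unknown" elsewhere):

```python
import os

def cgroup_mode(root: str = "/sys/fs/cgroup") -> str:
    """Best-effort detection of the host's cgroup hierarchy mode."""
    if not os.path.isdir(root):
        return "unknown"        # not Linux, or cgroupfs not mounted
    if os.path.exists(os.path.join(root, "cgroup.controllers")):
        return "v2"             # unified hierarchy
    return "v1"                 # legacy per-controller hierarchies

print(cgroup_mode())
```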

(Figure: debugging decision tree)

6. Summary

A docker run invocation travels through a disciplined, modular execution path. Each component accepts a small, well‑defined piece of responsibility and hands off cleanly to the next, forming a predictable control flow:

  • Dockerd parses intent and translates it into runtime instructions.
  • Containerd orchestrates the container lifecycle via stable gRPC APIs.
  • containerd‑shim isolates the container’s process management from daemon restarts.
  • runc materializes the OCI Runtime Spec into Linux primitives.
  • The kernel provides the final enforcement layer through namespaces, cgroups, and filesystem drivers.

These boundaries are governed by open standards (REST → gRPC → OCI Spec → syscalls), ensuring compatibility, reliability, and deep observability across layers. Isolation, resource governance, and performance efficiency emerge directly from native Linux constructs—no hidden hypervisor, no extra abstraction.

Operational Note

Because process ownership is delegated to containerd‑shim, both dockerd and containerd can be restarted without disrupting running containers. This design supports safe daemon upgrades, node maintenance, and high‑availability workflows that do not interrupt workloads.
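
To rely on this in practice, live-restore must be enabled in the daemon configuration (typically /etc/docker/daemon.json) before the daemon is restarted:

```json
{
  "live-restore": true
}
```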


Quick Reference

  • Core Architecture – Execution Flow → Component Responsibilities
  • Key Clarifications – Debugging Guide (Ops‑Ready Edition)
  • Debugging Tree – Container exits immediately → Slow startup → Network issues → Common error patterns → Recovery actions
  • Summary – High‑level recap of the modular stack