Docker Internals Deep Dive: What Really Happens When You Run docker run (2025 Edition)
Source: Dev.to
Modern Container Platforms Overview
Modern container platforms depend on predictable, modular behavior. Docker’s architecture is a layered execution pipeline built around standard interfaces — REST, gRPC, OCI Runtime, and Linux‑kernel primitives. Understanding this flow eliminates ambiguity during debugging, scaling, or integrating with orchestration systems.
1. Core Architecture
CLI → dockerd (API + Orchestration) → containerd (Runtime mgmt)
→ containerd‑shim (Process supervisor) → runc (OCI runtime)
→ Linux Kernel (Namespaces, cgroups, fs, net)
Docker CLI
- User‑facing command interface
- Converts flags to JSON
- Talks to dockerd through /var/run/docker.sock
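The socket is plain HTTP, so the CLI's transport can be reproduced by hand. A minimal sketch, assuming curl is available — /_ping is the Engine API health endpoint, and the fallback branch covers hosts where no daemon is running:

```shell
# Probe dockerd over its Unix socket, the same transport the CLI uses.
# /_ping answers "OK" when the daemon is up.
sock=/var/run/docker.sock
if command -v curl >/dev/null 2>&1 && [ -S "$sock" ]; then
  curl -s --unix-socket "$sock" http://localhost/_ping; echo
else
  echo "no dockerd socket at $sock"
fi
```

Every `docker run`, `docker ps`, or `docker logs` is ultimately a request of this shape.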
dockerd
- REST API server
- Container‑lifecycle orchestration
- Network & volume management
- Delegates image and runtime operations to containerd
containerd
- High‑level runtime manager
- Manages snapshots, images, and the content store
- Pulls/unpacks layers & creates OCI runtime specifications
- Launches a containerd‑shim for each container
Image Storage Detail
- Each layer is content‑addressable via SHA‑256
- Identical layers are deduplicated
- OverlayFS shares read‑only layers across containers by referencing common lower directories instead of copying them
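Content addressing is easy to demonstrate with sha256sum: identical bytes always hash to the identical digest, which is exactly why identical layers can be stored once. The two files below are stand‑ins for layer tarballs:

```shell
# Two "layers" with identical content yield the same digest -> stored once.
printf 'FROM alpine\nRUN apk add curl\n' > layer-a
printf 'FROM alpine\nRUN apk add curl\n' > layer-b
a=$(sha256sum layer-a | cut -d' ' -f1)
b=$(sha256sum layer-b | cut -d' ' -f1)
[ "$a" = "$b" ] && echo "identical digest: deduplicated"
rm -f layer-a layer-b
```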
containerd‑shim
- Parent process for the container’s workload
- Keeps containers alive if dockerd/containerd restart
- Manages I/O streams (logs, attach)
- Returns exit codes to containerd
runc
- Implements the OCI runtime spec
- Creates namespaces, applies cgroup limits, mounts the root filesystem, and executes the entrypoint
- Exits immediately after container creation (the shim stays alive)
Linux Kernel
- Enforces process isolation (namespaces)
- Controls resources (cgroups)
- Provides layered filesystems (OverlayFS)
- Handles networking (veth pairs, bridges, iptables/NAT)
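Namespace membership is visible directly in /proc: every process holds symlinks to the namespace instances it lives in, and this works unprivileged on any Linux host (inode numbers will differ per machine):

```shell
# Each symlink names a namespace type and the inode identifying the instance.
# Two processes inside the same container share these inode numbers.
ls -l /proc/self/ns
# Typical types: cgroup, ipc, mnt, net, pid, user, uts
```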
✈️ The Airport Analogy – A Mental Model
| Docker Component | Airport Role | Real‑World Impact |
|---|---|---|
| Docker CLI | Passenger Terminal | You type docker run, check status |
| dockerd | Airport Operations Center | Manages all flights, gates, schedules |
| containerd | Ground Control | Loads luggage (images), assigns runways |
| containerd‑shim | Gate Agents | Ensures plane stays ready even if Ops Center restarts |
| runc | Pilot | Actually flies the plane (executes container) |
| Kernel | Air Traffic Control | Manages airspace (resources), prevents collisions |
| Container | The Actual Flight | Your app running in isolated airspace |
Use this model to remember component relationships during troubleshooting.
2. Execution Flow: docker run -d -p 8080:80 nginx
| Step | Description |
|---|---|
| 1. CLI → dockerd | CLI parses the command, builds a JSON payload, and sends it over the Unix socket. |
| 2. dockerd Validation | Validates configuration, checks local images, and coordinates container creation. |
| 3. Image Pull (if needed) | containerd handles registry authentication, manifest resolution, layer download & verification, and stores layers in the content store. |
| 4. Filesystem Assembly | containerd prepares a snapshot, creates the OverlayFS upper/lower layout, and builds an OCI bundle with metadata & runtime config. |
| 5. Networking Setup | dockerd configures the network namespace: • Creates a veth pair (host end attached to docker0) • Assigns container IP (e.g., 172.17.0.2) • Adds iptables DNAT for the port mapping • Adds a MASQUERADE rule for outbound traffic |
| 6. containerd → containerd‑shim | containerd spawns a shim, hands off the OCI spec, and delegates lifecycle supervision. |
| 7. shim → runc | runc creates namespaces, mounts the rootfs, applies cgroup limits, executes the container entrypoint, then exits (shim remains). |
| 8. Container Running | The container runs as an isolated Linux process: • Shim maintains the lifecycle • dockerd streams logs & reports state • Kernel enforces isolation |
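The networking step can be sketched as the rule dockerd programs for `-p 8080:80`. This dry run only prints the equivalent iptables command — the IP and ports come from the example above, and applying it for real requires root and is normally left entirely to dockerd:

```shell
# Dry run: print the DNAT rule dockerd adds for -p 8080:80.
host_port=8080
ctr_ip=172.17.0.2
ctr_port=80
echo "iptables -t nat -A DOCKER ! -i docker0 -p tcp --dport $host_port" \
     "-j DNAT --to-destination $ctr_ip:$ctr_port"
```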

3. Component Responsibilities
| Component | Role | Delegates To |
|---|---|---|
| CLI | User interface, request creation | dockerd |
| dockerd | API, orchestration, networking | containerd |
| containerd | Image management, snapshots, lifecycle | runc |
| containerd‑shim | Supervises container process | Kernel (via namespaces created by runc) |
| runc | Creates container environment | Kernel |
| Kernel | Isolation + resource control | Hardware |
Related Architecture (Kubernetes)
kubelet → CRI (gRPC) → containerd
Everything downstream (containerd → shim → runc → kernel) remains unchanged.
4. Key Clarifications
- Containers are processes, not virtual machines.
- runc does not stay resident; the shim manages the container’s lifecycle.
- Docker’s layered filesystem is copy‑on‑write, enabling efficient storage.
- Kubernetes removed dockerd and talks to containerd directly for a simpler CRI pipeline.
- Live‑restore works because the shim decouples containers from dockerd.
5. Debugging Guide (Ops‑Ready Edition)
A structured, layered sequence for diagnosing container failures. Designed for SRE, DevOps, and runtime‑engineering teams.
Container exits immediately
Approach: Follow the layers from highest (application) to lowest (kernel).
1. Application Layer
Severity: Low – most failures originate here.
docker logs <container>
Inspect logs for crashes, missing binaries, mis‑configured entrypoints, etc.
2. Shim / Runtime Layer
- Verify the shim is alive:
ps -ef | grep containerd-shim
- Check runc exit status:
docker inspect --format='{{.State.ExitCode}}' <container>
3. Containerd Layer
- Look at containerd logs for snapshot or OCI‑spec errors:
journalctl -u containerd
4. Dockerd Layer
- Examine Docker daemon logs for API‑level rejections or network‑setup failures:
journalctl -u docker
5. Kernel Layer
- Confirm namespace creation:
lsns or ip netns list
- Check cgroup limits:
cat /sys/fs/cgroup/.../memory.max
Use this layered checklist to pinpoint the exact stage where a failure occurs, then apply the appropriate fix.
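When the exit status from step 2 is in hand, the conventional exit‑code ranges narrow the search quickly. A small helper following the standard 128+signal convention (125–127 are Docker's own reserved codes):

```shell
# Map a container exit code to its conventional meaning.
explain_exit() {
  case "$1" in
    0)   echo "clean exit" ;;
    125) echo "docker run itself failed" ;;
    126) echo "entrypoint found but not executable" ;;
    127) echo "entrypoint not found" ;;
    137) echo "SIGKILL (128+9) - often the OOM killer; check dmesg" ;;
    139) echo "SIGSEGV (128+11) - application crash" ;;
    *)   if [ "$1" -gt 128 ]; then
           echo "killed by signal $(( $1 - 128 ))"
         else
           echo "application-defined exit code $1"
         fi ;;
  esac
}
explain_exit 137   # prints "SIGKILL (128+9) - often the OOM killer; check dmesg"
```

Feed it the output of `docker inspect --format='{{.State.ExitCode}}' <container>`.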
Debugging Docker Runtime Issues
Look for: runtime exceptions, crash loops, missing configs, entrypoint failures.
Runtime Layer (containerd / OCI)
Severity: Medium – issues affect container creation, not application logic.
journalctl -u containerd
Detects:
- Invalid OCI specs
- Snapshot / unpack errors
- Permission issues
- Image‑metadata failures
Kernel Layer
Severity: High – kernel failures affect all containers on the node.
dmesg | tail -20
Reveals:
- Namespace creation failures
- Cgroup enforcement errors
- LSM blocks (AppArmor / SELinux)
- OverlayFS mount issues
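When chasing OverlayFS mount issues it helps to know the expected shape of the mount. This dry run prints the style of overlay mount containerd sets up — the directory names are hypothetical; compare against `mount | grep overlay` on the failing host:

```shell
# Dry run: the shape of an overlay mount (paths are illustrative).
lower="/var/lib/docker/overlay2/l/AAA:/var/lib/docker/overlay2/l/BBB"  # read-only image layers
upper="/var/lib/docker/overlay2/CCC/diff"                              # writable container layer
work="/var/lib/docker/overlay2/CCC/work"                               # overlayfs scratch dir
echo "mount -t overlay overlay -o lowerdir=$lower,upperdir=$upper,workdir=$work /merged"
```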
Slow Container Startup
Pinpoint latency at the registry, storage, or runtime.
Image Pull / Unpack Latency
journalctl -u containerd --since "2 minutes ago" | grep -Ei "pull|unpack"
Finds slow remote pulls, layer‑unpack delays, decompression problems.
Host‑Storage Bottleneck
iostat -dx 1 /var/lib/containerd
Detects:
- High I/O wait
- OverlayFS backing‑store saturation
- Slow disks or overloaded volumes
Registry / Network Slowness
time docker pull alpine:latest
Measures:
- Round‑trip latency
- Download throughput
- Registry auth or proxy delays
Network Issues
Trace connectivity host → bridge → container.
Verify NAT / Port‑Forward Rules
sudo iptables -t nat -L DOCKER -n -v
Bridge & veth Topology
ip addr show docker0
brctl show
Container Namespace Networking
docker exec <container> ip addr show
Common Error Patterns
| Error Message | Likely Cause |
|---|---|
| no such file or directory | Missing entrypoint or wrong working directory |
| permission denied | User‑namespace restriction, volume permissions |
| address already in use | Host‑port collision |
| exec format error | Architecture mismatch (e.g., amd64 vs arm64) |
| layer does not exist | Corrupted image store, partial pull |
| failed to setup network namespace | Kernel lacking required capabilities |
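For "exec format error" specifically, compare the host architecture with the image's. The host side needs no daemon; the image side does, so it is left commented here:

```shell
# Host CPU architecture (e.g. x86_64 / aarch64):
uname -m
# Image architecture, for comparison (requires a running daemon):
# docker image inspect --format '{{.Architecture}}' nginx
```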
Recovery Actions
Image Pull Failures
- Check registry authentication tokens.
- Verify proxy / SSL configuration.
- Test connectivity to registry endpoints.
OCI Spec / Runtime Errors
- Ensure Docker, containerd, and runc versions are compatible.
- Validate custom seccomp or AppArmor profiles.
- Recreate corrupted snapshots.
Kernel Namespace / Cgroup Failures
- Confirm the kernel version supports required features.
- Validate cgroup v1 vs. v2 mode.
- Inspect sysctl overrides affecting namespaces.
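Checking cgroup v1 vs v2 is a one‑liner: on a unified (v2) host, /sys/fs/cgroup is a single mount of filesystem type cgroup2fs:

```shell
# cgroup2fs filesystem type means the unified (v2) hierarchy.
if [ "$(stat -fc %T /sys/fs/cgroup 2>/dev/null)" = "cgroup2fs" ]; then
  echo "cgroup v2 (unified)"
else
  echo "cgroup v1 (legacy or hybrid)"
fi
```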

6. Summary
A docker run invocation travels through a disciplined, modular execution path. Each component accepts a small, well‑defined piece of responsibility and hands off cleanly to the next, forming a predictable control flow:
- Dockerd parses intent and translates it into runtime instructions.
- Containerd orchestrates the container lifecycle via stable gRPC APIs.
- containerd‑shim isolates the container’s process management from daemon restarts.
- runc materializes the OCI Runtime Spec into Linux primitives.
- The kernel provides the final enforcement layer through namespaces, cgroups, and filesystem drivers.
These boundaries are governed by open standards (REST → gRPC → OCI Spec → syscalls), ensuring compatibility, reliability, and deep observability across layers. Isolation, resource governance, and performance efficiency emerge directly from native Linux constructs—no hidden hypervisor, no extra abstraction.
Operational Note
Because process ownership is delegated to containerd‑shim, both dockerd and containerd can be restarted without disrupting running containers. This design supports safe daemon upgrades, node maintenance, and high‑availability workflows that do not interrupt workloads.
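Live‑restore is opt‑in: it is a single setting in /etc/docker/daemon.json, applied with a daemon reload:

```json
{
  "live-restore": true
}
```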
Quick Reference
- Core Architecture – Execution Flow → Component Responsibilities
- Key Clarifications – Debugging Guide (Ops‑Ready Edition)
- Debugging Tree – Container exits immediately → Slow startup → Network issues → Common error patterns → Recovery actions
- Summary – High‑level recap of the modular stack