Containerization 2025: Why containerd 2.0 and eBPF are Changing Everything
Source: Dev.to
Containerization Landscape – 2024 → 2025
The containerization landscape, perennially dynamic, has seen a flurry of practical, sturdy advancements over late 2024 and through 2025. As senior developers, we’re past the “hype cycle” and into the trenches, evaluating features that deliver tangible operational benefits and address real‑world constraints. This past year has solidified several trends:
- Enhanced supply‑chain security
- Fundamental runtime‑efficiency improvements
- A significant leap in build ergonomics for multi‑architecture deployments
- The emergence of WebAssembly as a credible, albeit nascent, alternative for specific workloads
Below is a deep dive into the developments that genuinely matter.
1. Container Runtime – containerd 2.0
The foundation of our containerized world, the container runtime, has evolved dramatically with the release of containerd 2.0 in late 2024. This isn’t merely an incremental bump; it’s a strategic stabilization and enhancement of core capabilities seven years after the 1.0 release.
- The shift away from dockershim in Kubernetes v1.24 pushed containerd and CRI‑O to the forefront, solidifying the Container Runtime Interface (CRI) as the standard interaction protocol between the kubelet and the underlying runtime.
Key Features in the Stable Channel
| Feature | Why It Matters |
|---|---|
| Node Resource Interface (NRI) – enabled by default | Provides a powerful extension mechanism for customizing low‑level container configurations. It works like mutating admission webhooks but operates directly at the runtime level, allowing fine‑grained control over resource allocation and policy enforcement. |
| Image verifier plugins (stabilized) | Executable programs that containerd can invoke to decide whether an image may be pulled. When eventually integrated with the CRI, administrators can enforce policies such as “only images signed by specific keys” or “images with a verified SBOM” at pull‑time, shifting enforcement left. |
| Configuration migration to v3 | Existing configs can be migrated with containerd config migrate. Most settings remain compatible; the only notable breaking change is the deprecation of the aufs snapshotter, which forces a move to a modern, maintained storage backend. |
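For the configuration migration noted above, a minimal sketch (this assumes the migrated config is written to stdout, so review the output before swapping it in):

```bash
# Emit the current configuration converted to the v3 schema
containerd config migrate > /etc/containerd/config-v3.toml
# After review, replace /etc/containerd/config.toml and restart containerd
```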
Example use‑case – CPU pinning via NRI
An organization needs to enforce specific CPU pinning for performance‑critical workloads. An NRI plugin can mediate this at container startup, ensuring consistent application across diverse node types without altering the core containerd daemon.
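A minimal sketch of such a plugin in Go, against the github.com/containerd/nri/pkg/stub package (hook signatures vary across NRI versions, and the label key and cpuset range here are illustrative assumptions):

```go
// cpu-pinner: a minimal NRI plugin sketch.
package main

import (
	"context"

	"github.com/containerd/nri/pkg/api"
	"github.com/containerd/nri/pkg/stub"
)

type plugin struct{}

// CreateContainer runs before each container starts; returning a
// ContainerAdjustment lets the plugin rewrite the container's cpuset.
func (p *plugin) CreateContainer(
	ctx context.Context,
	pod *api.PodSandbox,
	ctr *api.Container,
) (*api.ContainerAdjustment, []*api.ContainerUpdate, error) {
	adjust := &api.ContainerAdjustment{}
	// Pin pods labeled latency-critical to a reserved core range.
	// The label key and the "0-3" cpuset are cluster-specific assumptions.
	if pod.Labels["workload-class"] == "latency-critical" {
		adjust.SetLinuxCPUSetCPUs("0-3")
	}
	return adjust, nil, nil
}

func main() {
	// Register with containerd's NRI socket and serve plugin callbacks.
	s, err := stub.New(&plugin{}, stub.WithPluginName("cpu-pinner"))
	if err != nil {
		panic(err)
	}
	if err := s.Run(context.Background()); err != nil {
		panic(err)
	}
}
```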
2. Image Signing – Sigstore Takes the Lead
2025 marks a definitive pivot in container image signing. Sigstore has firmly established itself as the open standard for software supply‑chain security, while Docker began formally retiring Docker Content Trust (DCT), based on Notary v1, in August 2025.
Sigstore Workflow (illustrated with Mermaid)
```mermaid
graph TD
    A["📥 OIDC Identity"] --> B{"🔍 Fulcio Check"}
    B -->|Valid| C["⚙️ Issue Certificate"]
    B -->|Invalid| D["🚨 Reject Request"]
    C --> E["📊 Sign & Log (Rekor)"]
    D --> F["📝 Audit Failure"]
    E --> G(("✅ Image Signed"))
    F --> G

    classDef input fill:#6366f1,stroke:#4338ca,color:#fff
    classDef process fill:#3b82f6,stroke:#1e40af,color:#fff
    classDef success fill:#22c55e,stroke:#15803d,color:#fff
    classDef error fill:#ef4444,stroke:#b91c1c,color:#fff
    classDef decision fill:#8b5cf6,stroke:#6d28d9,color:#fff
    classDef endpoint fill:#1e293b,stroke:#475569,color:#fff

    class A input
    class C,E process
    class B decision
    class D,F error
    class G endpoint
```
Sigstore components
| Component | Role |
|---|---|
| Cosign | Signs and verifies OCI artifacts |
| Fulcio | Free, public root CA that issues short‑lived certificates |
| Rekor | Transparency log that records every signing event |
This trifecta enables keyless signing: developers use OIDC tokens from their identity provider (GitHub, Google, etc.) to obtain an ephemeral signing certificate from Fulcio. Cosign then signs the image with that certificate, and the signature (plus certificate) is recorded in the immutable Rekor log.
Signing & Verifying an Image (keyless)
```bash
# 1️⃣ Authenticate with your OIDC provider
#    (Cosign will often pick this up automatically from environment variables.)

# 2️⃣ Sign an image (keyless)
cosign sign --yes <registry>/<image>:<tag>

# 3️⃣ Verify an image (cosign 2.x requires pinning the expected signer identity)
cosign verify \
  --certificate-identity <expected-signer-identity> \
  --certificate-oidc-issuer <oidc-issuer-url> \
  <registry>/<image>:<tag>
```
- The `--yes` flag bypasses interactive prompts, which is crucial for CI/CD pipelines.
- `cosign verify` queries Rekor to ensure the signature's authenticity and integrity, linking it back to a verifiable identity. This provides strong, supply-chain-level assurance before a container ever starts.
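The same check can gate deployments in CI. A sketch using GitHub Actions (the workflow identity and image reference are placeholders for your own setup):

```yaml
# Illustrative steps: install cosign, then refuse to deploy unverified images
- uses: sigstore/cosign-installer@v3
- name: Verify image signature
  run: |
    cosign verify \
      --certificate-identity "https://github.com/my-org/my-repo/.github/workflows/release.yml@refs/heads/main" \
      --certificate-oidc-issuer "https://token.actions.githubusercontent.com" \
      myregistry/my-app:latest
```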
3. What This Means for Teams
| Area | Impact |
|---|---|
| Security | Shift‑left verification prevents compromised images from ever reaching a node. |
| Operations | NRI and image verifier plugins reduce the need for custom admission controllers or external gatekeepers. |
| Performance | Modern storage backends (post‑aufs) and runtime‑level resource controls improve node efficiency. |
| Future‑proofing | Adoption of Sigstore and containerd 2.0 positions teams to leverage upcoming Kubernetes enhancements (e.g., CRI‑image‑pull plugins). |
4. TL;DR
- containerd 2.0 brings NRI, stable image verifier plugins, and a streamlined config migration.
- Sigstore is now the de‑facto standard for image signing; Docker’s DCT is being retired.
- Keyless signing with Cosign/Fulcio/Rekor gives you verifiable provenance without managing long‑lived keys.
- Migrating away from aufs and embracing the new runtime features will improve both security posture and performance.
Stay tuned—2025 will continue to deliver refinements, especially around WebAssembly runtimes and multi‑arch build pipelines, but the foundations laid this year are already reshaping how we ship and run containers at scale.
Docker Buildx & BuildKit
Docker’s Buildx, powered by the BuildKit backend, has matured into an indispensable tool for any serious container development workflow, particularly for multi‑platform image builds and caching strategies.
The traditional docker build command, while functional, often suffers from performance bottlenecks and limited cross‑architecture support. BuildKit fundamentally re‑architects the build process using a Directed Acyclic Graph (DAG) for build operations, enabling parallel execution of independent steps and superior caching mechanisms.
Why Multi‑Platform Builds Matter
The standout feature—multi‑platform builds—is no longer a niche capability but a practical necessity in a world diversifying rapidly into amd64, arm64, and even arm/v7 architectures. buildx allows a single docker buildx build command to produce a manifest list containing images for multiple target platforms, eliminating the need for separate build environments.
Example Dockerfile
```dockerfile
# Dockerfile
FROM --platform=$BUILDPLATFORM golang:1.21-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
ARG TARGETOS TARGETARCH
RUN CGO_ENABLED=0 GOOS=$TARGETOS GOARCH=$TARGETARCH go build -o /app/my-app ./cmd/server

# The runtime stage must match the *target* platform, so there is no
# --platform override here: buildx resolves it per target automatically.
FROM alpine:3.18
COPY --from=builder /app/my-app /usr/local/bin/my-app
CMD ["/usr/local/bin/my-app"]
```
Building & Pushing for Multiple Platforms
```bash
docker buildx create --name multiarch-builder --use
docker buildx inspect --bootstrap

docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t myregistry/my-app:latest \
  --push .
```
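After the push, it is worth confirming that the manifest list actually contains entries for every target platform:

```bash
# Inspect the multi-platform manifest list in the registry
docker buildx imagetools inspect myregistry/my-app:latest
```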
Caching Advantages
- Local layer caching – standard Docker behavior.
- Registry caching – previously pushed layers are reused in subsequent builds, dramatically reducing build times for frequently updated projects (see the sketch after this list).
- CI/CD impact – especially valuable where build environments are ephemeral.
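Registry-backed caching from the list above can be wired into an ephemeral CI job like this (a sketch; the `buildcache` tag is an illustrative name):

```bash
# Pull cache from, and push cache back to, the registry on every build
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --cache-from type=registry,ref=myregistry/my-app:buildcache \
  --cache-to type=registry,ref=myregistry/my-app:buildcache,mode=max \
  -t myregistry/my-app:latest \
  --push .
```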
eBPF in Kubernetes Networking & Observability
The integration of eBPF (extended Berkeley Packet Filter) into Kubernetes networking and observability stacks has moved from experimental curiosity to a foundational technology in late 2024 and 2025. eBPF allows sandboxed programs to run directly within the Linux kernel, triggered by various events, offering unprecedented performance and flexibility without the overhead of traditional kernel‑to‑user‑space context switches.
Networking
- CNI plugins such as Cilium and Calico now leverage eBPF, replacing or offering superior alternatives to iptables‑based approaches.
- Efficiency – eBPF programs make routing and policy decisions early in the kernel’s network stack, reducing CPU overhead and latency, especially in large‑scale clusters (a configuration sketch follows this list).
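As one concrete example, Cilium's kube-proxy replacement moves service load-balancing into eBPF. A sketch of enabling it via Helm (option names follow recent Cilium charts and may differ by chart version; the API server address is a placeholder):

```bash
# Enable Cilium's eBPF-based kube-proxy replacement
helm install cilium cilium/cilium \
  --namespace kube-system \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost=<api-server-host> \
  --set k8sServicePort=6443
```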
Observability
- By attaching eBPF programs to system calls, network events, and process activities, developers can capture detailed telemetry data directly from the kernel in real time.
- Tools – e.g., Cilium Hubble uses eBPF to monitor network flows, exposing latency, byte counts, and policy enforcement decisions for service‑to‑service communication (example below).
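The example below shows the kind of flow-level view Hubble exposes from the command line:

```bash
# Tail live service-to-service TCP flows in a namespace
hubble observe --namespace default --protocol TCP --follow
```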
WebAssembly (Wasm) on the Server‑Side
WebAssembly, initially conceived for the browser, has undeniably crossed the chasm into server‑side and cloud‑native environments, presenting a compelling alternative to traditional containers for specific use cases in 2025. Its core advantages—blazing fast startup times, minuscule footprint, and strong sandbox security—make it particularly attractive for serverless functions and edge computing.
Runtime Landscape (2025)
- Node.js, Deno, Bun and other runtimes are evolving to support Wasm natively.
- Cold‑start differences: Wasm modules start in milliseconds, versus seconds for typical container cold starts (see the sketch after this list).
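That startup profile is easy to observe locally with a standalone runtime such as Wasmtime (the module name is illustrative):

```bash
# Execute a compiled module directly; startup is typically a matter of milliseconds
wasmtime run my-app.wasm
```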
Running Wasm in Kubernetes
Kubernetes schedules Wasm modules via CRI‑compatible runtimes and shims. Projects like runwasi provide a containerd shim that enables Kubernetes to treat Wasm workloads like ordinary pods.
Example: Deploying a Wasm Application with crun
```yaml
# runtimeclass.yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasm-crun
handler: crun
---
# wasm-app.yaml
apiVersion: v1
kind: Pod
metadata:
  name: wasm-demo
  annotations:
    module.wasm.image/variant: compat
spec:
  runtimeClassName: wasm-crun
  containers:
  - name: my-wasm-app
    image: docker.io/myuser/my-wasm-app:latest
    command: ["/my-wasm-app"]
```
Kubernetes API Deprecations & Removals
Kubernetes consistently refines its API surface to introduce new capabilities and retire deprecated features. In late 2024 and 2025, vigilance against API deprecations and removals remains a critical operational task. The project adheres to a well‑defined deprecation policy across Alpha → Beta → GA stages.
Why It Matters
- Since v1.19, any request to a deprecated REST API returns a warning.
- Automated tooling (e.g., pluto, kube-no-trouble) and CI/CD pipeline checks are essential for identifying resources that use deprecated APIs.
Example: Find Deployments Originally Applied with the extensions/v1beta1 API
```bash
# The API server serves objects at whatever version you request, so
# .apiVersion in "kubectl get" output cannot reveal the version a manifest
# was originally written against. Check last-applied-configuration instead:
kubectl get deployments -A -o json \
  | jq -r '.items[]
      | select((.metadata.annotations["kubectl.kubernetes.io/last-applied-configuration"] // "")
        | contains("extensions/v1beta1"))
      | "\(.metadata.namespace)/\(.metadata.name)"'
```
Proactive Migration
- Plan migrations well before an upgrade window (see the conversion sketch after this list).
- Notable releases: v1.34 (August 2025) and v1.31 (August 2024) both introduced deprecations and removals that required attention.
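For the conversion itself, the kubectl convert plugin (distributed separately from kubectl) can rewrite stored manifests; a minimal sketch:

```bash
# Rewrite an old manifest to a currently supported API version
kubectl convert -f deployment-old.yaml --output-version apps/v1 > deployment-new.yaml
```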
Runtime‑Level Security Advances
While vulnerability scanning remains a fundamental best practice, recent developments focus on bolstering security primitives at the runtime level. A significant advancement in containerd 2.0 is its improved support for user namespaces, covered in the next section.
Container Security & Developer Tooling in 2025
User Namespaces – Containers can now run as root inside the container while being mapped to an unprivileged UID on the host. This dramatically reduces the blast radius of a container escape.
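In Kubernetes this surfaces as the pod-level hostUsers field (beta in recent releases; it requires runtime support such as containerd 2.0). A minimal sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false   # root inside the container maps to an unprivileged host UID
  containers:
  - name: app
    image: docker.io/library/nginx:latest
```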
Runtime Security
- eBPF‑based solutions give real‑time visibility into container behavior, flagging anomalies and policy violations.
- Least‑privilege enforcement (see the manifest sketch after this list):
  - Drop unnecessary Linux capabilities (e.g., CAP_NET_ADMIN).
  - Use read‑only filesystems whenever possible.
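Both controls from the list above live in a container's securityContext; a minimal sketch:

```yaml
securityContext:
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]   # drop everything, then add back only what the workload needs
```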
Developer Tooling Improvements
| Tool / Area | 2025 Highlights |
|---|---|
| Docker Desktop | Continuous security patches (e.g., CVE‑2025‑9074). |
| Local Kubernetes | Faster provisioning with kind and minikube. |
| Image Building | Integrated BuildKit and Buildx for multi‑architecture builds. |
| Overall Experience | More secure defaults, robust build pipelines, and ongoing security updates. |
For senior developers, these incremental but steady enhancements translate into more practical, secure, and efficient workflows.
Useful Links
- DataFormatHub tools (related to the topic):
  - YAML to JSON – Convert Kubernetes manifests
  - JSON Formatter – Format container configs
- Recent articles:
  - dbt & Airflow in 2025: Why These Data Powerhouses Are Redefining Engineering
  - AWS Lambda & S3 Express One Zone: A 2025 Deep Dive into re:Invent 2023
  - GitHub Actions & Codespaces: Why 2025
This article was originally published on DataFormatHub, your go‑to resource for data‑format and developer‑tools insights.