Cilium's kubectl‑capture feature replaced our tcpdump sidecars for network‑policy debugging
Source: Dev.to
The Problem with tcpdump Sidecars
Before adopting Cilium’s kubectl‑capture, our team relied on tcpdump sidecars to troubleshoot dropped traffic, misconfigured policies, and connectivity gaps. This workflow had three core flaws:
- Operational overhead: Every debugging session required patching pod specs to add a sidecar container with tcpdump, restarting workloads, and cleaning up after captures completed (a typical patch is sketched just after this list).
- Security risks: tcpdump sidecars require privileged access or host‑network mode to capture traffic, expanding the attack surface of production pods.
- Limited context: Sidecar captures lacked native integration with Cilium network policies, making it hard to correlate captured packets with specific policy rules or endpoint identities.
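To make the first point concrete, a session typically began with a patch along these lines. This is an illustrative sketch only: the container name, image, and capabilities are placeholders, not our actual manifests.

# Illustrative sketch: bolt a tcpdump-capable sidecar onto the web deployment
kubectl patch deployment web --patch '
spec:
  template:
    spec:
      containers:
      - name: tcpdump-sidecar          # placeholder name
        image: nicolaka/netshoot       # any image that ships tcpdump
        command: ["sleep", "infinity"]
        securityContext:
          capabilities:
            add: ["NET_RAW", "NET_ADMIN"]
'
# Every such patch rolls the pods, and reverting it later rolls them again.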
What is Cilium kubectl‑capture?
Cilium is an eBPF‑based networking and security layer for Kubernetes that provides advanced network‑policy enforcement, load balancing, and observability. The kubectl‑capture plugin (bundled with the Cilium CLI) leverages eBPF’s low‑level kernel visibility to capture packets directly from Cilium‑managed endpoints — no sidecars required.
Unlike traditional packet captures, kubectl‑capture ties captures to Cilium’s native constructs: you can filter traffic by endpoint IP, pod label, network‑policy name, destination port, or even specific policy verdicts (allowed/dropped). Captures are streamed directly to your local machine, or saved to a file for offline analysis with tools like Wireshark.
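A couple of day‑to‑day variations we run: the flag names below simply mirror the examples later in this post and are assumptions on our part, so check kubectl capture --help for the exact spelling in your Cilium CLI version.

# Stream dropped packets for pods labeled app=api straight to the terminal
kubectl capture --pod-labels app=api --verdict dropped
# Save allowed traffic on port 443 to a file for offline analysis in Wireshark
kubectl capture --pod-labels app=api --port 443 --verdict allowed -o api-allowed.pcap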
How We Switched from Sidecars to kubectl‑capture
Migrating our debugging workflow took less than a day. Here’s a quick example of how we debug traffic dropped by a network policy with kubectl‑capture:
# Capture traffic for pods labeled app=web, filtering for dropped packets on port 80
kubectl capture --pod-labels app=web --port 80 --verdict dropped -o capture.pcap
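Once the .pcap lands locally, any standard packet tool can read it, for example:

# Inspect the capture with tcpdump (or open capture.pcap in Wireshark)
tcpdump -nn -r capture.pcap 'tcp port 80'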
Old sidecar workflow
- Patch the web pod’s deployment to add a tcpdump sidecar with hostNetwork: true.
- Wait for the pod to restart, then exec into the sidecar: kubectl exec -it web-pod -- tcpdump -i eth0 port 80 -w capture.pcap
- Copy the capture file locally with kubectl cp (see the sketch after this list).
- Remove the sidecar from the deployment and restart the pod again to clean up.
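For reference, steps 3 and 4 of the old workflow looked roughly like this; the pod, container, and path names are placeholders:

# Copy the capture out of the sidecar, then roll back the patch
kubectl cp web-pod:/tmp/capture.pcap ./capture.pcap -c tcpdump-sidecar
kubectl rollout undo deployment/web   # triggers yet another restart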
The kubectl‑capture workflow cuts out three of these four steps, with zero pod restarts or configuration changes.
Key Benefits We’ve Seen
- Zero overhead: eBPF captures run in the kernel with minimal performance impact, unlike sidecars that consume pod resources.
- Better policy context: Captures include Cilium metadata like policy names, endpoint IDs, and verdicts, so you know exactly which rule allowed or dropped a packet.
- Improved security: No privileged sidecars or host‑network access required; kubectl‑capture uses Cilium’s existing eBPF programs to capture traffic.
- Faster debugging: We’ve cut mean time to resolve (MTTR) for network‑policy issues by ~60% since switching to kubectl‑capture.
Real‑World Use Case: Debugging a Dropped Ingress Policy
Last month, a new ingress policy for our frontend pods was dropping valid traffic from our API gateway. With tcpdump sidecars we would have needed to patch three frontend pods, restart them, and sift through generic packet captures. Instead, we ran:
kubectl capture --pod-labels app=frontend --source-ip 10.2.3.4 --verdict dropped -o frontend-drop.pcap
The capture showed that the policy was missing a rule to allow traffic from the gateway’s Cilium endpoint ID. We updated the policy, applied it, and verified the fix with a single kubectl‑capture command — no restarts needed.
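For illustration, the corrected policy looked roughly like the sketch below. The policy name, namespace, and labels are placeholders rather than our real manifests, and the verification command at the end simply re-runs the earlier capture and confirms it comes back empty.

cat <<'EOF' | kubectl apply -f -
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: frontend-ingress            # placeholder name
  namespace: frontend               # placeholder namespace
spec:
  endpointSelector:
    matchLabels:
      app: frontend
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: api-gateway            # the rule that was missing
EOF

# Re-run the capture; no dropped packets from the gateway confirms the fix
kubectl capture --pod-labels app=frontend --source-ip 10.2.3.4 --verdict dropped -o verify.pcap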
Conclusion
Cilium’s kubectl‑capture feature has completely replaced our tcpdump sidecar workflow for network‑policy debugging. It’s faster, safer, and far more integrated with how we manage Kubernetes networking. If you’re running Cilium in production, kubectl‑capture is a must‑have tool for your debugging toolkit.