Zero Trust Switching: Why Firewalls Alone Can’t Secure AI Workloads
Source: Linode Blog
Recap
- Part 1: Lessons from Smart Switching – Rethinking Security & Performance – challenged the belief that security must be sacrificed for performance.
- Part 2: East‑West vs. North‑South – Rethink Security for the AI‑Driven Data Center – highlighted that most modern data‑center traffic stays east‑west, never crossing the perimeter.
This final post merges those ideas and reframes the conversation entirely.
Security Frameworks Have Evolved
Artificial intelligence (AI) has fundamentally changed how applications behave, how data flows, and how risk manifests. AI security is no longer a single‑control problem; it’s an architectural one.
- Today’s AI workloads are distributed across cloud environments, Kubernetes clusters, APIs, and containerized services.
- AI models consume massive datasets, operate at machine speed, and continuously generate outputs that feed downstream AI applications, business workflows, and real‑world decisions.
In that world, no single security control (firewalls included) can do everything.
That isn’t a failure. It’s proof that security frameworks have evolved past single solutions.
Understanding the Growing Misalignment Between AI and Security Architecture
Most organizations haven’t ignored AI security; they’ve tried to protect AI systems with controls designed for a very different era of computing.
- Traditional firewalls remain essential for north‑south protection. They play a critical role in cloud security, data security, authentication, and API protection by inspecting inbound requests, enforcing security controls, and protecting users from malicious or unsafe AI outputs.
- Purpose‑built solutions, such as Akamai Firewall for AI, add an essential layer of protection against AI‑specific risks (prompt injection, data leaks, data poisoning, adversarial attacks, misuse of generative AI).
However, firewalls—AI‑specific or not—were never designed to fully secure what happens inside AI environments once traffic is trusted and flowing east‑west.
Inside Modern AI Systems
- AI workloads constantly communicate with other AI services.
- Kubernetes pods scale dynamically.
- Training data, runtime processes, and inference pipelines share infrastructure.
- APIs exchange sensitive information in real time.
- Cloud‑native and open‑source dependencies change continuously.
- Automation accelerates everything.
When internal visibility is limited and segmentation is coarse, security teams are forced into uncomfortable trade‑offs:
- Permissions become broader than intended.
- Access controls loosen, and validation gives way to assumed trust.
- Over time, these decisions expand the attack surface and weaken the overall AI security posture.
Where AI Breaches Actually Escalate
Most AI‑related incidents don’t start with a catastrophic failure; they begin with something small and familiar, such as:
- An exposed API
- An over‑permissive workload
- A compromised endpoint
- A poisoned dataset
- A misconfigured cloud service
The real damage occurs after initial access, when nothing prevents lateral movement.
Without Microsegmentation
Attackers can move freely between:
- AI models, large language models (LLMs), and GenAI services
- Training data, datasets, and other sensitive information
- Shared cloud services, Kubernetes dependencies, and data pipelines
They can also pivot into downstream applications that implicitly trust AI outputs.
Consequences include:
- Ransomware spreading through AI workloads
- Data exposure turning into data leaks
- Intellectual property exfiltration
Firewalls at the edge don’t fail in these scenarios—they simply aren’t positioned to stop what’s happening inside.
AI Security Requires Multiple Planes of Control
AI security must be enforced where AI risk appears, not just where it’s easiest to deploy tools.
- Edge & API layer: Use solutions such as Web Application and API Protection (WAAP) and AI guardrails to inspect prompts, outputs, and AI interactions in real time.
- Data‑center & cloud fabric: Control how AI workloads, AI services, and machine‑learning systems communicate with one another.
This is where microsegmentation and Zero Trust Switching become non‑negotiable.
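As a simplified illustration of the edge-layer inspection described above, a guardrail can screen prompts for known injection patterns before they reach a model. This sketch is hypothetical: the `inspect_prompt` helper and the patterns are invented for the example, and real products such as Firewall for AI use far more sophisticated detection than keyword matching.

```python
import re

# Hypothetical, minimal guardrail patterns -- purely illustrative.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal your system prompt", re.IGNORECASE),
]

def inspect_prompt(prompt: str) -> dict:
    """Return an allow/block decision and the matched pattern, if any."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(prompt):
            return {"action": "block", "reason": pattern.pattern}
    return {"action": "allow", "reason": None}
```

The key design point is that the check runs in-line at the edge or API layer, so a blocked prompt never reaches the model or its downstream consumers.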
Why Microsegmentation and Zero Trust Switching Can’t Wait
AI moves at fabric speed. Internal AI traffic cannot be hair‑pinned through centralized inspection points without breaking performance, compute efficiency, and real‑time workflows. Security controls must live directly in the path of east‑west communication.
- Microsegmentation isolates individual workloads, enforcing least‑privilege policies for every east‑west flow.
- Zero Trust switching ensures that every packet is authenticated and authorized before it traverses the fabric, eliminating implicit trust.
Together, they provide the granular, high‑performance controls needed to protect modern AI environments.
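The combination can be sketched as a default‑deny policy lookup: a flow between two workloads is permitted only if an explicit least‑privilege rule covers it, and everything else is dropped. The workload labels and the `is_flow_allowed` function below are illustrative assumptions, not the API of any product.

```python
from typing import NamedTuple

class Rule(NamedTuple):
    src: str   # source workload label
    dst: str   # destination workload label
    port: int  # destination port

# Explicit least-privilege allow list; any flow not listed is denied.
ALLOW_RULES = {
    Rule("inference-api", "model-server", 8443),
    Rule("model-server", "feature-store", 5432),
}

def is_flow_allowed(src: str, dst: str, port: int) -> bool:
    """Zero Trust default-deny: each flow must match an explicit rule."""
    return Rule(src, dst, port) in ALLOW_RULES
```

Under this model, a compromised `inference-api` workload cannot pivot directly to the `feature-store`, because no rule authorizes that flow; lateral movement is blocked by default rather than by exception.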
Next Steps
- Evaluate your current segmentation and trust model.
- Identify east‑west traffic flows that lack visibility or policy enforcement.
- Deploy Akamai Guardicore Segmentation (or an equivalent solution) to enforce microsegmentation and Zero‑Trust policies across your AI workloads.
By aligning security controls with the AI lifecycle—from edge to fabric—you can safeguard the high‑velocity, AI‑driven data center without sacrificing performance.
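The second step above, identifying east‑west flows that lack policy enforcement, can be approximated by comparing observed flows against a policy baseline. The flow records and baseline here are invented for the example; in practice this data would come from your segmentation platform's flow telemetry.

```python
# Observed east-west flows as (source, destination, port) tuples.
# These records are made up for illustration.
observed_flows = [
    ("training-job", "dataset-store", 443),
    ("inference-api", "model-server", 8443),
    ("inference-api", "dataset-store", 443),   # unexpected lateral flow
]

# Flows your current policy explicitly covers.
policy_baseline = {
    ("training-job", "dataset-store", 443),
    ("inference-api", "model-server", 8443),
}

# Anything observed but not covered is a visibility/enforcement gap.
unpoliced = [f for f in observed_flows if f not in policy_baseline]
for src, dst, port in unpoliced:
    print(f"no policy covers {src} -> {dst}:{port}")
```

Gaps surfaced this way become candidates for new least‑privilege rules, or for investigation if the flow shouldn’t exist at all.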
[**Akamai Guardicore Segmentation**](https://www.akamai.com/products/akamai-guardicore-segmentation) integrated into HPE Aruba CX 10000 Smart Switches
([solution brief](https://www.akamai.com/resources/solution-brief/akamai-guardicore-segmentation-and-aruba-cx10000-joint-solution-brief)), powered by AMD Pensando DPUs, moves policy enforcement into the data‑center fabric itself. Instead of relying on static IP‑based rules, microsegmentation enforces identity‑aware, context‑rich access controls at workload granularity. Policies follow AI workloads, not infrastructure.
This approach fundamentally changes AI risk management:
- Lateral movement is stopped by default.
- Least‑privilege access is enforced continuously.
- Attack vectors shrink instead of expand.
- Security teams gain real‑time visibility into AI systems, data, and workflows — without sacrificing performance.
[Zero Trust](https://www.akamai.com/glossary/what-is-zero-trust) switching secures how AI systems interact internally, which is precisely where modern breaches escalate.
Alignment: A Unified AI Security Architecture
The strongest AI security strategies don’t choose between controls—they align them.
- Firewall for AI secures the inputs to and outputs from AI applications.
- Akamai Guardicore Segmentation secures east‑west workload communication across cloud‑native and containerized environments.
- Zero Trust switching with HPE Aruba and AMD Pensando enforces those policies at fabric speed, without adding latency.
Together, they deliver a resilient security fabric across the entire AI lifecycle — from prompt to model, from workload to data, and from runtime to real‑world impact.
That’s not redundancy; that’s resilience.
The Urgency Is Real
AI environments will only become faster, more autonomous, and more interconnected. Attackers already understand this and are targeting internal AI workflows, data pipelines, and permissions — not just perimeter defenses.
- Firewalls are foundational to protecting AI apps.
- AI‑specific firewalls are purpose‑built protection for AI and LLM risks.
- Microsegmentation and Zero Trust switching are now critical to secure AI deployments and enterprise ecosystems adopting AI.
Waiting doesn’t reduce risk. It compounds it.
Building Trust in an AI‑Driven World
AI security isn’t about reacting to the latest news cycle or jumping on buzzwords. It’s about establishing real, measurable confidence:
- Protecting sensitive data.
- Controlling access.
- Properly isolating AI workloads.
- Ensuring systems behave as expected in production.
Benefit from an Integrated Approach
If you are rethinking how to secure AI workloads, cloud environments, Kubernetes platforms, or GenAI systems, consider that Akamai offers a uniquely integrated approach. We serve as a strategic partner to customers worldwide, helping them power and protect life online.
AI isn’t slowing down. Your security architecture shouldn’t either.
Check Out the Following Resources
- Read the Firewall for AI product brief for further information on secure AI interactions.
- Read the solution brief on securing AI workloads with Akamai Guardicore Segmentation and Zero Trust security.
- Learn about Akamai App & API Protector, which protects web applications and APIs from zero‑day vulnerabilities, CVEs, and more.
Author

Clint Huffaker started his career on the customer side, managing enterprise networking and security before moving into presales and architecture. Those early lessons gave him a deep appreciation for what customers do every day — balance innovation, risk, and business pressure. Today, as Director of Product Marketing for Security at Akamai, Clint leads initiatives around Akamai Guardicore Segmentation and Zero Trust.