Understanding AKS Networking: Underlay Network

Published: March 3, 2026 at 12:35 AM EST
4 min read
Source: Dev.to


If you’ve ever tried to curl a Kubernetes Service IP from a VM and it just… hangs — this guide is for you.

We’ll break down:

  • AKS network design
  • CIDR layout (VNet, Subnet, Service CIDR, Pod CIDR)
  • Why ClusterIP fails from a VM
  • Why NodePort works
  • Step‑by‑step packet flow
  • Full Azure CLI setup

All tested on Azure Kubernetes Service (AKS) in Microsoft Azure.

🧱 1️⃣ Network Design Overview

Lab topology

Component                  CIDR
VNet                       10.0.0.0/16
AKS Subnet                 10.0.1.0/24
VM Subnet                  10.0.2.0/24
Service CIDR               10.240.0.0/16
Overlay Pods (optional)    192.168.0.0/16

This lab uses underlay mode (Azure CNI), where nodes (and, unless the overlay option is enabled, pods) get their IPs directly from the AKS subnet.

🗺️ Architecture Diagram (PlantUML)

Architecture diagram

🧠 Understanding the CIDRs

CIDR              Purpose
10.0.0.0/16       Azure VNet
10.0.1.0/24       AKS Nodes
10.0.2.0/24       Test VM
10.240.0.0/16     Kubernetes Services (virtual)
192.168.0.0/16    Overlay Pods (if enabled)

Critical concept: The Service CIDR is not part of Azure VNet routing, so traffic from a VM to a ClusterIP is dropped by the Azure router.
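You can confirm this relationship up front. A minimal sketch using Python's standard `ipaddress` module, with the CIDRs from the lab table above:

```python
import ipaddress

vnet = ipaddress.ip_network("10.0.0.0/16")            # Azure VNet address space
service_cidr = ipaddress.ip_network("10.240.0.0/16")  # Kubernetes Service CIDR

# The Service CIDR is not contained in the VNet address space, so the
# Azure router has no route for it and traffic from a VM is dropped.
print(service_cidr.subnet_of(vnet))  # False
```

Only the VNet's own prefixes are routable inside Azure; anything outside them never reaches a node.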

⚙️ 2️⃣ Full Azure CLI Setup

Variables

LOCATION=eastus2
RG=aks-networking-lab
VNET_NAME=aks-vnet
UNDERLAY_SUBNET=aks-underlay-subnet
VM_SUBNET=vm-subnet
AKS_NAME=aks-underlay

Create Resource Group

az group create \
  --name $RG \
  --location $LOCATION

Create VNet + AKS Subnet

az network vnet create \
  --resource-group $RG \
  --name $VNET_NAME \
  --address-prefix 10.0.0.0/16 \
  --subnet-name $UNDERLAY_SUBNET \
  --subnet-prefix 10.0.1.0/24

Create VM Subnet

az network vnet subnet create \
  --resource-group $RG \
  --vnet-name $VNET_NAME \
  --name $VM_SUBNET \
  --address-prefix 10.0.2.0/24

Get Subnet ID (for AKS)

SUBNET_ID=$(az network vnet subnet show \
  --resource-group $RG \
  --vnet-name $VNET_NAME \
  --name $UNDERLAY_SUBNET \
  --query id -o tsv)

Create AKS Cluster

az aks create \
  --resource-group $RG \
  --name $AKS_NAME \
  --network-plugin azure \
  --vnet-subnet-id $SUBNET_ID \
  --service-cidr 10.240.0.0/16 \
  --dns-service-ip 10.240.0.10 \
  --node-count 2 \
  --generate-ssh-keys

Connect to the Cluster

az aks get-credentials \
  --resource-group $RG \
  --name $AKS_NAME

🚀 3️⃣ Deploy Test Application

kubectl create deployment nginx --image=nginx
kubectl scale deployment nginx --replicas=2

Expose as ClusterIP

kubectl expose deployment nginx \
  --name nginx-svc \
  --port 80 \
  --type ClusterIP

Verify Service

kubectl get svc

Example output

NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)
nginx-svc   ClusterIP   10.240.225.54   <none>        80/TCP

The IP (10.240.225.54) comes from the Service CIDR.
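A quick sanity check that the allocated ClusterIP really falls inside the Service CIDR (a sketch with Python's stdlib `ipaddress`, using the values from the example output):

```python
import ipaddress

service_cidr = ipaddress.ip_network("10.240.0.0/16")  # --service-cidr from az aks create
cluster_ip = ipaddress.ip_address("10.240.225.54")    # ClusterIP from kubectl get svc

# Kubernetes allocates every ClusterIP from the Service CIDR.
print(cluster_ip in service_cidr)  # True
```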

🔥 4️⃣ Packet Flow: ClusterIP (Why VM Access Fails)

From the VM try:

curl 10.240.225.54

It hangs because the Azure router looks up the destination address:

Is 10.240.225.54 (part of 10.240.0.0/16) inside any VNet prefix? → No
→ Drop packet

The packet never reaches any AKS node.

🧭 Packet Flow Diagram

Packet flow diagram

🧪 5️⃣ Convert to NodePort

kubectl patch svc nginx-svc \
  -p '{"spec":{"type":"NodePort"}}'

Verify:

kubectl get svc nginx-svc

Example output

NAME        TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)
nginx-svc   NodePort   10.240.225.54   <none>        80:31598/TCP

Now the service is reachable via any node’s IP on the allocated node‑port (31598 in the example).

✅ 6️⃣ Correct Way to Test from the VM

  1. Get a node’s internal IP:

    kubectl get nodes -o wide

    Sample output

    NAME                                STATUS   ROLES   AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
    aks-nodepool1-42091994-vmss000000   Ready    <none>  34m   v1.33.6   10.0.1.33      <none>        Ubuntu 22.04.5 LTS   5.15.0-1103-azure   containerd://1.7.30-2
    aks-nodepool1-42091994-vmss000001   Ready    <none>  34m   v1.33.6   10.0.1.4       <none>        Ubuntu 22.04.5 LTS   5.15.0-1103-azure   containerd://1.7.30-2
  2. Curl the service via the node IP and the node‑port:

    curl http://10.0.1.4:31598

You should receive the default Nginx response, confirming that the service is reachable from the VM when exposed as a NodePort.


Example

azureuser@test-vm:~$ curl -s 10.0.1.33:31598 | grep -i "welcome"
Welcome to nginx!


Flow

  1. VM → Node IP
  2. Node receives traffic
  3. kube-proxy matches NodePort rule
  4. DNAT to Pod IP
  5. Response returned
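The DNAT step above can be sketched as a toy function (a Python stand-in for kube-proxy; the pod IPs and the random backend pick are illustrative assumptions, and real kube-proxy does this with iptables or IPVS, not userspace code):

```python
import random

# Hypothetical endpoint table: NodePort 31598 fronts the two nginx pods.
# These pod IPs are made up for illustration, not taken from the cluster.
NODEPORT_ENDPOINTS = {
    31598: ["10.0.1.10", "10.0.1.25"],
}

def dnat(dst_ip: str, dst_port: int) -> tuple[str, int]:
    """Rewrite (node IP, node port) to (pod IP, container port),
    mimicking the KUBE-NODEPORTS -> KUBE-SEP DNAT step."""
    endpoints = NODEPORT_ENDPOINTS.get(dst_port)
    if endpoints is None:
        return dst_ip, dst_port          # no NodePort rule: deliver unchanged
    return random.choice(endpoints), 80  # service targets container port 80

pod_ip, pod_port = dnat("10.0.1.4", 31598)
print(pod_ip, pod_port)
```

The key point: the destination the VM dialed (a real node IP) is rewritten on the node itself, which is why the packet could be routed there by Azure in the first place.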

🧠 Deep Technical Breakdown

When a packet hits the node, it traverses iptables chains that kube-proxy has already installed, such as:

KUBE-NODEPORTS
KUBE-SERVICES
KUBE-SEP-XXXX

DNAT example

10.0.1.4:31598 → 10.0.1.10:80

Why ClusterIP works inside Pods

  • The packet reaches the node first.
  • kube-proxy rewrites the destination to the pod IP.

Why it fails from a VM

  • The packet never reaches the node.
  • Azure routing drops it.

🎯 Key Takeaways

  • ClusterIP = virtual internal Kubernetes IP.
  • NodePort = node listens on a real VNet IP.
  • Service CIDR must not overlap with the VNet CIDR.
  • Azure only routes VNet CIDRs.
  • kube-proxy handles Service IP translation.
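The "must not overlap" rule can be checked before creating the cluster (sketch with stdlib `ipaddress`; the bad CIDR is a made-up counterexample, not a value from this lab):

```python
import ipaddress

vnet = ipaddress.ip_network("10.0.0.0/16")
service_cidr = ipaddress.ip_network("10.240.0.0/16")      # lab value: disjoint, safe
bad_service_cidr = ipaddress.ip_network("10.0.128.0/17")  # hypothetical bad choice

print(vnet.overlaps(service_cidr))      # False: valid Service CIDR
print(vnet.overlaps(bad_service_cidr))  # True: would collide with VNet routing
```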

🏁 Final Mental Model

Azure handles:

10.0.0.0/16

Kubernetes handles:

10.240.0.0/16

Different routing domains.
