Amazon EKS Series - Part 2: EKS Architecture and Core Components

Published: December 17, 2025 at 08:40 AM EST
8 min read
Source: Dev.to

Introduction

Welcome back to the Amazon EKS at Scale series!

In Part 1 we covered the fundamentals of Amazon EKS — what it is, why to use it, and the different ways to manage worker nodes. We also created our first EKS cluster using eksctl.

In this article we’ll take a deeper look at:

  • The high‑level architecture of Amazon EKS
  • Control‑plane components (AWS‑managed)
  • Worker nodes (customer‑managed)
  • Networking fundamentals
  • IAM and authentication

Understanding this architecture is essential before diving into production deployments.

Amazon EKS is a managed Kubernetes service that runs the Kubernetes control plane for you. This means AWS handles the complex, undifferentiated heavy lifting of running Kubernetes, while you focus on deploying and managing your applications.

Responsibility Matrix

| Component | Managed By |
| --- | --- |
| Control Plane (API Server, etcd, Scheduler, Controllers) | AWS |
| Control Plane High Availability | AWS |
| Control Plane Security Patches | AWS |
| Worker Nodes | You (or AWS with Managed Node Groups / Fargate) |
| Application Deployments | You |
| Pod Networking | You (with the AWS VPC CNI) |
| IAM Roles and Policies | You |

Control Plane

The control plane is the brain of your Kubernetes cluster. In EKS, AWS fully manages this component, running it across multiple Availability Zones (AZs) for high availability.

kube‑apiserver

The API server is the front door to your Kubernetes cluster:

  • Exposes the Kubernetes API over HTTPS
  • Validates and processes all API requests (from kubectl, controllers, and other components)
  • Acts as the gateway for all cluster operations — creating pods, services, deployments, etc.
  • Authenticates requests using AWS IAM (via the AWS IAM Authenticator)

When you run kubectl get pods, your request goes to the API server, which retrieves the information from etcd and returns it to you.

etcd

etcd is a distributed key‑value store that serves as Kubernetes’ database:

  • Stores all cluster state and configuration data
  • Holds information about pods, services, secrets, ConfigMaps, and more
  • Provides strong consistency guarantees

In EKS, AWS manages etcd replication across multiple AZs. You never interact with etcd directly — all access goes through the API server.

kube‑scheduler

The scheduler places pods on nodes:

  • Watches for newly created pods that have no node assigned
  • Evaluates resource requirements (CPU, memory, storage)
  • Considers constraints like node selectors, taints, tolerations, and affinity rules
  • Selects the most suitable node and binds the pod to it
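
For example, a pod spec like this sketch gives the scheduler all three kinds of input: resource requests, a node selector, and a toleration (the label, taint, and image are illustrative placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: api-worker
spec:
  containers:
    - name: app
      image: public.ecr.aws/nginx/nginx:latest   # illustrative image
      resources:
        requests:
          cpu: "500m"            # scheduler only considers nodes with 0.5 vCPU free
          memory: "256Mi"
  nodeSelector:
    workload-type: general       # illustrative label; only matching nodes are candidates
  tolerations:
    - key: dedicated             # illustrative taint key; permits placement on tainted nodes
      operator: Equal
      value: batch
      effect: NoSchedule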

kube‑controller‑manager

The controller manager runs control loops that regulate cluster state:

| Controller | Function |
| --- | --- |
| Node Controller | Monitors node health and responds when nodes go down |
| Replication Controller | Ensures the correct number of pod replicas are running |
| Endpoints Controller | Populates Endpoints objects (joins Services and Pods) |
| Service Account Controller | Creates default service accounts for new namespaces |
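
To see one of these loops in action: a Deployment declares a desired replica count, and the replication machinery (the ReplicaSet controller in modern clusters) keeps creating or deleting pods until the actual count matches. A minimal sketch, with illustrative names and image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # desired state; the control loop reconciles running pods toward this
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: public.ecr.aws/nginx/nginx:latest   # illustrative image

Delete one of the three pods and the controller immediately creates a replacement; that reconciliation loop is the heart of Kubernetes.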

Control‑Plane Characteristics

  • Runs in an AWS‑managed VPC – isolated from your account and other customers
  • Highly available – at least two API server instances and three etcd nodes spread across three AZs
  • Automatically scaled – AWS scales control‑plane resources based on cluster size
  • No direct access – you cannot SSH into control‑plane nodes; interaction is only via the API
  • Automatic updates – AWS handles patching and security updates

Worker Nodes

Worker nodes are the compute resources where your applications actually run. Unlike the control plane, you are responsible for provisioning and managing worker nodes (unless you use Fargate).

Each worker node runs several Kubernetes components:

kubelet

The primary node agent:

  • Registers the node with the Kubernetes API server
  • Watches for pods scheduled to its node
  • Ensures containers are running and healthy
  • Reports node and pod status back to the control plane
  • Executes liveness and readiness probes

The kubelet communicates with the container runtime to manage the container lifecycle.
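
Probes are declared in the pod spec and executed by the kubelet; here is a minimal sketch (the endpoints, ports, and timings are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
    - name: app
      image: public.ecr.aws/nginx/nginx:latest   # illustrative image
      livenessProbe:             # kubelet restarts the container when this fails
        httpGet:
          path: /healthz         # illustrative health endpoint
          port: 80
        initialDelaySeconds: 10
        periodSeconds: 15
      readinessProbe:            # kubelet marks the pod NotReady (Services stop routing to it) when this fails
        httpGet:
          path: /ready           # illustrative readiness endpoint
          port: 80
        periodSeconds: 5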

kube‑proxy

Handles networking on each node:

  • Maintains network rules for pod‑to‑pod communication
  • Implements Kubernetes Services (ClusterIP, NodePort, LoadBalancer)
  • Uses iptables or IPVS to route traffic to the correct pods
  • Enables service discovery within the cluster
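
Concretely, when you create a Service like the sketch below, kube-proxy on every node programs iptables/IPVS rules so that traffic to the Service's ClusterIP is load-balanced across the pods matching the selector (names and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP        # virtual IP implemented entirely by kube-proxy rules
  selector:
    app: web             # traffic goes to pods carrying this label
  ports:
    - port: 80           # port exposed on the ClusterIP
      targetPort: 8080   # illustrative container port on the backing pods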

Container Runtime

Executes containers:

  • EKS uses containerd as the default runtime (Kubernetes removed dockershim, its built‑in Docker Engine integration, in version 1.24)
  • Pulls container images from registries (e.g., Amazon ECR)
  • Creates and manages container processes
  • Handles container isolation using Linux namespaces and cgroups

Node Management Options

| Option | Node Management | Scaling | Best For |
| --- | --- | --- | --- |
| Self‑Managed Nodes | You manage everything | Manual or custom | Full control, custom AMIs |
| Managed Node Groups | AWS manages provisioning and updates | Auto Scaling Groups | Most production workloads |
| AWS Fargate | AWS manages everything | Automatic, per pod | Serverless, variable workloads |
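
Since we created our cluster with eksctl in Part 1, here is a hedged sketch of how a managed node group is declared in an eksctl ClusterConfig (the cluster name, region, and sizes are placeholders, not values from this series):

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster        # placeholder name
  region: us-east-1         # placeholder region
managedNodeGroups:
  - name: general
    instanceType: m5.large
    desiredCapacity: 2      # backed by an Auto Scaling Group that AWS manages
    minSize: 2
    maxSize: 5
    privateNetworking: true # place the nodes in private subnets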

Networking Fundamentals

Networking is a critical aspect of any Kubernetes deployment. EKS integrates deeply with AWS networking services.

VPC Overview

Your EKS cluster runs inside a VPC – an isolated virtual network in AWS:

  • Provides network isolation and security
  • You define the IP address range (CIDR block)
  • Contains subnets, route tables, and Internet gateways

Requirements: a VPC with subnets in at least two AZs.

Subnets

Subnets divide your VPC into smaller network segments:

  • Have a route to an Internet Gateway (public subnets) or to a NAT Gateway (private subnets)
  • Resources can have public IP addresses (public subnets) or not (private subnets)
  • Used for load balancers, bastion hosts, and worker nodes

NAT Gateways (placed in public subnets) enable private subnets to reach the Internet (e.g., pulling container images) without exposing resources directly.
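
As a sketch of what this looks like in practice, an eksctl ClusterConfig can reuse an existing VPC by mapping public and private subnets per AZ (the subnet IDs below are placeholders):

# Fragment of an eksctl ClusterConfig that reuses an existing VPC
vpc:
  subnets:
    public:
      us-east-1a: { id: subnet-0aaa1111bbb2222cc }   # routes to an Internet Gateway
      us-east-1b: { id: subnet-0bbb2222ccc3333dd }
    private:
      us-east-1a: { id: subnet-0ccc3333ddd4444ee }   # routes out via a NAT Gateway
      us-east-1b: { id: subnet-0ddd4444eee5555ff }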

Pod Networking – Amazon VPC CNI

EKS uses the Amazon VPC CNI plugin for pod networking:

  • Assigns each pod an IP address from the VPC CIDR block, making pods first‑class citizens on the VPC network
  • Enables native VPC routing, security groups, and network ACLs for pods
  • Provides high performance and low latency because traffic stays within the VPC fabric

IAM & Authentication

  • IAM Authenticator: Authenticates kubectl and other API requests using AWS IAM credentials.
  • IAM Roles for Service Accounts (IRSA): Allows pods to assume IAM roles, granting fine‑grained AWS permissions without embedding static credentials.
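
With IRSA, the link between a pod and an IAM role is a simple annotation on a Kubernetes service account; pods that use the account receive temporary credentials for the role through the cluster's OIDC provider. A minimal sketch (the account ID and role name are placeholders):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-reader
  namespace: default
  annotations:
    # Placeholder role ARN; pods using this service account assume the role
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/s3-read-only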

Understanding these mechanisms is essential for securing your cluster and granting the right level of access to workloads.

Recap So Far

  • Control plane – fully AWS‑managed, highly available, automatically patched.
  • Worker nodes – you provision (or let AWS do it with Managed Node Groups/Fargate).
  • Networking – VPC‑native pod networking via the Amazon VPC CNI plugin.
  • IAM – central to authentication and authorization for both users and pods.

With this architectural foundation in place, let's look more closely at how pod networking and authentication actually work.

Amazon EKS Architecture Overview

Pod Networking with the AWS VPC CNI

  • Pods receive real IP addresses from your VPC CIDR range
  • Pods can communicate directly with other AWS services (RDS, ElastiCache, etc.)
  • No overlay network – native VPC networking performance
  • Supports security groups for pods (with specific configurations)

How it works

  1. The CNI plugin attaches Elastic Network Interfaces (ENIs) to worker nodes.
  2. Each ENI can have multiple secondary IP addresses.
  3. These IPs are assigned to pods running on the node.
  4. Pod‑to‑pod traffic uses native VPC routing.

Important: The number of pods per node is limited by the number of ENIs and IPs the instance type supports.
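
For example, the commonly used formula (without prefix delegation) is: max pods = ENIs × (IPv4 addresses per ENI − 1) + 2. An m5.large supports 3 ENIs with 10 IPv4 addresses each, so it tops out at 3 × (10 − 1) + 2 = 29 pods.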

Architecture Diagram

┌─────────────────────────────────────────────────────────────┐
│                         AWS Cloud                           │
│  ┌───────────────────────────────────────────────────────┐  │
│  │                       Your VPC                        │  │
│  │  ┌─────────────────┐    ┌─────────────────┐           │  │
│  │  │  Public Subnet  │    │  Public Subnet  │           │  │
│  │  │   (AZ‑1)        │    │   (AZ‑2)        │           │  │
│  │  │  Load Balancer  │    │  NAT Gateway    │           │  │
│  │  └─────────────────┘    └─────────────────┘           │  │
│  │  ┌─────────────────┐    ┌─────────────────┐           │  │
│  │  │ Private Subnet  │    │ Private Subnet  │           │  │
│  │  │   (AZ‑1)        │    │   (AZ‑2)        │           │  │
│  │  │  Worker Nodes   │    │  Worker Nodes   │           │  │
│  │  │  (Pods)         │    │  (Pods)         │           │  │
│  │  └─────────────────┘    └─────────────────┘           │  │
│  └───────────────────────────────────────────────────────┘  │
│                              │                              │
│  ┌───────────────────────────┴───────────────────────────┐  │
│  │           EKS Control Plane (AWS‑Managed)             │  │
│  │      API Server │ etcd │ Scheduler │ Controllers      │  │
│  └───────────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────────┘

Authentication & Authorization

EKS uses AWS IAM for authentication and Kubernetes RBAC for authorization.

  1. User runs a kubectl command (e.g., kubectl get pods).
  2. AWS IAM Authenticator – kubectl uses the AWS CLI to obtain a token.
  3. Token sent to the API Server – included in the request header.
  4. EKS validates the token – confirms the IAM identity.
  5. Kubernetes RBAC – determines what the user can do.
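
Step 2 is wired up in your kubeconfig through a client‑side exec plugin; here is a hedged sketch of the users entry that aws eks update-kubeconfig generates (the cluster name and region are placeholders):

# Excerpt of a kubeconfig "users" entry
users:
  - name: demo-cluster          # placeholder cluster name
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: aws            # kubectl shells out to the AWS CLI
        args:
          - eks
          - get-token           # returns a short-lived, IAM-signed bearer token
          - --cluster-name
          - demo-cluster
          - --region
          - us-east-1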

Comparison Table

| Aspect | AWS IAM | Kubernetes RBAC |
| --- | --- | --- |
| Purpose | Who can access the cluster | What they can do inside the cluster |
| Scope | AWS account level | Kubernetes cluster level |
| Managed by | IAM policies | Role / ClusterRole objects |
| Example | “User X can call EKS APIs” | “User X can list pods in namespace Y” |
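
The RBAC example in the last row corresponds to objects like this sketch: a Role granting read access to pods in one namespace, bound to the Kubernetes username that IAM maps to (the user and namespace names are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-y            # illustrative namespace
rules:
  - apiGroups: [""]            # "" is the core API group (pods live here)
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-y
subjects:
  - kind: User
    name: user-x               # illustrative Kubernetes username mapped from IAM
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io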

Mapping IAM Identities to Kubernetes Users

The aws-auth ConfigMap bridges IAM identities to Kubernetes users and groups.

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/NodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::123456789012:user/admin
      username: admin
      groups:
        - system:masters

Key points

  • Worker nodes use IAM roles (mapped in aws-auth) to join the cluster.
  • You can map IAM users and roles to Kubernetes groups.
  • The system:masters group grants full cluster‑admin access.
  • EKS also supports EKS Access Entries – a newer, simpler alternative to aws-auth.

Best Practices

  • Use IAM roles (not users) – they are more secure and support temporary credentials.
  • Follow the principle of least privilege – grant only the permissions needed.
  • Use IRSA (IAM Roles for Service Accounts) – lets pods assume IAM roles for AWS API access.
  • Audit access regularly – review who has access to your cluster.

Recap

  • EKS is a managed Kubernetes service: AWS runs the control plane; you manage worker nodes and workloads.
  • Control‑plane components – API Server, etcd, Scheduler, Controller Manager – manage cluster state.
  • Worker nodes run kubelet, kube-proxy, and a container runtime to execute pods.
  • Networking – VPC CNI gives pods real VPC IPs, enabling direct communication with AWS services.
  • IAM integration – authentication via IAM, authorization via Kubernetes RBAC.
  • The control plane runs in an AWS‑managed, highly‑available VPC (multiple AZs). Interaction is only through the Kubernetes API.
  • Worker nodes reside in your VPC (or on Fargate) and are your responsibility.

What’s Next?

In Part 3 we’ll get hands‑on and provision an Amazon EKS cluster using Terraform and community modules. You’ll learn how to:

  • Set up Terraform for EKS
  • Use the terraform-aws-eks module
  • Configure VPC, subnets, and node groups
  • Apply best practices for infrastructure‑as‑code

Stay tuned!

Found this article helpful? Follow the series and share your thoughts in the comments!
