Evolution of Workload Hosting: From Virtualization to Containerization
Source: VMware Blog
Virtualization solved the core problem of “one server, one app.” Containerization built on this outcome and refined how it is achieved. However, virtualization remains a mainstay in contemporary computing, and many of the world’s most critical workloads continue—and will continue—to run in VMs. Beyond its longevity, virtualization also helps containerization and Kubernetes deliver the core outcomes users and businesses expect.
I had the opportunity to attend KubeCon North America in November 2025. Thank you to the Cloud Native Computing Foundation for an exceptional event! You can read my colleague’s great summary about the event here. I also had the privilege of representing Broadcom at the expo booth, where I had compelling conversations with attendees and other sponsors who are part of the broader cloud‑native community. One question I heard from an engineer who stopped by the booth stood out to me: “What does virtualization have to do with Kubernetes?” Understanding this relationship matters for both day‑to‑day IT work and organizational budgets!
Computing revolutionized the way we interact with each other, how we work, and what is possible in industry. IT workloads demand computing resources—CPU, memory, storage, network, etc.—to perform desired functions such as sending an email or updating a database. A critical piece of business operations involves IT organizations optimizing their workload‑hosting strategy, whether that be on the mainframe, in an on‑premises datacenter, or in a public‑cloud environment.
Virtualization didn’t disappear with Kubernetes — it actually makes Kubernetes work better at enterprise scale.
Virtualization
From the dawn of electronic computing in the 1940s, users interacted with dedicated physical hardware to accomplish their tasks. Applications, workloads, and hardware all advanced rapidly, expanding the ability, complexity, and reach of what users could do via computing. However, a key constraint remained: one machine (or server) was dedicated to one application. For example, organizations had servers dedicated to email functionality, or an entire server dedicated to activities that ran only a handful of times per month, such as payroll.
Virtualization—using technology to simulate IT resources—was pioneered in the 1960s on the mainframe. In that era, virtualization allowed shared access to mainframe resources and enabled multiple applications and use cases to run on the same hardware. This provided a blueprint for contemporary virtualization and cloud computing by allowing multiple applications to run on shared hardware.
VMware led the cloud‑computing boom through the virtualization of the x86 architecture, the most common instruction set for personal computers and servers. Physical hardware could now house multiple distributed applications, support many users, and fully utilize expensive hardware. Virtualization is the key technology that makes public‑cloud computing possible. Below is a summary of its primary benefits:
- Abstraction – Virtualization abstracts physical hardware (CPU, RAM, storage) into logical partitions that can be managed independently.
- Flexibility, Scalability, Elasticity – The abstracted partitions can be scaled as business needs change, provisioned and turned off on demand, and resources can be reclaimed as needed.
- Resource Consolidation & Efficiency – Physical hardware can run multiple, right‑sized logical partitions with the appropriate amount of CPU, RAM, and storage, maximally utilizing hardware and saving on fixed costs such as real estate and power.
- Isolation & Security – Each VM has its own “world” with an OS independent from the one running on the physical host, allowing deep security and isolation for applications sharing the underlying host.
For most enterprises, the critical workloads that power their mission are built to run on virtual machines, and they trust Broadcom to provide the best VMs and virtualization technology on the planet.
By proving that infrastructure could be abstracted and managed independently of physical hardware, virtualization laid the foundation for the next evolution in workload hosting.
Containerization
As computing demands scaled, the complexity of applications and workloads rose exponentially. Applications that were traditionally designed and managed as monoliths began to be broken apart into smaller units of functionality called microservices. This allowed developers and administrators to manage parts of their applications independently, enabling easier scaling, updates, and reliability. These microservices run in containers, which were popularized in the industry by Docker.
Docker containers package applications and their dependencies—code, libraries, and configuration files—into units that can run consistently on any infrastructure, whether it be a developer’s laptop, a server in an enterprise datacenter, or a server in the public cloud. Containers get their name from shipping containers and provide many of the same benefits as their namesake: standardization, portability, and encapsulation. Below is a quick overview of the key benefits of containerization:
- Standardization – Like shipping containers package merchandise in a form factor that other machinery can consistently interact with, software containers package applications in a uniform, logically abstracted, and isolated environment.
- Portability – Shipping containers move from ships to trucks and trains. Software containers can run on a developer’s laptop, development environments, production servers, and between cloud providers.
- Encapsulation – Shipping containers hold all the merchandise needed to fulfill an order. Software containers hold the application code together with everything needed to run it.
Containers vs. Virtual Machines
- Containers package an application and its runtime, system tools, libraries, and any other dependencies required to run the application.
- Isolation: Shipping and software containers both isolate their contents from other containers. Unlike VMs, software containers share the underlying machine’s OS kernel, but each keeps its own application dependencies.
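The packaging idea above can be sketched as a minimal Dockerfile. This is an illustrative example, not from the original post; the base image, file names, and entrypoint are assumptions for a hypothetical Python service.

```dockerfile
# Hypothetical service -- image, file names, and command are illustrative
FROM python:3.12-slim

WORKDIR /app

# Encapsulation: package the library dependencies with the application
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Package the application code itself
COPY app.py .

# Portability: the resulting image runs the same way on a laptop,
# a datacenter server, or a public-cloud host
CMD ["python", "app.py"]
```

Once built, the same image can be pushed to a registry and pulled unchanged into any environment with a conformant container runtime.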
As containers became the industry standard, teams began developing their own tools to orchestrate and manage containers at scale. Kubernetes was born out of these projects in 2015 and then donated to the open‑source community. Building on the nautical theme of containers, Kubernetes means “Helmsman” or “Pilot” in Greek, and functions as the brain of the infrastructure.
A container allows you to easily deploy an application; Kubernetes allows you to:
- Scale the number of application instances you would like to deploy.
- Ensure each instance remains running.
- Operate the same way across any cloud provider or datacenter.
These are the three S pillars – Scalability, Self‑Healing, and Standardization. These outcomes propelled Kubernetes’ rise to industry gold standard, making it ubiquitous in cloud‑native computing by delivering operational consistency, reducing risk, and enhancing portability.
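The three pillars above can be sketched as a minimal Kubernetes Deployment manifest. The name, labels, and image here are illustrative assumptions, not from the original post; what matters is the declarative shape.

```yaml
# Hypothetical Deployment -- name, labels, and image are illustrative
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # Scalability: declare how many instances to run
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.27
        ports:
        - containerPort: 80
```

Kubernetes continuously reconciles toward the declared state: if an instance crashes, the controller replaces it (Self‑Healing), and because the manifest is declarative, the same YAML applies unchanged on any conformant cluster, in any cloud or datacenter (Standardization).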
Virtualization → Containerization
Virtualization paved the way for developers to house and isolate multiple applications on shared physical hardware, allowed administrators to manage IT resources decoupled from the underlying hardware, and proved that abstracting underlying parts of the stack is viable for running and scaling complex software. Containers build on these principles and abstract the application layer, providing the following benefits over virtualization:
- Efficiency – Because containers share a host OS, they eliminate the resource overhead (CPU, memory, storage) associated with running multiple copies of the same OS.
- Velocity – The smaller footprint allows much faster startup and shutdown times.
- Portability – Containers are lightweight and can be run on any conformant container runtime.
Virtualization Improves Kubernetes
Virtualization stabilizes and accelerates Kubernetes as well. Most managed Kubernetes services—such as the hyperscaler offerings (EKS on AWS, AKS on Azure, GKE on GCP)—run their Kubernetes nodes as virtual machines. Since Kubernetes environments are typically complex, virtualization greatly enhances isolation, security, and reliability, while easing operational overhead. A brief overview of these benefits follows:
- Isolation & Security – Without virtualization, all containers running on a Kubernetes cluster on a physical host share the same OS kernel. If a container escape occurs, everything on that physical host can potentially be compromised. The hypervisor adds an extra isolation boundary that helps prevent a bad actor from spreading to other Kubernetes nodes and containers.
- Reliability – Kubernetes can restart containers if they crash, but it is powerless if the underlying physical host fails. With virtualization, high‑availability features can restart the affected Kubernetes nodes, as VMs, on a different physical server.
- Operations – Without virtualization, a physical host typically runs a single Kubernetes cluster, locking the environment into one Kubernetes version and slowing velocity. Virtual machines allow multiple clusters, easier upgrades, and more flexible operations.
Bottom line: Every major managed Kubernetes service runs on virtual machines because virtualization provides the isolation, reliability, and operational flexibility required at enterprise scale.
Broadcom Provides the Best Platform for Workload Hosting
Since Kubernetes’ birth in 2015, VMware technology has been a top‑3 contributor to the upstream project. Several projects were invented by Broadcom and donated to the community, including:
- Harbor
- Antrea
- Contour
- Pinniped
Broadcom’s engineering teams remain committed to upstream Kubernetes and contribute to projects such as Harbor, Cluster API, and etcd.
With the release of VCF 9, Broadcom’s VMware Cloud Foundation (VCF) division brings the industry unified operations, shared infrastructure, and consistent tooling—agnostic of workload form factors. Customers can run VMs and containers/Kubernetes workloads on the same hardware, managed with the same tools that millions of practitioners have built their skills and careers on. Enterprises can:
- Cut down on capital and operating expenditures.
- Standardize their operating model.
- Modernize applications and infrastructure to move faster, secure data, and improve reliability of core systems.
Broadcom has been the gold standard for virtualization and VM workloads for 25+ years. Continuous innovation and contributions to the technology landscape keep customers partnering with us to run both their mission‑critical VM workloads and their container/Kubernetes workloads for the next 25 years.