How Much Production Can Fit Into a Home Lab?
Source: Dev.to
Originally published on Dev.to on March 7, 2026.
Introduction
My name is Patrick and I work as a Senior HPC DevOps and AWS Cloud Engineer. My day‑to‑day work revolves around building and operating infrastructure platforms that enable developers and researchers to run complex workloads.
In practice this means building Kubernetes platforms that power CI environments, GPU pipelines, and automated workflows. These systems integrate technologies such as:
- Kubernetes
- Argo CD
- Harbor registries
- Apache Airflow
- Autoscaling compute infrastructure with Karpenter
- Observability platforms based on the Grafana ecosystem
The common theme behind all of this work is automation. My personal engineering paradigm can be summed up in a single sentence:
Automation or nothing.
If a system cannot be rebuilt automatically, reproduced reliably, and operated without constant manual intervention, it is not finished. This philosophy did not emerge from theory—it emerged from frustration.
The Problem With Most Home Labs
Many engineers treat their home lab as a playground: a place where experiments happen quickly, configurations are tweaked manually, and problems are solved with one‑off fixes. I made the same mistake early in my career.
One of my first personal infrastructure projects was a small home cloud based on Docker Swarm. It ran OwnCloud for file storage and OpenVPN for remote access. In theory it was supposed to make my life easier by giving me control over my own data. In practice I spent more time fixing the system than using it.

Every time something broke, I had to rediscover how the system was configured. Containers had been changed manually, configuration files had diverged, and the environment slowly drifted away from anything reproducible. Instead of owning my infrastructure, my infrastructure owned me. That experience fundamentally changed how I approach systems today.
Infrastructure Should Be Self‑Sufficient
In production environments we do not accept fragile systems. Infrastructure must be:
- reproducible
- automated
- observable
- auditable
- scalable
When something fails, the goal is not to manually repair it; the goal is to rebuild it automatically. The same principle should apply to personal infrastructure. If my entire home lab disappeared tomorrow, recovery should be simple:
- Buy a new machine
- Install Docker
- Run a bootstrap script
From there the system should rebuild itself:
- The Kubernetes cluster starts automatically.
- Argo CD installs itself.
- Argo CD then reconciles its own configuration and deploys every application in the platform.
- All configuration lives in Git repositories.
No manual configuration.
No hidden state.
No mystery infrastructure.
Just code.
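The self-reconciling setup described above is often implemented with a "root" Argo CD Application that points at a Git repository containing Argo CD's own configuration plus an Application manifest for every other service. A minimal sketch of such a root Application follows; the repository URL and the `apps` path are assumptions for illustration, not the actual repository behind this project:

```yaml
# Hypothetical "root" Application: once applied to a fresh Argo CD install,
# Argo CD continuously syncs everything under apps/ from Git, including
# the manifests that configure Argo CD itself.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/homelab-gitops  # assumed GitOps repo
    targetRevision: main
    path: apps  # assumed directory of Application manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      selfHeal: true  # revert manual drift back to the Git state
      prune: true     # delete resources that are removed from Git
```

With `selfHeal` and `prune` enabled, Git becomes the single source of truth: manual changes in the cluster are reverted, and anything deleted from the repository is removed from the cluster.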
Why Kubernetes?
My passion lies in building systems that can operate independently once they are correctly designed. Technologies that enable this idea naturally fascinate me. Some of my favorites include:
- Kubernetes
- Argo CD and GitOps workflows
- Infrastructure as Code (IaC)
- Observability platforms built around the Grafana ecosystem
These tools allow complex distributed systems to behave predictably. One of the most satisfying moments in infrastructure engineering is watching a system configure itself. For example, you can install a vanilla Argo CD instance and then apply an Argo CD Application that manages Argo CD itself. Within minutes the platform begins mutating its own configuration and deploying new services automatically.
Observability adds another dimension to this experience. Dashboards, logs, metrics, and traces transform distributed systems into something understandable. Suddenly an entire platform becomes visible and measurable—you can see the system breathing.
The Goal of This Project
This blog documents an experiment:
How much production‑style infrastructure can fit inside a home lab?
The environment is intentionally constrained. The entire platform runs on a single machine:
- Mac Mini with an Apple M4 chip and 16 GB of memory
Instead of relying on cloud infrastructure, the Kubernetes cluster runs locally using kind (Kubernetes in Docker). Running infrastructure locally introduces interesting constraints. Large production systems often rely on separate control planes, distributed storage, advanced networking, and managed cloud services—none of these luxuries exist in a minimal home‑lab environment. This project attempts to push those limits.
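For readers unfamiliar with kind, a single-node cluster on a machine like this is typically described by a small config file. The sketch below is an assumed example (file name and port mappings are illustrative), showing how ports 80 and 443 can be forwarded from the host into the node so an ingress controller is reachable without cloud load balancers:

```yaml
# Assumed kind-config.yaml: one node serves as both control plane and worker.
# extraPortMappings expose the ingress ports on the host machine.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
        protocol: TCP
      - containerPort: 443
        hostPort: 443
        protocol: TCP
```

The cluster is then created with `kind create cluster --config kind-config.yaml`, after which it behaves like any other Kubernetes cluster from the perspective of kubectl and Argo CD.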
The goal is to implement as many production‑grade practices as possible, including:
- GitOps workflows
- Kubernetes platform automation
- CI/CD infrastructure
- Observability stacks
- Ingress and service exposure
- Infrastructure reproducibility
Whenever something cannot realistically be implemented in a home lab, I will explain how the same problem would typically be solved in a real production environment.
What You Can Expect From This Blog
Many technical blog posts show isolated configuration snippets and claim that a solution works, but they often omit the details of how the system is actually built, how it operates, and how it can be reproduced. This style of documentation is frustrating; it leaves readers with more questions than answers and creates a false impression that complex systems can be built with a few lines of code.
I strongly dislike that style. Instead, this project will publish everything required to reproduce the system:
- Complete Git repositories
- Helm values
- Kubernetes manifests
- Helper scripts
- Cluster bootstrap code
Readers should be able to rebuild the entire platform themselves. The intention is not only to demonstrate a working system, but also to document the reasoning behind architectural decisions, trade‑offs, and limitations.
The Question
How much Kubernetes can you squeeze into a single machine?
How close can a personal home lab get to a real production platform?
And how far can we push automation before a system truly begins to operate on its own?
This blog is the attempt to find out.