# A simple load balancer from scratch written in Go
Source: Dev.to
## Load‑Balancer Basics
| Layer | Type | Description |
|---|---|---|
| L7 (Application) | HTTP / gRPC | Operates at the application layer. |
| L4 (Transport) | TCP / UDP | Operates at the transport layer. |
### Core responsibilities
- Accept an incoming request from a client.
- Forward the request to a backend server from its pool.
- Return the backend’s response to the client.
### Additional concerns (not covered in this simple project)
- Synchronous vs. streaming responses
- Backend‑selection algorithms
- Error handling
- Session affinity / sticky sessions
- Monitoring & observability
## Production‑Ready Alternatives
| Solution / Service | Type / Role | OSI Layer | Notes |
|---|---|---|---|
| NGINX | Reverse proxy, web server, LB | L7 | Path routing via location blocks. |
| Envoy | L7 proxy / service proxy | L7 | Core model: Listener → Route → Cluster. |
| AWS Application Load Balancer | Managed application LB | L7 | Supports path and host‑based routing rules. |
| AWS Network Load Balancer | Managed network LB | L4 | TCP/UDP only; no HTTP awareness. |
| GCP HTTP(S) Load Balancer | Managed global application LB | L7 | Global host & path routing. |
| GCP TCP/UDP Load Balancer | Managed network LB | L4 | No Layer‑7 inspection. |
| Azure Application Gateway | Managed application LB | L7 | Host/path routing + WAF integration. |
| Azure Load Balancer | Managed network LB | L4 | Basic TCP/UDP distribution only. |
## Features of This Project
- Traffic Proxying – HTTP request proxying with multiple load‑balancing strategies.
- Load‑Balancing Strategies
- Round Robin – Circular distribution.
- Weighted Round Robin – Distribution based on server weight.
- Least Connections – Sends to the server with the fewest active connections.
- Random – Random selection.
- Health Checks – Periodic TCP dial checks; unhealthy backends are temporarily removed.
- Request Retry – On failure, the request is retried on a different backend.
- Configuration – JSON file (`config.json`) or command‑line flags. Configurable items include:
  - Load balancer port
  - Request timeout
  - Health‑check interval
  - Load‑balancing strategy
  - Backend server list
The architecture is modular, testable, and easy to understand, following the Separation of Concerns principle. Each component has a single, well‑defined responsibility, making the code clean and mirroring patterns used in production systems.
## High‑Level Flow
```
+-------------------+      +-------------------+      +-------------------+
|   LoadBalancer    | -->  |  Strategy (SB)    | -->  |   Backend (RB)    |
| (receives request)|      | (chooses backend) |      |  (reverse proxy)  |
+-------------------+      +-------------------+      +-------------------+
```
- LoadBalancer receives a request.
- It asks the current Strategy to choose a backend from the pool.
- The LoadBalancer uses the selected Backend’s reverse proxy to forward the request.
## Component Details

### strategy Module – The Brains
The backend‑selection logic is defined via an interface – a classic Strategy Design Pattern.
```go
type LoadBalancingStrategy interface {
	// SelectBackend returns the backend that should handle the request.
	SelectBackend(pool []*Backend) *Backend
}
```
The LoadBalancer holds a variable of this interface type, allowing any number of strategies (Round Robin, Least Connections, etc.) to be swapped in without touching the core load‑balancer code.
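As an illustration, a Round Robin strategy satisfying this interface could look like the following sketch (the `Backend` type is trimmed to what the example needs):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Backend is trimmed to a URL string for this sketch.
type Backend struct{ URL string }

type LoadBalancingStrategy interface {
	SelectBackend(pool []*Backend) *Backend
}

// RoundRobin cycles through the pool; the atomic counter keeps
// selection safe under concurrent requests.
type RoundRobin struct{ next uint64 }

func (r *RoundRobin) SelectBackend(pool []*Backend) *Backend {
	if len(pool) == 0 {
		return nil
	}
	n := atomic.AddUint64(&r.next, 1)
	return pool[(n-1)%uint64(len(pool))]
}

func main() {
	pool := []*Backend{{URL: "a"}, {URL: "b"}, {URL: "c"}}
	var s LoadBalancingStrategy = &RoundRobin{}
	for i := 0; i < 4; i++ {
		fmt.Println(s.SelectBackend(pool).URL) // a, b, c, a
	}
}
```

Swapping in Least Connections or Random means writing another type with a `SelectBackend` method; nothing else changes.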
### backend Module – The Worker
A backend is a stateful object, not just a URL string.
```go
type Backend struct {
	URL               url.URL
	proxy             *httputil.ReverseProxy
	isHealthy         bool
	activeConnections int64
	// ... other fields ...
	mu sync.RWMutex
}
```
- Tracks health, active connections, and weight.
- Holds the `httputil.ReverseProxy` that performs request forwarding.
- Uses a mutex to safely read/write state across concurrent requests and health checks.
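The mutex-guarded state access might look like this sketch; the method names are illustrative, not taken from the project:

```go
package main

import (
	"fmt"
	"sync"
)

// Backend is trimmed to the fields this sketch needs.
type Backend struct {
	mu        sync.RWMutex
	isHealthy bool
}

// SetHealthy is called by the health checker; the write lock keeps
// it safe against concurrent readers on the request path.
func (b *Backend) SetHealthy(h bool) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.isHealthy = h
}

// IsHealthy takes only the read lock, so many in-flight requests
// can check health concurrently.
func (b *Backend) IsHealthy() bool {
	b.mu.RLock()
	defer b.mu.RUnlock()
	return b.isHealthy
}

func main() {
	b := &Backend{}
	b.SetHealthy(true)
	fmt.Println(b.IsHealthy()) // true
}
```

An `RWMutex` fits here because health is read on every request but written only on the health-check interval.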
### loadbalancer Module – The Coordinator
Central component that ties everything together.
```go
type LoadBalancer struct {
	pool     []*Backend
	strategy LoadBalancingStrategy
	// ... other fields (e.g., health‑check ticker, retry config) ...
}
```
Responsibilities:
- Manage the pool of `Backend` objects.
- Handle incoming HTTP traffic and use the current `Strategy` to pick a backend.
- Perform periodic health checks on all backends.
- Implement retry logic when a request to a chosen backend fails.
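The retry responsibility can be sketched as a loop over candidate backends. In the real load balancer the next candidate would come from the `Strategy`, and `forward` here is a hypothetical stand-in for the proxy call:

```go
package main

import (
	"errors"
	"fmt"
)

type Backend struct{ URL string }

// forward is a stand-in for the real proxying call.
func forward(b *Backend) error {
	if b.URL == "http://bad" {
		return errors.New("backend down")
	}
	return nil
}

// serveWithRetry tries up to maxRetries backends from the pool,
// moving on to the next one when a forward attempt fails.
func serveWithRetry(pool []*Backend, maxRetries int) (*Backend, error) {
	var lastErr error
	for i := 0; i < maxRetries && i < len(pool); i++ {
		b := pool[i]
		if err := forward(b); err != nil {
			lastErr = err
			continue
		}
		return b, nil
	}
	return nil, fmt.Errorf("all retries failed: %w", lastErr)
}

func main() {
	pool := []*Backend{{URL: "http://bad"}, {URL: "http://good"}}
	b, err := serveWithRetry(pool, 2)
	fmt.Println(b.URL, err) // http://good <nil>
}
```

Bounding the loop with `maxRetries` prevents a request from cycling through a pool where every backend is down.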
### config Module – The Blueprint
A simple but important module responsible for loading configuration from a JSON file or command‑line flags. It populates fields such as:
- Listening port
- Request timeout
- Health‑check interval
- Selected load‑balancing strategy
- Backend server definitions (URL, weight, etc.)
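A plausible `config.json` covering these fields might look like the following; the exact key names are assumptions, not taken from the project:

```json
{
  "port": 8080,
  "request_timeout_seconds": 5,
  "health_check_interval_seconds": 10,
  "strategy": "round_robin",
  "backends": [
    { "url": "http://localhost:9001", "weight": 1 },
    { "url": "http://localhost:9002", "weight": 2 }
  ]
}
```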
## Why This Design Matters
By isolating logic—e.g., separating backend selection from request proxying—we can unit‑test each part independently and add new features with minimal side effects. This project serves as a solid learning exercise and a foundation for building more sophisticated, production‑grade load balancers.
## Loading and Parsing Configuration
The application loads its configuration from a config.json file. This decouples the load balancer’s logic from hard‑coded settings, allowing you to reconfigure it without changing the code.
## What We Built
In this post we walked through the process of building a simple yet functional HTTP load balancer from scratch in Go. We implemented several core features, including:
- Multiple load‑balancing strategies
- Periodic health checks
- Dynamic configuration
The result is a working application that demonstrates the fundamental principles of traffic management.
## Design Patterns in Action
More importantly, this project was a practical exploration of key software design patterns. By using an interface for our balancing algorithms (the Strategy Pattern), we created a system that is:
- Flexible
- Easy to extend
The emphasis on modularity and separation of concerns produced a codebase that is clean, testable, and easier to reason about.
## Future Improvements
While our load balancer is simple, it provides a solid foundation that could be extended in many ways. Possible enhancements include:
- More Advanced Strategies – Implement IP hashing for session affinity (sticky sessions).
- Enhanced Observability – Add Prometheus metrics for monitoring request latency, error rates, and active connections.
- HTTPS Support – Add TLS termination for secure communication.
- Dynamic Configuration – Implement hot‑reloading of the configuration file without restarting the service.
## Closing Thoughts
I hope this article has given you an insightful look into the internals of a load balancer and inspired you to build your own. Feel free to explore the complete source code on GitHub, try it out for yourself, and even contribute your own ideas!