A simple load balancer from scratch written in Golang

Published: February 8, 2026 at 05:56 AM EST
6 min read
Source: Dev.to

Load‑Balancer Basics

Layer              Protocols     Description
L7 (Application)   HTTP / gRPC   Routes requests using application data such as paths and headers.
L4 (Transport)     TCP / UDP     Routes connections by IP address and port, without inspecting payloads.

Core responsibilities

  1. Accept an incoming request from a client.
  2. Forward the request to a backend server from its pool.
  3. Return the backend’s response to the client.
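
These three steps are essentially what Go's standard library already provides via httputil.ReverseProxy. A minimal sketch (the backend address and port are placeholders for illustration):

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// newProxy builds a handler that forwards every request to target.
func newProxy(target string) (http.Handler, error) {
	u, err := url.Parse(target)
	if err != nil {
		return nil, err
	}
	// NewSingleHostReverseProxy accepts the incoming request, forwards
	// it to u, and streams the backend's response back to the client.
	return httputil.NewSingleHostReverseProxy(u), nil
}

func main() {
	// Assumed backend address, for illustration only.
	proxy, err := newProxy("http://localhost:9001")
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```

A load balancer is this same idea with one extra twist: step 2 picks the target from a pool on every request instead of using a single fixed backend.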

Additional concerns (not covered in this simple project)

  • Synchronous vs. streaming responses
  • Backend‑selection algorithms
  • Error handling
  • Session affinity / sticky sessions
  • Monitoring & observability

Production‑Ready Alternatives

Solution / Service              Type / Role                     OSI Layer   Notes
NGINX                           Reverse proxy, web server, LB   L7          Path routing via location blocks.
Envoy                           L7 proxy / service proxy        L7          Core model: Listener → Route → Cluster.
AWS Application Load Balancer   Managed application LB          L7          Supports path- and host-based routing rules.
AWS Network Load Balancer       Managed network LB              L4          TCP/UDP only; no HTTP awareness.
GCP HTTP(S) Load Balancer       Managed global application LB   L7          Global host & path routing.
GCP TCP/UDP Load Balancer       Managed network LB              L4          No Layer-7 inspection.
Azure Application Gateway       Managed application LB          L7          Host/path routing + WAF integration.
Azure Load Balancer             Managed network LB              L4          Basic TCP/UDP distribution only.

Features of This Project

  • Traffic Proxying – HTTP request proxying with multiple load‑balancing strategies.
  • Load‑Balancing Strategies
    • Round Robin – Circular distribution.
    • Weighted Round Robin – Distribution based on server weight.
    • Least Connections – Sends to the server with the fewest active connections.
    • Random – Random selection.
  • Health Checks – Periodic TCP dial checks; unhealthy backends are temporarily removed.
  • Request Retry – On failure, the request is retried on a different backend.
  • Configuration – JSON file (config.json) or command‑line flags. Configurable items include:
    • Load balancer port
    • Request timeout
    • Health‑check interval
    • Load‑balancing strategy
    • Backend server list

The architecture is modular, testable, and easy to understand, following the Separation of Concerns principle. Each component has a single, well‑defined responsibility, making the code clean and mirroring patterns used in production systems.


High‑Level Flow

+-------------------+        +-------------------+        +-------------------+
|   LoadBalancer    |  -->   |   Strategy (SB)   |  -->   |   Backend (RB)    |
| (receives request)|        | (chooses backend) |        | (reverse proxy)   |
+-------------------+        +-------------------+        +-------------------+
  1. LoadBalancer receives a request.
  2. It asks the current Strategy to choose a backend from the pool.
  3. The LoadBalancer uses the selected Backend’s reverse proxy to forward the request.

Component Details

strategy Module – The Brains

The backend‑selection logic is defined via an interface – a classic Strategy Design Pattern.

type LoadBalancingStrategy interface {
    // SelectBackend returns the backend that should handle the request.
    SelectBackend(pool []*Backend) *Backend
}

The LoadBalancer holds a variable of this interface type, allowing any number of strategies (Round Robin, Least Connections, etc.) to be swapped in without touching the core load‑balancer code.
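
As an illustration, a Round Robin strategy satisfying this interface might look like the following sketch (the Backend type here is pared down to what the example needs, and is not the project's full struct):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Minimal stand-in for the project's Backend, for illustration only.
type Backend struct{ Addr string }

type LoadBalancingStrategy interface {
	// SelectBackend returns the backend that should handle the request.
	SelectBackend(pool []*Backend) *Backend
}

// RoundRobin cycles through the pool in order; the atomic counter
// keeps selection safe under concurrent requests.
type RoundRobin struct{ next uint64 }

func (r *RoundRobin) SelectBackend(pool []*Backend) *Backend {
	if len(pool) == 0 {
		return nil
	}
	n := atomic.AddUint64(&r.next, 1)
	return pool[(n-1)%uint64(len(pool))]
}

func main() {
	pool := []*Backend{{Addr: "a"}, {Addr: "b"}, {Addr: "c"}}
	var s LoadBalancingStrategy = &RoundRobin{}
	for i := 0; i < 4; i++ {
		fmt.Println(s.SelectBackend(pool).Addr) // a, b, c, a
	}
}
```

Adding a new algorithm is then a matter of writing another type with a SelectBackend method; nothing in the load balancer itself changes.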

backend Module – The Worker

A backend is a stateful object, not just a URL string.

type Backend struct {
    Url                url.URL
    proxy              *httputil.ReverseProxy
    isHealthy          bool
    activeConnections  int64
    // ... other fields ...
    mu                 sync.RWMutex
}
  • Tracks health, active connections, and weight.
  • Holds the httputil.ReverseProxy that performs request forwarding.
  • Uses a mutex to safely read/write state across concurrent requests and health checks.
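
The accessor pattern around that state might look like this sketch (method names are illustrative, not necessarily the project's):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// Illustrative subset of the Backend state described above.
type Backend struct {
	isHealthy         bool
	activeConnections int64
	mu                sync.RWMutex
}

// SetHealthy is called by the health checker; the write lock keeps it
// safe against concurrent readers on the request path.
func (b *Backend) SetHealthy(v bool) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.isHealthy = v
}

// IsHealthy is read on every request, so it takes only a read lock.
func (b *Backend) IsHealthy() bool {
	b.mu.RLock()
	defer b.mu.RUnlock()
	return b.isHealthy
}

// The connection counter uses atomics so the hot path avoids the mutex.
func (b *Backend) IncConnections() { atomic.AddInt64(&b.activeConnections, 1) }
func (b *Backend) DecConnections() { atomic.AddInt64(&b.activeConnections, -1) }
func (b *Backend) Connections() int64 {
	return atomic.LoadInt64(&b.activeConnections)
}

func main() {
	b := &Backend{}
	b.SetHealthy(true)
	b.IncConnections()
	fmt.Println(b.IsHealthy(), b.Connections()) // true 1
}
```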

loadbalancer Module – The Coordinator

Central component that ties everything together.

type LoadBalancer struct {
    pool                []*Backend
    strategy            LoadBalancingStrategy
    // ... other fields (e.g., health‑check ticker, retry config) ...
}

Responsibilities

  • Manage the pool of Backend objects.
  • Handle incoming HTTP traffic and use the current Strategy to pick a backend.
  • Perform periodic health checks on all backends.
  • Implement retry logic when a request to a chosen backend fails.

config Module – The Blueprint

A simple but important module responsible for loading configuration from a JSON file or command‑line flags. It populates fields such as:

  • Listening port
  • Request timeout
  • Health‑check interval
  • Selected load‑balancing strategy
  • Backend server definitions (URL, weight, etc.)
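
A config.json covering those fields might look like this (field names, ports, and weights are illustrative; the project's actual schema may differ):

```json
{
  "port": 8080,
  "request_timeout_seconds": 5,
  "health_check_interval_seconds": 10,
  "strategy": "round_robin",
  "backends": [
    { "url": "http://localhost:9001", "weight": 3 },
    { "url": "http://localhost:9002", "weight": 1 }
  ]
}
```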

Closing Thoughts

By isolating logic—e.g., separating backend selection from request proxying—we can unit‑test each part independently and add new features with minimal side effects. This project serves as a solid learning exercise and a foundation for building more sophisticated, production‑grade load balancers.

Loading and Parsing Configuration

The application loads its configuration from a config.json file. This decouples the load balancer’s logic from hard‑coded settings, allowing you to reconfigure it without changing the code.

What We Built

In this post we walked through the process of building a simple yet functional HTTP load balancer from scratch in Go. We implemented several core features, including:

  • Multiple load‑balancing strategies
  • Periodic health checks
  • Dynamic configuration

The result is a working application that demonstrates the fundamental principles of traffic management.

Design Patterns in Action

More importantly, this project was a practical exploration of key software design patterns. By using an interface for our balancing algorithms (the Strategy Pattern), we created a system that is:

  • Flexible
  • Easy to extend

The emphasis on modularity and separation of concerns produced a codebase that is clean, testable, and easier to reason about.

Future Improvements

While our load balancer is simple, it provides a solid foundation that could be extended in many ways. Possible enhancements include:

  • More Advanced Strategies – Implement IP hashing for session affinity (sticky sessions).
  • Enhanced Observability – Add Prometheus metrics for monitoring request latency, error rates, and active connections.
  • HTTPS Support – Add TLS termination for secure communication.
  • Dynamic Configuration – Implement hot‑reloading of the configuration file without restarting the service.

Closing Thoughts

I hope this article has given you an insightful look into the internals of a load balancer and inspired you to build your own. Feel free to explore the complete source code on GitHub, try it out for yourself, and even contribute your own ideas!
