Build a Blue/Green Deployment with Nginx Auto-Failover
Source: Dev.to
Introduction
Blue/Green deployment lets you run two identical instances of an application (Blue and Green) and switch traffic instantly from the active instance to the standby when a problem occurs. In this guide we build a complete, container‑based Blue/Green setup using Nginx as the traffic director, a tiny Node.js service for the two pools, and an optional Python watcher that reads Nginx’s JSON logs and posts alerts to Slack.
Prerequisites
| Item | Reason |
|---|---|
| Docker + Docker Compose | Run the services locally without Kubernetes |
| Node.js (optional) | Run the demo service locally, outside Docker |
| (Optional) Slack webhook URL | Receive failover alerts |
| A terminal and a text editor | Create and edit the files |
Project structure
.
├─ app/
│ ├─ package.json
│ ├─ app.js
│ └─ Dockerfile
├─ nginx/
│ └─ nginx.conf.template
├─ watcher/
│ ├─ requirements.txt
│ ├─ watcher.py
│ └─ Dockerfile
├─ docker-compose.yaml
└─ .env
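If you're following along, create the skeleton first; the nginx/log directory backs the bind mount through which Nginx shares its access logs with the watcher:
mkdir -p app nginx/log watcher
touch app/package.json app/app.js app/Dockerfile nginx/nginx.conf.template watcher/requirements.txt watcher/watcher.py watcher/Dockerfile docker-compose.yaml .env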
1. Node.js application
package.json
{
  "name": "blue-green-app",
  "version": "1.0.0",
  "main": "app.js",
  "license": "MIT",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.18.2"
  }
}
app.js
const express = require('express');
const app = express();

const APP_POOL = process.env.APP_POOL || 'unknown';
const RELEASE_ID = process.env.RELEASE_ID || 'unknown';
const PORT = process.env.PORT || 3000;

let chaosMode = false;
let chaosType = 'error'; // 'error' or 'timeout'

// Add tracing headers
app.use((req, res, next) => {
  res.setHeader('X-App-Pool', APP_POOL);
  res.setHeader('X-Release-Id', RELEASE_ID);
  next();
});

app.get('/', (req, res) => {
  res.json({
    service: 'Blue/Green Demo',
    pool: APP_POOL,
    releaseId: RELEASE_ID,
    status: chaosMode ? 'chaos' : 'healthy',
    chaosMode,
    chaosType: chaosMode ? chaosType : null,
    timestamp: new Date().toISOString(),
    endpoints: {
      version: '/version',
      health: '/healthz',
      chaos: '/chaos/start, /chaos/stop'
    }
  });
});

app.get('/healthz', (req, res) => {
  res.status(200).json({ status: 'healthy', pool: APP_POOL });
});

app.get('/version', (req, res) => {
  if (chaosMode && chaosType === 'error')
    return res.status(500).json({ error: 'Chaos: server error' });
  if (chaosMode && chaosType === 'timeout')
    return; // simulate hang
  res.json({
    version: '1.0.0',
    pool: APP_POOL,
    releaseId: RELEASE_ID,
    timestamp: new Date().toISOString()
  });
});

app.post('/chaos/start', (req, res) => {
  const mode = req.query.mode || 'error';
  chaosMode = true;
  chaosType = mode;
  res.json({ message: 'Chaos started', mode, pool: APP_POOL });
});

app.post('/chaos/stop', (req, res) => {
  chaosMode = false;
  chaosType = 'error';
  res.json({ message: 'Chaos stopped', pool: APP_POOL });
});

app.listen(PORT, '0.0.0.0', () => {
  console.log(`App (${APP_POOL}) listening on ${PORT}`);
  console.log(`Release ID: ${RELEASE_ID}`);
});
The service exposes:
- GET /healthz – health check for Nginx
- GET /version – returns version info; can be forced to error or timeout via chaos mode
- POST /chaos/start?mode=error|timeout – enable failure simulation
- POST /chaos/stop – disable chaos
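You can smoke-test the service before wiring it into the stack, assuming a local Node.js install:
cd app
npm install
APP_POOL=blue RELEASE_ID=dev node app.js &
curl -i http://localhost:3000/version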
2. Docker image for both pools
Dockerfile
FROM node:18-alpine
WORKDIR /app
# Install production dependencies
COPY package*.json ./
RUN npm install --only=production
# Copy source code
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Both Blue and Green containers are built from this image; they differ only by environment variables (APP_POOL, RELEASE_ID, etc.).
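To see that the two pools really are one image with different environment, you can build and run it by hand (the tag blue-green-app and port 3001 are just example values):
docker build -t blue-green-app ./app
docker run --rm -d -e APP_POOL=green -e RELEASE_ID=manual-test -p 3001:3000 blue-green-app
curl -s http://localhost:3001/   # response includes "pool":"green"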
3. Nginx traffic director
nginx/nginx.conf.template
events {
    worker_connections 1024;
}

http {
    # Structured JSON access logs
    log_format custom_json '{"time":"$time_iso8601"'
                           ',"remote_addr":"$remote_addr"'
                           ',"method":"$request_method"'
                           ',"uri":"$request_uri"'
                           ',"status":$status'
                           ',"bytes_sent":$bytes_sent'
                           ',"request_time":$request_time'
                           ',"upstream_response_time":"$upstream_response_time"'
                           ',"upstream_status":"$upstream_status"'
                           ',"upstream_addr":"$upstream_addr"'
                           ',"pool":"$sent_http_x_app_pool"'
                           ',"release":"$sent_http_x_release_id"}';

    upstream blue_pool {
        server app-blue:3000 max_fails=1 fail_timeout=3s;
        server app-green:3000 backup;
    }

    upstream green_pool {
        server app-green:3000 max_fails=1 fail_timeout=3s;
        server app-blue:3000 backup;
    }

    server {
        listen 80;
        server_name localhost;

        # JSON access log (shared volume)
        access_log /var/log/nginx/access.json custom_json;

        # Simple health endpoint for the load balancer itself
        location /healthz {
            access_log off;
            return 200 "healthy\n";
            add_header Content-Type text/plain;
        }

        location / {
            # $UPSTREAM_POOL is rendered by envsubst when the container starts
            proxy_pass http://$UPSTREAM_POOL;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            # Fast timeouts → quick failover
            proxy_connect_timeout 1s;
            proxy_send_timeout 3s;
            proxy_read_timeout 3s;

            # Retry on errors / timeouts, try backup upstream
            proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
            proxy_next_upstream_tries 2;
            proxy_next_upstream_timeout 10s;

            proxy_pass_request_headers on;
            proxy_hide_header X-Powered-By;
        }
    }
}
Key settings
- max_fails=1 fail_timeout=3s – a single failure marks the upstream as down for a short period.
- Short proxy_*_timeout values keep the client from waiting long when the primary pool misbehaves.
- proxy_next_upstream with retries automatically routes the request to the backup pool.
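A simple way to observe these settings in action is to poll the proxied endpoint and watch the status line and X-App-Pool header flip during a failover (assuming the stack from section 6 is running on port 8080):
while sleep 1; do
  curl -si http://localhost:8080/version | grep -E '^HTTP/|^X-App-Pool'
done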
4. Optional Slack watcher
watcher/requirements.txt
requests==2.32.3
watcher/watcher.py
import json, os, time, requests
from collections import deque
from datetime import datetime, timezone

LOG_PATH = os.getenv("NGINX_LOG_FILE", "/var/log/nginx/access.json")
SLACK_WEBHOOK_URL = os.getenv("SLACK_WEBHOOK_URL", "")
SLACK_PREFIX = os.getenv("SLACK_PREFIX", "from: @Watcher")
ACTIVE_POOL = os.getenv("ACTIVE_POOL", "blue")
ERROR_RATE_THRESHOLD = float(os.getenv("ERROR_RATE_THRESHOLD", "2"))  # percent
WINDOW_SIZE = int(os.getenv("WINDOW_SIZE", "200"))
ALERT_COOLDOWN_SEC = int(os.getenv("ALERT_COOLDOWN_SEC", "300"))
MAINTENANCE_MODE = os.getenv("MAINTENANCE_MODE", "false").lower() == "true"

def now_iso():
    return datetime.now(timezone.utc).isoformat()

def post_to_slack(message):
    if not SLACK_WEBHOOK_URL:
        return
    payload = {"text": f"{SLACK_PREFIX} {message}"}
    try:
        requests.post(SLACK_WEBHOOK_URL, json=payload, timeout=5)
    except Exception as e:
        print(f"Slack post failed: {e}")

def parse_log_line(line):
    try:
        return json.loads(line)
    except json.JSONDecodeError:
        return None

def main():
    recent = deque(maxlen=WINDOW_SIZE)
    last_alert = 0

    def maybe_alert(message):
        # Honour maintenance mode and the cooldown between alerts
        nonlocal last_alert
        if MAINTENANCE_MODE:
            return
        now = time.time()
        if now - last_alert > ALERT_COOLDOWN_SEC:
            post_to_slack(message)
            print(now_iso(), message)
            last_alert = now

    while True:
        try:
            with open(LOG_PATH, "r") as f:
                # Seek to end and read new lines
                f.seek(0, os.SEEK_END)
                while True:
                    line = f.readline()
                    if not line:
                        time.sleep(0.5)
                        continue
                    entry = parse_log_line(line.strip())
                    if not entry:
                        continue
                    recent.append(entry)
                    # Detect failover: pool header changed from ACTIVE_POOL
                    if entry.get("pool") and entry["pool"] != ACTIVE_POOL:
                        maybe_alert(f"Failover detected! Traffic switched from {ACTIVE_POOL} to {entry['pool']}")
                    # Error-rate check over the sliding window
                    errors = sum(1 for e in recent if int(e.get("status", 0)) >= 500)
                    rate = 100.0 * errors / len(recent)
                    if rate > ERROR_RATE_THRESHOLD:
                        maybe_alert(f"High error rate: {rate:.1f}% over the last {len(recent)} requests")
        except FileNotFoundError:
            time.sleep(1)
        except Exception as e:
            print(f"Watcher error: {e}")
            time.sleep(2)

if __name__ == "__main__":
    main()
The watcher tails the JSON log, calculates a simple error‑rate window, and posts a Slack message when it sees traffic move to the non‑active pool or when error rates exceed the configured threshold.
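Docker Compose (section 6) builds this service from ./watcher, so the directory needs its own Dockerfile; a minimal sketch:
watcher/Dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY watcher.py .
# -u disables output buffering so alerts show up immediately in docker compose logs
CMD ["python", "-u", "watcher.py"]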
5. Environment variables (.env)
# Choose which pool is primary (blue or green)
ACTIVE_POOL=blue
# Labels for the two app containers
APP_BLUE_POOL=blue
APP_GREEN_POOL=green
# Release identifiers (optional, useful for tracing)
RELEASE_ID_BLUE=2025-12-09-blue
RELEASE_ID_GREEN=2025-12-09-green
# Nginx upstream selector – will be substituted in the template
UPSTREAM_POOL=${ACTIVE_POOL}_pool
# Watcher settings (adjust as needed)
ERROR_RATE_THRESHOLD=2
WINDOW_SIZE=200
ALERT_COOLDOWN_SEC=300
# Slack webhook (leave empty to disable alerts)
SLACK_WEBHOOK_URL=
When ACTIVE_POOL=blue, the Nginx template resolves UPSTREAM_POOL to blue_pool, making the Blue service the primary upstream.
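Docker Compose v2 expands ${ACTIVE_POOL} inside the .env file itself (older Compose releases treated .env values as literals, so double-check if the substitution doesn't happen). You can render the resolved configuration to confirm:
docker compose --env-file .env config | grep UPSTREAM_POOL
# → UPSTREAM_POOL: blue_pool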
6. Docker Compose file
version: "3.9"
services:
app-blue:
build: ./app
environment:
- APP_POOL=${APP_BLUE_POOL}
- RELEASE_ID=${RELEASE_ID_BLUE}
ports: [] # not exposed directly
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/healthz"]
interval: 5s
timeout: 2s
retries: 2
app-green:
build: ./app
environment:
- APP_POOL=${APP_GREEN_POOL}
- RELEASE_ID=${RELEASE_ID_GREEN}
ports: [] # not exposed directly
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3000/healthz"]
interval: 5s
timeout: 2s
retries: 2
nginx:
image: nginx:1.25-alpine
depends_on:
- app-blue
- app-green
ports:
- "8080:80"
volumes:
- ./nginx/nginx.conf.template:/etc/nginx/nginx.conf.template:ro
- ./nginx/log:/var/log/nginx
environment:
- ACTIVE_POOL=${ACTIVE_POOL}
- UPSTREAM_POOL=${UPSTREAM_POOL}
command: /bin/sh -c "envsubst '\$UPSTREAM_POOL' /etc/nginx/nginx.conf && nginx -g 'daemon off;'"
watcher:
build:
context: ./watcher
depends_on:
- nginx
volumes:
- ./nginx/log:/var/log/nginx
environment:
- ACTIVE_POOL=${ACTIVE_POOL}
- SLACK_WEBHOOK_URL=${SLACK_WEBHOOK_URL}
- ERROR_RATE_THRESHOLD=${ERROR_RATE_THRESHOLD}
- WINDOW_SIZE=${WINDOW_SIZE}
- ALERT_COOLDOWN_SEC=${ALERT_COOLDOWN_SEC}
# Remove this service if you don't need Slack alerts
The nginx service uses envsubst to render the template into /etc/nginx/nginx.conf before starting. Two details matter: the $$ escapes the dollar sign so Compose itself doesn't interpolate it, and the '$UPSTREAM_POOL' argument restricts substitution to that single variable, leaving Nginx runtime variables such as $host and $remote_addr intact.
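You can preview the rendered configuration on the host (assuming envsubst, part of GNU gettext, is installed locally):
UPSTREAM_POOL=blue_pool envsubst '$UPSTREAM_POOL' < nginx/nginx.conf.template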
7. Running the demo
# Start everything
docker compose --env-file .env up -d
# Verify Nginx health endpoint
curl http://localhost:8080/healthz
# → should return "healthy"
# Call the application through the load balancer
curl http://localhost:8080/
You should see a JSON response whose pool field is blue by default; the same information travels in the X-App-Pool and X-Release-Id response headers.
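To inspect the tracing headers themselves (the values depend on your .env):
curl -si http://localhost:8080/ | grep -E '^X-App-Pool|^X-Release-Id'
# X-App-Pool: blue
# X-Release-Id: 2025-12-09-blue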
Simulating a failure
# Put the Blue app into chaos mode (force 500 errors)
curl -X POST "http://localhost:8080/chaos/start?mode=error"
# Or simulate a timeout
curl -X POST "http://localhost:8080/chaos/start?mode=timeout"
Chaos mode only breaks /version, so request that endpoint to trigger the failover: the failed request is retried against the backup via proxy_next_upstream, and the failure also marks Blue as down (max_fails=1), so subsequent requests on any path are served by the Green pool for the fail_timeout window. The watcher (if enabled) will post a Slack alert about the failover.
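A few repeated calls make the switch easy to see:
for i in 1 2 3 4 5; do
  curl -s http://localhost:8080/version; echo
done
# The first call may already show the retried Green response;
# subsequent calls show "pool":"green" until Blue recovers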
To stop chaos (this request also passes through Nginx, so while Blue is marked down it may land on Green, where it is a harmless no-op; retry once the fail_timeout window expires):
curl -X POST "http://localhost:8080/chaos/stop"
8. Switching the active pool
If you want to make Green the primary without causing a failover, edit .env:
ACTIVE_POOL=green
Then restart Nginx (or the whole stack) so the template is regenerated:
docker compose up -d --no-deps --force-recreate nginx
Now new traffic will be directed to Green first, while Blue remains on standby.
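Verify the switch took effect:
curl -s http://localhost:8080/ | grep -o '"pool":"[a-z]*"'
# → "pool":"green"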
9. Cleaning up
docker compose down -v
The -v flag removes any volumes created by the stack. Note that the Nginx logs live in the ./nginx/log bind mount on the host, which docker compose down does not touch; delete the directory manually for a truly clean slate:
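rm -rf nginx/log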
10. What you’ve learned
- Blue/Green pattern with pure Docker Compose – no Kubernetes needed.
- Nginx upstream configuration with max_fails, fail_timeout, and proxy_next_upstream for instant failover.
- Structured JSON access logs that expose the upstream pool via custom headers.
- A minimal chaos interface to test resiliency.
- Optional watcher that turns log events into Slack alerts.
Feel free to adapt the pattern to other languages, add TLS termination, or integrate with a CI/CD pipeline for automated releases. Happy deploying!