Stop Configuring Nginx: The Easiest Way to Deploy Go & React with HTTPS
Source: Dev.to

The “It Works on My Machine” Trap
We have all been there. You spend weeks building a robust application. Your Go backend is blazing fast, your React frontend is snappy, and everything runs perfectly on localhost:8080.
But then comes the deployment phase. Suddenly, you are dealing with VPS configuration, SSL certificates, Nginx config files that look like hieroglyphics, and the dreaded CORS errors.
I recently built Geo Engine, a geospatial backend service using Go and PostGIS. I wanted to deploy it to a DigitalOcean Droplet with a custom domain and HTTPS, but I didn’t want to spend hours configuring Certbot or managing complex Nginx directives.
Here is how I solved it using Docker Compose and Caddy (the web server that saves your sanity).
The Architecture 🏗️
My goal was to have a professional production environment:
- Frontend: A React Dashboard (Vite) on app.geoengine.dev.
- Backend: A Go API (Chi Router + PostGIS) on api.geoengine.dev.
- Security: Automatic HTTPS for both subdomains.
- Infrastructure: Everything containerized with Docker.
Instead of exposing ports 8080 and 5173 to the wild, I used Caddy as the entry point. Caddy acts as a reverse proxy, handling SSL certificate generation and renewal automatically.
The “Magic” Caddyfile ✨
If you have ever struggled with an nginx.conf file, you are going to love this. This is literally all the configuration I needed to get HTTPS working for two subdomains:
```
# The Dashboard (Frontend)
app.geoengine.dev {
    reverse_proxy dashboard:80
}

# The API (Backend)
api.geoengine.dev {
    reverse_proxy api:8080
}
```
Caddy detects the domain, talks to Let’s Encrypt, gets the certificates, and routes the traffic. No cron jobs, no manual renewals.
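One optional nicety: you can hand Caddy a contact email for its ACME account in a global options block at the top of the Caddyfile. Caddy works fine without it, but Let's Encrypt uses the address for expiry warnings (the address below is a placeholder):

```
{
    email admin@geoengine.dev
}
```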
The Docker Setup 🐳
Here is the secret sauce in my docker-compose.yml. Notice how the services don’t expose ports to the host machine (except Caddy); they only talk inside the geo-net network.
```yaml
services:
  # Caddy: The only service exposed to the world
  caddy:
    image: caddy:2-alpine
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
    networks:
      - geo-net
    depends_on:
      - dashboard
      - api

  # Backend API
  api:
    build: ./backend
    expose:
      - "8080" # Only visible to Caddy, not the internet
    environment:
      - ALLOWED_ORIGINS=https://app.geoengine.dev
    networks:
      - geo-net

  # Database
  db:
    image: postgres:15-alpine
    # ... config ...
    networks:
      - geo-net

networks:
  geo-net:
    driver: bridge

volumes:
  caddy_data: # named volume so certificates survive container restarts
```
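The dashboard service itself isn't shown above. A plausible sketch of what it could look like (the build path and Dockerfile details are assumptions on my part) that lines up with reverse_proxy dashboard:80 in the Caddyfile:

```yaml
services:
  # Frontend dashboard (hypothetical definition)
  dashboard:
    build: ./dashboard  # assumed path; a multi-stage Dockerfile serving the Vite build on port 80
    expose:
      - "80"            # matches "reverse_proxy dashboard:80" in the Caddyfile
    networks:
      - geo-net
```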
The Challenges (Where I Got Stuck) 🚧
It wasn’t all smooth sailing. Here are two “gotchas” that cost me a few hours of debugging, so you don’t have to suffer:
1. The “Orphan” Migration Container
I use a separate container to run database migrations (golang-migrate). It kept crashing with a connection error.
The Fix: Even utility containers need to be on the same Docker network! I had forgotten to add geo-net to my migration service's networks list, so it couldn't "see" the database hostname.
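For reference, here is a sketch of what the fixed migration service might look like. The migrate/migrate image is the official golang-migrate container; the migrations path and database credentials below are hypothetical:

```yaml
  migrate:
    image: migrate/migrate
    volumes:
      - ./migrations:/migrations
    command: ["-path", "/migrations", "-database", "postgres://geo:secret@db:5432/geodb?sslmode=disable", "up"]
    depends_on:
      - db
    networks:
      - geo-net   # the line I had forgotten; without it, the hostname "db" doesn't resolve
```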
2. The CORS Villain 💀
On localhost, allowing * (wildcard) for CORS usually works. But once I moved to production with HTTPS, my frontend requests started failing: when a request carries credentials (cookies or auth headers), browsers refuse a wildcard Access-Control-Allow-Origin. I had to stop being lazy and specify the exact origin in my Go code using the rs/cors library.
In Go:
```go
// Don't do this in production:
// AllowedOrigins: []string{"*"}, // ❌

// Do this instead:
AllowedOrigins: []string{"https://app.geoengine.dev"}, // ✅
```
Once the allowed origin matched my frontend exactly, the browser was satisfied and the requests went through.
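In practice rs/cors does this matching for you, but the rule is easy to see in a minimal stdlib-only sketch (an illustration, not the actual Geo Engine middleware): the server echoes the request's Origin header back only on an exact match, which is the only form browsers accept alongside Allow-Credentials.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// allowedOrigins is the exact-match whitelist. A "*" entry would not help
// here: browsers reject a wildcard Access-Control-Allow-Origin whenever
// the response also sets Allow-Credentials.
var allowedOrigins = map[string]bool{
	"https://app.geoengine.dev": true,
}

// corsHeaderFor returns the value to echo back in
// Access-Control-Allow-Origin, and whether the origin is allowed at all.
func corsHeaderFor(origin string) (string, bool) {
	if allowedOrigins[origin] {
		return origin, true
	}
	return "", false
}

// withCORS wraps a handler and sets the CORS headers on an exact match.
func withCORS(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if origin, ok := corsHeaderFor(r.Header.Get("Origin")); ok {
			w.Header().Set("Access-Control-Allow-Origin", origin)
			w.Header().Set("Access-Control-Allow-Credentials", "true")
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	h := withCORS(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	}))

	// Simulate a browser request coming from the dashboard.
	req := httptest.NewRequest("GET", "/points", nil)
	req.Header.Set("Origin", "https://app.geoengine.dev")
	rec := httptest.NewRecorder()
	h.ServeHTTP(rec, req)
	fmt.Println(rec.Header().Get("Access-Control-Allow-Origin"))
	// → https://app.geoengine.dev
}
```

An unlisted origin simply gets no CORS headers at all, which is safer than answering with an error header the browser might still cache.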
The Result
After pushing the changes, I ran docker compose up -d. In about 30 seconds, Caddy had secured my site.
You can check out the live demo here: https://app.geoengine.dev
Or explore the code on GitHub: Geo Engine Core
If you are deploying a side project, give Caddy a try. It feels like cheating, but in the best way possible.
Happy coding!