# Setting Up an App Hub with Nginx Reverse Proxy on Our Internal Dashboard
Source: Dev.to
## Background
Our team runs a Flask‑based SPA as an internal dashboard. As features grew, we needed a quick way to jump between services, so we created the App Hub panel.
Changes made this time:
- Removed 3 retired services
- Added 3 new services (TechsFree Shop, TechsFree ERP, Accounting AI)
- Updated URLs for services that moved
## Problem: Host Header with IP-Based Access
When referencing services on another internal server by IP address in App Hub links, Nginx wasn’t routing to the right vhost.
The Nginx config assumes subdomain‑based routing (techsfree.com, blog.techsfree.com, etc.). When accessing by raw IP, the Host header becomes the IP itself, which doesn’t match any server_name.
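The fallthrough behavior can be sketched in a few lines of Python. This is an illustrative model of how a Host header is matched against server_name entries, not nginx source; the domains mirror the article's setup and the IP is hypothetical.

```python
# Sketch: how nginx picks a vhost from the Host header.
# A Host that matches no server_name falls through to default_server.

def pick_vhost(host: str, server_names: dict, default: str) -> str:
    """Return the vhost for a request's Host header."""
    return server_names.get(host, default)

vhosts = {
    "techsfree.com": "main-site",
    "blog.techsfree.com": "blog",
}

print(pick_vhost("blog.techsfree.com", vhosts, "default"))  # blog
print(pick_vhost("192.168.0.10", vhosts, "default"))        # default
```

Accessing the server by raw IP is exactly the second case: the Host header is the IP, nothing matches, and nginx serves whatever the default_server says.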
## Solution: Add Locations to the Default Server
```nginx
# /www/server/panel/vhost/nginx/0.default.conf
server {
    listen 80 default_server;
    # ...

    location /shop/ {
        root /www/apps/techsfree-shop;
        try_files $uri $uri/ /shop/index.html;
    }

    location /erp/ {
        proxy_pass http://127.0.0.1:3101/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
Key: add path‑based location blocks to the default_server vhost. When accessing services directly by internal IP without a domain, this is the cleanest approach.
## Static Files vs Reverse Proxy
| Service | Architecture | Nginx Config |
|---|---|---|
| TechsFree Shop | Vite‑built static files | root directive |
| TechsFree ERP | Backend API + Frontend | proxy_pass |
### Static files (Shop)
Build artifacts are placed in /www/apps/techsfree-shop/ and served directly by Nginx. Since it’s an SPA, try_files $uri $uri/ /shop/index.html handles client‑side routing.
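The try_files cascade can be modeled in Python. This is a simplified sketch of the lookup order, not nginx's actual file resolution; the file set is illustrative.

```python
# Sketch of try_files $uri $uri/ /shop/index.html for an SPA:
# serve the exact file if it exists, else the directory, else fall
# back to index.html so client-side routing can take over.

def try_files(uri: str, existing: set, fallback: str) -> str:
    if uri in existing:                          # $uri: exact file match
        return uri
    if uri.rstrip("/") + "/" in existing:        # $uri/: directory match
        return uri.rstrip("/") + "/"
    return fallback                              # SPA route -> index.html

files = {"/shop/index.html", "/shop/assets/app.js"}

print(try_files("/shop/assets/app.js", files, "/shop/index.html"))  # real asset
print(try_files("/shop/orders/42", files, "/shop/index.html"))      # SPA route
```

A request for a built asset is served directly, while a client-side route like /shop/orders/42 gets index.html and the SPA router resolves it in the browser.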
### Reverse proxy (ERP)
Backend runs on Node.js (ts‑node) listening on port 3101. Nginx forwards requests to it. The trailing slash in proxy_pass is critical:
```nginx
# Wrong: the /erp/ prefix gets passed through to the backend
location /erp/ {
    proxy_pass http://127.0.0.1:3101;  # no trailing slash
}

# Right: the /erp/ prefix is stripped before forwarding
location /erp/ {
    proxy_pass http://127.0.0.1:3101/;  # with trailing slash
}
```
Whether you include a trailing slash in the proxy_pass URL changes how the path is handled. Easy to miss, annoying to debug.
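The two behaviors can be made concrete with a small Python model. This is a sketch of the rule for this specific prefix location, not nginx's full URI normalization; the request path is an example.

```python
# Sketch of proxy_pass path handling for location /erp/:
# with a URI part in proxy_pass ("/"), the location prefix is replaced;
# without one, the original request URI is forwarded unchanged.

LOCATION = "/erp/"

def upstream_uri(request_uri: str, proxy_pass_has_uri: bool) -> str:
    if proxy_pass_has_uri:
        # proxy_pass http://127.0.0.1:3101/;  -> strip the /erp/ prefix
        return "/" + request_uri[len(LOCATION):]
    # proxy_pass http://127.0.0.1:3101;  -> pass the URI through as-is
    return request_uri

print(upstream_uri("/erp/api/users", True))   # /api/users
print(upstream_uri("/erp/api/users", False))  # /erp/api/users
```

If the backend defines its routes without the /erp/ prefix, only the trailing-slash form reaches them; the other form produces 404s that look like a backend bug.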
## Managing systemd and PM2 Side by Side
Another lesson: mixing process managers causes confusion.
The dashboard runs on Ubuntu with systemd. I briefly tried moving it to a server running aaPanel (PM2), but hit permission issues and rolled back.
```shell
# Check on systemd server
systemctl --user status task-dashboard.service

# Check on PM2 server (if migrated)
pm2 status
```
When the same service was running in two environments simultaneously, changes weren’t being reflected — because systemd was spawning new processes without killing the old ones.
```shell
# Find stale processes
ps aux | grep server.py

# Kill explicitly, then restart
kill <pid>
systemctl --user restart task-dashboard.service
```
Always check for duplicate processes after a deploy.
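That check is easy to script. Below is a minimal sketch that parses ps-style output for duplicates; the sample output and paths are made up for illustration, and a real version would capture `ps aux` via subprocess (remembering that the grep pattern itself shows up in piped output).

```python
# Sketch of a post-deploy duplicate-process check on canned ps output.

SAMPLE_PS = """\
www  1201  0.0  python3 /srv/dashboard/server.py
www  1388  0.0  python3 /srv/dashboard/server.py
www  1402  0.0  nginx: worker process
"""

def find_pids(ps_output: str, needle: str) -> list:
    """Return the PID column of every line mentioning `needle`."""
    return [line.split()[1] for line in ps_output.splitlines() if needle in line]

pids = find_pids(SAMPLE_PS, "server.py")
if len(pids) > 1:
    print("duplicate processes:", pids)  # kill the stale one, then restart
```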
## Bug Fix: Inconsistent JSON Field Names
While implementing a “bulk delete completed tasks” feature, I hit an easy‑to‑miss bug.
The task JSON uses completed, but the API’s filter logic was checking done:
```python
# Buggy: filters on a field name the task JSON never uses
active_tasks = [t for t in tasks if not t.get('done')]

# Correct: the task JSON uses 'completed'
active_tasks = [t for t in tasks if not t.get('completed')]
```
Once you decide on a field name, keep it consistent across frontend, backend, and documentation. Any deviation will cause bugs like this.
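One way to enforce that consistency is to name the field exactly once. This is a sketch, not the dashboard's actual code; the constant and task shapes are illustrative.

```python
# Sketch: a single named constant for the status field, so the bulk-delete
# filter and any other consumer can't silently drift to 'done'.

COMPLETED = "completed"  # single source of truth for the field name

tasks = [
    {"id": 1, COMPLETED: True},
    {"id": 2, COMPLETED: False},
]

def purge_completed(tasks: list) -> list:
    """Bulk-delete completed tasks; return only the active ones."""
    return [t for t in tasks if not t.get(COMPLETED)]

print(purge_completed(tasks))  # [{'id': 2, 'completed': False}]
```

A typo in the constant fails loudly everywhere at once, instead of one filter quietly matching nothing.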
## Summary
- Add locations to the default server to handle IP‑based access.
- Choose static files vs reverse proxy by use case (SPAs need try_files).
- Watch the trailing slash in proxy_pass: it changes path handling.
- Check for duplicate processes after deploys when mixing process managers.
- Keep JSON field names consistent across the whole stack.
Even small internal tools teach real lessons when run in a production‑like setup.
Tags: nginx, reverse-proxy, spa, flask, systemd, deployment, webdev, infra