Secure Linux Server Setup & Application Deployment
Source: Dev.to
Deploying an application is easy.
Running it securely, so that one compromised app does not take down your entire server, requires discipline and structure.
This guide documents the exact process we follow to:
- Prepare a fresh Linux server
- Deploy databases and applications
- Keep the system secure, isolated, and maintainable
It is battle‑tested and suitable for real production servers.
What This Setup Works For
| Category | Examples |
|---|---|
| Node.js back‑ends | NestJS, Express, Fastify |
| Next.js | Standalone build |
| Static front‑ends | React, Vite |
| Databases (Docker) | PostgreSQL, MongoDB, Redis |
| Reverse proxy | Caddy, Nginx (HTTPS) |
Core Principles
- Root is not an app runtime – never run your services as root.
- One app = one service user – each app gets its own low‑privilege Linux user.
- Humans deploy, services run – only humans have sudo access.
- Databases are private by default – bind only to `127.0.0.1`.
- Reverse proxy is the only public entry point.
- Assume one app will eventually be compromised – design for containment.
Our goal is simple: If one application is hacked, everything else must remain safe.
1. Create a Normal Admin User
On a fresh server you usually start as root.
# Create a non‑root admin user (example: dev)
adduser dev
usermod -aG sudo dev
Why?
- Root SSH access is dangerous.
- Using `sudo` is auditable.
- Accidents are easier to recover from.
2. Set Up SSH Key Authentication
# Generate a modern SSH key (ed25519)
ssh-keygen -t ed25519
Copy the public key to the server:
mkdir -p /home/dev/.ssh
nano /home/dev/.ssh/authorized_keys # paste your public key here
Fix permissions:
chown -R dev:dev /home/dev/.ssh
chmod 700 /home/dev/.ssh
chmod 600 /home/dev/.ssh/authorized_keys
Harden sshd_config
sudo nano /etc/ssh/sshd_config
Ensure the following lines exist (or add them):
PermitRootLogin no
PubkeyAuthentication yes
# Disable password login
Match all
PasswordAuthentication no
Reload SSH safely:
sudo systemctl reload ssh
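Before reloading on a live server, it is worth syntax-checking the file first – a typo in `sshd_config` can lock you out. `sshd -t` (test mode) exits non-zero and prints the offending line on error:

```shell
# Validate the config; only reload if this prints nothing
sudo sshd -t
```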
3. Configure the Firewall (UFW)
# Allow only what’s required
sudo ufw allow OpenSSH # port 22
sudo ufw allow 80 # HTTP
sudo ufw allow 443 # HTTPS
sudo ufw enable
sudo ufw status verbose
Result: Only ports 22, 80, 443 are publicly reachable.
4. Install Docker (Official Repository)
Never use the `docker.io` package from the default Ubuntu repo.
# Follow Docker’s official installation guide for your distro.
# After installation, add the admin user to the docker group:
sudo usermod -aG docker dev
Re‑login and verify:
docker ps
Important Rules
- Never add service users to the `docker` group.
- Docker runs with root‑equivalent privileges, so only the admin user (`dev`) should be allowed to use it.
5. Deploy Databases Securely (Docker)
Example: PostgreSQL (secure)
docker run -d \
--name postgres \
--restart unless-stopped \
-e POSTGRES_USER=appuser \
-e POSTGRES_PASSWORD=STRONG_PASSWORD \
-e POSTGRES_DB=appdb \
-v pgdata:/var/lib/postgresql \
-p 127.0.0.1:5432:5432 \
postgres:18
Verify the binding:
ss -tulpn | grep 5432
# Expected output: 127.0.0.1:5432
Why bind to 127.0.0.1?
Docker does not honor UFW rules for published ports. It inserts its own iptables rules, so a container bound to 0.0.0.0 would be publicly reachable even if UFW blocks the port. Binding to 127.0.0.1 guarantees the database is reachable only from the host itself.
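The entire difference is in the `-p` flag. A side‑by‑side sketch of the two forms:

```shell
# PUBLIC — shorthand for 0.0.0.0:5432:5432; reachable from the
# internet even if UFW blocks the port (Docker bypasses UFW):
#   -p 5432:5432

# PRIVATE — loopback only; the form used in the command above:
#   -p 127.0.0.1:5432:5432
```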
Access from Your Local Machine (SSH Tunnel)
ssh -N -L 5432:127.0.0.1:5432 dev@SERVER_IP
Now connect locally:
| Host | Port |
|---|---|
| 127.0.0.1 | 5432 |
🔐 Encrypted, private, safe.
6. Manage Private Repository Access (Deploy Keys)
For each private repo, generate a dedicated SSH deploy key as the admin user:
ssh-keygen -t ed25519 -C "deploy-myapp" -f ~/.ssh/id_ed25519_myapp
- Add the public key (`id_ed25519_myapp.pub`) as a Deploy key in GitHub (read‑only).
- Clone using the SSH alias (never HTTPS).
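The "SSH alias" is an entry in `~/.ssh/config` that maps a per‑repo hostname to the right key. A sketch for the key generated above – the alias name `github.com-myapp` must match the host used in the clone URL later:

```
# ~/.ssh/config
Host github.com-myapp
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_myapp
    IdentitiesOnly yes
```

With this in place, `git clone git@github.com-myapp:org/repo.git` uses exactly this deploy key and nothing else.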
7. Create a Locked‑Down Service User for Each App
sudo adduser \
--system \
--no-create-home \
--group \
--shell /usr/sbin/nologin \
svc-myapp
Characteristics of this user
- Cannot SSH.
- Has no shell.
- No sudo rights.
- Owns only its app directory.
Deploy the Application
# Switch to the admin user (dev) and clone the repo
cd /var/apps
git clone git@github.com-myapp:org/repo.git myapp
cd myapp
# Install dependencies and build
npm ci
npm run build
npm prune --production
Humans build. In production, environment variables are never committed to Git.
8. Store Runtime Environment Variables Securely
Create a system‑managed env file (owned by root, mode 600, so service users cannot read other apps' secrets):
# /etc/systemd/system/myapp.env
# Example content:
# DATABASE_URL=postgres://appuser:STRONG_PASSWORD@127.0.0.1:5432/appdb
# OTHER_SECRET=...
- The file is loaded by `systemd` at runtime.
- If a Next.js project uses variables prefixed with `NEXT_PUBLIC_`, they must be available at build time because the compiler embeds them.
Build with Public Variables (if needed)
sudo -E bash -c '
set -a
source /etc/systemd/system/myapp.env
set +a
npm run build
'
If there are no NEXT_PUBLIC_* variables, this step is unnecessary – runtime injection via systemd suffices.
9. Prepare a Standalone Next.js Build
When building Next.js as a standalone app, copy static assets manually:
# After `npm run build` (or `next build`)
mkdir -p .next/standalone/.next
cp -r .next/static .next/standalone/.next/
cp -r public .next/standalone/
Why?
The standalone output contains only server code; static files (_next/static, public/) are omitted automatically. Without copying them, the app would run but assets would return 404.
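With the assets copied, a standalone Next.js app is started via the generated `server.js`, not `next start`. For the hypothetical `/var/apps/myapp` layout used in this guide, the systemd service from the next section would point at it roughly like this:

```
# Verify manually first:
#   node /var/apps/myapp/.next/standalone/server.js
# Then in the unit file:
WorkingDirectory=/var/apps/myapp/.next/standalone
ExecStart=/usr/bin/node server.js
```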
10. Set Permissions on the Application Directory
sudo chown -R svc-myapp:svc-myapp /var/apps/myapp
sudo chmod -R o-rwx /var/apps/myapp
# Even the admin user (dev) now gets permission denied – intentional.
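To see what `o-rwx` actually does without touching a real deployment, you can reproduce the effect in a throwaway directory (`mktemp` and GNU `stat` are standard on Linux):

```shell
# Demonstrate `chmod -R o-rwx` on a scratch directory
umask 022                      # make file modes predictable
dir=$(mktemp -d)
touch "$dir/app.js"            # created as 644 under this umask
chmod -R o-rwx "$dir"          # strip all "other" bits
stat -c '%a' "$dir/app.js"     # prints 640: owner rw, group r, other none
rm -rf "$dir"
```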
11. Create a Systemd Service
# /etc/systemd/system/myapp.service
[Unit]
Description=My Application
After=network.target
[Service]
User=svc-myapp
Group=svc-myapp
WorkingDirectory=/var/apps/myapp
EnvironmentFile=/etc/systemd/system/myapp.env
ExecStart=/usr/bin/node dist/main.js
Restart=always
RestartSec=3
# Security hardening
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/apps/myapp
PrivateTmp=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
RestrictNamespaces=true
LockPersonality=true
RestrictSUIDSGID=true
CapabilityBoundingSet=
AmbientCapabilities=
UMask=0077

[Install]
WantedBy=multi-user.target
Enable and start the service:
sudo systemctl daemon-reload
sudo systemctl enable --now myapp.service
sudo systemctl status myapp.service
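systemd ships a scoring tool for exactly this kind of sandboxing; it is a quick way to confirm the hardening directives took effect (the score is advisory, not a guarantee):

```shell
# Lower exposure score = more confined; flags directives the unit is missing
sudo systemd-analyze security myapp.service
```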
12. Summary Checklist
| Step | Done? |
|---|---|
| Create admin user (`dev`) | ✅ |
| Set up SSH key auth & harden sshd | ✅ |
| Configure UFW (22, 80, 443) | ✅ |
| Install Docker from official repo | ✅ |
| Add admin user to `docker` group only | ✅ |
| Deploy databases with 127.0.0.1 binding | ✅ |
| Generate per‑repo deploy keys | ✅ |
| Create low‑privilege service user (`svc-myapp`) | ✅ |
| Clone, build, and prune app | ✅ |
| Store env vars in `/etc/systemd/system/*.env` | ✅ |
| Build Next.js standalone (copy static assets) | ✅ |
| Set strict permissions on app files | ✅ |
| Create hardened systemd service | ✅ |
| Verify firewall, Docker, and service status | ✅ |
Follow this checklist on every new server, and you'll have a secure, isolated, and maintainable production environment where a single compromised app cannot bring down the whole host.
Static Frontend Apps (React, Vite, etc.)
Static apps do not run via systemd. They are built once and served directly by the reverse proxy.
1. Place the env file in the project root: `.env.production`
2. Build explicitly: `npm run build`. The build output (HTML, JS, CSS, assets) is now fully static and contains the injected values.
3. Give Caddy read‑only access to the static build folder:

```shell
sudo chown -R svc-frontend:svc-frontend /var/apps/frontend
sudo chmod -R 755 /var/apps/frontend/dist
```
Why we do this
- Caddy must read static files; no write access is needed.
- Prevents accidental or malicious file modification.
- Static apps have no runtime, no open ports, and no background process → significantly reduces attack surface.
Caddy is the only public entry point to the server. The app runs internally on a private port (e.g. 127.0.0.1:5000).
Example Reverse‑Proxy Configuration
api.example.com {
reverse_proxy 127.0.0.1:5000
}
Notes
- The app port is not exposed publicly.
- Firewall blocks direct access; only Caddy can reach it.
- Applies to NestJS, Next.js (standalone), Express, etc.
Caddyfile for Static Site
app.example.com {
root * /var/apps/frontend/dist
encode gzip zstd
try_files {path} {path}/ /index.html
file_server
}
What this does
- Serves static files directly.
- Supports client‑side routing (SPA).
- Enables compression.
- No Node.js process required.
- Only ports 80 and 443 are public; apps never bind directly to the internet.
- Static sites have zero runtime risk.
- Dynamic apps are isolated behind systemd and firewall.
This clean separation keeps the server secure, observable, and easy to reason about.
Deploying a Dynamic App
sudo chown -R dev:dev /var/apps/myapp
cd /var/apps/myapp
git pull
npm ci
npm run build
npm prune --production
sudo chown -R svc-myapp:svc-myapp /var/apps/myapp
sudo chmod -R o-rwx /var/apps/myapp
sudo systemctl restart myapp
⚠️ Never run `sudo git pull` – chown to `dev`, deploy, then chown back to the service user. This discipline:
- Prevents privilege escalation.
- Stops lateral movement between apps.
- Avoids accidental data leaks.
- Reduces risk of exposed databases.
- Prevents root‑level compromise from app bugs.
Even if one app is hacked, the system survives. Other apps survive. Data survives.
Security Philosophy
- Security is not about tools; it’s about clear boundaries and boring defaults.
- This setup avoids complexity, avoids magic, and relies on Linux doing what it already does best.
If you follow this guide end‑to‑end, your server will already be more secure than most production environments.
Happy (and secure) deploying! 🚀