Backing Up Nginx Logs the Right Way: From Basics to Automation

Published: December 15, 2025, 10:18 PM EST
3 min read
Source: Dev.to

What Nginx logs are and why they matter

Nginx writes a log entry for every request it serves, along with any errors it encounters. By default, these logs are stored in:

/var/log/nginx/

The two most important files are:

  • access.log – records every HTTP request served by Nginx.
  • error.log – records issues Nginx encounters while processing requests.

Access log

Example entry:

43.202.80.217 - - [16/Dec/2025:02:14:10 +0000] "GET /news/latest HTTP/1.1" 200 5421 "-" "Mozilla/5.0"

What it tells you:

  • Client IP address – source of the request
  • Date and time – when the request was received
  • Requested URL – the resource that was accessed
  • HTTP status code – result of the request (200, 404, …)
  • Response size – bytes sent to the client
  • User agent – browser, bot, or crawler

Used for: traffic analysis, bandwidth estimation, bot detection, page‑view analytics, cost estimation, etc.
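
As a quick illustration, a one‑liner like this (assuming the default combined log format, where the client IP is the first field) surfaces the busiest clients:

# Top 10 client IPs by request count
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head -10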

Error log

Example entry:

connect() failed (111: Connection refused) while connecting to upstream

Used for: debugging backend failures, finding misconfigurations, diagnosing outages and performance issues.
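
Two quick ways to dig into it (assuming the default log location):

# Count upstream connection failures in the current error log
grep -c "connect() failed" /var/log/nginx/error.log

# Follow new errors in real time while reproducing an issue
tail -f /var/log/nginx/error.log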

Why log rotation is essential

Logs grow continuously. If left unmanaged:

  • Disk space fills up → the server slows down or services start failing.
  • Historical data is lost → you lose valuable audit information.

Log rotation:

  • Splits logs into daily (or size‑based) chunks.
  • Compresses old logs.
  • Removes very old logs.

On most Linux systems this is handled by logrotate. A typical Nginx configuration (/etc/logrotate.d/nginx) looks like this:

/var/log/nginx/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    postrotate
        invoke-rc.d nginx rotate
    endscript
}

What this means

  • Logs rotate daily.
  • Old logs are compressed (.gz).
  • 14 days of logs are kept.
  • Nginx is told to reopen its log files after rotation (on Debian/Ubuntu, invoke-rc.d nginx rotate sends the USR1 signal).

Rotated files appear as:

access.log.1
access.log.2.gz
access.log.3.gz
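
To check the rotation setup without touching any files, logrotate's debug mode performs a dry run; --force triggers a real rotation for testing:

# Dry run: print what logrotate would do
sudo logrotate --debug /etc/logrotate.d/nginx

# Force an immediate rotation (exercises the postrotate script)
sudo logrotate --force /etc/logrotate.d/nginx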

Backup strategies

Manual copy (small projects)

scp user@server:/var/log/nginx/access.log.2.gz .

Limitations: manual, not scalable, prone to loss.

Cloud object storage

Object storage is durable, low‑cost, and easy to retrieve from. Popular options:

  • Amazon S3
  • Google Cloud Storage
  • Azure Blob Storage

If your server runs on AWS, uploading logs to an S3 bucket in the same region incurs minimal cost, and S3 is designed for 99.999999999% (11 nines) of durability, with lifecycle rules available for automatic expiration.
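
As a sketch, a lifecycle rule that expires objects after 90 days can be applied with the AWS CLI (the bucket name and retention period here are examples; adjust to your needs):

# Hypothetical 90-day retention policy for the nginx-logs bucket
aws s3api put-bucket-lifecycle-configuration \
  --bucket nginx-logs \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-old-nginx-logs",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Expiration": {"Days": 90}
    }]
  }'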

Suggested S3 layout

s3://nginx-logs/
└── 16-12-2025/
    └── ip-172-31-44-115/
        ├── access.log.2.gz
        ├── access.log.3.gz
        └── error.log.2.gz

  • Logs are grouped by date.
  • Supports multiple servers.
  • Simplifies automation.

Automating backups to S3

Create a script, e.g. /usr/local/bin/nginx_log_backup.sh:

#!/bin/bash
# Upload rotated (compressed) Nginx logs to S3, grouped by date and host.
set -e

LOG_DIR="/var/log/nginx"
BUCKET="s3://nginx-logs"
DATE=$(date +%d-%m-%Y)   # e.g. 16-12-2025, matching the bucket layout above
HOST=$(hostname)

# Sync only the rotated .gz files; the active logs are skipped. Note that
# with delaycompress, yesterday's .1 file is uploaded once the next rotation
# compresses it.
aws s3 sync "$LOG_DIR" \
  "$BUCKET/$DATE/$HOST/" \
  --exclude "*" \
  --include "*.gz"

Make it executable:

sudo chmod +x /usr/local/bin/nginx_log_backup.sh
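
The script also needs AWS credentials that can write to the bucket, ideally via an EC2 instance role rather than hard‑coded keys. A minimal policy sketch (using the example bucket name) might look like:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::nginx-logs",
        "arn:aws:s3:::nginx-logs/*"
      ]
    }
  ]
}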

Scheduling with cron

Logrotate typically runs around midnight (via cron.daily or the systemd logrotate.timer, which defaults to 00:00), so schedule the backup shortly after:

sudo crontab -e

Add the following line:

30 0 * * * /usr/local/bin/nginx_log_backup.sh >> /var/log/nginx_backup.log 2>&1

What this does

  • Runs daily at 00:30 UTC, ensuring logs have been rotated.
  • Captures both standard output and errors in /var/log/nginx_backup.log.

Verifying the backup

# List local rotated logs
ls -lh /var/log/nginx/*.gz

# Run the script manually (for testing)
sudo /usr/local/bin/nginx_log_backup.sh

# List uploaded objects
aws s3 ls s3://nginx-logs/$(date +%d-%m-%Y)/$(hostname)/ --recursive

# Check cron activity
grep CRON /var/log/syslog

# View the backup log
tail /var/log/nginx_backup.log

Common pitfalls to avoid

  • Backing up the active access.log instead of the rotated .gz files → run the backup after logrotate has finished (e.g., at 00:30).
  • Forgetting to test the script in a cron‑like environment → execute it manually and inspect the log (see the sketch after this list).
  • Not capturing cron output → redirect stdout and stderr to a log file, as shown above.
  • Missing S3 lifecycle rules → configure a lifecycle policy to delete logs older than needed.
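
One way to approximate cron's sparse environment is to run the script with an almost empty environment (a rough sketch; cron's actual environment varies by system):

# Run the backup with a minimal environment, roughly as cron would
sudo env -i HOME=/root PATH=/usr/sbin:/usr/bin:/sbin:/bin \
  /usr/local/bin/nginx_log_backup.sh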

Benefits of the setup

  • Reliable, low‑cost, and auditable log storage.
  • Enables traffic analysis, cost estimation, abuse detection, and reliability improvements.
  • Works with a minimal stack: Nginx → logrotate → shell script → cron → Amazon S3.

Logs are not noise—they are valuable data. Treat them accordingly.
