Top 7 pg_dump Backup Strategies for Production-Grade PostgreSQL

Published: December 16, 2025 at 05:57 AM EST
7 min read
Source: Dev.to

Introduction

Running pg_dump manually once in a while isn’t a backup strategy — it’s a gamble. Production databases demand systematic approaches that account for data growth, recovery‑time objectives, and operational reality. The difference between amateur and production‑grade backups comes down to strategy: knowing what, when, where, and how often to create backups.

pg_dump strategies

1. Scheduled Full Backups with Rotation

The foundation of any production backup strategy is an automated, scheduled full backup combined with an intelligent rotation policy. This guarantees recent backups are always available while preventing storage from growing without bound.

Rotation policy (grandfather‑father‑son)

| Retention | Description |
| --- | --- |
| Daily | Keep 7 days – quick recovery from recent mistakes |
| Weekly | Keep 4 weeks – medium‑term recovery |
| Monthly | Keep 12 months – compliance & historical access |

Example PowerShell script (run daily at 2 AM)

# daily_backup.ps1
$timestamp = Get-Date -Format "yyyy-MM-dd_HHmmss"
pg_dump -F c -Z 6 -h localhost -U postgres -d production_db -f "backup_$timestamp.dump"

Windows Task Scheduler – Action: powershell.exe
Arguments: -File "C:\backups\scripts\daily_backup.ps1"
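
The dump command alone does not enforce the rotation table above. A minimal pruning sketch, assuming the dumps land in C:\backups\local and follow the backup_<timestamp>.dump naming shown:

# prune_backups.ps1 - illustrative companion to daily_backup.ps1
$backupDir = "C:\backups\local"
$now       = Get-Date

Get-ChildItem -Path $backupDir -Filter "backup_*.dump" | ForEach-Object {
    $age = ($now - $_.LastWriteTime).Days
    # Daily tier: keep everything from the last 7 days
    if ($age -le 7) { return }
    # Weekly tier: keep Sunday dumps for up to 4 weeks
    if ($age -le 28 -and $_.LastWriteTime.DayOfWeek -eq 'Sunday') { return }
    # Monthly tier: keep first-of-month dumps for up to 12 months
    if ($age -le 365 -and $_.LastWriteTime.Day -eq 1) { return }
    Remove-Item $_.FullName    # everything else ages out
}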

Without automation, backup consistency depends on human memory — and humans forget, especially during busy periods when backups matter most.

2. Pre‑Deployment Safety Backups

Every deployment that touches the database should trigger an immediate backup before any changes execute. This gives you a restore point from minutes ago, not hours.

Backup matrix

| Trigger event | Backup type | Retention |
| --- | --- | --- |
| Schema migration | Full database | Minimum 48 hours |
| Major release | Full database | Until next successful release |
| Data import / ETL | Affected tables | 24 hours |
| Manual data fixes | Affected tables | 72 hours |

Example snippets

# Pre‑deployment backup in CI/CD pipeline
$release   = $env:RELEASE_VERSION
$timestamp = Get-Date -Format "yyyyMMdd_HHmmss"
pg_dump -F c -Z 6 -d production_db -f "pre_deploy_${release}_${timestamp}.dump"

# Targeted backup for specific tables before migration
pg_dump -F c -d production_db -t users -t orders -f "pre_migrate_critical_tables.dump"

Integrating backup triggers into your deployment pipeline eliminates the “did someone take a backup?” question. The backup becomes a required step that blocks deployment until complete.
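
What "blocks deployment" looks like depends on your CI system, but the core is simply failing the step when pg_dump fails. A minimal PowerShell sketch, with the follow-on deployment commands left as a placeholder:

# Fail the pipeline step if the pre-deployment backup does not complete
$timestamp = Get-Date -Format "yyyyMMdd_HHmmss"
pg_dump -F c -Z 6 -d production_db -f "pre_deploy_$timestamp.dump"
if ($LASTEXITCODE -ne 0) {
    Write-Error "Pre-deployment backup failed; aborting deployment."
    exit 1    # non-zero exit marks the step as failed and stops the pipeline
}
# ...deployment commands run only after this point...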

3. Parallel Dump Strategy for Large Databases

When a database exceeds ~50 GB, a single‑threaded pg_dump can be painfully slow. A parallel dump leverages multiple CPU cores to dump several tables simultaneously, cutting backup windows by 60‑80 % on modern multi‑core servers. Use the directory format (-F d), which splits the backup into per‑table files.

Example commands

# Parallel dump with 4 workers
pg_dump -F d -j 4 -Z 6 -h localhost -U postgres -d large_db -f backup_dir/

# Parallel restore (matching job count)
pg_restore -F d -j 4 -h localhost -U postgres -d restored_db backup_dir/

# Verify backup directory contents
ls -l backup_dir/
# Expected: toc.dat, 3456.dat.gz, 3457.dat.gz, …

Tip: Start with two fewer workers than the server has CPU cores (leaving headroom for PostgreSQL itself and the OS). On a 16‑core server, -j 12 or -j 14 usually maximizes throughput without destabilizing the database.
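
A small sketch of that rule of thumb, deriving the worker count from the machine at runtime rather than hard‑coding it:

# Use (CPU cores - 2) workers, but never fewer than 1
$jobs = [Math]::Max(1, [Environment]::ProcessorCount - 2)
pg_dump -F d -j $jobs -Z 6 -h localhost -U postgres -d large_db -f backup_dir/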

4. Schema‑and‑Data Separation Strategy

Separating schema backups from data backups enables faster recovery workflows and better version‑control practices. Schema‑only backups are tiny (kilobytes) regardless of database size and can be stored in Git alongside application code. Data‑only backups can then be created more frequently, since they skip the relatively static schema definitions.

Backup matrix

| Backup type | Approx. size | Frequency | Storage location |
| --- | --- | --- | --- |
| Schema‑only | ~50 KB | Every commit / deploy | Git repository |
| Data‑only | Varies | Hourly / Daily | Cloud storage |
| Full backup | Largest | Weekly | Off‑site archive |

Example snippets

# Schema backup for version control
pg_dump -d production_db --schema-only -O --no-privileges -f schema.sql

# Data‑only backup (assumes schema exists at restore target)
pg_dump -F c -d production_db --data-only -f data_backup.dump

# Restore workflow: schema first, then data
psql -d new_db -f schema.sql
pg_restore -d new_db --data-only data_backup.dump

This separation shines in micro‑service architectures where each service owns its own schema but shares a common data store.
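
One practical wrinkle with data‑only restores: if the target schema already enforces foreign‑key constraints, loading tables in an arbitrary order can fail. pg_restore's --disable-triggers option (which requires superuser privileges) works around this during the load, as a variation on the restore step above:

# Data-only restore with constraint triggers disabled while the data loads
pg_restore -d new_db --data-only --disable-triggers data_backup.dump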


5. Multi‑Destination Backup Strategy

Storing all backups in one location creates a single point of failure. The multi‑destination strategy copies backups to multiple storage systems — local disk for fast access, cloud storage for off‑site protection, and possibly a secondary region for disaster recovery. If any single storage system fails, backups remain available elsewhere.

# Backup to local storage first
$backupFile = "backup_$(Get-Date -Format 'yyyyMMdd').dump"
pg_dump -F c -Z 6 -d production_db -f "C:\backups\local\$backupFile"

# Copy to S3 (primary cloud)
aws s3 cp "C:\backups\local\$backupFile" "s3://company-backups/postgres/$backupFile"

# Copy to secondary region (geographic redundancy)
aws s3 cp "C:\backups\local\$backupFile" "s3://company-backups-dr/postgres/$backupFile" --region us-west-2

# Copy to network storage (NAS)
Copy-Item "C:\backups\local\$backupFile" "\\nas\backups\postgres\"

Storage tiers

  • Local storage – fastest restore times for common recovery scenarios.
  • Primary cloud – off‑site protection against local disasters.
  • Secondary region – geographic redundancy for true disaster recovery.

The multi‑destination approach does increase complexity and storage costs, but the protection against correlated failures justifies the investment for any data you cannot afford to lose.
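
Because each copy can fail independently, it is worth checking every destination instead of trusting the happy path. A minimal verification sketch, reusing the bucket and share names from the example above:

# Confirm each destination actually received today's backup file
$localOk = Test-Path "C:\backups\local\$backupFile"
$nasOk   = Test-Path "\\nas\backups\postgres\$backupFile"
aws s3api head-object --bucket company-backups --key "postgres/$backupFile" | Out-Null
$s3Ok    = ($LASTEXITCODE -eq 0)

if (-not ($localOk -and $nasOk -and $s3Ok)) {
    Write-Error "One or more backup destinations are missing $backupFile"
}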

6. Selective Table Backup Strategy

Not all tables deserve equal backup attention. The selective backup strategy prioritizes critical business data with more frequent backups while treating log tables, caches, and temporary data with less urgency. This reduces backup times and storage costs, and allows faster recovery of the data that actually matters.

# Critical tables – backup every hour
pg_dump -F c -d production_db \
    -t users -t orders -t payments -t subscriptions \
    -f critical_hourly.dump

# Standard tables – backup daily
pg_dump -F c -d production_db \
    -T logs -T sessions -T cache -T analytics_events \
    -f standard_daily.dump

# Exclude large, regenerable tables from full backups
pg_dump -F c -d production_db \
    -T 'public.*_log' -T 'public.*_cache' -T audit_trail \
    -f production_backup.dump

  • Identify tables containing irreplaceable business data (e.g., users, transactions, orders).
  • Classify tables that can be regenerated (caches, computed aggregates, search indexes).
  • Create tiered backup schedules matching data criticality to backup frequency.

This strategy requires upfront analysis of your data model but pays dividends in faster backups and more focused recovery procedures. When disaster strikes, restoring critical tables in minutes beats waiting hours for a full database recovery.
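
One way to keep the tiering maintainable is to drive pg_dump from a single classification map instead of hand‑edited command lines. A sketch with illustrative table names:

# Map each tier to its tables (names are examples; classify your own schema)
$tiers = @{
    critical = @('users', 'orders', 'payments', 'subscriptions')   # hourly schedule
    standard = @('products', 'inventory', 'customers')             # daily schedule
}

# Build the -t arguments for the requested tier and run the dump
$tier     = 'critical'
$dumpArgs = $tiers[$tier] | ForEach-Object { '-t'; $_ }
pg_dump -F c -d production_db @dumpArgs -f "${tier}_backup.dump"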

7. Encrypted Backup Strategy

Production backups contain sensitive data — customer information, financial records, authentication credentials. The encrypted backup strategy ensures this data remains protected even if backup files are compromised. Encryption should happen before backups leave your server, not as an afterthought during storage.

# Backup with OpenSSL encryption (AES‑256)
pg_dump -F c -d production_db | \
    openssl enc -aes-256-cbc -salt -pbkdf2 -out backup_encrypted.dump.enc

# Decrypt and restore
openssl enc -d -aes-256-cbc -pbkdf2 -in backup_encrypted.dump.enc | \
    pg_restore -d restored_db

# Using GPG for key‑based encryption
pg_dump -F c -d production_db | \
    gpg --encrypt --recipient backup@company.com -o backup.dump.gpg

Store encryption keys separately from backup files — preferably in a secrets manager or hardware security module. A backup encrypted with a key stored alongside it provides little actual security. Document your key‑management procedures and test decryption regularly to avoid discovering key problems during an actual emergency.
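
Testing decryption does not require a full restore; listing the archive's table of contents is enough to prove that both the key and the file are usable. A minimal check against the OpenSSL‑encrypted backup above (the temporary file name is illustrative):

# Decrypt to a temporary file and list the archive TOC; no data is restored
openssl enc -d -aes-256-cbc -pbkdf2 -in backup_encrypted.dump.enc -out decrypted_check.dump
pg_restore --list decrypted_check.dump
Remove-Item decrypted_check.dump    # remove the plaintext copy as soon as the check passes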

A Simpler Path: Postgresus

Implementing these seven strategies manually requires significant scripting, scheduling, and monitoring infrastructure. Postgresus is a popular PostgreSQL backup tool that provides all of these strategies through a web interface that takes minutes to configure. It handles:

  • Scheduling and rotation
  • Multi‑destination storage (S3, Google Drive, Dropbox, NAS)
  • AES‑256‑GCM encryption
  • Instant notifications

It suits individuals and enterprise teams alike who want production‑grade backups without managing complex scripts.

Choosing Your Strategy Combination

Most production environments benefit from combining multiple strategies rather than relying on just one. The right combination depends on your database size, recovery‑time objectives, and compliance requirements.

| Audience | Recommended core strategies |
| --- | --- |
| Startups & small teams | Scheduled full backups + pre‑deployment backups + multi‑destination storage |
| Growing companies | Add selective table backups + encryption + schema/data separation |
| Enterprise environments | All strategies, with parallel dumps, geographic redundancy, and comprehensive retention policies |

Start with scheduled full backups as your foundation, then layer additional strategies as your needs evolve. Each strategy addresses a specific failure mode — together, they create defense‑in‑depth that handles everything from accidental deletions to datacenter disasters.

Conclusion

Production‑grade PostgreSQL backups require more than occasional pg_dump commands. The seven strategies covered here (scheduled full backups, pre‑deployment safety backups, parallel dumps, schema/data separation, multi‑destination storage, selective table backups, and encryption) address the full spectrum of backup requirements, from routine protection to disaster recovery. You don't need to implement all seven immediately, but you should have a clear plan for which strategies apply to your situation and when you'll add more. The cost of implementing proper backup strategies is measured in hours; the cost of losing production data is measured in careers, customers, and sometimes companies.
