Top 10 Mistakes Developers Make with pg_dump (And How to Avoid Them)

Published: December 6, 2025 at 05:18 PM EST
4 min read
Source: Dev.to

Introduction

Database backups are your last line of defense against data loss, yet many developers unknowingly sabotage their backup strategy with common pg_dump mistakes. These errors often go unnoticed until disaster strikes—when it’s already too late. This guide walks through the most frequent pitfalls and shows how to build a bullet‑proof backup workflow.

Mistake 1: Not testing restores

Creating backups without ever testing if they actually restore is like buying insurance without reading the policy.

How to avoid it

  • Schedule regular restore tests in a staging environment.
  • Automate this process (at least monthly); a minimal restore-test sketch follows this list.
  • Document restoration procedures so anyone on the team can execute them under pressure.
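
A minimal restore-test sketch, assuming a custom-format dump at a placeholder path and a table you expect to exist:

# Restore the latest dump into a scratch database (path and table name are placeholders)
createdb restore_test
pg_restore -d restore_test /backups/mydb_latest.dump
psql -d restore_test -c "SELECT count(*) FROM orders;"
dropdb restore_test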

Mistake 2: Using the plain‑text format for large databases

The default plain‑text SQL format (.sql) creates massive files and doesn’t support parallel restoration. For databases over a few gigabytes this means painfully slow backup and restore times.

| Format | Compression | Parallel Restore | Best For |
|---|---|---|---|
| Plain (.sql) | No | No | Small DBs, version control |
| Custom (-Fc) | Yes | Yes | Most production use cases |
| Directory (-Fd) | Yes | Yes | Large DBs, selective restore |
| Tar (-Ft) | No | No | Archive compatibility |

How to avoid it

  • Use the custom (-Fc) or directory (-Fd) format for any database larger than about 1 GB.
  • Both formats compress automatically and enable parallel restoration with pg_restore -j <jobs>; the directory format additionally supports parallel dumps with pg_dump -j. A short example follows this list.
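
For illustration, a rough example of dumping and restoring with parallelism; the target database name is a placeholder and is assumed to already exist:

# Dump in custom format (compressed by default)
pg_dump -Fc mydb -f mydb.dump

# Restore with four parallel jobs (custom and directory formats only)
pg_restore -j 4 -d mydb_restored mydb.dump

# Directory format also allows parallel dumps
pg_dump -Fd -j 4 mydb -f mydb_dir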

Mistake 3: Not backing up roles, permissions, or tablespaces

pg_dump backs up a single database and does not include cluster-wide objects such as roles and tablespace definitions. Restoring a dump on a fresh server can therefore fail or leave the database unusable because the roles and permissions it references don't exist.

How to avoid it

# Dump globals (roles, tablespaces, etc.)
pg_dumpall --globals-only > globals.sql

# Dump the database itself
pg_dump -Fc mydb > mydb.dump

Store both files together as part of your backup routine.
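
On the new server, restore the globals first so the roles exist before the objects that reference them. A minimal sketch:

# Recreate roles and tablespaces first
psql -f globals.sql postgres

# Then recreate and restore the database itself
pg_restore --create -d postgres mydb.dump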

Mistake 4: Running pg_dump during high‑traffic periods

Executing pg_dump on a busy primary adds I/O and CPU load, takes ACCESS SHARE locks that block schema changes, and holds a long-running transaction open that can delay vacuum and increase bloat.

How to avoid it

  • Schedule backups during low‑traffic windows.
  • Prefer running backups against a read replica instead of the primary; a small example follows this list.
  • If you take parallel dumps from a replica on older PostgreSQL versions that can't export synchronized snapshots, pg_dump provides --no-synchronized-snapshots.
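
A rough example of dumping from a replica; the hostname and backup path are placeholders:

# Dump from a read replica during a low-traffic window
pg_dump -h replica.internal -Fc mydb -f /backups/mydb.dump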

Mistake 5: Including ownership and privilege information that doesn’t match the target environment

Backing up with ownership and privilege commands baked in can cause restoration failures when the target environment has different user configurations (e.g., moving between development, staging, and production).

Useful flags

| Flag | Purpose | When to use |
|---|---|---|
| --no-owner | Omits OWNER TO commands | Cross‑environment restores |
| --no-privileges | Omits GRANT/REVOKE statements | Different permission setups |
| --no-comments | Omits COMMENT commands | Cleaner dumps |

How to avoid it

  • For portable backups, include --no-owner --no-privileges (a sketch follows this list).
  • Apply appropriate permissions after restoration based on the target environment.
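
A brief sketch of both approaches; the staging database name is a placeholder:

# Option A: strip ownership and privileges at dump time
pg_dump -Fc --no-owner --no-privileges mydb -f mydb_portable.dump

# Option B: dump normally and strip them at restore time instead
pg_restore --no-owner --no-privileges -d mydb_staging mydb.dump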

Mistake 6: Storing uncompressed backups

Uncompressed backups consume enormous storage space and take longer to transfer. A 50 GB database might compress to 5 GB—a 10× saving.

How to avoid it

# Custom format (compressed by default)
pg_dump -Fc mydb > mydb.dump

# Plain format piped through gzip
pg_dump mydb | gzip > mydb.sql.gz

# Maximum compression with -Z (0–9)
pg_dump -Fc -Z 9 mydb > mydb_maxcomp.dump
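
To bring a gzip-compressed plain-text dump back, stream it through psql:

# Restore a gzipped plain-text dump
gunzip -c mydb.sql.gz | psql mydb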

Mistake 7: Ignoring schema‑only backups

Backing up only data or only the full database misses the value of schema‑only dumps. Schema‑only backups are invaluable for version control, documentation, and rapid environment setup.

How to avoid it

# Schema‑only dump
pg_dump --schema-only -Fc mydb > mydb_schema.dump

Store schema dumps in version control to track database evolution over time.
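
Because the custom format is binary, a plain-text schema dump is usually easier to diff in version control; for example:

# Plain-text schema dump, diff-friendly for version control
pg_dump --schema-only mydb > mydb_schema.sql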

Mistake 8: Embedding passwords in backup scripts

Hard‑coding passwords creates security vulnerabilities and makes credential rotation a nightmare. Scripts often end up in version control, exposing sensitive data.

How to avoid it

  • Use a .pgpass file with permissions 600.
  • Use environment variables (PGPASSWORD) or connection service files (pg_service.conf).
  • Never commit credentials to repositories.

# Example ~/.pgpass entry (file contents)
hostname:port:dbname:username:password

# Restrict the file's permissions
chmod 600 ~/.pgpass
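
With the entry in place, pg_dump connects without prompting; the host and user below are placeholders:

# The password is read from ~/.pgpass, not from the script
pg_dump -h db.internal -U backup_user -Fc mydb -f /backups/mydb.dump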

Mistake 9: No clear retention policy

Keeping every backup forever wastes storage and money, while keeping too few limits recovery options. Many teams never establish a retention strategy.

How to avoid it

Implement a tiered retention policy and automate cleanup:

| Frequency | Retention |
|---|---|
| Daily | 7 days |
| Weekly | 4 weeks |
| Monthly | 12 months |

Use a script or backup tool to purge old files according to this schedule.
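
A minimal cleanup sketch for the daily tier, assuming dumps live under a placeholder path:

# Delete daily dumps older than 7 days
find /backups/daily -name '*.dump' -mtime +7 -delete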

Mistake 10: Relying on manual pg_dump execution

Manual execution guarantees eventual failure: someone forgets, is on vacation, or assumes someone else ran it. Manual processes don’t scale and don’t survive team changes.

How to avoid it

  • Automate everything with cron jobs, systemd timers, or dedicated backup tools; a minimal cron sketch follows this list.
  • Implement monitoring/alerting to notify when backups fail or don’t run on schedule.
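
A minimal cron sketch; the script path and alert address are placeholders, and a local mail command is assumed to be configured:

# Nightly backup at 02:00, with an email alert if the script fails
0 2 * * * /opt/backups/backup.sh || echo "pg_dump backup failed on $(hostname)" | mail -s "Backup failure" ops@example.com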

Conclusion

While mastering pg_dump flags and scripting workarounds is possible, modern PostgreSQL backup tools (e.g., Postgresus, pgBackRest) eliminate many of these pitfalls. They handle compression, scheduling, retention policies, and restore testing automatically—suitable for individual developers and enterprise teams alike.

Every mistake on this list has caused real data loss for real teams. The good news: they’re all preventable. Whether you refine your pg_dump scripts or adopt a dedicated backup solution, the key is building a system that works without constant attention. Test your restores, automate your processes, and never assume your backups work until you’ve proven they do.
