Top 10 pg_dump Options You Should Know for Reliable PostgreSQL Backups

Published: December 4, 2025 at 03:08 PM EST
4 min read
Source: Dev.to

PostgreSQL’s pg_dump utility is the go‑to tool for creating logical backups, but its true power lies in the dozens of command‑line options that let you customize exactly what gets backed up and how. Knowing which options to use — and when — can mean the difference between a backup that saves your project and one that falls short during a crisis. This guide covers the most essential pg_dump options every developer and DBA should master.

1. Format Selection with -F / --format

The format option determines how your backup is stored and what restoration capabilities you’ll have. Choosing the right format upfront saves time and headaches during recovery. This single option affects compression, parallel‑restore support and file compatibility.

| Format | Flag | Extension | Compression | Parallel Restore | Best For |
|---|---|---|---|---|---|
| Plain | -F p | .sql | No | No | Small DBs, manual review |
| Custom | -F c | .dump | Yes | Yes | Most production use cases |
| Directory | -F d | folder | Yes | Yes | Very large databases |
| Tar | -F t | .tar | No | No | Archive compatibility |

# Custom format (recommended for most cases)
pg_dump -F c -d myapp -f backup.dump

# Directory format for parallel operations
pg_dump -F d -d myapp -f backup_dir/

For databases under 500 GB, the custom format (-F c) provides the best balance of compression, flexibility, and restoration speed.
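
Custom-format dumps are restored with pg_restore rather than psql. A minimal sketch, assuming the target database myapp already exists:

# Restore a custom-format dump into an existing database
pg_restore -d myapp backup.dump

# Restore just one table from the same dump
pg_restore -d myapp -t users backup.dump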

2. Parallel Dump with -j / --jobs

The jobs option enables parallel dumping by specifying the number of concurrent processes. This dramatically reduces backup time for large databases by dumping multiple tables simultaneously. It works only with the directory format (-F d).

# Dump using 4 parallel processes
pg_dump -F d -j 4 -d myapp -f backup_dir/

A good rule of thumb is to set the number of jobs to match your CPU cores, but leave at least one core free for the database server itself. For a server with 8 cores, -j 6 or -j 7 typically yields optimal performance without starving other processes.
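
pg_restore accepts the same -j flag, so a directory-format dump can be restored in parallel as well:

# Restore the directory-format dump with 4 parallel workers
pg_restore -j 4 -d myapp backup_dir/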

3. Table Selection with -t / --table

Back up specific tables instead of the entire database. You can specify multiple tables by repeating the flag or use wildcards for pattern matching.

# Backup single table
pg_dump -d myapp -t users -f users_backup.sql

# Backup multiple tables
pg_dump -d myapp -t users -t orders -t products -f critical_tables.sql

# Use wildcards for pattern matching
pg_dump -d myapp -t 'public.user_*' -f user_tables.sql

Table names are case‑sensitive and should include the schema prefix when working with non‑public schemas (e.g., -t sales.orders).
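
Because -t patterns are parsed like SQL identifiers, a mixed-case name must also be double-quoted inside the shell quoting. A quick sketch, using a hypothetical UserAccounts table:

# Match a mixed-case table name exactly
pg_dump -d myapp -t '"UserAccounts"' -f user_accounts_backup.sql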

4. Table Exclusion with -T / --exclude-table

Exclude specific tables (or patterns) from the dump. Useful for skipping large log, session, or temporary tables that can be regenerated.

# Exclude log and session tables
pg_dump -d myapp -T logs -T sessions -T temp_data -f backup.sql

# Exclude tables matching a pattern
pg_dump -d myapp -T 'public.*_log' -T 'public.*_temp' -f backup.sql

Combining -T with regular backups can reduce backup size by 50% or more when your database contains large audit or logging tables.
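
If you want to keep an excluded table's definition and skip only its rows, --exclude-table-data is a useful middle ground:

# Keep the logs table's structure but omit its data
pg_dump -d myapp --exclude-table-data=logs -f backup.sql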

5. Schema‑Only with --schema-only

Exports just the database structure—tables, indexes, constraints, functions, and triggers—without any row data. Ideal for version control, documentation, or creating empty replicas.

# Export complete schema
pg_dump -d myapp --schema-only -f schema.sql

# Schema for specific tables only
pg_dump -d myapp --schema-only -t users -t orders -f tables_schema.sql

Schema‑only backups are typically just a few hundred kilobytes regardless of database size, making them perfect for storing in Git repositories alongside your application code.
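
A simple pattern (paths here are illustrative) is to refresh the snapshot as part of a deploy or nightly job, so schema drift shows up in code review:

# Regenerate the schema file and commit it alongside the app
pg_dump -d myapp --schema-only -f db/schema.sql
git add db/schema.sql
git commit -m "chore: refresh schema snapshot"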

6. Data‑Only with --data-only

Exports only row data, omitting schema definitions. Useful for refreshing data in an existing database structure or creating data snapshots for testing.

# Export all data without schema
pg_dump -d myapp --data-only -f data.sql

# Data‑only with INSERT statements (more portable)
pg_dump -d myapp --data-only --inserts -f data_inserts.sql

When using --data-only, the target database must already have the correct schema in place, including all tables, constraints, and sequences.
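
A typical refresh therefore restores the schema first and loads the data afterwards. One sketch, with a hypothetical staging database as the target; note that --disable-triggers (handy when foreign keys would reject rows loaded out of order) requires superuser privileges during the load:

# 1. Recreate the structure in the target database
psql -d staging -f schema.sql

# 2. Dump the data with trigger-disabling commands, then load it
pg_dump -d myapp --data-only --disable-triggers -f data.sql
psql -d staging -f data.sql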

7. Compression Level with -Z / --compress

Controls compression for custom and directory formats (0 = no compression, 9 = maximum). Higher levels produce smaller files but take longer to create.

| Level | Speed | Size Reduction | Best For |
|---|---|---|---|
| 0 | Fastest | None | Testing, fast recovery priority |
| 1-3 | Fast | Moderate | Daily backups, balanced approach |
| 4-6 | Medium | Good | Standard production backups |
| 7-9 | Slow | Maximum | Long-term archival, storage-limited |

# Maximum compression for archival
pg_dump -F c -Z 9 -d myapp -f backup.dump

# Fast compression for frequent backups
pg_dump -F c -Z 1 -d myapp -f backup.dump

For most production scenarios, -Z 6 (the default) offers an excellent compression‑to‑speed ratio.
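
On PostgreSQL 16 and newer, -Z also accepts a compression algorithm with an optional level, which can trade ratio for speed more aggressively than gzip:

# zstd at a high level (PostgreSQL 16+)
pg_dump -F c -Z zstd:9 -d myapp -f backup.dump

# lz4 when speed matters more than size (PostgreSQL 16+)
pg_dump -F c -Z lz4 -d myapp -f backup.dump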

8. Clean Objects with -c / --clean

Adds DROP statements before the CREATE statements in the dump, ensuring restoration replaces existing objects rather than failing on conflicts. Pair with --if-exists to avoid errors when objects are missing. Note that -c affects only plain-text dumps; for custom and directory archives, the same behavior is requested when you run pg_restore.

# Backup with DROP statements
pg_dump -d myapp -c -f backup.sql

# Combine with --if-exists to avoid errors on missing objects
pg_dump -d myapp -c --if-exists -f backup.sql
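
As noted above, archive formats handle this at restore time instead:

# Drop and recreate objects while restoring an archive-format dump
pg_restore -d myapp --clean --if-exists backup.dump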

9. Create Database with -C / --create

Includes a CREATE DATABASE statement and a reconnect command, making the backup fully self-contained. When restoring, connect to a maintenance database (e.g., postgres) rather than the target database, since the script creates and switches to the target itself. As with -c, this option affects only plain-text dumps; for archive formats, pass --create to pg_restore.

# Include CREATE DATABASE statement
pg_dump -d myapp -C -f backup.sql

# Full backup with both clean and create
pg_dump -d myapp -C -c --if-exists -f backup.sql
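
Restoring such a dump then connects to postgres, not myapp, since the script creates and switches to the target database itself:

# Connect to a maintenance database; the dump creates myapp
psql -d postgres -f backup.sql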