Speeding Up PostgreSQL in Containers
The Problem
Running a test suite on an older CI machine with slow disks revealed PostgreSQL as a major bottleneck. Each test run was taking over 1 hour to complete.
The culprit? Tests performed numerous database operations, and TRUNCATE commands were used to clean up data after each test. With slow disk I/O, PostgreSQL spent most of its time syncing data to disk—operations that are completely unnecessary in an ephemeral CI environment where data persistence doesn’t matter.
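For context, the per-test cleanup boils down to statements like the one below; the table names are hypothetical and the service name is assumed, but each such statement forces PostgreSQL to flush the emptied tables' state to disk while fsync is on.

```sh
# Illustrative only – the kind of cleanup each test issues
# (table names are made up; the container/service is assumed to be called "postgres"):
docker compose exec postgres psql -U postgres -d api_test \
  -c 'TRUNCATE TABLE users, roles, filament_supplier_assortments RESTART IDENTITY CASCADE;'
```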
Running top during test execution showed the smoking gun:
242503 postgres 20 0 184592 49420 39944 R 81.7 0.3 0:15.66 postgres: postgres api_test 10.89.5.6(43216) TRUNCATE TABLE
- PostgreSQL was consuming 81.7 % CPU just to truncate a table.
- The single TRUNCATE ran for over 15 seconds.
On a machine with slow disks, PostgreSQL was waiting for the kernel to confirm that data was written to physical storage—even though the operation only needed to empty tables between tests.
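If you want to confirm the same diagnosis on your own runner, pg_stat_activity shows what the busy backend is actually waiting on. A minimal sketch, assuming the service is named postgres and the database api_test as in the configs below:

```sh
# Check what the active backend is waiting on; wait_event_type = 'IO' with events
# like DataFileSync or WALSync points at fsync stalls rather than real query work:
docker compose exec postgres psql -U postgres -d api_test -c \
  "SELECT pid, wait_event_type, wait_event, state, left(query, 40) AS query
     FROM pg_stat_activity
    WHERE state = 'active';"
```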
The Fix – Three Simple PostgreSQL Tweaks
services:
  postgres:
    image: postgres:16.11-alpine
    environment:
      POSTGRES_INITDB_ARGS: "--nosync"
      POSTGRES_SHARED_BUFFERS: 256MB
    tmpfs:
      - /var/lib/postgresql/data:size=1g
1. --nosync Flag
- POSTGRES_INITDB_ARGS: "--nosync" tells PostgreSQL to skip fsync() calls during database initialization.
- In CI we don't care about durability – if the container crashes we'll just start over.
- This eliminates expensive disk-sync operations that were slowing down database setup.
2. Increase shared_buffers
- POSTGRES_SHARED_BUFFERS: 256MB (up from the default ~128 MB) gives PostgreSQL more memory to cache frequently accessed data.
- Helpful when tests repeatedly access the same tables.
3. Mount Data Directory on tmpfs
tmpfs:
  - /var/lib/postgresql/data:size=1g
- tmpfs creates an in-memory filesystem for PostgreSQL's data directory.
- All database operations happen in RAM, dramatically speeding up:
  - TRUNCATE operations – instant cleanup between tests
  - Index updates – no disk seeks required
  - WAL (Write-Ahead Log) writes – purely memory operations
  - Checkpoint operations – no waiting for disk flushes
The 1 GB size limit is generous for most test databases; adjust it based on your data volume.
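A couple of quick sanity checks (a sketch, assuming the service is named postgres) confirm that the settings actually took effect and that the data directory really landed on tmpfs:

```sh
# Did the shared_buffers bump apply? Expect 256MB here:
docker compose exec postgres psql -U postgres -c "SHOW shared_buffers;"

# Is the data directory actually an in-memory filesystem? The mount entry should say tmpfs:
docker compose exec postgres grep /var/lib/postgresql/data /proc/mounts
```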
Results
| Metric | Before | After | Improvement |
|---|---|---|---|
| Total test run time | ~60 min | ~10 min | 6× faster |
| Single test (e.g., API::FilamentSupplierAssortmentsTest#test_create_validation_negative_price) | 25.5 s | 0.47 s | ≈ 55× faster |
| Average per‑test time (24 tests) | 27 s | 0.45 s | ≈ 60× faster |
Sample Output
Before tmpfs optimization
API::FilamentSupplierAssortmentsTest#test_create_validation_negative_price = 25.536s
API::FilamentSupplierAssortmentsTest#test_list_with_a_single_assortment = 29.996s
API::FilamentSupplierAssortmentsTest#test_list_missing_token = 25.952s
After tmpfs optimization
API::FilamentSupplierAssortmentsTest#test_list_as_uber_curator = 0.474s
API::FilamentSupplierAssortmentsTest#test_list_as_assistant = 0.466s
API::FilamentSupplierAssortmentsTest#test_for_pressman_without_filament_supplier = 0.420s
Why This Works
The slow disk was being hit at every stage of the test cycle:

- TRUNCATE – PostgreSQL was syncing empty-table states to disk.
- Database initialization – each CI run recreated the database.
- INSERT – creating test fixtures (users, roles, …).
- Transaction commits – each test runs in a transaction that gets rolled back.
- Frequent small writes – occurring during test execution.
On slow disks, even simple operations like creating a test user or truncating a table took seconds instead of milliseconds. The top output above shows a single TRUNCATE consuming 81.7 % CPU while waiting for disk I/O. Multiply that across hundreds of tests, and you get hour‑long CI runs.
Practical Guidance
- Production – keep fsync enabled and use conservative settings for durability.
- CI – data is ephemeral; speed matters more than durability.
- Profile your pipeline – we discovered disk I/O was the bottleneck, not CPU or memory (a quick way to check is sketched below this list).
- tmpfs is the ultimate disk-I/O eliminator – everything in RAM means zero disk bottleneck.
- Memory – tmpfs consumes RAM; ensure your CI runners have enough (≥ 1 GB for the DB).
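For the profiling point above, two standard Linux tools are usually enough to tell an I/O-bound runner from a CPU-bound one (a rough sketch; iostat requires the sysstat package):

```sh
# Watch the runner while the suite executes:
vmstat 1 5      # a consistently high "wa" column means CPUs are stalled on disk I/O
iostat -x 1 5   # %util near 100 on the data disk means the device is saturated
```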
Full PostgreSQL Service Configuration
services:
  postgres:
    image: postgres:16.11-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: dbpgpassword
      POSTGRES_DB: api_test
      POSTGRES_INITDB_ARGS: "--nosync"
      POSTGRES_SHARED_BUFFERS: 256MB
    ports:
      - 5432
    tmpfs:
      - /var/lib/postgresql/data:size=1g
Note: The tmpfs field is officially supported in Woodpecker CI's backend (see pipeline/backend/types/step.go). If you encounter schema-validation warnings, they are likely due to outdated documentation—the feature works as expected.
Takeaway
- Small configuration tweaks can have massive impacts on CI speed.
- Optimizing for speed rather than durability is appropriate for temporary test databases.
- Using tmpfs eliminates the disk-I/O bottleneck, turning hour-long test runs into minutes.
Happy testing! 🚀
CI + tmpfs: Simple, Effective, No Code Changes
Woodpecker CI makes it trivial with native Docker support – just add a tmpfs: field and you're done.
If you’re on GitHub Actions, GitLab CI, or other platforms, you might need work‑arounds like docker run with --tmpfs flags or custom runner configurations.
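As a rough sketch of such a workaround, the same service can be started directly with docker run and its --tmpfs flag; the values mirror the compose file above, and the container name is arbitrary:

```sh
# Start a throwaway PostgreSQL with its data directory on tmpfs (no tmpfs: field needed):
docker run -d --name ci-postgres \
  --tmpfs /var/lib/postgresql/data:rw,size=1g \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=dbpgpassword \
  -e POSTGRES_DB=api_test \
  -e POSTGRES_INITDB_ARGS="--nosync" \
  -p 5432:5432 \
  postgres:16.11-alpine
```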
TL;DR: I tried. tmpfs is still faster and simpler.
Could Aggressive PostgreSQL Tuning Match tmpfs?
After seeing the dramatic improvements with tmpfs, I wondered:
“Could we achieve similar performance by aggressively tuning PostgreSQL settings instead?”
This would be useful for environments where tmpfs isn’t available or RAM is limited.
Experiment: Disable All Durability Features
services:
  postgres:
    command:
      - postgres
      - -c
      - fsync=off                # Skip forced disk syncs
      - -c
      - synchronous_commit=off   # Async WAL writes
      - -c
      - wal_level=minimal        # Minimal WAL overhead
      - -c
      - max_wal_senders=0        # Required for wal_level=minimal, otherwise startup fails
      - -c
      - full_page_writes=off     # Less WAL volume
      - -c
      - autovacuum=off           # No background vacuum
      - -c
      - max_wal_size=1GB         # Fewer checkpoints
      - -c
      - shared_buffers=256MB     # More memory cache
Even with all these aggressive settings, tmpfs was still faster.
| Feature / Aspect | Disk‑based (even with fsync=off) | tmpfs‑based |
|---|---|---|
| File‑system overhead – ext4/xfs metadata ops | ❌ | ✅ |
| Disk seeks – mechanical latency / limited IOPS | ❌ | ✅ |
| Kernel buffer cache – memory copies | ❌ | ✅ |
| Docker overlay2 – storage-driver overhead | ❌ | ✅ |
| Configuration complexity (7+ settings) | ❌ | ✅ |
| Pure RAM operations – no physical storage | ❌ | ✅ |
| Zero disk I/O | ❌ | ✅ |
| Maximum performance – nothing faster than RAM | ❌ | ✅ |
Bonus: Other PostgreSQL CI Optimizations to Consider
If you’re still looking for more speed improvements, try the following tweaks.
Disable Query Logging
command:
  - postgres
  - -c
  - log_statement=none               # Don't log any statements
  - -c
  - log_min_duration_statement=-1    # Don't log slow queries
Additional Tweaks
- fsync=off in postgresql.conf – disables synchronous writes (similar to --nosync). Use only in temporary, non-persistent environments.
- Increase work_mem – gives more memory to each query, which can speed up complex operations in tests (see the sketch below).
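A minimal sketch of the work_mem tweak, assuming the same postgres service as above; 64MB is an arbitrary value, so size it to your heaviest test queries:

```sh
# Raise work_mem for the whole test instance and reload the config
# (ALTER SYSTEM cannot run inside a transaction, hence two separate psql calls):
docker compose exec postgres psql -U postgres -c "ALTER SYSTEM SET work_mem = '64MB'"
docker compose exec postgres psql -U postgres -c "SELECT pg_reload_conf()"
```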