Why Your PostgreSQL Keeps Running Out of Connections
Source: Dev.to
Understanding the connection limit
PostgreSQL enforces a maximum number of concurrent connections, defined by the max_connections setting. Typical defaults are:
- Small/basic tiers: 50–100 connections
- General‑purpose tiers: 100–200 connections
When an application tries to open more connections than this limit, PostgreSQL rejects the attempt with `FATAL: sorry, too many connections`, which client drivers surface as an exception (asyncpg, for example, raises `TooManyConnectionsError`). The database itself is still up; it simply has no room for new connections.
Why a single pool isn’t enough
Using a connection pool is not automatically enough: if your code creates multiple pools, they can exhaust the limit just as quickly.
What happens in a typical application
```python
class DatabaseClient:
    def __init__(self):
        self.pool = create_pool(min=2, max=10)  # each instance gets its own pool
```
If your service spawns several components that each instantiate DatabaseClient:
| Component | Pools created | Max connections |
|---|---|---|
| Health checker | 1 | 10 |
| Dependency checker | 1 | 10 |
| Workers (e.g., 6) | 6 | 60 (10 each) |
| **Total** | **8** | **80** |
Running two containers would double that to 160 connections, easily exceeding a max_connections of 100 and triggering the error.
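The arithmetic above can be checked with a quick sketch. The `StubPool` below is a stand-in for a real pool factory (it only records the size limits), since no actual database is needed to count worst-case connections:

```python
class StubPool:
    """Stand-in for a real connection pool; only records its size limits."""
    def __init__(self, min, max):
        self.min, self.max = min, max

def create_pool(min, max):
    return StubPool(min, max)

class DatabaseClient:
    def __init__(self):
        self.pool = create_pool(min=2, max=10)  # each instance gets its own pool

# 1 health checker + 1 dependency checker + 6 workers = 8 clients
clients = [DatabaseClient() for _ in range(8)]
worst_case = sum(c.pool.max for c in clients)
print(worst_case)  # 80 per container; two containers -> 160, over a 100 limit
```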
The fix: a single shared pool
Create one pool at the module or process level and let every DatabaseClient instance borrow from it.
```python
# shared_pool.py
_shared_pool = None

def get_shared_pool():
    global _shared_pool
    if _shared_pool is None:
        _shared_pool = create_pool(min=2, max=5)  # configure as needed
    return _shared_pool
```

```python
# client.py
from shared_pool import get_shared_pool

class DatabaseClient:
    def get_connection(self):
        return get_shared_pool().acquire()
```
Now, regardless of how many DatabaseClient objects you create, they all share the same limited set of connections.
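Here is a self-contained sketch of the pattern that you can actually run. The `StubPool` and `create_pool` names are stand-ins for whatever pool library you use (asyncpg, psycopg_pool, etc.); the point is that every client resolves to the same pool object, with the size read from an environment variable as suggested below:

```python
import os

class StubPool:
    """Stand-in for a real asyncpg/psycopg pool."""
    def __init__(self, min, max):
        self.min, self.max = min, max
    def acquire(self):
        return "connection"  # a real pool would hand out an actual connection

def create_pool(min, max):
    return StubPool(min, max)

_shared_pool = None

def get_shared_pool():
    global _shared_pool
    if _shared_pool is None:
        max_size = int(os.environ.get("PG_POOL_MAX_SIZE", "5"))
        _shared_pool = create_pool(min=2, max=max_size)
    return _shared_pool

class DatabaseClient:
    def get_connection(self):
        return get_shared_pool().acquire()

a, b = DatabaseClient(), DatabaseClient()
assert get_shared_pool() is get_shared_pool()  # one pool, however many clients
```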
Gotchas to watch out for
- **The `close()` trap.** If a client's `close()` method shuts down the shared pool, any subsequent call will fail. Make `close()` a no-op on individual instances and only close the pool when the whole process exits.
- **Cascade effect on health checks.** When the database runs out of connections, health-check queries also fail. Orchestrators may restart containers, which creates new pools and worsens the problem.
- **Configurable pool size.** Use an environment variable (e.g., `PG_POOL_MAX_SIZE=5`) so you can adjust the pool without redeploying.
- **Do the napkin math before deployment.** Check that `pool_max_size × max_replicas < max_connections − admin_headroom`. Example:
  - Pool max size: 5
  - Total replicas across services: 10 → 5 × 10 = 50
  - `max_connections`: 100
  - Admin headroom: 20 → 100 − 20 = 80

  Since 50 < 80, the configuration is safe. If the inequality fails, reduce pool sizes or introduce a dedicated pooler such as PgBouncer.
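The napkin math is easy to encode as a pre-deployment check. A minimal sketch (the function name is mine, not from any library):

```python
def pool_config_is_safe(pool_max_size, total_replicas, max_connections, admin_headroom):
    """Return True if worst-case pool usage fits under the server limit."""
    worst_case = pool_max_size * total_replicas
    budget = max_connections - admin_headroom
    return worst_case < budget

# The example above: 5 * 10 = 50, which is under 100 - 20 = 80
print(pool_config_is_safe(5, 10, 100, 20))   # True
print(pool_config_is_safe(10, 10, 100, 20))  # False: 100 is not < 80
```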
Checklist before pushing changes
- ✅ Use a single shared pool at module/process level.
- ✅ Make pool size configurable via an environment variable.
- ✅ Ensure `close()` on individual instances does nothing.
- ✅ Close the shared pool only on process shutdown.
- ✅ Verify the connection‑count math for all services.
- ✅ Run syntax and unit tests to confirm the code works.
TL;DR
- Many pools = connection exhaustion.
- One shared pool = stable, predictable usage.
Understanding that the problem is usually too many pools—not the absence of a pool—turns a 1 AM panic into a manageable situation. Hope this helps anyone who’s just been paged for a “database ran out of connections.” 🙂