Why Your PostgreSQL Keeps Running Out of Connections

Published: (March 17, 2026 at 01:31 PM EDT)
3 min read
Source: Dev.to

Understanding the connection limit

PostgreSQL enforces a maximum number of concurrent connections, defined by the max_connections setting. Typical defaults are:

  • Small/basic tiers: 50–100 connections
  • General‑purpose tiers: 100–200 connections

When an application tries to open more connections than this limit, PostgreSQL rejects the new connection with "FATAL: sorry, too many connections" (which drivers such as asyncpg surface as a TooManyConnectionsError). The database itself is still up; it simply has no room for new sessions.

Why a single pool isn’t enough

Pooling alone doesn't protect you: if each component creates its own pool, the pools together can quickly exhaust the limit.

What happens in a typical application

class DatabaseClient:
    def __init__(self):
        self.pool = create_pool(min=2, max=10)   # each instance gets its own pool

If your service spawns several components that each instantiate DatabaseClient:

Component               Pools created   Connections per pool
Health checker          1               10
Dependency checker      1               10
Each worker (e.g., 6)   6               10 each (60 total)
Total                   8               80

Running two containers would double that to 160 connections, easily exceeding a max_connections of 100 and triggering the error.
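The arithmetic above is worth sanity-checking in code. A quick sketch, using the component names and counts from the table (create_pool in the article is a placeholder, so this just does the math):

```python
# (number of pools, max connections per pool) for each component
pools = {
    "health_checker": (1, 10),
    "dependency_checker": (1, 10),
    "workers": (6, 10),           # six workers, one pool each
}

per_container = sum(count * size for count, size in pools.values())
print(per_container)              # 80 connections per container

containers = 2
total = per_container * containers
print(total)                      # 160
print(total > 100)                # True: exceeds max_connections = 100
```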

The fix: a single shared pool

Create one pool at the module or process level and let every DatabaseClient instance borrow from it.

# shared_pool.py
_shared_pool = None

def get_shared_pool():
    global _shared_pool
    if _shared_pool is None:
        _shared_pool = create_pool(min=2, max=5)   # configure as needed
    return _shared_pool

# client.py
from shared_pool import get_shared_pool

class DatabaseClient:
    def get_connection(self):
        return get_shared_pool().acquire()

Now, regardless of how many DatabaseClient objects you create, they all share the same limited set of connections.
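Here is a minimal runnable version of the same pattern, with a stub pool object standing in for a real driver pool (StubPool is an assumption for illustration, not a real API). One detail worth adding over the sketch above: a threading.Lock so two threads can't race the lazy initialization and create two pools.

```python
import threading

class StubPool:
    """Stand-in for a real driver pool (e.g., asyncpg or psycopg_pool)."""
    def __init__(self, min_size, max_size):
        self.min_size = min_size
        self.max_size = max_size

_shared_pool = None
_pool_lock = threading.Lock()

def get_shared_pool():
    global _shared_pool
    if _shared_pool is None:
        with _pool_lock:              # guard against two threads racing the init
            if _shared_pool is None:
                _shared_pool = StubPool(min_size=2, max_size=5)
    return _shared_pool

class DatabaseClient:
    def pool(self):
        return get_shared_pool()      # borrow, never create

# Every client instance borrows the same pool object.
a, b = DatabaseClient(), DatabaseClient()
print(a.pool() is b.pool())           # True
```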

Gotchas to watch out for

  1. The close() trap
    If a client’s close() method shuts down the shared pool, any subsequent call will fail. Make close() a no‑op on individual instances and only close the pool when the whole process exits.

  2. Cascade effect on health checks
    When the database runs out of connections, health‑check queries also fail. Orchestrators may restart containers, which creates new pools and worsens the problem.

  3. Configurable pool size
    Use an environment variable (e.g., PG_POOL_MAX_SIZE=5) so you can adjust the pool without redeploying.

  4. Do the napkin math before deployment

    pool_max_size × max_replicas < max_connections - admin_headroom

    Example

    • Pool max size: 5
    • Total replicas across services: 10 → 5 × 10 = 50
    • max_connections: 100
    • Admin headroom: 20 → 100 − 20 = 80

    Since 50 < 80, the configuration is safe. If the inequality fails, reduce pool sizes or introduce a dedicated pooler such as PgBouncer.
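The inequality in step 4 is easy to turn into a pre-deploy check. A sketch, reading the pool size from the environment variable suggested in step 3 and plugging in the example numbers (the function name is hypothetical):

```python
import os

def pool_config_is_safe(pool_max_size, max_replicas,
                        max_connections, admin_headroom):
    """True when every replica's pool can fill up and admins can still connect."""
    return pool_max_size * max_replicas < max_connections - admin_headroom

# Pool size is configurable without a redeploy, per step 3.
pool_max_size = int(os.environ.get("PG_POOL_MAX_SIZE", "5"))

print(pool_config_is_safe(5, max_replicas=10,
                          max_connections=100, admin_headroom=20))  # True: 50 < 80
print(pool_config_is_safe(10, 10, 100, 20))                        # False: 100 >= 80
```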

Checklist before pushing changes

  • ✅ Use a single shared pool at module/process level.
  • ✅ Make pool size configurable via an environment variable.
  • ✅ Ensure close() on individual instances does nothing.
  • ✅ Close the shared pool only on process shutdown.
  • ✅ Verify the connection‑count math for all services.
  • ✅ Run syntax and unit tests to confirm the code works.

TL;DR

  • Many pools = connection exhaustion.
  • One shared pool = stable, predictable usage.

Understanding that the problem is usually too many pools—not the absence of a pool—turns a 1 AM panic into a manageable situation. Hope this helps anyone who’s just been paged for a “database ran out of connections.” 🙂
