Your Microservices Aren’t Scalable. Your Database Is Just Crying.

Published: February 4, 2026 at 04:24 PM EST
3 min read
Source: Dev.to

The hidden bottleneck

We initially blamed everything except the obvious culprit: “Maybe we need more replicas.”
The truth was simpler and more uncomfortable: our microservices weren’t the problem.

Microservices sell a seductive idea—scale each part of your system independently. In practice, most teams do this:

  1. Split the app into 10–20 services.
  2. Point all of them at the same database.
  3. Call it “microservices architecture.”

The result is a distributed monolith with network latency. Each service may scale horizontally, but every single one still funnels its traffic into the same bottleneck. When load increases, the database sees chaos, not microservices.
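
In code, the anti‑pattern is painfully mundane. A minimal sketch (the service names, DSN, and pool size here are invented for illustration):

```python
# What "microservices" too often means in practice: every service
# opens its own connection pool against the same shared database.
import sqlalchemy

SHARED_DSN = "postgresql://app:secret@db.internal:5432/everything"  # hypothetical

# orders-service, billing-service, inventory-service... each one does this:
engine = sqlalchemy.create_engine(SHARED_DSN, pool_size=10)

# Multiply by 10-20 services and N pods each, and the "independent
# scaling" story quietly collapses into one overloaded database.
```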

Symptoms

  • Latency spikes without errors or crashes.
  • More connections, read‑replica lag, maxed‑out connection pools.
  • Locks piling up in places no one monitors.

Typical changes that look harmless in isolation become disastrous together:

| Service | Change            | Additional queries |
|---------|-------------------|--------------------|
| A       | New endpoint      | +3 queries         |
| B       | “Just a join”     | +2 queries         |
| C       | Polling every 5 s |                    |

Scaling effects

Scaling a service from 2 pods to 20 pods doesn’t just multiply throughput; it multiplies:

  • Open connections
  • Idle transactions
  • Concurrent writes
  • Cache misses
  • Lock contention

The database treats each pod as a new stranger aggressively asking for attention, even though dashboards may show “service latency looks fine.”
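
The arithmetic is unforgiving. A back‑of‑the‑envelope sketch, where every constant is made up but the shape of the multiplication is the point:

```python
# Rough connection math for a shared database. Numbers are illustrative.
services = 15          # services pointing at the same database
pods_per_service = 20  # after a routine scale-out
pool_size = 10         # connections held per pod

total_connections = services * pods_per_service * pool_size
print(total_connections)  # 3000, versus a default Postgres max_connections of 100
```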

The caching temptation

Most teams add:

  • Redis for reads
  • Some HTTP caching with an emotionally chosen TTL

Caching makes the system faster… until it isn’t, because:

  • Writes still hit the same database
  • Cache invalidation becomes messy quickly
  • Cross‑service data consistency turns into a guessing game
  • Operational complexity rises without removing coupling

Caching is a painkiller, not a cure.
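
For the curious, the painkiller usually looks something like this: a cache‑aside sketch using redis‑py, where `load_user_from_db` and `write_user_to_db` are hypothetical stand‑ins for your data layer:

```python
import json

import redis

r = redis.Redis(host="localhost", port=6379)
TTL_SECONDS = 300  # the emotionally chosen TTL

def get_user(user_id: int) -> dict:
    """Cache-aside read: fast while the cache is warm."""
    cached = r.get(f"user:{user_id}")
    if cached is not None:
        return json.loads(cached)
    user = load_user_from_db(user_id)  # hypothetical call; the shared DB is still here
    r.setex(f"user:{user_id}", TTL_SECONDS, json.dumps(user))
    return user

def update_user(user_id: int, fields: dict) -> None:
    write_user_to_db(user_id, fields)  # writes bypass the cache entirely
    r.delete(f"user:{user_id}")        # and every writer must remember this line
```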

What didn’t solve the problem

  • Bigger database instance
  • More replicas
  • Higher connection limits
  • Shouting “optimize queries” in stand‑ups

What did help

Each service owns its data. Period.

If another service needs that data, it must:

  • Call an API, or
  • Consume an event, or
  • Read from a purpose‑built read model

No “just this one join across services.” That pattern reignites the database pain.
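
Concretely, the consuming service asks the owner instead of joining its tables. A sketch with invented internal endpoints:

```python
import requests

# Hypothetical internal APIs owned by other services.
ORDERS_API = "http://orders-service.internal/api"
CUSTOMERS_API = "http://customers-service.internal/api"

def get_order_with_customer(order_id: int) -> dict:
    """Replaces a cross-service SQL join with two owned-data lookups."""
    order = requests.get(f"{ORDERS_API}/orders/{order_id}", timeout=2).json()
    customer = requests.get(
        f"{CUSTOMERS_API}/customers/{order['customer_id']}", timeout=2
    ).json()
    return {"order": order, "customer": customer}
```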

We replaced synchronous dependencies with:

  • Events
  • Async workflows
  • Eventually consistent updates

Not everything needs to be instant; most systems just need to be reliable.
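
The shape of that event flow, with the broker abstracted away (an in‑process queue stands in for Kafka, RabbitMQ, or whatever bus you run; event and field names are invented):

```python
import queue

# Stand-in for a real message broker: the pattern matters, not the infra.
event_bus: "queue.Queue[dict]" = queue.Queue()

# customers-service publishes after committing its own write.
def on_customer_updated(customer_id: int, email: str) -> None:
    event_bus.put({"type": "customer.updated", "id": customer_id, "email": email})

# orders-service maintains its own read model from the stream:
# eventually consistent, and no cross-service join in sight.
orders_customer_cache: dict = {}

def consume_one() -> None:
    event = event_bus.get()
    if event["type"] == "customer.updated":
        orders_customer_cache[event["id"]] = event["email"]
```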

A shift in mindset

Instead of asking, “Can this service scale?” we ask, “What does this do to the database at 10× traffic?” That single question reshaped our architecture reviews.

Microservices don’t automatically give you scalability. They give you options—at the cost of discipline. Without strict boundaries, they amplify database problems instead of solving them.

Takeaways

  • Own your data. Each service should have its own schema or database.
  • Avoid shared tables across services; they create hidden coupling.
  • Design for traffic patterns. Understand how scaling pods multiplies database load.
  • Prefer asynchronous communication over synchronous joins.
  • Monitor database health (connections, replication lag, lock contention) as a first‑class concern; see the sketch after this list.
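
If you run Postgres, a starting point for that monitoring might look like this: a sketch using psycopg2 against pg_stat_activity and pg_locks, where the DSN is a placeholder and, in practice, these numbers would feed dashboards and alerts:

```python
import psycopg2

DSN = "postgresql://monitor@db.internal:5432/appdb"  # placeholder

HEALTH_QUERIES = {
    "open_connections": "SELECT count(*) FROM pg_stat_activity;",
    "idle_in_transaction": (
        "SELECT count(*) FROM pg_stat_activity WHERE state = 'idle in transaction';"
    ),
    "waiting_on_locks": "SELECT count(*) FROM pg_locks WHERE NOT granted;",
}

def snapshot() -> dict:
    """One-shot database health snapshot."""
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        results = {}
        for name, sql in HEALTH_QUERIES.items():
            cur.execute(sql)
            results[name] = cur.fetchone()[0]
        return results
```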

If your system slows down every time traffic increases, don’t just look at your services. Look at:

  1. Who owns the data?
  2. How many services touch the same tables?
  3. How does scaling pods multiply database load?
  4. Does your architecture match your traffic patterns?

Because nine times out of ten, when “microservices don’t scale”… they actually do. Your database is just crying for help.
