How Redis Cut My Database Reads from ~26K to Almost Zero

Published: February 8, 2026 at 12:09 PM EST
3 min read
Source: Dev.to

I used to hit Supabase on every single page load—blogs, individual posts, experiences, toolboxes, services, connections, profile info, role visibility, skills… basically my entire personal dashboard depended on direct database queries.

The result?

  • ~26,000 database reads per day
  • Slow responses
  • Unnecessary load
  • Occasional connection warnings

So I introduced Redis as a read‑through cache with a small pre‑warm script—and everything changed.

What I Cached

I focused on the hottest read‑heavy data:

  • Blogs → published list, per‑post data, combined blog payload
  • Experiences → active + full history
  • Toolboxes → all, software, hardware
  • Services → active + all
  • Connections → full list
  • Profile info → singleton record
  • Role visibility → sidebar & quick actions
  • Skills → full list + category variants

These were perfect cache candidates because they:

  • Change infrequently
  • Are read constantly
  • Don’t require real‑time consistency

How the Caching Works

1) Read‑Through Cache Pattern

Each GET endpoint wraps a helper:

getCached(key, fetcher, ttl = 300)

Flow

Request → Check Redis
        → Cache hit → return instantly
        → Cache miss → fetch from Supabase → store in Redis → return
  • Only the first request touches the DB
  • All other requests are served from Redis in milliseconds
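A minimal sketch of that helper, with an in-memory `Map` standing in for the Redis client (in the real app this would be something like ioredis) and a hypothetical fetcher:

```javascript
// Read-through cache sketch. A Map stands in for Redis so the example
// is self-contained; the key name and fetcher are illustrative.
const store = new Map();

async function getCached(key, fetcher, ttl = 300) {
  const hit = store.get(key);
  if (hit && hit.expires > Date.now()) return hit.value; // cache hit
  const value = await fetcher();                         // cache miss → DB
  store.set(key, { value, expires: Date.now() + ttl * 1000 });
  return value;
}

// Only the first call runs the fetcher; the second is served from cache.
let dbReads = 0;
const fetchBlogs = async () => { dbReads += 1; return ['post-1', 'post-2']; };

const first = await getCached('blogs:published', fetchBlogs);
const second = await getCached('blogs:published', fetchBlogs);
console.log(dbReads); // 1
```

The TTL acts as a safety net: even if an invalidation is ever missed, stale data ages out within five minutes.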

2) Smart Invalidation on Writes

Whenever data changes via POST, PUT, or DELETE, I call:

invalidateKeys([...])

This clears only the affected cache prefixes, keeping everything:

  • Fresh
  • Consistent
  • Fast
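A sketch of that prefix-based invalidation, again with a `Map` standing in for Redis; a production version would walk the real keyspace (e.g. with Redis `SCAN` plus `DEL`), and the key names here are illustrative:

```javascript
// Prefix-based invalidation sketch; a Map stands in for Redis.
const store = new Map([
  ['blogs:published', ['post-1']],
  ['blogs:post:42', { title: 'hello' }],
  ['services:active', ['consulting']],
]);

function invalidateKeys(prefixes) {
  let removed = 0;
  for (const key of [...store.keys()]) {
    if (prefixes.some((p) => key.startsWith(p))) {
      store.delete(key); // evict only the affected entries
      removed += 1;
    }
  }
  return removed;
}

// A blog write should evict blog keys and nothing else.
const removed = invalidateKeys(['blogs:']);
console.log(removed, store.size); // 2 1
```

Scoping invalidation to prefixes is what keeps writes cheap: updating a blog post never evicts services, skills, or profile data.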

3) Prewarming the Cache

To avoid cold‑start latency after deploys, I built a script:

scripts/prewarm-redis.mjs

It simply calls the public API endpoints—no DB credentials needed.

Run it like:

BASE_URL=http://localhost:3000 node scripts/prewarm-redis.mjs

Now Redis is fully populated before real users arrive.
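The script itself can be little more than a loop over the public GETs. This sketch uses hypothetical endpoint paths and takes the HTTP client as a parameter so it can be exercised without a running server:

```javascript
// Prewarm sketch: hit each public endpoint once so the read-through
// cache fills itself. Endpoint paths are illustrative; `get` defaults
// to the global fetch available in Node 18+.
async function prewarm(baseUrl, paths, get = fetch) {
  const results = [];
  for (const path of paths) {
    try {
      const res = await get(baseUrl + path);
      results.push({ path, ok: res.ok });
    } catch (err) {
      results.push({ path, ok: false, error: err.message });
    }
  }
  return results;
}

// Usage mirroring the command above:
// await prewarm(process.env.BASE_URL ?? 'http://localhost:3000',
//               ['/api/blogs', '/api/experiences', '/api/services']);
```

Because it only speaks HTTP, the same script works against localhost, staging, or production without any database credentials.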

4) Visibility & Health Monitoring

I added a Data tab UI showing:

  • Redis health status
  • Total cached items
  • Cached datasets overview

If Redis goes down, I know immediately.
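The check behind that tab can be a small function like the following. The `ping()`/`dbsize()` calls match what clients such as ioredis expose, but the client's shape here is an assumption:

```javascript
// Health summary sketch for the Data tab. `client` is assumed to expose
// ping() and dbsize(), as ioredis does; any error is reported as "down".
async function cacheHealth(client) {
  try {
    const pong = await client.ping();    // 'PONG' when Redis answers
    const items = await client.dbsize(); // total cached keys
    return { status: pong === 'PONG' ? 'healthy' : 'degraded', items };
  } catch {
    return { status: 'down', items: 0 };
  }
}

// With a stubbed healthy client:
const health = await cacheHealth({
  ping: async () => 'PONG',
  dbsize: async () => 12,
});
console.log(health); // { status: 'healthy', items: 12 }
```

Returning a plain status object keeps the UI dumb: the Data tab just renders whatever this function reports.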

The Results 📉

Before Redis

  • ~26K DB reads/day
  • Higher latency
  • Risk of max connection limits

After Redis

  • Cold start: one DB hit per dataset
  • Warm traffic: almost zero DB reads
  • Latency: single‑digit milliseconds
  • Stability: no connection pressure

In short: the database became a backup; Redis became the primary read layer.

What Made the Biggest Difference

  1. Cache the hottest reads – Lists and singleton data deliver massive ROI when cached.
  2. Keep TTL modest – I used 5 minutes to balance freshness and performance.
  3. Always prewarm on deploy – Removes the cold‑start penalty completely.
  4. Monitor cache health – Visibility prevents silent performance regressions.

How You Can Try This

  1. Configure Redis

    REDIS_URL=...
    # or host/port/user/password
  2. Start your app

  3. Run the prewarm script

  4. Open dashboard/blog pages

Watch:

  • DB metrics drop
  • Redis hit rate rise
  • Latency shrink

Final Thoughts

Adding Redis wasn’t just a performance optimization—it fundamentally changed how my app handles reads at scale.

From:

“Query the database every time”

to:

“Serve instantly from memory, and only hit DB when necessary.”

That single shift reduced 26K daily reads to nearly zero.

And the best part? It took less than a day to implement.
