How Redis Cut My Database Reads from ~26K to Almost Zero
Source: Dev.to

I used to hit Supabase on every single page load—blogs, individual posts, experiences, toolboxes, services, connections, profile info, role visibility, skills… basically my entire personal dashboard depended on direct database queries.
The result?
- ~26,000 database reads per day
- Slow responses
- Unnecessary load
- Occasional connection warnings
So I introduced Redis as a read‑through cache with a small pre‑warm script—and everything changed.
What I Cached
I focused on the hottest read‑heavy data:
- Blogs → published list, per‑post data, combined blog payload
- Experiences → active + full history
- Toolboxes → all, software, hardware
- Services → active + all
- Connections → full list
- Profile info → singleton record
- Role visibility → sidebar & quick actions
- Skills → full list + category variants
These were perfect cache candidates because they:
- Change infrequently
- Are read constantly
- Don’t require real‑time consistency
How the Caching Works
1) Read‑Through Cache Pattern
Each GET endpoint wraps a helper:
getCached(key, fetcher, ttl = 300)
Flow
Request → Check Redis
→ Cache hit → return instantly
→ Cache miss → fetch from Supabase → store in Redis → return
- Only the first request touches the DB
- All other requests are served from Redis in milliseconds
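The flow above can be sketched as a small helper. This is a minimal sketch, not the author's actual code: the `redis` object here is a Map-backed stand-in with `get`/`set` so the example is self-contained; in a real app you would swap in a real client (e.g. ioredis) and a Supabase query as the fetcher.

```javascript
// Stand-in for a Redis client: a Map with TTL expiry (illustrative only).
const store = new Map();
const redis = {
  async get(key) {
    const entry = store.get(key);
    if (!entry || entry.expiresAt < Date.now()) return null;
    return entry.value;
  },
  async set(key, value, ttlSeconds) {
    store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  },
};

// Read-through cache: return the cached JSON if present, otherwise run the
// fetcher (e.g. a Supabase query), store the result, and return it.
async function getCached(key, fetcher, ttl = 300) {
  const hit = await redis.get(key);
  if (hit !== null) return JSON.parse(hit); // cache hit: DB never touched
  const fresh = await fetcher();            // cache miss: one DB read
  await redis.set(key, JSON.stringify(fresh), ttl);
  return fresh;
}
```

A GET endpoint would then call something like `getCached('blogs:published', () => fetchPublishedBlogs())`, where the key name and fetcher are illustrative.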
2) Smart Invalidation on Writes
Whenever data changes via POST, PUT, or DELETE, I call:
invalidateKeys([...])
This clears only the affected cache prefixes, keeping everything:
- Fresh
- Consistent
- Fast
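A prefix-based invalidation helper might look like the sketch below. With a real Redis client you would SCAN for matching keys and DEL them; here the same idea is shown against an in-memory Map stand-in, and all names are illustrative rather than the author's code.

```javascript
// Stand-in for the cache: key -> cached value.
const store = new Map();

// Delete every cached key that starts with one of the given prefixes,
// e.g. invalidateKeys(['blogs:']) after a blog POST/PUT/DELETE.
// Returns how many entries were removed.
function invalidateKeys(prefixes) {
  let removed = 0;
  for (const key of [...store.keys()]) {
    if (prefixes.some((prefix) => key.startsWith(prefix))) {
      store.delete(key);
      removed += 1;
    }
  }
  return removed;
}
```

Scoping deletes to a prefix is what keeps invalidation cheap: a blog update clears `blogs:*` while the skills, services, and profile caches stay warm.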
3) Prewarming the Cache
To avoid cold‑start latency after deploys, I built a script:
scripts/prewarm-redis.mjs
It simply calls the public API endpoints—no DB credentials needed.
Run it like:
BASE_URL=http://localhost:3000 node scripts/prewarm-redis.mjs
Now Redis is fully populated before real users arrive.
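A prewarm script in this spirit is just a loop of GETs against the public read endpoints, so each one populates its own cache entry. The endpoint list and the injectable `fetchImpl` parameter below are assumptions for illustration, not the contents of the real `scripts/prewarm-redis.mjs`.

```javascript
// Illustrative list of public read endpoints to warm.
const ENDPOINTS = [
  '/api/blogs',
  '/api/experiences',
  '/api/toolboxes',
  '/api/services',
  '/api/connections',
  '/api/profile',
  '/api/skills',
];

// Hit every endpoint once; each response is cached by the read-through
// helper on the server side. fetchImpl is injectable for testing.
async function prewarm(baseUrl, fetchImpl = fetch) {
  const results = await Promise.all(
    ENDPOINTS.map(async (path) => {
      const res = await fetchImpl(baseUrl + path);
      return { path, ok: res.ok };
    })
  );
  const failed = results.filter((r) => !r.ok);
  if (failed.length) console.error('prewarm failures:', failed);
  return results;
}

// Entry point, mirroring the BASE_URL convention from the article:
// prewarm(process.env.BASE_URL || 'http://localhost:3000');
```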
4) Visibility & Health Monitoring
I added a Data tab UI showing:
- Redis health status
- Total cached items
- Cached datasets overview
If Redis goes down, I know immediately.
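The data behind such a tab can come from two cheap Redis commands: PING for liveness and DBSIZE for the total key count (ioredis exposes both as `ping()` and `dbsize()`). The sketch below takes the client as a parameter so it runs without a live server; the function name and response shape are assumptions.

```javascript
// Build a small health summary from any client exposing ping() and dbsize().
async function cacheHealth(client) {
  try {
    const pong = await client.ping();        // "PONG" when Redis is reachable
    const totalKeys = await client.dbsize(); // total cached items
    return { status: pong === 'PONG' ? 'healthy' : 'degraded', totalKeys };
  } catch (err) {
    // Connection refused, timeout, etc.: surface it instead of hiding it.
    return { status: 'down', totalKeys: 0, error: String(err) };
  }
}
```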
The Results 📉
Before Redis
- ~26K DB reads/day
- Higher latency
- Risk of max connection limits
After Redis
- Cold start: one DB hit per dataset
- Warm traffic: almost zero DB reads
- Latency: single‑digit milliseconds
- Stability: no connection pressure
In short: the database became a backup; Redis became the primary read layer.
What Made the Biggest Difference
- Cache the hottest reads – Lists and singleton data deliver massive ROI when cached.
- Keep TTL modest – I used 5 minutes to balance freshness and performance.
- Always prewarm on deploy – Removes the cold‑start penalty completely.
- Monitor cache health – Visibility prevents silent performance regressions.
How You Can Try This
1) Configure Redis
REDIS_URL=... # or host/port/user/password
2) Start your app
3) Run the prewarm script
4) Open dashboard/blog pages
Watch:
- DB metrics drop
- Redis hit rate rise
- Latency shrink
Final Thoughts
Adding Redis wasn’t just a performance optimization—it fundamentally changed how my app handles reads at scale.
From:
“Query the database every time”
to:
“Serve instantly from memory, and only hit DB when necessary.”
That single shift reduced 26K daily reads to nearly zero.
And the best part? It took less than a day to implement.