Using Redis to Optimize Backend Queries
Source: Dev.to
The Original Approach (The Comfortable One)
The endpoint logic was simple:
- Query database
- Sort users by score
- Return top 10
```sql
SELECT * FROM users ORDER BY score DESC LIMIT 10;
```
With proper indexing it worked fine at small scale, but leaderboards are:
- Frequently accessed
- Frequently updated
- Competitive, real‑time data
Hitting the database on every request quickly became a concern.
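Stripped of the SQL, the endpoint logic is just a sort and a slice. A plain-JavaScript sketch of the same idea (the data shape is illustrative, not from the original code):

```javascript
// In-memory equivalent of ORDER BY score DESC LIMIT 10
function topN(users, n = 10) {
  return [...users]                      // copy so the input isn't mutated
    .sort((a, b) => b.score - a.score)   // highest score first
    .slice(0, n);                        // keep the top n
}
```

Cheap enough in memory, but doing the equivalent sort in the database on every request is what became the concern.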
First Attempt: Let’s Use Redis
Redis seemed perfect: in‑memory, fast, built for ranking.
However, starting it locally produced an error:
```
Error: Address already in use
```
Port 6379 was already occupied.
After trying to restart services and kill processes without success, I decided to isolate Redis properly.
The Fix: Dockerizing Redis
```bash
docker run -d -p 6379:6379 --name redis-server redis
```
Running Redis in a container made it:
- Isolated
- Portable
- Running cleanly, free of port conflicts
- Easy to restart
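If the container becomes a permanent part of the dev setup, a small Compose file makes it reproducible (this layout is an assumption, not from the original post; the volume name is just an example):

```yaml
# docker-compose.yml — sketch of the same container as a Compose service
services:
  redis:
    image: redis:7
    container_name: redis-server
    ports:
      - "6379:6379"
    volumes:
      - redis-data:/data   # persist the dataset across restarts

volumes:
  redis-data:
```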
With the environment fixed, I could move forward.
Enter Sorted Sets (ZSET)
Redis Sorted Sets automatically keep members ordered by score.
- Member → user ID
- Score → points
This eliminated the need for SQL sorting and heavy DB reads.
Updating a user’s score
```javascript
await redis.zadd("leaderboard", score, userId);
```
Fetching the top 10
```javascript
await redis.zrevrange("leaderboard", 0, 9, "WITHSCORES");
```
The ranking logic now lived entirely in memory, and latency improved immediately.
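One wrinkle worth noting: with ioredis, `ZREVRANGE ... WITHSCORES` resolves to a flat array alternating member and score, so a small helper (the name is mine) is useful for pairing them back up:

```javascript
// redis.zrevrange("leaderboard", 0, 9, "WITHSCORES") resolves to something like
// ["alice", "300", "bob", "250", ...] — member and score interleaved as strings.
function parseWithScores(flat) {
  const entries = [];
  for (let i = 0; i < flat.length; i += 2) {
    entries.push({ userId: flat[i], score: Number(flat[i + 1]) });
  }
  return entries;
}

// parseWithScores(["alice", "300", "bob", "250"])
// → [{ userId: "alice", score: 300 }, { userId: "bob", score: 250 }]
```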
The Hidden Bottleneck I Didn’t See Coming
After retrieving the top‑10 user IDs, I needed additional user details (username, avatar, etc.):
```javascript
for (const userId of topUsers) {
  const user = await redis.hgetall(`user:${userId}`); // one round trip per user
}
```
This introduced an N+1 problem in Redis:
- 1 request → fetch leaderboard
- 10 requests → fetch each user
Result: 11 network round trips, adding ~100 ms.
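The arithmetic behind that number is simple. Assuming a ~9 ms network round trip (an illustrative figure, not a benchmark), the sequential version pays for every single call:

```javascript
// Back-of-envelope round-trip cost — rttMs is an assumption, not a measurement
const rttMs = 9;

const sequentialTrips = 1 + 10; // ZREVRANGE + one HGETALL per user
const pipelinedTrips = 1 + 1;   // ZREVRANGE + one pipelined batch

console.log(sequentialTrips * rttMs); // 99 — roughly the ~100 ms observed
console.log(pipelinedTrips * rttMs);  // 18
```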
The Real Fix: Redis Pipelining
Redis pipelining batches commands, reducing round trips.
```javascript
const pipeline = redis.pipeline();

for (const userId of topUsers) {
  pipeline.hgetall(`user:${userId}`); // queued, not sent yet
}

const users = await pipeline.exec(); // all queued commands in one round trip
```
Now only one network round trip is needed, eliminating the N+1 latency.
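One detail worth knowing: in ioredis, `pipeline.exec()` resolves to an array of `[err, result]` pairs, one per queued command, so the results still need unwrapping (the helper name is mine):

```javascript
// pipeline.exec() in ioredis → [[err1, res1], [err2, res2], ...]
function unwrapPipelineResults(replies) {
  return replies.map(([err, result]) => {
    if (err) throw err; // surface the first per-command failure
    return result;
  });
}

// unwrapPipelineResults([[null, { name: "alice" }], [null, { name: "bob" }]])
// → [{ name: "alice" }, { name: "bob" }]
```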
The Results
| Stage | Latency |
|---|---|
| DB sorting | ~200 ms |
| Redis (no pipeline) | ~120 ms |
| Redis + pipeline | ~20 ms |
A full 10× improvement, primarily from cutting network calls.
What This Taught Me
- Infrastructure problems come first – if Redis isn’t running cleanly, nothing else matters.
- Data structures matter – ZSET removed repeated sorting entirely.
- N+1 problems aren’t just database issues – they can appear with any remote system.
- Network latency is invisible but expensive – even “fast” systems become slow when called too often.
- Docker simplifies backend life – containerizing dependencies avoids OS‑level conflicts.
Final Architecture
- Score update → ZADD
- Fetch top 10 → ZREVRANGE
- Batch fetch user data → pipeline + EXEC
- Return response
No database hits, fully in‑memory, minimal network calls, ~20 ms response time.
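Put together, the whole read path fits in one function. This is a sketch, not the original code: it takes an ioredis-style client as a parameter (all names here are mine) so the Redis dependency stays at the edge:

```javascript
// Full read path: top-n IDs from the ZSET, then one pipelined batch of HGETALLs.
// `redis` is assumed to be an ioredis-compatible client.
async function getLeaderboard(redis, n = 10) {
  // 1. Top-n user IDs, highest score first
  const ids = await redis.zrevrange("leaderboard", 0, n - 1);

  // 2. Batch the detail lookups into a single round trip
  const pipeline = redis.pipeline();
  for (const id of ids) pipeline.hgetall(`user:${id}`);
  const replies = await pipeline.exec(); // [[err, result], ...]

  // 3. Merge rank, ID, and hash fields into the response shape
  return replies.map(([err, user], i) => {
    if (err) throw err;
    return { rank: i + 1, userId: ids[i], ...user };
  });
}
```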
Closing Thought
Optimization isn’t about throwing tools at a problem; it’s about identifying where time is actually spent. In this case, the biggest gains came from:
- Fixing the environment
- Choosing the right data structure
- Reducing network round trips
Addressing those made all the difference.