Redis Caching with Claude Code: Cache-Aside, Write-Through, and TTL Strategy

Published: March 11, 2026 at 12:57 AM EDT
3 min read
Source: Dev.to


## Redis Cache Design Rules

### Patterns
- Cache-Aside: high-read/low-write data (user profiles, product catalog)
- Write-Through: data requiring strong consistency (balances, inventory)
- Pub/Sub: cache invalidation propagation (distributed environments)

### TTL (required)
- All caches must have TTL (no permanent caches)
- Master data: TTL 1 hour
- Sessions/profiles: TTL 15 minutes
- Aggregations: TTL 5 minutes

### Cache Keys
- Format: {service}:{entity}:{id}
- Examples: user:profile:123, product:detail:abc
- Versioning: user:v2:profile:123 (bump version on schema changes)
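The key format above is easy to centralize so call sites never hand-assemble keys. A minimal sketch, assuming a helper like the following (`buildCacheKey` and its signature are illustrative, not from the post):

```typescript
// Illustrative helper: centralizes the {service}:{entity}:{id} format,
// with an optional schema-version segment (e.g. "v2") for cache busting.
export function buildCacheKey(
  service: string,
  entity: string,
  id: string,
  version?: string
): string {
  const parts = version
    ? [service, version, entity, id] // e.g. user:v2:profile:123
    : [service, entity, id]; // e.g. user:profile:123
  return parts.join(':');
}
```

Bumping the `version` argument on a schema change makes old entries unreachable, so they simply age out via TTL instead of needing a bulk delete.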

### Monitoring
- Track cache hit ratio as a metric
- Design for cold start (unwarmed cache) scenarios
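Hit ratio boils down to two counters that feed whatever metrics backend is in use. A minimal in-process sketch (the `HitRatioTracker` class is a hypothetical illustration, not part of the generated code):

```typescript
// Hypothetical in-process hit-ratio counter; in production these counts
// would be exported to a metrics backend (Prometheus, StatsD, etc.).
export class HitRatioTracker {
  private hits = 0;
  private misses = 0;

  recordHit(): void { this.hits += 1; }
  recordMiss(): void { this.misses += 1; }

  // Returns hits / total lookups, or 0 before any lookups (cold start).
  ratio(): number {
    const total = this.hits + this.misses;
    return total === 0 ? 0 : this.hits / total;
  }
}
```

Defining the cold-start case explicitly (ratio 0 with no lookups) avoids a divide-by-zero and matches the "design for unwarmed cache" rule.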

## Cache-Aside Pattern

### Requirements

- Cache key: user:profile:{userId}
- TTL: 15 minutes
- On cache miss: fetch from DB and populate cache
- On profile update: invalidate cache
- Log cache hit/miss

### Generated Service (src/services/userCacheService.ts)

```typescript
// src/services/userCacheService.ts
import { createClient } from 'redis';
import { prisma } from '../prisma/client';
import { logger } from '../logger';

const redis = createClient({ url: process.env.REDIS_URL });
// node-redis v4 clients must be connected before use
redis.connect().catch((err) => logger.error({ err }, 'Redis connection failed'));

const CACHE_TTL = 15 * 60; // 15 minutes, in seconds
const KEY_PREFIX = 'user:profile';

// Shape of the cached profile, matching the Prisma select below
interface UserProfile {
  id: string;
  name: string;
  email: string;
  avatarUrl: string | null;
  updatedAt: Date;
}

function cacheKey(userId: string): string {
  return `${KEY_PREFIX}:${userId}`;
}

export async function getUserProfile(userId: string): Promise<UserProfile | null> {
  const key = cacheKey(userId);
  const cached = await redis.get(key);

  if (cached) {
    logger.debug({ userId, cacheHit: true }, 'User profile cache hit');
    // Note: Date fields round-trip as ISO strings through JSON
    return JSON.parse(cached) as UserProfile;
  }

  logger.debug({ userId, cacheHit: false }, 'User profile cache miss');

  const user = await prisma.user.findUnique({
    where: { id: userId },
    select: {
      id: true,
      name: true,
      email: true,
      avatarUrl: true,
      updatedAt: true,
    },
  });

  if (!user) return null;

  await redis.set(key, JSON.stringify(user), { EX: CACHE_TTL });
  return user;
}

export async function invalidateUserProfile(userId: string): Promise<void> {
  await redis.del(cacheKey(userId));
  logger.info({ userId }, 'User profile cache invalidated');
}
```

### Invalidate on Update

```typescript
// src/services/userService.ts
import { prisma } from '../prisma/client';
import { invalidateUserProfile } from './userCacheService';

export async function updateUserProfile(userId: string, data: UpdateProfileInput) {
  const updated = await prisma.user.update({ where: { id: userId }, data });
  await invalidateUserProfile(userId); // drop the stale entry; the next read repopulates it
  return updated;
}
```

## Write-Through Pattern

```typescript
// src/services/inventoryCacheService.ts
import { prisma } from '../prisma/client';
import { redis } from '../redis/client';
import { logger } from '../logger';

// Inventory is consistency-critical, so it uses Write-Through:
// the cache is refreshed immediately after the DB write succeeds.
export async function updateInventory(productId: string, quantity: number): Promise<void> {
  await prisma.$transaction(async (tx) => {
    await tx.inventory.update({ where: { productId }, data: { quantity } });
  });

  // After DB success, update the cache too (Write-Through)
  await redis.set(
    `inventory:${productId}`,
    JSON.stringify({ quantity, updatedAt: new Date() }),
    { EX: 5 * 60 } // 5-minute TTL
  );

  logger.debug({ productId, quantity }, 'Inventory cache updated (write-through)');
}
```

## Distributed Cache Invalidation (Pub/Sub)

```typescript
// src/cache/pubsub.ts
import { redis, redisPublisher as publisher, redisSubscriber as subscriber } from '../redis/client';
import { logger } from '../logger';

// Publish cache invalidation to other servers
export async function publishCacheInvalidation(channel: string, key: string): Promise<void> {
  await publisher.publish(channel, JSON.stringify({ key, timestamp: Date.now() }));
}

// Subscribe and act on invalidation messages.
// Note: a node-redis client in subscriber mode can't issue other
// commands, so the DEL goes through the regular `redis` client.
export async function subscribeCacheInvalidation(): Promise<void> {
  await subscriber.subscribe('cache:invalidate', async (message) => {
    const { key } = JSON.parse(message);
    await redis.del(key);
    logger.info({ key }, 'Cache invalidated via pub/sub');
  });
}
```

## Summary

- CLAUDE.md enforces TTL on all caches, a standard key format, and pattern-selection criteria.
- Cache-Aside reduces read load; invalidate on write to keep data fresh.
- Write-Through refreshes the cache immediately after each DB write for strong consistency.
- Pub/Sub propagates cache invalidation across distributed servers.
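One stampede risk the fixed TTLs above leave open: keys warmed at the same time all expire in the same second and hit the database together. A common mitigation is to jitter expirations; a minimal sketch, assuming a helper like this (`ttlWithJitter` is illustrative, not from the post):

```typescript
// Illustrative: add up to `spread` (default 10%) random extra seconds
// to a base TTL, so keys warmed together expire at different times
// instead of stampeding the database simultaneously.
export function ttlWithJitter(baseTtlSeconds: number, spread = 0.1): number {
  const jitter = Math.floor(Math.random() * baseTtlSeconds * spread);
  return baseTtlSeconds + jitter;
}

// Usage with the 15-minute profile TTL:
// await redis.set(key, value, { EX: ttlWithJitter(15 * 60) });
```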

For a deeper code review (including TTL gaps, stampede risks, and consistency checks), see the Code Review Pack: prompt-works.jp

Claude Code engineer focused on performance and caching.
