AWS re:Invent 2025 - Advanced data modeling for Amazon ElastiCache (DAT438)
Source: Dev.to
Overview
In this session, Yaron and Kevin McGehee from the ElastiCache team demonstrate advanced data‑modeling techniques using Amazon ElastiCache and Valkey to build a massively multiplayer online role‑playing game (MMORPG). Topics include:
- Caching strategies – lazy loading with DynamoDB triggers, thundering‑herd protection with locks, client‑side caching with invalidation subscriptions.
- Valkey data structures – Hash and JSON for session storage, HyperLogLog for unique‑user counting (sub‑1 % error in 12 KB), Bloom filters for membership testing (≈ 90 % memory savings), geospatial commands for location queries, Pub/Sub for real‑time chat.
- Semantic caching with vector search (≈ 95 % recall).
- Rate‑limiting implementations using simple counters and token‑bucket algorithms via Lua scripts (a sketch follows this list).
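To make the rate‑limiting item concrete, here is a minimal token‑bucket sketch. It is written in Python with the redis‑py client purely as a stand‑in (Valkey is wire‑compatible with the Redis protocol; the session's own examples use Valkey Glide), and the key name, capacity, and refill rate are illustrative assumptions rather than values from the talk.

```python
import time
import redis  # stand-in client; Valkey speaks the Redis protocol

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Token bucket as one atomic Lua script.
# KEYS[1] = bucket key, ARGV[1] = capacity, ARGV[2] = refill rate (tokens/sec), ARGV[3] = now (sec)
TOKEN_BUCKET = """
local bucket   = redis.call('HMGET', KEYS[1], 'tokens', 'ts')
local capacity = tonumber(ARGV[1])
local rate     = tonumber(ARGV[2])
local now      = tonumber(ARGV[3])
local tokens   = tonumber(bucket[1]) or capacity
local ts       = tonumber(bucket[2]) or now
-- Refill in proportion to elapsed time, capped at capacity
tokens = math.min(capacity, tokens + (now - ts) * rate)
local allowed = 0
if tokens >= 1 then
    tokens = tokens - 1
    allowed = 1
end
redis.call('HSET', KEYS[1], 'tokens', tokens, 'ts', now)
redis.call('EXPIRE', KEYS[1], math.ceil(capacity / rate) * 2)
return allowed
"""

def allow_request(player_id: str, capacity: int = 10, rate: float = 5.0) -> bool:
    """Return True if the player's action fits within the rate limit."""
    key = f"ratelimit:{player_id}"  # hypothetical key naming
    return bool(r.eval(TOKEN_BUCKET, 1, key, capacity, rate, time.time()))

print(allow_request("player:42"))
```

Running the whole check as a single script keeps the read‑modify‑write atomic, so two concurrent requests cannot both spend the last token.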
All concepts are illustrated with practical code examples using Valkey Glide.
Introduction to Amazon ElastiCache and Valkey: Building a Scalable MMORPG
“Hello everyone, thank you for joining our session today. My name is Yaron, Senior Engineering Manager, and with me is Kevin McGehee, Principal Engineer, both from the ElastiCache team.”
What is Amazon ElastiCache?
Amazon ElastiCache is a fully managed, in‑memory data‑store service that delivers microsecond‑level response times. It supports three open‑source engines:
- Redis (open source)
- Memcached (open source)
- Valkey – a high‑performance key‑value store derived from Redis after the Redis license change.
Both ElastiCache and MemoryDB now support the Valkey engine, which we’ll use for all examples in this session.
Why an MMORPG?
A massively multiplayer online game demands:
- Extremely low latency and high throughput
- Scalable data models for player sessions, leaderboards, real‑time chat, and location‑based queries
Using Valkey lets us explore a variety of data structures and patterns that meet these requirements; the sketch below gives a first taste.
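The sketch models a player session as a Hash and a leaderboard as a Sorted Set. It is a minimal Python illustration using redis‑py (Valkey is wire‑compatible with the Redis protocol; the session's examples use Valkey Glide), and the key names and scores are invented for the example.

```python
import redis  # stand-in client; the commands map one-to-one to Valkey

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Player session as a Hash: field-level reads and writes, no need to
# rewrite the whole session object on every update.
r.hset("session:player:42", mapping={"zone": "ashenvale", "hp": 880, "party": "raid-7"})
r.hset("session:player:42", "hp", 845)          # update a single field
print(r.hgetall("session:player:42"))

# Leaderboard as a Sorted Set: score updates and rank queries stay O(log N).
r.zadd("leaderboard:weekly", {"player:42": 15300, "player:7": 18950, "player:99": 12040})
print(r.zrevrange("leaderboard:weekly", 0, 2, withscores=True))   # top 3, highest first
print(r.zrevrank("leaderboard:weekly", "player:42"))              # this player's rank (0-based)
```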
Caching Fundamentals
Traditional Architecture
- Application runs on EC2 instances.
- Persistent data lives in a relational database (e.g., Amazon RDS).
When read/write traffic grows, options include:
- Scaling up RDS or adding read replicas.
- Using RDS’s internal page‑cache (still incurs disk I/O latency).
Moving to In‑Memory Caching
Amazon ElastiCache stores data entirely in memory, enabling:
- Microsecond read latency.
- Offloading read traffic from the primary database (see the lazy‑loading sketch below).
- Cost optimization by allowing the backend to scale down.
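The read path that makes this offloading work is lazy loading (cache‑aside): check the cache first, and only on a miss query the database and repopulate the cache with a TTL. Here is a minimal Python sketch using redis‑py against the cache endpoint (the talk's code uses Valkey Glide); load_profile_from_rds is a hypothetical stand‑in for the real database query.

```python
import json
import redis  # stand-in client; Valkey is wire-compatible with the Redis protocol

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def get_player_profile(player_id: str) -> dict:
    """Lazy loading: serve from memory, fall back to the database on a miss."""
    key = f"profile:{player_id}"           # hypothetical key naming
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit: in-memory, microsecond-scale read

    profile = load_profile_from_rds(player_id)      # miss: one trip to the primary database
    cache.set(key, json.dumps(profile), ex=300)     # repopulate with a TTL so stale data ages out
    return profile

def load_profile_from_rds(player_id: str) -> dict:
    # Placeholder for the real RDS (or DynamoDB) query.
    return {"id": player_id, "level": 12, "guild": "night-watch"}
```

Only keys that are actually read end up in the cache, which keeps lazy loading simple; the trade‑off is that the first read after a miss or expiry still pays the database round trip.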
The session continues with a deep dive into lazy‑loading patterns, thundering‑herd mitigation, and advanced Valkey data‑modeling techniques.
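As a preview of that thundering‑herd mitigation, one common approach is to let only a single caller rebuild an expired key: on a miss, whoever wins a short‑lived SET NX EX lock loads from the database, while everyone else waits briefly and re‑reads the cache. The sketch below is a hedged Python illustration with redis‑py; the key names, timeouts, and the loader callable are assumptions, not code from the session.

```python
import time
import redis  # stand-in client; Valkey is wire-compatible with the Redis protocol

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def get_with_herd_protection(key: str, loader, ttl: int = 300, lock_ttl: int = 10):
    """Cache-aside read where only one caller rebuilds an expired key."""
    value = cache.get(key)
    if value is not None:
        return value                          # cache hit: nothing to protect

    # SET NX EX doubles as a lock: only the first caller on a miss acquires it.
    if cache.set(f"lock:{key}", "1", nx=True, ex=lock_ttl):
        try:
            value = loader()                  # single trip to the backing database
            cache.set(key, value, ex=ttl)     # repopulate for everyone else
        finally:
            cache.delete(f"lock:{key}")
        return value

    # Lost the race: wait briefly for the winner to repopulate, then re-read.
    for _ in range(20):                       # bounded wait (~1 second total)
        time.sleep(0.05)
        value = cache.get(key)
        if value is not None:
            return value
    return loader()                           # safety valve if the cache never filled
```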