Redis is the most versatile tool in a backend engineer’s toolkit. It functions as a cache, session store, rate limiter, pub/sub message broker, job queue, leaderboard engine, and distributed lock — all with sub-millisecond latency. Understanding Redis data structures and production patterns is what separates engineers who use Redis from engineers who use Redis well.
⚡ TL;DR: String for simple cache. Hash for objects. Sorted Set for leaderboards and rate limiting. List for queues. Set for unique membership. Pub/Sub for real-time. Always set TTLs. Never use KEYS in production. Use connection pooling. Monitor with INFO and SLOWLOG.
Data structures and when to use each
const Redis = require('ioredis'); // ioredis exports the Redis class
const client = new Redis(); // One client instance, reused app-wide
// STRING — simple key-value, counters, cached responses
await client.set('user:123', JSON.stringify(user), 'EX', 3600); // 1hr TTL
await client.get('user:123');
await client.incr('page_views'); // Atomic increment
await client.incrby('score', 10); // Increment by N
// HASH — objects with multiple fields (more memory-efficient than JSON string)
await client.hset('user:123', { name: 'Alice', email: 'alice@example.com', age: 30 });
await client.hget('user:123', 'name'); // Single field
await client.hgetall('user:123'); // All fields
await client.hincrby('user:123', 'age', 1); // Increment one field
// Small hashes get a compact internal encoding (listpack), typically using
// noticeably less memory than storing the same object as a JSON string
// SORTED SET — leaderboards, rate limiting, scheduled jobs
await client.zadd('leaderboard', 9850, 'alice');
await client.zadd('leaderboard', 9200, 'bob');
await client.zrevrange('leaderboard', 0, 9, 'WITHSCORES'); // Top 10
await client.zrank('leaderboard', 'alice'); // Rank (0-indexed)
await client.zincrby('leaderboard', 100, 'alice'); // Add 100 to score
// LIST — queues, activity feeds, recent items
await client.lpush('queue', JSON.stringify(job)); // Enqueue left
await client.rpop('queue'); // Dequeue right (FIFO)
await client.lrange('feed:user:123', 0, 19); // First 20 feed items
await client.ltrim('feed:user:123', 0, 999); // Keep only the 1000 newest items
// SET — unique membership, tags, intersections
await client.sadd('online_users', 'user:123');
await client.sismember('online_users', 'user:123'); // Check membership O(1)
await client.scard('online_users'); // Count members
await client.sinter('followers:alice', 'followers:bob'); // Common followers
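The Sorted Set commands above are enough to build a sliding-window rate limiter: store each hit as a member scored by its timestamp, drop members older than the window, and count what remains. The sketch below is illustrative, not an ioredis API: `isAllowed` and the key naming are assumptions, and a tiny in-memory stand-in replaces the three sorted-set commands so the example runs without a live server. Against real Redis you would pass an ioredis client instead.

```javascript
// In-memory stand-in for ZREMRANGEBYSCORE / ZCARD / ZADD (illustration only)
const store = new Map(); // key -> Map(member -> score)

const mockRedis = {
  async zremrangebyscore(key, min, max) {
    const zset = store.get(key);
    if (!zset) return 0;
    let removed = 0;
    for (const [member, score] of zset) {
      if (score >= min && score <= max) { zset.delete(member); removed++; }
    }
    return removed;
  },
  async zcard(key) { return store.get(key)?.size ?? 0; },
  async zadd(key, score, member) {
    if (!store.has(key)) store.set(key, new Map());
    store.get(key).set(member, score);
    return 1;
  },
};

// Allow at most `limit` hits per `windowMs` per user
async function isAllowed(redis, userId, limit, windowMs, now = Date.now()) {
  const key = `ratelimit:${userId}`;
  await redis.zremrangebyscore(key, 0, now - windowMs); // Drop expired hits
  const count = await redis.zcard(key); // Hits still inside the window
  if (count >= limit) return false;
  await redis.zadd(key, now, `${now}:${Math.random()}`); // Record this hit
  return true;
}
```

In production you would also set a PEXPIRE on the key roughly equal to the window, so sets for idle users expire instead of lingering.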
Cache-aside pattern — the production standard
// Cache-aside (Lazy Loading): check cache first, populate on miss
async function getUser(userId) {
const cacheKey = `user:${userId}`;
// 1. Try cache
const cached = await redis.get(cacheKey);
if (cached) return JSON.parse(cached);
// 2. Cache miss: fetch from DB
const { rows } = await db.query('SELECT * FROM users WHERE id = $1', [userId]);
const user = rows[0];
if (!user) return null;
// 3. Populate cache with TTL
await redis.setex(cacheKey, 3600, JSON.stringify(user)); // 1hr
return user;
}
// Cache invalidation: delete on write
async function updateUser(userId, data) {
await db.query('UPDATE users SET ... WHERE id = $1', [userId, ...data]);
await redis.del(`user:${userId}`); // Invalidate cache
// Next read will repopulate from DB
}
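Bulk reads deserve the same care as single-key cache-aside: fetching N cached users with N sequential GETs costs N round trips. A pipeline sends them in one batch. `getUsersBulk` and the in-memory pipeline stub below are illustrative assumptions, but the `pipeline().get(...).exec()` call shape and the `[err, value]` reply pairs match ioredis.

```javascript
// In-memory stand-in for an ioredis pipeline (illustration only)
function makeMockRedis(data) {
  return {
    pipeline() {
      const keys = [];
      return {
        get(key) { keys.push(key); return this; }, // Commands chain
        async exec() { return keys.map(k => [null, data.get(k) ?? null]); },
      };
    },
  };
}

// Fetch many cached users in a single round trip
async function getUsersBulk(redis, userIds) {
  const pipe = redis.pipeline();
  for (const id of userIds) pipe.get(`user:${id}`);
  const replies = await pipe.exec(); // Array of [err, value] pairs
  return replies.map(([, raw]) => (raw ? JSON.parse(raw) : null));
}
```

Misses come back as `null`, so the caller can fall back to the DB for just those IDs and repopulate the cache.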
Cache stampede prevention
// Problem: 1000 requests hit simultaneously when cache expires
// All go to DB at once — DB overwhelmed
// Solution 1: Early refresh (a simplified take on probabilistic early expiration)
async function getWithEarlyRefresh(key, fetchFn, ttl) {
const data = await redis.get(key);
if (!data) {
// Cold miss: fetch and populate synchronously
const fresh = await fetchFn();
await redis.setex(key, ttl, JSON.stringify(fresh));
return fresh;
}
// Refresh in the background once less than 10% of the TTL remains,
// so the key is repopulated before it ever actually expires
const ttlRemaining = await redis.ttl(key);
if (ttlRemaining < ttl * 0.1) {
fetchFn().then(fresh => redis.setex(key, ttl, JSON.stringify(fresh)));
}
return JSON.parse(data);
}
// Solution 2: Lock-based protection (only one caller hits DB)
async function getWithLock(key, fetchFn, ttl, attempts = 50) {
const cached = await redis.get(key);
if (cached) return JSON.parse(cached);
// Try to acquire lock (NX: only if not held; EX 10: auto-expire as a safety net)
const lockKey = `lock:${key}`;
const lockAcquired = await redis.set(lockKey, '1', 'EX', 10, 'NX');
if (lockAcquired) {
try {
// This process fetches from DB
const data = await fetchFn();
await redis.setex(key, ttl, JSON.stringify(data));
return data;
} finally {
await redis.del(lockKey); // Release even if fetchFn throws
}
}
// Others wait for the lock holder to populate the cache, with a retry cap
if (attempts <= 0) throw new Error('timed out waiting for cache population');
await new Promise(r => setTimeout(r, 100));
return getWithLock(key, fetchFn, ttl, attempts - 1); // Retry
}
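One weakness remains in the lock code above: any process can DEL the lock, including one whose 10-second lock already expired, which would release a lock now held by someone else. The widely documented fix is a unique token per holder plus an atomic compare-and-delete in Lua. `releaseLock` is an illustrative helper; the Lua script is the standard pattern from the Redis docs, and the stand-in `eval` below mimics what Redis would do server-side so the sketch runs without a server.

```javascript
// Only delete the lock if it still holds our token (atomic on the server)
const RELEASE_SCRIPT = `
if redis.call("get", KEYS[1]) == ARGV[1] then
  return redis.call("del", KEYS[1])
else
  return 0
end`;

// In-memory stand-in (illustration only): applies the same compare-and-delete
// that Redis would perform when running RELEASE_SCRIPT atomically
const locks = new Map();
const fakeRedis = {
  async set(key, value, ..._opts) { locks.set(key, value); return 'OK'; },
  async eval(_script, _numKeys, key, token) {
    if (locks.get(key) === token) { locks.delete(key); return 1; }
    return 0;
  },
};

async function releaseLock(redis, lockKey, token) {
  // ioredis: eval(script, numberOfKeys, ...keys, ...args)
  return (await redis.eval(RELEASE_SCRIPT, 1, lockKey, token)) === 1;
}
```

With this in place, the lock acquisition above would set a random token instead of `'1'`, and the finally block would call `releaseLock` instead of a bare `del`.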
Production Redis checklist
- ✅ Always set TTL — SETEX, not SET. Memory is finite.
- ✅ Use connection pooling — ioredis handles this automatically
- ✅ Never use KEYS * in production — it blocks Redis for seconds on large keyspaces. Use SCAN.
- ✅ Set maxmemory + an eviction policy: allkeys-lru for a pure cache, noeviction for a session store
- ✅ Use Lua scripts for atomic multi-step operations
- ✅ Monitor: INFO stats, SLOWLOG GET 25, MEMORY USAGE <key>
- ❌ Never use MGET for thousands of keys — use pipelines or a Hash instead
- ❌ Never store massive values (>1MB) — Redis is not a blob store
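The KEYS-vs-SCAN point above can be sketched as a cursor loop: iterate the keyspace in small batches so Redis never blocks. `deleteByPattern` and the in-memory stand-in are illustrative assumptions; the `scan(cursor, 'MATCH', pattern, 'COUNT', n)` argument shape and the `[nextCursor, keys]` reply match ioredis.

```javascript
// In-memory stand-in for SCAN/DEL (illustration only); the glob-to-regex
// translation here is simplified and handles only the * wildcard
function makeScanMock(allKeys) {
  return {
    async scan(cursor, _match, pattern, _count, count) {
      const start = Number(cursor);
      const batch = allKeys.slice(start, start + count);
      const next = start + count >= allKeys.length ? '0' : String(start + count);
      const re = new RegExp('^' + pattern.replace(/\*/g, '.*') + '$');
      return [next, batch.filter(k => re.test(k))];
    },
    deleted: [],
    async del(...keys) { this.deleted.push(...keys); return keys.length; },
  };
}

// Walk the keyspace in batches instead of blocking the server with KEYS *
async function deleteByPattern(redis, pattern) {
  let cursor = '0';
  do {
    const [next, batch] = await redis.scan(cursor, 'MATCH', pattern, 'COUNT', 100);
    if (batch.length) await redis.del(...batch);
    cursor = next; // SCAN is done when the cursor wraps back to '0'
  } while (cursor !== '0');
}
```

ioredis also ships scanStream(), which wraps this cursor loop in a Node stream if you prefer event-based iteration.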
Redis is central to rate limiter implementation — the Sorted Set patterns here power sliding-window rate limiting. For AWS-managed Redis, ElastiCache in VPC Lambda covers the connection and networking setup. External reference: Redis data types documentation.
Recommended Reading
→ Designing Data-Intensive Applications — The essential book every senior developer needs. Covers distributed systems, databases, and production architecture.
→ The Pragmatic Programmer — Timeless engineering wisdom for writing better, more maintainable code at any level.
Affiliate links. We earn a small commission at no extra cost to you.
Free Weekly Newsletter
🚀 Don’t Miss the Next Cheat Code
Join 1,000+ senior developers getting expert-level JS, Python, AWS, system design and AI secrets every week. Zero fluff, pure signal.