Redis & Caching Interview Guide: From Basics to Production Patterns


Caching is how you make slow things fast. When your database query takes 100ms and your cache lookup takes 1ms, caching isn't an optimization—it's a requirement for any system at scale.

Redis is the dominant caching solution in backend development. Understanding its data structures, caching patterns, and failure modes is expected knowledge for backend interviews.

This guide covers caching concepts and Redis specifics that come up in backend and system design interviews.


Caching Fundamentals

Before diving into Redis specifics, understand the core concepts that apply to any caching system.

Why Cache?

| Problem | Without Cache | With Cache |
|---|---|---|
| Database query | 50-200ms | 1-5ms |
| API call to external service | 100-500ms | 1-5ms |
| Complex computation | Varies | Instant (precomputed) |
| Database load | Every request hits DB | Only cache misses hit DB |

The math matters: If your cache hit rate is 90% and cache responds in 2ms while database responds in 100ms, your average response time is: 0.9 * 2ms + 0.1 * 100ms = 11.8ms instead of 100ms.
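A quick sketch of that calculation, so you can plug in your own hit rate and latencies (illustrative numbers only):

// Expected latency for a cache in front of a slower origin
function expectedLatencyMs(hitRate, cacheMs, originMs) {
  return hitRate * cacheMs + (1 - hitRate) * originMs;
}

expectedLatencyMs(0.9, 2, 100);  // 11.8 - the example above
expectedLatencyMs(0.99, 2, 100); // 2.98 - pushing hit rate to 99%

Every extra point of hit rate matters a lot once the origin is two orders of magnitude slower than the cache.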

Cache Hit and Miss

Request → Check Cache → [Hit] → Return cached data
                     → [Miss] → Fetch from source → Store in cache → Return data

Key metrics:

  • Hit rate: Percentage of requests served from cache (target: 90%+)
  • Miss rate: Percentage requiring origin fetch
  • Latency: Time to retrieve from cache vs origin
  • Eviction rate: How often items are removed to make space
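If you want to observe your own hit rate, a rough approach is to count hits and misses next to the cache read (a minimal sketch; the metric key names are arbitrary). Redis itself also reports keyspace_hits and keyspace_misses in the INFO stats output.

// Rough hit/miss counting around a cache-aside read (key names are arbitrary)
async function getWithMetrics(key, fetchFn) {
  const cached = await redis.get(key);
  if (cached) {
    await redis.incr('metrics:cache:hits');
    return JSON.parse(cached);
  }
  await redis.incr('metrics:cache:misses');
  const data = await fetchFn();
  await redis.set(key, JSON.stringify(data), 'EX', 3600);
  return data;
}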

What to Cache

Good candidates for caching:

  • Frequently accessed data (hot data)
  • Expensive computations
  • Data that changes infrequently
  • Data where slight staleness is acceptable

Poor candidates:

  • Rapidly changing data
  • Data requiring strong consistency
  • User-specific data with low reuse
  • Large objects with low access frequency

Redis Data Structures

Redis isn't just a key-value store. Its data structures are why it's powerful. Each has specific use cases and performance characteristics.

Strings

The simplest structure—a key maps to a value (up to 512MB).

// Basic operations
await redis.set('user:1:name', 'Alice');
const name = await redis.get('user:1:name');
 
// With expiration (TTL in seconds)
await redis.set('session:abc123', JSON.stringify(sessionData), 'EX', 3600);
 
// Atomic increment (counters)
await redis.incr('page:home:views');
await redis.incrby('user:1:points', 10);
 
// Set only if not exists (distributed locking)
const acquired = await redis.set('lock:resource', 'owner1', 'NX', 'EX', 30);

Use cases: Simple caching, counters, distributed locks, session storage.
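The NX lock above also needs a safe release: only the holder should delete the key, and the check-then-delete must be atomic. A minimal sketch using a small Lua script (the helper name is illustrative):

// Release a lock only if we still own it - compare the stored owner, then delete, atomically
async function releaseLock(resource, owner) {
  const script = `
    if redis.call('GET', KEYS[1]) == ARGV[1] then
      return redis.call('DEL', KEYS[1])
    end
    return 0
  `;
  return redis.eval(script, 1, `lock:${resource}`, owner);
}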

Hashes

A key maps to a collection of field-value pairs—like a mini key-value store within a key.

// Store object fields individually
await redis.hset('user:1', {
  name: 'Alice',
  email: 'alice@example.com',
  points: '100'
});
 
// Get single field (efficient for partial reads)
const email = await redis.hget('user:1', 'email');
 
// Get all fields
const user = await redis.hgetall('user:1');
 
// Increment a field
await redis.hincrby('user:1', 'points', 10);

Use cases: Object storage when you need field-level access, user profiles, configuration.

Why not just JSON strings? Hashes let you read/write individual fields without serializing/deserializing the entire object.

Lists

Ordered collection of strings. Fast operations at head and tail (O(1)).

// Add to list (queue pattern)
await redis.rpush('queue:emails', JSON.stringify(email));
 
// Pop from list (worker pattern)
const email = await redis.lpop('queue:emails');
 
// Blocking pop (wait for item)
const [key, value] = await redis.blpop('queue:emails', 30); // 30s timeout
 
// Get range (recent items)
const recentPosts = await redis.lrange('user:1:posts', 0, 9); // Last 10
 
// Trim list (keep only recent)
await redis.ltrim('user:1:activity', 0, 99); // Keep last 100

Use cases: Job queues, recent activity feeds, message buffers.

Sets

Unordered collection of unique strings.

// Add members
await redis.sadd('post:1:tags', 'javascript', 'nodejs', 'redis');
 
// Check membership (O(1))
const isTagged = await redis.sismember('post:1:tags', 'redis');
 
// Get all members
const tags = await redis.smembers('post:1:tags');
 
// Set operations
await redis.sinter('user:1:following', 'user:2:following'); // Mutual follows
await redis.sunion('post:1:tags', 'post:2:tags'); // All tags
await redis.sdiff('user:1:following', 'user:2:following'); // Unique to user 1
 
// Count unique items
await redis.sadd('page:home:visitors', visitorId);
const uniqueVisitors = await redis.scard('page:home:visitors');

Use cases: Tags, unique visitor tracking, social graph relationships, deduplication.

Sorted Sets

Like sets, but each member has a score. Ordered by score (O(log n) operations).

// Add with scores
await redis.zadd('leaderboard',
  100, 'player:1',
  250, 'player:2',
  175, 'player:3'
);
 
// Get top players
const topPlayers = await redis.zrevrange('leaderboard', 0, 9, 'WITHSCORES');
 
// Get player rank (0-indexed)
const rank = await redis.zrevrank('leaderboard', 'player:2');
 
// Increment score
await redis.zincrby('leaderboard', 10, 'player:1');
 
// Range by score (time-based queries)
await redis.zadd('events', Date.now(), JSON.stringify(event));
const recentEvents = await redis.zrangebyscore('events',
  Date.now() - 3600000, // 1 hour ago
  Date.now()
);

Use cases: Leaderboards, priority queues, time-series data, rate limiting.

Data Structure Selection

| Need | Structure | Why |
|---|---|---|
| Simple cache | String | Straightforward, supports TTL |
| Object with field access | Hash | Read/write individual fields |
| FIFO queue | List | O(1) push/pop at ends |
| Unique items | Set | Automatic deduplication |
| Ranking/scoring | Sorted Set | Ordered by score, range queries |

Caching Patterns

Different patterns suit different consistency and performance requirements.

Cache-Aside (Lazy Loading)

The application manages the cache. Most common pattern.

async function getUser(userId) {
  const cacheKey = `user:${userId}`;
 
  // 1. Check cache
  const cached = await redis.get(cacheKey);
  if (cached) {
    return JSON.parse(cached);
  }
 
  // 2. Cache miss - fetch from database
  const user = await db.users.findById(userId);
 
  // 3. Populate cache
  if (user) {
    await redis.set(cacheKey, JSON.stringify(user), 'EX', 3600);
  }
 
  return user;
}

Pros: Simple, only caches what's needed, cache failures don't break the app.

Cons: Cache miss penalty (three round trips), potential for stale data.

Read-Through

Cache sits between app and database. Cache handles misses automatically.

// Conceptual - typically handled by caching library
const cache = new ReadThroughCache({
  get: (key) => redis.get(key),
  set: (key, value, ttl) => redis.set(key, value, 'EX', ttl),
  load: async (key) => {
    // Called automatically on cache miss
    const userId = key.replace('user:', '');
    return db.users.findById(userId);
  }
});
 
// Usage - cache handles miss automatically
const user = await cache.get(`user:${userId}`);

Pros: Cleaner application code, consistent miss handling.

Cons: More complex setup, less control over caching logic.

Write-Through

Writes go to cache and database synchronously.

async function updateUser(userId, data) {
  const cacheKey = `user:${userId}`;
 
  // Write to database
  const user = await db.users.update(userId, data);
 
  // Write to cache (synchronously)
  await redis.set(cacheKey, JSON.stringify(user), 'EX', 3600);
 
  return user;
}

Pros: Cache always consistent with database, no stale reads after writes.

Cons: Write latency (two operations), caches data that might not be read.

Write-Behind (Write-Back)

Writes go to cache immediately, database updated asynchronously.

async function updateUser(userId, data) {
  const cacheKey = `user:${userId}`;
 
  // Write to cache immediately
  await redis.set(cacheKey, JSON.stringify(data), 'EX', 3600);
 
  // Queue database write for async processing
  await redis.rpush('write:queue', JSON.stringify({
    type: 'user:update',
    userId,
    data
  }));
 
  return data;
}
 
// Background worker processes writes
async function processWriteQueue() {
  while (true) {
    const item = await redis.blpop('write:queue', 0);
    const { type, userId, data } = JSON.parse(item[1]);
    await db.users.update(userId, data);
  }
}

Pros: Fast writes, reduced database load.

Cons: Complexity, data loss risk if cache fails before DB write, eventual consistency.
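One way to shrink the data-loss window on the worker side is to move each item into a "processing" list instead of popping it outright, and remove it only after the database write succeeds. A sketch, assuming Redis 6.2+ and an ioredis version that exposes blmove:

// Worker variant: move to a processing list, delete only after the DB write succeeds
async function processWriteQueueReliably() {
  while (true) {
    // Pop from the head of the queue into the processing list (blocks up to 5s)
    const raw = await redis.blmove('write:queue', 'write:processing', 'LEFT', 'RIGHT', 5);
    if (!raw) continue; // timed out waiting for work

    const { userId, data } = JSON.parse(raw);
    await db.users.update(userId, data);

    // Only now remove the item; anything left in write:processing after a crash can be re-queued
    await redis.lrem('write:processing', 1, raw);
  }
}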

Pattern Comparison

| Pattern | Read Latency | Write Latency | Consistency | Complexity |
|---|---|---|---|---|
| Cache-Aside | Miss penalty | N/A | Eventual | Low |
| Read-Through | Miss penalty | N/A | Eventual | Medium |
| Write-Through | Low | Higher | Strong | Medium |
| Write-Behind | Low | Low | Eventual | High |

Cache Invalidation

"There are only two hard things in Computer Science: cache invalidation and naming things." —Phil Karlton

TTL-Based Expiration

Simplest approach—data expires after a fixed time.

// Set TTL on write
await redis.set('user:1', userData, 'EX', 3600); // 1 hour
 
// Set TTL on existing key
await redis.expire('user:1', 3600);
 
// Check remaining TTL
const ttl = await redis.ttl('user:1');

Pros: Simple, automatic cleanup.

Cons: Stale data until expiration, choosing right TTL is tricky.
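A common refinement is to add jitter to TTLs so keys written at the same time don't all expire in the same instant (a small sketch; the spread is arbitrary):

// Randomize the TTL within ±10% of the base so expirations spread out
function jitteredTtl(baseSeconds, spread = 0.1) {
  const factor = 1 + (Math.random() * 2 - 1) * spread;
  return Math.max(1, Math.round(baseSeconds * factor));
}

await redis.set('user:1', userData, 'EX', jitteredTtl(3600)); // somewhere in 3240-3960s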

Event-Driven Invalidation

Invalidate when the source data changes.

// On user update
async function updateUser(userId, data) {
  await db.users.update(userId, data);
 
  // Invalidate all related caches
  await redis.del(`user:${userId}`);
  await redis.del(`user:${userId}:profile`);
  await redis.del(`user:${userId}:permissions`);
}
 
// Or use pub/sub for distributed invalidation
async function updateUser(userId, data) {
  await db.users.update(userId, data);
  await redis.publish('cache:invalidate', JSON.stringify({
    pattern: `user:${userId}:*`
  }));
}

Pros: Immediate consistency.

Cons: Complexity, must track all cache keys to invalidate.
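For the pub/sub variant, each application instance also runs a subscriber that deletes the matching keys. A sketch: the subscriber is a second ioredis connection, because a connection in subscribe mode can't issue regular commands, and SCAN is used instead of KEYS to avoid blocking Redis on large keyspaces.

// Each instance subscribes and deletes keys matching the published pattern
const subscriber = new Redis();

subscriber.subscribe('cache:invalidate');
subscriber.on('message', async (channel, message) => {
  const { pattern } = JSON.parse(message);

  let cursor = '0';
  do {
    const [next, keys] = await redis.scan(cursor, 'MATCH', pattern, 'COUNT', 100);
    if (keys.length > 0) await redis.del(...keys);
    cursor = next;
  } while (cursor !== '0');
});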

Version-Based Keys

Include version in cache key. Increment version to invalidate.

async function getUser(userId) {
  // Get current version (default '0' so the first INCR during invalidation actually changes the key)
  const version = (await redis.get(`user:${userId}:version`)) || '0';
  const cacheKey = `user:${userId}:v${version}`;
 
  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);
 
  const user = await db.users.findById(userId);
  await redis.set(cacheKey, JSON.stringify(user), 'EX', 86400);
  return user;
}
 
async function invalidateUser(userId) {
  // Increment version - old keys will be ignored and eventually expire
  await redis.incr(`user:${userId}:version`);
}

Pros: No need to find and delete keys, old versions expire naturally.

Cons: Temporary memory overhead from old versions.

Cache Stampede Prevention

When a popular key expires, many requests hit the database simultaneously.

Solution 1: Locking

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function getWithLock(key, fetchFn, ttl = 3600) {
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);
 
  const lockKey = `lock:${key}`;
  const lockAcquired = await redis.set(lockKey, '1', 'NX', 'EX', 10);
 
  if (lockAcquired) {
    try {
      const data = await fetchFn();
      await redis.set(key, JSON.stringify(data), 'EX', ttl);
      return data;
    } finally {
      await redis.del(lockKey);
    }
  } else {
    // Wait and retry
    await sleep(50);
    return getWithLock(key, fetchFn, ttl);
  }
}

Solution 2: Probabilistic Early Expiration

async function getWithEarlyRefresh(key, fetchFn, ttl = 3600) {
  const cached = await redis.get(key);
  const keyTtl = await redis.ttl(key);
 
  if (cached) {
    // Randomly refresh if TTL is low (last 10%)
    const shouldRefresh = keyTtl < ttl * 0.1 && Math.random() < 0.1;
 
    if (shouldRefresh) {
      // Refresh in background, return stale data
      fetchFn().then(data => {
        redis.set(key, JSON.stringify(data), 'EX', ttl);
      });
    }
 
    return JSON.parse(cached);
  }
 
  const data = await fetchFn();
  await redis.set(key, JSON.stringify(data), 'EX', ttl);
  return data;
}

Redis in Node.js

Practical patterns for using Redis in Node.js applications.

Connection Setup (ioredis)

import Redis from 'ioredis';
 
// Single instance
const redis = new Redis({
  host: 'localhost',
  port: 6379,
  password: process.env.REDIS_PASSWORD,
  db: 0,
  retryStrategy: (times) => Math.min(times * 100, 2000), // backoff between reconnection attempts
  maxRetriesPerRequest: 3
});
 
// Cluster
const cluster = new Redis.Cluster([
  { host: 'node1', port: 6379 },
  { host: 'node2', port: 6379 },
  { host: 'node3', port: 6379 }
]);
 
// Handle connection events
redis.on('error', (err) => console.error('Redis error:', err));
redis.on('connect', () => console.log('Redis connected'));

Pipelining

Send multiple commands without waiting for responses—reduces round trips.

// Without pipelining: 3 round trips
await redis.set('key1', 'value1');
await redis.set('key2', 'value2');
await redis.set('key3', 'value3');
 
// With pipelining: 1 round trip
const pipeline = redis.pipeline();
pipeline.set('key1', 'value1');
pipeline.set('key2', 'value2');
pipeline.set('key3', 'value3');
await pipeline.exec();
 
// Get multiple values in one round trip
const readPipeline = redis.pipeline();
readPipeline.get('user:1');
readPipeline.get('user:2');
readPipeline.get('user:3');
const results = await readPipeline.exec();
// results = [[null, 'user1data'], [null, 'user2data'], [null, 'user3data']]

Transactions

Atomic execution of multiple commands.

// MULTI/EXEC transaction
const result = await redis.multi()
  .incr('counter')
  .get('counter')
  .exec();
// All commands execute atomically
 
// Watch for optimistic locking
await redis.watch('balance');
const balance = parseInt(await redis.get('balance'));
 
if (balance >= amount) {
  const result = await redis.multi()
    .decrby('balance', amount)
    .rpush('transactions', JSON.stringify({ amount, date: Date.now() }))
    .exec();
  // exec() returns null if 'balance' changed after WATCH - the transaction was discarded and should be retried
} else {
  await redis.unwatch();
  throw new Error('Insufficient balance');
}

Lua Scripts

For complex atomic operations.

// Rate limiter in Lua (atomic)
const rateLimitScript = `
  local key = KEYS[1]
  local limit = tonumber(ARGV[1])
  local window = tonumber(ARGV[2])
 
  local current = redis.call('INCR', key)
  if current == 1 then
    redis.call('EXPIRE', key, window)
  end
 
  if current > limit then
    return 0
  end
  return 1
`;
 
// Load and use
const rateLimitSha = await redis.script('LOAD', rateLimitScript);
 
async function checkRateLimit(userId, limit = 100, windowSeconds = 60) {
  const key = `ratelimit:${userId}:${Math.floor(Date.now() / 1000 / windowSeconds)}`;
  const allowed = await redis.evalsha(rateLimitSha, 1, key, limit, windowSeconds);
  return allowed === 1;
}
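ioredis can also manage the script for you via defineCommand, which registers it once and handles EVALSHA (with a fallback to EVAL when the script isn't cached) transparently. A sketch of the same rate limiter:

// Register the script as a custom command; ioredis handles loading and EVALSHA for us
redis.defineCommand('rateLimit', {
  numberOfKeys: 1,
  lua: rateLimitScript
});

const allowed = await redis.rateLimit(`ratelimit:${userId}`, 100, 60);
// allowed === 1 when the request is under the limit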

Session & Rate Limiting

Two common Redis use cases with practical implementations.

Session Storage

import session from 'express-session';
import RedisStore from 'connect-redis';
 
// Express session with Redis
app.use(session({
  store: new RedisStore({ client: redis }),
  secret: process.env.SESSION_SECRET,
  resave: false,
  saveUninitialized: false,
  cookie: {
    secure: process.env.NODE_ENV === 'production',
    maxAge: 24 * 60 * 60 * 1000 // 24 hours
  }
}));
 
// Manual session management
async function createSession(userId) {
  const sessionId = crypto.randomUUID(); // global in Node 19+; otherwise import from 'node:crypto'
  const sessionData = {
    userId,
    createdAt: Date.now(),
    lastAccess: Date.now()
  };
 
  await redis.set(
    `session:${sessionId}`,
    JSON.stringify(sessionData),
    'EX',
    86400
  );
 
  return sessionId;
}
 
async function getSession(sessionId) {
  const data = await redis.get(`session:${sessionId}`);
  if (!data) return null;
 
  // Refresh TTL on access
  await redis.expire(`session:${sessionId}`, 86400);
 
  return JSON.parse(data);
}

Rate Limiting

Fixed Window:

async function fixedWindowRateLimit(userId, limit = 100, windowSeconds = 60) {
  const window = Math.floor(Date.now() / 1000 / windowSeconds);
  const key = `ratelimit:${userId}:${window}`;
 
  const current = await redis.incr(key);
  if (current === 1) {
    await redis.expire(key, windowSeconds);
  }
 
  return {
    allowed: current <= limit,
    remaining: Math.max(0, limit - current),
    resetAt: (window + 1) * windowSeconds * 1000
  };
}

Sliding Window Counter:

async function slidingWindowRateLimit(userId, limit = 100, windowSeconds = 60) {
  const now = Date.now();
  const windowMs = windowSeconds * 1000;
  const currentWindow = Math.floor(now / windowMs);
  const previousWindow = currentWindow - 1;
 
  const currentKey = `ratelimit:${userId}:${currentWindow}`;
  const previousKey = `ratelimit:${userId}:${previousWindow}`;
 
  const [currentCount, previousCount] = await redis.mget(currentKey, previousKey);
 
  // Weight previous window by how much of current window has passed
  const elapsedInCurrentWindow = now % windowMs;
  const previousWeight = 1 - (elapsedInCurrentWindow / windowMs);
 
  const count = (parseInt(currentCount) || 0) +
                (parseInt(previousCount) || 0) * previousWeight;
 
  if (count >= limit) {
    return { allowed: false, remaining: 0 };
  }
 
  await redis.incr(currentKey);
  await redis.expire(currentKey, windowSeconds * 2);
 
  return {
    allowed: true,
    remaining: Math.floor(limit - count - 1)
  };
}

Token Bucket:

async function tokenBucketRateLimit(
  userId,
  bucketSize = 10,
  refillRate = 1, // tokens per second
  tokensRequired = 1
) {
  const key = `bucket:${userId}`;
  const now = Date.now();
 
  // Lua script for atomic token bucket
  const script = `
    local key = KEYS[1]
    local bucket_size = tonumber(ARGV[1])
    local refill_rate = tonumber(ARGV[2])
    local tokens_required = tonumber(ARGV[3])
    local now = tonumber(ARGV[4])
 
    local bucket = redis.call('HMGET', key, 'tokens', 'last_refill')
    local tokens = tonumber(bucket[1]) or bucket_size
    local last_refill = tonumber(bucket[2]) or now
 
    -- Refill tokens based on time passed
    local elapsed = (now - last_refill) / 1000
    tokens = math.min(bucket_size, tokens + (elapsed * refill_rate))
 
    if tokens >= tokens_required then
      tokens = tokens - tokens_required
      redis.call('HMSET', key, 'tokens', tokens, 'last_refill', now)
      redis.call('EXPIRE', key, math.ceil(bucket_size / refill_rate * 2)) -- EXPIRE requires an integer
      return {1, tokens}
    else
      return {0, tokens}
    end
  `;
 
  const [allowed, remaining] = await redis.eval(
    script, 1, key, bucketSize, refillRate, tokensRequired, now
  );
 
  return { allowed: allowed === 1, remaining };
}

Scaling Redis

As data and traffic grow, a single Redis instance won't suffice.

Replication

Master-replica setup for read scaling and high availability.

        ┌──────────┐
        │  Master  │ ← All writes
        └────┬─────┘
             │ Replication
    ┌────────┼────────┐
    ▼        ▼        ▼
┌───────┐ ┌───────┐ ┌───────┐
│Replica│ │Replica│ │Replica│ ← Reads distributed
└───────┘ └───────┘ └───────┘

// ioredis with read replicas
const redis = new Redis({
  sentinels: [
    { host: 'sentinel1', port: 26379 },
    { host: 'sentinel2', port: 26379 }
  ],
  name: 'mymaster',
  role: 'slave', // Read from replicas
  preferredSlaves: [
    { ip: 'replica1', port: 6379, prio: 1 },
    { ip: 'replica2', port: 6379, prio: 2 }
  ]
});

Redis Sentinel

Automatic failover when master fails.

┌──────────────────────────────────────────┐
│              Sentinel Cluster            │
│  ┌─────────┐ ┌─────────┐ ┌─────────┐    │
│  │Sentinel1│ │Sentinel2│ │Sentinel3│    │
│  └────┬────┘ └────┬────┘ └────┬────┘    │
└───────┼───────────┼───────────┼─────────┘
        │  Monitor  │           │
        ▼           ▼           ▼
    ┌───────┐   ┌───────┐   ┌───────┐
    │Master │──▶│Replica│   │Replica│
    └───────┘   └───────┘   └───────┘

Sentinel provides:

  • Monitoring master and replicas
  • Automatic failover if master fails
  • Configuration provider for clients

Redis Cluster

Horizontal scaling by sharding data across multiple masters.

┌──────────────┐   ┌────────────────┐   ┌──────────────────┐
│   Master 1   │   │    Master 2    │   │     Master 3     │
│ Slots 0-5460 │   │Slots 5461-10922│   │ Slots 10923-16383│
└──────┬───────┘   └───────┬────────┘   └────────┬─────────┘
       │                   │                     │
┌──────▼───────┐   ┌───────▼────────┐   ┌────────▼─────────┐
│  Replica 1   │   │   Replica 2    │   │    Replica 3     │
└──────────────┘   └────────────────┘   └──────────────────┘

Key concepts:

  • 16384 hash slots distributed across masters
  • Keys hashed to slots: CRC16(key) % 16384
  • Each master handles a range of slots
  • Automatic resharding when adding/removing nodes

// Hash tags for related keys on same slot
await redis.set('{user:1}:profile', profileData);
await redis.set('{user:1}:settings', settingsData);
// Both keys hash based on {user:1}, ensuring same slot

Sentinel vs Cluster

| Aspect | Sentinel | Cluster |
|---|---|---|
| Purpose | High availability | Horizontal scaling |
| Data distribution | All data on master | Sharded across masters |
| Max data size | Single node memory | Combined node memory |
| Automatic failover | Yes | Yes |
| Multi-key operations | All keys accessible | Only same-slot keys |
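The last row is where hash tags matter in practice: in cluster mode, a multi-key command whose keys hash to different slots is rejected with a CROSSSLOT error, while hash-tagged keys stay together (a sketch against the cluster client from earlier):

// Multi-key commands only work when all keys map to the same slot
await cluster.mget('user:1:profile', 'user:2:profile');
// → usually fails: "CROSSSLOT Keys in request don't hash to the same slot"

await cluster.mget('{user:1}:profile', '{user:1}:settings');
// → works: both keys hash on "{user:1}", so they share a slot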

Quick Reference

Data structure selection:

  • Simple values → Strings
  • Objects with field access → Hashes
  • Queues → Lists
  • Unique items → Sets
  • Ranked data → Sorted Sets

Caching patterns:

  • Most cases → Cache-aside
  • Need consistency → Write-through
  • High write volume → Write-behind

Invalidation strategies:

  • Simple → TTL-based
  • Needs consistency → Event-driven
  • Avoid key tracking → Version-based

Prevent stampede:

  • Locking for guaranteed single regeneration
  • Probabilistic early refresh for high traffic
  • Background refresh for critical data

Scaling:

  • Read scaling → Replication
  • High availability → Sentinel
  • Data scaling → Cluster


What's Next?

Caching seems simple until it isn't. The difference between junior and senior developers is understanding the failure modes: cache stampedes, invalidation bugs, and consistency issues.

In interviews, focus on trade-offs. Why cache-aside over write-through? When is eventual consistency acceptable? How would you handle a cache failure?

Ready to ace your interview?

Get 550+ interview questions with detailed answers in our comprehensive PDF guides.
