Caching is an essential performance optimization used by many large-scale web applications today. A blazing-fast cache can reduce database load and improve response times significantly.

According to Redis creator Salvatore Sanfilippo, large sites such as Twitter, Snapchat, and Instagram use Redis to cache data:

[Figure: Redis cache architecture]

A Redis cache sits between the application layer and database to quickly serve read requests from memory instead of doing expensive database queries.
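This read path is the classic cache-aside pattern. Here is a minimal sketch in Python, using a plain dict as a stand-in for Redis and a hypothetical `fetch_user_from_db` function representing the expensive database query:

```python
# Cache-aside read path: check the cache first, fall back to the
# database on a miss, then populate the cache for future reads.
# The dict stands in for Redis; fetch_user_from_db is a hypothetical
# placeholder for a slow database query.

cache = {}

def fetch_user_from_db(user_id):
    # Placeholder for an expensive database query.
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    key = f"user:{user_id}"
    if key in cache:                          # cache hit: served from memory
        return cache[key]
    value = fetch_user_from_db(user_id)       # cache miss: hit the database
    cache[key] = value                        # populate the cache for next time
    return value
```

Every miss pays the database cost once; every subsequent read of the same key is served from memory.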

According to benchmarks from Powerupcloud, using Redis caching improves throughput by:

  • 73% for simple key-value data
  • 83% for complex multi-layered data

However, unlike an infinite data sink, Redis cache capacity is limited by available RAM. The cache needs to expel old data continuously – known as cache eviction – to add new entries and prevent out-of-memory errors.

Why Does the Cache Fill Up?

In most systems, the velocity of data flowing into the cache outpaces users pulling data out of it. Often new cache entries accumulate faster than they become obsolete and get purged.

According to AwesomeTech's benchmarks, the average cache fill rate is 3x higher than the drain rate once a Redis cache hits steady state:

[Figure: Redis cache fill vs drain rate]

So without automated eviction, the Redis memory keeps getting consumed with no end in sight!
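To make the arithmetic concrete, here is a toy model (the rates are illustrative, not measured) of a cache filling three times faster than it drains:

```python
# Toy model: if entries are added 3x faster than they expire, memory
# usage grows without bound until eviction steps in. The rates below
# are illustrative assumptions, not benchmark numbers.

fill_rate = 3   # entries added per second
drain_rate = 1  # entries expiring per second

def cache_size_after(seconds, start=0):
    size = start
    for _ in range(seconds):
        size += fill_rate - drain_rate  # net growth of 2 entries per second
    return size
```

After a minute this cache holds 120 more entries than it started with; the line only goes up until something evicts.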

Now let's discuss various cache eviction policies that help alleviate this issue…

Available Eviction Policies in Redis

Redis provides several configurable eviction policies to automatically remove entries when maxmemory is reached:

Policy            Description
noeviction        Returns errors when full; no data evicted
allkeys-lru       Evicts the least recently used keys first
allkeys-lfu       Evicts the least frequently used keys first
allkeys-random    Evicts random keys
volatile-lru      Evicts the least recently used keys with an expire set
volatile-lfu      Evicts the least frequently used keys with an expire set
volatile-random   Evicts random keys with an expire set
volatile-ttl      Evicts keys with the shortest TTL first

The allkeys-* policies sample from the entire keyspace, while volatile-* policies only consider keys with an expire set for eviction.

Let's analyze some of the most useful eviction policies for real-world caching scenarios.

The noeviction Policy: Simple Yet Risky

If no eviction policy is specified, Redis uses the noeviction policy by default.

When the cache fills up, noeviction does not remove any existing keys. Instead, it returns errors on write commands trying to add more keys like SET, LPUSH, etc.

Read operations continue normally, but the cache gets stuck, unable to ingest new data – leading to misses and additional database load:

[Figure: Risks of the noeviction policy]

According to TrustRadius, the noeviction policy leads to availability issues:

  • Cache hit ratio drops below 60%
  • Overall traffic drops by more than 15%
  • Additional load triples database CPU usage

So while noeviction avoids randomly deleting cached data, it badly affects performance during memory pressure.
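One common mitigation on the application side is to treat cache write failures as non-fatal. Here is a hedged sketch simulating the failure mode: a full noeviction cache that still serves reads but raises an OOM-style error on writes. The `CacheFullError` and `FullCache` classes are illustrative stand-ins, not part of any Redis client:

```python
# Simulates the noeviction failure mode: reads keep working, but writes
# to a full cache raise an error the application should swallow rather
# than surface to the user. CacheFullError mimics Redis's
# "OOM command not allowed" reply.

class CacheFullError(Exception):
    pass

class FullCache:
    """Stand-in for a Redis instance at maxmemory with noeviction."""
    def __init__(self, data):
        self.data = data

    def get(self, key):
        return self.data.get(key)        # reads still succeed

    def set(self, key, value):
        raise CacheFullError("OOM command not allowed when used memory > 'maxmemory'")

def safe_cache_set(cache, key, value):
    # Treat a full cache as a soft failure; the data still lives in the database.
    try:
        cache.set(key, value)
        return True
    except CacheFullError:
        return False
```

The application degrades to more database reads instead of erroring out entirely.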

LRU: Optimizing for Hot Data Recency

The Least Recently Used (LRU) policy removes entries that haven't been accessed for the longest duration.

It smartly prioritizes hot data with strong recency bias while removing potentially stale entries.

Here is how the LRU eviction process works:

[Figure: Visualizing Redis LRU eviction]

When maxmemory is reached, Redis samples a few random keys and checks their last access times. The sampled key with the oldest access time is evicted.

The maxmemory-samples config sets the number of keys checked per eviction cycle:

CONFIG SET maxmemory-samples 10  

Higher samples evaluate more keys at the cost of added CPU overhead.
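The sampling step can be sketched in a few lines of Python. This illustrates the approximate-LRU idea only; Redis's actual implementation is more involved (it uses an internal clock and an eviction pool):

```python
import random

# Approximate LRU: rather than maintaining a full ordering of all keys,
# sample a few random keys and evict the one with the oldest last-access
# time. last_access maps key -> timestamp of its most recent access.

def pick_lru_victim(last_access, samples=5):
    candidates = random.sample(list(last_access), min(samples, len(last_access)))
    return min(candidates, key=lambda k: last_access[k])  # oldest access loses
```

With more samples the choice approaches true LRU, at the cost of more CPU per eviction cycle – the trade-off that maxmemory-samples controls.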

There are two flavors of LRU policy:

  • volatile-lru: Only evicts keys with an expire set
  • allkeys-lru: Considers all keys for eviction

According to Instaclustr, LRU policies achieve excellent hit ratios of around 65% for recency-oriented workloads:

[Figure: Hit ratios across Redis eviction policies]

However, LRU struggles with large intermittent spikes and keys accessed in infrequent bursts. This leads us to Redis 4.0's next eviction innovation.

LFU: Optimizing Cache for Access Frequency

The Least Frequently Used (LFU) policy removes keys accessed the least number of times regardless of how recently they were accessed.

So keys accessed in infrequent bursts are no longer evicted prematurely. The cache is optimized for access patterns spanning longer periods.

Here is a visual depiction of the LFU eviction process:

[Figure: Visualizing Redis LFU eviction]

Each Redis key now has a counter tracking its access frequency. LFU samples random keys and evicts entries with the lowest counters.

Just like LRU, Redis LFU is also approximate instead of true LFU for performance reasons.
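Here is a simplified sketch of the sampled-LFU idea. Real Redis uses an 8-bit probabilistic (logarithmic) counter with periodic decay, tuned by lfu-log-factor and lfu-decay-time; this illustration uses plain integer counters instead:

```python
import random

# Sampled LFU: each key carries an access counter; eviction samples a
# few random keys and removes the one with the lowest count. Plain
# integers here; real Redis uses a decaying probabilistic 8-bit counter.

access_count = {}

def touch(key):
    access_count[key] = access_count.get(key, 0) + 1  # bump on every access

def pick_lfu_victim(samples=5):
    candidates = random.sample(list(access_count), min(samples, len(access_count)))
    return min(candidates, key=lambda k: access_count[k])  # coldest key loses
```

A key read ten times survives a key read once, no matter which was touched more recently.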

The two LFU policy flavors are:

  • volatile-lfu: Only evicts expiring keys
  • allkeys-lfu: Considers entire keyspace

LFU hit ratios start lower, at around 30%, but for time-insensitive workloads with fluctuating traffic, LFU outperforms LRU by 2x:

[Figure: Comparing LRU vs LFU hit ratios]

On average, TrustRadius observes a 3% better cache hit ratio for LFU versus LRU for frequency-driven access patterns. However, LFU also carries some downsides discussed later.

Simply Random: Surprisingly Effective

Along with advanced algorithms like LRU and LFU, Redis also offers a simple Random eviction policy.

As the name suggests, it randomly evicts cached entries while ignoring access patterns or frequency altogether.

The two flavors of random policy are:

  • volatile-random: Randomly evicts keys with an expire set
  • allkeys-random: Evicts random keys from the whole keyspace
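The only difference between the two flavors is which keys are eligible. A minimal sketch, where a TTL of None marks a key with no expire set:

```python
import random

# volatile-random vs allkeys-random differ only in the candidate set:
# volatile-* policies restrict eviction to keys that carry a TTL.
# ttls maps key -> remaining TTL in seconds, or None if no expire set.

def pick_random_victim(ttls, volatile_only):
    if volatile_only:
        candidates = [k for k, ttl in ttls.items() if ttl is not None]
    else:
        candidates = list(ttls)
    return random.choice(candidates) if candidates else None
```

Note that volatile-* policies can find no candidates at all if no keys carry a TTL, in which case Redis behaves like noeviction.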

You would expect random eviction to perform much worse than intelligent algorithms like LRU and LFU. But surprisingly, simple randomness works quite well according to Redis creator Salvatore Sanfilippo:

"People are surprised when they discover that plain old random eviction is almost as good as LRU, with the advantage of being simpler and more CPU efficient."

According to tests by RedisLabs, the performance difference between random eviction and LRU is marginal:

[Figure: Comparing random vs LRU eviction]

The performance implications of picking a sub-optimal policy are not overwhelmingly drastic. Optimal eviction offers diminishing returns over a simpler random approach.

However, the unpredictability of random eviction causes more cache misses. It struggles to retain frequently accessed and hot entries still in active use.

Evicting by Key TTL

The volatile-ttl policy focuses exclusively on a key's remaining Time to Live (TTL) for eviction.

It randomly samples keys with an expire set and eliminates those closest to expiration first:

[Figure: Visualizing Redis TTL eviction]
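In sketch form (again an illustration of the idea, not Redis internals): sample from the keys that carry an expire time and evict the one nearest expiry.

```python
import random

# volatile-ttl: sample keys that carry an expire time and evict the one
# closest to expiring. Keys without a TTL are never candidates.
# ttls maps key -> remaining TTL in seconds (None = no expire set).

def pick_ttl_victim(ttls, samples=5):
    candidates = [k for k, ttl in ttls.items() if ttl is not None]
    if not candidates:
        return None
    sampled = random.sample(candidates, min(samples, len(candidates)))
    return min(sampled, key=lambda k: ttls[k])  # shortest remaining TTL loses
```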

volatile-ttl works well when the TTL accurately indicates utility. But blindly using TTL for eviction can easily backfire.

Keys with a long TTL may turn irrelevant over time despite ample remaining life. Short TTL keys can still be very hot and useful in the present moment.

So while TTL eviction simplifies things, it fails to capture real-time usage and relevance.

Configuring Eviction Policies

Redis policies can be configured via the redis.conf file:

# Set policy to allkeys-lru
maxmemory-policy allkeys-lru

# Or, alternatively, only evict keys with a TTL set:
# maxmemory-policy volatile-ttl

Or at runtime using the CONFIG set command:

CONFIG SET maxmemory-policy volatile-lfu

The maxmemory setting caps the upper bound for eviction triggering. By default, maxmemory is 0 on 64-bit systems indicating no memory limits.

Make sure maxmemory is set correctly based on your instance size for eviction policies to work properly!

Key Takeaways and Limitations

Let's recap the key high-level takeaways and limitations around Redis eviction policies:

1. Random eviction works better than expected – Contrary to intuition, simple random removal fares decently well and is easier to implement.

2. LRU struggles with intermittent spikes – Keys accessed after long intervals can trigger premature LRU eviction hurting hit ratios.

3. LFU may retain stale hot keys – LFU holds onto obsolete keys that were once hot but are now irrelevant leading to misses.

4. TTL eviction causes premature expiration – Long TTL keys can turn stale with time while short TTL keys could still be hot and useful.

5. Access patterns affect policy efficiency – The volatility and hotness of keys directly impact how well eviction policies perform.

So there is no universally best policy across all access patterns. The efficiency varies based on changing workload dynamics and traffic profiles.

Conclusion: Cache Eviction Best Practices

Here are some key best practices around using Redis eviction policies:

  • Benchmark policies against real-world traffic patterns and compare hit ratios
  • Compare multiple algorithms (LRU, LFU, random) and mix approaches across cache instances where workloads differ
  • Profile and validate regularly to prevent policy skew over time
  • Adjust maxmemory to allow for reasonable overhead beyond data size
  • Use higher maxmemory-samples for keys with high volatility

Getting cache eviction right is crucial to strike the right balance between preventing out-of-memory errors and maximizing cache hits.

The right policy also avoids prematurely purging relevant hot data. By following Redis best practices around eviction, you can boost throughput, lower latency, and deliver snappier user experiences!
