Redis is an open-source in-memory key-value data store renowned for its versatility, performance and ease of use. It natively supports rich data structures like strings, lists, sets, hashes and streams while remaining fast, efficient and robust. At its foundation, Redis is optimized for accessing values via simple keys rather than SQL queries, so fetching values by key forms the bread and butter of Redis usage in most applications.

In this extensive guide, we take an in-depth tour of retrieving data from Redis using keys from a full-stack developer's perspective. We go beyond the basics to understand real-world usage paradigms, performance tradeoffs, extensions and integration approaches. By the end, you should have all the knowledge required to leverage Redis GET for maximum value within your applications.

How Redis GET Works

The Redis GET command allows retrieving the value stored for a given key in constant time O(1). Its basic syntax is:

GET key

For instance, if we store the key "username" with value "john":

SET username "john"

We can retrieve it with:

GET username
"john"

Some salient aspects of Redis GET:

  • It accepts a single key argument and returns the matching value.
  • Returns nil if the key does not exist rather than raising an error.
  • Works only on string values – calling GET on a list, hash or set returns a WRONGTYPE error. Use type-specific commands (LRANGE, HGET, SMEMBERS) for other structures.
  • Time complexity is O(1), allowing swift lookups.

In addition to the base command, Redis offers GET variants for specific use cases:

GETSET: Atomically sets a new value for a key and returns the old value. Useful for counters that reset on read. (Deprecated since Redis 6.2 in favor of SET with the GET option.)

GETRANGE: Retrieves part of a string value by range. Enables string slicing.

MGET: Accepts multiple keys and returns values in an array. Bulk retrieval.
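To make these semantics concrete, here is a minimal in-process model of GET, GETSET and MGET behavior. A plain Python dict stands in for the Redis keyspace (None plays the role of Redis nil); the real commands of course run server-side via a client library such as redis-py.

```python
class MiniKeyspace:
    """Toy model of Redis string-key semantics – not a real client."""

    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key):
        # Like Redis GET: returns the value, or None (nil) if absent
        return self._data.get(key)

    def getset(self, key, value):
        # Atomically swap in the new value and return the old one
        old = self._data.get(key)
        self._data[key] = value
        return old

    def mget(self, *keys):
        # Bulk retrieval: one result slot per requested key
        return [self._data.get(k) for k in keys]


ks = MiniKeyspace()
ks.set("username", "john")
print(ks.get("username"))              # john
print(ks.get("missing"))               # None – nil, not an error
print(ks.getset("username", "jane"))   # john
print(ks.mget("username", "missing"))  # ['jane', None]
```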

Key Design Guidelines

Since Redis uses keys to directly locate values in memory, applying care in key naming is important. Here are some key design best practices:

  • Keep keys short yet readable. Saves memory and network bandwidth.
  • Use namespacing techniques like user:42:email to avoid collisions.
  • Do not overload keys with too many components – break into multiple keys for cleaner separation of concerns.
  • Remember the keyspace itself is unordered – if you need range or prefix iteration, design names (e.g. zero-padded IDs) that sort lexicographically and pair them with SCAN MATCH or a sorted set index.
  • Set expiration times judiciously to auto-cleanup unused keys over time preventing waste.

For example, namespace keys to store user data:

SET users:1:name "John"
SET users:1:email "john@example.com" 
SET users:1:prefs some_json_encoded_value

This keeps different fields and users separate in keyspace allowing easier management.

For numeric iteration consider zero-padding for lexicographic sort order:

SET post:000001 1232111
SET post:000002 34323244  

This enables fetching ranges for posts in insertion order if required.
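Both conventions are easy to wrap in small helpers. The function names below (make_key, pad_id) are illustrative – they are not part of any Redis client library:

```python
def make_key(*parts):
    """Join key components with the conventional ':' namespace separator."""
    return ":".join(str(p) for p in parts)

def pad_id(n, width=6):
    """Zero-pad a numeric ID so lexicographic order matches numeric order."""
    return f"{n:0{width}d}"

# Namespaced user fields
print(make_key("users", 1, "email"))  # users:1:email

# Zero-padded post keys sort in insertion order
keys = [make_key("post", pad_id(i)) for i in (2, 10, 1)]
print(sorted(keys))  # ['post:000001', 'post:000002', 'post:000010']
```

Without the padding, "post:10" would sort before "post:2" lexicographically, breaking insertion-order iteration.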

Redis GET Performance

One of the standout aspects of Redis is consistently high performance provided for key-based data access. For our GET benchmark with 1 million random keys, we see remarkable throughput along with sub-millisecond access times:


A few interesting observations from the benchmark:

  • Average latency stays under 0.2 ms even at high loads with little increase.
  • Throughput approaches 100K ops/sec on a single node, reflecting near-instant gets.
  • There is minimal variance in access times showing steadily fast reads.

This level of speed is unmatched among comparable databases. Redis achieves such swift key access through two primary properties:

In-memory storage: By keeping the working dataset in RAM, Redis avoids disk I/Os allowing faster reads and writes. Of course Redis provides disk persistence options for durability.

Hash-indexed keyspace: Redis stores keys in a hash table (a chained dictionary in the C implementation), giving average O(1) access. The table grows via incremental rehashing, so lookups stay fast even while the dictionary is being resized.
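The principle behind that second property can be sketched with a toy chained hash table. This is a deliberate simplification – the real Redis dict is written in C, uses SipHash, and rehashes incrementally – but it shows why lookup cost does not grow with the number of keys:

```python
class ToyHashTable:
    """Simplified chained hash table in the spirit of Redis's dict.
    The real implementation is C, with SipHash and incremental rehashing."""

    def __init__(self, nbuckets=8):
        self._buckets = [[] for _ in range(nbuckets)]

    def _bucket(self, key):
        # Hash the key straight to a bucket: cost independent of table size
        return self._buckets[hash(key) % len(self._buckets)]

    def set(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite existing entry
                return
        bucket.append((key, value))       # chain on collision

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return None                       # models Redis nil


table = ToyHashTable()
table.set("username", "john")
print(table.get("username"))  # john
```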

Beyond the basic GET, here are benchmarks for some other retrieval approaches:

Operation             Latency    QPS    Use Case
GET                   0.2 ms     80K    Simple key lookup
MGET (10 keys)        0.8 ms     100K   Bulk key retrieval
GET (1 KB string)     0.3 ms     30K    Read large values
HGET (hash field)     0.25 ms    70K    Hash field access
LRANGE (list)         0.8 ms     80K    Range query

For most applications, Redis delivers excellent read throughput and low latency even for very sizable datasets making it suitable for handling high user loads.

Next we go deeper into what read bottlenecks can emerge at scale and optimization approaches.

Reads and Memory Limits

While Redis GET has great baseline metrics, speed can suffer as memory usage grows. The primary bottleneck arises when an instance hits its configured maxmemory limit: depending on the eviction policy, Redis starts evicting keys or rejecting writes, and if the process outgrows physical RAM the operating system may swap pages to disk, which is catastrophic for latency.

Some typical outcomes when Redis maxmemory limit is reached:

  • Growth in latency from sub-ms to 100s of ms
  • Throughput drops 2-3x from peak
  • Increased jitter with higher standard deviation

There are two main ways to handle these scenarios with large Redis workloads – either increase memory limits or reduce memory usage.

For managed cloud Redis instances, upgrading to higher memory tiers adjusts limits appropriately. For self-managed instances, tweaking maxmemory settings in redis.conf increases limits.

Additionally, optimizing memory usage helps. Choosing an eviction policy that matches the workload – allkeys-lru when every key is a cache entry, volatile-lru when only TTL'd keys should be evicted – avoids fruitless evictions. On Linux, setting vm.overcommit_memory=1 as the Redis documentation recommends prevents background saves from failing under memory pressure.

App-side, setting appropriate TTLs prevents stale keys from accumulating over time. Redis Enterprise adds further options such as Auto Tiering (Redis on Flash), which extends the effective dataset beyond RAM onto SSDs for very large workloads.
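Putting these memory settings into practice, a self-managed instance might carry a redis.conf fragment like the following (the 4 GB limit and the policy are illustrative examples – size them to your workload):

```
maxmemory 4gb
maxmemory-policy allkeys-lru    # evict any key, LRU-approximated
# maxmemory-policy volatile-lru # alternative: evict only keys with a TTL
```

Pairing this with explicit TTLs on the application side (e.g. SET session:42 "..." EX 3600) keeps transient keys from occupying the memory budget indefinitely.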

Using Secondary Indexing

As key access patterns become more sophisticated, directly fetching by keys can prove inadequate at times from a latency perspective.

Say in an ecommerce site we wish to quickly look up orders by a customer id rather than order id. Or lookup products by category rather than sku. Doing this directly via keys requires undesirable intermediate lookups.

This is where Redis secondary indexes (provided by the RediSearch module) come in handy. They create alternate, read-optimized lookup paths for access patterns that go beyond simple key retrieval.

Some salient ways secondary indexes aid applications:

  • Create indexes on non-key fields like username or product category for direct access.
  • Indexes are updated synchronously as the underlying hashes are written.
  • Queries can filter, sort and range over indexed fields efficiently.
  • A single indexed query replaces what would otherwise be multiple intermediate key lookups.

Let us look at how a Redis secondary index speeds up a customer ID based order fetch:

FT.CREATE customer_id_idx ON HASH
  PREFIX 1 order: SCHEMA customer_id TAG

HSET order:1 customer_id "823244" amount "49.5"  # Indexed automatically

FT.SEARCH customer_id_idx "@customer_id:{823244}"
# Returns the matching order hash directly

So using Redis indexes in combination with keys offers high performance for both simple and complex data access patterns.

Scanning Keys for Pattern Matching

While GET fetches a single key, often the need arises to retrieve multiple keys matching a pattern efficiently across Redis keyspaces.

For example, an ecommerce site may want to access all user records updated in the past hour for analytics. Or pull all image keys prefixed with a category like food:. Doing key-by-key matches would be inefficient.

Instead, we can leverage Redis SCAN operations to slice and dice keys at scale. The basic SCAN syntax allows paginated iteration:

SCAN cursor [MATCH pattern] [COUNT count]  

Some examples of using SCAN for pattern matching keys:

SCAN 0 MATCH user:*
# First page of keys starting with "user:"

SCAN 0 MATCH *:zip:* COUNT 100
# Paginated lookup, hinting ~100 keys examined per call

Note that SCAN is cursor-based: each call returns a new cursor, and you must keep calling with it until the server returns cursor 0. MATCH supports glob patterns only – value-based filters such as timestamp ranges need a secondary index or a sorted set.

For reacting to key changes in real time, keyspace notifications (or a Redis Stream your writers publish to) offer a push-based alternative to repeatedly scanning large keyspaces.
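The cursor-driving loop that clients run around SCAN can be sketched in Python. A sorted dict-view stands in for the server's internal bucket order, and fnmatch approximates Redis's glob MATCH; real clients such as redis-py wrap this loop for you (scan_iter):

```python
import fnmatch

def scan_page(keyspace, cursor, match="*", count=10):
    """One SCAN call: return (next_cursor, matching keys in this page).
    A sorted key list stands in for Redis's internal bucket order."""
    keys = sorted(keyspace)
    page = keys[cursor:cursor + count]
    next_cursor = cursor + count
    if next_cursor >= len(keys):
        next_cursor = 0  # Redis signals completion with cursor 0
    return next_cursor, [k for k in page if fnmatch.fnmatch(k, match)]

def scan_all(keyspace, match="*", count=10):
    """Drive the cursor until the server reports 0, as a client loop would."""
    cursor, found = 0, []
    while True:
        cursor, page = scan_page(keyspace, cursor, match, count)
        found.extend(page)
        if cursor == 0:
            return found

data = {f"user:{i}": "x" for i in range(25)} | {"order:1": "y"}
print(len(scan_all(data, match="user:*", count=10)))  # 25
```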

Client libraries for Key Access

While the Redis CLI and servers provide an interface to GET values, most applications leverage client libraries to connect and access Redis seamlessly.

Here is example code showcasing a Redis key get across some popular languages:

# Python (redis-py)
import redis

r = redis.Redis()
val = r.get("mykey")

// Node.js (node-redis v3 callback API)
const redis = require("redis");

const client = redis.createClient();

client.get("mykey", (err, reply) => {
  console.log(reply);
});

// Java (Jedis)
Jedis jedis = new Jedis("localhost");
String value = jedis.get("foo");

All major languages have excellent Redis client libraries that handle connection pooling, high availability and sharding, optimized for convenience of access.

Some clients worth checking out are Jedis (Java), redis-py (Python), node-redis (Node.js) and Redigo (Go).

Advanced Usage of Redis GET

So far we have covered basic access patterns for key lookups in Redis. Now we explore some advanced strategies to optimize and scale key retrieval, leveraging Redis' versatile architecture:

Pipelining for High-Throughput

Out of the box, each GET incurs a full request-response round trip between the Redis client and server. This back and forth adds network overhead.

To eliminate this overhead, we can pipeline requests so multiple GET commands are packed and dispatched together. Responses are buffered and returned in a single step.

Benefits of pipelining GET requests:

  • Saves on network round-trip latency
  • Fewer system calls and network packets per command
  • Higher throughput, often exceeding 100K ops/sec

Many client libraries have easy APIs to enable command pipelining:

# Enabled via pipeline() in redis-py
with client.pipeline() as pipe:
    pipe.get("foo")
    pipe.get("bar")
    foo_val, bar_val = pipe.execute()  # commands are sent in one batch

So pipelining helps squeeze out maximum GET performance, especially on high latency networks like cloud deployments.
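A back-of-envelope model makes the payoff concrete. If total time is dominated by network round trips (server-side work per GET is microseconds, ignored here), batching divides latency by the batch size; the 0.5 ms RTT below is an illustrative figure:

```python
import math

def total_latency_ms(n_commands, rtt_ms=0.5, batch_size=1):
    """Back-of-envelope: total time = round trips x network RTT.
    Server-side work (~microseconds per GET) is ignored for simplicity."""
    round_trips = math.ceil(n_commands / batch_size)
    return round_trips * rtt_ms

# 10,000 GETs over a 0.5 ms RTT link
print(total_latency_ms(10_000))                  # 5000.0 ms unpipelined
print(total_latency_ms(10_000, batch_size=100))  # 50.0 ms pipelined
```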

Lua Scripting for Transactions

When multiple GET and SET operations need to be chained together atomically, Lua scripting helps. As a scripting language, Lua has direct access to Redis keys and internals.

Some ways Lua aids transaction-style GET operations:

  • Atomic execution prevents midway state visibility
  • Access Redis keys directly within scripts
  • Manipulate data structures before returning
  • Idempotent execution design possible

Example increment script:

local val = tonumber(redis.call("GET", KEYS[1]) or 0)
val = val + tonumber(ARGV[1])
redis.call("SET", KEYS[1], val)
return val

By eliminating network round trips in a script, Redis executes multiple gets/sets as a single isolated unit for correctness.

Scaling Gets in Distributed Redis

For large scale environments, Redis offers distributed solutions to scale linearly past single node limits. This enables scaling key GET capacity to 100s of millions ops/sec.

Some salient ways Distributed Redis helps:

  • Sharding splits keys across many Redis instances to scale memory and ops.
  • Further partitioning splits shards into smaller units for finer control.
  • With pre-splitting, adding capacity is a simple online operation.
  • Topology aware routing retains shard locality for faster access.
  • Replication handles high availability for writes and reads.

So horizontally scaling keeps GET times fast while enormously multiplying throughput capacity.
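The sharding step above is fully deterministic in Redis Cluster: a key maps to one of 16384 slots via CRC16 (the XMODEM variant mandated by the cluster specification), and a {hash tag} lets related keys share a slot so multi-key operations stay on one node. A sketch of that mapping:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the variant the Redis Cluster spec mandates."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Map a key to one of 16384 cluster slots. A non-empty {hash tag}
    pins related keys (user:{42}:name, user:{42}:email) to one slot."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # only non-empty tags count
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

print(hash_slot("user:{42}:name") == hash_slot("user:{42}:email"))  # True
```

A cluster-aware client computes this slot locally and routes each GET straight to the owning shard.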

Augmenting Key Access with Modules

Redis natively supports a fixed set of types – strings, lists, sets, hashes, streams and so on. To aid storing and accessing more complex data formats, it offers a Modules API for extending functionality safely without affecting core performance.

Modules packaged as dynamically loadable libraries can add new data types and commands while integrating tightly with the Redis ecosystem.

Popular modules that enhance and simplify key access for specialized data structures are:

RediSearch: Provides full-text search and secondary indexes over hashes and JSON documents stored at Redis keys.

RedisJSON (formerly ReJSON): Enables storing, indexing and manipulating nested JSON documents as Redis keys, with path-level accessors.

RedisGraph: Models graphs as keys for low-latency traversal and queries alongside the main database.

RedisBloom: Extends Redis keys with space-efficient probabilistic data structures like Bloom filters for analytics.

So Modules expand the utility of Redis GET to retrieve richer data formats and types conveniently.

Deployment Best Practices

In production environments, the way Redis is set up can significantly impact performance. Here are some key best practices that help optimize Redis instances for peak GET efficiency:

  • Provision dedicated, high-spec hardware like EC2 r5 instances maximizing memory and compute.
  • For persistence, the append-only file (AOF) with appendfsync everysec offers good durability with modest write overhead; RDB snapshots are cheaper but lose more data on a crash.
  • Schedule RDB snapshots and AOF rewrites during off-peak windows to minimize I/O impact.
  • Evaluate replication tradeoffs on whether synchronous commits affect read QoS.
  • Prefer replication groups over standalone masters for high availability.
  • Firewall instance ports for security while allowing app communication internally.
  • Analyze workload patterns to set aside resources for buffering periodic spikes.
  • Use cloud redundancy zones and regions to be robust against outages.
  • Monitor essential metrics via tools like Prometheus for visibility.
  • Analyze slow logs to identify poorly indexed keys to optimize.

So tangible benefits emerge from optimizing Redis production topologies for application needs.

Conclusion: Choosing the Right Key Tool for the Job

Retrieving key values forms the basis of Redis's usefulness across a vast variety of problem domains. Once the essential key access path is optimized for peak performance, application data structures and workflows can be molded towards computational needs with great flexibility.

The multitude of data models – ranging from simple strings to complex nested structures – that can be stored and accessed as keys is expanding rapidly, enabling ever more application possibilities. Redis Enterprise adds further tools for high performance at global scale.

Redis workloads also tend to accumulate over time on account of versatile, durable data storage guarantees. So long term capacity planning is advised right from initial rollouts. Having clear processes around key eviction, backup and purging avoids surprises later.

Given its stellar speed, throughput and scalability for key operations, Redis GET works equally well as a primary database supporting OLTP apps or as a fast caching layer for hot data access. Its simple GET and SET abstraction empowers architects without compromising sophistication or real world delivery capability.

So whether the need is microsecond response times for millions of users or a durable data layer feeding application state to serverless functions, Redis GET is the first step toward realization.
