OCI Cache Client Best Practices
The following best practices apply to all OCI Cache client interactions, whether the target cluster operates in non-sharded or sharded mode. Following these guidelines ensures consistent behavior and mitigates common operational risks. These recommendations provide general guidance and aren't intended as exhaustive documentation. For detailed information on your specific libraries and Redis or Valkey use cases, consult the appropriate resources.
Cluster sizing and configuration recommendations
Selecting the right cache size for your cluster helps you optimize performance and resource usage. Your cache size should reflect expected application workloads and adjust to changes during peak business periods. OCI Cache lets you scale your cluster up or down to match your needs.
Before you size your cache cluster, consider your application's workload patterns and future growth expectations.
- Is your application read-heavy or write-heavy?
- Read-heavy: If your data size does not exceed 500 GB, use a non-sharded cluster and add enough read replicas to distribute traffic.
- Write-heavy: If you plan to store large amounts of data in the cache, consider a sharded cluster. Sharding distributes write traffic across nodes, resulting in higher throughput and avoiding bottlenecks.
- Both read and write: Use a sharded cluster with read replicas for each shard. This setup provides high availability and scalability for both workloads.
- How do you choose the right cache size?
Set your cache size to at least 50% larger than the current data size. This buffer handles unexpected spikes in workloads until you resize the cluster.
- How can you plan for future increases in storage or write capacity?
Create a sharded cluster for flexibility. Sharded clusters make it easier to scale storage and write capacity as your requirements grow.
- When should you use multiple databases in the cache?
If your application needs to separate data in the same cache for different purposes, such as caching and session management, choose a non-sharded cluster and adjust the databases parameter as needed. Support for multiple databases is available only in non-sharded clusters.
- Can you run Lua scripts in a sharded cluster?
Use a non-sharded cluster for Lua scripts. Lua scripts require all referenced keys to reside in the same slot, which is easier to manage without sharding.
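Cluster-mode clients decide key placement by hashing each key to one of 16384 slots with CRC16, honoring `{...}` hash tags. The sketch below (plain Python, assuming the standard Redis/Valkey Cluster slot algorithm) shows how hash tags force related keys into the same slot so that a Lua script can reference them together:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem), the checksum Redis/Valkey Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to its cluster slot, honoring {hash tag} sections."""
    raw = key.encode()
    start = raw.find(b"{")
    if start != -1:
        end = raw.find(b"}", start + 1)
        if end > start + 1:  # non-empty tag: hash only the tag contents
            raw = raw[start + 1:end]
    return crc16(raw) % 16384

# Keys sharing the tag {user:1} land in the same slot, so a Lua script
# (or a transaction) can touch both even on a sharded cluster.
print(key_slot("{user:1}:profile") == key_slot("{user:1}:sessions"))  # True
```

On a non-sharded cluster all keys share one keyspace, which is why Lua scripts are simpler there; with sharding, hash tags are the usual workaround.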
Client library compatibility
Use client libraries that are fully compatible with OCI Cache. The recommended minimum versions include Lettuce (6.x or later, especially for sharded cluster support) and Redisson (3.36.0 or later), which support features such as Transport Layer Security (TLS) and authentication mechanisms required by OCI Cache. Always test new library versions in a staging environment before deploying to production to ensure compatibility and stability.
The following table lists the minimum compatible versions of client libraries for various programming languages that support cluster mode (sharding) in OCI Cache, along with relevant notes on cluster support capabilities.
| Library (Language) | GitHub Link | Minimum Compatible Version | Cluster Support Notes |
|---|---|---|---|
| redis-py (Python) | redis/redis-py | 5.0.0+ | Native cluster mode support since 5.0.0 |
| Lettuce (Java) | lettuce | 6.3.0 | Full cluster support |
| Jedis (Java) | jedis | 3.1 | Cluster supported via JedisCluster |
| Redisson (Java) | redisson | 3.36.0 | Fully supports Redis cluster |
| redis-rb (Ruby) | redis-rb | 5.3.0 | Requires redis-clustering gem |
| redis-rs (Rust) | redis-rs | 0.26.1 | Partial support; cluster mode can vary by use case |
| phpredis (PHP) | phpredis | 6.0.0+ | Cluster support via RedisCluster |
| Go-Redis (Go) | go-redis | v9.6.1 | Fully supports cluster mode |
| StackExchange.Redis (C#) | StackExchange.Redis | Unsupported | Requires AllowAdmin=true, lacks full cluster compatibility; hostname communication unsupported |
| ServiceStack.Redis (C#) | ServiceStack.Redis | Unsupported | No cluster mode support |
| hiredis-cluster (C) | hiredis-cluster | Unsupported | No support for hostname communication |
Connection optimization
The following suggestions apply to your application code. In addition to these, consider implementing exponential backoff for retries.
- Enable connection pooling: Reuse existing connections to reduce the overhead of establishing new ones for each request. This practice prevents resource exhaustion and enhances application performance. Tune pool sizes based on application concurrency and cluster capacity. Set parameters such as minimum or maximum idle connections, connection timeouts, and maximum wait times for connections.
- Set appropriate timeouts: Define suitable timeout values for commands and connections, typically between 2000 ms and 5000 ms, to avoid delays during network disruptions or server delays.
- Configure retry mechanisms: Establish retry policies for handling transient failures such as network interruptions. For example, configure three retry attempts with a 1000 ms interval between tries to help ensure recovery.
- Manage idle connections: Close unused connections to avoid leaks and free up resources, preventing the cache node from exceeding its connection limit.
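The timeout and retry guidance above can be sketched as a small wrapper. This is a generic, library-agnostic example (the function name `with_retries` and its parameters are illustrative, not an OCI or client-library API); it uses exponential backoff with jitter rather than a fixed 1000 ms interval, which spreads out reconnect storms across clients:

```python
import random
import time

def with_retries(operation, attempts=3, base_delay=0.1, max_delay=2.0,
                 retryable=(ConnectionError, TimeoutError), sleep=time.sleep):
    """Run operation, retrying transient failures with exponential backoff.

    Illustrative sketch: attempts/base_delay/max_delay are tuning knobs,
    and 'sleep' is injectable so the behavior can be tested without waiting.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except retryable:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the failure to the caller
            delay = min(max_delay, base_delay * (2 ** attempt))
            sleep(delay * random.uniform(0.5, 1.0))  # jitter avoids thundering herds
```

A cache read would then be wrapped as `with_retries(lambda: client.get("key"))`, leaving permanent errors (such as authentication failures) to propagate immediately.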
Security and TLS compliance
- OCI Cache mandates TLS for all client connections by default. TLS is always enabled on port 6379. You can allow additional TCP access on port 7379 if needed. Ensure that client libraries support TLS version 1.2 or later and use current cipher suites to maintain secure communication channels.
- If performance is critical, collaborate with OCI support and security teams to assess the impact of TLS overhead. Disabling TLS in production environments is discouraged because of security vulnerabilities.
Security best practices
Secure your OCI Cache from the outset to prevent unauthorized access and vulnerabilities. While OCI Cache mandates TLS for all client connections by default, additional security measures are critical for production environments.
- Restrict access: Configure network policies to allow connections only from trusted networks or localhost. Avoid exposing OCI Cache endpoints to the public internet.
- Set strong authentication: Ensure robust passwords or authentication mechanisms are in place as per OCI security guidelines to prevent unauthorized access. Additionally, use Zero Trust Packet Routing and network security group (NSG) support in the service to enforce policy-based access to OCI Cache instances. This approach further enhances security by restricting access to authorized entities only.
- Disable dangerous commands: Limit access to potentially harmful commands such as `FLUSHALL` and `CONFIG` to prevent accidental or malicious data loss in Redis.
- Leverage protected configurations: Use OCI Cache's built-in security features and regularly review access controls to maintain a secure environment.
Key management and expiry
You do not need to set a time-to-live (TTL) for every key. Some keys might persist indefinitely. However, we recommend setting reasonable TTL values for most keys. Setting TTLs helps prevent stale or orphaned data from remaining in the cache. This approach ensures data freshness and efficient memory usage.
When the cache reaches its memory limit, you can configure the maxmemory-policy parameter to automatically remove keys according to the selected eviction policy. This configuration helps optimize memory management.
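To illustrate the expiry semantics, here is a minimal in-process sketch (the `TTLCache` class is purely illustrative, not an OCI Cache API) that mimics how a key set with a TTL, as with Redis `SET key value EX seconds`, stops being readable once it expires, while a key without a TTL persists:

```python
import time

class TTLCache:
    """Minimal in-process sketch of TTL-based expiry (illustrative only)."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at or None)

    def set(self, key, value, ttl=None, now=None):
        """Store a value; ttl is in seconds, None means no expiry."""
        now = time.monotonic() if now is None else now
        self._store[key] = (value, None if ttl is None else now + ttl)

    def get(self, key, now=None):
        """Return the value, or None if missing or expired (lazy expiry)."""
        now = time.monotonic() if now is None else now
        item = self._store.get(key)
        if item is None:
            return None
        value, expires_at = item
        if expires_at is not None and now >= expires_at:
            del self._store[key]  # reclaim memory on access, as Redis does lazily
            return None
        return value
```

The `now` parameter is injectable only to make the behavior testable; in a real cache the server tracks expiry for you, which is exactly why setting TTLs keeps memory usage bounded.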
Monitoring and alert configuration
- Monitor key metrics such as memory utilization, and set alerts for thresholds (for example, when utilization exceeds 80% for 30 minutes) to prevent out-of-memory conditions and cluster failures. Continuous monitoring is essential to maintain OCI Cache performance as workloads evolve.
- Enable detailed logging: Use tools such as `SLOWLOG` to identify slow commands before they impact performance.
- Review logs regularly: Check for warnings, errors, or unusual patterns in OCI Cache logs to preempt issues.
- Monitor memory and eviction rates: Track key eviction frequency alongside existing metrics such as memory utilization (for example, above 80% for 30 minutes) to adjust memory limits or policies proactively.
- Track metrics including connected clients, rejected connections, and command latency to identify and resolve bottlenecks or anomalies. Monitor cache hit and miss ratios to detect inefficient usage and adjust key or eviction policies as needed.
- Set alarms for other metrics such as eviction events and connection count to maintain cluster health.
- Resize the cluster if memory utilization consistently breaches the limit, or if CPU utilization indicates higher write operations.
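As a concrete example of tracking hit and miss ratios, the helper below computes the ratio from the `keyspace_hits` and `keyspace_misses` counters reported in the Redis/Valkey `INFO stats` section (the function itself is an illustrative sketch):

```python
def hit_ratio(info_stats):
    """Compute the cache hit ratio from INFO 'stats' section counters.

    info_stats: a dict-like view of the INFO stats section, e.g. the dict
    a redis-py client returns from client.info("stats").
    """
    hits = info_stats.get("keyspace_hits", 0)
    misses = info_stats.get("keyspace_misses", 0)
    total = hits + misses
    return None if total == 0 else hits / total  # None: no lookups recorded yet
```

A persistently low ratio suggests keys are expiring or being evicted before they are reused, which is a signal to revisit TTLs, key design, or cluster memory size.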
Optimized command usage
- Minimize the use of `O(n)` commands such as `KEYS` and `SMEMBERS` on large datasets, as they can block Redis operations. Instead, use iterative commands such as `SCAN`, `SSCAN`, and `HSCAN` to traverse data without locking up the system.
- Consult the list of unsupported commands in OCI Cache documentation to ensure adherence to compatibility requirements for sharded clusters.
- Store related data in hashes rather than several single keys to save memory and quicken lookups in Redis environments.
- Use pipelining to send several commands at once, reducing round-trip times, and set up connection pooling to maintain low response times and minimize overhead.
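As a sketch of the `SCAN`-based pattern, the helper below deletes keys matching a pattern without ever calling `KEYS`. It assumes a redis-py style client (with `scan_iter` and `delete` methods) but is duck-typed, and it deletes keys one at a time so it also stays safe on sharded clusters, where multi-key commands spanning slots fail:

```python
def delete_matching(client, pattern, count=500):
    """Delete keys matching pattern via the non-blocking SCAN family.

    Assumes a redis-py style client exposing scan_iter(match=..., count=...)
    and delete(key). SCAN yields keys in small batches ('count' is a hint),
    so the server never blocks the way it would on KEYS over a large dataset.
    Deleting one key per call avoids CROSSSLOT errors on sharded clusters.
    """
    deleted = 0
    for key in client.scan_iter(match=pattern, count=count):
        deleted += client.delete(key)
    return deleted
```

With a real client this would be called as `delete_matching(client, "session:*")`; for very large keyspaces, consider `UNLINK` (where supported) to make the reclamation asynchronous.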
Client Best Practices for Non-Sharded Clusters
Non-sharded clusters in OCI Cache operate with a primary node and optional replica nodes, replicating data across all nodes to ensure high availability. Use these practices to maximize availability and performance.
High availability configuration
- Deploy clusters with at least three nodes (one primary and two replicas) to ensure high availability. OCI Cache distributes nodes across fault domains and availability domains to enhance resilience against localized failures.
- Direct write operations to the primary endpoint and use replica endpoints for read operations to distribute load and improve redundancy.
- Make applications flexible by specifying endpoints in configuration files to support any cluster changes because of failovers or scaling events.
Client Best Practices for Sharded Clusters
Sharded clusters in OCI Cache partition data across multiple shards, each with a primary node and optional replicas for scalability and performance. The following practices are specific to clients interfacing with sharded clusters, addressing the unique challenges of distributed data management.
Selection of cluster-compatible clients
For optimal performance and scalability, use client libraries that support cluster mode and hostname resolution. When selecting a client, prioritize libraries with explicit support for Redis cluster mode and hostname resolution, such as Lettuce (version 6.x or later) or Redisson (version 3.36.0 or later).
In addition to the libraries mentioned in the compatibility matrix, such as Lettuce (version 6.x or later) and Redisson (version 3.36.0 or later), other libraries are available for various programming languages that can support sharded clusters. For a comprehensive list of compatible client libraries, see the Valkey documentation.
Ensure that the selected client can dynamically manage topology changes and slot mapping without manual intervention. This capability is essential for seamless cluster operations.
Endpoint configuration for resilience
- Configure applications to specify primary endpoints for at least three distinct shards (for example, nodes with hostnames ending in `-1-1`, `-2-1`, and `-3-1`) to ensure connectivity if a shard or node is unavailable.
- Make applications flexible by specifying endpoints in configuration files. This approach enables the application to adapt to cluster changes that result from failovers or scaling events.
- Use the discovery endpoint, which does not change for the lifetime of the cluster.
Automatic topology discovery
- Enable client libraries to retrieve and periodically refresh cluster topology information at startup or runtime. This approach allows clients to adapt to node failover or cluster resizing without updating configurations.
- Use the discovery endpoint, which stays the same for the entire lifetime of the cluster.
Scalability and high availability support
- Use client libraries that support auto-reconnection and resharding to keep applications running during cluster resizing (for example, while adding or removing shards).
- Configure each shard with at least two and up to four replicas to support failover and high availability. OCI Cache distributes shards across availability domains and fault domains to improve resilience.
Sharded cluster metric monitoring
- Set alerts for critical metrics, including memory utilization (for example, above 80% for 30 minutes) and node availability.
- Monitor node-level metrics for each shard, such as memory usage, to decide the best data distribution across shards and ensure balanced cluster performance.
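As one illustrative way to act on per-shard memory metrics, the helper below flags shards whose usage deviates from the cluster mean. The function and the input shape are hypothetical; in practice, the per-shard numbers would come from the OCI monitoring service:

```python
def imbalanced_shards(usage_by_shard, tolerance=0.2):
    """Return shard names whose memory usage deviates from the mean by more
    than the given fractional tolerance.

    usage_by_shard: mapping of shard name -> used memory in bytes
    (hypothetical shape, for illustration only).
    """
    if not usage_by_shard:
        return []
    mean = sum(usage_by_shard.values()) / len(usage_by_shard)
    if mean == 0:
        return []
    return sorted(shard for shard, used in usage_by_shard.items()
                  if abs(used - mean) / mean > tolerance)
```

A shard that consistently shows up here often indicates hot keys or overly broad hash tags concentrating data in one slot range, which is worth fixing in key design before resizing the cluster.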