Types of Caches in Coherence

Overview

The following is an overview of the types of caches offered by Coherence. More detail is provided in later sections.

Replicated

A clustered, fault-tolerant cache in which data is fully replicated to every member of the cluster. This topology offers the fastest read performance and linear read scalability, but poor write scalability, because every write must be processed by every member in the cluster. Since data is replicated to all machines, adding servers does not increase aggregate cache capacity.
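
The NamedCache API is the same regardless of topology; only the cache configuration determines which scheme backs a given cache name. Below is a minimal sketch of using a replicated cache from Java, assuming a hypothetical cache name "repl-example" that the operative cache configuration file maps to a replicated scheme.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class ReplicatedExample {
        public static void main(String[] args) {
            // "repl-example" is an assumed name; it must be mapped to a
            // replicated scheme in the operative cache configuration file.
            NamedCache cache = CacheFactory.getCache("repl-example");

            // A write is propagated to every member holding the cache...
            cache.put("key", "value");

            // ...so every subsequent read is satisfied from local memory.
            Object value = cache.get("key");
            System.out.println(value);

            CacheFactory.shutdown();
        }
    }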

Optimistic

OptimisticCache is a clustered cache implementation similar to the ReplicatedCache implementation, but without any concurrency control. This implementation has the highest possible throughput. It also allows an alternative underlying store to be used for the cached data (for example, an MRU/MFU-based cache). However, if two cluster members independently prune or purge their underlying local stores, the contents of those stores may differ from one member to another.

Distributed (Partitioned)

Clustered, fault-tolerant cache with linear scalability. Data is partitioned among all machines in the cluster, so adding servers increases aggregate cache capacity. For fault tolerance, partitioned caches can be configured to keep each piece of data on one, two, or more unique machines within the cluster.
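
As the summary table below notes, partitioned caches also support explicit concurrency control. The following sketch, assuming a hypothetical cache name "dist-example" mapped to a distributed (partitioned) scheme, uses the lock and unlock methods that NamedCache inherits from Coherence's ConcurrentMap interface to guard a read-modify-write sequence.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class PartitionedLockExample {
        public static void main(String[] args) {
            // "dist-example" is an assumed name mapped to a distributed
            // (partitioned) scheme; the member owning the key's partition
            // is located automatically on each access.
            NamedCache cache = CacheFactory.getCache("dist-example");

            // Per-key lock; a wait time of -1 blocks until the lock is held.
            if (cache.lock("account-42", -1)) {
                try {
                    Integer balance = (Integer) cache.get("account-42");
                    cache.put("account-42", balance == null ? 100 : balance + 100);
                } finally {
                    cache.unlock("account-42");
                }
            }

            CacheFactory.shutdown();
        }
    }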

Near

A hybrid cache that fronts a fault-tolerant, scalable partitioned cache with a local cache. The near cache invalidates entries in its front cache using a configurable invalidation strategy, and provides excellent performance and synchronization. A near cache backed by a partitioned cache offers near-instant local access for repeatedly accessed data while enabling concurrency and ensuring coherency and failover, effectively combining the best attributes of replicated and partitioned caches.
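
From application code a near cache is used exactly like any other NamedCache; the front map is populated and invalidated transparently. A minimal sketch, assuming a hypothetical cache name "near-example" mapped to a near scheme (local front map over a partitioned back cache):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class NearCacheExample {
        public static void main(String[] args) {
            // "near-example" is an assumed name mapped to a near scheme in
            // the operative cache configuration file.
            NamedCache cache = CacheFactory.getCache("near-example");

            cache.put("customer-1", "Alice");

            // The first read may travel to the partitioned back cache and
            // populates the local front map on the way back.
            Object first = cache.get("customer-1");

            // Repeat reads of the same key are served from the front map
            // until the configured invalidation strategy removes the entry.
            Object second = cache.get("customer-1");

            System.out.println(first + " / " + second);
            CacheFactory.shutdown();
        }
    }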

Summary of Cache Types

Numerical Terms:

JVMs = number of JVMs
DataSize = total size of cached data (measured without redundancy)
Redundancy = number of copies of data maintained
LocalCache = size of local cache (for near caches)

(Numbers in parentheses refer to the notes below the table.)

Characteristic           | Replicated Cache | Optimistic Cache | Partitioned Cache | Near Cache (backed by partitioned cache) | Versioned Near Cache (backed by partitioned cache) | LocalCache (not clustered)
Topology                 | Replicated | Replicated | Partitioned | Local caches + partitioned cache | Local caches + partitioned cache | Local cache
Fault Tolerance          | Extremely high | Extremely high | Configurable (4): zero to extremely high | Configurable (4): zero to extremely high | Configurable (4): zero to extremely high | Zero
Read Performance         | Instant (5) | Instant (5) | Locally cached: instant (5); remote: network speed (1) | Locally cached: instant (5); remote: network speed (1) | Locally cached: instant (5); remote: network speed (1) | Instant (5)
Write Performance        | Fast (2) | Fast (2) | Extremely fast (3) | Extremely fast (3) | Extremely fast (3) | Instant (5)
Memory Usage (Per JVM)   | DataSize | DataSize | DataSize/JVMs x Redundancy | LocalCache + [DataSize/JVMs] | LocalCache + [DataSize/JVMs] | DataSize
Memory Usage (Total)     | JVMs x DataSize | JVMs x DataSize | Redundancy x DataSize | [Redundancy x DataSize] + [JVMs x LocalCache] | [Redundancy x DataSize] + [JVMs x LocalCache] | n/a
Coherency                | Fully coherent | Fully coherent | Fully coherent | Fully coherent (6) | Fully coherent | n/a
Locking                  | Fully transactional | None | Fully transactional | Fully transactional | Fully transactional | Fully transactional
Typical Uses             | Metadata | n/a (see Near Cache) | Read-write caches | Read-heavy caches with access affinity | n/a (see Near Cache) | Local data

Notes:

(1) As a rough estimate, with 100 Mbit Ethernet, network reads typically require ~20 ms for a 100 KB object. With Gigabit Ethernet, network reads for 1 KB objects are typically sub-millisecond.
(2) Requires UDP multicast or a few UDP unicast operations, depending on JVM count.
(3) Requires a few UDP unicast operations, depending on level of redundancy.
(4) Partitioned caches can be configured with as many levels of backup as desired, or zero if desired. Most installations use one backup copy (two copies total).
(5) Limited by local CPU/memory performance, with negligible processing required (typically sub-millisecond performance).
(6) Listener-based near caches are coherent; expiry-based near caches are partially coherent for non-transactional reads and coherent for transactional access.
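
As a worked example of the memory-usage formulas above (using assumed figures): with JVMs = 4, DataSize = 1 GB, Redundancy = 2 (one backup copy), and LocalCache = 100 MB, a replicated or optimistic cache holds 1 GB per JVM and 4 GB in total; a partitioned cache holds 1 GB / 4 x 2 = 512 MB per JVM and 2 x 1 GB = 2 GB in total; and a near cache backed by that partitioned cache holds about 100 MB + 256 MB = 356 MB per JVM and [2 x 1 GB] + [4 x 100 MB] = 2.4 GB in total.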
