Oracle® Coherence Developer's Guide
Release 3.7

Part Number E18677-01

11 Introduction to Caches

This chapter provides an overview and comparison of the basic cache types offered by Coherence. The chapter includes the following sections:

Section 11.1, "Distributed Cache"
Section 11.2, "Replicated Cache"
Section 11.3, "Optimistic Cache"
Section 11.4, "Near Cache"
Section 11.5, "Local Cache"
Section 11.6, "Remote Cache"
Section 11.7, "Summary of Cache Types"

11.1 Distributed Cache

A distributed, or partitioned, cache is a clustered, fault-tolerant cache that has linear scalability. Data is partitioned among all the computers of the cluster. For fault-tolerance, partitioned caches can be configured to keep each piece of data on one or more unique computers within a cluster. Distributed caches are the most commonly used caches in Coherence.

Coherence defines a distributed cache as a collection of data that is distributed (or, partitioned) across any number of cluster nodes such that exactly one node in the cluster is responsible for each piece of data in the cache, and the responsibility is distributed (or, load-balanced) among the cluster nodes.
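A client works with a distributed cache through the standard NamedCache API. The following is a minimal sketch; the cache name "example-distributed" is hypothetical and assumes a cache configuration that maps it to a distributed scheme:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class DistributedCacheExample {
    public static void main(String[] args) {
        // Join the cluster and obtain the named cache; the name
        // "example-distributed" is hypothetical and must be mapped to a
        // distributed scheme in the cache configuration file.
        NamedCache cache = CacheFactory.getCache("example-distributed");

        // NamedCache extends java.util.Map; the node that owns the
        // partition for this key is reached in a single network hop.
        cache.put("key1", "value1");
        Object value = cache.get("key1");
        System.out.println(value);

        // Release local resources and leave the cluster.
        CacheFactory.shutdown();
    }
}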

There are several key points to consider about a distributed cache:

Access to the distributed cache often must go over the network to another cluster node. All other things being equal, if there are n cluster nodes, (n - 1) / n of the operations go over the network; for example, in a four-node cluster, on average three out of four operations access data held by another node:

Figure 11-1 Get Operations in a Partitioned Cache Environment

Conceptual view of a partitioned cache (get)

Since each piece of data is managed by only one cluster node, an access over the network is only a "single hop" operation. This type of access is extremely scalable, since it can use point-to-point communication and thus take optimal advantage of a switched network.

Similarly, a cache update operation can use the same single-hop point-to-point approach, which addresses a known limitation of a replicated cache, the requirement to push cache updates to all cluster nodes.

Figure 11-2 Put Operations in a Partitioned Cache Environment

Conceptual view of a partitioned cache (put)

In the figure above, the data is being sent to a primary cluster node and a backup cluster node. This is for failover purposes, and corresponds to a backup count of one. (The default backup count setting is one.) If the cache data were not critical, which is to say that it could be re-loaded from disk, the backup count could be set to zero, which would allow some portion of the distributed cache data to be lost if a cluster node fails. If the cache were extremely critical, a higher backup count, such as two, could be used. The backup count only affects the performance of cache modifications, such as those made by adding, changing or removing cache entries.
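As a configuration sketch, the following cache configuration fragment defines a distributed scheme with a backup count of one; the scheme and service names are illustrative, not required values:

<distributed-scheme>
  <scheme-name>example-distributed</scheme-name>
  <service-name>DistributedCache</service-name>
  <backup-count>1</backup-count>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>

Setting the backup-count element to zero trades fault tolerance for slightly faster writes, as described above.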

Modifications to the cache are not considered complete until all backups have acknowledged receipt of the modification. This imposes a slight performance penalty on cache modifications when distributed cache backups are used; however, it guarantees that data consistency is maintained and no data is lost if a cluster node unexpectedly fails.

Failover of a distributed cache involves promoting backup data to primary storage. When a cluster node fails, the remaining cluster nodes determine which data they hold in backup that the failed node was primarily responsible for. That data becomes the responsibility of whatever cluster node was the backup for the data:

Figure 11-3 Failover in a Partitioned Cache Environment

Conceptual view of a partitioned cache (failover)

If there are multiple levels of backup, the first backup becomes responsible for the data; the second backup becomes the new first backup, and so on. Just as with the replicated cache service, lock information is retained in the event of server failure; the sole exception is that the locks held by the failed cluster node are automatically released.

The distributed cache service also allows certain cluster nodes to be configured to store data and others to be configured not to store data. The name of this setting is local storage enabled. Cluster nodes that are configured with the local storage enabled option provide the cache storage and the backup storage for the distributed cache. Regardless of this setting, all cluster nodes have the same exact view of the data, due to location transparency.

Figure 11-4 Local Storage in a Partitioned Cache Environment

Conceptual view of a partitioned cache (storage)
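Whether a node stores data for the distributed cache service can be controlled when the node is started. As a sketch, the following command starts a hypothetical application class as a storage-disabled client by using the local storage system property:

java -Dtangosol.coherence.distributed.localstorage=false MyApplication

Here MyApplication is a placeholder for the application's main class; storage-enabled cache servers are simply started without this property (it defaults to true).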

There are several benefits to the local storage enabled option. Most notably, it allows a cluster to be divided into dedicated cache-server nodes, which hold the cache data and its backups, and application nodes, whose Java heap size is unaffected by the amount of data in the cache because they store none of it.

11.2 Replicated Cache

A replicated cache is a clustered, fault-tolerant cache where data is fully replicated to every member in the cluster. This cache offers the fastest read performance with linear performance scalability for reads but poor scalability for writes (as writes must be processed by every member in the cluster). Because data is replicated to all servers, adding servers does not increase aggregate cache capacity.

The replicated cache excels in its ability to handle data replication, concurrency control and failover in a cluster, all while delivering in-memory data access speeds. A clustered replicated cache is exactly what it says it is: a cache that replicates its data to all cluster nodes.
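As a configuration sketch, a replicated cache is defined with a replicated scheme; the scheme name below is illustrative:

<replicated-scheme>
  <scheme-name>example-replicated</scheme-name>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</replicated-scheme>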

There are several challenges to building a reliable replicated cache. The first is how to get it to scale and perform well. Updates to the cache have to be sent to all cluster nodes, and all cluster nodes have to end up with the same data, even if multiple updates to the same piece of data occur at the same time. Also, if a cluster node requests a lock, it should not have to get all cluster nodes to agree on the lock; otherwise, locking scales extremely poorly. Yet in the event of cluster node failure, all of the data and lock information must be kept safe. Coherence handles all of these scenarios transparently, and provides the most scalable and highly available replicated cache implementation available for Java applications.

The best part of a replicated cache is its access speed. Since the data is replicated to each cluster node, it is available for use without any waiting. This is referred to as "zero latency access," and is perfect for situations in which an application requires the highest possible speed in its data access. Each cluster node (JVM) accesses the data from its own memory:

Figure 11-5 Get Operation in a Replicated Cache Environment

Conceptual view of a replicated cache (get)

In contrast, updating a replicated cache requires pushing the new version of the data to all other cluster nodes:

Figure 11-6 Put Operation in a Replicated Cache Environment

Conceptual view of a replicated cache (put)

Coherence implements its replicated cache service in such a way that all read-only operations occur locally, all concurrency control operations involve at most one other cluster node, and only update operations require communicating with all other cluster nodes. The result is excellent scalable performance, and as with all of the Coherence services, the replicated cache service provides transparent and complete failover and failback.
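Explicit concurrency control is available through the ConcurrentMap methods that NamedCache exposes. The following minimal sketch locks a key, updates it, and releases the lock; the cache name and key are hypothetical:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class LockExample {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("example-replicated");

        // lock(key, -1) blocks until the lock is acquired; in a replicated
        // cache, acquiring it involves at most one other cluster node.
        if (cache.lock("key1", -1)) {
            try {
                Object value = cache.get("key1"); // read is local
                cache.put("key1", value + "-v2"); // write goes to all nodes
            } finally {
                cache.unlock("key1");
            }
        }
        CacheFactory.shutdown();
    }
}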

The limitations of the replicated cache service should also be carefully considered. First, all of the data managed by the replicated cache service is held on each and every cluster node that has joined the service. That means that memory utilization (the Java heap size) is increased on each cluster node, which can impact performance. Second, replicated caches with a high incidence of updates do not scale linearly as the cluster grows; in other words, the cluster suffers diminishing returns as cluster nodes are added.

11.3 Optimistic Cache

An optimistic cache is a clustered cache implementation similar to the replicated cache implementation but without any concurrency control. This implementation offers higher write throughput than a replicated cache. It also allows an alternative underlying store for the cached data (for example, an MRU/MFU-based cache). However, if two cluster members are independently pruning or purging the underlying local stores, the store content held by each member may differ.
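As a configuration sketch, an optimistic cache is defined with an optimistic scheme; here the underlying local store is size-limited, so each member may prune its own content independently (the scheme name and limit are illustrative):

<optimistic-scheme>
  <scheme-name>example-optimistic</scheme-name>
  <backing-map-scheme>
    <local-scheme>
      <high-units>1000</high-units>
    </local-scheme>
  </backing-map-scheme>
  <autostart>true</autostart>
</optimistic-scheme>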

11.4 Near Cache

A near cache is a hybrid cache; it typically fronts a distributed cache or a remote cache with a local cache. A near cache invalidates front cache entries using a configured invalidation strategy, and provides excellent performance and synchronization. A near cache backed by a partitioned cache offers zero-millisecond local access for repeat data access while enabling concurrency and ensuring coherency and failover, effectively combining the best attributes of replicated and partitioned caches.

The objective of a Near Cache is to provide the best of both worlds between the extreme performance of the Replicated Cache and the extreme scalability of the Distributed Cache by providing fast read access to Most Recently Used (MRU) and Most Frequently Used (MFU) data. Therefore, the Near Cache is an implementation that wraps two caches: a "front cache" and a "back cache" that automatically and transparently communicate with each other by using a read-through/write-through approach.

The "front cache" provides local cache access. It is assumed to be inexpensive, in that it is fast, and is limited in terms of size. The "back cache" can be a centralized or multitiered cache that can load-on-demand in case of local cache misses. The "back cache" is assumed to be complete and correct in that it has much higher capacity, but more expensive in terms of access speed. The use of a Near Cache is not confined to Coherence*Extend; it also works with TCMP.

This design allows the coherency of a Near Cache to be configured, ranging from the most basic expiry-based and invalidation-based caches up to advanced caches that version data and provide guaranteed coherency. The result is a tunable balance between the preservation of local memory resources and the performance benefits of truly local caches.

The typical deployment uses a Local Cache for the "front cache". A Local Cache is a reasonable choice because it is thread safe, highly concurrent, size-limited, auto-expiring, and stores the data in object form. For the "back cache", a remote, partitioned cache is used.
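As a configuration sketch, that typical deployment can be expressed with a near scheme that combines a size-limited local front cache with a partitioned back cache; the names and limit below are illustrative:

<near-scheme>
  <scheme-name>example-near</scheme-name>
  <front-scheme>
    <local-scheme>
      <high-units>1000</high-units>
    </local-scheme>
  </front-scheme>
  <back-scheme>
    <distributed-scheme>
      <scheme-ref>example-distributed</scheme-ref>
    </distributed-scheme>
  </back-scheme>
  <invalidation-strategy>auto</invalidation-strategy>
  <autostart>true</autostart>
</near-scheme>

The invalidation-strategy element selects how front cache entries are invalidated when the corresponding entries change in the back cache.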

The following figure illustrates the data flow in a Near Cache. If the client writes an object D into the grid, the object is placed in the local cache inside the local JVM and in the partitioned cache that backs it (including a backup copy). If the client requests the object, it can be obtained from the local, or "front", cache in object form with no latency.

Figure 11-7 Put Operations in a Near Cache Environment

Conceptual view of a near cache (put)

If the client requests an object that has been expired or invalidated from the "front cache", then Coherence automatically retrieves the object from the partitioned cache. The "front cache" stores the object before the object is delivered to the client.

Figure 11-8 Get Operations in a Near Cache Environment

Conceptual view of a near cache (get)

11.5 Local Cache

While it is not a clustered service, the Coherence local cache implementation is often used in combination with various Coherence clustered cache services. The Coherence local cache is just that: a cache that is local to (completely contained within) a particular cluster node. There are several attributes of the local cache that are particularly interesting: it is thread safe and highly concurrent; it is size-limited, automatically evicting entries when a configured limit is reached; it supports automatic expiry of entries; and it stores data in object form, so local access involves no serialization.
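As a minimal sketch, a local cache can be instantiated directly; the unit limit and expiry delay below are illustrative values, not recommendations:

import com.tangosol.net.cache.LocalCache;

public class LocalCacheExample {
    public static void main(String[] args) {
        // A size-limited, auto-expiring cache: at most 1000 units, with
        // entries expiring 60,000 ms (one minute) after insertion.
        LocalCache cache = new LocalCache(1000, 60000);

        cache.put("key1", "value1");
        Object value = cache.get("key1"); // returned in object form
        System.out.println(value);
    }
}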

The local cache is important to the clustered cache services for several reasons, including as part of Coherence's near cache technology, and with the modular backing map architecture.

11.6 Remote Cache

A remote cache describes any out-of-process cache accessed by a Coherence*Extend client. All cache requests are sent to a Coherence proxy where they are delegated to a cache (replicated, optimistic, or partitioned). See the Oracle Coherence Client Guide for more information on using remote caches.
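As a configuration sketch, a Coherence*Extend client reaches a remote cache through a remote cache scheme; the scheme name, service name, address, and port below are hypothetical:

<remote-cache-scheme>
  <scheme-name>example-remote</scheme-name>
  <service-name>ExtendTcpCacheService</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>192.168.1.5</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
    </tcp-initiator>
  </initiator-config>
</remote-cache-scheme>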

11.7 Summary of Cache Types

Table 11-1 Summary of Cache Types and Characteristics

Parenthesized numbers refer to the notes following the table.

Characteristic         | Replicated Cache    | Optimistic Cache     | Partitioned Cache                                      | Near Cache (backed by partitioned cache)               | Local Cache (not clustered)
Topology               | Replicated          | Replicated           | Partitioned cache                                      | Local caches + partitioned cache                       | Local cache
Read Performance       | Instant (5)         | Instant (5)          | Locally cached: instant (5); remote: network speed (1) | Locally cached: instant (5); remote: network speed (1) | Instant (5)
Fault Tolerance        | Extremely high      | Extremely high       | Configurable (4), zero to extremely high               | Configurable (4), zero to extremely high               | Zero
Write Performance      | Fast (2)            | Fast (2)             | Extremely fast (3)                                     | Extremely fast (3)                                     | Instant (5)
Memory Usage (Per JVM) | DataSize            | DataSize             | DataSize/JVMs x Redundancy                             | LocalCache + [DataSize / JVMs]                         | DataSize
Coherency              | Fully coherent      | Fully coherent       | Fully coherent                                         | Fully coherent (6)                                     | n/a
Memory Usage (Total)   | JVMs x DataSize     | JVMs x DataSize      | Redundancy x DataSize                                  | [Redundancy x DataSize] + [JVMs x LocalCache]          | n/a
Locking                | Fully transactional | None                 | Fully transactional                                    | Fully transactional                                    | Fully transactional
Typical Uses           | Metadata            | n/a (see Near Cache) | Read-write caches                                      | Read-heavy caches with access affinity                 | Local data

Notes:

  1. As a rough estimate, with 100Mb Ethernet, network reads typically require ~20ms for a 100KB object. With gigabit Ethernet, network reads for 1KB objects are typically sub-millisecond.

  2. Requires UDP multicast or a few UDP unicast operations, depending on JVM count.

  3. Requires a few UDP unicast operations, depending on level of redundancy.

  4. Partitioned caches can be configured with as many levels of backup as desired, or zero if desired. Most installations use one backup copy (two copies total).

  5. Limited by local CPU/memory performance, with negligible processing required (typically sub-millisecond performance).

  6. Listener-based Near caches are coherent; expiry-based near caches are partially coherent for non-transactional reads and coherent for transactional access.