Oracle® Coherence Developer's Guide
Release 3.5

Part Number E14509-01

B Types of Caches in Coherence

This appendix provides an overview and comparison of the types of caches offered by Coherence.

Distributed Cache

A distributed, or partitioned, cache is a clustered, fault-tolerant cache that has linear scalability. Data is partitioned among all the machines of the cluster. For fault-tolerance, partitioned caches can be configured to keep each piece of data on one or more unique machines within a cluster. Distributed caches are the most commonly used caches in Coherence.
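A minimal sketch of using a partitioned cache from application code. The cache name dist-example is an arbitrary choice and is assumed to be mapped to a distributed scheme in the cache configuration; the application code itself does not change with the topology.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class DistributedCacheExample
        {
        public static void main(String[] args)
            {
            // Join (or start) the cluster and obtain a named cache.
            // "dist-example" is assumed to map to a distributed
            // (partitioned) scheme in the cache configuration file.
            NamedCache cache = CacheFactory.getCache("dist-example");

            // Writes go to the partition owner (and its backups);
            // reads are served by the owning member.
            cache.put("key", "value");
            System.out.println(cache.get("key"));

            // Release local resources and leave the cluster.
            CacheFactory.shutdown();
            }
        }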

Replicated Cache

A replicated cache is a clustered, fault-tolerant cache in which data is fully replicated to every member of the cluster. This cache offers the fastest read performance and linear scalability for reads, but poor scalability for writes, because each write must be processed by every member of the cluster. Because data is replicated to all machines, adding servers does not increase aggregate cache capacity.

Optimistic Cache

An optimistic cache is a clustered cache implementation similar to the replicated cache implementation, but without any concurrency control. It offers higher write throughput than a replicated cache. It also allows an alternative underlying store for the cached data (for example, an MRU/MFU-based cache). However, if two cluster members independently prune or purge their underlying local stores, the cache contents held by those members may differ.

Near Cache

A near cache is a hybrid cache; it typically fronts a distributed cache or a remote cache with a local cache. The near cache invalidates front cache entries using a configurable invalidation strategy, and provides excellent performance and synchronization. A near cache backed by a partitioned cache offers zero-millisecond local access for repeat data access, while enabling concurrency and ensuring coherency and fail-over, effectively combining the best attributes of replicated and partitioned caches.
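Near caches are normally defined declaratively in the cache configuration (a near scheme wrapping a front local cache and a back distributed cache), but the same structure can be assembled programmatically. A minimal sketch, assuming the back cache name dist-example maps to a partitioned scheme; the invalidation strategy constants live on com.tangosol.net.cache.CachingMap.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.CachingMap;
    import com.tangosol.net.cache.LocalCache;
    import com.tangosol.net.cache.NearCache;

    public class NearCacheExample
        {
        public static void main(String[] args)
            {
            // Back cache: the clustered partitioned cache
            // (the name is an assumption made for this sketch).
            NamedCache back = CacheFactory.getCache("dist-example");

            // Front cache: a small local cache holding at most 1000 entries.
            LocalCache front = new LocalCache(1000);

            // LISTEN_PRESENT registers listeners only for keys held in the
            // front cache, so front entries are invalidated when they
            // change in the back cache.
            NearCache near = new NearCache(front, back, CachingMap.LISTEN_PRESENT);

            near.put("key", "value");   // writes go through to the back cache
            near.get("key");            // repeat reads are served from the front cache

            near.release();
            CacheFactory.shutdown();
            }
        }

Defining the near scheme in the cache configuration is the more common approach, since it keeps the topology choice out of application code.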

Local Cache

A local cache is a cache that is local to (completely contained within) a particular cluster node. While it is not a clustered service, the Coherence local cache implementation is often used in combination with various clustered cache services.
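When used on its own, a local cache is simply a size-limited, optionally expiring map within a single JVM. A minimal sketch using com.tangosol.net.cache.LocalCache directly; the capacity and expiry values are arbitrary.

    import com.tangosol.net.cache.LocalCache;

    public class LocalCacheExample
        {
        public static void main(String[] args)
            {
            // At most 1000 entries, each expiring 60 seconds after insertion.
            // When the limit is reached, the cache prunes entries to stay
            // within the configured size.
            LocalCache cache = new LocalCache(1000, 60000);

            cache.put("key", "value");
            System.out.println(cache.get("key"));
            }
        }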

Remote Cache

A remote cache describes any out-of-process cache accessed by a Coherence*Extend client. All cache requests are sent to a Coherence proxy, where they are delegated to one of the other Coherence cache types (replicated, optimistic, or partitioned).
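From the application's point of view the API is unchanged; whether a name resolves to a clustered cache or to an Extend connection is decided by the client's cache configuration. A minimal sketch, assuming the name remote-example maps to a remote cache scheme pointing at a proxy.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class ExtendClientExample
        {
        public static void main(String[] args)
            {
            // The client JVM does not join the cluster; each request travels
            // over the Extend connection to a proxy, which delegates it to
            // the underlying clustered cache.
            NamedCache cache = CacheFactory.getCache("remote-example");

            cache.put("key", "value");
            System.out.println(cache.get("key"));

            CacheFactory.shutdown();
            }
        }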

Summary of Cache Types

Table B-1 Summary of Cache Types and Characteristics

Numbers in parentheses refer to the numbered notes following the table.

| Characteristic         | Replicated Cache     | Optimistic Cache     | Partitioned Cache                                      | Near Cache (backed by partitioned cache)               | Local Cache (not clustered) |
|------------------------|----------------------|----------------------|--------------------------------------------------------|--------------------------------------------------------|-----------------------------|
| Topology               | Replicated           | Replicated           | Partitioned Cache                                      | Local Caches + Partitioned Cache                       | Local Cache                 |
| Read Performance       | Instant (5)          | Instant (5)          | Locally cached: instant (5); Remote: network speed (1) | Locally cached: instant (5); Remote: network speed (1) | Instant (5)                 |
| Fault Tolerance        | Extremely High       | Extremely High       | Configurable (4): Zero to Extremely High               | Configurable (4): Zero to Extremely High               | Zero                        |
| Write Performance      | Fast (2)             | Fast (2)             | Extremely fast (3)                                     | Extremely fast (3)                                     | Instant (5)                 |
| Memory Usage (Per JVM) | DataSize             | DataSize             | DataSize/JVMs x Redundancy                             | LocalCache + [DataSize / JVMs]                         | DataSize                    |
| Coherency              | Fully coherent       | Fully coherent       | Fully coherent                                         | Fully coherent (6)                                     | n/a                         |
| Memory Usage (Total)   | JVMs x DataSize      | JVMs x DataSize      | Redundancy x DataSize                                  | [Redundancy x DataSize] + [JVMs x LocalCache]          | n/a                         |
| Locking                | Fully transactional  | None                 | Fully transactional                                    | Fully transactional                                    | Fully transactional         |
| Typical Uses           | Metadata             | n/a (see Near Cache) | Read-write caches                                      | Read-heavy caches with access affinity                 | Local data                  |

Notes:

  1. As a rough estimate, with 100mbit Ethernet, network reads typically require ~20ms for a 100KB object. With gigabit Ethernet, network reads for 1KB objects are typically sub-millisecond.

  2. Requires UDP multicast or a few UDP unicast operations, depending on JVM count.

  3. Requires a few UDP unicast operations, depending on level of redundancy.

  4. Partitioned caches can be configured with any number of backup levels, including zero. Most installations use one backup copy (two copies total).

  5. Limited by local CPU/memory performance, with negligible processing required (typically sub-millisecond performance).

  6. Listener-based Near caches are coherent; expiry-based near caches are partially coherent for non-transactional reads and coherent for transactional access.
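
As a worked illustration of the memory-usage rows in Table B-1 (the figures are hypothetical): assume 10 JVMs, 1 GB of primary data (DataSize), one backup copy (Redundancy = 2), and a 50 MB front cache (LocalCache) per JVM for the near cache.

    Replicated total:     JVMs x DataSize              = 10 x 1 GB     = 10 GB
    Partitioned total:    Redundancy x DataSize        = 2 x 1 GB      = 2 GB
    Partitioned per JVM:  DataSize / JVMs x Redundancy = 1 GB / 10 x 2 = 200 MB
    Near cache total:     [Redundancy x DataSize] + [JVMs x LocalCache]
                          = 2 GB + (10 x 50 MB)        = 2.5 GB

Replicated and optimistic caches carry the full DataSize on every JVM, which is why adding servers increases aggregate capacity only for the partitioned topologies.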