While it is not a clustered service, the Coherence local cache implementation is often used in combination with various Coherence clustered cache services. The Coherence local cache is just that: a cache that is local to (completely contained within) a particular cluster node. There are several attributes of the local cache that are particularly interesting:
The local cache implements the same standard collections interface that the clustered caches implement, meaning that there is no programming difference between using a local cache and using a clustered cache. Like the clustered caches, the local cache tracks the JCache API, which is itself based on the same standard collections API that the local cache is based on.
The local cache can be size-limited. This means that the local cache can restrict the number of entries that it caches, and automatically evict entries when the cache becomes full. Furthermore, both the sizing of entries and the eviction policies are customizable, for example allowing the cache to be size-limited based on the memory used by the cached entries. The default eviction policy uses a combination of Most Frequently Used (MFU) and Most Recently Used (MRU) information, scaled on a logarithmic curve, to determine which cache items to evict. This algorithm is the best general-purpose eviction algorithm because it works well for both short-duration and long-duration caches, and it balances frequency versus recency to avoid cache thrashing. Pure LRU and pure LFU algorithms are also supported, as is the ability to plug in custom eviction policies.
The local cache supports automatic expiration of cached entries, meaning that each cache entry can be assigned a time to live in the cache. Furthermore, the entire cache can be configured to flush itself on a periodic basis or at a preset time.
The local cache is thread safe and highly concurrent, allowing many threads to simultaneously access and update entries in the local cache.
The local cache supports cache notifications. These notifications are provided for additions (entries that are put by the client, or automatically loaded into the cache), modifications (entries that are put by the client, or automatically reloaded), and deletions (entries that are removed by the client, or automatically expired, flushed, or evicted). These are the same cache events supported by the clustered caches.
The local cache maintains hit and miss statistics. These runtime statistics can be used to accurately project the effectiveness of the cache, and adjust its size-limiting and auto-expiring settings accordingly while the cache is running.
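As an illustration of cache notifications, a listener can also be wired in declaratively through a cache scheme's <listener> sub-element. The following is a minimal sketch: the class name com.example.CacheEventLogger is hypothetical, standing in for a user-written class that implements com.tangosol.util.MapListener.

```xml
<local-scheme>
  <scheme-name>example-listening</scheme-name>
  <listener>
    <class-scheme>
      <!-- hypothetical class; must implement com.tangosol.util.MapListener
           to receive entryInserted, entryUpdated, and entryDeleted events -->
      <class-name>com.example.CacheEventLogger</class-name>
    </class-scheme>
  </listener>
</local-scheme>
```

The same events can also be observed programmatically by registering a MapListener on the cache at run time.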
The local cache is important to the clustered cache services for several reasons, including as part of Coherence's near cache technology, and with the modular backing map architecture.
The key element for configuring the Local Cache is <local-scheme>. Local caches are generally nested within other cache schemes, for instance as the front tier of a near scheme. Thus, this element can appear as a sub-element of any of these elements in the coherence-cache-config file: <caching-schemes>, <distributed-scheme>, <replicated-scheme>, <optimistic-scheme>, <near-scheme>, <versioned-near-scheme>, <overflow-scheme>, <read-write-backing-map-scheme>, and <versioned-backing-map-scheme>.
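For example, a local cache nested as the front tier of a near scheme might look like the following sketch; the scheme names are illustrative, and the back tier references a distributed scheme assumed to be defined elsewhere in the same configuration file.

```xml
<near-scheme>
  <scheme-name>example-near</scheme-name>
  <front-scheme>
    <!-- local, in-process front tier, limited to 1000 units -->
    <local-scheme>
      <eviction-policy>HYBRID</eviction-policy>
      <high-units>1000</high-units>
    </local-scheme>
  </front-scheme>
  <back-scheme>
    <!-- clustered back tier defined elsewhere in the configuration -->
    <distributed-scheme>
      <scheme-ref>example-distributed</scheme-ref>
    </distributed-scheme>
  </back-scheme>
</near-scheme>
```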
The <local-scheme> provides several optional sub-elements that let you define the characteristics of the cache. For example, the <low-units> and <high-units> sub-elements allow you to limit the cache in terms of size. When the cache reaches its maximum allowable size, it prunes itself back to a specified smaller size, choosing which entries to evict according to a specified eviction policy (<eviction-policy>). The entries and size limitations are measured in terms of units as calculated by the scheme's unit calculator (<unit-calculator>).
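For example, the following sketch limits a cache by approximate memory footprint rather than by entry count, by selecting the BINARY unit calculator (which measures the size of serialized, binary entries, as is typical when the local cache serves as a backing map). The scheme name and unit values are illustrative.

```xml
<local-scheme>
  <scheme-name>example-memory-limited</scheme-name>
  <!-- ceiling of ~10 MB; prune back to ~8 MB when the ceiling is reached -->
  <high-units>10485760</high-units>
  <low-units>8388608</low-units>
  <unit-calculator>BINARY</unit-calculator>
</local-scheme>
```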
You can also limit the cache in terms of time. The <expiry-delay> sub-element specifies the amount of time from last update that entries are kept by the cache before being marked as expired. Any attempt to read an expired entry results in a reloading of the entry from the configured cache store (<cachestore-scheme>). Expired values are periodically discarded from the cache based on the flush delay (<flush-delay>). If a <cachestore-scheme> is not specified, then the cached data only resides in memory, and only reflects operations performed on the cache itself. See <local-scheme> for a complete description of all of the available sub-elements.
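As a sketch of these time-based limits, the following hypothetical scheme expires entries one hour after their last update and checks for expired entries every five minutes; the scheme name and durations are illustrative.

```xml
<local-scheme>
  <scheme-name>example-expiring</scheme-name>
  <!-- entries expire 1 hour after their last update -->
  <expiry-delay>1h</expiry-delay>
  <!-- expired entries are flushed every 5 minutes -->
  <flush-delay>5m</flush-delay>
</local-scheme>
```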
Example 18-1 illustrates the configuration of a local cache. See Sample Cache Configurations for additional examples.
Example 18-1 Local Cache Configuration
<?xml version="1.0"?>
<cache-config>
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>example-local-cache</cache-name>
         <scheme-name>example-local</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>
   <caching-schemes>
      <local-scheme>
         <scheme-name>example-local</scheme-name>
         <eviction-policy>LRU</eviction-policy>
         <high-units>32000</high-units>
         <low-units>10</low-units>
         <unit-calculator>FIXED</unit-calculator>
         <expiry-delay>10ms</expiry-delay>
         <flush-delay>1000ms</flush-delay>
         <cachestore-scheme>
            <class-scheme>
               <class-name>ExampleCacheStore</class-name>
            </class-scheme>
         </cachestore-scheme>
         <pre-load>true</pre-load>
      </local-scheme>
   </caching-schemes>
</cache-config>
For more information, see "Local Cache" in the C++ User Guide or .NET User Guide.