A Local Cache is just that: a cache that is local to (that is, completely contained within) a particular .NET application. Several attributes of the Local Cache are particularly interesting:
The Local Cache implements the same standard cache interfaces that a remote cache implements (ICache, IObservableCache, IConcurrentCache, IQueryCache, and IInvocableCache), meaning that there is no programming difference between using a local and a remote cache.
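Because the same interfaces are shared, application code written against them does not need to know which kind of cache it is handed. The following is an illustrative sketch (the PutAndGet helper is hypothetical, not part of the product; it assumes ICache exposes IDictionary-style indexer access):

```csharp
using Tangosol.Net.Cache;

public static class CacheAgnosticExample
{
    // Hypothetical helper: it depends only on the ICache interface, so it
    // behaves identically whether the cache passed in is a LocalCache or a
    // remote cache obtained from a cluster.
    public static object PutAndGet(ICache cache, object key, object value)
    {
        cache[key] = value;   // IDictionary-style access via the ICache interface
        return cache[key];
    }
}
```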
The Local Cache can be size-limited. This means that the Local Cache can restrict the number of entries that it caches, and automatically evict entries when the cache becomes full. Furthermore, both the sizing of entries and the eviction policies are customizable; for example, the cache can be size-limited based on the memory used by the cached entries. The default eviction policy uses a combination of Most Frequently Used (MFU) and Most Recently Used (MRU) information, scaled on a logarithmic curve, to determine which cache items to evict. This algorithm is the best general-purpose eviction algorithm because it works well for both short-duration and long-duration caches, and it balances frequency versus recency to avoid cache thrashing. The pure LRU and pure LFU algorithms are also supported, as is the ability to plug in custom eviction policies.
The Local Cache supports automatic expiration of cached entries, meaning that each cache entry can be assigned a time-to-live value in the cache. Furthermore, the entire cache can be configured to flush itself on a periodic basis or at a preset time.
The Local Cache is thread safe and highly concurrent.
The Local Cache maintains cache "get" statistics, including hit and miss counts. These runtime statistics can be used to accurately project the effectiveness of the cache, and to adjust its size-limiting and auto-expiring settings accordingly while the cache is running.
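As a sketch of reading those statistics at runtime: the property and member names below (CacheStatistics, CacheHits, CacheMisses, HitProbability) are assumptions modeled on the class's Java counterpart, so verify them against the LocalCache API reference before relying on them.

```csharp
using Tangosol.Net.Cache;

public static class CacheStatsExample
{
    public static void ReportEffectiveness(LocalCache cache)
    {
        // Assumed API: LocalCache exposes an ICacheStatistics object
        // carrying its hit/miss counters.
        ICacheStatistics stats = cache.CacheStatistics;

        System.Console.WriteLine(
            "hits={0}, misses={1}, hit ratio={2:P1}",
            stats.CacheHits, stats.CacheMisses, stats.HitProbability);
    }
}
```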
The Coherence for .NET Local Cache functionality is implemented by the Tangosol.Net.Cache.LocalCache class. As such, it can be programmatically instantiated and configured; however, it is recommended that a LocalCache be configured by using a cache configuration descriptor, just like any other Coherence for .NET cache.
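For completeness, a programmatic sketch follows. The property names (HighUnits, LowUnits, ExpiryDelay) are assumptions mirroring the XML configuration subelements, so check them against the LocalCache API reference before using them:

```csharp
using Tangosol.Net.Cache;

public static class ProgrammaticLocalCache
{
    public static LocalCache CreateCache()
    {
        // Assumed properties corresponding to the <high-units>,
        // <low-units>, and <expiry-delay> configuration subelements.
        LocalCache cache = new LocalCache();
        cache.HighUnits   = 32000;  // prune when the cache exceeds 32000 units
        cache.LowUnits    = 10;     // prune back down to 10 units
        cache.ExpiryDelay = 10;     // entries expire 10 ms after last update
        return cache;
    }
}
```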
The key element for configuring the Local Cache is <local-scheme>. Local caches are generally nested within other cache schemes, for instance as the front tier of a near scheme. Thus, this element can appear as a subelement of any of these elements in the coherence-cache-config file: <caching-schemes>, <distributed-scheme>, <replicated-scheme>, <optimistic-scheme>, <near-scheme>, <versioned-near-scheme>, <overflow-scheme>, <read-write-backing-map>, and <versioned-backing-map-scheme>.
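For instance, a <local-scheme> nested as the front tier of a near scheme might look like the following fragment. The scheme names are illustrative, and the back tier shown here assumes a .NET client connecting to a remote cache:

```xml
<near-scheme>
  <scheme-name>example-near</scheme-name>
  <front-scheme>
    <local-scheme>
      <eviction-policy>HYBRID</eviction-policy>
      <high-units>1000</high-units>
    </local-scheme>
  </front-scheme>
  <back-scheme>
    <remote-cache-scheme>
      <scheme-ref>example-remote</scheme-ref>
    </remote-cache-scheme>
  </back-scheme>
</near-scheme>
```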
The <local-scheme> element provides several optional subelements that let you define the characteristics of the cache. For example, the <low-units> and <high-units> subelements allow you to limit the cache in terms of size. Once the cache reaches its maximum allowable size, it prunes itself back to a specified smaller size, choosing which entries to evict according to a specified eviction policy (<eviction-policy>). The entry and size limitations are measured in terms of units, as calculated by the scheme's unit calculator (<unit-calculator>).
You can also limit the cache in terms of time. The <expiry-delay> subelement specifies the amount of time since the last update that entries are kept in the cache before being marked as expired. Any attempt to read an expired entry results in a reload of the entry from the configured cache store (<cachestore-scheme>). Expired values are periodically discarded from the cache based on the flush delay (<flush-delay>).
If a <cachestore-scheme> is not specified, the cached data only resides in memory, and only reflects operations performed on the cache itself. See <local-scheme> for a complete description of all of the available subelements.
Example 16-1 illustrates the configuration of a Local Cache. See "Sample Cache Configurations" for additional examples.
Example 16-1 Configuring a Local Cache
<?xml version="1.0"?>

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>example-local-cache</cache-name>
      <scheme-name>example-local</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <local-scheme>
      <scheme-name>example-local</scheme-name>
      <eviction-policy>LRU</eviction-policy>
      <high-units>32000</high-units>
      <low-units>10</low-units>
      <unit-calculator>FIXED</unit-calculator>
      <expiry-delay>10ms</expiry-delay>
      <flush-delay>1000ms</flush-delay>
      <cachestore-scheme>
        <class-scheme>
          <class-name>ExampleCacheStore</class-name>
        </class-scheme>
      </cachestore-scheme>
      <pre-load>true</pre-load>
    </local-scheme>
  </caching-schemes>
</cache-config>
A reference to a configured Local Cache can be obtained by name by using the CacheFactory class:
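Example 16-2 Obtaining a Reference to a Local Cache

```csharp
using Tangosol.Net;

// The cache name must match a <cache-name> mapping in the cache
// configuration descriptor; "example-local-cache" is the mapping
// defined in Example 16-1.
INamedCache cache = CacheFactory.GetCache("example-local-cache");
```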
Instances of all INamedCache implementations, including LocalCache, should be explicitly released by calling the INamedCache.Release() method when they are no longer needed, to free up any resources they might hold.
If the particular INamedCache is used for the duration of the application, then the resources are cleaned up when the application is shut down or otherwise stops. However, if the cache is only used for a period of time, the application should call its Release() method when finished using it.
Alternatively, you can leverage the fact that INamedCache extends IDisposable, and that all cache implementations delegate a call to IDisposable.Dispose() to INamedCache.Release(). This means that if you need to obtain and release a cache instance within a single method, you can do so with a using block:
Example 16-3 Obtaining and Releasing a Reference to a Local Cache
using (INamedCache cache = CacheFactory.GetCache("my-cache"))
{
    // use cache as usual
}
After the using block terminates, IDisposable.Dispose() is called on the INamedCache instance, and all resources associated with it are released.