12 Configuring Caches
See Cache Configuration Elements for a complete reference of all the elements available in the cache configuration deployment descriptor. In addition, see Cache Configurations by Example for various sample cache configurations.
This chapter includes the following sections:
- Overview of Configuring Caches: Caches are configured in a Coherence cache configuration deployment descriptor.
- Defining Cache Mappings: Cache mappings map a cache name to a cache scheme definition.
- Defining Cache Schemes: Cache schemes are used to define the caches that are available to an application.
- Using Scheme Inheritance: Scheme inheritance allows cache schemes to be created by inheriting another scheme and selectively overriding the inherited scheme's properties as required.
- Using Cache Scheme Properties: Cache scheme properties modify cache behavior as required for a particular application.
- Using Parameter Macros: Parameter macros are literal strings that are replaced with an actual value at runtime.
- Using System Property Macros: The cache configuration deployment descriptor supports the use of system property macros.
Parent topic: Using Caches
Overview of Configuring Caches
Caches are configured in a Coherence cache configuration deployment descriptor. By default, the first coherence-cache-config.xml deployment descriptor file that is found on the classpath is loaded. Coherence includes a sample coherence-cache-config.xml file in the coherence.jar library. To use a different coherence-cache-config.xml file, the file must be located on the classpath and must be loaded before the coherence.jar library; otherwise, the sample cache configuration deployment descriptor is used. See Specifying a Cache Configuration File.
The cache configuration descriptor allows caches to be defined independently from the application code. At run time, applications get an instance of a cache by referring to a cache using the name that is defined in the descriptor. This allows application code to be written independent of the cache definition. Based on this approach, cache definitions can be modified without making any changes to the application code. This approach also maximizes cache definition reuse.
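For example, with a mapping for the cache name example in place (the name used throughout this chapter's samples), an application might obtain and use the cache as shown in the following minimal sketch. The class name is hypothetical; only the CacheFactory and NamedCache APIs are assumed to be on the classpath via coherence.jar.

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class GetCacheExample {
    public static void main(String[] args) {
        // The cache name is resolved against the cache mappings in the cache
        // configuration deployment descriptor; the application code never
        // references a concrete cache scheme.
        NamedCache cache = CacheFactory.getCache("example");

        cache.put("hello", "world");
        System.out.println(cache.get("hello"));

        CacheFactory.shutdown();
    }
}

Because the application refers only to the name, the mapping can later be pointed at a different scheme without any code change.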
The schema definition of the cache configuration descriptor is the coherence-cache-config.xsd file, which imports the coherence-cache-config-base.xsd file, which, in turn, implicitly imports the coherence-config-base.xsd file. This file is located in the root of the coherence.jar file.
A cache configuration deployment descriptor consists of two primary elements that are detailed in this chapter: the <caching-scheme-mapping> element and the <caching-schemes> element. These elements are used to define cache schemes and to define cache names that map to the cache schemes.
Parent topic: Configuring Caches
Defining Cache Mappings
Cache mappings are defined using a <cache-mapping> element within the <caching-scheme-mapping> node. Any number of cache mappings can be created. The cache mapping must include the cache name and the scheme name to which the cache name is mapped. See cache-mapping.
Note:
The following characters are reserved and cannot be used in cache names:
- slash (/)
- colon (:)
- asterisk (*)
- question mark (?)
This section includes the following topics:
- Using Exact Cache Mappings
- Using Name Pattern Cache Mappings
Using Exact Cache Mappings
Exact cache mappings map a specific cache name to a cache scheme definition. An application must provide the exact name as specified in the mapping to use a cache. Example 12-1 creates a single cache mapping that maps the cache name example to a distributed cache scheme definition with the scheme name distributed.
Example 12-1 Sample Exact Cache Mapping
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>example</cache-name>
         <scheme-name>distributed</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>
   <caching-schemes>
      <distributed-scheme>
         <scheme-name>distributed</scheme-name>
      </distributed-scheme>
   </caching-schemes>
</cache-config>
Parent topic: Defining Cache Mappings
Using Name Pattern Cache Mappings
Name pattern cache mappings allow applications to use patterns when specifying a cache name. Patterns use the asterisk (*) wildcard. Name patterns alleviate an application from having to know the exact name of a cache. Example 12-2 creates two cache mappings. The first mapping uses the wildcard (*) to map any cache name to a distributed cache scheme definition with the scheme name distributed. The second mapping maps the name pattern account-* to the cache scheme definition with the scheme name account-distributed.
Example 12-2 Sample Cache Name Pattern Mapping
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>*</cache-name>
         <scheme-name>distributed</scheme-name>
      </cache-mapping>
      <cache-mapping>
         <cache-name>account-*</cache-name>
         <scheme-name>account-distributed</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>
   <caching-schemes>
      <distributed-scheme>
         <scheme-name>distributed</scheme-name>
      </distributed-scheme>
      <distributed-scheme>
         <scheme-name>account-distributed</scheme-name>
      </distributed-scheme>
   </caching-schemes>
</cache-config>
For the first mapping, an application can use any name when creating a cache and the name is mapped to the cache scheme definition with the scheme name distributed. The second mapping requires an application to use a pattern when specifying a cache name. In this case, an application must use the prefix account- before the name. For example, an application that specifies account-overdue as the cache name uses the cache scheme definition with the scheme name account-distributed.
As shown in Example 12-2, it is possible to have a cache name (for example, account-overdue) that can be matched to multiple cache mappings. In such cases, if an exact cache mapping is defined, then it is always selected over any wildcard matches. Among multiple wildcard matches, the last matching wildcard mapping (based on the order in which they are defined in the file) is selected. Therefore, it is common to define less specific wildcard patterns earlier in the file that can be overridden by more specific wildcard patterns later in the file. The sketch after this paragraph illustrates how these rules play out at runtime.
Parent topic: Defining Cache Mappings
Defining Cache Schemes
Cache schemes are defined within the <caching-schemes> element. Each cache type (distributed, replicated, and so on) has a corresponding scheme element and properties that are used to define a cache of that type. Cache schemes can also be nested to allow further customized and composite caches such as near caches. See caching-schemes.
This section describes how to define cache schemes for the most often used cache types and does not represent the full set of cache types provided by Coherence. Instructions for defining cache schemes for additional cache types are found throughout this guide and are discussed as part of the features that they support.
This section includes the following topics:
- Defining Distributed Cache Schemes
- Defining Replicated Cache Schemes
- Defining Optimistic Cache Schemes
- Defining Local Cache Schemes
- Defining Near Cache Schemes
- Defining View Cache Schemes
Parent topic: Configuring Caches
Defining Distributed Cache Schemes
The <distributed-scheme> element is used to define distributed caches. A distributed cache utilizes a distributed (partitioned) cache service instance. Any number of distributed caches can be defined in a cache configuration file. See distributed-scheme.
Example 12-3 defines a basic distributed cache that uses distributed as the scheme name and is mapped to the cache name example. The <autostart> element is set to true to start the service on a cache server node.
Example 12-3 Sample Distributed Cache Definition
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>example</cache-name>
         <scheme-name>distributed</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>
   <caching-schemes>
      <distributed-scheme>
         <scheme-name>distributed</scheme-name>
         <backing-map-scheme>
            <local-scheme/>
         </backing-map-scheme>
         <autostart>true</autostart>
      </distributed-scheme>
   </caching-schemes>
</cache-config>
In the example, the distributed cache defines a local cache to be used as the backing map. See Local Storage.
Parent topic: Defining Cache Schemes
Defining Replicated Cache Schemes
The <replicated-scheme> element is used to define replicated caches. A replicated cache utilizes a replicated cache service instance. Any number of replicated caches can be defined in a cache configuration file. See replicated-scheme.
Example 12-4 defines a basic replicated cache that uses replicated as the scheme name and is mapped to the cache name example. The <autostart> element is set to true to start the service on a cache server node.
Example 12-4 Sample Replicated Cache Definition
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>example</cache-name>
         <scheme-name>replicated</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>
   <caching-schemes>
      <replicated-scheme>
         <scheme-name>replicated</scheme-name>
         <backing-map-scheme>
            <local-scheme/>
         </backing-map-scheme>
         <autostart>true</autostart>
      </replicated-scheme>
   </caching-schemes>
</cache-config>
In the example, the replicated cache defines a local cache to be used as the backing map. See Local Storage.
Parent topic: Defining Cache Schemes
Defining Optimistic Cache Schemes
The <optimistic-scheme> element is used to define optimistic caches. An optimistic cache utilizes an optimistic cache service instance. Any number of optimistic caches can be defined in a cache configuration file. See optimistic-scheme.
Example 12-5 defines a basic optimistic cache that uses optimistic as the scheme name and is mapped to the cache name example. The <autostart> element is set to true to start the service on a cache server node.
Example 12-5 Sample Optimistic Cache Definition
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>example</cache-name>
         <scheme-name>optimistic</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>
   <caching-schemes>
      <optimistic-scheme>
         <scheme-name>optimistic</scheme-name>
         <backing-map-scheme>
            <local-scheme/>
         </backing-map-scheme>
         <autostart>true</autostart>
      </optimistic-scheme>
   </caching-schemes>
</cache-config>
In the example, the optimistic cache defines a local cache to be used as the backing map. See Local Storage.
Parent topic: Defining Cache Schemes
Defining Local Cache Schemes
The <local-scheme> element is used to define local caches. Local caches are generally nested within other cache schemes, for instance as the front-tier of a near cache. Thus, this element can appear as a sub-element of any of the following elements: <caching-schemes>, <distributed-scheme>, <replicated-scheme>, <optimistic-scheme>, <near-scheme>, <overflow-scheme>, <read-write-backing-map-scheme>, and <backing-map-scheme>. See local-scheme.
This section includes the following topics:
- Sample Local Cache Definition
- Controlling the Growth of a Local Cache
- Specifying a Custom Eviction Policy
Parent topic: Defining Cache Schemes
Sample Local Cache Definition
Example 12-6 defines a local cache that uses local as the scheme name and is mapped to the cache name example.
Note:
A local cache is not typically used as a standalone cache on a cache server; moreover, a cache server does not start if the only cache definition in the cache configuration file is a local cache.
Example 12-6 Sample Local Cache Definition
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>example</cache-name>
         <scheme-name>local</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>
   <caching-schemes>
      <local-scheme>
         <scheme-name>local</scheme-name>
         <eviction-policy>LRU</eviction-policy>
         <high-units>32000</high-units>
         <low-units>10</low-units>
         <unit-calculator>FIXED</unit-calculator>
         <expiry-delay>10ms</expiry-delay>
      </local-scheme>
   </caching-schemes>
</cache-config>
See Defining a Local Cache for C++ Clients and Configuring a Local Cache for .NET Clients in Developing Remote Clients for Oracle Coherence.
Parent topic: Defining Local Cache Schemes
Controlling the Growth of a Local Cache
As shown in Example 12-6, the <local-scheme> element provides several optional sub-elements that control the growth of the cache. For example, the <low-units> and <high-units> sub-elements limit the cache in terms of size. When the cache reaches its maximum allowable size, it prunes itself back to a specified smaller size, choosing which entries to evict according to a specified eviction policy (<eviction-policy>). The entries and size limitations are measured in terms of units as calculated by the scheme's unit calculator (<unit-calculator>).
Local caches use the <expiry-delay> cache configuration element to configure the amount of time that items may remain in the cache before they expire. Entries that reach the expiry delay value are proactively evicted and are no longer accessible.
Note:
The expiry delay parameter (cExpiryMillis) is defined as an integer and is expressed in milliseconds. Therefore, the maximum amount of time can never exceed Integer.MAX_VALUE (2147483647) milliseconds, or approximately 24 days.
When a cache entry expires, it is not immediately removed from the cache. Instead, it is removed the next time it is accessed after the expiration time. This means that there may be a delay between the expiration time and the actual eviction from the cache. However, when you get a cache entry, you will never receive an expired value. It is important to note that cached data may remain in the cache for a longer period than its expiration date.
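The following sketch illustrates this behavior by storing an entry with a per-entry expiry, using the CacheMap.put(key, value, millis) overload, and reading it back after the delay has passed. The cache name, key, and two-second value are assumptions made for the sake of the example.

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class ExpiryExample {
    public static void main(String[] args) throws InterruptedException {
        NamedCache cache = CacheFactory.getCache("example");

        // Store an entry that expires two seconds after it is written; this
        // per-entry value overrides the <expiry-delay> configured for the scheme.
        cache.put("session-42", "data", 2000L);

        Thread.sleep(3000L);

        // The entry has expired: get() never returns an expired value, so this
        // prints null even if the entry has not yet been physically evicted.
        System.out.println(cache.get("session-42"));

        CacheFactory.shutdown();
    }
}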
When using a backing distributed (partitioned) cache (see In-memory Cache with Expiring Entries), there is a daemon called EvictionTask that periodically checks for evictions in the background. This daemon has an internal property called EvictionDelay, which is set to 250ms and specifies the minimum delay until the next eviction attempt. For additional information, see Capacity Planning.
Parent topic: Defining Local Cache Schemes
Specifying a Custom Eviction Policy
The LocalCache class is used for size-limited caches. It is used both for caching on-heap objects (as in a local cache or the front portion of a near cache) and as the backing map for a partitioned cache. Applications can provide custom eviction policies for use with a LocalCache.
Coherence's default eviction policy is very effective for most workloads; the majority of applications do not have to provide a custom policy. See local-scheme. Generally, it is best to restrict the use of eviction policies to scenarios where the evicted data is present in a backing system (that is, the back portion of a near cache or a database). Eviction should be treated as a physical operation (freeing memory) and not a logical operation (deleting an entity).
Example 12-7 shows the implementation of a simple custom eviction policy:
Example 12-7 Implementing a Custom Eviction Policy
package com.tangosol.examples.eviction;

import com.tangosol.net.cache.AbstractEvictionPolicy;
import com.tangosol.net.cache.ConfigurableCacheMap;
import com.tangosol.net.cache.LocalCache;
import com.tangosol.net.BackingMapManagerContext;
import com.tangosol.util.ConverterCollections;

import java.util.Iterator;
import java.util.Map;

/**
 * Custom eviction policy that evicts items randomly (or more specifically,
 * based on the natural order provided by the map's iterator).
 * This example may be used in cases where fast eviction is required
 * with as little processing as possible.
 */
public class SimpleEvictionPolicy extends AbstractEvictionPolicy {
    /**
     * Default constructor; typically used with local caches or the front
     * parts of near caches.
     */
    public SimpleEvictionPolicy() {
    }

    /**
     * Constructor that accepts {@link BackingMapManagerContext}; should
     * be used with partitioned cache backing maps.
     *
     * @param ctx  backing map context
     */
    public SimpleEvictionPolicy(BackingMapManagerContext ctx) {
        m_ctx = ctx;
    }

    /**
     * {@inheritDoc}
     */
    public void entryUpdated(ConfigurableCacheMap.Entry entry) {
    }

    /**
     * {@inheritDoc}
     */
    public void entryTouched(ConfigurableCacheMap.Entry entry) {
    }

    /**
     * {@inheritDoc}
     */
    public void requestEviction(int cMaximum) {
        ConfigurableCacheMap cache = getCache();
        Iterator iter = cache.entrySet().iterator();

        for (int i = 0, c = cache.getUnits() - cMaximum; i < c && iter.hasNext(); i++) {
            ConfigurableCacheMap.Entry entry = (ConfigurableCacheMap.Entry) iter.next();
            StringBuffer buffer = new StringBuffer();

            // If the contents of the entry (for example the key/value) need
            // to be examined, invoke convertEntry(entry) in case
            // the entry must be deserialized
            Map.Entry convertedEntry = convertEntry(entry);
            buffer.append("Entry: ").append(convertedEntry);

            // Here's how to get metadata about creation/last touched
            // timestamps for entries. This information might be used
            // in determining what gets evicted.
            if (entry instanceof LocalCache.Entry) {
                buffer.append(", create millis=");
                buffer.append(((LocalCache.Entry) entry).getCreatedMillis());
            }
            buffer.append(", last touch millis=");
            buffer.append(entry.getLastTouchMillis());

            // This output is for illustrative purposes; this may generate
            // excessive output in a production system
            System.out.println(buffer);

            // Iterate and remove items from the cache until below the maximum.
            // Note that the non-converted entry key is passed to the evict method.
            cache.evict(entry.getKey());
        }
    }

    /**
     * If a {@link BackingMapManagerContext} is configured, wrap the
     * Entry with {@link ConverterCollections.ConverterEntry} in order
     * to deserialize the entry.
     *
     * @see ConverterCollections.ConverterEntry
     * @see BackingMapManagerContext
     *
     * @param entry  entry to convert if necessary
     *
     * @return an entry that deserializes its key and value if necessary
     */
    protected Map.Entry convertEntry(Map.Entry entry) {
        BackingMapManagerContext ctx = m_ctx;
        return ctx == null ? entry
            : new ConverterCollections.ConverterEntry(entry,
                  ctx.getKeyFromInternalConverter(),
                  ctx.getValueFromInternalConverter(),
                  ctx.getValueToInternalConverter());
    }

    private BackingMapManagerContext m_ctx;
}
Example 12-8 illustrates a Coherence cache configuration file with an eviction policy:
Example 12-8 Custom Eviction Policy in a coherence-cache-config.xml File
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>*</cache-name>
         <scheme-name>example-near</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>
   <caching-schemes>
      <near-scheme>
         <scheme-name>example-near</scheme-name>
         <front-scheme>
            <local-scheme>
               <eviction-policy>
                  <class-scheme>
                     <class-name>
                        com.tangosol.examples.eviction.SimpleEvictionPolicy
                     </class-name>
                  </class-scheme>
               </eviction-policy>
               <high-units>1000</high-units>
            </local-scheme>
         </front-scheme>
         <back-scheme>
            <distributed-scheme>
               <scheme-ref>example-distributed</scheme-ref>
            </distributed-scheme>
         </back-scheme>
         <invalidation-strategy>all</invalidation-strategy>
         <autostart>true</autostart>
      </near-scheme>
      <distributed-scheme>
         <scheme-name>example-distributed</scheme-name>
         <service-name>DistributedCache</service-name>
         <backing-map-scheme>
            <local-scheme>
               <eviction-policy>
                  <class-scheme>
                     <class-name>
                        com.tangosol.examples.eviction.SimpleEvictionPolicy
                     </class-name>
                     <init-params>
                        <!-- Passing the BackingMapManagerContext to the eviction
                             policy; this is required for deserializing entries -->
                        <init-param>
                           <param-type>
                              com.tangosol.net.BackingMapManagerContext</param-type>
                           <param-value>{manager-context}</param-value>
                        </init-param>
                     </init-params>
                  </class-scheme>
               </eviction-policy>
               <high-units>20</high-units>
               <unit-calculator>binary</unit-calculator>
            </local-scheme>
         </backing-map-scheme>
         <autostart>true</autostart>
      </distributed-scheme>
   </caching-schemes>
</cache-config>
Parent topic: Defining Local Cache Schemes
Defining Near Cache Schemes
The <near-scheme> element is used to define a near cache. A near cache is a composite cache because it contains two caches: the <front-scheme> element is used to define a local (front-tier) cache, and the <back-scheme> element is used to define a (back-tier) cache. Typically, a local cache is used for the front tier; however, the front tier can also use schemes based on Java objects (using the <class-scheme>) and non-JVM heap-based caches (using <external-scheme> or <paged-external-scheme>). The back-tier cache is described by the <back-scheme> element. A back-tier cache can be any clustered cache type or any of the standalone cache types. See near-scheme.
This section includes the following topics:
- Sample Near Cache Definition
- Near Cache Invalidation Strategies
Parent topic: Defining Cache Schemes
Sample Near Cache Definition
Example 12-9 defines a near cache that uses near as the scheme name and is mapped to the cache name example. The front-tier is a local cache and the back-tier is a distributed cache.
Note:
Near caches are used for cache clients and are not typically used on a cache server; moreover, a cache server does not start if the only cache definition in the cache configuration file is a near cache.
Example 12-9 Sample Near Cache Definition
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>example</cache-name>
         <scheme-name>near</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>
   <caching-schemes>
      <near-scheme>
         <scheme-name>near</scheme-name>
         <front-scheme>
            <local-scheme/>
         </front-scheme>
         <back-scheme>
            <distributed-scheme>
               <scheme-name>near-distributed</scheme-name>
               <backing-map-scheme>
                  <local-scheme/>
               </backing-map-scheme>
               <autostart>true</autostart>
            </distributed-scheme>
         </back-scheme>
      </near-scheme>
   </caching-schemes>
</cache-config>
Parent topic: Defining Near Cache Schemes
Near Cache Invalidation Strategies
The <invalidation-strategy> element is an optional subelement of a near cache. An invalidation strategy specifies how the front-tier and back-tier objects are kept in sync. A near cache can be configured to listen to certain events in the back cache and automatically update or invalidate entries in the front cache. Depending on the interface that the back cache implements, the near cache provides five different strategies for invalidating front cache entries that have been changed by other processes in the back cache.
Note:
When using an invalidation strategy of all, cache operations that modify a large number of entries (for example, a clear operation) can cause a flood of events that may saturate the network.
Table 12-1 describes the invalidation strategies.
Table 12-1 Near Cache Invalidation Strategies
Strategy Name | Description |
---|---|
auto | The default strategy if no strategy is specified. This strategy is identical to the present strategy. |
present | This strategy instructs a near cache to listen to the back cache events related only to the items currently present in the front cache. This strategy works best when each instance of a front cache contains a distinct subset of data relative to the other front cache instances (for example, sticky data access patterns). |
all | This strategy instructs a near cache to listen to all back cache events. This strategy is optimal for read-heavy tiered access patterns where there is significant overlap between the different instances of front caches. |
logical | This strategy instructs a near cache to listen to all backing map events that are not synthetic deletes. A synthetic delete event could be emitted as a result of eviction or expiration. With this invalidation strategy, it is possible for the front map to contain cache entries that have been synthetically removed from the backing map. Any subsequent re-insertion of the entries to the backing map causes the corresponding entries in the front map to be invalidated. |
none | This strategy instructs the cache not to listen for invalidation events at all. This is the best choice for raw performance and scalability when business requirements permit the use of data which might not be absolutely current. Freshness of data can be guaranteed by use of a sufficiently brief eviction policy for the front cache. Note that the front map is reset if an extend client is disconnected from the proxy. |
Parent topic: Defining Near Cache Schemes
Defining View Cache Schemes
The <view-scheme> element is used to define a view cache.
The <view-scheme> element creates a NamedCache implementation that maintains a local in-memory cache and is backed by a clustered scheme. The clustered scheme can be either a federated scheme or a distributed scheme. By using the <view-scheme> element, you gain the benefits of a local in-memory store that is backed by a distributed scheme. The local store can be a full replica (all data) or a subset of the data in the distributed scheme. See view-scheme.
Example 12-10 defines a basic view cache that uses view as the scheme name and is mapped to the cache name example.
Example 12-10 Sample View Cache Definition
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>example</cache-name>
         <scheme-name>view</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>
   <caching-schemes>
      <view-scheme>
         <scheme-name>view</scheme-name>
         <back-scheme>
            <distributed-scheme>
               <scheme-ref>partitioned-std</scheme-ref>
            </distributed-scheme>
         </back-scheme>
      </view-scheme>
   </caching-schemes>
</cache-config>
Note:
This example is a simple view configuration. If a <view-filter> element is not defined, then the view cache uses an AlwaysFilter. If the <view-filter> element is defined, the filter is specified using the class-scheme mechanism.
Parent topic: Defining Cache Schemes
Using Scheme Inheritance
The <scheme-ref> element is used within a cache scheme definition and specifies the name of the cache scheme from which to inherit.
Example 12-11 creates two distributed cache schemes that are equivalent. The first explicitly configures a local scheme to be used for the backing map. The second definition uses the <scheme-ref> element to inherit a local scheme named LocalSizeLimited:
Example 12-11 Using Cache Scheme References
<distributed-scheme>
   <scheme-name>DistributedInMemoryCache</scheme-name>
   <service-name>DistributedCache</service-name>
   <backing-map-scheme>
      <local-scheme>
         <eviction-policy>LRU</eviction-policy>
         <high-units>1000</high-units>
         <expiry-delay>1h</expiry-delay>
      </local-scheme>
   </backing-map-scheme>
</distributed-scheme>

<distributed-scheme>
   <scheme-name>DistributedInMemoryCache</scheme-name>
   <service-name>DistributedCache</service-name>
   <backing-map-scheme>
      <local-scheme>
         <scheme-ref>LocalSizeLimited</scheme-ref>
      </local-scheme>
   </backing-map-scheme>
</distributed-scheme>

<local-scheme>
   <scheme-name>LocalSizeLimited</scheme-name>
   <eviction-policy>LRU</eviction-policy>
   <high-units>1000</high-units>
   <expiry-delay>1h</expiry-delay>
</local-scheme>
In Example 12-11, the first distributed scheme definition is more compact; however, the second definition offers the ability to easily reuse the LocalSizeLimited scheme within multiple schemes. Example 12-12 demonstrates multiple schemes reusing the same LocalSizeLimited base definition and overriding the expiry-delay property.
Example 12-12 Multiple Cache Schemes Using Scheme Inheritance
<distributed-scheme>
   <scheme-name>DistributedInMemoryCache</scheme-name>
   <service-name>DistributedCache</service-name>
   <backing-map-scheme>
      <local-scheme>
         <scheme-ref>LocalSizeLimited</scheme-ref>
      </local-scheme>
   </backing-map-scheme>
</distributed-scheme>

<replicated-scheme>
   <scheme-name>ReplicatedInMemoryCache</scheme-name>
   <service-name>ReplicatedCache</service-name>
   <backing-map-scheme>
      <local-scheme>
         <scheme-ref>LocalSizeLimited</scheme-ref>
         <expiry-delay>10m</expiry-delay>
      </local-scheme>
   </backing-map-scheme>
</replicated-scheme>

<local-scheme>
   <scheme-name>LocalSizeLimited</scheme-name>
   <eviction-policy>LRU</eviction-policy>
   <high-units>1000</high-units>
   <expiry-delay>1h</expiry-delay>
</local-scheme>
Parent topic: Configuring Caches
Using Cache Scheme Properties
Many cache properties use default values unless a different value is explicitly given within the cache scheme definition. The clustered caches (distributed, replicated and optimistic) use the default values as specified by their respective cache service definition. Cache services are defined in the operational deployment descriptor. While it is possible to change property values using an operational override file, cache properties are most often set within the cache scheme definition.
Example 12-13 creates a basic distributed cache scheme that sets the service thread count property and the request timeout property. In addition, the local scheme that is used for the backing map sets properties to limit the size of the local cache. Instructions for using cache scheme properties are found throughout this guide and are discussed as part of the features that they support.
Example 12-13 Setting Cache Properties
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>example</cache-name>
         <scheme-name>DistributedInMemoryCache</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>
   <caching-schemes>
      <distributed-scheme>
         <scheme-name>DistributedInMemoryCache</scheme-name>
         <service-name>DistributedCache</service-name>
         <thread-count-min>4</thread-count-min>
         <request-timeout>60s</request-timeout>
         <backing-map-scheme>
            <local-scheme>
               <scheme-ref>LocalSizeLimited</scheme-ref>
            </local-scheme>
         </backing-map-scheme>
      </distributed-scheme>
      <local-scheme>
         <scheme-name>LocalSizeLimited</scheme-name>
         <eviction-policy>LRU</eviction-policy>
         <high-units>1000</high-units>
         <expiry-delay>1h</expiry-delay>
      </local-scheme>
   </caching-schemes>
</cache-config>
Parent topic: Configuring Caches
Using Parameter Macros
This section includes the following topics:
- Using User-Defined Parameter Macros
- Using Predefined Parameter Macros
Parent topic: Configuring Caches
Using User-Defined Parameter Macros
User-defined parameter macros allow property values in a scheme to be replaced at runtime by values that are configured within cache mapping initialization parameters. User-defined parameter macros maximize the reuse of cache scheme definitions and can significantly reduce the size of a cache configuration file.
Note:
Parameter macros should not be used for service-scoped (shared by all caches in the same service) items, such as thread count, partition count, and service name. Parameter macros should only be used for cache-scoped items, such as expiry, high units, or cache stores to name a few.
To define a user-defined parameter macro, place a literal string within curly braces as the value of a property. A parameter macro can also include an optional default value by placing the value after the string preceded by a space. The form of a user-defined macro is as follows:
{user-defined-name default_value}
The following example creates a user-defined macro that is called back-size-limit. The macro is used for the <high-units> property of a backing map and allows the property value to be replaced at runtime. The macro specifies a default value of 500 for the <high-units> property.
<caching-schemes>
   <distributed-scheme>
      <scheme-name>distributed</scheme-name>
      <backing-map-scheme>
         <local-scheme>
            <high-units>{back-size-limit 500}</high-units>
         </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
   </distributed-scheme>
</caching-schemes>
At runtime, the <high-units> value can be replaced by using an initialization parameter that is defined within a cache mapping definition. The following example overrides the default value of 500 with 1000 by using an <init-param> element and setting the <param-name> element to back-size-limit and the <param-value> element to 1000. See init-param.
<caching-scheme-mapping>
   <cache-mapping>
      <cache-name>example</cache-name>
      <scheme-name>distributed</scheme-name>
      <init-params>
         <init-param>
            <param-name>back-size-limit</param-name>
            <param-value>1000</param-value>
         </init-param>
      </init-params>
   </cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
   <distributed-scheme>
      <scheme-name>distributed</scheme-name>
      <backing-map-scheme>
         <local-scheme>
            <high-units>{back-size-limit 500}</high-units>
         </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
   </distributed-scheme>
</caching-schemes>
The benefit of using user-defined parameter macros is that multiple cache mappings can use the same cache scheme and set different property values as required. The following example demonstrates two cache mappings that reuse the same cache scheme. However, the mappings result in caches with different values for the <high-units> element.
...
<caching-scheme-mapping>
   <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>distributed</scheme-name>
   </cache-mapping>
   <cache-mapping>
      <cache-name>example</cache-name>
      <scheme-name>distributed</scheme-name>
      <init-params>
         <init-param>
            <param-name>back-size-limit</param-name>
            <param-value>1000</param-value>
         </init-param>
      </init-params>
   </cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
   <distributed-scheme>
      <scheme-name>distributed</scheme-name>
      <backing-map-scheme>
         <local-scheme>
            <high-units>{back-size-limit 500}</high-units>
         </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
   </distributed-scheme>
</caching-schemes>
Parent topic: Using Parameter Macros
Using Predefined Parameter Macros
Coherence includes predefined parameter macros that minimize custom coding and enable the specification of commonly used attributes when configuring class constructor parameters. The macros must be entered within curly braces and are specific to either the param-type or param-value elements.
Table 12-2 describes the predefined parameter macros that may be specified.
Table 12-2 Predefined Parameter Macros for Cache Configuration
- <param-type>: java.lang.String; <param-value>: {cache-name}. Used to pass the current cache name as a constructor parameter. For example:

<class-name>com.mycompany.cache.CustomCacheLoader</class-name>
<init-params>
   <init-param>
      <param-type>java.lang.String</param-type>
      <param-value>{cache-name}</param-value>
   </init-param>
</init-params>

- <param-type>: java.lang.ClassLoader; <param-value>: {class-loader}. Used to pass the current classloader as a constructor parameter. For example:

<class-name>com.mycompany.cache.CustomCacheLoader</class-name>
<init-params>
   <init-param>
      <param-type>java.lang.ClassLoader</param-type>
      <param-value>{class-loader}</param-value>
   </init-param>
</init-params>

- <param-type>: com.tangosol.net.BackingMapManagerContext; <param-value>: {manager-context}. Used to pass the current BackingMapManagerContext object as a constructor parameter. For example:

<class-name>com.mycompany.cache.CustomCacheLoader</class-name>
<init-params>
   <init-param>
      <param-type>
         com.tangosol.net.BackingMapManagerContext
      </param-type>
      <param-value>{manager-context}</param-value>
   </init-param>
</init-params>

- <param-type>: {scheme-ref}; <param-value>: a scheme name. Instantiates an object defined by the referenced scheme with the specified <scheme-name> value and uses it as a constructor parameter. For example:

<class-scheme>
   <scheme-name>dbconnection</scheme-name>
   <class-name>com.mycompany.dbConnection</class-name>
   <init-params>
      <init-param>
         <param-name>driver</param-name>
         <param-type>String</param-type>
         <param-value>org.gjt.mm.mysql.Driver</param-value>
      </init-param>
      <init-param>
         <param-name>url</param-name>
         <param-type>String</param-type>
         <param-value>jdbc:mysql://dbserver:3306/companydb</param-value>
      </init-param>
      <init-param>
         <param-name>user</param-name>
         <param-type>String</param-type>
         <param-value>default</param-value>
      </init-param>
      <init-param>
         <param-name>password</param-name>
         <param-type>String</param-type>
         <param-value>default</param-value>
      </init-param>
   </init-params>
</class-scheme>
...
<class-name>com.mycompany.cache.CustomCacheLoader</class-name>
<init-params>
   <init-param>
      <param-type>{scheme-ref}</param-type>
      <param-value>dbconnection</param-value>
   </init-param>
</init-params>

- <param-type>: {cache-ref}; <param-value>: cache name. Used to obtain a NamedCache reference for the specified cache name. Consider the following configuration example:

<cache-config>
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>boston-*</cache-name>
         <scheme-name>wrapper</scheme-name>
         <init-params>
            <init-param>
               <param-name>delegate-cache-name</param-name>
               <param-value>london-*</param-value>
            </init-param>
         </init-params>
      </cache-mapping>
      <cache-mapping>
         <cache-name>london-*</cache-name>
         <scheme-name>partitioned</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>
   <caching-schemes>
      <class-scheme>
         <scheme-name>wrapper</scheme-name>
         <class-name>
            com.tangosol.net.cache.WrapperNamedCache
         </class-name>
         <init-params>
            <init-param>
               <param-type>{cache-ref}</param-type>
               <param-value>{delegate-cache-name}</param-value>
            </init-param>
            <init-param>
               <param-type>string</param-type>
               <param-value>{cache-name}</param-value>
            </init-param>
         </init-params>
      </class-scheme>
      <distributed-scheme>
         <scheme-name>partitioned</scheme-name>
         <service-name>partitioned</service-name>
         <backing-map-scheme>
            <local-scheme>
               <unit-calculator>BINARY</unit-calculator>
            </local-scheme>
         </backing-map-scheme>
         <autostart>true</autostart>
      </distributed-scheme>
   </caching-schemes>
</cache-config>
Parent topic: Using Parameter Macros
Using System Property Macros
To define a system property macro, place a literal string that represents a system property within curly braces and precede the curly braces with a dollar sign ($). A system property macro can also include an optional default value by placing the value after the string preceded by a space. The form of a system property macro is as follows:
${system.property default_value}
The following example is taken from the default cache configuration file and uses two system property macros: ${coherence.profile near} and ${coherence.client direct}. The macros are replaced at runtime with the values that are set for the respective system properties in order to use a specific cache scheme. If the system properties are not set, then the default values are used and the scheme name resolves to near-direct.
<caching-scheme-mapping>
   <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>${coherence.profile near}-${coherence.client direct}</scheme-name>
   </cache-mapping>
</caching-scheme-mapping>
Setting the system properties at runtime changes the caching scheme that is used for the default cache. For example:
-Dcoherence.profile=thin -Dcoherence.client=remote
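During development it can be convenient to set the same properties programmatically, provided they are set before the cache configuration file is first loaded (that is, before the first CacheFactory call). The following is a minimal sketch under that assumption; the class name is hypothetical and the resulting scheme name depends on the macros in the loaded configuration file.

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class SystemPropertyExample {
    public static void main(String[] args) {
        // Must be set before the cache configuration is loaded; otherwise the
        // macro defaults are used instead.
        System.setProperty("coherence.profile", "thin");

        // coherence.client is left unset, so its macro default ("direct") applies
        // and the wildcard mapping above resolves to the "thin-direct" scheme.
        NamedCache cache = CacheFactory.getCache("example");
        System.out.println(cache.getCacheName());
    }
}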
Parent topic: Configuring Caches