Used in: external-scheme, paged-external-scheme.
The async-store-manager element adds asynchronous write capabilities to other store manager implementations.
Supported store managers include the bdb-store-manager, lh-file-manager, nio-file-manager, nio-memory-manager, and custom-store-manager elements described in the table below.
This store manager is implemented by the com.tangosol.io.AsyncBinaryStoreManager class.
The following table describes the elements you can define within the async-store-manager element.
Element | Required/Optional | Description |
---|---|---|
<class-name> | Optional | Specifies a custom implementation of the async-store-manager.
Any custom implementation must extend the com.tangosol.io.AsyncBinaryStoreManager class and declare the exact same set of public constructors. |
<init-params> | Optional | Specifies initialization parameters, for use in custom async-store-manager implementations which implement the com.tangosol.run.xml.XmlConfigurable interface. |
<custom-store-manager> | Optional | Configures the external cache to use a custom storage manager implementation. |
<bdb-store-manager> | Optional | Configures the external cache to use Berkeley Database JE on-disk databases for cache storage. |
<lh-file-manager> | Optional | Configures the external cache to use a Tangosol LH on-disk database for cache storage. |
<nio-file-manager> | Optional | Configures the external cache to use a memory-mapped file for cache storage. |
<nio-memory-manager> | Optional | Configures the external cache to use an off JVM heap, memory region for cache storage. |
<async-limit> | Optional | Specifies the maximum number of bytes that will be queued to be written asynchronously. Setting the value to zero does not disable asynchronous writes; instead, it indicates that the implementation default for the maximum number of bytes should be used.
The value of this element must be in the following format: [\d]+[[.][\d]+]?[K|k|M|m]?[B|b]? where the first non-digit (from left to right) indicates the factor by which the preceding decimal value should be multiplied: K or k (kilo) or M or m (mega).
If the value does not contain a factor, a factor of one is assumed.
|
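For example, the following sketch (the scheme name, directory, and limit are illustrative values, not defaults) wraps a Berkeley Database JE store manager in an async-store-manager, queuing at most one megabyte of pending writes:
<external-scheme>
  <scheme-name>example-async-disk</scheme-name>
  <async-store-manager>
    <bdb-store-manager>
      <directory>/tmp/bdb</directory>
    </bdb-store-manager>
    <!-- writes are queued and applied to the wrapped store asynchronously -->
    <async-limit>1M</async-limit>
  </async-store-manager>
</external-scheme>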
Used in: distributed-scheme.
The backup-storage element specifies the type and configuration of backup storage for a partitioned cache.
The following table describes the elements you can define within the backup-storage element.
Element | Required/Optional | Description |
---|---|---|
<type> | Required | Specifies the type of the storage used to hold the backup data.
Legal values are: on-heap, off-heap, file-mapped, custom, and scheme.
Default value is the value specified in the tangosol-coherence.xml descriptor. |
<initial-size> | Optional | Only applicable with the off-heap and file-mapped types. Specifies the initial buffer size in bytes. The value of this element must be in the following format: [\d]+[[.][\d]+]?[K|k|M|m|G|g|T|t]?[B|b]? where the first non-digit (from left to right) indicates the factor by which the preceding decimal value should be multiplied: K or k (kilo), M or m (mega), G or g (giga), or T or t (tera).
If the value does not contain a factor, a factor of mega is assumed.
|
<maximum-size> | Optional | Only applicable with the off-heap and file-mapped types. Specifies the maximum buffer size in bytes. The value of this element must be in the same format as initial-size.
If the value does not contain a factor, a factor of mega is assumed. |
<directory> | Optional | Only applicable with the file-mapped type.
Specifies the pathname for the directory that the disk persistence manager (com.tangosol.util.nio.MappedBufferManager) will use as the root in which to store files. If this value is not specified, or specifies a non-existent directory, a temporary file in the default location is used. Default value is the value specified in the tangosol-coherence.xml descriptor. |
<class-name> | Optional | Only applicable with the custom type.
Specifies a class name for the custom storage implementation. If the class implements the com.tangosol.run.xml.XmlConfigurable interface, then upon construction its setConfig method is called, passing in the entire backup-storage element. Default value is the value specified in the tangosol-coherence.xml descriptor. |
<scheme-name> | Optional | Only applicable with the scheme type.
Specifies a scheme name for the ConfigurableCacheFactory. Default value is the value specified in the tangosol-coherence.xml descriptor. |
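For example, the following sketch (scheme name, sizes, and directory are illustrative) configures a partitioned cache whose backup data is kept in a memory-mapped file:
<distributed-scheme>
  <scheme-name>example-distributed-backup</scheme-name>
  <service-name>DistributedCache</service-name>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <backup-storage>
    <!-- file-mapped backup keeps backup partitions off the Java heap -->
    <type>file-mapped</type>
    <initial-size>1MB</initial-size>
    <maximum-size>512MB</maximum-size>
    <directory>/tmp/backup</directory>
  </backup-storage>
</distributed-scheme>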
Used in: external-scheme, paged-external-scheme, async-store-manager.
Berkeley Database JE Java class libraries are required to utilize a bdb-store-manager; visit the Berkeley Database JE product page for additional information. |
The BDB store manager is used to define external caches which will use Berkeley Database JE on-disk embedded databases for storage.
This store manager is implemented by the com.tangosol.io.bdb.BerkeleyDBBinaryStoreManager class, and produces BinaryStore objects implemented by the com.tangosol.io.bdb.BerkeleyDBBinaryStore class.
The following table describes the elements you can define within the bdb-store-manager element.
Element | Required/Optional | Description |
---|---|---|
<class-name> | Optional | Specifies a custom implementation of the Berkeley Database BinaryStoreManager.
Any custom implementation must extend the com.tangosol.io.bdb.BerkeleyDBBinaryStoreManager class and declare the exact same set of public constructors. |
<init-params> | Optional | Specifies additional Berkeley DB configuration settings. See Berkeley DB Configuration.
Also used to specify initialization parameters, for use in custom implementations which implement the com.tangosol.run.xml.XmlConfigurable interface. |
<directory> | Optional | Specifies the pathname for the root directory that the Berkeley Database JE store manager will use to store files in. If this value is not specified, or specifies a non-existent directory, a temporary directory in the default location will be used. |
<store-name> | Optional |
Specifies the name for a database table that the Berkeley Database JE store
manager will use to store data in. Specifying this parameter will cause
the bdb-store-manager to use non-temporary (persistent) database
instances. This is intended only for local caches that are backed by a cache
loader from a non-temporary store, so that the local cache can be pre-populated
from the disk on startup. When specified it is recommended that it utilize
the {cache-name}
macro. Normally this parameter should be left unspecified, indicating that temporary storage is to be used. |
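For example, the following sketch (scheme name and directory are illustrative) configures a persistent Berkeley Database JE store, using the {cache-name} macro so that each cache gets its own database table:
<external-scheme>
  <scheme-name>example-bdb-persistent</scheme-name>
  <bdb-store-manager>
    <directory>/tmp/bdb</directory>
    <!-- a store-name makes the database non-temporary (persistent) -->
    <store-name>{cache-name}.store</store-name>
  </bdb-store-manager>
</external-scheme>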
The cache-config element is the root element of the cache configuration descriptor.
At a high level, a cache configuration consists of cache schemes and cache scheme mappings. Cache schemes describe a type of cache, for instance a database-backed distributed cache. Cache mappings define which scheme to use for a given cache name.
The following table describes the elements you can define within the cache-config element.
Element | Required/Optional | Description |
---|---|---|
<caching-scheme-mapping> | Required | Specifies the caching scheme that will be used for each cache, based on the cache's name. |
<caching-schemes> | Required | Defines the available caching-schemes for use in the cluster. |
Used in: caching-scheme-mapping
Each cache-mapping element specifies the cache-scheme which is to be used for a given cache name or cache name pattern.
The following table describes the elements you can define within the cache-mapping element.
Element | Required/Optional | Description |
---|---|---|
<cache-name> | Required | Specifies a cache name or name pattern. The name is unique within a cache factory.
The following cache name patterns are supported: exact names (for example, "MyCache"), prefix patterns ending in * (for example, "My*"), and "*" by itself, which matches any cache name.
The patterns get matched in the order of specificity (more specific definition is selected whenever possible). For example, if both "MyCache" and "My*" mappings are specified, the scheme from the "MyCache" mapping will be used to configure a cache named "MyCache". |
<scheme-name> | Required | Contains the caching scheme name. The name is unique within a configuration file.
Caching schemes are configured in the caching-schemes section. |
<init-params> | Optional | Allows specifying replaceable cache scheme parameters.
During cache scheme parsing, any occurrence of a replaceable parameter in the format "{parameter-name}" is replaced with the corresponding parameter value. Consider the following cache mapping example:
<cache-mapping>
  <cache-name>My*</cache-name>
  <scheme-name>my-scheme</scheme-name>
  <init-params>
    <init-param>
      <param-name>cache-loader</param-name>
      <param-value>com.acme.MyCacheLoader</param-value>
    </init-param>
    <init-param>
      <param-name>size-limit</param-name>
      <param-value>1000</param-value>
    </init-param>
  </init-params>
</cache-mapping>
For any cache name matching "My*", any occurrence of the literal "{cache-loader}" in any part of the corresponding cache-scheme element will be replaced with the string "com.acme.MyCacheLoader" and any occurrence of the literal "{size-limit}" will be replaced with the value of "1000".
|
Used in: cache-config
Defines mappings between cache names, or name patterns, and caching-schemes. For instance, you may define that caches whose names start with "accounts-" will use a distributed caching scheme, while caches whose names start with "rates-" will use a replicated caching scheme; a sketch of such a mapping follows the table below.
The following table describes the elements you can define within the caching-scheme-mapping element.
Element | Required/Optional | Description |
---|---|---|
<cache-mapping> | Optional | Contains a single binding between a cache name and the caching scheme this cache will use. |
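For example, the following sketch of the mapping described above assumes that schemes named "example-distributed" and "example-replicated" are defined in the caching-schemes section:
<caching-scheme-mapping>
  <cache-mapping>
    <cache-name>accounts-*</cache-name>
    <scheme-name>example-distributed</scheme-name>
  </cache-mapping>
  <cache-mapping>
    <cache-name>rates-*</cache-name>
    <scheme-name>example-replicated</scheme-name>
  </cache-mapping>
</caching-scheme-mapping>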
Used in: cache-config
The caching-schemes element defines a series of cache scheme elements. Each cache scheme defines a type of cache, for instance a database-backed partitioned cache, or a local cache with an LRU eviction policy. Scheme types are bound to actual caches using cache-scheme-mappings.
Each of the cache scheme element types is used to describe a different type of cache, for instance distributed versus replicated. Multiple instances of the same type may be defined so long as each has a unique scheme-name.
For example the following defines two different distributed schemes:
<distributed-scheme>
  <scheme-name>DistributedInMemoryCache</scheme-name>
  <service-name>DistributedCache</service-name>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
</distributed-scheme>

<distributed-scheme>
  <scheme-name>DistributedOnDiskCache</scheme-name>
  <service-name>DistributedCache</service-name>
  <backing-map-scheme>
    <external-scheme>
      <nio-file-manager>
        <initial-size>8MB</initial-size>
        <maximum-size>512MB</maximum-size>
        <directory></directory>
      </nio-file-manager>
    </external-scheme>
  </backing-map-scheme>
</distributed-scheme>
Some caching scheme types contain nested scheme definitions. For instance, in the above example the distributed schemes include a nested scheme definition describing their backing map.
Caching schemes can be defined by specifying all the elements required for a given scheme type, or by inheriting from another named scheme of the same type, and selectively overriding specific values. Scheme inheritance is accomplished by including a <scheme-ref> element in the inheriting scheme containing the scheme-name of the scheme to inherit from.
For example:
The following two configurations will produce equivalent "DistributedInMemoryCache" scheme definitions:
<distributed-scheme>
  <scheme-name>DistributedInMemoryCache</scheme-name>
  <service-name>DistributedCache</service-name>
  <backing-map-scheme>
    <local-scheme>
      <eviction-policy>LRU</eviction-policy>
      <high-units>1000</high-units>
      <expiry-delay>1h</expiry-delay>
    </local-scheme>
  </backing-map-scheme>
</distributed-scheme>
<distributed-scheme>
  <scheme-name>DistributedInMemoryCache</scheme-name>
  <service-name>DistributedCache</service-name>
  <backing-map-scheme>
    <local-scheme>
      <scheme-ref>LocalSizeLimited</scheme-ref>
    </local-scheme>
  </backing-map-scheme>
</distributed-scheme>

<local-scheme>
  <scheme-name>LocalSizeLimited</scheme-name>
  <eviction-policy>LRU</eviction-policy>
  <high-units>1000</high-units>
  <expiry-delay>1h</expiry-delay>
</local-scheme>
Please note that while the first is somewhat more compact, the second offers the ability to easily reuse the "LocalSizeLimited" scheme within multiple schemes. The following example demonstrates multiple schemes reusing the same "LocalSizeLimited" base definition, with the second imposing a different expiry-delay.
<distributed-scheme>
  <scheme-name>DistributedInMemoryCache</scheme-name>
  <service-name>DistributedCache</service-name>
  <backing-map-scheme>
    <local-scheme>
      <scheme-ref>LocalSizeLimited</scheme-ref>
    </local-scheme>
  </backing-map-scheme>
</distributed-scheme>

<replicated-scheme>
  <scheme-name>ReplicatedInMemoryCache</scheme-name>
  <service-name>ReplicatedCache</service-name>
  <backing-map-scheme>
    <local-scheme>
      <scheme-ref>LocalSizeLimited</scheme-ref>
      <expiry-delay>10m</expiry-delay>
    </local-scheme>
  </backing-map-scheme>
</replicated-scheme>

<local-scheme>
  <scheme-name>LocalSizeLimited</scheme-name>
  <eviction-policy>LRU</eviction-policy>
  <high-units>1000</high-units>
  <expiry-delay>1h</expiry-delay>
</local-scheme>
The following table describes the different types of schemes you can define within the caching-schemes element.
Element | Required/Optional | Description |
---|---|---|
<local-scheme> | Optional | Defines a cache scheme which provides on-heap cache storage. |
<external-scheme> | Optional | Defines a cache scheme which provides off-heap cache storage, for instance on disk. |
<paged-external-scheme> | Optional | Defines a cache scheme which provides off-heap cache storage, that is size-limited via time based paging. |
<distributed-scheme> | Optional | Defines a cache scheme where storage of cache entries is partitioned across the cluster nodes. |
<replicated-scheme> | Optional | Defines a cache scheme where each cache entry is stored on all cluster nodes. |
<optimistic-scheme> | Optional | Defines a replicated cache scheme which uses optimistic rather than pessimistic locking. |
<near-scheme> | Optional | Defines a two tier cache scheme consisting of a fast local front-tier cache backed by a much larger back-tier cache. |
<versioned-near-scheme> | Optional | Defines a near-scheme which uses object versioning to ensure coherence between the front and back tiers. |
<overflow-scheme> | Optional | Defines a two tier cache scheme where entries evicted from a size-limited front-tier overflow and are stored in a much larger back-tier cache. |
<invocation-scheme> | Optional | Defines an invocation service which can be used for performing custom operations in parallel across cluster nodes. |
<read-write-backing-map-scheme> | Optional | Defines a backing map scheme which provides a cache of a persistent store. |
<versioned-backing-map-scheme> | Optional | Defines a backing map scheme which utilizes object versioning to determine what updates need to be written to the persistent store. |
<jms-scheme> | Optional | Defines a Coherence*Extend-JMS cache scheme allowing for caches to be accessed from outside a Coherence cluster. |
<class-scheme> | Optional | Defines a cache scheme using a custom cache implementation. Any custom implementation must implement the java.util.Map interface, and include a zero-parameter public constructor. Additionally if the contents of the Map can be modified by anything other than the CacheService itself (e.g. if the Map automatically expires its entries periodically or size-limits its contents), then the returned object must implement the com.tangosol.util.ObservableMap interface. |
<disk-scheme> | Optional | Note: As of Coherence 3.0, the disk-scheme configuration element has been deprecated and replaced by the external-scheme and paged-external-scheme configuration elements. |
Used in: caching-schemes, local-scheme, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme, cachestore-scheme, listener
Class schemes provide a mechanism for instantiating an arbitrary Java class for use by other schemes. The scheme which contains this element will dictate what class or interface(s) must be extended.
The following table describes the elements you can define within the class-scheme element.
Element | Required/Optional | Description |
---|---|---|
<scheme-name> | Optional | Specifies the scheme's name. The name must be unique within a configuration file. |
<scheme-ref> | Optional | Specifies the name of another scheme to inherit from. |
<class-name> | Required | Contains a fully specified Java class name to instantiate.
This class must extend an appropriate implementation class as dictated by the containing scheme and must declare the exact same set of public constructors as the superclass. |
<init-params> | Optional | Specifies initialization parameters which are accessible by implementations which support the com.tangosol.run.xml.XmlConfigurable interface. |
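For example, the following sketch (the class name is hypothetical) instantiates a custom java.util.Map implementation for use as a backing map:
<class-scheme>
  <scheme-name>example-custom-map</scheme-name>
  <!-- the named class must satisfy the containing scheme's contract and declare the required public constructors -->
  <class-name>com.example.cache.CustomMap</class-name>
</class-scheme>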
Used in: local-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme.
Cache store schemes define a mechanism for connecting a cache to a backend data store. The cache store scheme may use any class implementing either the com.tangosol.net.cache.CacheStore or the com.tangosol.net.cache.CacheLoader interface, where the former offers read-write capabilities and the latter is read-only. Custom implementations of these interfaces may be produced to connect Coherence to various data stores.
The following table describes the elements you can define within the cachestore-scheme element.
Element | Required/Optional | Description |
---|---|---|
<scheme-name> | Optional | Specifies the scheme's name. The name must be unique within a configuration file. |
<scheme-ref> | Optional | Specifies the name of another scheme to inherit from. |
<class-scheme> | Optional | Specifies the implementation of the cache store.
Implementation classes must implement either the com.tangosol.net.cache.CacheStore or the com.tangosol.net.cache.CacheLoader interface, and include a zero-parameter public constructor. |
<jms-scheme> | Optional | Configures the cachestore-scheme to use Coherence*Extend-JMS as its cache store implementation. |
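For example, the following sketch (the loader class is hypothetical) backs a local cache with a custom CacheLoader implementation:
<local-scheme>
  <scheme-name>example-loader-cache</scheme-name>
  <cachestore-scheme>
    <class-scheme>
      <!-- must implement CacheStore or CacheLoader and have a zero-parameter public constructor -->
      <class-name>com.example.MyCacheLoader</class-name>
    </class-scheme>
  </cachestore-scheme>
</local-scheme>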
Used in: external-scheme, paged-external-scheme, async-store-manager.
Used to create and configure custom implementations of a store manager for use in external caches.
The following table describes the elements you can define within the custom-store-manager element.
Element | Required/Optional | Description |
---|---|---|
<class-name> | Required | Specifies the implementation of the store manager.
All implementations must implement the com.tangosol.io.BinaryStoreManager interface, and include a zero-parameter public constructor. |
<init-params> | Optional | Specifies initialization parameters, for use in custom store manager implementations which implement the com.tangosol.run.xml.XmlConfigurable interface. |
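For example, the following sketch (the manager class is hypothetical) plugs a custom BinaryStoreManager into an external cache:
<external-scheme>
  <scheme-name>example-custom-store</scheme-name>
  <custom-store-manager>
    <!-- must implement com.tangosol.io.BinaryStoreManager -->
    <class-name>com.example.io.MyBinaryStoreManager</class-name>
  </custom-store-manager>
</external-scheme>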
As of Coherence 3.0, the disk-scheme configuration element has been deprecated and replaced by the external-scheme and paged-external-scheme configuration elements. |
Used in: caching-schemes, near-scheme, versioned-near-scheme, overflow-scheme, versioned-backing-map-scheme
The distributed-scheme defines caches where the storage for entries is partitioned across cluster nodes. See the service overview for a more detailed description of partitioned caches.
Partitioned caches support cluster-wide key-based locking so that data can be modified in a cluster without encountering the classic missing-update problem. Note that any operation made without holding an explicit lock is still atomic, but there is no guarantee that the value stored in the cache does not change between atomic operations.
The partitioned cache service supports the concept of cluster nodes which do not contribute to the overall storage of the cluster. Nodes which are not storage enabled are considered "cache clients".
The cache entries are evenly segmented into a number of logical partitions, and each storage-enabled cluster node running the specified partitioned service will be responsible for maintaining a fair share of these partitions.
By default the specific set of entries assigned to each partition is transparent to the application. In some cases it may be advantageous to keep certain related entries within the same cluster node. A key-associator may be used to indicate related entries; the partitioned cache service will ensure that associated entries reside in the same partition, and thus on the same cluster node. Alternatively, key association may be specified from within the application code by using keys which implement the com.tangosol.net.cache.KeyAssociation interface.
Storage for the cache is specified via the backing-map-scheme. For instance a partitioned cache which uses a local cache for its backing map will result in cache entries being stored in-memory on the storage enabled cluster nodes.
For the purposes of failover, a configurable number of backups of the cache may be maintained in backup-storage across the cluster nodes. Each backup is also divided into partitions, and when possible a backup partition will not reside on the same physical machine as the primary partition. If a cluster node abruptly leaves the cluster, responsibility for its partitions will automatically be reassigned to the existing backups, and new backups of those partitions will be created (on remote nodes) in order to maintain the configured backup count.
When a node joins or leaves the cluster, a background redistribution of partitions occurs to ensure that all cluster nodes manage a fair-share of the total number of partitions. The amount of bandwidth consumed by the background transfer of partitions is governed by the transfer-threshold.
The following table describes the elements you can define within the distributed-scheme element.
Element | Required/Optional | Description |
---|---|---|
<scheme-name> | Optional | Specifies the scheme's name. The name must be unique within a configuration file. | ||
<scheme-ref> | Optional | Specifies the name of another scheme to inherit from. | ||
<service-name> | Optional | Specifies the name of the service which will manage caches created from this scheme.
Services are configured from within the operational descriptor. |
<listener> | Optional | Specifies an implementation of a com.tangosol.MapListener which will be notified of events occurring on the cache. | ||
<backing-map-scheme> | Optional | Specifies what type of cache will be used within the cache server to store the entries.
Legal values are: local-scheme, external-scheme, paged-external-scheme, overflow-scheme, class-scheme, read-write-backing-map-scheme, and versioned-backing-map-scheme. |
<partition-count> | Optional | Specifies the number of partitions that a partitioned cache will be "chopped up" into. Each node running the partitioned cache service that has the local-storage option set to true will manage a "fair" (balanced) number of partitions. The number of partitions should be larger than the square of the number of cluster members to achieve a good balance, and it is suggested that the number be prime. Good defaults include 257 and 1021 and prime numbers in-between, depending on the expected cluster size. A list of the first 1,000 primes can be found at http://primes.utm.edu/lists/small/1000.txt
Legal values are prime numbers. Default value is the value specified in the tangosol-coherence.xml descriptor. |
<key-associator> | Optional | Specifies a class that will be responsible for providing associations between keys, allowing associated keys to reside on the same partition. |
<key-partitioning> | Optional | Specifies the class which will be responsible for assigning keys to partitions.
If unspecified, the default key partitioning algorithm will be used, which ensures that keys are evenly segmented across partitions. |
<backup-count> | Optional | Specifies the number of members of the partitioned cache service that hold the backup data for each unit of storage in the cache.
A value of 0 means that in the case of abnormal termination, some portion of the data in the cache will be lost. A value of N means that if up to N cluster nodes terminate at once, the cache data will be preserved. For a partitioned cache of size M, the total memory usage in the cluster does not depend on the number of cluster nodes and will be on the order of M*(N+1). Recommended values are 0, 1 or 2. Default value is the value specified in the tangosol-coherence.xml descriptor. |
<backup-storage> | Optional | Specifies the type and configuration for the partitioned cache backup storage. | ||
<thread-count> | Optional | Specifies the number of daemon threads used by the partitioned cache service.
If zero, all relevant tasks are performed on the service thread. Legal values are positive integers or zero. Default value is the value specified in the tangosol-coherence.xml descriptor. |
<standard-lease-milliseconds> | Optional | Specifies the duration of the standard lease in milliseconds.
Once a lease has aged past this number of milliseconds, the lock will automatically
be released. Set this value to zero to specify a lease that never expires.
The purpose of this setting is to avoid deadlocks or blocks caused by stuck
threads; the value should be set higher than the longest expected lock duration
(e.g. higher than a transaction timeout). It is also recommended to set this
value higher than the packet-delivery/timeout-milliseconds
value. Legal values are positive long numbers or zero. Default value is the value specified in the tangosol-coherence.xml descriptor. |
<lease-granularity> | Optional | Specifies the lease ownership granularity. Available since release 2.3.
Legal values are: thread and member.
A value of thread means that locks are held by the thread that obtained them and can only be released by that thread. A value of member means that locks are held by a cluster node and any thread running on the cluster node that obtained the lock can release it.
|
<transfer-threshold> | Optional | Specifies the threshold for the primary buckets distribution in kilobytes. When a new node joins the partitioned cache service or when a member of the service leaves, the remaining nodes perform a task of bucket ownership redistribution. During this process, the existing data gets rebalanced along with the ownership information. This parameter indicates a preferred message size for data transfer communications. Setting this value lower will make the distribution process take longer, but will reduce network bandwidth utilization during this activity.
Legal values are integers greater than zero. Default value is the value specified in the tangosol-coherence.xml descriptor. |
<local-storage> | Optional | Specifies whether or not a cluster node will contribute storage to the cluster, i.e. maintain partitions. When disabled the node is considered a cache client.
Legal values are true or false. |
<autostart> | Optional | The autostart element is intended to be used by cache servers (i.e. com.tangosol.net.DefaultCacheServer). It specifies whether or not the cache services associated with this cache scheme should be automatically started at a cluster node.
Legal values are true or false. Default value is false. |
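Putting several of these elements together, the following sketch (scheme name and values are illustrative) defines an auto-started partitioned cache with one backup copy and a four-thread daemon pool:
<distributed-scheme>
  <scheme-name>example-distributed</scheme-name>
  <service-name>DistributedCache</service-name>
  <thread-count>4</thread-count>
  <backup-count>1</backup-count>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>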
Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme
External schemes define caches which are not JVM heap based, allowing for greater storage capacity.
This scheme is implemented by the com.tangosol.net.cache.SerializationMap and com.tangosol.net.cache.SerializationCache classes.
The implementation type is chosen based on the following rule: if the scheme is size-limited (i.e. high-units is specified), SerializationCache is used; otherwise SerializationMap is used.
External schemes use a pluggable store manager to store and retrieve binary key-value pairs. Supported store managers include the async-store-manager, bdb-store-manager, custom-store-manager, lh-file-manager, nio-file-manager, and nio-memory-manager elements described in the table below.
The cache may be configured as size-limited, which means that once it reaches its maximum allowable size it prunes itself.
Eviction against disk based caches can be expensive, consider using a paged-external-scheme for such cases. |
External caches are generally used for temporary storage of large data sets, for example as the back-tier of an overflow-scheme. Certain implementations do, however, support persistence for non-clustered caches; see the bdb-store-manager and lh-file-manager for details. Clustered persistence should be configured via a read-write-backing-map-scheme on a distributed-scheme.
The following table describes the elements you can define within the external-scheme element.
Element | Required/Optional | Description |
---|---|---|
<scheme-name> | Optional | Specifies the scheme's name. The name must be unique within a configuration file. |
<scheme-ref> | Optional | Specifies the name of another scheme to inherit from. |
<class-name> | Optional | Specifies a custom implementation of the external cache.
Any custom implementation must extend either the com.tangosol.net.cache.SerializationCache or the com.tangosol.net.cache.SerializationMap class, and declare the exact same set of public constructors as the superclass. |
<init-params> | Optional | Specifies initialization parameters, for use in custom external cache implementations which implement the com.tangosol.run.xml.XmlConfigurable interface. |
<listener> | Optional | Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache. |
<high-units> | Optional | Used to limit the size of the cache. Contains the maximum number of units that can be placed in the cache before pruning occurs. An entry is the unit of measurement. Once this limit is exceeded, the cache will begin the pruning process, evicting the least recently used entries until the number of units is brought below this limit. The scheme's class-name element may be used to provide custom extensions to SerializationCache, which implement alternative eviction policies.
Legal values are positive integers or zero. Zero implies no limit. Default value is zero. |
<async-store-manager> | Optional | Configures the external cache to use an asynchronous storage manager wrapper for any other storage manager. |
<custom-store-manager> | Optional | Configures the external cache to use a custom storage manager implementation. |
<bdb-store-manager> | Optional | Configures the external cache to use Berkeley Database JE on-disk databases for cache storage. |
<lh-file-manager> | Optional | Configures the external cache to use a Tangosol LH on-disk database for cache storage. |
<nio-file-manager> | Optional | Configures the external cache to use a memory-mapped file for cache storage. |
<nio-memory-manager> | Optional | Configures the external cache to use an off JVM heap, memory region for cache storage. |
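For example, the following sketch (scheme name, limit, and directory are illustrative) defines a size-limited external cache backed by a Berkeley Database JE store manager; exceeding high-units triggers pruning of the least recently used entries as described above:
<external-scheme>
  <scheme-name>example-disk-limited</scheme-name>
  <high-units>10000</high-units>
  <bdb-store-manager>
    <directory>/tmp/bdb</directory>
  </bdb-store-manager>
</external-scheme>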
Used in: init-params.
Defines an individual initialization parameter.
The following table describes the elements you can define within the init-param element.
Element | Required/Optional | Description |
---|---|---|
<param-name> | Optional | Contains the name of the initialization parameter.
For example:
<class-name>com.mycompany.cache.CustomCacheLoader</class-name>
<init-params>
  <init-param>
    <param-name>sTableName</param-name>
    <param-value>EmployeeTable</param-value>
  </init-param>
  <init-param>
    <param-name>iMaxSize</param-name>
    <param-value>2000</param-value>
  </init-param>
</init-params> |
<param-type> | Optional | Contains the Java type of the initialization parameter.
The following standard types are supported:
For example:
<class-name>com.mycompany.cache.CustomCacheLoader</class-name>
<init-params>
  <init-param>
    <param-type>java.lang.String</param-type>
    <param-value>EmployeeTable</param-value>
  </init-param>
  <init-param>
    <param-type>int</param-type>
    <param-value>2000</param-value>
  </init-param>
</init-params>
Please refer to the list of available Parameter Macros. |
<param-value> | Optional | Contains the value of the initialization parameter.
The value is in the format specific to the Java type of the parameter. Please refer to the list of available Parameter Macros. |
Used in: class-scheme, cache-mapping.
Defines a series of initialization parameters as name/value pairs.
The following table describes the elements you can define within the init-params element.
Element | Required/Optional | Description |
---|---|---|
<init-param> | Optional | Defines an individual initialization parameter. |
Used in: caching-schemes.
Defines an Invocation Service. The invocation service may be used to perform custom operations in parallel on any number of cluster nodes. See the com.tangosol.net.InvocationService API for additional details.
The following table describes the elements you can define within the invocation-scheme element.
Element | Required/Optional | Description |
---|---|---|
<scheme-name> | Optional | Specifies the scheme's name. The name must be unique within a configuration file. |
<scheme-ref> | Optional | Specifies the name of another scheme to inherit from. |
<thread-count> | Optional | Specifies the number of daemon threads used by the invocation service.
If zero, all relevant tasks are performed on the service thread. Legal values are positive integers or zero. Default value is the value specified in the tangosol-coherence.xml descriptor. |
<autostart> | Optional | The autostart element is intended to be used by cache servers (i.e. com.tangosol.net.DefaultCacheServer). It specifies whether or not this service should be automatically started at a cluster node.
Legal values are true or false. Default value is false. |
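For example, the following sketch (the scheme name is illustrative) defines an auto-started invocation service with a five-thread daemon pool:
<invocation-scheme>
  <scheme-name>example-invocation</scheme-name>
  <thread-count>5</thread-count>
  <autostart>true</autostart>
</invocation-scheme>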
Used in: caching-schemes, cachestore-scheme, near-scheme.
A jms-scheme defines a cache which may be accessed from outside a Coherence cluster, via a JMS-based protocol. For additional details see the Coherence*Extend-JMS configuration instructions.
This scheme uses the com.tangosol.net.jms.AdapterFactory to instantiate the necessary implementations (stub and proxy) to support Coherence API calls over JMS.
The Coherence*Extend-JMS stub implements both the com.tangosol.net.NamedCache and com.tangosol.net.cache.CacheStore interfaces.
The following table describes the elements you can define within the jms-scheme element.
Element | Required/Optional | Description |
---|---|---|
<scheme-name> | Optional | Specifies the scheme's name. The name must be unique within a configuration file. |
<scheme-ref> | Optional | Specifies the name of another scheme to inherit from. |
<queue-connection-factory-name> | Required | Specifies the JNDI name of the JMS QueueConnectionFactory used by Coherence*Extend-JMS. |
<topic-connection-factory-name> | Required | Specifies the JNDI name of the JMS TopicConnectionFactory used by Coherence*Extend-JMS. |
<queue-name> | Required | Specifies the JNDI name of the JMS Queue used by Coherence*Extend-JMS. |
<topic-name> | Required | Specifies the JNDI name of the JMS Topic used by Coherence*Extend-JMS. |
<request-timeout> | Optional | Specifies the maximum amount of time that a Coherence*Extend-JMS stub will wait for a response from a Coherence*Extend-JMS proxy.
The value of this element must be in the following format: [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration: MS or ms (milliseconds), S or s (seconds), M or m (minutes), H or h (hours), or D or d (days).
If the value does not contain a unit, a unit of seconds is assumed.
|
When you are configuring caches for use with Coherence*Extend-JMS, you need to use caches with the same name but different configurations on the stub (client) and proxy (cluster) sides of Coherence*Extend-JMS. One possible way to accomplish this easily using a single cache configuration descriptor file is to use the Command Line Setting Override Feature of the cache configuration descriptor in the cache-mapping element, in conjunction with two separate cache-scheme configurations.
For example, if you configured Coherence*Extend-JMS and set up the configuration for the Vehicles cache as follows:
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>Vehicles</cache-name>
      <scheme-name system-property="cache-scheme">distributed</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <local-scheme>
      <scheme-name>jms-local</scheme-name>
      <cachestore-scheme>
        <jms-scheme>
          <scheme-ref>jms-direct</scheme-ref>
        </jms-scheme>
      </cachestore-scheme>
    </local-scheme>
    <jms-scheme>
      <scheme-name>jms-direct</scheme-name>
      <queue-connection-factory-name>jms/coherence/ConnectionFactory</queue-connection-factory-name>
      <topic-connection-factory-name>jms/coherence/ConnectionFactory</topic-connection-factory-name>
      <queue-name>jms/coherence/Queue</queue-name>
      <topic-name>jms/coherence/Topic</topic-name>
      <request-timeout>10s</request-timeout>
    </jms-scheme>
    <distributed-scheme>
      <scheme-name>distributed</scheme-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>
</cache-config>
Then you could use the same cache configuration descriptor on both the stub (client) and proxy (cluster) sides of Coherence*Extend-JMS.
On the proxy (cluster) side, where you need the Vehicles cache to be a distributed cache (using the distributed cache-scheme), you would either not specify any command line override for this setting in the java command line, or specify distributed as follows:
java -Dcache-scheme=distributed -jar coherence.jar
On the stub (client) side, where you need the Vehicles cache to be a local cache with a Coherence*Extend-JMS cache loader (using the configuration specified by the jms-local cache-scheme), you would specify the following command line override:
java -Dcache-scheme=jms-local -jar coherence.jar
Used in: distributed-scheme
Specifies an implementation of a com.tangosol.net.partition.KeyAssociator which will be used to determine associations between keys, allowing related keys to reside on the same partition.
Alternatively the cache's keys may manage the association by implementing the com.tangosol.net.cache.KeyAssociation interface.
The following table describes the elements you can define within the key-associator element.
Element | Required/Optional | Description |
---|---|---|
<class-name> | Required | The name of a class that implements the com.tangosol.net.partition.KeyAssociator interface. This implementation must have a zero-parameter public constructor.
Default value is the value specified in the tangosol-coherence.xml descriptor. |
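For example, the following sketch (the associator class is hypothetical) registers a custom KeyAssociator with a partitioned cache:
<distributed-scheme>
  <scheme-name>example-associated</scheme-name>
  <service-name>DistributedCache</service-name>
  <key-associator>
    <!-- must implement com.tangosol.net.partition.KeyAssociator -->
    <class-name>com.example.MyKeyAssociator</class-name>
  </key-associator>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
</distributed-scheme>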
Used in: distributed-scheme
Specifies an implementation of a com.tangosol.net.partition.KeyPartitioningStrategy which will be used to determine the partition in which a key will reside.
The following table describes the elements you can define within the key-partitioning element.
Element | Required/Optional | Description |
---|---|---|
<class-name> | Required | The name of a class that implements the com.tangosol.net.partition.KeyPartitioningStrategy interface. This implementation must have a zero-parameter public constructor.
Default value is the value specified in the tangosol-coherence.xml descriptor. |
Used in: external-scheme, paged-external-scheme, async-store-manager.
Configures a store manager which will use a Tangosol LH on-disk embedded database for storage.
Implemented by the com.tangosol.io.lh.LHBinaryStoreManager class. The BinaryStore objects created by this class are instances of com.tangosol.io.lh.LHBinaryStore.
The following table describes the elements you can define within the lh-file-manager element.
Element | Required/Optional | Description |
---|---|---|
<class-name> | Optional | Specifies a custom implementation of the LH BinaryStoreManager.
Any custom implementation must extend the com.tangosol.io.lh.LHBinaryStoreManager class and declare the exact same set of public constructors. |
<init-params> | Optional | Specifies initialization parameters, for use in custom LH file manager implementations which implement the com.tangosol.run.xml.XmlConfigurable interface. |
<directory> | Optional | Specifies the pathname for the root directory that the LH file manager will use to store files in. If this value is not specified, or specifies a non-existent directory, a temporary file in the default location will be used. |
<file-name> | Optional |
Specifies the name for a non-temporary (persistent) file that the LH file
manager will use to store data in. Specifying this parameter will cause
the lh-file-manager to use non-temporary database instances. This
is intended only for local caches that are backed by a cache loader from
a non-temporary file, so that the local cache can be pre-populated from
the disk file on startup. When specified it is recommended that it utilize
the {cache-name}
macro. Normally this parameter should be left unspecified, indicating that temporary storage is to be used. |
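For example, the following sketch (scheme name and directory are illustrative) configures a persistent LH store, using the {cache-name} macro so that each cache gets its own file:
<external-scheme>
  <scheme-name>example-lh-persistent</scheme-name>
  <lh-file-manager>
    <directory>/tmp/lh</directory>
    <!-- a file-name makes the database non-temporary (persistent) -->
    <file-name>{cache-name}.lh</file-name>
  </lh-file-manager>
</external-scheme>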
Used in: local-scheme, external-scheme, paged-external-scheme, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme.
The listener element specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on a cache.
The following table describes the elements you can define within the listener element.
Element | Required/Optional | Description |
---|---|---|
<class-scheme> | Required | Specifies the full class name of the listener implementation to use.
Any implementation must implement the com.tangosol.util.MapListener interface and include a zero-parameter public constructor. |
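For example, the following sketch (the listener class is hypothetical) registers a MapListener on a local cache:
<local-scheme>
  <scheme-name>example-observed</scheme-name>
  <listener>
    <class-scheme>
      <!-- must implement com.tangosol.util.MapListener -->
      <class-name>com.example.MyMapListener</class-name>
    </class-scheme>
  </listener>
</local-scheme>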
Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme
Local cache schemes define in-memory "local" caches. Local caches are generally nested within other cache schemes, for instance as the front-tier of a near-scheme.
Local caches are implemented by the com.tangosol.net.cache.LocalCache class.
A local cache may be backed by an external cache store; cache misses will read through to the backend store to retrieve the data. If a writable store is provided, cache writes will also propagate to the cache store. For optimizing read/write access against a cache store, see the read-write-backing-map-scheme.
The cache may be configured as size-limited, which means that once it reaches its maximum allowable size it prunes itself back to a specified smaller size, choosing which entries to evict according to its eviction-policy. The entries and size limitations are measured in terms of units as calculated by the scheme's unit-calculator.
The local cache supports automatic expiration of entries based on the age of the value, as configured by the expiry-delay.
The following table describes the elements you can define within the local-scheme element.
Element | Required/Optional | Description |
---|---|---|
<scheme-name> | Optional | Specifies the scheme's name. The name must be unique within a configuration file. |
<scheme-ref> | Optional | Specifies the name of another scheme to inherit from. |
<service-name> | Optional | Specifies the name of the service which will manage caches created from this scheme. Services are configured from within the operational descriptor. |
<class-name> | Optional | Specifies a custom implementation of the local cache. Any custom implementation must extend the com.tangosol.net.cache.LocalCache class and declare the exact same set of public constructors. |
<init-params> | Optional | Specifies initialization parameters, for use in custom local cache implementations which implement the com.tangosol.run.xml.XmlConfigurable interface. |
<listener> | Optional | Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache. |
<cachestore-scheme> | Optional | Specifies the store which is being cached. If unspecified the cached data will only reside in memory, and only reflect operations performed on the cache itself. |
<eviction-policy> | Optional | Specifies the type of eviction policy to use. Legal values are: HYBRID, LRU, and LFU.
Default value is HYBRID. |
<high-units> | Optional | Used to limit the size of the cache. Contains the maximum number of units that can be placed in the cache before pruning occurs. An entry is the unit of measurement, unless it is overridden by an alternate unit-calculator. Once this limit is exceeded, the cache will begin the pruning process, evicting entries according to the eviction policy until the low-units size is reached. Legal values are positive integers or zero. Zero implies no limit. Default value is 1000. |
<low-units> | Optional | Contains the number of units that the cache will be pruned down to when pruning takes place. An entry is the unit of measurement, unless it is overridden by an alternate unit-calculator. When pruning occurs entries will continue to be evicted according to the eviction policy until this size. Legal values are positive integers or zero. Zero implies no limit. Default value is 75% of the high-units setting (i.e. for a high-units setting of 1000 the default low-units will be 750). |
<unit-calculator> | Optional | Specifies the type of unit calculator to use. A unit calculator is used to determine the cost (in "units") of a given object. Legal values are: FIXED and BINARY.
Default value is FIXED. |
<expiry-delay> | Optional | Specifies the amount of time from last update that entries will be kept by the cache before being marked as expired. Any attempt to read an expired entry will result in a reloading of the entry from the configured cache store. Expired values are periodically discarded from the cache based on the flush-delay. The value of this element must be in the following format: [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration: MS or ms (milliseconds), S or s (seconds), M or m (minutes), H or h (hours), or D or d (days).
If the value does not contain a unit, a unit of seconds is assumed. |
<flush-delay> | Optional | Specifies the time interval between periodic cache flushes, which will discard expired entries from the cache, thus freeing resources. The value of this element must be in the following format: [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration: MS or ms (milliseconds), S or s (seconds), M or m (minutes), H or h (hours), or D or d (days).
If the value does not contain a unit, a unit of seconds is assumed. |
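For example, the following sketch (scheme name and values are illustrative) defines a size-limited local cache that prunes from 1000 down to 750 units using LRU eviction, expires entries after one hour, and flushes expired entries every minute:
<local-scheme>
  <scheme-name>example-local</scheme-name>
  <eviction-policy>LRU</eviction-policy>
  <high-units>1000</high-units>
  <low-units>750</low-units>
  <expiry-delay>1h</expiry-delay>
  <flush-delay>1m</flush-delay>
</local-scheme>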
Used in: caching-schemes.
The near-scheme defines a two tier cache consisting of a front-tier which caches a subset of a back-tier cache. The front-tier is generally a fast, size limited cache, while the back-tier is slower, but much higher capacity. A typical deployment might use a local-scheme for the front-tier, and a distributed-scheme for the back-tier. The result is that a portion of a large partitioned cache will be cached locally in-memory allowing for very fast read access. See the services overview for a more detailed description of near caches.
The near scheme is implemented by the com.tangosol.net.cache.NearCache class.
Specifying an invalidation-strategy defines a strategy that is used to keep the front tier of the near cache in sync with the back tier. Depending on that strategy a near cache is configured to listen to certain events occurring on the back tier and automatically update (or invalidate) the front portion of the near cache.
The following table describes the elements you can define within the near-scheme element.
Element | Required/Optional | Description |
---|---|---|
<scheme-name> | Optional | Specifies the scheme's name. The name must be unique within a configuration file. |
<scheme-ref> | Optional | Specifies the name of another scheme to inherit from. |
<class-name> | Optional | Specifies a custom implementation of the near cache.
Any custom implementation must extend the com.tangosol.net.cache.NearCache class and declare the exact same set of public constructors. |
<init-params> | Optional | Specifies initialization parameters, for use in custom near cache implementations which implement the com.tangosol.run.xml.XmlConfigurable interface. |
<listener> | Optional | Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache. |
<front-scheme> | Required | Specifies the cache-scheme to use in creating the front-tier cache.
Legal values are: local-scheme and class-scheme. The eviction policy of the front-scheme defines which entries will be cached locally. For example:
<front-scheme>
  <local-scheme>
    <eviction-policy>HYBRID</eviction-policy>
    <high-units>1000</high-units>
  </local-scheme>
</front-scheme> |
<back-scheme> | Required | Specifies the cache-scheme to use in creating the back-tier cache. Legal values are caching schemes such as the distributed-scheme. For example:
<back-scheme>
  <distributed-scheme>
    <scheme-ref>default-distributed</scheme-ref>
  </distributed-scheme>
</back-scheme> |
<invalidation-strategy> | Optional | Specifies the strategy used to keep the front-tier in sync with the back-tier. Please see com.tangosol.net.cache.NearCache for more details. Legal values are: none, present, all, and auto.
|
<autostart> | Optional | The autostart element is intended to be used by cache servers (i.e. com.tangosol.net.DefaultCacheServer). It specifies whether or not the cache services associated with this cache scheme should be automatically started at a cluster node.
Legal values are true or false. Default value is false. |
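For example, the following sketch (scheme names and values are illustrative) defines a near cache with a size-limited local front tier over a partitioned back tier:
<near-scheme>
  <scheme-name>example-near</scheme-name>
  <front-scheme>
    <local-scheme>
      <eviction-policy>HYBRID</eviction-policy>
      <high-units>1000</high-units>
    </local-scheme>
  </front-scheme>
  <back-scheme>
    <distributed-scheme>
      <scheme-name>example-near-back</scheme-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
    </distributed-scheme>
  </back-scheme>
  <!-- keep front-tier entries in sync for keys currently cached locally -->
  <invalidation-strategy>present</invalidation-strategy>
  <autostart>true</autostart>
</near-scheme>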
Used in: external-scheme, paged-external-scheme, async-store-manager.
Configures an external store which uses a memory-mapped file for storage.
This store manager is implemented by the com.tangosol.io.nio.MappedStoreManager class. The BinaryStore objects created by this class are instances of the com.tangosol.io.nio.BinaryMapStore.
The following table describes the elements you can define within the nio-file-manager element.
Element | Required/Optional | Description |
---|---|---|
<class-name> | Optional | Specifies a custom implementation of the store manager.
Any custom implementation must extend the com.tangosol.io.nio.MappedStoreManager class and declare the exact same set of public constructors. |
<init-params> | Optional | Specifies initialization parameters, for use in custom nio-file-manager implementations which implement the com.tangosol.run.xml.XmlConfigurable interface. |
<initial-size> | Optional | Specifies the initial buffer size in bytes.
The value of this element must be in the following format: [\d]+[[.][\d]+]?[K|k|M|m]?[B|b]? where the first non-digit (from left to right) indicates the factor by which the preceding decimal value should be multiplied: K or k (kilo) or M or m (mega).
If the value does not contain a factor, a factor of mega is assumed. |
<maximum-size> | Optional | Specifies the maximum buffer size in bytes. The value of this element must be in the following format: [\d]+[[.][\d]+]?[K|k|M|m]?[B|b]? where the first non-digit (from left to right) indicates the factor by which the preceding decimal value should be multiplied: K or k (kilo) or M or m (mega).
If the value does not contain a factor, a factor of mega is assumed. |
<directory> | Optional | Specifies the pathname for the root directory that the manager will use to store files in. If this value is not specified, or specifies a non-existent directory, a temporary file in the default location will be used. |
Used in: external-scheme, paged-external-scheme, async-store-manager.
Configures a store manager which uses an off-JVM-heap memory region for storage, which means that it does not affect the Java heap size or the related JVM garbage-collection performance that can be responsible for application pauses.
Some JVMs (starting with 1.4) require the use of a command line parameter if the total NIO buffers will be greater than 64MB. For example: -XX:MaxDirectMemorySize=512M |
Implemented by the com.tangosol.io.nio.DirectStoreManager class. The BinaryStore objects created by this class are instances of the com.tangosol.io.nio.BinaryMapStore.
The following table describes the elements you can define within the nio-memory-manager element.
Element | Required/Optional | Description |
---|---|---|
<class-name> | Optional | Specifies a custom implementation of the store manager.
Any custom implementation must extend the com.tangosol.io.nio.DirectStoreManager class and declare the exact same set of public constructors. |
<init-params> | Optional | Specifies initialization parameters, for use in custom nio-memory-manager implementations which implement the com.tangosol.run.xml.XmlConfigurable interface. |
<initial-size> | Optional | Specifies the initial buffer size in bytes. The value of this element must be in the following format: [\d]+[[.][\d]+]?[K|k|M|m]?[B|b]? where the first non-digit (from left to right) indicates the factor by which the preceding decimal value should be multiplied: K or k (kilo) or M or m (mega).
If the value does not contain a factor, a factor of mega is assumed. |
<maximum-size> | Optional | Specifies the maximum buffer size in bytes. The value of this element must be in the following format: [\d]+[[.][\d]+]?[K|k|M|m]?[B|b]? where the first non-digit (from left to right) indicates the factor by which the preceding decimal value should be multiplied: K or k (kilo) or M or m (mega).
If the value does not contain a factor, a factor of mega is assumed. |
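For example, the following sketch (scheme name and sizes are illustrative) stores cache entries in an off-heap NIO buffer that starts at 1MB and may grow to 100MB; remember the -XX:MaxDirectMemorySize note above when sizing the buffer:
<external-scheme>
  <scheme-name>example-off-heap</scheme-name>
  <nio-memory-manager>
    <initial-size>1MB</initial-size>
    <maximum-size>100MB</maximum-size>
  </nio-memory-manager>
</external-scheme>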
Used in: caching-schemes, near-scheme, versioned-near-scheme, overflow-scheme
The optimistic scheme defines a cache which fully replicates all of its data to all cluster nodes that are running the service. See the services overview for a more detailed description of optimistic caches.
Unlike the replicated and partitioned caches, optimistic caches do not support concurrency control (locking). Individual operations against entries are atomic but there is no guarantee that the value stored in the cache does not change between atomic operations. The lack of concurrency control allows optimistic caches to support very fast write operations.
Storage for the cache is specified via the backing-map-scheme. For instance an optimistic cache which uses a local cache for its backing map will result in cache entries being stored in-memory.
The following table describes the elements you can define within the optimistic-scheme element.
Element | Required/Optional | Description |
---|---|---|
<scheme-name> | Optional | Specifies the scheme's name. The name must be unique within a configuration file. |
<scheme-ref> | Optional | Specifies the name of another scheme to inherit from. |
<service-name> | Optional | Specifies the name of the service which will manage caches created from this scheme.
Services are configured from within the operational descriptor. |
<listener> | Optional | Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache. |
<backing-map-scheme> | Optional | Specifies what type of cache will be used within the cache server to store the entries.
Legal values are non-read-through caching schemes, such as a local-scheme. In order to ensure cache coherence, the backing-map of an optimistic cache must not use a read-through pattern to load cache entries. Either use a cache-aside pattern from outside the cache service, or switch to the distributed-scheme, which supports read-through clustered caching. |
<autostart> | Optional | The autostart element is intended to be used by cache servers (i.e. com.tangosol.net.DefaultCacheServer). It specifies whether or not the cache services associated with this cache scheme should be automatically started at a cluster node.
Legal values are true or false. Default value is false. |
Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme.
The overflow-scheme defines a two tier cache consisting of a fast, size-limited front-tier, and a slower but much higher capacity back-tier cache. When the size-limited front fills up, evicted entries are transparently moved to the back. In the event of a cache miss, entries may move from the back to the front. A typical deployment might use a local-scheme for the front-tier, and an external-scheme for the back-tier, allowing for fast local caches with capacities larger than the JVM heap would allow.
Implemented by either com.tangosol.net.cache.OverflowMap or com.tangosol.net.cache.SimpleOverflowMap; see expiry-enabled for details.
The following table describes the elements you can define within the overflow-scheme element.
Element | Required/Optional | Description |
---|---|---|
<scheme-name> | Optional | Specifies the scheme's name. The name must be unique within a configuration file. |
<scheme-ref> | Optional | Specifies the name of another scheme to inherit from. |
<class-name> | Optional | Specifies a custom implementation of the overflow cache.
Any custom implementation must extend either the com.tangosol.net.cache.OverflowMap or com.tangosol.net.cache.SimpleOverflowMap class, and declare the exact same set of public constructors. |
<init-params> | Optional | Specifies initialization parameters, for use in custom overflow cache implementations which implement the com.tangosol.run.xml.XmlConfigurable interface. |
<listener> | Optional | Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache. |
<front-scheme> | Required | Specifies the cache-scheme to use in creating the front-tier cache.
Legal values are: local-scheme and class-scheme. The eviction policy of the front-scheme defines which items are kept in the front tier versus the back tier. For example:
<front-scheme>
  <local-scheme>
    <eviction-policy>HYBRID</eviction-policy>
    <high-units>1000</high-units>
  </local-scheme>
</front-scheme> |
<back-scheme> | Required | Specifies the cache-scheme to use in creating the back-tier cache.
Legal values are: For example: <back-scheme> <external-scheme> <lh-file-manager/> </external-scheme> </back-scheme> |
<miss-cache-scheme> | Optional | Specifies a cache-scheme for maintaining information on cache misses. For caches which are not expiry-enabled, the miss-cache is used to track keys which resulted in both a front- and back-tier cache miss. The knowledge that a key is not in either tier allows some operations to perform faster, as they can avoid querying the potentially slow back tier. A size-limited scheme may be used to control how many misses are tracked. If unspecified, no cache-miss data will be maintained.
Legal values are: |
<expiry-enabled> | Optional | Turns on support for automatically-expiring data, as provided by the com.tangosol.net.cache.CacheMap API.
When enabled, the overflow-scheme will be implemented using com.tangosol.net.cache.OverflowMap, rather than com.tangosol.net.cache.SimpleOverflowMap. Legal values are true or false. Default value is false. |
<autostart> | Optional | The autostart element is intended to be used by cache servers (i.e. com.tangosol.net.DefaultCacheServer). It specifies whether or not the cache services associated with this cache scheme should be automatically started at a cluster node.
Legal values are true or false. Default value is false. |
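Putting the fragments above together, a complete overflow-scheme might look like the following sketch; the scheme name and size limit are illustrative assumptions:

<overflow-scheme>
  <scheme-name>example-overflow</scheme-name>
  <!-- fast, size-limited in-memory front tier -->
  <front-scheme>
    <local-scheme>
      <eviction-policy>HYBRID</eviction-policy>
      <high-units>1000</high-units>
    </local-scheme>
  </front-scheme>
  <!-- high-capacity on-disk back tier -->
  <back-scheme>
    <external-scheme>
      <lh-file-manager/>
    </external-scheme>
  </back-scheme>
  <!-- use OverflowMap rather than SimpleOverflowMap so expiry is supported -->
  <expiry-enabled>true</expiry-enabled>
</overflow-scheme>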
Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, versioned-near-scheme, overflow-scheme, read-write-backing-map-scheme, versioned-backing-map-scheme
As with external-schemes, paged-external-schemes define caches which are not JVM heap based, allowing for greater storage capacity. The paged-external-scheme optimizes LRU eviction by using a paging approach. See the Serialization Paged Cache overview for a detailed description of the paged cache functionality.
This scheme is implemented by the com.tangosol.net.cache.SerializationPagedCache class.
Cache entries are maintained over a series of pages, where each page is a separate com.tangosol.io.BinaryStore, obtained from the configured storage manager. When a page is created it is considered to be the "current" page, and all write operations are performed against this page. On a configurable interval the current page is closed and a new current page is created. Read operations for a given key are performed against the last page in which the key was stored. When the number of pages exceeds a configured maximum, the oldest page is destroyed and those items which were not updated since the page was closed are evicted. For example, configuring a cache with a duration of ten minutes per page and a maximum of six pages will result in entries being cached for at most an hour.
Paging improves performance by avoiding individual delete operations against the storage manager as cache entries are removed or evicted. Instead the cache simply releases its references to those entries, and relies on the eventual destruction of an entire page to free the associated storage of all page entries in a single stroke.
Paged external schemes use a pluggable store manager to create and destroy pages, as well as to access entries within those pages. Supported store managers include:
Paged external caches are used for temporary storage of large data sets, for example as the back tier of an overflow-scheme. These caches are not usable for long-term storage (persistence) and will not survive beyond the life of the JVM. Clustered persistence should be configured via a read-write-backing-map-scheme on a distributed-scheme. If a non-clustered persistent cache is needed, refer to the external-scheme.
The following table describes the elements you can define within the paged-external-scheme element.
Element | Required/Optional | Description |
---|---|---|
<scheme-name> | Optional | Specifies the scheme's name. The name must be unique within a configuration file. |
<scheme-ref> | Optional | Specifies the name of another scheme to inherit from. |
<class-name> | Optional | Specifies a custom implementation of the external paged cache.
Any custom implementation must extend the com.tangosol.net.cache.SerializationPagedCache class and declare the exact same set of public constructors. |
<init-params> | Optional | Specifies initialization parameters, for use in custom external paged cache implementations which implement the com.tangosol.run.xml.XmlConfigurable interface. |
<listener> | Optional | Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache. |
<page-limit> | Required | Specifies the maximum number of active pages for the paged cache.
Legal values are positive integers between 2 and 3600. |
<page-duration> | Optional | Specifies the length of time, in seconds, that a page in the paged cache is current.
The value of this element must be in the following format: [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of seconds is assumed.
|
<async-store-manager> | Optional | Configures the paged external cache to use an asynchronous storage manager wrapper for any other storage manager. |
<custom-store-manager> | Optional | Configures the paged external cache to use a custom storage manager implementation. |
<bdb-store-manager> | Optional | Configures the paged external cache to use Berkeley Database JE on-disk databases for cache storage. |
<lh-file-manager> | Optional | Configures the paged external cache to use a Tangosol LH on-disk database for cache storage. |
<nio-file-manager> | Optional | Configures the paged external cache to use a memory-mapped file for cache storage. |
<nio-memory-manager> | Optional | Configures the paged external cache to use an off-JVM-heap memory region for cache storage. |
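As a sketch of the example described above (ten-minute pages and a maximum of six pages, so entries are cached for at most an hour), a paged-external-scheme might be configured as follows; the scheme name is hypothetical:

<paged-external-scheme>
  <scheme-name>example-paged-external</scheme-name>
  <!-- keep at most six active pages -->
  <page-limit>6</page-limit>
  <!-- each page is current for ten minutes -->
  <page-duration>10m</page-duration>
  <!-- store pages in Berkeley Database JE on-disk databases -->
  <bdb-store-manager/>
</paged-external-scheme>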
Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme.
The read-write-backing-map-scheme defines a backing map which provides a size limited cache of a persistent store. See the Read-Through, Write-Through, Refresh-Ahead and Write-Behind Caching overview for more details.
The read-write-backing-map-scheme is implemented by the com.tangosol.net.cache.ReadWriteBackingMap class.
A read-write backing map maintains a cache backed by an external persistent cache store; cache misses will read through to the back-end store to retrieve the data. If a writable store is provided, cache writes will propagate to the cache store as well.
When refresh-ahead is enabled, the cache will watch for recently accessed entries which are about to expire, and asynchronously reload them from the cache store. This insulates the application from potentially slow reads against the cache store as items periodically expire.
When write-behind is enabled, the cache will delay writes to the back-end cache store. This allows the writes to be batched into more efficient update blocks, which occur asynchronously from the client thread.
The following table describes the elements you can define within the read-write-backing-map-scheme element.
Element | Required/Optional | Description |
---|---|---|
<scheme-name> | Optional | Specifies the scheme's name. The name must be unique within a configuration file. |
<scheme-ref> | Optional | Specifies the name of another scheme to inherit from. |
<class-name> | Optional | Specifies a custom implementation of the read write backing map.
Any custom implementation must extend the com.tangosol.net.cache.ReadWriteBackingMap class and declare the exact same set of public constructors. |
<init-params> | Optional | Specifies initialization parameters, for use in custom read write backing map implementations which implement the com.tangosol.run.xml.XmlConfigurable interface. |
<listener> | Optional | Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache. |
<cachestore-scheme> | Optional | Specifies the store to cache. If unspecified, the cached data will reside only within the internal cache and will reflect only operations performed on the cache itself. |
<internal-cache-scheme> | Required | Specifies a cache-scheme which will be used to cache entries.
Legal values are: |
<miss-cache-scheme> | Optional | Specifies a cache-scheme for maintaining information on cache misses. The miss-cache is used to track keys which were not found in the cache store. The knowledge that a key is not in the cache store allows some operations to perform faster, as they can avoid querying the potentially slow cache store. A size-limited scheme may be used to control how many misses are cached. If unspecified, no cache-miss data will be maintained.
Legal values are: |
<read-only> | Optional | Specifies if the cache is read-only. If true, the cache will load data from the cachestore for read operations and will not perform any writing to the cachestore when the cache is updated.
Legal values are true or false. Default value is false. |
<write-delay> | Optional | Specifies the time interval for a write-behind queue to defer asynchronous writes to the cachestore by.
The value of this element must be in the following format: [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of seconds is assumed.
|
<write-batch-factor> | Optional | The write-batch-factor element is used to calculate the "soft-ripe" time for write-behind queue entries.
A queue entry is considered to be "ripe" for a write operation if it has been in the write-behind queue for no less than the write-delay interval. The "soft-ripe" time is the point in time prior to the actual "ripe" time after which an entry will be included in a batched asynchronous write operation to the CacheStore (along with all other "ripe" and "soft-ripe" entries). This element is only applicable if asynchronous writes are enabled (i.e. the value of the write-delay element is greater than zero) and the CacheStore implements the storeAll() method. The value of the element is expressed as a percentage of the write-delay interval. For example, if the value is zero, only "ripe" entries from the write-behind queue will be batched. On the other hand, if the value is 1.0, all currently queued entries will be batched and the value of the write-delay element will be effectively ignored. Legal values are non-negative doubles less than or equal to 1.0. Default is zero. |
<write-requeue-threshold> | Optional | Specifies the maximum size of the write-behind queue for which failed cachestore write operations are requeued.
The purpose of this setting is to prevent flooding of the write-behind queue with failed cachestore operations. This can happen in situations where a large number of successive write operations fail. If zero, write-behind requeueing is disabled. Legal values are positive integers or zero. Default is zero. |
<refresh-ahead-factor> | Optional | The refresh-ahead-factor element is used to calculate the "soft-expiration" time for cache entries.
Soft-expiration is the point in time prior to the actual expiration after which any access request for an entry will schedule an asynchronous load request for the entry. This attribute is only applicable if the internal cache is a LocalCache, configured with automatic expiration. The value is expressed as a percentage of the internal LocalCache expiration interval. If zero, refresh-ahead scheduling will be disabled. If 1.0, then any get operation will immediately trigger an asynchronous reload. Legal values are non-negative doubles less than or equal to 1.0. Default value is zero. |
<rollback-cachestore-failures> | Optional | Specifies whether or not exceptions caught during synchronous cachestore operations are rethrown to the calling thread (possibly over the network to a remote member).
If the value of this element is false, an exception caught during a synchronous cachestore operation is logged locally and the internal cache is updated. If the value is true, the exception is rethrown to the calling thread and the internal cache is not changed. If the operation was called within a transactional context, this would have the effect of rolling back the current transaction. Legal values are true or false. Default value is false. |
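For illustration, a read-write-backing-map-scheme combining an in-memory internal cache with a write-behind cache store might look like the following sketch; the cache store class com.example.MyCacheStore is a hypothetical application-supplied implementation:

<read-write-backing-map-scheme>
  <scheme-name>example-read-write</scheme-name>
  <!-- size-limited in-memory cache of the persistent store -->
  <internal-cache-scheme>
    <local-scheme>
      <high-units>10000</high-units>
    </local-scheme>
  </internal-cache-scheme>
  <!-- application-supplied CacheStore implementation (hypothetical class) -->
  <cachestore-scheme>
    <class-scheme>
      <class-name>com.example.MyCacheStore</class-name>
    </class-scheme>
  </cachestore-scheme>
  <!-- defer writes by ten seconds so they can be batched (write-behind) -->
  <write-delay>10s</write-delay>
</read-write-backing-map-scheme>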
Used in: caching-schemes, near-scheme, versioned-near-scheme, overflow-scheme, versioned-backing-map-scheme
The replicated scheme defines caches which fully replicate all their cache entries on each cluster node running the specified service. See the service overview for a more detailed description of replicated caches.
Replicated caches support cluster wide key-based locking so that data can be modified in a cluster without encountering the classic missing update problem. Note that any operation made without holding an explicit lock is still atomic but there is no guarantee that the value stored in the cache does not change between atomic operations.
Storage for the cache is specified via the backing-map-scheme. For instance, a replicated cache which uses a local cache for its backing map results in cache entries being stored in memory.
The following table describes the elements you can define within the replicated-scheme element.
Element | Required/Optional | Description |
---|---|---|
<scheme-name> | Optional | Specifies the scheme's name. The name must be unique within a configuration file. |
<scheme-ref> | Optional | Specifies the name of another scheme to inherit from. |
<service-name> | Optional | Specifies the name of the service which will manage caches created from this scheme.
Services are configured from within the operational descriptor. |
<listener> | Optional | Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache. |
<backing-map-scheme> | Optional | Specifies what type of cache will be used within the cache server to store the entries.
Legal values are: In order to ensure cache coherence, the backing-map of a replicated cache must not use a read-through pattern to load cache entries. Either use a cache-aside pattern from outside the cache service, or switch to the distributed-scheme, which supports read-through clustered caching. |
<standard-lease-milliseconds> | Optional | Specifies the duration of the standard lease in milliseconds.
Once a lease has aged past this number of milliseconds, the lock will automatically be released. Set this value to zero to specify a lease that never expires. The purpose of this setting is to avoid deadlocks or blocks caused by stuck threads; the value should be set higher than the longest expected lock duration (e.g. higher than a transaction timeout). It is also recommended to set this value higher than the packet-delivery/timeout-milliseconds value. Legal values are positive long numbers or zero. Default value is the value specified in the tangosol-coherence.xml descriptor. |
<lease-granularity> | Optional | Specifies the lease ownership granularity. Available since release 2.3.
Legal values are thread or member. A value of thread means that locks are held by the thread that obtained them and can only be released by that thread. A value of member means that locks are held by a cluster node, and any thread running on the cluster node that obtained the lock can release it.
|
<mobile-issues> | Optional | Specifies whether or not lease issues should be transferred to the most recent lock holders.
Legal values are true or false. Default value is the value specified in the tangosol-coherence.xml descriptor. |
<autostart> | Optional | The autostart element is intended to be used by cache servers (i.e. com.tangosol.net.DefaultCacheServer). It specifies whether or not the cache services associated with this cache scheme should be automatically started at a cluster node.
Legal values are true or false. Default value is false. |
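For illustration, a minimal replicated-scheme might be configured as follows; the scheme and service names are hypothetical:

<replicated-scheme>
  <scheme-name>example-replicated</scheme-name>
  <service-name>ReplicatedCache</service-name>
  <!-- each node holds a full in-memory copy of the cache -->
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <!-- locks are held per cluster node rather than per thread -->
  <lease-granularity>member</lease-granularity>
  <autostart>true</autostart>
</replicated-scheme>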
Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme.
The versioned-backing-map-scheme is an extension of a read-write-backing-map-scheme, defining a size limited cache of a persistent store. It utilizes object versioning to determine what updates need to be written to the persistent store.
The versioned-backing-map-scheme is implemented by the com.tangosol.net.cache.VersionedBackingMap class.
As with the read-write-backing-map-scheme, a versioned backing map maintains a cache backed by an external persistent cache store; cache misses will read through to the back-end store to retrieve the data. Cache stores may also support updates to the back-end data store.
As with the read-write-backing-map-scheme, both the refresh-ahead and write-behind caching optimizations are supported. See Read-Through, Write-Through, Refresh-Ahead and Write-Behind Caching for more details.
For entries whose values implement the com.tangosol.util.Versionable interface, the versioned backing map will utilize the version identifier to determine if an update needs to be written to the persistent store. The primary benefit of this feature is that in the event of cluster node failover, the backup node can determine if the most recent version of an entry has already been written to the persistent store, and if so it can avoid an extraneous write.
The following table describes the elements you can define within the versioned-backing-map-scheme element.
Element | Required/Optional | Description |
---|---|---|
<scheme-name> | Optional | Specifies the scheme's name. The name must be unique within a configuration file. |
<scheme-ref> | Optional | Specifies the name of another scheme to inherit from. |
<class-name> | Optional | Specifies a custom implementation of the versioned backing map.
Any custom implementation must extend the com.tangosol.net.cache.VersionedBackingMap class and declare the exact same set of public constructors. |
<init-params> | Optional | Specifies initialization parameters, for use in custom versioned backing map implementations which implement the com.tangosol.run.xml.XmlConfigurable interface. |
<listener> | Optional | Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache. |
<cachestore-scheme> | Optional | Specifies the store to cache. If unspecified, the cached data will reside only within the internal cache and will reflect only operations performed on the cache itself. |
<internal-cache-scheme> | Required | Specifies a cache-scheme which will be used to cache entries.
Legal values are: |
<miss-cache-scheme> | Optional | Specifies a cache-scheme for maintaining information on cache misses. The miss-cache is used to track keys which were not found in the cache store. The knowledge that a key is not in the cache store allows some operations to perform faster, as they can avoid querying the potentially slow cache store. A size-limited scheme may be used to control how many misses are cached. If unspecified, no cache-miss data will be maintained.
Legal values are: |
<read-only> | Optional | Specifies if the cache is read-only. If true, the cache will load data from the cachestore for read operations and will not perform any writing to the cachestore when the cache is updated.
Legal values are true or false. Default value is false. |
<write-delay> | Optional | Specifies the time interval for a write-behind queue to defer asynchronous writes to the cachestore by.
The value of this element must be in the following format: [\d]+[[.][\d]+]?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of seconds is assumed.
|
<write-batch-factor> | Optional | The write-batch-factor element is used to calculate the "soft-ripe" time for write-behind queue entries.
A queue entry is considered to be "ripe" for a write operation if it has been in the write-behind queue for no less than the write-delay interval. The "soft-ripe" time is the point in time prior to the actual "ripe" time after which an entry will be included in a batched asynchronous write operation to the CacheStore (along with all other "ripe" and "soft-ripe" entries). This element is only applicable if asynchronous writes are enabled (i.e. the value of the write-delay element is greater than zero) and the CacheStore implements the storeAll() method. The value of the element is expressed as a percentage of the write-delay interval. For example, if the value is zero, only "ripe" entries from the write-behind queue will be batched. On the other hand, if the value is 1.0, all currently queued entries will be batched and the value of the write-delay element will be effectively ignored. Legal values are non-negative doubles less than or equal to 1.0. Default is zero. |
<write-requeue-threshold> | Optional | Specifies the maximum size of the write-behind queue for which failed cachestore write operations are requeued.
The purpose of this setting is to prevent flooding of the write-behind queue with failed cachestore operations. This can happen in situations where a large number of successive write operations fail. If zero, write-behind requeueing is disabled. Legal values are positive integers or zero. Default is zero. |
<refresh-ahead-factor> | Optional | The refresh-ahead-factor element is used to calculate the "soft-expiration" time for cache entries.
Soft-expiration is the point in time prior to the actual expiration after which any access request for an entry will schedule an asynchronous load request for the entry. This attribute is only applicable if the internal cache is a LocalCache, configured with automatic expiration. The value is expressed as a percentage of the internal LocalCache expiration interval. If zero, refresh-ahead scheduling will be disabled. If 1.0, then any get operation will immediately trigger an asynchronous reload. Legal values are non-negative doubles less than or equal to 1.0. Default value is zero. |
<rollback-cachestore-failures> | Optional | Specifies whether or not exceptions caught during synchronous cachestore operations are rethrown to the calling thread (possibly over the network to a remote member).
If the value of this element is false, an exception caught during a synchronous cachestore operation is logged locally and the internal cache is updated. If the value is true, the exception is rethrown to the calling thread and the internal cache is not changed. If the operation was called within a transactional context, this would have the effect of rolling back the current transaction. Legal values are true or false. Default value is false. |
<version-persistent-scheme> | Optional | Specifies a cache-scheme for tracking the version identifier for entries in the persistent cachestore. |
<version-transient-scheme> | Optional | Specifies a cache-scheme for tracking the version identifier for entries in the transient internal cache. |
<manage-transient> | Optional | Specifies if the backing map is responsible for keeping the transient version cache up to date.
If disabled, the backing map manages the transient version cache only for operations of which no other party is aware (such as entry expiry). This is used when there is already a transient version cache of the same name being maintained at a higher level, for instance within a versioned-near-scheme. Legal values are true or false. Default value is false. |
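The following sketch shows how these elements might fit together; the scheme names and the cache store class are hypothetical, and the version caches reference an assumed distributed scheme named example-distributed:

<versioned-backing-map-scheme>
  <scheme-name>example-versioned-backing-map</scheme-name>
  <internal-cache-scheme>
    <local-scheme/>
  </internal-cache-scheme>
  <!-- hypothetical store whose values implement com.tangosol.util.Versionable -->
  <cachestore-scheme>
    <class-scheme>
      <class-name>com.example.MyVersionedCacheStore</class-name>
    </class-scheme>
  </cachestore-scheme>
  <!-- track version identifiers written to the persistent store -->
  <version-persistent-scheme>
    <distributed-scheme>
      <scheme-ref>example-distributed</scheme-ref>
    </distributed-scheme>
  </version-persistent-scheme>
  <!-- track version identifiers held in the transient internal cache -->
  <version-transient-scheme>
    <distributed-scheme>
      <scheme-ref>example-distributed</scheme-ref>
    </distributed-scheme>
  </version-transient-scheme>
</versioned-backing-map-scheme>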
Used in: caching-schemes.
As of Coherence release 2.3, it is suggested that a near-scheme be used instead of versioned-near-scheme. Legacy Coherence applications use versioned-near-scheme to ensure coherence through object versioning. As of Coherence 2.3, the near-scheme includes a better alternative, in the form of reliable and efficient front-cache invalidation.
As with the near-scheme, the versioned-near-scheme defines a two-tier cache consisting of a small, fast front-end cache and a higher-capacity but slower back-end cache. The front-end and back-end are expressed as normal cache-schemes. A typical deployment might use a local-scheme for the front-end and a distributed-scheme for the back-end. See the services overview for a more detailed description of versioned near caches.
The versioned near scheme is implemented by the com.tangosol.net.cache.VersionedNearCache class.
Object versioning is used to ensure coherence between the front and back tiers.
The following table describes the elements you can define within the versioned-near-scheme element.
Element | Required/Optional | Description |
---|---|---|
<scheme-name> | Optional | Specifies the scheme's name. The name must be unique within a configuration file. |
<scheme-ref> | Optional | Specifies the name of another scheme to inherit from. |
<class-name> | Optional | Specifies a custom implementation of the versioned near cache.
Any custom implementation must extend the com.tangosol.net.cache.VersionedNearCache class and declare the exact same set of public constructors. |
<init-params> | Optional | Specifies initialization parameters, for use in custom versioned near cache implementations which implement the com.tangosol.run.xml.XmlConfigurable interface. |
<listener> | Optional | Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache. |
<front-scheme> | Required | Specifies the cache-scheme to use in creating the front-tier cache. Legal values are: For example: <front-scheme> <local-scheme> <scheme-ref>default-eviction</scheme-ref> </local-scheme> </front-scheme> or <front-scheme> <class-scheme> <class-name>com.tangosol.util.SafeHashMap</class-name> <init-params></init-params> </class-scheme> </front-scheme> |
<back-scheme> | Required | Specifies the cache-scheme to use in creating the back-tier cache. Legal values are:
<back-scheme> <distributed-scheme> <scheme-ref>default-distributed</scheme-ref> </distributed-scheme> </back-scheme> |
<version-transient-scheme> | Optional | Specifies a scheme for versioning cache entries, which ensures coherence between the front and back tiers. |
<autostart> | Optional | The autostart element is intended to be used by cache servers (i.e. com.tangosol.net.DefaultCacheServer). It specifies whether or not the cache services associated with this cache scheme should be automatically started at a cluster node.
Legal values are true or false. Default value is false. |
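For illustration, a versioned-near-scheme tying the elements above together might look like the following sketch; the scheme names (example-versioned-near, default-distributed) follow the examples shown in the table:

<versioned-near-scheme>
  <scheme-name>example-versioned-near</scheme-name>
  <!-- small, fast front-tier cache -->
  <front-scheme>
    <local-scheme>
      <high-units>1000</high-units>
    </local-scheme>
  </front-scheme>
  <!-- clustered back-tier cache -->
  <back-scheme>
    <distributed-scheme>
      <scheme-ref>default-distributed</scheme-ref>
    </distributed-scheme>
  </back-scheme>
  <!-- version cache keeping the front and back tiers coherent -->
  <version-transient-scheme>
    <distributed-scheme>
      <scheme-ref>default-distributed</scheme-ref>
    </distributed-scheme>
  </version-transient-scheme>
</versioned-near-scheme>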
Used in: versioned-near-scheme, versioned-backing-map-scheme.
The version-transient-scheme defines a cache for storing object versioning information for use in versioned near-caches. Specifying a size limit on the specified scheme's backing-map allows control over how many version identifiers are tracked.
The following table describes the elements you can define within the version-transient-scheme element.
Element | Required/Optional | Description |
---|---|---|
<cache-name-suffix> | Optional | Specifies the name modifier that is used to create a cache of version objects associated with a given cache. The value of this element is appended to the base cache name.
Legal value is a string. Default value is "-version". For example, if the base cache is named "Sessions" and this name modifier is set to "-version", the associated version cache will be named "Sessions-version". |
<replicated-scheme> or <distributed-scheme> | Required | Specifies the scheme for the cache used to maintain the versioning information.
Legal values are: |
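For illustration, a version-transient-scheme might be declared as follows; default-distributed is an assumed scheme name:

<version-transient-scheme>
  <!-- a cache named "Sessions" gets a version cache named "Sessions-version" -->
  <cache-name-suffix>-version</cache-name-suffix>
  <distributed-scheme>
    <scheme-ref>default-distributed</scheme-ref>
  </distributed-scheme>
</version-transient-scheme>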
Used in: versioned-backing-map-scheme.
The version-persistent-scheme defines a cache for storing object versioning information in a clustered cache. Specifying a size limit on the specified scheme's backing-map allows control over how many version identifiers are tracked.
The following table describes the elements you can define within the version-persistent-scheme element.
Element | Required/Optional | Description |
---|---|---|
<cache-name-suffix> | Optional | Specifies the name modifier that is used to create a cache of version objects associated with a given cache. The value of this element is appended to the base cache name.
Legal value is a string. Default value is "-persist". For example, if the base cache is named "Sessions" and this name modifier is set to "-persist", the associated version cache will be named "Sessions-persist". |
<replicated-scheme> or <distributed-scheme> | Required | Specifies the scheme for the cache used to maintain the versioning information.
Legal values are: |
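For illustration, a version-persistent-scheme might be declared as follows; default-distributed is an assumed scheme name:

<version-persistent-scheme>
  <!-- a cache named "Sessions" gets a version cache named "Sessions-persist" -->
  <cache-name-suffix>-persist</cache-name-suffix>
  <distributed-scheme>
    <scheme-ref>default-distributed</scheme-ref>
  </distributed-scheme>
</version-persistent-scheme>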