replicated-scheme

Used in: caching-schemes, near-scheme, versioned-near-scheme, overflow-scheme, versioned-backing-map-scheme

Description

The replicated scheme defines caches which fully replicate all their cache entries on each cluster node running the specified service. See the service overview for a more detailed description of replicated caches.
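
For context, the following sketch shows how a cache name might be bound to a replicated scheme within a cache configuration file; all names here (example-cache, example-replicated, ReplicatedCache) are hypothetical:

  <cache-config>
    <caching-scheme-mapping>
      <cache-mapping>
        <!-- Any cache created with this name uses the replicated scheme below. -->
        <cache-name>example-cache</cache-name>
        <scheme-name>example-replicated</scheme-name>
      </cache-mapping>
    </caching-scheme-mapping>

    <caching-schemes>
      <replicated-scheme>
        <scheme-name>example-replicated</scheme-name>
        <service-name>ReplicatedCache</service-name>
      </replicated-scheme>
    </caching-schemes>
  </cache-config>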

Clustered Concurrency Control

Replicated caches support cluster-wide, key-based locking so that data can be modified in a cluster without encountering the classic missing update problem. Note that any operation made without holding an explicit lock is still atomic, but there is no guarantee that the value stored in the cache does not change between atomic operations.
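
As a sketch, lock-lease behavior can be tuned through the lease elements described in the table later in this section; the scheme name and values shown here are illustrative only:

  <replicated-scheme>
    <scheme-name>locking-replicated</scheme-name>
    <!-- Locks are owned by the acquiring thread and auto-release after one minute. -->
    <standard-lease-milliseconds>60000</standard-lease-milliseconds>
    <lease-granularity>thread</lease-granularity>
  </replicated-scheme>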

Cache Storage (Backing Map)

Storage for the cache is specified via the backing-map-scheme element. For instance, a replicated cache that uses a local cache for its backing map stores its cache entries in memory.
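
A minimal sketch of such a scheme, assuming hypothetical scheme and service names:

  <replicated-scheme>
    <scheme-name>in-memory-replicated</scheme-name>
    <service-name>ReplicatedCache</service-name>
    <backing-map-scheme>
      <!-- Each node stores its copy of every entry in a local, on-heap map. -->
      <local-scheme/>
    </backing-map-scheme>
  </replicated-scheme>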

Elements

The following table describes the elements you can define within the replicated-scheme element.

Element Required/Optional Description
<scheme-name> Optional Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> Optional Specifies the name of another scheme to inherit from; see the inheritance example after this table.
<service-name> Optional Specifies the name of the service which will manage caches created from this scheme. Services are configured from within the operational descriptor.
<listener> Optional Specifies an implementation of a com.tangosol.util.MapListener which will be notified of events occurring on the cache.
<backing-map-scheme> Optional Specifies what type of cache will be used within the cache server to store the entries.

Legal values are:

  • class-scheme
  • local-scheme
  • external-scheme
  • paged-external-scheme
  • overflow-scheme

To ensure cache coherence, the backing map of a replicated cache must not use a read-through pattern to load cache entries. Either use a cache-aside pattern from outside the cache service, or switch to the distributed-scheme, which supports read-through clustered caching.

<standard-lease-milliseconds> Optional Specifies the duration of the standard lease in milliseconds. Once a lease has aged past this number of milliseconds, the lock is automatically released. Set this value to zero to specify a lease that never expires. The purpose of this setting is to avoid deadlocks or blocks caused by stuck threads; the value should be set higher than the longest expected lock duration (for example, higher than a transaction timeout). It is also recommended to set this value higher than the packet-delivery/timeout-milliseconds value.

Legal values are positive long numbers or zero.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<lease-granularity> Optional Specifies the lease ownership granularity. Available since release 2.3.

Legal values are:

  • thread
  • member

A value of thread means that locks are held by the thread that obtained them and can only be released by that thread. A value of member means that locks are held by the cluster node, and any thread running on the node that obtained the lock can release it.

Default value is the value specified in the tangosol-coherence.xml descriptor.

<mobile-issues> Optional Specifies whether or not the lease issues should be transferred to the most recent lock holders.

Legal values are true or false.

Default value is the value specified in the tangosol-coherence.xml descriptor.
<autostart> Optional The autostart element is intended to be used by cache servers (i.e., com.tangosol.net.DefaultCacheServer). It specifies whether or not the cache services associated with this cache scheme should be automatically started at a cluster node.

Legal values are true or false.

Default value is false.
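
As a closing sketch, a scheme can inherit from another via scheme-ref and override selected elements; all scheme and service names here are hypothetical:

  <replicated-scheme>
    <scheme-name>base-replicated</scheme-name>
    <service-name>ReplicatedCache</service-name>
    <backing-map-scheme>
      <local-scheme/>
    </backing-map-scheme>
    <!-- Start the cache service automatically on cache servers. -->
    <autostart>true</autostart>
  </replicated-scheme>

  <replicated-scheme>
    <!-- Inherits everything from base-replicated and overrides only the lease granularity. -->
    <scheme-name>member-locking-replicated</scheme-name>
    <scheme-ref>base-replicated</scheme-ref>
    <lease-granularity>member</lease-granularity>
  </replicated-scheme>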