Configuring and using Coherence for .NET requires five basic steps:
Configure Coherence*Extend on both the client and on one or more JVMs within the cluster. See "Configuring Coherence*Extend" below.
Configure a POF context on the client and on all of the JVMs within the cluster that run the Coherence*Extend clustered service. See "Configuring a POF Context: Overview".
Implement the .NET client application using the Coherence for .NET API. See "Using the Coherence .NET APIs".
Make sure the Coherence cluster is up and running. See "Starting a Coherence DefaultCacheServer Process".
Launch the .NET client application.
To configure Coherence*Extend, you must add the appropriate configuration elements to both the cluster-side and client-side cache configuration descriptors. The cluster-side cache configuration elements instruct a Coherence DefaultCacheServer to start a Coherence*Extend clustered service that listens for incoming TCP/IP requests from Coherence*Extend clients. The client-side cache configuration elements are used by the client library to determine the IP address and port of one or more servers in the cluster that run the Coherence*Extend clustered service, so that the client can connect to the cluster. The client-side descriptor also contains various connection-related parameters, such as connection and request timeouts.
For a Coherence*Extend client to connect to a Coherence cluster, one or more DefaultCacheServer JVMs within the cluster must run a TCP/IP Coherence*Extend clustered service. To configure a DefaultCacheServer to run this service, a proxy-scheme element with a child tcp-acceptor element must be added to the cache configuration descriptor used by the DefaultCacheServer. This is illustrated in Example 18-1.
Example 18-1 Configuration of a Default Cache Server for Coherence*Extend
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>dist-*</cache-name>
      <scheme-name>dist-default</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <distributed-scheme>
      <scheme-name>dist-default</scheme-name>
      <lease-granularity>member</lease-granularity>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>

    <proxy-scheme>
      <service-name>ExtendTcpProxyService</service-name>
      <thread-count>5</thread-count>
      <acceptor-config>
        <tcp-acceptor>
          <local-address>
            <address>localhost</address>
            <port>9099</port>
          </local-address>
        </tcp-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
    </proxy-scheme>
  </caching-schemes>
</cache-config>
This cache configuration descriptor defines two clustered services: one that allows remote Coherence*Extend clients to connect to the Coherence cluster over TCP/IP, and a standard Partitioned cache service. Since this descriptor is used by a DefaultCacheServer, it is important that the autostart configuration element for each service is set to true so that clustered services are automatically restarted upon termination. The proxy-scheme element has a tcp-acceptor child element which includes all TCP/IP-specific information needed to accept client connection requests over TCP/IP.
The Coherence*Extend clustered service configured above will listen for incoming requests on the localhost address and port 9099. When, for example, a client attempts to connect to a Coherence cache called dist-extend, the Coherence*Extend clustered service will proxy subsequent requests to the NamedCache with the same name which, in this example, will be a Partitioned cache.
A Coherence*Extend client uses the information within an initiator-config cache configuration descriptor element to connect to and communicate with a Coherence*Extend clustered service running within a Coherence cluster. This is illustrated in Example 18-2.
Example 18-2 Configuration to Connect to a Remote Coherence Cluster
<?xml version="1.0"?>

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>dist-extend</cache-name>
      <scheme-name>extend-dist</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>localhost</address>
              <port>9099</port>
            </socket-address>
          </remote-addresses>
        </tcp-initiator>
        <outgoing-message-handler>
          <request-timeout>5s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-cache-scheme>
  </caching-schemes>
</cache-config>
This cache configuration descriptor defines a caching scheme that connects to a remote Coherence cluster. The remote-cache-scheme element has a tcp-initiator child element which includes all TCP/IP-specific information needed to connect the client with the Coherence*Extend clustered service running within the remote Coherence cluster.
When the client application retrieves a named cache with CacheFactory using, for example, the name dist-extend, the Coherence*Extend client will connect to the Coherence cluster by using TCP/IP (using the address localhost and port 9099) and return an INamedCache implementation that routes requests to the NamedCache with the same name running within the remote cluster. Note that the remote-addresses configuration element can contain multiple socket-address child elements. The Coherence*Extend client will attempt to connect to the addresses in random order, until either the list is exhausted or a TCP/IP connection is established.
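For example, with the descriptor above in effect, a .NET client can obtain and use the remote cache as follows. This is a minimal sketch; the cache name dist-extend comes from Example 18-2, and Insert is assumed to be available through the ICache interface.

using Tangosol.Net;

// Triggers a TCP/IP connection to ExtendTcpCacheService on first use
INamedCache cache = CacheFactory.GetCache("dist-extend");

// Routed to the clustered NamedCache named "dist-extend"
cache.Insert("key", "value");
object value = cache["key"];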
A Local Cache is just that: A cache that is local to (completely contained within) a particular .NET application. There are several attributes of the Local Cache that are particularly interesting:
The Local Cache implements the same standard cache interfaces that a remote cache implements (ICache, IObservableCache, IConcurrentCache, IQueryCache, and IInvocableCache), meaning that there is no programming difference between using a local and a remote cache.
The Local Cache can be size-limited. This means that the Local Cache can restrict the number of entries that it caches, and automatically evict entries when the cache becomes full. Furthermore, both the sizing of entries and the eviction policies are customizable; for example, the cache can be size-limited based on the memory used by the cached entries. The default eviction policy uses a combination of Most Frequently Used (MFU) and Most Recently Used (MRU) information, scaled on a logarithmic curve, to determine which cache items to evict. This algorithm is the best general-purpose eviction algorithm because it works well for both short-duration and long-duration caches, and it balances frequency versus recency to avoid cache thrashing. Pure LRU and pure LFU algorithms are also supported, as is the ability to plug in custom eviction policies.
The Local Cache supports automatic expiration of cached entries, meaning that each cache entry can be assigned a time-to-live value in the cache. Furthermore, the entire cache can be configured to flush itself on a periodic basis or at a preset time.
The Local Cache is thread safe and highly concurrent.
The Local Cache provides cache "get" statistics. It maintains hit and miss statistics. These runtime statistics can be used to accurately project the effectiveness of the cache, and adjust its size-limiting and auto-expiring settings accordingly while the cache is running.
The Coherence for .NET Local Cache functionality is implemented by the Tangosol.Net.Cache.LocalCache class. As such, it can be programmatically instantiated and configured; however, it is recommended that a LocalCache be configured by using a cache configuration descriptor, just like any other Coherence for .NET cache.
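For instance, a size-limited, auto-expiring LocalCache might be created in code along the following lines. This is a sketch only: the constructor overload taking a unit limit and an expiry delay in milliseconds is an assumption and should be verified against the API reference.

using Tangosol.Net.Cache;

// Assumed overload: at most 1000 units, entries expire 10 seconds after update
LocalCache cache = new LocalCache(1000, 10000);

cache.Insert("key", "value");
object value = cache["key"]; // null after expiration or eviction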
The key element for configuring the Local Cache is <local-scheme>. Local caches are generally nested within other cache schemes, for instance as the front tier of a near-scheme. Thus, this element can appear as a subelement of any of these elements in the coherence-cache-config file: <caching-schemes>, <distributed-scheme>, <replicated-scheme>, <optimistic-scheme>, <near-scheme>, <versioned-near-scheme>, <overflow-scheme>, <read-write-backing-map-scheme>, and <versioned-backing-map-scheme>.
The <local-scheme> provides several optional subelements that let you define the characteristics of the cache. For example, the <low-units> and <high-units> subelements allow you to limit the cache in terms of size. Once the cache reaches its maximum allowable size, it prunes itself back to a specified smaller size, choosing which entries to evict according to a specified eviction policy (<eviction-policy>). The entries and size limitations are measured in terms of units as calculated by the scheme's unit calculator (<unit-calculator>). A custom class can be defined using the <class-scheme> subelement for both the <eviction-policy> and <unit-calculator> elements to specify custom behavior as required.
You can also limit the cache in terms of time. The <expiry-delay> subelement specifies the amount of time from last update that entries are kept by the cache before being marked as expired. Any attempt to read an expired entry results in the entry being reloaded from the configured cache store (<cachestore-scheme>). Expired values are periodically discarded from the cache based on the flush delay (<flush-delay>).
If a <cachestore-scheme> is not specified, then the cached data will only reside in memory, and only reflect operations performed on the cache itself. See <local-scheme> for a complete description of all of the available subelements.
Example 18-3 demonstrates a local cache configuration.
Example 18-3 Configuring a Local Cache
<?xml version="1.0"?>

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>example-local-cache</cache-name>
      <scheme-name>example-local</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <local-scheme>
      <scheme-name>example-local</scheme-name>
      <eviction-policy>LRU</eviction-policy>
      <high-units>32000</high-units>
      <low-units>10</low-units>
      <unit-calculator>FIXED</unit-calculator>
      <expiry-delay>10ms</expiry-delay>
      <flush-delay>1000ms</flush-delay>
      <cachestore-scheme>
        <class-scheme>
          <class-name>ExampleCacheStore</class-name>
        </class-scheme>
      </cachestore-scheme>
      <pre-load>true</pre-load>
    </local-scheme>
  </caching-schemes>
</cache-config>
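Example 18-3 wires in a custom ExampleCacheStore class through the <class-scheme> element. The following sketch shows what such a class might look like, assuming it implements the Tangosol.Net.Cache.ICacheStore interface; the Hashtable used here is a stand-in for a real data source, and the exact member signatures should be verified against the API reference.

using System.Collections;
using Tangosol.Net.Cache;

public class ExampleCacheStore : ICacheStore
{
    // Stand-in for a database or other external data source
    private readonly Hashtable storage = new Hashtable();

    public object Load(object key)
    {
        return storage[key];
    }

    public IDictionary LoadAll(ICollection keys)
    {
        IDictionary result = new Hashtable();
        foreach (object key in keys)
        {
            result[key] = storage[key];
        }
        return result;
    }

    public void Store(object key, object value)
    {
        storage[key] = value;
    }

    public void StoreAll(IDictionary entries)
    {
        foreach (DictionaryEntry entry in entries)
        {
            storage[entry.Key] = entry.Value;
        }
    }

    public void Erase(object key)
    {
        storage.Remove(key);
    }

    public void EraseAll(ICollection keys)
    {
        foreach (object key in keys)
        {
            storage.Remove(key);
        }
    }
}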
This section describes the Near Cache as it pertains to Coherence for .NET clients. For a complete discussion of the concepts behind a Near Cache, its configuration, and ways to keep it synchronized with the back tier, see "Configuring a Near Cache" in the Oracle Coherence Developer's Guide.
In Coherence for .NET, the Near Cache is an INamedCache implementation that wraps the front cache and the back cache using a read-through/write-through approach. If the back cache implements the IObservableCache interface, then the Near Cache can use either the ListenNone, ListenPresent, ListenAll, or ListenAuto strategy to invalidate any front cache entries that might have been changed in the back cache.
The Tangosol.Net.Cache.NearCache class enables you to programmatically instantiate and configure .NET Near Cache functionality. However, it is recommended that you use a cache configuration descriptor to configure the NearCache.
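For completeness, programmatic construction might look like the following sketch. The constructor form NearCache(front, back, strategy) and the CompositeCacheStrategyType enumeration for the Listen* strategies are assumptions here; verify both against the API reference.

using Tangosol.Net;
using Tangosol.Net.Cache;

// Front: in-process, size-limited local cache; back: cache retrieved from the cluster
LocalCache front = new LocalCache(1000);
INamedCache back = CacheFactory.GetCache("dist-extend");

// Assumed constructor and strategy enumeration
NearCache cache = new NearCache(front, back, CompositeCacheStrategyType.ListenAuto);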
A typical Near Cache is configured to use a local cache (thread safe, highly concurrent, size-limited, and/or auto-expiring) as the front cache and a remote cache as the back cache. A Near Cache is configured by using the <near-scheme> element in the coherence-cache-config file. This element has two required subelements: <front-scheme> for configuring a local (front-tier) cache and <back-scheme> for defining a remote (back-tier) cache. While a local cache (<local-scheme>) is a typical choice for the front tier, you can also use non-JVM heap based caches (<external-scheme> or <paged-external-scheme>) or schemes based on Java objects (<class-scheme>).
The remote or back-tier cache is described by the <back-scheme> element. A back-tier cache can be either a distributed cache (<distributed-scheme>) or a remote cache (<remote-cache-scheme>). The <remote-cache-scheme> element enables you to use a clustered cache from outside the current cluster.
Optional subelements of <near-scheme> include <invalidation-strategy> for specifying how the front-tier and back-tier objects will be kept synchronized, and <listener> for specifying a listener which will be notified of events occurring on the cache.
Example 18-4 demonstrates a near cache configuration.
Example 18-4 Near Cache Configuration
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>dist-extend-near</cache-name>
      <scheme-name>extend-near</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <near-scheme>
      <scheme-name>extend-near</scheme-name>
      <front-scheme>
        <local-scheme>
          <high-units>1000</high-units>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <remote-cache-scheme>
          <scheme-ref>extend-dist</scheme-ref>
        </remote-cache-scheme>
      </back-scheme>
      <invalidation-strategy>all</invalidation-strategy>
    </near-scheme>

    <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>localhost</address>
              <port>9099</port>
            </socket-address>
          </remote-addresses>
          <connect-timeout>10s</connect-timeout>
        </tcp-initiator>
        <outgoing-message-handler>
          <request-timeout>5s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-cache-scheme>
  </caching-schemes>
</cache-config>
When a Coherence*Extend client service detects that the connection between the client and the cluster has been severed (for example, due to a network, software, or hardware failure), the Coherence*Extend client service implementation (that is, ICacheService or IInvocationService) will raise a MemberEventType.Left event (by using the MemberEventHandler delegate) and the service will be stopped. If the client application subsequently attempts to use the service, the service will automatically restart itself and attempt to reconnect to the cluster. If the connection is successful, the service will raise a MemberEventType.Joined event; otherwise, a fatal exception will be thrown to the client application.
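For example, a client can watch for these events by attaching MemberEventHandler delegates to the service. The event names MemberLeft and MemberJoined and the MemberEventArgs type below are assumptions based on the MemberEventType values mentioned above; verify them against the API reference.

using System;
using Tangosol.Net;

public class ConnectionMonitor
{
    public static void Main()
    {
        INamedCache cache = CacheFactory.GetCache("dist-extend");
        ICacheService service = cache.CacheService;

        // Assumed event names; both use the MemberEventHandler delegate
        service.MemberLeft += new MemberEventHandler(OnMemberLeft);
        service.MemberJoined += new MemberEventHandler(OnMemberJoined);
    }

    private static void OnMemberLeft(object sender, MemberEventArgs evt)
    {
        Console.WriteLine("Connection to the cluster was lost");
    }

    private static void OnMemberJoined(object sender, MemberEventArgs evt)
    {
        Console.WriteLine("Connection to the cluster was established");
    }
}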
A Coherence*Extend service has several mechanisms for detecting dropped connections. Some mechanisms are inherent to the underlying protocol (such as TCP/IP in Extend-TCP), whereas others are implemented by the service itself. The latter mechanisms are configured by using the outgoing-message-handler configuration element.
The primary configurable mechanism used by a Coherence*Extend client service to detect dropped connections is a request timeout. When the service sends a request to the remote cluster and does not receive a response within the request timeout interval (see <request-timeout>), the service assumes that the connection has been dropped. The Coherence*Extend client and clustered services can also be configured to send a periodic heartbeat over the connection (see <heartbeat-interval> and <heartbeat-timeout>). If the service does not receive a response within the configured heartbeat timeout interval, the service assumes that the connection has been dropped.
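As an illustration, an initiator-side outgoing-message-handler that combines all three settings might look as follows (the interval and timeout values are illustrative only):

<outgoing-message-handler>
  <heartbeat-interval>10s</heartbeat-interval>
  <heartbeat-timeout>5s</heartbeat-timeout>
  <request-timeout>5s</request-timeout>
</outgoing-message-handler>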
To start a DefaultCacheServer that uses the cluster-side Coherence cache configuration described earlier to allow Coherence for .NET clients to connect to the Coherence cluster by using TCP/IP, you need to do the following:
Change the current directory to the Oracle Coherence library directory (%COHERENCE_HOME%\lib on Windows and $COHERENCE_HOME/lib on UNIX).
Make sure that the paths are configured so that the Java command will run.
Start the DefaultCacheServer command line application with the -Dtangosol.coherence.cacheconfig system property set to the location of the cluster-side Coherence cache configuration descriptor described earlier.
Example 18-5 illustrates a sample command line.
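A representative invocation follows; the descriptor file name extend-server-config.xml is a placeholder for your own cluster-side configuration file.

Example 18-5 Sample DefaultCacheServer Command Line

java -cp coherence.jar -Dtangosol.coherence.cacheconfig=extend-server-config.xml com.tangosol.net.DefaultCacheServer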
A reference to a configured cache can be obtained by name by using the CacheFactory class:
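For example (the cache name my-cache matches the one used in Example 18-7 and is purely illustrative):

using Tangosol.Net;

INamedCache cache = CacheFactory.GetCache("my-cache");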
Instances of all INamedCache implementations, including LocalCache, should be explicitly released by calling the INamedCache.Release() method when they are no longer needed, to free up any resources they might hold.
If the particular INamedCache is used for the duration of the application, then the resources will be cleaned up when the application is shut down or otherwise stops. However, if it is only used for a period, the application should call its Release() method when finished using it.
Alternatively, you can leverage the fact that INamedCache extends IDisposable and that all cache implementations delegate a call to IDisposable.Dispose() to INamedCache.Release(). This means that if you need to obtain and release a cache instance within a single method, you can do so with a using block:
Example 18-7 Obtaining and Releasing a Reference to a Cache
using (INamedCache cache = CacheFactory.GetCache("my-cache"))
{
    // use cache as usual
}
After the using block terminates, IDisposable.Dispose() will be called on the INamedCache instance, and all resources associated with it will be released.