Oracle® Coherence Client Guide
Release 3.6.1

Part Number E15726-03

3 Setting Up Coherence*Extend

This chapter provides instructions for configuring Coherence*Extend. The instructions provide basic setup and do not represent a complete configuration reference. In addition, refer to the platform-specific parts of this guide for additional configuration instructions. For a complete Java example that also includes configuration and setup, see Chapter 4, "Building Your First Extend Client."

This chapter includes the following sections:

Overview
Configuring the Cluster Side
Configuring the Client Side
Using an Address Provider for TCP Addresses
Using Network Filters with Extend Clients

Overview

Coherence*Extend requires configuration on both the client side and the cluster side. On the cluster side, extend proxy services are set up to accept client requests. Proxy services provide access to cache service instances and invocation service instances that are running on the cluster. On the client side, remote cache services and remote invocation services are configured and used by clients to access cluster data through the extend proxy service. Extend clients and extend proxy services communicate using TCP/IP.

Extend proxy services are configured in a cache configuration deployment descriptor. This deployment descriptor is often referred to as the cluster-side cache configuration file. It is the same cache configuration file that is used to set up caches on the cluster. Extend clients are also configured using a cache configuration deployment descriptor. This deployment descriptor is deployed with the client and is often referred to as the client-side cache configuration file. For detailed information about the cache configuration deployment descriptor, see "Specifying a Cache Configuration File" in the Oracle Coherence Developer's Guide.

Configuring the Cluster Side

A Coherence cluster must include an extend proxy service in order to accept extend client connections and must include a cache that is used by clients to retrieve and store data. Both the extend proxy service and caches are configured in the cluster's cache configuration deployment descriptor. Extend proxy services and caches are started as part of a cache server (DefaultCacheServer) process.
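
For example, a cache server that hosts the proxy service can be started from the command line. The following invocation is a sketch: coherence.jar and example-config.xml are placeholders for your own installation's library and cache configuration file names.

java -cp coherence.jar -Dtangosol.coherence.cacheconfig=example-config.xml com.tangosol.net.DefaultCacheServer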

The following topics are included in this section:

Setting Up Extend Proxy Services
Defining Caches for Use By Extend Clients

Setting Up Extend Proxy Services

The extend proxy service (ProxyService) is a cluster service that allows extend clients to access a Coherence cluster using TCP/IP. A proxy service includes proxies for two types of cluster services: the CacheService cluster service, which is used by clients to access caches; and, the InvocationService cluster service, which is used by clients to execute Invocable objects on the cluster.

The following topics are included in this section:

Defining a Proxy Service
Defining Multiple Proxy Services
Disabling Cluster Service Proxies
Specifying Read-Only NamedCache Access
Specifying NamedCache Locking

Defining a Proxy Service

Extend proxy services are configured within a <caching-schemes> node using the <proxy-scheme> element. The <proxy-scheme> element has a <tcp-acceptor> child element that includes the address (IP or DNS name) and port that an extend proxy service listens to for TCP/IP client communication. See the "proxy-scheme" element reference in the Oracle Coherence Developer's Guide for a complete list and description of all <proxy-scheme> subelements.

Example 3-1 defines a proxy service named ExtendTcpProxyService that is set up to listen for client requests on a TCP/IP ServerSocket that is bound to 192.168.1.5 and port 9099. Both the cache and invocation cluster service proxies are enabled for client requests. In addition, the <autostart> element is set to true so that the service starts automatically when a cluster node starts.

Example 3-1 Extend Proxy Service Configuration

...
<caching-schemes>
   ...
   <proxy-scheme>
      <service-name>ExtendTcpProxyService</service-name>
      <acceptor-config>
         <tcp-acceptor>
            <local-address>
               <address>192.168.1.5</address>
               <port>9099</port>
            </local-address>
         </tcp-acceptor>
      </acceptor-config>
      <proxy-config>
         <cache-service-proxy>
            <enabled>true</enabled>
         </cache-service-proxy>
         <invocation-service-proxy>
            <enabled>true</enabled>
         </invocation-service-proxy>
      </proxy-config>
      <autostart>true</autostart>
   </proxy-scheme>
</caching-schemes>
...

Note:

For clarity, the above example explicitly enables the cache and invocation cluster service proxies. However, both proxies are enabled by default and do not require a <cache-service-proxy> and <invocation-service-proxy> element to be included in the proxy scheme definition.

Defining Multiple Proxy Services

Any number of extend proxy services can be set up to support an expected number of client connections as well as to support fault tolerance. For more information on fault tolerance, see "Configuring Fault Tolerance for Remote Addresses".

The following example defines two extend proxy services. ExtendTcpProxyService1 is set up to listen for client requests on a TCP/IP ServerSocket that is bound to 192.168.1.5 and port 9099. ExtendTcpProxyService2 is set up to listen for client requests on a TCP/IP ServerSocket that is bound to 192.168.1.6 and port 9099.

...
<caching-schemes>
   ...
   <proxy-scheme>
      <service-name>ExtendTcpProxyService1</service-name>
      <acceptor-config>
         <tcp-acceptor>
            <local-address>
               <address>192.168.1.5</address>
               <port>9099</port>
            </local-address>
         </tcp-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
   </proxy-scheme>
   <proxy-scheme>
      <service-name>ExtendTcpProxyService2</service-name>
      <acceptor-config>
         <tcp-acceptor>
            <local-address>
               <address>192.168.1.6</address>
               <port>9099</port>
            </local-address>
         </tcp-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
   </proxy-scheme>
</caching-schemes>
...

Disabling Cluster Service Proxies

The cache service and invocation service proxies can be disabled within an extend proxy service definition. Both of these proxies are enabled by default and can be explicitly disabled if a client does not require a service.

Cluster service proxies are disabled by setting the <enabled> element to false within the <cache-service-proxy> and <invocation-service-proxy> elements, respectively.

The following example disables the invocation service proxy so that extend clients cannot execute Invocable objects within the cluster:

<proxy-scheme>
   ...
   <proxy-config>
      <invocation-service-proxy>
         <enabled>false</enabled>
      </invocation-service-proxy>
   </proxy-config>
   ...
</proxy-scheme>

Likewise, the following example disables the cache service proxy to restrict extend clients from accessing caches within the cluster:

<proxy-scheme>
   ...
   <proxy-config>
      <cache-service-proxy>
         <enabled>false</enabled>
      </cache-service-proxy>
   </proxy-config>
   ...
</proxy-scheme>

Specifying Read-Only NamedCache Access

By default, extend clients are allowed to both read and write data to proxied NamedCache instances. The <read-only> element can be specified within a <cache-service-proxy> element to prohibit extend clients from modifying cached content on the cluster. For example:

<proxy-scheme>
   ...
   <proxy-config>
      <cache-service-proxy>
         <read-only>true</read-only>
      </cache-service-proxy>
   </proxy-config>
   ...
</proxy-scheme>

Specifying NamedCache Locking

By default, extend clients are not allowed to acquire NamedCache locks. The <lock-enabled> element can be specified within a <cache-service-proxy> element to allow extend clients to perform locking. For example:

<proxy-scheme>
   ...
   <proxy-config>
      <cache-service-proxy>
         <lock-enabled>true</lock-enabled>
      </cache-service-proxy>
   </proxy-config>
   ...
</proxy-scheme>

If client-side locking is enabled and a client application uses the NamedCache.lock() and unlock() methods, it is important that a member-based (rather than thread-based) locking strategy is configured when using a partitioned or replicated cache. The locking strategy is configured using the <lease-granularity> element when defining cluster-side caches. A granularity value of thread (the default setting) means that locks are held by the thread that obtained them and can be released only by that thread. A granularity value of member means that locks are held by a cluster node and can be released by any thread running on the cluster node that obtained the lock. Because the extend proxy clustered service uses a pool of threads to execute client requests concurrently, it cannot guarantee that the same thread will execute subsequent requests from the same extend client.

The following example demonstrates setting the lease granularity to member for a partitioned cache:

...
<distributed-scheme>
   <scheme-name>dist-default</scheme-name>
   <lease-granularity>member</lease-granularity>
   <backing-map-scheme>
      <local-scheme/>
   </backing-map-scheme>
   <autostart>true</autostart>
</distributed-scheme>
...
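
For reference, the following client-side sketch shows how an extend client might use explicit locking against the dist-extend cache once <lock-enabled> is set to true and the cluster-side cache uses member lease granularity. The key and value used here are illustrative.

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class LockExample {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("dist-extend");

        // wait up to five seconds to acquire the lock for the key
        if (cache.lock("example-key", 5000)) {
            try {
                cache.put("example-key", "example-value");
            }
            finally {
                // always release the lock, even if the update fails
                cache.unlock("example-key");
            }
        }
        CacheFactory.shutdown();
    }
}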

Defining Caches for Use By Extend Clients

Extend clients read and write data to a cache on the cluster. Any of the cache types can be used to store client data. For extend clients, the cache on the cluster must have the same name as the cache that is being used on the client side; see "Defining a Remote Cache". For more information on defining caches, see "Using Caches" in the Oracle Coherence Developer's Guide.

The following example defines a partitioned cache named dist-extend.

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>dist-extend</cache-name>
         <scheme-name>dist-default</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>

   <caching-schemes>
      <distributed-scheme>
         <scheme-name>dist-default</scheme-name>
         <backing-map-scheme>
            <local-scheme/>
         </backing-map-scheme>
         <autostart>true</autostart>
      </distributed-scheme>
   </caching-schemes>
</cache-config>

Configuring the Client Side

Extend clients use the remote cache service and the remote invocation service to interact with a Coherence cluster. The services must be configured to connect to extend proxy services that are running on the cluster. Both remote cache services and remote invocation services are configured in a cache configuration deployment descriptor that must be found on the classpath when an extend-based client application starts.

The following topics are included in this section:

Defining a Remote Cache
Using a Remote Cache as a Back Cache
Defining Remote Invocation Schemes
Configuring Fault Tolerance for Remote Addresses
Detecting Connection Errors

Defining a Remote Cache

A remote cache is a specialized cache service that routes cache operations to a cache on the cluster. The remote cache and the cache on the cluster must have the same name. Extend clients use the NamedCache interface as normal to get an instance of the cache. At run time, the cache operations are not executed locally but instead are sent using TCP/IP to an extend proxy service on the cluster. The fact that the cache operations are delegated to a cache on the cluster is transparent to the extend client.

A remote cache is defined within a <caching-schemes> node using the <remote-cache-scheme> element. A <tcp-initiator> element is used to define the address (IP or DNS name) and port of the extend proxy service on the cluster to which the client connects. See the "remote-cache-scheme" element reference in the Oracle Coherence Developer's Guide for a complete list and description of all <remote-cache-scheme> subelements.

Example 3-2 defines a remote cache named dist-extend that connects to an extend proxy service that is listening on address 198.168.1.5 and port 9099. To use this remote cache, there must be a cache defined on the cluster that is also named dist-extend. See "Defining Caches for Use By Extend Clients" for more information on defining caches on the cluster.

Example 3-2 Remote Cache Definition

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>dist-extend</cache-name>
         <scheme-name>extend-dist</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>

   <caching-schemes>
      <remote-cache-scheme>
         <scheme-name>extend-dist</scheme-name>
         <service-name>ExtendTcpCacheService</service-name>
         <initiator-config>
            <tcp-initiator>
               <remote-addresses>
                  <socket-address>
                     <address>198.168.1.5</address>
                     <port>9099</port>
                  </socket-address>
               </remote-addresses>
               <connect-timeout>10s</connect-timeout>
            </tcp-initiator>
            <outgoing-message-handler>
               <request-timeout>5s</request-timeout>
            </outgoing-message-handler>
         </initiator-config>
      </remote-cache-scheme>
   </caching-schemes>
</cache-config>
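
With this configuration on the client's classpath, client code obtains and uses the remote cache through the standard NamedCache API; the delegation to the cluster is transparent. A minimal sketch (the key and value are illustrative):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class RemoteCacheExample {
    public static void main(String[] args) {
        // resolves to the remote cache scheme defined above
        NamedCache cache = CacheFactory.getCache("dist-extend");

        cache.put("key", "value");            // sent to the cluster over TCP/IP
        System.out.println(cache.get("key")); // fetched from the cluster

        CacheFactory.shutdown();
    }
}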

Using a Remote Cache as a Back Cache

Extend clients typically use remote caches as part of a near cache. In such scenarios, a local cache is used as a front cache and the remote cache is used as the back cache. For more information, see "Defining a Near Cache for C++ Clients" and "Defining a Near Cache for .NET Clients".

The following example creates a near cache that uses a local cache together with a remote cache.

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>dist-extend-near</cache-name>
         <scheme-name>extend-near</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>

   <caching-schemes>
      <near-scheme>
         <scheme-name>extend-near</scheme-name>
         <front-scheme>
            <local-scheme>
               <high-units>1000</high-units>
            </local-scheme>
         </front-scheme>
         <back-scheme>
            <remote-cache-scheme>
               <scheme-ref>extend-dist</scheme-ref>
            </remote-cache-scheme>
         </back-scheme>
         <invalidation-strategy>all</invalidation-strategy>
      </near-scheme>

      <remote-cache-scheme>
         <scheme-name>extend-dist</scheme-name>
         <service-name>ExtendTcpCacheService</service-name>
         <initiator-config>
            <tcp-initiator>
               <remote-addresses>
                  <socket-address>
                     <address>localhost</address>
                     <port>9099</port>
                  </socket-address>
               </remote-addresses>
               <connect-timeout>10s</connect-timeout>
            </tcp-initiator>
            <outgoing-message-handler>
               <request-timeout>5s</request-timeout>
            </outgoing-message-handler>
         </initiator-config>
      </remote-cache-scheme>
   </caching-schemes>
</cache-config>

Defining Remote Invocation Schemes

A remote invocation scheme defines an invocation service that is used by clients to execute tasks on the remote Coherence cluster. Extend clients use the InvocationService interface as normal. At run time, a TCP/IP connection is made to an extend proxy service and an InvocationService implementation is returned that executes synchronous Invocable tasks within the remote cluster JVM to which the client is connected.

Remote invocation schemes are defined within a <caching-schemes> node using the <remote-invocation-scheme> element. A <tcp-initiator> element is used to define the address (IP or DNS name) and port of the extend proxy service on the cluster to which the client connects. See the "remote-invocation-scheme" element reference in the Oracle Coherence Developer's Guide for a complete list and description of all <remote-invocation-scheme> subelements.

Example 3-3 defines a remote invocation scheme whose service is named ExtendTcpInvocationService and that connects to an extend proxy service that is listening on address 198.168.1.5 and port 9099.

Example 3-3 Remote Invocation Scheme Definition

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
   ...
   
   <caching-schemes>
      ...
      <remote-invocation-scheme>
         <scheme-name>extend-invocation</scheme-name>
         <service-name>ExtendTcpInvocationService</service-name>
         <initiator-config>
            <tcp-initiator>
               <remote-addresses>
                  <socket-address>
                     <address>198.168.1.5</address>
                     <port>9099</port>
                  </socket-address>
               </remote-addresses>
               <connect-timeout>10s</connect-timeout>
            </tcp-initiator>
            <outgoing-message-handler>
               <request-timeout>5s</request-timeout>
            </outgoing-message-handler>
         </initiator-config>
      </remote-invocation-scheme>
   </caching-schemes>
</cache-config>
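
Once this scheme is in the client-side configuration, a client can obtain the service by name and execute a task. The following sketch uses a trivial, illustrative Invocable; for Coherence*Extend, the member set passed to query() must be null, and the task class must also be available (and serializable) on the cluster side.

import java.util.Map;

import com.tangosol.net.AbstractInvocable;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.InvocationService;

public class InvocationExample {
    // illustrative task; runs in the proxy JVM to which the client is connected
    public static class HelloTask extends AbstractInvocable {
        public void run() {
            setResult("Hello from the cluster");
        }
    }

    public static void main(String[] args) {
        InvocationService service = (InvocationService)
                CacheFactory.getService("ExtendTcpInvocationService");

        // null member set: the task executes on the connected proxy JVM
        Map mapResult = service.query(new HelloTask(), null);
        System.out.println(mapResult.values());
    }
}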

Configuring Fault Tolerance for Remote Addresses

Remote cache schemes and remote invocation schemes can include multiple extend proxy service addresses to ensure that a client can always connect to the cluster. The addresses are attempted in random order until either the list is exhausted or a TCP/IP connection is established. To configure multiple addresses, add additional <socket-address> child elements within the <tcp-initiator> element of a <remote-cache-scheme> or <remote-invocation-scheme> node as required. The following example defines two extend proxy addresses for a remote cache scheme.

...
<remote-cache-scheme>
   <scheme-name>extend-dist</scheme-name>
   <service-name>ExtendTcpCacheService</service-name>
   <initiator-config>
      <tcp-initiator>
         <remote-addresses>
            <socket-address>
               <address>192.168.1.5</address>
               <port>9099</port>
            </socket-address>
            <socket-address>
               <address>192.168.1.6</address>
               <port>9099</port>
            </socket-address>
         </remote-addresses>
      </tcp-initiator>
   </initiator-config>
</remote-cache-scheme>
...

Detecting Connection Errors

When a Coherence*Extend service detects that the connection between the client and cluster has been severed (for example, due to a network, software, or hardware failure), the Coherence*Extend client service implementation (that is, CacheService or InvocationService) dispatches a MemberEvent.MEMBER_LEFT event to all registered MemberListeners and the service is stopped. For cases where the application calls CacheFactory.shutdown(), the service implementation dispatches a MemberEvent.MEMBER_LEAVING event followed by a MemberEvent.MEMBER_LEFT event. In both cases, if the client application attempts to subsequently use the service, the service automatically restarts itself and attempts to reconnect to the cluster. If the connection is successful, the service dispatches a MemberEvent.MEMBER_JOINED event; otherwise, a fatal exception is thrown to the client application.
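
A client can observe these events by registering a MemberListener with the remote service. A minimal sketch (the cache name is illustrative):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.MemberEvent;
import com.tangosol.net.MemberListener;
import com.tangosol.net.NamedCache;

public class DisconnectListenerExample {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("dist-extend");

        // listen for connection lifecycle events on the remote cache service
        cache.getCacheService().addMemberListener(new MemberListener() {
            public void memberJoined(MemberEvent evt) {
                System.out.println("Connected to the cluster");
            }
            public void memberLeaving(MemberEvent evt) {
            }
            public void memberLeft(MemberEvent evt) {
                System.err.println("Connection to the cluster was lost");
            }
        });
    }
}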

A Coherence*Extend service has several mechanisms for detecting dropped connections. Some are inherent to the underlying TCP/IP protocol, whereas others are implemented by the service itself. The latter mechanisms are configured within the <outgoing-message-handler> element.

The <request-timeout> element is the primary mechanism used to detect dropped connections. When a service sends a request to the remote cluster and does not receive a response within the request timeout interval, the service assumes that the connection has been dropped.

WARNING:

If a <request-timeout> value is not specified, a Coherence*Extend service uses an infinite request timeout. In general, this is not a recommended configuration, as it could result in an unresponsive application. For most use cases, specify a reasonable finite request timeout.

The following example is taken from Example 3-2 and demonstrates setting the request timeout to 5 seconds.

...
<initiator-config>
   <tcp-initiator>
      <remote-addresses>
         <socket-address>
            <address>198.168.1.5</address>
            <port>9099</port>
         </socket-address>
      </remote-addresses>
      <connect-timeout>10s</connect-timeout>
   </tcp-initiator>
   <outgoing-message-handler>
      <request-timeout>5s</request-timeout>
   </outgoing-message-handler>
</initiator-config>
...

The <heartbeat-interval> and <heartbeat-timeout> elements can also be used to detect dropped connections. The service sends a heartbeat to the remote cluster at the configured interval; if it does not receive a response within the configured heartbeat timeout interval, the service assumes that the connection has been dropped.

The following example sets the heartbeat interval to 500 milliseconds and the heartbeat timeout to 5 seconds.

...
<initiator-config>
   <tcp-initiator>
      <remote-addresses>
         <socket-address>
            <address>198.168.1.5</address>
            <port>9099</port>
         </socket-address>
      </remote-addresses>
      <connect-timeout>10s</connect-timeout>
   </tcp-initiator>
   <outgoing-message-handler>
      <heartbeat-interval>500ms</heartbeat-interval>
      <heartbeat-timeout>5s</heartbeat-timeout>
   </outgoing-message-handler>
</initiator-config>
...

Using an Address Provider for TCP Addresses

An address provider can be used to supply TCP address and port settings dynamically at run time: on the cluster side, the address to which an extend proxy service binds its server socket; on the client side, the remote addresses to which a client attempts to connect. The address provider must be an implementation of the com.tangosol.net.AddressProvider interface. Dynamically supplying addresses is typically used to implement custom load-balancing algorithms.

Address providers are defined using the <address-provider> element, which can be used within the <tcp-acceptor> element for extend proxy schemes and within the <tcp-initiator> element for remote cache and remote invocation schemes.

Note:

The <address-provider> element also supports using a factory for object instantiation. See the <address-provider> element reference in the Oracle Coherence Developer's Guide.

The following example demonstrates configuring an AddressProvider implementation called MyAddressProvider for a TCP acceptor when configuring an extend proxy scheme.

<proxy-scheme>
   <service-name>ExtendTcpProxyService</service-name>
   <thread-count>5</thread-count>
   <acceptor-config>
      <tcp-acceptor>
         <address-provider>
            <class-name>com.MyAddressProvider</class-name>
         </address-provider>
      </tcp-acceptor>
   </acceptor-config>
   <autostart>true</autostart>
</proxy-scheme>

The following example demonstrates configuring an AddressProvider implementation called MyClientAddressProvider for a TCP initiator when configuring a remote cache scheme.

<remote-cache-scheme>
   <scheme-name>extend-dist</scheme-name>
   <service-name>ExtendTcpCacheService</service-name>
   <initiator-config>
      <tcp-initiator>
         <remote-addresses>
            <address-provider>
               <class-name>com.MyClientAddressProvider</class-name>
            </address-provider>
         </remote-addresses>
         <connect-timeout>10s</connect-timeout>
      </tcp-initiator>
      <outgoing-message-handler>
         <request-timeout>5s</request-timeout>
      </outgoing-message-handler>
   </initiator-config>
</remote-cache-scheme>
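
The com.MyAddressProvider and com.MyClientAddressProvider classes in the examples above stand in for application-specific implementations of the AddressProvider interface. The following sketch simply cycles through a fixed list of addresses; the class name and addresses are placeholders.

import java.net.InetSocketAddress;

import com.tangosol.net.AddressProvider;

public class MyClientAddressProvider implements AddressProvider {
    private final InetSocketAddress[] m_aAddr = new InetSocketAddress[] {
        new InetSocketAddress("192.168.1.5", 9099),
        new InetSocketAddress("192.168.1.6", 9099)
    };
    private int m_iNext;

    // return the next address to try, or null when the list is exhausted
    public InetSocketAddress getNextAddress() {
        return m_iNext < m_aAddr.length ? m_aAddr[m_iNext++] : null;
    }

    // the last address returned was used successfully; start over next time
    public void accept() {
        m_iNext = 0;
    }

    // the last address returned could not be used; try the next one
    public void reject(Throwable eCause) {
    }
}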

Using Network Filters with Extend Clients

Like Coherence clustered services, Coherence*Extend services support pluggable network filters. Filters can be used to modify the contents of network traffic before it is placed on the wire. Most standard Coherence network filters are supported, including the compression and symmetric encryption filters. For more information on configuring filters, see "Using Network Filters" in the Oracle Coherence Developer's Guide.

To use network filters with Coherence*Extend, a <use-filters> element must be added to the <initiator-config> element in the client-side cache configuration descriptor and to the <acceptor-config> element in the cluster-side cache configuration descriptor.

Note:

The contents of the <use-filters> element must be the same in the client and cluster-side cache configuration descriptors.

For example, to encrypt network traffic exchanged between an extend client and the clustered service to which it is connected, configure the client-side <remote-cache-scheme> and <remote-invocation-scheme> elements as follows (assuming the symmetric encryption filter has been named symmetric-encryption):

<remote-cache-scheme>
  <scheme-name>extend-dist</scheme-name>
  <service-name>ExtendTcpCacheService</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>localhost</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
      <connect-timeout>10s</connect-timeout>
    </tcp-initiator>
    <outgoing-message-handler>
      <request-timeout>5s</request-timeout>
    </outgoing-message-handler>
    <use-filters>
      <filter-name>symmetric-encryption</filter-name>
    </use-filters>    
  </initiator-config>
</remote-cache-scheme>

<remote-invocation-scheme>
  <scheme-name>extend-invocation</scheme-name>
  <service-name>ExtendTcpInvocationService</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>localhost</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
      <connect-timeout>10s</connect-timeout>
    </tcp-initiator>
    <outgoing-message-handler>
      <request-timeout>5s</request-timeout>
    </outgoing-message-handler>
    <use-filters>
      <filter-name>symmetric-encryption</filter-name>
    </use-filters>    
  </initiator-config>
</remote-invocation-scheme>

For the cluster side, add a <use-filters> element within the <proxy-scheme> element that specifies a filter with the same name as the client-side configuration (for this example, symmetric-encryption):

<proxy-scheme>
  <service-name>ExtendTcpProxyService</service-name>
  <thread-count>5</thread-count>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <address>localhost</address>
        <port>9099</port>
      </local-address>
    </tcp-acceptor>
    <use-filters>
      <filter-name>symmetric-encryption</filter-name>
    </use-filters>
  </acceptor-config>
  <autostart>true</autostart>
</proxy-scheme>