4 Setting Up Coherence*Extend

This chapter provides instructions for configuring Coherence*Extend. The instructions provide basic setup and do not represent a complete configuration reference. In addition, refer to the platform-specific parts of this guide for additional configuration instructions.

For a complete Java example that also includes configuration and setup, see Chapter 3, "Building Your First Extend Client."

This chapter includes the following sections:

4.1 Overview

Coherence*Extend requires configuration on both the client side and the cluster side. On the cluster side, extend proxy services are set up to accept client requests. Proxy services provide access to cache service instances and invocation service instances that run on the cluster. On the client side, remote cache services and remote invocation services are configured and used by clients to access cluster data through the extend proxy service. Extend clients and extend proxy services communicate using TCP/IP.

Extend proxy services are configured in a cache configuration deployment descriptor. This deployment descriptor is often referred to as the cluster-side cache configuration file. It is the same cache configuration file that is used to set up caches on the cluster. Extend clients are also configured using a cache configuration deployment descriptor. This deployment descriptor is deployed with the client and is often referred to as the client-side cache configuration file. See Developing Applications with Oracle Coherence for detailed information about the cache configuration deployment descriptor.

4.2 Configuring the Cluster Side

A Coherence cluster must include an extend proxy service to accept extend client connections and must include a cache that is used by clients to retrieve and store data. Both the extend proxy service and caches are configured in the cluster's cache configuration deployment descriptor. Extend proxy services and caches are started as part of a cache server (DefaultCacheServer) process.

The following topics are included in this section:

4.2.1 Setting Up Extend Proxy Services

The extend proxy service (ProxyService) is a cluster service that allows extend clients to access a Coherence cluster using TCP/IP. A proxy service includes proxies for two types of cluster services: the CacheService cluster service, which is used by clients to access caches; and, the InvocationService cluster service, which is used by clients to execute Invocable objects on the cluster.

The following topics are included in this section:

4.2.1.1 Defining a Proxy Service

Extend proxy services are configured within a <caching-schemes> node using the <proxy-scheme> element. The <tcp-acceptor> subelement includes the address (IP, or DNS name, and port) that an extend proxy service listens to for TCP/IP client communication. The address can be explicitly defined using the <local-address> element, or the address can be defined within an operational override configuration file and referenced using the <address-provider> element. The latter approach decouples the address configuration from the proxy scheme definition and allows the address to change at runtime without having to change the proxy definition. For details on referencing an address definition, see "Using Address Provider References for TCP Addresses".

Example 4-1 defines a proxy service named ExtendTcpProxyService that is set up to listen for client requests on a TCP/IP ServerSocket that is bound to 192.168.1.5 and port 9099. Both the cache and invocation cluster service proxies are enabled for client requests. In addition, the <autostart> element is set to true so that the service starts automatically when a cluster node starts. See the <proxy-scheme> element reference in the Developing Applications with Oracle Coherence for a complete list and description of all <proxy-scheme> subelements.

Example 4-1 Extend Proxy Service Configuration

...
<caching-schemes>
   <proxy-scheme>
      <service-name>ExtendTcpProxyService</service-name>
      <acceptor-config>
         <tcp-acceptor>
            <local-address>
               <address>192.168.1.5</address>
               <port>9099</port>
            </local-address>
         </tcp-acceptor>
      </acceptor-config>
      <proxy-config>
         <cache-service-proxy>
            <enabled>true</enabled>
         </cache-service-proxy>
         <invocation-service-proxy>
            <enabled>true</enabled>
         </invocation-service-proxy>
      </proxy-config>
      <autostart>true</autostart>
   </proxy-scheme>
</caching-schemes>
...

Note:

  • For clarity, the above example explicitly enables the cache and invocation cluster service proxies. However, both proxies are enabled by default and do not require a <cache-service-proxy> and <invocation-service-proxy> element to be included in the proxy scheme definition.

  • The <address> element also supports using CIDR notation as a subnet and mask (for example, 192.168.1.0/24). CIDR simplifies configuration by allowing a single address configuration to be shared across computers on the same subnet. Each cluster member specifies the same CIDR address block, and a local NIC on each computer is automatically found that matches the address pattern. The /24 prefix size matches up to 256 available addresses: from 192.168.1.0 to 192.168.1.255.

4.2.1.2 Defining Multiple Proxy Service Instances

Multiple extend proxy service instances can be defined in order to support an expected number of client connections and to support fault tolerance and load balancing. Client connections are automatically balanced across proxy service instances. The algorithm used to balance connections depends on the load balancing strategy that is configured. See "Load Balancing Connections", for more information on load balancing.

To define multiple proxy service instances, include a proxy service definition in multiple cache servers and use the same service name for each proxy service. Proxy services that share the same service name are considered peers.

The following examples define two instances of the ExtendTcpProxyService proxy service that are set up to listen for client requests on a TCP/IP ServerSocket that is bound to port 9099. The proxy service definition is included in each cache server's respective cache configuration file within the <proxy-scheme> element.

On cache server 1:

...
<caching-schemes>
   <proxy-scheme>
      <service-name>ExtendTcpProxyService</service-name>
      <acceptor-config>
         <tcp-acceptor>
            <local-address>
               <address>192.168.1.5</address>
               <port>9099</port>
            </local-address>
         </tcp-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
   </proxy-scheme>
</caching-schemes>
...

On cache server 2:

...
<caching-schemes>
   <proxy-scheme>
      <service-name>ExtendTcpProxyService</service-name>
      <acceptor-config>
         <tcp-acceptor>
            <local-address>
               <address>192.168.1.6</address>
               <port>9099</port>
            </local-address>
         </tcp-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
   </proxy-scheme>
</caching-schemes>
...

4.2.1.3 Defining Multiple Proxy Services

Multiple extend proxy services can be defined in order to provide different applications with their own proxies. Extend clients for a particular application can be directed toward specific proxies to provide a more predictable environment.

The following example defines two extend proxy services. ExtendTcpProxyService1 is set up to listen for client requests on a TCP/IP ServerSocket that is bound to 192.168.1.5 and port 9099. ExtendTcpProxyService2 is set up to listen for client requests on a TCP/IP ServerSocket that is bound to 192.168.1.5 and port 9098.

...
<caching-schemes>
   <proxy-scheme>
      <service-name>ExtendTcpProxyService1</service-name>
      <acceptor-config>
         <tcp-acceptor>
            <local-address>
               <address>192.168.1.5</address>
               <port>9099</port>
            </local-address>
         </tcp-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
   </proxy-scheme>
   <proxy-scheme>
      <service-name>ExtendTcpProxyService2</service-name>
      <acceptor-config>
         <tcp-acceptor>
            <local-address>
               <address>192.168.1.5</address>
               <port>9098</port>
            </local-address>
         </tcp-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
   </proxy-scheme>
</caching-schemes>
...

4.2.1.4 Disabling Cluster Service Proxies

The cache service and invocation service proxies can be disabled within an extend proxy service definition. Both of these proxies are enabled by default and can be explicitly disabled if a client does not require a service.

Cluster service proxies are disabled by setting the <enabled> element to false within the <cache-service-proxy> and <invocation-service-proxy> elements, respectively.

The following example disables the invocation service proxy so that extend clients cannot execute Invocable objects within the cluster:

<proxy-scheme>
   ...
   <proxy-config>
      <invocation-service-proxy>
         <enabled>false</enabled>
      </invocation-service-proxy>
   </proxy-config>
   ...
</proxy-scheme>

Likewise, the following example disables the cache service proxy to restrict extend clients from accessing caches within the cluster:

<proxy-scheme>
   ...
   <proxy-config>
      <cache-service-proxy>
         <enabled>false</enabled>
      </cache-service-proxy>
   </proxy-config>
   ...
</proxy-scheme>

4.2.1.5 Specifying Read-Only NamedCache Access

By default, extend clients are allowed to both read and write data to proxied NamedCache instances. The <read-only> element can be specified within a <cache-service-proxy> element to prohibit extend clients from modifying cached content on the cluster. For example:

<proxy-scheme>
   ...
   <proxy-config>
      <cache-service-proxy>
         <read-only>true</read-only>
      </cache-service-proxy>
   </proxy-config>
   ...
</proxy-scheme>

4.2.1.6 Specifying NamedCache Locking

Note:

The NamedCache lock APIs are deprecated. Use the locking support that is provided by the entry processor API instead (EntryProcessor for Java and C++, IEntryProcessor for .NET).
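
The note above recommends entry processors; the following is a minimal sketch (not taken from this guide) of how an extend client might perform an atomic update without locks. The class and cache names are hypothetical, and the processor class must also be available on the cluster and registered with the configured serializer (for example, POF) before it can be sent over Coherence*Extend.

import java.io.Serializable;
import com.tangosol.util.InvocableMap;
import com.tangosol.util.processor.AbstractProcessor;

// Hypothetical processor that increments an Integer value atomically on the
// cluster member that owns the entry, avoiding the deprecated lock API.
public class IncrementProcessor extends AbstractProcessor implements Serializable {
    public Object process(InvocableMap.Entry entry) {
        Integer value = (Integer) entry.getValue();
        int updated = (value == null) ? 1 : value.intValue() + 1;
        entry.setValue(Integer.valueOf(updated));
        return Integer.valueOf(updated);
    }
}

// Client usage (the cache name is an example):
//   NamedCache cache = CacheFactory.getCache("dist-extend");
//   Object result = cache.invoke("counter", new IncrementProcessor());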

By default, extend clients are not allowed to acquire NamedCache locks. The <lock-enabled> element can be specified within a <cache-service-proxy> element to allow extend clients to perform locking. For example:

<proxy-scheme>
   ...
   <proxy-config>
      <cache-service-proxy>
         <lock-enabled>true</lock-enabled>
      </cache-service-proxy>
   </proxy-config>
   ...
</proxy-scheme>

If client-side locking is enabled and a client application uses the NamedCache.lock() and unlock() methods, it is important that a member-based (rather than thread-based) locking strategy is configured when using a partitioned or replicated cache. The locking strategy is configured using the <lease-granularity> element when defining cluster-side caches. A granularity value of thread (the default setting) means that locks are held by the thread that obtained them and can only be released by that thread. A granularity value of member means that locks are held by a cluster node: any thread running on the cluster node that obtained the lock can release it. Because the extend proxy clustered service uses a pool of threads to execute client requests concurrently, it cannot guarantee that the same thread executes subsequent requests from the same extend client.

The following example demonstrates setting the lease granularity to member for a partitioned cache:

...
<distributed-scheme>
   <scheme-name>dist-default</scheme-name>
   <lease-granularity>member</lease-granularity>
   <backing-map-scheme>
      <local-scheme/>
   </backing-map-scheme>
   <autostart>true</autostart>
</distributed-scheme>
...
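
With member lease granularity configured as shown above, a client that uses the deprecated lock API typically acquires and releases the lock around the cache operations. The following is a minimal sketch; the dist-extend cache name used elsewhere in this chapter is assumed to map to a scheme with member lease granularity, such as the dist-default scheme shown above.

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class LockExample {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("dist-extend");

        // Wait up to 5 seconds to acquire the lock; a wait of -1 blocks
        // indefinitely and 0 returns immediately.
        if (cache.lock("example-key", 5000)) {
            try {
                Object value = cache.get("example-key");
                cache.put("example-key", value);
            }
            finally {
                // Always release the lock, even if the update fails.
                cache.unlock("example-key");
            }
        }
        CacheFactory.shutdown();
    }
}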

4.2.2 Defining Caches for Use By Extend Clients

Extend clients read and write data to a cache on the cluster. Any of the cache types can store client data. For extend clients, the cache on the cluster must have the same name as the cache that is being used on the client side; see "Defining a Remote Cache". For more information on defining caches, see "Using Caches" in the Developing Applications with Oracle Coherence.

The following example defines a partitioned cache named dist-extend.

...
<caching-scheme-mapping>
   <cache-mapping>
      <cache-name>dist-extend</cache-name>
      <scheme-name>dist-default</scheme-name>
   </cache-mapping>
</caching-scheme-mapping>

<caching-schemes>
   <distributed-scheme>
      <scheme-name>dist-default</scheme-name>
      <backing-map-scheme>
         <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
   </distributed-scheme>
</caching-schemes>
...

4.2.3 Disabling Storage on a Proxy Server

Proxy services typically run on cluster members that are not responsible for storing data in the cluster. Storage-enabled cluster members can be adversely affected by a proxy service, which requires additional resources to handle client loads. Collocating a proxy service on a storage-enabled member is generally acceptable for simplified development, but this configuration should not be used for testing or in production.

To ensure that a distributed cache does not store data on a cluster member that is configured as a proxy server, set the tangosol.coherence.distributed.localstorage Java system property to false when starting the cluster member. For example:

-Dtangosol.coherence.distributed.localstorage=false

Storage can also be disabled in the cache configuration file as part of a distributed cache definition by setting the <local-storage> element to false. For additional details, see the <distributed-scheme> element reference in the Developing Applications with Oracle Coherence.

...
<distributed-scheme>
   <scheme-name>dist-default</scheme-name>
   <local-storage>false</local-storage>
   <backing-map-scheme>
      <local-scheme/>
   </backing-map-scheme>
   <autostart>true</autostart>
</distributed-scheme>
...

4.3 Configuring the Client Side

Extend clients use the remote cache service and the remote invocation service to interact with a Coherence cluster. The services must be configured to connect to extend proxy services that run on the cluster. Both remote cache services and remote invocation services are configured in a cache configuration deployment descriptor that must be found on the classpath when an extend-based client application starts.

The following topics are included in this section:

4.3.1 Defining a Remote Cache

A remote cache is a specialized cache service that routes cache operations to a cache on the cluster. The remote cache and the cache on the cluster must have the same name. Extend clients use the NamedCache interface as normal to get an instance of the cache. At run time, the cache operations are not executed locally but instead are sent using TCP/IP to an extend proxy service on the cluster. The fact that the cache operations are delegated to a cache on the cluster is transparent to the extend client.
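
Client code is unchanged by the use of a remote cache. As a minimal sketch (assuming the dist-extend remote cache defined in Example 4-2 is in the client's cache configuration), an extend client obtains and uses the cache as follows:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class RemoteCacheClient {
    public static void main(String[] args) {
        // The cache factory resolves dist-extend to the remote cache scheme and
        // opens a TCP/IP connection to the extend proxy service on first use.
        NamedCache cache = CacheFactory.getCache("dist-extend");

        cache.put("key", "hello from an extend client");
        System.out.println(cache.get("key"));

        CacheFactory.shutdown();
    }
}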

A remote cache is defined within a <caching-schemes> node using the <remote-cache-scheme> element. A <tcp-initiator> element is used to define the address (IP, or DNS name, and port) of the extend proxy service on the cluster to which the client connects. For details on <remote-cache-scheme> subelements, see the Developing Applications with Oracle Coherence.

Example 4-2 defines a remote cache named dist-extend and uses the <socket-address> element to explicitly configure the address that the extend proxy service is listening on (192.168.1.5 and port 9099). The address can also be defined within an operational override configuration file and referenced using the <address-provider> element. The latter approach decouples the address configuration from the remote cache definition and allows the address to change at runtime without having to change the remote cache definition. For details on referencing an address definition, see "Using Address Provider References for TCP Addresses".

Note:

To use this remote cache, there must be a cache defined on the cluster that is also named dist-extend. See "Defining Caches for Use By Extend Clients" for more information on defining caches on the cluster.

Example 4-2 Remote Cache Definition

...
<caching-scheme-mapping>
   <cache-mapping>
      <cache-name>dist-extend</cache-name>
      <scheme-name>extend-dist</scheme-name>
   </cache-mapping>
</caching-scheme-mapping>

<caching-schemes>
   <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
         <tcp-initiator>
            <remote-addresses>
               <socket-address>
                  <address>192.168.1.5</address>
                  <port>9099</port>
               </socket-address>
            </remote-addresses>
            <connect-timeout>10s</connect-timeout>
         </tcp-initiator>
         <outgoing-message-handler>
            <request-timeout>5s</request-timeout>
         </outgoing-message-handler>
      </initiator-config>
   </remote-cache-scheme>
</caching-schemes>
...

4.3.2 Using a Remote Cache as a Back Cache

Extend clients typically use remote caches as part of a near cache. In such scenarios, a local cache is used as a front cache and the remote cache is used as the back cache. For C++ and .NET clients, see "Defining a Near Cache for C++ Clients" and "Defining a Near Cache for .NET Clients", respectively.

The following example creates a near cache that uses a local cache and a remote cache.

...
<caching-scheme-mapping>
   <cache-mapping>
      <cache-name>dist-extend-near</cache-name>
      <scheme-name>extend-near</scheme-name>
   </cache-mapping>
</caching-scheme-mapping>

<caching-schemes>
   <near-scheme>
      <scheme-name>extend-near</scheme-name>
      <front-scheme>
         <local-scheme>
            <high-units>1000</high-units>
         </local-scheme>
      </front-scheme>
      <back-scheme>
         <remote-cache-scheme>
            <scheme-ref>extend-dist</scheme-ref>
         </remote-cache-scheme>
      </back-scheme>
      <invalidation-strategy>all</invalidation-strategy>
   </near-scheme>

   <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
         <tcp-initiator>
            <remote-addresses>
               <socket-address>
                  <address>localhost</address>
                  <port>9099</port>
               </socket-address>
            </remote-addresses>
            <connect-timeout>10s</connect-timeout>
         </tcp-initiator>
         <outgoing-message-handler>
            <request-timeout>5s</request-timeout>
         </outgoing-message-handler>
      </initiator-config>
   </remote-cache-scheme>
</caching-schemes>
...

4.3.3 Defining Remote Invocation Schemes

A remote invocation scheme defines an invocation service that is used by clients to execute tasks on the remote Coherence cluster. Extend clients use the InvocationService interface as normal. At run time, a TCP/IP connection is made to an extend proxy service and an InvocationService implementation is returned that executes synchronous Invocable tasks within the remote cluster JVM to which the client is connected.
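
As a minimal sketch, a client might obtain the ExtendTcpInvocationService defined in Example 4-3 and run a simple Invocable within the proxy JVM. The task class shown here is hypothetical and must be available (and registered with the configured serializer, such as POF) on both the client and the cluster.

import java.io.Serializable;
import java.util.Map;
import com.tangosol.net.AbstractInvocable;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.InvocationService;

// Hypothetical task that executes in the proxy JVM and returns a result.
public class EchoTask extends AbstractInvocable implements Serializable {
    public void run() {
        setResult("executed on " + CacheFactory.getCluster().getLocalMember());
    }

    public static void main(String[] args) {
        InvocationService service = (InvocationService)
                CacheFactory.getService("ExtendTcpInvocationService");

        // For an extend client, passing null executes the task on the proxy
        // member to which the client is connected.
        Map result = service.query(new EchoTask(), null);
        System.out.println(result.values());
    }
}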

Remote invocation schemes are defined within a <caching-schemes> node using the <remote-invocation-scheme> element. A <tcp-initiator> element is used to define the address (IP, or DNS name, and port) of the extend proxy service on the cluster to which the client connects. For details on the <remote-invocation-scheme> subelements, see the Developing Applications with Oracle Coherence.

Example 4-3 defines a remote invocation scheme that is called ExtendTcpInvocationService and uses the <socket-address> element to explicitly configure the address that the extend proxy service is listening on (192.168.1.5 and port 9099). The address can also be defined within an operational override configuration file and referenced using the <address-provider> element. The latter approach decouples the address configuration from the remote invocation definition and allows the address to change at runtime without having to change the remote invocation definition. For details on referencing an address definition, see "Using Address Provider References for TCP Addresses".

Example 4-3 Remote Invocation Scheme Definition

...
<caching-schemes>
   <remote-invocation-scheme>
      <scheme-name>extend-invocation</scheme-name>
      <service-name>ExtendTcpInvocationService</service-name>
      <initiator-config>
         <tcp-initiator>
            <remote-addresses>
               <socket-address>
                  <address>192.168.1.5</address>
                  <port>9099</port>
               </socket-address>
            </remote-addresses>
            <connect-timeout>10s</connect-timeout>
         </tcp-initiator>
         <outgoing-message-handler>
            <request-timeout>5s</request-timeout>
         </outgoing-message-handler>
      </initiator-config>
   </remote-invocation-scheme>
</caching-schemes>
...

4.3.4 Defining Multiple Remote Addresses

Remote cache schemes and remote invocation schemes can include multiple extend proxy service addresses to ensure a client can always connect to the cluster. The algorithm used to balance connections depends on the load balancing strategy that is configured. See "Load Balancing Connections", for more information on load balancing.

To configure multiple addresses, add additional <socket-address> child elements within the <tcp-initiator> element of a <remote-cache-scheme> or <remote-invocation-scheme> node, as required. The following example defines two extend proxy addresses for a remote cache scheme. See "Defining Multiple Proxy Service Instances", for instructions on setting up multiple proxy addresses.

...
<caching-schemes>
   <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
         <tcp-initiator>
            <remote-addresses>
               <socket-address>
                  <address>192.168.1.5</address>
                  <port>9099</port>
               </socket-address>
               <socket-address>
                  <address>192.168.1.6</address>
                  <port>9099</port>
               </socket-address>
            </remote-addresses>
         </tcp-initiator>
      </initiator-config>
   </remote-cache-scheme>
</caching-schemes>
...

4.3.5 Detecting Connection Errors

When a Coherence*Extend service detects that the connection between the client and cluster has been severed (for example, due to a network, software, or hardware failure), the Coherence*Extend client service implementation (that is, CacheService or InvocationService) dispatches a MemberEvent.MEMBER_LEFT event to all registered MemberListeners and the service is stopped. For cases where the application calls CacheFactory.shutdown(), the service implementation dispatches a MemberEvent.MEMBER_LEAVING event followed by a MemberEvent.MEMBER_LEFT event. In both cases, if the client application subsequently attempts to use the service, the service automatically restarts itself and attempts to reconnect to the cluster. If the connection is successful, the service dispatches a MemberEvent.MEMBER_JOINED event; otherwise, an irrecoverable error exception is thrown to the client application.
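
As a sketch of how a client might observe these events, the following hypothetical listener is registered with the remote cache service and logs connection state changes. The dist-extend cache name is assumed to be defined as a remote cache (see "Defining a Remote Cache").

import com.tangosol.net.CacheFactory;
import com.tangosol.net.MemberEvent;
import com.tangosol.net.MemberListener;
import com.tangosol.net.NamedCache;

public class ConnectionMonitor implements MemberListener {
    public void memberJoined(MemberEvent evt) {
        System.out.println("Connected to the cluster: " + evt.getMember());
    }

    public void memberLeaving(MemberEvent evt) {
        System.out.println("Connection is being closed");
    }

    public void memberLeft(MemberEvent evt) {
        System.out.println("Connection to the cluster was lost");
    }

    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("dist-extend");

        // Register the listener with the underlying remote cache service so
        // that MEMBER_JOINED, MEMBER_LEAVING, and MEMBER_LEFT events are seen.
        cache.getCacheService().addMemberListener(new ConnectionMonitor());
    }
}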

A Coherence*Extend service has several mechanisms for detecting dropped connections. Some mechanisms are inherent in the underlying protocol (such as TCP/IP in Extend-TCP), whereas others are implemented by the service itself. The latter mechanisms are configured by using the <outgoing-message-handler> element. For details on this element, see Developing Applications with Oracle Coherence. In particular, the <request-timeout> value controls the amount of time to wait for a response before abandoning the request. The <heartbeat-interval> and <heartbeat-timeout> values control the amount of time to wait for a response to a ping request before the connection is closed.

The following example is taken from Example 4-2 and demonstrates setting the request timeout to 5 seconds.

...
<initiator-config>
   <tcp-initiator>
      <remote-addresses>
         <socket-address>
            <address>192.168.1.5</address>
            <port>9099</port>
         </socket-address>
      </remote-addresses>
      <connect-timeout>10s</connect-timeout>
   </tcp-initiator>
   <outgoing-message-handler>
      <request-timeout>5s</request-timeout>
   </outgoing-message-handler>
</initiator-config>
...

The following example sets the heartbeat interval to 500 milliseconds and the heartbeat timeout to 10 seconds.

...
<initiator-config>
   <tcp-initiator>
      <remote-addresses>
         <socket-address>
            <address>192.168.1.5</address>
            <port>9099</port>
         </socket-address>
      </remote-addresses>
      <connect-timeout>10s</connect-timeout>
   </tcp-initiator>
   <outgoing-message-handler>
      <heartbeat-interval>500ms</heartbeat-interval>
      <heartbeat-timeout>10s</heartbeat-timeout>
   </outgoing-message-handler>
</initiator-config>
...

4.3.6 Disabling TCMP Communication

Java-based extend clients that are located within the network must disable TCMP communication so that they connect to clustered services exclusively through extend proxies. If TCMP is not disabled, Java-based extend clients may cluster with each other and may even join an existing cluster. TCMP is disabled in the client-side tangosol-coherence-override.xml file.

To disable TCMP communication, set the <enabled> element within the <packet-publisher> element to false. For example:

...
<cluster-config>
   <packet-publisher>
      <enabled system-property="tangosol.coherence.tcmp.enabled">false</enabled>
   </packet-publisher>
</cluster-config>
...

The tangosol.coherence.tcmp.enabled system property can also be used to disable TCMP instead of using the operational override file. For example:

-Dtangosol.coherence.tcmp.enabled=false

4.4 Using Address Provider References for TCP Addresses

Proxy service, remote cache, and remote invocation definitions can use the <address-provider> element to reference a TCP socket address that is defined in an operational override configuration file instead of explicitly defining an address in a cache configuration file. Referencing socket address definitions allows network addresses to change without having to update a cache configuration file.

To use address provider references for TCP addresses:

  1. Edit the tangosol-coherence-override.xml file (both on the client side and cluster side) and add a <socket-address> definition, within an <address-provider> element, that includes the socket's address and port. Use the <address-provider> element's id attribute to define a unique ID for the socket address. For details on the <address-provider> element in an operational override configuration file, see Developing Applications with Oracle Coherence. The following example defines an address with the ID proxy1:

    ...
    <cluster-config>
       <address-providers>
          <address-provider id="proxy1">
             <socket-address>
                <address>192.168.1.5</address>
                <port>9099</port>
             </socket-address>
          </address-provider>
       </address-providers>
    </cluster-config>
    ...
    
  2. Edit the cluster-side coherence-cache-config.xml and create, or update, a proxy service definition and reference a socket address definition by providing the definition's ID as the value of the <address-provider> element within the <tcp-acceptor> element. The following example defines a proxy service that references the address that is defined in step 1:

    ...
    <caching-schemes>
       <proxy-scheme>
          <service-name>ExtendTcpProxyService</service-name>
          <acceptor-config>
             <tcp-acceptor>
                <address-provider>proxy1</address-provider>
             </tcp-acceptor>
          </acceptor-config>
          <proxy-config>
             <cache-service-proxy>
                <enabled>true</enabled>
             </cache-service-proxy>
             <invocation-service-proxy>
                <enabled>true</enabled>
             </invocation-service-proxy>
          </proxy-config>
          <autostart>true</autostart>
       </proxy-scheme>
    </caching-schemes>
    ...
    
  3. Edit the client-side coherence-cache-config.xml and create, or update, a remote cache or remote invocation definition and reference a socket address definition by providing the definition's ID as the value of the <address-provider> element within the <tcp-initiator> element. The following example defines a remote cache that references the address that is defined in step 1:

    <remote-cache-scheme>
       <scheme-name>extend-dist</scheme-name>
       <service-name>ExtendTcpCacheService</service-name>
       <initiator-config>
          <tcp-initiator>
             <remote-addresses>
                <address-provider>proxy1</address-provider>
             </remote-addresses>
             <connect-timeout>10s</connect-timeout>
          </tcp-initiator>
          <outgoing-message-handler>
             <request-timeout>5s</request-timeout>
          </outgoing-message-handler>
       </initiator-config>
    </remote-cache-scheme>
    

4.5 Using the Name Service Acceptor to Connect to a Proxy

A name service is a specialized TCP acceptor that allows extend clients to connect to a proxy by specifying a proxy service name instead of a proxy service address. Clients connect to the name service acceptor, which provides the actual address of the requested proxy. The use of the name service acceptor allows actual proxy addresses to change without having to update a cache configuration file.

A name service acceptor automatically starts on the same port as the TCMP socket (8088 by default) if a proxy service is configured on a cluster member. In addition, multiple proxy services can be configured to share the listening port that is used by the TCMP socket on the cluster. The use of the same port for TCMP, the name service, and proxy services minimizes the number of ports that are used by Coherence and simplifies firewall configuration.

Note:

Clients that are configured to use a name service acceptor can only connect to clusters that support the name service acceptor.

To use the name service acceptor to connect to a proxy:

  1. Edit the cluster-side coherence-cache-config.xml and create, or update, a proxy service definition and do not explicitly define a socket address within the <tcp-acceptor> element. The following example defines a proxy service named TcpExtend that binds to the same port that is used by TCMP.

    ...
    <caching-schemes>
       <proxy-scheme>
          <service-name>TcpExtend</service-name>
          <acceptor-config/>
          <autostart>true</autostart>
       </proxy-scheme>
    </caching-schemes>
    ...
    
  2. Edit the client-side coherence-cache-config.xml and create, or update, a remote cache or remote invocation definition and add a <name-service-addresses> element, within the <tcp-initiator> element, that includes the socket address of the name service acceptor on the cluster. The following example defines a remote cache definition that connects to a name service to get a connection to the TcpExtend proxy service that was configured in step 1. For this example, port 8088 is used as the TCMP cluster port, the name service port, and the proxy service port.

    Note:

    • If the remote cache or remote invocation scheme <service-name> value is different than the proxy scheme <service-name> value on the cluster, a <proxy-service-name> element must also be provided in the remote cache and invocation scheme that contains the value of the <service-name> element that is configured in the proxy scheme.

    • The <name-service-addresses> element supports the use of the <address-provider> element for referencing a socket address that is configured in the operational override configuration file. For details, see "Using Address Provider References for TCP Addresses".

    <remote-cache-scheme>
       <scheme-name>extend-dist</scheme-name>
       <service-name>TcpExtend</service-name>
       <initiator-config>
          <tcp-initiator>
             <name-service-addresses>
                <socket-address>
                   <address>192.168.1.5</address>
                   <port>8088</port>
                </socket-address>
             </name-service-addresses>
             <connect-timeout>5s</connect-timeout>
          </tcp-initiator>
       </initiator-config>
    </remote-cache-scheme>
    

4.6 Using a Custom Address Provider for TCP Addresses

A custom address provider dynamically assigns TCP address and port settings when binding to a server socket. The address provider must be an implementation of the com.tangosol.net.AddressProvider interface. Dynamically assigning addresses is typically used to implement custom load balancing algorithms.
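
The following is a minimal sketch of an AddressProvider implementation. The class name and address list are hypothetical: getNextAddress() is called to obtain candidate addresses, and accept() or reject() is called to report the outcome of each connection attempt.

import java.net.InetSocketAddress;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import com.tangosol.net.AddressProvider;

// Hypothetical provider that cycles through a fixed list of proxy addresses.
public class MyAddressProvider implements AddressProvider {
    private final List<InetSocketAddress> addresses = Arrays.asList(
            new InetSocketAddress("192.168.1.5", 9099),
            new InetSocketAddress("192.168.1.6", 9099));

    private Iterator<InetSocketAddress> iterator = addresses.iterator();

    public synchronized InetSocketAddress getNextAddress() {
        // Returning null indicates that the list of candidates is exhausted.
        return iterator.hasNext() ? iterator.next() : null;
    }

    public synchronized void accept() {
        // The last address returned produced a successful connection; reset
        // the iteration for the next connection attempt.
        iterator = addresses.iterator();
    }

    public synchronized void reject(Throwable cause) {
        // The last address returned could not be used; getNextAddress() is
        // called again for the next candidate.
    }
}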

Address providers are defined using the <address-provider> element, which can be used within the <tcp-acceptor> element for extend proxy schemes and within the <tcp-initiator> element for remote cache and remote invocation schemes.

The following example demonstrates configuring an AddressProvider implementation called MyAddressProvider for a TCP acceptor when configuring an extend proxy scheme.

...
<proxy-scheme>
   <service-name>ExtendTcpProxyService</service-name>
   <acceptor-config>
      <tcp-acceptor>
         <address-provider>
            <class-name>com.MyAddressProvider</class-name>
         </address-provider>
      </tcp-acceptor>
   </acceptor-config>
   <autostart>true</autostart>
</proxy-scheme>
...

The following example demonstrates configuring an AddressProvider implementation called MyClientAddressProvider for a TCP initiator when configuring a remote cache scheme.

...
<remote-cache-scheme>
   <scheme-name>extend-dist</scheme-name>
   <service-name>ExtendTcpCacheService</service-name>
   <initiator-config>
      <tcp-initiator>
         <remote-addresses>
            <address-provider>
               <class-name>com.MyClientAddressProvider</class-name>
            </address-provider>
         </remote-addresses>
         <connect-timeout>10s</connect-timeout>
      </tcp-initiator>
      <outgoing-message-handler>
         <request-timeout>5s</request-timeout>
      </outgoing-message-handler>
   </initiator-config>
</remote-cache-scheme>
...

In addition, the <address-provider> element also supports the use of a <class-factory-name> element to use a factory class that is responsible for creating AddressProvider instances and a <method-name> element to specify the static factory method on the factory class that performs object instantiation.

4.7 Load Balancing Connections

Extend client connections are load balanced across proxy service members. By default, a proxy-based strategy is used that distributes client connections to the proxy service members that are being utilized the least. Custom proxy-based strategies can be created, or the default strategy can be modified as required. As an alternative, a client-based load balancing strategy can be implemented by creating a client-side address provider or by relying on randomized client connections to proxy service members. The random approach provides minimal balancing as compared to proxy-based load balancing.

Coherence*Extend can be used with F5 BIG-IP Local Traffic Manager (LTM), which provides hardware-based load balancing. See Appendix B, "Integrating with F5 BIG-IP LTM," for detailed instructions.

The following topics are included in this section:

4.7.1 Using Proxy-Based Load Balancing

Proxy-based load balancing is the default strategy that is used to balance client connections between two or more members of the same proxy service. The strategy is weighted by a proxy's existing connection count, then by its daemon pool utilization, and lastly by its message backlog.

The proxy-based load balancing strategy is configured within a <proxy-scheme> definition using a <load-balancer> element that is set to proxy. For clarity, the following example explicitly specifies the strategy. However, the strategy is used by default if no strategy is specified and is not required in a proxy scheme definition.

...
<proxy-scheme>
   <service-name>ExtendTcpProxyService</service-name>
   <acceptor-config>
      <tcp-acceptor>
         <local-address>
            <address>192.168.1.5</address>
            <port>9099</port>
         </local-address>
      </tcp-acceptor>
   </acceptor-config>
   <load-balancer>proxy</load-balancer>
   <autostart>true</autostart>
</proxy-scheme>
...

Note:

When using proxy-based load balancing, clients are not required to list the full set of proxy service members in their cache configuration. However, a minimum of two proxy service members should always be configured for the sake of redundancy. See "Defining Multiple Remote Addresses" for details on how to define multiple remote addresses to be used by a client.

4.7.1.1 Understanding the Proxy-Based Load Balancing Default Algorithm

The proxy-based load balancing algorithm distributes client connections equally across proxy service members. The algorithm redirects clients to proxy service members that are being utilized the least. The following factors are used to determine a proxy's utilization:

  • Connection Utilization – this utilization is calculated by adding the current connection count and pending connection count. If a proxy has a configured connection limit and the current connection count plus pending connection count equals the connection limit, the utilization is considered to be infinite.

  • Daemon Pool Utilization – this utilization equals the current number of active daemon threads. If all daemon threads are currently active, the utilization is considered to be infinite.

  • Message Backlog Utilization – this utilization is calculated by adding the current incoming message backlog and the current outgoing message backlog.

Each proxy service maintains a list of all members of the proxy service ordered by their utilization. The ordering is weighted first by connection utilization, then by daemon pool utilization, and then by message backlog. The list is resorted whenever a proxy service member's utilization changes. The proxy service members send each other their current utilization whenever their connection count changes or every 10 seconds (whichever comes first).

When a new connection attempt is made on a proxy, the proxy iterates the list as follows:

  • If the current proxy has the lowest connection utilization, then the connection is accepted; otherwise, the proxy redirects the new connection by replying to the connection attempt with an ordered list of proxy service members that have a lower connection utilization. The client then attempts to connect to a proxy service member in the order of the returned list.

  • If the connection utilizations of the proxies are equal, the daemon pool utilization of the proxies takes precedence. If the current proxy has the lowest daemon pool utilization, then the connection is accepted; otherwise, the proxy redirects the new connection by replying to the connection attempt with an ordered list of proxy service members that have a lower daemon pool utilization. The client then attempts to connect to a proxy service member in the order of the returned list.

  • If the daemon pool utilization of the proxies are equal, the message backlog of the proxies takes precedence. If the current proxy has the lowest message backlog utilization, then the connection is accepted; otherwise, the proxy redirects the new connection by replying to the connection attempt with an ordered list of proxy service members that have a lower message backlog utilization. The client then attempts to connect to a proxy service member in the order of the returned list.

  • If all proxies have the same utilization, then the client remains connected to the current proxy.

4.7.1.2 Implementing a Custom Proxy-Based Load Balancing Strategy

The com.tangosol.net.proxy package includes the APIs that are used to balance client load across proxy service members. See Java API Reference for Oracle Coherence for details on using the proxy-based load balancing APIs that are discussed in this section.

A custom strategy must implement the ProxyServiceLoadBalancer interface. New strategies can be created, or the default strategy (DefaultProxyServiceLoadBalancer) can be extended and modified as required. For example, to change which utilization factor takes precedence on the list of proxy services, extend DefaultProxyServiceLoadBalancer and pass a custom Comparator object in the constructor that imposes the desired ordering. Lastly, the client's Member object (which uniquely identifies each client) is passed to a strategy. The Member object provides a means for implementing client-weighted strategies. See Developing Applications with Oracle Coherence for details on configuring a client's member identity information.
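
As a sketch only (the constructor that accepts a java.util.Comparator is the one described above; the ProxyServiceLoad getter names used in the comparator are assumptions), a custom strategy that weights daemon pool utilization ahead of the connection count might look as follows:

import java.util.Comparator;
import com.tangosol.net.proxy.DefaultProxyServiceLoadBalancer;
import com.tangosol.net.proxy.ProxyServiceLoad;

// Hypothetical strategy that orders proxy service members by daemon pool
// utilization first and by connection count second (getter names assumed).
public class MyProxyServiceLoadBalancer extends DefaultProxyServiceLoadBalancer {
    public MyProxyServiceLoadBalancer() {
        super(new Comparator<ProxyServiceLoad>() {
            public int compare(ProxyServiceLoad load1, ProxyServiceLoad load2) {
                int result = Integer.compare(load1.getDaemonActiveCount(),
                                             load2.getDaemonActiveCount());
                return result != 0
                        ? result
                        : Integer.compare(load1.getConnectionCount(),
                                          load2.getConnectionCount());
            }
        });
    }
}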

To enable a custom load balancing strategy, include an <instance> subelement within the <load-balancer> element and provide the fully qualified name of a class that implements the ProxyServiceLoadBalancer interface. The following example enables a custom proxy-based load balancing strategy that is implemented in the MyProxyServiceLoadBalancer class:

...
<load-balancer>
   <instance>
      <class-name>package.MyProxyServiceLoadBalancer</class-name>
   </instance>
</load-balancer>
...

In addition, the <instance> element also supports the use of a <class-factory-name> element to use a factory class that is responsible for creating ProxyServiceLoadBalancer instances, and a <method-name> element to specify the static factory method on the factory class that performs object instantiation. See Developing Applications with Oracle Coherence for detailed instructions on using the <instance> element.

4.7.2 Using Client-Based Load Balancing

The client-based load balancing strategy relies upon a client address provider implementation to dictate the distribution of clients across proxy service members. If no client address provider implementation is provided, the extend client tries each configured proxy service in a random order until a connection is successful. See "Using a Custom Address Provider for TCP Addresses" for more information on providing an address provider implementation.

The client-based load balancing strategy is configured within a <proxy-scheme> definition using a <load-balancer> element that is set to client. For example:

...
<proxy-scheme>
   <service-name>ExtendTcpProxyService1</service-name>
   <acceptor-config>
      <tcp-acceptor>
         <local-address>
            <address>192.168.1.5</address>
            <port>9099</port>
         </local-address>
      </tcp-acceptor>
   </acceptor-config>
   <load-balancer>client</load-balancer>
   <autostart>true</autostart>
</proxy-scheme>
...

The above configuration sets the client strategy on a single proxy service and must be repeated for all proxy services that are to use the client strategy. To set the client strategy as the default strategy for all proxy services if no strategy is specified, override the load-balancer parameter for the proxy service type in the operational override file. For example:

...
<cluster-config>
   <services>
      <service id="7">
         <init-params>
            <init-param id="12">
               <param-name>load-balancer</param-name>
               <param-value>client</param-value>
            </init-param>
         </init-params>
      </service>
   </services>
</cluster-config>
...

4.8 Using Network Filters with Extend Clients

Coherence*Extend services support pluggable network filters in the same way as Coherence clustered services. Filters modify the contents of network traffic before it is placed on the wire. For more information on configuring filters, see the Developing Applications with Oracle Coherence.

To use network filters with Coherence*Extend, a <use-filters> element must be added to the <initiator-config> element in the client-side cache configuration descriptor and to the <acceptor-config> element in the cluster-side cache configuration descriptor.

Note:

The contents of the <use-filters> element must be the same in the client and cluster-side cache configuration descriptors.

For example, to compress network traffic exchanged between an extend client and the clustered service using the predefined gzip filter, configure the client-side <remote-cache-scheme> and <remote-invocation-scheme> elements as follows:

<remote-cache-scheme>
  <scheme-name>extend-dist</scheme-name>
  <service-name>ExtendTcpCacheService</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>localhost</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
    </tcp-initiator>
    <outgoing-message-handler>
      <request-timeout>5s</request-timeout>
    </outgoing-message-handler>
    <use-filters>
      <filter-name>gzip</filter-name>
    </use-filters>    
  </initiator-config>
</remote-cache-scheme>

<remote-invocation-scheme>
  <scheme-name>extend-invocation</scheme-name>
  <service-name>ExtendTcpInvocationService</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>localhost</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
    </tcp-initiator>
    <outgoing-message-handler>
      <request-timeout>5s</request-timeout>
    </outgoing-message-handler>
    <use-filters>
      <filter-name>gzip</filter-name>
    </use-filters>    
  </initiator-config>
</remote-invocation-scheme>

For the cluster side, add a <use-filters> element within the <proxy-scheme> element that specifies a filter with the same name as the client-side configuration:

<proxy-scheme>
  <service-name>ExtendTcpProxyService</service-name>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <address>localhost</address>
        <port>9099</port>
      </local-address>
    </tcp-acceptor>
    <use-filters>
      <filter-name>gzip</filter-name>
    </use-filters>
  </acceptor-config>
  <autostart>true</autostart>
</proxy-scheme>