Oracle® Coherence Client Guide
Release 3.7.1

Part Number E22839-01

3 Setting Up Coherence*Extend

This chapter provides instructions for configuring Coherence*Extend. The instructions cover basic setup and do not constitute a complete configuration reference; refer to the platform-specific parts of this guide for additional configuration instructions. For a complete Java example that also includes configuration and setup, see Chapter 4, "Building Your First Extend Client."

This chapter includes the following sections:

  • Overview

  • Configuring the Cluster Side

  • Configuring the Client Side

  • Using an Address Provider for TCP Addresses

  • Load Balancing Connections

3.1 Overview

Coherence*Extend requires configuration on both the client side and the cluster side. On the cluster side, extend proxy services are set up to accept client requests. Proxy services provide access to cache service instances and invocation service instances that run on the cluster. On the client side, remote cache services and remote invocation services are configured and used by clients to access cluster data through the extend proxy service. Extend clients and extend proxy services communicate using TCP/IP.

Extend proxy services are configured in a cache configuration deployment descriptor. This deployment descriptor is often referred to as the cluster-side cache configuration file. It is the same cache configuration file that is used to set up caches on the cluster. Extend clients are also configured using a cache configuration deployment descriptor. This deployment descriptor is deployed with the client and is often referred to as the client-side cache configuration file. See Oracle Coherence Developer's Guide for detailed information about the cache configuration deployment descriptor.

3.2 Configuring the Cluster Side

A Coherence cluster must include an extend proxy service to accept extend client connections and must include a cache that is used by clients to retrieve and store data. Both the extend proxy service and caches are configured in the cluster's cache configuration deployment descriptor. Extend proxy services and caches are started as part of a cache server (DefaultCacheServer) process.

The following topics are included in this section:

  • Setting Up Extend Proxy Services

  • Defining Caches for Use By Extend Clients

3.2.1 Setting Up Extend Proxy Services

The extend proxy service (ProxyService) is a cluster service that allows extend clients to access a Coherence cluster using TCP/IP. A proxy service includes proxies for two types of cluster services: the CacheService cluster service, which is used by clients to access caches; and, the InvocationService cluster service, which is used by clients to execute Invocable objects on the cluster.

The following topics are included in this section:

  • Defining a Proxy Service

  • Defining Multiple Proxy Service Instances

  • Defining Multiple Proxy Services

  • Disabling Cluster Service Proxies

  • Specifying Read-Only NamedCache Access

  • Specifying NamedCache Locking

3.2.1.1 Defining a Proxy Service

Extend proxy services are configured within a <caching-schemes> node using the <proxy-scheme> element. The <proxy-scheme> element has a <tcp-acceptor> child element that includes the address (IP or DNS name) and port that an extend proxy service listens to for TCP/IP client communication. See the "proxy-scheme" element reference in the Oracle Coherence Developer's Guide for a complete list and description of all <proxy-scheme> subelements.

Example 3-1 defines a proxy service named ExtendTcpProxyService that is set up to listen for client requests on a TCP/IP ServerSocket that is bound to 192.168.1.5 and port 9099. Both the cache and invocation cluster service proxies are enabled for client requests. In addition, the <autostart> element is set to true so that the service automatically starts on a cluster node.

Example 3-1 Extend Proxy Service Configuration

...
<caching-schemes>
   <proxy-scheme>
      <service-name>ExtendTcpProxyService</service-name>
      <acceptor-config>
         <tcp-acceptor>
            <local-address>
               <address>192.168.1.5</address>
               <port>9099</port>
            </local-address>
         </tcp-acceptor>
      </acceptor-config>
      <proxy-config>
         <cache-service-proxy>
            <enabled>true</enabled>
         </cache-service-proxy>
         <invocation-service-proxy>
            <enabled>true</enabled>
         </invocation-service-proxy>
      </proxy-config>
      <autostart>true</autostart>
   </proxy-scheme>
</caching-schemes>
...

Note:

For clarity, the above example explicitly enables the cache and invocation cluster service proxies. However, both proxies are enabled by default and do not require the <cache-service-proxy> and <invocation-service-proxy> elements to be included in the proxy scheme definition.

3.2.1.2 Defining Multiple Proxy Service Instances

Multiple extend proxy service instances can be defined in order to support an expected number of client connections and to support fault tolerance and load balancing. Client connections are automatically balanced across proxy service instances. The algorithm used to balance connections depends on the load balancing strategy that is configured. See "Load Balancing Connections", for more information on load balancing.

To define multiple proxy service instances, include a proxy service definition in multiple cache servers and use the same service name for each proxy service. Proxy services that share the same service name are considered peers.

The following examples define two instances of the ExtendTcpProxyService proxy service that are set up to listen for client requests on a TCP/IP ServerSocket that is bound to port 9099. The proxy service definition is included in each cache server's respective cache configuration file within the <proxy-scheme> element.

On cache server 1:

...
<caching-schemes>
   <proxy-scheme>
      <service-name>ExtendTcpProxyService</service-name>
      <acceptor-config>
         <tcp-acceptor>
            <local-address>
               <address>192.168.1.5</address>
               <port>9099</port>
            </local-address>
         </tcp-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
   </proxy-scheme>
</caching-schemes>
...

On cache server 2:

...
<caching-schemes>
   <proxy-scheme>
      <service-name>ExtendTcpProxyService</service-name>
      <acceptor-config>
         <tcp-acceptor>
            <local-address>
               <address>192.168.1.6</address>
               <port>9099</port>
            </local-address>
         </tcp-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
   </proxy-scheme>
</caching-schemes>
...

3.2.1.3 Defining Multiple Proxy Services

Multiple extend proxy services can be defined in order to provide different applications with their own proxies. Extend clients for a particular application can be directed toward specific proxies to provide a more predictable environment.

The following example defines two extend proxy services. ExtendTcpProxyService1 is set up to listen for client requests on a TCP/IP ServerSocket that is bound to 192.168.1.5 and port 9099. ExtendTcpProxyService2 is set up to listen for client requests on a TCP/IP ServerSocket that is bound to 192.168.1.5 and port 9098.

...
<caching-schemes>
   <proxy-scheme>
      <service-name>ExtendTcpProxyService1</service-name>
      <acceptor-config>
         <tcp-acceptor>
            <local-address>
               <address>192.168.1.5</address>
               <port>9099</port>
            </local-address>
         </tcp-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
   </proxy-scheme>
   <proxy-scheme>
      <service-name>ExtendTcpProxyService2</service-name>
      <acceptor-config>
         <tcp-acceptor>
            <local-address>
               <address>192.168.1.5</address>
               <port>9098</port>
            </local-address>
         </tcp-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
   </proxy-scheme>
</caching-schemes>
...

3.2.1.4 Disabling Cluster Service Proxies

The cache service and invocation service proxies can be disabled within an extend proxy service definition. Both of these proxies are enabled by default and can be explicitly disabled if a client does not require a service.

Cluster service proxies are disabled by setting the <enabled> element to false within the <cache-service-proxy> and <invocation-service-proxy> elements, respectively.

The following example disables the invocation service proxy so that extend clients cannot execute Invocable objects within the cluster:

<proxy-scheme>
   ...
   <proxy-config>
      <invocation-service-proxy>
         <enabled>false</enabled>
      </invocation-service-proxy>
   </proxy-config>
   ...
</proxy-scheme>

Likewise, the following example disables the cache service proxy to restrict extend clients from accessing caches within the cluster:

<proxy-scheme>
   ...
   <proxy-config>
      <cache-service-proxy>
         <enabled>false</enabled>
      </cache-service-proxy>
   </proxy-config>
   ...
</proxy-scheme>

3.2.1.5 Specifying Read-Only NamedCache Access

By default, extend clients are allowed to both read and write data to proxied NamedCache instances. The <read-only> element can be specified within a <cache-service-proxy> element to prohibit extend clients from modifying cached content on the cluster. For example:

<proxy-scheme>
   ...
   <proxy-config>
      <cache-service-proxy>
         <read-only>true</read-only>
      </cache-service-proxy>
   </proxy-config>
   ...
</proxy-scheme>

3.2.1.6 Specifying NamedCache Locking

By default, extend clients are not allowed to acquire NamedCache locks. The <lock-enabled> element can be specified within a <cache-service-proxy> element to allow extend clients to perform locking. For example:

<proxy-scheme>
   ...
   <proxy-config>
      <cache-service-proxy>
         <lock-enabled>true</lock-enabled>
      </cache-service-proxy>
   </proxy-config>
   ...
</proxy-scheme>

If client-side locking is enabled and a client application uses the NamedCache.lock() and unlock() methods, it is important that a member-based (rather than thread-based) locking strategy is configured when using a partitioned or replicated cache. The locking strategy is configured using the <lease-granularity> element when defining cluster-side caches. A granularity value of thread (the default setting) means that locks are held by the thread that obtained them and can only be released by that thread. A granularity value of member means that locks are held by a cluster node: any thread running on the cluster node that obtained the lock can release it. Because the extend proxy clustered service uses a pool of threads to execute client requests concurrently, it cannot guarantee that the same thread executes subsequent requests from the same extend client.

The following example demonstrates setting the lease granularity to member for a partitioned cache:

...
<distributed-scheme>
   <scheme-name>dist-default</scheme-name>
   <lease-granularity>member</lease-granularity>
   <backing-map-scheme>
      <local-scheme/>
   </backing-map-scheme>
   <autostart>true</autostart>
</distributed-scheme>
...
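
With member granularity configured and client-side locking enabled, an extend client uses the lock methods declared by the ConcurrentMap interface, which NamedCache extends. The following is a minimal sketch (the cache name and key are illustrative):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class LockExample {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("dist-extend");

        // lock() and unlock() are declared by ConcurrentMap, which NamedCache
        // extends; passing -1 waits indefinitely for the lock
        if (cache.lock("key1", -1)) {
            try {
                cache.put("key1", "value1");
            } finally {
                cache.unlock("key1");
            }
        }
        CacheFactory.shutdown();
    }
}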

3.2.2 Defining Caches for Use By Extend Clients

Extend clients read and write data to a cache on the cluster. Any of the cache types can store client data. For extend clients, the cache on the cluster must have the same name as the cache that is being used on the client side; see "Defining a Remote Cache". For more information on defining caches, see "Using Caches" in the Oracle Coherence Developer's Guide.

The following example defines a partitioned cache named dist-extend.

<?xml version="1.0"?>

<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>dist-extend</cache-name>
         <scheme-name>dist-default</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>

   <caching-schemes>
      <distributed-scheme>
         <scheme-name>dist-default</scheme-name>
         <backing-map-scheme>
            <local-scheme/>
         </backing-map-scheme>
         <autostart>true</autostart>
      </distributed-scheme>
   </caching-schemes>
</cache-config>

3.3 Configuring the Client Side

Extend clients use the remote cache service and the remote invocation service to interact with a Coherence cluster. The services must be configured to connect to extend proxy services that run on the cluster. Both remote cache services and remote invocation services are configured in a cache configuration deployment descriptor that must be found on the classpath when an extend-based client application starts.

The following topics are included in this section:

  • Defining a Remote Cache

  • Using a Remote Cache as a Back Cache

  • Defining Remote Invocation Schemes

  • Defining Multiple Remote Addresses

  • Detecting Connection Errors

  • Disabling TCMP Communication

3.3.1 Defining a Remote Cache

A remote cache is a specialized cache service that routes cache operations to a cache on the cluster. The remote cache and the cache on the cluster must have the same name. Extend clients use the NamedCache interface as normal to get an instance of the cache. At run time, the cache operations are not executed locally but instead are sent using TCP/IP to an extend proxy service on the cluster. The fact that the cache operations are delegated to a cache on the cluster is transparent to the extend client.

A remote cache is defined within a <caching-schemes> node using the <remote-cache-scheme> element. A <tcp-initiator> element is used to define the address (IP or DNS name) and port of the extend proxy service on the cluster to which the client connects. See the "remote-cache-scheme" element reference in the Oracle Coherence Developer's Guide for a complete list and description of all <remote-cache-scheme> subelements.

Example 3-2 defines a remote cache named dist-extend that connects to an extend proxy service that is listening on address 192.168.1.5 and port 9099. To use this remote cache, there must be a cache defined on the cluster that is also named dist-extend. See "Defining Caches for Use By Extend Clients" for more information on defining caches on the cluster.

Example 3-2 Remote Cache Definition

<?xml version="1.0"?>

<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>dist-extend</cache-name>
         <scheme-name>extend-dist</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>

   <caching-schemes>
      <remote-cache-scheme>
         <scheme-name>extend-dist</scheme-name>
         <service-name>ExtendTcpCacheService</service-name>
         <initiator-config>
            <tcp-initiator>
               <remote-addresses>
                  <socket-address>
                     <address>192.168.1.5</address>
                     <port>9099</port>
                  </socket-address>
               </remote-addresses>
               <connect-timeout>10s</connect-timeout>
            </tcp-initiator>
            <outgoing-message-handler>
               <request-timeout>5s</request-timeout>
            </outgoing-message-handler>
         </initiator-config>
      </remote-cache-scheme>
   </caching-schemes>
</cache-config>
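
With this configuration on the classpath, an extend client obtains and uses the remote cache in the same way as any other named cache. The following minimal sketch (the key and value are illustrative) gets the dist-extend cache, performs a put, and reads the value back:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class RemoteCacheExample {
    public static void main(String[] args) {
        // The TCP/IP connection to the extend proxy is made when the
        // remote cache service is first used
        NamedCache cache = CacheFactory.getCache("dist-extend");

        cache.put("key", "value");
        System.out.println(cache.get("key"));

        CacheFactory.shutdown();
    }
}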

3.3.2 Using a Remote Cache as a Back Cache

Extend clients typically use remote caches as part of a near cache. In such scenarios, a local cache is used as the front cache and the remote cache is used as the back cache. For more information, see "Defining a Near Cache for C++ Clients" and "Defining a Near Cache for .NET Clients."

The following example creates a near cache that uses a local cache and a remote cache.

<?xml version="1.0"?>

<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>dist-extend-near</cache-name>
         <scheme-name>extend-near</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>

   <caching-schemes>
      <near-scheme>
         <scheme-name>extend-near</scheme-name>
         <front-scheme>
            <local-scheme>
               <high-units>1000</high-units>
            </local-scheme>
         </front-scheme>
         <back-scheme>
            <remote-cache-scheme>
               <scheme-ref>extend-dist</scheme-ref>
            </remote-cache-scheme>
         </back-scheme>
         <invalidation-strategy>all</invalidation-strategy>
      </near-scheme>

      <remote-cache-scheme>
         <scheme-name>extend-dist</scheme-name>
         <service-name>ExtendTcpCacheService</service-name>
         <initiator-config>
            <tcp-initiator>
               <remote-addresses>
                  <socket-address>
                     <address>localhost</address>
                     <port>9099</port>
                  </socket-address>
               </remote-addresses>
               <connect-timeout>10s</connect-timeout>
            </tcp-initiator>
            <outgoing-message-handler>
               <request-timeout>5s</request-timeout>
            </outgoing-message-handler>
         </initiator-config>
      </remote-cache-scheme>
   </caching-schemes>
</cache-config>

3.3.3 Defining Remote Invocation Schemes

A remote invocation scheme defines an invocation service that is used by clients to execute tasks on the remote Coherence cluster. Extend clients use the InvocationService interface as normal. At run time, a TCP/IP connection is made to an extend proxy service and an InvocationService implementation is returned that executes synchronous Invocable tasks within the remote cluster JVM to which the client is connected.

Remote invocation schemes are defined within a <caching-schemes> node using the <remote-invocation-scheme> element. A <tcp-initiator> element is used to define the address (IP or DNS name) and port of the extend proxy service on the cluster to which the client connects. See the "remote-invocation-scheme" element reference in the Oracle Coherence Developer's Guide for a complete list and description of all <remote-invocation-scheme> subelements.

Example 3-3 defines a remote invocation scheme that is called ExtendTcpInvocationService and that connects to an extend proxy service listening on address 192.168.1.5 and port 9099.

Example 3-3 Remote Invocation Scheme Definition

...
<caching-schemes>
   <remote-invocation-scheme>
      <scheme-name>extend-invocation</scheme-name>
      <service-name>ExtendTcpInvocationService</service-name>
      <initiator-config>
         <tcp-initiator>
            <remote-addresses>
               <socket-address>
                  <address>192.168.1.5</address>
                  <port>9099</port>
               </socket-address>
            </remote-addresses>
            <connect-timeout>10s</connect-timeout>
         </tcp-initiator>
         <outgoing-message-handler>
            <request-timeout>5s</request-timeout>
         </outgoing-message-handler>
      </initiator-config>
   </remote-invocation-scheme>
</caching-schemes>
...
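
On the client, the remote invocation service is obtained from the cache factory by its service name and used to execute Invocable tasks. The following minimal sketch (the task class is illustrative) assumes the above configuration is on the client's classpath:

import com.tangosol.net.AbstractInvocable;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.InvocationService;
import java.util.Map;

public class InvocationExample {

    // Illustrative task; because the task is sent to the proxy, it must be
    // serializable (AbstractInvocable implements Invocable, which is Serializable)
    public static class SimpleTask extends AbstractInvocable {
        public void run() {
            setResult("executed on " + CacheFactory.getCluster().getLocalMember());
        }
    }

    public static void main(String[] args) {
        InvocationService service = (InvocationService)
                CacheFactory.getService("ExtendTcpInvocationService");

        // For an extend client, the task executes on the proxy JVM to which
        // the client is connected; the member set argument is not used
        Map results = service.query(new SimpleTask(), null);
        System.out.println(results);

        CacheFactory.shutdown();
    }
}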

3.3.4 Defining Multiple Remote Addresses

Remote cache schemes and remote invocation schemes can include multiple extend proxy service addresses to ensure a client can always connect to the cluster. The algorithm used to balance connections depends on the load balancing strategy that is configured. See "Load Balancing Connections", for more information on load balancing.

To configure multiple addresses, add additional <socket-address> child elements within the <tcp-initiator> element of a <remote-cache-scheme> or <remote-invocation-scheme> node, as required. The following example defines two extend proxy addresses for a remote cache scheme. See "Defining Multiple Proxy Service Instances" for instructions on setting up multiple proxy addresses.

...
<caching-schemes>
   <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
         <tcp-initiator>
            <remote-addresses>
               <socket-address>
                  <address>192.168.1.5</address>
                  <port>9099</port>
               </socket-address>
               <socket-address>
                  <address>192.168.1.6</address>
                  <port>9099</port>
               </socket-address>
            </remote-addresses>
         </tcp-initiator>
      </initiator-config>
   </remote-cache-scheme>
</caching-schemes>
...

3.3.5 Detecting Connection Errors

When a Coherence*Extend service detects that the connection between the client and cluster has been severed (for example, due to a network, software, or hardware failure), the Coherence*Extend client service implementation (that is, CacheService or InvocationService) dispatches a MemberEvent.MEMBER_LEFT event to all registered MemberListeners and the service is stopped. For cases where the application calls CacheFactory.shutdown(), the service implementation dispatches a MemberEvent.MEMBER_LEAVING event followed by a MemberEvent.MEMBER_LEFT event. In both cases, if the client application subsequently attempts to use the service, the service automatically restarts itself and attempts to reconnect to the cluster. If the connection is successful, the service dispatches a MemberEvent.MEMBER_JOINED event; otherwise, an irrecoverable exception is thrown to the client application.
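
A client can observe these events by registering a MemberListener with the remote service. The following minimal sketch (the listener logic is illustrative) registers a listener with the cache service that backs a remote cache:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.MemberEvent;
import com.tangosol.net.MemberListener;
import com.tangosol.net.NamedCache;

public class ConnectionMonitorExample {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("dist-extend");

        // For a remote cache, getCacheService() returns the Coherence*Extend
        // client service that dispatches the member events described above
        cache.getCacheService().addMemberListener(new MemberListener() {
            public void memberJoined(MemberEvent evt) {
                System.out.println("Connected (or reconnected) to the cluster");
            }

            public void memberLeaving(MemberEvent evt) {
                System.out.println("Connection is being released");
            }

            public void memberLeft(MemberEvent evt) {
                System.out.println("Connection to the cluster has been lost");
            }
        });
    }
}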

A Coherence*Extend service has several mechanisms for detecting dropped connections. Some are inherent to the underlying TCP/IP protocol, whereas others are implemented by the service itself. The latter mechanisms are configured within the <outgoing-message-handler> element.

The <request-timeout> element is the primary mechanism used to detect dropped connections. When a service sends a request to the remote cluster and does not receive a response within the request timeout interval, the service assumes that the connection has been dropped.

WARNING:

If a <request-timeout> value is not specified, a Coherence*Extend service uses an infinite request timeout. In general, this is not a recommended configuration, as it could result in an unresponsive application. For most use cases, specify a reasonable finite request timeout.

The following example is taken from Example 3-2 and demonstrates setting the request timeout to 5 seconds.

...
<initiator-config>
   <tcp-initiator>
      <remote-addresses>
         <socket-address>
            <address>192.168.1.5</address>
            <port>9099</port>
         </socket-address>
      </remote-addresses>
      <connect-timeout>10s</connect-timeout>
   </tcp-initiator>
   <outgoing-message-handler>
      <request-timeout>5s</request-timeout>
   </outgoing-message-handler>
</initiator-config>
...

The <heartbeat-interval> and <heartbeat-timeout> elements can also be used to detect dropped connections. The service sends a ping request at the configured heartbeat interval; if it does not receive a response within the configured heartbeat timeout, the service assumes that the connection has been dropped.

The following example sets the heartbeat interval to 500 milliseconds and the heartbeat timeout to 5 seconds.

...
<initiator-config>
   <tcp-initiator>
      <remote-addresses>
         <socket-address>
            <address>192.168.1.5</address>
            <port>9099</port>
         </socket-address>
      </remote-addresses>
      <connect-timeout>10s</connect-timeout>
   </tcp-initiator>
   <outgoing-message-handler>
      <heartbeat-interval>500ms</heartbeat-interval>
      <heartbeat-timeout>5s</heartbeat-timeout>
   </outgoing-message-handler>
</initiator-config>
...

3.3.6 Disabling TCMP Communication

Java-based extend clients that are located on the same network as the cluster must disable TCMP communication so that they connect to clustered services exclusively through extend proxies. If TCMP is not disabled, Java-based extend clients may cluster with each other and may even join an existing cluster. TCMP is disabled in the client-side tangosol-coherence-override.xml file.

To disable TCMP communication, set the <enabled> element within the <packet-publisher> element to false. For example:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <packet-publisher>
         <enabled system-property="tangosol.coherence.tcmp.enabled">false
         </enabled>
      </packet-publisher>
   </cluster-config>
</coherence>

Alternatively, the tangosol.coherence.tcmp.enabled system property can be used to disable TCMP instead of using the operational override file. For example:

-Dtangosol.coherence.tcmp.enabled=false

3.4 Using an Address Provider for TCP Addresses

An address provider dynamically assigns TCP address and port settings when binding to a server socket. The address provider must be an implementation of the com.tangosol.net.AddressProvider interface. Dynamically assigning addresses is typically used to implement custom load balancing algorithms.
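
The following minimal sketch shows one possible implementation (the class name and addresses are illustrative); it cycles through a fixed list of addresses:

import com.tangosol.net.AddressProvider;
import java.net.InetSocketAddress;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class MyAddressProvider implements AddressProvider {
    private final List<InetSocketAddress> addresses = Arrays.asList(
            new InetSocketAddress("192.168.1.5", 9099),
            new InetSocketAddress("192.168.1.6", 9099));

    private Iterator<InetSocketAddress> iterator = addresses.iterator();

    // Returns the next address to try, or null when the list is exhausted
    public InetSocketAddress getNextAddress() {
        return iterator.hasNext() ? iterator.next() : null;
    }

    // Called when the last address returned was used successfully
    public void accept() {
        iterator = addresses.iterator();
    }

    // Called when the last address returned could not be used
    public void reject(Throwable cause) {
    }
}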

Address providers are defined using the <address-provider> element, which can be used within the <tcp-acceptor> element for extend proxy schemes and within the <tcp-initiator> element for remote cache and remote invocation schemes.

The following example demonstrates configuring an AddressProvider implementation called MyAddressProvider for a TCP acceptor when configuring an extend proxy scheme.

...
<proxy-scheme>
   <service-name>ExtendTcpProxyService</service-name>
   <thread-count>5</thread-count>
   <acceptor-config>
      <tcp-acceptor>
         <address-provider>
            <class-name>com.MyAddressProvider</class-name>
         </address-provider>
      </tcp-acceptor>
   </acceptor-config>
   <autostart>true</autostart>
</proxy-scheme>
...

The following example demonstrates configuring an AddressProvider implementation called MyClientAddressProvider for a TCP initiator when configuring a remote cache scheme.

...
<remote-cache-scheme>
   <scheme-name>extend-dist</scheme-name>
   <service-name>ExtendTcpCacheService</service-name>
   <initiator-config>
      <tcp-initiator>
         <remote-addresses>
            <address-provider>
               <class-name>com.MyClientAddressProvider</class-name>
            </address-provider>
         </remote-addresses>
         <connect-timeout>10s</connect-timeout>
      </tcp-initiator>
      <outgoing-message-handler>
         <request-timeout>5s</request-timeout>
      </outgoing-message-handler>
   </initiator-config>
</remote-cache-scheme>
...

The <address-provider> element also supports the use of a <class-factory-name> element to specify a factory class that is responsible for creating AddressProvider instances and a <method-name> element to specify the static factory method on the factory class that performs object instantiation.

3.5 Load Balancing Connections

Extend client connections are load balanced across proxy service members. By default, a proxy-based strategy is used that distributes client connections to proxy services that are being utilized the least. Custom proxy-based strategies can be created or the default strategy can be modified as required. As an alternative, a client-based load balance strategy can be implemented by creating a client-side address provider or by relying on randomized client connections to proxy services. The random approach provides minimal balancing as compared to proxy-based load balancing.

Coherence*Extend can be used with F5 BIG-IP Local Traffic Manager (LTM), which provides hardware-based load balancing. See Appendix B, "Integrating with F5 BIG-IP LTM," for detailed instructions.

The following topics are included in this section:

  • Using Proxy-Based Load Balancing

  • Using Client-Based Load Balancing

3.5.1 Using Proxy-Based Load Balancing

Proxy-based load balancing is the default strategy that is used to balance client connections between two or more proxy services. The strategy is weighted by a proxy's existing connection count, then by its daemon pool utilization, and lastly by its message backlog.

The proxy-based load balancing strategy is configured within a <proxy-scheme> definition using a <load-balancer> element that is set to proxy. For clarity, the following example explicitly specifies the strategy. However, the strategy is used by default if no strategy is specified and is not required in a proxy scheme definition.

<proxy-scheme>
   <service-name>ExtendTcpProxyService</service-name>
   <acceptor-config>
      <tcp-acceptor>
         <local-address>
            <address>192.168.1.5</address>
            <port>9099</port>
         </local-address>
      </tcp-acceptor>
   </acceptor-config>
   <load-balancer>proxy</load-balancer>
   <autostart>true</autostart>
</proxy-scheme>

Note:

When using proxy-based load balancing, clients are not required to list the full set of proxy services in their cache configuration. However, a minimum of two proxy services should always be configured for redundancy. See "Defining Multiple Remote Addresses" for details on how to define multiple remote addresses to be used by a client.

3.5.1.1 Understanding the Proxy-Based Load Balancing Default Algorithm

The proxy-based load balancing algorithm distributes client connections equally across proxy service members. The algorithm redirects clients to proxy services that are being utilized the least. The following factors are used to determine a proxy's utilization:

  • Connection Utilization – this utilization is calculated by adding the current connection count and pending connection count. If a proxy has a configured connection limit and the current connection count plus pending connection count equals the connection limit, the utilization is considered to be infinite.

  • Daemon Pool Utilization – this utilization equals the current number of active daemon threads. If all daemon threads are currently active, the utilization is considered to be infinite.

  • Message Backlog Utilization – this utilization is calculated by adding the current incoming message backlog and the current outgoing message backlog.

Each proxy service maintains a list of all proxy services ordered by their utilization. The ordering is weighted first by connection utilization, then by daemon pool utilization, and then by message backlog. The list is resorted whenever a proxy service's utilization changes. The proxy services send each other their current utilization whenever their connection count changes or every 10 seconds (whichever comes first).

When a new connection attempt is made on a proxy, the proxy iterates the list as follows:

  • If the current proxy has the lowest connection utilization, then the connection is accepted; otherwise, the proxy redirects the new connection by replying to the connection attempt with an ordered list of proxy servers that have a lower connection utilization. The client then attempts to connect to a proxy service in the order of the returned list.

  • If the connection utilizations of the proxies are equal, the daemon pool utilization of the proxies takes precedence. If the current proxy has the lowest daemon pool utilization, then the connection is accepted; otherwise, the proxy redirects the new connection by replying to the connection attempt with an ordered list of proxy servers that have a lower daemon pool utilization. The client then attempts to connect to a proxy service in the order of the returned list.

  • If the daemon pool utilization of the proxies are equal, the message backlog of the proxies takes precedence. If the current proxy has the lowest message backlog utilization, then the connection is accepted; otherwise, the proxy redirects the new connection by replying to the connection attempt with an ordered list of proxy servers that have a lower message backlog utilization. The client then attempts to connect to a proxy service in the order of the returned list.

  • If all proxies have the same utilization, then the client remains connected to the current proxy.

3.5.1.2 Implementing a Custom Proxy-Based Load Balancing Strategy

The com.tangosol.net.proxy package includes the APIs that are used to balance client load across proxy service members. See Oracle Coherence Java API Reference for details on using the proxy-based load balancing APIs that are discussed in this section.

A custom strategy must implement the ProxyServiceLoadBalancer interface. New strategies can be created, or the default strategy (DefaultProxyServiceLoadBalancer) can be extended and modified as required. For example, to change which utilization factor takes precedence on the list of proxy services, extend DefaultProxyServiceLoadBalancer and pass a custom Comparator object in the constructor that imposes the desired ordering. Lastly, the client's Member object (which uniquely identifies each client) is passed to a strategy. The Member object provides a means for implementing client-weighted strategies. See Oracle Coherence Developer's Guide for details on configuring a client's member identity information.
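
The following minimal sketch extends the default strategy and supplies a Comparator that imposes a custom ordering of proxy loads; the class name is illustrative, and the getDaemonActiveCount() accessor on ProxyServiceLoad is an assumption about the load object being compared:

import com.tangosol.net.proxy.DefaultProxyServiceLoadBalancer;
import com.tangosol.net.proxy.ProxyServiceLoad;
import java.util.Comparator;

public class MyProxyServiceLoadBalancer extends DefaultProxyServiceLoadBalancer {
    public MyProxyServiceLoadBalancer() {
        // Pass a Comparator that imposes the desired ordering of proxy loads;
        // here, proxies with fewer active daemon threads are preferred
        super(new Comparator<ProxyServiceLoad>() {
            public int compare(ProxyServiceLoad load1, ProxyServiceLoad load2) {
                int n1 = load1.getDaemonActiveCount();  // assumed accessor
                int n2 = load2.getDaemonActiveCount();
                return n1 < n2 ? -1 : (n1 > n2 ? 1 : 0);
            }
        });
    }
}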

To enable a custom load balancing strategy, include an <instance> subelement within the <load-balancer> element and provide the fully qualified name of a class that implements the ProxyServiceLoadBalancer interface. The following example enables a custom proxy-based load balancing strategy that is implemented in the MyProxyServiceLoadBalancer class:

...
<load-balancer>
   <instance>
      <class-name>package.MyProxyServiceLoadBalancer</class-name>
   </instance>
</load-balancer>
...

The <instance> element also supports the use of a <class-factory-name> element to specify a factory class that is responsible for creating ProxyServiceLoadBalancer instances and a <method-name> element to specify the static factory method on the factory class that performs object instantiation. See Oracle Coherence Developer's Guide for detailed instructions on using the <instance> element.

3.5.2 Using Client-Based Load Balancing

The client-based load balancing strategy relies upon a client address provider implementation to dictate the distribution of clients across proxy service members. If no client address provider implementation is provided, the extend client tries each configured proxy service in a random order until a connection is successful. See "Using an Address Provider for TCP Addresses" for more information on providing an address provider implementation.

The client-based load balancing strategy is configured within a <proxy-scheme> definition using a <load-balancer> element that is set to client. For example:

...
<proxy-scheme>
   <service-name>ExtendTcpProxyService1</service-name>
   <acceptor-config>
      <tcp-acceptor>
         <local-address>
            <address>192.168.1.5</address>
            <port>9099</port>
         </local-address>
      </tcp-acceptor>
   </acceptor-config>
   <load-balancer>client</load-balancer>
   <autostart>true</autostart>
</proxy-scheme>
...

The above configuration sets the client strategy on a single proxy service and must be repeated for all proxy services that are to use the client strategy. To set the client strategy as the default strategy for all proxy services if no strategy is specified, override the load-balancer parameter for the proxy service type in the operational override file. For example:

<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <services>
         <service id="7">
            <init-params>
               <init-param id="12">
                  <param-name>load-balancer</param-name>
                  <param-value>client</param-value>
               </init-param>
            </init-params>
         </service>
      </services>
   </cluster-config>
</coherence>