5 Advanced Extend Configuration

There are several advanced configuration options for extend clients and extend proxies that are typically used to change operational defaults or to address specific use cases.

This chapter includes the following sections:

  • Using Address Provider References for TCP Addresses
  • Using a Custom Address Provider for TCP Addresses
  • Load Balancing Connections
  • Using Network Filters with Extend Clients

5.1 Using Address Provider References for TCP Addresses

Proxy service, remote cache, and remote invocation definitions can use the <address-provider> element to reference a TCP socket address that is defined in an operational override configuration file instead of explicitly defining addresses in a cache configuration file. Referencing socket address definitions allows network addresses to change without having to update a cache configuration file.

To use address provider references for TCP addresses:

  1. Edit the tangosol-coherence-override.xml file (both on the client side and cluster side) and add a <socket-address> definition, within an <address-provider> element, that includes the socket's address and port. Use the <address-provider> element's id attribute to define a unique ID for the socket address. See address-provider in Developing Applications with Oracle Coherence. The following example defines an address with the ID proxy1:
    ...
    <cluster-config>
       <address-providers>
          <address-provider id="proxy1">
             <socket-address>
                <address>192.168.1.5</address>
                <port>7077</port>
             </socket-address>
          </address-provider>
       </address-providers>
    </cluster-config>
    ...
    
  2. Edit the cluster-side coherence-cache-config.xml and create, or update, a proxy service definition and reference a socket address definition by providing the definition's ID as the value of the <address-provider> element within the <tcp-acceptor> element. The following example defines a proxy service that references the address that is defined in step 1:
    ...
    <caching-schemes>
       <proxy-scheme>
          <service-name>ExtendTcpProxyService</service-name>
          <acceptor-config>
             <tcp-acceptor>
                <address-provider>proxy1</address-provider>
             </tcp-acceptor>
          </acceptor-config>
          <autostart>true</autostart>
       </proxy-scheme>
    </caching-schemes>
    ...
    
  3. Edit the client-side coherence-cache-config.xml and create, or update, a remote cache or remote invocation definition and reference a socket address definition by providing the definition's ID as the value of the <address-provider> element within the <tcp-initiator> element. The following example defines a remote cache that references the address that is defined in step 1:
    <remote-cache-scheme>
       <scheme-name>extend-dist</scheme-name>
       <service-name>ExtendTcpCacheService</service-name>
       <initiator-config>
          <tcp-initiator>
             <remote-addresses>
                <address-provider>proxy1</address-provider>
             </remote-addresses>
          </tcp-initiator>
          <outgoing-message-handler>
             <request-timeout>5s</request-timeout>
          </outgoing-message-handler>
       </initiator-config>
    </remote-cache-scheme>
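
With the cluster-side and client-side definitions in place, an extend client uses the remote cache through the standard CacheFactory API; the TCP connection to the address resolved by the address provider is established when the cache is first used. The following is a minimal client sketch, assuming a <cache-mapping> (not shown above) that maps the cache name dist-extend to the extend-dist scheme:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class ExtendClientExample
    {
    public static void main(String[] asArg)
        {
        // "dist-extend" is assumed to be mapped to the extend-dist
        // remote cache scheme in the client-side cache configuration
        NamedCache cache = CacheFactory.getCache("dist-extend");

        cache.put("key", "value");
        System.out.println(cache.get("key"));

        CacheFactory.shutdown();
        }
    }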
    

5.2 Using a Custom Address Provider for TCP Addresses

A custom address provider dynamically assigns TCP address and port settings when binding to a server socket. The address provider must be an implementation of the com.tangosol.net.AddressProvider interface. Dynamically assigning addresses is typically used to implement custom load balancing algorithms.

Address providers are defined using the <address-provider> element, which can be used within the <tcp-acceptor> element for extend proxy schemes and within the <tcp-initiator> element for remote cache and remote invocation schemes.

The following example demonstrates configuring an AddressProvider implementation called MyAddressProvider for a TCP acceptor when configuring an extend proxy scheme.

...
<proxy-scheme>
   <service-name>ExtendTcpProxyService</service-name>
   <acceptor-config>
      <tcp-acceptor>
         <address-provider>
            <class-name>com.MyAddressProvider</class-name>
         </address-provider>
      </tcp-acceptor>
   </acceptor-config>
   <autostart>true</autostart>
</proxy-scheme>
...

The following example demonstrates configuring an AddressProvider implementation called MyClientAddressProvider for a TCP initiator when configuring a remote cache scheme.

...
<remote-cache-scheme>
   <scheme-name>extend-dist</scheme-name>
   <service-name>ExtendTcpCacheService</service-name>
   <initiator-config>
      <tcp-initiator>
         <remote-addresses>
            <address-provider>
               <class-name>com.MyClientAddressProvider</class-name>
            </address-provider>
         </remote-addresses>
      </tcp-initiator>
      <outgoing-message-handler>
         <request-timeout>5s</request-timeout>
      </outgoing-message-handler>
   </initiator-config>
</remote-cache-scheme>
...
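
An address provider class configured with <class-name> must implement com.tangosol.net.AddressProvider and provide a public no-argument constructor. The following is a minimal sketch of a provider such as the com.MyAddressProvider class referenced above; it simply cycles through a fixed list of placeholder addresses, and the getNextAddress, accept, and reject methods shown follow the AddressProvider contract (verify the exact interface against the Javadoc for your release):

import com.tangosol.net.AddressProvider;

import java.net.InetSocketAddress;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Illustrative sketch: cycle through a fixed list of addresses. A real
// implementation might consult an external service or apply a custom
// load balancing algorithm.
public class MyAddressProvider implements AddressProvider
    {
    private final List<InetSocketAddress> f_listAddress = Arrays.asList(
            new InetSocketAddress("192.168.1.5", 7077),
            new InetSocketAddress("192.168.1.6", 7077));

    private Iterator<InetSocketAddress> m_iterator = f_listAddress.iterator();

    public InetSocketAddress getNextAddress()
        {
        // return null once all addresses have been offered
        return m_iterator.hasNext() ? m_iterator.next() : null;
        }

    public void accept()
        {
        // the last address returned was usable; start over for the next request
        m_iterator = f_listAddress.iterator();
        }

    public void reject(Throwable eCause)
        {
        // the last address returned was not usable; getNextAddress() is
        // called again to obtain an alternative
        }
    }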

The <address-provider> element also supports the use of a <class-factory-name> element to specify a factory class that is responsible for creating AddressProvider instances and a <method-name> element to specify the static factory method on the factory class that performs object instantiation.
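
For example, a factory class wired in with <class-factory-name> and a <method-name> of createAddressProvider (both names here are hypothetical) could look like the following sketch:

import com.tangosol.net.AddressProvider;

public class MyAddressProviderFactory
    {
    // static factory method referenced by the <method-name> element
    public static AddressProvider createAddressProvider()
        {
        return new MyAddressProvider();
        }
    }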

5.3 Load Balancing Connections

Extend client connections are load balanced across proxy service members. The default load balancing strategy can be changed as required.

The default proxy-based strategy distributes client connections to the proxy service members that are being utilized the least. Custom proxy-based strategies can be created, or the default strategy can be modified, as required. As an alternative, a client-based load balancing strategy can be implemented by creating a client-side address provider or by relying on randomized client connections to proxy service members. The random approach provides minimal balancing as compared to proxy-based load balancing.

Coherence*Extend can be used with F5 BIG-IP Local Traffic Manager (LTM), which provides hardware-based load balancing. See Integrating with F5 BIG-IP LTM.

This section includes the following topics:

  • Using Proxy-Based Load Balancing
  • Understanding the Proxy-Based Load Balancing Default Algorithm
  • Implementing a Custom Proxy-Based Load Balancing Strategy
  • Using Client-Based Load Balancing

5.3.1 Using Proxy-Based Load Balancing

Proxy-based load balancing is the default strategy that is used to balance client connections between two or more members of the same proxy service. The strategy is weighted by a proxy's existing connection count, then by its daemon pool utilization, and lastly by its message backlog.

The proxy-based load balancing strategy is configured within a <proxy-scheme> definition using a <load-balancer> element that is set to proxy. For clarity, the following example explicitly specifies the strategy. However, the strategy is used by default if no strategy is specified and is not required in a proxy scheme definition.

...
<proxy-scheme>
   <service-name>ExtendTcpProxyService</service-name>
   <load-balancer>proxy</load-balancer>
   <autostart>true</autostart>
</proxy-scheme>
...

Note:

If multiple proxy addresses are explicitly specified, clients are not required to list the full set of proxy service members in their cache configuration. However, a minimum of two proxy service members should always be configured for the sake of redundancy.

5.3.2 Understanding the Proxy-Based Load Balancing Default Algorithm

The proxy-based load balancing algorithm distributes client connections equally across proxy service members. The algorithm redirects clients to proxy service members that are being utilized the least. The following factors are used to determine a proxy's utilization:

  • Connection Utilization – this utilization is calculated by adding the current connection count and pending connection count. If a proxy has a configured connection limit and the current connection count plus pending connection count equals the connection limit, the utilization is considered to be infinite.

  • Daemon Pool Utilization – this utilization equals the current number of active daemon threads. If all daemon threads are currently active, the utilization is considered to be infinite.

  • Message Backlog Utilization – this utilization is calculated by adding the current incoming message backlog and the current outgoing message backlog.

Each proxy service maintains a list of all members of the proxy service ordered by their utilization. The ordering is weighted first by connection utilization, then by daemon pool utilization, and then by message backlog. The list is resorted whenever a proxy service member's utilization changes. The proxy service members send each other their current utilization whenever their connection count changes or every 10 seconds (whichever comes first).

When a new connection attempt is made on a proxy, the proxy iterates the list as follows:

  • If the current proxy has the lowest connection utilization, then the connection is accepted; otherwise, the proxy redirects the new connection by replying to the connection attempt with an ordered list of proxy service members that have a lower connection utilization. The client then attempts to connect to a proxy service member in the order of the returned list.

  • If the connection utilizations of the proxies are equal, the daemon pool utilization of the proxies takes precedence. If the current proxy has the lowest daemon pool utilization, then the connection is accepted; otherwise, the proxy redirects the new connection by replying to the connection attempt with an ordered list of proxy service members that have a lower daemon pool utilization. The client then attempts to connect to a proxy service member in the order of the returned list.

  • If the daemon pool utilization of the proxies are equal, the message backlog of the proxies takes precedence. If the current proxy has the lowest message backlog utilization, then the connection is accepted; otherwise, the proxy redirects the new connection by replying to the connection attempt with an ordered list of proxy service members that have a lower message backlog utilization. The client then attempts to connect to a proxy service member in the order of the returned list.

  • If all proxies have the same utilization, then the client remains connected to the current proxy.
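
Taken together, the redirect decision amounts to a lexicographic comparison of the three utilization values. The following sketch is purely illustrative (it is not the Coherence implementation) and captures the ordering that the algorithm applies:

import java.util.Comparator;

// Purely illustrative: order proxy utilization snapshots by connection
// utilization first, then daemon pool utilization, then message backlog.
public class UtilizationOrder
    {
    public static class Utilization
        {
        int cConnection; // current + pending connection count
        int cDaemon;     // active daemon threads
        int cBacklog;    // incoming + outgoing message backlog
        }

    public static final Comparator<Utilization> BY_UTILIZATION =
            Comparator.<Utilization>comparingInt(u -> u.cConnection)
                      .thenComparingInt(u -> u.cDaemon)
                      .thenComparingInt(u -> u.cBacklog);
    }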

5.3.3 Implementing a Custom Proxy-Based Load Balancing Strategy

The com.tangosol.net.proxy package includes the APIs that are used to balance client load across proxy service members.

A custom strategy must implement the ProxyServiceLoadBalancer interface. New strategies can be created, or the default strategy (DefaultProxyServiceLoadBalancer) can be extended and modified as required. For example, to change which utilization factor takes precedence when ordering the list of proxy service members, extend DefaultProxyServiceLoadBalancer and pass a custom Comparator object in the constructor that imposes the desired ordering. Lastly, the client's Member object (which uniquely identifies each client) is passed to a strategy. The Member object provides a means for implementing client-weighted strategies. See Specifying a Cluster Member's Identity in Developing Applications with Oracle Coherence.
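
For example, the following sketch extends DefaultProxyServiceLoadBalancer and passes a Comparator that orders proxy service members by daemon pool utilization before connection count. The ProxyServiceLoad accessor names used here (getDaemonActiveCount and getConnectionCount) are assumptions; verify them against the Javadoc for your release.

import com.tangosol.net.proxy.DefaultProxyServiceLoadBalancer;
import com.tangosol.net.proxy.ProxyServiceLoad;

import java.util.Comparator;

// Sketch of a custom strategy that reuses the default balancing logic but
// imposes a different ordering of the utilization factors.
public class MyProxyServiceLoadBalancer extends DefaultProxyServiceLoadBalancer
    {
    public MyProxyServiceLoadBalancer()
        {
        super(new Comparator<ProxyServiceLoad>()
            {
            public int compare(ProxyServiceLoad load1, ProxyServiceLoad load2)
                {
                // assumed accessors: weight daemon pool utilization first,
                // then fall back to the connection count
                int nResult = Integer.compare(load1.getDaemonActiveCount(),
                                              load2.getDaemonActiveCount());
                return nResult != 0
                        ? nResult
                        : Integer.compare(load1.getConnectionCount(),
                                          load2.getConnectionCount());
                }
            });
        }
    }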

To enable a custom load balancing strategy, include an <instance> subelement within the <load-balancer> element and provide the fully qualified name of a class that implements the ProxyServiceLoadBalancer interface. The following example enables a custom proxy-based load balancing strategy that is implemented in the MyProxyServiceLoadBalancer class:

...
<load-balancer>
   <instance>
      <class-name>package.MyProxyServiceLoadBalancer</class-name>
   </instance>
</load-balancer>
...

The <instance> element also supports the use of a <class-factory-name> element to specify a factory class that is responsible for creating ProxyServiceLoadBalancer instances, and a <method-name> element to specify the static factory method on the factory class that performs object instantiation. See instance in Developing Applications with Oracle Coherence.

5.3.4 Using Client-Based Load Balancing

The client-based load balancing strategy relies upon a client address provider implementation to dictate the distribution of clients across proxy service members. If no client address provider implementation is provided, the extend client tries each configured proxy service in a random order until a connection is successful. See Using a Custom Address Provider for TCP Addresses.
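
For example, a client-side address provider can impose its own distribution policy. The following sketch (the class name matches the com.MyClientAddressProvider example from Using a Custom Address Provider for TCP Addresses, and the addresses are placeholders) returns the known proxy addresses in a shuffled order so that connections from many clients spread across the proxies:

import com.tangosol.net.AddressProvider;

import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;

// Illustrative sketch: shuffle a fixed list of proxy addresses so that
// each client attempts the proxies in a different order.
public class MyClientAddressProvider implements AddressProvider
    {
    private final List<InetSocketAddress> f_listAddress = new ArrayList<>(Arrays.asList(
            new InetSocketAddress("192.168.1.5", 7077),
            new InetSocketAddress("192.168.1.6", 7077)));

    private Iterator<InetSocketAddress> m_iterator = shuffled();

    private Iterator<InetSocketAddress> shuffled()
        {
        Collections.shuffle(f_listAddress);
        return f_listAddress.iterator();
        }

    public InetSocketAddress getNextAddress()
        {
        // return addresses in shuffled order; null when exhausted
        return m_iterator.hasNext() ? m_iterator.next() : null;
        }

    public void accept()
        {
        // connection succeeded; reshuffle for the next connection attempt
        m_iterator = shuffled();
        }

    public void reject(Throwable eCause)
        {
        // connection to the last address failed; the next address is tried
        }
    }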

The client-based load balancing strategy is configured within a <proxy-scheme> definition using a <load-balancer> element that is set to client. For example:

...
<proxy-scheme>
   <service-name>ExtendTcpProxyService1</service-name>
   <load-balancer>client</load-balancer>
   <autostart>true</autostart>
</proxy-scheme>
...

The above configuration sets the client strategy on a single proxy service and must be repeated for all proxy services that are to use the client strategy. To make the client strategy the default for all proxy services that do not explicitly specify a strategy, override the load-balancer parameter for the proxy service type in the operational override file. For example:

...
<cluster-config>
   <services>
      <service id="7">
         <init-params>
            <init-param id="12">
               <param-name>load-balancer</param-name>
               <param-value>client</param-value>
            </init-param>
         </init-params>
      </service>
   </services>
</cluster-config>
...

5.4 Using Network Filters with Extend Clients

Coherence*Extend services support pluggable network filters in the same way as Coherence clustered services. Filters modify the contents of network traffic before it is placed on the wire. For more information on configuring filters, see Using Network Filters in Developing Applications with Oracle Coherence.

To use network filters with Coherence*Extend, a <use-filters> element must be added to the <initiator-config> element in the client-side cache configuration descriptor and to the <acceptor-config> element in the cluster-side cache configuration descriptor.

Note:

The contents of the <use-filters> element must be the same in the client and cluster-side cache configuration descriptors.

For example, to compress network traffic exchanged between an extend client and the clustered service using the predefined gzip filter, configure the client-side <remote-cache-scheme> and <remote-invocation-scheme> elements as follows:

<remote-cache-scheme>
  <scheme-name>extend-dist</scheme-name>
  <service-name>ExtendTcpCacheService</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>localhost</address>
          <port>7077</port>
        </socket-address>
      </remote-addresses>
    </tcp-initiator>
    <outgoing-message-handler>
      <request-timeout>5s</request-timeout>
    </outgoing-message-handler>
    <use-filters>
      <filter-name>gzip</filter-name>
    </use-filters>    
  </initiator-config>
</remote-cache-scheme>

<remote-invocation-scheme>
  <scheme-name>extend-invocation</scheme-name>
  <service-name>ExtendTcpInvocationService</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>localhost</address>
          <port>7077</port>
        </socket-address>
      </remote-addresses>
    </tcp-initiator>
    <outgoing-message-handler>
      <request-timeout>5s</request-timeout>
    </outgoing-message-handler>
    <use-filters>
      <filter-name>gzip</filter-name>
    </use-filters>    
  </initiator-config>
</remote-invocation-scheme>

For the cluster side, add a <use-filters> element within the <proxy-scheme> element that specifies a filter with the same name as the client-side configuration:

<proxy-scheme>
  <service-name>ExtendTcpProxyService</service-name>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <address>localhost</address>
        <port>7077</port>
      </local-address>
    </tcp-acceptor>
    <use-filters>
      <filter-name>gzip</filter-name>
    </use-filters>
  </acceptor-config>
  <autostart>true</autostart>
</proxy-scheme>