4 Configuring Extend Clients

This chapter provides instructions for configuring Coherence*Extend. The instructions provide basic setup and do not represent a complete configuration reference. In addition, refer to the platform-specific parts of this guide for additional configuration instructions.

For a complete Java example that also includes configuration and setup, see Building Your First Extend Application.

4.1 Overview of Configuring Extend Clients

Coherence*Extend requires configuration on both the client side and the cluster side. On the cluster side, extend proxy services are set up to accept client requests. Proxy services provide access to cache service instances and invocation service instances that run on the cluster. On the client side, remote cache services and remote invocation services are configured and used by clients to access cluster data through the extend proxy service. Extend clients and extend proxy services communicate using TCP/IP.

Extend proxy services are configured in a cache configuration deployment descriptor. This deployment descriptor is often referred to as the cluster-side cache configuration file. It is the same cache configuration file that is used to set up caches on the cluster. Extend clients are also configured using a cache configuration deployment descriptor. This deployment descriptor is deployed with the client and is often referred to as the client-side cache configuration file. See Developing Applications with Oracle Coherence for detailed information about the cache configuration deployment descriptor.

Extend clients use the remote cache service and the remote invocation service to interact with a Coherence cluster. Both remote cache services and remote invocation services are configured in a cache configuration deployment descriptor that must be found on the classpath when an extend client application starts.

4.2 Defining a Remote Cache

A remote cache is a specialized cache service that routes cache operations to a cache on the cluster. The remote cache and the cache on the cluster must have the same cache name. Extend clients use the NamedCache interface as normal to get an instance of the cache. At run time, the cache operations are not executed locally; instead, they are sent using TCP/IP to an extend proxy service on the cluster. The fact that the cache operations are delegated to a cache on the cluster is transparent to the extend client.
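
For example, the following client code (a minimal sketch, assuming the dist-extend cache mapping shown in Example 4-1; the class name is illustrative) gets a remote cache and performs operations against it. The code is identical to code that runs on a cluster member; only the configuration differs.

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class RemoteCacheExample {
    public static void main(String[] args) {
        // The client-side cache configuration file must be on the classpath; it
        // can also be named explicitly, for example with
        // -Dcoherence.cacheconfig=client-cache-config.xml (an illustrative file name).
        NamedCache cache = CacheFactory.getCache("dist-extend");

        // Each operation is sent over TCP/IP to the extend proxy service and
        // executed against the clustered cache with the same name.
        cache.put("key", "value");
        System.out.println(cache.get("key"));

        CacheFactory.shutdown();
    }
}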

A remote cache is defined within a <caching-schemes> node using the <remote-cache-scheme> element. Example 4-1 creates a remote cache scheme that is named ExtendTcpCacheService and connects to the name service, which then redirects the request to the address of the requested proxy service. The use of the name service simplifies port management and firewall configuration. For details about <remote-cache-scheme> subelements, see Developing Applications with Oracle Coherence.

Example 4-1 Remote Cache Definition

...
<caching-scheme-mapping>
   <cache-mapping>
      <cache-name>dist-extend</cache-name>
      <scheme-name>extend-dist</scheme-name>
   </cache-mapping>
</caching-scheme-mapping>

<caching-schemes>
   <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
         <tcp-initiator>
            <name-service-addresses>
               <socket-address>
                  <address>198.168.1.5</address>
                  <port>7574</port>
               </socket-address>
            </name-service-addresses>
         </tcp-initiator>
         <outgoing-message-handler>
            <request-timeout>5s</request-timeout>
         </outgoing-message-handler>
      </initiator-config>
   </remote-cache-scheme>
</caching-schemes>
...

If the <service-name> value is different from the <service-name> value that is configured in the proxy scheme on the cluster, use the <proxy-service-name> element to specify the name of the proxy service to which the remote cache connects. For example:

   <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <proxy-service-name>SomeOtherProxyService</proxy-service-name>
      ...

As configured in Example 4-1, the remote cache scheme uses the <name-service-addresses> element to define the socket address (IP address or DNS name, and port) of the name service on the cluster. The name service listens on the cluster port (7574) by default and is available on all machines that run cluster nodes. If the target cluster uses the default cluster port, then the port can be omitted from the configuration. Moreover, extend clients by default use the cluster discovery addresses to find the cluster and proxy. If the extend client is on the same network as the cluster, then no specific configuration is required as long as the client uses a cache configuration file that specifies the same cluster name as the cluster-side configuration.

The <name-service-addresses> element also supports the use of the <address-provider> element for referencing a socket address that is configured in the operational override configuration file. For details, see "Using Address Provider References for TCP Addresses". For details about explicitly defining the address and port of a proxy, see "Connecting to Specific Proxy Addresses".

Note:

Clients that are configured to use a name service can only connect to Coherence versions that also support the name service. In addition, for previous Coherence releases, the name service automatically listened on a member's unicast port instead of the cluster port.

4.3 Using a Remote Cache as a Back Cache

Extend clients typically use remote caches as part of a near cache. In such scenarios, a local cache is used as a front cache and the remote cache is used as the back cache. The following example creates a near cache that uses a local cache and a remote cache.

...
<caching-scheme-mapping>
   <cache-mapping>
      <cache-name>dist-extend-near</cache-name>
      <scheme-name>extend-near</scheme-name>
   </cache-mapping>
</caching-scheme-mapping>

<caching-schemes>
   <near-scheme>
      <scheme-name>extend-near</scheme-name>
      <front-scheme>
         <local-scheme>
            <high-units>1000</high-units>
         </local-scheme>
      </front-scheme>
      <back-scheme>
         <remote-cache-scheme>
            <scheme-ref>extend-dist</scheme-ref>
         </remote-cache-scheme>
      </back-scheme>
      <invalidation-strategy>all</invalidation-strategy>
   </near-scheme>

   <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
         <tcp-initiator>
            <name-service-addresses>
               <socket-address>
                  <address>198.168.1.5</address>
                  <port>7574</port>
               </socket-address>
            </name-service-addresses>
         </tcp-initiator>
         <outgoing-message-handler>
            <request-timeout>5s</request-timeout>
         </outgoing-message-handler>
      </initiator-config>
   </remote-cache-scheme>
</caching-schemes>
...
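
Client code obtains the near cache in the same way as any other cache; the front/back composition is transparent. The following minimal sketch assumes the dist-extend-near mapping shown above (the class name is illustrative).

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class NearCacheExample {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("dist-extend-near");

        // Writes go through to the remote (back) cache on the cluster; reads
        // that miss the local front cache are read through and cached locally.
        cache.put("key", "value");
        System.out.println(cache.get("key"));

        CacheFactory.shutdown();
    }
}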

4.4 Defining Remote Invocation Schemes

A remote invocation scheme defines an invocation service that is used by clients to execute tasks on the remote Coherence cluster. Extend clients use the InvocationService interface as normal. At run time, a TCP/IP connection is made to an extend proxy service and an InvocationService implementation is returned that executes synchronous Invocable tasks within the remote cluster JVM to which the client is connected.
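
For example, the following minimal sketch (assuming the ExtendTcpInvocationService scheme defined in Example 4-2, Java serialization, and an illustrative task class) runs a task on the proxy member to which the client is connected. The task class must also be available on the cluster-side classpath.

import com.tangosol.net.AbstractInvocable;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.InvocationService;
import java.util.Map;

public class OsNameTask extends AbstractInvocable {
    @Override
    public void run() {
        // Executes inside the proxy JVM to which the client is connected.
        setResult(System.getProperty("os.name"));
    }

    public static void main(String[] args) {
        InvocationService service = (InvocationService)
                CacheFactory.getService("ExtendTcpInvocationService");

        // For an extend client, the task runs on the connected proxy member;
        // the member set argument is ignored and can be null.
        Map results = service.query(new OsNameTask(), null);
        System.out.println(results.values());

        CacheFactory.shutdown();
    }
}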

Remote invocation schemes are defined within a <caching-schemes> node using the <remote-invocation-scheme> element. Example 4-2 defines a remote invocation scheme that is called ExtendTcpInvocationService and uses the <name-service-addresses> element to configure the address on which the name service is listening. For details about the <remote-invocation-scheme> subelements, see Developing Applications with Oracle Coherence.

Example 4-2 Remote Invocation Scheme Definition

...
<caching-schemes>
   <remote-invocation-scheme>
      <scheme-name>extend-invocation</scheme-name>
      <service-name>ExtendTcpInvocationService</service-name>
      <initiator-config>
         <tcp-initiator>
            <name-service-addresses>
               <socket-address>
                  <address>198.168.1.5</address>
                  <port>7574</port>
               </socket-address>
            </name-service-addresses>
         </tcp-initiator>
         <outgoing-message-handler>
            <request-timeout>5s</request-timeout>
         </outgoing-message-handler>
      </initiator-config>
   </remote-invocation-scheme>
</caching-schemes>
...

If the <service-name> value is different from the <service-name> value that is configured in the proxy scheme on the cluster, then use the <proxy-service-name> element to specify the name of the proxy service to which the remote invocation service connects. For example:

   <remote-invocation-scheme>
      <scheme-name>extend-invocation</scheme-name>
      <service-name>ExtendTcpInvocationService</service-name>
      <proxy-service-name>SomeOtherProxyService</proxy-service-name>
      ...

4.5 Connecting to Specific Proxy Addresses

Clients can connect to specific proxy addresses if they predate the name service feature or if they have specific firewall constraints. For additional details about firewall configuration, see "Configuring Firewalls for Extend Clients".

Example 4-3 uses the <remote-addresses> element to explicitly configure the socket address that an extend proxy service is listening on (198.168.1.5 and port 7077). The address can also be defined within an operational override configuration file and referenced using the <address-provider> element. The latter approach decouples the address configuration from the remote cache definition and allows the address to change at runtime without having to change the remote cache definition. For details on referencing an address definition, see "Using Address Provider References for TCP Addresses".

Example 4-3 Remote Cache Definition with Explicit Address

...
<caching-scheme-mapping>
   <cache-mapping>
      <cache-name>dist-extend</cache-name>
      <scheme-name>extend-dist</scheme-name>
   </cache-mapping>
</caching-scheme-mapping>

<caching-schemes>
   <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
         <tcp-initiator>
            <remote-addresses>
               <socket-address>
                  <address>198.168.1.5</address>
                  <port>7077</port>
               </socket-address>
            </remote-addresses>
         </tcp-initiator>
         <outgoing-message-handler>
            <request-timeout>5s</request-timeout>
         </outgoing-message-handler>
      </initiator-config>
   </remote-cache-scheme>
</caching-schemes>
...

If multiple proxy service instances are configured, then a remote cache scheme or remote invocation scheme can include the address of each proxy service to ensure that a client can always connect to the cluster. The algorithm used to balance connections depends on the load balancing strategy that is configured. See "Load Balancing Connections" for more information on load balancing.

To configure multiple addresses, add additional <socket-address> child elements within the <tcp-initiator> element of a <remote-cache-scheme> or <remote-invocation-scheme> node as required. The following example defines two extend proxy addresses for a remote cache scheme:

...
<caching-schemes>
   <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
         <tcp-initiator>
            <remote-addresses>
               <socket-address>
                  <address>192.168.1.5</address>
                  <port>7077</port>
               </socket-address>
               <socket-address>
                  <address>192.168.1.6</address>
                  <port>7077</port>
               </socket-address>
            </remote-addresses>
         </tcp-initiator>
      </initiator-config>
   </remote-cache-scheme>
</caching-schemes>
...

While either an IP address or a DNS name can be used, DNS names have an additional advantage: any IP addresses that are associated with a DNS name are automatically resolved at runtime. This allows the list of proxy addresses to be stored in a DNS server and centrally managed and updated in real time. For example, if the proxy address list consists of 192.168.1.1, 192.168.1.2, and 192.168.1.3, then a single DNS entry for the host name ExtendTcpCacheService can contain those addresses, and that single name can be specified as the proxy address:

<tcp-initiator>
   <remote-addresses>
      <socket-address>
         <address>ExtendTcpCacheService</address>
         <port>7077</port>
      </socket-address>
   </remote-addresses>
</tcp-initiator>

4.6 Detecting Connection Errors

When a Coherence*Extend service detects that the connection between the client and the cluster has been severed (for example, due to a network, software, or hardware failure), the Coherence*Extend client service implementation (that is, CacheService or InvocationService) dispatches a MemberEvent.MEMBER_LEFT event to all registered MemberListeners and the service is stopped. If the application calls CacheFactory.shutdown(), the service implementation dispatches a MemberEvent.MEMBER_LEAVING event followed by a MemberEvent.MEMBER_LEFT event. In both cases, if the client application subsequently attempts to use the service, the service automatically restarts itself and attempts to reconnect to the cluster. If the connection is successful, the service dispatches a MemberEvent.MEMBER_JOINED event; otherwise, an irrecoverable error exception is thrown to the client application.
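
For example, the following minimal sketch (assuming the dist-extend cache from Example 4-1; the class name is illustrative) registers a MemberListener with the remote cache service so that the client is notified when the connection is lost or re-established.

import com.tangosol.net.CacheFactory;
import com.tangosol.net.MemberEvent;
import com.tangosol.net.MemberListener;
import com.tangosol.net.NamedCache;

public class ConnectionMonitor {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("dist-extend");

        // The listener is registered with the underlying remote cache service.
        cache.getCacheService().addMemberListener(new MemberListener() {
            public void memberJoined(MemberEvent evt) {
                System.out.println("Connected to the proxy");
            }
            public void memberLeaving(MemberEvent evt) {
                System.out.println("Connection is closing");
            }
            public void memberLeft(MemberEvent evt) {
                System.out.println("Connection to the proxy was lost");
            }
        });
    }
}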

A Coherence*Extend service has several mechanisms for detecting dropped connections. Some mechanisms are inherent in the underlying protocol (such as TCP/IP in Extend-TCP), whereas others are implemented by the service itself. The latter mechanisms are configured by using the <outgoing-message-handler> element. For details on this element, see Developing Applications with Oracle Coherence. In particular, the <request-timeout> value controls the amount of time to wait for a response before abandoning a request. The <heartbeat-interval> and <heartbeat-timeout> values control the amount of time to wait for a response to a ping request before the connection is closed. As a best practice, the heartbeat timeout should be less than the heartbeat interval to ensure that the connection is not pinged again while a previous ping is still outstanding.

The following example is taken from Example 4-1 and demonstrates setting the request timeout to 5 seconds.

...
<initiator-config>
   ...
   <outgoing-message-handler>
      <request-timeout>5s</request-timeout>
   </outgoing-message-handler>
</initiator-config>
...

The following example sets the heartbeat interval to 3 seconds and the heartbeat timeout to 2 seconds.

...
<initiator-config>
   ...
   <outgoing-message-handler>
      <heartbeat-interval>3s</heartbeat-interval>
      <heartbeat-timeout>2s</heartbeat-timeout>
   </outgoing-message-handler>
</initiator-config>
...

4.7 Disabling TCMP Communication

Java-based extend clients that are located within the network must disable TCMP communication so that they connect to clustered services exclusively through extend proxies. If TCMP is not disabled, Java-based extend clients may cluster with each other and may even join an existing cluster. TCMP is disabled in the client-side tangosol-coherence-override.xml file.

To disable TCMP communication, set the <enabled> element within the <packet-publisher> element to false. For example:

...
<cluster-config>
   <packet-publisher>
      <enabled system-property="coherence.tcmp.enabled">false</enabled>
   </packet-publisher>
</cluster-config>
...

The coherence.tcmp.enabled system property can also be used to disable TCMP instead of specifying the setting in the operational override file. For example:

-Dcoherence.tcmp.enabled=false
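
The property can also be set programmatically, provided it is set before the first Coherence service is started. The following minimal sketch assumes the dist-extend cache from Example 4-1 (the class name is illustrative).

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class TcmpDisabledClient {
    public static void main(String[] args) {
        // Equivalent to -Dcoherence.tcmp.enabled=false; must be set before any
        // Coherence services read the operational configuration.
        System.setProperty("coherence.tcmp.enabled", "false");

        NamedCache cache = CacheFactory.getCache("dist-extend");
        System.out.println(cache.size());

        CacheFactory.shutdown();
    }
}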