4 Configuring Extend Clients
This chapter includes the following sections:
- Overview of Configuring Extend Clients
  Coherence*Extend requires configuration on both the client side and the cluster side.
- Defining a Remote Cache
  A remote cache is a specialized cache service that routes cache operations to a cache on the Coherence cluster.
- Using a Remote Cache as a Back Cache
  Extend clients typically use remote caches as part of a near cache. In such scenarios, a local cache is used as a front cache and the remote cache is used as the back cache.
- Defining Remote Invocation Schemes
  A remote invocation scheme defines an invocation service that is used by clients to execute tasks on the remote Coherence cluster.
- Connecting to Specific Proxy Addresses
  Clients can connect to specific proxy addresses if the client predates the name service feature or if the client has specific firewall constraints.
- Detecting Connection Errors
  Coherence*Extend can detect and notify clients when connection errors occur. Various configuration options are available for controlling dropped connections.
- Disabling TCMP Communication
  Java-based extend clients that are located within the network must disable TCMP communication to exclusively connect to clustered services using extend proxies.
Parent topic: Getting Started
Overview of Configuring Extend Clients
Extend clients are configured using a cache configuration deployment descriptor. This deployment descriptor is deployed with the client and is often referred to as the client-side cache configuration file. Extend proxy services are configured in a cache configuration deployment descriptor. This deployment descriptor is often referred to as the cluster-side cache configuration file. It is the same cache configuration file that is used to set up caches on the cluster. See Specifying a Cache Configuration File in Developing Applications with Oracle Coherence.
Extend clients use the remote cache service and the remote invocation service to interact with a Coherence cluster. Both remote cache services and remote invocation services are configured in a cache configuration deployment descriptor that must be found on the classpath when an extend client application starts.
Parent topic: Configuring Extend Clients
Defining a Remote Cache
A remote cache is a specialized cache service that routes cache operations to a cache on the Coherence cluster. Extend clients use the NamedCache interface as normal to get an instance of the cache. At run time, the cache operations are not executed locally but instead are sent using TCP/IP to an extend proxy service on the cluster. The fact that the cache operations are delegated to a cache on the cluster is transparent to the extend client.
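The delegation is transparent in client code. The following is a minimal sketch, assuming the cache name dist-extend from Example 4-1 is mapped to a remote cache scheme and that the client-side cache configuration file is on the classpath:

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class ExtendClientExample {
    public static void main(String[] args) {
        // Look up the cache by name; the client-side cache configuration
        // maps "dist-extend" to a remote cache scheme, so the operations
        // below are sent over TCP/IP to the extend proxy on the cluster.
        NamedCache cache = CacheFactory.getCache("dist-extend");

        cache.put("key", "value");            // executed on the cluster
        System.out.println(cache.get("key")); // fetched from the cluster

        CacheFactory.shutdown();
    }
}
```

The code is identical to that of a cluster member; only the cache configuration determines that the operations are routed through an extend proxy.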
A remote cache is defined within a <caching-schemes> node using the <remote-cache-scheme> element. Example 4-1 creates a remote cache scheme that is named ExtendTcpCacheService and connects to the name service, which then redirects the request to the address of the requested proxy service. The use of the name service simplifies port management and firewall configuration. See remote-cache-scheme in Developing Applications with Oracle Coherence.
Example 4-1 Remote Cache Definition
...
<caching-scheme-mapping>
   <cache-mapping>
      <cache-name>dist-extend</cache-name>
      <scheme-name>extend-dist</scheme-name>
   </cache-mapping>
</caching-scheme-mapping>

<caching-schemes>
   <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
         <tcp-initiator>
            <name-service-addresses>
               <socket-address>
                  <address>198.168.1.5</address>
                  <port>7574</port>
               </socket-address>
            </name-service-addresses>
         </tcp-initiator>
         <outgoing-message-handler>
            <request-timeout>5s</request-timeout>
         </outgoing-message-handler>
      </initiator-config>
   </remote-cache-scheme>
</caching-schemes>
...
If the <service-name> value is different than the proxy scheme <service-name> value on the cluster, use the <proxy-service-name> element to enter the value of the <service-name> element that is configured in the proxy scheme. For example:
<remote-cache-scheme>
   <scheme-name>extend-dist</scheme-name>
   <service-name>ExtendTcpCacheService</service-name>
   <proxy-service-name>SomeOtherProxyService</proxy-service-name>
...
If the client is in a different cluster than the proxy server, use the <cluster-name> element to specify the cluster name of the proxy server. For example:
<remote-cache-scheme>
   <scheme-name>extend-dist</scheme-name>
   <service-name>ExtendTcpCacheService</service-name>
   <cluster-name system-property="cache.server.cluster">CacheCluster</cluster-name>
...
As configured in Example 4-1, the remote cache scheme uses the <name-service-addresses> element to define the socket address (IP address or DNS name, and port) of the name service on the cluster. The <address> element also supports external NAT addresses that route to local addresses; however, both addresses must use the same port number. The name service listens on the cluster port (7574) by default and is available on all machines running cluster nodes. If the target cluster uses the default cluster port, then the port can be omitted from the configuration. Moreover, extend clients by default use the cluster discovery addresses to find the cluster and proxy. If the extend client is on the same network as the cluster, then no specific configuration is required as long as the client uses a cache configuration file that specifies the same cluster-side cluster name.

The <name-service-addresses> element also supports the use of the <address-provider> element for referencing a socket address that is configured in the operational override configuration file. See Using Address Provider References for TCP Addresses and Connecting to Specific Proxy Addresses.
Note:
Clients that are configured to use a name service can only connect to Coherence versions that also support the name service. In addition, for previous Coherence releases, the name service automatically listened on a member's unicast port instead of the cluster port.
Parent topic: Configuring Extend Clients
Using a Remote Cache as a Back Cache
The following example creates a near cache that uses a local cache and a remote cache.
...
<caching-scheme-mapping>
   <cache-mapping>
      <cache-name>dist-extend-near</cache-name>
      <scheme-name>extend-near</scheme-name>
   </cache-mapping>
</caching-scheme-mapping>

<caching-schemes>
   <near-scheme>
      <scheme-name>extend-near</scheme-name>
      <front-scheme>
         <local-scheme>
            <high-units>1000</high-units>
         </local-scheme>
      </front-scheme>
      <back-scheme>
         <remote-cache-scheme>
            <scheme-ref>extend-dist</scheme-ref>
         </remote-cache-scheme>
      </back-scheme>
      <invalidation-strategy>all</invalidation-strategy>
   </near-scheme>

   <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
         <tcp-initiator>
            <name-service-addresses>
               <socket-address>
                  <address>198.168.1.5</address>
                  <port>7574</port>
               </socket-address>
            </name-service-addresses>
         </tcp-initiator>
         <outgoing-message-handler>
            <request-timeout>5s</request-timeout>
         </outgoing-message-handler>
      </initiator-config>
   </remote-cache-scheme>
</caching-schemes>
...
Parent topic: Configuring Extend Clients
Defining Remote Invocation Schemes
A remote invocation scheme defines an invocation service that is used by clients to execute tasks on the remote Coherence cluster. Extend clients use the InvocationService interface as normal. At run time, a TCP/IP connection is made to an extend proxy service and an InvocationService implementation is returned that executes synchronous Invocable tasks within the remote cluster JVM to which the client is connected.
Remote invocation schemes are defined within a <caching-schemes> node using the <remote-invocation-scheme> element. Example 4-2 defines a remote invocation scheme that is called ExtendTcpInvocationService and uses the <name-service-addresses> element to configure the address that the name service is listening on. See remote-invocation-scheme in Developing Applications with Oracle Coherence.
Example 4-2 Remote Invocation Scheme Definition
...
<caching-schemes>
   <remote-invocation-scheme>
      <scheme-name>extend-invocation</scheme-name>
      <service-name>ExtendTcpInvocationService</service-name>
      <initiator-config>
         <tcp-initiator>
            <name-service-addresses>
               <socket-address>
                  <address>198.168.1.5</address>
                  <port>7574</port>
               </socket-address>
            </name-service-addresses>
         </tcp-initiator>
         <outgoing-message-handler>
            <request-timeout>5s</request-timeout>
         </outgoing-message-handler>
      </initiator-config>
   </remote-invocation-scheme>
</caching-schemes>
...
If the <service-name> value is different than the proxy scheme <service-name> value on the cluster, then use the <proxy-service-name> element to enter the value of the <service-name> element that is configured in the proxy scheme. For example:
<remote-invocation-scheme>
   <scheme-name>extend-invocation</scheme-name>
   <service-name>ExtendTcpInvocationService</service-name>
   <proxy-service-name>SomeOtherProxyService</proxy-service-name>
...
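Once the scheme is defined, a client obtains the invocation service by its configured name and executes a task. The following is a sketch, assuming the service name from Example 4-2; the Invocable implementation and its result are illustrative, and serialization configuration (for example, POF) is omitted for brevity:

```java
import com.tangosol.net.AbstractInvocable;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.InvocationService;
import java.util.Map;

public class InvocationExample {
    // Illustrative task; for Extend-TCP it runs synchronously in the
    // proxy JVM to which the client is connected. A real task also
    // requires serialization support (for example, POF).
    public static class JavaVersionTask extends AbstractInvocable {
        @Override
        public void run() {
            setResult(System.getProperty("java.version"));
        }
    }

    public static void main(String[] args) {
        // The service name must match the remote invocation scheme
        // defined in the client-side cache configuration.
        InvocationService service = (InvocationService)
                CacheFactory.getService("ExtendTcpInvocationService");

        // For an extend client, a null member set means the task
        // executes on the proxy JVM the client is connected to.
        Map results = service.query(new JavaVersionTask(), null);
        System.out.println(results);

        CacheFactory.shutdown();
    }
}
```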
Parent topic: Configuring Extend Clients
Connecting to Specific Proxy Addresses
Example 4-3 uses the <socket-address> element to explicitly configure the address that an extend proxy service is listening on (198.168.1.5 and port 7077). The <address> element also supports external NAT addresses that route to local addresses; however, both addresses must use the same port number. The address can also be defined within an operational override configuration file and referenced using the <address-provider> element. The latter approach decouples the address configuration from the remote cache definition and allows the address to change at runtime without having to change the remote cache definition. See Using Address Provider References for TCP Addresses.
Example 4-3 Remote Cache Definition with Explicit Address
...
<caching-scheme-mapping>
   <cache-mapping>
      <cache-name>dist-extend</cache-name>
      <scheme-name>extend-dist</scheme-name>
   </cache-mapping>
</caching-scheme-mapping>

<caching-schemes>
   <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
         <tcp-initiator>
            <remote-addresses>
               <socket-address>
                  <address>198.168.1.5</address>
                  <port>7077</port>
               </socket-address>
            </remote-addresses>
         </tcp-initiator>
         <outgoing-message-handler>
            <request-timeout>5s</request-timeout>
         </outgoing-message-handler>
      </initiator-config>
   </remote-cache-scheme>
</caching-schemes>
...
If multiple proxy service instances are configured, then a remote cache scheme or invocation scheme can include the address of each proxy service to ensure that a client can always connect to the cluster. The algorithm used to balance connections depends on the load balancing strategy that is configured. See Load Balancing Connections.
To configure multiple addresses, add additional <socket-address> child elements within the <tcp-initiator> element of a <remote-cache-scheme> or <remote-invocation-scheme> node as required. The following example defines two extend proxy addresses for a remote cache scheme:
...
<caching-schemes>
   <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
         <tcp-initiator>
            <remote-addresses>
               <socket-address>
                  <address>192.168.1.5</address>
                  <port>7077</port>
               </socket-address>
               <socket-address>
                  <address>192.168.1.6</address>
                  <port>7077</port>
               </socket-address>
            </remote-addresses>
         </tcp-initiator>
      </initiator-config>
   </remote-cache-scheme>
</caching-schemes>
...
While either an IP address or DNS name can be used, DNS names have an additional advantage: any IP addresses that are associated with a DNS name are automatically resolved at runtime. This allows the list of proxy addresses to be stored in a DNS server and centrally managed and updated in real time. For example, if the proxy address list is going to be 192.168.1.1, 192.168.1.2, and 192.168.1.3, then a single DNS entry for hostname ExtendTcpCacheService can contain those addresses, and the single name ExtendTcpCacheService can be specified for the proxy address:
<tcp-initiator>
   <remote-addresses>
      <socket-address>
         <address>ExtendTcpCacheService</address>
         <port>7077</port>
      </socket-address>
   </remote-addresses>
</tcp-initiator>
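The runtime resolution described above can be observed with a short standalone check; "localhost" stands in for a DNS name such as ExtendTcpCacheService that maps to the proxy address list:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolveProxyAddresses {
    public static void main(String[] args) throws UnknownHostException {
        // All IP addresses registered behind a single DNS name are
        // returned; Coherence performs an equivalent lookup at runtime
        // when the <address> element contains a DNS name.
        for (InetAddress addr : InetAddress.getAllByName("localhost")) {
            System.out.println(addr.getHostAddress());
        }
    }
}
```

Updating the DNS entry therefore changes the effective proxy list without touching the client's cache configuration file.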
Parent topic: Configuring Extend Clients
Detecting Connection Errors
When a Coherence*Extend service detects that the connection between the client and cluster has been severed (for example, due to a network, software, or hardware failure), the Coherence*Extend client service implementation (that is, CacheService or InvocationService) dispatches a MemberEvent.MEMBER_LEFT event to all registered MemberListeners, and the service is stopped. For cases where the application calls CacheFactory.shutdown(), the service implementation dispatches a MemberEvent.MEMBER_LEAVING event followed by a MemberEvent.MEMBER_LEFT event. In both cases, if the client application subsequently attempts to use the service, the service automatically restarts itself and attempts to reconnect to the cluster. If the connection is successful, the service dispatches a MemberEvent.MEMBER_JOINED event; otherwise, an irrecoverable error exception is thrown to the client application.
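These events can be observed by registering a MemberListener with the client-side service. The following is a sketch, assuming the dist-extend cache from Example 4-1:

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.MemberEvent;
import com.tangosol.net.MemberListener;
import com.tangosol.net.NamedCache;

public class ConnectionListenerExample {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("dist-extend");

        // Register a listener on the underlying remote cache service;
        // MEMBER_LEFT indicates the connection to the cluster was lost
        // (or the service was shut down).
        cache.getCacheService().addMemberListener(new MemberListener() {
            public void memberJoined(MemberEvent evt) {
                System.out.println("Connected (or reconnected) to the cluster");
            }
            public void memberLeaving(MemberEvent evt) {
                System.out.println("Service is shutting down");
            }
            public void memberLeft(MemberEvent evt) {
                System.out.println("Connection to the cluster was lost");
            }
        });
    }
}
```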
A Coherence*Extend service has several mechanisms for detecting dropped connections. Some mechanisms are inherent in the underlying protocol (such as TCP/IP in Extend-TCP), whereas others are implemented by the service itself. The latter mechanisms are configured by using the <outgoing-message-handler> element. See outgoing-message-handler in Developing Applications with Oracle Coherence. In particular, the <request-timeout> value controls the amount of time to wait for a response before abandoning the request. The <heartbeat-interval> and <heartbeat-timeout> values control the amount of time to wait for a response to a ping request before the connection is closed. As a best practice, the heartbeat timeout should be less than the heartbeat interval to ensure that other members are not unnecessarily pinged and that multiple pings are not outstanding.
The following example is taken from Example 4-1 and demonstrates setting the request timeout to 5 seconds:
...
<initiator-config>
   ...
   <outgoing-message-handler>
      <request-timeout>5s</request-timeout>
   </outgoing-message-handler>
</initiator-config>
...
The following example sets the heartbeat interval to 3 seconds and the heartbeat timeout to 2 seconds:
...
<initiator-config>
   ...
   <outgoing-message-handler>
      <heartbeat-interval>3s</heartbeat-interval>
      <heartbeat-timeout>2s</heartbeat-timeout>
   </outgoing-message-handler>
</initiator-config>
...
Parent topic: Configuring Extend Clients
Disabling TCMP Communication
Java-based extend clients that are located within the network must disable TCMP communication to exclusively connect to clustered services using extend proxies. TCMP communication is disabled in the client-side tangosol-coherence-override.xml file.
To disable TCMP communication, set the <enabled> element within the <packet-publisher> element to false. For example:
...
<cluster-config>
   <packet-publisher>
      <enabled system-property="coherence.tcmp.enabled">false</enabled>
   </packet-publisher>
</cluster-config>
...
Alternatively, the coherence.tcmp.enabled system property can be used to disable TCMP instead of using the operational override file. For example:
-Dcoherence.tcmp.enabled=false
Parent topic: Configuring Extend Clients