Oracle® Coherence Developer's Guide
Release 3.5

17 Configuring and Using Coherence*Extend

Coherence*Extend extends the reach of the core Coherence TCMP cluster to a wider range of consumers, including desktops, remote servers, and machines located across WAN connections. Typical uses of Coherence*Extend include providing desktop applications with access to Coherence caches (including support for Near Cache and continuous query) and building Coherence cluster "bridges" that link together multiple Coherence clusters connected over a high-latency, unreliable WAN.

Coherence*Extend consists of two basic components: a client adapter library and a Coherence*Extend clustered service hosted by one or more DefaultCacheServer processes. The adapter library includes implementations of both the CacheService and InvocationService interfaces that route all requests to a Coherence*Extend clustered service instance running within the Coherence cluster. The Coherence*Extend clustered service in turn responds to client requests by delegating to an actual Coherence clustered service (for example, a Partitioned or Replicated cache service). The client adapter library and the Coherence*Extend clustered service communicate with each other by using a low-level messaging protocol. Coherence*Extend includes the Extend-TCP transport binding for this protocol, which uses a high-performance, scalable TCP/IP-based communication layer to connect to the cluster.

Note:

Coherence*Extend-JMS support has been deprecated.

The choice of a transport binding is configuration-driven and is completely transparent to the client application that uses Coherence*Extend. A Coherence*Extend service is retrieved like a Coherence clustered service: using the CacheFactory class. Once obtained, a client uses the Coherence*Extend service in the same way as it would if it were part of the Coherence cluster. The fact that operations are being sent to a remote cluster node is transparent to the client application.
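
For example, a client might obtain and use a remote cache as follows. This is a minimal sketch, assuming the cache name dist-extend, which is defined in the client-side descriptor shown later in this chapter:

NamedCache cache = CacheFactory.getCache("dist-extend");

// the put and get are routed to the NamedCache of the same name in
// the remote cluster; the TCP/IP round trip is transparent
cache.put("hello", "world");
System.out.println(cache.get("hello"));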

General Instructions

Configuring and using Coherence*Extend requires five basic steps:

  1. Create a client-side Coherence cache configuration descriptor that includes one or more <remote-cache-scheme> and <remote-invocation-scheme> configuration elements

  2. Create a cluster-side Coherence cache configuration descriptor that includes one or more <proxy-scheme> configuration elements

  3. Launch one or more DefaultCacheServer processes

  4. Create a client application that uses one or more Coherence*Extend services. See "Sample Coherence*Extend Client Application".

  5. Launch the client application

Configuring and Using Coherence*Extend-TCP

Client-side Cache Configuration Descriptor

A Coherence*Extend client that uses the Extend-TCP transport binding must define a Coherence cache configuration descriptor which includes a <remote-cache-scheme> and/or <remote-invocation-scheme> element with a child <tcp-initiator> element containing various TCP/IP-specific configuration information. Example 17-1 illustrates a sample descriptor.

Example 17-1 Coherence*Extend Client Descriptor that uses Extend-TCP

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>dist-extend</cache-name>
      <scheme-name>extend-dist</scheme-name>
    </cache-mapping>

    <cache-mapping>
      <cache-name>dist-extend-near</cache-name>
      <scheme-name>extend-near</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <near-scheme>
      <scheme-name>extend-near</scheme-name>
      <front-scheme>
        <local-scheme>
          <high-units>1000</high-units>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <remote-cache-scheme>
          <scheme-ref>extend-dist</scheme-ref>
        </remote-cache-scheme>
      </back-scheme>
      <invalidation-strategy>all</invalidation-strategy>
    </near-scheme>

    <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>localhost</address>
              <port>9099</port>
            </socket-address>
          </remote-addresses>
          <connect-timeout>10s</connect-timeout>
        </tcp-initiator>
        <outgoing-message-handler>
          <request-timeout>5s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-cache-scheme>

    <remote-invocation-scheme>
      <scheme-name>extend-invocation</scheme-name>
      <service-name>ExtendTcpInvocationService</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>localhost</address>
              <port>9099</port>
            </socket-address>
          </remote-addresses>
          <connect-timeout>10s</connect-timeout>
        </tcp-initiator>
        <outgoing-message-handler>
          <request-timeout>5s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-invocation-scheme>
  </caching-schemes>
</cache-config>

This cache configuration descriptor defines two caching schemes, one that uses Extend-TCP to connect to a remote Coherence cluster (<remote-cache-scheme>) and one that maintains an in-process size-limited near cache of remote Coherence caches (again, accessed by using Extend-TCP). Additionally, the cache configuration descriptor defines a <remote-invocation-scheme> that allows the client application to execute tasks within the remote Coherence cluster. Both the <remote-cache-scheme> and <remote-invocation-scheme> elements have a <tcp-initiator> child element which includes all TCP/IP-specific information needed to connect the client with the Coherence*Extend clustered service running within the remote Coherence cluster.

When the client application retrieves a NamedCache from the CacheFactory using, for example, the name dist-extend, the Coherence*Extend adapter library will connect to the Coherence cluster by using TCP/IP (at the address localhost and port 9099) and return a NamedCache implementation that routes requests to the NamedCache with the same name running within the remote cluster. Likewise, when the client application retrieves an InvocationService by calling CacheFactory.getConfigurableCacheFactory().ensureService("ExtendTcpInvocationService"), the adapter library will connect to the Coherence cluster in the same way and return an InvocationService implementation that executes synchronous Invocable tasks within the remote clustered JVM to which the client is connected.

Note that the <remote-addresses> configuration element (see <tcp-initiator>) can contain multiple <socket-address> child elements. The Coherence*Extend adapter library will attempt to connect to the addresses in random order until either the list is exhausted or a TCP/IP connection is established.

Cluster-side Cache (a.k.a. Coherence*Extend Proxy) Configuration Descriptor

For a Coherence*Extend-TCP client to connect to a Coherence cluster, one or more DefaultCacheServer processes must be running that use a Coherence cache configuration descriptor. This descriptor must include a <proxy-scheme> element with a child <tcp-acceptor> element containing various TCP/IP-specific configuration information. Example 17-2 illustrates a sample descriptor.

Example 17-2 Cluster-Side Cache Configuration Descriptor for Extend-TCP

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>dist-*</cache-name>
      <scheme-name>dist-default</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <distributed-scheme>
      <scheme-name>dist-default</scheme-name>
      <lease-granularity>member</lease-granularity>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>

    <proxy-scheme>
      <service-name>ExtendTcpProxyService</service-name>
      <thread-count>5</thread-count>
      <acceptor-config>
        <tcp-acceptor>
          <local-address>
            <address>localhost</address>
            <port>9099</port>
          </local-address>
        </tcp-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
    </proxy-scheme>
  </caching-schemes>
</cache-config>

This cache configuration descriptor defines two clustered services: a Coherence*Extend proxy service that allows remote Extend-TCP clients to connect to the Coherence cluster, and a standard Partitioned cache service. Because this descriptor is used by a DefaultCacheServer, it is important that the <autostart> configuration element for each service is set to true so that clustered services are automatically restarted upon termination. The <proxy-scheme> element has a <tcp-acceptor> child element which includes all TCP/IP-specific information needed to accept client connection requests over TCP/IP.

The Coherence*Extend clustered service will listen on a TCP/IP ServerSocket (bound to address localhost and port 9099) for connection requests. When, for example, a client attempts to connect to a Coherence NamedCache called dist-extend, the Coherence*Extend clustered service will proxy subsequent requests to the NamedCache with the same name, which, in this case, is a Partitioned cache.

Launching an Extend-TCP DefaultCacheServer Process

To start a DefaultCacheServer that uses the cluster-side Coherence cache configuration described earlier to allow Extend-TCP clients to connect to the Coherence cluster by using TCP/IP, you need to do the following:

  • Change the current directory to the Coherence library directory (%COHERENCE_HOME%\lib on Windows and $COHERENCE_HOME/lib on UNIX)

  • Make sure that the paths are configured so that the Java command will run

  • Start the DefaultCacheServer command line application with the -Dtangosol.coherence.cacheconfig system property set to the location of the cluster-side Coherence cache configuration descriptor described earlier

For example (note that the following command is broken up into multiple lines here only for formatting purposes; this is a single command typed on one line):

java -cp coherence.jar:<classpath to client application> 
     -Dtangosol.coherence.cacheconfig=file://<path to the server-side cache configuration descriptor>
     com.tangosol.net.DefaultCacheServer

Launching an Extend-TCP Client Application

To start a client application that uses Extend-TCP to connect to a remote Coherence cluster by using TCP/IP, you need to do the following:

  • Change the current directory to the Coherence library directory (%COHERENCE_HOME%\lib on Windows and $COHERENCE_HOME/lib on UNIX)

  • Make sure that the paths are configured so that the Java command will run

  • Start your client application with the -Dtangosol.coherence.cacheconfig system property set to the location of the client-side Coherence cache configuration descriptor described earlier

For example (note that the command in Example 17-3 is broken up into multiple lines here only for formatting purposes; this is a single command typed on one line):

Example 17-3 Command to Start a Client Application that Uses Extend-TCP

java -cp coherence.jar:<classpath to client application> 
     -Dtangosol.coherence.cacheconfig=file://<path to the client-side cache configuration descriptor>
     <client application Class name>

Sample Coherence*Extend Client Application

Example 17-4 demonstrates how to retrieve and use a Coherence*Extend CacheService and InvocationService. This example increments an Integer value in a remote Partitioned cache and then retrieves the value by executing an Invocable on the clustered JVM to which the client is attached:

Example 17-4 Sample Coherence*Extend Application

public static void main(String[] asArg)
        throws Throwable
    {
    // increment the Integer value stored under "key" in the remote cache
    NamedCache cache  = CacheFactory.getCache("dist-extend");
    Integer    IValue = (Integer) cache.get("key");
    if (IValue == null)
        {
        IValue = new Integer(1);
        }
    else
        {
        IValue = new Integer(IValue.intValue() + 1);
        }
    cache.put("key", IValue);

    // retrieve the value by executing an Invocable within the clustered
    // JVM to which the client is connected
    InvocationService service = (InvocationService)
            CacheFactory.getConfigurableCacheFactory()
                .ensureService("ExtendTcpInvocationService");

    Map map = service.query(new AbstractInvocable()
            {
            public void run()
                {
                setResult(CacheFactory.getCache("dist-extend").get("key"));
                }
            }, null);

    // the result is keyed by the local Member, which is null when the
    // client is not itself a cluster member
    IValue = (Integer) map.get(service.getCluster().getLocalMember());
    }

Note that this example could also be run on a Coherence node (that is, within the cluster) verbatim. The fact that operations are being sent to a remote cluster node over TCP is completely transparent to the client application.

Coherence*Extend InvocationService

Since, by definition, a Coherence*Extend client has no direct knowledge of the cluster and the members running within the cluster, the Coherence*Extend InvocationService only allows Invocable tasks to be executed on the JVM to which the client is connected. Therefore, you should always pass a null member set to the query() method. As a consequence of this, the single result of the execution will be keyed by the local Member, which will be null if the client is not part of the cluster. This Member can be retrieved by calling service.getCluster().getLocalMember(). Additionally, the Coherence*Extend InvocationService only supports synchronous task execution (that is, the execute() method is not supported).

Advanced Configuration

Network Filters

Like Coherence clustered services, Coherence*Extend services support pluggable network filters. Filters can be used to modify the contents of network traffic before it is placed "on the wire". Most standard Coherence network filters are supported, including the compression and symmetric encryption filters. For more information on configuring filters, see Chapter 8, "Network Filters."

To use network filters with Coherence*Extend, a <use-filters> element must be added to the <initiator-config> element in the client-side cache configuration descriptor and to the <acceptor-config> element in the cluster-side cache configuration descriptor.

For example, to encrypt network traffic exchanged between a Coherence*Extend client and the clustered service to which it is connected, configure the client-side <remote-cache-scheme> and <remote-invocation-scheme> elements as illustrated in Example 17-5 (assuming the symmetric encryption filter has been named symmetric-encryption):

Example 17-5 Client-Side Configuration to Encrypt Network Traffic

<remote-cache-scheme>
  <scheme-name>extend-dist</scheme-name>
  <service-name>ExtendTcpCacheService</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>localhost</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
      <connect-timeout>10s</connect-timeout>
    </tcp-initiator>
    <outgoing-message-handler>
      <request-timeout>5s</request-timeout>
    </outgoing-message-handler>
    <use-filters>
      <filter-name>symmetric-encryption</filter-name>
    </use-filters>    
  </initiator-config>
</remote-cache-scheme>

<remote-invocation-scheme>
  <scheme-name>extend-invocation</scheme-name>
  <service-name>ExtendTcpInvocationService</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>localhost</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
      <connect-timeout>10s</connect-timeout>
    </tcp-initiator>
    <outgoing-message-handler>
      <request-timeout>5s</request-timeout>
    </outgoing-message-handler>
    <use-filters>
      <filter-name>symmetric-encryption</filter-name>
    </use-filters>    
  </initiator-config>
</remote-invocation-scheme>

Example 17-6 illustrates the configuration for the cluster-side <proxy-scheme> element:

Example 17-6 Cluster-Side Proxy Scheme Configuration

<proxy-scheme>
  <service-name>ExtendTcpProxyService</service-name>
  <thread-count>5</thread-count>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <address>localhost</address>
        <port>9099</port>
      </local-address>
    </tcp-acceptor>
    <use-filters>
      <filter-name>symmetric-encryption</filter-name>
    </use-filters>
  </acceptor-config>
  <autostart>true</autostart>
</proxy-scheme>

Note:

The contents of the <use-filters> element must be the same in the client and cluster-side cache configuration descriptors.

Connection Error Detection and Failover

When a Coherence*Extend service detects that the connection between the client and cluster has been severed (for example, due to a network, software, or hardware failure), the Coherence*Extend client service implementation (that is, CacheService or InvocationService) will dispatch a MemberEvent.MEMBER_LEFT event to all registered MemberListeners and the service will be stopped. If the client application subsequently attempts to use the service, the service will automatically restart itself and attempt to reconnect to the cluster. If the connection is successful, the service will dispatch a MemberEvent.MEMBER_JOINED event; otherwise, a fatal exception will be thrown to the client application.
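
To observe these events, a client can register a MemberListener with a Coherence*Extend service. The following is a minimal sketch, assuming the dist-extend cache name used earlier in this chapter:

NamedCache cache = CacheFactory.getCache("dist-extend");

cache.getCacheService().addMemberListener(new MemberListener()
        {
        public void memberJoined(MemberEvent evt)
            {
            // the connection has been established (or re-established)
            System.out.println("Connected: " + evt);
            }

        public void memberLeaving(MemberEvent evt)
            {
            // no action required for this sketch
            }

        public void memberLeft(MemberEvent evt)
            {
            // the connection to the cluster has been severed
            System.out.println("Disconnected: " + evt);
            }
        });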

A Coherence*Extend service has several mechanisms for detecting dropped connections. Some are inherent to the underlying TCP/IP protocol, whereas others are implemented by the service itself. The latter mechanisms are configured by using the <outgoing-message-handler> configuration element.

The primary configurable mechanism used by a Coherence*Extend client service to detect dropped connections is a request timeout. When the service sends a request to the remote cluster and does not receive a response within the request timeout interval (see the <request-timeout> subelement of <outgoing-message-handler>), the service assumes that the connection has been dropped. The Coherence*Extend client and clustered services can also be configured to send a periodic heartbeat over the connection (see the <heartbeat-interval> and <heartbeat-timeout> subelements of <outgoing-message-handler>). If a service does not receive a response within the configured heartbeat timeout interval, it assumes that the connection has been dropped.

WARNING:

If you do not specify a <request-timeout>, a Coherence*Extend service will use an infinite request timeout. In general, this is not a recommended configuration, as it could result in an unresponsive application. For most use cases, you should specify a reasonable finite request timeout.
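
On the client side, a request that does not complete within the request timeout typically surfaces as a com.tangosol.net.RequestTimeoutException. The following minimal sketch assumes the dist-extend cache name used earlier in this chapter:

NamedCache cache = CacheFactory.getCache("dist-extend");
try
    {
    cache.put("key", new Integer(1));
    }
catch (RequestTimeoutException e)
    {
    // the request did not complete within <request-timeout>; the
    // connection may have been dropped, and the service will attempt
    // to reconnect the next time it is used
    System.err.println("Request timed out: " + e);
    }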

Read-only NamedCache Access

By default, the Coherence*Extend clustered service allows both read and write access to proxied NamedCache instances. To prohibit Coherence*Extend clients from modifying cached content, use the <cache-service-proxy> child element in the cluster-side <proxy-scheme> definition. Example 17-7 illustrates a sample configuration.

Example 17-7 Cluster-Side Configuration to Allow Read-only Access to the Cache

<proxy-scheme>
  ...

  <proxy-config>
    <cache-service-proxy>
      <read-only>true</read-only>
    </cache-service-proxy>
  </proxy-config>

  <autostart>true</autostart>
</proxy-scheme>

Client-side NamedCache Locking

By default, the Coherence*Extend clustered service prevents Coherence*Extend clients from acquiring NamedCache locks. To enable client-side locking, use the <cache-service-proxy> child element in the cluster-side <proxy-scheme> definition. For example:

Example 17-8 Cluster-Side Configuration to Allow NamedCache Locking

<proxy-scheme>
  ...

  <proxy-config>
    <cache-service-proxy>
      <lock-enabled>true</lock-enabled>
    </cache-service-proxy>
  </proxy-config>

  <autostart>true</autostart>
</proxy-scheme>

If you enable client-side locking and your client application uses the NamedCache.lock() and unlock() methods, it is important that you specify the member-based (rather than thread-based) locking strategy for any Partitioned or Replicated cache services defined in your cluster-side Coherence cache configuration descriptor. Because the Coherence*Extend clustered service uses a pool of threads to execute client requests concurrently, it cannot guarantee that the same thread will execute subsequent requests from the same Coherence*Extend client.
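
For example, a client application that uses explicit locking would typically pair lock() and unlock() in a try/finally block. A minimal sketch, assuming client-side locking has been enabled as shown above:

NamedCache cache = CacheFactory.getCache("dist-extend");

if (cache.lock("key", -1))  // a wait time of -1 blocks until the lock is obtained
    {
    try
        {
        // perform an atomic read-modify-write under the lock
        Integer IValue = (Integer) cache.get("key");
        cache.put("key", new Integer(IValue == null ? 1 : IValue.intValue() + 1));
        }
    finally
        {
        cache.unlock("key");
        }
    }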

To specify the member-based locking strategy for a Partitioned or Replicated cache service, use the <lease-granularity> configuration element. Example 17-9 illustrates a sample configuration.

Example 17-9 Cluster-Side Configuration to Allow Locking for Partitioned or Replicated Caches

<distributed-scheme>
  <scheme-name>dist-default</scheme-name>
  <lease-granularity>member</lease-granularity>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>

Disabling Proxied Services

By default, the Coherence*Extend clustered service exposes two proxied services to clients: a CacheService proxy and an InvocationService proxy. In some cases, it may be desirable to disable one of the two proxies. This is possible by using the <enabled> configuration element in each of the corresponding proxy configuration sections. For example, to disable the InvocationService proxy so that remote clients cannot execute Invocable objects within the cluster, configure the Coherence*Extend clustered service as illustrated in Example 17-10:

Example 17-10 Cluster-Side Configuration to Disable the InvocationService Proxy

<proxy-scheme>
  ...

  <proxy-config>
    <invocation-service-proxy>
      <enabled>false</enabled>
    </invocation-service-proxy>
  </proxy-config>

  <autostart>true</autostart>
</proxy-scheme>

Likewise, to prevent remote clients from accessing caches in the cluster, you would use a configuration similar to the one illustrated in Example 17-11:

Example 17-11 Cluster-Side Configuration to Prevent Cache Access

<proxy-scheme>
  ...

  <proxy-config>
    <cache-service-proxy>
      <enabled>false</enabled>
    </cache-service-proxy>
  </proxy-config>

  <autostart>true</autostart>
</proxy-scheme>