Configuring and Using Coherence*Extend

Overview

Coherence*Extend™ extends the reach of the core Coherence TCMP cluster to a wider range of consumers, including desktops, remote servers, and machines located across WAN connections. Typical uses of Coherence*Extend include providing desktop applications with access to Coherence caches (including support for Near Cache and Continuous Query) and Coherence cluster "bridges" that link together multiple Coherence clusters connected via a high-latency, unreliable WAN.

Coherence*Extend consists of two basic components: a client and a Coherence*Extend clustered service hosted by one or more DefaultCacheServer processes. The adapter library includes implementations of both the CacheService and InvocationService interfaces that route all requests to a Coherence*Extend clustered service instance running within the Coherence cluster. The Coherence*Extend clustered service in turn responds to client requests by delegating to an actual Coherence clustered service (for example, a Partitioned or Replicated cache service). The client adapter library and Coherence*Extend clustered service use a low-level messaging protocol to communicate with each other. Coherence*Extend includes the following transport bindings for this protocol:

  * Extend-JMS: connects the client to the cluster via a JMS queue
  * Extend-TCP: connects the client to the cluster via a TCP/IP socket

The choice of a transport binding is configuration-driven and is completely transparent to the client application that uses Coherence*Extend. A Coherence*Extend service is retrieved just like a Coherence clustered service: via the CacheFactory class. Once it is obtained, a client uses the Coherence*Extend service in the same way as it would if it were part of the Coherence cluster. The fact that operations are being sent to a remote cluster node (over either JMS or TCP) is transparent to the client application.
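For example, a client obtains and uses a remote cache exactly as it would a clustered one (the cache name "dist-extend" is defined in the configuration descriptors later in this section):

NamedCache cache = CacheFactory.getCache("dist-extend");
cache.put("key", "value");
Object oValue = cache.get("key");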

General Instructions

Configuring and using Coherence*Extend requires five basic steps:

  1. Create a client-side Coherence cache configuration descriptor that includes one or more <remote-cache-scheme> and/or <remote-invocation-scheme> configuration elements
  2. Create a cluster-side Coherence cache configuration descriptor that includes one or more <proxy-scheme> configuration elements
  3. Launch one or more DefaultCacheServer processes
  4. Create a client application that uses one or more Coherence*Extend services
  5. Launch the client application

The following sections describe each of these steps in detail for the Extend-JMS and Extend-TCP transport bindings.

Configuring and Using Coherence*Extend-JMS

Client-side Cache Configuration Descriptor

A Coherence*Extend client that uses the Extend-JMS transport binding must define a Coherence cache configuration descriptor which includes a <remote-cache-scheme> and/or <remote-invocation-scheme> element with a child <jms-initiator> element containing various JMS-specific configuration information. For example:

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>dist-extend</cache-name>
      <scheme-name>extend-dist</scheme-name>
    </cache-mapping>

    <cache-mapping>
      <cache-name>dist-extend-near</cache-name>
      <scheme-name>extend-near</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <near-scheme>
      <scheme-name>extend-near</scheme-name>
      <front-scheme>
        <local-scheme>
          <high-units>1000</high-units>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <remote-cache-scheme>
          <scheme-ref>extend-dist</scheme-ref>
        </remote-cache-scheme>
      </back-scheme>
      <invalidation-strategy>all</invalidation-strategy>
    </near-scheme>

    <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendJmsCacheService</service-name>
      <initiator-config>
        <jms-initiator>
          <queue-connection-factory-name>jms/tangosol/ConnectionFactory</queue-connection-factory-name>
          <queue-name>jms/tangosol/Queue</queue-name>
          <connect-timeout>10s</connect-timeout>
        </jms-initiator>
        <outgoing-message-handler>
          <request-timeout>5s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-cache-scheme>

    <remote-invocation-scheme>
      <scheme-name>extend-invocation</scheme-name>
      <service-name>ExtendJmsInvocationService</service-name>
      <initiator-config>
        <jms-initiator>
          <queue-connection-factory-name>jms/tangosol/ConnectionFactory</queue-connection-factory-name>
          <queue-name>jms/tangosol/Queue</queue-name>
          <connect-timeout>10s</connect-timeout>
        </jms-initiator>
        <outgoing-message-handler>
          <request-timeout>5s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-invocation-scheme>
  </caching-schemes>
</cache-config>

This cache configuration descriptor defines two caching schemes, one that uses Extend-JMS to connect to a remote Coherence cluster (<remote-cache-scheme>) and one that maintains an in-process size-limited near cache of remote Coherence caches (again, accessed via Extend-JMS). Additionally, the cache configuration descriptor defines a <remote-invocation-scheme> that allows the client application to execute tasks within the remote Coherence cluster. Both the <remote-cache-scheme> and <remote-invocation-scheme> elements have a <jms-initiator> child element which includes all JMS-specific information needed to connect the client with the Coherence*Extend clustered service running within the remote Coherence cluster.

When the client application retrieves a NamedCache via the CacheFactory using, for example, the name "dist-extend", the Coherence*Extend adapter library will connect to the Coherence cluster via a JMS Queue (retrieved via JNDI using the name "jms/tangosol/Queue") and return a NamedCache implementation that routes requests to the NamedCache with the same name running within the remote cluster. Likewise, when the client application retrieves an InvocationService by calling CacheFactory.getConfigurableCacheFactory().ensureService("ExtendJmsInvocationService"), the Coherence*Extend adapter library will connect to the Coherence cluster via the same JMS Queue and return an InvocationService implementation that executes synchronous Invocable tasks within the remote clustered JVM to which the client is connected.
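In code, the two lookups described above look like the following sketch:

NamedCache cache = CacheFactory.getCache("dist-extend");

InvocationService service = (InvocationService)
        CacheFactory.getConfigurableCacheFactory()
            .ensureService("ExtendJmsInvocationService");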

Cluster-side Cache Configuration Descriptor

In order for a Coherence*Extend-JMS™ client to connect to a Coherence cluster, one or more DefaultCacheServer processes must be running that use a Coherence cache configuration descriptor which includes a <proxy-scheme> element with a child <jms-acceptor> element containing various JMS-specific configuration information. For example:

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>dist-*</cache-name>
      <scheme-name>dist-default</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <distributed-scheme>
      <scheme-name>dist-default</scheme-name>
      <lease-granularity>member</lease-granularity>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>

    <proxy-scheme>
      <service-name>ExtendJmsProxyService</service-name>
      <acceptor-config>
        <jms-acceptor>
          <queue-connection-factory-name>jms/tangosol/ConnectionFactory</queue-connection-factory-name>
          <queue-name>jms/tangosol/Queue</queue-name>
        </jms-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
    </proxy-scheme>
  </caching-schemes>
</cache-config>

This cache configuration descriptor defines two clustered services: a Coherence*Extend proxy service that allows remote Extend-JMS clients to connect to the Coherence cluster, and a standard Partitioned cache service. Since this descriptor is used by a DefaultCacheServer, it is important that the <autostart> configuration element for each service is set to true so that clustered services are automatically restarted upon termination. The <proxy-scheme> element has a <jms-acceptor> child element which includes all JMS-specific information needed to accept client connection requests over JMS.

The Coherence*Extend clustered service will listen to a JMS Queue (retrieved via JNDI using the name "jms/tangosol/Queue") for connection requests. When, for example, a client attempts to connect to a Coherence NamedCache called "dist-extend", the Coherence*Extend clustered service will proxy subsequent requests to the NamedCache with the same name which, in this case, will be a Partitioned cache. Note that Extend-JMS client connection requests will be load balanced across all DefaultCacheServer processes that are running a Coherence*Extend clustered service with the same configuration.

Configuring your JMS Provider

Coherence*Extend-JMS uses JNDI to obtain references to all JMS resources. To specify the JNDI properties that Coherence*Extend-JMS uses to create a JNDI InitialContext, create a file called jndi.properties that contains your JMS provider's configuration properties and add the directory that contains the file to both the client application and DefaultCacheServer classpaths.

For example, if you are using WebLogic Server as your JMS provider, your jndi.properties file would look something like the following:

java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory
java.naming.provider.url=t3://localhost:7001
java.naming.security.principal=system
java.naming.security.credentials=weblogic

Additionally, Coherence*Extend-JMS uses a JMS Queue to connect Extend-JMS clients to a Coherence*Extend clustered service instance. Therefore, you must deploy an appropriately configured JMS QueueConnectionFactory and Queue and register them under the JNDI names specified in the <jms-initiator> and <jms-acceptor> configuration elements.

For example, if you are using WebLogic Server, you can use the following Ant script to create and deploy these JMS resources:

<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<!-- Ant build script for configuring a WebLogic Server domain with the    -->
<!-- necessary JMS resources required by Coherence*Extend-JMS              -->
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<!--                                                                       -->
<!-- Usage:                                                                -->
<!--                                                                       -->
<!--   1) Create the WLS domain:                                           -->
<!--      prompt> ant create.domain                                        -->
<!--                                                                       -->
<!--   2) Start the WLS instance:                                          -->
<!--      prompt> domain/startmydomain.cmd|sh                              -->
<!--                                                                       -->
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<project name="extend-jms-wls" default="create.domain" basedir=".">

  <!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
  <!-- Project properties                                                  -->
  <!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->

  <property name="weblogic.home"   value="c:/opt/bea/weblogic8.1.5"/>
  <property name="weblogic.jar"    value="${weblogic.home}/server/lib/weblogic.jar"/>
  <property name="server.user"     value="system"/>
  <property name="server.password" value="weblogic"/>
  <property name="domain.dir"      value="domain"/>
  <property name="domain.name"     value="mydomain"/>
  <property name="server.name"     value="myserver"/>
  <property name="realm.name"      value="myrealm"/>
  <property name="server.host"     value="localhost"/>
  <property name="server.port"     value="7001"/>
  <property name="admin.url"       value="t3://${server.host}:${server.port}"/>

  <!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
  <!-- Project paths                                                       -->
  <!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->

  <path id="project.classpath">
    <pathelement location="${weblogic.jar}"/>
  </path>

  <!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
  <!-- Project task definitions                                            -->
  <!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->

  <taskdef name="wlserver"
    classname="weblogic.ant.taskdefs.management.WLServer"
    classpathref="project.classpath"/>
  <taskdef name="wlconfig"
    classname="weblogic.ant.taskdefs.management.WLConfig"
    classpathref="project.classpath"/>

  <!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
  <!-- Project targets                                                     -->
  <!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->

  <target name="clean" description="Remove all build artifacts.">
    <delete dir="${domain.dir}"/>
  </target>

  <target name="create.domain"
    description="Create a WLS domain for use with Coherence*Extend-JMS.">
    <delete dir="${domain.dir}"/>
    <mkdir dir="${domain.dir}"/>

    <wlserver weblogicHome="${weblogic.home}"
      dir="${domain.dir}"
      classpathref="project.classpath"
      host="${server.host}"
      port="${server.port}"
      servername="${server.name}"
      domainname="${domain.name}"
      generateConfig="true"
      username="${server.user}"
      password="${server.password}"
      action="start"/>

    <antcall target="config.domain"/>
  </target>

  <target name="config.domain" 
    description="Configure a WLS domain for use with Coherence*Extend-JMS.">
      <wlconfig url="${admin.url}"
        username="${server.user}"
        password="${server.password}">
        <query domain="${domain.name}"
          type="Server"
          name="${server.name}"
          property="server"/>

        <!-- Create a JMS template -->
        <create type="JMSTemplate" name="TangosolTemplate" property="template"/>

        <!-- Add a JMS server and queue for the application -->
        <create type="JMSServer" name="MyJMSServer">
            <set attribute="Targets" value="${server}"/>
            <create type="JMSQueue" name="TangosolQueue">
                <set attribute="JNDIName" value="jms/tangosol/Queue"/>
            </create>
            <set attribute="TemporaryTemplate" value="${template}"/>
        </create>

        <!-- Create a JMS connection factory -->
        <create type="JMSConnectionFactory" name="TangosolConnectionFactory">
            <set attribute="JNDIName"
                value="jms/tangosol/ConnectionFactory"/>
            <set attribute="Targets" value="${server}"/>
        </create>
      </wlconfig>
  </target>
</project>

Launching an Extend-JMS DefaultCacheServer Process

To start a DefaultCacheServer that uses the cluster-side Coherence cache configuration described earlier to allow Extend-JMS clients to connect to the Coherence cluster via JMS, you need to do the following:

  1. Add coherence.jar, the directory containing your jndi.properties file, and your JMS provider's client library to the classpath
  2. Specify the location of the cluster-side cache configuration descriptor via the tangosol.coherence.cacheconfig system property
  3. Launch the com.tangosol.net.DefaultCacheServer class

For example, if you are using WebLogic Server as your JMS provider, you would run the following command on Windows (note that it is broken up into multiple lines here only for formatting purposes; this is a single command typed on one line):

java -cp coherence.jar;<directory containing jndi.properties>;<WebLogic home>\server\lib\wljmsclient.jar
     -Dtangosol.coherence.cacheconfig=file://<path to the server-side cache configuration descriptor>
     com.tangosol.net.DefaultCacheServer

On Unix:

java -cp coherence.jar:<directory containing jndi.properties>:<WebLogic home>/server/lib/wljmsclient.jar
     -Dtangosol.coherence.cacheconfig=file://<path to the server-side cache configuration descriptor>
     com.tangosol.net.DefaultCacheServer

Launching an Extend-JMS Client Application

To start a client application that uses Extend-JMS to connect to a remote Coherence cluster via JMS, you need to do the following:

  1. Add coherence.jar, the directory containing your jndi.properties file, your JMS provider's client library, and your client application classes to the classpath
  2. Specify the location of the client-side cache configuration descriptor via the tangosol.coherence.cacheconfig system property
  3. Launch the client application class

For example, if you are using WebLogic Server as your JMS provider, you would run the following command on Windows (note that it is broken up into multiple lines here only for formatting purposes; this is a single command typed on one line):

java -cp coherence.jar;<directory containing jndi.properties>;<WebLogic home>\server\lib\wljmsclient.jar
     -Dtangosol.coherence.cacheconfig=file://<path to the client-side cache configuration descriptor>
     <client application class name>

On Unix:

java -cp coherence.jar:<directory containing jndi.properties>:<WebLogic home>/server/lib/wljmsclient.jar
     -Dtangosol.coherence.cacheconfig=file://<path to the client-side cache configuration descriptor>
     <client application class name>

Configuring and Using Coherence*Extend-TCP

Client-side Cache Configuration Descriptor

A Coherence*Extend client that uses the Extend-TCP transport binding must define a Coherence cache configuration descriptor which includes a <remote-cache-scheme> and/or <remote-invocation-scheme> element with a child <tcp-initiator> element containing various TCP/IP-specific configuration information. For example:

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>dist-extend</cache-name>
      <scheme-name>extend-dist</scheme-name>
    </cache-mapping>

    <cache-mapping>
      <cache-name>dist-extend-near</cache-name>
      <scheme-name>extend-near</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <near-scheme>
      <scheme-name>extend-near</scheme-name>
      <front-scheme>
        <local-scheme>
          <high-units>1000</high-units>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <remote-cache-scheme>
          <scheme-ref>extend-dist</scheme-ref>
        </remote-cache-scheme>
      </back-scheme>
      <invalidation-strategy>all</invalidation-strategy>
    </near-scheme>

    <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>localhost</address>
              <port>9099</port>
            </socket-address>
          </remote-addresses>
          <connect-timeout>10s</connect-timeout>
        </tcp-initiator>
        <outgoing-message-handler>
          <request-timeout>5s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-cache-scheme>

    <remote-invocation-scheme>
      <scheme-name>extend-invocation</scheme-name>
      <service-name>ExtendTcpInvocationService</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>localhost</address>
              <port>9099</port>
            </socket-address>
          </remote-addresses>
          <connect-timeout>10s</connect-timeout>
        </tcp-initiator>
        <outgoing-message-handler>
          <request-timeout>5s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-invocation-scheme>
  </caching-schemes>
</cache-config>

This cache configuration descriptor defines two caching schemes, one that uses Extend-TCP to connect to a remote Coherence cluster (<remote-cache-scheme>) and one that maintains an in-process size-limited near cache of remote Coherence caches (again, accessed via Extend-TCP). Additionally, the cache configuration descriptor defines a <remote-invocation-scheme> that allows the client application to execute tasks within the remote Coherence cluster. Both the <remote-cache-scheme> and <remote-invocation-scheme> elements have a <tcp-initiator> child element which includes all TCP/IP-specific information needed to connect the client with the Coherence*Extend clustered service running within the remote Coherence cluster.

When the client application retrieves a NamedCache via the CacheFactory using, for example, the name "dist-extend", the Coherence*Extend adapter library will connect to the Coherence cluster via TCP/IP (using the address "localhost" and port 9099) and return a NamedCache implementation that routes requests to the NamedCache with the same name running within the remote cluster. Likewise, when the client application retrieves an InvocationService by calling CacheFactory.getConfigurableCacheFactory().ensureService("ExtendTcpInvocationService"), the Coherence*Extend adapter library will connect to the Coherence cluster via TCP/IP (again, using the address "localhost" and port 9099) and return an InvocationService implementation that executes synchronous Invocable tasks within the remote clustered JVM to which the client is connected.

Note that the <remote-addresses> configuration element can contain multiple <socket-address> child elements. The Coherence*Extend adapter library will attempt to connect to the addresses in a random order, until either the list is exhausted or a TCP/IP connection is established.
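For example, a <tcp-initiator> configured with two candidate proxy addresses might look like the following sketch (the addresses shown are illustrative):

<tcp-initiator>
  <remote-addresses>
    <socket-address>
      <address>192.168.0.2</address>
      <port>9099</port>
    </socket-address>
    <socket-address>
      <address>192.168.0.3</address>
      <port>9099</port>
    </socket-address>
  </remote-addresses>
  <connect-timeout>10s</connect-timeout>
</tcp-initiator>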

Cluster-side Cache Configuration Descriptor

In order for a Coherence*Extend-TCP™ client to connect to a Coherence cluster, one or more DefaultCacheServer processes must be running that use a Coherence cache configuration descriptor which includes a <proxy-scheme> element with a child <tcp-acceptor> element containing various TCP/IP-specific configuration information. For example:

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>dist-*</cache-name>
      <scheme-name>dist-default</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <distributed-scheme>
      <scheme-name>dist-default</scheme-name>
      <lease-granularity>member</lease-granularity>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>

    <proxy-scheme>
      <service-name>ExtendTcpProxyService</service-name>
      <thread-count>5</thread-count>
      <acceptor-config>
        <tcp-acceptor>
          <local-address>
            <address>localhost</address>
            <port>9099</port>
          </local-address>
        </tcp-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
    </proxy-scheme>
  </caching-schemes>
</cache-config>

This cache configuration descriptor defines two clustered services: a Coherence*Extend proxy service that allows remote Extend-TCP clients to connect to the Coherence cluster, and a standard Partitioned cache service. Since this descriptor is used by a DefaultCacheServer, it is important that the <autostart> configuration element for each service is set to true so that clustered services are automatically restarted upon termination. The <proxy-scheme> element has a <tcp-acceptor> child element which includes all TCP/IP-specific information needed to accept client connection requests over TCP/IP.

The Coherence*Extend clustered service will listen to a TCP/IP ServerSocket (bound to address "localhost" and port 9099) for connection requests. When, for example, a client attempts to connect to a Coherence NamedCache called "dist-extend", the Coherence*Extend clustered service will proxy subsequent requests to the NamedCache with the same name which, in this case, will be a Partitioned cache.

Launching an Extend-TCP DefaultCacheServer Process

To start a DefaultCacheServer that uses the cluster-side Coherence cache configuration described earlier to allow Extend-TCP clients to connect to the Coherence cluster via TCP/IP, you need to do the following:

  1. Add coherence.jar, along with any application classes required by the clustered services, to the classpath
  2. Specify the location of the cluster-side cache configuration descriptor via the tangosol.coherence.cacheconfig system property
  3. Launch the com.tangosol.net.DefaultCacheServer class

For example (note that the following command is broken up into multiple lines here only for formatting purposes; this is a single command typed on one line):

java -cp coherence.jar:<classpath to client application> 
     -Dtangosol.coherence.cacheconfig=file://<path to the server-side cache configuration descriptor>
     com.tangosol.net.DefaultCacheServer

Launching an Extend-TCP Client Application

To start a client application that uses Extend-TCP to connect to a remote Coherence cluster via TCP/IP, you need to do the following:

  1. Add coherence.jar and your client application classes to the classpath
  2. Specify the location of the client-side cache configuration descriptor via the tangosol.coherence.cacheconfig system property
  3. Launch the client application class

For example (note that the following command is broken up into multiple lines here only for formatting purposes; this is a single command typed on one line):

java -cp coherence.jar:<classpath to client application> 
     -Dtangosol.coherence.cacheconfig=file://<path to the client-side cache configuration descriptor>
     <client application class name>

Example Coherence*Extend Client Application

The following example demonstrates how to retrieve and use a Coherence*Extend CacheService and InvocationService. This example increments an Integer value in a remote Partitioned cache and then retrieves the value by executing an Invocable on the clustered JVM to which the client is attached:

public static void main(String[] asArg)
        throws Throwable
    {
    // increment the Integer value stored under "key" in the remote cache
    NamedCache cache  = CacheFactory.getCache("dist-extend");
    Integer    IValue = (Integer) cache.get("key");
    if (IValue == null)
        {
        IValue = new Integer(1);
        }
    else
        {
        IValue = new Integer(IValue.intValue() + 1);
        }
    cache.put("key", IValue);

    // retrieve the Coherence*Extend InvocationService
    InvocationService service = (InvocationService)
            CacheFactory.getConfigurableCacheFactory()
                .ensureService("ExtendTcpInvocationService");

    // execute an Invocable on the clustered JVM to which the client is
    // connected; a Coherence*Extend client must pass a null member set
    Map map = service.query(new AbstractInvocable()
            {
            public void run()
                {
                setResult(CacheFactory.getCache("dist-extend").get("key"));
                }
            }, null);

    // the single result is keyed by the local Member
    IValue = (Integer) map.get(service.getCluster().getLocalMember());
    }

Note that this example could also be run verbatim on a Coherence node (i.e. within the cluster). The fact that operations are being sent to a remote cluster node (over either JMS or TCP) is completely transparent to the client application.

Coherence*Extend InvocationService

Since, by definition, a Coherence*Extend client has no direct knowledge of the cluster and the members running within the cluster, the Coherence*Extend InvocationService only allows Invocable tasks to be executed on the JVM to which the client is connected. Therefore, you should always pass a null member set to the query() method. As a consequence of this, the single result of the execution will be keyed by the local Member, which will be null if the client is not part of the cluster. This Member can be retrieved by calling service.getCluster().getLocalMember(). Additionally, the Coherence*Extend InvocationService only supports synchronous task execution (i.e. the execute() method is not supported).
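Putting these rules together, a remote invocation and its single result are always obtained as shown in the following sketch, where MyTask stands in for any Invocable implementation:

Map map = service.query(new MyTask(), null);  // MyTask: any Invocable; the member set must be null
Object oResult = map.get(service.getCluster().getLocalMember());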

Advanced Configuration

Network Filters

Like Coherence clustered services, Coherence*Extend services support pluggable network filters. Filters can be used to modify the contents of network traffic before it is placed "on the wire". Most standard Coherence network filters are supported, including the compression and symmetric encryption filters. For more information on configuring filters, see the Network Filters section.

To use network filters with Coherence*Extend, a <use-filters> element must be added to the <initiator-config> element in the client-side cache configuration descriptor and to the <acceptor-config> element in the cluster-side cache configuration descriptor.

For example, to encrypt network traffic exchanged between a Coherence*Extend client and the clustered service to which it is connected, configure the client-side <remote-cache-scheme> and <remote-invocation-scheme> elements like so (assuming the symmetric encryption filter has been named symmetric-encryption):

<remote-cache-scheme>
  <scheme-name>extend-dist</scheme-name>
  <service-name>ExtendTcpCacheService</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>localhost</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
      <connect-timeout>10s</connect-timeout>
    </tcp-initiator>
    <outgoing-message-handler>
      <request-timeout>5s</request-timeout>
    </outgoing-message-handler>
    <use-filters>
      <filter-name>symmetric-encryption</filter-name>
    </use-filters>    
  </initiator-config>
</remote-cache-scheme>

<remote-invocation-scheme>
  <scheme-name>extend-invocation</scheme-name>
  <service-name>ExtendTcpInvocationService</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>localhost</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
      <connect-timeout>10s</connect-timeout>
    </tcp-initiator>
    <outgoing-message-handler>
      <request-timeout>5s</request-timeout>
    </outgoing-message-handler>
    <use-filters>
      <filter-name>symmetric-encryption</filter-name>
    </use-filters>    
  </initiator-config>
</remote-invocation-scheme>

and the cluster-side <proxy-scheme> element like so:

<proxy-scheme>
  <service-name>ExtendTcpProxyService</service-name>
  <thread-count>5</thread-count>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <address>localhost</address>
        <port>9099</port>
      </local-address>
    </tcp-acceptor>
    <use-filters>
      <filter-name>symmetric-encryption</filter-name>
    </use-filters>
  </acceptor-config>
  <autostart>true</autostart>
</proxy-scheme>

The contents of the <use-filters> element must be the same in the client and cluster-side cache configuration descriptors.

Connection Error Detection and Failover

When a Coherence*Extend service detects that the connection between the client and cluster has been severed (for example, due to a network, software, or hardware failure), the Coherence*Extend client service implementation (i.e. CacheService or InvocationService) will dispatch a MemberEvent.MEMBER_LEFT event to all registered MemberListeners and the service will be stopped. If the client application attempts to subsequently use the service, the service will automatically restart itself and attempt to reconnect to the cluster. If the connection is successful, the service will dispatch a MemberEvent.MEMBER_JOINED event; otherwise, a fatal exception will be thrown to the client application.
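For example, a client can register a MemberListener with a Coherence*Extend service to observe these connection events; a minimal sketch (the ConnectionMonitor class name and println calls are illustrative):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.CacheService;
import com.tangosol.net.MemberEvent;
import com.tangosol.net.MemberListener;
import com.tangosol.net.NamedCache;

public class ConnectionMonitor
    {
    public static void main(String[] asArg)
        {
        NamedCache   cache   = CacheFactory.getCache("dist-extend");
        CacheService service = cache.getCacheService();

        service.addMemberListener(new MemberListener()
            {
            public void memberJoined(MemberEvent evt)
                {
                // dispatched when the connection is established or restored
                System.out.println("Connected: " + evt);
                }

            public void memberLeaving(MemberEvent evt)
                {
                }

            public void memberLeft(MemberEvent evt)
                {
                // dispatched when the connection has been severed
                System.out.println("Disconnected: " + evt);
                }
            });
        }
    }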

A Coherence*Extend service has several mechanisms for detecting dropped connections. Some mechanisms are inherent in the underlying protocol (i.e. a javax.jms.ExceptionListener in Extend-JMS and TCP/IP in Extend-TCP), whereas others are implemented by the service itself. The latter mechanisms are configured via the <outgoing-message-handler> configuration element.

The primary configurable mechanism used by a Coherence*Extend client service to detect dropped connections is a request timeout. When the service sends a request to the remote cluster and does not receive a response within the request timeout interval (see <request-timeout>), the service assumes that the connection has been dropped. The Coherence*Extend client and clustered services can also be configured to send a periodic heartbeat over the connection (see <heartbeat-interval> and <heartbeat-timeout>). If the service does not receive a response within the configured heartbeat timeout interval, the service assumes that the connection has been dropped.
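For example, a client-side <outgoing-message-handler> that enables heartbeats might look like the following sketch (the interval and timeout values are illustrative):

<outgoing-message-handler>
  <heartbeat-interval>5s</heartbeat-interval>
  <heartbeat-timeout>3s</heartbeat-timeout>
  <request-timeout>5s</request-timeout>
</outgoing-message-handler>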

You should always enable heartbeats when using a connectionless transport, as is the case with Extend-JMS.

If you do not specify a <request-timeout>, a Coherence*Extend service will use an infinite request timeout. In general, this is not a recommended configuration, as it could result in an unresponsive application. For most use cases, you should specify a reasonable finite request timeout.

Read-only NamedCache Access

By default, the Coherence*Extend clustered service allows both read and write access to proxied NamedCache instances. If you would like to prohibit Coherence*Extend clients from modifying cached content, you may do so using the <cache-service-proxy> child configuration element. For example:

<proxy-scheme>
  ...

  <proxy-config>
    <cache-service-proxy>
      <read-only>true</read-only>
    </cache-service-proxy>
  </proxy-config>

  <autostart>true</autostart>
</proxy-scheme>

Client-side NamedCache Locking

By default, the Coherence*Extend clustered service disallows Coherence*Extend clients from acquiring NamedCache locks. If you would like to enable client-side locking, you may do so using the <cache-service-proxy> child configuration element. For example:

<proxy-scheme>
  ...

  <proxy-config>
    <cache-service-proxy>
      <lock-enabled>true</lock-enabled>
    </cache-service-proxy>
  </proxy-config>

  <autostart>true</autostart>
</proxy-scheme>

If you do enable client-side locking and your client application makes use of the NamedCache.lock() and unlock() methods, it is important that you specify member-based rather than thread-based locking for any Partitioned or Replicated cache services defined in your cluster-side Coherence cache configuration descriptor. This is because the Coherence*Extend clustered service uses a pool of threads to execute client requests concurrently; therefore, it cannot be guaranteed that the same thread will execute subsequent requests from the same Coherence*Extend client.

To specify the member-based locking strategy for a Partitioned or Replicated cache service, use the <lease-granularity> configuration element. For example:

<distributed-scheme>
  <scheme-name>dist-default</scheme-name>
  <lease-granularity>member</lease-granularity>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
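With member-based leases in place, a Coherence*Extend client can use the standard lock-then-modify pattern; a minimal sketch:

NamedCache cache = CacheFactory.getCache("dist-extend");
if (cache.lock("key", -1))  // block until the lock is acquired
    {
    try
        {
        // perform an atomic read-modify-write
        Integer IValue = (Integer) cache.get("key");
        cache.put("key", new Integer(IValue == null ? 1 : IValue.intValue() + 1));
        }
    finally
        {
        cache.unlock("key");
        }
    }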