18 Configuring and Using Coherence*Extend

Coherence*Extend extends the reach of the core Coherence TCMP cluster to a wider range of consumers, including desktops, remote servers and machines located across WAN connections. Typical uses of Coherence*Extend include providing desktop applications with access to Coherence caches (including support for Near Cache and continuous query) and Coherence cluster "bridges" that link together multiple Coherence clusters connected by using a high-latency, unreliable WAN.

Coherence*Extend consists of two basic components: a client and a Coherence*Extend clustered service hosted by one or more DefaultCacheServer processes. The adapter library includes implementations of both the CacheService and InvocationService interfaces that route all requests to a Coherence*Extend clustered service instance running within the Coherence cluster. The Coherence*Extend clustered service in turn responds to client requests by delegating to an actual Coherence clustered service (for example, a Partitioned or Replicated cache service). The client adapter library and Coherence*Extend clustered service use a low-level messaging protocol to communicate with each other. Coherence*Extend includes the following transport bindings for this protocol:

  • Extend-JMS: transports the protocol over a shared queue provided by a JMS provider (described in Section 18.2)

  • Extend-TCP: transports the protocol over a direct TCP/IP socket connection (described in Section 18.3)

The choice of a transport binding is configuration-driven and is completely transparent to the client application that uses Coherence*Extend. A Coherence*Extend service is retrieved like a Coherence clustered service: by using the CacheFactory class. When obtained, a client uses the Coherence*Extend service in the same way as it would if it were part of the Coherence cluster. The fact that operations are being sent to a remote cluster node (over either JMS or TCP) is transparent to the client application.

18.1 General Instructions

Configuring and using Coherence*Extend requires five basic steps:

  1. Create a client-side Coherence cache configuration descriptor that includes one or more <remote-cache-scheme> and <remote-invocation-scheme> configuration elements

  2. Create a cluster-side Coherence cache configuration descriptor that includes one or more <proxy-scheme> configuration elements

  3. Launch one or more DefaultCacheServer processes

  4. Create a client application that uses one or more Coherence*Extend services. See "Sample Coherence*Extend Client Application".

  5. Launch the client application

The following sections describe each of these steps in detail for the Extend-JMS and Extend-TCP transport bindings.

18.2 Configuring and Using Coherence*Extend-JMS

18.2.1 Client-side Cache Configuration Descriptor

A Coherence*Extend client that uses the Extend-JMS transport binding must define a Coherence cache configuration descriptor which includes a <remote-cache-scheme> and/or <remote-invocation-scheme> element with a child <jms-initiator> element containing various JMS-specific configuration information. Example 18-1 illustrates a sample descriptor.

Example 18-1 Client-Side Cache Configuration Descriptor for Extend-JMS

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>dist-extend</cache-name>
      <scheme-name>extend-dist</scheme-name>
    </cache-mapping>

    <cache-mapping>
      <cache-name>dist-extend-near</cache-name>
      <scheme-name>extend-near</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <near-scheme>
      <scheme-name>extend-near</scheme-name>
      <front-scheme>
        <local-scheme>
          <high-units>1000</high-units>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <remote-cache-scheme>
          <scheme-ref>extend-dist</scheme-ref>
        </remote-cache-scheme>
      </back-scheme>
      <invalidation-strategy>all</invalidation-strategy>
    </near-scheme>

    <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendJmsCacheService</service-name>
      <initiator-config>
        <jms-initiator>
          <queue-connection-factory-name>jms/coherence/ConnectionFactory</queue-connection-factory-name>
          <queue-name>jms/coherence/Queue</queue-name>
          <connect-timeout>10s</connect-timeout>
        </jms-initiator>
        <outgoing-message-handler>
          <request-timeout>5s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-cache-scheme>

    <remote-invocation-scheme>
      <scheme-name>extend-invocation</scheme-name>
      <service-name>ExtendJmsInvocationService</service-name>
      <initiator-config>
        <jms-initiator>
          <queue-connection-factory-name>jms/coherence/ConnectionFactory</queue-connection-factory-name>
          <queue-name>jms/coherence/Queue</queue-name>
          <connect-timeout>10s</connect-timeout>
        </jms-initiator>
        <outgoing-message-handler>
          <request-timeout>5s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-invocation-scheme>
  </caching-schemes>
</cache-config>

This cache configuration descriptor defines two caching schemes, one that uses Extend-JMS to connect to a remote Coherence cluster (<remote-cache-scheme>) and one that maintains an in-process size-limited near cache of remote Coherence caches (again, accessed by Extend-JMS). Additionally, the cache configuration descriptor defines a <remote-invocation-scheme> that allows the client application to execute tasks within the remote Coherence cluster. Both the <remote-cache-scheme> and <remote-invocation-scheme> elements have a <jms-initiator> child element which includes all JMS-specific information needed to connect the client with the Coherence*Extend clustered service running within the remote Coherence cluster.

When the client application retrieves a NamedCache with the name dist-extend, for example, by using the CacheFactory, the Coherence*Extend adapter library will connect to the Coherence cluster by using a JMS Queue (retrieved from JNDI using the name jms/coherence/Queue) and return a NamedCache implementation that routes requests to the NamedCache with the same name running within the remote cluster. Likewise, when the client application retrieves an InvocationService by calling CacheFactory.getConfigurableCacheFactory().ensureService("ExtendJmsInvocationService"), the Coherence*Extend adapter library will connect to the Coherence cluster by using the same JMS Queue and return an InvocationService implementation that executes synchronous Invocable tasks within the remote clustered JVM to which the client is connected.

18.2.2 Cluster-side Cache Configuration Descriptor

For a Coherence*Extend-JMS client to connect to a Coherence cluster, one or more DefaultCacheServer processes must be running that use a Coherence cache configuration descriptor. This descriptor must include a <proxy-scheme> element with a child <jms-acceptor> element containing various JMS-specific configuration information. Example 18-2 illustrates a sample descriptor.

Example 18-2 Cluster-Side Cache Configuration Descriptor for Extend-JMS

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>dist-*</cache-name>
      <scheme-name>dist-default</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <distributed-scheme>
      <scheme-name>dist-default</scheme-name>
      <lease-granularity>member</lease-granularity>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>

    <proxy-scheme>
      <service-name>ExtendJmsProxyService</service-name>
      <acceptor-config>
        <jms-acceptor>
          <queue-connection-factory-name>jms/coherence/ConnectionFactory</queue-connection-factory-name>
          <queue-name>jms/coherence/Queue</queue-name>
        </jms-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
    </proxy-scheme>
  </caching-schemes>
</cache-config>

This cache configuration descriptor defines two clustered services: a Coherence*Extend proxy service that allows remote Extend-JMS clients to connect to the Coherence cluster over JMS, and a standard Partitioned cache service. Because this descriptor is used by a DefaultCacheServer, it is important that the <autostart> configuration element for each service is set to true so that clustered services are automatically restarted upon termination. The <proxy-scheme> element has a <jms-acceptor> child element which includes all JMS-specific information needed to accept client connection requests over JMS.

The Coherence*Extend clustered service will listen to a JMS Queue (retrieved by JNDI using the name jms/coherence/Queue) for connection requests. When, for example, a client attempts to connect to a Coherence NamedCache called dist-extend, the Coherence*Extend clustered service will proxy subsequent requests to the NamedCache with the same name which, in this case, will be a Partitioned cache. Note that Extend-JMS client connection requests will be load balanced across all DefaultCacheServer processes that run a Coherence*Extend clustered service with the same configuration.
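
The load-balancing behavior can be pictured with a small, self-contained Java model. This is illustrative only, not Coherence or JMS code; the class name and the take-turns scheduling are an idealization of multiple proxy processes polling one shared queue.

```java
import java.util.*;

/** Idealized model: each connection request placed on the shared queue is
 *  consumed by exactly one proxy process, so clients naturally spread
 *  across all processes listening on the queue. */
class SharedQueueModel {
    static Map<String, List<String>> drain(Queue<String> requests, List<String> servers) {
        Map<String, List<String>> assigned = new LinkedHashMap<>();
        for (String server : servers) {
            assigned.put(server, new ArrayList<>());
        }
        int turn = 0;
        while (!requests.isEmpty()) {
            // Whichever proxy process polls next receives the request;
            // modeled here as the processes taking turns.
            String server = servers.get(turn++ % servers.size());
            assigned.get(server).add(requests.poll());
        }
        return assigned;
    }
}
```

The essential property is that each request is delivered to exactly one consumer, which is what a JMS Queue (as opposed to a Topic) guarantees.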

18.2.3 Configuring your JMS Provider

Coherence*Extend-JMS uses JNDI to obtain references to all JMS resources. To specify the JNDI properties that Coherence*Extend-JMS uses to create a JNDI InitialContext, create a file called jndi.properties that contains your JMS provider's configuration properties and add the directory that contains the file to both the client application and DefaultCacheServer classpaths.

For example, if you are using WebLogic Server as your JMS provider, your jndi.properties file would look like Example 18-3:

Example 18-3 jndi.properties Values for a WebLogic Server Acting as a JMS Provider

java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory
java.naming.provider.url=t3://localhost:7001
java.naming.security.principal=system
java.naming.security.credentials=weblogic

Additionally, Coherence*Extend-JMS uses a JMS Queue to connect Extend-JMS clients to a Coherence*Extend clustered service instance. Therefore, you must deploy an appropriately configured JMS QueueConnectionFactory and Queue and register them under the JNDI names specified in the <jms-initiator> and <jms-acceptor> configuration elements.

For example, if you are using WebLogic Server, you can use the Ant script in Example 18-4 to create and deploy these JMS resources:

Example 18-4 Ant Script to Create JMS Resources and Deploy on a WebLogic Server

<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<!-- Ant build script for configuring a WebLogic Server domain with the    -->
<!-- necessary JMS resources required by Coherence*Extend-JMS              -->
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<!--                                                                       -->
<!-- Usage:                                                                -->
<!--                                                                       -->
<!--   1) Create the WLS domain:                                           -->
<!--      prompt> ant create.domain                                        -->
<!--                                                                       -->
<!--   2) Start the WLS instance:                                          -->
<!--      prompt> domain/startmydomain.cmd|sh                              -->
<!--                                                                       -->
<!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
<project name="extend-jms-wls" default="create.domain" basedir=".">

  <!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
  <!-- Project properties                                                  -->
  <!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->

  <property name="weblogic.home"   value="c:/opt/bea/weblogic8.1.5"/>
  <property name="weblogic.jar"    value="${weblogic.home}/server/lib/weblogic.jar"/>
  <property name="server.user"     value="system"/>
  <property name="server.password" value="weblogic"/>
  <property name="domain.dir"      value="domain"/>
  <property name="domain.name"     value="mydomain"/>
  <property name="server.name"     value="myserver"/>
  <property name="realm.name"      value="myrealm"/>
  <property name="server.host"     value="localhost"/>
  <property name="server.port"     value="7001"/>
  <property name="admin.url"       value="t3://${server.host}:${server.port}"/>

  <!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
  <!-- Project paths                                                       -->
  <!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->

  <path id="project.classpath">
    <pathelement location="${weblogic.jar}"/>
  </path>

  <!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
  <!-- Project task definitions                                            -->
  <!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->

  <taskdef name="wlserver"
    classname="weblogic.ant.taskdefs.management.WLServer"
    classpathref="project.classpath"/>
  <taskdef name="wlconfig"
    classname="weblogic.ant.taskdefs.management.WLConfig"
    classpathref="project.classpath"/>

  <!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->
  <!-- Project targets                                                     -->
  <!-- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -->

  <target name="clean" description="Remove all build artifacts.">
    <delete dir="${domain.dir}"/>
  </target>

  <target name="create.domain"
    description="Create a WLS domain for use with Coherence*Extend-JMS.">
    <delete dir="${domain.dir}"/>
    <mkdir dir="${domain.dir}"/>

    <wlserver weblogicHome="${weblogic.home}"
      dir="${domain.dir}"
      classpathref="project.classpath"
      host="${server.host}"
      port="${server.port}"
      servername="${server.name}"
      domainname="${domain.name}"
      generateConfig="true"
      username="${server.user}"
      password="${server.password}"
      action="start"/>

    <antcall target="config.domain"/>
  </target>

  <target name="config.domain" 
    description="Configure a WLS domain for use with Coherence*Extend-JMS.">
      <wlconfig url="${admin.url}"
        username="${server.user}"
        password="${server.password}">
        <query domain="${domain.name}"
          type="Server"
          name="${server.name}"
          property="server"/>

        <!-- Create a JMS template -->
        <create type="JMSTemplate" name="CoherenceTemplate" property="template"/>

        <!-- Add a JMS server and queue for the application -->
        <create type="JMSServer" name="MyJMSServer">
            <set attribute="Targets" value="${server}"/>
            <create type="JMSQueue" name="CoherenceQueue">
                <set attribute="JNDIName" value="jms/coherence/Queue"/>
            </create>
            <set attribute="TemporaryTemplate" value="${template}"/>
        </create>

        <!-- Create a JMS connection factory -->
        <create type="JMSConnectionFactory" name="CoherenceConnectionFactory">
            <set attribute="JNDIName"
                value="jms/coherence/ConnectionFactory"/>
            <set attribute="Targets" value="${server}"/>
        </create>
      </wlconfig>
  </target>
</project>

18.2.4 Launching an Extend-JMS DefaultCacheServer Process

To start a DefaultCacheServer that uses the cluster-side Coherence cache configuration described earlier to allow Extend-JMS clients to connect to the Coherence cluster by using JMS, you need to do the following:

  • Change the current directory to the Coherence library directory (%COHERENCE_HOME%\lib on Windows and $COHERENCE_HOME/lib on UNIX).

  • Make sure that the paths are configured so that the Java command will run.

  • Start the DefaultCacheServer command line application with the directory that contains your jndi.properties file and your JMS provider's libraries on the classpath and the -Dtangosol.coherence.cacheconfig system property set to the location of the cluster-side Coherence cache configuration descriptor described earlier.

For example, if you are using WebLogic Server as your JMS provider, run the following command on Windows (note that it is broken up into multiple lines here only for formatting purposes; this is a single command typed on one line):

Example 18-5 Windows Command to Start the Default Cache Server for the Cluster-Side

java -cp coherence.jar;<directory containing jndi.properties>;<WebLogic home>\server\lib\wljmsclient.jar
     -Dtangosol.coherence.cacheconfig=file://<path to the server-side cache configuration descriptor>
     com.tangosol.net.DefaultCacheServer

On UNIX:

Example 18-6 UNIX Command to Start the Default Cache Server for the Cluster-Side

java -cp coherence.jar:<directory containing jndi.properties>:<WebLogic home>/server/lib/wljmsclient.jar
     -Dtangosol.coherence.cacheconfig=file://<path to the server-side cache configuration descriptor>
     com.tangosol.net.DefaultCacheServer

18.2.5 Launching an Extend-JMS Client Application

To start a client application that uses Extend-JMS to connect to a remote Coherence cluster by using JMS, you need to do the following:

  • Change the current directory to the Coherence library directory (%COHERENCE_HOME%\lib on Windows and $COHERENCE_HOME/lib on UNIX).

  • Make sure that the paths are configured so that the Java command will run.

  • Start your client application with the directory that contains your jndi.properties file and your JMS provider's libraries on the classpath and the -Dtangosol.coherence.cacheconfig system property set to the location of the client-side Coherence cache configuration descriptor described earlier.

For example, if you are using WebLogic Server as your JMS provider, run the following command on Windows (note that it is broken up into multiple lines here only for formatting purposes; this is a single command typed on one line):

Example 18-7 Windows Command to Start the Client Application

java -cp coherence.jar;<directory containing jndi.properties>;<WebLogic home>\server\lib\wljmsclient.jar
     -Dtangosol.coherence.cacheconfig=file://<path to the client-side cache configuration descriptor>
     <client application Class name>

On UNIX:

Example 18-8 UNIX Command to Start the Client Application

java -cp coherence.jar:<directory containing jndi.properties>:<WebLogic home>/server/lib/wljmsclient.jar
     -Dtangosol.coherence.cacheconfig=file://<path to the client-side cache configuration descriptor>
     <client application Class name>

18.3 Configuring and Using Coherence*Extend-TCP

18.3.1 Client-side Cache Configuration Descriptor

A Coherence*Extend client that uses the Extend-TCP transport binding must define a Coherence cache configuration descriptor which includes a <remote-cache-scheme> and/or <remote-invocation-scheme> element with a child <tcp-initiator> element containing various TCP/IP-specific configuration information. Example 18-9 illustrates a sample descriptor.

Example 18-9 Coherence*Extend Client Descriptor that uses Extend-TCP

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>dist-extend</cache-name>
      <scheme-name>extend-dist</scheme-name>
    </cache-mapping>

    <cache-mapping>
      <cache-name>dist-extend-near</cache-name>
      <scheme-name>extend-near</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <near-scheme>
      <scheme-name>extend-near</scheme-name>
      <front-scheme>
        <local-scheme>
          <high-units>1000</high-units>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <remote-cache-scheme>
          <scheme-ref>extend-dist</scheme-ref>
        </remote-cache-scheme>
      </back-scheme>
      <invalidation-strategy>all</invalidation-strategy>
    </near-scheme>

    <remote-cache-scheme>
      <scheme-name>extend-dist</scheme-name>
      <service-name>ExtendTcpCacheService</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>localhost</address>
              <port>9099</port>
            </socket-address>
          </remote-addresses>
          <connect-timeout>10s</connect-timeout>
        </tcp-initiator>
        <outgoing-message-handler>
          <request-timeout>5s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-cache-scheme>

    <remote-invocation-scheme>
      <scheme-name>extend-invocation</scheme-name>
      <service-name>ExtendTcpInvocationService</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>localhost</address>
              <port>9099</port>
            </socket-address>
          </remote-addresses>
          <connect-timeout>10s</connect-timeout>
        </tcp-initiator>
        <outgoing-message-handler>
          <request-timeout>5s</request-timeout>
        </outgoing-message-handler>
      </initiator-config>
    </remote-invocation-scheme>
  </caching-schemes>
</cache-config>

This cache configuration descriptor defines two caching schemes, one that uses Extend-TCP to connect to a remote Coherence cluster (<remote-cache-scheme>) and one that maintains an in-process size-limited near cache of remote Coherence caches (again, accessed by using Extend-TCP). Additionally, the cache configuration descriptor defines a <remote-invocation-scheme> that allows the client application to execute tasks within the remote Coherence cluster. Both the <remote-cache-scheme> and <remote-invocation-scheme> elements have a <tcp-initiator> child element which includes all TCP/IP-specific information needed to connect the client with the Coherence*Extend clustered service running within the remote Coherence cluster.

When the client application retrieves a NamedCache with the name dist-extend, for example, by using the CacheFactory, the Coherence*Extend adapter library will connect to the Coherence cluster by using TCP/IP (using the address localhost and port 9099) and return a NamedCache implementation that routes requests to the NamedCache with the same name running within the remote cluster. Likewise, when the client application retrieves an InvocationService by calling CacheFactory.getConfigurableCacheFactory().ensureService("ExtendTcpInvocationService"), the Coherence*Extend adapter library will connect to the Coherence cluster by using TCP/IP (again, using the address localhost and port 9099) and return an InvocationService implementation that executes synchronous Invocable tasks within the remote clustered JVM to which the client is connected.

Note that the <remote-addresses> configuration element can contain multiple <socket-address> child elements. The Coherence*Extend adapter library will attempt to connect to the addresses in a random order, until either the list is exhausted or a TCP/IP connection is established.
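
The random-order connection attempt can be sketched as a self-contained Java model. This is not Coherence code; the Connector interface and class name are hypothetical stand-ins for the adapter library's internal logic.

```java
import java.util.*;

/** Illustrative model: try candidate addresses in random order until one
 *  connects or the list is exhausted. */
class RandomAddressConnector {
    interface Connector {
        boolean connect(String address);
    }

    static String connectAny(List<String> addresses, Connector connector) {
        List<String> shuffled = new ArrayList<>(addresses);
        Collections.shuffle(shuffled);      // random order, per the behavior above
        for (String address : shuffled) {
            if (connector.connect(address)) {
                return address;             // first successful connection wins
            }
        }
        return null;                        // list exhausted without a connection
    }
}
```

Listing several <socket-address> elements in <remote-addresses> therefore gives the client both failover (an unreachable proxy is skipped) and a crude form of load balancing (clients pick proxies at random).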

18.3.2 Cluster-side Cache (a.k.a. Coherence*Extend Proxy) Configuration Descriptor

For a Coherence*Extend-TCP client to connect to a Coherence cluster, one or more DefaultCacheServer processes must be running that use a Coherence cache configuration descriptor. This descriptor must include a <proxy-scheme> element with a child <tcp-acceptor> element containing various TCP/IP-specific configuration information. Example 18-10 illustrates a sample descriptor.

Example 18-10 Cluster-Side Cache Configuration Descriptor for Extend-TCP

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>dist-*</cache-name>
      <scheme-name>dist-default</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <distributed-scheme>
      <scheme-name>dist-default</scheme-name>
      <lease-granularity>member</lease-granularity>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>

    <proxy-scheme>
      <service-name>ExtendTcpProxyService</service-name>
      <thread-count>5</thread-count>
      <acceptor-config>
        <tcp-acceptor>
          <local-address>
            <address>localhost</address>
            <port>9099</port>
          </local-address>
        </tcp-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
    </proxy-scheme>
  </caching-schemes>
</cache-config>

This cache configuration descriptor defines two clustered services: a Coherence*Extend proxy service that allows remote Extend-TCP clients to connect to the Coherence cluster over TCP/IP, and a standard Partitioned cache service. Because this descriptor is used by a DefaultCacheServer, it is important that the <autostart> configuration element for each service is set to true so that clustered services are automatically restarted upon termination. The <proxy-scheme> element has a <tcp-acceptor> child element which includes all TCP/IP-specific information needed to accept client connection requests over TCP/IP.

The Coherence*Extend clustered service will listen to a TCP/IP ServerSocket (bound to the address localhost and port 9099) for connection requests. When, for example, a client attempts to connect to a Coherence NamedCache called dist-extend, the Coherence*Extend clustered service will proxy subsequent requests to the NamedCache with the same name which, in this case, will be a Partitioned cache.

18.3.3 Launching an Extend-TCP DefaultCacheServer Process

To start a DefaultCacheServer that uses the cluster-side Coherence cache configuration described earlier to allow Extend-TCP clients to connect to the Coherence cluster by using TCP/IP, you need to do the following:

  • Change the current directory to the Coherence library directory (%COHERENCE_HOME%\lib on Windows and $COHERENCE_HOME/lib on UNIX)

  • Make sure that the paths are configured so that the Java command will run

  • Start the DefaultCacheServer command line application with the -Dtangosol.coherence.cacheconfig system property set to the location of the cluster-side Coherence cache configuration descriptor described earlier

For example (note that the following command is broken up into multiple lines here only for formatting purposes; this is a single command typed on one line):

java -cp coherence.jar:<classpath to client application> 
     -Dtangosol.coherence.cacheconfig=file://<path to the server-side cache configuration descriptor>
     com.tangosol.net.DefaultCacheServer

18.3.4 Launching an Extend-TCP Client Application

To start a client application that uses Extend-TCP to connect to a remote Coherence cluster by using TCP/IP, you need to do the following:

  • Change the current directory to the Coherence library directory (%COHERENCE_HOME%\lib on Windows and $COHERENCE_HOME/lib on UNIX)

  • Make sure that the paths are configured so that the Java command will run

  • Start your client application with the -Dtangosol.coherence.cacheconfig system property set to the location of the client-side Coherence cache configuration descriptor described earlier

For example (note that the command in Example 18-11 is broken up into multiple lines here only for formatting purposes; this is a single command typed on one line):

Example 18-11 Command to Start a Client Application that Uses Extend-TCP

java -cp coherence.jar:<classpath to client application> 
     -Dtangosol.coherence.cacheconfig=file://<path to the client-side cache configuration descriptor>
     <client application Class name>

18.4 Sample Coherence*Extend Client Application

Example 18-12 demonstrates how to retrieve and use a Coherence*Extend CacheService and InvocationService. This example increments an Integer value in a remote Partitioned cache and then retrieves the value by executing an Invocable on the clustered JVM to which the client is attached:

Example 18-12 Sample Coherence*Extend Application

public static void main(String[] asArg)
        throws Throwable
    {
    NamedCache cache  = CacheFactory.getCache("dist-extend");
    Integer    IValue = (Integer) cache.get("key");
    if (IValue == null)
        {
        IValue = new Integer(1);
        }
    else
        {
        IValue = new Integer(IValue.intValue() + 1);
        }
    cache.put("key", IValue);

    InvocationService service = (InvocationService)
            CacheFactory.getConfigurableCacheFactory()
                .ensureService("ExtendTcpInvocationService");

    Map map = service.query(new AbstractInvocable()
            {
            public void run()
                {
                setResult(CacheFactory.getCache("dist-extend").get("key"));
                }
            }, null);

    IValue = (Integer) map.get(service.getCluster().getLocalMember());
    }

Note that this example could also be run on a Coherence node (that is, within the cluster) verbatim. The fact that operations are being sent to a remote cluster node (over either JMS or TCP) is completely transparent to the client application.

18.4.1 Coherence*Extend InvocationService

Since, by definition, a Coherence*Extend client has no direct knowledge of the cluster and the members running within the cluster, the Coherence*Extend InvocationService only allows Invocable tasks to be executed on the JVM to which the client is connected. Therefore, you should always pass a null member set to the query() method. As a consequence of this, the single result of the execution will be keyed by the local Member, which will be null if the client is not part of the cluster. This Member can be retrieved by calling service.getCluster().getLocalMember(). Additionally, the Coherence*Extend InvocationService only supports synchronous task execution (that is, the execute() method is not supported).
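
The contract described above can be summarized in a self-contained Java model. This is not the Coherence API; the class is a hypothetical sketch of the Extend InvocationService semantics (null member set, single result keyed by the possibly-null local Member).

```java
import java.util.*;

/** Illustrative model of the Extend InvocationService contract. */
class ExtendInvocationModel {
    interface Invocable {
        Object run();
    }

    static Map<Object, Object> query(Invocable task, Set<?> memberSet) {
        if (memberSet != null) {
            // Modeled as an error: Extend clients should always pass null.
            throw new IllegalArgumentException("Extend clients must pass a null member set");
        }
        Map<Object, Object> result = new HashMap<>();
        Object localMember = null;            // null: the Extend client is not a cluster member
        result.put(localMember, task.run());  // single result from the connected proxy JVM
        return result;
    }
}
```

This mirrors the pattern in Example 18-12, where the client reads the single result with map.get(service.getCluster().getLocalMember()).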

18.5 Advanced Configuration

18.5.1 Network Filters

Like Coherence clustered services, Coherence*Extend services support pluggable network filters. Filters can be used to modify the contents of network traffic before it is placed "on the wire". Most standard Coherence network filters are supported, including the compression and symmetric encryption filters. For more information on configuring filters, see Chapter 8, "Network Filters."

To use network filters with Coherence*Extend, a <use-filters> element must be added to the <initiator-config> element in the client-side cache configuration descriptor and to the <acceptor-config> element in the cluster-side cache configuration descriptor.

For example, to encrypt network traffic exchanged between a Coherence*Extend client and the clustered service to which it is connected, configure the client-side <remote-cache-scheme> and <remote-invocation-scheme> elements as illustrated in Example 18-13 (assuming the symmetric encryption filter has been named symmetric-encryption):

Example 18-13 Client-Side Configuration to Encrypt Network Traffic

<remote-cache-scheme>
  <scheme-name>extend-dist</scheme-name>
  <service-name>ExtendTcpCacheService</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>localhost</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
      <connect-timeout>10s</connect-timeout>
    </tcp-initiator>
    <outgoing-message-handler>
      <request-timeout>5s</request-timeout>
    </outgoing-message-handler>
    <use-filters>
      <filter-name>symmetric-encryption</filter-name>
    </use-filters>    
  </initiator-config>
</remote-cache-scheme>

<remote-invocation-scheme>
  <scheme-name>extend-invocation</scheme-name>
  <service-name>ExtendTcpInvocationService</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>localhost</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
      <connect-timeout>10s</connect-timeout>
    </tcp-initiator>
    <outgoing-message-handler>
      <request-timeout>5s</request-timeout>
    </outgoing-message-handler>
    <use-filters>
      <filter-name>symmetric-encryption</filter-name>
    </use-filters>    
  </initiator-config>
</remote-invocation-scheme>

Example 18-14 illustrates the configuration for the cluster-side <proxy-scheme> element:

Example 18-14 Cluster-Side Proxy Scheme Configuration

<proxy-scheme>
  <service-name>ExtendTcpProxyService</service-name>
  <thread-count>5</thread-count>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <address>localhost</address>
        <port>9099</port>
      </local-address>
    </tcp-acceptor>
    <use-filters>
      <filter-name>symmetric-encryption</filter-name>
    </use-filters>
  </acceptor-config>
  <autostart>true</autostart>
</proxy-scheme>

Note:

The contents of the <use-filters> element must be the same in the client and cluster-side cache configuration descriptors.

18.5.2 Connection Error Detection and Failover

When a Coherence*Extend service detects that the connection between the client and the cluster has been severed (for example, due to a network, software, or hardware failure), the Coherence*Extend client service implementation (that is, CacheService or InvocationService) dispatches a MemberEvent.MEMBER_LEFT event to all registered MemberListeners, and the service is stopped. If the client application subsequently attempts to use the service, the service automatically restarts itself and attempts to reconnect to the cluster. If the connection is successful, the service dispatches a MemberEvent.MEMBER_JOINED event; otherwise, a fatal exception is thrown to the client application.
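A client application can observe these connection events by registering a MemberListener with the service. The following is a minimal sketch; the cache name dist-extend is a hypothetical example and should correspond to a cache defined in your client-side cache configuration descriptor:

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.MemberEvent;
import com.tangosol.net.MemberListener;
import com.tangosol.net.NamedCache;

public class ConnectionMonitor
    {
    public static void main(String[] args)
        {
        // "dist-extend" is a hypothetical cache name; use a name from
        // your client-side cache configuration descriptor
        NamedCache cache = CacheFactory.getCache("dist-extend");

        cache.getCacheService().addMemberListener(new MemberListener()
            {
            public void memberJoined(MemberEvent evt)
                {
                // Dispatched when the connection is established
                // (or re-established after a restart)
                System.out.println("Connected to the cluster");
                }

            public void memberLeaving(MemberEvent evt)
                {
                }

            public void memberLeft(MemberEvent evt)
                {
                // Dispatched when the connection has been severed
                System.out.println("Connection to the cluster was lost");
                }
            });
        }
    }
```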

A Coherence*Extend service has several mechanisms for detecting dropped connections. Some are inherent to the underlying protocol (that is, a javax.jms.ExceptionListener in Extend-JMS and TCP/IP in Extend-TCP), whereas others are implemented by the service itself. The latter mechanisms are configured by using the <outgoing-message-handler> configuration element.

The primary configurable mechanism used by a Coherence*Extend client service to detect dropped connections is a request timeout. When the service sends a request to the remote cluster and does not receive a response within the request timeout interval (see the <request-timeout> subelement of <outgoing-message-handler>), the service assumes that the connection has been dropped. The Coherence*Extend client and clustered services can also be configured to send a periodic heartbeat over the connection (see the <heartbeat-interval> and <heartbeat-timeout> subelements of <outgoing-message-handler>). If the service does not receive a heartbeat response within the configured heartbeat timeout interval, the service assumes that the connection has been dropped.

Notes:

  • You should always enable heartbeats when using a connectionless transport, as is the case with Extend-JMS.

  • If you do not specify a <request-timeout>, a Coherence*Extend service uses an infinite request timeout. In general, this is not a recommended configuration, as it could result in an unresponsive application. For most use cases, you should specify a reasonable finite request timeout.
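For example, the following client-side <initiator-config> fragment (with illustrative values) enables both mechanisms: a heartbeat is sent every 5 seconds, the connection is considered dropped if a heartbeat response is not received within 3 seconds, and each request times out after 5 seconds:

```xml
<initiator-config>
  <tcp-initiator>
    ...
  </tcp-initiator>
  <outgoing-message-handler>
    <heartbeat-interval>5s</heartbeat-interval>
    <heartbeat-timeout>3s</heartbeat-timeout>
    <request-timeout>5s</request-timeout>
  </outgoing-message-handler>
</initiator-config>
```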

18.5.3 Read-only NamedCache Access

By default, the Coherence*Extend clustered service allows both read and write access to proxied NamedCache instances. To prohibit Coherence*Extend clients from modifying cached content, set the <read-only> element within the <cache-service-proxy> element of the cluster-side <proxy-scheme> definition to true. Example 18-15 illustrates a sample configuration.

Example 18-15 Cluster-Side Configuration to Allow Read-only Access to the Cache

<proxy-scheme>
  ...

  <proxy-config>
    <cache-service-proxy>
      <read-only>true</read-only>
    </cache-service-proxy>
  </proxy-config>

  <autostart>true</autostart>
</proxy-scheme>

18.5.4 Client-side NamedCache Locking

By default, the Coherence*Extend clustered service disallows Coherence*Extend clients from acquiring NamedCache locks. To enable client-side locking, set the <lock-enabled> element within the <cache-service-proxy> element of the cluster-side <proxy-scheme> definition to true. For example:

Example 18-16 Cluster-Side Configuration to Allow NamedCache Locking

<proxy-scheme>
  ...

  <proxy-config>
    <cache-service-proxy>
      <lock-enabled>true</lock-enabled>
    </cache-service-proxy>
  </proxy-config>

  <autostart>true</autostart>
</proxy-scheme>

If you enable client-side locking and your client application uses the NamedCache.lock() and unlock() methods, it is important that you specify the member-based (rather than thread-based) locking strategy for any Partitioned or Replicated cache services defined in your cluster-side Coherence cache configuration descriptor. Because the Coherence*Extend clustered service uses a pool of threads to execute client requests concurrently, it cannot guarantee that the same thread will execute subsequent requests from the same Coherence*Extend client.

To specify the member-based locking strategy for a Partitioned or Replicated cache service, use the <lease-granularity> configuration element. Example 18-17 illustrates a sample configuration.

Example 18-17 Cluster-Side Configuration to Allow Locking for Partitioned or Replicated Caches

<distributed-scheme>
  <scheme-name>dist-default</scheme-name>
  <lease-granularity>member</lease-granularity>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>

18.5.5 Disabling Proxied Services

By default, the Coherence*Extend clustered service exposes two proxied services to clients: a CacheService proxy and an InvocationService proxy. In some cases, it may be desirable to disable one of the two proxies. This is possible by using the <enabled> configuration element in each of the corresponding proxy configuration sections. For example, to disable the InvocationService proxy so that remote clients cannot execute Invocable objects within the cluster, configure the Coherence*Extend clustered service as illustrated in Example 18-18:

Example 18-18 Cluster-Side Configuration to Disable the InvocationService Proxy

<proxy-scheme>
  ...

  <proxy-config>
    <invocation-service-proxy>
      <enabled>false</enabled>
    </invocation-service-proxy>
  </proxy-config>

  <autostart>true</autostart>
</proxy-scheme>

Likewise, to prevent remote clients from accessing caches in the cluster, you would use a configuration similar to the one illustrated in Example 18-19:

Example 18-19 Cluster-Side Configuration to Prevent Cache Access

<proxy-scheme>
  ...

  <proxy-config>
    <cache-service-proxy>
      <enabled>false</enabled>
    </cache-service-proxy>
  </proxy-config>

  <autostart>true</autostart>
</proxy-scheme>