Configuring and Using Coherence*Extend-JMS

Overview

Coherence*Extend-JMS allows you to use Coherence caching from outside of a Coherence cluster, using your existing JMS infrastructure as the means to connect to the cluster. Coherence*Extend-JMS uses a JMS-based protocol to invoke cache operations on a remote cluster node, but the details of doing so are hidden behind a local interface. Coherence*Extend-JMS includes support for the CacheStore and NamedCache interfaces.

The client (non-clustered) portion of Coherence*Extend-JMS is configured using the <jms-scheme> caching scheme. The <jms-scheme> can be used directly, from within a <cachestore-scheme>, or as the <back-scheme> of a near cache.

The clustered portion of Coherence*Extend-JMS can either be deployed as part of a J2EE application using the included NamedCacheProxyBean Message-Driven EJB or run in a stand-alone JVM using the included AdapterFactory command line application.

To configure and use Coherence*Extend-JMS, follow the steps described in the sections below.

The <jms-scheme> Cache Scheme

The <jms-scheme> cache scheme allows a non-clustered application JVM to access cached data from a Coherence cluster using a JMS-based protocol.

Consider the following scenario: a number of nodes on a local subnet are running in a Coherence cluster. Each node of the cluster is reachable by both UDP unicast and multicast and takes part in caching application data and performing various cluster-related tasks. Assume that you have another machine from which you would like to retrieve or update the cached application data, but due to network topology limitations the machine cannot be part of the Coherence cluster. In this case, the <jms-scheme> cache configuration descriptor element can be leveraged to access clustered application data from outside the Coherence cluster.

The following jms-cache-config.xml cache configuration descriptor is an example of using the <jms-scheme> element:

<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>dist-jms-direct</cache-name>
      <scheme-name>jms-direct</scheme-name>
    </cache-mapping>
    <cache-mapping>
      <cache-name>dist-jms-local</cache-name>
      <scheme-name>jms-local</scheme-name>
    </cache-mapping>
    <cache-mapping>
      <cache-name>dist-jms-near</cache-name>
      <scheme-name>jms-near</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>

  <caching-schemes>
    <jms-scheme>
      <scheme-name>jms-direct</scheme-name>
      <queue-connection-factory-name>jms/tangosol/ConnectionFactory</queue-connection-factory-name>
      <topic-connection-factory-name>jms/tangosol/ConnectionFactory</topic-connection-factory-name>
      <queue-name>jms/tangosol/Queue</queue-name>
      <topic-name>jms/tangosol/Topic</topic-name>
      <request-timeout>10</request-timeout>
    </jms-scheme>

    <local-scheme>
      <scheme-name>jms-local</scheme-name>

      <eviction-policy>HYBRID</eviction-policy>
      <expiry-delay>30</expiry-delay>
      <flush-delay>30</flush-delay>

      <cachestore-scheme>
        <jms-scheme>
          <scheme-ref>jms-direct</scheme-ref>
        </jms-scheme>
      </cachestore-scheme>
    </local-scheme>

    <near-scheme>
      <scheme-name>jms-near</scheme-name>

      <front-scheme>
        <local-scheme>
          <high-units>100</high-units>
        </local-scheme>
      </front-scheme>

      <back-scheme>
        <jms-scheme>
          <scheme-ref>jms-direct</scheme-ref>
        </jms-scheme>
      </back-scheme>

      <invalidation-strategy>all</invalidation-strategy>
    </near-scheme>
  </caching-schemes>
</cache-config>


Assuming one or more Coherence*Extend-JMS proxies are running in the cluster (see below), start your Java application with the following command to point it at this cache configuration file:

java -Dtangosol.coherence.cacheconfig=jms-cache-config.xml ...


The NamedCache returned by the CacheFactory.getCache(String sCacheName) method uses JMS to communicate with the Coherence cluster, retrieving and updating data in the clustered cache of the same name.

Note that unlike other cache configurations, the <jms-scheme> will not cause any Coherence clustered service to be started.
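From the application's point of view, the JMS-backed cache is used exactly like any other Coherence NamedCache. The following sketch illustrates this; it assumes the jms-cache-config.xml descriptor above, a running JMS proxy in the cluster, and the Coherence library and JMS provider classes on the classpath:

```java
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class JmsClientExample
    {
    public static void main(String[] asArg)
        {
        // "dist-jms-direct" maps to the jms-direct scheme in
        // jms-cache-config.xml; every cache operation is relayed
        // over JMS to a Coherence*Extend-JMS proxy in the cluster
        NamedCache cache = CacheFactory.getCache("dist-jms-direct");

        cache.put("key", "value");
        System.out.println(cache.get("key"));

        // release local resources; no clustered services were started
        CacheFactory.shutdown();
        }
    }
```

Because the NamedCache interface is the same in both cases, the application code does not change when switching between a clustered cache and a Coherence*Extend-JMS cache; only the cache configuration descriptor differs.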

Specifying JNDI Properties for your JNDI Provider

Coherence*Extend-JMS uses JNDI to obtain references to all JMS resources. To specify the JNDI properties that Coherence*Extend-JMS uses to create a JNDI InitialContext, create a file called jndi.properties that contains your JNDI provider's configuration properties and add the directory that contains the file to your classpath.

For example, if you are using WebLogic Server as your JNDI provider, your jndi.properties file would look something like the following:

java.naming.factory.initial=weblogic.jndi.WLInitialContextFactory
java.naming.provider.url=t3://localhost:7001
java.naming.security.principal=system
java.naming.security.credentials=weblogic


Configuring JMS Resources for the JMS Adapter

Coherence*Extend-JMS uses a JMS Queue and Topic to pass messages between the JMS stub (non-clustered node) and proxy (clustered node). Therefore, you must deploy an appropriately configured JMS QueueConnectionFactory, TopicConnectionFactory, Queue, and Topic. You must also be sure to register the JMS resources under the JNDI names that you specified in the <jms-scheme> cache scheme configuration.

For example, if you are using WebLogic Server as your JMS provider, you can create these resources using the WebLogic Server Administration Console or by editing the domain configuration directly.
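As an illustrative sketch only, a WebLogic Server 8.1 domain config.xml might declare the required JMS resources along the following lines. The resource names ("TangosolConnectionFactory", "TangosolJMSServer", and so on) and the "myserver" target are hypothetical; what matters is that the JNDIName values match the names configured in the <jms-scheme> element:

```xml
<JMSConnectionFactory Name="TangosolConnectionFactory"
    JNDIName="jms/tangosol/ConnectionFactory"
    Targets="myserver"/>

<JMSServer Name="TangosolJMSServer" Targets="myserver">
    <JMSQueue Name="TangosolQueue" JNDIName="jms/tangosol/Queue"/>
    <JMSTopic Name="TangosolTopic" JNDIName="jms/tangosol/Topic"/>
</JMSServer>
```

Note that a single connection factory registered under jms/tangosol/ConnectionFactory can serve as both the QueueConnectionFactory and the TopicConnectionFactory, as in the <jms-scheme> example above.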

Starting a Coherence*Extend-JMS Proxy

The cluster-side portion of Coherence*Extend-JMS is called a JMS proxy. You can run the JMS proxy as part of a J2EE application using an included Message-Driven EJB or in one or more stand-alone JVMs. Coherence includes an example of running the JMS proxy in a stand-alone JVM that launches a DefaultCacheServer, but you can create your own JMS proxy application using the com.tangosol.net.jms.AdapterFactory class. See the AdapterFactory JavaDoc for additional information.

To deploy the JMS proxy as part of a J2EE application:

The QueueConnectionFactory environment entry must be set to the JNDI name of the JMS QueueConnectionFactory that you configured for your JMS provider. This entry defaults to jms/tangosol/ConnectionFactory.

The TopicConnectionFactory environment entry must be set to the JNDI name of the JMS TopicConnectionFactory that you configured for your JMS provider. This entry defaults to jms/tangosol/ConnectionFactory.

The Queue environment entry must be set to the JNDI name of the JMS Queue that you configured for your JMS provider. This entry defaults to jms/tangosol/Queue. Note that the destination of this MDB must be the same Queue as specified by the Queue environment entry.

The Topic environment entry must be set to the JNDI name of the JMS Topic that you configured for your JMS provider. This entry defaults to jms/tangosol/Topic.

The ClusterOwned environment entry indicates whether the Message-Driven EJB should fully shut down the Coherence cluster node when the EJB itself is shut down. This entry defaults to true.
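In the MDB's deployment descriptor, these environment entries take the standard ejb-jar.xml form. For example, to override the QueueConnectionFactory entry with a non-default JNDI name (jms/my/ConnectionFactory here is a hypothetical name; the java.lang.String entry type is assumed, since JNDI names are specified as strings):

```xml
<env-entry>
    <env-entry-name>QueueConnectionFactory</env-entry-name>
    <env-entry-type>java.lang.String</env-entry-type>
    <env-entry-value>jms/my/ConnectionFactory</env-entry-value>
</env-entry>
```

The remaining entries (TopicConnectionFactory, Queue, Topic, ClusterOwned) are overridden the same way.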

To run the included example of launching a JMS proxy in a stand-alone JVM, start the com.tangosol.net.jms.AdapterFactory application with the Coherence library and your JMS and JNDI provider classes on the classpath.
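The launch command takes roughly the following form; the classpath entries are placeholders for your environment, and the exact command line arguments accepted by AdapterFactory are documented in its JavaDoc. The directory containing your jndi.properties file must also be on the classpath so the proxy can look up the JMS resources:

```
java -cp coherence.jar:tangosol.jar:<JMS and JNDI provider classes> \
     com.tangosol.net.jms.AdapterFactory
```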

Advanced Configuration

The following table summarizes the various Java System properties that can be used to override advanced Coherence*Extend-JMS settings:

Property: com.tangosol.coherence.jms.ttl
Description: Overrides the default JMS Message time-to-live, in milliseconds.
Default: 10000 (10 seconds)

Property: com.tangosol.coherence.jms.readonly
Description: If set to true, all JMS NamedCache proxy instances running within the JVM will reject any request from a JMS NamedCache stub that may potentially modify the contents of the target NamedCache.
Default: false
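These properties are specified on the Java command line of the affected JVM, in the same way as the cache configuration property shown earlier. For example, to raise the JMS Message time-to-live to 30 seconds:

```
java -Dcom.tangosol.coherence.jms.ttl=30000 ...
```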