How can I connect separate Coherence clusters together?

The attached examples replicated-site.jar and synchronized-site.jar demonstrate how to use the features of Coherence*Extend to enable one Coherence cluster to access caches from another Coherence cluster and vice versa. Use cases for this type of configuration include disaster recovery strategies and sharing data efficiently and reliably across a potentially high-latency, unreliable WAN.


This information is also included in the README.txt file in the attached archives.

Synchronized-site Coherence*Extend Example

This example demonstrates how to use the features of Coherence*Extend to allow one cluster to maintain a synchronized subset of the caches in a remote cluster. Use cases for this capability include hot/hot disaster recovery strategies and read-write access to a locally cached replica of remote data.

Assume that there are two sites, one in Boston and the other in London, which are connected via a WAN. Each site is part of a separate Coherence cluster (i.e. Coherence unicast and multicast UDP traffic cannot be sent over the WAN). Each cluster runs two distributed cache services, one that manages "local" data and one that caches "remote" data. Storage-enabled members running the "local" cache service work in concert to manage all data local to the site, whereas storage-enabled members running the "remote" cache service work in concert to maintain a distributed near cache of remote data. Storage-disabled clients in each site can access either "local" or "remote" data.
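For example, a storage-disabled client in the Boston site could work with both data sets through the standard NamedCache API. The sketch below is illustrative only; the cache names match the ones used later in this example, while the class name, keys, and values are hypothetical.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class BostonClient
        {
        public static void main(String[] asArgs)
            {
            // "local" data, mastered by the Boston cluster
            NamedCache cacheLocal = CacheFactory.getCache("boston-test");
            cacheLocal.put("greeting", "hello from Boston");

            // "remote" data, mastered by the London cluster but cached near the client
            NamedCache cacheRemote = CacheFactory.getCache("london-test");
            System.out.println(cacheRemote.get("greeting"));

            CacheFactory.shutdown();
            }
        }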

All access to remote data is performed via a Coherence*Extend backing map. Since Coherence*Extend fully supports the ObservableMap interface, the near caches of remote data are kept in sync with the "master" copies maintained by the remote cluster. Local data can be accessed and updated at "cluster-local" speed. Once cached, remote data can be accessed at "cluster-local" speed. Initial access and update of remote data (initiated by either site) are the only operations that must traverse the WAN.
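Because the remote caches implement ObservableMap, a client can also register a standard MapListener and observe changes as they propagate from the master copy. The following is a minimal sketch, assuming the london-test cache name used in this example; the listener class itself is hypothetical.

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MapListener;

    public class RemoteChangeLogger
            implements MapListener
        {
        public void entryInserted(MapEvent evt)
            {
            System.out.println("inserted: " + evt.getKey());
            }

        public void entryUpdated(MapEvent evt)
            {
            System.out.println("updated: " + evt.getKey() + " -> " + evt.getNewValue());
            }

        public void entryDeleted(MapEvent evt)
            {
            System.out.println("deleted: " + evt.getKey());
            }

        public static void main(String[] asArgs)
            {
            NamedCache cacheRemote = CacheFactory.getCache("london-test");
            cacheRemote.addMapListener(new RemoteChangeLogger());
            }
        }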

If the WAN ever fails, the replica sites are still able to both read and write their copy of the "master" data set. While disconnected, the Coherence*Extend backing map maintains a delta map of all changes made to the replica. When the WAN comes back up, a customizable distributed reconciliation policy is automatically executed to resolve local changes against the "master" copy. The default reconciliation policy simply resynchronizes the replica with the master, but more advanced policies that take advantage of the delta map can be implemented.
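As a purely illustrative sketch (not the classes actually shipped in synchronized-site.jar), a reconciliation step along the lines of the default policy might push the accumulated delta map back to the master and then resynchronize the replica; the method signature and map names here are hypothetical.

    import java.util.Iterator;
    import java.util.Map;
    import java.util.Set;

    import com.tangosol.net.NamedCache;

    public class SimpleReconciler
        {
        /**
        * Push changes recorded while disconnected, then resynchronize.
        *
        * @param mapDelta    key-to-value map of entries changed on the replica
        * @param setRemoved  keys removed on the replica while disconnected
        */
        public static void reconcile(NamedCache cacheMaster, NamedCache cacheReplica,
                Map mapDelta, Set setRemoved)
            {
            // apply local changes to the "master" copy
            cacheMaster.putAll(mapDelta);
            for (Iterator iter = setRemoved.iterator(); iter.hasNext(); )
                {
                cacheMaster.remove(iter.next());
                }

            // default policy: resynchronize the replica from the master
            cacheReplica.clear();
            cacheReplica.putAll(cacheMaster);
            }
        }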

Prerequisites

To build the example, you must have the following software installed:

Build Instructions

  1. Update bin/set-env.sh to reflect your system environment.
  2. Open a shell and execute the following command in the bin directory: ./ant build
  3. To completely remove all build artifacts from your filesystem, run: ./ant clean

Running the Example

  1. Start the Boston cluster by executing the following scripts:
  2. Start the London cluster by executing the following scripts:

To access local data in the Boston cluster, run the following command:
Map : cache boston-test

To access London data in the Boston cluster, run the following command:
Map : cache london-test

To access local data in the London cluster, run the following command:
Map : cache london-test

To access Boston data in the London cluster, run the following command:
Map : cache boston-test

Replicated-site Coherence*Extend Example

This example demonstrates how to use the features of Coherence*Extend to replicate one or more caches from one site to another geographically
separated site. Use cases for this capability include hot/warm disaster recovery strategies and read-only access to a locally cached replica of remote data.

Assume that there are two sites, one in Boston and the other in London, which are connected via a WAN. Each site is part of a separate Coherence cluster (i.e. Coherence unicast and multicast UDP traffic cannot be sent over the WAN). This example uses a ReadWriteBackingMap implementation in the Boston cluster to replicate Boston caches to the London cluster via Coherence*Extend. Once replicated, Boston data can be accessed in the London cluster at "cluster-local" speeds. A Boston partitioned cache service uses write-behind to decouple updates to the Boston caches from the corresponding updates to the London replicas and to batch those updates. Additionally, the Boston partitioned cache service is configured to requeue failed updates to the London replicas, so that if the WAN ever fails, changes to the Boston caches are automatically applied to the London caches when the WAN comes back up. Lastly, this example includes a Replicator command line utility that allows the entire set of Boston caches to be re-replicated if the London cluster is restarted.
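The key moving part on the Boston side is the CacheStore plugged into the ReadWriteBackingMap: with write-behind enabled, Coherence invokes it asynchronously and requeues entries whose store fails. The sketch below shows the general shape only; it assumes a hypothetical london-replica-test cache name that is mapped to a remote-cache-scheme pointing at the London cluster, and it is not the class actually contained in replicated-site.jar.

    import java.util.Collection;
    import java.util.Collections;
    import java.util.Map;

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.CacheStore;

    public class LondonReplicaCacheStore
            implements CacheStore
        {
        // backed by a remote-cache-scheme that connects to the London proxy
        private final NamedCache cacheReplica = CacheFactory.getCache("london-replica-test");

        public void store(Object oKey, Object oValue)
            {
            cacheReplica.put(oKey, oValue);   // a failure here causes the entry to be requeued
            }

        public void storeAll(Map mapEntries)
            {
            cacheReplica.putAll(mapEntries);  // write-behind batches arrive here
            }

        public void erase(Object oKey)
            {
            cacheReplica.remove(oKey);
            }

        public void eraseAll(Collection colKeys)
            {
            cacheReplica.keySet().removeAll(colKeys);
            }

        // replication is one-way, so nothing is ever loaded back from London
        public Object load(Object oKey)
            {
            return null;
            }

        public Map loadAll(Collection colKeys)
            {
            return Collections.EMPTY_MAP;
            }
        }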

Prerequisites

To build the example, you must have the following software installed:

Build Instructions

  1. Update bin/set-env.sh to reflect your system environment.
  2. Open a shell and execute the following command in the bin directory: ./ant build
  3. To completely remove all build artifacts from your filesystem, run: ./ant clean

Running the Example

  1. Start the London cluster by executing the following scripts:
  2. Start the Boston cluster by executing the following scripts:

To access local data in the Boston cluster, run the following command:
Map : cache boston-test

To access Boston data in the London cluster, run the following command:
Map : cache boston-test

To force a replication of the "boston-test" cache from the Boston cluster to the London cluster, run the following script:
./start-replicator Invocation boston-test
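Internally, a task like this can be modeled as a Coherence Invocable that reads the named Boston cache and pushes its full contents to the London replica. The sketch below only illustrates that idea; the actual Replicator and Invocation classes in replicated-site.jar may be structured differently, and the replica cache name shown is hypothetical.

    import com.tangosol.net.AbstractInvocable;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class ReplicateCacheTask
            extends AbstractInvocable
        {
        private String m_sCacheName;

        public ReplicateCacheTask(String sCacheName)
            {
            m_sCacheName = sCacheName;
            }

        public void run()
            {
            // e.g. "boston-test" on the Boston side
            NamedCache cacheSource  = CacheFactory.getCache(m_sCacheName);

            // extend-based cache that fronts the London copy (hypothetical name)
            NamedCache cacheReplica = CacheFactory.getCache(m_sCacheName + "-replica");

            // push the entire contents of the Boston cache to the London replica
            cacheReplica.putAll(cacheSource);
            setResult(Integer.valueOf(cacheSource.size()));
            }
        }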


Attachments:
replicated-site.jar (application/octet-stream)
synchronized-site.jar (application/octet-stream)