12 Configuring and Managing Coherence Clusters

Learn how to define Coherence clusters in an Oracle WebLogic Server domain and associate an Oracle Coherence cluster with multiple Oracle WebLogic Server clusters.

This chapter includes the following sections:

  • Overview of Coherence Clusters
  • Setting Up a Coherence Cluster
  • Creating Coherence Deployment Tiers
  • Configuring a Coherence Cluster
  • Configuring Managed Coherence Servers
  • Using a Single-Server Cluster
  • Using WLST with Coherence

Overview of Coherence Clusters

Coherence clusters consist of multiple Managed Coherence server instances that distribute data in-memory to increase application scalability, availability, and performance. An application interacts with the data in a local cache and the distribution and backup of the data is performed automatically across cluster members.

Coherence clusters are different from WebLogic Server clusters. They use different clustering protocols and are configured separately. Multiple WebLogic Server clusters can be associated with a Coherence cluster, and a WebLogic Server domain typically contains a single Coherence cluster. Managed servers that are configured as Coherence cluster members are referred to as managed Coherence servers.

Managed Coherence servers can be explicitly associated with a Coherence cluster, or they can be associated with a WebLogic Server cluster that is associated with a Coherence cluster. Managed Coherence servers are typically set up in tiers that are based on their type: a data tier for storing data, an application tier for hosting applications, and a proxy tier that allows external clients to access caches.

Figure 12-1 shows a conceptual view of a Coherence cluster in a WebLogic Server domain.

Figure 12-1 Conceptual View of a Coherence Domain Topology


Setting Up a Coherence Cluster

A WebLogic Server domain typically contains a single Coherence cluster. The cluster is represented as a single system-level resource (CoherenceClusterSystemResource). A CoherenceClusterSystemResource instance is created using the WebLogic Server Administration Console or WLST.

A Coherence cluster can contain any number of managed Coherence servers. The servers can be standalone managed servers or can be part of a WebLogic Server cluster that is associated with a Coherence cluster. Typically, multiple WebLogic Server clusters are associated with a Coherence cluster. For details on creating WebLogic Server clusters for use by Coherence, see Creating Coherence Deployment Tiers.

Note:

Cloning a managed Coherence server does not clone its association with a Coherence cluster. The managed server will not be a member of the Coherence cluster. You must manually associate the cloned managed server with the Coherence cluster.

Define a Coherence Cluster Resource

To define a Coherence cluster resource:

  1. From the WebLogic Server Administration Console Domain Structure pane, expand Environment and click Coherence Clusters.
  2. From the Summary of Coherence Clusters page, click New.
  3. From the Create a Coherence Cluster Configuration page, enter a name for the cluster in the Name field and click Next.
  4. From the Coherence Cluster Addressing section, select the clustering mode or keep the default settings. The default cluster listen port (7574) does not need to be changed for most clusters. For details on configuring the clustering mode, see Configure Cluster Communication.
  5. From the Coherence Cluster Members section, select the managed Coherence servers or WebLogic Server clusters to include in the Coherence cluster, or skip this section if managed Coherence servers and WebLogic Server clusters have not yet been defined.
  6. Click Finish. The Summary of Coherence Clusters screen displays and the Coherence Clusters table lists the cluster resource.
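
The same cluster resource can be created with WLST. The following online sketch is illustrative only: it assumes an edit session against the Administration Server, and the resource name myCoherenceCluster is a placeholder.

edit()
startEdit()
# Create the Coherence cluster resource at the domain root (DomainMBean)
cd('/')
cmo.createCoherenceClusterSystemResource('myCoherenceCluster')
save()
activate()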

Create Standalone Managed Coherence Servers

Managed Coherence servers are managed server instances that are associated with a Coherence cluster. Managed Coherence servers join together to form a Coherence cluster and are often referred to as cluster members. Cluster members have seniority, and the senior member performs cluster tasks (for example, issuing the cluster heartbeat).

Note:

  • Managed Coherence servers and standalone Coherence cluster members (those that are not managed within a WebLogic Server domain) can join the same cluster. However, standalone cluster members cannot be managed from within a WebLogic Server domain; operational configuration and application lifecycles must be manually administered and monitored.

  • Standalone Coherence cluster members must be configured to use Well Known Addresses (WKA) when joining a Coherence cluster that is managed in a WebLogic Server domain.

  • The Administration Server is typically not used as a managed Coherence server in a production environment.

Managed Coherence servers are distinguished by their role in the cluster. A best practice is to use different managed server instances (and preferably different WebLogic Server clusters) for each cluster role.

  • Storage-enabled: A managed Coherence server that is responsible for storing data in the cluster. Coherence applications are packaged as Grid ARchives (GAR) and deployed on storage-enabled managed Coherence servers.

  • Storage-disabled: A managed Coherence server that is not responsible for storing data and is used to host Coherence applications (cache clients). A Coherence application GAR is packaged within an enterprise archive (EAR) and deployed on storage-disabled managed Coherence servers.

  • Proxy: A managed Coherence server that is storage-disabled and allows external clients (non-cluster members) to use a cache. A Coherence application GAR is deployed on managed Coherence proxy servers.

To create managed Coherence servers:

  1. From the WebLogic Server Administration Console Domain Structure pane, expand Environment and click Servers.
  2. Click New to create a new managed server.
  3. From the Create a New Server page, enter the server's properties as required.
  4. Select whether to make the server part of a WebLogic Server cluster. For details on creating WebLogic Server clusters for use as a Coherence deployment tier, see Creating Coherence Deployment Tiers.
  5. Click Finish. The Summary of Servers page displays and the new server is listed.
  6. Select the new server to configure its settings.
  7. From the Coherence tab, use the Coherence Cluster drop-down list and select a Coherence cluster to associate with this managed server. By default, the managed server is a storage-enabled Coherence member, as indicated by the Local Storage Enabled field. For details on changing managed Coherence server settings, see Configuring Managed Coherence Servers.
  8. Click Save. The Summary of Servers page displays.
  9. From the Summary of Servers page, click the Control tab and start the server.
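
The association in step 7 can also be scripted through the ServerMBean (see Creating Coherence Deployment Tiers). A minimal online WLST sketch, in which the server name coh_server1 and the cluster resource name myCoherenceCluster are placeholders:

edit()
startEdit()
cd('/Servers/coh_server1')
# Associate the managed server with the Coherence cluster resource (ServerMBean)
cmo.setCoherenceClusterSystemResource(getMBean('/CoherenceClusterSystemResources/myCoherenceCluster'))
save()
activate()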

Creating Coherence Deployment Tiers

Coherence supports different topologies within a WebLogic Server domain to provide varying levels of performance, scalability, and ease of use.

For example, during development, a single standalone managed server instance may be used as both a cache server and a cache client. The single-server topology is easy to set up and use, but does not provide optimal performance or scalability. For production, Coherence is typically set up using WebLogic Server clusters. A WebLogic Server cluster is used as a Coherence data tier and hosts one or more cache servers; a different WebLogic Server cluster is used as a Coherence application tier and hosts one or more cache clients; and (if required) different WebLogic Server clusters are used for the Coherence proxy tier that hosts one or more managed Coherence proxy servers and the Coherence extend client tier that hosts extend clients. The tiered topology approach provides optimal scalability and performance.

The instructions in this section use both the Clusters Settings page and Servers Settings page in the WebLogic Server Administration Console to create Coherence deployment tiers. WebLogic Server clusters and managed server instances can be associated with a Coherence cluster resource using the ClusterMBean and ServerMBean, respectively. Managed servers that are associated with a WebLogic Server cluster inherit the cluster's Coherence settings. However, the settings may not be reflected in the Servers Settings page.

Configuring and Managing a Coherence Data Tier

A Coherence data tier is a WebLogic Server cluster that is associated with a Coherence cluster and hosts any number of storage-enabled managed Coherence servers. Managed Coherence servers in the data tier store and distribute data (both primary and backup) on the cluster. The number of managed Coherence servers that are required in a data tier depends on the expected amount of data that is stored in the Coherence cluster and the amount of memory available on each server. In addition, a cluster must contain a minimum of four physical computers to avoid the possibility of data loss during a computer failure.

Coherence artifacts (such as Coherence configuration files, POF serialization classes, filters, entry processors, and aggregators) are packaged as a GAR and deployed on the data tier. For details on packaging and deploying Coherence applications, see Developing Oracle Coherence Applications for Oracle WebLogic Server. For details on calculating cache size and hardware requirements, see the production checklist in Administering Oracle Coherence.

Create a Coherence Data Tier

To create a Coherence data tier:

  1. Create a WebLogic Server cluster. See Setting up WebLogic Clusters.
  2. From the Summary of Clusters page, select the cluster from the Clusters table to configure it.
  3. From the Coherence tab, use the Coherence Cluster drop-down list and select a Coherence cluster to associate with this WebLogic Server cluster. By default, the managed servers assigned to this WebLogic Server cluster will be storage-enabled Coherence members, as indicated by the Local Storage Enabled field.
  4. If using Coherence for HTTP session replication, select either the Coherence Web Local Storage Enabled or the Coherence Web Federated Storage Enabled option to enable session replication. When you select the federated storage option, the default federation topology configuration is used. For details about configuring federation, see Configuring Cache Federation.
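
The association in step 3 can also be scripted through the ClusterMBean. A minimal online WLST sketch, in which the WebLogic Server cluster name DataTier and the cluster resource name myCoherenceCluster are placeholders:

edit()
startEdit()
cd('/Clusters/DataTier')
# Associate the WebLogic Server cluster with the Coherence cluster resource (ClusterMBean)
cmo.setCoherenceClusterSystemResource(getMBean('/CoherenceClusterSystemResources/myCoherenceCluster'))
save()
activate()
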
Create Managed Coherence Servers for a Data Tier

To create managed servers for a Coherence data tier:

  1. From the WebLogic Server Administration Console Domain Structure pane, expand Environment and click Servers.
  2. Click New to create a new managed server.
  3. From the Create a New Server page, enter the server's properties as required.
  4. Select the Yes option to add the server to an existing cluster, and use the drop-down list to select the data tier WebLogic Server cluster. The managed server inherits the Coherence settings from the data tier WebLogic Server cluster.
  5. Click Finish. The Summary of Servers page displays and the new server is listed.
  6. Repeat these steps to create additional managed servers as required.
  7. From the Control tab, select the servers to start and click Start.

Configuring and Managing a Coherence Application Tier

A Coherence application tier is a WebLogic Server cluster that is associated with a Coherence cluster and hosts any number of storage-disabled managed Coherence servers. Managed Coherence servers in the application tier host applications (cache factory clients) and are Coherence cluster members. Multiple application tiers can be created for different applications.

Clients in the application tier are deployed as EARs and implemented using Java EE standards such as servlet, JSP, and EJB. Coherence artifacts (such as Coherence configuration files, POF serialization classes, filters, entry processors, and aggregators) must be packaged as a GAR and also deployed within an EAR. For details on packaging and deploying Coherence applications, see Developing Oracle Coherence Applications for Oracle WebLogic Server.

Create a Coherence Application Tier

To create a Coherence application tier:

  1. Create a WebLogic Server cluster. See Setting up WebLogic Clusters.
  2. From the Summary of Clusters page, select the cluster from the Clusters table to configure it.
  3. From the Coherence tab, use the Coherence Cluster drop-down list and select a Coherence cluster to associate with this WebLogic Server cluster.
  4. Click the Local Storage Enabled check box to remove the check mark and disable storage on the application tier. The managed Coherence servers assigned to this WebLogic Server cluster will be storage-disabled Coherence members (cache factory clients). Servers in the application tier should never be used to store cache data. Storage-enabled servers require resources to store and distribute data and can adversely affect client performance.
  5. Click Save.
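
Storage can also be disabled with WLST. The following online sketch is hedged: the CoherenceTier child bean and its LocalStorageEnabled attribute are assumed from the console's Local Storage Enabled field, and the cluster name AppTier is a placeholder.

edit()
startEdit()
# The CoherenceTier singleton child of the ClusterMBean holds the tier's Coherence settings
cd('/Clusters/AppTier/CoherenceTier/AppTier')
cmo.setLocalStorageEnabled(false)
save()
activate()
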
Create Managed Coherence Servers for an Application Tier

To create managed servers for a Coherence application tier:

  1. From the WebLogic Server Administration Console Domain Structure pane, expand Environment and click Servers.
  2. Click New to create a new managed server.
  3. From the Create a New Server page, enter the server's properties as required.
  4. Click Yes to add the server to an existing cluster and use the drop-down list to select the application tier WebLogic Server cluster. The managed server inherits the Coherence settings from the application tier WebLogic Server cluster.
  5. Click Finish. The Summary of Servers page displays and the new server is listed.
  6. Repeat these steps to create additional managed servers as required.
  7. From the Control tab, select the servers to start and click Start.

Configuring and Managing a Coherence Proxy Tier

A Coherence proxy tier is a WebLogic Server cluster that is associated with a Coherence cluster and hosts any number of managed Coherence proxy servers. Managed Coherence proxy servers allow Coherence*Extend clients to use Coherence caches without being cluster members. The number of managed Coherence proxy servers that are required in a proxy tier depends on the number of expected clients. At least two proxy servers must be created to allow for load balancing; however, additional servers may be required when supporting a large number of client connections and requests.

For details on Coherence*Extend and creating extend clients, see Developing Remote Clients for Oracle Coherence.

Create a Coherence Proxy Tier

To create a Coherence proxy tier:

  1. Create a WebLogic Server cluster. See Setting up WebLogic Clusters.
  2. From the Summary of Clusters page, select the cluster from the Clusters table to configure it.
  3. From the Coherence tab, use the Coherence Cluster drop-down list and select a Coherence cluster to associate it with this WebLogic Server cluster.
  4. Click the Local Storage Enabled check box to remove the check mark and disable storage on the proxy tier. Proxy servers should never be used to store cache data. Storage-enabled cluster members can be adversely affected by a proxy service, which requires additional resources to handle client loads.
  5. Click Save.

Create Managed Coherence Servers for a Proxy Tier

To create managed servers for a Coherence proxy tier:

  1. From the WebLogic Server Administration Console Domain Structure pane, expand Environment and click Servers.
  2. Click New to create a new managed server.
  3. From the Create a New Server page, enter the server's properties as required.
  4. Click Yes to add the server to an existing cluster and use the drop-down list to select the proxy tier WebLogic Server cluster. The managed server inherits the Coherence settings from the proxy tier WebLogic Server cluster.
  5. Click Finish. The Summary of Servers page displays and the new server is listed.
  6. Repeat these steps to create additional managed servers as required.
  7. From the Control tab, select the servers to start and click Start.

Configure Coherence Proxy Services

Coherence proxy services are clustered services that manage remote connections from extend clients. Proxy services are defined and configured in a coherence-cache-config.xml file within the <proxy-scheme> element. The definition includes, among other settings, the TCP listener address (IP address or DNS name, and port) that is used to accept client connections. For details on the <proxy-scheme> element, see Developing Applications with Oracle Coherence. There are two ways to set up proxy services:

Using a Name Service

A name service is a specialized listener that allows extend clients to connect to a proxy service by name. Clients connect to the name service, which returns the addresses of all proxy services on the cluster. The name service provides an efficient setup and is typically preferred in a Coherence proxy tier.

Note:

If a domain includes multiple tiers (for example, a data tier, an application tier, and a proxy tier), then the proxy tier must be started before a client can connect to the proxy.

A name service automatically starts on port 7574 (the same default port that the Tangosol Cluster Management Protocol (TCMP) socket uses) when a proxy service is configured on a managed Coherence proxy server. The reuse of the same port minimizes the number of ports that are used by Coherence and simplifies firewall configuration.

To configure a proxy service and enable the name service on the default TCMP port:

  1. Edit the coherence-cache-config.xml file and create a <proxy-scheme> definition that does not explicitly define a socket address. The following example defines a proxy service named TcpExtend and automatically enables a cluster name service. A proxy address and ephemeral port are automatically assigned and registered with the cluster's name service.
    ...
    <caching-schemes>
       ...
       <proxy-scheme>
          <service-name>TcpExtend</service-name>
          <autostart>true</autostart>
       </proxy-scheme>
    </caching-schemes>
    ...
    
  2. Deploy the coherence-cache-config.xml file to each managed Coherence proxy server in the Coherence proxy tier. Typically, the coherence-cache-config.xml file is included in a GAR file. However, for the proxy tier, use a cluster cache configuration file to override the coherence-cache-config.xml file that is located in the GAR. This allows a single GAR to be deployed to the cluster and the proxy tier. For details on using a cluster cache configuration file, see Overriding a Cache Configuration File.

To connect to a name service, a client's coherence-cache-config.xml file must include a <name-service-addresses> element within the <tcp-initiator> element of a remote cache or remote invocation definition. The <name-service-addresses> element provides the socket address of a name service that is on a managed Coherence proxy server. The following example defines a remote cache definition and specifies a name service listening at host 192.168.1.5 on port 7574. The client automatically connects to the name service and gets a list of all managed Coherence proxy servers that contain a TcpExtend proxy service. The cache on the cluster must also be called TcpExtend. In this example, a single address is provided. A second name service address could be provided in case of a failure at the primary address. For details on client configuration and proxy service load balancing, see Configuring Extend Proxies in Developing Remote Clients for Oracle Coherence.

<remote-cache-scheme>
   <scheme-name>extend-dist</scheme-name>
   <service-name>TcpExtend</service-name>
   <initiator-config>
      <tcp-initiator>
         <name-service-addresses>
            <socket-address>
               <address>192.168.1.5</address>
               <port>7574</port>
            </socket-address>
         </name-service-addresses>
      </tcp-initiator>
   </initiator-config>
</remote-cache-scheme>

The name service listens on the cluster port (7574) by default and is available on all machines running Coherence cluster nodes. If the target cluster uses the default TCMP cluster port, then the port can be omitted from the configuration.
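
For example, the remote cache definition above might rely on the default cluster port by specifying only an address (a sketch; the bare <address> form assumes a release that accepts addresses directly within <name-service-addresses>):

<remote-cache-scheme>
   <scheme-name>extend-dist</scheme-name>
   <service-name>TcpExtend</service-name>
   <initiator-config>
      <tcp-initiator>
         <name-service-addresses>
            <address>192.168.1.5</address>
         </name-service-addresses>
      </tcp-initiator>
   </initiator-config>
</remote-cache-scheme>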

Note:

  • The <service-name> value must match the proxy scheme's <service-name> value; otherwise, a <proxy-service-name> element must also be provided in a remote cache and remote invocation scheme that contains the value of the <service-name> element that is configured in the proxy scheme.

  • In previous Coherence releases, the name service automatically listened on a member's unicast port instead of the cluster port.

  • An address provider can also be used to specify name service addresses.

Using an Address Provider

An address provider specifies the TCP listener address (IP, or DNS name, and port) for a proxy service. The listener address can be explicitly defined within a <proxy-scheme> element in a coherence-cache-config.xml file; however, the preferred approach is to define address providers in a cluster configuration file and then reference the addresses within the <proxy-scheme> element. The latter approach decouples deployment configuration from application configuration and allows network addresses to change without having to update a coherence-cache-config.xml file.

To use an address provider:

  1. Use the Address Providers tab on a Coherence cluster's Settings page to create address provider definitions. The CoherenceAddressProvidersBean MBean also exposes the address provider definition. An address provider contains a unique name in addition to the listener address for a proxy service. For example, an address provider called proxy1 might specify host 192.168.1.5 and port 9099 as the listener address.
  2. Repeat step 1 and create an address provider definition for each proxy service (at least one for each managed Coherence proxy server).
  3. For each managed Coherence proxy server, edit the coherence-cache-config.xml file and create a <proxy-scheme> definition and reference an address provider definition, by name, in an <address-provider> element. The following example defines a proxy service that references an address provider named proxy1:
    ...
    <caching-schemes>
       <proxy-scheme>
          <service-name>TcpExtend</service-name>
          <acceptor-config>
             <tcp-acceptor>
                <address-provider>proxy1</address-provider>
             </tcp-acceptor>
          </acceptor-config>
          <autostart>true</autostart>
       </proxy-scheme>
    </caching-schemes>
    ...
    
  4. Deploy each coherence-cache-config.xml file to its respective managed Coherence proxy server. Typically, the coherence-cache-config.xml file is included in a GAR file. However, for the proxy tier, use a cluster cache configuration file. The cluster cache configuration file overrides the coherence-cache-config.xml file that is located in the GAR. This allows the same GAR to be deployed to all cluster members, but then use unique settings that are specific to a proxy tier. For details on using a cluster cache configuration file, see Overriding a Cache Configuration File.

To connect to a proxy service, a client's coherence-cache-config.xml file must include a <remote-addresses> element, within the <tcp-initiator> element of a remote cache or remote invocation definition, that includes the address provider name. For example:

<remote-cache-scheme>
   <scheme-name>extend-dist</scheme-name>
   <service-name>TcpExtend</service-name>
   <initiator-config>
      <tcp-initiator>
         <remote-addresses>
            <address-provider>proxy1</address-provider>
         </remote-addresses>
      </tcp-initiator>
   </initiator-config>
</remote-cache-scheme>

Clients can also explicitly specify remote addresses. The following example defines a remote cache definition and specifies a proxy service on host 192.168.1.5 and port 9099. The client automatically connects to the proxy service and uses a cache on the cluster named TcpExtend. In this example, a single address is provided. A second address could be provided in case of a failure at the primary address. For details on client configuration and proxy service load balancing, see Configuring Extend Proxies in Developing Remote Clients for Oracle Coherence.

<remote-cache-scheme>
   <scheme-name>extend-dist</scheme-name>
   <service-name>TcpExtend</service-name>
   <initiator-config>
      <tcp-initiator>
         <remote-addresses>
            <socket-address>
               <address>192.168.1.5</address>
               <port>9099</port>
            </socket-address>
         </remote-addresses>
      </tcp-initiator>
   </initiator-config>
</remote-cache-scheme>

Configuring a Coherence Cluster

A Coherence cluster resource exposes several cluster settings that can be configured for a specific domain.

Many of the settings use default values that can be changed as required. The following instructions assume that a cluster resource has already been created. For details on creating a cluster resource, see Setting Up a Coherence Cluster. For security details, see Securing Oracle Coherence.

Use the Coherence tab on the Coherence Cluster Settings page to configure cluster communication. The CoherenceClusterSystemResource MBean and its associated CoherenceClusterResource MBean expose cluster settings. The CoherenceClusterResource MBean provides access to multiple MBeans for configuring a Coherence cluster.

Note:

WLS configuration takes precedence over Coherence system properties. In general, change the Coherence configuration in WLS by using WLST or a Coherence cluster configuration file instead of using system properties.

Use the following tasks to configure cluster settings:

Adding and Removing Coherence Cluster Members

Any existing managed server instance can be added to a Coherence cluster, and managed Coherence servers can be removed from a cluster. Adding and removing cluster members when configuring a Coherence cluster is a shortcut that avoids explicitly configuring each server instance. However, when adding existing managed server instances, default Coherence settings may need to be changed. For details on configuring managed Coherence servers, see Configuring Managed Coherence Servers.

Use the Member tab on the Coherence Cluster Settings page to select which managed servers or WebLogic Server clusters are associated with a Coherence cluster. When selecting a WebLogic Server cluster, it is recommended that all the managed servers in the WebLogic Server cluster be associated with a Coherence cluster. A CoherenceClusterSystemResource exposes all managed Coherence servers as targets. A CoherenceMemberConfig MBean is created for each managed server and exposes the Coherence cluster member parameters.

Setting Advanced Cluster Configuration Options

WebLogic Server MBeans expose a subset of Coherence operational settings that are sufficient for most use cases and are detailed throughout this section. These settings are available natively through the WLST utility and the WebLogic Server Administration Console. For more advanced use cases, use an external Coherence cluster configuration file (tangosol-coherence-override.xml), which provides full control over Coherence operational settings.

Note:

The use of an external cluster configuration file is only recommended for operational settings that are not available through the provided MBeans. That is, avoid configuring the same operational settings in both an external cluster configuration file and through the MBeans.

Use the General tab on the Coherence Cluster Settings page to enter the path and name of a cluster configuration file that is located on the administration server or use the CoherenceClusterSystemResource MBean. For details on using a Coherence cluster configuration file, see Specifying an Operational Configuration File in Developing Applications with Oracle Coherence, which also provides usage instructions for each element and a detailed schema reference.
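
For example, a minimal tangosol-coherence-override.xml might tune a setting that the MBeans do not expose, such as the service guardian timeout (the value shown is illustrative):

<?xml version="1.0"?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config
   coherence-operational-config.xsd">
   <cluster-config>
      <service-guardian>
         <timeout-milliseconds>60000</timeout-milliseconds>
      </service-guardian>
   </cluster-config>
</coherence>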

Checking Which Operational Configuration is Used

Coherence generates an operational configuration from WebLogic Server MBeans, a Coherence cluster configuration file (if imported), and Coherence system properties (if set). The result is written to the managed Coherence server log if the weblogic.debug.DebugCoherence=true system property is set. If you use the WebLogic start-up scripts, you can use the JAVA_PROPERTIES environment variable. For example:

export JAVA_PROPERTIES=-Dweblogic.debug.DebugCoherence=true

Configure Cluster Communication

Cluster members communicate using TCMP, which operates independently of the WLS cluster protocol. TCMP is an IP-based protocol for discovering cluster members, managing the cluster, provisioning services, and transmitting data. TCMP can run over different transport protocols and can use both multicast and unicast. By default, TCMP uses multicast UDP for discovery and TCP for data transmission (using the TCP/IP Message Bus (TMB)). If WKA is configured, then TCMP uses unicast User Datagram Protocol (UDP) for discovery and Transmission Control Protocol (TCP) for data transmission. If Secure Sockets Layer (SSL) is configured for TCMP, then SSL over TCP is used for both discovery and data transmission. The use of different transport protocols and multicast requires support from the underlying network.

Use the General tab on the Coherence Cluster Settings page to configure cluster communication. The CoherenceClusterParamsBean and CoherenceClusterWellKnownAddressesBean MBeans expose the cluster communication parameters.

Changing the Coherence Cluster Mode

Coherence clusters support both unicast and multicast communication. Multicast must be explicitly configured and is not the default option. The use of multicast should be avoided in environments that do not properly support or allow multicast. The use of unicast disables all multicast transmission and automatically uses the Coherence WKA feature to discover and communicate between cluster members.

For details on using multicast, unicast, and WKA in Coherence, see Setting Up a Cluster in Developing Applications with Oracle Coherence.

Selecting Unicast For the Coherence Cluster Mode

To use unicast for cluster communication, select Unicast from the Clustering Mode drop-down list and enter a cluster port or keep the default port (7574). From Coherence 12.2.1 onward, you need only set the Coherence cluster name; all clusters can use the same cluster port. For most clusters, the port does not need to be changed. The only reason to change the cluster port is to avoid interference with another application that uses the port. If a different port is required, then the recommended best practice is to select a value between 1024 and 8999.

Note:

The Coherence default cluster port is registered with IANA for Coherence application usage.

Specifying Well Known Address Members

When unicast is enabled, use the Well Known Addresses tab to explicitly configure WKA machine addresses. If no addresses are defined for a cluster, then addresses are automatically assigned. The recommended best practice is to always explicitly specify WKA machine addresses when using unicast.

In addition, if a domain contains multiple managed Coherence servers that are located on different machines, then at least one non-local WKA machine address must be defined to ensure that a Coherence cluster is formed; otherwise, multiple individual clusters are formed on each machine. If the managed Coherence servers are all running on the same machine, then a cluster can be created without specifying a non-local listen address.

Note:

WKA machine addresses must be explicitly defined in production environments. In production mode, a managed Coherence server fails to start if WKA machine addresses have not been explicitly defined. Automatically assigned WKA machine addresses are a design-time convenience and should only be used during development on a single server.

Selecting Multicast For the Coherence Cluster Mode

To use multicast for cluster communication, select Multicast from the Clustering Mode drop-down list and enter a cluster port and multicast listen address. The same cluster port can be shared across distinct clusters (as identified by the cluster name), even if the clusters run on the same computer or multicast address. Thus, changing the cluster port is not necessary if the cluster name is set to a value that is unique to the environment. If a different port is required, then the recommended best practice is to select a value between 1024 and 8999.

Use the TTL field to designate how far multicast packets can travel on a network. The time-to-live value is expressed in terms of how many hops a packet survives; each network interface, router, and managed switch is considered one hop. The TTL value should be set to the lowest integer value that works.
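
In an offline WLST session (see Setting Up Coherence with WLST (Offline)), the equivalent multicast settings might be applied as follows; the address, port, and TTL values are placeholders:

cd('CoherenceClusterSystemResource/myCoherenceCluster/CoherenceResource/myCoherenceCluster/CoherenceClusterParams/NO_NAME_0')
set('ClusteringMode', 'multicast')
set('MulticastListenAddress', '237.0.0.101')
set('ClusterListenPort', 7574)
set('TimeToLive', 4)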

Changing the Coherence Cluster Transport Protocol

The following transport protocols are supported for TCMP.

  • User Datagram Protocol (UDP) – UDP is the default TCMP transport protocol and is used for both multicast and unicast communication. If multicast is disabled, all communication is done using UDP unicast.

  • Transmission Control Protocol (TCP) – The TCP transport protocol is used in network environments that favor TCP communication. All TCMP communication uses TCP if unicast is enabled. If multicast is enabled, TCP is only used for unicast communication and UDP is used for multicast communication.

    Note:

    Selecting TCP sets both TMB and the TCP socket provider that is used by cluster discovery.
  • Secure Sockets Layer (SSL) – The SSL/TCP transport protocol is used in network environments that require highly secure communication between cluster members. SSL is only supported with unicast communication; ensure multicast is disabled when using SSL. The use of SSL requires additional configuration. For details on securing Coherence within WebLogic Server, see Securing Oracle Coherence.

  • InfiniBand Message Bus (IMB) – IMB uses an optimized protocol based on native InfiniBand verbs. IMB is only valid on Exalogic.

The CoherenceClusterParamsBean MBean exposes the transport protocol setting. In the console, the setting appears as the Transport drop-down list with the following options:

  • TMB: UDP + TMB (default)
  • TCP: TCP + TMB
  • UDP: UDP + datagram
  • SSL: SSL over TCP + TMBS
  • SSLUDP: SSL over TCP + SSL over datagram
  • SDMB: UDP + SDMB
  • IMB: UDP + IMB

These options are a combination of the Coherence Cluster Transport Protocol for cluster service communication and reliable point-to-point data service communication. The following table explains these combinations:

Table 12-1 Transport Types

  • TMB: UDP + TMB (default) – The cluster service communication uses UDP, and the reliable point-to-point data service communication uses TMB (TCP/IP Message Bus).

  • TCP: TCP + TMB – The cluster service communication uses TCP, and the reliable point-to-point data service communication uses TMB.

  • UDP: UDP + datagram – The cluster service communication uses UDP, and the reliable point-to-point data service communication uses datagram.

  • SSL: SSL over TCP + TMBS – The cluster service communication uses SSL over TCP, and the reliable point-to-point data service communication uses TMBS.

  • SSLUDP: SSL over TCP + SSL over datagram – The cluster service communication uses SSL over TCP, and the reliable point-to-point data service communication uses SSL over datagram.

  • SDMB: UDP + SDMB – The cluster service communication uses UDP, and the reliable point-to-point data service communication uses SDMB (Socket Direct Protocol message bus).

  • IMB: UDP + IMB – The cluster service communication uses UDP, and the reliable point-to-point data service communication uses IMB.

For more information about changing these protocols, see Changing Transport Protocols in Developing Applications with Oracle Coherence.
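
In the same offline WLST session used earlier in this chapter, the transport can be scripted by setting the Transport attribute on the CoherenceClusterParamsBean (TCP is shown as an example value):

cd('CoherenceClusterSystemResource/myCoherenceCluster/CoherenceResource/myCoherenceCluster/CoherenceClusterParams/NO_NAME_0')
set('Transport', 'TCP')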

Overriding a Cache Configuration File

A Coherence cache configuration file defines the caches that are used by an application. Typically, a cache configuration file is included in a GAR module. A GAR is deployed to all managed Coherence servers in the data tier and can also be deployed as part of an EAR to the application tier. The GAR ensures that the cache configuration is available on every Oracle Coherence cluster member. However, there are use cases that require a different cache configuration file to be used on specific managed Coherence servers. For example, a proxy tier requires access to all artifacts in the GAR but needs a different cache configuration file that defines the proxy services to start.

A cache configuration file can be associated with WebLogic clusters or managed Coherence servers at runtime. In this case, the cache configuration overrides the cache configuration file that is included in a GAR. You can also omit the cache configuration file from a GAR file and assign it at runtime. To override a cache configuration file at runtime, the cache configuration file must be bound to a JNDI name. The JNDI name is defined using the override-property attribute of the <cache-configuration-ref> element. The element is located in the coherence-application.xml file that is packaged in a GAR file. For details on the coherence-application.xml file, see Creating a Coherence Application Deployment Descriptor in Developing Oracle Coherence Applications for Oracle WebLogic Server.

The following example defines an override property named cache-config/ExamplesGar that can be used to override the META-INF/example-cache-config.xml cache configuration file in the GAR:

...
<cache-configuration-ref override-property="cache-config/ExamplesGar">
   META-INF/example-cache-config.xml</cache-configuration-ref>
...

At runtime, use the Cache Configurations tab on the Coherence Cluster Settings page to override a cache configuration file. You must supply the same JNDI name that is defined in the override-property attribute. The cache configuration can be located on the administration server or at a URL. In addition, you can choose to import the file to the domain or use it from the specified location. Use the Targets tab to specify which Oracle Coherence cluster members use the cache configuration file.

The following WLST (online) example demonstrates how a cluster cache configuration can be overridden using a CoherenceClusterSystemResource object.

edit()
startEdit()
cd('CoherenceClusterSystemResources/myCoherenceCluster/CoherenceCacheConfigs')
create('ExamplesGar', 'CoherenceCacheConfig')
cd('ExamplesGar')
set('JNDIName', 'ExamplesGar')
cmo.importCacheConfigurationFile('/tmp/cache-config.xml')
cmo.addTarget(getMBean('/Servers/coh_server'))
save()
activate()

The WLST example creates a CoherenceCacheConfig resource as a child of the CoherenceClusterSystemResource object. The script then imports the cache configuration file to the domain and specifies the JNDI name to which the resource binds. The file must be found at the path provided. Lastly, the cache configuration is targeted to a specific server. The ability to target a cache configuration resource to certain servers or WebLogic Server clusters allows the application to load different configurations based on the context of the server (cache servers, cache clients, proxy servers, and so on).

The following example demonstrates how the cache configuration resource can be configured as a URL.

edit()
startEdit()
cd('CoherenceClusterSystemResources/myCoherenceCluster/CoherenceCacheConfigs')
create('ExamplesGar', 'CoherenceCacheConfig')
cd('ExamplesGar')
set('JNDIName', 'ExamplesGar')
set('CacheConfigurationFile', 'http://cache.locator/app1/cache-config.xml')
cmo.addTarget(getMBean('/Servers/coh_server'))
save()
activate()

Configuring Coherence Logging

Configure cluster logging using the WebLogic Server Administration Console's Logging tab that is located on the Coherence Cluster Settings page or use the CoherenceLoggingParamsBean MBean. For details on WebLogic Server logging, see Configuring Log Files and Filtering Log Messages for Oracle WebLogic Server. Coherence logging configuration includes:

  • Disabling and enabling logging

  • Changing the default logger name

    WebLogic Server provides two loggers that can be used for Coherence logging: the default com.oracle.coherence logger and the com.oracle.wls logger. The com.oracle.wls logger is generic and uses the same handler that is configured for WebLogic Server log output. This logger does not allow for Coherence-specific configuration. The com.oracle.coherence logger allows Coherence-specific configuration, which includes the use of different handlers for Coherence logs.

    Note:

    If logging is configured through a standard logging.properties file, then make sure the file uses the same logger name that is currently configured for Coherence logging (see the sketch at the end of this section).
  • Changing the log message format

    Add or remove information from a log message. A log message can include static text and parameters that are replaced at run time (for example, {date}). For details on supported log message parameters, see Changing the Log Message Format in Developing Applications with Oracle Coherence.

Note:

The following logger configuration enables dynamic Coherence logging:

<logger-severity>Trace</logger-severity>
<log-file-severity>Trace</log-file-severity>
<stdout-severity>Debug</stdout-severity>
<platform-logger-levels>com.oracle.coherence=FINEST</platform-logger-levels>
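
If Coherence logging is instead configured through a standard logging.properties file, as mentioned above, a minimal sketch might look like the following (assuming the default com.oracle.coherence logger):

# java.util.logging configuration (sketch)
handlers=java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level=ALL
# Use the same logger name that is configured for Coherence logging
com.oracle.coherence.level=FINEST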

Configuring Cache Persistence

Coherence persistence manages the persistence and recovery of Coherence distributed caches. Cached data is persisted so that it can be quickly recovered after a catastrophic failure or after a cluster restart due to planned maintenance. For complete details about Coherence cache persistence, see Persisting Caches in Administering Oracle Coherence.

Use the Persistence tab on the Coherence Cluster Settings page to enable active persistence and to override the default location where persistence files are stored. The CoherencePersistenceParamsBean MBean exposes the persistence parameters. Managed Coherence servers must be restarted for persistence changes to take effect.

On-demand persistence allows a cache service to be manually persisted and recovered upon request (a snapshot) using the persistence coordinator. The persistence coordinator is exposed as an MBean interface (PersistenceCoordinatorMBean) that provides operations for creating, archiving, and recovering snapshots of a cache service. To use the MBean, JMX must be enabled on the cluster. For details about enabling JMX management and accessing Coherence MBeans, see Using JMX to Manage Oracle Coherence in Managing Oracle Coherence. Active persistence automatically persists cache contents on all mutations and automatically recovers the contents on cluster/service startup. The persistence coordinator can still be used in active persistence mode to perform on-demand snapshots.
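
As an illustration of on-demand persistence, the following WLST sketch invokes the createSnapshot operation on a persistence coordinator through the domain runtime MBean server. The service name DistributedCache and the snapshot name are placeholders, and the sketch assumes JMX management is enabled as described above.

from javax.management import ObjectName
from jarray import array
from java.lang import Object, String

connect('username', 'password', 't3://host:port')
domainRuntime()
# Locate the persistence coordinator for the service (WebLogic Server adds a
# location key, so query with a wildcard)
query = ObjectName('Coherence:type=Persistence,service=DistributedCache,responsibility=PersistenceCoordinator,*')
coordinator = list(mbs.queryNames(query, None))[0]
# Create an on-demand snapshot of the cache service
mbs.invoke(coordinator, 'createSnapshot', array(['nightly'], Object), array(['java.lang.String'], String))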

Configuring Cache Federation

The federated caching feature federates cache data asynchronously across multiple geographically dispersed clusters. Cached data is federated across clusters to provide redundancy, off-site backup, and multiple points of access for application users in different geographical locations. For complete details about Coherence Federation, see Federating Caches Across Clusters in Administering Oracle Coherence.

Use the Federation tab on the Coherence Cluster Settings page to enable a federation topology and to configure a remote cluster participant to which caches are federated. When selecting a topology, a topology configuration is automatically created and named Default-Topology. Federation must be configured on both the local cluster participant and the remote cluster participant. At least one host on the remote cluster must be provided. If a custom port is being used on the remote cluster participant, then change the cluster port accordingly. Managed Coherence servers must be restarted for federation changes to take effect. The CoherenceFederationParamsBean MBean also exposes the cluster federation parameters and can be used to configure cache federation (see the sketch after the following note).

Note:

  • The Default-Topology topology configuration is created and used if no federation topology is specified in the cache configuration file.

  • When using federation, matching topologies must be configured on both the local and remote clusters. For example, selecting none for the topology in a local cluster and active-active as the topology in the remote cluster can lead to unpredictable behavior. Similarly, if a local cluster is set to use active-passive, then the remote cluster must be set to use passive-active.
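
A hedged offline WLST sketch for enabling federation follows. The CoherenceFederationParams bean and its attribute names are assumptions based on the parameters described above, and the topology, cluster name, and host values are placeholders.

cd('CoherenceClusterSystemResource/myCoherenceCluster/CoherenceResource/myCoherenceCluster/CoherenceFederationParams/NO_NAME_0')
set('FederationTopology', 'activeactive')
set('RemoteCoherenceClusterName', 'remoteCluster')
set('RemoteParticipantHosts', 'remote.example.com')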

Configuring Managed Coherence Servers

Managed Coherence servers expose several cluster member settings that can be configured for a specific domain.

Many of the settings use default values that can be changed as required. The instructions in this section assume that a managed server has already been created and associated with a Coherence cluster. For details on creating managed Coherence servers, see Create Standalone Managed Coherence Servers.

Use the Coherence tab on a managed server's Setting page to configure Coherence cluster member settings. A CoherenceMemberConfig MBean is created for each managed server and exposes the Coherence cluster member parameters.

Note:

WLS configuration takes precedence over Coherence system properties. In general, change the Coherence configuration in WLS by using WLST or a Coherence cluster configuration file instead of using system properties.

Use the following tasks to configure a managed Coherence server:

Configure Coherence Cluster Member Storage Settings

The storage settings for managed Coherence servers can be configured as required. Enabling storage on a server means that the server is responsible for storing a portion of both primary and backup data for the Coherence cluster. Servers that are intended to store data must be configured as storage-enabled servers. Servers that host cache applications and cluster proxy servers should be configured as storage-disabled servers and are typically not responsible for storing data, because sharing resources can become problematic and affect application and cluster performance.

Note:

If a managed Coherence server is part of a WebLogic Server cluster, then the Coherence storage settings that are specified on the WebLogic Server cluster override the storage settings on the server. The storage setting is an exception to the general rule that server settings override WebLogic Server cluster settings. Moreover, the final runtime configuration is not reflected in the console. Therefore, a managed Coherence server may show that storage is disabled even though storage has been enabled through the Coherence tab for a WebLogic Server cluster. Always check the WebLogic Server cluster settings to determine whether storage has been enabled for a managed Coherence server.

Use the following fields on the Coherence tab to configure storage settings:

  • Local Storage Enabled – This field specifies whether a managed Coherence server stores data. If this option is not selected, then the managed Coherence server does not store data and is considered a cluster client.

  • Coherence Web Local Storage Enabled – This field specifies whether a managed Coherence server stores HTTP session data. For details on using Coherence to store session data, see Using Coherence*Web with WebLogic Server in Administering HTTP Session Management with Oracle Coherence*Web.

Configure Coherence Cluster Member Unicast Settings

Managed Coherence servers communicate with each other using unicast (point-to-point) communication. Unicast is used even if the cluster is configured to use multicast communication. For details on unicast in Coherence, see Specifying a Cluster Member's Unicast Address in Developing Applications with Oracle Coherence.

Use the following fields on the Coherence tab to configure unicast settings:

  • Unicast Listen Address – This field specifies the address on which the server listens for unicast communication. If no address is provided, then a routable IP address is automatically selected. The address field also supports Classless Inter-Domain Routing (CIDR) notation, which uses a subnet and mask pattern (for example, 192.168.1.0/24) for a local IP address to bind to instead of specifying an exact IP address.

  • Unicast Listen Port – This field specifies the ports on which the server listens for unicast communication. A cluster member uses two unicast UDP ports, which are automatically assigned from the operating system's available ephemeral port range (as indicated by a value of 0). The default value ensures that Coherence cannot accidentally cause port conflicts with other applications. However, if a firewall is required between cluster members (an atypical configuration), then a port can be manually assigned, and a second port is automatically selected (port1 + 1).

  • Unicast Port Auto Adjust – This field specifies whether the port automatically increments if the port is already in use.

Use Dynamic Management

A Coherence cluster can be managed from any JMX-compatible client, such as JConsole or Java VisualVM. The management information includes runtime statistics and operational settings. The management information is specific to the Coherence management domain and is different from the management information that is provided for Coherence as part of the WebLogic management domain. For a detailed reference of Coherence MBeans, see Oracle Coherence MBeans Reference in Managing Oracle Coherence.

Coherence is configured to start in dynamic management mode. One cluster member is automatically selected as a management proxy and is responsible for aggregating the management information from all other cluster members. The Administration Server for the WebLogic domain then integrates the management information, and it is made available through the domain runtime MBean server. If the management proxy member is not operational, then another cluster member is automatically selected as the management proxy.

At runtime, use a JMX client to connect to the domain runtime MBean server where the Coherence management information is located within the Coherence management namespace. For details about connecting to the domain runtime MBean server, see Accessing WebLogic Server MBeans with JMX in Developing Custom Management Utilities Using JMX for Oracle WebLogic Server.

Configure Coherence Cluster Member Identity Settings

A set of identifiers is used to give a managed Coherence server an identity within the cluster. The identity information is used to differentiate servers and conveys the server's role within the cluster. Some identifiers are also used by the cluster service when performing cluster tasks. Lastly, the identity information is valuable when displaying management information (for example, JMX) and facilitates interpreting log entries.

Use the following fields on the Coherence tab to configure member identity settings:

  • Site Name – This field specifies the name of the geographic site that hosts the managed Coherence server. The server's domain name is used if no name is specified. For WAN clustering, this value identifies the data center where the member is located. The site name can be used as the basis for intelligent routing, load balancing, and disaster recovery planning (that is, the explicit backing up of data on separate geographic sites). The site name also helps determine where to back up data when using distributed caching and the default partition assignment strategy. Lastly, the name is useful for displaying management information (for example, JMX) and interpreting log entries.

  • Rack Name – This field specifies the name of the location within a geographic site where the managed Coherence server is hosted; this is often a cage, rack, or bladeframe identifier. The rack name can be used as the basis for intelligent routing, load balancing, and disaster recovery planning (that is, the explicit backing up of data on separate bladeframes). The rack name also helps determine where to back up data when using distributed caching and the default partition assignment strategy. Lastly, the name is useful for displaying management information (for example, JMX) and interpreting log entries.

  • Role Name – This field specifies the managed Coherence server's role in the cluster. The role name allows an application to organize cluster members into specialized roles, such as storage-enabled or storage-disabled.

    If a managed Coherence server is part of a WebLogic Server cluster, the cluster name is automatically used as the role name and this field cannot be set. If no name is provided, the default role name WebLogicServer is used.

Configure Coherence Cluster Member Logging Levels

Logging levels can be configured for each managed Coherence server. The default log level is D5 and can be changed using the server's Logging tab. For details on WebLogic Server logging, see Configuring Log Files and Filtering Log Messages for Oracle WebLogic Server.

To configure a managed Coherence server's logging level:

  1. From the Summary of Servers screen, select a managed Coherence server.
  2. On the server's settings page, select the Logging tab.
  3. From the General tab, click Advanced.
  4. From the Platform Logger Levels field, enter a logging level.
    Value                         Resultant Message Displays

    com.oracle.coherence=FINEST   D9
    com.oracle.coherence=INFO     D3
    No Value                      D5 (Default)

  5. Click Save.

Using a Single-Server Cluster

A single-server cluster is a cluster that is constrained to run on a single managed server instance and does not access the network.

The server instance acts as a storage-enabled cluster member, a client, and a proxy. A single-server cluster is easy to set up and offers a quick way to start and stop a cluster. A single-server cluster is used during development and should not be used for production or testing environments.

To create a single-server cluster, define a Coherence cluster resource and associate it with a single managed server instance (see Setting Up a Coherence Cluster), and configure the server's unicast listen address to a local (loopback) address so that the cluster does not access the network (see Configure Coherence Cluster Member Unicast Settings).

Using WLST with Coherence

WLST is a command-line interface that you can use to automate domain configuration tasks, including configuring and managing Coherence clusters.

For more information, see Understanding the WebLogic Scripting Tool.

Setting Up Coherence with WLST (Offline)

WLST can be used to set up Coherence clusters. The following examples demonstrate using WLST in offline mode to create and configure a Coherence cluster. It is assumed that a domain has already been created and that the examples are completed in the order in which they are presented. In addition, the examples only create a data tier. Additional tiers can be created as required. Lastly, the examples are not intended to demonstrate every Coherence MBean. For a complete list of Coherence MBeans, see MBean Reference for Oracle WebLogic Server.

readDomain('/ORACLE_HOME/user_projects/domains/base_domain')

Create a Coherence Cluster

create('myCoherenceCluster', 'CoherenceClusterSystemResource')

Create a Tier of Managed Coherence Servers

create('coh_server1', 'Server')
cd('Server/coh_server1')
set('ListenPort', 7005)
set('ListenAddress', '192.168.0.100')
set('CoherenceClusterSystemResource', 'myCoherenceCluster')

cd('/')
create('coh_server2','Server')
cd('Server/coh_server2')
set('ListenPort', 7010)
set('ListenAddress', '192.168.0.101')
set('CoherenceClusterSystemResource', 'myCoherenceCluster')

cd('/')
create('DataTier', 'Cluster')
assign('Server', 'coh_server1,coh_server2','Cluster','DataTier')
cd('Cluster/DataTier')
set('MulticastAddress', '237.0.0.101')
set('MulticastPort', 8050)

cd('/CoherenceClusterSystemResource/myCoherenceCluster')
set('Target', 'DataTier')

Configure Coherence Cluster Parameters

cd('CoherenceClusterSystemResource/myCoherenceCluster/CoherenceResource/myCoherenceCluster/CoherenceClusterParams/NO_NAME_0')
set('ClusteringMode', 'unicast')
set('SecurityFrameworkEnabled','false')
set('ClusterListenPort', 7574)

Configure Well Known Addresses

create('wka_config','CoherenceClusterWellKnownAddresses')
cd('CoherenceClusterWellKnownAddresses/NO_NAME_0')
 
create('WKA1','CoherenceClusterWellKnownAddress')
cd('CoherenceClusterWellKnownAddress/WKA1')
set('ListenAddress', '192.168.0.100')
cd('../..')

create('WKA2','CoherenceClusterWellKnownAddress')
cd('CoherenceClusterWellKnownAddress/WKA2')
set('ListenAddress', '192.168.0.101')

Set Logging Properties

cd('/')
cd('CoherenceClusterSystemResource/myCoherenceCluster/CoherenceResource/myCoherenceCluster')
create('log_config','CoherenceLoggingParams')
cd('CoherenceLoggingParams/NO_NAME_0')
set('Enabled', 'true')
set('LoggerName', 'com.oracle.coherence')

Configure Managed Coherence Servers

cd('/')
cd('Server/coh_server1')
create('member_config', 'CoherenceMemberConfig')
cd('CoherenceMemberConfig/member_config')
set('LocalStorageEnabled', 'true')
set('RackName', '100A')
set('RoleName', 'Server')
set('SiteName', 'pa-1')
set('UnicastListenAddress', '192.168.0.100')
set('UnicastListenPort', 0)
set('UnicastPortAutoAdjust', 'true')

cd('/')
cd('Server/coh_server2')
create('member_config', 'CoherenceMemberConfig')
cd('CoherenceMemberConfig/member_config')
set('LocalStorageEnabled', 'true')
set('RackName', '100A')
set('RoleName', 'Server')
set('SiteName', 'pa-1')
set('UnicastListenAddress', '192.168.0.101')
set('UnicastListenPort', 0)
set('UnicastPortAutoAdjust', 'true')

updateDomain()
closeDomain()

Setting the Cluster Name and Port

readDomain('/ORACLE_HOME/user_projects/domains/base_domain')

cd('CoherenceClusterSystemResource/myCoherenceCluster/CoherenceResource/myCoherenceCluster')
set('Name', 'MyCluster')

cd('CoherenceClusterSystemResource/myCoherenceCluster/CoherenceResource/myCoherenceCluster/CoherenceClusterParams/NO_NAME_0')
set('ClusterListenPort', 9123)

updateDomain()
closeDomain()

Accessing Coherence MBeans by Using WLST

When Coherence runs within WebLogic Server as managed Coherence servers, the WebLogic Server domain runtime MBean server collects JMX information from the Coherence management proxy, and this information is accessible by using WLST.

After you have connected using WLST and switched to the domain runtime tree by using domainRuntime(), the MBean server connection is available through the mbs variable, which you can use to perform queries and operations on standard Coherence MBeans.
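For example, the following minimal sketch (the connection details are placeholders) lists the object names of all Coherence MBeans that are registered with the domain runtime MBean server:

from javax.management import ObjectName

connect('username', 'password', 't3://host:port')
domainRuntime()

# mbs is the MBean server connection that WLST exposes in the domainRuntime tree.
for name in mbs.queryNames(ObjectName('Coherence:*'), None):
   print name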

The following samples show how to access various MBean values and operations. The samples are not exhaustive; they are provided to illustrate how to access MBeans by using WLST. For a list of Coherence MBeans, including their attributes and operations, see Oracle Coherence MBeans Reference in Managing Oracle Coherence.

Note:

Because WebLogic Server adds an additional Location key when returning MBeans, some of the examples use the following WLST function to return the fully qualified object name, including this location key.
from javax.management import ObjectName
from java.lang import RuntimeException

# Return the fully qualified Name for a query
def coh_getFullyQualifiedName(query):
   beans = list(mbs.queryMBeans(ObjectName(query),None))
   if len(beans) == 0:
      raise RuntimeException('No results found')
   for bean in beans:
      return bean.getObjectName()

Example 12-1 Access a single MBean such as the ClusterMBean

This example shows how to access ClusterMBean and display various attributes.

from javax.management import ObjectName
from java.lang import RuntimeException

# Return the fully qualified Name for a query
def coh_getFullyQualifiedName(query):
   beans = list(mbs.queryMBeans(ObjectName(query),None))
   if len(beans) == 0:
      raise RuntimeException('No results found')
   for bean in beans:
      return bean.getObjectName()

#
## Entry point 
#
connect ("username","password", "t3://host:port")
# Enter the WebLogic Server administrator username, password, and the administration server host and port.

domainRuntime()

query = coh_getFullyQualifiedName('Coherence:type=Cluster,*')
print 'Fully qualified query: ' + str(query)
beans = list(mbs.queryMBeans(query, None))

# Should only be one cluster MBean
if len(beans) == 0:
   print 'Unable to find ClusterMBean'
else:
   # Normally there is only one ClusterMBean but if you had multiple
   # Coherence clusters then > 1 could be returned
   bean = beans[0]
   cluster = bean.getObjectName()
   clusterName = mbs.getAttribute(cluster, 'ClusterName')
   clusterSize = mbs.getAttribute(cluster, 'ClusterSize')
   clusterVersion = mbs.getAttribute(cluster, 'Version')

   print 'Cluster Name:      ' + clusterName
   print 'Cluster Size:      ' + str(clusterSize)
   print 'Coherence Version: ' + clusterVersion

Sample Output:

Location changed to domainRuntime tree. This is a read-only tree 
with DomainMBean as the root MBean. 
For more help, use help('domainRuntime')

Fully qualified query: Coherence:cluster=CoherenceCluster,Location=storage1,type=Cluster
Cluster Name:      CoherenceCluster
Cluster Size:      4

Example 12-2 Display Information from Multiple MBeans such as the CacheMBean

This example shows how to access the CacheMBean instances for all defined caches.

connect ("username","password", "t3://host:port")
# Enter the WebLogic Server administrator username, password, and the administration server host and port.

domainRuntime()

beans = list(mbs.queryMBeans(ObjectName('Coherence:type=Cache,*'),None))
for bean in beans:
   cache = bean.getObjectName()
   cacheSize = mbs.getAttribute(cache, 'Size')
   cacheUnits = mbs.getAttribute(cache, 'Units')

   print 'Mbean: ' + str(cache)
   print 'Size:  ' + str(cacheSize)
   print 'Units: ' + str(cacheUnits)

Sample Output

Mbean: Coherence:Location=storage1,nodeId=1,type=Cache,cluster=CoherenceCluster,service="ExampleGAR:PartitionedPofCache",member=storage1,name=control,tier=back
Size:  0
Units: 0
Mbean: Coherence:Location=storage1,nodeId=2,type=Cache,cluster=CoherenceCluster,service="ExampleGAR:PartitionedPofCache",member=storage2,name=stores,tier=back
Size:  12
Units: 3488
Mbean: Coherence:Location=storage1,nodeId=1,type=Cache,cluster=CoherenceCluster,service="ExampleGAR:PartitionedPofCache",member=storage1,name=contacts,tier=back
Size:  10
Units: 3360
Mbean: Coherence:Location=storage1,nodeId=2,type=Cache,cluster=CoherenceCluster,service="ExampleGAR:PartitionedPofCache",member=storage2,name=contacts,tier=back
Size:  10
Units: 3352
Mbean: Coherence:Location=storage1,nodeId=2,type=Cache,cluster=CoherenceCluster,service="ExampleGAR:PartitionedPofCache",member=storage2,name=control,tier=back
Size:  0
Units: 0
Mbean: Coherence:Location=storage1,nodeId=1,type=Cache,cluster=CoherenceCluster,service="ExampleGAR:PartitionedPofCache",member=storage1,name=stores,tier=back
Size:  8
Units: 2328

Example 12-3 Invoke an Operation against an MBean to force Persistence Recovery

This example shows how to force Persistence Recovery on a specific service by invoking the forceRecovery operation.

from javax.management import ObjectName
from java.lang import RuntimeException

# Return the Persistence coordinator MBean
def coh_getPersistenceCoordinator(serviceName):
   query = 'Coherence:type=Persistence,service=' + serviceName + ',responsibility=PersistenceCoordinator,*'
   beans = list(mbs.queryMBeans(ObjectName(query), None))
   if len(beans) == 0:
      raise RuntimeException('No results found')
   for bean in beans:
      return bean.getObjectName()

##
## Entry Point
##

connect("username","password", "t3://host:port")
# Enter the WebLogic Server administrator username, password, and the administration server host and port.

domainRuntime()

# if serviceName includes ':' it must be quoted
serviceName = '"ExampleGAR:PartitionedPofCache"'
objectName = coh_getPersistenceCoordinator(serviceName)
print 'Coordinator is ' + str(objectName)

mbs.invoke(objectName, 'forceRecovery', None, None)

print 'Force Recovery invoked'

Example 12-4 Invoke an Operation which returns a value

This example shows how to invoke an MBean operation that returns a value, by invoking reportScheduledDistributions on the PartitionAssignment MBean.

from javax.management import ObjectName
from java.lang import RuntimeException
import java.lang

# Return the fully qualified Name for a query
def coh_getFullyQualifiedName(query):
   beans = list(mbs.queryMBeans(ObjectName(query),None))
   if len(beans) == 0:
      raise RuntimeException('No results found')
   for bean in beans:
      return bean.getObjectName()

##
## Entry Point
##

connect("username","password", "t3://host:port")
# Enter the WebLogic Server administrator username, password, and the administration server host and port.

domainRuntime()

# if serviceName includes ':' it must be quoted
serviceName = '"ExampleGAR:PartitionedPofCache"'
partitionAssignment = coh_getFullyQualifiedName('Coherence:type=PartitionAssignment,service=' + serviceName + ',responsibility=DistributionCoordinator,*')
print 'Partition Assignment is ' + str(partitionAssignment)

print mbs.invoke(partitionAssignment, 'reportScheduledDistributions', [java.lang.Boolean("true")], ["boolean"])

Sample Output

Location changed to domainRuntime tree. This is a read-only tree 
with DomainMBean as the root MBean. 
For more help, use help('domainRuntime')

Partition Assignment is Coherence:service="ExampleGAR:PartitionedPofCache",cluster=CoherenceCluster,responsibility=DistributionCoordinator,Location=storage1,type=PartitionAssignment
No distributions are currently scheduled for this service.

Example 12-5 Invoke a Federation Operation

This example shows how to invoke a Federation operation for a service.

from javax.management import ObjectName
from java.lang import RuntimeException
import java.lang

# Return the fully qualified Name for a query
def coh_getFullyQualifiedName(query):
   beans = list(mbs.queryMBeans(ObjectName(query),None))
   if len(beans) == 0:
      raise RuntimeException('No results found')
   for bean in beans:
      return bean.getObjectName()

##
## Entry Point
##

connect("username","password", "t3://host:port")
# Enter the WebLogic Server administrator username, password, and the administration server host and port.

domainRuntime()

# if serviceName includes ':' it must be quoted
serviceName   = '"ExampleGAR:PartitionedPofCache"'
targetCluster = 'Boston'
command       = 'start'

# Other Federation commands could be stop, pause or replicateAll

federation = coh_getFullyQualifiedName('Coherence:type=Federation,service=' + serviceName + ',responsibility=Coordinator,*')
print 'Federation MBean is ' + str(federation)

print mbs.invoke(federation, command, [java.lang.String(targetCluster)], ["java.lang.String"])

Sample Output

Location changed to domainRuntime tree. This is a read-only tree 
with DomainMBean as the root MBean. 
For more help, use help('domainRuntime')

Federation MBean is Coherence:service="ExampleGAR:PartitionedPofCache",cluster=CoherenceCluster,responsibility=Coordinator,Location=storage1,type=Federation

Persisting Coherence Caches with WLST

WLST includes a set of commands that can be used to persist and recover cached data from disk. The commands are automatically available when WLST is connected to an Administration Server and switched to the domain runtime MBean server.

For more information about Coherence cache persistence, see Persisting Caches in Administering Oracle Coherence.

Table 12-2 lists WLST commands for persisting Coherence caches. Example 12-6 demonstrates using the commands.

Table 12-2 WLST Coherence Persistence Commands

coh_createSnapshot(snapshotName, serviceName)

  Persist the data partitions of a service to disk.

  • snapshotName – a user-defined name for the snapshot
  • serviceName – the name of the partitioned or federated cache service for which the snapshot is created

coh_recoverSnapshot(snapshotName, serviceName)

  Restore the data partitions of a service from disk. Any existing data in the caches of the service is lost.

  • snapshotName – the name of the snapshot to recover
  • serviceName – the name of the partitioned or federated cache service for which the snapshot was created

coh_listSnapshots(serviceName)

  Return a list of available snapshots.

  • serviceName – the name of the partitioned or federated cache service for which the snapshots are listed

coh_validateSnapshot(snapshotDir, verbose)

  Check whether a snapshot is complete and without error.

  • snapshotDir – the full path to a snapshot, including the snapshot name. The default snapshot location is USER_HOME/coherence/snapshot.
  • verbose – return more detailed validation information

coh_archiveSnapshot(snapshotName, serviceName)

  Save a snapshot to a central location. The location is specified in the snapshot archiver definition that is associated with a service.

  • snapshotName – the name of the snapshot to archive
  • serviceName – the name of the partitioned or federated cache service for which the snapshot was created

coh_retrieveArchivedSnapshot(snapshotName, serviceName)

  Retrieve an archived snapshot so that it can be recovered using the coh_recoverSnapshot command.

  • snapshotName – the name of the snapshot to retrieve
  • serviceName – the name of the partitioned or federated cache service for which the snapshot was archived

coh_listArchivedSnapshots(serviceName)

  Return a list of available archived snapshots.

  • serviceName – the name of the partitioned or federated cache service for which the snapshots were archived

coh_validateArchivedSnapshot(snapshotName, clusterName, serviceName, archiverName, verbose)

  Check whether an archived snapshot is complete and without error. The operational override configuration file containing the archiver definition must be available on the classpath.

  • snapshotName – the name of the archived snapshot to validate
  • clusterName – the name of the cluster where the partitioned or federated cache service is running
  • serviceName – the name of the partitioned or federated cache service for which the archived snapshot was created
  • archiverName – the name of the snapshot archiver definition that is used by the service
  • verbose – return more detailed validation information

coh_removeArchivedSnapshot(snapshotName, serviceName)

  Delete an archived snapshot from disk.

  • snapshotName – the name of the archived snapshot to delete
  • serviceName – the name of the partitioned or federated cache service for which the archived snapshot is deleted

coh_removeSnapshot(snapshotName, serviceName)

  Delete a snapshot from disk.

  • snapshotName – the name of the snapshot to delete
  • serviceName – the name of the partitioned or federated cache service for which the snapshot is deleted

Note:

If the serviceName includes a colon (:), then enclose it in double quotation marks. For example:
coh_createSnapshot('Snapshot1', '"ExampleGAR:PartitionedPofCache"')

Example 12-6 demonstrates using the persistence API from WLST to persist the caches for a partitioned cache service.

Example 12-6 WLST Example for Persisting Caches

serviceName   = '"ExampleGAR:ExamplesPartitionedPofCache"';
snapshotName  = 'new-snapshot'
 
connect('weblogic','password','t3://machine:7001')
 
# Must be in domain runtime tree otherwise no MBeans are returned
domainRuntime()
 
try:
   coh_listSnapshots(serviceName)
   coh_createSnapshot(snapshotName, serviceName)
   coh_listSnapshots(serviceName)
   coh_recoverSnapshot(snapshotName, serviceName)
   coh_archiveSnapshot(snapshotName, serviceName)
   coh_listArchivedSnapshots(serviceName)
   coh_removeSnapshot(snapshotName, serviceName)
   coh_retrieveArchivedSnapshot(snapshotName, serviceName)
   coh_recoverSnapshot(snapshotName, serviceName)
   coh_listSnapshots(serviceName)
except PersistenceException, rce:
   print 'PersistenceException: ' + str(rce)
except Exception,e:
   print 'Unknown Exception' + str(e)
else:
   print 'All operations complete'