12 Configuring and Managing Coherence Clusters

This chapter provides instructions for defining Coherence clusters in a WebLogic Server domain and for associating a Coherence cluster with multiple WebLogic Server clusters. The instructions in this chapter assume that a WebLogic Server domain has already been created.

Overview of Coherence Clusters

Coherence clusters consist of multiple managed Coherence server instances that distribute data in-memory to increase application scalability, availability, and performance. An application interacts with the data in a local cache and the distribution and backup of the data is automatically performed across cluster members.

Coherence clusters are different from WebLogic Server clusters: they use different clustering protocols and are configured separately. Multiple WebLogic Server clusters can be associated with a Coherence cluster, and a WebLogic Server domain typically contains a single Coherence cluster. Managed servers that are configured as Coherence cluster members are referred to as managed Coherence servers.

Managed Coherence servers can be explicitly associated with a Coherence cluster, or they can be associated with a WebLogic Server cluster that is itself associated with a Coherence cluster. Managed Coherence servers are typically set up in tiers that are based on their type: a data tier for storing data, an application tier for hosting applications, and a proxy tier that allows external clients to access caches.

Figure 12-1 shows a conceptual view of a Coherence cluster in a WebLogic Server domain:

Figure 12-1 Conceptual View of a Coherence Domain Topology


Setting Up a Coherence Cluster

A WebLogic Server domain typically contains a single Coherence cluster. The cluster is represented as a single system-level resource (CoherenceClusterSystemResource). A CoherenceClusterSystemResource instance is created using the WebLogic Server Administration Console or WLST.

A Coherence cluster can contain any number of managed Coherence servers. The servers can be standalone managed servers or can be part of a WebLogic Server cluster that is associated with a Coherence cluster. Typically, multiple WebLogic Server clusters are associated with a Coherence cluster. For details on creating WebLogic Server clusters for use by Coherence, see Creating Coherence Deployment Tiers.

Note:

Cloning a managed Coherence server does not clone its association with a Coherence cluster. The managed server will not be a member of the Coherence cluster. You must manually associate the cloned managed server with the Coherence cluster.

Define a Coherence Cluster Resource

To define a Coherence cluster resource (a WLST equivalent follows these steps):

  1. From the WebLogic Server Administration Console Domain Structure pane, expand Environment and click Coherence Clusters.

  2. From the Summary of Coherence Clusters page, click New.

  3. From the Create a Coherence Cluster Configuration page, enter a name for the cluster using the Name field. Click Next.

  4. From the Coherence Cluster Addressing section, select the clustering mode or keep the default settings. The default cluster listen port (7574) does not need to be changed for most clusters. For details on configuring the clustering mode, see Configure Cluster Communication.

  5. From the Coherence Cluster Members section, select the managed Coherence servers or WebLogic Server clusters that are to be part of the Coherence cluster, or skip this section if managed Coherence servers and WebLogic Server clusters have not yet been defined.

  6. Click Finish. The Summary of Coherence Clusters screen displays and the Coherence Clusters table lists the cluster resource.
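The same resource can be created with WLST in offline mode. The following minimal sketch uses the same calls demonstrated later in Setting Up Coherence with WLST (Offline); the domain path and cluster name are placeholders:

readDomain('/ORACLE_HOME/user_projects/domains/base_domain')
create('myCoherenceCluster', 'CoherenceClusterSystemResource')
updateDomain()
closeDomain()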

Create Standalone Managed Coherence Servers

Managed Coherence servers are managed server instances that are associated with a Coherence cluster. Managed Coherence servers join together to form a Coherence cluster and are often referred to as cluster members. Cluster members have seniority and the senior member performs cluster tasks (for example, issuing the cluster heartbeat).

Note:

  • Managed Coherence servers and standalone Coherence cluster members (those that are not managed within a WebLogic Server domain) can join the same cluster. However, standalone cluster members cannot be managed from within a WebLogic Server domain; operational configuration and application lifecycles must be manually administered and monitored.

  • The Administration Server is typically not used as a managed Coherence server in a production environment.

Managed Coherence servers are distinguished by their role in the cluster. A best practice is to use different managed server instances (and preferably different WebLogic Server clusters) for each cluster role.

  • storage-enabled – a managed Coherence server that is responsible for storing data in the cluster. Coherence applications are packaged as Grid ARchives (GAR) and deployed on storage-enabled managed Coherence servers.

  • storage-disabled – a managed Coherence server that is not responsible for storing data and is used to host Coherence applications (cache clients). A Coherence application GAR is packaged within an EAR and deployed on storage-disabled managed Coherence servers.

  • proxy – a managed Coherence server that is storage-disabled and allows external clients (non-cluster members) to use a cache. A Coherence application GAR is deployed on managed Coherence proxy servers.

To create managed Coherence servers (a WLST equivalent follows these steps):

  1. From the WebLogic Server Administration Console Domain Structure pane, expand Environment and click Servers.

  2. Click New to create a new managed server.

  3. From the Create a New Server page, enter the server's properties as required.

  4. Select whether to make the server part of a WebLogic Server cluster. For details on creating WebLogic Server clusters for use as a Coherence deployment tier, see Creating Coherence Deployment Tiers.

  5. Click Finish. The Summary of Servers page displays and the new server is listed.

  6. Select the new server to configure its settings.

  7. From the Coherence tab, use the Coherence Cluster drop-down list to select the Coherence cluster to associate with this managed server. By default, the managed server is a storage-enabled Coherence member, as indicated by the Local Storage Enabled field. For details on changing managed Coherence server settings, see Configuring Managed Coherence Servers.

  8. Click Save. The Summary of Servers page displays.

  9. From the Summary of Servers page, click the Control tab and start the server.
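The same procedure can be scripted with WLST in offline mode. The following minimal sketch creates a server and associates it with an existing Coherence cluster resource; the server name, listen address and port, and cluster name are placeholders:

readDomain('/ORACLE_HOME/user_projects/domains/base_domain')

create('coh_server1', 'Server')
cd('Server/coh_server1')
set('ListenPort', 7005)
set('ListenAddress', '192.168.0.100')
# Associate the server with the Coherence cluster resource.
set('CoherenceClusterSystemResource', 'myCoherenceCluster')

updateDomain()
closeDomain()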

Creating Coherence Deployment Tiers

Coherence supports different topologies within a WebLogic Server domain to provide varying levels of performance, scalability, and ease of use. For example, during development, a single standalone managed server instance may be used as both a cache server and a cache client. The single-server topology is easy to set up and use, but does not provide optimal performance or scalability. For production, Coherence is typically set up using WebLogic Server clusters: one WebLogic Server cluster is used as a Coherence data tier and hosts one or more cache servers; a different WebLogic Server cluster is used as a Coherence application tier and hosts one or more cache clients; and (if required) additional WebLogic Server clusters are used for the Coherence proxy tier, which hosts one or more managed Coherence proxy servers, and the Coherence extend client tier, which hosts extend clients. The tiered topology approach provides optimal scalability and performance.

The instructions in this section use both the Clusters Settings page and the Servers Settings page in the WebLogic Server Administration Console to create Coherence deployment tiers. WebLogic Server clusters and managed server instances can also be associated with a Coherence cluster resource using the ClusterMBean and ServerMBean MBeans, respectively. Managed servers that are associated with a WebLogic Server cluster inherit the cluster's Coherence settings. However, the settings may not be reflected in the Servers Settings page.

Configuring and Managing a Coherence Data Tier

A Coherence data tier is a WebLogic Server cluster that is associated with a Coherence cluster and hosts any number of storage-enabled managed Coherence servers. Managed Coherence servers in the data tier store and distribute data (both primary and backup) on the cluster. The number of managed Coherence servers that are required in a data tier depends on the expected amount of data that is stored in the Coherence cluster and the amount of memory available on each server. In addition, a cluster must contain a minimum of four physical computers to avoid the possibility of data loss during a computer failure.
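For example, if a cluster must hold 10 GB of primary data and one backup copy is kept (the default), roughly 20 GB of in-memory storage is required; if each managed Coherence server can safely dedicate about 4 GB of heap to cache data, then at least five storage-enabled servers are needed, plus enough spare capacity to absorb the data of a failed server. These figures are illustrative only; use the production checklist referenced below for the full sizing methodology.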

Coherence artifacts (such as Coherence configuration files, POF serialization classes, filters, entry processors, and aggregators) are packaged as a GAR and deployed on the data tier. For details on packaging and deploying Coherence applications, see Developing Oracle Coherence Applications for Oracle WebLogic Server. For details on calculating cache size and hardware requirements, see the production checklist in Administering Oracle Coherence.

Create a Coherence Data Tier

To create a Coherence data tier:

  1. Create a WebLogic Server cluster. For details, see Chapter 10, "Setting up WebLogic Clusters."

  2. From the Summary of Clusters page, select the cluster from the Clusters table to configure it.

  3. From the Coherence tab, use the Coherence Cluster drop-down list to select the Coherence cluster to associate with this WebLogic Server cluster. By default, the managed servers assigned to this WebLogic Server cluster are storage-enabled Coherence members, as indicated by the Local Storage Enabled field.

Create Managed Coherence Servers for a Data Tier

To create managed servers for a Coherence data tier (a WLST sketch follows these steps):

  1. From the WebLogic Server Administration Console Domain Structure pane, expand Environment and click Servers.

  2. Click New to create a new managed server.

  3. From the Create a New Server page, enter the server's properties as required.

  4. Click the Yes option to add the server to an existing cluster and use the drop-down list to select the data tier WebLogic Server cluster. The managed server inherits the Coherence settings from the data tier WebLogic Server cluster.

  5. Click Finish. The Summary of Servers page displays and the new server is listed.

  6. Repeat these steps to create additional managed servers as required.

  7. From the Control tab, select the servers to start and click Start.
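The data tier can also be assembled with WLST in offline mode. The following minimal sketch assumes that the managed servers and the Coherence cluster resource already exist; the names are placeholders, and the complete workflow appears in Setting Up Coherence with WLST (Offline):

cd('/')
create('DataTier', 'Cluster')
assign('Server', 'coh_server1,coh_server2', 'Cluster', 'DataTier')

cd('/')
assign('Cluster', 'DataTier', 'CoherenceClusterSystemResource', 'myCoherenceCluster')
cd('/CoherenceClusterSystemResource/myCoherenceCluster')
set('Target', 'DataTier')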

Configuring and Managing a Coherence Application Tier

A Coherence application tier is a WebLogic Server cluster that is associated with a Coherence cluster and hosts any number of storage-disabled managed Coherence servers. Managed Coherence servers in the application tier host applications (cache factory clients) and are Coherence cluster members. Multiple application tiers can be created for different applications.

Clients in the application tier are deployed as EARs and implemented using Java EE standards such as servlet, JSP, and EJB. Coherence artifacts (such as Coherence configuration files, POF serialization classes, filters, entry processors, and aggregators) must be packaged as a GAR and also deployed within an EAR. For details on packaging and deploying Coherence applications, see Developing Oracle Coherence Applications for Oracle WebLogic Server.

Create a Coherence Application Tier

To create a Coherence application tier (a WLST sketch follows these steps):

  1. Create a WebLogic Server cluster. For details, see Chapter 10, "Setting up WebLogic Clusters."

  2. From the Summary of Clusters page, select the cluster from the Clusters table to configure it.

  3. From the Coherence tab, use the Coherence Cluster drop-down list to select the Coherence cluster to associate with this WebLogic Server cluster.

  4. Click the Local Storage Enabled check box to remove the check mark and disable storage on the application tier. The managed Coherence servers assigned to this WebLogic Server cluster will be storage-disabled Coherence members (cache factory clients). Servers in the application tier should never be used to store cache data. Storage-enabled servers require resources to store and distribute data and can adversely affect client performance.

  5. Click Save.
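Disabling storage on the tier can also be scripted. The following WLST (offline) sketch is illustrative only: the cluster and resource names are placeholders, and the CoherenceTier child bean and its LocalStorageEnabled attribute are assumptions for this sketch; verify the exact bean name in the MBean Reference for Oracle WebLogic Server:

cd('/')
create('AppTier', 'Cluster')
assign('Cluster', 'AppTier', 'CoherenceClusterSystemResource', 'myCoherenceCluster')

# Disable Coherence storage for every server in this WebLogic Server cluster.
# The CoherenceTier bean name and attribute are assumed; verify before use.
cd('Cluster/AppTier')
create('AppTier', 'CoherenceTier')
cd('CoherenceTier/AppTier')
set('LocalStorageEnabled', 'false')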

Create Managed Coherence Servers for an Application Tier

To create managed servers for a Coherence application tier:

  1. From the WebLogic Server Administration Console Domain Structure pane, expand Environment and click Servers.

  2. Click New to create a new managed server.

  3. From the Create a New Server page, enter the server's properties as required.

  4. Click the Yes option to add the server to an existing cluster and use the drop-down list to select the application tier WebLogic Server cluster. The managed server inherits the Coherence settings from the application tier WebLogic Server cluster.

  5. Click Finish. The Summary of Servers page displays and the new server is listed.

  6. Repeat these steps to create additional managed servers as required.

  7. From the Control tab, select the servers to start and click Start.

Configuring and Managing a Coherence Proxy Tier

A Coherence proxy tier is a WebLogic Server cluster that is associated with a Coherence cluster and hosts any number of managed Coherence proxy servers. Managed Coherence proxy servers allow Coherence*Extend clients to use Coherence caches without being cluster members. The number of managed Coherence proxy servers that are required in a proxy tier depends on the number of expected clients. At least two proxy servers must be created to allow for load balancing; however, additional servers may be required when supporting a large number of client connections and requests.

For details on Coherence*Extend and creating extend clients, see Developing Remote Clients for Oracle Coherence.

Create a Coherence Proxy Tier

To create a Coherence proxy tier:

  1. Create a WebLogic Server cluster. For details, see Chapter 10, "Setting up WebLogic Clusters."

  2. From the Summary of Clusters page, select the cluster from the Clusters table to configure it.

  3. From the Coherence tab, use the Coherence Cluster drop-down list to select the Coherence cluster to associate with this WebLogic Server cluster.

  4. Click the Local Storage Enabled check box to remove the check mark and disable storage on the proxy tier. Proxy servers should never be used to store cache data. Storage-enabled cluster members can be adversely affected by a proxy service, which requires additional resources to handle client loads.

  5. Click Save.

Create Managed Coherence Servers for a Proxy Tier

To create managed servers for a Coherence proxy tier:

  1. From the WebLogic Server Administration Console Domain Structure pane, expand Environment and click Servers.

  2. Click New to create a new managed server.

  3. From the Create a New Server page, enter the server's properties as required.

  4. Click the Yes option to add the server to an existing cluster and use the drop-down list to select the proxy tier WebLogic Server cluster. The managed server inherits the Coherence settings from the proxy tier WebLogic Server cluster.

  5. Click Finish. The Summary of Servers page displays and the new server is listed.

  6. Repeat these steps to create additional managed servers as required.

  7. From the Control tab, select the servers to start and click Start.

Configure Coherence Proxy Services

Coherence proxy services are clustered services that manage remote connections from extend clients. Proxy services are defined and configured in a coherence-cache-config.xml file within the <proxy-scheme> element. The definition includes, among other settings, the TCP listener address (IP, or DNS name, and port) that is used to accept client connections. For details on the <proxy-scheme> element, see Developing Applications with Oracle Coherence. There are two ways to set up proxy services: using a name service and using an address provider. The name service approach provides an efficient setup and is typically preferred in a Coherence proxy tier.

Using a Name Service

A name service is a specialized listener that allows extend clients to connect to a proxy service by name. Clients connect to the name service, which returns the addresses of all proxy services on the cluster.

Note:

If a domain includes multiple tiers (for example, a data tier, an application tier, and a proxy tier), then the proxy tier should be started first, before a client can connect to the proxy.

A name service automatically starts on port 7574 (the same default port that the TCMP socket uses) when a proxy service is configured on a managed Coherence proxy server. The reuse of the same port minimizes the number of ports that are used by Coherence and simplifies firewall configuration.

To configure a proxy service and enable the name service on the default TCMP port:

  1. Edit the coherence-cache-config.xml file and create a <proxy-scheme> definition; do not explicitly define a socket address. The following example defines a proxy service that is named TcpExtend and automatically enables a cluster name service. A proxy address and ephemeral port are automatically assigned and registered with the cluster's name service.

    ...
    <caching-schemes>
       ...
       <proxy-scheme>
          <service-name>TcpExtend</service-name>
          <autostart>true</autostart>
       </proxy-scheme>
    </caching-schemes>
    ...
    
  2. Deploy the coherence-cache-config.xml file to each managed Coherence proxy server in the Coherence proxy tier. Typically, the coherence-cache-config.xml file is included in a GAR file. However, for the proxy tier, use a cluster cache configuration file to override the coherence-cache-config.xml file that is located in the GAR. This allows a single GAR to be deployed to the cluster and the proxy tier. For details on using a cluster cache configuration file, see Overriding a Cache Configuration File.

To connect to a name service, a client's coherence-cache-config.xml file must include a <name-service-addresses> element within the <tcp-initiator> element of a remote cache or remote invocation definition. The <name-service-addresses> element provides the socket address of a name service that is on a managed Coherence proxy server. The following example defines a remote cache definition and specifies a name service listening at host 192.168.1.5 on port 7574. The client automatically connects to the name service and gets a list of all managed Coherence proxy servers that contain a TcpExtend proxy service. The cache on the cluster must also be called TcpExtend. In this example, a single address is provided; a second name service address could be provided in case of a failure at the primary address. For details on client configuration and proxy service load balancing, see Developing Remote Clients for Oracle Coherence.

<remote-cache-scheme>
   <scheme-name>extend-dist</scheme-name>
   <service-name>TcpExtend</service-name>
   <initiator-config>
      <tcp-initiator>
         <name-service-addresses>
            <socket-address>
               <address>192.168.1.5</address>
               <port>7574</port>
            </socket-address>
         </name-service-addresses>
      </tcp-initiator>
   </initiator-config>
</remote-cache-scheme>

The name service listens on the cluster port (7574) by default and is available on all machines running Coherence cluster nodes. If the target cluster uses the default TCMP cluster port, then the port can be omitted from the configuration.

Note:

  • The <service-name> value must match the proxy scheme's <service-name> value; otherwise, a <proxy-service-name> element must also be provided in a remote cache and remote invocation scheme that contains the value of the <service-name> element that is configured in the proxy scheme.

  • In previous Coherence releases, the name service automatically listened on a member's unicast port instead of the cluster port.

  • An address provider can also be used to specify name service addresses.

Using an Address Provider

An address provider specifies the TCP listener address (IP, or DNS name, and port) for a proxy service. The listener address can be explicitly defined within a <proxy-scheme> element in a coherence-cache-config.xml file; however, the preferred approach is to define address providers in a cluster configuration file and then reference the addresses from within a <proxy-scheme> element. The latter approach decouples deployment configuration from application configuration and allows network addresses to change without having to update a coherence-cache-config.xml file.

To use an address provider:

  1. Use the Address Providers tab on a Coherence cluster's Settings page to create address provider definitions. The CoherenceAddressProvidersBean MBean also exposes the address provider definition. An address provider contains a unique name in addition to the listener address for a proxy service. For example, an address provider called proxy1 might specify host 192.168.1.5 and port 9099 as the listener address.

  2. Repeat step 1 and create an address provider definition for each proxy service (at least one for each managed Coherence proxy server).

  3. For each managed Coherence proxy server, edit the coherence-cache-config.xml file and create a <proxy-scheme> definition and reference an address provider definition, by name, in an <address-provider> element. The following example defines a proxy service that references an address provider that is named proxy1:

    ...
    <caching-schemes>
       <proxy-scheme>
          <service-name>TcpExtend</service-name>
          <acceptor-config>
             <tcp-acceptor>
                <address-provider>proxy1</address-provider>
             </tcp-acceptor>
          </acceptor-config>
          <autostart>true</autostart>
       </proxy-scheme>
    </caching-schemes>
    ...
    
  4. Deploy each coherence-cache-config.xml file to its respective managed Coherence proxy server. Typically, the coherence-cache-config.xml file is included in a GAR file. However, for the proxy tier, use a cluster cache configuration file. The cluster cache configuration file overrides the coherence-cache-config.xml file that is located in the GAR. This allows the same GAR to be deployed to all cluster members, but then use unique settings that are specific to a proxy tier. For details on using a cluster cache configuration file, see Overriding a Cache Configuration File.

To connect to a proxy service, a client's coherence-cache-config.xml file must include a <remote-addresses> element, within the <tcp-initiator> element of a remote cache or remote invocation definition, that includes the address provider name. For example:

<remote-cache-scheme>
   <scheme-name>extend-dist</scheme-name>
   <service-name>TcpExtend</service-name>
   <initiator-config>
      <tcp-initiator>
         <remote-addresses>
            <address-provider>proxy1</address-provider>
         </remote-addresses>
      </tcp-initiator>
   </initiator-config>
</remote-cache-scheme>

Clients can also explicitly specify remote addresses. The following example defines a remote cache definition and specifies a proxy service on host 192.168.1.5 and port 9099. The client automatically connects to the proxy service and uses a cache on the cluster named TcpExtend. In this example, a single address is provided. A second address could be provided in case of a failure at the primary address. For details on client configuration and proxy service load balancing, see Developing Remote Clients for Oracle Coherence.

<remote-cache-scheme>
   <scheme-name>extend-dist</scheme-name>
   <service-name>TcpExtend</service-name>
   <initiator-config>
      <tcp-initiator>
         <remote-addresses>
            <socket-address>
               <address>192.168.1.5</address>
               <port>9099</port>
            </socket-address>
         </remote-addresses>
      </tcp-initiator>
   </initiator-config>
</remote-cache-scheme>

Configuring a Coherence Cluster

A Coherence cluster resource exposes several cluster settings that can be configured for a specific domain. The tasks in the sections that follow describe how to configure these settings.

Many of the settings use default values that can be changed as required. The following instructions assume that a cluster resource has already been created. For details on creating a cluster resource, see Setting Up a Coherence Cluster. This section does not include instructions for securing Coherence. For security details, see Securing Oracle Coherence.

Use the tabs on the Coherence Cluster Settings page to configure cluster settings. The CoherenceClusterSystemResource MBean and its associated CoherenceClusterResource MBean expose cluster settings. The CoherenceClusterResource MBean provides access to multiple MBeans for configuring a Coherence cluster.

Adding and Removing Coherence Cluster Members

Any existing managed server instance can be added to a Coherence cluster. In addition, managed Coherence servers can be removed from a cluster. Adding and removing cluster members from the Coherence Cluster Settings page is a shortcut that can be used instead of explicitly configuring each instance. However, when adding existing managed server instances, default Coherence settings may need to be changed. For details on configuring managed Coherence servers, see Configuring Managed Coherence Servers.

Use the Member tab on the Coherence Cluster Settings page to select which managed servers or WebLogic Server clusters are associated with a Coherence cluster. When selecting a WebLogic Server cluster, it is recommended that all the managed servers in the WebLogic Server cluster be associated with a Coherence cluster. A CoherenceClusterSystemResource exposes all managed Coherence servers as targets. A CoherenceMemberConfig MBean is created for each managed server and exposes the Coherence cluster member parameters.
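As an alternative to the console, an existing managed server can be associated with a Coherence cluster through its ServerMBean. The following WLST (online) sketch uses placeholder names, and the setter is assumed from the CoherenceClusterSystemResource attribute described in this section:

edit()
startEdit()
cd('/Servers/coh_server3')
# Placeholder server and cluster names; the setter name is assumed
# from the ServerMBean CoherenceClusterSystemResource attribute.
cmo.setCoherenceClusterSystemResource(getMBean('/CoherenceClusterSystemResources/myCoherenceCluster'))
save()
activate()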

Setting Advanced Cluster Configuration Options

WebLogic Server MBeans expose a subset of Coherence operational settings that are sufficient for most use cases and are detailed throughout this chapter. These settings are available natively through the WLST utility and the WebLogic Server Administration Console. For more advanced use cases, use an external Coherence cluster configuration file (tangosol-coherence-override.xml), which provides full control over Coherence operational settings.

Note:

The use of an external cluster configuration file is only recommended for operational settings that are not available through the provided MBeans. That is, avoid configuring the same operational settings in both an external cluster configuration file and through the MBeans.

Use the General tab on the Coherence Cluster Settings page to enter the path and name of a cluster configuration file that is located on the administration server or use the CoherenceClusterSystemResource MBean. For details on using a Coherence cluster configuration file, see Developing Applications with Oracle Coherence, which also provides usage instructions for each element and a detailed schema reference.

Checking Which Operational Configuration is Used

Coherence generates an operational configuration from WebLogic Server MBeans, a Coherence cluster configuration file (if imported), and Coherence system properties (if set). The result is written to the managed Coherence server log if the system property weblogic.debug.DebugCoherence=true is set. If you use the WebLogic Server start-up scripts, you can use the JAVA_PROPERTIES environment variable. For example:

export JAVA_PROPERTIES=-Dweblogic.debug.DebugCoherence=true

Configure Cluster Communication

Cluster members communicate using the Tangosol Cluster Management Protocol (TCMP), which operates independently of the WebLogic Server cluster protocol. TCMP is an IP-based protocol for discovering cluster members, managing the cluster, provisioning services, and transmitting data. TCMP can be transmitted over different transport protocols and can use both multicast and unicast. By default, TCMP is transmitted over UDP and uses unicast. The use of different transport protocols and multicast requires support from the underlying network.

Use the General tab on the Coherence Cluster Settings page to configure cluster communication. The CoherenceClusterParamsBean and CoherenceClusterWellKnownAddressesBean MBeans expose the cluster communication parameters.

Changing the Coherence Cluster Mode

Coherence clusters support both unicast and multicast communication. Multicast must be explicitly configured and is not the default option. The use of multicast should be avoided in environments that do not properly support or allow multicast. The use of unicast disables all multicast transmission and automatically uses the Coherence Well Known Addresses (WKA) feature to discover and communicate between cluster members. See "Specifying Well Known Address Machines".

For details on using multicast, unicast, and WKA in Coherence, see Developing Applications with Oracle Coherence.

Selecting Unicast For the Coherence Cluster Mode

To use unicast for cluster communication, select Unicast from the Clustering Mode drop-down list and enter a cluster port or keep the default port, which is 7574. For most clusters, the port does not need to be changed. However, changing the port is required when multiple Coherence clusters run on the same computer. If a different port is required, then the recommended best practice is to select a value between 1024 and 8999.

Specifying Well Known Address Machines

When unicast is enabled, use the Well Known Addresses tab to explicitly configure WKA machine addresses. If no addresses are defined for a cluster, then addresses are automatically assigned. The recommended best practice is to always explicitly specify WKA machine addresses when using unicast.

In addition, if a domain contains multiple managed Coherence servers that are located on different machines, then at least one non-local WKA machine address must be defined to ensure that a Coherence cluster is formed; otherwise, individual clusters are formed on each machine. If the managed Coherence servers are all running on the same machine, then a cluster can be created without specifying a non-local listen address.

Note:

WKA machine addresses must be explicitly defined in production environments. In production mode, a managed Coherence server fails to start if WKA machine addresses have not been explicitly defined. Automatically assigned WKA machine addresses are a design-time convenience and should only be used during development on a single server.

Selecting Multicast For the Coherence Cluster Mode

To use multicast for cluster communication, select Multicast from the Clustering Mode drop-down list and enter a cluster port and multicast listen address. For most clusters, the default cluster port (7574) does not need to be changed. However, changing the port is required when multiple Coherence clusters run on the same computer or when multiple clusters use the same multicast address. If a different port is required, then the recommended best practice is to select a value between 1024 and 8999.

Use the Time To Live field to designate how far multicast packets can travel on a network. The time-to-live value (TTL) is expressed in terms of how many hops a packet survives; each network interface, router, and managed switch is considered one hop. The TTL value should be set to the lowest integer value that works.
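For scripted configuration, the multicast settings can be set on the cluster parameters bean. The following WLST (offline) sketch is illustrative only; the MulticastListenAddress and TimeToLive attribute names are assumptions, so verify them in the MBean Reference for Oracle WebLogic Server:

cd('/')
cd('CoherenceClusterSystemResource/myCoherenceCluster/CoherenceResource/myCoherenceCluster/CoherenceClusterParams/NO_NAME_0')
set('ClusteringMode', 'multicast')
set('ClusterListenPort', 7574)
set('MulticastListenAddress', '237.0.0.101')  # assumed attribute name; placeholder address
set('TimeToLive', 4)                          # assumed attribute name; use the lowest value that works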

Changing the Coherence Cluster Transport Protocol

The following transport protocols are supported for TCMP and are selected using the Transport drop-down list; a WLST sketch follows the list. The CoherenceClusterParamsBean MBean exposes the transport protocol setting.

  • User Datagram Protocol (UDP) – UDP is the default TCMP transport protocol and is used for both multicast and unicast communication. If multicast is disabled, all communication is done using UDP unicast.

  • Transmission Control Protocol (TCP) – The TCP transport protocol is used in network environments that favor TCP communication. All TCMP communication uses TCP if unicast is enabled. If multicast is enabled, TCP is only used for unicast communication and UDP is used for multicast communication.

  • Secure Sockets Layer (SSL) – The SSL/TCP transport protocol is used in network environments that require highly secure communication between cluster members. SSL is only supported with unicast communication; ensure multicast is disabled when using SSL. The use of SSL requires additional configuration. For details on securing Coherence within WebLogic Server, see Securing Oracle Coherence.

  • TCP Message Bus (TMB) – The TMB protocol provides support for TCP/IP.

  • TMB with SSL (TMBS) – TMBS requires the use of an SSL socket provider. See Developing Applications with Oracle Coherence.

  • Sockets Direct Protocol Message Bus (SDMB) – The Sockets Direct Protocol (SDP) provides support for stream connections. SDMB is only valid on Exalogic.

  • SDMB with SSL (SDMBS) – SDMBS is only available for Oracle Exalogic systems and requires the use of an SSL socket provider. See Developing Applications with Oracle Coherence.

  • Infiniband Message Bus (IMB) – IMB uses an optimized protocol based on native InfiniBand verbs. IMB is only valid on Exalogic.

  • Lightweight Message Bus (LWMB) – LWMB uses MSGQLT/LWIPC libraries with IMB for Infinibus communications. LWMB is only available for Oracle Exalogic systems and is the default transport for both service and unicast communication. LWMB is automatically used as long as TCMP has not been configured with SSL.
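For scripted configuration, the transport can be set on the same cluster parameters bean. The following WLST (offline) sketch is illustrative; the value string ('tmb' in this example) is an assumption and should match the option shown in the console:

cd('/')
cd('CoherenceClusterSystemResource/myCoherenceCluster/CoherenceResource/myCoherenceCluster/CoherenceClusterParams/NO_NAME_0')
set('Transport', 'tmb')  # assumed value string for the TMB transport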

Overriding a Cache Configuration File

A Coherence cache configuration file defines the caches that are used by an application. Typically, a cache configuration file is included in a GAR module. A GAR is deployed to all managed Coherence servers in the data tier and can also be deployed as part of an EAR to the application tier. The GAR ensures that the cache configuration is available on every Oracle Coherence cluster member. However, there are use cases that require a different cache configuration file to be used on specific managed Coherence servers. For example, a proxy tier requires access to all artifacts in the GAR but needs a different cache configuration file that defines the proxy services to start.

A cache configuration file can be associated with WebLogic clusters or managed Coherence servers at runtime. In this case, the cache configuration overrides the cache configuration file that is included in a GAR. You can also omit the cache configuration file from a GAR file and assign it at runtime. To override a cache configuration file at runtime, the cache configuration file must be bound to a JNDI name. The JNDI name is defined using the override-property attribute of the <cache-configuration-ref> element. The element is located in the coherence-application.xml file that is packaged in a GAR file. For details on the coherence-application.xml file, see Developing Oracle Coherence Applications for Oracle WebLogic Server.

The following example defines an override property named cache-config/ExamplesGar that can be used to override the META-INF/example-cache-config.xml cache configuration file in the GAR:

...
<cache-configuration-ref override-property="cache-config/ExamplesGar">
   META-INF/example-cache-config.xml</cache-configuration-ref>
...

At runtime, use the Cache Configurations tab on the Coherence Cluster Settings page to override a cache configuration file. You must supply the same JNDI name that is defined in the override-property attribute. The cache configuration can be located on the administration server or at a URL. In addition, you can choose to import the file to the domain or use it from the specified location. Use the Targets tab to specify which Oracle Coherence cluster members use the cache configuration file.

The following WLST (online) example demonstrates how a cluster cache configuration can be overridden using a CoherenceClusterSystemResource object.

edit()
startEdit()
cd('CoherenceClusterSystemResources/myCoherenceCluster/CoherenceCacheConfigs')
create('ExamplesGar', 'CoherenceCacheConfig')
cd('ExamplesGar')
set('JNDIName', 'ExamplesGar')
cmo.importCacheConfigurationFile('/tmp/cache-config.xml')
cmo.addTarget(getMBean('/Servers/coh_server'))
save()
activate()

The WLST example creates a CoherenceCacheConfig resource as a child of the CoherenceClusterSystemResource object. The script then imports the cache configuration file to the domain and specifies the JNDI name to which the resource binds. The file must exist at the path provided. Lastly, the cache configuration is targeted to a specific server. The ability to target a cache configuration resource to specific servers or WebLogic Server clusters allows an application to load different configurations based on the context of the server (cache servers, cache clients, proxy servers, and so on).

The cache configuration resource can also be configured as a URL:

edit()
startEdit()
cd('CoherenceClusterSystemResources/myCoherenceCluster/CoherenceCacheConfigs')
create('ExamplesGar', 'CoherenceCacheConfig')
cd('ExamplesGar')
set('JNDIName', 'ExamplesGar')
set('CacheConfigurationFile', 'http://cache.locator/app1/cache-config.xml')
cmo.addTarget(getMBean('/Servers/coh_server'))
save()
activate()

Configuring Coherence Logging

Configure cluster logging using the WebLogic Server Administration Console's Logging tab that is located on the Coherence Cluster Settings page or use the CoherenceLoggingParamsBean MBean. For details on WebLogic Server logging, see Configuring Log Files and Filtering Log Messages for Oracle WebLogic Server. Coherence logging configuration includes:

  • Disabling and enabling logging

  • Changing the default logger name

    WebLogic Server provides two loggers that can be used for Coherence logging: the default com.oracle.coherence logger and the com.oracle.wls logger. The com.oracle.wls logger is generic and uses the same handler that is configured for WebLogic Server log output. The logger does not allow for Coherence-specific configuration. The com.oracle.coherence logger allows Coherence-specific configuration, which includes the use of different handlers for Coherence logs.

    Note:

    If logging is configured through a standard logging.properties file, then make sure the file uses the same logger name that is currently configured for Coherence logging.
  • Changing the log message format

    Add or remove information from a log message. A log message can include static text as well as parameters that are replaced at run time (for example, {date}). For details on supported log message parameters, see Developing Applications with Oracle Coherence.

Configuring Managed Coherence Servers

Managed Coherence servers expose several cluster member settings that can be configured for a specific domain. The tasks in the sections that follow describe how to configure a managed Coherence server.

Many of the settings use default values that can be changed as required. The instructions in this section assume that a managed server has already been created and associated with a Coherence cluster. For details on creating managed Coherence servers, see Create Standalone Managed Coherence Servers.

Use the Coherence tab on a managed server's Settings page to configure Coherence cluster member settings. A CoherenceMemberConfig MBean is created for each managed server and exposes the Coherence cluster member parameters.

Configure Coherence Cluster Member Storage Settings

The storage settings for managed Coherence servers can be configured as required. Enabling storage on a server means that the server is responsible for storing a portion of both primary and backup data for the Coherence cluster. Servers that are intended to store data must be configured as storage-enabled servers. Servers that host cache applications, and cluster proxy servers, should be configured as storage-disabled servers and are typically not responsible for storing data, because sharing resources between storage and application or proxy workloads can adversely affect application and cluster performance.

Note:

If a managed Coherence server is part of a WebLogic Server cluster, then the Coherence storage settings that are specified on the WebLogic Server cluster override the storage settings on the server. The storage setting is an exception to the general rule that server settings override WebLogic Server cluster settings. Moreover, the final runtime configuration is not reflected in the console. Therefore, a managed Coherence server may show that storage is disabled even though storage has been enabled through the Coherence tab for a WebLogic Server cluster. Always check the WebLogic Server cluster settings to determine whether storage has been enabled for a managed Coherence server.

Use the following fields on the Coherence tab to configure storage settings:

  • Local Storage Enabled – This field specifies whether a managed Coherence server stores data. If this option is not selected, then the managed Coherence server does not store data and is considered a cluster client.

  • Coherence Web Local Storage Enabled – This field specifies whether a managed Coherence server stores HTTP session data. For details on using Coherence to store session data, see Administering HTTP Session Management with Oracle Coherence*Web.

Configure Coherence Cluster Member Unicast Settings

Managed Coherence servers communicate with each other using unicast (point-to-point) communication. Unicast is used even if the cluster is configured to use multicast communication. For details on unicast in Coherence, see Developing Applications with Oracle Coherence.

Use the following fields on the Coherence tab to configure unicast settings:

  • Unicast Listen Address – This field specifies the address on which the server listens for unicast communication. If no address is provided, then a routable IP address is automatically selected. The address field also supports Classless Inter-Domain Routing (CIDR) notation (for example, 192.168.1.0/24), which uses a subnet and mask pattern to select a matching local IP address to bind to instead of specifying an exact IP address.

  • Unicast Listen Port – This field specifies the ports on which the server listens for unicast communication. A cluster member uses two unicast UDP ports, which are automatically assigned from the operating system's available ephemeral port range (as indicated by a value of 0). The default value ensures that Coherence cannot accidentally cause port conflicts with other applications. However, if a firewall is required between cluster members (an atypical configuration), then a port can be manually assigned, and a second port is automatically selected (port + 1).

  • Unicast Port Auto Adjust – This field specifies whether the port automatically increments if the port is already in use.

Removing a Coherence Management Proxy

A Coherence cluster can be managed from any JMX-compatible client, such as JConsole or Java VisualVM. The management information includes runtime statistics and operational settings. The management information is specific to the Coherence management domain and is different from the management information that is provided for Coherence as part of the com.bea management domain. For a detailed reference of Coherence MBeans, see Managing Oracle Coherence.

One cluster member is automatically selected as a management proxy and is responsible for aggregating the management information from all other cluster members. The Administration Server for the WebLogic Server domain then integrates the management information and makes it available through the domain runtime MBean server. If the cluster member that is acting as the management proxy is not operational, then another cluster member is automatically selected as the management proxy.

Use the Coherence Management Node field on the Coherence tab of a managed Coherence server to specify whether a cluster member can be selected as a management proxy. By default, all cluster members can be selected as the management proxy. Deselect the option only if you want to prevent a cluster member from being selected as the management proxy.

At runtime, use a JMX client to connect to the domain runtime MBean server where the Coherence management information is located within the Coherence management namespace. For details about connecting to the domain runtime MBean server, see Developing Custom Management Utilities Using JMX for Oracle WebLogic Server.

Configure Coherence Cluster Member Identity Settings

A set of identifiers are used to give a managed Coherence server an identity within the cluster. The identity information is used to differentiate servers and conveys the servers' role within the cluster. Some identifiers are also used by the cluster service when performing cluster tasks. Lastly, the identity information is valuable when displaying management information (for example, JMX) and facilitates interpreting log entries.

Use the following fields on the Coherence tab to configure member identity settings:

  • Site Name – This field specifies the name of the geographic site that hosts the managed Coherence server. The server's domain name is used if no name is specified. For WAN clustering, this value identifies the datacenter where the member is located. The site name can be used as the basis for intelligent routing, load balancing, and disaster recovery planning (that is, the explicit backing up of data on separate geographic sites). The site name also helps determine where to back up data when using distributed caching and the default partition assignment strategy. Lastly, the name is useful for displaying management information (for example, JMX) and interpreting log entries.

  • Rack Name – This field specifies the name of the location within a geographic site where the managed Coherence server is hosted; this is often a cage, rack, or bladeframe identifier. The rack name can be used as the basis for intelligent routing, load balancing, and disaster recovery planning (that is, the explicit backing up of data on separate bladeframes). The rack name also helps determine where to back up data when using distributed caching and the default partition assignment strategy. Lastly, the name is useful for displaying management information (for example, JMX) and interpreting log entries.

  • Role Name – This field specifies the managed Coherence server's role in the cluster. The role name allows an application to organize cluster members into specialized roles, such as storage-enabled or storage-disabled.

    If a managed Coherence server is part of a WebLogic Server cluster, then the cluster name is automatically used as the role name and this field cannot be set. If the server is not part of a WebLogic Server cluster and no name is provided, then the default role name WebLogicServer is used.

Configure Coherence Cluster Member Logging Levels

Logging levels can be configured for each managed Coherence server. The default log level is D5 and can be changed using the server's Logging tab. For details on WebLogic Server logging, see Configuring Log Files and Filtering Log Messages for Oracle WebLogic Server.

To configure a managed Coherence server's logging level:

  1. From the Summary of Servers screen, select a managed Coherence server.

  2. On the server's settings page, select the Logging tab.

  3. From the General tab, click Advanced.

  4. From the Platform Logger Levels field, enter a logging level.

    Value                           Resulting Coherence Log Level
    com.oracle.coherence=FINEST     D9
    com.oracle.coherence=INFO       D3
    (no value specified)            D5 (default)

  5. Click Save.

Using a Single-Server Cluster

A single-server cluster is a cluster that is constrained to run on a single managed server instance and does not access the network. The server instance acts as a storage-enabled cluster member, a client, and a proxy. A single-server cluster is easy to set up and offers a quick way to start and stop a cluster. A single-server cluster is intended for development and should not be used in production or testing environments.

To create a single-server cluster (a WLST sketch follows these steps):

  1. Create a managed Coherence server and associate it with a Coherence cluster. For details, see Create Standalone Managed Coherence Servers.

  2. From the Coherence tab on the server's Settings page, set the Unicast Listen Address to a loopback address (127.0.0.1) so that the cluster does not access the network. For details, see Configure Coherence Cluster Member Unicast Settings.

  3. Ensure that the server is storage enabled (the Local Storage Enabled option is selected by default), and then start the server.
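The following WLST (offline) sketch is a minimal, illustrative version of these steps; the domain path, server name, and cluster name are placeholders, and it follows the same offline patterns shown later in Setting Up Coherence with WLST (Offline):

readDomain('/ORACLE_HOME/user_projects/domains/base_domain')

create('single_server', 'Server')
cd('Server/single_server')
# Associate the server with an existing Coherence cluster resource.
set('CoherenceClusterSystemResource', 'myCoherenceCluster')

# Constrain Coherence to the loopback address so that the cluster
# does not access the network.
create('member_config', 'CoherenceMemberConfig')
cd('CoherenceMemberConfig/member_config')
set('LocalStorageEnabled', 'true')
set('UnicastListenAddress', '127.0.0.1')

updateDomain()
closeDomain()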

Using WLST with Coherence

The WebLogic Scripting Tool (WLST) is a command-line interface that you can use to automate domain configuration tasks, including configuring and managing Coherence clusters. For more information on WLST, see Understanding the WebLogic Scripting Tool.

Setting Up Coherence with WLST (Offline)

WLST can be used to set up Coherence clusters. The following examples demonstrate using WLST in offline mode to create and configure a Coherence cluster. It is assumed that a domain has already been created and that the examples are completed in the order in which they are presented. In addition, the examples only create a data tier. Additional tiers can be created as required. Lastly, the examples are not intended to demonstrate every Coherence MBean. For a complete list of Coherence MBeans, see MBean Reference for Oracle WebLogic Server.

readDomain('/ORACLE_HOME/user_projects/domains/base_domain')

Create a Coherence Cluster

create('myCoherenceCluster', 'CoherenceClusterSystemResource')

Create a Tier of Managed Coherence Servers

create('coh_server1', 'Server')
cd('Server/coh_server1')
set('ListenPort', 7005)
set('ListenAddress', '192.168.0.100')
set('CoherenceClusterSystemResource', 'myCoherenceCluster')

cd('/')
create('coh_server2','Server')
cd('Server/coh_server2')
set('ListenPort', 7010)
set('ListenAddress', '192.168.0.101')
set('CoherenceClusterSystemResource', 'myCoherenceCluster')

cd('/')
create('DataTier', 'Cluster')
assign('Server', 'coh_server1,coh_server2','Cluster','DataTier')
cd('Cluster/DataTier')
set('MulticastAddress', '237.0.0.101')
set('MulticastPort', 8050)

cd('/')
assign('Cluster','DataTier','CoherenceClusterSystemResource','myCoherenceCluster')

cd('/CoherenceClusterSystemResource/myCoherenceCluster')
set('Target', 'DataTier')

Configure Coherence Cluster Parameters

cd('CoherenceClusterSystemResource/myCoherenceCluster/CoherenceResource/myCoherenceCluster/CoherenceClusterParams/NO_NAME_0')
set('ClusteringMode', 'unicast')
set('SecurityFrameworkEnabled','false')
set('ClusterListenPort', 7574)

Configure Well Known Addresses

create('wka_config','CoherenceClusterWellKnownAddresses')
cd('CoherenceClusterWellKnownAddresses/NO_NAME_0')
 
create('WKA1','CoherenceClusterWellKnownAddress')
cd('CoherenceClusterWellKnownAddress/WKA1')
set('ListenAddress', '192.168.0.100')
cd('../..')

create('WKA2','CoherenceClusterWellKnownAddress')
cd('CoherenceClusterWellKnownAddress/WKA2')
set('ListenAddress', '192.168.0.101')

Set Logging Properties

cd('/')
cd('CoherenceClusterSystemResource/myCoherenceCluster/CoherenceResource/myCoherenceCluster')
create('log_config', 'CoherenceLoggingParams')
cd('CoherenceLoggingParams/NO_NAME_0')
set('Enabled', 'true')
set('LoggerName', 'com.oracle.coherence')

Configure Managed Coherence Servers

cd('/')
cd('Servers/coh_server1')
create('member_config', 'CoherenceMemberConfig')
cd('CoherenceMemberConfig/member_config')
set('LocalStorageEnabled', 'true')
set('RackName', '100A')
set('RoleName', 'Server')
set('SiteName', 'pa-1')
set('UnicastListenAddress', '192.168.0.100')
set('UnicastListenPort', 0)
set('UnicastPortAutoAdjust', 'true')

cd('/')
cd('Servers/coh_server2')
create('member_config', 'CoherenceMemberConfig')
cd('CoherenceMemberConfig/member_config')
set('LocalStorageEnabled', 'true')
set('RackName', '100A')
set('RoleName', 'Server')
set('SiteName', 'pa-1')
set('UnicastListenAddress', '192.168.0.101')
set('UnicastListenPort', 0)
set('UnicastPortAutoAdjust', 'true')

updateDomain()
closeDomain()

Persisting Coherence Caches with WLST

WLST includes a set of commands that can be used to persist and recover cached data from disk. The commands are automatically available when connected to an administration server domain runtime MBean server. For more information about Coherence cache persistence, see Administering Oracle Coherence.

Table 12-1 lists WLST commands for persisting Coherence caches. Example 12-1 demonstrates using the commands.

Table 12-1 WLST Coherence Persistence Commands

Command Description

coh_createSnapshot(snapshotName, serviceName)

Persist the data partitions of a service to disk

  • snapshotName – any user-defined name

  • serviceName – the name of the partitioned or federated cache service for which the snapshot is created

coh_recoverSnapshot(snapshotName, serviceName)

Restore the data partitions of a service from disk. Any existing data in the caches of the service is lost.

  • snapshotName – the name of a snapshot to recover

  • serviceName – the name of the partitioned or federated cache service for which the snapshot was created

coh_listSnapshots(serviceName)

Return a list of available snapshots

  • serviceName – the name of the partitioned or federated cache service for which the snapshots are listed

coh_validateSnapshot(snapshotDir, verbose)

Check whether a snapshot is complete and without error

  • snapshotDir – The full path to a snapshot including the snapshot name. The default snapshot location is USER_HOME/coherence/snapshot.

  • verbose – return more detailed validation information

coh_archiveSnapshot(snapshotName, serviceName)

Save a snapshot to a central location. The location is specified in the snapshot archiver definition that is associated with a service.

  • snapshotName – the name of a snapshot to archive

  • serviceName – the name of the partitioned or federated cache service for which the snapshot was created

coh_retrieveArchivedSnapshot(snapshotName, serviceName)

Retrieve an archived snapshot so that it can be recovered using the coh_recoverSnapshot command

  • snapshotName – the name of a snapshot to retrieve

  • serviceName – the name of the partitioned or federated cache service for which the snapshot was archived

coh_listArchivedSnapshots(serviceName)

Return a list of available archived snapshots

  • serviceName – the name of the partitioned or federated cache service for which the snapshot was archived

coh_validateArchivedSnapshot(snapshotName, clusterName, serviceName, archiverName, verbose)

Check whether an archived snapshot is complete and without error. The operational override configuration file containing the archiver must be available on the classpath.

  • snapshotName – the name of an archived snapshot to validate

  • clusterName – the name of the cluster where the partitioned or federated cache service is running

  • serviceName – the name of the partitioned or federated cache service for which the archived snapshot was created

  • archiverName – the name of the snapshot archiver definition that is being used by the service.

  • verbose – return more detailed validation information

coh_removeArchivedSnapshot(snapshotName, serviceName)

Delete an archived snapshot from disk

  • snapshotName – the name of an archived snapshot to delete

  • serviceName – the name of the partitioned or federated cache service for which the archived snapshot is deleted

coh_removeSnapshot(snapshotName, serviceName)

Delete a snapshot from disk

  • snapshotName – the name of a snapshot to delete

  • serviceName – the name of the partitioned or federated cache service for which the snapshot is deleted


Example 12-1 demonstrates using the persistence API from WLST to persist the caches for a partitioned cache service.

Example 12-1 WLST Example for Persisting Caches

serviceName   = '"ExampleGAR:ExamplesPartitionedPofCache"'
snapshotName  = 'new-snapshot'
 
connect('weblogic','password','t3://machine:7001')
 
# Must be in domain runtime tree otherwise no MBeans are returned
domainRuntime()
 
try:
   coh_listSnapshots(serviceName)
   coh_createSnapshot(snapshotName, serviceName)
   coh_listSnapshots(serviceName)
   coh_recoverSnapshot(snapshotName, serviceName)
   coh_archiveSnapshot(snapshotName, serviceName)
   coh_listArchivedSnapshots(serviceName)
   coh_removeSnapshot(snapshotName, serviceName)
   coh_retrieveArchivedSnapshot(snapshotName, serviceName)
   coh_recoverSnapshot(snapshotName, serviceName)
   coh_listSnapshots(serviceName)
except PersistenceException, rce:
   print 'PersistenceException: ' + str(rce)
except Exception,e:
   print 'Unknown Exception: ' + str(e)
else:
   print 'All operations complete'