Learn how to define Coherence clusters in a WebLogic Server domain and how to associate a Coherence cluster with multiple WebLogic Server clusters.
Coherence clusters consist of multiple managed Coherence server instances that distribute data in-memory to increase application scalability, availability, and performance. An application interacts with the data in a local cache and the distribution and backup of the data is performed automatically across cluster members.
Coherence clusters are different from WebLogic Server clusters. They use different clustering protocols and are configured separately. Multiple WebLogic Server clusters can be associated with a Coherence cluster, and a WebLogic Server domain typically contains a single Coherence cluster. Managed servers that are configured as Coherence cluster members are referred to as managed Coherence servers.
Managed Coherence servers can be explicitly associated with a Coherence cluster or they can be associated with a WebLogic Server cluster that is associated with a Coherence cluster. Managed Coherence servers are typically set up in tiers that are based on their type: a data tier for storing data, an application tier for hosting applications, and a proxy tier that allows external clients to access caches.
Figure 12-1 shows a conceptual view of a Coherence cluster in a WebLogic Server domain:
Figure 12-1 Conceptual View of a Coherence Domain Topology
A WebLogic Server domain typically contains a single Coherence cluster. The cluster is represented as a single system-level resource (CoherenceClusterSystemResource). A CoherenceClusterSystemResource instance is created using the WebLogic Server Administration Console or WLST.
A Coherence cluster can contain any number of managed Coherence servers. The servers can be standalone managed servers or can be part of a WebLogic Server cluster that is associated with a Coherence cluster. Typically, multiple WebLogic Server clusters are associated with a Coherence cluster. For details on creating WebLogic Server clusters for use by Coherence, see Creating Coherence Deployment Tiers.
Note:
Cloning a managed Coherence server does not clone its association with a Coherence cluster. The managed server will not be a member of the Coherence cluster. You must manually associate the cloned managed server with the Coherence cluster.
To define a Coherence cluster resource:
Managed Coherence servers are managed server instances that are associated with a Coherence cluster. Managed Coherence servers join together to form a Coherence cluster and are often referred to as cluster members. Cluster members have seniority and the senior member performs cluster tasks (for example, issuing the cluster heartbeat).
Note:
Managed Coherence servers and standalone Coherence cluster members (those that are not managed within a WebLogic Server domain) can join the same cluster. However, standalone cluster members cannot be managed from within a WebLogic Server domain; operational configuration and application lifecycles must be manually administered and monitored.
Standalone Coherence cluster members must be configured to use Well Known Addresses (WKA) when joining a Coherence cluster that is managed in a WebLogic Server domain.
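As an illustration, a standalone member's operational override file (tangosol-coherence-override.xml) might list the machines that host the managed Coherence servers as WKA addresses. This is a minimal sketch; the host names are illustrative:

```xml
<?xml version="1.0"?>
<coherence>
  <cluster-config>
    <unicast-listener>
      <!-- List the machines that host managed Coherence servers
           (host names are illustrative) -->
      <well-known-addresses>
        <address id="1">wls-coh1.example.com</address>
        <address id="2">wls-coh2.example.com</address>
      </well-known-addresses>
    </unicast-listener>
  </cluster-config>
</coherence>
```

With this configuration, the standalone member uses unicast to locate the listed hosts and joins the same cluster as the managed Coherence servers.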
The Administration Server is typically not used as a managed Coherence server in a production environment.
Managed Coherence servers are distinguished by their role in the cluster. A best practice is to use different managed server instances (and preferably different WebLogic Server clusters) for each cluster role.
storage-enabled – a managed Coherence server that is responsible for storing data in the cluster. Coherence applications are packaged as Grid ARchives (GAR) and deployed on storage-enabled managed Coherence servers.
storage-disabled – a managed Coherence server that is not responsible for storing data and is used to host Coherence applications (cache clients). A Coherence application GAR is packaged within an EAR and deployed on storage-disabled managed Coherence servers.
proxy – a managed Coherence server that is storage-disabled and allows external clients (non-cluster members) to use a cache. A Coherence application GAR is deployed on managed Coherence proxy servers.
To create managed Coherence servers:
Coherence supports different topologies within a WebLogic Server domain to provide varying levels of performance, scalability, and ease of use.
For example, during development, a single standalone managed server instance may be used as both a cache server and a cache client. The single-server topology is easy to set up and use, but does not provide optimal performance or scalability. For production, Coherence is typically set up using WebLogic Server clusters. A WebLogic Server cluster is used as a Coherence data tier and hosts one or more cache servers; a different WebLogic Server cluster is used as a Coherence application tier and hosts one or more cache clients; and (if required) different WebLogic Server clusters are used for the Coherence proxy tier that hosts one or more managed Coherence proxy servers and the Coherence extend client tier that hosts extend clients. The tiered topology approach provides optimal scalability and performance.
The instructions in this section use both the Clusters Settings page and Servers Settings page in the WebLogic Server Administration Console to create Coherence deployment tiers. WebLogic Server clusters and managed server instances can be associated with a Coherence cluster resource using the ClusterMBean and ServerMBean MBeans, respectively. Managed servers that are associated with a WebLogic Server cluster inherit the cluster's Coherence settings. However, the settings may not be reflected in the Servers Settings page.
A Coherence Data tier is a WebLogic Server cluster that is associated with a Coherence cluster and hosts any number of storage-enabled managed Coherence servers. Managed Coherence servers in the data tier store and distribute data (both primary and backup) on the cluster. The number of managed Coherence servers that are required in a data tier depends on the expected amount of data that is stored in the Coherence cluster and the amount of memory available on each server. In addition, a cluster must contain a minimum of four physical computers to avoid the possibility of data loss during a computer failure.
Coherence artifacts (such as Coherence configuration files, POF serialization classes, filters, entry processors, and aggregators) are packaged as a GAR and deployed on the data tier. For details on packaging and deploying Coherence applications, see Developing Oracle Coherence Applications for Oracle WebLogic Server. For details on calculating cache size and hardware requirements, see the production checklist in Administering Oracle Coherence.
To create a Coherence data tier:
To create managed servers for a Coherence data tier:
A Coherence Application tier is a WebLogic Server cluster that is associated with a Coherence cluster and hosts any number of storage-disabled managed Coherence servers. Managed Coherence servers in the application tier host applications (cache factory clients) and are Coherence cluster members. Multiple application tiers can be created for different applications.
Clients in the application tier are deployed as EARs and implemented using Java EE standards such as servlet, JSP, and EJB. Coherence artifacts (such as Coherence configuration files, POF serialization classes, filters, entry processors, and aggregators) must be packaged as a GAR and also deployed within an EAR. For details on packaging and deploying Coherence applications, see Developing Oracle Coherence Applications for Oracle WebLogic Server.
To create a Coherence application tier:
To create managed servers for a Coherence application tier:
A Coherence proxy tier is a WebLogic Server cluster that is associated with a Coherence cluster and hosts any number of managed Coherence proxy servers. Managed Coherence proxy servers allow Coherence*Extend clients to use Coherence caches without being cluster members. The number of managed Coherence proxy servers that are required in a proxy tier depends on the number of expected clients. At least two proxy servers must be created to allow for load balancing; however, additional servers may be required when supporting a large number of client connections and requests.
For details on Coherence*Extend and creating extend clients, see Developing Remote Clients for Oracle Coherence.
To create a Coherence proxy tier:
To create managed servers for a Coherence proxy tier:
Coherence proxy services are clustered services that manage remote connections from extend clients. Proxy services are defined and configured in a coherence-cache-config.xml file within the <proxy-scheme> element. The definition includes, among other settings, the TCP listener address (IP, or DNS name, and port) that is used to accept client connections. For details on the <proxy-scheme> element, see Developing Applications with Oracle Coherence. There are two ways to set up proxy services: using a name service and using an address provider. The name service provides an efficient setup and is typically preferred in a Coherence proxy tier.
A name service is a specialized listener that allows extend clients to connect to a proxy service by name. Clients connect to the name service, which returns the addresses of all proxy services on the cluster.
Note:
If a domain includes multiple tiers (for example, a data tier, an application tier, and a proxy tier), then the proxy tier should be started first, before a client can connect to the proxy.
A name service automatically starts on port 7574 (the same default port that the TCMP socket uses) when a proxy service is configured on a managed Coherence proxy server. The reuse of the same port minimizes the number of ports that are used by Coherence and simplifies firewall configuration.
To configure a proxy service and enable the name service on the default TCMP port:
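As a sketch, a minimal proxy service definition in the coherence-cache-config.xml file might look like the following (the service name is illustrative). With no explicit acceptor address, the proxy binds to an ephemeral port and clients locate it through the name service on the cluster port:

```xml
<caching-schemes>
  <!-- A proxy service; when it starts on a managed Coherence proxy
       server, the name service is made available on the cluster port
       (7574 by default) -->
  <proxy-scheme>
    <service-name>TcpExtend</service-name>
    <autostart>true</autostart>
  </proxy-scheme>
</caching-schemes>
```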
To connect to a name service, a client's coherence-cache-config.xml file must include a <name-service-addresses> element, within the <tcp-initiator> element of a remote cache or remote invocation definition. The <name-service-addresses> element provides the socket address of a name service that is on a managed Coherence proxy server. The following example defines a remote cache definition and specifies a name service listening at host 192.168.1.5 on port 7574. The client automatically connects to the name service and gets a list of all managed Coherence proxy servers that contain a TcpExtend proxy service. The proxy service on the cluster must also be named TcpExtend. In this example, a single address is provided. A second name service address could be provided in case of a failure at the primary address. For details on client configuration and proxy service load balancing, see Developing Remote Clients for Oracle Coherence.
<remote-cache-scheme>
  <scheme-name>extend-dist</scheme-name>
  <service-name>TcpExtend</service-name>
  <initiator-config>
    <tcp-initiator>
      <name-service-addresses>
        <socket-address>
          <address>192.168.1.5</address>
          <port>7574</port>
        </socket-address>
      </name-service-addresses>
    </tcp-initiator>
  </initiator-config>
</remote-cache-scheme>
The name service listens on the cluster port (7574) by default and is available on all machines running Coherence cluster nodes. If the target cluster uses the default TCMP cluster port, then the port can be omitted from the configuration.
Note:
The <service-name> value must match the proxy scheme's <service-name> value; otherwise, a <proxy-service-name> element must also be provided in a remote cache and remote invocation scheme that contains the value of the <service-name> element that is configured in the proxy scheme.
In previous Coherence releases, the name service automatically listened on a member's unicast port instead of the cluster port.
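To illustrate the note above, a client-side remote cache scheme whose local <service-name> differs from the proxy's can name the proxy service explicitly. This is a sketch; the service names and address reuse the illustrative values from this section:

```xml
<remote-cache-scheme>
  <scheme-name>extend-dist</scheme-name>
  <!-- Local service name on the client -->
  <service-name>RemoteCache</service-name>
  <!-- Must match the <service-name> of the proxy scheme on the cluster -->
  <proxy-service-name>TcpExtend</proxy-service-name>
  <initiator-config>
    <tcp-initiator>
      <name-service-addresses>
        <socket-address>
          <address>192.168.1.5</address>
          <port>7574</port>
        </socket-address>
      </name-service-addresses>
    </tcp-initiator>
  </initiator-config>
</remote-cache-scheme>
```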
An address provider can also be used to specify name service addresses.
An address provider specifies the TCP listener address (IP, or DNS name, and port) for a proxy service. The listener address can be explicitly defined within a <proxy-scheme> element in a coherence-cache-config.xml file; however, the preferred approach is to define address providers in a cluster configuration file and then reference the addresses from within a <proxy-scheme> element. The latter approach decouples deployment configuration from application configuration and allows network addresses to change without having to update a coherence-cache-config.xml file.
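As a sketch of the preferred approach, an address provider named proxy1 could be defined once in the cluster configuration file and then referenced from the proxy scheme. The provider name, address, and port are illustrative:

```xml
<!-- In the cluster configuration file (tangosol-coherence-override.xml) -->
<cluster-config>
  <address-providers>
    <address-provider id="proxy1">
      <socket-address>
        <address>192.168.1.5</address>
        <port>9099</port>
      </socket-address>
    </address-provider>
  </address-providers>
</cluster-config>
```

The proxy scheme in the coherence-cache-config.xml file then references the provider by name instead of hard-coding an address:

```xml
<proxy-scheme>
  <service-name>TcpExtend</service-name>
  <acceptor-config>
    <tcp-acceptor>
      <!-- References the address provider defined in the cluster
           configuration file -->
      <address-provider>proxy1</address-provider>
    </tcp-acceptor>
  </acceptor-config>
  <autostart>true</autostart>
</proxy-scheme>
```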
To use an address provider:
To connect to a proxy service, a client's coherence-cache-config.xml file must include a <remote-addresses> element, within the <tcp-initiator> element of a remote cache or remote invocation definition, that includes the address provider name. For example:
<remote-cache-scheme>
  <scheme-name>extend-dist</scheme-name>
  <service-name>TcpExtend</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <address-provider>proxy1</address-provider>
      </remote-addresses>
    </tcp-initiator>
  </initiator-config>
</remote-cache-scheme>
Clients can also explicitly specify remote addresses. The following example defines a remote cache definition and specifies a proxy service on host 192.168.1.5 and port 9099. The client automatically connects to the proxy service and uses a cache on the cluster named TcpExtend. In this example, a single address is provided. A second address could be provided in case of a failure at the primary address. For details on client configuration and proxy service load balancing, see Developing Remote Clients for Oracle Coherence.
<remote-cache-scheme>
  <scheme-name>extend-dist</scheme-name>
  <service-name>TcpExtend</service-name>
  <initiator-config>
    <tcp-initiator>
      <remote-addresses>
        <socket-address>
          <address>192.168.1.5</address>
          <port>9099</port>
        </socket-address>
      </remote-addresses>
    </tcp-initiator>
  </initiator-config>
</remote-cache-scheme>
A Coherence cluster resource exposes several cluster settings that can be configured for a specific domain.
Use the following tasks to configure cluster settings:
Many of the settings use default values that can be changed as required. The following instructions assume that a cluster resource has already been created. For details on creating a cluster resource, see Setting Up a Coherence Cluster. This section does not include instructions for securing Coherence. For security details, see Securing Oracle Coherence.
Use the Coherence tab on the Coherence Cluster Settings page to configure cluster communication. The CoherenceClusterSystemResource MBean and its associated CoherenceClusterResource MBean expose cluster settings. The CoherenceClusterResource MBean provides access to multiple MBeans for configuring a Coherence cluster.
Note:
WLS configuration takes precedence over Coherence system properties. Coherence configuration in WLS should, in general, be changed using WLST or a Coherence cluster configuration file instead of using system properties.
Any existing managed server instance can be added to a Coherence cluster. In addition, managed Coherence servers can be removed from a cluster. Adding and removing cluster members is available when configuring a Coherence Cluster and is a shortcut that is used instead of explicitly configuring each instance. However, when adding existing managed server instances, default Coherence settings may need to be changed. For details on configuring managed Coherence servers, see Configuring Managed Coherence Servers.
Use the Member tab on the Coherence Cluster Settings page to select which managed servers or WebLogic Server clusters are associated with a Coherence cluster. When selecting a WebLogic Server cluster, it is recommended that all the managed servers in the WebLogic Server cluster be associated with a Coherence cluster. A CoherenceClusterSystemResource exposes all managed Coherence servers as targets. A CoherenceMemberConfig MBean is created for each managed server and exposes the Coherence cluster member parameters.
WebLogic Server MBeans expose a subset of Coherence operational settings that are sufficient for most use cases and are detailed throughout this chapter. These settings are available natively through the WLST utility and the WebLogic Server Administration Console. For more advanced use cases, use an external Coherence cluster configuration file (tangosol-coherence-override.xml), which provides full control over Coherence operational settings.
Note:
The use of an external cluster configuration file is only recommended for operational settings that are not available through the provided MBeans. That is, avoid configuring the same operational settings in both an external cluster configuration file and through the MBeans.
Use the General tab on the Coherence Cluster Settings page to enter the path and name of a cluster configuration file that is located on the administration server, or use the CoherenceClusterSystemResource MBean. For details on using a Coherence cluster configuration file, see Developing Applications with Oracle Coherence, which also provides usage instructions for each element and a detailed schema reference.
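For instance, a cluster configuration file could tune an operational setting that the provided MBeans do not expose. The packet delivery timeout shown here is one hypothetical example; the value is illustrative:

```xml
<?xml version="1.0"?>
<coherence>
  <cluster-config>
    <packet-publisher>
      <packet-delivery>
        <!-- Consider packets undeliverable after 30 seconds
             (illustrative value) -->
        <timeout-milliseconds>30000</timeout-milliseconds>
      </packet-delivery>
    </packet-publisher>
  </cluster-config>
</coherence>
```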
Checking Which Operational Configuration is Used
Coherence generates an operational configuration from WebLogic Server MBeans, a Coherence cluster configuration file (if imported), and Coherence system properties (if set). The results are written to the managed Coherence server log if the system property weblogic.debug.DebugCoherence=true is set. If you use the WebLogic start-up scripts, you can use the JAVA_PROPERTIES environment variable. For example,
export JAVA_PROPERTIES=-Dweblogic.debug.DebugCoherence=true
Cluster members communicate using the Tangosol Cluster Management Protocol (TCMP). The protocol operates independently of the WLS cluster protocol. TCMP is an IP-based protocol for discovering cluster members, managing the cluster, provisioning services, and transmitting data. TCMP can be transmitted over different transport protocols and can use both multicast and unicast. By default, TCMP uses multicast UDP for discovery and TCP for data transmission (using TMB). If Well Known Addresses (WKA) are configured, then TCMP is transmitted over unicast UDP for discovery and TCP for data transmission. If SSL is configured for TCMP, then SSL over TCP is used for both discovery and data transmission. The use of different transport protocols and multicast requires support from the underlying network.
Use the General tab on the Coherence Cluster Settings page to configure cluster communication. The CoherenceClusterParamsBean and CoherenceClusterWellKnownAddressesBean MBeans expose the cluster communication parameters.
Coherence clusters support both unicast and multicast communication. Multicast must be explicitly configured and is not the default option. The use of multicast should be avoided in environments that do not properly support or allow multicast. The use of unicast disables all multicast transmission and automatically uses the Coherence Well Known Addresses (WKA) feature to discover and communicate between cluster members. See Specifying Well Known Address Members.
For details on using multicast, unicast, and WKA in Coherence, see Developing Applications with Oracle Coherence.
Selecting Unicast For the Coherence Cluster Mode
To use unicast for cluster communication, select Unicast from the Clustering Mode drop-down list and enter a cluster port or keep the default port, which is 7574. For most clusters, the port does not need to be changed. However, changing the port is required when multiple Coherence clusters run on the same computer. If a different port is required, then the recommended best practice is to select a value between 1024 and 8999.
Specifying Well Known Address Members
When unicast is enabled, use the Well Known Addresses tab to explicitly configure WKA machine addresses. If no addresses are defined for a cluster, then addresses are automatically assigned. The recommended best practice is to always explicitly specify WKA machine addresses when using unicast.
In addition, if a domain contains multiple managed Coherence servers that are located on different machines, then at least one non-local WKA machine address must be defined to ensure a Coherence cluster is formed; otherwise, multiple individual clusters are formed on each machine. If the managed Coherence servers are all running on the same machine, then a cluster can be created without specifying a non-local listen address.
Note:
WKA machine addresses must be explicitly defined in production environments. In production mode, a managed Coherence server fails to start if WKA machine addresses have not been explicitly defined. Automatically assigned WKA machine addresses are a design-time convenience and should only be used during development on a single server.
Selecting Multicast For the Coherence Cluster Mode
To use multicast for cluster communication, select Multicast from the Clustering Mode drop-down list and enter a cluster port and multicast listen address. The same cluster port can be shared across distinct clusters (as identified by the cluster name) even if the clusters run on the same computer or multicast address. Thus, changing the cluster port is not necessary if the cluster name is being set to a value which is unique to the environment. If a different port is required, then the recommended best practice is to select a value between 1024 and 8999.
Use the Time To Live field to designate how far multicast packets can travel on a network. The time-to-live value (TTL) is expressed in terms of how many hops a packet survives; each network interface, router, and managed switch is considered one hop. The TTL value should be set to the lowest integer value that works.
The following transport protocols are supported for TCMP and are selected using the Transport drop-down list. The CoherenceClusterParamsBean MBean exposes the transport protocol setting.
User Datagram Protocol (UDP) – UDP is the default TCMP transport protocol and is used for both multicast and unicast communication. If multicast is disabled, all communication is done using UDP unicast.
Transmission Control Protocol (TCP) – The TCP transport protocol is used in network environments that favor TCP communication. All TCMP communication uses TCP if unicast is enabled. If multicast is enabled, TCP is only used for unicast communication and UDP is used for multicast communication.
Secure Sockets Layer (SSL) – The SSL/TCP transport protocol is used in network environments that require highly secure communication between cluster members. SSL is only supported with unicast communication; ensure multicast is disabled when using SSL. The use of SSL requires additional configuration. For details on securing Coherence within WebLogic Server, see Securing Oracle Coherence.
TCP Message Bus (TMB) – The TMB protocol provides support for TCP/IP.
TMB with SSL (TMBS) – TMBS requires the use of an SSL socket provider. See Developing Applications with Oracle Coherence.
Sockets Direct Protocol Message Bus (SDMB) – The Sockets Direct Protocol (SDP) provides support for stream connections. SDMB is only valid on Exalogic.
SDMB with SSL (SDMBS) – SDMBS is only available for Oracle Exalogic systems and requires the use of an SSL socket provider. See Developing Applications with Oracle Coherence.
Infiniband Message Bus (IMB) – IMB uses an optimized protocol based on native InfiniBand verbs. IMB is only valid on Exalogic.
Lightweight Message Bus (LWMB) – LWMB uses MSGQLT/LWIPC libraries with IMB for InfiniBand communications. LWMB is only available for Oracle Exalogic systems and is the default transport for both service and unicast communication. LWMB is automatically used as long as TCMP has not been configured with SSL.
A Coherence cache configuration file defines the caches that are used by an application. Typically, a cache configuration file is included in a GAR module. A GAR is deployed to all managed Coherence servers in the data tier and can also be deployed as part of an EAR to the application tier. The GAR ensures that the cache configuration is available on every Oracle Coherence cluster member. However, there are use cases that require a different cache configuration file to be used on specific managed Coherence servers. For example, a proxy tier requires access to all artifacts in the GAR but needs a different cache configuration file that defines the proxy services to start.
A cache configuration file can be associated with WebLogic clusters or managed Coherence servers at runtime. In this case, the cache configuration overrides the cache configuration file that is included in a GAR. You can also omit the cache configuration file from a GAR file and assign it at runtime. To override a cache configuration file at runtime, the cache configuration file must be bound to a JNDI name. The JNDI name is defined using the override-property attribute of the <cache-configuration-ref> element. The element is located in the coherence-application.xml file that is packaged in a GAR file. For details on the coherence-application.xml file, see Developing Oracle Coherence Applications for Oracle WebLogic Server.
The following example defines an override property named cache-config/ExamplesGar that can be used to override the META-INF/example-cache-config.xml cache configuration file in the GAR:
...
<cache-configuration-ref override-property="cache-config/ExamplesGar">META-INF/example-cache-config.xml</cache-configuration-ref>
...
At runtime, use the Cache Configurations tab on the Coherence Cluster Settings page to override a cache configuration file. You must supply the same JNDI name that is defined in the override-property attribute. The cache configuration can be located on the administration server or at a URL. In addition, you can choose to import the file to the domain or use it from the specified location. Use the Targets tab to specify which Oracle Coherence cluster members use the cache configuration file.
The following WLST (online) example demonstrates how a cluster cache configuration can be overridden using a CoherenceClusterSystemResource object.
edit()
startEdit()
cd('CoherenceClusterSystemResources/myCoherenceCluster/CoherenceCacheConfigs')
create('ExamplesGar', 'CoherenceCacheConfig')
cd('ExamplesGar')
set('JNDIName', 'ExamplesGar')
cmo.importCacheConfigurationFile('/tmp/cache-config.xml')
cmo.addTarget(getMBean('/Servers/coh_server'))
save()
activate()
The WLST example creates a CoherenceCacheConfig resource as a child of the CoherenceClusterSystemResource object. The script then imports the cache configuration file to the domain and specifies the JNDI name to which the resource binds. The file must be found at the path provided. Lastly, the cache configuration is targeted to a specific server. The ability to target a cache configuration resource to certain servers or WebLogic Server clusters allows the application to load different configurations based on the context of the server (cache servers, cache clients, proxy servers, and so on).
The cache configuration resource can also be configured as a URL:
edit()
startEdit()
cd('CoherenceClusterSystemResources/myCoherenceCluster/CoherenceCacheConfigs')
create('ExamplesGar', 'CoherenceCacheConfig')
cd('ExamplesGar')
set('JNDIName', 'ExamplesGar')
set('CacheConfigurationFile', 'http://cache.locator/app1/cache-config.xml')
cmo.addTarget(getMBean('/Servers/coh_server'))
save()
activate()
Configure cluster logging using the WebLogic Server Administration Console's Logging tab that is located on the Coherence Cluster Settings page or use the CoherenceLoggingParamsBean MBean. For details on WebLogic Server logging, see Configuring Log Files and Filtering Log Messages for Oracle WebLogic Server. Coherence logging configuration includes:
Disabling and enabling logging
Changing the default logger name
WebLogic Server provides two loggers that can be used for Coherence logging: the default com.oracle.coherence logger and the com.oracle.wls logger. The com.oracle.wls logger is generic and uses the same handler that is configured for WebLogic Server log output. The logger does not allow for Coherence-specific configuration. The com.oracle.coherence logger allows Coherence-specific configuration, which includes the use of different handlers for Coherence logs.
Note:
If logging is configured through a standard logging.properties file, then make sure the file uses the same logger name that is currently configured for Coherence logging.
Changing the log message format
Add or remove information from a log message. A log message can include static text as well as parameters that are replaced at run time (for example, {date}). For details on supported log message parameters, see Developing Applications with Oracle Coherence.
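For example, a message format could add the member ID and role alongside the standard fields. The format below is a sketch that uses documented Coherence log parameters; the same value could be entered in the message format field on the Logging tab or, as shown here, in an external override file:

```xml
<logging-config>
  <!-- {date}, {level}, {thread}, {member}, {role}, and {text}
       are replaced at run time -->
  <message-format>{date} &lt;{level}&gt; (thread={thread}, member={member}, role={role}): {text}</message-format>
</logging-config>
```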
Coherence persistence manages the persistence and recovery of Coherence distributed caches. Cached data is persisted so that it can be quickly recovered after a catastrophic failure or after a cluster restart due to planned maintenance. For complete details about Coherence cache persistence, see Persisting Caches.
Use the Persistence tab on the Coherence Cluster Settings page to enable active persistence and to override the default location where persistence files are stored. The CoherencePersistenceParamsBean MBean exposes the persistence parameters. Managed Coherence servers must be restarted for persistence changes to take effect.
On-demand persistence allows a cache service to be manually persisted and recovered upon request (a snapshot) using the persistence coordinator. The persistence coordinator is exposed as an MBean interface (PersistenceCoordinatorMBean) that provides operations for creating, archiving, and recovering snapshots of a cache service. To use the MBean, JMX must be enabled on the cluster. For details about enabling JMX management and accessing Coherence MBeans, see Using JMX to Manage Oracle Coherence. Active persistence automatically persists cache contents on all mutations and automatically recovers the contents on cluster/service startup. The persistence coordinator can still be used in active persistence mode to perform on-demand snapshots.
The federated caching feature federates cache data asynchronously across multiple geographically dispersed clusters. Cached data is federated across clusters to provide redundancy, off-site backup, and multiple points of access for application users in different geographical locations. For complete details about Coherence Federation, see Federating Caches Across Clusters.
Use the Federation tab on the Coherence Cluster Settings page to enable a federation topology and to configure a remote cluster participant to which caches are federated. When selecting a topology, a topology configuration is automatically created and named Default-Topology. Federation must be configured on both the local cluster participant and the remote cluster participant. At least one host on the remote cluster must be provided. If a custom port is being used on the remote cluster participant, then change the cluster port accordingly. Managed Coherence servers must be restarted for federation changes to take effect. The CoherenceFederationParamsBean MBean also exposes the cluster federation parameters and can be used to configure cache federation.
Note:
The Default-Topology topology configuration is created and used if no federation topology is specified in the cache configuration file.

When using federation, matching topologies must be configured on both the local and remote clusters. For example, selecting none for the topology in a local cluster and active-active as the topology in the remote cluster can lead to unpredictable behavior. Similarly, if a local cluster is set to use active-passive, then the remote cluster must be set to use passive-active.
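To illustrate how a cache configuration refers to a topology, a federated cache scheme can reference the Default-Topology by name. This is a minimal sketch; the scheme name is illustrative:

```xml
<caching-schemes>
  <federated-scheme>
    <scheme-name>federated</scheme-name>
    <backing-map-scheme>
      <local-scheme/>
    </backing-map-scheme>
    <autostart>true</autostart>
    <!-- Refers to the topology configuration that is created when a
         topology is selected on the Federation tab -->
    <topologies>
      <topology>
        <name>Default-Topology</name>
      </topology>
    </topologies>
  </federated-scheme>
</caching-schemes>
```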
Managed Coherence servers expose several cluster member settings that can be configured for a specific domain.
Use the following tasks to configure a managed Coherence server:
Many of the settings use default values that can be changed as required. The instructions in this section assume that a managed server has already been created and associated with a Coherence cluster. For details on creating managed Coherence servers, see Create Standalone Managed Coherence Servers.
Use the Coherence tab on a managed server's Setting page to configure Coherence cluster member settings. A CoherenceMemberConfig
MBean is created for each managed server and exposes the Coherence cluster member parameters.
Note:
WebLogic Server configuration takes precedence over Coherence system properties. Coherence configuration in WebLogic Server should, in general, be changed using WLST or a Coherence cluster configuration file rather than system properties.
The storage settings for managed Coherence servers can be configured as required. Enabling storage on a server means the server is responsible for storing a portion of both the primary and backup data for the Coherence cluster. Servers that are intended to store data must be configured as storage-enabled servers. Servers that host cache applications and cluster proxy servers should be configured as storage-disabled servers and are typically not responsible for storing data, because sharing resources between storage and application workloads can become problematic and affect application and cluster performance.
Note:
If a managed Coherence server is part of a WebLogic Server cluster, then the Coherence storage settings that are specified on the WebLogic Server cluster override the storage settings on the server. The storage setting is an exception to the general rule that server settings override WebLogic Server cluster settings. Moreover, the final runtime configuration is not reflected in the console. Therefore, a managed Coherence server may show that storage is disabled even though storage has been enabled through the Coherence tab for a WebLogic Server cluster. Always check the WebLogic Server cluster settings to determine whether storage has been enabled for a managed Coherence server.
Use the following fields on the Coherence tab to configure storage settings:
Local Storage Enabled – This field specifies whether a managed Coherence server stores data. If this option is not selected, then the managed Coherence server does not store data and is considered a cluster client.
Coherence Web Local Storage Enabled – This field specifies whether a managed Coherence server stores HTTP session data. For details on using Coherence to store session data, see Administering HTTP Session Management with Oracle Coherence*Web.
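The same storage setting can be scripted. A minimal WLST (offline) sketch using the CoherenceMemberConfig MBean and the LocalStorageEnabled attribute that appear in the configuration examples later in this chapter; the server and domain names are illustrative:

```
readDomain('/ORACLE_HOME/user_projects/domains/base_domain')
cd('Servers/coh_server1')
create('member_config', 'CoherenceMemberConfig')
cd('CoherenceMemberConfig/member_config')
# Mark this server as storage-enabled so it holds primary and backup data
set('LocalStorageEnabled', 'true')
updateDomain()
closeDomain()
```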
Managed Coherence servers communicate with each other using unicast (point-to-point) communication. Unicast is used even if the cluster is configured to use multicast communication. For details on unicast in Coherence, see Developing Applications with Oracle Coherence.
Use the following fields on the Coherence tab to configure unicast settings:
Unicast Listen Address – This field specifies the address on which the server listens for unicast communication. If no address is provided, then a routable IP address is automatically selected. The address field also supports Classless Inter-Domain Routing (CIDR) notation, which uses a subnet and mask pattern for a local IP address to bind to instead of specifying an exact IP address.
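To illustrate how CIDR matching behaves, the following standalone Python sketch (not WLST) selects the first local address that falls within a subnet pattern such as 192.168.0.0/24; the address list is hypothetical and stands in for a host's network interfaces:

```python
import ipaddress

# Hypothetical list of local interface addresses (for illustration only)
local_addresses = ["10.0.0.5", "192.168.0.100", "127.0.0.1"]

# A CIDR pattern names a subnet and mask rather than an exact IP address;
# bind to the first local address that falls within the subnet.
subnet = ipaddress.ip_network("192.168.0.0/24")
bind_address = next(
    (a for a in local_addresses if ipaddress.ip_address(a) in subnet),
    None,
)
print(bind_address)  # -> 192.168.0.100
```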
Unicast Listen Port – This field specifies the ports on which the server listens for unicast communication. A cluster member uses two unicast UDP ports, which are automatically assigned from the operating system's available ephemeral port range (as indicated by a value of 0
). The default value ensures that Coherence does not accidentally cause port conflicts with other applications. However, if a firewall is required between cluster members (an atypical configuration), then a port can be manually assigned and a second port is automatically selected (port1 + 1).
Unicast Port Auto Adjust – This field specifies whether the port automatically increments if the port is already in use.
A Coherence cluster can be managed from any JMX-compatible client such as JConsole or Java VisualVM. The management information includes runtime statistics and operational settings. The management information is specific to the Coherence management domain and is different than the management information that is provided for Coherence as part of the com.bea management domain. For a detailed reference of Coherence MBeans, see Managing Oracle Coherence.
One cluster member is automatically selected as a management proxy and is responsible for aggregating the management information from all other cluster members. The Administration Server for the WebLogic domain then integrates the management information and makes it available through the domain runtime MBean server. If the selected cluster member is no longer operational, then another cluster member is automatically selected as the management proxy.
Use the Coherence Management Node field on the Coherence tab of a managed Coherence server to specify whether a cluster member can be selected as a management proxy. By default, all cluster members can be selected as the management proxy. Therefore, deselect the option only if you want to remove a cluster member from being selected as a management proxy.
At runtime, use a JMX client to connect to the domain runtime MBean server where the Coherence management information is located within the Coherence management namespace. For details about connecting to the domain runtime MBean server, see Developing Custom Management Utilities Using JMX for Oracle WebLogic Server.
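As a sketch grounded in the WLST persistence example later in this chapter, the following connects to the domain runtime MBean server, where the Coherence management information is exposed; the credentials and host are placeholders:

```
connect('weblogic','password','t3://machine:7001')
# Switch to the domain runtime tree; Coherence MBeans are not visible elsewhere
domainRuntime()
ls()  # browse the available runtime MBeans
```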
A set of identifiers are used to give a managed Coherence server an identity within the cluster. The identity information is used to differentiate servers and conveys the servers' role within the cluster. Some identifiers are also used by the cluster service when performing cluster tasks. Lastly, the identity information is valuable when displaying management information (for example, JMX) and facilitates interpreting log entries.
Use the following fields on the Coherence tab to configure member identity settings:
Site Name – This field specifies the name of the geographic site that hosts the managed Coherence server. The server's domain name is used if no name is specified. For WAN clustering, this value identifies the datacenter where the member is located. The site name can be used as the basis for intelligent routing, load balancing, and disaster recovery planning (that is, the explicit backing up of data on separate geographic sites). The site name also helps determine where to back up data when using distributed caching and the default partition assignment strategy. Lastly, the name is useful for displaying management information (for example, JMX) and interpreting log entries.
Rack Name – This field specifies the location within a geographic site where the managed Coherence server is hosted; this is often a cage, rack, or bladeframe identifier. The rack name can be used as the basis for intelligent routing, load balancing, and disaster recovery planning (that is, the explicit backing up of data on separate bladeframes). The rack name also helps determine where to back up data when using distributed caching and the default partition assignment strategy. Lastly, the name is useful for displaying management information (for example, JMX) and interpreting log entries.
Role Name – This field specifies the managed Coherence server's role in the cluster. The role name allows an application to organize cluster members into specialized roles, such as storage-enabled or storage-disabled.
If a managed Coherence server is part of a WebLogic Server cluster, the cluster name is automatically used as the role name and this field cannot be set. If no name is provided, the default role name that is used is WebLogicServer
.
Logging levels can be configured for each managed Coherence server. The default log level is D5 and can be changed using the server's Logging tab. For details on WebLogic Server logging, see Configuring Log Files and Filtering Log Messages for Oracle WebLogic Server.
To configure a managed Coherence server's logging level:
A single-server cluster is a cluster that is constrained to run on a single managed server instance and does not access the network.
To create a single-server cluster:
Define a Coherence Cluster Resource – Create a Coherence cluster and select a managed server instance to be a member of the cluster. The administration server instance can be used to facilitate setup.
Configure Cluster Communication – Configure the cluster and set the Time To Live value to 0
if using multicast communication.
Configure Coherence Cluster Member Unicast Settings – Configure the managed server instance and set the unicast address to an address that is routed to loop back. On most computers, setting the address to 127.0.0.1
works.
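The three steps above can be sketched in WLST (offline). The TimeToLive attribute name is an assumption and should be verified against the MBean Reference; the other MBean paths follow the configuration examples later in this chapter, and the cluster and server names are illustrative:

```
readDomain('/ORACLE_HOME/user_projects/domains/base_domain')
cd('CoherenceClusterSystemResource/myCoherenceCluster/CoherenceResource/myCoherenceCluster/CoherenceClusterParams/NO_NAME_0')
set('TimeToLive', 0)  # assumed attribute name; keeps multicast packets on the local host
cd('/')
cd('Servers/coh_server1')
create('member_config', 'CoherenceMemberConfig')
cd('CoherenceMemberConfig/member_config')
set('UnicastListenAddress', '127.0.0.1')  # bind unicast traffic to loopback
updateDomain()
closeDomain()
```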
See Understanding the WebLogic Scripting Tool.
Setting Up Coherence with WLST (Offline)
WLST can be used to set up Coherence clusters. The following examples demonstrate using WLST in offline mode to create and configure a Coherence cluster. It is assumed that a domain has already been created and that the examples are completed in the order in which they are presented. In addition, the examples only create a data tier. Additional tiers can be created as required. Lastly, the examples are not intended to demonstrate every Coherence MBean. For a complete list of Coherence MBeans, see MBean Reference for Oracle WebLogic Server.
readDomain('/ORACLE_HOME/user_projects/domains/base_domain')
Create a Coherence Cluster
create('myCoherenceCluster', 'CoherenceClusterSystemResource')
Create a Tier of Managed Coherence Servers
create('coh_server1', 'Server')
cd('Server/coh_server1')
set('ListenPort', 7005)
set('ListenAddress', '192.168.0.100')
set('CoherenceClusterSystemResource', 'myCoherenceCluster')
cd('/')
create('coh_server2', 'Server')
cd('Server/coh_server2')
set('ListenPort', 7010)
set('ListenAddress', '192.168.0.101')
set('CoherenceClusterSystemResource', 'myCoherenceCluster')
cd('/')
create('DataTier', 'Cluster')
assign('Server', 'coh_server1,coh_server2', 'Cluster', 'DataTier')
cd('Cluster/DataTier')
set('MulticastAddress', '237.0.0.101')
set('MulticastPort', 8050)
cd('/CoherenceClusterSystemResource/myCoherenceCluster')
set('Target', 'DataTier')
Configure Coherence Cluster Parameters
cd('CoherenceClusterSystemResource/myCoherenceCluster/CoherenceResource/myCoherenceCluster/CoherenceClusterParams/NO_NAME_0')
set('ClusteringMode', 'unicast')
set('SecurityFrameworkEnabled', 'false')
set('ClusterListenPort', 7574)
Configure Well Known Addresses
create('wka_config', 'CoherenceClusterWellKnownAddresses')
cd('CoherenceClusterWellKnownAddresses/NO_NAME_0')
create('WKA1', 'CoherenceClusterWellKnownAddress')
cd('CoherenceClusterWellKnownAddress/WKA1')
set('ListenAddress', '192.168.0.100')
cd('../..')
create('WKA2', 'CoherenceClusterWellKnownAddress')
cd('CoherenceClusterWellKnownAddress/WKA2')
set('ListenAddress', '192.168.0.101')
Set Logging Properties
cd('/')
cd('CoherenceClusterSystemResource/myCoherenceCluster/CoherenceResource/myCoherenceCluster')
create('log_config', 'CoherenceLoggingParams')
cd('CoherenceLoggingParams/NO_NAME_0')
set('Enabled', 'true')
set('LoggerName', 'com.oracle.coherence')
Configure Managed Coherence Servers
cd('/')
cd('Servers/coh_server1')
create('member_config', 'CoherenceMemberConfig')
cd('CoherenceMemberConfig/member_config')
set('LocalStorageEnabled', 'true')
set('RackName', '100A')
set('RoleName', 'Server')
set('SiteName', 'pa-1')
set('UnicastListenAddress', '192.168.0.100')
set('UnicastListenPort', 0)
set('UnicastPortAutoAdjust', 'true')
cd('/')
cd('Servers/coh_server2')
create('member_config', 'CoherenceMemberConfig')
cd('CoherenceMemberConfig/member_config')
set('LocalStorageEnabled', 'true')
set('RackName', '100A')
set('RoleName', 'Server')
set('SiteName', 'pa-1')
set('UnicastListenAddress', '192.168.0.101')
set('UnicastListenPort', 0)
set('UnicastPortAutoAdjust', 'true')
updateDomain()
closeDomain()
Setting the Cluster Name and Port
readDomain('/ORACLE_HOME/user_projects/domains/base_domain')
cd('CoherenceClusterSystemResource/myCoherenceCluster/CoherenceResource/myCoherenceCluster')
set('Name', 'MyCluster')
cd('CoherenceClusterSystemResource/myCoherenceCluster/CoherenceResource/myCoherenceCluster/CoherenceClusterParams/NO_NAME_0')
set('ClusterListenPort', 9123)
updateDomain()
closeDomain()
WLST includes a set of commands that can be used to persist and recover cached data from disk. The commands are automatically available when connected to an Administration Server domain runtime MBean server.
For more information about Coherence cache persistence, see Administering Oracle Coherence.
Table 12-1 lists WLST commands for persisting Coherence caches. Example 12-1 demonstrates using the commands.
Table 12-1 WLST Coherence Persistence Commands
Command | Description
---|---
coh_createSnapshot | Persist the data partitions of a service to disk.
coh_recoverSnapshot | Restore the data partitions of a service from disk. Any existing data in the caches of the service is lost.
coh_listSnapshots | Return a list of available snapshots.
coh_validateSnapshot | Check whether a snapshot is complete and without error.
coh_archiveSnapshot | Save a snapshot to a central location. The location is specified in the snapshot archiver definition that is associated with a service.
coh_retrieveArchivedSnapshot | Retrieve an archived snapshot so that it can be recovered using the coh_recoverSnapshot command.
coh_listArchivedSnapshots | Return a list of available archived snapshots.
coh_validateArchivedSnapshot | Check whether an archived snapshot is complete and without error. The operational override configuration file containing the archiver must be available on the classpath.
coh_removeArchivedSnapshot | Delete an archived snapshot from disk.
coh_removeSnapshot | Delete a snapshot from disk.
Example 12-1 demonstrates using the persistence API from WLST to persist the caches for a partitioned cache service.
Example 12-1 WLST Example for Persisting Caches
serviceName = '"ExampleGAR:ExamplesPartitionedPofCache"'
snapshotName = 'new-snapshot'
connect('weblogic','password','t3://machine:7001')
# Must be in domain runtime tree otherwise no MBeans are returned
domainRuntime()
try:
    coh_listSnapshots(serviceName)
    coh_createSnapshot(snapshotName, serviceName)
    coh_listSnapshots(serviceName)
    coh_recoverSnapshot(snapshotName, serviceName)
    coh_archiveSnapshot(snapshotName, serviceName)
    coh_listArchivedSnapshots(serviceName)
    coh_removeSnapshot(snapshotName, serviceName)
    coh_retrieveArchivedSnapshot(snapshotName, serviceName)
    coh_recoverSnapshot(snapshotName, serviceName)
    coh_listSnapshots(serviceName)
except PersistenceException, rce:
    print 'PersistenceException: ' + str(rce)
except Exception, e:
    print 'Unknown Exception' + str(e)
else:
    print 'All operations complete'