5 Simplified JMS Cluster and High Availability Configuration

Learn about new cluster-targeting enhancements and how they simplify JMS configuration. These enhancements make the JMS service dynamically scalable and highly available without the need for an extensive configuration process in a cluster or Oracle WebLogic Server Multitenant environment.

This chapter includes the following sections:

  • What Are the WebLogic Clustering Options for JMS?
  • Understanding the Simplified JMS Cluster Configuration
  • Simplified JMS Configuration and High Availability Enhancements
  • Considerations and Limitations of Clustered JMS
  • Interoperability and Upgrade Considerations of Cluster Targeted JMS Servers
  • Best Practices for Using Cluster Targeted JMS Services
  • Runtime MBean Instance Naming Syntax

What Are the WebLogic Clustering Options for JMS?

A WebLogic Cluster can contain individually configured servers, dynamically generated servers, or a mix of both.

WebLogic Server has the following cluster types:

  • Configured: A cluster where each member server is individually configured and individually targeted to the cluster. The value of the Dynamic Cluster Size attribute for the cluster configuration is 0. See Clusters: Configuration: Servers in the Oracle WebLogic Server Administration Console Online Help. This type of cluster is also known as a Static Cluster.

  • Dynamic: A cluster where all the member servers are created using a server template. These servers are referred to as dynamic servers. The value of the Dynamic Cluster Size attribute in the Clusters: Configuration: Servers tab of the cluster configuration is greater than 0.

  • Mixed: A cluster where some member servers are created using a server template (Dynamic servers) and the remaining servers are manually configured (Configured servers). Because a mixed cluster contains dynamic servers, the value of the Dynamic Cluster Size attribute in the Clusters: Configuration: Servers tab of the cluster configuration is greater than 0.

For more information about using dynamic servers, see Dynamic Clusters in Understanding Oracle WebLogic Server.

Understanding the Simplified JMS Cluster Configuration

A cluster-targeted JMS service configuration directly targets JMS service artifacts, such as a JMS server, SAF agent, or path service, and their associated persistent stores, to the same cluster. A messaging bridge is also a JMS artifact that can be cluster-targeted. Cluster-targeting JMS artifacts is simpler and provides more high availability (HA) capability than individually configuring and targeting a JMS artifact for each WebLogic Server instance in the cluster.

Cluster-targeted JMS service artifacts can be distributed across the cluster or run as singletons, depending on their configured Distribution Policy. When distributed, the cluster automatically starts a new instance of the artifact (and of its associated store, if applicable) on each new cluster member, and that member becomes the preferred server for that instance. For artifacts that are not distributed, that is, those with a Singleton distribution policy, the system selects a single server in the cluster to start a single instance of that artifact. See Simplified JMS Configuration and High Availability Enhancements.

In a dynamic or a mixed cluster, the number of instances grows automatically when the cluster size grows. To dynamically scale a dynamic cluster, or the dynamic servers of a mixed cluster, adjust the Dynamic Cluster Size attribute in the Clusters: Configuration: Servers tab of your cluster configuration.
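
For example, the following WLST online sketch grows a dynamic cluster. This is a minimal sketch with hypothetical names (myCluster, the administration server URL, the credentials, and the target size) that assumes the DynamicServers bean layout shown here:

    connect('weblogic', 'welcome1', 't3://adminhost:7001')
    edit()
    startEdit()
    # The DynamicServers child bean of a cluster holds the Dynamic Cluster Size attribute.
    cd('/Clusters/myCluster/DynamicServers/myCluster')
    print('Current dynamic cluster size: %d' % cmo.getDynamicClusterSize())
    # Grow the cluster. Cluster-targeted JMS artifacts with a Distributed policy
    # automatically start an instance on each new member.
    cmo.setDynamicClusterSize(8)
    save()
    activate()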

Figure 5-1 shows the relationship between the JMS and a dynamic cluster configuration in the config.xml file.

Figure 5-1 Dynamic Clustered JMS


Using Custom Persistent Stores with Cluster-Targeted JMS Service Artifacts

The custom persistent store used by cluster-targeted JMS service artifacts must be targeted to the same cluster, with the appropriate attribute values configured, to take advantage of the cluster enhancements. Cluster-targeted SAF agents and JMS servers can also continue to use the default store that is available on each cluster member; however, Oracle strongly recommends always using a custom persistent store instead of the default store, because a custom store provides more high availability capabilities (such as service migration) and works in all topologies, including dynamic clusters and WebLogic Server Multitenant. The default store does not offer any of the enhancements discussed in this chapter. See Simplified JMS Configuration and High Availability Enhancements.

Targeting JMS Modules Resources

JMS system modules continue to support two types of targeting, default targeting and subdeployment targeting, either of which can be used to take advantage of simplified cluster configuration.

  • Any default-targeted JMS resource in a module (a JMS resource that is not associated with a subdeployment) inherits the targeting of its parent module, and the parent module can be targeted to any type of cluster. Note that Oracle strongly recommends using subdeployment targeting instead of default targeting for destinations.

  • Module subdeployment targets can reference clustered JMS servers or SAF agents for hosting regular destinations or imported destinations, respectively. Using a cluster-targeted JMS server or SAF agent in a subdeployment eliminates the need to individually create and enumerate the JMS servers or SAF agents in the subdeployment, which is particularly useful for uniform distributed destination and imported destination deployment; see the sketch following the note below.

See Targeting Best Practices.

Note:

A module or its subdeployments cannot be directly targeted to a Dynamic cluster member server.
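
As an illustration, the following WLST online sketch creates a JMS system module targeted to a cluster, plus a subdeployment that references a single cluster-targeted JMS server for a uniform distributed queue. This is a minimal sketch; all names (myModule, myCluster, myClusterJMSServer, myUDQ) are hypothetical, and the cluster-targeted JMS server is assumed to already exist:

    edit()
    startEdit()
    # Create a JMS system module and target it to the cluster.
    cd('/')
    cmo.createJMSSystemResource('myModule')
    cd('/JMSSystemResources/myModule')
    cmo.addTarget(getMBean('/Clusters/myCluster'))
    # Create a subdeployment that references the cluster-targeted JMS server;
    # there is no need to enumerate individual JMS servers.
    cmo.createSubDeployment('mySubdeployment')
    cd('SubDeployments/mySubdeployment')
    cmo.addTarget(getMBean('/JMSServers/myClusterJMSServer'))
    # Create a uniform distributed queue and associate it with the subdeployment.
    cd('/JMSSystemResources/myModule/JMSResource/myModule')
    cmo.createUniformDistributedQueue('myUDQ')
    cd('UniformDistributedQueues/myUDQ')
    cmo.setJNDIName('jms/myUDQ')
    cmo.setSubDeploymentName('mySubdeployment')
    save()
    activate()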

Simplified JMS Configuration and High Availability Enhancements

WebLogic Server supports high availability for JMS service artifacts deployed in a cluster. Both server and service failure scenarios are handled by automatically migrating an artifact's instance to other running servers. During this process, the system evaluates the overall server load and availability and moves the instances accordingly.

Cluster-targeting enhancements in this release of WebLogic Server eliminate many of the limitations that existed in the previous releases:

  • In releases before 12.2.1.0, only JMS servers, persistent stores, and (partially) SAF agents could be targeted to a cluster. In 12.2.1.0 and later, support is extended to all of the JMS service artifacts, including SAF agents, path services, and messaging bridges, and to all types of clusters (Configured, Mixed, and Dynamic).

  • Enhancements in 12.2.1.0 and later let you easily configure and control the distribution behavior, as well as the JMS high availability (also known as JMS automatic service migration) behavior, of all cluster-targeted JMS service artifacts. This configuration now exists in a single location: on the persistent store, for all of the artifacts that depend on that store, or on the messaging bridge (which does not use a store). This eliminates the need for the migratable targets that were used in previous releases.

  • Because the logical JMS artifacts are targeted to clusters, the system automatically creates any "physical" instances required on a cluster member when it joins the cluster. This allows the JMS Service to automatically scale up when the cluster size grows. With optional high availability configuration, the "physical" instances can restart or migrate in the event of service failure or server failure or shutdown, making the JMS Service highly available with minimal configuration.

The primary attributes that control the scalability and high availability behavior of cluster-targeted JMS services are the Distribution Policy and the Migration Policy. In addition to these policies, a few attributes fine-tune the high availability behavior, such as restarting an instance in place (on the same server) before attempting to migrate it elsewhere. These policies and attributes are described in the following sections:

Defining the Distribution Policy for JMS Services

The Distribution Policy setting on a custom persistent store determines how the associated JMS service artifacts (JMS server, SAF agent, or path service) are distributed in a cluster; the same setting on a messaging bridge determines the bridge's own distribution behavior.

The following are the options that control the distribution behavior of the JMS service artifact:
  • Distributed: In this mode, the cluster automatically ensures that there is at least one instance per server. When the cluster starts, the system ensures that all the messaging service instances are up if possible and, when applicable, attempts an even distribution of the instances. In addition, each instance automatically tries to start on its home (preferred) server first. Depending on the Migration Policy, instances can automatically migrate, or even fail back, as needed to ensure high availability and even load balancing across the cluster.

    Note:

    The default value for the store Distribution Policy attribute is Distributed. Distributed is the required value for SAF Agents, and is also required for cluster-targeted JMS Servers that host Uniform Distributed Destinations.
  • Singleton: In this mode, a JMS server or a path service has one instance per cluster.

    Note:

    This option is required for a path service and for cluster-targeted JMS servers that host singleton or standalone (non-distributed) destinations.

For more information about the Distribution Policy, see JDBC Store: HA Configuration in the Oracle WebLogic Server Administration Console Online Help.
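
For example, the following WLST online sketch creates a Distributed, cluster-targeted custom file store and a JMS server that uses it, suitable for hosting uniform distributed destinations. This is a minimal sketch with hypothetical names (myStore, myCluster, myClusterJMSServer):

    edit()
    startEdit()
    # Create a custom file store, target it to the cluster, and leave it Distributed
    # so that one instance runs on each cluster member.
    cd('/')
    cmo.createFileStore('myStore')
    cd('/FileStores/myStore')
    cmo.setDistributionPolicy('Distributed')   # the default; required for UDD-hosting JMS servers
    cmo.addTarget(getMBean('/Clusters/myCluster'))
    # Create a JMS server that uses the store, and target it to the same cluster.
    cd('/')
    cmo.createJMSServer('myClusterJMSServer')
    cd('/JMSServers/myClusterJMSServer')
    cmo.setPersistentStore(getMBean('/FileStores/myStore'))
    cmo.addTarget(getMBean('/Clusters/myCluster'))
    save()
    activate()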

Defining the Migration Policy for JMS Services

The store Migration Policy setting controls service migration and restart behavior of cluster-targeted JMS service artifact instances. For high availability and service migration, set the migration policy as follows on the associated store:

  • Off: This option disables migration. By default, Restart In Place is also disabled when the Migration Policy is Off.

  • Always: This option enables the system to automatically migrate instances in all situations, including an administrative shutdown, a crash, or bad health of the hosting server or subsystem service. This option also enables service restart-in-place, which automatically tries to restart a failing store on the current hosting server JVM before trying to migrate it to another server JVM in the same cluster.

  • On-Failure: This option enables the system to automatically migrate instances only on a failure or crash (bad health) of the hosting server. Instances do not migrate on an administrative shutdown; instead, they restart when the server is restarted. This option also enables service restart-in-place, which automatically tries to restart a failing store on the current hosting server JVM before trying to migrate it to another server JVM in the same cluster.

Note:

  • WebLogic Server provides complete in-place restart support for the JMS services regardless of targeting type, deployment scope and migration policy setting. See Service Restart In Place in Administering the WebLogic Persistent Store.

  • JMS Service Migration and JTA Migration work independently based on their respective Migration Policy settings and are configured independently. For information on Dynamic Clusters and JTA migration policies, see Understanding the Service Migration Framework.

  • To enable support for cluster-targeted JMS Service artifacts with the Always or On-Failure migration policy, you must configure Cluster Leasing. See Leasing in Administering Clusters for Oracle WebLogic Server.

  • When configuring Cluster Leasing, it is a best practice to use the Database Leasing option instead of Consensus Leasing.

  • When a distributed instance is migrated from its preferred server, it will try to fail back when the preferred server is restarted.

For more information about the store Migration Policy attribute, see JDBC Store: HA Configuration in Oracle WebLogic Server Administration Console Online Help.
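
A minimal WLST online sketch for enabling high availability might look like the following, assuming a cluster-targeted store named myStore and an existing data source named LeasingDS for database leasing (both names are hypothetical):

    edit()
    startEdit()
    # Enable automatic service migration of the store's artifacts on server failure.
    cd('/FileStores/myStore')
    cmo.setMigrationPolicy('On-Failure')       # or 'Always'
    # Always and On-Failure require cluster leasing; database leasing is the
    # recommended option.
    cd('/Clusters/myCluster')
    cmo.setMigrationBasis('database')
    cmo.setDataSourceForAutomaticMigration(getMBean('/JDBCSystemResources/LeasingDS'))
    save()
    activate()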

Additional Configuration Options for JMS Services

Several store configuration options are available for automatic migration and high availability of JMS services. These options apply when the JMS artifact is cluster-targeted and the Migration Policy is set to On-Failure or Always, or when the Migration Policy is Off and Restart In Place is explicitly set to True. The following table describes the configuration properties for JMS service migration:

Table 5-1 Configuration Properties for JMS Service Migration

Restart In Place

Default: False when the Migration Policy is Off; True otherwise.

Defines how the system responds to a JMS service failure within a healthy WebLogic Server instance. If a service fails and this property is enabled, the system first attempts to restart the store and its associated service artifacts on the same server before migrating them to another server.

Note: This attribute does not apply when an entire server fails.

See Service Restart In Place in Administering the WebLogic Persistent Store.

Seconds Between Restarts

Default: 30

If Restart In Place is enabled, specifies the delay, in seconds, between restart attempts on the same server.

Number of Restart Attempts

Default: 6

If Restart In Place is enabled, determines how many restart attempts the system makes before trying to migrate the artifact instance to another server.

Initial Boot Delay Seconds

Default: 60

Controls how quickly subsequent instances are started on a server after the first instance is started, which prevents the system from becoming overloaded during startup. A value of 0 indicates that the system does not wait, which may lead to overload situations.

Failback Delay Seconds

Default: -1

Specifies how long to wait before failing an artifact's instance back to its preferred server. A value greater than 0 specifies the time, in seconds, to delay before failing a JMS artifact back to its preferred server. A value of 0 indicates that the instance never fails back. A value of -1 indicates that there is no delay and the instance fails back immediately.

Partial Cluster Stability Seconds

Default: 240

Specifies the amount of time, in seconds, to delay before a partially started cluster starts all cluster-targeted JMS artifact instances that are configured with a Migration Policy of Always or On-Failure. This delay ensures that services are balanced across the cluster even when the servers are started sequentially. A value greater than 0 specifies the time, in seconds, to delay before a partially started cluster starts dynamically configured services. A value of 0 specifies no delay in starting all the instances on the available servers.

Fail Over Limit

Default: -1

Specifies a limit on the number of cluster-targeted JMS artifact instances that can fail over to a particular JVM. A value of -1 means there is no failover limit (unlimited). A value of 0 prevents any failovers, so no more than one instance (an instance that has not failed over) runs per server. A value of 1 allows one failed-over instance on each server, so no more than two instances run per server (one failed-over instance plus one that has not failed over).

For more information about these properties, see JDBC Store: HA Configuration in the Oracle WebLogic Server Administration Console Online Help.
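
These properties correspond to attributes in the store's high availability configuration. The following hedged WLST fragment shows how they might be tuned, assuming the MBean attribute names mirror the property names in Table 5-1 and that a cluster-targeted store named myStore exists (values shown are the defaults except where noted):

    edit()
    startEdit()
    cd('/FileStores/myStore')
    # Fine-tune restart-in-place behavior before migration is attempted.
    cmo.setRestartInPlace(true)
    cmo.setSecondsBetweenRestarts(30)
    cmo.setNumberOfRestartAttempts(6)
    # Pace instance startup, and control failback and failover behavior.
    cmo.setInitialBootDelaySeconds(60)
    cmo.setFailbackDelaySeconds(-1)            # fail back immediately
    cmo.setPartialClusterStabilitySeconds(240)
    cmo.setFailOverLimit(2)                    # non-default: at most two failed-over instances per JVM
    save()
    activate()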

Considerations and Limitations of Clustered JMS

Before developing applications using dynamic clusters and cluster-targeted JMS servers, you must consider the limitations posed by clustered JMS.

The following are the limitations and other behaviors for consideration:

  • There are special considerations when a SAF agent-imported destination with an Exactly-Once QoS Level forwards messages to a distributed destination that is hosted on a mixed or dynamic cluster. See Best Practices for Using Clustered JMS Services.

  • WLST offline does not support the assign command to target JMS servers to a dynamic cluster. Use the get and set commands instead.

  • Weighted distributed destinations (a deprecated type of distributed destination composed of a group of singleton destinations), are not supported on cluster-targeted JMS Servers.

  • Replicated distributed topics (RDTs) are not supported when any member destination is hosted on a cluster-targeted JMS server. Configure a partitioned distributed topic (PDT) or a singleton topic instead. If you are converting a configuration that already has RDTs configured, see Replacing an RDT with a PDT in Developing JMS Applications for Oracle WebLogic Server.

  • A custom persistent store with a Singleton Distribution Policy and an Always (or On-Failure) Migration Policy is required for a cluster-targeted JMS server to host standalone (non-distributed) destinations.

  • There is no support for manually (administratively) forcing the migration or fail-back of a service instance that is generated from a cluster-targeted JMS artifact.

  • A path service must be configured if any distributed or imported destinations are used to host Unit-of-Order (UOO) messages. In addition, such destinations must be configured with a path service UOO routing policy instead of the default hashed UOO routing policy, because hash-based UOO routing is not supported in cluster-targeted JMS. Attempts to send UOO messages to a distributed or imported destination that is hosted on a cluster-targeted JMS service and is not configured with a path service routing policy fail with an exception.

  • A Fail Over Limit setting only applies when the JMS artifact is cluster-targeted and the Migration Policy is set to On-Failure or Always.

Note that a cluster-targeted path service must be configured to reference a store that has a Singleton Distribution Policy and an Always Migration Policy, as shown in the sketch below.
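
For example, a cluster-targeted path service and its store might be configured as follows. This is a minimal WLST online sketch with hypothetical names (myPathServiceStore, myPathService, myCluster):

    edit()
    startEdit()
    # A cluster-targeted path service requires a Singleton store with an
    # Always migration policy.
    cd('/')
    cmo.createFileStore('myPathServiceStore')
    cd('/FileStores/myPathServiceStore')
    cmo.setDistributionPolicy('Singleton')
    cmo.setMigrationPolicy('Always')
    cmo.addTarget(getMBean('/Clusters/myCluster'))
    # Create the path service itself and target it to the same cluster.
    cd('/')
    cmo.createPathService('myPathService')
    cd('/PathServices/myPathService')
    cmo.setPersistentStore(getMBean('/FileStores/myPathServiceStore'))
    cmo.addTarget(getMBean('/Clusters/myCluster'))
    save()
    activate()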

Interoperability and Upgrade Considerations of Cluster Targeted JMS Servers

This section describes interoperability and upgrade considerations for cluster-targeted JMS servers:

  • JMS clients, bridges, MDBs, SAF clients, and SAF agents from previous releases can communicate with cluster targeted JMS servers.

  • There are special considerations when a SAF agent-imported destination with an Exactly-Once QOS Level forwards messages to a distributed destination that is hosted on a Mixed or Dynamic cluster. See Best Practices for Using Clustered JMS Services.

  • No conversion path is available for moving data (messages) or configurations from non-cluster targeted JMS servers to cluster targeted JMS servers, or vice versa.

The migratable target-based service migration of JMS services is still supported on a configured cluster.

For example, a messaging configuration for JMS servers and persistent stores can target a single manually configured WebLogic Server instance or a single migratable target. Similarly, SAF agents can target a single manually configured WebLogic Server instance, a single migratable target, or a cluster.

See Automatic Migration of JMS Services.

Best Practices for Using Cluster Targeted JMS Services

Learn about the recommended best practices and design patterns for using cluster targeted JMS services.

  • Prior to decreasing the dynamic-cluster-size setting of a dynamic cluster, or deleting a configured server in a configured or mixed cluster, process the remaining messages and delete the stores that are associated with the retired cluster-targeted JMS server before shutting down its WebLogic Server instance (see the sketch after this list). For example:

    • Pause the retiring destination instances for production.

    • Let consumer applications process the remaining messages on the paused destinations.

    • Shut down the server instance.

    • Delete any persistent store files or database tables that are associated with the retired instance.

    Alternatively, a much simpler solution is to set up the system to automatically migrate destinations on retired servers to the remaining servers:

    • Configure the stores to have a Migration Policy of On-Failure or Always (not Off).

    • Never reduce the configured dynamic-cluster-size of a dynamic cluster or delete a configured server from a configured or mixed cluster. Instead, simply do not boot the retired servers; the On-Failure or Always stores will migrate to running servers. Note that the WLST scaleDown command should be used only with its updateConfiguration option disabled; otherwise, it will reduce the cluster's dynamic cluster size setting.

  • Use cluster-targeted stores instead of default stores for cluster-targeted JMS servers and SAF agents.

  • When enabling high availability (that is, when the Migration Policy on the store is set to either On-Failure or Always), ensure that cluster leasing is configured. As a best practice, database leasing is preferred over consensus leasing. See Leasing in Administering Clusters for Oracle WebLogic Server.

  • When configuring destinations in a module, use a subdeployment that targets a specific clustered JMS server or SAF agent instead of using default targeting. This ensures that the destination creates members on exactly the desired JMS server instances.

  • When using Exactly-Once QOS Level SAF agents and SAF clients, as a best practice, ensure that Stores associated with the SAF Agents are configured with the migration-policy set to Always.

Also, if the cluster size changes (particularly when it shrinks), ensure that the backing tables (for a JDBC store) or files (for a file store) are not deleted or destroyed, so that they can be migrated to any available cluster member for continuous availability of the SAF service.
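
The drain-before-retirement steps above might look like the following WLST sketch against the runtime MBean tree. This is a sketch under stated assumptions: the server, JMS server, and destination names (retiring_server, myClusterJMSServer, myModule, myUDQ) are hypothetical, and the runtime name of a distributed destination member can vary by configuration:

    import time
    connect('weblogic', 'welcome1', 't3://adminhost:7001')
    domainRuntime()
    # Navigate to the retiring member destination's runtime MBean.
    cd('/ServerRuntimes/retiring_server/JMSRuntime/retiring_server.jms')
    cd('JMSServers/myClusterJMSServer@retiring_server')
    cd('Destinations/myModule!myClusterJMSServer@retiring_server@myUDQ')
    # Pause production so no new messages arrive on the retiring instance.
    cmo.pauseProduction()
    # Poll until consumers have drained the remaining messages.
    while cmo.getMessagesCurrentCount() > 0:
        time.sleep(5)
    # Then shut down the server, and delete the retired instance's store
    # files or database tables.
    shutdown('retiring_server', 'Server')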

Runtime MBean Instance Naming Syntax

The runtime MBeans associated with the JMS artifacts that are distributed in a cluster, such as persistent stores, JMS servers, SAF agents, messaging bridges, and path services, are named based on the configured distribution policy. This naming is enforced by an MBean check.

The types of Distribution Policies are:
  • Distributed: Instances are automatically created and uniquely named after their home (preferred) host WebLogic Server instance the first time that server boots. The format is <configured-name>@<server-name>.

  • Singleton: An instance is named with its configured name plus the suffix "01". The format is <configured-name>-01. No server name is added to the instance name.

The following sections describe the instance naming syntax for persistent stores:

Instance Naming Syntax for .DAT File

For file stores (.DAT files) that are targeted to a cluster, an instance's data files are uniquely named based on the corresponding store instance name. For example, in Distributed mode, an instance's files are named <Store name>@<Server instance name>NNNNNN.DAT, where NNNNNN is a number ranging from 000000 to 999999.

Note:

A single file store instance may create one or more .DAT files.

Instance Naming Syntax for .RGN File

The region files of a replicated store instance that is targeted to a cluster are uniquely named based on the corresponding store instance name. For example, in Distributed mode, a replicated store instance's region files are named <configured Replicated Store name>@<Server instance name>NNNNNN.RGN, where NNNNNN is a number ranging from 000000 to 999999.

JDBC Store Table Name Syntax

The prefix of a JDBC store's table name is configurable through its Prefix Name attribute. The suffix of the table name is generated automatically and differs based on whether the store is cluster-targeted. Because no two JDBC store instances can share the same backing table, you must often set the JDBC store Prefix Name so that different stores use different tables. See Table 5-2 and Table 5-3.

Table 5-2 JDBC Store Table Name Syntax for Cluster Targeted Case

Prefix Name Distribution Policy Instance Name Store Table Name
myPrefix Distributed myStore@myServer myPrefix_myServer_WLStore
myPrefix. (ends with ‘.’) Distributed myStore@myServer myPrefix._myServer_WLStore
Not set Distributed myStore@myServer myServer_WLStore
myPrefix Singleton myStore-01 myPrefix_01_WLStore
myPrefix. (ends with ‘.’) Singleton myStore-01 myPrefix.S_01_WLStore
Not set Singleton myStore-01 S_01_WLStore

Table 5-3 JDBC Store Table Name Syntax for Non-Cluster (Single-Server) Targeted Case

Prefix Name Distribution Policy Instance Name Store Table Name
myPrefix NA myStore myPrefixWLStore
NA NA myStore WLStore