5 Simplified JMS Cluster and High Availability Configuration

This chapter describes the new cluster-targeting enhancements and how they simplify the JMS configuration process. These enhancements make the JMS service dynamically scalable and highly available without the need for an extensive configuration process in a cluster or WebLogic Server Multitenant environment.


What Are the WebLogic Clustering Options for JMS?

A WebLogic Cluster may contain individually configured servers, dynamically generated servers, or a mix of both. WebLogic Server has the following cluster types:

  • Configured: A cluster where each member server is individually configured and individually targeted to the cluster. The value of the Dynamic Cluster Size attribute for the cluster configuration is 0. See Clusters: Configuration: Servers in the Oracle WebLogic Server Administration Console Online Help. This type of cluster is also known as a Static Cluster.

  • Dynamic: A cluster where all the member servers are created using a server template. These servers are referred to as dynamic servers. The value of the Dynamic Cluster Size attribute in the Clusters: Configuration: Servers tab of the cluster configuration is greater than 0.

  • Mixed: A cluster where some member servers are created using a server template (dynamic servers) and the remaining servers are manually configured (configured servers). Because a mixed cluster contains dynamic servers, the value of the Dynamic Cluster Size attribute in the Clusters: Configuration: Servers tab of the cluster configuration is greater than 0, as the WLST sketch after this list shows.
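
The following is a minimal WLST sketch for checking the Dynamic Cluster Size attribute to determine the cluster type. The connection URL, credentials, and the cluster name MyCluster are illustrative assumptions, and the attribute is assumed to be available in WebLogic Server 12.2.1 and later.

    # Run with: java weblogic.WLST check_cluster_type.py
    # Connection details and the cluster name are assumptions for illustration.
    connect('weblogic', 'welcome1', 't3://adminhost:7001')
    cd('/Clusters/MyCluster/DynamicServers/MyCluster')
    size = get('DynamicClusterSize')
    if size == 0:
        print 'MyCluster is a Configured (static) cluster'
    else:
        print 'MyCluster is a dynamic or mixed cluster; Dynamic Cluster Size = ' + str(size)
    disconnect()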

For more information about using dynamic servers, see Administering Clusters for Oracle WebLogic Server.

Understanding the Simplified JMS Cluster Configuration

The clustered JMS service can target JMS service artifacts (such as JMS servers, SAF agents, and the path service) and their optional associated persistent stores to the same cluster. A messaging bridge is also a JMS service artifact that can be targeted to a cluster, although it does not use a store.

For JMS service artifacts that are configured to be distributed across the cluster (depending on the Distribution Policy), the cluster can automatically start one instance of the artifact (and its associated store, if applicable) on each cluster member as that member starts; that member is called the preferred server for the instance. For artifacts that are not distributed, the system selects one server in the cluster to host a single instance of the artifact. See Simplified JMS Configuration and High Availability Enhancements.

In the case of a dynamic or a mixed cluster, the number of instances automatically grows when the cluster size grows. To dynamically scale a dynamic cluster, or the dynamic portion of a mixed cluster, adjust the Dynamic Cluster Size attribute in the Clusters: Configuration: Servers tab of your cluster configuration.
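
For example, the following WLST sketch grows a dynamic cluster by raising its Dynamic Cluster Size online. The admin URL, credentials, cluster name, and target size are assumptions, and the setter is assumed to be available on the DynamicServers MBean in WebLogic Server 12.2.1 and later.

    # Scale a dynamic (or mixed) cluster by raising its Dynamic Cluster Size.
    # Connection details and names are assumptions for illustration.
    connect('weblogic', 'welcome1', 't3://adminhost:7001')
    edit()
    startEdit()
    cd('/Clusters/MyCluster/DynamicServers/MyCluster')
    cmo.setDynamicClusterSize(8)    # allow up to 8 dynamic servers in the cluster
    save()
    activate()
    disconnect()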

Figure 5-1 shows the relationship between the JMS and a dynamic cluster configuration in the config.xml file.

Figure 5-1 Dynamic Clustered JMS


Using Custom Persistent Stores with Cluster-Targeted JMS Service Artifacts

To take advantage of the cluster enhancements, the custom persistent store used by the JMS service artifacts must be targeted to the same cluster and configured with the appropriate attribute values. However, cluster-targeted SAF agents and JMS servers can also continue to use the default store available on each cluster member, which does not offer any of the new enhancements discussed in this chapter. See Simplified JMS Configuration and High Availability Enhancements.
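
The following WLST sketch creates a custom file store and a JMS server, and targets both to the same cluster. The store directory, the cluster name, and all other names are assumptions for illustration.

    # Create a cluster-targeted custom store and a JMS server that uses it.
    # All names, paths, and connection details are assumptions for illustration.
    connect('weblogic', 'welcome1', 't3://adminhost:7001')
    edit()
    startEdit()
    cd('/')
    cmo.createFileStore('MyClusterStore')
    cd('/FileStores/MyClusterStore')
    cmo.setDirectory('/shared/stores')                  # storage reachable by all cluster members
    cmo.addTarget(getMBean('/Clusters/MyCluster'))      # target the store to the cluster
    cd('/')
    cmo.createJMSServer('MyClusterJMSServer')
    cd('/JMSServers/MyClusterJMSServer')
    cmo.setPersistentStore(getMBean('/FileStores/MyClusterStore'))
    cmo.addTarget(getMBean('/Clusters/MyCluster'))      # target the JMS server to the same cluster
    save()
    activate()
    disconnect()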

Targeting JMS Modules Resources

JMS system modules continue to support two types of targeting, either of which can be used to take advantage of simplified cluster configuration.

  • Any default-targeted JMS resource in a module (a JMS resource that is not associated with a subdeployment) inherits the targeting of its parent module, and the parent module can be targeted to any type of cluster.

  • Module subdeployment targets can reference cluster-targeted JMS servers or SAF agents for hosting regular destinations or imported destinations, respectively. Using a cluster-targeted JMS server or SAF agent in a subdeployment eliminates the need to individually create and enumerate the JMS servers or SAF agents in the subdeployment, which is particularly useful for uniform distributed destination and imported destination deployment, as shown in the WLST sketch below.

See Targeting Best Practices.
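
As a sketch of the subdeployment approach, the following WLST fragment creates a JMS system module, a subdeployment that references a single cluster-targeted JMS server, and a uniform distributed queue pinned to that subdeployment. It assumes an active edit session and reuses the hypothetical MyCluster and MyClusterJMSServer names from the previous sketch.

    # Assumes an active edit session; names are carried over from the earlier sketch.
    cd('/')
    module = cmo.createJMSSystemResource('MyJmsModule')
    module.addTarget(getMBean('/Clusters/MyCluster'))           # module itself is cluster targeted
    sub = module.createSubDeployment('ClusterSub')
    sub.addTarget(getMBean('/JMSServers/MyClusterJMSServer'))   # one cluster-targeted JMS server, no enumeration
    udq = module.getJMSResource().createUniformDistributedQueue('MyUDQ')
    udq.setJNDIName('jms/MyUDQ')
    udq.setSubDeploymentName('ClusterSub')                      # members are hosted on the referenced JMS server's instances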

Note:

A module or its subdeployments cannot be directly targeted to a Dynamic cluster member server.


Simplified JMS Configuration and High Availability Enhancements

WebLogic Server supports high availability for JMS service artifacts deployed in a cluster. Both server and service failure scenarios are handled by automatically migrating an artifact's instance to other running servers. During this process, the system evaluates the overall server load and availability and moves the instances accordingly.

Cluster-targeting enhancements in this release of WebLogic Server eliminate many of the limitations that existed in previous releases:

  • In previous releases, only JMS servers, persistent stores, and (partially) SAF agents could be targeted to a cluster. In this release, support is extended to all JMS service artifacts, including SAF agents, the path service, and messaging bridges, and to all types of clusters (Configured, Mixed, and Dynamic).

  • Enhancements in this release let you easily configure and control the distribution behavior, as well as the JMS high availability (also known as JMS automatic service migration) behavior, of all cluster-targeted JMS service artifacts. These settings now exist in a single location: on the persistent store for all artifacts that depend on that store, or on the messaging bridge itself (which does not use a store). This eliminates the need for the migratable targets that were used in previous releases.

  • Because the logical JMS artifacts are targeted to clusters, the system automatically creates any "physical" instances required on a cluster member when it joins the cluster. This allows the JMS Service to automatically scale up when the cluster size grows. With optional high availability configuration, the "physical" instances can restart or migrate in the event of service failure or server failure or shutdown, making the JMS Service highly available with minimal configuration.

The primary attributes that control the scalability and high availability behavior are the Distribution Policy and the Migration Policy. In addition to these policies, a few additional attributes can be used to fine-tune the high availability behavior, such as restarting an instance in place (on the same server) before attempting to migrate it elsewhere. These policies and attributes are described in the following sections:

Defining the Distribution Policy for JMS Services

The Distribution Policy setting on a custom persistent store determines how the associated JMS service artifacts (JMS server, SAF agent, and path service) are distributed in a cluster; the same setting on a messaging bridge determines the bridge's own distribution behavior.

The following are the options that control the distribution behavior of the JMS service artifact:
  • Distributed: In this mode, the cluster automatically ensures that there is a minimum of one instance per server. When the cluster starts, the system ensures that all messaging service instances are up if possible and, when applicable, attempts an even distribution of the instances. In addition, all instances automatically try to start on their home (preferred) server first. Depending on the Migration Policy, instances can automatically migrate, or even fail back, as needed to ensure high availability and even load balancing across the cluster. (See the WLST sketch at the end of this section.)

    Note:

    Distributed is the default value for this policy. It is also required for cluster-targeted SAF agents and for cluster-targeted JMS servers that host Uniform Distributed Destinations (UDDs).

  • Singleton: In this mode, a JMS server or a path service has one instance per cluster.

    Note:

    This option is required for path service and cluster-targeted JMS servers that host singleton or standalone (non-distributed) destinations.

    For more information about the Distribution Policy, see JDBC Store: HA Configuration in the Oracle WebLogic Server Administration Console Online Help.
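
The following WLST fragment is a sketch of setting the Distribution Policy on two cluster-targeted custom stores, one Distributed (for a JMS server that hosts UDDs) and one Singleton (for a path service or standalone destinations). The store names are assumptions and an active edit session is assumed.

    # Assumes an active edit session; store names are assumptions for illustration.
    cd('/FileStores/MyClusterStore')
    cmo.setDistributionPolicy('Distributed')    # one instance per cluster member (the default)
    cd('/FileStores/MySingletonStore')
    cmo.setDistributionPolicy('Singleton')      # a single instance for the entire cluster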

Defining the Migration Policy for JMS Services

The Migration Policy controls the migration and restart behavior of cluster-targeted JMS service artifact instances. For high availability and service migration, set the Migration Policy on the associated store to one of the following values:

  • Off: This option disables both migration and restart in place.

  • Always: This option enables the system to automatically migrate instances in all situations, including an administrative shutdown and a crash or bad health of the hosting server or subsystem service. This option also enables the system to automatically restart a failing store on its current hosting server, or migrate it to another server, based on its restart-in-place configuration.

  • On-Failure: This option enables the system to automatically migrate instances only in the case of a failure or crash (bad health) of the hosting server. Instances do not migrate on an administrative shutdown; instead, they restart when the server is restarted. This option also enables the system to automatically restart a failing store on its current hosting server, or migrate it to another server, based on its restart-in-place configuration.

Note:

  • To enable support for cluster-targeted JMS Service artifacts with the Always or On-Failure migration policy, you must configure Cluster Leasing. For more information, see Leasing in Administering Clusters for Oracle WebLogic Server.

  • JMS Service Migration and JTA Migration work independently based on their Migration Policy. For information on Dynamic Clusters and JTA migration policies, see Understanding the Service Migration Framework.

  • It is a best practice to use the Database Leasing option instead of Consensus Leasing.

  • When a distributed instance is migrated from its preferred server, it will try to fail back when the preferred server is restarted.

For more information about the Migration Policy, see JDBC Store: HA Configuration in the Oracle WebLogic Server Administration Console Online Help.
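
The following WLST sketch enables database leasing on the cluster and sets the Migration Policy on a cluster-targeted store. The data source and other names are assumptions, and an active edit session is assumed.

    # Assumes an active edit session; names are assumptions for illustration.
    # Cluster leasing is required for the Always and On-Failure migration policies.
    cd('/Clusters/MyCluster')
    cmo.setMigrationBasis('database')                # database leasing is the recommended option
    cmo.setDataSourceForAutomaticMigration(getMBean('/JDBCSystemResources/LeasingDataSource'))
    cd('/FileStores/MyClusterStore')
    cmo.setMigrationPolicy('On-Failure')             # or 'Always'; 'Off' disables migration and restart in place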

Additional Configuration Options for JMS Services

In addition to the Distribution Policy and Migration Policy, the properties in Table 5-1 fine-tune the restart and migration behavior of cluster-targeted JMS service artifacts.

Table 5-1 Configuration Properties for JMS Service Migration

Restart In Place (default: True)

Defines how the system responds to a JMS service failure within a healthy WebLogic Server instance. If a service fails and this property is enabled, the system first attempts to restart the store and its associated service artifact on the same server before migrating it to another server.

Note: This attribute does not apply when an entire server fails.

Seconds Between Restarts (default: 30)

If Restart In Place is enabled, this property specifies the delay, in seconds, between restart attempts on the same server.

Number of Restart Attempts (default: 6)

If Restart In Place is enabled, this property determines how many restart attempts the system makes before trying to migrate the artifact instance to another server.

Initial Boot Delay Seconds (default: 60)

Controls how quickly subsequent instances are started on a server after the first instance is started, which prevents the system from becoming overloaded at startup.

A value of 0 indicates that the system does not wait, which may lead to overload situations.

Failback Delay Seconds (default: -1)

Specifies the time to wait before failing an artifact's instance back to its preferred server.

A value greater than 0 specifies the time, in seconds, to delay before failing a JMS artifact back to its preferred server.

A value of 0 indicates that the instance never fails back.

A value of -1 indicates that there is no delay and the instance fails back immediately.

Partial Cluster Stability Seconds (default: 240)

Specifies the amount of time, in seconds, that a partially started cluster waits before starting all cluster-targeted JMS artifact instances that are configured with a Migration Policy of Always or On-Failure. This delay ensures that services are balanced across the cluster even if the servers are started sequentially.

A value greater than 0 specifies the time, in seconds, to delay before a partially started cluster starts dynamically configured services.

A value of 0 specifies no delay in starting all instances on available servers.

For more information about these properties, see JDBC Store: HA Configuration in the Oracle WebLogic Server Administration Console Online Help.
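
The following WLST fragment is a sketch of tuning these properties on a cluster-targeted custom store. The attribute names mirror the properties in Table 5-1 and are assumed to be exposed as setters on the store MBean; the store name and values are illustrative.

    # Assumes an active edit session; 'MyClusterStore' is an assumption for illustration.
    cd('/FileStores/MyClusterStore')
    cmo.setRestartInPlace(true)                   # try restarting on the same server first
    cmo.setSecondsBetweenRestarts(30)             # delay between in-place restart attempts
    cmo.setNumberOfRestartAttempts(6)             # attempts before migrating to another server
    cmo.setInitialBootDelaySeconds(60)            # stagger instance startup on a server
    cmo.setFailbackDelaySeconds(-1)               # fail back immediately when the preferred server returns
    cmo.setPartialClusterStabilitySeconds(240)    # wait before a partial cluster starts all instances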

Considerations and Limitations of Clustered JMS

Consider the following limitations and behaviors before developing applications that use dynamic clusters and cluster-targeted JMS servers:

  • There are special considerations when a SAF agent-imported destination with an Exactly-Once QOS Level forwards messages to a distributed destination that is hosted on a mixed or dynamic cluster. See Best Practices for Using Clustered JMS Services.

  • WLST Offline does not support the assign command to target JMS servers to a dynamic cluster. Use the get and set commands instead.

  • Weighted distributed destinations (a deprecated type of distributed destination composed of a group of singleton destinations), are not supported on cluster-targeted JMS Servers.

  • Replicated distributed topics (RDTs) are not supported when any member destination is hosted on a cluster-targeted JMS server.

  • A custom persistent store with a Singleton Distribution Policy and an Always (or On-Failure) Migration Policy is required for a cluster-targeted JMS server to host standalone destinations.

  • There is no support for manually (administratively) forcing the migration or fail-back of a service instance that is generated from a cluster-targeted JMS artifact.

  • A path service must be configured if there are any distributed or imported destinations that are used to host Unit-of-Order (UOO) messages. In addition, such destinations need to be configured with a path service UOO routing policy instead of the default hashed UOO routing policy because hash-based UOO routing is not supported in cluster-targeted JMS.

Attempts to send Unit-of-Order messages to a distributed or imported destination that is hosted on a cluster-targeted JMS service and that still uses the hashed routing policy fail with an exception.

Note that a cluster-targeted path service must be configured to reference a store that has a Singleton Distribution Policy and an Always Migration Policy, as shown in the following sketch.
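
The following WLST sketch configures a cluster-targeted path service backed by a Singleton, Always-migrating store, and switches a distributed queue to path service UOO routing. It assumes an active edit session, and the names reuse or extend the hypothetical names from the earlier sketches.

    # Assumes an active edit session; names are assumptions for illustration.
    cd('/')
    cmo.createFileStore('PathServiceStore')
    cd('/FileStores/PathServiceStore')
    cmo.setDistributionPolicy('Singleton')                   # required for a path service store
    cmo.setMigrationPolicy('Always')                         # required for a path service store
    cmo.addTarget(getMBean('/Clusters/MyCluster'))
    cd('/')
    cmo.createPathService('MyPathService')
    cd('/PathServices/MyPathService')
    cmo.setPersistentStore(getMBean('/FileStores/PathServiceStore'))
    cmo.addTarget(getMBean('/Clusters/MyCluster'))
    # Route UOO messages through the path service instead of the default hashed routing.
    cd('/JMSSystemResources/MyJmsModule/JMSResource/MyJmsModule/UniformDistributedQueues/MyUDQ')
    cmo.setUnitOfOrderRouting('PathService')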

Interoperability and Upgrade Considerations of Cluster Targeted JMS Servers

The following considerations apply to interoperability and upgrade when using cluster-targeted JMS servers:

  • JMS clients, bridges, MDBs, SAF clients, and SAF agents from previous releases can communicate with cluster targeted JMS servers.

  • There are special considerations when a SAF agent-imported destination with an Exactly-Once QOS Level forwards messages to a distributed destination that is hosted on a Mixed or Dynamic cluster. See Best Practices for Using Clustered JMS Services.

  • No conversion path is available for moving data (messages) or configurations from non-cluster targeted JMS servers to cluster targeted JMS servers, or vice versa.

The migratable target-based service migration of JMS services is supported on a Configured Cluster.

For example, JMS servers and persistent stores can be targeted to a single manually configured WebLogic Server instance or to a single migratable target. Similarly, SAF agents can be targeted to a single manually configured WebLogic Server instance, to a single migratable target, or to a cluster.

See Automatic Migration of JMS Services.

Best Practices for Using Clustered JMS Services

The following best practices and design patterns apply when using clustered JMS services:

  • Prior to decreasing the size of a dynamic cluster, or decreasing the number of dynamic cluster members in a mixed cluster, delete the destinations that are hosted on a cluster-targeted JMS server before shutting down the WebLogic Server instance. For example (see the WLST sketch after this list):

    1. Pause the individual destination for production.

    2. Let the applications delete the destinations.

    3. Shut down the server instance.

  • Use cluster-targeted stores instead of default stores for cluster-targeted JMS servers and SAF agents.

  • When enabling high availability (that is, when the Migration Policy on the store is set to either On-Failure or Always), ensure that cluster leasing is configured. As a best practice, database leasing is preferred over consensus leasing. For more information, see Leasing in Administering Clusters for Oracle WebLogic Server.

  • When configuring destinations in a module, use a subdeployment that targets a specific clustered JMS server or SAF agent instead of using default targeting. This ensures that the destination creates members on exactly the desired JMS server instances.

  • When using Exactly-Once QOS Level SAF agents and SAF clients, as a best practice, ensure that the stores associated with the SAF agents are configured with the Migration Policy set to Always.

Also, in the event of a change in cluster size (particularly when shrinking), ensure that the backing tables (for a JDBC store) or files (for a file store) are not deleted or destroyed, so that they can be migrated to any available cluster members for continuous availability of the SAF service.
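
The following WLST sketch illustrates the first step of the shrink procedure described above by pausing production on every destination hosted by the member that is about to be retired. The member name and connection details are assumptions; draining and shutting down the server remain separate steps.

    # Pause production on all destinations hosted by the member that will be shut down.
    # Connection details and the member name 'dyn-server-4' are assumptions for illustration.
    connect('weblogic', 'welcome1', 't3://adminhost:7001')
    domainRuntime()
    for serverRt in domainRuntimeService.getServerRuntimes():
        if serverRt.getName() != 'dyn-server-4':
            continue
        for jmsServerRt in serverRt.getJMSRuntime().getJMSServers():
            for destRt in jmsServerRt.getDestinations():
                destRt.pauseProduction()          # step 1: stop new message production
                print 'Paused production on ' + destRt.getName()
    disconnect()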

Runtime MBean Instance Naming Syntax

The runtime MBeans associated with the JMS artifacts (such as persistent stores, JMS servers, SAF agents, messaging bridges, and the path service) distributed in a cluster are named based on the configured Distribution Policy. This is enforced by an MBean check.

The types of Distribution Policies are:
  • Distributed: Instances are automatically created and uniquely named after their home (preferred) host WebLogic Server instance the first time that server boots. The format is <configured-name>@<server-name>.

  • Singleton: An instance is named with its configured name followed by "01". The format is <configured-name>-01. No server name is added to the instance name. (See the runtime listing sketch after this list.)
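
The following WLST sketch lists the runtime JMS server instance names across the running servers, which makes the Distributed (@<server-name>) and Singleton (-01) naming visible. The connection details are assumptions.

    # List runtime JMS server instance names; connection details are assumptions.
    connect('weblogic', 'welcome1', 't3://adminhost:7001')
    domainRuntime()
    for serverRt in domainRuntimeService.getServerRuntimes():
        jmsRt = serverRt.getJMSRuntime()
        if jmsRt is None:
            continue
        for jmsServerRt in jmsRt.getJMSServers():
            # Distributed policy yields names like MyClusterJMSServer@dyn-server-1;
            # Singleton policy yields names like MySingletonJMSServer-01.
            print serverRt.getName() + ' hosts ' + jmsServerRt.getName()
    disconnect()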

The following sections describe the instance naming syntax for persistent stores:

Instance Naming Syntax for .DAT File

In the case of file stores (.DAT files) that are targeted to a cluster, an instance's data files are uniquely named based on the corresponding store instance name. For example, in Distributed mode, an instance's files are named <Store name>@<Server instance name>NNNNNN.DAT, where NNNNNN is a number ranging from 000000 to 999999.

Note:

A single file instance may create one or more .DAT files.

Instance Naming Syntax for .RGN File

The instance regions of a replicated store targeted to a cluster are uniquely named based on the corresponding store instance name. For example, in Distributed mode, a replicated store instance's region files are named <configured Replicated Store name>@<Server instance name>NNNNNN.RGN, where NNNNNN is a number ranging from 000000 to 999999.

JDBC Store Table Name Syntax

A JDBC store's database table name defaults to WLStore. The table name often must be changed using the JDBC store PrefixName setting to ensure that different JDBC stores use different tables; no two instances of a JDBC store can share the same backing table. The database table name is generated based on the Distribution Policy setting, as shown in Table 5-2.

Table 5-2 JDBC Store Table Name Syntax for the Cluster-Targeted Case

Default Name | PrefixName | Distribution Policy | Instance Name | Store Table Name
WLStore | myPrefix | Distributed | myServer | myPrefix_myServer_WLStore
NA | myPrefix. (ends with '.') | Distributed | myServer | myPrefix._myServer_WLStore
NA | NA | Distributed | myServer | myServer_WLStore
NA | myPrefix | Singleton | 01 | myPrefix_01_WLStore
NA | myPrefix. | Singleton | 01 | myPrefix.S_01_WLStore
NA | NA | Singleton | 01 | S_01_WLStore

Table 5-3 JDBC Store Table Name Syntax for the Non-Cluster (Single-Server) Targeted Case

Default Name | Prefix Name | Distribution Policy | Instance Name | Store Table Name
NA | myPrefix | NA | NA | myPrefixWLStore
NA | NA | NA | NA | WLStore