5 Simplified JMS Cluster and High Availability Configuration

This chapter describes the new cluster targeting enhancements and how they simplify the JMS configuration process. These enhancements make the JMS service dynamically scalable and highly available without the need for an extensive configuration process in a cluster or WebLogic Server Multitenant environment.

This chapter includes the following sections:

  • What are the WebLogic Clustering Options for JMS?

  • Understanding the Simplified JMS Cluster Configuration

  • Simplified JMS Configuration and High Availability Enhancements

  • Considerations and Limitations of Clustered JMS

  • Interoperability and Upgrade Considerations of Cluster-Targeted JMS Services

  • Best Practices for Using Clustered JMS Services

What are the WebLogic Clustering Options for JMS?

A WebLogic Cluster may contain manually configured servers, dynamically generated servers, or a mix of both. WebLogic Server has the following cluster types:

  • Configured: A cluster where each member server is manually configured and individually targeted to the cluster. The value of the Dynamic Cluster Size attribute for the cluster configuration is 0. See Clusters: Configuration: Servers. This type of cluster is also known as a Static Cluster.

  • Dynamic: A cluster where all the member servers are created using a server template. These servers are referred to as dynamic servers. The value of the Dynamic Cluster Size attribute in the Clusters: Configuration: Servers tab of the cluster configuration is greater than 0.

  • Mixed: A cluster where some member servers are created using a server template (Dynamic servers) and the remaining servers are manually configured (Configured servers). Because a mixed cluster contains dynamic servers, the value of the Dynamic Cluster Size attribute in the Clusters: Configuration: Servers tab of the cluster configuration is greater than 0.
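For example, the following minimal WLST offline sketch creates a Dynamic cluster from a server template. All names are hypothetical, and exact bean paths and attribute names can vary by release, so treat this as a sketch rather than a definitive script:

    # Create a server template and a dynamic cluster that uses it.
    readDomain('/domains/mydomain')        # hypothetical domain location

    cd('/')
    create('MyServerTemplate', 'ServerTemplate')
    cd('/ServerTemplate/MyServerTemplate')
    set('ListenPort', 7100)

    cd('/')
    create('MyCluster', 'Cluster')
    cd('/Cluster/MyCluster')
    create('MyCluster', 'DynamicServers')
    cd('DynamicServers/MyCluster')
    set('ServerTemplate', 'MyServerTemplate')
    set('ServerNamePrefix', 'dyn-server-')
    # A Dynamic Cluster Size greater than 0 is what makes this a Dynamic
    # (or, with manually configured members, a Mixed) cluster.
    set('DynamicClusterSize', 4)

    updateDomain()
    closeDomain()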

For more information about using dynamic servers, see Dynamic Clusters in Administering Clusters for Oracle WebLogic Server.

Understanding the Simplified JMS Cluster Configuration

The clustered JMS service can target JMS service artifacts (such as a JMS Server, SAF Agent, or Path Service) and their optional associated Persistent Stores to the same cluster. A Messaging Bridge is also a JMS artifact that can be targeted to a cluster, although it does not use a Store.

For JMS service artifacts that are configured to be distributed across the cluster, the cluster can automatically start one instance of the artifact (and its associated store, if applicable) on each cluster member as that member starts; the member is then called the "preferred server" for that instance. For artifacts that are not distributed, the system selects a single server in the cluster on which to start one instance of the artifact. The exact behavior depends on the distribution policy. See Simplified JMS Configuration and High Availability Enhancements.

In the case of a Dynamic or a Mixed cluster, the number of instances automatically grows as the cluster size grows. To dynamically scale a Dynamic cluster, or the Dynamic servers of a Mixed cluster, adjust the Dynamic Cluster Size attribute in the Clusters: Configuration: Servers tab of your cluster configuration.

Figure 5-1, "Dynamic Clustered JMS" shows the relationship between the JMS and a Dynamic cluster configuration in the config.xml file.

Figure 5-1 Dynamic Clustered JMS

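As a companion to the figure, the following minimal WLST offline sketch produces the kind of config.xml relationship shown above: a custom store and a JMS server that references it, both targeted to the same cluster. All names are hypothetical:

    # Target a custom store and its JMS server to one cluster.
    readDomain('/domains/mydomain')

    cd('/')
    create('MyFileStore', 'FileStore')
    cd('/FileStore/MyFileStore')
    set('Target', 'MyCluster')             # same cluster as the JMS server

    cd('/')
    create('MyJMSServer', 'JMSServer')
    cd('/JMSServer/MyJMSServer')
    set('PersistentStore', 'MyFileStore')  # associate the custom store
    set('Target', 'MyCluster')

    updateDomain()
    closeDomain()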

Using Custom Persistent Stores with Cluster-Targeted JMS Service Artifacts

The custom persistent store used by the JMS service artifacts must be targeted to the same cluster with appropriate attribute values configured to take advantage of cluster enhancements. However, cluster-targeted SAF Agents and JMS Servers can also continue to use the default store available on each cluster member, which does not offer any of the new enhancements discussed in this chapter. See Simplified JMS Configuration and High Availability Enhancements.

Targeting JMS Module Resources

JMS system modules continue to support two types of targeting, either of which can be used to take advantage of simplified cluster configuration.

  • Any default-targeted JMS resource in a module (a JMS resource that is not associated with a subdeployment) inherits the targeting of its parent module, and the parent module can be targeted to any type of cluster.

  • Module subdeployment targets can reference clustered JMS Servers or SAF Agents for hosting regular destinations or imported destinations, respectively. Using a cluster-targeted JMS Server or SAF Agent in a subdeployment eliminates the need to individually create and enumerate JMS Servers or SAF Agents in the subdeployment, which is particularly useful when deploying Uniform Distributed Destinations and imported destinations.

See Targeting Best Practices.

Note:

A module or its subdeployments cannot be directly targeted to a Dynamic cluster member server.
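To illustrate both targeting styles, the following minimal WLST offline sketch creates a system module that is default-targeted to a cluster, plus a subdeployment that references a single cluster-targeted JMS server; a Uniform Distributed Destination defined in the module against this subdeployment then gets a member on every instance of that JMS server. All names are hypothetical:

    # Default-target a module to a cluster; point its subdeployment at
    # one cluster-targeted JMS server instead of enumerating servers.
    readDomain('/domains/mydomain')

    cd('/')
    create('MyModule', 'JMSSystemResource')
    cd('/JMSSystemResource/MyModule')
    set('Target', 'MyCluster')         # default targeting for the module

    create('MySub', 'SubDeployment')
    cd('SubDeployment/MySub')
    set('Target', 'MyJMSServer')       # a cluster-targeted JMS server

    updateDomain()
    closeDomain()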

Simplified JMS Configuration and High Availability Enhancements

WebLogic Server supports high availability for JMS Service artifacts deployed in a cluster. Both server and service failure scenarios are handled by automatically migrating an artifact's instance to other running servers. During this process, the system evaluates the overall server load and availability and moves the instances accordingly.

Cluster targeting enhancements in this release of WebLogic Server eliminate many of the limitations that existed in previous releases:

  • In previous releases, only JMS Servers, Persistent Stores, and (partially) SAF Agents could be targeted to a cluster. In this release, support is extended to all JMS service artifacts, including SAF Agents, Path Services, and Messaging Bridges, and to all types of clusters (Configured, Mixed, and Dynamic).

  • Enhancements in this release allow you to easily configure and control the distribution behavior, as well as the JMS high availability (also known as JMS Automatic Service Migration) behavior, of all cluster-targeted JMS service artifacts. All of this configuration now exists in a single location: the Persistent Store for all artifacts that depend on that store, or the Messaging Bridge itself (which does not use a Store). This eliminates the need for the Migratable Targets used in previous releases.

  • Because the "logical" JMS artifacts are targeted to clusters, the system automatically creates any required "physical" instances on a cluster member when it joins the cluster. This allows the JMS service to automatically scale up as the cluster grows. With optional high availability configuration, the "physical" instances can restart or migrate in the event of a service failure, server failure, or server shutdown, making the JMS service highly available with minimal configuration.

The primary attributes that control the scalability and high availability behavior are the Distribution and Migration policies. In addition to these policies, a few additional attributes can be used to fine-tune the high availability behavior, such as restarting an instance in place (on the same server) before attempting to migrate it elsewhere. These policies and attributes are described in the following sections:

Notes:

  • To enable high availability of cluster-targeted JMS Service artifacts, you must configure Cluster Leasing. For more information, see "Leasing" in Administering Clusters for Oracle WebLogic Server.

  • It is a best practice to use the Database Leasing option instead of Consensus Leasing.
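Cluster Leasing is configured on the cluster itself. The following minimal WLST offline sketch enables Database Leasing, assuming a hypothetical data source named LeasingDataSource that already exists in the domain; exact attribute names can vary by release:

    # Configure Database Leasing for the cluster.
    readDomain('/domains/mydomain')

    cd('/Cluster/MyCluster')
    set('MigrationBasis', 'database')                        # not 'consensus'
    set('DataSourceForAutomaticMigration', 'LeasingDataSource')

    updateDomain()
    closeDomain()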

Defining the Distribution Policy for JMS Services

The Distribution Policy setting on a custom Persistent Store determines how the associated JMS service artifacts (JMS Server, SAF Agent, and Path Service) are distributed in a cluster; the same setting on a Messaging Bridge determines the bridge's distribution behavior.

Following are the options that control the distribution behavior of the JMS Service artifact:

  • Distributed: In this mode, the cluster automatically ensures that there is at least one instance per server. Instances are created automatically and named uniquely after their home (preferred) host WebLogic Server the first time that server boots (for example, <configured-name>@<server-name>). When the cluster starts, the system ensures that all messaging service instances are up if possible and, when applicable, attempts an even distribution of the instances. In addition, each instance automatically tries to start on its home (preferred) server first. Depending on the Migration Policy, instances can automatically migrate, and even fail back, as needed to ensure high availability and even load balancing across the cluster.

    Note:

    This option is the default value for this policy. It is also required for cluster-targeted SAF Agents and for cluster-targeted JMS Servers that host Uniform Distributed Destinations (UDDs).

  • Singleton: In this mode, a JMS Server or a Path Service has exactly one instance per cluster. The singleton instance is named after its configured name with "-01" appended as a suffix (for example, <configured-name>-01); no server name is added to the instance name.

    Note:

    This option is required for Path Service and cluster-targeted JMS Servers that host singleton or standalone (non-distributed) destinations.

For more information about Distribution Policy, see "JDBC Store: HA Configuration" in Oracle WebLogic Server Administration Console Online Help.
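As an illustration, the following minimal WLST offline sketch sets the Distribution Policy on a hypothetical cluster-targeted custom store (a JDBC store is configured the same way, and a Messaging Bridge carries the equivalent setting itself):

    # Set the Distribution Policy on a cluster-targeted store.
    readDomain('/domains/mydomain')

    cd('/FileStore/MyFileStore')
    set('DistributionPolicy', 'Distributed')   # use 'Singleton' for a store
                                               # backing a Path Service or
                                               # standalone destinations

    updateDomain()
    closeDomain()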

Defining the Migration Policy for JMS Services

The Migration Policy controls migration and restart behavior of cluster-targeted JMS service artifact instances. For high availability and service migration, set the migration policy as follows on the associated store:

  • Off: This option disables both migration and restart in place.

  • Always: This option enables the system to automatically migrate instances in all situations, including an administrative shutdown and a crash or bad health of the hosting server or subsystem service. Depending on the restart-in-place configuration, the system first attempts to restart a failing Store on its current server before migrating it to another server.

  • On-Failure: This option enables the system to automatically migrate instances only on a failure or crash (bad health) of the hosting server. Instances do not migrate on an administrative shutdown; instead, they restart when the server restarts. Depending on the restart-in-place configuration, the system first attempts to restart a failing Store on its current server before migrating it to another server.

Note:

  • To enable support for cluster-targeted JMS Service artifacts with the Always or On-Failure migration policy, you must configure Cluster Leasing. For more information, see "Leasing" in Administering Clusters for Oracle WebLogic Server.

  • It is a best practice to use the Database Leasing option instead of Consensus Leasing.

  • When a distributed instance is migrated from its preferred server, it will try to fail back when the preferred server is restarted.

For more information about Migration Policy, see "JDBC Store: HA Configuration" in Oracle WebLogic Server Administration Console Online Help.
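Continuing with the same hypothetical store, the following minimal WLST offline sketch sets the Migration Policy:

    # Set the Migration Policy on a cluster-targeted store.
    readDomain('/domains/mydomain')

    cd('/FileStore/MyFileStore')
    set('MigrationPolicy', 'On-Failure')   # one of 'Off', 'On-Failure', 'Always'

    updateDomain()
    closeDomain()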

Additional Configuration Options for JMS Services

Table 5-1 describes the additional Store configuration options available for automatic migration and high availability of JMS Services. The following settings apply when the JMS artifact is cluster-targeted and the Migration Policy is set to On-Failure or Always.

Table 5-1 Configuration Properties for JMS Service Migration

Property: Restart In Place
Default Value: True
Description: Defines how the system responds to a JMS service failure within an otherwise healthy WebLogic Server instance. If a service fails and this property is enabled, the system first attempts to restart the store and its associated service artifacts on the same server before migrating them to another server.
Note: This attribute does not apply when an entire server fails.

Property: Seconds Between Restarts
Default Value: 30
Description: If Restart In Place is enabled, specifies the delay, in seconds, between restart attempts on the same server.

Property: Number of Restart Attempts
Default Value: 6
Description: If Restart In Place is enabled, specifies the number of restart attempts the system makes before trying to migrate the artifact instance to another server.

Property: Initial Boot Delay Seconds
Default Value: -1
Description: Controls how quickly subsequent instances are started on a server after the first instance starts, which prevents the system from becoming overloaded at startup. A value of 0 indicates that the system does not wait, which may lead to overload situations. A value of -1 indicates that the system default of 60 seconds is used.

Property: Failback Delay Seconds
Default Value: -1
Description: Specifies the time to wait before failing an artifact's instance back to its preferred server. A value greater than 0 specifies the delay, in seconds, before failing a JMS artifact back to its user-preferred server. A value of 0 specifies that the instance never fails back. A value of -1 specifies the default, which is to fail back immediately with no delay.

Property: Partial Cluster Stability Seconds
Default Value: -1
Description: Specifies the amount of time, in seconds, that a partially started cluster waits before starting all cluster-targeted JMS artifact instances configured with a Migration Policy of Always or On-Failure. This delay helps ensure that services are balanced across the cluster even when servers are started sequentially. A value greater than 0 specifies the delay, in seconds, before a partially started cluster starts dynamically configured services. A value of 0 specifies no delay in starting all instances on the available servers. A value of -1 specifies the default delay of 240 seconds.


For more information about these properties, see "JDBC Store: HA Configuration" in Oracle WebLogic Server Administration Console Online Help.
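The following minimal WLST offline sketch shows the Table 5-1 attributes being set on the same hypothetical store; the values shown are the documented defaults:

    # Tune restart and migration behavior on a cluster-targeted store.
    readDomain('/domains/mydomain')

    cd('/FileStore/MyFileStore')
    set('RestartInPlace', true)
    set('SecondsBetweenRestarts', 30)
    set('NumberOfRestartAttempts', 6)
    set('InitialBootDelaySeconds', -1)        # -1 = system default (60 seconds)
    set('FailbackDelaySeconds', -1)           # -1 = fail back immediately
    set('PartialClusterStabilitySeconds', -1) # -1 = default (240 seconds)

    updateDomain()
    closeDomain()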

Considerations and Limitations of Clustered JMS

The following section provides information on limitations and other behaviors you should consider before developing applications that use clusters and cluster-targeted JMS services.

  • There are special considerations when a SAF Agent imported destination with an Exactly-Once QOS Level forwards messages to a distributed destination that is hosted on a Mixed or Dynamic cluster. See Best Practices for Using Clustered JMS Services.

  • WLST offline does not support the assign command for targeting JMS Servers to a dynamic cluster. Use the get and set commands instead (see the WLST sketch at the end of this section).

  • Weighted distributed destinations (a deprecated type of distributed destination composed of a group of singleton destinations), are not supported on cluster-targeted JMS Servers.

  • Replicated distributed topics (RDTs) are not supported when any member destination is hosted on a cluster-targeted JMS Server.

  • There is no support for manually (administratively) forcing the migration or fail-back of a service instance that is generated from a cluster-targeted JMS artifact.

  • A Path Service must be configured if any distributed or imported destinations will be used to host Unit-of-Order (UOO) messages. In addition, such destinations must be configured with the Path Service UOO routing policy instead of the default hashed UOO routing policy, because hash-based UOO routing is not supported in cluster-targeted JMS.

    Otherwise, attempts to send UOO messages to a distributed or imported destination that is hosted on a cluster-targeted JMS service will fail with an exception.

    Note that a cluster-targeted Path Service must be configured to reference a Store that has a Singleton distribution policy and an Always migration policy.
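As noted above, targeting a JMS server to a dynamic cluster in WLST offline is done with get and set rather than assign. A minimal sketch, with hypothetical names:

    # Retarget an existing JMS server to a dynamic cluster in WLST offline.
    readDomain('/domains/mydomain')

    cd('/JMSServer/MyJMSServer')
    print get('Target')           # inspect the current target, if any
    set('Target', 'MyCluster')    # target the dynamic cluster by name

    updateDomain()
    closeDomain()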

Interoperability and Upgrade Considerations of Cluster-Targeted JMS Services

The following section provides information on interoperability and upgrade considerations when using cluster-targeted JMS Services:

  • JMS clients, bridges, MDBs, SAF clients, and SAF agents from previous releases can communicate with cluster-targeted JMS Servers.

  • There are special considerations when a SAF Agent imported destination with an Exactly-Once QOS Level forwards messages to a distributed destination that's hosted on a Mixed or Dynamic cluster. See Best Practices for Using Clustered JMS Services.

  • No conversion path is available for moving data (messages) or configurations from non-cluster targeted JMS Servers to cluster targeted JMS Servers, or vice versa.

Migratable Target based service migration of JMS services is still supported on a Configured cluster.

For example, a messaging configuration of JMS Servers and persistent stores can target a single manually configured WebLogic Server or a single Migratable Target. Similarly, SAF Agents can target a single manually configured WebLogic Server, a single Migratable Target, or a Configured cluster.

See Automatic Migration of JMS Services.

Best Practices for Using Clustered JMS Services

The following section provides information on best practices and design patterns:

  • Prior to decreasing the size of a Dynamic cluster, or the number of Dynamic-cluster members in a Mixed cluster, drain the destinations that are hosted on the affected cluster-targeted JMS Servers before shutting down the WebLogic Server instances. For example (see the WLST sketch at the end of this section):

    1. Pause production on the individual destinations.

    2. Let the applications drain the destinations.

    3. Shut down the server instance.

  • Use cluster-targeted stores instead of default stores for cluster-targeted JMS Servers and SAF Agents.

  • When enabling high availability (that is, when the migration-policy on the store is set to either On-Failure or Always), ensure that cluster leasing is configured. As a best practice, Database Leasing is preferred over Consensus Leasing. For more information, see "Leasing" in Administering Clusters for Oracle WebLogic Server.

  • When configuring destinations in a module, use a subdeployment that targets a specific clustered JMS Server or SAF Agent instead of using default targeting. This ensures that the destination creates members on exactly the desired JMS Server instances.

  • When using Exactly-Once QOS Level SAF agents and SAF clients, as a best practice, ensure that Stores associated with the SAF Agents are configured with the migration-policy set to Always.

    Also, in the event of a change in the cluster size (particularly when shrinking it), ensure that the backing tables (for a JDBC store) or files (for a file store) are not deleted or destroyed, so that they can be migrated to any available cluster member for continuous availability of the SAF service.
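The drain procedure in the first bullet above can be scripted with WLST online. The following minimal sketch assumes hypothetical credentials, a member server named dyn-server-4, and a JMS server instance named MyJMSServer@dyn-server-4 (per the <configured-name>@<server-name> naming convention); exact runtime MBean paths can vary by release:

    # Pause production, wait for destinations to drain, then shut down.
    from time import sleep

    connect('admin', 'password', 't3://adminhost:7001')
    domainRuntime()
    cd('ServerRuntimes/dyn-server-4/JMSRuntime/dyn-server-4.jms' +
       '/JMSServers/MyJMSServer@dyn-server-4')

    for dest in cmo.getDestinations():        # step 1: stop new production
        dest.pauseProduction()
    for dest in cmo.getDestinations():        # step 2: let consumers drain
        while dest.getMessagesCurrentCount() > 0:
            sleep(5)

    shutdown('dyn-server-4', 'Server')        # step 3: shut down the member

    disconnect()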