4 Configuring Advanced JMS System Resources

You can learn how to configure advanced WebLogic JMS resources for Oracle WebLogic Server, such as a distributed destination in a clustered environment.

This chapter includes the following sections:

Configuring WebLogic JMS Clustering

A WebLogic Server cluster is a group of servers in a domain that work together to provide a more scalable, more reliable application platform than a single server. A cluster appears to its clients as a single server but it is a group of servers acting as one.

Advantages of JMS Clustering

The advantages of clustering for JMS include the following:

  • Load balancing of destinations across multiple servers in a cluster

    An administrator can establish load balancing of destinations across multiple servers in the cluster by:

    • Configuring a JMS server and targeting a WebLogic cluster. See Simplified JMS Cluster and High Availability Configuration.

    • Configuring multiple JMS servers and targeting them to the configured WebLogic Servers.

    • Configuring multiple JMS servers and targeting them to a set of migratable targets.

    Each JMS server is deployed on exactly one WebLogic Server instance and handles requests for a set of destinations.

  • High availability of destinations

    • Distributed destinations: The queue and topic members of a distributed destination are usually distributed across multiple servers within a cluster, with each member belonging to a separate JMS server. Applications that use distributed destinations are more highly available than applications that use simple destinations because WebLogic JMS provides load balancing and failover for member destinations of a distributed destination within a cluster. For more information on distributed destinations, see Configuring Distributed Destination Resources.

    • Store-and-Forward: JMS modules use the SAF service to enable local JMS message producers to reliably send messages to remote queues or topics. If the destination is not available at the moment the messages are sent, either because of network problems or system failures, then the messages are saved on a local server instance, and are forwarded to the remote destination as soon as it becomes available. See Understanding the Store-and-Forward Service in Administering the Store-and-Forward Service for Oracle WebLogic Server.

    • For automatic failover, WebLogic Server supports migration at the server level—a complete server instance, and all of the services it hosts can be migrated to another machine, either automatically or manually. See Whole Server Migration in Administering Clusters for Oracle WebLogic Server.

      WebLogic Server also supports automatic migration at the service level for JMS, where a failed service instance can be restarted in place on its current WebLogic Server JVM or migrated to another running JVM in the same cluster. This is termed 'service migration' and there are two approaches for configuring it based on how JMS is configured and targeted. For more information, see Migratable Target and Simplified JMS Configuration and High Availability Enhancements. The latter is recommended for new configurations.

  • Cluster-wide, transparent access to destinations from any server in a cluster

    An administrator can establish cluster-wide, transparent access to destinations from any server in the cluster either by using the default connection factories for each server instance in the cluster, or by configuring one or more connection factories and targeting them to one or more server instances in the cluster, or to the entire cluster. This way, each connection factory can be deployed on multiple WebLogic Server instances. Connection factories are described in more detail in Connection Factory Configuration.

  • Scalability

    • Load balancing of destinations across multiple servers in the cluster.

    • Distribution of the application load across multiple JMS servers through connection factories, thus reducing the load on any single JMS server and enabling session concentration by routing connections to specific servers.

  • Server affinity for JMS Clients

    When configured for the cluster, the load-balancing algorithms (round-robin-affinity, weight-based-affinity, or random-affinity) provide server affinity for JMS client connections. If a JMS application has a connection to a given server instance, JMS attempts to establish new JMS connections to the same server instance. For more information on server affinity, see Load Balancing in a Cluster in Administering Clusters for Oracle WebLogic Server.

For more information about the features and benefits of using WebLogic clusters, see Understanding WebLogic Server Clustering in Administering Clusters for Oracle WebLogic Server.

How JMS Clustering Works

An administrator can establish cluster-wide, transparent access to JMS destinations from any server in a cluster, either by using the default connection factories for each server instance in a cluster, or by configuring one or more connection factories and targeting them to one or more server instances in a cluster, or to an entire cluster. This way, each connection factory can be deployed on multiple WebLogic Server instances. For information about configuring and deploying connection factories, see Connection Factory Configuration Parameters.

A messaging application uses a Java Naming and Directory Interface (JNDI) context to look up a connection factory and then uses the connection factory to create a connection from the client into the cluster. If the client application is located outside of the connection factory's cluster, the connection will implicitly connect to one of the servers in the cluster that are among the targets of the connection factory (this server may be different from the server that the JNDI context itself is using). If the application is running on a WebLogic Server instance, and the same server is among the targets of the connection factory, then the client connection will simply connect to the local WebLogic Server instance. Each JMS server handles requests for a set of destinations. If requests for destinations are sent to a WebLogic Server connection host that is not hosting a JMS server or destinations, or are load balanced to a different WebLogic Server instance, the requests are forwarded by the connection host to the appropriate WebLogic Server instance in the same cluster that is hosting the desired JMS server and its destinations.
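As a sketch, the JNDI bootstrap a remote client would use for such a lookup might look like the following. The host names, port numbers, and the jms/MyCF JNDI name are hypothetical; the lookup and connection calls themselves require a WebLogic client JAR on the classpath, so they are shown in comments only:

```java
import java.util.Hashtable;
import javax.naming.Context;

public class JmsClusterClient {

    public static Hashtable<String, String> clusterEnv() {
        // JNDI environment for a client outside the cluster; listing several
        // cluster addresses allows the initial connection to be load balanced.
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL,
                "t3://host1:7001,host2:7001,host3:7001"); // hypothetical hosts
        return env;
    }

    public static void main(String[] args) {
        Hashtable<String, String> env = clusterEnv();
        // With a WebLogic client JAR available, the lookup and connection would be:
        //   Context ctx = new javax.naming.InitialContext(env);
        //   javax.jms.ConnectionFactory cf =
        //       (javax.jms.ConnectionFactory) ctx.lookup("jms/MyCF");
        //   javax.jms.Connection con = cf.createConnection();
        //   // The connection may land on any server targeted by the factory.
        System.out.println(env.get(Context.PROVIDER_URL));
    }
}
```

The multi-address t3 URL is only the bootstrap; once connected, requests for destinations hosted elsewhere are forwarded within the cluster as described above.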

The administrator can also configure multiple JMS servers on the various servers in the cluster, as long as the JMS servers are uniquely named, and can then target JMS queue or topic resources to the various JMS servers. Alternatively, an administrator can target a JMS server to a cluster, and the cluster automatically creates an instance of the JMS server on each server. The application uses the Java Naming and Directory Interface (JNDI) to look up a connection factory and create a connection to establish communication with a JMS server. Each JMS server handles requests for a set of destinations. Requests for destinations not handled by a JMS server are forwarded to the appropriate WebLogic Server instance. For information about configuring and deploying JMS servers, see JMS Server Configuration.

JMS Clustering Naming Requirements

There are naming requirements when configuring JMS objects and resources, such as JMS servers, JMS modules, and JMS resources, to work in a clustered environment in a single WebLogic domain or in a multi-domain environment. See JMS Configuration Naming Requirements.

Distributed Destination Within a Cluster

A distributed destination resource is a single set of destinations (queues or topics) that is accessible as a single, logical destination to a client (for example, a distributed topic has its own JNDI name). The members of the unit are usually distributed across multiple servers within a cluster, with each member belonging to a separate JMS server. Applications that use distributed destinations are more highly available than applications that use simple destinations because WebLogic Server provides load balancing and failover for member destinations of a distributed destination within a cluster. See Configuring Distributed Destination Resources.

JMS Services As a Migratable Service Within a Cluster

In addition to being part of a whole server migration, where all services hosted by a server can be migrated to another machine, JMS services are also part of the singleton service migration framework. This allows an administrator, for example, to migrate a JMS server and all of its destinations to another WebLogic Server within a cluster in response to a server failure or for scheduled maintenance. This includes both scheduled migrations as well as automatic migrations. For more information about JMS service migration, see Migration of JMS-related Services.

Configuration Guidelines for JMS Clustering

In order to use WebLogic JMS in a clustered environment, follow these guidelines:

  1. Configure your clustered environment as described in Setting Up WebLogic Clusters in Administering Clusters for Oracle WebLogic Server.
  2. Identify targets for any user-defined JMS connection factories using the WebLogic Server Administration Console. For connection factories, you can identify either a single-server, a cluster, or a migratable target.

    For more information about these connection factory configuration attributes, see Connection Factory Configuration.

  3. Optionally, identify migratable server targets or clusters for JMS services (JMS servers and persistent stores) using the WebLogic Server Administration Console.

    For more information about JMS server configuration attributes, see JMS Server Configuration.

  4. Optionally, you can configure the physical JMS destinations in a cluster as part of a virtual distributed destination set, as discussed in Distributed Destination Within a Cluster. Note that it is a best practice to always target destinations using a subdeployment target that in turn references one or more specific SAF Agents (for imported destinations) or JMS Servers (for all other types of destinations).

What About Failover?

Note:

The WebLogic JMS Automatic Reconnect feature is deprecated. The JMS connection factory configuration, the weblogic.jms.extensions.WLConnection API, and the related JMSContext extension API for this feature will be removed or ignored in a future release. Oracle recommends that client applications handle connection exceptions as described in Client Resiliency Best Practices in Administering JMS Resources for Oracle WebLogic Server.

The resiliency of a JMS system is fundamentally addressed at two levels. First at the JVM and service level via migration as described later in this section and in the following section, and second at the API level by ensuring clients reconnect and retry after a failure. For more information about client reconnection and retry after a failure, see Client Resiliency Best Practices in Administering JMS Resources for Oracle WebLogic Server.

In addition, implementing the automatic service migration feature ensures that exactly-once services, like JMS, do not introduce a single point of failure for dependent applications in the cluster. For dynamic-cluster targeted JMS servers, failover, failback, and restart-in-place features are available. See Migration of JMS-related Services. WebLogic Server also supports migration at the server level—a complete server instance, and all of the services it hosts, can be migrated to another machine, either automatically or manually. See Whole Server Migration in Administering Clusters for Oracle WebLogic Server.

In a clustered environment, WebLogic Server also offers service continuity in the event of a single server failure by allowing you to configure distributed destinations, where the members of the unit are usually distributed across multiple servers within a cluster, with each member belonging to a separate JMS server. See Distributed Destination Within a Cluster.

Oracle also recommends implementing high-availability clustering software, which provides an integrated, out-of-the-box solution for WebLogic Server-based applications.

Migration of JMS-related Services

JMS-related services are singleton services and therefore are not active on all server instances in a cluster. Instead, the services are pinned to a single server in the cluster to preserve data consistency.

To ensure that singleton JMS services do not introduce a single point of failure for dependent applications in the cluster, you can configure JMS-related services for high availability by using cluster-targeted JMS or migratable-target JMS. See Migratable Target and Simplified JMS Configuration and High Availability Enhancements. The latter is recommended for new configurations. JMS services can also be manually migrated before performing scheduled server maintenance.

Migratable JMS-related services include:

  • JMS Server: a management container for the queues and topics in the JMS modules that are targeted to it. See JMS Server Configuration.

  • Store-and-Forward (SAF) Service: forwards messages between local sending and remote receiving endpoints, even when the remote endpoint is not available at the moment the messages are sent. Only the sending SAF agents configured for JMS SAF (sending capability only) are migratable. See Understanding the Store-and-Forward Service in Administering the Store-and-Forward Service for Oracle WebLogic Server.

  • Path Service: a persistent map that can be used to store the mapping of a group of messages in a JMS Message Unit-of-Order to a messaging resource in a cluster. One path service is configured per cluster. See Using the WebLogic Path Service.

  • Custom Persistent Store: a user-defined, disk-based file store or JDBC-accessible database for storing subsystem data, such as persistent JMS messages or store-and-forward messages. See Using the WebLogic Persistent Store in Administering Server Environments for Oracle WebLogic Server.

See Understanding the Service Migration Framework in Administering Clusters for Oracle WebLogic Server.

Automatic Migration of JMS Services

An administrator can configure migratable targets so that hosted JMS services are automatically migrated from the current unhealthy hosting server to a healthy active server with the help of the Health Monitoring subsystem. For more information about configuring automatic migration of JMS-related services, see Roadmap for Configuring Automatic Migration of JMS-Related Services in Administering Clusters for Oracle WebLogic Server.

Manual Migration of JMS Services

An administrator can manually migrate JMS-related services to a healthy server if the host server fails or before performing server maintenance. For more information about configuring manual migration of JMS-related services, see Roadmap for Configuring Manual Migration of JMS-Related Services in Administering Clusters for Oracle WebLogic Server.

Note:

Manual migration requires migratable targets, and therefore is not an option when taking advantage of Simplified JMS Cluster and High Availability Configuration in a cluster targeted JMS configuration. This type of configuration has much less of a need for manual migration as Simplified JMS Cluster and High Availability Configuration supports automatic fail-back.

Persistent Store High Availability

As discussed in What About Failover?, a JMS service, including a custom persistent store, can be migrated as part of the "whole server" migration feature, or as part of a "service-level" migration for migratable JMS-related services.

File stores must use the same files throughout their lifetime, regardless of where they run. This means it is the administrator's responsibility to ensure that a migrated file store can access the same files that it updated before it was migrated.

Migratable custom file stores can be configured on a shared disk that is available to the migratable target servers in the cluster or, if using a migratable target HA configuration, can be migrated to a backup server target by using pre/post-migration scripts. For more information about migrating persistent stores, see Custom Store Availability for JMS Services in Administering Clusters for Oracle WebLogic Server. See File Locations in Administering the WebLogic Persistent Store.
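As an illustration, a migratable custom file store in the domain's config.xml might point at a shared directory visible to every candidate server. This is a minimal sketch with hypothetical store, directory, and target names; the "(migratable)" suffix is how config.xml denotes a migratable target:

```xml
<file-store>
  <name>JMSFileStore1</name>
  <!-- Shared directory that every candidate server in the migratable target can access -->
  <directory>/shared/stores/JMSFileStore1</directory>
  <target>server1 (migratable)</target>
</file-store>
```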

Similarly, default file stores must be located in a shared directory location when setting up whole server migration or JTA migration.

Finally, migrated JDBC Stores must still access the same database and schema as their original location.

Using the WebLogic Path Service

The WebLogic Server path service is a persistent map used for storing the mapping between a group of messages in a JMS Message Unit-of-Order and a messaging resource in a cluster.

The path service provides a way to enforce ordering by pinning messages to a member of a cluster that is hosting servlets, distributed queue members, or Store-and-Forward agents. One path service is configured per cluster. For more information about the Message Unit-of-Order feature, see Using Message Unit-of-Order in Developing JMS Applications for Oracle WebLogic Server.

To configure a path service in a cluster, see Configure path services in the Oracle WebLogic Server Administration Console Online Help.
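For example, a uniform distributed queue can opt into path service-based routing in its JMS module descriptor. The sketch below is illustrative: the queue, subdeployment, and JNDI names are hypothetical, and the unit-of-order-routing element (which selects the path service rather than the default hashing scheme) should be verified against the weblogic-jms schema for your release:

```xml
<weblogic-jms xmlns="http://xmlns.oracle.com/weblogic/weblogic-jms">
  <uniform-distributed-queue name="MyOrderedUDQ">
    <sub-deployment-name>jmsservergroup</sub-deployment-name>
    <jndi-name>jms/MyOrderedUDQ</jndi-name>
    <!-- Route each Unit-of-Order via the cluster's path service instead of hashing -->
    <unit-of-order-routing>PathService</unit-of-order-routing>
  </uniform-distributed-queue>
</weblogic-jms>
```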

Path Service High Availability

There are different ways to achieve high availability for the path service:

  • You can use whole server migration to restart the WebLogic Server instance that runs the path service. See Whole Server Migration in Administering Clusters for Oracle WebLogic Server.

  • The path service can use a cluster targeted store with singleton distribution policy. See Simplified JMS Cluster and High Availability Configuration.

  • The path service and its store can be configured to use a migratable target. However, a migratable path service cannot use the default store, so a custom store must be configured and targeted to the same migratable target. As an additional best practice, the path service and its custom store should be the only users of that migratable target. See Understanding the Service Migration Framework in Administering Clusters for Oracle WebLogic Server.

Implementing Message UOO with a Path Service

Consider the following when implementing Message Unit-of-Order in conjunction with path service-based routing:

  • Each path service mapping is stored in a persistent store. When configuring a path service, select a persistent store that takes advantage of a high availability solution. See Persistent Store High Availability.

  • If one or more producers send messages using the same Unit-of-Order name then all messages they produce will share the same path entry and have the same member queue destination.

  • If the required route for a Unit-of-Order name is unreachable, the producer sending the message throws a JMSOrderException. The exception is thrown because the JMS messaging system cannot meet the required quality of service: only one distributed destination member consumes messages for a particular Unit-of-Order name.

  • A path entry is automatically deleted when the last producer and last message reference are deleted.

  • Depending on your system, using the path service may slow system throughput due to remote disk operations that create, read, and delete path entries.

  • A distributed queue and its individual members each represent a unique destination. For example:

    DXQ1 is a distributed queue with queue members Q1 and Q2. DXQ1 also has a Unit-of-Order name value of Fred mapped by the Path Service to the Q2 member.

    • If message M1 is sent to DXQ1, then it uses the Path Service to define a route to Q2.

    • If message M1 is sent directly to Q2, then, no routing by the Path Service is performed. This is because the application selected Q2 directly and the system was not asked to pick a member from a distributed destination.

    • If you want the system to use the path service, send messages to the distributed destination. If not, send directly to the member.

    • You can have more than one destination that has the same Unit-of-Order names in a distributed queue. For example:

      Queue Q3 also has a Unit-of-Order name value of Fred. If Q3 is added to DXQ1, then there are now two destinations that have the same Unit-of-Order name in a distributed queue. Even though Q3 and DXQ1 share the same Unit-of-Order name value Fred, each has a unique route and destination, which allows the server to continue to provide the correct message ordering for each destination.

  • Empty queues before removing them from, or adding them to, a distributed queue. Although the path service removes the path entry for a removed member, there is a short transition period during which producing a message may throw a JMSOrderException because the queue has been removed but the path entry still exists.

Configuring Foreign Server Resources to Access Third-Party JMS Providers

WebLogic JMS allows you to reference third-party JMS providers within a local WebLogic Server JNDI tree. With Foreign Server resources in JMS modules, you can quickly map a foreign JMS provider so that its associated connection factories and destinations appear in the WebLogic JNDI tree as local JMS objects.

Foreign Server resources can also be used to reference remote instances of WebLogic Server in another cluster or domain in the local WebLogic JNDI tree. For more information about integrating remote and foreign JMS providers, see Enhanced Java EE Support for Using WebLogic JMS With EJBs and Servlets in Developing JMS Applications for Oracle WebLogic Server.

These sections provide more information about how a foreign server works and a sample configuration for accessing a remote MQSeries JNDI provider.

How WebLogic JMS Accesses Foreign JMS Providers

When a foreign JMS server is deployed, it creates local connection factory and destination objects in the WebLogic Server JNDI. Then when a foreign connection factory or destination object is looked up on the local server, that object performs the actual lookup on the remote JNDI directory, and the foreign object is returned from that directory.

This method makes it easier to configure multiple WebLogic Messaging Bridge destinations, because the foreign server moves the JNDI initial context factory and connection URL configuration details outside of your Messaging Bridge destination configurations. You need only provide the foreign connection factory and destination JNDI name for each object.

For more information on configuring a messaging bridge, see Configuring and Managing a Messaging Bridge in Administering the WebLogic Messaging Bridge for Oracle WebLogic Server.

The ease-of-configuration concept also applies to configuring WebLogic Servlets, EJBs, and Message-Driven Beans (MDBs) with WebLogic JMS. For example, the weblogic-ejb-jar.xml file in the MDB can have a local JNDI name, and you can use the foreign JMS server to control where the MDB receives messages from. For example, you can deploy the MDB in one environment to talk to one JMS destination and server, and you can deploy the same weblogic-ejb-jar.xml file to a different server and have it talk to a different JMS destination without having to unpack and edit the weblogic-ejb-jar.xml file.
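For instance, the relevant weblogic-ejb-jar.xml fragment might be sketched as follows. The bean name and JNDI name are hypothetical; the point is that the descriptor references only a local JNDI name, and the foreign server configuration decides which actual destination that name maps to in each environment:

```xml
<weblogic-ejb-jar xmlns="http://xmlns.oracle.com/weblogic/weblogic-ejb-jar">
  <weblogic-enterprise-bean>
    <ejb-name>OrderMDB</ejb-name>
    <message-driven-descriptor>
      <!-- Local JNDI name; the foreign server maps it to the real destination -->
      <destination-jndi-name>mqseries.QUEUE1</destination-jndi-name>
    </message-driven-descriptor>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>
```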

Creating Foreign Server Resources

A Foreign Server resource in a JMS module represents a JNDI provider that is outside the WebLogic JMS server. It contains information that allows a local WebLogic Server instance to reach a remote JNDI provider, thereby allowing a number of foreign connection factory and destination objects to be defined on one JNDI directory.

The WebLogic Server Administration Console lets you configure, modify, target, and delete foreign server resources in a system module. For a road map of the foreign server tasks, see Configure foreign servers in the Oracle WebLogic Server Administration Console Online Help.

Note:

For information about configuring and deploying JMS application modules in an enterprise application, see Configuring JMS Application Modules for Deployment.

Some foreign server options are dynamically configured. When options are modified at run time, only incoming messages are affected; stored messages are not affected. For more information about the default values for all foreign server options, see ForeignServerBean in the MBean Reference for Oracle WebLogic Server.

After defining a foreign server, you can configure connection factory and destination objects. You can configure one or more connection factories and destinations (queues or topics) for each foreign server.

Creating Foreign Connection Factory Resources

A Foreign Connection Factory resource in a JMS module contains the JNDI name of the connection factory in the remote JNDI provider, the JNDI name that the connection factory is mapped to in the local WebLogic Server JNDI tree, and an optional user name and password.

The foreign connection factory creates non-replicated JNDI objects on each WebLogic Server instance that the parent foreign server is targeted to. (To create the JNDI object on every node in a cluster, target the foreign server to the cluster.)

Creating Foreign Destination Resources

A Foreign Destination resource in a JMS module represents either a queue or a topic. It contains the destination JNDI name that is looked up on the foreign JNDI provider and the JNDI name that the destination is mapped to on the local WebLogic Server. When the foreign destination is looked up on the local server, a lookup is performed on the remote JNDI directory, and the destination object is returned from that directory.

Sample Configuration for MQSeries JNDI

Table 4-1 provides a sample configuration for accessing a remote MQSeries JNDI provider.

Table 4-1 Sample MQSeries Configuration

Foreign Server (Name: MQJNDI)

  • JNDI Initial Context Factory: com.sun.jndi.fscontext.RefFSContextFactory
  • JNDI Connection URL: file:/MQJNDI/
  • JNDI Properties: (If necessary, enter a comma-separated name=value list of properties.)

Foreign Connection Factory (Name: MQ_QCF)

  • Local JNDI Name: mqseries.QCF
  • Remote JNDI Name: QCF
  • Username: weblogic_jms
  • Password: weblogic_jms

Foreign Destination 1 (Name: MQ_QUEUE1)

  • Local JNDI Name: mqseries.QUEUE1
  • Remote JNDI Name: QUEUE_1

Foreign Destination 2 (Name: MQ_QUEUE2)

  • Local JNDI Name: mqseries.QUEUE2
  • Remote JNDI Name: QUEUE_2
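Expressed as a JMS module descriptor, the Table 4-1 sample might be sketched as follows. The element names are taken from the weblogic-jms schema as we understand it and should be verified against your release; in a real configuration the password would be stored encrypted rather than in clear text, so it is omitted here:

```xml
<weblogic-jms xmlns="http://xmlns.oracle.com/weblogic/weblogic-jms">
  <foreign-server name="MQJNDI">
    <initial-context-factory>com.sun.jndi.fscontext.RefFSContextFactory</initial-context-factory>
    <connection-url>file:/MQJNDI/</connection-url>
    <foreign-connection-factory name="MQ_QCF">
      <local-jndi-name>mqseries.QCF</local-jndi-name>
      <remote-jndi-name>QCF</remote-jndi-name>
      <username>weblogic_jms</username>
    </foreign-connection-factory>
    <foreign-destination name="MQ_QUEUE1">
      <local-jndi-name>mqseries.QUEUE1</local-jndi-name>
      <remote-jndi-name>QUEUE_1</remote-jndi-name>
    </foreign-destination>
    <foreign-destination name="MQ_QUEUE2">
      <local-jndi-name>mqseries.QUEUE2</local-jndi-name>
      <remote-jndi-name>QUEUE_2</remote-jndi-name>
    </foreign-destination>
  </foreign-server>
</weblogic-jms>
```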

Configuring Distributed Destination Resources

A distributed destination resource in a JMS module represents a single set of destinations (queues or topics) that are accessible as a single, logical destination to a client. For example, a distributed topic has its own JNDI name. The members of the set are typically distributed across multiple servers within a cluster, with each member belonging to a separate JMS server.

Applications that use a distributed destination are more highly available than applications that use standalone destinations because WebLogic JMS provides load balancing and failover for the members of a distributed destination in a cluster.

These sections provide information on how to create, monitor, and load balance distributed destinations:

Uniform Distributed Destinations vs. Weighted Distributed Destinations

Note:

Weighted Distributed Destinations were deprecated in WebLogic Server 10.3.4.0. Oracle recommends using Uniform Distributed Destinations. See Weighted Distributed Destinations in What's New in Oracle WebLogic Server.

WebLogic Server 9.x and later offers two types of distributed destination: uniform and weighted. In releases prior to WebLogic Server 9.x, WebLogic Administrators often needed to manually configure physical destinations to function as members of a distributed destination. This method provided the flexibility to create members that were intended to carry extra message load or have extra capacity; however, such differences often led to administrative and application problems because such a weighted distributed destination was not deployed consistently across a cluster. This type of distributed destination is officially referred to as a weighted distributed destination (or WDD).

A uniform distributed destination (UDD) greatly simplifies the management and development of distributed destination applications. Using uniform distributed destinations, you no longer need to create or designate destination members, but you can instead rely on WebLogic Server to uniformly create the necessary members on the JMS servers to which a JMS module is targeted. This feature ensures the consistent configuration of all distributed destination parameters, particularly with regard to weighting, security, persistence, paging, and quotas.

The weighted distributed destination feature is still available for users who prefer to manually fine-tune distributed destination members. However, Oracle strongly recommends configuring uniform distributed destinations to avoid possible administrative and application problems due to a weighted distributed destination not being deployed consistently across a cluster.

For more information about using a distributed destination with your applications, see Using Distributed Destinations in Developing JMS Applications for Oracle WebLogic Server.

Creating Uniform Distributed Destinations

The WebLogic Server Administration Console enables you to configure, modify, target, and delete UDD resources in a JMS system module.

Note:

It is recommended that you create a single cluster targeted JMS Server and an associated persistent store to host the UDD resource, with optional HA configuration settings. This makes the UDD configuration simple, scalable, and highly available. See Simplified JMS Cluster and High Availability Configuration. A Replicated Distributed Topic is not supported by a cluster targeted JMS Server (use a Partitioned Distributed Topic instead).

For a road map of the uniform distributed destination tasks, see the following topics in the Oracle WebLogic Server Administration Console Online Help:

Some uniform distributed destination options can be dynamically configured. When options are modified at run time, only incoming messages are affected; stored messages are not affected. For more information about the default values for all uniform distributed destination options, see the following entries in MBean Reference for Oracle WebLogic Server:

The following sections provide additional uniform distributed destination information:

Targeting Uniform Distributed Queues and Topics

Unlike standalone queue and topic resources in a module, which can target only a single JMS server and run only on that one instance, a UDD can be targeted to multiple JMS server instances within the same cluster.

There are multiple ways to target a UDD but Oracle strongly recommends only one of them: configure UDDs to target a system module subdeployment that in turn directly references one or more JMS Servers. All other targeting options are strongly discouraged; for example, Oracle recommends against targeting a destination using ‘default targeting’ or targeting a subdeployment that in turn references a cluster or server name. Failure to follow this best practice can result in unintentional message loss.

For example, consider a system module named jmssysmod-jms.xml that is targeted to a cluster with three WebLogic Server instances: wlserver1, wlserver2, and wlserver3, where each server is in turn targeted by a configured JMS server: jmsserver1, jmsserver2, and jmsserver3. If you want to set up a uniform distributed queue in the same cluster, you can group the UDQ in a subdeployment named jmsservergroup to ensure that it is always linked to the exact desired JMS server instances. You can optionally use the same subdeployment for a connection factory. Here is how the jmsservergroup subdeployment resources would look in jmssysmod-jms.xml:

<weblogic-jms xmlns="http://xmlns.oracle.com/weblogic/weblogic-jms">
  <connection-factory name="MyCF">
    <sub-deployment-name>jmsservergroup</sub-deployment-name>
    <jndi-name>jms/MyCF</jndi-name>
  </connection-factory>
  <uniform-distributed-queue name="MyUDQ">
    <sub-deployment-name>jmsservergroup</sub-deployment-name>
    <jndi-name>jms/MyUDQ</jndi-name>
  </uniform-distributed-queue>
</weblogic-jms>

And here is how the corresponding subdeployment would be configured in the system module's stanza in the domain's config.xml file:

  <jms-system-resource>
    <name>jmssysmod-jms</name>
    <target>cluster1</target>
    <sub-deployment>
      <name>jmsservergroup</name>
      <target>jmsserver1,jmsserver2,jmsserver3</target>
    </sub-deployment> 
    <descriptor-file-name>jms/jmssysmod-jms.xml</descriptor-file-name>
  </jms-system-resource>

If you are using simplified JMS configuration that leverages a cluster-targeted JMS server named MyClusteredJMSServer instead of individually configured and targeted JMS servers jmsserver1, jmsserver2, and jmsserver3, then the above subdeployment's target simplifies to:

<target>MyClusteredJMSServer</target>

Instead of:

<target>jmsserver1,jmsserver2,jmsserver3</target>

Note:

  • Remember, Oracle strongly recommends that a destination should always be configured to target subdeployments that in turn reference the exact desired JMS Server(s) for the destination. Oracle strongly advises against other destination targeting approaches, including default targeting. (Default targeting a connection factory is fine.)

  • Changing the targets of a UDD can lead to the removal of a member destination and the consequent unintentional loss of messages.

  • When creating a new UDD using the WebLogic console, subdeployments are accessed via its ‘advanced targeting’ option. You can also edit subdeployments or create new ones via the System Module subdeployments tab.

Pausing and Resuming Message Operations on UDD Members

You can pause and resume message production, insertion, and/or consumption operations on a uniform distributed destination, either programmatically (using JMX and the runtime MBean API) or administratively (using the WebLogic Server Administration Console). In this way, you can control the JMS subsystem behavior in the event of an external resource failure that would otherwise cause the JMS subsystem to overload the system by continuously accepting and delivering (and redelivering) messages.

For more information on the "pause and resume" feature, see Controlling Message Operations on Destinations.

Monitoring UDD Members

Runtime statistics for uniform distributed destination members can be monitored via the WebLogic Server Administration Console, as described in Monitoring JMS Statistics.

Configuring Partitioned Distributed Topics

Note:

Partitioned Distributed Topics are the only type of distributed topic that is supported when using cluster-targeted JMS servers, WebLogic multi-tenancy RG or RGT scoped configuration, or dynamic clusters. Configuration errors will be generated on an attempt to set up a Replicated Distributed Topic in these cases. If you need to replace a Replicated Distributed Topic with a Partitioned Distributed Topic, see Replacing a Replicated Distributed Topic in Developing JMS Applications for Oracle WebLogic Server.

The uniform distributed topic message Forwarding Policy specifies whether a sent message is forwarded to all members.

The valid values are:

  • Replicated: The default. All physical topic members receive each sent message. If a message arrives at one of the physical topic members, a copy of this message is forwarded to the other members of that uniform distributed topic. A subscription on any one particular member will get a copy of any message sent to the uniform distributed topic logical name or to any particular uniform distributed topic member.

  • Partitioned: The physical member receiving the message is the only member of the uniform distributed topic that is aware of the message. When a message is published to the logical name of a Partitioned uniform distributed topic, it will only arrive on one particular physical topic member. Once a message arrives on a physical topic member, the message is not forwarded to the rest of the members of the uniform distributed destination, and subscribers on other physical topic members do not get a copy of that message.

Most new applications will use the Partitioned forwarding policy in combination with a logical subscription topology on a uniform distributed topic that consists of:

  • A same named physical subscription created directly on each physical member.

  • A Client ID Policy of Unrestricted.

  • A Subscription Sharing Policy of Sharable.
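For illustration, the Partitioned forwarding policy might be declared in a uniform distributed topic's system module descriptor entry as follows (MyPartitionedUDT and the subdeployment name are placeholders; the forwarding-policy element is assumed to follow the weblogic-jms descriptor schema):

```xml
<uniform-distributed-topic name="MyPartitionedUDT">
  <sub-deployment-name>jmsservergroup</sub-deployment-name>
  <jndi-name>jms/MyPartitionedUDT</jndi-name>
  <forwarding-policy>Partitioned</forwarding-policy>
</uniform-distributed-topic>
```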

For more information on how to create and use the partitioned distributed topic, see:

Load Balancing Partitioned Distributed Topics

Partitioned topic publishers have the option of load balancing their messages across multiple members by tuning the connection factory Affinity and Load Balance attributes. Unit of Order messages are routed to the correct member based on the UOO routing policy and the subscriber status. See Configure connection factory load balancing parameters in the Oracle WebLogic Server Administration Console Online Help.

Creating Weighted Distributed Destinations

Note:

Weighted Distributed Destinations (WDDs) are deprecated in WebLogic Server 10.3.4.0. Oracle strongly recommends using Uniform Distributed Destinations.

The WebLogic Server Administration Console lets you configure, modify, target, and delete WDD resources in JMS system modules. When configuring a distributed topic or distributed queue, clearing the "Allocate Members Uniformly" check box lets you manually select existing queues and topics to add to the distributed destination, and to fine-tune the weighting of resulting distributed destination members.

For a road map of the weighted distributed destination tasks, see the following topics in the Oracle WebLogic Server Administration Console Online Help:

Some weighted distributed destination options can be dynamically configured. When options are modified at run time, only incoming messages are affected; stored messages are not affected. For more information about the default values for all weighted distributed destination options, see the following entries in MBean Reference for Oracle WebLogic Server:

Unlike UDDs, WDD members cannot be monitored with the WebLogic Server Administration Console or through runtime MBeans. Also, WDD members cannot be uniformly targeted to JMS server or WebLogic Server instances in a domain. Instead, new WDD members must be manually configured on such instances, and then manually added to the WDD.

Load Balancing Messages Across a Distributed Destination

By using distributed destinations, JMS can spread or balance the messaging load across multiple destinations, which can result in better use of resources and improved response times. The JMS load-balancing algorithm determines the physical destinations that messages are sent to, as well as the physical destinations that consumers are assigned to.

Load-Balancing Options

WebLogic JMS supports two different algorithms for balancing the message load across multiple physical destinations within a given distributed destination set. You select one of these load balancing options when configuring a distributed topic or queue on the WebLogic Server Administration Console.

Round-Robin Distribution

In the round-robin algorithm, WebLogic JMS maintains an ordering of the physical destinations within the distributed destination. The messaging load is distributed across the physical destinations one at a time, in the order that they are defined in the WebLogic Server configuration (config.xml) file. Each WebLogic Server maintains an identical ordering, but may be at a different point within the ordering. Multiple threads of execution within a single server that use the same distributed destination affect one another with respect to which physical destination is assigned each time a message is produced. Round-robin is the default algorithm and does not need to be configured.

For weighted distributed destinations only, if weights are assigned to any of the physical destinations in the set for a given distributed destination, then those physical destinations appear multiple times in the order.
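As a plain-Java sketch of the round-robin selection described above (an illustration of the algorithm only, not WebLogic internals; member names are placeholders):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch only: round-robin selection over an ordered list of
// distributed destination members. Each server would keep its own position
// counter, so different servers can be at different points in the ordering.
public class RoundRobinSketch {
    private final List<String> members;            // order as defined in config.xml
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinSketch(List<String> members) {
        this.members = members;
    }

    // Each call advances to the next member in the fixed order, wrapping around.
    public String nextMember() {
        int i = Math.floorMod(next.getAndIncrement(), members.size());
        return members.get(i);
    }

    public static void main(String[] args) {
        RoundRobinSketch rr =
                new RoundRobinSketch(List.of("memberA", "memberB", "memberC"));
        for (int i = 0; i < 4; i++) {
            System.out.println(rr.nextMember());
        }
        // prints memberA, memberB, memberC, memberA
    }
}
```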

Random Distribution

The random distribution algorithm uses the weight assigned to the physical destinations to compute a weighted distribution for the set of physical destinations. The messaging load is distributed across the physical destinations by pseudo-randomly accessing the distribution. In the short run, the load is not directly proportional to the weights. In the long run, the observed distribution approaches the configured weighted distribution. A pure random distribution can be achieved by setting all the weights to the same value, which is typically 1.

Adding or removing a member (either administratively or as a result of a WebLogic Server shutdown/restart event) requires a recomputation of the distribution. However, such events should be infrequent, and the computation is generally simple, running in O(n) time.
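The weighted random selection can be sketched as follows (an illustration only, not WebLogic internals; queue names, weights, and the seed are placeholders). Expanding each member by its weight and sampling uniformly reproduces the behavior described above: short-run proportions vary, long-run proportions approach the weights, and the expanded distribution is rebuilt whenever membership changes:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Illustrative sketch only: weighted pseudo-random selection of a
// distributed destination member. Equal weights yield a pure random pick.
public class WeightedRandomSketch {
    private final List<String> distribution = new ArrayList<>();
    private final Random random;

    // The distribution is recomputed whenever the member set changes.
    public WeightedRandomSketch(Map<String, Integer> weights, long seed) {
        weights.forEach((member, weight) -> {
            for (int i = 0; i < weight; i++) {
                distribution.add(member);
            }
        });
        this.random = new Random(seed);
    }

    public String pick() {
        return distribution.get(random.nextInt(distribution.size()));
    }

    public static void main(String[] args) {
        WeightedRandomSketch w =
                new WeightedRandomSketch(Map.of("q1", 1, "q2", 3), 7L);
        int q2 = 0;
        for (int i = 0; i < 4000; i++) {
            if (w.pick().equals("q2")) q2++;
        }
        // In the long run, q2's share approaches 3/4.
        System.out.println("q2 share: " + (q2 / 4000.0));
    }
}
```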

Consumer Load Balancing

When an application creates a consumer, the application must provide a destination. If that destination represents a distributed destination, then WebLogic JMS must find a physical destination from which the consumer will receive messages. The choice of which destination member to use is made by using one of the load balancing algorithms described in Load-Balancing Options. The choice is made only once: when the consumer is created. From that point on, the consumer gets messages from that member only.

Producer Load Balancing

When a producer sends a message, WebLogic JMS looks at the destination to which the message is being sent. If the destination is a distributed destination, then WebLogic JMS makes a decision as to where the message will be sent. That is, the producer sends to one of the destination members according to one of the load-balancing algorithms described in Load-Balancing Options.

The producer makes such a decision each time it sends a message. However, there is no compromise of ordering guarantees between a consumer and producer, because consumers are load balanced once, and are then pinned to a single destination member.

Note:

If a producer attempts to send a persistent message to a distributed destination, every effort is made to first forward the message to distributed members that utilize a persistent store. However, if none of the distributed members utilize a persistent store, then the message will still be sent to one of the members according to the selected load-balancing algorithm.

Load-Balancing Heuristics

In addition to the algorithms described in Load-Balancing Options, WebLogic JMS uses the following heuristics when choosing an instance of a destination.

Transaction Affinity

When producing multiple messages within a transacted session, an effort is made to send all messages produced to the same WebLogic Server. Specifically, if a session sends multiple messages to a single distributed destination, then all of the messages are routed to the same physical destination. If a session sends multiple messages to multiple different distributed destinations, then an effort is made to choose a set of physical destinations served by the same WebLogic Server.

Server Affinity

The Server Affinity Enabled parameter on connection factories defines whether a WebLogic Server instance that is load balancing consumers or producers across multiple member destinations in a distributed destination set first attempts to load balance across any destination members that are also running on the same WebLogic Server instance.

Note:

The Server Affinity Enabled attribute does not affect queue browsers. Therefore, a queue browser created on a distributed queue can be pinned to a remote distributed queue member even when Server Affinity is enabled.

To disable server affinity on a connection factory:

  1. Follow the directions for navigating to the JMS Connection Factory > Configuration > General page in the Configure connection factory load balancing parameters topic in the Oracle WebLogic Server Administration Console Online Help.

  2. Define the Server Affinity Enabled field as follows:

    • If the Server Affinity Enabled check box is selected (True), then a WebLogic Server that is load balancing consumers or producers across multiple physical destinations in a distributed destination set will first attempt to load balance across any other physical destinations that are also running on the same WebLogic Server.

    • If the Server Affinity Enabled check box is not selected (False), then a WebLogic Server will load balance consumers or producers across physical destinations in a distributed destination set and disregard any other physical destinations also running on the same WebLogic Server.

  3. Click Save.

For more information about how the Server Affinity Enabled setting affects the load balancing among the members of a distributed destination, see Distributed Destination Load Balancing When Server Affinity Is Enabled.

Queues with Zero Consumers

When load balancing consumers across multiple remote physical queues, if one or more of the queues have zero consumers, then those queues alone are considered for balancing the load. Once all the physical queues in the set have at least one consumer, the standard algorithms apply.

In addition, when producers are sending messages, queues with zero consumers are not considered for message production, unless all instances of the given queue have zero consumers.
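The zero-consumer heuristic for placing a new consumer can be sketched as follows (an illustration only, not WebLogic internals; queue names are placeholders):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative sketch only: members with no consumers are preferred until
// every member has at least one, after which all members are candidates
// again and the standard load-balancing algorithms apply.
public class ZeroConsumerSketch {
    public static List<String> candidates(Map<String, Integer> consumerCounts) {
        List<String> empty = consumerCounts.entrySet().stream()
                .filter(e -> e.getValue() == 0)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
        return empty.isEmpty() ? List.copyOf(consumerCounts.keySet()) : empty;
    }

    public static void main(String[] args) {
        // Only q1 has zero consumers, so only q1 is a candidate.
        System.out.println(candidates(Map.of("q1", 0, "q2", 2)));
    }
}
```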

Paused Distributed Destination Members

When distributed destination members are paused for message production or insertion, they are not considered for message production. Similarly, members that are paused for consumption are also not considered for message production.

For more information about pausing message operations on destinations, see Controlling Message Operations on Destinations.

Per-JVM Load Balancing

You can choose Per-JVM instead of Per-Member load balancing behavior when sending non-Unit-of-Order messages to a distributed destination or to an Exactly-Once QoS SAF imported destination.

  • Per-Member (default): The load balancing algorithm considers all active members of the distributed destination as candidates when taking into account affinity or other heuristics. This option helps provide an even distribution of messages among all members, but after members fail over or migrate, it can lead to some JVMs getting more messages than others when those JVMs host more members than the others.

  • Per-JVM: The load balancing algorithm considers only one member of the distributed destination on each WebLogic Server JVM regardless of the number of members hosted by each JVM. This option helps provide an even distribution of messages among all WebLogic server JVMs in a cluster, and helps direct messages away from additional failed-over members on a JVM. It is useful for evenly distributing messages among servers in a cluster that have shrunk due to decreased loads, but that also retains failed-over members in order to recover the older unprocessed messages of failed-over members.

Per-JVM heuristics are:

  1. Choose among members of a distributed destination hosted on a cluster-targeted JMS Server or SAF Agent that are running on their preferred server (for example, members that have not failed over or migrated).
  2. Choose the lexicographically least member name on the same JVM when the distributed destination members are not hosted on a cluster-targeted JMS Server or SAF Agent.
  3. A member in (1) takes precedence over a member in (2) when both are present. If two members satisfy (1), then the lexicographically least member name is chosen.
  4. The same member candidate on each JVM is chosen for new producers or new load balanced messages as long as the system is stable. Candidates can change after a member migration, or member restart, or a member shutdown.
  5. If each WebLogic Server JVM hosts only one member of a distributed destination, Per-JVM behaves the same as Per-Member.
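Heuristics 1 through 3 can be sketched in plain Java (an illustration only, not WebLogic internals; member names and the Member type are placeholders):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Illustrative sketch only: choosing the single candidate member on one JVM
// under the Per-JVM policy. A member running on its preferred server wins;
// ties, and the case where no member is on its preferred server, fall back
// to the lexicographically least member name.
public class PerJvmSketch {
    public record Member(String name, boolean onPreferredServer) {}

    public static String candidate(List<Member> membersOnJvm) {
        Optional<String> preferred = membersOnJvm.stream()
                .filter(Member::onPreferredServer)
                .map(Member::name)
                .min(Comparator.naturalOrder());
        return preferred.orElseGet(() -> membersOnJvm.stream()
                .map(Member::name)
                .min(Comparator.naturalOrder())
                .orElseThrow());
    }

    public static void main(String[] args) {
        List<Member> onJvm = List.of(
                new Member("udq-2", false),   // failed-over member
                new Member("udq-1", false));  // failed-over member
        System.out.println(candidate(onJvm)); // prints udq-1
    }
}
```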

Note:

The Per-JVM or Per-Member heuristic does not apply to message consumption, sending messages to a standalone or singleton destination, or sending UOO messages to any type of destination.

To control JMS Message Producer Per-JVM/Per-Member load balancing for non-Unit-of-Order messages, configure the ProducerLoadBalancingPolicy on a custom connection factory or enable the behavior using a command line system property. For information on both the configurable attribute and the system properties, see LoadBalancingParamsBean.ProducerLoadBalancingPolicy.

To control Per-JVM/Per-Member load balancing for Exactly-Once non-Unit-of-Order messages forwarded by a SAF Agent or by Client SAF, configure the ExactlyOnceLoadBalancingPolicy attribute on a SAFImportedDestinationBean or enable the behavior using a command line system property. For information on both the configurable attribute and the system properties, see SAFImportedDestinationBean.ExactlyOnceLoadBalancingPolicy.

Defeating Load Balancing

Applications can defeat load balancing by directly accessing the individual physical destinations. That is, even if a physical destination has no JNDI name, it can still be referenced using the createQueue() or createTopic() methods.

For instructions on how to directly access uniform and weighted distributed destination members, see Accessing Distributed Destination Members in Developing JMS Applications for Oracle WebLogic Server.

Connection Factories

Applications that use distributed destinations to distribute or balance their producers and consumers across multiple physical destinations, but do not want to make a load balancing decision each time a message is produced, can use a connection factory with the Load Balancing Enabled parameter disabled. To ensure a fair distribution of the messaging load among the members of a distributed destination, the initial physical destination (queue or topic) used by producers is always chosen at random from among the distributed destination members.

To disable load balancing on a connection factory:

  1. Follow the directions for navigating to the JMS Connection Factory > Configuration > General page in the Configure connection factory load balancing parameters topic in the Oracle WebLogic Server Administration Console Online Help.

  2. Define the setting of the Load Balancing Enabled field using the following guidelines:

    • Load Balancing Enabled = True

      For QueueSender.send() methods, non-anonymous producers are load balanced on every invocation across the distributed queue members.

      For TopicPublisher.publish() methods, non-anonymous producers are always pinned to the same physical topic for every call, irrespective of the Load Balancing Enabled setting.

    • Load Balancing Enabled = False

      Producers always produce to the same physical destination until it fails, at which point a new physical destination is chosen.

  3. Click Save.

    Note:

    Depending on your implementation, the setting of the Server Affinity Enabled attribute can affect load-balancing preferences for distributed destinations. See Distributed Destination Load Balancing When Server Affinity Is Enabled.

Anonymous producers (producers that do not designate a destination when created) are load-balanced each time they switch destinations. If they continue to use the same destination, then the rules for non-anonymous producers apply (as stated previously).
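For illustration, a connection factory with load balancing disabled and server affinity enabled might be declared in a system module descriptor as follows (MyCF is a placeholder name; the load-balancing-params elements are assumed to follow the weblogic-jms descriptor schema):

```xml
<connection-factory name="MyCF">
  <jndi-name>jms/MyCF</jndi-name>
  <load-balancing-params>
    <load-balancing-enabled>false</load-balancing-enabled>
    <server-affinity-enabled>true</server-affinity-enabled>
  </load-balancing-params>
</connection-factory>
```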

Distributed Destination Load Balancing When Server Affinity Is Enabled

Table 4-2 explains how the setting of a connection factory's Server Affinity Enabled parameter affects the load-balancing preferences for distributed destination members. The order of preference depends on the type of operation and whether or not durable subscriptions or persistent messages are involved.

The Server Affinity Enabled parameter for distributed destinations is different from the server affinity provided by the Default Load Algorithm attribute in the ClusterMBean, which is also used by the JMS connection factory to create initial context affinity for client connections.

See the Load Balancing for EJBs and RMI Objects and Initial Context Affinity and Server Affinity for Client Connections sections in Administering Clusters for Oracle WebLogic Server.

Table 4-2 Server Affinity Load Balancing Preferences

Each entry in Table 4-2 lists the operation, the Server Affinity Enabled setting, and the resulting order of load-balancing preference among distributed destination members.
  • createReceiver() for queues

  • createSubscriber() for topics

True

  1. local member without a consumer

  2. local member

  3. remote member without a consumer

  4. remote member

createReceiver() for queues

False

  1. member without a consumer

  2. member

createSubscriber() for topics

(Note: nondurable subscribers)

True or False

  1. local member without a consumer

  2. local member

  • createSender() for queues

  • createPublisher() for topics

True or False

There is no separate machinery for load balancing a created JMS producer. JMS producers are created on the server on which your JMS connection is load balanced or pinned.

For more information about load balancing JMS connections created using a connection factory, refer to the Load Balancing for EJBs and RMI Objects and Initial Context Affinity and Server Affinity for Client Connections sections in Administering Clusters for Oracle WebLogic Server.

For persistent messages using QueueSender.send()

True

  1. local member with a consumer and a store

  2. remote member with a consumer and a store

  3. local member with a store

  4. remote member with a store

  5. local member with a consumer

  6. remote member with a consumer

  7. local member

  8. remote member

For persistent messages using QueueSender.send()

False

  1. member with a consumer and a store

  2. member with a store

  3. member with a consumer

  4. member

For nonpersistent messages using QueueSender.send()

True

  1. local member with a consumer

  2. remote member with a consumer

  3. local member

  4. remote member

For nonpersistent messages:

  • QueueSender.send()

  • TopicPublisher.publish()

False

  1. member with a consumer

  2. member

createConnectionConsumer() for session pool queues and topics

True or False

local member only

Note: Session pools are now used rarely, as they are not a required part of the Java EE specification, do not support JTA user transactions, and are largely superseded by message-driven beans (MDBs), which are simpler, easier to manage, and more capable.

Distributed Destination Migration

For clustered JMS implementations that take advantage of the service migration feature, a JMS server and its distributed destination members can be migrated to another WebLogic Server instance within the cluster. Service migrations can take place due to scheduled system maintenance, as well as in response to a server failure within the cluster.

However, the target WebLogic Server may already be hosting a JMS server with all of its physical destinations. This can lead to situations where the same WebLogic Server instance hosts two physical destinations for a single distributed destination. This is permissible in the short term, because a WebLogic Server instance can host multiple physical destinations for that distributed destination. However, load balancing in this situation is less effective.

In such a situation, each JMS server on a target WebLogic Server instance operates independently. This is necessary to avoid merging the two destination instances, or disabling one of them, either of which could make some messages unavailable for a prolonged period of time. The long-term intent, however, is to eventually re-migrate the migrated JMS server to yet another WebLogic Server instance in the cluster.

For more information about configuring JMS migratable targets, see Migration of JMS-related Services.

Distributed Destination Failover

If the server instance that is hosting the JMS connections for the JMS producers and JMS consumers should fail, then all the producers and consumers using these connections are closed and are not re-created on another server instance in the cluster. Furthermore, if a server instance that is hosting a JMS destination should fail, then all the JMS consumers for that destination are closed and not re-created on another server instance in the cluster.

If the distributed queue member on which a queue producer is created should fail, yet the WebLogic Server instance where the producer's JMS connection resides is still running, then the producer remains active and WebLogic JMS will fail it over to another distributed queue member, irrespective of whether the Load Balancing option is enabled.

For more information about procedures for recovering from a WebLogic Server failure, see Recovering From a Server Failure in Developing JMS Applications for Oracle WebLogic Server.

Configure an Unrestricted ClientID

The Client ID Policy specifies whether more than one JMS connection can use the same client ID in a cluster.

Valid values for this policy are:

  • RESTRICTED: This is the default. Only one connection that uses this policy can exist in a cluster at any given time for a particular client ID (if a connection already exists with a given client ID, then attempts to create new connections using this policy with the same client ID fail with an exception).

  • UNRESTRICTED: Connections created using this policy can specify any client ID, even when other restricted or unrestricted connections already use the same Client ID. When a durable subscription is created using an Unrestricted client ID, it can only be cleaned up using weblogic.jms.extensions.WLSession.unsubscribe(Topic topic, String name). See Managing Subscriptions in Developing JMS Applications for Oracle WebLogic Server.

Oracle recommends setting the client ID policy to Unrestricted for new applications (unless your application architecture requires exclusive client IDs), especially if sharing a subscription (durable or non-durable). Subscriptions created with different client ID policies are always treated as independent subscriptions. See ClientIdPolicy in the MBean Reference for Oracle WebLogic Server.

To set the Client ID Policy on the connection factory using the WebLogic Console, see Configure multiple connections using the same client Id in the Oracle WebLogic Server Administration Console Online Help. The connection factory setting can be overridden programmatically using the setClientIDPolicy method of the WLConnection interface in the Java API Reference for Oracle WebLogic Server.

Note:

Programmatically changing (overriding) the client ID policy settings on a JMS connection runtime object is valid only for that particular connection instance and for the life of that connection. Any changes made to the connection runtime object are not persisted/reflected by the corresponding JMS connection factory configuration defined in the underlying JMS module descriptor.

For more information on how to use the client ID policy, see:

Configure Shared Subscriptions

The Subscription Sharing Policy specifies whether subscribers can share subscriptions with other subscribers on the same connection.

Valid values for this policy are:

  • Exclusive: This is the default. Subscribers created using this connection factory cannot share subscriptions with any other subscribers.

  • Sharable: Subscribers created using this connection factory can share their subscriptions with other subscribers, regardless of whether those subscribers are created using the same connection factory or a different connection factory. Consumers can share a nondurable subscription only if they have the same client ID and client ID policy; consumers can share a durable subscription only if they have the same client ID, client ID policy, and Subscription Name.

WebLogic JMS applications can override the Subscription Sharing Policy specified on the connection factory configuration by casting a javax.jms.Connection instance to weblogic.jms.extensions.WLConnection and calling setSubscriptionSharingPolicy(String).

Most applications that use a Sharable Subscription Sharing Policy will use an Unrestricted client ID policy to ensure that multiple connections with the same client ID can exist.
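For illustration, such a connection factory might set both policies in its system module descriptor entry as follows (MySharedCF is a placeholder name; the client-params elements are assumed to follow the weblogic-jms descriptor schema):

```xml
<connection-factory name="MySharedCF">
  <jndi-name>jms/MySharedCF</jndi-name>
  <client-params>
    <client-id-policy>Unrestricted</client-id-policy>
    <subscription-sharing-policy>Sharable</subscription-sharing-policy>
  </client-params>
</connection-factory>
```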

Two durable subscriptions with the same client ID and Subscription Name are treated as two different independent subscriptions if they have a different client ID policy. Similarly, two Sharable nondurable subscriptions with the same client ID are treated as two different independent subscriptions if they have a different client ID policy.

For more information about how to use the Subscription Sharing Policy, see: