4 Configuring Advanced JMS System Resources

This chapter provides information on configuring advanced WebLogic JMS resources, such as a distributed destination in a clustered environment.

Configuring WebLogic JMS Clustering

A WebLogic Server cluster is a group of servers in a domain that work together to provide a more scalable, more reliable application platform than a single server. A cluster appears to its clients as a single server but is in fact a group of servers acting as one.

Note:

JMS clients depend on unique WebLogic Server names to successfully access a cluster—even when WebLogic Servers reside in different domains. Therefore, make sure that all WebLogic Servers that JMS clients contact have unique server names.

Advantages of JMS Clustering

The advantages of clustering for JMS include the following:

  • Load balancing of destinations across multiple servers in a cluster

    An administrator can establish load balancing of destinations across multiple servers in the cluster by configuring multiple JMS servers and targeting them to the defined WebLogic Servers. Each JMS server is deployed on exactly one WebLogic Server instance and handles requests for a set of destinations.

    Note:

    Load balancing is not dynamic. During the configuration phase, the system administrator defines load balancing by specifying targets for JMS servers.

  • High availability of destinations

    • Distributed destinations — The queue and topic members of a distributed destination are usually distributed across multiple servers within a cluster, with each member belonging to a separate JMS server. Applications that use distributed destinations are more highly available than applications that use simple destinations because WebLogic JMS provides load balancing and failover for member destinations of a distributed destination within a cluster. For more information on distributed destinations, see Configuring Distributed Destination Resources.

    • Store-and-Forward — JMS modules utilize the SAF service to enable local JMS message producers to reliably send messages to remote queues or topics. If the destination is not available at the moment the messages are sent, either because of network problems or system failures, then the messages are saved on a local server instance, and are forwarded to the remote destination once it becomes available. For more information, see "Understanding the Store-and-Forward Service" in Configuring and Managing Store-and-Forward for Oracle WebLogic Server.

    • For automatic failover, WebLogic Server supports migration at the server level—a complete server instance, and all of the services it hosts can be migrated to another machine, either automatically, or manually. For more information, see "Whole Server Migration" in Using Clusters for Oracle WebLogic Server.

  • Cluster-wide, transparent access to destinations from any server in a cluster

    An administrator can establish cluster-wide, transparent access to destinations from any server in the cluster by either using the default connection factories for each server instance in the cluster, or by configuring one or more connection factories and targeting them to one or more server instances in the cluster, or to the entire cluster. This way, each connection factory can be deployed on multiple WebLogic Server instances. Connection factories are described in more detail in Connection Factory Configuration.

  • Scalability

    • Load balancing of destinations across multiple servers in the cluster, as described previously.

    • Distribution of application load across multiple JMS servers through connection factories, thus reducing the load on any single JMS server and enabling session concentration by routing connections to specific servers.

    • Optional multicast support, reducing the number of messages required to be delivered by a JMS server. The JMS server forwards only a single copy of a message to each host group associated with a multicast IP address, regardless of the number of applications that have subscribed.

  • Migratability

    WebLogic Server supports migration at the server level—a complete server instance, and all of the services it hosts can be migrated to another machine, either automatically, or manually. For more information, see "Whole Server Migration" in Using Clusters for Oracle WebLogic Server.

    Also, as an "exactly-once" service, WebLogic JMS takes advantage of the service migration framework implemented in WebLogic Server for clustered environments. This allows WebLogic JMS to respond properly to migration requests and to bring a JMS server online and offline in an orderly fashion. This includes both scheduled manual migrations as well as automatic migrations in response to a WebLogic Server failure. For more information, see Migration of JMS-related Services.

  • Server affinity for JMS Clients

    When configured for the cluster, load balancing algorithms (round-robin-affinity, weight-based-affinity, or random-affinity) provide server affinity for JMS client connections. If a JMS application has a connection to a given server instance, JMS attempts to establish new JMS connections to the same server instance. For more information on server affinity, see "Load Balancing in a Cluster" in Using Clusters for Oracle WebLogic Server.

For more information about the features and benefits of using WebLogic clusters, see "Understanding WebLogic Server Clustering" in Using Clusters for Oracle WebLogic Server.

How JMS Clustering Works

An administrator can establish cluster-wide, transparent access to JMS destinations from any server in a cluster, either by using the default connection factories for each server instance in a cluster, or by configuring one or more connection factories and targeting them to one or more server instances in a cluster, or to an entire cluster. This way, each connection factory can be deployed on multiple WebLogic Servers. For information on configuring and deploying connection factories, see Connection Factory Configuration Parameters.

The application uses the Java Naming and Directory Interface (JNDI) to look up a connection factory and create a connection to establish communication with a JMS server. Each JMS server handles requests for a set of destinations. If requests for destinations are sent to a WebLogic Server instance that is hosting a connection factory, but which is not hosting a JMS server or destinations, the requests are forwarded by the connection factory to the appropriate WebLogic Server instance that is hosting the JMS server and destinations.

The administrator can also configure multiple JMS servers on the various servers in the cluster—as long as the JMS servers are uniquely named—and can then target JMS queue or topic resources to the various JMS servers. The application uses the Java Naming and Directory Interface (JNDI) to look up a connection factory and create a connection to establish communication with a JMS server. Each JMS server handles requests for a set of destinations. Requests for destinations not handled by a JMS server are forwarded to the appropriate WebLogic Server instance. For information on configuring and deploying JMS servers, see JMS Server Configuration.
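For example, a JMS client that connects to a cluster might look like the following sketch. The provider URL, JNDI names, and message text are placeholders for values from your own configuration.

import java.util.Hashtable;
import javax.jms.*;
import javax.naming.Context;
import javax.naming.InitialContext;

public class ClusterClient {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        // Cluster address; the host:port values are placeholders.
        env.put(Context.PROVIDER_URL, "t3://host1:7001,host2:7001");
        Context ctx = new InitialContext(env);

        // Look up a connection factory and destination by their JNDI names
        // (jms/MyCF and jms/MyQueue are hypothetical names).
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyCF");
        Destination dest = (Destination) ctx.lookup("jms/MyQueue");

        Connection con = cf.createConnection();
        try {
            Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(dest);
            producer.send(session.createTextMessage("hello"));
        } finally {
            con.close(); // closing the connection closes its sessions and producers
        }
    }
}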

JMS Clustering Naming Requirements

There are naming requirements when configuring JMS objects and resources, such as JMS servers, JMS modules, and JMS resources, to work in a clustered environment in a single WebLogic domain or in a multi-domain environment. For more information, see JMS Configuration Naming Requirements.

Distributed Destination Within a Cluster

A distributed destination resource is a single set of destinations (queues or topics) that are accessible as a single, logical destination to a client (for example, a distributed topic has its own JNDI name). The members of the unit are usually distributed across multiple servers within a cluster, with each member belonging to a separate JMS server. Applications that use distributed destinations are more highly available than applications that use simple destinations because WebLogic Server provides load balancing and failover for member destinations of a distributed destination within a cluster. For more information, see Configuring Distributed Destination Resources.

JMS Services As a Migratable Service Within a Cluster

In addition to being part of a whole server migration, where all services hosted by a server can be migrated to another machine, JMS services are also part of the singleton service migration framework. This allows an administrator, for example, to migrate a JMS server and all of its destinations to another WebLogic Server within a cluster in response to a server failure or for scheduled maintenance. This includes both scheduled migrations as well as automatic migrations. For more information on JMS service migration, see Migration of JMS-related Services.

Configuration Guidelines for JMS Clustering

In order to use WebLogic JMS in a clustered environment, follow these guidelines:

  1. Configure your clustered environment as described in "Setting Up WebLogic Clusters" in Using Clusters for Oracle WebLogic Server.

  2. Identify server targets for any user-defined JMS connection factories using the Administration Console. For connection factories, you can identify either a single-server target or a cluster target; these targets determine which server instances are associated with a connection factory to support clustering.

    For more information about these connection factory configuration attributes, see Connection Factory Configuration.

  3. Optionally, identify migratable server targets for JMS services using the Administration Console. For example, for JMS servers, you can identify either a single-server target or a migratable target, which is a set of server instances in a cluster that can host an "exactly-once" service like JMS in case of a server failure in the cluster.

    For more information on migratable JMS server targets, see Migration of JMS-related Services. For more information about JMS server configuration attributes, see JMS Server Configuration.

    Note:

    You cannot deploy the same destination on more than one JMS server. In addition, you cannot deploy a JMS server on more than one WebLogic Server.

  4. Optionally, you can configure the physical JMS destinations in a cluster as part of a virtual distributed destination set, as discussed in Distributed Destination Within a Cluster.

What About Failover?

If a server or network failure occurs, JMS message producer and consumer objects attempt to transparently fail over to another server instance, if one is available. In WebLogic Server release 9.1 or later, WebLogic JMS message producers automatically attempt to reconnect to an available server instance without any manual configuration or changes to existing client code. In WebLogic Server release 9.2 or later, you can use the Administration Console or WebLogic JMS APIs to configure WebLogic JMS message consumers to attempt to automatically reconnect to an available server instance. See "Automatic JMS Client Failover" in Developing Applications for Oracle WebLogic Server.

Note:

For WebLogic Server 9.x or earlier JMS client applications, refer to "Programming Considerations for WebLogic Server 9.x or Earlier Failures" in Programming JMS for Oracle WebLogic Server.
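For clients that cannot rely on automatic reconnection, or simply to illustrate the failover behavior described above, a client can register a standard javax.jms.ExceptionListener and redo its setup when the connection is lost. This is only a hedged sketch of the general pattern; the retry interval and the setup performed in connect() stand in for application-specific logic.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;

public class ReconnectingClient implements javax.jms.ExceptionListener {
    private final ConnectionFactory cf;   // looked up from JNDI elsewhere
    private volatile Connection connection;

    public ReconnectingClient(ConnectionFactory cf) throws JMSException {
        this.cf = cf;
        connect();
    }

    private void connect() throws JMSException {
        connection = cf.createConnection();
        connection.setExceptionListener(this); // called if the connection is lost
        connection.start();
        // ... re-create sessions, producers, and consumers here ...
    }

    @Override
    public void onException(JMSException e) {
        // The hosting server or the network failed; retry until a cluster member responds.
        while (true) {
            try {
                connect();
                return;
            } catch (JMSException retry) {
                try { Thread.sleep(5000); } catch (InterruptedException ie) { return; }
            }
        }
    }
}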

In addition, implementing the automatic service migration feature ensures that exactly-once services, like JMS, do not introduce a single point of failure for dependent applications in the cluster. See Migration of JMS-related Services. WebLogic Server also supports migration at the server level—a complete server instance, and all of the services it hosts, can be migrated to another machine, either automatically or manually. See "Whole Server Migration" in Using Clusters for Oracle WebLogic Server.

In a clustered environment, WebLogic Server also offers service continuity in the event of a single server failure by allowing you to configure distributed destinations, where the members of the unit are usually distributed across multiple servers within a cluster, with each member belonging to a separate JMS server. See Distributed Destination Within a Cluster.

Oracle also recommends implementing high-availability clustering software, which provides an integrated, out-of-the-box solution for WebLogic Server-based applications.

Migration of JMS-related Services

JMS-related services are singleton services, and, therefore, are not active on all server instances in a cluster. Instead, they are pinned to a single server in the cluster to preserve data consistency. To ensure that singleton JMS services do not introduce a single point of failure for dependent applications in the cluster, WebLogic Server can be configured to automatically migrate JMS services to any server instance in the migratable target list. Migratable JMS services can also be manually migrated, either in response to a host server failure or before performing scheduled server maintenance.

Migratable JMS-related services include:

  • JMS Server – a management container for the queues and topics in the JMS modules that are targeted to it. See JMS Server Configuration.

  • Store-and-Forward (SAF) Service – forwards messages between local sending and remote receiving endpoints, even when the remote endpoint is not available at the time the messages are sent. Only sending SAF agents configured for JMS SAF (sending capability only) are migratable. See "Understanding the Store-and-Forward Service" in Configuring and Managing Store-and-Forward for Oracle WebLogic Server.

  • Path Service – a persistent map that can be used to store the mapping of a group of messages in a JMS Message Unit-of-Order to a messaging resource in a cluster. One path service is configured per cluster. See Using the WebLogic Path Service.

  • Custom Persistent Store – a user-defined, disk-based file store or JDBC-accessible database for storing subsystem data, such as persistent JMS messages or store-and-forward messages. See "Using the WebLogic Persistent Store" in Configuring Server Environments for Oracle WebLogic Server.

You can configure JMS-related services for high availability by using migratable targets. A migratable target is a special target that can migrate from one server in a cluster to another. As such, a migratable target provides a way to group migratable services that should move together. When the migratable target is migrated, all services hosted by that target are migrated.

See "Understanding the Service Migration Framework" in Using Clusters for Oracle WebLogic Server.

Automatic Migration of JMS Services

An administrator can configure migratable targets so that hosted JMS services are automatically migrated from the current unhealthy hosting server to a healthy active server with the help of the Health Monitoring subsystem. For more information about configuring automatic migration of JMS-related services, see "Roadmap for Configuring Automatic Migration of JMS-Related Services" in Using Clusters for Oracle WebLogic Server.

Manual Migration of JMS Services

An administrator can manually migrate JMS-related services to a healthy server if the host server fails or before performing server maintenance. For more information about configuring manual migration of JMS-related services, see "Roadmap for Configuring Manual Migration of JMS-Related Services" in Using Clusters for Oracle WebLogic Server.

Persistent Store High Availability

As discussed in What About Failover?, a JMS service, including a custom persistent store, can be migrated as part of the "whole server" migration feature, or as part of a "service-level" migration for migratable JMS-related services. Migratable JMS-related services cannot use the default persistent file store, so you must configure a custom file store or JDBC store and target it to the same migratable target as the JMS server or SAF agent associated with the store. (As a best practice, a path service should use its own custom store and migratable target).

Migratable custom file stores can be configured on a shared disk that is available to the migratable target servers in the cluster or can be migrated to a backup server target by using pre/post-migration scripts. For more information on migrating persistent stores, see "Custom Store Availability for JMS Services" in Using Clusters for Oracle WebLogic Server.

Using the WebLogic Path Service

The WebLogic Server Path Service is a persistent map that can be used to store the mapping of a group of messages in a JMS Message Unit-of-Order to a messaging resource in a cluster. It provides a way to enforce ordering by pinning messages to a member of a cluster that is hosting servlets, distributed queue members, or Store-and-Forward agents. One path service is configured per cluster. For more information on the Message Unit-of-Order feature, see "Using Message Unit-of-Order" in Programming JMS for Oracle WebLogic Server.
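For background, a producer typically associates its messages with a Unit-of-Order either administratively or programmatically. The following is a hedged sketch of the programmatic approach; it assumes a setUnitOfOrder(String) method on the weblogic.jms.extensions.WLMessageProducer interface (check Programming JMS for Oracle WebLogic Server for the exact extension API), and the Unit-of-Order name is a placeholder.

import javax.jms.*;
import weblogic.jms.extensions.WLMessageProducer;

public class UnitOfOrderSender {
    // Sends all messages for one order under the same Unit-of-Order name,
    // so the path service routes them to a single distributed queue member.
    public static void send(Session session, Destination distributedQueue,
                            String uooName, String text) throws JMSException {
        MessageProducer producer = session.createProducer(distributedQueue);
        // Assumption: WLMessageProducer.setUnitOfOrder(String) associates the
        // producer's subsequent messages with the named Unit-of-Order.
        ((WLMessageProducer) producer).setUnitOfOrder(uooName);
        producer.send(session.createTextMessage(text));
    }
}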

To configure a path service in a cluster, see "Configure path services" in the Oracle WebLogic Server Administration Console Help.

Path Service High Availability

For high availability, a cluster's path service can be targeted to a migratable target for automatic or manual service migration. However, a migratable path service cannot use the default store, so a custom store must be configured and targeted to the same migratable target. As an additional best practice, the path service and its custom store should be the only users of that migratable target. See "Understanding the Service Migration Framework" in Using Clusters for Oracle WebLogic Server.

Implementing Message UOO With a Path Service

Consider the following when implementing Message Unit-of-Order in conjunction with Path Service-based routing:

  • Each path service mapping is stored in a persistent store. When configuring a path service, select a persistent store that takes advantage of a high-availability solution. See Persistent Store High Availability.

  • If one or more producers send messages using the same Unit-of-Order name, all messages they produce will share the same path entry and have the same member queue destination.

  • If the required route for a Unit-of-Order name is unreachable, the send operation throws a JMSOrderException. The exception is thrown because the JMS messaging system cannot meet the required quality of service: only one distributed destination member consumes messages for a particular Unit-of-Order.

  • A path entry is automatically deleted when the last producer and last message reference are deleted.

  • Depending on your system, using the Path Service may slow system throughput due to remote disk operations to create, read, and delete path entries.

  • A distributed queue and its individual members each represent a unique destination. For example:

    DXQ1 is a distributed queue with queue members Q1 and Q2. DXQ1 also has a Unit-of-Order name value of Fred mapped by the Path Service to the Q2 member.

    • If message M1 is sent to DXQ1, it uses the Path Service to define a route to Q2.

    • If message M1 is sent directly to Q2, no routing by the Path Service is performed. This is because the application selected Q2 directly and the system was not asked to pick a member from a distributed destination.

    • If you want the system to use the Path Service, send messages to the distributed destination. If not, send directly to the member.

    • You can have more than one destination with the same Unit-of-Order name in a distributed queue. For example:

      Queue Q3 also has a Unit-of-Order name value of Fred. If Q3 is added to DXQ1, there are now two destinations that have the same Unit-of-Order name in a distributed queue. Even though Q3 and DXQ1 share the same Unit-of-Order name value Fred, each has a unique route and destination, which allows the server to continue to provide the correct message ordering for each destination.

  • Empty queues before removing them from a distributed queue or adding them to a distributed queue. Although the Path Service will remove the path entry for the removed member, there is a short transition period during which a producer may receive a JMSOrderException if the queue has been removed but the path entry still exists.

Configuring Foreign Server Resources to Access Third-Party JMS Providers

WebLogic JMS enables you to reference third-party JMS providers within a local WebLogic Server JNDI tree. With Foreign Server resources in JMS modules, you can quickly map a foreign JMS provider so that its associated connection factories and destinations appear in the WebLogic JNDI tree as local JMS objects. Foreign Server resources can also be used to reference remote instances of WebLogic Server in another cluster or domain in the local WebLogic JNDI tree.

For more information on integrating remote and foreign JMS providers, see "Enhanced J2EE Support for Using WebLogic JMS With EJBs and Servlets" in Programming JMS for Oracle WebLogic Server.

These sections provide more information on how a Foreign Server works and a sample configuration for accessing a remote MQSeries JNDI provider.

How WebLogic JMS Accesses Foreign JMS Providers

When a foreign JMS server is deployed, it creates local connection factory and destination objects in WebLogic Server JNDI. Then when a foreign connection factory or destination object is looked up on the local server, that object performs the actual lookup on the remote JNDI directory, and the foreign object is returned from that directory.

This method makes it easier to configure multiple WebLogic Messaging Bridge destinations, since the foreign server moves the JNDI Initial Context Factory and Connection URL configuration details outside of your Messaging Bridge destination configurations. You need only provide the foreign Connection Factory and Destination JNDI name for each object.

For more information on configuring a Messaging Bridge, see Configuring and Managing the Messaging Bridge for Oracle WebLogic Server.

The ease-of-configuration concept also applies to configuring WebLogic Servlets, EJBs, and Message-Driven Beans (MDBs) with WebLogic JMS. For example, the weblogic-ejb-jar.xml file in the MDB can have a local JNDI name, and you can use the foreign JMS server to control where the MDB receives messages from. You can deploy the MDB in one environment to talk to one JMS destination and server, and then deploy the same weblogic-ejb-jar.xml file to a different server and have it talk to a different JMS destination, without having to unpack and edit the weblogic-ejb-jar.xml file.

Creating Foreign Server Resources

A Foreign Server resource in a JMS module represents a JNDI provider that is outside the WebLogic JMS server. It contains information that allows a local WebLogic Server instance to reach a remote JNDI provider, thereby allowing for a number of foreign connection factory and destination objects to be defined on one JNDI directory.

The WebLogic Server Administration Console enables you to configure, modify, target, and delete foreign server resources in a system module. For a road map of the foreign server tasks, see "Configure foreign servers" in the Oracle WebLogic Server Administration Console Help.

Note:

For information on configuring and deploying JMS application modules in an enterprise application, see Chapter 5, "Configuring JMS Application Modules for Deployment."

Some foreign server options are dynamically configurable. When options are modified at run time, only incoming messages are affected; stored messages are not affected. For more information about the default values for all foreign server options, see "ForeignServerBean" in the Oracle WebLogic Server MBean Reference.

After defining a foreign server, you can configure connection factory and destination objects. You can configure one or more connection factories and destinations (queues or topics) for each foreign server.

Creating Foreign Connection Factory Resources

A Foreign Connection Factory resource in a JMS module contains the JNDI name of the connection factory in the remote JNDI provider, the JNDI name that the connection factory is mapped to in the local WebLogic Server JNDI tree, and an optional user name and password.

The foreign connection factory creates non-replicated JNDI objects on each WebLogic Server instance that the parent foreign server is targeted to. (To create the JNDI object on every node in a cluster, target the foreign server to the cluster.)

Creating Foreign Destination Resources

A Foreign Destination resource in a JMS module represents either a queue or a topic. It contains the destination JNDI name that is looked up on the foreign JNDI provider and the JNDI name that the destination is mapped to on the local WebLogic Server. When the foreign destination is looked up on the local server, a lookup is performed on the remote JNDI directory, and the destination object is returned from that directory.

Sample Configuration for MQSeries JNDI

The following table provides a sample configuration for accessing a remote MQSeries JNDI provider.

Table 4-1 Sample MQSeries Configuration

Foreign Server

  • Name: MQJNDI
  • JNDI Initial Context Factory: com.sun.jndi.fscontext.RefFSContextFactory
  • JNDI Connection URL: file:/MQJNDI/
  • JNDI Properties: (If necessary, enter a comma-separated name=value list of properties.)

Foreign Connection Factory

  • Name: MQ_QCF
  • Local JNDI Name: mqseries.QCF
  • Remote JNDI Name: QCF
  • Username: weblogic_jms
  • Password: weblogic_jms

Foreign Destination 1

  • Name: MQ_QUEUE1
  • Local JNDI Name: mqseries.QUEUE1
  • Remote JNDI Name: QUEUE_1

Foreign Destination 2

  • Name: MQ_QUEUE2
  • Local JNDI Name: mqseries.QUEUE2
  • Remote JNDI Name: QUEUE_2
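Once the foreign server above is targeted and deployed, a client or server-side application can use the mapped local JNDI names exactly as it would use native WebLogic JMS objects. The following sketch uses the sample names from Table 4-1; the InitialContext is assumed to be created on, or configured to point at, the local WebLogic Server.

import javax.jms.*;
import javax.naming.InitialContext;

public class MQSeriesClient {
    public static void main(String[] args) throws Exception {
        // The local WebLogic JNDI tree proxies these lookups to the MQSeries
        // JNDI provider configured on the foreign server (see Table 4-1).
        InitialContext ctx = new InitialContext(); // server-side or pre-configured context
        QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup("mqseries.QCF");
        Queue queue = (Queue) ctx.lookup("mqseries.QUEUE1");

        QueueConnection con = qcf.createQueueConnection();
        try {
            QueueSession session = con.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            session.createSender(queue).send(session.createTextMessage("to MQSeries"));
        } finally {
            con.close();
        }
    }
}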


Configuring Distributed Destination Resources

A distributed destination resource in a JMS module represents a single set of destinations (queues or topics) that are accessible as a single, logical destination to a client (for example, a distributed topic has its own JNDI name). The members of the set are typically distributed across multiple servers within a cluster, with each member belonging to a separate JMS server. Applications that use a distributed destination are more highly available than applications that use standalone destinations because WebLogic JMS provides load balancing and failover for the members of a distributed destination in a cluster.

These sections provide information on how to create, monitor, and load balance distributed destinations:

Uniform Distributed Destinations vs. Weighted Distributed Destinations

Note:

Weighted Distributed Destinations are deprecated in WebLogic Server 10.3.4.0. Oracle recommends using Uniform Distributed Destinations.

WebLogic Server 9.x and later offers two types of distributed destination: uniform and weighted. In releases prior to WebLogic Server 9.x, WebLogic Administrators often needed to manually configure physical destinations to function as members of a distributed destination. This method provided the flexibility to create members that were intended to carry extra message load or have extra capacity; however, such differences often led to administrative and application problems because such a weighted distributed destination was not deployed consistently across a cluster. This type of distributed destination is officially referred to as a weighted distributed destination (or WDD).

A uniform distributed destination (UDD) greatly simplifies the management and development of distributed destination applications. Using uniform distributed destinations, you no longer need to create or designate destination members, but instead rely on WebLogic Server to uniformly create the necessary members on the JMS servers to which a JMS module is targeted. This feature ensures the consistent configuration of all distributed destination parameters, particularly with regard to weighting, security, persistence, paging, and quotas.

The weighted distributed destination feature is still available for users who prefer to manually fine-tune distributed destination members. However, Oracle strongly recommends configuring uniform distributed destinations to avoid the administrative and application problems that can arise when a weighted distributed destination is not deployed consistently across a cluster.

For more information about using a distributed destination with your applications, see "Using Distributed Destinations" in Programming JMS for Oracle WebLogic Server.

Creating Uniform Distributed Destinations

The WebLogic Server Administration Console enables you to configure, modify, target, and delete UDD resources in a JMS system module.

Note:

For information on configuring and deploying JMS application modules in an enterprise application, see Chapter 5, "Configuring JMS Application Modules for Deployment."

For a road map of the uniform distributed destination tasks, see the following topics in the Oracle WebLogic Server Administration Console Help:

Some uniform distributed destination options are dynamically configurable. When options are modified at run time, only incoming messages are affected; stored messages are not affected. For more information about the default values for all uniform distributed destination options, see the following entries in the Oracle WebLogic Server MBean Reference:

The following sections provide additional uniform distributed destination information:

Targeting Uniform Distributed Queues and Topics

Unlike standalone queue and topic resources in a module, which can only be targeted to a specific JMS server in a domain, UDDs can be targeted to one or more JMS servers, to one or more WebLogic Server instances, or to a cluster, since the purpose of a UDD is to distribute its members on every JMS server in the target. For example, targeting a UDD to a cluster ensures that a member is uniformly configured on every JMS server in the cluster.

Note:

Changing the targets of a UDD can lead to the removal of a member destination and the unintentional loss of messages.

You can also use subdeployment groups when configuring UDDs to link specific resources with the distributed members. For example, suppose a system module named jmssysmod-jms.xml is targeted to three WebLogic Server instances, wlserver1, wlserver2, and wlserver3, each with a configured JMS server, and you want to target a uniform distributed queue and a connection factory to each server instance. You can group the UDQ and connection factory in a subdeployment named servergroup to ensure that these resources are always linked to the same server instances.

Here's how the servergroup subdeployment resources would look in jmssysmod-jms.xml:

<weblogic-jms xmlns="http://xmlns.oracle.com/weblogic/weblogic-jms">
  <connection-factory name="connfactory">
    <sub-deployment-name>servergroup</sub-deployment-name>
    <jndi-name>jms.connectionfactory.CF</jndi-name>
  </connection-factory>
  <uniform-distributed-queue name="UniformDistributedQueue">
    <sub-deployment-name>servergroup</sub-deployment-name>
    <jndi-name>jms.queue.UDQ</jndi-name>
    <forward-delay>10</forward-delay>
  </uniform-distributed-queue>
</weblogic-jms>

And here's how the servergroup subdeployment targeting would look in the domain's configuration file:

  <jms-system-resource>
   <name>jmssysmod-jms</name>
   <target>cluster1</target>
   <sub-deployment>
     <name>servergroup</name>
     <target>wlserver1,wlserver2,wlserver3</target>
   </sub-deployment> 
   <descriptor-file-name>jms/jmssysmod-jms.xml</descriptor-file-name>
  </jms-system-resource>

Pausing and Resuming Message Operations on UDD Members

You can pause and resume message production, insertion, and consumption operations on a uniform distributed destination, either programmatically (using JMX and the runtime MBean API) or administratively (using the Administration Console). In this way, you can control the JMS subsystem behavior in the event of an external resource failure that would otherwise cause the JMS subsystem to overload the system by continuously accepting and delivering (and redelivering) messages.
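For example, pausing production on one member destination through JMX might look like the following hedged sketch. The URL, credentials, MBean ObjectName, and the pauseProduction operation name are assumptions to adapt to your own domain; the Administration Console's monitoring pages and the Oracle WebLogic Server MBean Reference show the actual runtime MBean names and operations.

import java.util.Hashtable;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;

public class PauseUddProduction {
    public static void main(String[] args) throws Exception {
        // Host, port, and credentials below are placeholders.
        JMXServiceURL url = new JMXServiceURL("service:jmx:t3://adminhost:7001/jndi/"
                + "weblogic.management.mbeanservers.domainruntime");
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.SECURITY_PRINCIPAL, "weblogic");
        env.put(Context.SECURITY_CREDENTIALS, "password");
        env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");

        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Assumption: this ObjectName identifies the runtime MBean of one UDQ
            // member; copy the real name from the console or the MBean Reference.
            ObjectName member = new ObjectName(
                "com.bea:ServerRuntime=wlserver1,Type=JMSDestinationRuntime,"
                + "JMSServerRuntime=JMSServer1,JMSRuntime=wlserver1.jms,Name=JMSServer1@UDQ");
            mbs.invoke(member, "pauseProduction", null, null);   // stop accepting new messages
            // mbs.invoke(member, "resumeProduction", null, null);
        } finally {
            connector.close();
        }
    }
}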

For more information on the "pause and resume" feature, see Controlling Message Operations on Destinations.

Monitoring UDD Members

Runtime statistics for uniform distributed destination members can be monitored using the Administration Console, as described in Monitoring JMS Statistics.

Configuring Partitioned Distributed Topics

The uniform distributed topic message Forwarding Policy specifies whether a sent message is forwarded to all members.

The valid values are:

  • Replicated: The default. All physical topic members receive each sent message. If a message arrives at one of the physical topic members, a copy of this message is forwarded to the other members of that uniform distributed topic. A subscription on any one particular member will get a copy of any message sent to the uniform distributed topic logical name or to any particular uniform distributed topic member.

  • Partitioned: The physical member receiving the message is the only member of the uniform distributed topic that is aware of the message. When a message is published to the logical name of a Partitioned uniform distributed topic, it will only arrive on one particular physical topic member. Once a message arrives on a physical topic member, the message is not forwarded to the rest of the members of the uniform distributed destination, and subscribers on other physical topic members do not get a copy of that message.

Most new applications will use the Partitioned forwarding policy in combination with a logical subscription topology on a uniform distributed topic that consists of:

  • A physical subscription with the same name created directly on each physical member.

  • A Client ID Policy of Unrestricted.

  • A Subscription Sharing Policy of Sharable.

For more information on how to create and use the partitioned distributed topic, see:

Load Balancing Partitioned Distributed Topics

Partitioned topic publishers have the option of load balancing their messages across multiple members by tuning the connection factory Affinity and Load Balance attributes. Unit-of-Order messages are routed to the correct member based on the UOO routing policy and the subscriber status. See "Configure connection factory load balancing parameters" in the Oracle WebLogic Server Administration Console Help.

Creating Weighted Distributed Destinations

Note:

Weighted Distributed Destinations are deprecated in WebLogic Server 10.3.4.0. Oracle recommends using Uniform Distributed Destinations.

The WebLogic Server Administration Console enables you to configure, modify, target, and delete WDD resources in JMS system modules. When configuring a distributed topic or distributed queue, clearing the "Allocate Members Uniformly" check box allows you to manually select existing queues and topics to add to the distributed destination, and to fine-tune the weighting of resulting distributed destination members.

For a road map of the weighted distributed destination tasks, see the following topics in the Oracle WebLogic Server Administration Console Help:

Some weighted distributed destination options are dynamically configurable. When options are modified at run time, only incoming messages are affected; stored messages are not affected. For more information about the default values for all weighted distributed destination options, see the following entries in the Oracle WebLogic Server MBean Reference:

Unlike UDDs, WDD members cannot be monitored with the Administration Console or through runtime MBeans. Also, WDD members cannot be uniformly targeted to JMS servers or WebLogic Server instances in a domain. Instead, new WDD members must be manually configured on such instances, and then manually added to the WDD.

Load Balancing Messages Across a Distributed Destination

By using distributed destinations, JMS can spread or balance the messaging load across multiple destinations, which can result in better use of resources and improved response times. The JMS load-balancing algorithm determines the physical destinations that messages are sent to, as well as the physical destinations that consumers are assigned to.

Load Balancing Options

WebLogic JMS supports two different algorithms for balancing the message load across multiple physical destinations within a given distributed destination set. You select one of these load balancing options when configuring a distributed topic or queue on the Administration Console.

Round-Robin Distribution

In the round-robin algorithm, WebLogic JMS maintains an ordering of physical destinations within the distributed destination. The messaging load is distributed across the physical destinations one at a time, in the order in which they are defined in the WebLogic Server configuration (config.xml) file. Each WebLogic Server maintains an identical ordering, but may be at a different point within the ordering. Multiple threads of execution within a single server that use a given distributed destination affect each other with respect to which physical destination each message is sent to. Round-robin is the default algorithm and does not need to be configured.

For weighted distributed destinations only, if weights are assigned to any of the physical destinations in the set for a given distributed destination, then those physical destinations appear multiple times in the ordering.

Random Distribution

The random distribution algorithm uses the weight assigned to the physical destinations to compute a weighted distribution for the set of physical destinations. The messaging load is distributed across the physical destinations by pseudo-randomly accessing the distribution. In the short run, the load will not be directly proportional to the weights. In the long run, the distribution approaches the configured weighted distribution. A pure random distribution can be achieved by setting all the weights to the same value, which is typically 1.

Adding or removing a member (either administratively or as a result of a WebLogic Server shutdown or restart event) requires a recomputation of the distribution. However, such events should be infrequent, and the computation is simple, running in O(n) time.
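The following is only a simplified illustration of the weighted random idea described above, not WebLogic's internal implementation: each member is chosen with a probability proportional to its weight, so the long-run selection approaches the configured weights.

import java.util.List;
import java.util.Random;

public class WeightedPicker {
    private final List<String> members;   // physical destination names
    private final int[] weights;          // configured member weights
    private final int totalWeight;
    private final Random random = new Random();

    public WeightedPicker(List<String> members, int[] weights) {
        this.members = members;
        this.weights = weights;
        int sum = 0;
        for (int w : weights) sum += w;
        this.totalWeight = sum;            // recomputed when members are added or removed
    }

    public String pick() {
        int r = random.nextInt(totalWeight); // pseudo-random index into the weighted distribution
        for (int i = 0; i < weights.length; i++) {
            r -= weights[i];
            if (r < 0) return members.get(i);
        }
        return members.get(members.size() - 1); // not reached; satisfies the compiler
    }
}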

Consumer Load Balancing

When an application creates a consumer, it must provide a destination. If that destination represents a distributed destination, then WebLogic JMS must find a physical destination that the consumer will receive messages from. The choice of which destination member to use is made by using one of the load-balancing algorithms described in Load Balancing Options. The choice is made only once: when the consumer is created. From that point on, the consumer gets messages from that member only.

Producer Load Balancing

When a producer sends a message, WebLogic JMS looks at the destination where the message is being sent. If the destination is a distributed destination, WebLogic JMS makes a decision as to where the message will be sent. That is, the producer will send to one of the destination members according to one of the load-balancing algorithms described in Load Balancing Options.

The producer makes such a decision each time it sends a message. However, there is no compromise of ordering guarantees between a consumer and producer, because consumers are load balanced once, and are then pinned to a single destination member.

Note:

If a producer attempts to send a persistent message to a distributed destination, every effort is made to first forward the message to distributed members that utilize a persistent store. However, if none of the distributed members utilize a persistent store, then the message will still be sent to one of the members according to the selected load-balancing algorithm.

Load Balancing Heuristics

In addition to the algorithms described in Load Balancing Options, WebLogic JMS uses the following heuristics when choosing an instance of a destination.

Transaction Affinity

When producing multiple messages within a transacted session, an effort is made to send all messages produced to the same WebLogic Server. Specifically, if a session sends multiple messages to a single distributed destination, then all of the messages are routed to the same physical destination. If a session sends multiple messages to multiple different distributed destinations, an effort is made to choose a set of physical destinations served by the same WebLogic Server.
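For example, in the following sketch both messages are sent in the same transacted session, so they are routed to the same physical member of the distributed queue. This uses only the standard JMS API; the destination is assumed to be a distributed queue looked up elsewhere.

import javax.jms.*;

public class TransactedSender {
    // Both sends occur in the same transacted session, so WebLogic JMS routes
    // them to the same physical member of the distributed queue.
    public static void sendPair(Connection con, Destination distributedQueue)
            throws JMSException {
        Session session = con.createSession(true, Session.SESSION_TRANSACTED);
        try {
            MessageProducer producer = session.createProducer(distributedQueue);
            producer.send(session.createTextMessage("first"));
            producer.send(session.createTextMessage("second"));
            session.commit(); // both messages are delivered to the same member
        } finally {
            session.close();
        }
    }
}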

Server Affinity

The Server Affinity Enabled parameter on connection factories defines whether a WebLogic Server instance that is load balancing consumers or producers across multiple member destinations in a distributed destination set will first attempt to load balance across any local destination members that are also running on the same server instance.

Note:

The Server Affinity Enabled attribute does not affect queue browsers. Therefore, a queue browser created on a distributed queue can be pinned to a remote distributed queue member even when Server Affinity is enabled.

To disable server affinity on a connection factory:

  1. Follow the directions for navigating to the JMS Connection Factory > Configuration > General page in the "Configure connection factory load balancing parameters" topic in the Oracle WebLogic Server Administration Console Help.

  2. Define the Server Affinity Enabled field as follows:

    • If the Server Affinity Enabled check box is selected (True), then a WebLogic Server that is load balancing consumers or producers across multiple physical destinations in a distributed destination set will first attempt to load balance across any physical destinations that are also running on the same WebLogic Server instance.

    • If the Server Affinity Enabled check box is not selected (False), then a WebLogic Server load balances consumers or producers across all the physical destinations in the distributed destination set, without giving preference to physical destinations that are running on the same WebLogic Server instance.

  3. Click Save.

For more information about how the Server Affinity Enabled setting affects the load balancing among the members of a distributed destination, see Distributed Destination Load Balancing When Server Affinity Is Enabled.

Queues with Zero Consumers

When load balancing consumers across multiple remote physical queues, if one or more of the queues have zero consumers, then those queues alone are considered for balancing the load. Once all the physical queues in the set have at least one consumer, the standard algorithms apply.

In addition, when producers are sending messages, queues with zero consumers are not considered for message production, unless all instances of the given queue have zero consumers.

Paused Distributed Destination Members

When distributed destination members are paused for message production or insertion, they are not considered for message production. Similarly, members that are paused for consumption are not considered for message production.

For more information on pausing message operations on destinations, see Controlling Message Operations on Destinations.

Defeating Load Balancing

Applications can defeat load balancing by directly accessing the individual physical destinations. Even if a physical destination has no JNDI name, it can still be referenced using the createQueue() or createTopic() methods.

For instructions on how to directly access uniform and weighted distributed destination members, see "Accessing Distributed Destination Members" in Programming JMS for Oracle WebLogic Server.
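As an illustration, sending directly to one physical member with the standard createQueue() call might look like the following sketch. The member name string is a placeholder; the exact naming syntax is given in "Accessing Distributed Destination Members."

import javax.jms.*;

public class DirectMemberSender {
    // Sends directly to one physical member, bypassing distributed-destination
    // load balancing. The member name syntax is described in
    // "Accessing Distributed Destination Members".
    public static void sendToMember(Session session, String memberName, String text)
            throws JMSException {
        Queue member = session.createQueue(memberName); // no JNDI lookup required
        MessageProducer producer = session.createProducer(member);
        producer.send(session.createTextMessage(text));
    }
}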

Connection Factories

Applications that use distributed destinations to distribute or balance their producers and consumers across multiple physical destinations, but do not want to make a load balancing decision each time a message is produced, can use a connection factory with the Load Balancing Enabled parameter disabled. To ensure a fair distribution of the messaging load among the members of a distributed destination, the initial physical destination (queue or topic) used by producers is always chosen at random from among the distributed destination members.

To disable load balancing on a connection factory:

  1. Follow the directions for navigating to the JMS Connection Factory > Configuration > General page in the "Configure connection factory load balancing parameters" topic in the Oracle WebLogic Server Administration Console Help.

  2. Define the setting of the Load Balancing Enabled field using the following guidelines:

    • Load Balancing Enabled = True

      For QueueSender.send() methods, non-anonymous producers are load balanced on every invocation across the distributed queue members.

      For TopicPublisher.publish() methods, non-anonymous producers are always pinned to the same physical topic for every invocation, irrespective of the Load Balancing Enabled setting.

    • Load Balancing Enabled = False

      Producers always produce to the same physical destination until they fail. At that point, a new physical destination is chosen.

  3. Click Save.

    Note:

    Depending on your implementation, the setting of the Server Affinity Enabled attribute can affect load balancing preferences for distributed destinations. For more information, see Distributed Destination Load Balancing When Server Affinity Is Enabled.

Anonymous producers (producers that do not designate a destination when created) are load balanced each time they switch destinations. If they continue to use the same destination, then the rules for non-anonymous producers apply (as stated previously).

Distributed Destination Load Balancing When Server Affinity Is Enabled

Table 4-2 explains how the setting of a connection factory's Server Affinity Enabled parameter affects the load balancing preferences for distributed destination members. The order of preference depends on the type of operation and whether or not durable subscriptions or persistent messages are involved.

The Server Affinity Enabled parameter for distributed destinations is different from the server affinity provided by the Default Load Algorithm attribute in the ClusterMBean, which is also used by the JMS connection factory to create initial context affinity for client connections.

For more information, refer to the "Load Balancing for EJBs and RMI Objects" and "Initial Context Affinity and Server Affinity for Client Connections" sections in Using Clusters for Oracle WebLogic Server.

Table 4-2 Server Affinity Load Balancing Preferences

Operation: createReceiver() for queues; createSubscriber() for topics
Server Affinity Enabled: True
Load balancing preference is given to a:
  1. local member without a consumer
  2. local member
  3. remote member without a consumer
  4. remote member

Operation: createReceiver() for queues
Server Affinity Enabled: False
Load balancing preference is given to a:
  1. member without a consumer
  2. member

Operation: createSubscriber() for topics (non-durable subscribers)
Server Affinity Enabled: True or False
Load balancing preference is given to a:
  1. local member without a consumer
  2. local member

Operation: createSender() for queues; createPublisher() for topics
Server Affinity Enabled: True or False
Load balancing preference: There is no separate load balancing machinery for JMS producer creation. JMS producers are created on the server on which your JMS connection is load balanced or pinned. For more information about load balancing JMS connections created through a connection factory, refer to the "Load Balancing for EJBs and RMI Objects" and "Initial Context Affinity and Server Affinity for Client Connections" sections in Using Clusters for Oracle WebLogic Server.

Operation: QueueSender.send() for persistent messages
Server Affinity Enabled: True
Load balancing preference is given to a:
  1. local member with a consumer and a store
  2. remote member with a consumer and a store
  3. local member with a store
  4. remote member with a store
  5. local member with a consumer
  6. remote member with a consumer
  7. local member
  8. remote member

Operation: QueueSender.send() for persistent messages
Server Affinity Enabled: False
Load balancing preference is given to a:
  1. member with a consumer and a store
  2. member with a store
  3. member with a consumer
  4. member

Operation: QueueSender.send() for non-persistent messages
Server Affinity Enabled: True
Load balancing preference is given to a:
  1. local member with a consumer
  2. remote member with a consumer
  3. local member
  4. remote member

Operation: QueueSender.send() and TopicPublisher.publish() for non-persistent messages
Server Affinity Enabled: False
Load balancing preference is given to a:
  1. member with a consumer
  2. member

Operation: createConnectionConsumer() for session pool queues and topics
Server Affinity Enabled: True or False
Load balancing preference is given to a: local member only

Note: Session pools are now rarely used, as they are not a required part of the Java EE specification, do not support JTA user transactions, and are largely superseded by message-driven beans (MDBs), which are simpler, easier to manage, and more capable.


Distributed Destination Migration

For clustered JMS implementations that take advantage of the Service Migration feature, a JMS server and its distributed destination members can be manually migrated to another WebLogic Server instance within the cluster. Service migrations can take place due to scheduled system maintenance, as well as in response to a server failure within the cluster.

However, the target WebLogic Server may already be hosting a JMS server with all of its physical destinations. This can lead to situations where the same WebLogic Server instance hosts two physical destinations for a single distributed destination. This is permissible in the short term, since a WebLogic Server instance can host multiple physical destinations for that distributed destination. However, load balancing in this situation is less effective.

In such a situation, each JMS server on a target WebLogic Server instance operates independently. This is necessary to avoid merging of the two destination instances, and/or disabling of one instance, which can make some messages unavailable for a prolonged period of time. The long-term intent, however, is to eventually re-migrate the migrated JMS server to yet another WebLogic Server instance in the cluster.

For more information about configuring JMS migratable targets, see Migration of JMS-related Services.

Distributed Destination Failover

If the server instance that is hosting the JMS connections for the JMS producers and JMS consumers should fail, then all the producers and consumers using these connections are closed and are not re-created on another server instance in the cluster. Furthermore, if a server instance that is hosting a JMS destination should fail, then all the JMS consumers for that destination are closed and not re-created on another server instance in the cluster.

If the distributed queue member on which a queue producer is created should fail, yet the WebLogic Server instance where the producer's JMS connection resides is still running, the producer remains alive and WebLogic JMS will fail it over to another distributed queue member, irrespective of whether the Load Balancing option is enabled.

For more information about procedures for recovering from a WebLogic Server failure, see "Recovering From a Server Failure" in Programming JMS for Oracle WebLogic Server.

Configure an Unrestricted ClientID

The Client ID Policy specifies whether more than one JMS connection can use the same Client ID in a cluster. Valid values for this policy are:

  • RESTRICTED: The default. Only one connection that uses this policy can exist in a cluster at any given time for a particular Client ID (if a connection already exists with a given Client ID, attempts to create new connections using this policy with the same Client ID fail with an exception).

  • UNRESTRICTED: Connections created using this policy can specify any Client ID, even when other restricted or unrestricted connections already use the same Client ID. When a durable subscription is created using an Unrestricted Client ID, it can only be cleaned up using weblogic.jms.extensions.WLSession.unsubscribe(Topic topic, String name). See Managing Subscriptions in Programming JMS for Oracle WebLogic Server.

Oracle recommends setting the Client ID policy to Unrestricted for new applications (unless your application architecture requires exclusive Client IDs), especially if sharing a subscription (durable or non-durable). Subscriptions created with different Client ID policies are always treated as independent subscriptions. See ClientIdPolicy in the Oracle WebLogic Server MBean Reference.

To set the Client ID Policy on the connection factory using the WebLogic Server Administration Console, see "Configure multiple connections using the same client Id" in the Oracle WebLogic Server Administration Console Help. The connection factory setting can be overridden programmatically using the setClientIDPolicy method of the WLConnection interface in the Oracle WebLogic Server API Reference.

Note:

Programmatically changing (overriding) the Client ID policy settings on a JMS connection runtime object is valid only for that particular connection instance and for the life of that connection. Any changes made to the connection runtime object are not persisted/reflected by the corresponding JMS connection factory configuration defined in the underlying JMS module descriptor.
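A hedged sketch of the programmatic override described above follows; the "Unrestricted" string value is an assumption based on the UNRESTRICTED policy value listed earlier, and the client ID is a placeholder.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import weblogic.jms.extensions.WLConnection;

public class UnrestrictedClientId {
    public static Connection connect(ConnectionFactory cf, String clientId)
            throws JMSException {
        Connection con = cf.createConnection();
        // Assumption: "Unrestricted" is the string form of the UNRESTRICTED
        // policy; set it before the client ID and before the connection is used.
        ((WLConnection) con).setClientIDPolicy("Unrestricted");
        con.setClientID(clientId); // multiple connections may now share this ID
        return con;
    }
}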

For more information on how to use the Client ID Policy, see:

Configure Shared Subscriptions

The Subscription Sharing Policy specifies whether subscribers can share subscriptions with other subscribers on the same connection. Valid values for this policy are:

  • Exclusive: The default. Subscribers created using this connection factory cannot share subscriptions with any other subscribers.

  • Sharable: Subscribers created using this connection factory can share their subscriptions with other subscribers, regardless of whether those subscribers are created using the same connection factory or a different connection factory. Consumers can share a non-durable subscription only if they have the same Client ID and Client ID Policy; consumers can share a durable subscription only if they have the same Client ID, Client ID Policy, and Subscription Name.

WebLogic JMS applications can override the Subscription Sharing Policy specified on the connection factory configuration by casting a javax.jms.Connection instance to weblogic.jms.extensions.WLConnection and calling setSubscriptionSharingPolicy(String).

Most applications with a Sharable Subscription Sharing Policy will also use an Unrestricted Client ID Policy in order to ensure that multiple connections with the same client ID can exist.

Two durable subscriptions with the same Client ID and Subscription Name are treated as two different independent subscriptions if they have a different Client ID Policy. Similarly, two Sharable non-durable subscriptions with the same Client ID are treated as two different independent subscriptions if they have a different Client ID Policy.
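Putting the two policies together, a hedged sketch of creating a sharable durable subscription might look like the following; the policy strings, client ID, and subscription name are assumptions or placeholders.

import javax.jms.*;
import weblogic.jms.extensions.WLConnection;

public class SharedDurableSubscriber {
    // Two subscribers created this way (even in different JVMs) share one
    // durable subscription, because they use the same client ID, client ID
    // policy, and subscription name.
    public static TopicSubscriber subscribe(ConnectionFactory cf, Topic topic)
            throws JMSException {
        Connection con = cf.createConnection();
        WLConnection wlCon = (WLConnection) con;
        // Assumption: these are the string forms of the Unrestricted and
        // Sharable policy values described above.
        wlCon.setClientIDPolicy("Unrestricted");
        wlCon.setSubscriptionSharingPolicy("Sharable");
        con.setClientID("inventoryApp");
        Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
        TopicSubscriber sub = session.createDurableSubscriber(topic, "inventorySub");
        con.start();
        return sub;
    }
}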

For more information on how to use the Subscription Sharing Policy, see: