12 Tuning WebLogic JMS

Get the most out of your applications by implementing the administrative performance tuning features available with Oracle WebLogic Server JMS.

JMS Performance & Tuning Check List

Review a checklist of items to consider when tuning WebLogic JMS.

  • Always configure quotas, see Defining Quota.

  • Verify that default paging settings apply to your needs, see Paging Out Messages To Free Up Memory. Paging lowers performance but may be required if JVM memory is insufficient.

  • Avoid large message backlogs. See Handling Large Message Backlogs.

  • Create and use custom connection factories with all applications instead of using default connection factories, including when using MDBs. Default connection factories are not tunable, while custom connection factories provide many options for performance tuning.

  • Write applications so that they cache and re-use JMS client resources, including JNDI contexts and lookups, and JMS connections, sessions, consumers, or producers. These resources are relatively expensive to create. For information on detecting when caching is needed, as well as on built-in pooling features, see Cache and Re-use Client Resources.

  • For asynchronous consumers and MDBs, tune MessagesMaximum on the connection factory. Increasing MessagesMaximum can improve performance, while decreasing it to its minimum value can lower performance but helps ensure that messages do not end up waiting for a consumer that is already processing a message. See Tuning MessageMaximum.

  • Avoid single threaded processing when possible. Use multiple concurrent producers and consumers and ensure that enough threads are available to service them.

  • Tune server-side applications so that they have enough instances. Consider creating dedicated thread pools for these applications. See Tuning Message-Driven Beans.

  • For client-side applications with asynchronous consumers, tune client-side thread pools using Client-side Thread Pools.

  • Tune persistence as described in Tuning the WebLogic Persistent Store. In particular, it's normally best for multiple JMS servers, destinations, and other services to share the same store so that the store can aggregate concurrent requests into single physical I/O requests, and to reduce the chance that a JTA transaction spans more than one store. Multiple stores should only be considered once it's been established that a single store is not scaling to handle the current load.

  • If you have large messages, see Tuning for Large Messages.

  • Prevent unnecessary message routing in a cluster by carefully configuring connection factory targets. Messages potentially route through two servers, as they flow from a client, through the client's connection host, and then on to a final destination. For server-side applications, target connection factories to the cluster. For client-side applications that work with a distributed destination, target connection factories only to servers that host the distributed destinations members. For client-side applications that work with a singleton destination, target the connection factory to the same server that hosts the destination.

  • If JTA transactions include both JMS and JDBC operations, consider enabling the JDBC LLR optimization. LLR is a commonly used safe "ACID" optimization that can lead to significant performance improvements, with some drawbacks. See Tuning Transactions.

  • If you are using Java clients, avoid thin Java clients except when a small jar size is more important than performance. Thin clients use the slower IIOP protocol even when T3 is specified, so use a full Java client instead. See Developing Standalone Clients for Oracle WebLogic Server.

  • Tune JMS Store-and-Forward according to Tuning WebLogic JMS Store-and-Forward.

  • Tune a WebLogic Messaging Bridge according to Tuning WebLogic Message Bridge.

  • For asynchronous message sends, see Using JMS 2.0 Asynchronous Message Sends (preferred), or if JMS 2.0 is not an option, and you are using non-persistent non-transactional remote producer clients, then consider enabling one-way calls. See Using One-Way Message Sends.

  • Consider using JMS distributed queues. See Using Distributed Queues in Developing JMS Applications for Oracle WebLogic Server.

  • If you are already using distributed queues, see Tuning Distributed Queues.

  • Consider using advanced distributed topic features (PDTs). See Developing Advanced Pub/Sub Applications in Developing JMS Applications for Oracle WebLogic Server.

  • If your applications use Topics, see Tuning Topics.

  • Avoid configuring sorted destinations, including priority sorted destinations. FIFO or LIFO destinations are the most efficient. Destination sorting can be expensive when there are large message backlogs; even a backlog of a few hundred messages can lower performance.

  • Use careful selector design. See Filtering Messages in Developing JMS Applications for Oracle WebLogic Server.

  • Run applications on the same WebLogic Servers that are also hosting destinations. This eliminates networking and some or all marshalling overhead, and can heavily reduce network and CPU usage. It also helps ensure that transactions are local to a single server. This is one of the major advantages of using an application server's embedded messaging.

Handling Large Message Backlogs

When message senders inject messages faster than consumers can process them, messages accumulate into a message backlog.

Large backlogs can be problematic for a number of reasons, for example:

  • Can indicate that consumers are not capable of handling the incoming message load, are failing, or are not properly load balanced across a distributed queue.

  • Can lead to out-of-memory on the server, which in turn prevents the server from doing any work.

  • Can lead to high garbage collection (GC) overhead. A JVM's GC overhead is partially proportional to the number of live objects in the JVM.

Improving Message Processing Performance

One area for investigation is to improve overall message processing performance. Here are some suggestions:

  • Follow the JMS tuning recommendations as described in JMS Performance & Tuning Check List.

  • Check for programming errors in newly developed applications. In particular, ensure that non-transactional consumers acknowledge messages, that transactional consumers commit transactions, that plain javax.jms applications call javax.jms.Connection.start(), and that transaction timeouts are tuned to reflect the needs of your particular application. Some symptoms of programming errors: consumers not receiving any messages (make sure they called start()), high "pending" counts for queues, already-processed persistent messages re-appearing after a shutdown and restart, and already-processed transactional messages re-appearing after a delay (the default JTA timeout is 30 seconds; the default transacted session timeout is one hour).

  • Check WebLogic statistics for queues that are not being serviced by consumers. If you're having a problem with distributed queues, see Tuning Distributed Queues.

  • Check WebLogic statistics for topics with high pending counts. This usually indicates that there are topic subscriptions that are not being serviced. There may be a slow or unresponsive consumer client that's responsible for processing the messages, or it's possible that a durable subscription may no longer be needed and should be deleted, or the messages may be accumulating due to delayed distributed topic forwarding. You can check statistics for individual durable subscriptions on the WebLogic Server Administration Console. A durable subscription with a large backlog may have been created by an application but never deleted. Unserviced durable subscriptions continue to accumulate topic messages until they are either administratively destroyed, or unsubscribed by a standard JMS client.

  • Understand distributed topic behavior when not all members are active. In a distributed topic, each message produced to a particular topic member is forwarded to each remote topic member. If a remote topic member is unavailable, then the local topic member stores each produced message for later forwarding. Therefore, if a topic member is unavailable for a long period of time, large backlogs can develop on the active members. In some applications, this backlog can be addressed by setting expiration times on the messages. See Defining a Message Expiration Policy.

  • In certain applications it may be fine to automatically delete old unprocessed messages. See Handling Expired Messages.

  • For transactional MDBs, consider using MDB transaction batching, as this can yield a five-fold improvement in some use cases.

  • Leverage distributed queues and add more JVMs to the cluster (in order to add more distributed queue member instances). For example, split a 200,000 message backlog across 4 JVMs at 50,000 messages per JVM, instead of across 2 JVMs at 100,000 messages per JVM.

  • For client applications, use asynchronous consumers instead of synchronous consumers when possible. Asynchronous consumers can have a significantly lower network overhead, lower latency, and do not block a thread while waiting for a message.

  • For synchronous consumer client applications, consider enabling prefetch, using CLIENT_ACKNOWLEDGE to acknowledge multiple consumed messages at a time, and using DUPS_OK_ACKNOWLEDGE instead of AUTO_ACKNOWLEDGE. (See the consumer sketch after this list.)

  • For asynchronous consumer client applications, consider using DUPS_OK_ACKNOWLEDGE instead of AUTO_ACKNOWLEDGE.

  • Leverage batching. For example, include multiple messages in each transaction, or send one larger message instead of many smaller messages.

  • For non-durable subscriber client-side applications that can tolerate missing ("dropped") messages, investigate MULTICAST_NO_ACKNOWLEDGE. This mode broadcasts messages concurrently to subscribers over UDP multicast.
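
To make the consumer-side suggestions above concrete, here is a minimal, hedged sketch of a synchronous queue consumer that calls Connection.start() and batches acknowledgements with CLIENT_ACKNOWLEDGE. The JNDI names are hypothetical placeholders, and the batch size of 50 is only an illustration, not a recommendation from this guide.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class BatchAckConsumer {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext(); // provider URL/credentials assumed to come from jndi.properties
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory"); // hypothetical JNDI name
        Queue queue = (Queue) ctx.lookup("jms/MyQueue");                                  // hypothetical JNDI name
        ctx.close();

        Connection connection = cf.createConnection();
        try {
            Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(queue);
            connection.start(); // without start(), the consumer never receives messages

            int received = 0;
            Message unacked = null;
            Message msg;
            while ((msg = consumer.receive(5000)) != null) {
                // ... process the message here ...
                unacked = msg;
                if (++received % 50 == 0) {
                    msg.acknowledge(); // acknowledges every message consumed so far on this session
                    unacked = null;
                }
            }
            if (unacked != null) {
                unacked.acknowledge(); // acknowledge any remaining messages
            }
        } finally {
            connection.close(); // also closes the session and consumer
        }
    }
}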

Controlling Message Production

Another area for investigation is to slow down or even stop message production. Here are some suggestions:

  • Set lower quotas. See Defining Quota. For topics, additionally consider tuning a subscription limit. See Subscription Message Limits.

  • Use fewer producer threads.

  • Tune the sender blocking timeout that occurs during a quota condition, as described in Blocking Senders During Quota Conditions. The timeout is tunable on the connection factory.

  • Tune producer flow control, which automatically slows down producer calls under threshold conditions. See Controlling the Flow of Messages on JMS Servers and Destinations.

  • Consider modifying the application to implement flow control. For example, some applications do not allow producers to inject more messages until a consumer has successfully processed the previous batch of produced messages (a windowing protocol). Other applications might implement a request/reply algorithm where a new request isn't submitted until the previous reply is received, essentially a windowing protocol with a window size of 1, as sketched below. In some cases, JMS tuning is not required because the synchronous flow from the RMI/EJB/servlet is adequate.
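
As an illustration of the request/reply style of application-level flow control described above, here is a hedged sketch with a window size of 1. It assumes the connection is already started and that the consuming side sends a reply to the JMSReplyTo destination of each request; the 30-second wait is an arbitrary placeholder.

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TemporaryQueue;
import javax.jms.TextMessage;

public class RequestReplyProducer {
    // Window size of 1: the next request is not sent until the previous reply arrives.
    // Assumes the owning connection has already been started so replies can be received.
    public static void sendWithReplies(Session session, MessageProducer producer, String[] payloads)
            throws JMSException {
        TemporaryQueue replyQueue = session.createTemporaryQueue();
        MessageConsumer replyConsumer = session.createConsumer(replyQueue);
        try {
            for (String payload : payloads) {
                TextMessage request = session.createTextMessage(payload);
                request.setJMSReplyTo(replyQueue); // the consumer replies to this temporary queue
                producer.send(request);

                Message reply = replyConsumer.receive(30000); // block until the reply arrives
                if (reply == null) {
                    throw new JMSException("No reply within 30 seconds; stopping production");
                }
            }
        } finally {
            replyConsumer.close();
            replyQueue.delete();
        }
    }
}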

Drawbacks to Controlling Message Production

Slowing down or stopping message production has at least two potential drawbacks:

  • It puts back-pressure on the down-stream flow that is calling the producer. Sometimes the down-stream flow cannot handle this back-pressure, and a hard-to-handle backlog develops behind the producer. The location of the backlog depends on what's calling the producer. For example, if the producer is being called by a servlet, the backlog might manifest as packets accumulating on the incoming network socket or network card.

  • Blocking calls on server threads can lead to thread starvation, too many active threads, or even deadlocks. The key to addressing this problem is usually to ensure that producer threads run in a size-limited, dedicated thread pool, so that blocked threads do not interfere with activity in other thread pools. For example, if an EJB or servlet calls a "send" that might block for a significant time, configure a custom work manager with a max threads constraint, and set the dispatch-policy of the EJB/servlet to reference this work manager.

Cache and Re-use Client Resources

JMS client resources are relatively expensive to create in comparison with sending and receiving messages. These resources should be cached or pooled for re-use rather than recreated with each message. They include contexts, destinations, connection factories, connections, sessions, consumers, and producers.

In addition, it is important for applications to close contexts, connections, sessions, consumers, and producers once they are completely done with these resources. Failing to close unused resources leads to a memory leak, which lowers overall JVM performance and may eventually cause the JVM to fail with an out-of-memory error. Be aware that JNDI contexts have a close() method, and that closing a JMS connection automatically and efficiently closes all sessions, consumers, and producers created using the connection.

For server-side applications, WebLogic automatically wraps and pools JMS resources that are accessed using a resource reference. See Enhanced Support for Using WebLogic JMS with EJBs and Servlets in Developing JMS Applications for Oracle WebLogic Server.
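
The following is a minimal client-side sketch of this practice: the context, connection, session, and producer are created once, reused for every send, and closed together. The JNDI names are hypothetical placeholders; server-side code should normally prefer the resource-reference pooling described above instead.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

/** Caches one connection, session, and producer, and reuses them for every send. */
public class CachedSender implements AutoCloseable {
    private final Connection connection;
    private final Session session;
    private final MessageProducer producer;

    public CachedSender(String cfJndiName, String queueJndiName) throws Exception {
        InitialContext ctx = new InitialContext();                       // created once, not per message
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup(cfJndiName);
        Queue queue = (Queue) ctx.lookup(queueJndiName);
        ctx.close();                                                     // JNDI contexts also need closing
        connection = cf.createConnection();
        session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        producer = session.createProducer(queue);
    }

    public void send(String text) throws JMSException {
        producer.send(session.createTextMessage(text));                  // reuses the cached resources
    }

    @Override
    public void close() throws JMSException {
        connection.close();  // closing the connection closes the session and producer as well
    }
}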

  • To check for heavy JMS resource allocation or leaks, you can monitor MBean statistics and/or use your particular JVM's built-in facilities. You can monitor MBean statistics using the console, WLST, or Java code.

  • Check JVM heap statistics for memory leaks or unexpectedly high allocation counts.

  • Similarly, check WebLogic statistics for memory leaks or unexpectedly high allocation counts.

Tuning Distributed Queues

Each distributed queue member is individually advertised in JNDI as jms-server-name@distributed-destination-jndi-name. If produced messages are failing to load balance evenly across all distributed queue members, you may wish to change the configuration of your producer connection factories to disable server affinity (enabled by default) or set Producer Load Balancing Policy to Per-JVM.
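
For example, the following hedged sketch looks up one member destination using the jms-server-name@distributed-destination-jndi-name naming convention described above; the server and JNDI names passed in are placeholders supplied by the caller.

import javax.jms.Queue;
import javax.naming.InitialContext;

public class MemberLookup {
    // Looks up a single distributed queue member, for example ("MyJMSServer1", "jms/MyDistributedQueue").
    public static Queue lookupMember(String jmsServerName, String distributedQueueJndiName) throws Exception {
        InitialContext ctx = new InitialContext();
        try {
            // Each member is advertised as jms-server-name@distributed-destination-jndi-name.
            return (Queue) ctx.lookup(jmsServerName + "@" + distributedQueueJndiName);
        } finally {
            ctx.close();
        }
    }
}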

Once created, a JMS consumer remains pinned to a particular queue member. This can lead to situations where consumers are not evenly load balanced across all distributed queue members, particularly if new members become available after all consumers have been initialized. If consumers fail to load balance evenly across all distributed queue members, the best option is to use an MDB that's targeted to a cluster designed to process the messages. WebLogic MDBs automatically ensure that all distributed queue members are serviced. If MDBs are not an option, here are some suggestions to improve consumer load balancing:

  • Ensure that your application is creating enough consumers and the consumer's connection factory is tuned using the available load balancing options. In particular, consider disabling the default server affinity setting and consider setting the Producer Load Balancing Policy to Per-JVM.

  • Change applications to periodically close and recreate consumers. This forces consumers to re-load balance.

  • Consume from individual queue members instead of from the distributed queue's logical name.

  • Configure the distributed queue to enable forwarding. Distributed queue forwarding automatically and internally forwards messages that are idling on a member destination without consumers to a member that has consumers. This approach may not be practical for high message load applications.

    Note:

    Queue forwarding is not compatible with the WebLogic JMS Unit-of-Order feature, as it can cause messages to be delivered out of order.

    See Using Distributed Destinations in Developing JMS Applications for Oracle WebLogic Server and Configuring Advanced JMS System Resources in Administering JMS Resources for Oracle WebLogic Server.

Tuning Topics

Review information on how to tune WebLogic Topics.

  • You may want to convert singleton topics to distributed topics. A distributed topic with a Partitioned policy generally outperforms the Replicated policy choice.

  • Oracle highly recommends leveraging MDBs to process Topic messages, especially when working with Distributed Topics. MDBs automate the creation and servicing of multiple subscriptions and also provide high scalability options to automatically distribute the messages for a single subscription across multiple Distributed Topic members.

  • There is a Sharable subscription extension that allows messages on a single topic subscription to be processed in parallel by multiple subscribers on multiple JVMs. WebLogic MDBs leverage this feature when they are not in Compatibility mode.

  • If the application can tolerate the deletion of old messages without having them be processed by a consumer, consider using message expirations or subscription limits. See Defining an Expiration Logging Policy and Subscription Message Limits.

  • If produced messages are failing to load balance evenly across the members of a Partitioned Distributed Topic, you may need to change the configuration of your producer connection factories to disable server affinity (enabled by default) or set Producer Load Balancing Policy to Per-JVM.

  • Before using any of these previously mentioned advanced features, Oracle recommends fully reviewing the following related documentation:

Tuning Non-durable Topic Publishers

Since WebLogic Server 9.0, a non-durable topic message publish request may block until the message is pushed to all consumers that are currently ready to process the message. This may cause non-durable topic publishers with large numbers of consumers to take longer to publish a message than expected. To revert to a publish that does not wait for consumers and waits only until it's confirmed the message arrived on a JMS server, use the following property:

-Dweblogic.messaging.DisableTopicMultiSender=true

Tuning for Large Messages

Learn how to improve JMS performance when handling large messages.

Tuning MessageMaximum

WebLogic JMS pipelines messages that are delivered to asynchronous consumers (otherwise known as message listeners) or prefetch-enabled synchronous consumers. This aids performance because messages are aggregated when they are internally pushed from the server to the client. The message backlog (the size of the pipeline) between the JMS server and the client is tunable by configuring the MessagesMaximum setting on the connection factory. See Asynchronous Message Pipeline in Developing JMS Applications for Oracle WebLogic Server.

In some circumstances, tuning the MessagesMaximum parameter may improve performance dramatically, such as when the JMS application defers acknowledgements or commits. In this case, Oracle suggests setting the MessagesMaximum value to:

2 * (ack or commit interval) + 1

For example, if the JMS application acknowledges 50 messages at a time, set the MessagesMaximum value to 101.

Tuning MessageMaximum Limitations

Tuning the MessagesMaximum value too high can cause:

  • Increased memory usage on the client.

  • Affinity to an existing client as its pipeline fills with messages. For example: If MessagesMaximum has a value of 10,000,000, the first consumer client to connect will get all messages that have already arrived at the destination. This condition leaves other consumers without any messages and creates an unnecessary backlog of messages in the first consumer that may cause the system to run out of memory.

  • "Packet too large" exceptions and stalled consumers. If the aggregate size of the messages pushed to a consumer is larger than the current protocol's maximum message size (the default is 10 MB, configured on a per WebLogic Server instance basis using the console and on a per client basis using the -Dweblogic.MaxMessageSize command line property), message delivery fails.

Setting Maximum Message Size for Network Protocols

You may need to configure WebLogic clients in addition to the WebLogic Server instances, when sending and receiving large messages.

For most protocols, including T3, WebLogic Server limits the size of a network call to 10 MB by default. If individual JMS message sizes exceed this limit, or if a set of JMS messages batched into the same network call exceeds this limit, the result can be "packet too large" exceptions and/or stalled consumers. Asynchronous consumers can cause multiple JMS messages to be batched into the same network call; to control this batch size, see Tuning MessageMaximum Limitations.

To set the maximum message size on a server instance, tune the maximum message size for each supported protocol on a per protocol basis for each involved default channel or custom channel. In this context the word 'message' refers to all network calls over the given protocol, not just JMS calls.

To set the maximum message size on a client, use the following command line property:

-Dweblogic.MaxMessageSize

Note:

This setting applies to all WebLogic Server network packets delivered to the client, not just JMS related packets.

Threshold Compression for Remote Producers

A message compression threshold can be set programmatically using a JMS API extension to the WLMessageProducer interface, or administratively by specifying a Default Compression Threshold value on a connection factory or on a JMS SAF remote context. Note that compressed messages may inadvertently affect destination quotas, since some message types grow larger when compressed.
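
As a sketch of the programmatic option, the following assumes the weblogic.jms.extensions.WLMessageProducer cast and its setCompressionThreshold(int) method; verify the exact signature against the WebLogic JMS extensions Javadoc before relying on it.

import javax.jms.JMSException;
import javax.jms.MessageProducer;
import weblogic.jms.extensions.WLMessageProducer;

public class CompressionThresholdExample {
    // Requests compression of message bodies larger than thresholdBytes for this producer.
    public static void setThreshold(MessageProducer producer, int thresholdBytes) throws JMSException {
        if (producer instanceof WLMessageProducer) {
            ((WLMessageProducer) producer).setCompressionThreshold(thresholdBytes);
        }
    }
}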

For instructions on configuring default compression thresholds using the WebLogic Server Administration Console, see:

Once configured, message compression is triggered on producers for client sends, on connection factories for message receives and message browsing, or through SAF forwarding. Messages are compressed using GZIP. Compression only occurs when message producers and consumers are located on separate server instances where messages must cross a JVM boundary, typically across a network connection when WebLogic domains reside on different machines. Decompression automatically occurs on the client side and only when the message content is accessed, except for the following situations:

  • Using message selectors on compressed XML messages can cause decompression, since the message body must be accessed in order to filter them. For more information on defining XML message selectors, see Filtering Messages in Developing JMS Applications for Oracle WebLogic Server.

  • Interoperating with earlier versions of WebLogic Server can cause decompression. For example, when using the Messaging Bridge, messages are decompressed when sent from the current release of WebLogic Server to a receiving side that is an earlier version of WebLogic Server.

On the server side, messages always remain compressed, even when they are written to disk.

Store Compression

WebLogic Server provides the ability to configure message compression for JMS Store I/O operations.

By selecting an appropriate message body compression option, JMS store I/O performance may improve for:

  • Persistent messages that are read from or written to disk.
  • Persistent and non-persistent messages that are paged in or paged out when JMS paging is enabled.

The following sections provide information on how to configure message compression:

For general tuning information on JMS message compression, see Threshold Compression for Remote Producers.

Selecting a Message Compression Option

This section provides information on the types of message compression available for use when message body compression is enabled.

Note:

The performance of each compression option is dependent on the operating environment, data type, and data size. Oracle recommends users test their environments to determine the most appropriate compression option.

Table 12-1 Message Body Compression Options

Compression Type Description

GZIP_DEFAULT_COMPRESSION

Use GZIP_DEFAULT_COMPRESSION to enable message compression using the JDK GZIP API with the DEFAULT_COMPRESSION level. See the java.util.zip package.

GZIP_BEST_COMPRESSION

Use GZIP_BEST_COMPRESSION to enable message compression using the JDK GZIP API with the BEST_COMPRESSION level. See the java.util.zip package.

GZIP_BEST_SPEED

Use GZIP_BEST_SPEED to enable message compression using the JDK GZIP API with the BEST_SPEED level. See the java.util.zip package.

LZF

Use LZF to enable message compression using Open Source LZF. See https://github.com/ning/compress.

Message Compression for JMS Servers

To configure message body compression for JMS servers:

  1. If you have not done so, create a JMS Server, see Create JMS Servers in the Oracle WebLogic Server Administration Console Online Help.
  2. Use the instructions to Configure general JMS server properties in the Oracle WebLogic Server Administration Console Online Help. Update the following Advanced JMS server attributes for your environment:
    1. Optionally, select Store Message Compression Enabled to enable the JMS store to perform message body compression. See StoreMessageCompressionEnabled in MBean Reference for Oracle WebLogic Server.
    2. Optionally, select Paging Message Compression Enabled to enable the JMS paging store to perform message body compression on persistent and non-persistent messages. See PagingMessageCompressionEnabled in MBean Reference for Oracle WebLogic Server.
    3. In Message Compression Options, specify the type of message compression used. See MessageCompressionOptions in MBean Reference for Oracle WebLogic Server.
Message Compression for Store-and-Forward Sending Agents

To configure message body compression for SAF Sending Agents:

  1. If you have not done so, create a SAF Sending Agent, see Create Store-and-Forward Agents in the Oracle WebLogic Server Administration Console Online Help.
  2. Use the instructions to Configure SAF agent general properties in the Oracle WebLogic Server Administration Console Online Help. Update the following Advanced Sending Agent attributes for your environment:
    1. Optionally, select Store Message Compression Enabled to enable the JMS store to perform message body compression. See StoreMessageCompressionEnabled in MBean Reference for Oracle WebLogic Server.
    2. Optionally, select Paging Message Compression Enabled to enable the JMS paging store to perform message body compression on persistent and non-persistent messages. See PagingMessageCompressionEnabled in MBean Reference for Oracle WebLogic Server.
    3. In Message Compression Options, specify the type of message compression used. See MessageCompressionOptions in MBean Reference for Oracle WebLogic Server.

Paging Out Messages To Free Up Memory

With the message paging feature, JMS servers automatically attempt to free up virtual memory during peak message load periods. This feature can greatly benefit applications with large message spaces. Message paging is always enabled on JMS servers, so a message paging directory is automatically created without having to configure one. You can, however, specify a directory using the Paging Directory option; paged-out messages are then written to files in that directory.

In addition to the paging directory, a JMS server uses either a file store or a JDBC store for persistent message storage. The file store can be user-defined or the server's default store. Paged JDBC store persistent messages are copied to both the JDBC store as well as the JMS Server's paging directory. Paged file store persistent messages that are small are copied to both the file store as well as the JMS Server's paging directory. Paged larger file store messages are not copied into the paging directory. See Best Practices When Using Persistent Stores.

However, a paged-out message does not free all of the memory that it consumes: the message header (with the exception of any user properties, which are paged out along with the message body) remains in memory for use with searching, sorting, and filtering. Queuing applications that use selectors to select paged messages may show severely degraded performance, as the paged-out messages must be paged back in. This does not apply to topics, or to applications that select based only on message header fields (such as CorrelationID). A good rule of thumb is to conservatively assume that each message uses 512 bytes of JVM memory even when paged out.

Specifying a Message Paging Directory

If a paging directory is not specified, then paged-out message bodies are written to the default \tmp directory inside the servername subdirectory of a domain's root directory. For example, if no directory name is specified for the default paging directory, it defaults to:

ORACLE_HOME\user_projects\domains\domainname\servers\servername\tmp

where domainname is the root directory of your domain, typically c:\Oracle\Middleware\Oracle_Home\user_projects\domains\domainname, which is parallel to the directory in which WebLogic Server program files are stored, typically c:\Oracle\Middleware\Oracle_Home\wlserver.

To configure the Message Paging Directory attribute, see "Configure general JMS server properties" in Oracle WebLogic Server Administration Console Online Help.

Tuning the Message Buffer Size Option

The Message Buffer Size option specifies the amount of memory that will be used to store message bodies in memory before they are paged out to disk. The default value of Message Buffer Size is approximately one-third of the maximum heap size for the JVM, or a maximum of 512 megabytes. The larger this parameter is set, the more memory JMS will consume when many messages are waiting on queues or topics. Once this threshold is crossed, JMS may write message bodies to the directory specified by the Paging Directory option in an effort to reduce memory usage below this threshold.

It is important to remember that this parameter is not a quota. If the number of messages on the server passes the threshold, the server writes the messages to disk and evicts the messages from memory as fast as it can to reduce memory usage, but it will not stop accepting new messages. It is still possible to run out of memory if messages are arriving faster than they can be paged out. Users with high messaging loads who wish to support the highest possible availability should consider setting a quota, or setting a threshold and enabling flow control to reduce memory usage on the server.

Defining Quota

It is highly recommended to always configure message count quotas. Quotas help prevent large message backlogs from causing out-of-memory errors, and WebLogic JMS does not set quotas by default.

There are many options for setting quotas, but in most cases it is enough to simply set a Messages Maximum quota on each JMS Server rather than using destination level quotas. Keep in mind that each current JMS message consumes JVM memory even when the message has been paged out, because paging pages out only the message bodies but not message headers. A good rule of thumb for queues is to assume that each current JMS message consumes 512 bytes of memory. A good rule of thumb for topics is to assume that each current JMS message consumes 256 bytes of memory plus an additional 256 bytes of memory for each subscriber that hasn't acknowledged the message yet. For example, if there are 3 subscribers on a topic, then a single published message that hasn't been processed by any of the subscribers consumes 256 + 256*3 = 1024 bytes even when the message is paged out. Although message header memory usage is typically significantly less than these rules of thumb indicate, it is a best practice to make conservative estimates on memory utilization.

In prior releases, there were multiple levels of quotas: destinations had their own quotas and would also have to compete for quota within a JMS server. In this release, there is only one level of quota: destinations can have their own private quota or they can compete with other destinations using a shared quota.

In addition, a destination that defines its own quota no longer also shares space in the JMS server's quota. Although JMS servers still allow the direct configuration of message and byte quotas, these options are only used to provide quota for destinations that do not refer to a quota resource.

Quota Resources

A quota is a named configurable JMS module resource. It defines a maximum number of messages and bytes, and is then associated with one or more destinations and is responsible for enforcing the defined maximums. Multiple destinations referring to the same quota share available quota according to the sharing policy for that quota resource.

Quota resources include the following configuration parameters:

Table 12-2 Quota Parameters

Attribute Description

Bytes Maximum and Messages Maximum

The Messages Maximum/Bytes Maximum parameters for a quota resource define the maximum number of messages and/or bytes allowed for that quota resource. No consideration is given to messages that are pending; that is, messages that are in-flight, delayed, or otherwise inhibited from delivery still count against the message and/or bytes quota.

Quota Sharing

The Shared parameter for a quota resource defines whether multiple destinations referring to the same quota resource compete for resources with each other.

Quota Policy

The Policy parameter defines how individual clients compete for quota when no quota is available. It affects the order in which send requests are unblocked when the Send Timeout feature is enabled on the connection factory, as described in Defining a Send Timeout on Connection Factories.

For more information about quota configuration parameters, see QuotaBean in the MBean Reference for Oracle WebLogic Server. For instructions on configuring a quota resource using the WebLogic Server Administration Console, see Create a quota for destinations in the Oracle WebLogic Server Administration Console Online Help.

Destination-Level Quota

Destinations no longer define byte and messages maximums for quota, but can use a quota resource that defines these values, along with quota policies on sharing and competition.

The Quota parameter of a destination defines which quota resource is used to enforce quota for the destination. This value is dynamic, so it can be changed at any time. However, if there are unsatisfied requests for quota when the quota resource is changed, then those requests will fail with a javax.jms.ResourceAllocationException.

Note:

Outstanding requests for quota fail at the time the quota resource is changed. Here, "changed" does not mean changes to the message and byte attributes of the quota resource, but rather a destination switching to a different quota.

JMS Server-Level Quota

In some cases, there will be destinations that do not configure quotas. JMS Server quotas allow JMS servers to limit the resources used by these quota-less destinations. All destinations that do not explicitly set a value for the Quota attribute share the quota of the JMS server where they are deployed. The behavior is exactly the same as if there were a special Quota resource defined for each JMS server with the Shared parameter enabled.

The interfaces for the JMS server quota are unchanged from prior releases. The JMS server quota is entirely controlled using methods on the JMSServerMBean. The quota policy for the JMS server quota is set by the Blocking Send Policy parameter on a JMS server, as explained in Specifying a Blocking Send Policy on JMS Servers. It behaves just like the Policy setting of any other quota.

Blocking Senders During Quota Conditions

Defining a Send Timeout on Connection Factories

The Send Timeout feature provides more control over message send operations by giving message producers the option of waiting a specified length of time until space becomes available on a destination. For example, if a producer makes a request and there is insufficient space, then the producer is blocked until space becomes available or the operation times out. See Controlling the Flow of Messages on JMS Servers and Destinations for another method of flow control.
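
From the producer's point of view, a send that is still blocked when the timeout expires fails with a javax.jms.ResourceAllocationException. A minimal, hedged sketch of handling that outcome:

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageProducer;
import javax.jms.ResourceAllocationException;

public class QuotaAwareSender {
    // Returns true if the message was accepted, false if the destination stayed over quota
    // for the entire configured Send Timeout period.
    public static boolean trySend(MessageProducer producer, Message message) throws JMSException {
        try {
            producer.send(message);
            return true;
        } catch (ResourceAllocationException overQuota) {
            // Back off, retry later, or surface the condition to the caller.
            return false;
        }
    }
}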

To use the WebLogic Server Administration Console to define how long a JMS connection factory blocks message requests when a destination exceeds its maximum quota:

  1. Follow the directions for navigating to the JMS Connection Factory: Configuration: Flow Control page in Configure message flow control in the Oracle WebLogic Server Administration Console Online Help.
  2. In the Send Timeout field, enter the amount of time, in milliseconds, that a sender will block when there is insufficient space on the message destination. Once the specified waiting period ends, one of the following results occurs:
    • If sufficient space becomes available before the timeout period ends, the operation continues.

    • If sufficient space does not become available before the timeout period ends, you receive a resource allocation exception.

      If you choose not to enable the blocking send policy by setting this value to 0, then you will receive a resource allocation exception whenever sufficient space is not available on the destination.

      For more information about the Send Timeout field, see JMS Connection Factory: Configuration: Flow Control in the Oracle WebLogic Server Administration Console Online Help.

  3. Click Save.

Specifying a Blocking Send Policy on JMS Servers

The Blocking Send policies enable you to define the JMS server's blocking behavior on whether to deliver smaller messages before larger ones when multiple message producers are competing for space on a destination that has exceeded its message quota.

To use the WebLogic Server Administration Console to define how a JMS server blocks message requests when its destinations are at maximum quota:

  1. Follow the directions for navigating to the JMS Server: Configuration: Thresholds and Quotas page of the WebLogic Server Administration Console in Configure JMS server thresholds and quota in Oracle WebLogic Server Administration Console Online Help.
  2. From the Blocking Send Policy list box, select one of the following options:
    • FIFO — All send requests for the same destination are queued up one behind the other until space is available. No send request is permitted to complete while another send request is waiting for space ahead of it.

    • Preemptive — A send operation can preempt other blocking send operations if space is available. That is, if there is sufficient space for the current request, then that space is used even if there are previous requests waiting for space.

    • For more information about the Blocking Send Policy field, see JMS Server: Configuration: Thresholds and Quota in the Oracle WebLogic Server Administration Console Online Help.

  3. Click Save.

Subscription Message Limits

In Oracle WebLogic JMS 12.2.1.3.0 and later, you can help prevent overloaded subscriptions from using all the available resources by configuring a message limit for a topic or a template. To configure a message limit, set the MessagesLimitOverride attribute on a destination template, a standalone topic, or a uniform distributed topic.

When a subscription reaches its specified limit and receives a new message, the head message of the subscription is deleted to provide space for the new message. For a default FIFO subscription, the head message is the oldest. Messages are deleted only from subscriptions that have reached their limit. If a message exists on multiple subscriptions and is deleted on one subscription, then the message can still be received by the other subscriptions.

A subscription limit differs from a quota in multiple ways. A topic that has reached its quota disallows new messages until existing messages have been processed or expired; on the other hand, a subscription that has reached its subscription limit allows the new message and makes room for it by deleting current messages. Also, a topic that has reached its quota affects all subscriptions on the topic, as this disallows new messages from being added to any subscription. By contrast, a subscription limit only affects subscriptions that have reached their limits.

Note:

  • Subscription limits are not substitutes for quotas. Oracle always recommends configuring quotas, even when a subscription limit is also configured. For information on quotas, see Blocking Senders During Quota Conditions.

  • Regardless of subscription limits, subscription messages are not deleted if they are participating in a pending transaction, are part of a Unit-of-Work that is still waiting to accumulate all of its messages, or have already been passed to a consumer and are awaiting acknowledgement.

  • If a topic has not reached its quota, and all messages are immune from deletion, then a new message is accepted regardless of whether this causes a subscription to exceed its limit.

To configure a subscription limit, set the MessagesLimitOverride attribute on a destination template, stand-alone topic, or uniform distributed topic. You can see whether a topic’s runtime MBean has a subscription limit configured via its SubscriptionMessagesLimit attribute (“-1” indicates that no limit has been configured). You can monitor the number of messages that have been deleted due to a subscription limit on a durable subscription by checking its SubscriptionLimitDeletedCount attribute.

See JMS Topic: Configuration: Thresholds and Quotas and JMS Templates: Configuration: Thresholds and Quotas in Administration Console Online Help.

Controlling the Flow of Messages on JMS Servers and Destinations

With the Flow Control feature, you can direct a JMS server or destination to slow down message producers when it determines that it is becoming overloaded.

See Compressing Messages.

How Flow Control Works

Specifically, when either a JMS server or one of its destinations exceeds its specified byte or message threshold, it becomes armed and instructs producers to limit their message flow (messages per second).

Producers will limit their production rate based on a set of flow control attributes configured for producers via the JMS connection factory. Starting at a specified flow maximum number of messages, a producer evaluates whether the server/destination is still armed at prescribed intervals (for example, every 10 seconds for 60 seconds). If at each interval, the server/destination is still armed, then the producer continues to move its rate down to its prescribed flow minimum amount.

As producers slow themselves down, the threshold condition gradually corrects itself until the server/destination is unarmed. At this point, a producer is allowed to increase its production rate, but not necessarily to the maximum possible rate. In fact, its message flow continues to be controlled (even though the server/destination is no longer armed) until it reaches its prescribed flow maximum, at which point it is no longer flow controlled.

Configuring Flow Control

Producers receive a set of flow control attributes from their session, which receives the attributes from the connection, and which receives the attributes from the connection factory. These attributes allow the producer to adjust its message flow.

Specifically, the producer receives attributes that limit its flow within a minimum and maximum range. As conditions worsen, the producer moves toward the minimum; as conditions improve, the producer moves toward the maximum. Movement toward the minimum and maximum is defined by two additional attributes that specify the rate of movement toward each. Also, the need for movement toward the minimum or maximum is evaluated at a configured interval.

Flow Control options are described in the following table:

Table 12-3 Flow Control Parameters

Attribute Description

Flow Control Enabled

Determines whether a producer can be flow controlled by the JMS server.

Flow Maximum

The maximum number of messages per second for a producer that is experiencing a threshold condition.

If a producer is not currently limiting its flow when a threshold condition is reached, the initial flow limit for that producer is set to Flow Maximum. If a producer is already limiting its flow when a threshold condition is reached (the flow limit is less than Flow Maximum), then the producer will continue at its current flow limit until the next time the flow is evaluated.

Once a threshold condition has subsided, the producer is not permitted to ignore its flow limit. If its flow limit is less than the Flow Maximum, then the producer must gradually increase its flow to the Flow Maximum each time the flow is evaluated. When the producer finally reaches the Flow Maximum, it can then ignore its flow limit and send without limiting its flow.

Flow Minimum

The minimum number of messages per second for a producer that is experiencing a threshold condition. This is the lower boundary of a producer's flow limit. That is, WebLogic JMS will not further slow down a producer whose message flow limit is at its Flow Minimum.

Flow Interval

An adjustment period of time, defined in seconds, when a producer adjusts its flow from the Flow Maximum number of messages to the Flow Minimum amount, or vice versa.

Flow Steps

The number of steps used when a producer is adjusting its flow from the Flow Minimum amount of messages to the Flow Maximum amount, or vice versa. Specifically, the Flow Interval adjustment period is divided into the number of Flow Steps (for example, 60 seconds divided by 6 steps is 10 seconds per step).

Also, the movement (that is, the rate of adjustment) is calculated by dividing the difference between the Flow Maximum and the Flow Minimum into steps. At each Flow Step, the flow is adjusted upward or downward, as necessary, based on the current conditions, as follows:

The downward movement (the decay) is geometric over the specified period of time (Flow Interval) and according to the specified number of Flow Steps. (For example, 100, 50, 25, 12.5).

The movement upward is linear. The difference is simply divided by the number of Flow Steps.
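
The following standalone sketch only illustrates the arithmetic described above (geometric decay while armed, linear recovery while unarmed); it is not the server's actual implementation. The example values (Flow Maximum 100, Flow Minimum 12.5, Flow Interval 60 seconds, 3 Flow Steps) are chosen so the downward sequence matches the 100, 50, 25, 12.5 example.

public class FlowControlSteps {
    // Illustrative arithmetic only, not the WebLogic JMS implementation.
    public static void main(String[] args) {
        double flowMax = 100.0, flowMin = 12.5;
        int flowSteps = 3, flowIntervalSeconds = 60;
        int secondsPerStep = flowIntervalSeconds / flowSteps; // 60 / 3 = 20 seconds per step

        // While armed: geometric decay from Flow Maximum down toward Flow Minimum.
        double ratio = Math.pow(flowMin / flowMax, 1.0 / flowSteps); // 0.5 for these values
        double limit = flowMax;
        for (int step = 1; step <= flowSteps; step++) {
            limit *= ratio;
            System.out.printf("armed,   t=%3ds: limit ~%.1f msg/s%n", step * secondsPerStep, limit);
        }

        // While unarmed: linear recovery from Flow Minimum back up to Flow Maximum.
        double increment = (flowMax - flowMin) / flowSteps; // difference divided by the number of steps
        limit = flowMin;
        for (int step = 1; step <= flowSteps; step++) {
            limit += increment;
            System.out.printf("unarmed, t=%3ds: limit ~%.1f msg/s%n", step * secondsPerStep, limit);
        }
    }
}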

For more information about the flow control fields, and the valid and default values for them, see JMS Connection Factory: Configuration: Flow Control in the Oracle WebLogic Server Administration Console Online Help.

Flow Control Thresholds

The attributes used for configuring bytes/messages thresholds are defined as part of the JMS server and/or its destination. Table 12-4 defines how the upper and lower thresholds start and stop flow control on a JMS server and/or JMS destination.

Table 12-4 Flow Control Threshold Parameters

Attribute Description

Bytes/Messages Threshold High

When the number of bytes/messages exceeds this threshold, the JMS server/destination becomes armed and instructs producers to limit their message flow.

Bytes/Messages Threshold Low

When the number of bytes/messages falls below this threshold, the JMS server/destination becomes unarmed and instructs producers to begin increasing their message flow.

Flow control is still in effect for producers that are below their message flow maximum. Producers can move their rate upward until they reach their flow maximum, at which point they are no longer flow controlled.

For detailed information about other JMS server and destination threshold and quota fields, and the valid and default values for them, see the following pages in the Oracle WebLogic Server Administration Console Online Help:

Handling Expired Messages

Active message expiration ensures that expired messages are cleaned up immediately. Expired message auditing gives you the option of tracking expired messages, either by logging when a message expires or by redirecting expired messages to a defined error destination.

The following sections describe two message expiration features, the message Expiration Policy and Active Message Expiration, which provide more control over how the system searches for expired messages and how it handles them when they are encountered.

Defining a Message Expiration Policy

Use the message Expiration Policy feature to define an alternate action to take when messages expire. Using the Expiration Policy attribute on the Destinations node, an expiration policy can be set on a per destination basis. The Expiration Policy attribute defines the action that a destination should take when an expired message is encountered: discard the message, discard the message and log its removal, or redirect the message to an error destination.

Also, if you use JMS templates to configure multiple destinations, you can use the Expiration Policy field to quickly configure an expiration policy on all your destinations. To override a template's expiration policy for specific destinations, you can modify the expiration policy on any destination.

For instructions on configuring the Expiration Policy, click one of the following links:

Configuring an Expiration Policy on Topics

Follow these directions if you are configuring an expiration policy on topics without using a JMS template. Expiration policies that are set on specific topics will override the settings defined on a JMS template.

  1. Follow the directions for navigating to the JMS Topic: Configuration: Delivery Failure page in Configure topic message delivery failure options in the Oracle WebLogic Server Administration Console Online Help.
  2. From the Expiration Policy list box, select an expiration policy option.
    • Discard — Expired messages are removed from the system. The removal is not logged and the message is not redirected to another location.

    • Log — Removes expired messages and writes an entry to the server log file indicating that the messages were removed from the system. You define the actual information that will be logged in the Expiration Logging Policy field in the next step.

    • Redirect — Moves expired messages from their current location into the Error Destination defined for the topic.

      For more information about the Expiration Policy options for a topic, see JMS Topic: Configuration: Delivery Failure in the Oracle WebLogic Server Administration Console Online Help.

  3. If you selected the Log expiration policy in the previous step, use the Expiration Logging Policy field to define what information about the message is logged.

    For more information about valid Expiration Logging Policy values, see Defining an Expiration Logging Policy.

  4. Click Save.

Configuring an Expiration Policy on Queues

Follow these directions if you are configuring an expiration policy on queues without using a JMS template. Expiration policies that are set on specific queues will override the settings defined on a JMS template.

  1. Follow the directions for navigating to the JMS Queue: Configuration: Delivery Failure page in Configure queue message delivery failure options in the Oracle WebLogic Server Administration Console Online Help.
  2. From the Expiration Policy list box, select an expiration policy option.
    • Discard — Expired messages are removed from the system. The removal is not logged and the message is not redirected to another location.

    • Log — Removes expired messages from the queue and writes an entry to the server log file indicating that the messages were removed from the system. You define the actual information that will be logged in the Expiration Logging Policy field described in the next step.

    • Redirect — Moves expired messages from the queue and into the Error Destination defined for the queue.

    • For more information about the Expiration Policy options for a queue, see JMS Queue: Configuration: Delivery Failure in the Oracle WebLogic Server Administration Console Online Help.

  3. If you selected the Log expiration policy in the previous step, use the Expiration Logging Policy field to define what information about the message is logged.

    For more information about valid Expiration Logging Policy values, see Defining an Expiration Logging Policy.

  4. Click Save.

Configuring an Expiration Policy on Templates

Since JMS templates provide an efficient way to define multiple destinations (topics or queues) with similar attribute settings, you can configure a message expiration policy on an existing template (or templates) for your destinations.

  1. Follow the directions for navigating to the JMS Template: Configuration: Delivery Failure page in Configure JMS template message delivery failure options in the Oracle WebLogic Server Administration Console Online Help.
  2. In the Expiration Policy list box, select an expiration policy option.
    • Discard — Expired messages are removed from the messaging system. The removal is not logged and the message is not redirected to another location.

    • Log — Removes expired messages and writes an entry to the server log file indicating that the messages were removed from the system. The actual information that is logged is defined by the Expiration Logging Policy field described in the next step.

    • Redirect — Moves expired messages from their current location into the Error Destination defined for the destination.

    • For more information about the Expiration Policy options for a template, see JMS Template: Configuration: Delivery Failure in the Oracle WebLogic Server Administration Console Online Help.

  3. If you selected the Log expiration policy in the previous step, use the Expiration Logging Policy field to define what information about the message is logged.

    For more information about valid Expiration Logging Policy values, see Defining an Expiration Logging Policy.

  4. Click Save.

Defining an Expiration Logging Policy

The following section provides information on the Expiration Logging Policy.

The Expiration Logging Policy parameter has been deprecated in this release of WebLogic Server. In its place, Oracle recommends using the Message Life Cycle Logging feature, which provides a more comprehensive view of the basic events that JMS messages traverse once they are accepted by a JMS server, including detailed message expiration data. For more information about message life cycle logging options, see Message Life Cycle Logging in Administering JMS Resources for Oracle WebLogic Server.

For example, you could specify one of the following values:

  • JMSPriority, Name, Address, City, State, Zip

  • %header%, Name, Address, City, State, Zip

  • JMSCorrelationID, %properties%

The JMSMessageID field is always logged and cannot be turned off. Therefore, if the Expiration Logging Policy is not defined (that is, none) or is defined as an empty string, then the output to the log file contains only the JMSMessageID of the message.

Expiration Log Output Format

When an expired message is logged, the text portion of the message (not including timestamps, severity, thread information, security identity, etc.) conforms to the following format:

<ExpiredJMSMessage JMSMessageID='$MESSAGEID' >
 <HeaderFields Field1='Value1' [Field2='Value2'] ... />
 <UserProperties Property1='Value1' [Property2='Value2'] ... />
</ExpiredJMSMessage>

where $MESSAGEID is the exact string returned by Message.getJMSMessageID().

For example:

<ExpiredJMSMessage JMSMessageID='ID:P<851839.1022176920343.0' >
 <HeaderFields JMSPriority='7' JMSRedelivered='false' />
 <UserProperties Make='Honda' Model='Civic' Color='White' Weight='2680' />
</ExpiredJMSMessage>

If no header fields are displayed, the line for header fields is not displayed. If no user properties are displayed, that line is not displayed. If there are no header fields and no properties, the closing </ExpiredJMSMessage> tag is not necessary, as the opening tag can be terminated with a closing bracket (/>).

For example:

<ExpiredJMSMessage JMSMessageID='ID:N<223476.1022177121567.1' />

All values are delimited with double quotes. All string values are limited to 32 characters in length. Requested fields and/or properties that do not exist are not displayed. Requested fields and/or properties that exist but have no value (a null value) are displayed as null (without single quotes). Requested fields and/or properties that are empty strings are displayed as a pair of single quotes with no space between them.

For example:

<ExpiredJMSMessage JMSMessageID='ID:N<851839.1022176920344.0' >
 <UserProperties First='Any string longer than 32 char ...' Second=null Third='' />
</ExpiredJMSMessage>

Tuning Active Message Expiration

Use the Active Expiration feature to define the timeliness in which expired messages are removed from the destination to which they were sent or published. Messages are not necessarily removed from the system at their expiration time, but they are removed within a user-defined number of seconds. The smaller the window, the closer the message removal is to the actual expiration time.

Configuring a JMS Server to Actively Scan Destinations for Expired Messages

Follow these directions to define how often a JMS server actively scans its destinations for expired messages. The default value is 30 seconds, which means the JMS server waits 30 seconds between scans. A JMX-based scripting alternative is sketched after these steps.

  1. Follow the directions for navigating to the JMS Server: Configuration: General page of the WebLogic Server Administration Console in Configure general JMS server properties in the Oracle WebLogic Server Administration Console Online Help.
  2. In the Scan Expiration Interval field, enter the amount of time, in seconds, that you want the JMS server to pause between its cycles of scanning its destinations for expired messages to process.

    To disable active scanning, enter a value of 0 seconds. Expired messages are passively removed from the system as they are discovered.

    For more information about the Expiration Scan Interval attribute, see JMS Server: Configuration: General in the Oracle WebLogic Server Administration Console Online Help.

  3. Click Save.
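
If you prefer to script this setting instead of using the console, the same attribute can be changed through the WebLogic Server Edit MBean server over JMX. The following is a minimal sketch, assuming an administration server at localhost:7001, placeholder credentials, a JMS server named MyJMSServer, and a WebLogic client jar (such as wlfullclient.jar) on the classpath; it follows the documented edit-session workflow (startEdit, save, activate).

import java.util.Hashtable;
import javax.management.Attribute;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;

public class SetExpirationScanInterval {
    public static void main(String[] args) throws Exception {
        // Connect to the Edit MBean server on the administration server.
        JMXServiceURL url = new JMXServiceURL("t3", "localhost", 7001,
                "/jndi/weblogic.management.mbeanservers.edit");
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.SECURITY_PRINCIPAL, "weblogic");     // placeholder user
        env.put(Context.SECURITY_CREDENTIALS, "welcome1");   // placeholder password
        env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                "weblogic.management.remote");
        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try {
            MBeanServerConnection conn = connector.getMBeanServerConnection();

            // Start an edit session via the ConfigurationManagerMBean.
            ObjectName editService = new ObjectName(
                "com.bea:Name=EditService,Type=weblogic.management.mbeanservers.edit.EditServiceMBean");
            ObjectName cfgMgr = (ObjectName) conn.getAttribute(editService, "ConfigurationManager");
            conn.invoke(cfgMgr, "startEdit",
                    new Object[] { 30000, 60000 }, new String[] { "int", "int" });

            // Scan destinations for expired messages every 10 seconds
            // (0 disables active scanning).
            ObjectName jmsServer = new ObjectName("com.bea:Name=MyJMSServer,Type=JMSServer");
            conn.setAttribute(jmsServer, new Attribute("ExpirationScanInterval", 10));

            // Save and activate the change.
            conn.invoke(cfgMgr, "save", null, null);
            conn.invoke(cfgMgr, "activate",
                    new Object[] { 60000L }, new String[] { "long" });
        } finally {
            connector.close();
        }
    }
}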

A number of design choices impact the performance of JMS applications, among them reliability, scalability, manageability, monitoring, user transactions, message-driven bean support, and integration with an application server. In addition, several WebLogic JMS extensions and features have a direct impact on performance.

For more information on designing your applications for JMS, see Best Practices for Application Design in Developing JMS Applications for Oracle WebLogic Server.

Tuning Applications Using Unit-of-Order

Message Unit-of-Order is a WebLogic Server value-added feature that enables a stand-alone message producer, or a group of producers acting as one, to group messages into a single unit with respect to the processing order (a sub-ordering). This single unit is called a Unit-of-Order (or UOO) and requires that all messages from that unit be processed sequentially in the order they were created.

UOO replaces the following complex design patterns:

  • A dedicated consumer with a unique selector for each sub-ordering

  • A separate destination per sub-ordering, with one consumer per destination

See Using Message Unit-of-Order in Developing JMS Applications for Oracle WebLogic Server.
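
As a producer-side sketch, the WebLogic JMS extension interface weblogic.jms.extensions.WLMessageProducer can assign a unit-of-order name programmatically. The JNDI names and the UOO name Order-42 below are illustrative assumptions; UOO can also be configured administratively on the connection factory or destination.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;
import weblogic.jms.extensions.WLMessageProducer;

public class UnitOfOrderSender {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext(); // assumes WebLogic JNDI properties are set elsewhere
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyCF");       // placeholder
        Queue queue = (Queue) ctx.lookup("jms/MyOrderedQueue");                  // placeholder

        Connection connection = cf.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);

            // Assign all messages from this producer to the same unit-of-order;
            // they are processed sequentially, in the order they were created.
            ((WLMessageProducer) producer).setUnitOfOrder("Order-42");

            for (int i = 0; i < 3; i++) {
                producer.send(session.createTextMessage("step " + i));
            }
        } finally {
            connection.close();
        }
    }
}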

Best Practices

The following best practices apply when using UOO:

  • UOO is ideal for applications that have strict message ordering requirements. It simplifies administration and application design, and in most applications improves performance.

  • Use MDB batching to:

    • Speed-up processing of the messages within a single sub-ordering.

    • Consume multiple messages at a time under the same transaction.

      See Tuning Message-Driven Beans.

  • You can configure a default UOO for the destination. Only one consumer on the destination processes messages for the default UOO at a time.

Using UOO and Distributed Destinations

To ensure strict ordering when using distributed destinations, each different UOO is pinned to a specific physical destination instance. There are two options for automatically determining the correct physical destination for a given UOO:

  • Hashing – Is generally faster and is the default UOO setting. Hashing works by using a hash function on the UOO name to determine the physical destination. It has the following drawbacks:

    • It does not correctly handle administratively adding or deleting physical destinations in a distributed destination.

    • If a UOO hashes to an unavailable destination, the message send fails.

  • Path Service – Is a single-server UOO directory service that maps each UOO to a physical destination. The Path Service is generally slower than hashing if many differently named UOOs are created per second. In this situation, each new UOO name implicitly forces a check of the path service before sending the message. If the number of UOOs created per second is limited, Path Service performance is not an issue, as UOO paths are cached throughout the cluster.

Migrating Old Applications to Use UOO

For releases prior to WebLogic Server 9.0, applications that had strict message ordering requirements were required to do the following:

  • Use a single physical destination with a single consumer

  • Ensure the maximum asynchronous consumer message backlog (the MessagesMaximum parameter on the connection factory) was set to a value of 1.

UOO relaxes these requirements significantly, as it allows for multiple consumers and an asynchronous consumer message backlog of any size. To migrate older applications to take advantage of UOO, simply configure a default UOO name on the physical destination. See Configure connection factory unit-of-order parameters in Oracle WebLogic Server Administration Console Online Help and Ordered Redelivery of Messages in Developing JMS Applications for Oracle WebLogic Server.

Using JMS 2.0 Asynchronous Message Sends

WebLogic Server 12.2.1.0 introduced support for the standard JMS 2.0 asynchronous send method, which provides a flexible and powerful way to send messages asynchronously.

The JMS 2.0 asynchronous send feature allows messages to be sent asynchronously without waiting for a JMS Server to accept them. This feature may yield a substantial performance gain, even a 'multi-x' gain, for applications that are bottlenecked on message send latency, especially for batches of small non-persistent messages.

Asynchronous send calls each get an asynchronous reply from the server indicating the message has been successfully sent with the same degree of confidence as if a synchronous send had been performed. The JMS provider notifies the application by invoking the callback method onCompletion, on an application-specified CompletionListener object. For a given message producer, callbacks to the CompletionListener will be performed, single threaded per session, in the same order as the corresponding calls to the asynchronous send method.

Note:

Oracle recommends using JMS 2.0 asynchronous sends instead of the proprietary WebLogic one-way message sends as described in Using One-Way Message Sends.

The JMS 2.0 asynchronous send has performance similar to that of one-way sends.

The JMS 2.0 asynchronous send:
  • Can handle both non-persistent and persistent messages.
  • Can handle Unit of Order messages.
  • Does not suffer degraded performance when a client's connection host is a different server in the cluster than the one hosting the producer's target destination.
  • Provides best-effort flow control internally (blocking the sender) when the amount of outstanding asynchronously sent data that has not yet received a completion event gets too high, with no special tuning required.

See the JMS 2.0 Javadoc for the send() methods that accept a CompletionListener.

See What's New in JMS 2.0, Part Two—New Messaging Features for example usage.
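
The following is a minimal JMS 2.0 sketch of an asynchronous send with a CompletionListener; the JNDI names are placeholders, and the JMSContext and JMSProducer are deliberately reused across the whole batch of sends, in line with the caching guidance in the note below.

import javax.jms.CompletionListener;
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.JMSProducer;
import javax.jms.Message;
import javax.jms.Queue;
import javax.naming.InitialContext;

public class AsyncSendExample {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext(); // assumes WebLogic JNDI properties are set elsewhere
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyCF"); // placeholder
        Queue queue = (Queue) ctx.lookup("jms/MyQueue");                   // placeholder

        try (JMSContext jmsContext = cf.createContext()) {
            JMSProducer producer = jmsContext.createProducer();

            // The listener is invoked when the server confirms (or rejects) each send.
            producer.setAsync(new CompletionListener() {
                @Override
                public void onCompletion(Message message) {
                    System.out.println("Send confirmed by the JMS server");
                }

                @Override
                public void onException(Message message, Exception exception) {
                    System.err.println("Send failed: " + exception);
                }
            });

            // Reuse the same cached producer for a batch of asynchronous sends;
            // send() returns without waiting for the server's confirmation.
            for (int i = 0; i < 100; i++) {
                producer.send(queue, "message " + i);
            }
        } // closing the JMSContext blocks until outstanding CompletionListeners have run
    }
}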

Note:

  • To get asynchronous send performance gains, it is important to cache or pool message producers between asynchronous send calls. The following calls will block until the CompletionListener callbacks for all outstanding asynchronous sends have been processed.
    • Connection.close()
    • Session.close()
    • MessageProducer.close()
    • Session.commit()
    • Session.rollback()
  • An implementation of the CompletionListener interface must not make calls on its owning session unless no other threads are using the session, because the behavior of multi-threaded JMS session access is undefined and unpredictable (as per the JMS specification).
  • As required by the JMS specification, asynchronous send calls fail within standard Java EE server applications. If it is necessary to bypass this check, a non-standard (WebLogic-proprietary) application can still use asynchronous sends by looking up JMS connection factories or contexts directly, instead of obtaining them through context injection or through a resource reference to a connection factory. Bypassing the check in this way is for advanced users only; it disables both the Java EE restriction checks and the automatic pooling of JMS client objects that is built into server-side WebLogic applications.
  • Asynchronous send calls are not compatible with JTA (XA) transactions, and will fail if a JTA transaction is active when called and the sender's connection was created with a connection factory configured with XA Enabled.

Using One-Way Message Sends

One-way message sends can greatly improve the performance of applications that are bottlenecked by senders, but they do so at the risk of introducing a lower QOS (quality-of-service). By enabling the One-Way Send Mode options, you allow message producers created by a user-defined connection factory to do one-way message sends, when possible.

Note:

Oracle recommends using the JMS 2.0 asynchronous send feature instead of the proprietary WebLogic one-way send feature. The asynchronous send feature was introduced in 12.2.1.0 and has fewer activation restrictions. For example, the JMS 2.0 asynchronous send feature works well in a cluster without requiring additional configuration changes.

Typical message sends from a JMS producer are termed two-way sends because they include both an internal request and an internal response. When a producer application calls send(), the call generates a request that contains the application's message and then waits for a response from the JMS server to confirm its receipt of the message. This call-and-response mechanism regulates the producer, since the producer is forced to wait for the JMS server's response before the application can make another send call. Eliminating the response message eliminates this wait and yields a one-way send. WebLogic Server supports a configurable one-way send option for non-persistent, non-transactional messaging; no application code changes are required to leverage this feature.

When the One-Way Send Mode is active, the associated producers can send messages without internally waiting for a response from the target destination's host JMS server. You can choose to allow queue senders and topic publishers to do one-way sends, or to limit this capability to topic publishers only. You must also specify a One-Way Window Size to determine when a two-way message is required to regulate the producer before it can continue making additional one-way sends.

Configure One-Way Sends On a Connection Factory

You configure one-way message send parameters on a connection factory by using the WebLogic Server Administration Console, as described in Configure connection factory flow control in the Oracle WebLogic Server Administration Console Online Help. You can also use the WebLogic Scripting Tool (WLST) or JMX via the FlowControlParamsBean MBean.

Note:

One-way message sends are disabled if your connection factory is configured with "XA Enabled". This setting disables one-way sends whether or not the sender actually uses transactions.

One-Way Send Support In a Cluster With a Single Destination

To ensure one-way send support in a cluster with a single destination, verify that the connection factory and the JMS server hosting the destination are targeted to the same WebLogic server. The connection factory must not be targeted to any other WebLogic Server instances in the cluster.

One-Way Send Support In a Cluster With Multiple Destinations

To ensure one-way send support in a cluster with multiple destinations that share the same name, special care is required to ensure the WebLogic Server instance that hosts the client connection also hosts the destination. One solution is the following:

  1. Configure the cluster wide RMI load balancing algorithm to "Server Affinity".
  2. Ensure that no two destinations are hosted on the same WebLogic Server instance.
  3. Configure each destination to have the same local-jndi-name.
  4. Configure a connection factory that is targeted to only those WebLogic Server instances that host the destinations.
  5. Ensure sender clients use the JNDI names configured in Steps 3 and 4 to obtain their destination and connection factory from their JNDI context.
  6. Ensure sender clients use URLs limited to only those WebLogic Server instances that host the destinations in Step 3.

This solution disables RMI-level load balancing for clustered RMI objects, which includes EJB homes and JMS connection factories. Effectively, the client will obtain a connection and destination based only on the network address used to establish the JNDI context. Load balancing can be achieved by leveraging network load balancing, which occurs for URLs that include a comma-separated list of WebLogic Server addresses, or for URLs that specify a DNS name that resolves to a round-robin set of IP addresses (as configured by a network administrator).
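
As an illustrative sketch, a sender client following steps 5 and 6 might build its JNDI context with a URL restricted to the servers that host the destinations; the hostnames and JNDI names below are placeholders.

import java.util.Hashtable;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.naming.Context;
import javax.naming.InitialContext;

public class OneWaySenderLookup {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        // Only the servers that host the destinations; network-level load balancing
        // picks one of the listed addresses when the context is created.
        env.put(Context.PROVIDER_URL, "t3://host1:7001,host2:7001");
        InitialContext ctx = new InitialContext(env);

        // JNDI names from the shared local-jndi-name and the restricted connection factory.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/OneWayCF"); // placeholder
        Queue queue = (Queue) ctx.lookup("jms/LocalQueue");                    // placeholder
        // ... create the connection, session, and producer against this host as usual ...
    }
}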

For more information on Server Affinity for clusters, see Load Balancing for EJBs and RMI Objects in Administering Clusters for Oracle WebLogic Server.

When One-Way Sends Are Not Supported

This section defines when one-way sends are not supported. When one-way sends are not supported, the send QOS is automatically upgraded to standard two-way sends.

Different Client and Destination Hosts

One-way sends are supported when the client producer's connection host and the JMS server hosting the target destination are the same WebLogic Server instance; otherwise, the one-way mode setting will be ignored and standard two-way sends will be used instead.

XA Enabled On Client's Host Connection Factory

One-way message sends are disabled if the client's host connection factory is configured with XA Enabled. This setting disables one-way sends whether or not the sender actually uses transactions.

Higher QOS Detected

When the following higher QOS features are detected, then the one-way mode setting will be ignored and standard two-way sends will be used instead:

  • XA

  • Transacted sessions

  • Persistent messaging

  • Unit-of-order

  • Unit-of-work

  • Distributed destinations

Destination Quota Exceeded

When the specified quota is exceeded on the targeted destination, then standard two-way sends will be used until the quota clears.

One-way messages that exceed quota are silently deleted, without immediately throwing exceptions back to the client. The client will eventually get a quota exception if the destination is still over quota at the time the next two-way send occurs. (Even in one-way mode, clients will send a two-way message every One Way Send Window Size number of messages configured on the client's connection factory.)

A workaround that helps avoid silently-deleted messages during quota conditions is to increase the value of the Blocking Send Timeout configured on the connection factory, as described in Compressing Messages. The one-way messages will not be deleted immediately, but instead will optimistically wait on the JMS server for the specified time until the quota condition clears (presumably due to messages getting consumed or by messages expiring). The client sender will not block until it sends a two-way message. For each client, no more than One Way Window Size messages will accumulate on the server waiting for quota conditions to clear.

Change In Server Security Policy

A change in the server-side security policy could prevent one-way message sends without notifying the JMS client of the change in security status.

Change In JMS Server or Destination Status

One-way sends can be disabled when a host JMS server or target destination is administratively undeployed, or when message production is paused on either the JMS server or the target destination using the "Production Pause/Resume" feature. See Production Pause and Production Resume in Administering JMS Resources for Oracle WebLogic Server.

Looking Up Logical Distributed Destination Name

One-way message sends work with distributed destinations provided the client looks up the physical distributed destination members directly rather than using the logical distributed destination's name. See Using Distributed Destinations in Developing JMS Applications for Oracle WebLogic Server.

Hardware Failure

A hardware or network failure will disable one-way sends. In such cases, the JMS producer is notified by an onException callback or by the next two-way message send. (Even in one-way mode, clients will send a two-way message every One Way Send Window Size number of messages configured on the client's connection factory.) The producer will be closed. In the worst-case scenario, all messages sent after the last two-way message before the failure occurred may be lost.

One-Way Send QOS Guidelines

Use the following QOS-related guidelines when using the one-way send mode for typical non-persistent messaging.

  • When used in conjunction with the Blocking Sends feature, one-way sends on a well-running system should achieve a QOS similar to the two-way send mode.

  • One-way send mode for topic publishers falls within the QOS guidelines set by the JMS Specification, but does entail a lower QOS than two-way mode (the WebLogic Server default mode).

  • One-way send mode may not improve performance if JMS consumer applications are a system bottleneck, as described in Asynchronous vs. Synchronous Consumers in Developing JMS Applications for Oracle WebLogic Server.

  • Consider enlarging the JVM's heap size on the client and/or server to account for the increased batch size (the Window) of sends. The potential memory usage is proportional to the size of the configured Window and the number of senders.

  • The sending application will not receive all quota exceptions. One-way messages that exceed quota are silently deleted, without throwing exceptions back to the sending client. See Destination Quota Exceeded for more information and a possible workaround.

  • Configuring one-way sends on a connection factory effectively disables any message flow control parameters configured on the connection factory.

  • By default, the One-way Window Size is set to "1", which effectively disables one-way sends as every one-way message will be upgraded to a two-way send. (Even in one-way mode, clients will send a two-way message every One Way Send Window Size number of messages configured on the client's connection factory.) Therefore, you must set the one-way send window size much higher. It is recommended to try setting the window size to "300" and then adjust it according to your application requirements.

  • The client application will not immediately receive network or server failure exceptions; some messages may be sent but silently deleted until the failure is detected by WebLogic Server and the producer is automatically closed. See Hardware Failure for more information.

Tuning the Messaging Performance Preference Option

The Messaging Performance Preference tuning option on JMS destinations enables you to control how long a destination should wait (if at all) before creating full batches of available messages for delivery to consumers.

Note:

This is an advanced option for fine tuning. It is normally best to explore other tuning options first.

At the minimum value, batching is disabled. Tuning above the default value increases the amount of time a destination is willing to wait before batching available messages. The maximum message count of a full batch is controlled by the JMS connection factory's Messages Maximum per Session setting.

Using the WebLogic Server Administration Console, this advanced option is available on the General Configuration page for both standalone and uniform distributed destinations (or via the DestinationBean API), as well as for JMS templates (or via the TemplateBean API).

Specifically, JMS destinations include internal algorithms that attempt to automatically optimize performance by grouping messages into batches for delivery to consumers. In response to changes in message rate and other factors, these algorithms change batch sizes and delivery times. However, it isn't possible for the algorithms to optimize performance for every messaging environment. The Messaging Performance Preference tuning option enables you to modify how these algorithms react to changes in message rate and other factors so that you can fine-tune the performance of your system.

Messaging Performance Configuration Parameters

The Messaging Performance Preference option includes the following configuration parameters:

Table 12-5 Message Performance Preference Values

Each entry lists the WebLogic Server Administration Console value, the equivalent MBean value in parentheses, and a description.

  • Do Not Batch Messages (0) – Effectively disables message batching. Available messages are promptly delivered to consumers. This is equivalent to setting the value of the connection factory's Messages Maximum per Session field to "1".

  • Batch Messages Without Waiting (25, the default) – Less-than-full batches are immediately delivered with available messages. This is equivalent to the value set on the connection factory's Messages Maximum per Session field.

  • Low Waiting Threshold for Message Batching (50) – Wait briefly before less-than-full batches are delivered with available messages.

  • Medium Waiting Threshold for Message Batching (75) – Possibly wait longer before less-than-full batches are delivered with available messages.

  • High Waiting Threshold for Message Batching (100) – Possibly wait even longer before less-than-full batches are delivered with available messages.

It may take some experimentation to find out which value works best for your system. For example, if you have a queue with many concurrent message consumers, by selecting the WebLogic Server Administration Console's Do Not Batch Messages value (or specifying "0" on the DestinationBean MBean), the queue will make every effort to promptly push messages out to its consumers as soon as they are available. Conversely, if you have a queue with only one message consumer that doesn't require fast response times, by selecting the console's High Waiting Threshold for Message Batching value (or specifying "100" on the DestinationBean MBean), then the queue will strongly attempt to only push messages to that consumer in batches, which will increase the waiting period but may improve the server's overall throughput by reducing the number of sends.

For instructions on configuring Messaging Performance Preference parameters on standalone destinations, uniform distributed destinations, or JMS templates using the WebLogic Server Administration Console, see the Oracle WebLogic Server Administration Console Online Help.

For more information about these parameters, see DestinationBean and TemplateBean in the MBean Reference for Oracle WebLogic Server.

Compatibility With the Asynchronous Message Pipeline

The Messaging Performance Preference option is compatible with asynchronous consumers using the Asynchronous Message Pipeline, and is also compatible with synchronous consumers that use the Prefetch Mode for Synchronous Consumers feature, which simulates the Asynchronous Message Pipeline. However, if the Maximum Messages value is set too low, it may negate the impact of the destination's higher-level performance algorithms (for example, the Low, Medium, and High Waiting Thresholds for Message Batching). For more information on the Asynchronous Message Pipeline, see Receiving Messages in Developing JMS Applications for Oracle WebLogic Server.

Client-side Thread Pools

WebLogic client thread pools are configured differently from WebLogic server thread pools and are not self-tuning. Use the -Dweblogic.ThreadPoolSize=n command-line property to configure the client thread pool size.

With most Java client-side applications, the default client thread pool size of 5 threads is sufficient. If, however, the application has a large number of asynchronous consumers, then it is often beneficial to allocate slightly more threads than the number of asynchronous consumers. This allows more asynchronous consumers to run concurrently.

WebLogic clients have a specific thread pool that is used for handling incoming requests from the server, such as JMS MessageListener invocations. This pool can be configured via the command-line property:

-Dweblogic.ThreadPoolSize=n

where n is the number of threads

You can force a client-side thread dump to verify that this setting is taking effect.
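
As a sketch, a client with many asynchronous consumers could be launched with a larger pool and register its listeners as shown below; the pool size, JNDI names, and consumer count are illustrative assumptions. Each listener's onMessage callback runs on a thread from the client thread pool described above.

// Assumed launch command: java -Dweblogic.ThreadPoolSize=16 -cp wlfullclient.jar:. AsyncConsumerClient
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class AsyncConsumerClient {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext(); // assumes WebLogic JNDI properties are set elsewhere
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyCF"); // placeholder
        Queue queue = (Queue) ctx.lookup("jms/MyQueue");                   // placeholder

        Connection connection = cf.createConnection();
        // Register several asynchronous consumers; their callbacks are dispatched
        // on threads from the pool sized by -Dweblogic.ThreadPoolSize.
        for (int i = 0; i < 12; i++) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(queue);
            consumer.setMessageListener(new MessageListener() {
                @Override
                public void onMessage(Message message) {
                    // process the message
                }
            });
        }
        connection.start();
        // Keep the client running; a real application would use an orderly shutdown instead.
        Thread.sleep(Long.MAX_VALUE);
    }
}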

Best Practices for JMS .NET Client Applications

Review a short list of performance related best practices to use when creating a JMS .NET client application.

  • Always register a connection exception listener using an IConnection if the application needs to take action when an idle connection fails.

  • Have multiple .NET client threads share a single context to ensure that they use a single socket.

  • Cache and reuse frequently accessed JMS resources, such as contexts, connections, sessions, producers, destinations, and connection factories. Creating and closing these resources consumes significant CPU and network bandwidth.

  • Use DNS aliases or comma separated addresses for load balancing JMS .NET clients across multiple JMS .NET client host servers in a cluster.

For more information on best practices and other programming considerations for JMS .NET client applications, see Programming Considerations in Developing JMS .NET Client Applications for Oracle WebLogic Server.

Considerations for Oracle Data Guard Environments

Review the configuration considerations for a WebLogic JMS environment that includes Oracle Data Guard.

For more information on Oracle Data Guard, see http://www.oracle.com/us/products/database/options/active-data-guard/overview/index.html.

Pause Destinations for Planned Down Time

For planned maintenance windows, pause the impacted JMS destinations before initiating the switch from the production database instance to the standby instance. When the standby database has transitioned to production, resume the JMS destinations. See Pause JMS server message operations at runtime in Oracle WebLogic Server Administration Console Online Help.

Migrate JMS Services for Unexpected Outages

For unexpected service outages, implement JMS service migration with the Restart on Failure option. If the amount of time required to switch from the production database to the standby database exceeds the value of the store's IORetryDelaySeconds attribute and the JMS service fails, the JMS service and its associated store are restarted in place. See In-Place Restarting of Failed Migratable Services in Administering Clusters for Oracle WebLogic Server.