Sun Java System Message Queue 3.5 SP1 Administration Guide 

Chapter 9
Analyzing and Tuning a Message Service

This chapter covers a number of topics about how to analyze and tune a Message Queue service to optimize the performance of your messaging applications.


About Performance

The Performance Tuning Process

The performance you get out of a messaging application depends on the interaction between the application and the Message Queue service. Hence, maximizing performance requires the combined efforts of both the application developer and the administrator.

The process of optimizing performance begins with application design and continues through to tuning the message service after the application has been deployed. The performance tuning process includes the following stages:

The process outlined above is often iterative. During deployment of the application, a Message Queue administrator evaluates the suitability of the message server for the application’s general performance requirements. If the benchmark testing meets these requirements, the administrator can tune the system as described in this chapter. However, if benchmark testing does not meet performance requirements, then a redesign of the application might be necessary or the deployment architecture might need to be modified.

Aspects of Performance

In general, performance is a measure of the speed and efficiency with which a message service delivers messages from producer to consumer. However, there are several different aspects of performance that might be important to you, depending on your needs.

Connection load     The number of message producers or message consumers, or the number of concurrent connections, that a system can support.

Message throughput     The number of messages or message bytes that can be pumped through a messaging system per second.

Latency     The time it takes a particular message to be delivered from message producer to message consumer.

Stability     The overall availability of the message service or how gracefully it degrades in cases of heavy load or failure.

Efficiency     The efficiency of message delivery; a measure of message throughput in relation to the computing resources employed.

These different aspects of performance are generally inter-related. If message throughput is high, that means messages are less likely to be backlogged in the message server, and as a result, latency should be low (a single message can be delivered very quickly). However, latency can depend on many factors: the speed of communication links, message server processing speed, and client processing speed, to name a few.

Which of these aspects of performance is most important to you generally depends on the requirements of your particular application.

Benchmarks

Benchmarking is the process of creating a test suite for your messaging application and measuring message throughput or other aspects of performance for that test suite.

For example, you could create a test suite by which some number of producing clients, using some number of connections, sessions, and message producers, send persistent or non-persistent messages of a standard size to some number of queues or topics (all depending on your messaging application design) at some specified rate. Similarly, the test suite includes some number of consuming clients, using some number of connections, sessions, and message consumers (of a particular type) that consume the messages in the test suite’s destinations using a particular acknowledgement mode.

Using your standard test suite you can measure the time it takes between production and consumption of messages or the average message throughput rate, and you can monitor the system to observe connection thread usage, message storage data, message flow data, and other relevant metrics. You can then ramp up the rate of message production, or the number of message producers, or other variables, until performance is negatively impacted. The maximum throughput you can achieve is a benchmark for your message service configuration.
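The throughput arithmetic described above can be sketched as follows. This is a hypothetical helper class, not part of Message Queue; it simply turns the counts and wall-clock times a test suite records into the throughput and latency figures a benchmark reports.

```java
// Hypothetical benchmark arithmetic: derive throughput (messages per second)
// and average latency from counts and timings recorded by a test suite.
public class BenchmarkStats {
    public static double throughputPerSec(long messages, long elapsedMillis) {
        return messages / (elapsedMillis / 1000.0);
    }
    public static double avgLatencyMillis(long totalLatencyMillis, long messages) {
        return (double) totalLatencyMillis / messages;
    }
    public static void main(String[] args) {
        // e.g. 45,000 messages consumed over a 30-second run
        System.out.println(throughputPerSec(45_000, 30_000)); // 1500.0
        System.out.println(avgLatencyMillis(90_000, 45_000)); // 2.0
    }
}
```

Ramping up producers or message rates while recomputing these figures shows where throughput plateaus; that plateau is the benchmark.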

Using this benchmark, you can modify some of the characteristics of your test suite. By carefully controlling all the factors that might have an impact on performance (see "Application Design Factors that Impact Performance"), you can note how changing some of these factors affects the benchmark. For example, you can increase the number of connections or the size of messages five-fold or ten-fold, and note the impact on performance.

Conversely, you can keep application-based factors constant and change your broker configuration in some controlled way (for example, change connection properties, thread pool properties, JVM memory limits, limit behaviors, built-in versus plugged-in persistence, and so forth) and note how these changes affect performance.

This benchmarking of your application provides information that can be valuable when you want to increase the performance of a deployed application by tuning your message service. A benchmark allows the effect of a change or a set of changes to be more accurately predicted.

As a general rule, benchmarks should be run in a controlled test environment and for a long enough period of time for your message service to stabilize. (Performance is negatively impacted at startup by the Just-In-Time compilation that turns Java code into machine code.)

Baseline Use Patterns

Once a messaging application is deployed and running, it is important to establish baseline use patterns. You want to know when peak demand occurs and you want to be able to quantify that demand. For example, demand normally fluctuates by number of end-users, activity levels, time of day, or all of these.

To establish baseline use patterns, you need to monitor your message server over an extended period of time, looking at data such as the number of connections, the number of messages stored in the broker (or in particular destinations), message flows into and out of a broker (or particular destinations), the number of active consumers, and so forth. You can also use average and peak values provided in metrics data.

It is important to check these baseline metrics against design expectations. By doing so, you are checking that client code is behaving properly: for example, that connections are not being left open or that consumed messages are not being left unacknowledged. These coding errors consume message server resources and could significantly affect performance.

The baseline use patterns help you determine how to tune your system for optimal performance. For example, if one destination is used significantly more than others, you might want to set higher message memory limits on that destination than on others, or to adjust limit behaviors accordingly. If the number of connections needed is significantly greater than the maximum thread pool size allows, you might want to increase the thread pool size or adopt a shared thread model. If peak message flows are substantially greater than average flows, that might influence the limit behaviors you employ when memory runs low.

In general, the more you know about use patterns, the better you are able to tune your system to those patterns and to plan for future needs.


Factors That Impact Performance

Message latency and message throughput, two of the main performance indicators, generally depend on the time it takes a typical message to complete various steps in the message delivery process. These steps are shown below for the case of a persistent, reliably delivered message. The steps are described following the illustration.

Figure 9-1  Message Delivery Through a Message Queue Service

Diagram showing steps in the message delivery process in case of a persistent, reliably delivered message. Steps are described in text that follows.

  1. The message is delivered from producing client to message server
  2. The message server reads in the message
  3. The message is placed in persistent storage (for reliability)
  4. The message server confirms receipt of the message (for reliability)
  5. The message server determines the routing for the message
  6. The message server writes out the message
  7. The message is delivered from message server to consuming client
  8. The consuming client acknowledges receipt of the message (for reliability)
  9. The message server processes client acknowledgement (for reliability)
  10. The message server confirms that client acknowledgement has been processed

Since these steps are sequential, any one of them can be a potential bottleneck in the delivery of messages from producing clients to consuming clients. Most of these steps depend upon physical characteristics of the messaging system: network bandwidth, computer processing speeds, message server architecture, and so forth. Some, however, also depend on characteristics of the messaging application and the level of reliability it requires.

The following subsections discuss the impact of both application design factors and messaging system factors on performance. While application design and messaging system factors closely interact in the delivery of messages, each category is considered separately.

Application Design Factors that Impact Performance

Application design decisions can have a significant effect on overall messaging performance.

The most important factors affecting performance are those that impact the reliability of message delivery. Among these are the following factors:

  - Delivery mode (persistent versus non-persistent messages)
  - Use of transactions
  - Acknowledgement mode
  - Durable versus non-durable subscriptions

Other application design factors impacting performance are the following:

  - Use of selectors (message filtering)
  - Message size
  - Message body type

The sections that follow describe the impact of each of these factors on messaging performance. As a general rule, there is a trade-off between performance and reliability: factors that increase reliability tend to decrease performance.

The following table shows how the various application design factors generally affect messaging performance. The table shows two scenarios—a high reliability, low performance scenario and a high performance, low reliability scenario—and the choice of application design factors that characterizes each. Between these extremes, there are many choices and trade-offs that affect both reliability and performance.

Table 9-1  Comparison of High Reliability and High Performance Scenarios

Application Design Factor            High Reliability,                         High Performance,
                                     Low Performance Scenario                  Low Reliability Scenario

Delivery mode                        Persistent messages                       Non-persistent messages
Use of transactions                  Transacted sessions                       No transactions
Acknowledgement mode                 AUTO_ACKNOWLEDGE or CLIENT_ACKNOWLEDGE    DUPS_OK_ACKNOWLEDGE
Durable/non-durable subscriptions    Durable subscriptions                     Non-durable subscriptions
Use of selectors                     Message filtering                         No message filtering
Message size                         Small messages                            Large messages
Message body type                    Complex body types                        Simple body types


Note

In the graphs that follow, performance data were generated on a two-CPU, 1002 MHz, Solaris 8 system, using file-based persistence. The performance test first warmed up the Message Queue broker, allowing the Just-In-Time compiler to optimize the system and the persistent database to be primed.

Once the broker was warmed up, a single producer and single consumer were created and messages were produced for 30 seconds. The time required for the consumer to receive all produced messages was recorded, and a throughput rate (messages per second) was calculated. This scenario was repeated for different combinations of the application design factors shown in Table 9-1.


Delivery Mode (Persistent/Non-persistent Messages)

As described in "Reliable Messaging", persistent messages guarantee message delivery in case of message server failure. The broker stores the message in a persistent store until all intended consumers acknowledge they have consumed the message.

Broker processing of persistent messages is slower than for non-persistent messages for the following reasons:

  - The broker must reliably store each persistent message in its persistent store, adding disk I/O to the processing of the message.
  - The broker must confirm receipt of each persistent message to the producing client before delivery can be considered complete.
The differences in performance between the persistent and non-persistent modes can be significant. Figure 9-2 compares throughput for persistent and non-persistent messages in two reliable delivery cases: 10k-sized messages delivered both to a queue and to a topic with durable subscriptions. Both cases use the AUTO_ACKNOWLEDGE acknowledgement mode.

Figure 9-2  Performance Impact of Delivery Modes

Chart comparing message throughput for persistent and non-persistent messages for both a queue destination and a topic destination with durable subscriptions. Effect is described in text.

Use of Transactions

A transaction is a guarantee that all messages produced in a transacted session and all messages consumed in a transacted session will be either processed or not processed (rolled back) as a unit.

Message Queue supports both local and distributed transactions (see "Local Transactions" and "Distributed Transactions", respectively, for more information).

A message produced or acknowledged in a transacted session is slower than in a non-transacted session for the following reasons:

  - Messages produced, and acknowledgements of messages consumed, in a transacted session are not processed immediately; they are held and then processed together as a unit when the transaction commits (or discarded if it rolls back).
  - The broker must track additional state for each open transaction.
Acknowledgement Mode

One mechanism for ensuring the reliability of JMS message delivery is for a client to acknowledge consumption of messages delivered to it by the Message Queue message server (see "Reliable Delivery: Acknowledgements and Transactions").

If a session is closed without the client acknowledging the message or if the message server fails before the acknowledgment is processed, the broker redelivers that message, setting a JMSRedelivered flag.

For a non-transacted session, the client can choose one of three acknowledgement modes, each of which has its own performance characteristics:

  - AUTO_ACKNOWLEDGE: the session automatically acknowledges each message after the client has consumed it.
  - CLIENT_ACKNOWLEDGE: the client explicitly acknowledges consumed messages by calling a message's acknowledge() method, which acknowledges all messages consumed by the session up to that point.
  - DUPS_OK_ACKNOWLEDGE: the session acknowledges messages lazily, which lowers overhead but can result in duplicate delivery if the broker fails.
(Using CLIENT_ACKNOWLEDGE mode is similar to using transactions, except there is no guarantee that all acknowledgments will be processed together if a provider fails during processing.)

Acknowledgement mode impacts performance because acknowledgements require additional control traffic between client and broker, and additional broker processing to record and confirm them; DUPS_OK_ACKNOWLEDGE reduces this overhead by acknowledging messages lazily, at the risk of duplicate delivery.

Durable vs. Non-durable Subscriptions

Subscribers to a topic destination fall into two categories, those with durable and those with non-durable subscriptions, as described in "Publish/Subscribe (Topic destinations)". A durable subscription remains in effect even when the subscriber is inactive: the broker retains messages for the subscriber until they are consumed or expire. A non-durable subscription lasts only as long as the subscriber is active.

Durable subscriptions provide increased reliability at the cost of slower throughput, because persistent messages delivered to durable subscriptions must be stored persistently by the broker so that they survive broker failure and subscriber inactivity.
Figure 9-3 compares throughput for topic destinations with durable and non-durable subscriptions in two cases: persistent and non-persistent 10k-sized messages. Both cases use AUTO_ACKNOWLEDGE acknowledgement mode.

You can see from Figure 9-3 that the performance impact of durable subscriptions appears only in the case of persistent messages. This is because persistent messages are stored persistently only for durable subscriptions, as explained above.

Figure 9-3  Performance Impact of Subscription Types

Chart comparing message throughput for topic destinations with durable and non-durable subscriptions. Effect is described in text.

Use of Selectors (Message Filtering)

Application developers often want to target sets of messages to particular consumers. They can do so either by targeting each set of messages to a unique destination or by using a single destination and registering one or more selectors for each consumer.

A selector is a string specifying that only messages whose property values (see "JMS Message Structure") match the selector expression are delivered to a particular consumer. For example, the selector NumberOfOrders > 1 delivers only messages with a NumberOfOrders property value of 2 or more.

Registering consumers with selectors lowers performance (as compared to using multiple destinations) because additional processing is required to handle each message. When a selector is used, it must be parsed so that it can be matched against future messages. Additionally, the message properties of each message must be retrieved and compared against the selector as each message is routed. However, using selectors provides more flexibility in a messaging application.
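The two costs described above, a one-time parse plus a per-message property lookup and comparison, can be illustrated with a small sketch. This is hypothetical illustration code, not broker internals; the helper name and the hard-coded NumberOfOrders property mirror the example selector above.

```java
import java.util.Map;
import java.util.function.Predicate;

// Hypothetical illustration of selector cost: the selector string is parsed
// once into a predicate; each routed message's properties must then be
// fetched and tested against it.
public class SelectorSketch {
    // Stands in for the broker's one-time parse of "NumberOfOrders > n".
    public static Predicate<Map<String, Object>> parseNumberOfOrdersGreaterThan(int n) {
        return props -> {
            Object v = props.get("NumberOfOrders");       // per-message property lookup
            return v instanceof Integer && (Integer) v > n; // per-message comparison
        };
    }
    public static void main(String[] args) {
        Predicate<Map<String, Object>> selector = parseNumberOfOrdersGreaterThan(1);
        System.out.println(selector.test(Map.<String, Object>of("NumberOfOrders", 2))); // true
        System.out.println(selector.test(Map.<String, Object>of("NumberOfOrders", 1))); // false
    }
}
```

The per-message work in the lambda is what makes selectors slower than routing by destination, where no property inspection is needed.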

Message Size

Message size affects performance because more data must be passed from producing client to broker and from broker to consuming client, and because for persistent messages a larger message must be stored.

However, by batching smaller messages into a single message, you can minimize the routing and processing of individual messages, providing an overall performance gain. The trade-off is that information about the state of individual messages is lost.
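A minimal sketch of this batching trade-off, assuming a simple delimiter-based packing scheme (hypothetical, not a Message Queue API):

```java
import java.util.List;

// Hypothetical batching sketch: many small payloads are packed into one
// delimited message body, so the broker routes one message instead of many.
// Per-message state (individual acknowledgement, redelivery flags) is lost
// for the batched records.
public class BatchSketch {
    public static String batch(List<String> records) {
        return String.join("\u0001", records);   // pack with a delimiter
    }
    public static String[] unbatch(String body) {
        return body.split("\u0001");             // consumer splits it back out
    }
    public static void main(String[] args) {
        String body = batch(List.of("order-1", "order-2", "order-3"));
        System.out.println(unbatch(body).length); // 3
    }
}
```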

Figure 9-4 compares throughput in kilobytes per second for 1k, 10k, and 100k-sized messages in two cases: persistent and non-persistent messages. In all cases, messages are sent to a queue destination using the AUTO_ACKNOWLEDGE acknowledgement mode.

Figure 9-4 shows that in both cases there is less overhead in delivering larger messages compared to smaller messages. You can also see that the almost 50% performance gain of non-persistent messages over persistent messages shown for 1k and 10k-sized messages is not maintained for 100k-sized messages, probably because network bandwidth has become the bottleneck in message throughput for that case.

Figure 9-4  Performance Impact of a Message Size

Chart comparing throughput for 1k, 10k, and 100k-sized messages for both persistent and non-persistent messages. Effect is described in text.

Message Body Type

JMS supports five message body types, shown below roughly in order of complexity:

  - BytesMessage: a stream of uninterpreted bytes
  - TextMessage: a java.lang.String
  - StreamMessage: a stream of Java primitive values
  - MapMessage: a set of name/value pairs
  - ObjectMessage: a serializable Java object
While, in general, the message type is dictated by the needs of an application, the more complicated types (MapMessage and ObjectMessage) carry a performance cost—the expense of serializing and deserializing the data. The performance cost depends on how simple or how complicated the data is.
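The serialization cost can be made concrete with plain Java serialization, standing in here for the marshalling a JMS provider performs (this is an illustration, not Message Queue code): a simple text body serializes to far fewer bytes than an equivalent map of key/value pairs.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;

// Illustrative sketch: compare the serialized size of a simple String body
// with an equivalent map of key/value pairs.
public class BodyCostSketch {
    public static int serializedSize(Serializable o) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(o);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return bytes.size();
    }
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();
        for (int i = 0; i < 10; i++) map.put("key" + i, i);
        int textSize = serializedSize("key0=0,key1=1,key2=2,key3=3,key4=4");
        int mapSize  = serializedSize(map);
        System.out.println(textSize < mapSize); // true: the map carries class metadata
    }
}
```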

Message Service Factors that Impact Performance

The performance of a messaging application is affected not only by application design, but also by the message service performing the routing and delivery of messages.

The following sections discuss various message service factors that can affect performance. Understanding the impact of these factors is key to sizing a message service and diagnosing and resolving performance bottlenecks that might arise in a deployed application.

The most important factors affecting performance in a Message Queue service are the following:

  - Hardware
  - Operating system
  - Java Virtual Machine (JVM)
  - Connections
  - Message server architecture
  - Broker limits and behaviors
  - Data store performance
  - Client runtime configuration
The sections below describe the impact of each of these factors on messaging performance.

Hardware

For both the Message Queue message server and client applications, CPU processing speed and available memory are primary determinants of message service performance. Many software limitations can be eliminated by increasing processing power, while adding memory can increase both processing speed and capacity. However, it is generally expensive to overcome bottlenecks simply by upgrading your hardware.

Operating System

Because different operating systems have different efficiencies, performance can vary even on the same hardware platform. For example, the thread model employed by the operating system can have an important impact on the number of concurrent connections a message server can support. All hardware being equal, Solaris is generally faster than Linux, which is generally faster than Windows.

Java Virtual Machine (JVM)

The message server is a Java process that runs in and is supported by the host JVM. As a result, JVM processing is an important determinant of how fast and efficiently a message server can route and deliver messages.

In particular, the JVM’s management of memory resources can be critical. Sufficient memory has to be allocated to the JVM to accommodate increasing memory loads. In addition, the JVM periodically reclaims unused memory, and this memory reclamation can delay message processing. The larger the JVM memory heap, the longer the potential delay that might be experienced during memory reclamation.
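The heap figures involved can be observed from Java code with the standard java.lang.Runtime API. This sketch simply prints the numbers for the JVM it runs in; the broker's own heap and thread figures are reported by the imqcmd metrics option -m cxn described later in this chapter.

```java
// Print the current JVM's heap figures using the standard Runtime API.
public class HeapWatch {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max   = rt.maxMemory();    // ceiling set by -Xmx
        long total = rt.totalMemory();  // heap currently reserved
        long free  = rt.freeMemory();   // unused portion of the reserved heap
        System.out.println("used=" + (total - free) + " total=" + total + " max=" + max);
    }
}
```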

Connections

The number and speed of connections between client and broker can affect the number of messages that a message server can handle as well as the speed of message delivery.

Message Server Connection Limits

All access to the message server is by way of connections. Any limit on the number of concurrent connections can affect the number of producing or consuming clients that can concurrently use the message server.

The number of connections to a message server is generally limited by the number of threads available. Message Queue uses a thread pool manager, which you can configure to support either a dedicated thread model or a shared thread model (see "Thread Pool Manager"). The dedicated thread model is very fast because each connection has its own threads; however, the number of connections is limited by the number of threads available (one input thread and one output thread per connection). The shared thread model places no limit on the number of connections, but sharing threads among connections imposes significant overhead and throughput delays, especially when those connections are busy.
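The dedicated-thread arithmetic above can be sketched in a couple of lines: with one input and one output thread per connection, a thread pool of a given maximum size supports at most half that many concurrent connections.

```java
// Sketch of the dedicated thread model's capacity arithmetic:
// two threads (input + output) are consumed per connection.
public class DedicatedThreadModel {
    public static int maxConnections(int maxThreads) {
        return maxThreads / 2;
    }
    public static void main(String[] args) {
        System.out.println(maxConnections(1000)); // 500
    }
}
```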

Transport Protocols

Message Queue software allows clients to communicate with the message server using various low-level transport protocols. Message Queue supports the connection services (and corresponding protocols) shown in "Connection Services Support". The choice of protocol is based on application requirements (such as encryption or access through a firewall), but the choice impacts overall performance.

Figure 9-5  Transport Protocol Speeds

Diagram showing relative speeds of different transport protocols. Effect is explained in text.

Figure 9-5 reflects the performance characteristics of the various protocol technologies:

  - TCP provides the fastest, lowest-overhead communication between client and broker.
  - SSL is slower than TCP because of the overhead of encrypting and decrypting data.
  - HTTP is slower than either TCP or SSL because messages must be tunneled through a web server.
  - HTTPS is slowest of all, because it combines HTTP tunneling with SSL encryption.

Message Server Architecture

A Message Queue message server can be implemented as a single broker or as multiple interconnected broker instances—a broker cluster.

As the number of clients connected to a broker increases, and as the number of messages being delivered increases, a broker will eventually exceed resource limitations such as file descriptor, thread, and memory limits. One way to accommodate increasing loads is to add more broker instances to a Message Queue message server, distributing client connections and message routing and delivery across multiple brokers.

In general, this scaling works best if clients are evenly distributed across the cluster, especially message-producing clients. Because of the overhead involved in delivering messages between the brokers in a cluster, clusters with a limited number of connections or a limited message delivery rate might exhibit lower performance than a single broker.

You might also use a broker cluster to optimize network bandwidth. For example, you might want to use slower, long distance network links between a set of remote brokers within a cluster, while using higher speed links for connecting clients to their respective broker instances.

For more information on clusters, see "Multi-Broker Clusters (Enterprise Edition)" and "Working With Clusters (Enterprise Edition)".

Broker Limits and Behaviors

The message throughput that a message server might be required to handle is a function of the use patterns of the messaging applications it supports. The message server, however, has limited resources: memory, CPU cycles, and so forth. As a result, a message server can become overwhelmed to the point where it becomes unresponsive or unstable.

The Message Queue message server has mechanisms built in for managing memory resources and preventing the broker from running out of memory. These mechanisms include configurable limits on the number of messages or message bytes that can be held by a broker or its individual destinations, and a set of behaviors that can be instituted when destination limits are reached (see "Managing Memory Resources and Message Flow").

With careful monitoring and tuning, these configurable mechanisms can be used to balance the inflow and outflow of messages so that system overload does not occur. While these mechanisms introduce some overhead and can limit message throughput, they nevertheless maintain operational integrity.
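The idea of a destination limit plus a configured behavior can be sketched as follows. This is illustration code, not broker internals, and the two behavior names used here (reject the newest message, or remove the oldest to make room) are illustrative labels, not Message Queue configuration values.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of destination limit behaviors: when a message-count
// limit is reached, either reject the incoming message or discard the
// oldest held message to make room for it.
public class LimitBehaviorSketch {
    public enum Behavior { REJECT_NEWEST, REMOVE_OLDEST }

    private final Deque<String> messages = new ArrayDeque<>();
    private final int maxNumMsgs;
    private final Behavior behavior;

    public LimitBehaviorSketch(int maxNumMsgs, Behavior behavior) {
        this.maxNumMsgs = maxNumMsgs;
        this.behavior = behavior;
    }

    /** Returns true if the message was accepted. */
    public boolean offer(String msg) {
        if (messages.size() < maxNumMsgs) {
            messages.addLast(msg);
            return true;
        }
        if (behavior == Behavior.REJECT_NEWEST) {
            return false;                 // drop the incoming message
        }
        messages.removeFirst();           // discard the oldest, keep the newest
        messages.addLast(msg);
        return true;
    }

    public int size() { return messages.size(); }
}
```

Either behavior caps the broker's memory use; which one is appropriate depends on whether old or new messages matter more to the application.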

Data Store Performance

Message Queue supports both built-in and plugged-in persistence (see "Persistence Manager"). Built-in persistence is a file-based data store. Plugged-in persistence uses a Java Database Connectivity (JDBC™) interface and requires a JDBC-compliant data store.

The built-in persistence is significantly faster than plugged-in persistence; however, a JDBC-compliant database system might provide the redundancy, security, and administrative features needed for an application.

In the case of built-in persistence, you can maximize reliability by specifying that persistence operations synchronize the in-memory state with the data store. This helps eliminate data loss due to system crashes, but at the expense of performance.

Client Runtime Configuration

The Message Queue client runtime provides client applications with an interface to the Message Queue message service. It supports all the operations needed for clients to send messages to destinations and to receive messages from such destinations. The client runtime is configurable (by setting connection factory attribute values), allowing you to set properties and behaviors that can generally improve performance and message throughput.

For example, the Message Queue client runtime supports configurable behaviors such as metering the flow of messages over a connection and limiting the number of messages buffered in the client runtime, per connection and per consumer.

For more information on these behaviors and the attributes used to configure them, see "Client Runtime Message Flow Adjustments".


Monitoring a Message Server

A Message Queue server can be configured to provide metrics information that you can use to monitor its performance. This section describes the various tools you can use to monitor a message server and the metrics data that can be obtained using these tools.

For information on how to use metrics data to troubleshoot performance problems or to analyze and tune message server performance, see "Troubleshooting Performance Problems".

Monitoring Tools

You can obtain metrics information using the following tools:

The following sections describe how to use each of these tools to obtain metrics information. For a comparison of the different tools, see "Choosing the Right Monitoring Tool".

Message Queue Command Utility (imqcmd)

The Command utility (imqcmd) is Message Queue’s basic command line administration tool. It allows you to manage the broker and its connection services, as well as application-specific resources such as physical destinations, durable subscriptions, and transactions. The imqcmd command is documented in Chapter 6, "Broker and Application Management."

One of the capabilities of the imqcmd command is its ability to obtain metrics information for the broker as a whole, for individual connection services, and for individual destinations. To obtain metrics data, you generally use the metrics subcommand of imqcmd. Metrics data is written to the console at the interval, and for the number of samples, that you specify.

You can also use the query subcommand (see "imqcmd query") to obtain a more limited subset of metrics data.

imqcmd metrics

The syntax and options of imqcmd metrics are shown in Table 9-2 and Table 9-3, respectively.

Table 9-2  imqcmd metrics Subcommand Syntax

Subcommand Syntax                    Metrics Data Provided

metrics bkr                          Displays broker metrics for the default
    [-b hostName:port]               broker or a broker at the specified host
    [-m metricType]                  and port.
    [-int interval]
    [-msp numSamples]
    [-u userName]
    [-p password]

metrics svc -n serviceName           Displays metrics for the specified service
    [-b hostName:port]               on the default broker or on a broker at
    [-m metricType]                  the specified host and port.
    [-int interval]
    [-msp numSamples]
    [-u userName]
    [-p password]

metrics dst -t destType              Displays metrics information for the
    -n destName                      destination of the specified type and
    [-b hostName:port]               name.
    [-m metricType]
    [-int interval]
    [-msp numSamples]
    [-u userName]
    [-p password]

Table 9-3  imqcmd metrics Subcommand Options

Subcommand Options

Description

-b hostName:port

Specifies the hostname and port of the broker for which metrics data is reported. The default is localhost:7676.

-int interval

Specifies the interval (in seconds) at which to display the metrics. The default is 5 seconds.

-m metricType

Specifies the type of metric to display:

ttl      Displays metrics on messages and packets flowing into and out of the broker (default metric type)

rts      Displays metrics on rate of flow of messages and packets into and out of the broker (per second)

cxn      Displays connections, virtual memory heap, and threads (brokers and connection services only)

con      Displays consumer-related metrics (destinations only)

dsk      Displays disk usage metrics (destinations only)

-msp numSamples

Specifies the number of samples displayed in the output. The default is an unlimited number (infinite).

-n destName

Specifies the name of the destination (if any) for which metrics data is reported. There is no default.

-n serviceName

Specifies the connection service (if any) for which metrics data is reported. There is no default.

-t destType

Specifies the type (queue or topic) of the destination (if any) for which metrics data is reported. There is no default.

-u userName

Specifies your (the administrator’s) name. If you omit this value, you will be prompted for it.

-p password

Specifies your (the administrator’s) password. If you omit this value, you will be prompted for it.

Procedure: Using the metrics Subcommand to Display Metrics Data

This section describes the procedure for using the metrics subcommand to report metrics information.

    To Use the metrics Subcommand
  1. Start the broker for which metrics information is desired (see "Starting a Broker").

  2. Issue the appropriate imqcmd metrics subcommand and options, as shown in Table 9-2 and Table 9-3.

Metrics Outputs: imqcmd metrics

This section shows example metrics subcommand outputs for broker-wide, connection service, and destination metrics.

Broker-wide metrics.     To get the rate of message and packet flow into and out of the broker at 10 second intervals, use the metrics bkr subcommand:

imqcmd metrics bkr -m rts -int 10 -u admin -p admin

This command produces output similar to the following (see data descriptions in Table 9-8):

--------------------------------------------------------
Msgs/sec   Msg Bytes/sec   Pkts/sec    Pkt Bytes/sec   
In   Out     In      Out     In   Out     In      Out  
--------------------------------------------------------
0     0      27      56      0     0      38      66   
10    0     7365     56      10    10    7457    1132  
0     0      27      56      0     0      38      73   
0     10     27     7402     10    20    1400    8459  
0     0      27      56      0     0      38      73   

Connection service metrics.     To get cumulative totals for messages and packets handled by the jms connection service, use the metrics svc subcommand:

imqcmd metrics svc -n jms -m ttl -u admin -p admin

This command produces output similar to the following (see data descriptions in Table 9-9):

-------------------------------------------------
  Msgs      Msg Bytes      Pkts      Pkt Bytes     
In   Out    In     Out   In   Out    In     Out  
-------------------------------------------------
164  100  120704  73600  282  383  135967  102127
657  100  483552  73600  775  876  498815  149948

Destination metrics.     To get metrics information about a destination, use the metrics dst subcommand:

imqcmd metrics dst -t q -n XQueue -m ttl -u admin -p admin

This command produces output similar to the following (see data descriptions in Table 9-10):

-----------------------------------------------------------------------------
  Msgs      Msg Bytes         Msg Count         Total Msg Bytes (k)   Largest
In   Out    In     Out    Current  Peak  Avg  Current  Peak     Avg   Msg (k)
-----------------------------------------------------------------------------
200  200  147200  147200     0     200    0      0      143      71        0
300  200  220800  147200    100    200   10     71      143      64        0
300  300  220800  220800     0     200    0      0      143      59        0

To get information about a destination’s consumers, use the following metrics dst subcommand:

imqcmd metrics dst -t q -n SimpleQueue -m con -u admin -p admin

This command produces output similar to the following (see data descriptions in Table 9-10):

------------------------------------------------------------------
   Active Consumers        Backup Consumers         Msg Count
Current  Peak    Avg    Current  Peak    Avg    Current  Peak  Avg
------------------------------------------------------------------
   1       1      0        0       0      0       944    1000  525

imqcmd query

The syntax and options of imqcmd query are shown in Table 9-4 along with a description of the metrics data provided by the command.

Table 9-4  imqcmd query Subcommand Syntax

Subcommand Syntax

Metrics Data Provided

query bkr
    [-b hostName:port]

Information on the current number of messages and message bytes stored in broker memory and persistent store (see "Displaying Broker Information")

or

query svc -n serviceName
    [-b hostName:port]

Information on the current number of allocated threads and number of connections for a specified connection service (see "Displaying Connection Service Information")

or

query dst -t destType
    -n destName
    [-b hostName:port]

Information on the current number of producers, active and backup consumers, and messages and message bytes stored in memory and persistent store for a specified destination (see "Displaying Destination Information")


Note

Because of the limited metrics data provided by imqcmd query, this tool is not represented in the tables presented in the section "Description of Metrics Data."


Message Queue Broker Log Files

The Message Queue logger takes information generated by broker code, a debugger, and a metrics generator and writes that information to a number of output channels: to standard output (the console), to a log file, and, on Solaris™ platforms, to the syslog daemon process. The logger is described in "Logger".

You can specify the type of information gathered by the logger as well as the type written to each of the output channels. In particular, you can specify that you want metrics information written out to a log file.

Procedure: Using Broker Log Files to Report Metrics Data

This section describes the procedure for using broker log files to report metrics information. For general information on configuring the logger, see "Logging".

    To Use Log Files to Report Metrics Information
  1. Configure the broker’s metrics generation capability:
    1. Confirm imq.metrics.enabled=true

       Generation of metrics for logging is turned on by default.

    2. Set the metrics generation interval to a convenient number of seconds:

       imq.metrics.interval=interval

       This value can be set in the config.properties file or using the
       -metrics interval command line option when starting up the broker.

  2. Confirm that the logger gathers metrics information:

     imq.log.level=INFO

     This is the default value. It can be set in the config.properties file or using the -loglevel level command line option when starting up the broker.

  3. Confirm that the logger is set to write metrics information to the log file:

     imq.log.file.output=INFO

     This is the default value. It can be set in the config.properties file.

  4. Start up the broker.
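Taken together, the relevant config.properties entries might look like the following sketch. The property names and defaults come from the steps above; the 20-second interval is an arbitrary example value:

```
# Metrics generation for logging (on by default)
imq.metrics.enabled=true
# Generate metrics every 20 seconds (example value)
imq.metrics.interval=20
# Logger level and log-file channel (both default to INFO)
imq.log.level=INFO
imq.log.file.output=INFO
```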
Metrics Outputs: Log File

The following shows sample broker metrics output to the log file (see the description of metrics data in Table 9-7 and Table 9-8):

[21/Jul/2003:11:21:18 PDT]
Connections: 0    JVM Heap: 8323072 bytes (7226576 free) Threads: 0 (14-1010)
      In: 0 msgs (0bytes) 0 pkts (0 bytes)
     Out: 0 msgs (0bytes) 0 pkts (0 bytes)
 Rate In: 0 msgs/sec (0 bytes/sec) 0 pkts/sec (0 bytes/sec)
Rate Out: 0 msgs/sec (0 bytes/sec) 0 pkts/sec (0 bytes/sec)
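There are no parsing tools for this log format, so extracting values from it means matching the line layout yourself. The following is a hedged sketch only; the regular expression assumes the exact line layout shown in the sample above and would need adjusting for other releases:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: extract message and packet counts from one broker metrics log line,
// e.g. "      In: 0 msgs (0bytes) 0 pkts (0 bytes)". Layout is an assumption
// based on the sample output shown in this chapter.
public class MetricsLogLine {
    private static final Pattern LINE = Pattern.compile(
        "\\s*(In|Out|Rate In|Rate Out):\\s+(\\d+) msgs(?:/sec)?\\s+\\((\\d+)\\s*bytes(?:/sec)?\\)"
        + "\\s+(\\d+) pkts(?:/sec)?\\s+\\((\\d+)\\s*bytes(?:/sec)?\\)");

    // Returns {msgs, msgBytes, pkts, pktBytes}, or null if the line does not match.
    public static long[] parse(String line) {
        Matcher m = LINE.matcher(line);
        if (!m.matches()) {
            return null;
        }
        return new long[] {
            Long.parseLong(m.group(2)), Long.parseLong(m.group(3)),
            Long.parseLong(m.group(4)), Long.parseLong(m.group(5))
        };
    }

    public static void main(String[] args) {
        long[] v = parse("      In: 12 msgs (7365bytes) 10 pkts (7457 bytes)");
        System.out.println(v[0] + " msgs, " + v[1] + " msg bytes");
    }
}
```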

Message-Based Monitoring API

Message Queue provides a metrics monitoring capability by which the broker can write metrics data into JMS messages, which it then sends to one of a number of metrics topic destinations, depending on the type of metrics information contained in the message.

You can access this metrics information by writing a client application that subscribes to the metrics topic destinations, consumes the messages in these destinations, and processes the metrics information contained in the messages. The general scheme is described in "Metrics Message Producer (Enterprise Edition)".

There are five metrics topic destinations, whose names are shown in Table 9-5, along with the type of metrics messages delivered to each destination.

Table 9-5  Metrics Topic Destinations

Topic Name

Type of Metrics Messages

mq.metrics.broker

Broker metrics

mq.metrics.jvm

Java Virtual Machine metrics

mq.metrics.destination_list

List of destinations and their types

mq.metrics.destination.queue.
monitoredDestinationName

Destination metrics for queue of specified name

mq.metrics.destination.topic.
monitoredDestinationName

Destination metrics for topic of specified name

Procedure: Setting Up Message-Based Monitoring

This section describes the procedure for using the message-based monitoring capability to gather metrics information. The procedure includes both client development and administration tasks.

    To Set Up Message-based Monitoring
  1. Write a metrics monitoring client.

     See the Message Queue Java Client Developer’s Guide for instructions on programming clients that subscribe to metrics topic destinations, consume metrics messages, and extract the metrics data from these messages.

  2. Configure the broker’s metrics message producer by setting broker property values in the config.properties file:
    1. Enable metrics message production:

       imq.metrics.topic.enabled=true

       The default value is true.

    2. Set the interval (in seconds) at which metrics messages are generated:

       imq.metrics.topic.interval=interval

       The default is 60 seconds.

    3. Specify whether you want metrics messages to be persistent (that is, whether they will survive a broker failure):

       imq.metrics.topic.persist

       The default is false.

    4. Specify how long you want metrics messages to remain in their respective destinations before being deleted:

       imq.metrics.topic.timetolive

       The default value is 300 seconds.

  3. Set any access control you desire on metrics topic destinations.

     See the discussion in "Security and Access Considerations," below.

  4. Start up your metrics monitoring client.

     When a consumer subscribes to a metrics topic, the metrics topic destination is automatically created. Once a metrics topic has been created, the broker’s metrics message producer begins sending metrics messages to it.
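Metrics messages carry their data as JMS name/value pairs, so once the connection and subscription plumbing (covered in the Developer’s Guide) is in place, the client’s work reduces to reading typed values by name. The sketch below uses a plain java.util.Map to stand in for a received message body; the metric names numMsgsIn and numMsgsOut are illustrative assumptions, not necessarily the product’s documented property names:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the message-processing step of a metrics monitoring client.
// A real client would receive a JMS MapMessage from a metrics topic such as
// mq.metrics.broker and read fields with getLong(); here a plain Map stands
// in for the message body, and the metric names are illustrative.
public class MetricsMessageHandler {
    public static String summarize(Map<String, Long> body) {
        long in = body.get("numMsgsIn");
        long out = body.get("numMsgsOut");
        long backlog = in - out;  // messages received but not yet delivered
        return in + " in, " + out + " out, backlog " + backlog;
    }

    public static void main(String[] args) {
        Map<String, Long> body = new HashMap<>();
        body.put("numMsgsIn", 657L);
        body.put("numMsgsOut", 100L);
        System.out.println(summarize(body));
    }
}
```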

Security and Access Considerations

There are two reasons to restrict access to metrics topic destinations:

Because of these considerations, it is advisable to restrict access to metrics topic destinations.

Monitoring clients are subject to the same authentication and authorization control as any other client. Only users maintained in the Message Queue user repository are allowed to connect to the broker.

You can provide additional protections by restricting access to specific metrics topic destinations through an access control properties file, as described in "Authorizing Users: the Access Control Properties File".

For example, the following entries in an accesscontrol.properties file deny access to the mq.metrics.broker metrics topic to everyone except user1 and user2.

topic.mq.metrics.broker.consume.deny.user=*

topic.mq.metrics.broker.consume.allow.user=user1,user2

The following entries allow only user3 to monitor topic t1.

topic.mq.metrics.destination.topic.t1.consume.deny.user=*

topic.mq.metrics.destination.topic.t1.consume.allow.user=user3

Depending on the sensitivity of metrics data, you can also connect your metrics monitoring client to a broker using an encrypted connection. For information on using encrypted connections, see "Encryption: Working With an SSL-based Service (Enterprise Edition)".

Metrics Outputs: Metrics Messages

The metrics data output you get using the message-based monitoring API is a function of the metrics monitoring client you write. You are limited only by the data provided by the metrics generator in the broker. For a complete list of this data, see "Description of Metrics Data".

Choosing the Right Monitoring Tool

Each of the monitoring tools discussed in the previous sections has its advantages and disadvantages.

Using the imqcmd metrics command, for example, lets you quickly sample information tailored to your needs when you want it, but makes it somewhat difficult to look at historical information, or to manipulate the data programmatically.

The log files, on the other hand, provide a long-term record of metrics data; however, the information in the log file is difficult to parse for meaningful information.

The message-based monitoring API lets you easily extract the information you need, process it, manipulate or format the data programmatically, present graphs or send alerts; however, you have to write a custom application to capture and analyze the data.

In addition, each of these tools gathers a somewhat different subset of the metrics information generated by the broker. For information on which metrics data is gathered by which monitoring tool, see "Description of Metrics Data".

Table 9-6 compares the different tools by showing the pros and cons of each.

Table 9-6  Pros and Cons of Metrics Monitoring Tools  

Metrics
Monitoring Tool

Pros

Cons

imqcmd metrics

Remote monitoring

Convenient for spot checking

Reporting interval set in command option; can be changed on the fly

Easy to select specific data of interest

Data presented in easy tabular format

No single command gets all data

Difficult to analyze data programmatically

Doesn’t create historical record

Difficult to see historical trends

Log files

Regular sampling

Creates a historical record

Need to configure broker properties; must shut down and restart broker to take effect

Local monitoring only

Data format very difficult to read or parse; no parsing tools

Reporting interval cannot be changed on the fly; the same for all metrics data

Does not provide flexibility in selection of data

Broker metrics only; destination and connection service metrics not included

Possible performance hit if interval set too short

Message-based monitoring API

Remote monitoring

Easy to select specific data of interest

Data can be analyzed electronically and presented in any format

Need to configure broker properties; must shut down and restart broker to take effect

You need to write your own metrics monitoring client

Reporting interval cannot be changed on the fly; the same for all metrics data

Description of Metrics Data

The metrics information reported by a broker can be grouped into the following categories:

The following sections present the metrics data available in each of these categories. For information on the monitoring tools referred to in the following tables, see "Monitoring Tools".

JVM Metrics

Table 9-7 lists and describes the metrics data the broker generates for the broker process JVM heap and shows which of the data can be obtained using the different metrics monitoring tools.

Table 9-7  JVM Metrics

Metric Quantity

Description

imqcmd metrics bkr
(metricType)

Log File

Metrics Message
(metrics topic)1

JVM heap:
free memory

The amount of free memory available for use in the JVM heap

Yes
(cxn)

Yes

Yes
(…jvm)

JVM heap:
total memory

The current JVM heap size

Yes
(cxn)

Yes

Yes
(…jvm)

JVM heap:
max memory

The maximum to which the JVM heap size can grow.

No

Yes2

Yes
(…jvm)

1For metrics topic destination names, see Table 9-5.

2Shown only at broker startup.

Broker-wide Metrics

Table 9-8 lists and describes the data the broker reports regarding broker-wide metrics information. It also shows which of the data can be obtained using the different metrics monitoring tools.

Table 9-8  Broker-wide Metrics 

Metric Quantity

Description

imqcmd metrics bkr
(metricType)

Log File

Metrics Message
(metrics topic)1

Connection Data

Num connections

Number of currently open connections to the broker

Yes
(cxn)

Yes

Yes
(…broker)

Num threads

Number of threads currently in use

Yes
(cxn)

Yes

No

Min threads

Minimum number of threads in the thread pool; once this many threads have been created, they are maintained for use by connection services

Yes
(cxn)

Yes

No

Max threads

Maximum number of threads in the thread pool; beyond this number, no new threads are added for use by connection services

Yes
(cxn)

Yes

No

Stored Messages Data

Num messages

Number of JMS messages currently stored in broker memory and persistent store

No
Use query bkr

No

Yes
(…broker)

Total message bytes

Number of JMS message bytes currently stored in broker memory and persistent store

No
Use query bkr

No

Yes
(…broker)

Message Flow Data

Num messages in

Number of JMS messages that have flowed into the broker since it was last started

Yes
(ttl)

Yes

Yes
(…broker)

Message bytes in

Number of JMS message bytes that have flowed into the broker since it was last started

Yes
(ttl)

Yes

Yes
(…broker)

Num packets in

Number of packets that have flowed into the broker since it was last started; includes both JMS messages and control messages

Yes
(ttl)

Yes

Yes
(…broker)

Packet bytes in

Number of packet bytes that have flowed into the broker since it was last started; includes both JMS messages and control messages

Yes
(ttl)

Yes

Yes
(…broker)

Num messages out

Number of JMS messages that have flowed out of the broker since it was last started.

Yes
(ttl)

Yes

Yes
(…broker)

Message bytes out

Number of JMS message bytes that have flowed out of the broker since it was last started

Yes
(ttl)

Yes

Yes
(…broker)

Num packets out

Number of packets that have flowed out of the broker since it was last started; includes both JMS messages and control messages

Yes
(ttl)

Yes

Yes
(…broker)

Packet bytes out

Number of packet bytes that have flowed out of the broker since it was last started; includes both JMS messages and control messages

Yes
(ttl)

Yes

Yes
(…broker)

Rate messages in

Current rate of flow of JMS messages into the broker

Yes
(rts)

Yes

No

Rate message bytes in

Current rate of flow of JMS message bytes into the broker

Yes
(rts)

Yes

No

Rate packets in

Current rate of flow of packets into the broker; includes both JMS messages and control messages

Yes
(rts)

Yes

No

Rate packet bytes in

Current rate of flow of packet bytes into the broker; includes both JMS messages and control messages

Yes
(rts)

Yes

No

Rate messages out

Current rate of flow of JMS messages out of the broker

Yes
(rts)

Yes

No

Rate message bytes out

Current rate of flow of JMS message bytes out of the broker

Yes
(rts)

Yes

No

Rate packets out

Current rate of flow of packets out of the broker; includes both JMS messages and control messages

Yes
(rts)

Yes

No

Rate packet bytes out

Current rate of flow of packet bytes out of the broker; includes both JMS messages and control messages

Yes
(rts)

Yes

No

Destinations Data

Num destinations

Number of physical destinations in the broker

No

No

Yes
(…broker)

1For metrics topic destination names, see Table 9-5.
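The rate quantities in the table above are current flow rates. If you collect only the cumulative totals (the ttl metric type), an equivalent rate can be derived from two samples taken a known interval apart; this is a sketch of that arithmetic, not a description of how the broker computes its rts values:

```java
// Sketch: derive a per-second flow rate from two cumulative metric samples.
public class FlowRate {
    // total1 and total2 are cumulative counts (e.g. Num messages in)
    // sampled intervalSeconds apart.
    public static double perSecond(long total1, long total2, int intervalSeconds) {
        return (total2 - total1) / (double) intervalSeconds;
    }

    public static void main(String[] args) {
        // 164 messages at the first sample, 657 at the next, 10 seconds later.
        System.out.println(perSecond(164, 657, 10) + " msgs/sec");
    }
}
```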

Connection Service Metrics

Table 9-9 lists and describes the metrics data the broker reports for individual connection services. It also shows which of the data can be obtained using the different metrics monitoring tools.

Table 9-9  Connection Service Metrics 

Metric Quantity

Description

imqcmd metrics svc
(metricType)

Log File

Metrics Message
(metrics topic)

Connection Data

Num connections

Number of currently open connections

Yes
(cxn)
Also query svc

No

No

Num threads

Number of threads currently in use, totaled across all connection services

Yes
(cxn)
Also query svc

No

No

Min threads

Minimum number of threads in the thread pool; once this many threads have been created, they are maintained for use by connection services (totaled across all connection services)

Yes
(cxn)

No

No

Max threads

Maximum number of threads in the thread pool, beyond which no new threads are added for use by connection services (totaled across all connection services)

Yes
(cxn)

No

No

Message Flow Data

Num messages in

Number of JMS messages that have flowed into the connection service since the broker was last started

Yes
(ttl)

No

No

Message bytes in

Number of JMS message bytes that have flowed into the connection service since the broker was last started

Yes
(ttl)

No

No

Num packets in

Number of packets that have flowed into the connection service since the broker was last started; includes both JMS messages and control messages

Yes
(ttl)

No

No

Packet bytes in

Number of packet bytes that have flowed into the connection service since the broker was last started; includes both JMS messages and control messages

Yes
(ttl)

No

No

Num messages out

Number of JMS messages that have flowed out of the connection service since the broker was last started.

Yes
(ttl)

No

No

Message bytes out

Number of JMS message bytes that have flowed out of the connection service since the broker was last started

Yes
(ttl)

No

No

Num packets out

Number of packets that have flowed out of the connection service since the broker was last started; includes both JMS messages and control messages

Yes
(ttl)

No

No

Packet bytes out

Number of packet bytes that have flowed out of the connection service since the broker was last started; includes both JMS messages and control messages

Yes
(ttl)

No

No

Rate messages in

Current rate of flow of JMS messages into the broker through the connection service.

Yes
(rts)

No

No

Rate message bytes in

Current rate of flow of JMS message bytes into the connection service

Yes
(rts)

No

No

Rate packets in

Current rate of flow of packets into the connection service; includes both JMS messages and control messages

Yes
(rts)

No

No

Rate packet bytes in

Current rate of flow of packet bytes into the connection service; includes both JMS messages and control messages

Yes
(rts)

No

No

Rate messages out

Current rate of flow of JMS messages out of the connection service

Yes
(rts)

No

No

Rate message bytes out

Current rate of flow of JMS message bytes out of the connection service

Yes
(rts)

No

No

Rate packets out

Current rate of flow of packets out of the connection service; includes both JMS messages and control messages

Yes
(rts)

No

No

Rate packet bytes out

Current rate of flow of packet bytes out of the connection service; includes both JMS messages and control messages

Yes
(rts)

No

No

Destination Metrics

Table 9-10 lists and describes the metrics data the broker reports for individual destinations. It also shows which of the data can be obtained using the different metrics monitoring tools.

Table 9-10  Destination Metrics 

Metric Quantity

Description

imqcmd metrics dst
(metricType)

Log File

Metrics Message
(metrics topic)1

Consumer Data

Num active consumers

Current number of active consumers

Yes
(con)

No

Yes
(…destName)

Avg num active consumers

Average number of active consumers since the broker was last started

Yes
(con)

No

Yes
(…destName)

Peak num active consumers

Peak number of active consumers since the broker was last started

Yes
(con)

No

Yes
(…destName)

Num backup consumers

Current number of backup consumers (applies only to queues)

Yes
(con)

No

Yes
(…destName)

Avg num backup consumers

Average number of backup consumers since the broker was last started (applies only to queues)

Yes
(con)

No

Yes
(…destName)

Peak num backup consumers

Peak number of backup consumers since the broker was last started (applies only to queues)

Yes
(con)

No

Yes
(…destName)

Stored Messages Data

Num messages

Number of JMS messages currently stored in destination memory and persistent store

Yes
(con)
(ttl)
(rts)
Also query dst

No

Yes
(…destName)

Avg num messages

Average number of JMS messages stored in destination memory and persistent store since the broker was last started

Yes
(con)
(ttl)
(rts)

No

Yes
(…destName)

Peak num messages

Peak number of JMS messages stored in destination memory and persistent store since the broker was last started

Yes
(con)
(ttl)
(rts)

No

Yes
(…destName)

Total message bytes

Number of JMS message bytes currently stored in destination memory and persistent store

Yes
(ttl)
(rts)
Also query dst

No

Yes
(…destName)

Avg total message bytes

Average number of JMS message bytes stored in destination memory and persistent store since the broker was last started

Yes
(ttl)
(rts)

No

Yes
(…destName)

Peak total message bytes

Peak number of JMS message bytes stored in destination memory and persistent store since the broker was last started

Yes
(ttl)
(rts)

No

Yes
(…destName)

Peak message bytes

Peak number of JMS message bytes in a single message received by the destination since the broker was last started

Yes
(ttl)
(rts)

No

Yes
(…destName)

Message Flow Data

Num messages in

Number of JMS messages that have flowed into this destination since the broker was last started

Yes
(ttl)

No

Yes
(…destName)

Msg bytes in

Number of JMS message bytes that have flowed into this destination since the broker was last started

Yes
(ttl)

No

Yes
(…destName)

Num messages out

Number of JMS messages that have flowed out of this destination since the broker was last started

Yes
(ttl)

No

Yes
(…destName)

Msg bytes out

Number of JMS message bytes that have flowed out of this destination since the broker was last started

Yes
(ttl)

No

Yes
(…destName)

Rate num messages in

Current rate of flow of JMS messages into the destination

Yes
(rts)

No

No

Rate num messages out

Current rate of flow of JMS messages out of the destination

Yes
(rts)

No

No

Rate msg bytes in

Current rate of flow of JMS message bytes into the destination

Yes
(rts)

No

No

Rate msg bytes out

Current rate of flow of JMS message bytes out of the destination

Yes
(rts)

No

No

Disk Utilization Data

Disk reserved

Disk space (in bytes) used by all message records (active and free) in the destination file-based store

Yes
(dsk)

No

Yes
(…destName)

Disk used

Disk space (in bytes) used by active message records in destination file-based store

Yes
(dsk)

No

Yes
(…destName)

Disk utilization ratio

Quotient of used disk space over reserved disk space. The higher the ratio, the more the disk space is being used to hold active messages

Yes
(dsk)

No

Yes
(…destName)

1For metrics topic destination names, see Table 9-5.
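The disk utilization ratio above is a simple quotient of the two preceding metrics, which makes it easy to script a health check against. In the sketch below, the 0.75 threshold is an arbitrary example, not a product recommendation:

```java
// Sketch: disk utilization ratio for a destination's file-based store.
public class DiskUtilization {
    // usedBytes: disk space used by active message records.
    // reservedBytes: disk space used by all records, active and free.
    public static double ratio(long usedBytes, long reservedBytes) {
        return usedBytes / (double) reservedBytes;
    }

    public static void main(String[] args) {
        double r = ratio(480_000, 800_000);
        System.out.println("utilization = " + r);
        if (r < 0.75) {  // example threshold only
            System.out.println("much of the reserved space holds free records");
        }
    }
}
```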


Troubleshooting Performance Problems

There are a number of performance problems that can occur in using a Message Queue service to support an application. These problems include the following:

Each of these problems is discussed below along with possible causes and solutions.

Problem: Clients Can’t Establish A Connection

Symptoms:

Possible Causes:

Problem: Connection Throughput is Too Slow

Symptoms:

Possible Causes:

Problem: Client Can’t Create a Message Producer

Symptoms:

Possible Causes:

Problem: Message Production Is Delayed or Slowed

Symptoms:

Possible Causes:

Problem: Messages Backlogged in Message Server

Symptoms:

Possible Causes:

Problem: Message Server Throughput Is Sporadic

Symptoms:

Possible Causes:

Problem: Messages Not Reaching Consumers

Symptoms:

Possible Causes:


Adjusting Your Configuration To Improve Performance

System Adjustments

The following sections describe adjustments you can make to the operating system, JVM, and communication protocols.

Solaris Tuning: CPU Utilization, Paging/Swapping/Disk I/O

See your system documentation for tuning your operating system.

Java Virtual Machine Adjustments

By default, the broker uses a JVM heap size of 192MB. This is often too small for significant message loads and should be increased.

When the broker gets close to exhausting the JVM heap space used by Java objects, it uses various techniques such as flow control and message swapping to free memory. Under extreme circumstances it even closes client connections in order to free the memory and reduce the message inflow. Hence it is desirable to set the maximum JVM heap space high enough to avoid such circumstances.

However, if the maximum Java heap space is set too high, in relation to system physical memory, the broker can continue to grow the Java heap space until the entire system runs out of memory. This can result in diminished performance, unpredictable broker crashes, and/or affect the behavior of other applications and services running on the system. In general, you need to allow enough physical memory for the operating system and other applications to run on the machine.

In general it is a good idea to evaluate the normal and peak system memory footprints, and configure the Java heap size so that it is large enough to provide good performance, but not so large as to risk system memory problems.

To change the minimum and maximum heap size for the broker, use the -vmargs command line option when starting the broker. For example:

imqbrokerd -vmargs "-Xms256m -Xmx1024m"

This command sets the starting Java heap size to 256MB and the maximum Java heap size to 1GB.

In any case, verify settings by checking the broker's log file or using the
imqcmd metrics bkr -m cxn command.

Tuning Transport Protocols

Once a protocol that meets application needs has been chosen, additional tuning (based on the selected protocol) might improve performance.

A protocol's performance can be modified using three broker properties: nodelay, inbufsz, and outbufsz, described below.

For TCP and SSL protocols, these properties affect the speed of message delivery between client and broker. For HTTP and HTTPS protocols, these properties affect the speed of message delivery between the Message Queue tunnel servlet (running on a Web server) and the broker. For HTTP/HTTPS protocols there are additional properties that can affect performance (see "HTTP/HTTPS Tuning").

The protocol tuning properties are described in the following sections.

nodelay

The nodelay property affects Nagle's algorithm (the value of the TCP_NODELAY socket-level option on TCP/IP) for the given protocol. Nagle's algorithm is used to improve TCP performance on systems using slow connections such as wide-area networks (WANs).

When the algorithm is used, TCP tries to prevent several small chunks of data from being sent to the remote system (by bundling the data in larger packets). If the data written to the socket does not fill the required buffer size, the protocol delays sending the packet until either the buffer is filled or a specific delay time has elapsed. Once the buffer is full or the time-out has occurred, the packet is sent.

For most messaging applications, performance is best if there is no delay in the sending of packets (Nagle’s algorithm is not enabled). This is because most interactions between client and broker are request/response interactions: the client sends a packet of data to the broker and waits for a response. For example, typical interactions include:

For these interactions, most packets are smaller than the buffer size. This means that if Nagle's algorithm is used, the broker delays several milliseconds before sending a response to the consumer.

However, Nagle's algorithm may improve performance in situations where connections are slow and broker responses are not required. This would be the case where a client sends a non-persistent message or where a client acknowledgement is not confirmed by the broker (DUPS_OK_ACKNOWLEDGE session).
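As a sketch, disabling Nagle's algorithm for the jms service over TCP would look like the following in config.properties. The property name pattern shown (imq.serviceName.protocolType.nodelay) is an assumption; check it against the property reference for your release:

```
# Sketch: disable Nagle's algorithm (TCP_NODELAY) for the jms service.
# Property name pattern assumed: imq.serviceName.protocolType.nodelay
imq.jms.tcp.nodelay=true
```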

inbufsz/outbufsz

The inbufsz property sets the size of the buffer on the input stream reading data coming in from a socket. Similarly, outbufsz sets the buffer size of the output stream used by the broker to write data to the socket.

In general, both parameters should be set to values that are slightly larger than the average packet being received or sent. A good rule of thumb is to set these property values to the size of the average packet plus 1k (rounded to the nearest k).

For example, if the broker is receiving packets with a body size of 1k, the overall size of the packet (message body + header + properties) is about 1200 bytes. An inbufsz of 2k (2048 bytes) gives reasonable performance.

Increasing the inbufsz or outbufsz greater than that size may improve performance slightly; however, it increases the memory needed for each connection.
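The rule of thumb above (average packet size plus 1k, rounded to the nearest k) can be written out as a small helper. This is a sketch of the arithmetic only:

```java
// Sketch: suggested inbufsz/outbufsz per the rule of thumb in the text --
// average packet size plus 1k, rounded to the nearest multiple of 1k.
public class BufferSizing {
    public static int suggestedBufferBytes(int avgPacketBytes) {
        int raw = avgPacketBytes + 1024;
        // Round to the nearest multiple of 1024.
        return Math.round(raw / 1024.0f) * 1024;
    }

    public static void main(String[] args) {
        // A 1k message body yields a packet of about 1200 bytes overall,
        // which suggests a 2k buffer, matching the example in the text.
        System.out.println(suggestedBufferBytes(1200));
    }
}
```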

Figure 9-7 shows the consequence of changing inbufsz on a 1k packet.

Figure 9-7  Effect of Changing inbufsz on a 1k (1024 bytes) Packet

Chart showing effect of changing inbufsz property on a 1k packet. Effect is described in text.

Figure 9-8 shows the consequence of changing outbufsz on a 1k packet.

Figure 9-8  Effect of Changing outbufsz on a 1k (1024 bytes) Packet

Chart showing effect of changing outbufsz property on a 1k packet. Effect is described in text.

HTTP/HTTPS Tuning

In addition to the general properties discussed in the previous two sections, HTTP/HTTPS performance is limited by how fast a client can make HTTP requests to the Web server hosting the Message Queue tunnel servlet.

A Web server might need to be optimized to handle multiple requests on a single socket. With JDK version 1.4 and later, HTTP connections to a Web server are kept alive (the socket to the Web server remains open) to minimize resources used by the Web server when it processes multiple HTTP requests. If the performance of a client application using JDK version 1.4 is slower than the same application running with an earlier JDK release, you might need to tune the Web server keep-alive configuration parameters to improve performance.

In addition to such Web-server tuning, you can also adjust how often a client polls the Web server. HTTP is a request-based protocol. This means that clients using an HTTP-based protocol periodically need to check the Web server to see if messages are waiting. The imq.httpjms.http.pullPeriod broker property (and the corresponding imq.httpsjms.https.pullPeriod property) specifies how often the Message Queue client runtime polls the Web server.

If the pullPeriod value is -1 (the default value), the client runtime polls the server as soon as the previous request returns, maximizing the performance of the individual client. As a result, each client connection monopolizes a request thread in the Web server, possibly straining Web server resources.

If the pullPeriod value is a positive number, the client runtime periodically sends requests to the Web server to see if there is pending data. In this case, the client does not monopolize a request thread in the Web server. Hence, if large numbers of clients are using the Web server, you might conserve Web server resources by setting the pullPeriod to a positive value.
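As a sketch, the corresponding broker configuration entries look like the following. The positive value shown is an arbitrary example; consult the property reference for your release for the exact units:

```
# Sketch: throttle how often clients poll the Web server for pending data.
# -1 (the default) polls continuously; a positive value polls periodically.
imq.httpjms.http.pullPeriod=2
imq.httpsjms.https.pullPeriod=2
```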

Tuning the File-based Persistent Store

For information on tuning the file-based persistent store, see "Built-in persistence".

Broker Adjustments

The following sections describe adjustments you can make to broker properties to improve performance.

Memory Management: Increasing Broker Stability Under Load

Memory management can be configured on a destination-by-destination level or on a system-wide level (for all destinations, collectively).

Using Destination Limits

For information on destination limits, see "Managing Destinations".

Using System-wide Limits

If message producers tend to overrun message consumers, messages can accumulate in the broker. The broker does contain a mechanism for throttling back producers and swapping messages out of active memory in low-memory conditions (see "Managing Memory Resources and Message Flow"), but it is wise to set a hard limit on the total number of messages (and message bytes) that the broker can hold.

Control these limits by setting the imq.system.max_count and the imq.system.max_size broker properties. See "Editing the Instance Configuration File" or "Summary of imqbrokerd Options" for information on setting broker properties.

For example:

imq.system.max_count=5000

With this setting, the broker holds no more than 5000 undelivered/unacknowledged messages. If additional messages are sent, the broker rejects them. If a rejected message is persistent, the producer receives an exception when it attempts the send; if the message is non-persistent, the broker silently drops it.

To have non-persistent messages return an exception like persistent messages, set the following property on the connection factory object used by the client:

imqAckOnProduce = true

The setting above may decrease the performance of sending non-persistent messages to the broker (the client waits for a reply before sending the next message), but often this is acceptable since message inflow to the broker is typically not a system bottleneck.

When a send attempt returns an exception, the client should pause for a moment and then retry the send.
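The pause-and-retry pattern described above can be sketched as follows. This is a minimal illustration, not Message Queue API: the `RetrySend` class, the `Callable<Boolean>` stand-in for a `MessageProducer.send()` call, and the simulated failure are all hypothetical; in a real client the caught exception would be a `JMSException` raised when the broker rejects a message because a limit such as `imq.system.max_count` has been reached.

```java
import java.util.concurrent.Callable;

public class RetrySend {

    /**
     * Attempts the send, pausing between attempts; returns true as soon
     * as one attempt succeeds, false once maxAttempts have failed.
     */
    static boolean sendWithRetry(Callable<Boolean> send,
                                 int maxAttempts, long pauseMillis) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                if (send.call()) {
                    return true;               // broker accepted the message
                }
            } catch (Exception e) {
                // Send rejected (for example, the broker's message limit
                // was reached); fall through and retry after a pause.
            }
            try {
                Thread.sleep(pauseMillis);     // pause before retrying
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;                          // gave up after maxAttempts
    }

    public static void main(String[] args) {
        // Simulated send that fails twice and then succeeds.
        final int[] calls = {0};
        Callable<Boolean> flaky = () -> {
            calls[0]++;
            if (calls[0] < 3) throw new Exception("broker limit reached");
            return true;
        };
        boolean ok = sendWithRetry(flaky, 5, 10);
        System.out.println(ok + " after " + calls[0] + " attempts");
        // prints: true after 3 attempts
    }
}
```

A bounded retry with a fixed pause keeps a temporarily overloaded broker from being hammered by immediate resends while still letting the producer make progress once memory is freed.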

Multiple Consumer Queue Performance

The efficiency with which multiple queue consumers process the messages in a queue destination depends on configurable queue destination attributes, namely the number of active consumers (maxNumActiveConsumers) and the maximum number of messages that can be delivered to a consumer in a single batch (consumerFlowLimit). These attributes are described in Table 6-10.

To achieve optimal message throughput there must be a sufficient number of active consumers to keep up with the rate of message production for the queue, and the messages in the queue must be routed and then delivered to the active consumers in such a way as to maximize their rate of consumption. The general mechanism for balancing message delivery among multiple consumers is described in "Queue Delivery to Multiple Consumers".

If messages are accumulating in the queue, it is possible that there are not enough active consumers to handle the message load. It is also possible that messages are being delivered to the consumers in batch sizes that cause messages to back up on individual consumers. For example, if the batch size (consumerFlowLimit) is too large, one consumer might receive all the messages in a queue while other active consumers receive none. If consumers are very fast, this might not be a problem.

However, if consumers are relatively slow, you want messages to be distributed to them evenly, and therefore you want the batch size to be small. The smaller the batch size, the more overhead is required to deliver messages to consumers. Nevertheless, for slow consumers, there is generally a net performance gain to using small batch sizes.
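For example, a queue served by relatively slow consumers could be given more active consumers and a smaller delivery batch with the imqcmd utility. The queue name and attribute values below are illustrative, and the usual imqcmd connection options (broker host and port, admin user name) are omitted:

```
imqcmd update dst -t q -n SlowQ -o "maxNumActiveConsumers=5" -o "consumerFlowLimit=10"
```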

Client Runtime Message Flow Adjustments

This section discusses flow control behaviors that impact performance (see "Client Runtime Configuration"). These behaviors are configured as attributes of connection factory administered objects. For information on setting connection factory attributes, see Chapter 7, "Managing Administered Objects."

Message Flow Metering

Messages sent and received by clients (JMS messages), as well as Message Queue control messages, pass over the same client-broker connection. Delays in the delivery of control messages, such as broker acknowledgements, can result if control messages are held up by the delivery of JMS messages. To prevent this type of congestion, Message Queue meters the flow of JMS messages across a connection.

JMS messages are batched (as specified with the imqConnectionFlowCount property) so that only a set number are delivered; when the batch has been delivered, delivery of JMS messages is suspended, and pending control messages are delivered. This cycle repeats, as other batches of JMS messages are delivered, followed by queued up control messages.

The value of imqConnectionFlowCount should be kept low if the client is doing operations that require many responses from the broker; for example, the client is using the CLIENT_ACKNOWLEDGE or AUTO_ACKNOWLEDGE modes, persistent messages, transactions, queue browsers, or if the client is adding or removing consumers. If, on the other hand, the client has only simple consumers on a connection using DUPS_OK_ACKNOWLEDGE mode, you can increase imqConnectionFlowCount without compromising performance.
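For example, for a connection carrying only simple consumers in DUPS_OK_ACKNOWLEDGE mode, you might set a larger metering batch on the connection factory administered object (500 is an illustrative value, not a recommendation):

```
imqConnectionFlowCount=500
```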

Message Flow Limits

There is a limit to the number of JMS messages that the Message Queue client runtime can handle before encountering local resource limitations, such as memory. When this limit is approached, performance suffers. Hence, Message Queue lets you limit the number of messages per consumer (or messages per connection) that can be delivered over a connection and buffered in the client runtime, waiting to be consumed.

Consumer-based Limits

When the number of JMS messages delivered to the client runtime exceeds the value of imqConsumerFlowLimit for any consumer, message delivery for that consumer stops. It is resumed only when the number of unconsumed messages for that consumer drops below the value set with imqConsumerFlowThreshold.

The following example illustrates the use of these limits. Consider the default settings for topic consumers:

imqConsumerFlowLimit=1000

imqConsumerFlowThreshold=50

When the consumer is created, the broker delivers an initial batch of 1000 messages (provided that many are available) to this consumer without pausing. After sending 1000 messages, the broker stops delivery until the client runtime asks for more messages. The client runtime holds these messages until the application processes them. The client runtime then allows the application to consume at least 50% (imqConsumerFlowThreshold) of the message buffer capacity (that is, 500 messages) before asking the broker to send the next batch.

In the same situation, if the threshold were 10%, the client runtime would wait for the application to consume at least 900 messages before asking for the next batch.

The next batch size is calculated as follows:

imqConsumerFlowLimit - (current number of pending msgs in buffer)

So, if imqConsumerFlowThreshold is 50%, the next batch size can fluctuate between 500 and 1000, depending on how fast the application can process the messages.
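To make the arithmetic concrete, here is a small sketch of the batch-size formula. The class and method names are ours, not part of the Message Queue API:

```java
public class NextBatch {

    /** Next batch size per the formula: flowLimit - pending messages. */
    static int nextBatch(int flowLimit, int pending) {
        return flowLimit - pending;
    }

    public static void main(String[] args) {
        // With imqConsumerFlowLimit=1000 and a 50% threshold, the client
        // runtime asks for more once pending drops below 500, so:
        System.out.println(nextBatch(1000, 499)); // prints 501
        System.out.println(nextBatch(1000, 0));   // prints 1000
    }
}
```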

If imqConsumerFlowThreshold is set too high (close to 100%), the broker tends to send smaller batches, which can lower message throughput. If it is set too low (close to 0%), the client might finish processing the remaining buffered messages before the broker delivers the next set, again degrading throughput. Generally speaking, unless you have specific performance or reliability concerns, you do not need to change the default value of the imqConsumerFlowThreshold attribute.

The consumer-based flow controls (in particular imqConsumerFlowLimit) are the best way to manage memory in the client runtime. Generally, depending on the client application, you know the number of consumers you need to support on any connection, the size of the messages, and the total amount of memory that is available to the client runtime.

Connection-based Limits

In the case of some client applications, however, the number of consumers might be indeterminate, depending on choices made by end users. In those cases, you can still manage memory, using connection-level flow limits.

Connection-level flow controls limit the total number of messages buffered for all consumers on a connection. If this number exceeds the imqConnectionFlowLimit, then delivery of messages through the connection will stop until that total drops below the connection limit. (The imqConnectionFlowLimit is only enabled if you set the imqConnectionFlowLimitEnabled property to true.)
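For example, to cap the total number of messages buffered for all consumers on a connection at 2000 (an illustrative value), you would set both connection factory attributes, since the limit is ignored unless it is explicitly enabled:

```
imqConnectionFlowLimitEnabled=true
imqConnectionFlowLimit=2000
```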

The number of messages queued up in a session is a function of the number of message consumers using the session and the message load for each consumer. If a client is exhibiting delays in producing or consuming messages, you can normally improve performance by redesigning the application to distribute message producers and consumers among a larger number of sessions or to distribute sessions among a larger number of connections.





Copyright 2003 Sun Microsystems, Inc. All rights reserved.