Symptoms:
Message production is delayed or produced messages are rejected by the broker.
Messages take an unusually long time to reach consumers.
The number of messages or message bytes in the broker (or in specific destinations) increases steadily over time.
To see whether messages are accumulating, check how the number of messages or message bytes in the broker changes over time, and compare those values to the configured limits. First check the configured limits:
imqcmd query bkr
The imqcmd metrics bkr subcommand does not display this information.
Then check for message accumulation in each destination:
imqcmd list dst
To see whether messages have exceeded configured destination or brokerwide limits, check the broker log for the entry
[B2011]: Storing of JMS message from … failed.
This entry will be followed by another identifying the limit that has been exceeded.
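The pairing of the [B2011] entry with the limit entry that follows it can be checked mechanically when scanning a large log. Below is a minimal plain-JDK sketch; only the quoted [B2011] text comes from the log format above, and the sample "limit" line is hypothetical:

```java
import java.util.List;

// Minimal sketch: find a [B2011] storage failure among broker log lines
// and return the entry that follows it, which names the exceeded limit.
public class B2011Scan {
    static String limitLineAfterB2011(List<String> logLines) {
        for (int i = 0; i + 1 < logLines.size(); i++) {
            if (logLines.get(i).contains("[B2011]")) {
                return logLines.get(i + 1); // entry identifying the exceeded limit
            }
        }
        return null; // no storage failure logged
    }

    public static void main(String[] args) {
        List<String> sample = List.of(
            "[B2011]: Storing of JMS message from ... failed.",
            "destination message count limit exceeded (hypothetical sample line)");
        System.out.println(limitLineAfterB2011(sample));
    }
}
```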
Possible causes:
There are inactive durable subscriptions on a topic destination.
Too few consumers are available to consume messages in a queue.
Message consumers are processing too slowly to keep up with message producers.
Client acknowledgment processing is slowing down message consumption.
The broker cannot keep up with produced messages.
Client code defects; consumers are not acknowledging messages.
Possible cause: There are inactive durable subscriptions on a topic destination.
If a durable subscription is inactive, messages are stored in a destination until the corresponding consumer becomes active and can consume the messages.
To confirm this cause of the problem: Check the state of durable subscriptions on each topic destination:
imqcmd list dur -d destName
To resolve the problem:
Purge all messages for the offending durable subscriptions (see Managing Durable Subscriptions).
Specify message limit and limit behavior attributes for the topic (see Table 18–1). For example, you can specify the REMOVE_OLDEST and REMOVE_LOW_PRIORITY limit behaviors, which delete messages that accumulate in memory.
Purge all messages from the corresponding destinations (see Purging a Physical Destination).
Limit the time messages can remain in memory by rewriting the producing client to set a time-to-live value on each message. You can override any such settings for all producers sharing a connection by setting the imqOverrideJMSExpiration and imqJMSExpiration connection factory attributes (see Message Header Overrides).
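In JMS, the producing client sets the time-to-live with MessageProducer.setTimeToLive (or per send call), and the broker discards a message once its expiration time passes. The plain-JDK sketch below simulates just that expiration rule, without a broker; all names are illustrative:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Plain-JDK simulation of message time-to-live: each message records an
// absolute expiration time (0 means "never expires", as in JMS), and a
// sweep discards messages whose expiration has passed.
public class TtlSweep {
    record Msg(String body, long expiration) {}

    // Remove expired messages from the pending queue; returns the number dropped.
    static int sweep(Deque<Msg> pending, long now) {
        int dropped = 0;
        for (var it = pending.iterator(); it.hasNext(); ) {
            Msg m = it.next();
            if (m.expiration() != 0 && m.expiration() <= now) {
                it.remove();
                dropped++;
            }
        }
        return dropped;
    }

    public static void main(String[] args) {
        Deque<Msg> pending = new ArrayDeque<>();
        long now = 1_000;
        pending.add(new Msg("expires soon", now + 50));
        pending.add(new Msg("never expires", 0));
        System.out.println(sweep(pending, now + 100) + " dropped, " + pending.size() + " left");
    }
}
```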
Possible cause: Too few consumers are available to consume messages in a multiple-consumer queue.
If there are too few active consumers to which messages can be delivered, a queue destination can become backlogged as messages accumulate. This condition can occur for any of the following reasons:
Too few active consumers exist for the destination.
Consuming clients have failed to establish connections.
No active consumers use a selector that matches messages in the queue.
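The last of these reasons can be surprising: consumers exist and are active, yet nothing is delivered. The plain-JDK sketch below models the effect only; selectors are represented as predicates over a property map, not real JMS selector syntax, and all property names are hypothetical:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Sketch of the "no matching selector" failure mode: a queue can have
// active consumers yet still back up if none of their selectors matches
// the properties of the pending messages.
public class SelectorMismatch {
    static long matchingConsumers(List<Predicate<Map<String, String>>> selectors,
                                  Map<String, String> msgProps) {
        return selectors.stream().filter(s -> s.test(msgProps)).count();
    }

    public static void main(String[] args) {
        var selectors = List.<Predicate<Map<String, String>>>of(
            p -> "high".equals(p.get("priority")),
            p -> "EU".equals(p.get("region")));
        var msg = Map.of("priority", "low", "region", "US");
        // Two active consumers, but neither selector matches this message.
        System.out.println(matchingConsumers(selectors, msg) + " consumers match");
    }
}
```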
To confirm this cause of the problem: To determine why consumers are unavailable, check the number of active consumers on the destination:
imqcmd metrics dst -n destName -t q -m con
To resolve the problem: Depending on the reason for unavailable consumers, do one of the following:
Create more active consumers for the queue by starting up additional consuming clients.
Adjust the imq.consumerFlowLimit broker property to optimize queue delivery to multiple consumers (see Adjusting Multiple-Consumer Queue Delivery).
Specify message limit and limit behavior attributes for the queue (see Table 18–1). For example, you can specify the REMOVE_OLDEST and REMOVE_LOW_PRIORITY limit behaviors, which delete messages that accumulate in memory.
Purge all messages from the corresponding destinations (see Purging a Physical Destination).
Limit the time messages can remain in memory by rewriting the producing client to set a time-to-live value on each message. You can override any such setting for all producers sharing a connection by setting the imqOverrideJMSExpiration and imqJMSExpiration connection factory attributes (see Message Header Overrides).
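The REMOVE_OLDEST limit behavior mentioned above amounts to a bounded queue that evicts from the head when full. A plain-JDK sketch of that policy, with a hypothetical limit of three messages (no real broker involved):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Plain-JDK sketch of the REMOVE_OLDEST limit behavior: when a destination
// reaches its message limit, the oldest pending message is discarded to
// make room for the new one, instead of rejecting the producer.
public class RemoveOldest {
    static void offer(Deque<String> dest, int maxNumMsgs, String msg) {
        if (dest.size() >= maxNumMsgs) {
            dest.removeFirst(); // discard the oldest message
        }
        dest.addLast(msg);
    }

    public static void main(String[] args) {
        Deque<String> dest = new ArrayDeque<>();
        for (int i = 1; i <= 5; i++) offer(dest, 3, "m" + i);
        System.out.println(dest); // m1 and m2 were discarded to stay within the limit
    }
}
```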
Possible cause: Message consumers are processing too slowly to keep up with message producers.
In this case, topic subscribers or queue receivers are consuming messages more slowly than the producers are sending messages. One or more destinations are getting backlogged with messages because of this imbalance.
To confirm this cause of the problem: Check the rate of message flow into and out of the broker:
imqcmd metrics bkr -m rts
Then check flow rates for each of the individual destinations:
imqcmd metrics dst -t destType -n destName -m rts
To resolve the problem:
Optimize consuming client code.
For queue destinations, increase the number of active consumers (see Adjusting Multiple-Consumer Queue Delivery).
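The in/out rates reported by the metrics commands make this a simple capacity check: the backlog grows whenever the aggregate consumption rate of all active consumers is below the production rate. A plain-JDK sketch with hypothetical rates:

```java
// Back-of-the-envelope check of consumer capacity against production rate:
// a queue backlog grows whenever the aggregate consumption rate of all
// active consumers is below the production rate. All rates are hypothetical.
public class BacklogRate {
    // Messages/sec by which the backlog grows (negative means it drains).
    static double backlogGrowth(double produceRate, double perConsumerRate, int consumers) {
        return produceRate - perConsumerRate * consumers;
    }

    public static void main(String[] args) {
        double produce = 100.0, perConsumer = 40.0; // hypothetical rates
        System.out.println("1 consumer:  " + backlogGrowth(produce, perConsumer, 1) + " msg/s");
        System.out.println("3 consumers: " + backlogGrowth(produce, perConsumer, 3) + " msg/s");
    }
}
```

With one consumer the backlog grows by 60 messages per second; with three, aggregate capacity (120 msg/s) exceeds production and the backlog drains.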
Possible cause: Client acknowledgment processing is slowing down message consumption.
Two factors affect the processing of client acknowledgments:
Significant broker resources can be consumed in processing client acknowledgments. As a result, message consumption may be slowed in those acknowledgment modes in which consuming clients block until the broker confirms client acknowledgments.
JMS payload messages and Message Queue control messages (such as client acknowledgments) share the same connection. As a result, control messages can be held up by JMS payload messages, slowing message consumption.
To confirm this cause of the problem:
Check the flow of messages relative to the flow of packets. If the number of packets per second is out of proportion to the number of messages, client acknowledgments may be a problem.
Check to see whether the client has received the following exception:
JMSException [C4000]: Packet acknowledge failed
To resolve the problem:
Modify the acknowledgment mode used by clients: for example, switch to DUPS_OK_ACKNOWLEDGE or CLIENT_ACKNOWLEDGE.
If using CLIENT_ACKNOWLEDGE or transacted sessions, group a larger number of messages into a single acknowledgment.
Adjust consumer and connection flow control parameters (see Client Runtime Message Flow Adjustments).
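The benefit of grouping acknowledgments is easy to quantify: in JMS CLIENT_ACKNOWLEDGE mode, Message.acknowledge acknowledges all messages consumed so far in the session, so acknowledging only every Nth message divides the number of blocking round trips to the broker by N. A plain-JDK sketch of the arithmetic (the batch sizes are illustrative):

```java
// Sketch of why grouping acknowledgments helps: acknowledging every
// batchSize-th message replaces one blocking broker round trip per
// message with one round trip per batch.
public class AckBatching {
    static int roundTrips(int messages, int batchSize) {
        return (messages + batchSize - 1) / batchSize; // ceil(messages / batchSize)
    }

    public static void main(String[] args) {
        System.out.println("ack every message: " + roundTrips(1000, 1));
        System.out.println("ack every 50th:    " + roundTrips(1000, 50));
    }
}
```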
Possible cause: The broker cannot keep up with produced messages.
In this case, messages are flowing into the broker faster than the broker can route and dispatch them to consumers. The sluggishness of the broker can be due to limitations in any or all of the following:
CPU
Network socket read/write operations
Disk read/write operations
Memory paging
Persistent store
JVM memory limits
To confirm this cause of the problem: Check that none of the other possible causes of this problem are responsible.
To resolve the problem:
Upgrade the speed of your computer or data store.
Use a broker cluster to distribute the load among multiple broker instances.
Possible cause: Client code defects; consumers are not acknowledging messages.
Messages are held in a destination until they have been acknowledged by all consumers to which they have been sent. If a client is not acknowledging consumed messages, the messages accumulate in the destination without being deleted.
For example, client code might have the following defects:
Consumers using the CLIENT_ACKNOWLEDGE acknowledgment mode or a transacted session may not be calling Message.acknowledge or Session.commit regularly.
Consumers using the AUTO_ACKNOWLEDGE acknowledgment mode may be hanging for some reason.
To confirm this cause of the problem: First check all other possible causes listed in this section. Next, list the destination with the following command:
imqcmd list dst
Notice whether the number of messages listed under the UnAcked header is the same as the number of messages in the destination. Messages under this header were sent to consumers but not acknowledged. If this number is the same as the total number of messages, then the broker has sent all the messages and is waiting for acknowledgment.
To resolve the problem: Request the help of application developers in debugging this problem.
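The broker-side symptom described above can be sketched without a broker: a message counts as unacked from delivery until the consumer acknowledges it, so a consumer that never acknowledges drives the UnAcked count toward the total message count. A plain-JDK simulation (all names hypothetical):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the defect's broker-side symptom: a message stays "unacked"
// from delivery until the consumer acknowledges it, so a consumer that
// never acknowledges makes the UnAcked count equal the total delivered.
public class UnackedGrowth {
    final Set<Integer> delivered = new HashSet<>();
    final Set<Integer> acked = new HashSet<>();

    void deliver(int msgId) { delivered.add(msgId); }
    void acknowledge(int msgId) { acked.add(msgId); }

    int unacked() { return delivered.size() - acked.size(); }

    public static void main(String[] args) {
        UnackedGrowth broker = new UnackedGrowth();
        for (int i = 0; i < 10; i++) broker.deliver(i); // sent to a consumer
        // Defective consumer: never calls acknowledge() or commits.
        System.out.println("UnAcked = " + broker.unacked() + " of " + broker.delivered.size());
    }
}
```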