Oracle GlassFish Server Message Queue 4.5 Administration Guide

Message Production Is Delayed or Slowed

Symptoms:

A message-producing client is delayed or blocked when sending messages.

Possible causes:

The broker is backlogged and has responded by slowing message producers.
The broker cannot save a persistent message to the data store.
Broker acknowledgment timeout is too short.
A producing client is encountering JVM limitations.

Possible cause: The broker is backlogged and has responded by slowing message producers.

A backlogged broker accumulates messages in broker memory. When the number of messages or message bytes in physical destination memory reaches configured limits, the broker attempts to conserve memory resources in accordance with the destination's specified limit behavior. The following limit behaviors slow down message producers:

FLOW_CONTROL, under which the broker slows message producers until destination memory becomes available

REJECT_NEWEST, under which the broker rejects incoming messages; if a rejected message is persistent, the producing client receives an exception and must resend it

Similarly, when the number of messages or message bytes in brokerwide memory (for all physical destinations) reaches configured limits, the broker will attempt to conserve memory resources by rejecting the newest messages. Also, when system memory limits are reached because physical destination or brokerwide limits have not been set properly, the broker takes increasingly serious action to prevent memory overload. These actions include throttling back message producers.
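
If destination or brokerwide limits are being reached routinely, they may simply be set too low for the workload. As a minimal sketch, assuming a queue destination named myQueue (the names and values shown are illustrative, not recommendations), the limits and limit behavior might be adjusted with the Command utility:

imqcmd update dst -t q -n myQueue -o "maxNumMsgs=100000" -o "limitBehavior=FLOW_CONTROL"

imqcmd update bkr -o "imq.system.max_count=200000"

The first command raises the queue's message limit and selects flow control as its limit behavior; the second raises the brokerwide limit on the total number of messages.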

To confirm this cause of the problem: When a message is rejected by the broker because of configured message limits, the broker returns the exception

JMSException [C4036]: A server error occurred

and makes the following entry in the broker log:

[B2011]: Storing of JMS message from IMQconn failed

This message is followed by another indicating the limit that has been reached:

[B4120]: Cannot store message on destination destName because capacity of maxNumMsgs would be exceeded.

if the exceeded message limit is on a physical destination, or

[B4024]: The maximum number of messages currently in the system has been exceeded, rejecting message.

if the limit is brokerwide.

More generally, you can check for message limit conditions before the rejections occur by querying physical destinations and the broker and comparing their configured limits with the number of messages and message bytes they currently hold.
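
As a minimal sketch, again assuming a queue destination named myQueue, the Command utility can report both configured limits and current contents:

imqcmd query dst -t q -n myQueue

imqcmd metrics dst -t q -n myQueue -m ttl

imqcmd query bkr

imqcmd metrics bkr -m ttl

The query subcommands display configured limits (such as maxNumMsgs and imq.system.max_count) together with current state; the metrics subcommands report message and message-byte totals for the destination and for the broker as a whole.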

To resolve the problem:

Possible cause: The broker cannot save a persistent message to the data store.

If the broker cannot access a data store or write a persistent message to it, the producing client is blocked. This condition can also occur if destination or brokerwide message limits are reached, as described above.

To confirm this cause of the problem: If the broker is unable to write to the data store, it makes one of the following entries in the broker log:

[B2011]: Storing of JMS message from connectionID failed

[B4004]: Failed to persist message messageID
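
If the broker uses the default file-based persistent store, one quick check (a sketch only; the instance directory location varies by installation, see Appendix A, Distribution-Specific Locations of Message Queue Data) is whether the file system holding the broker's instance directory is full or mounted read-only. For example:

df -h /var/imq/instances/imqbroker

The path shown is a placeholder; substitute your installation's instance directory.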

To resolve the problem:

Possible cause: Broker acknowledgment timeout is too short.

Because of slow connections or a lethargic broker (caused by high CPU utilization or scarce memory resources), a broker may require more time to acknowledge receipt of a persistent message than allowed by the value of the connection factory’s imqAckTimeout attribute.

To confirm this cause of the problem: If the imqAckTimeout value is exceeded, the client runtime throws the exception

JMSException [C4000]: Packet acknowledge failed

To resolve the problem: Increase the value of the imqAckTimeout connection factory attribute (see Reliability And Flow Control).
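
If the connection factory is stored as an administered object, the attribute might be changed with the Object Manager utility. As a sketch, assuming a file-system object store under /tmp/imq_admin_objects and a factory stored under the lookup name cn=myConnectionFactory (both assumptions; adjust to your object store):

imqobjmgr update -l "cn=myConnectionFactory" -o "imqAckTimeout=10000" -j "java.naming.factory.initial=com.sun.jndi.fscontext.RefFSContextFactory" -j "java.naming.provider.url=file:///tmp/imq_admin_objects"

The imqAckTimeout value is specified in milliseconds; 10000 (10 seconds) is illustrative only.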

Possible cause: A producing client is encountering JVM limitations.

To confirm this cause of the problem:

To resolve the problem: Adjust the JVM (see Java Virtual Machine Adjustments).
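
For example, if the client is exhausting its heap, a simple first step (the heap sizes, classpath, and class name are placeholders) is to restart the producing client with larger initial and maximum heap settings:

java -Xms256m -Xmx1024m -cp myapp.jar com.example.MyProducingClient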