After installing Sun Java System Message Queue and performing some preparatory steps, you can begin starting brokers and clients. A broker’s configuration is governed by a set of configuration files, which can be overridden by command line options passed to the Broker utility (imqbrokerd); see Chapter 4, Configuring a Broker for more information.
This chapter contains the following sections:
Before starting a broker, there are two preliminary system-level tasks to perform: synchronizing system clocks and (on the Solaris or Linux platform) setting the file descriptor limit. The following sections describe these tasks.
Before starting any brokers or clients, it is important to synchronize the clocks on all hosts that will interact with the Message Queue system. Synchronization is particularly crucial if you are using message expiration (time-to-live). Time stamps from clocks that are not synchronized could prevent message expiration from working as expected and prevent the delivery of messages. Synchronization is also crucial for broker clusters.
Configure your systems to run a time synchronization protocol, such as Simple Network Time Protocol (SNTP). Time synchronization is generally supported by the xntpd daemon on Solaris and Linux, and by the W32Time service on Windows. (See your operating system documentation for information about configuring this service.) After the broker is running, avoid setting the system clock backward.
On the Solaris and Linux platforms, the shell in which a client or broker is running places a soft limit on the number of file descriptors that a process can use. In Message Queue, each connection a client makes, or a broker accepts, uses one of these file descriptors. Each physical destination that has persistent messages also uses a file descriptor.
As a result, the file descriptor limit constrains the number of connections a broker or client can have. By default, the maximum is 256 connections on Solaris or 1024 on Linux. (In practice, the connection limit is actually lower than this because of the use of file descriptors for persistent data storage.) If you need more connections than this, you must raise the file descriptor limit in each shell in which a client or broker will be executing. For information on how to do this, see the man page for the ulimit command.
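For example, the following commands (a minimal sketch; the limit value is arbitrary and should be chosen to match your expected connection load) raise the soft file descriptor limit in a Bourne-compatible shell and then start a broker in that same shell:

ulimit -n 4096
imqbrokerd -name myBroker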
You can start a broker either interactively, using the Message Queue command line utilities or the Windows Start menu, or by arranging for it to start automatically at system startup. The following sections describe how.
You can start a broker interactively from the command line, using the Broker utility (imqbrokerd). (Alternatively, on Windows, you can start a broker from the Start menu.) You cannot use the Administration Console (imqadmin) or the Command utility (imqcmd) to start a broker; the broker must already be running before you can use these tools.
On the Solaris and Linux platforms, a broker instance must always be restarted by the same user who initially started it. Each broker instance has its own set of configuration properties and file-based persistent data store. When the broker instance first starts, Message Queue uses the user’s file creation mode mask (umask) to set permissions on directories containing the configuration information and persistent data for that broker instance.
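For example (an illustrative sketch; the umask value is arbitrary), you could restrict the broker’s configuration and data directories to the owning user by setting a restrictive umask in the shell before starting the instance for the first time:

umask 077
imqbrokerd -name myBroker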
A broker instance has the instance name imqbroker by default. To start a broker from the command line with this name and the default configuration, simply use the command
imqbrokerd
This starts a broker instance named imqbroker on the local machine, with the Port Mapper at the default port of 7676 (see Port Mapper).
To specify an instance name other than the default, use the -name option to the imqbrokerd command. The following command starts a broker with the instance name myBroker:
imqbrokerd -name myBroker
Other options are available on the imqbrokerd command line to control various aspects of the broker’s operation. See Broker Utility for complete information on the syntax, subcommands, and options of the imqbrokerd command. For a quick summary of this information, enter the following command:
imqbrokerd -help
For example, the following command uses the -tty option to send errors and warnings to the command window (standard output):
imqbrokerd -name myBroker -tty
You can also use the -D option on the command line to override the values of properties specified in the broker’s instance configuration file (config.properties). The instance configuration file is described under Modifying Configuration Files. The following example sets a broker’s imq.jms.max_threads property, raising the maximum number of threads available to the jms connection service to 2000:
imqbrokerd -name myBroker -Dimq.jms.max_threads=2000
Instead of starting a broker explicitly from the command line, you can set it up to start automatically at system startup. How you do this depends on the platform (Solaris, Linux, or Windows) on which you are running the broker:
The method for enabling automatic startup on the Solaris 10 platform is different from that for Solaris 9. Both are described below.
On the Solaris 9 operating system, scripts that enable automatic startup are placed in the /etc/rc* directory tree during Message Queue installation. To enable the use of these scripts, you must edit the configuration file /etc/imq/imqbrokerd.conf as follows (a sample file follows this list):
To start the broker automatically at system startup, set the AUTOSTART property to YES.
To have the broker restart automatically after an abnormal exit, set the RESTART property to YES.
To set startup command line arguments for the broker, specify one or more values for the ARGS property.
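For example, a sketch of an /etc/imq/imqbrokerd.conf file that enables automatic startup and restart and passes startup arguments might look like the following (the ARGS value shown is purely illustrative):

AUTOSTART=YES
RESTART=YES
ARGS=-name myBroker -loglevel INFO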
To disable automatic broker startup at system startup, edit the configuration file /etc/imq/imqbrokerd.conf and set the AUTOSTART property to NO.
On Solaris 10, rather than using an rc script to implement automatic broker startup when the computer reboots, you use the Service Management Facility (SMF), as described in the following procedure.
For more information on using the Service Management Facility, please refer to Solaris 10 documentation.
Copy and change permissions on the mqbroker startup script.
# cp /var/svc/manifest/application/sun/mq/mqbroker /lib/svc/method
# chmod 555 /lib/svc/method/mqbroker
Import the mqbroker service into the SMF repository.
# svccfg import /var/svc/manifest/application/sun/mq/mqbroker.xml
Verify that the import was successful by checking the state of the mqbroker service.
# svcs mqbroker
Output resembles the following:
STATE          STIME     FMRI
disabled       16:22:50  svc:/application/sun/mq/mqbroker:default
The service is initially shown as disabled.
Enable the mqbroker service.
# svcadm enable svc:/application/sun/mq/mqbroker:default
Enabling the mqbroker service will start the imqbrokerd process. A reboot will subsequently restart the broker.
Configure the mqbroker service to pass any desired arguments to the imqbrokerd command.
The options/broker_args property is used to pass arguments to imqbrokerd. For example, to add -loglevel DEBUGHIGH, do the following:
# svccfg
svc:> select svc:/application/sun/mq/mqbroker
svc:/application/sun/mq/mqbroker> setprop options/broker_args="-loglevel DEBUGHIGH"
svc:/application/sun/mq/mqbroker> exit
Disable the mqbroker service.
# svcadm disable svc:/application/sun/mq/mqbroker:default
A subsequent reboot will not restart the broker.
On Linux systems, scripts that enable automatic startup are placed in the /etc/rc* directory tree during Message Queue installation. To enable the use of these scripts, you must edit the configuration file /etc/opt/sun/mq/imqbrokerd.conf as follows:
To start the broker automatically at system startup, set the AUTOSTART property to YES.
To have the broker restart automatically after an abnormal exit, set the RESTART property to YES.
To set startup command line arguments for the broker, specify one or more values for the ARGS property.
To disable automatic broker startup at system startup, edit the configuration file /etc/opt/sun/mq/imqbrokerd.conf and set the AUTOSTART property to NO.
To start a broker automatically at Windows system startup, you must define the broker as a Windows service. The broker will then start at system startup time and run in the background until system shutdown. Consequently, you will not need to use the Message Queue Broker utility (imqbrokerd) unless you want to start an additional broker.
A system can have no more than one broker running as a Windows service. The Windows Task Manager lists such a broker as two executable processes:
The native Windows service wrapper, imqbrokersvc.exe
The Java runtime that is running the broker
You can install a broker as a service when you install Message Queue on a Windows system. After installation, you can use the Service Administrator utility (imqsvcadmin) to perform the following operations:
Add a broker as a Windows service
Determine the startup options for the broker service
Disable a broker from running as a Windows service
To pass startup options to the broker, use the -args option to the imqsvcadmin command. This works the same way as the imqbrokerd command’s -D option, as described under Starting Brokers. Use the Command utility (imqcmd) to control broker operations as usual.
See Service Administrator Utility for complete information on the syntax, subcommands, and options of the imqsvcadmin command.
The procedure for reconfiguring a broker installed as a Windows service is as follows:
Stop the service:
From the Settings submenu of the Windows Start menu, choose Control Panel.
Open the Administrative Tools control panel.
Run the Services tool by selecting its icon and choosing Open from the File menu or the pop-up context menu, or simply by double-clicking the icon.
Under Services (Local), select the Message Queue Broker service and choose Properties from the Action menu.
Alternatively, you can right-click on Message Queue Broker and choose Properties from the pop-up context menu, or simply double-click on Message Queue Broker. In either case, the Message Queue Broker Properties dialog box will appear.
Under the General tab in the Properties dialog, click Stop to stop the broker service.
Remove the service.
On the command line, enter the command
imqsvcadmin remove
Reinstall the service, specifying different broker startup options with the -args option or different Java version arguments with the -vmargs option.
For example, to change the broker’s instance name and port number to broker1 and 7878, you could use the command
imqsvcadmin install -args "-name broker1 -port 7878"
You can use either the imqsvcadmin command’s -javahome or -jrehome option to specify the location of an alternative Java runtime. (You can also specify these options in the Start Parameters field under the General tab in the service’s Properties dialog window.)
The Start Parameters field treats the backslash character (\) as an escape character, so you must type it twice when using it as a path delimiter: for example,
-javahome c:\\j2sdk1.4.0
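On the imqsvcadmin command line itself (as opposed to the Start Parameters field), a single backslash suffices as the path delimiter. For example, the following command (a sketch; the path shown is illustrative) reinstalls the service with an alternative Java runtime and startup arguments in one step:

imqsvcadmin install -javahome c:\j2sdk1.4.0 -args "-name broker1 -port 7878"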
To determine the startup options for the broker service, use the imqsvcadmin query command, as shown in Example 3–1.
To disable a broker from running as a Windows service, use the command
imqcmd shutdown bkr
to shut down the broker, followed by
imqsvcadmin remove
to remove the service.
Alternatively, you can use the Windows Services tool, reached via the Administrative Tools control panel, to stop and remove the broker service.
Restart your computer after disabling the broker service.
If you get an error when you try to start a broker as a Windows service, you can view error events that were logged:
Open the Windows Administrative Tools control panel.
Start the Event Viewer tool.
Select the Application event log.
Choose Refresh from the Action menu to display any error events.
To delete a broker instance, use the imqbrokerd command with the -remove option:
imqbrokerd [options…] -remove instance
For example, if the name of the broker is myBroker, the command would be
imqbrokerd -name myBroker -remove instance
The command deletes the entire instance directory for the specified broker.
See Broker Utility for complete information on the syntax, subcommands, and options of the imqbrokerd command. For a quick summary of this information, enter the command
imqbrokerd -help
Before starting a client application, obtain information from the application developer about how to set up the system. If you are starting Java client applications, you must set the CLASSPATH variable appropriately and make sure you have the correct .jar files installed. The Message Queue Developer’s Guide for Java Clients contains information about generic steps for setting up the system, but your developer may have additional information to provide.
To start a Java client application, use the following command line format:
java clientAppName
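For example (a hedged sketch; the jar file names and locations shown are assumptions that vary by platform and release, so check the Developer’s Guide for your installation), you might set CLASSPATH and start the client from a Solaris or Linux shell as follows:

CLASSPATH=/usr/share/lib/imq.jar:/usr/share/lib/jms.jar:.
export CLASSPATH
java MyMQClient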
To start a C client application, use the format supplied by the application developer (see Building and Running C Clients in Sun Java System Message Queue 4.3 Developer’s Guide for C Clients).
The application’s documentation should provide information on attribute values that the application sets; you may want to override some of these from the command line. You may also want to specify attributes on the command line for any Java client that uses a Java Naming and Directory Interface (JNDI) lookup to find its connection factory. If the lookup returns a connection factory that is older than the application, the connection factory may lack support for more recent attributes. In such cases, Message Queue sets those attributes to default values; if necessary, you can use the command line to override these default values.
To specify attribute values from the command line for a Java application, use the following syntax:
java [ [-Dattribute=value] … ] clientAppName
The value for attribute must be a connection factory administered object attribute, as described in Chapter 18, Administered Object Attribute Reference. If there is a space in the value, put quotation marks around the attribute=value part of the command line.
The following example starts a client application named MyMQClient, connecting to a broker on the host OtherHost at port 7677:
java -DimqAddressList=mq://OtherHost:7677/jms MyMQClient
The host name and port specified on the command line override any others set by the application itself.
In some cases, you cannot use the command line to specify attribute values. An administrator can set an administered object to allow read access only, or an application developer can code the client application to do so. Communication with the application developer is necessary to understand the best way to start the client program.
A broker’s configuration is governed by a set of configuration files and by the options passed to the imqbrokerd command at startup. This chapter describes the available configuration properties and how to use them to configure a broker.
The chapter contains the following sections:
For full reference information about broker configuration properties, see Chapter 16, Broker Properties Reference
Broker configuration properties are logically divided into categories that depend on the services or broker components they affect:
Connection services manage the physical connections between a broker and its clients that provide transport for incoming and outgoing messages. For a discussion of properties associated with connection services, see Configuring Connection Services
Message delivery services route and deliver JMS payload messages, as well as control messages used by the message service to support reliable delivery. For a discussion of properties associated with message delivery services, including physical destinations, see Chapter 7, Managing Message Delivery
Persistence services manage the writing and retrieval of data, such as messages and state information, to and from persistent storage. For a discussion of properties associated with persistence services, see Chapter 8, Configuring Persistence Services
Security services authenticate users connecting to the broker and authorize their actions. For a discussion of properties associated with authentication and authorization services, as well as encryption configuration, see Chapter 9, Configuring and Managing Security Services
Clustering services support the grouping of brokers into a cluster to achieve scalability and availability. For a discussion of properties associated with broker clusters, see Chapter 10, Configuring and Managing Broker Clusters
Monitoring services generate metric and diagnostic information about the broker’s performance. For a discussion of properties associated with monitoring and managing a broker, see Chapter 12, Monitoring Broker Operations
You can specify a broker’s configuration properties in either of two ways:
Edit the broker’s configuration file.
Supply the property values directly from the command line.
The following sections describe these two methods of configuring a broker.
Broker configuration files contain property settings for configuring a broker. They are kept in a directory whose location depends on the operating system platform you are using; see Appendix A, Platform-Specific Locations of Message Queue Data for details. Message Queue maintains the following broker configuration files:
A default configuration file (default.properties) that is loaded on startup. This file is not editable, but you can read it to determine default settings and find the exact names of properties you want to change.
An installation configuration file (install.properties) containing any properties specified when Message Queue was installed. This file cannot be edited after installation.
A separate instance configuration file (config.properties) for each individual broker instance.
In addition, if you connect broker instances in a cluster, you may need to use a cluster configuration file (cluster.properties) to specify configuration information for the cluster; see Cluster Configuration Properties for more information.
Message Queue also makes use of an environment configuration file, imqenv.conf, which is used to provide the locations of external files needed by Message Queue, such as the default Java SE location and the locations of database drivers, JAAS login modules, and so forth.
At startup, the broker merges property values from the various configuration files. As shown in Figure 4–1, the files form a hierarchy in which values specified in the instance configuration file override those in the installation configuration file, which in turn override those in the default configuration file. At the top of the hierarchy, you can manually override any property values specified in the configuration files by using command line options to the imqbrokerd command.
The first time you run a broker, an instance configuration file is created containing configuration properties for that particular broker instance. The instance configuration file is named config.properties and is located in a directory identified by the name of the broker instance to which it belongs:
…/instances/instanceName/props/config.properties
(See Appendix A, Platform-Specific Locations of Message Queue Data for the location of the instances directory.) If the file does not yet exist, you must use the -name option when starting the broker (see Broker Utility) to specify an instance name that Message Queue can use to create the file.
The instances/instanceName directory and the instance configuration file are owned by the user who initially started the corresponding broker instance by using the imqbrokerd -name brokerName command. The broker instance must always be restarted by that same user.
The instance configuration file is maintained by the broker instance and is modified when you make configuration changes using Message Queue administration utilities. You can also edit an instance configuration file by hand. To do so, you must be the owner of the instances/instanceName directory or log in as the root user to change the directory’s access privileges.
The broker reads its instance configuration file only at startup. To effect any changes to the broker’s configuration, you must shut down the broker and then restart it. Property definitions in the config.properties file (or any configuration file) use the following syntax:
propertyName=value [ [,value1] … ]
For example, the following entry specifies that the broker will hold up to 50,000 messages in memory and persistent storage before rejecting additional messages:
imq.system.max_count=50000
The following entry specifies that a new log file will be created once a day (every 86,400 seconds):
imq.log.file.rolloversecs=86400
See Broker Services and Chapter 16, Broker Properties Reference for information on the available broker configuration properties and their default values.
You can enter broker configuration properties from the command line when you start a broker, or afterward.
At startup time, you use the Broker utility (imqbrokerd) to start a broker instance. Using the command’s -D option, you can specify any broker configuration property and its value; see Starting Brokers and Broker Utility for more information. If you start the broker as a Windows service, using the Service Administrator utility (imqsvcadmin), you use the -args option to specify startup configuration properties; see Service Administrator Utility.
You can also change certain broker configuration properties while a broker is running. To modify the configuration of a running broker, you use the Command utility’s imqcmd update bkr command; see Updating Broker Properties and Broker Management.
This chapter explains how to use the Message Queue Command utility (imqcmd) to manage a broker. The chapter has the following sections:
This chapter does not cover all topics related to managing a broker. Additional topics are covered in the following separate chapters:
For information on configuring and managing connection services, see Chapter 6, Configuring and Managing Connection Services.
For information on managing message delivery services, including how to create, display, update, and destroy physical destinations, see Chapter 7, Managing Message Delivery.
For information on configuring and managing persistence services, for both flat-file and JDBC-based data stores, see Chapter 8, Configuring Persistence Services.
For information about setting up security for the broker, such as user authentication, access control, encryption, and password files, see Chapter 9, Configuring and Managing Security Services.
For information on configuring and managing clustering services, for both conventional and enhanced broker clusters, see Chapter 10, Configuring and Managing Broker Clusters.
For information about monitoring a broker, see Chapter 12, Monitoring Broker Operations.
Before using the Command utility to manage a broker, you must do the following:
Start the broker using the imqbrokerd command. You cannot use the Command utility subcommands until a broker is running.
Determine whether you want to set up a Message Queue administrative user or use the default account. You must specify a user name and password to use all Command utility subcommands (except to display command help and version information).
When you install Message Queue, a default flat-file user repository is installed. The repository is shipped with two default entries: an administrative user and a guest user. If you are testing Message Queue, you can use the default user name and password (admin/admin) to run the Command utility.
If you are setting up a production system, you must set up authentication and authorization for administrative users. See Chapter 9, Configuring and Managing Security Services for information on setting up a file-based user repository or configuring the use of an LDAP directory server. In a production environment, it is a good security practice to use a nondefault user name and password.
If you want to use a secure connection to the broker, set up and enable the ssladmin service on the target broker instance. For more information, see Message Encryption.
The Message Queue Command utility (imqcmd) enables you to manage the broker and its services interactively from the command line. See Command Utility for general reference information about the syntax, subcommands, and options of the imqcmd command, and Chapter 16, Broker Properties Reference for specific information on the configuration properties used to specify broker behavior.
Because each imqcmd subcommand is authenticated against the user repository, it requires a user name and password. The only exceptions are commands that use the -h or -H option to display help, and those that use the -v option to display the product version.
Use the -u option to specify an administrative user name. For example, the following command displays information about the default broker:
imqcmd query bkr -u admin
If you omit the user name, the command will prompt you for it.
For simplicity, the examples in this chapter use the default user name admin as the argument to the -u option. In a real-life production environment, you would use a custom user name.
Specify the password using one of the following methods:
Create a password file and enter the password into that file as the value of the imq.imqcmd.password property. On the command line, use the -passfile option to provide the name of the password file (see the example below).
Let the imqcmd command prompt you for the password.
In previous versions of Message Queue, you could use the -p option to specify a password on the imqcmd command line. As of Message Queue 4.0, this option is deprecated and is no longer supported; you must instead use one of the methods listed above.
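For example (an illustrative sketch; the file name and password shown are arbitrary), a password file for use with imqcmd might contain the single line

imq.imqcmd.password=admin

which you would then reference with a command such as imqcmd query bkr -u admin -passfile /tmp/myPassfile.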
Most imqcmd subcommands use the -b option to specify the host name and port number of the broker to which the command applies:
-b hostName:portNumber
If no broker is specified, the command applies by default to a broker running on the local host (localhost) at port number 7676. To issue a command to a broker that is running on a remote host, listening on a nondefault port, or both, you must use the -b option to specify the host and port explicitly.
To display the Message Queue product version, use the -v option. For example:
imqcmd -v
If you enter an imqcmd command line containing the -v option in addition to a subcommand or other options, the Command utility processes only the -v option. All other items on the command line are ignored.
To display help on the imqcmd command, use the -h or -H option, and do not use a subcommand. You cannot get help about specific subcommands.
For example, the following command displays help about imqcmd:
imqcmd -H
If you enter an imqcmd command line containing the -h or -H option in addition to a subcommand or other options, the Command utility processes only the -h or -H option. All other items on the command line are ignored.
The examples in this section illustrate how to use the imqcmd command.
The following example lists the properties of the broker running on host localhost at port 7676, so the -b option is unnecessary:
imqcmd query bkr -u admin
The command uses the default administrative user name (admin) and omits the password, so that the command will prompt for it.
The following example lists the properties of the broker running on the host myserver at port 1564. The user name is aladdin:
imqcmd query bkr -b myserver:1564 -u aladdin
(For this command to work, the user repository would need to be updated to add the user name aladdin to the admin group.)
The following example lists the properties of the broker running on localhost at port 7676. The initial timeout for the command is set to 20 seconds and the number of retries after timeout is set to 7. The user’s password is in a password file called myPassfile, located in the current directory at the time the command is invoked.
imqcmd query bkr -u admin -passfile myPassfile -rtm 20 -rtr 7
For a secure connection to the broker, these examples could include the -secure option. This option causes the Command utility to use the ssladmin service if that service has been configured and started.
This section describes how to use Command utility subcommands to perform the following broker management tasks:
In addition to using the subcommands described in the following sections, imqcmd allows you to set system properties using the -D option. This is useful for setting or overriding connection factory properties or connection-related Java system properties.
For example, the following command changes the default value of imqSSLIsHostTrusted:
imqcmd list svc -secure -DimqSSLIsHostTrusted=true
The following command specifies the trust store file and the trust store password to be used by the imqcmd command for its SSL connection:
imqcmd list svc -secure -Djavax.net.ssl.trustStore=/tmp/MyTruststore -Djavax.net.ssl.trustStorePassword=MyTrustword
The subcommand imqcmd shutdown bkr shuts down a broker:
imqcmd shutdown bkr [-b hostName:portNumber] [-time nSeconds] [-nofailover]
The broker stops accepting new connections and messages, completes delivery of existing messages, and terminates the broker process.
The -time option, if present, specifies the interval, in seconds, to wait before shutting down the broker. For example, the following command delays 90 seconds and then shuts down the broker running on host wolfgang at port 1756:
imqcmd shutdown bkr -b wolfgang:1756 -time 90 -u admin
The broker will not block, but will return immediately from the delayed shutdown request. During the shutdown interval, the broker will not accept any new jms connections; admin connections will be accepted, and existing jms connections will continue to operate. If the broker belongs to an enhanced broker cluster, it will not attempt to take over for any other broker during the shutdown interval.
If the broker is part of an enhanced broker cluster (see High-Availability Clusters in Sun Java System Message Queue 4.3 Technical Overview), another broker in the cluster will ordinarily attempt to take over its persistent data on shutdown; the -nofailover option to the imqcmd shutdown bkr subcommand suppresses this behavior. Conversely, you can use the imqcmd takeover bkr subcommand to force such a takeover manually (for instance, if the takeover broker were to fail before completing the takeover process); see Preventing or Forcing Broker Failover for more information.
The imqcmd takeover bkr subcommand is intended only for use in failed-takeover situations. You should use it only as a last resort, and not as a general way of forcibly taking over a running broker.
To shut down and restart a broker, use the subcommand imqcmd restart bkr:
imqcmd restart bkr [-b hostName:portNumber]
This shuts down the broker and then restarts it using the same options that were specified when it was first started. To choose different options, shut down the broker with imqcmd shutdown bkr and then start it again with the Broker utility (imqbrokerd), specifying the options you want.
The subcommand imqcmd quiesce bkr quiesces a broker, causing it to refuse any new client connections while continuing to service old ones:
imqcmd quiesce bkr [-b hostName:portNumber]
If the broker is part of an enhanced broker cluster, this allows its operations to wind down normally without triggering a takeover by another broker, for instance in preparation for shutting it down administratively for upgrade or similar purposes. For example, the following command quiesces the broker running on host hastings at port 1066:
imqcmd quiesce bkr -b hastings:1066 -u admin
To reverse the process and return the broker to normal operation, use the imqcmd unquiesce bkr subcommand:
imqcmd unquiesce bkr [-b hostName:portNumber]
For example, the following command unquiesces the broker that was quiesced in the preceding example:
imqcmd unquiesce bkr -b hastings:1066 -u admin
The subcommand imqcmd pause bkr pauses a broker, suspending its connection service threads and causing it to stop listening on the connection ports:
imqcmd pause bkr [-b hostName:portNumber]
For example, the following command pauses the broker running on host myhost at port 1588:
imqcmd pause bkr -b myhost:1588 -u admin
Because its connection service threads are suspended, a paused broker is unable to accept new connections, receive messages, or dispatch messages. The admin connection service is not suspended, allowing you to continue performing administrative tasks needed to regulate the flow of messages to the broker. Pausing a broker also does not suspend the cluster connection service; however, since message delivery within a cluster depends on the delivery functions performed by the different brokers in the cluster, pausing a broker in a cluster may result in a slowing of some message traffic.
You can also pause individual connection services and physical destinations. For more information, see Pausing and Resuming a Connection Service and Pausing and Resuming a Physical Destination.
The subcommand imqcmd resume bkr reactivates a broker’s service threads, causing it to resume listening on the ports:
imqcmd resume bkr [-b hostName:portNumber]
For example, the following command resumes the default broker (host localhost at port 7676):
imqcmd resume bkr -u admin
The subcommand imqcmd update bkr can be used to change the values of a subset of broker properties for the default broker (or for the broker at a specified host and port):
imqcmd update bkr [-b hostName:portNumber] -o property1=value1 [ [-o property2=value2] … ]
For example, the following command turns off the auto-creation of queue destinations for the default broker:
imqcmd update bkr -o imq.autocreate.queue=false -u admin
You can use imqcmd update bkr to update any of the following broker properties:
imq.autocreate.queue.maxNumActiveConsumers
imq.autocreate.queue.maxNumBackupConsumers
imq.destination.DMQ.truncateBody
See Chapter 16, Broker Properties Reference for detailed information about these properties.
To display information about a single broker, use the imqcmd query bkr subcommand:
imqcmd query bkr -b hostName:portNumber
This lists the current settings of the broker’s properties, as shown in Example 5–1.
The imqcmd metrics bkr subcommand displays detailed metric information about a broker’s operation:
imqcmd metrics bkr [-b hostName:portNumber] [-m metricType] [-int interval] [-msp numSamples]
The -m option specifies the type of metric information to display:
ttl (default): Messages and packets flowing into and out of the broker
rts: Rate of flow of messages and packets into and out of the broker per second
cxn: Connections, virtual memory heap, and threads
The -int and -msp options specify, respectively, the interval (in seconds) at which to display the metrics and the number of samples to display in the output. The default values are 5 seconds and an unlimited number of samples.
For example, the following command displays the rate of message flow into and out of the default broker (host localhost at port 7676) at 10-second intervals:
imqcmd metrics bkr -m rts -int 10 -u admin
Example 5–2 shows an example of the resulting output.
For a more detailed description of the data gathered and reported by the broker, see Brokerwide Metrics.
For brokers belonging to a broker cluster, the imqcmd list bkr subcommand displays information about the configuration of the cluster; see Displaying a Cluster Configuration for more information.
Message Queue offers various connection services using a variety of transport protocols for connecting both application and administrative clients to a broker. This chapter describes how to configure and manage these services and the connections they support:
Broker configuration properties related to connection services are listed under Connection Properties.
Figure 6–1 shows the connection services provided by the Message Queue broker.
These connection services are distinguished by two characteristics, as shown in Table 6–1:
The service type specifies whether the service provides JMS message delivery (NORMAL) or Message Queue administration services (ADMIN).
The protocol type specifies the underlying transport protocol.
| Service Name | Service Type | Protocol Type |
|---|---|---|
| jms | NORMAL | tcp |
| ssljms | NORMAL | tls |
| httpjms | NORMAL | http |
| httpsjms | NORMAL | https |
| admin | ADMIN | tcp |
| ssladmin | ADMIN | tls |
By setting a broker’s imq.service.activelist property, you can configure it to run any or all of these connection services. The value of this property is a list of connection services to be activated when the broker is started up; if the property is not specified explicitly, the jms and admin services will be activated by default.
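For example, the following entry in a broker’s instance configuration file (config.properties) activates only the jms, ssljms, and admin services (an illustrative setting):

imq.service.activelist=jms,ssljms,admin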
Each connection service also supports specific authentication and authorization features; see Introduction to Security Services for more information.
There is also a special cluster connection service, used internally by the brokers within a broker cluster to exchange information about the cluster’s configuration and state. This service is not intended for use by clients communicating with a broker. See Chapter 10, Configuring and Managing Broker Clusters for more information about broker clusters.
There are also two JMX connectors, jmxrmi and ssljmxrmi, that support JMX-based administration. These JMX connectors are very similar to the connection services in Table 6–1, above, and are used by JMX clients to establish a connection to the broker's MBean server. For more information, see JMX Connection Infrastructure.
Each connection service is available at a particular port, specified by host name (or IP address) and port number. You can explicitly specify a static port number for a service or have the broker’s Port Mapper assign one dynamically. The Port Mapper itself resides at the broker’s primary port, which is normally located at the standard port number 7676. (If necessary, you can use the broker configuration property imq.portmapper.port to override this with a different port number.) By default, each connection service registers itself with the Port Mapper when it starts up. When a client creates a connection to the broker, the Message Queue client runtime first contacts the Port Mapper, requesting a port number for the desired connection service.
Alternatively, you can override the Port Mapper and explicitly assign a static port number to a connection service, using the imq.serviceName.protocolType.port configuration property (where serviceName and protocolType identify the specific connection service, as shown in Table 6–1). (Only the jms, ssljms, admin, and ssladmin connection services can be configured this way; the httpjms and httpsjms services use different configuration properties, described in Appendix C, HTTP/HTTPS Support). Static ports are generally used only in special situations, however, such as in making connections through a firewall (see Connecting Through a Firewall), and are not recommended for general use.
In cases where two or more hosts are available (such as when more than one network interface card is installed in a computer), you can use broker properties to specify which host the connection services should bind to. The imq.hostname property designates a single default host for all connection services; this can then be overridden, if necessary, with imq.serviceName.protocolType.hostname (for the jms, ssljms, admin, or ssladmin service) or imq.portmapper.hostname (for the Port Mapper itself).
When multiple Port Mapper requests are received concurrently, they are stored in an operating system backlog while awaiting action. The imq.portmapper.backlog property specifies the maximum number of such backlogged requests. When this limit is exceeded, any further requests will be rejected until the backlog is reduced.
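For example, the following config.properties entries (illustrative values; the port numbers are arbitrary, and the imq.jms.tcp.port property name assumes the jms service’s tcp protocol type, as shown in Table 6–1) move the Port Mapper to a nondefault port, enlarge its request backlog, and assign a static port to the jms connection service:

imq.portmapper.port=12345
imq.portmapper.backlog=100
imq.jms.tcp.port=12346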
Each connection service is multithreaded, supporting multiple connections. The threads needed for these connections are maintained by the broker in a separate thread pool for each service. As threads are needed by a connection, they are added to the thread pool for the service supporting that connection.
The threading model you choose specifies whether threads are dedicated to a single connection or shared by multiple connections:
In the dedicated model, each connection to the broker requires two threads: one for incoming and one for outgoing messages. This limits the number of connections that can be supported, but provides higher performance.
In the shared model, connections are processed by a shared thread when sending or receiving messages. Because each connection does not require dedicated threads, this model increases the number of possible connections, but at the cost of lower performance because of the additional overhead needed for thread management.
The broker’s imq.serviceName.threadpool_model property specifies which of the two models to use for a given connection service. This property takes either of two string values: dedicated or shared. If you don’t set the property explicitly, dedicated is assumed by default.
You can also set the broker properties imq.serviceName.min_threads and imq.serviceName.max_threads to specify a minimum and maximum number of threads in a service’s thread pool. When the number of available threads exceeds the specified minimum threshold, Message Queue will shut down threads as they become free until the minimum is reached again, thereby saving on memory resources. Under heavy loads, the number of threads might increase until the pool’s maximum number is reached; at this point, new connections are rejected until a thread becomes available.
The shared threading model uses distributor threads to assign threads to active connections. The broker property imq.shared.connectionMonitor_limit specifies the maximum number of connections that can be monitored by a single distributor thread. The smaller the value of this property, the faster threads can be assigned to connections. The imq.ping.interval property specifies the time interval, in seconds, at which the broker will periodically test (“ping”) a connection to verify that it is still active, allowing connection failures to be detected preemptively before an attempted message transmission fails.
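For example, the following config.properties entries (illustrative values) configure the jms service to use the shared threading model with between 10 and 1000 threads in its pool, and limit each distributor thread to monitoring 512 connections:

imq.jms.threadpool_model=shared
imq.jms.min_threads=10
imq.jms.max_threads=1000
imq.shared.connectionMonitor_limit=512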
Message Queue brokers support connections from both application clients and administrative clients. See Configuring Connection Services for a description of the available connection services. The Command utility provides subcommands that you can use for managing both connection services as a whole and individual services; to apply a subcommand to a particular service, use the -n option to specify one of the names listed in the “Service Name” column of Table 6–1. Subcommands are available for the following connection service management tasks:
Pausing a connection service has the following effects:
The broker stops accepting new client connections on the paused service. If a Message Queue client attempts to open a new connection, it will get an exception.
All existing connections on the paused service are kept alive, but the broker suspends all message processing on such connections until the service is resumed. (For example, if a client attempts to send a message, the send method will block until the service is resumed.)
The message delivery state of any messages already received by the broker is maintained. (For example, transactions are not disrupted and message delivery will resume when the service is resumed.)
The admin connection service can never be paused; to pause and resume any other service, use the subcommands imqcmd pause svc and imqcmd resume svc. The syntax of the imqcmd pause svc subcommand is as follows:
imqcmd pause svc -n serviceName [-b hostName:portNumber]
For example, the following command pauses the httpjms service running on the default broker (host localhost at port 7676):
imqcmd pause svc -n httpjms -u admin
The imqcmd resume svc subcommand resumes operation of a connection service following a pause:
imqcmd resume svc -n serviceName [-b hostName:portNumber]
You can use the imqcmd update svc subcommand to change the value of one or more of the service properties listed in Table 6–2. See Connection Properties for a description of these properties.
Table 6–2 Connection Service Properties Updated by Command Utility
| Property | Description |
|---|---|
| port | Port assigned to the service to be updated (does not apply to httpjms or httpsjms). A value of 0 means the port is dynamically allocated by the Port Mapper. |
| minThreads | Minimum number of threads assigned to the service |
| maxThreads | Maximum number of threads assigned to the service |
The imqcmd update svc subcommand has the following syntax:
imqcmd update svc -n serviceName [-b hostName:portNumber] -o property1=value1 [[-o property2=value2]…]
For example, the following command changes the minimum number of threads assigned to the jms connection service on the default broker (host localhost at port 7676) to 20:
imqcmd update svc -n jms -o minThreads=20 -u admin
To list the connection services available on a broker, use the imqcmd list svc subcommand:
imqcmd list svc [-b hostName:portNumber]
For example, the following command lists all services on the default broker (host localhost at port 7676):
imqcmd list svc -u admin
Example 6–1 shows an example of the resulting output.
The imqcmd query svc subcommand displays information about a single connection service:
imqcmd query svc -n serviceName [-b hostName:portNumber]
For example, the following command displays information about the jms connection service on the default broker (host localhost at port 7676):
imqcmd query svc -n jms -u admin
Example 6–2 shows an example of the resulting output.
To display metrics information about a connection service, use the imqcmd metrics svc subcommand:
imqcmd metrics svc -n serviceName [-b hostName:portNumber] [-m metricType] [-int interval] [-msp numSamples]
The -m option specifies the type of metric information to display:
ttl (default): Messages and packets flowing into and out of the broker by way of the specified connection service
rts: Rate of flow of messages and packets into and out of the broker per second by way of the specified connection service
cxn: Connections, virtual memory heap, and threads
The -int and -msp options specify, respectively, the interval (in seconds) at which to display the metrics and the number of samples to display in the output. The default values are 5 seconds and an unlimited number of samples.
For example, the following command displays cumulative totals for messages and packets handled by the default broker (host localhost at port 7676) by way of the jms connection service:
imqcmd metrics svc -n jms -m ttl -u admin
Example 6–3 shows an example of the resulting output.
For a more detailed description of the use of the Command utility to report connection service metrics, see Connection Service Metrics.
The Command utility’s list cxn and query cxn subcommands display information about individual connections. The subcommand imqcmd list cxn lists all connections for a specified connection service:
imqcmd list cxn [-svn serviceName] [-b hostName:portNumber]
If no service name is specified, all connections are listed. For example, the following command lists all connections on the default broker (host localhost at port 7676):
imqcmd list cxn -u admin
Example 6–4 shows an example of the resulting output.
To display detailed information about a single connection, obtain the connection identifier from imqcmd list cxn and pass it to the imqcmd query cxn subcommand:
imqcmd query cxn -n connectionID [-b hostName:portNumber]
For example, the command
imqcmd query cxn -n 421085509902214374 -u admin
produces output like that shown in Example 6–5.
The imqcmd destroy cxn subcommand destroys a connection:
imqcmd destroy cxn -n connectionID [-b hostName:portNumber]
For example, the command
imqcmd destroy cxn -n 421085509902214374 -u admin
destroys the connection shown in Example 6–5.
A Message Queue message is routed to its consumer clients by way of a physical destination on a message broker. The broker manages the memory and persistent storage associated with the physical destination and configures its behavior. The broker also manages memory at a system-wide level, to assure that sufficient resources are available to support all destinations.
Message delivery also involves the maintenance of state information needed by the broker to route messages to consumers and to track acknowledgements and transactions.
This chapter provides information needed to manage message delivery, and includes the following topics:
This section describes how to use the Message Queue Command utility (imqcmd) to manage physical destinations. It includes discussion of a specialized physical destination managed by the broker, the dead message queue, whose properties differ somewhat from those of other destinations.
In a broker cluster, you create a physical destination on one broker and the cluster propagates it to all the others. Because the brokers cooperate to route messages across the cluster, client applications can consume messages from destinations on any broker in the cluster. However, the persistence and acknowledgment of a message are managed only by the broker to which the message was originally produced.
This section covers the following topics regarding the management of physical destinations:
For provider independence and portability, client applications typically use destination administered objects to interact with physical destinations. Chapter 11, Managing Administered Objects describes how to configure such administered objects for use by client applications. For a general conceptual introduction to physical destinations, see the Message Queue Technical Overview.
The Message Queue Command utility (imqcmd) enables you to manage physical destinations interactively from the command line. See Chapter 15, Command Line Reference for general reference information about the syntax, subcommands, and options of the imqcmd command, and Chapter 17, Physical Destination Property Reference for specific information on the configuration properties used to specify physical destination behavior.
Table 7–1 lists the imqcmd subcommands for physical destination management. For full reference information about these subcommands, see Table 15–7.
Table 7–1 Physical Destination Subcommands for the Command Utility
| Subcommand | Description |
|---|---|
| create dst | Create physical destination |
| destroy dst | Destroy physical destination |
| pause dst | Pause message delivery for physical destination |
| resume dst | Resume message delivery for physical destination |
| purge dst | Purge all messages from physical destination |
| compact dst | Compact physical destination |
| update dst | Set physical destination properties |
| list dst | List physical destinations |
| query dst | List physical destination property values |
| metrics dst | Display physical destination metrics |
The subcommand imqcmd create dst creates a new physical destination:
imqcmd create dst -t destType -n destName [ [-o property=value] … ]
You supply the destination type (q for a queue or t for a topic) and the name of the destination.
Destination names must conform to the rules described below for queue and topic destinations.
Queue destination names must conform to the following rules:
It must contain only alphanumeric characters.
It must not contain spaces.
It must begin with an alphabetic character (A–Z, a–z), an underscore (_), or a dollar sign ($).
It must not begin with the characters mq.
For example, the following command creates a queue destination named XQueue:
imqcmd create dst -t q -n XQueue
Topic destination names must conform to the same rules as queue destination names, as specified in Supported Queue Destination Names, except that Message Queue also supports topic destination names that include wildcard characters, representing multiple destinations. These symbolic names allow publishers to publish messages to multiple topics and subscribers to consume messages from multiple topics. Using symbolic names, you can create destinations, as needed, consistent with the wildcard naming scheme. Publishers and subscribers automatically publish to and consume from any added destinations that match the symbolic names. (Wildcard topic subscribers are more common than wildcard publishers.)
The format of a symbolic topic destination name consists of multiple segments, in which wildcard characters (*, **, >) can represent one or more segments of the name. For example, suppose you have a topic destination naming scheme as follows:
size.color.shape
where the topic name segments can have the following values:
size: large, medium, small, ...
color: red, green, blue, ...
shape: circle, triangle, square, ...
Message Queue supports the following wildcard characters:
* matches a single segment
** matches one or more segments
> matches any number of successive segments
You can therefore indicate multiple topic destinations as follows:
large.*.circle would represent:
large.red.circle large.green.circle ...
**.square would represent all names ending in .square, for example:
small.green.square medium.blue.square ...
small.> would represent all destination names starting with small., for example:
small.blue.circle small.red.square ...
To use this multiple destination feature, you create topic destinations using a naming scheme similar to that described above. For example, the following command creates a topic destination named large.green.circle:
imqcmd create dst -t t -n large.green.circle
Client applications can then create wildcard publishers or wildcard consumers using symbolic destination names, as shown in the following examples:
...
String DEST_LOOKUP_NAME = "large.*.circle";
Topic t = (Topic) ctx.lookup(DEST_LOOKUP_NAME);
TopicPublisher myPublisher = mySession.createPublisher(t);
myPublisher.send(myMessage);
In this example, the broker will place a copy of the message in any destination that matches the symbolic name large.*.circle.
...
String DEST_LOOKUP_NAME = "**.square";
Topic t = (Topic) ctx.lookup(DEST_LOOKUP_NAME);
TopicSubscriber mySubscriber = mySession.createSubscriber(t);
Message m = mySubscriber.receive();
In this example, a subscriber will be created if there is at least one destination that matches the symbolic name **.square and will receive messages from all destinations that match that symbolic name. If there are no destinations matching the symbolic name, the subscriber will not be registered with the broker until such a destination exists.
If you create additional destinations that match a symbolic name, then wildcard publishers created using that symbolic name will subsequently publish to that destination and wildcard subscribers created using that symbolic name will subsequently receive messages from that destination.
Message Queue administration tools, in addition to reporting the total number of publishers (producers) and subscribers (consumers) for a topic destination, also report the number of publishers that are wildcard publishers (including their corresponding symbolic destination names) and the number of subscribers that are wildcard subscribers (including their symbolic destination names), if any. See Viewing Physical Destination Information.
The imqcmd create dst command may also optionally include any property values you wish to set for the destination, specified with the -o option. For example, the following command creates a topic destination named hotTopic with a maximum message length of 5000 bytes:
imqcmd create dst -t t -n hotTopic -o maxBytesPerMsg=5000
See Chapter 17, Physical Destination Property Reference for reference information about the physical destination properties that can be set with this option. (For auto-created destinations, you set default property values in the broker’s instance configuration file; see Table 16–3 for information on these properties.)
To destroy a physical destination, use the imqcmd destroy dst subcommand:
imqcmd destroy dst -t destType -n destName
This purges all messages at the specified destination and removes it from the broker; the operation is not reversible.
For example, the following command destroys the queue destination named curlyQueue:
imqcmd destroy dst -t q -n curlyQueue -u admin
You cannot destroy the dead message queue.
Pausing a physical destination temporarily suspends the delivery of messages from producers to the destination, from the destination to consumers, or both. This can be useful, for instance, to prevent destinations from being overwhelmed when messages are being produced much faster than they are consumed. You must also pause a physical destination before compacting it (see Managing Physical Destination Disk Utilization).
To pause the delivery of messages to or from a physical destination, use the imqcmd pause dst subcommand:
imqcmd pause dst [-t destType -n destName] [-pst pauseType]
If you omit the destination type and name (-t and -n options), all physical destinations will be paused. The pause type (-pst) specifies what type of message delivery to pause:
PRODUCERS: Pause delivery from message producers to the destination
CONSUMERS: Pause delivery from the destination to message consumers
ALL: Pause all message delivery (both producers and consumers)
If no pause type is specified, all message delivery will be paused.
For example, the following command pauses delivery from message producers to the queue destination curlyQueue:
imqcmd pause dst -t q -n curlyQueue -pst PRODUCERS -u admin
The following command pauses delivery to message consumers from the topic destination hotTopic:
imqcmd pause dst -t t -n hotTopic -pst CONSUMERS -u admin
This command pauses all message delivery to and from all physical destinations:
imqcmd pause dst -u admin
In a broker cluster, since each broker in the cluster has its own instance of each physical destination, you must pause each such instance individually.
The imqcmd resume dst subcommand resumes delivery to a paused destination:
imqcmd resume dst [-t destType -n destName]
For example, the following command resumes message delivery to the queue destination curlyQueue:
imqcmd resume dst -t q -n curlyQueue -u admin
If no destination type and name are specified, all destinations are resumed. This command resumes delivery to all physical destinations:
imqcmd resume dst -u admin
Purging a physical destination deletes all messages it is currently holding. You might want to do this when a destination’s accumulated messages are taking up too much of the system’s resources, such as when a queue is receiving messages but has no registered consumers to which to deliver them, or when a topic’s durable subscribers remain inactive for long periods of time.
To purge a physical destination, use the imqcmd purge dst subcommand:
imqcmd purge dst -t destType -n destName
For example, the following command purges all accumulated messages from the topic destination hotTopic:
imqcmd purge dst -t t -n hotTopic -u admin
In a broker cluster, since each broker in the cluster has its own instance of each physical destination, you must purge each such instance individually.
When restarting a broker that has been shut down, you can use the Broker utility’s -reset messages option to clear out its stale messages: for example,
imqbrokerd -reset messages -u admin
This saves you the trouble of purging physical destinations after restarting the broker.
The subcommand imqcmd update dst changes the values of specified properties of a physical destination:
imqcmd update dst -t destType -n destName -o property1=value1 [ [-o property2=value2] … ]
The properties to be updated can include any of those listed in Table 17–1 (with the exception of the isLocalOnly property, which cannot be changed once the destination has been created). For example, the following command changes the maxBytesPerMsg property of the queue destination curlyQueue to 1000 and the maxNumMsgs property to 2000:
imqcmd update dst -t q -n curlyQueue -u admin -o maxBytesPerMsg=1000 -o maxNumMsgs=2000
The type of a physical destination is not an updatable property; you cannot use the imqcmd update dst subcommand to change a queue to a topic or a topic to a queue.
To list the physical destinations on a broker, use the imqcmd list dst subcommand:
imqcmd list dst -b hostName:portNumber [-t destType] [-tmp]
This lists all physical destinations on the broker identified by hostName and portNumber of the type (queue or topic) specified by destType. If the -t option is omitted, both queues and topics are listed. For example, the following command lists all physical destinations on the broker running on host myHost at port number 4545:
imqcmd list dst -b myHost:4545
The list of queue destinations always includes the dead message queue (mq.sys.dmq) in addition to any other queue destinations currently existing on the broker.
If you specify the -tmp option, temporary destinations are listed as well. These are destinations created by clients, normally for the purpose of receiving replies to messages sent to other clients.
The imqcmd query dst subcommand displays information about a single physical destination:
imqcmd query dst -t destType -n destName
For example, the following command displays information about the queue destination curlyQueue:
imqcmd query dst -t q -n curlyQueue -u admin
Example 7–3 shows an example of the resulting output. You can use the imqcmd update dst subcommand (see Updating Physical Destination Properties) to change the value of any of the properties listed.
For destinations in a broker cluster, it is often helpful to know how many messages in a destination are local (produced to the local broker) and how many are remote (produced to a remote broker). Hence, imqcmd query dst reports, in addition to the number and total message bytes of messages in the destination, the number and total bytes of messages that are sent to the destination from remote brokers in the cluster.
For topic destinations, imqcmd query dst reports the number of publishers that are wildcard publishers (including their corresponding symbolic destination names) and the number of subscribers that are wildcard subscribers (including their symbolic destination names), if any.
To display metrics information about a physical destination, use the imqcmd metrics dst subcommand:
imqcmd metrics dst -t destType -n destName [-m metricType] [-int interval] [-msp numSamples]
The -m option specifies the type of metric information to display:
ttl (default): Messages and packets flowing into and out of the destination and residing in memory
rts: Rate of flow of messages and packets into and out of the destination per second, along with other rate information
con: Metrics related to message consumers
dsk: Disk usage
The -int and -msp options specify, respectively, the interval (in seconds) at which to display the metrics and the number of samples to display in the output. The default values are 5 seconds and an unlimited number of samples.
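For instance, assuming the queue destination curlyQueue, a command along the following lines would sample rate metrics every 10 seconds and display 12 samples (the metric type, interval, and sample count shown here are purely illustrative):
imqcmd metrics dst -t q -n curlyQueue -m rts -int 10 -msp 12 -u admin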
For example, the following command displays cumulative totals for messages and packets handled by the queue destination curlyQueue:
imqcmd metrics dst -t q -n curlyQueue -m ttl -u admin
Example 7–4 shows an example of the resulting output.
For a more detailed description of the use of the Command utility to report physical destination metrics, see Physical Destination Metrics.
Because of the way message storage is structured in a file-based persistent data store (see File-Based Persistence Properties), disk space can become fragmented over time, resulting in inefficient utilization of the available resources. Message Queue’s Command utility (imqcmd) provides subcommands for monitoring disk utilization by physical destinations and for reclaiming unused disk space when utilization drops.
To monitor a physical destination’s disk utilization, use the imqcmd metrics dst subcommand:
imqcmd metrics dst -m dsk -t destType -n destName
This displays the total number of bytes of disk space reserved for the destination’s use, the number of bytes currently in use to hold active messages, and the percentage of available space in use (the disk utilization ratio). For example, the following command displays disk utilization information for the queue destination curlyQueue:
imqcmd metrics dst -m dsk -t q -n curlyQueue -u admin
Example 7–5 shows an example of the resulting output.
The disk utilization pattern depends on the characteristics of the messaging application using a particular physical destination. Depending on the flow of messages into and out of the destination and their relative size, the amount of disk space reserved might grow over time. If messages are produced at a higher rate than they are consumed, free records should generally be reused and the utilization ratio should be on the high side. By contrast, if the rate of message production is comparable to or lower than the consumption rate, the utilization ratio will likely be low.
As a rule, you want the reserved disk space to stabilize and the utilization ratio to remain high. If the system reaches a steady state in which the amount of reserved disk space remains more or less constant with utilization above 75%, there is generally no need to reclaim unused disk space. If the reserved space stabilizes at a utilization rate below 50%, you can use the imqcmd compact dst subcommand to reclaim the disk space occupied by free records:
imqcmd compact dst [-t destType -n destName]
This compacts the file-based data store for the designated physical destination. If no destination type and name are specified, all physical destinations are compacted.
You must pause a destination (with the imqcmd pause subcommand) before compacting it, and resume it (with imqcmd resume) afterward (see Pausing and Resuming a Physical Destination):
imqcmd pause dst -t q -n curlyQueue -u admin
imqcmd compact dst -t q -n curlyQueue -u admin
imqcmd resume dst -t q -n curlyQueue -u admin
If a destination’s reserved disk space continues to increase over time, try reconfiguring its maxNumMsgs, maxBytesPerMsg, maxTotalMsgBytes, and limitBehavior properties (see Physical Destination Properties).
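As an illustration only (the property values shown are not recommendations), a command such as the following caps the destination's total message bytes and tells it to discard its oldest messages when the limit is reached:
imqcmd update dst -t q -n curlyQueue -u admin -o maxTotalMsgBytes=10m -o limitBehavior=REMOVE_OLDEST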
The dead message queue, mq.sys.dmq, is a system-created physical destination that holds the dead messages of a broker's physical destinations. The dead message queue is a tool for monitoring, tuning system efficiency, and troubleshooting. For a definition of the term dead message and a more detailed introduction to the dead message queue, see the Message Queue Technical Overview.
The broker automatically creates a dead message queue when it starts. The broker places messages on the queue if it cannot process them or if their time-to-live has expired. In addition, other physical destinations can use the dead message queue to hold discarded messages. This can provide information that is useful for troubleshooting the system.
The physical destination configuration property useDMQ controls a destination’s use of the dead message queue. Physical destinations are configured to use the dead message queue by default; to disable a destination from using it, set the destination’s useDMQ property to false:
imqcmd update dst -t q -n curlyQueue -o useDMQ=false
You can enable or disable the use of the dead message queue for all auto-created physical destinations on a broker by setting the imq.autocreate.destination.useDMQ broker property:
imqcmd update bkr -o imq.autocreate.destination.useDMQ=false
You can manage the dead message queue with the Message Queue Command utility (imqcmd) just as you manage other queues, but with some differences. For example, because the dead message queue is system-created, you cannot create, pause, or destroy it. Also, as shown in Table 7–2, default values for the dead message queue’s configuration properties sometimes differ from those of ordinary queues.
Table 7–2 Dead Message Queue Treatment of Physical Destination Properties
Property | Variant Treatment by Dead Message Queue
---|---
maxNumMsgs | Default value is 1000, rather than -1 (unlimited) as for ordinary queues.
maxTotalMsgBytes | Default value is 10m (10 megabytes), rather than -1 (unlimited) as for ordinary queues.
limitBehavior | Default value is REMOVE_OLDEST, rather than REJECT_NEWEST as for ordinary queues. FLOW_CONTROL is not supported for the dead message queue.
maxNumProducers | Does not apply to the dead message queue.
isLocalOnly | Permanently set to false in broker clusters; the dead message queue in a cluster is always a global physical destination.
localDeliveryPreferred | Does not apply to the dead message queue.
By default, the dead message queue stores entire messages. If you do not plan to restore dead messages, you can reduce the size of the dead message queue by setting the broker’s imq.destination.DMQ.truncateBody property to true:
imqcmd update bkr -o imq.destination.DMQ.truncateBody=true
This discards the body of each message placed in the dead message queue, retaining only the message headers and property data.
The broker configuration property imq.destination.logDeadMsgs controls the logging of events related to the dead message queue. When dead message logging is enabled, the broker will log the following events:
A message is moved to the dead message queue.
A message is discarded from the dead message queue (or from any physical destination that does not use the dead message queue).
A physical destination reaches its limits.
Dead message logging is disabled by default. The following command enables it:
imqcmd update bkr -o imq.destination.logDeadMsgs=true
Dead message logging applies to all physical destinations that use the dead message queue. You cannot enable or disable logging for an individual physical destination.
Once clients are connected to the broker, the routing and delivery of messages can proceed. In this phase, the broker is responsible for creating and managing different types of physical destinations, ensuring a smooth flow of messages, and using resources efficiently. You can use the broker configuration properties described under Routing and Delivery Properties to manage these tasks in a way that suits your application’s needs.
The performance and stability of a broker depend on the system resources (such as memory) available and how efficiently they are utilized. You can set configuration properties to prevent the broker from becoming overwhelmed by incoming messages or running out of memory. These properties function at three different levels to keep the message service operating as resources become scarce:
Systemwide message limits apply collectively to all physical destinations on the system. These include the maximum number of messages held by a broker (imq.system.max_count) and the maximum total number of bytes occupied by such messages (imq.system.max_size). If either of these limits is reached, the broker will reject any new messages until the pending messages fall below the limit. There is also a limit on the maximum size of an individual message (imq.message.max_size) and a time interval at which expired messages are reclaimed (imq.message.expiration.interval).
Individual destination limits regulate the flow of messages to a specific physical destination. The configuration properties controlling these limits are described in Chapter 17, Physical Destination Property Reference. They include limits on the number and size of messages the destination will hold, the number of message producers and consumers that can be created for it, and the number of messages that can be batched together for delivery to the destination.
The destination can be configured to respond to memory limits by slowing the delivery of messages by message producers, by rejecting new incoming messages, or by throwing out the oldest or lowest-priority existing messages. Messages deleted from the destination in this way may optionally be moved to the dead message queue rather than discarded outright; the broker property imq.destination.DMQ.truncateBody controls whether the entire message body is saved in the dead message queue, or only the header and property data.
As a convenience during application development and testing, you can configure a message broker to create new physical destinations automatically whenever a message producer or consumer attempts to access a nonexistent destination. The broker properties summarized in Table 16–3 parallel the ones just described, but apply to such auto-created destinations instead of administratively created ones.
System memory thresholds define levels of memory usage at which the broker takes increasingly serious action to prevent memory overload. Four such usage levels are defined:
Green: Plenty of memory is available.
Yellow: Broker memory is beginning to run low.
Orange: The broker is low on memory.
Red: The broker is out of memory.
The memory utilization percentages defining these levels are specified by the broker properties imq.green.threshold, imq.yellow.threshold, imq.orange.threshold, and imq.red.threshold, respectively; the default values are 0% for green, 80% for yellow, 90% for orange, and 98% for red.
As memory usage advances from one level to the next, the broker responds progressively, first by swapping messages out of active memory into persistent storage and then by throttling back producers of nonpersistent messages, eventually stopping the flow of messages into the broker. (Both of these measures degrade broker performance.) The throttling back of message production is done by limiting the size of each batch delivered to the number of messages specified by the properties imq.resourceState.count, where resourceState is green, yellow, orange, or red, respectively.
The triggering of these system memory thresholds is a sign that systemwide and destination message limits are set too high. Because the memory thresholds cannot always catch potential memory overloads in time, you should not rely on them to control memory usage, but rather reconfigure the system-wide and destination limits to optimize memory resources.
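To make the preceding discussion concrete, the following sketch shows how systemwide limits and memory thresholds might appear in a broker's instance configuration file (config.properties). The limit values are illustrative rather than recommendations, and the threshold values shown are simply the defaults described above:
# Systemwide message limits (illustrative values)
imq.system.max_count=50000
imq.system.max_size=500m
imq.message.max_size=1m
# Memory thresholds, as percentages of available memory (default values)
imq.green.threshold=0
imq.yellow.threshold=80
imq.orange.threshold=90
imq.red.threshold=98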
Message Queue clients subscribing to a topic destination can register as durable subscribers. The corresponding durable subscription has a unique, persistent identity and requires the broker to retain messages addressed to it even when its message consumer (the durable subscriber) becomes inactive. Ordinarily, the broker may delete a message held for a durable subscriber only when the message expires.
The Message Queue Command utility provides subcommands for managing a broker’s durable subscriptions in the following ways:
Listing durable subscriptions
Purging all messages for a durable subscription
Destroying a durable subscription
To list durable subscriptions for a specified physical destination, use the imqcmd list dur subcommand:
imqcmd list dur -d topicName
For example, the following command lists all durable subscriptions to the topic SPQuotes on the default broker (host localhost at port 7676):
imqcmd list dur -d SPQuotes
The resulting output lists the name of each durable subscription to the topic, the client identifier to which it belongs, its current state (active or inactive), and the number of messages currently queued to it. Example 7–6 shows an example.
The imqcmd purge dur subcommand purges all messages for a specified durable subscriber and client identifier:
imqcmd purge dur -n subscriberName -c clientID
For example, the following command purges all messages for the durable subscription listed in Example 7–6:
imqcmd purge dur -n myDurable -c myClientID
The imqcmd destroy dur subcommand destroys a durable subscription, specified by its subscriber name and client identifier:
imqcmd destroy dur -n subscriberName -c clientID
For example, the following command destroys the durable subscription listed in Example 7–6:
imqcmd destroy dur -n myDurable -c myClientID
All transactions initiated by client applications are tracked by the broker. These can be local Message Queue transactions or distributed transactions managed by a distributed transaction manager.
Each transaction is identified by a unique 64-bit Message Queue transaction identifier. Distributed transactions also have a distributed transaction identifier (XID), up to 128 bytes long, assigned by the distributed transaction manager. Message Queue maintains the association between its own transaction identifiers and the corresponding XIDs.
The imqcmd list txn subcommand lists the transactions being tracked by a broker:
imqcmd list txn
This lists all transactions on the broker, both local and distributed. For each transaction, it shows the transaction ID, state, user name, number of messages and acknowledgments, and creation time. Example 7–7 shows an example of the resulting output.
To display detailed information about a single transaction, obtain the transaction identifier from imqcmd list txn and pass it to the imqcmd query txn subcommand:
imqcmd query txn -n transactionID
This displays the same information as imqcmd list txn, along with the client identifier, connection identifier, and distributed transaction identifier (XID). For example, the command
imqcmd query txn -n 64248349708800
produces output like that shown in Example 7–8.
If a broker fails, it is possible that a distributed transaction could be left in the PREPARED state without ever having been committed. Until such a transaction is committed, its messages will not be delivered and its acknowledgments will not be processed. Hence, as an administrator, you might need to monitor such transactions and commit them or roll them back manually. For example, if the broker’s imq.transaction.autorollback property (see Table 16–2) is set to false, you must manually commit or roll back non-distributed transactions and unrecoverable distributed transactions found in the PREPARED state at broker startup, using the Command utility’s commit txn or rollback txn subcommand:
imqcmd commit txn -n transactionID
imqcmd rollback txn -n transactionID
For example, the following command commits the transaction listed in Example 7–8:
imqcmd commit txn -n 64248349708800
Only transactions in the PREPARED state can be committed. However, transactions in the STARTED, FAILED, INCOMPLETE, COMPLETE, and PREPARED states can be rolled back. You should do so only if you know that the transaction has been left in this state by a failure and is not in the process of being committed by the distributed transaction manager.
For a broker to recover in case of failure, it needs to re-create the state of its message delivery operations. To do this, the broker must save state information to a persistent data store. When the broker restarts, it uses the saved data to re-create destinations and durable subscriptions, recover persistent messages, roll back open transactions, and rebuild its routing table for undelivered messages. It can then resume message delivery.
A persistent data store is thus a key aspect of providing for reliable message delivery. This chapter describes the two different persistence implementations supported by the Message Queue broker and how to set each of them up:
A broker’s persistent data store holds information about physical destinations, durable subscriptions, messages, transactions, and acknowledgments.
Message Queue supports both file-based and JDBC-based persistence modules, as shown in the following figure. File-based persistence uses individual files to store persistent data; JDBC-based persistence uses the Java Database Connectivity (JDBC) interface to connect the broker to a JDBC-based data store. While file-based persistence is generally faster than JDBC-based persistence, some users prefer the redundancy and administrative control provided by a JDBC database. The broker configuration property imq.persist.store (see Table 16–4) specifies which of the two persistence modules (file or jdbc) to use.
Message Queue brokers are configured by default to use a file-based persistent store, but you can reconfigure them to plug in any data store accessible through a JDBC-compliant driver.
By default, Message Queue uses a file-based data store, in which individual files store persistent data (such as messages, destinations, durable subscriptions, transactions, and routing information).
The file-based data store is located in a directory identified by the name of the broker instance (instanceName) to which the data store belongs:
…/instances/instanceName/fs370
(See Appendix A, Platform-Specific Locations of Message Queue Data for the location of the instances directory.) Each destination on the broker has its own subdirectory holding messages delivered to that destination.
Because the data store can contain messages of a sensitive or proprietary nature, you should secure the …/instances/instanceName/fs370 directory against unauthorized access; see Securing a File-Based Data Store.
Broker configuration properties related to file-based persistence are listed under File-Based Persistence Properties. These properties let you configure various aspects of how the file-based data store behaves.
All persistent data other than messages is stored in separate files: one file each for destinations, durable subscriptions, and transaction state information. Most messages are stored in a single file consisting of variable-size records. You can compact this file to alleviate fragmentation as messages are added and removed (see Managing Physical Destination Disk Utilization). In addition, messages above a certain threshold size are stored in their own individual files rather than in the variable-sized record file. You can configure this threshold size with the broker property imq.persist.file.message.max_record_size.
The broker maintains a file pool for these individual message files: instead of being deleted when it is no longer needed, a file is returned to the pool of free files in its destination directory so that it can later be reused for another message. The broker property imq.persist.file.destination.message.filepool.limit specifies the maximum number of files in the pool. When the number of individual message files for a destination exceeds this limit, files will be deleted when no longer needed instead of being returned to the pool.
When returning a file to the file pool, the broker can save time at the expense of storage space by simply tagging the file as available for reuse without deleting its previous contents. You can use the imq.persist.file.message.filepool.cleanratio broker property to specify the percentage of files in each destination’s file pool that should be maintained in a “clean” (empty) state rather than simply marked for reuse. The higher you set this value, the less space will be required for the file pool, but the more overhead will be needed to empty the contents of files when they are returned to the pool. If the broker’s imq.persist.file.message.cleanup property is true, all files in the pool will be emptied at broker shutdown, leaving them in a clean state; this conserves storage space but slows down the shutdown process.
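For illustration, the following config.properties sketch sets the file-store tuning properties just described; the values are arbitrary examples rather than recommendations:
# Store messages larger than 1 MB in individual files (illustrative threshold)
imq.persist.file.message.max_record_size=1m
# Keep at most 100 reusable message files per destination
imq.persist.file.destination.message.filepool.limit=100
# Keep 60 percent of pooled files in a clean (empty) state
imq.persist.file.message.filepool.cleanratio=60
# Empty all pooled files at broker shutdown
imq.persist.file.message.cleanup=true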
In writing data to the data store, the operating system has some leeway in whether to write the data synchronously or “lazily” (asynchronously). Lazy storage can lead to data loss in the event of a system crash, if the broker believes the data to have been written to the data store when it has not. To ensure absolute reliability (at the expense of performance), you can require that all data be written synchronously by setting the broker property imq.persist.file.sync.enabled to true. In this case, the data is guaranteed to be available when the system comes back up after a crash, and the broker can reliably resume operation.
A file-based data store is automatically created when you create a broker instance. However, you can configure the data store using the properties described in File-Based Persistence Properties.
For example, by default, Message Queue performs asynchronous write operations to disk. However, to attain the highest reliability, you can set the broker property imq.persist.file.sync.enabled to write data synchronously instead. See Table 16–5.
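For instance, you can set the property in the broker's instance configuration file (config.properties):
imq.persist.file.sync.enabled=true
or pass it on the command line when starting the broker (the instance name myBroker is illustrative):
imqbrokerd -name myBroker -Dimq.persist.file.sync.enabled=true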
When you start a broker instance, you can use the imqbrokerd command’s -reset option to clear the file-based data store. For more information about this option and its suboptions, see Broker Utility.
The persistent data store can contain, among other information, message files that are being temporarily stored. Since these messages may contain proprietary information, it is important to secure the data store against unauthorized access. This section describes how to secure data in a file-based data store.
A broker using file-based persistence writes persistent data to a flat-file data store whose location is platform-dependent (see Appendix A, Platform-Specific Locations of Message Queue Data):
…/instances/instanceName/fs370
where instanceName is a name identifying the broker instance. This directory is created when the broker instance is started for the first time. The procedure for securing this directory depends on the operating system platform on which the broker is running:
On Solaris and Linux, the directory’s permissions are determined by the file mode creation mask (umask) of the user who started the broker instance. Hence, permission to start a broker instance and to read its persistent files can be restricted by setting the mask appropriately. Alternatively, an administrator (superuser) can secure persistent data by setting the permissions on the instances directory to 700.
On Windows, the directory’s permissions can be set using the mechanisms provided by the Windows operating system. This generally involves opening a Properties dialog for the directory.
Instead of using a file-based data store, you can set up a broker to access any data store accessible through a JDBC-compliant driver. This involves setting the appropriate JDBC-related broker configuration properties and using the Database Manager utility (imqdbmgr) to create the proper database schema. See Configuring a JDBC-Based Data Store for specifics.
The full set of properties for configuring a broker to use a JDBC database are listed in Table 16–6. You can specify these properties either in the instance configuration file (config.properties) of each broker instance or by using the -D command line option to the Broker utility (imqbrokerd) or the Database Manager utility (imqdbmgr).
In practice, however, JDBC properties are preconfigured by default, depending on the database vendor being used for the data store. The property values are set in the default.properties file, and only need to be explicitly set if you are overriding the default values. In general, you only need to set the following properties:
imq.persist.store
This property specifies that a JDBC-based data store (as opposed to the default file-based data store) is used to store persistent data.
imq.persist.jdbc.dbVendor
This property identifies the database vendor being used for the data store; all of the remaining properties are qualified by this vendor name.
imq.persist.jdbc.connection.limit
This property specifies the maximum number of connections that can be opened to the database.
imq.persist.jdbc.vendorName.user
This property specifies the user name to be used by the broker in accessing the database.
imq.persist.jdbc.vendorName.password
This property specifies the password for accessing the database, if required; imq.persist.jdbc.vendorName.needpassword is a boolean flag specifying whether a password is needed. For security reasons, the database access password should be specified only in a password file referenced with the -passfile command line option; if no such password file is specified, the imqbrokerd and imqdbmgr commands will prompt for the password interactively.
imq.persist.jdbc.vendorName.property.propName
This set of properties represents any additional, vendor-specific properties that are required.
imq.persist.jdbc.vendorName.tableoption
Specifies the vendor-specific options passed to the database when creating the table schema.
For example, the following properties configure a broker to use a MySQL data store:
imq.persist.store=jdbc
imq.persist.jdbc.dbVendor=mysql
imq.persist.jdbc.mysql.user=userName
imq.persist.jdbc.mysql.password=password
imq.persist.jdbc.mysql.property.url=jdbc:mysql://hostName:port/dataBase
If you expect to have messages that are larger than 1 MB, configure MySQL's max_allowed_packet variable accordingly when starting the database. For more information see Appendix B of the MySQL 5.0 Reference Manual.
The following properties configure a broker to use an HADB data store:
imq.persist.store=jdbc
imq.persist.jdbc.dbVendor=hadb
imq.persist.jdbc.hadb.user=userName
imq.persist.jdbc.hadb.password=password
imq.persist.jdbc.hadb.property.serverlist=hostName:port,hostName:port,...
You can obtain the server list using the hadbm get jdbcURL command.
In addition, in an enhanced broker cluster, in which a JDBC database is shared by multiple broker instances, each broker must be uniquely identified in the database (unnecessary for an embedded database, which stores data for only one broker instance). The configuration property imq.brokerid specifies a unique instance identifier to be appended to the names of database tables for each broker. See Enhanced Broker Cluster Properties.
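For example, in an enhanced cluster each broker's config.properties might contain a line like the following (the identifier shown is purely illustrative):
imq.brokerid=brokerA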
After setting all of the broker’s needed JDBC configuration properties, you must also install your JDBC driver’s .jar file in the appropriate directory location, depending on your operating-system platform (as listed in Appendix A, Platform-Specific Locations of Message Queue Data) and then create the database schema for the JDBC-based data store (see To Set Up a JDBC-Based Data Store).
To configure a broker to use a JDBC database, you set JDBC-related properties in the broker’s instance configuration file and create the appropriate database schema. The Message Queue Database Manager utility (imqdbmgr) uses your JDBC driver and the broker configuration properties to create the schema and manage the database. You can also use the Database Manager to delete corrupted tables from the database or if you want to use a different database as a data store. See Database Manager Utility for more information.
If you use an embedded database, it is best to create it under the following directory:
.../instances/instanceName/dbstore/databaseName
If an embedded database is not protected by a user name and password, it is probably protected by file system permissions. To ensure that the database is readable and writable by the broker, the user who runs the broker should be the same user who created the embedded database using the imqdbmgr command.
Set JDBC-related properties in the broker’s instance configuration file.
The relevant properties are discussed, with examples, in JDBC-Based Persistence Properties and listed in full in Table 16–6. In particular, you must specify a JDBC-based data store by setting the broker’s imq.persist.store property to jdbc.
Place a copy of, or a symbolic link to, your JDBC driver’s .jar file in the Message Queue external resource files directory, depending on your platform (see Appendix A, Platform-Specific Locations of Message Queue Data):
Solaris: /usr/share/lib/imq/ext
Linux: /opt/sun/mq/share/lib/ext
AIX: IMQ_VARHOME/lib/ext
Windows: IMQ_VARHOME\lib\ext
For example, if you are using HADB on a Solaris system, the following command copies the driver’s .jar file to the appropriate location:
cp /opt/SUNWhadb/4/lib/hadbjdbc4.jar /usr/share/lib/imq/ext
The following command creates a symbolic link instead:
ln -s /opt/SUNWhadb/4/lib/hadbjdbc4.jar /usr/share/lib/imq/ext
Create the database schema needed for Message Queue persistence.
Use the imqdbmgr create all command (for an embedded database) or the imqdbmgr create tbl command (for an external database); see Database Manager Utility.
You can display information about a JDBC-based data store using the Database Manager utility (imqdbmgr) as follows:
Change to the directory where the Database Manager utility resides, depending on your platform:
Solaris: cd /usr/bin
Linux: cd /opt/sun/mq/bin
AIX: cd IMQ_HOME/bin
Windows: cd IMQ_HOME\bin
Enter the imqdbmgr command:
imqdbmgr query
The output should resemble the following:
imqdbmgr query

[04/Oct/2005:15:30:20 PDT] Using plugged-in persistent store:
        version=400
        brokerid=Mozart1756
        database connection url=jdbc:oracle:thin:@Xhome:1521:mqdb
        database user=scott
Running in standalone mode.
Database tables have already been created.
The persistent data store can contain, among other information, message files that are being temporarily stored. Since these messages may contain proprietary information, it is important to secure the data store against unauthorized access. This section describes how to secure data in a JDBC-based data store.
A broker using JDBC-based persistence writes persistent data to a JDBC-compliant database. For a database managed by a database server (such as Oracle), it is recommended that you create a user name and password to access the Message Queue database tables (tables whose names start with MQ). If the database does not allow individual tables to be protected, create a dedicated database to be used only by Message Queue brokers. See the documentation provided by your database vendor for information on how to create user name/password access.
The user name and password required by a broker to open a database connection can be provided as broker configuration properties. However, it is more secure to provide them as command line options when starting up the broker, using the imqbrokerd command’s -dbuser and -dbpassword options (see Broker Utility).
For an embedded database that is accessed directly by the broker by means of the database’s JDBC driver, security is usually provided by setting file permissions on the directory where the persistent data will be stored, as described above under Securing a File-Based Data Store. To ensure that the database is readable and writable by both the broker and the Database Manager utility, however, both should be run by the same user.
Changes in the file formats for both file-based and JDBC-based persistent data stores were introduced in Message Queue 3.7, with further JDBC changes in version 4.0 and 4.1. As a result of these changes, the persistent data store version numbers have been updated to 370 for file-based data stores and 410 for JDBC-based stores. You can use the imqdbmgr query command to determine the version number of your existing data store.
On first startup, the Message Queue Broker utility (imqbrokerd) will check for the presence of an older persistent data store and automatically migrate it to the latest format:
File-based data store versions 200 and 350 are migrated to the version 370 format.
JDBC-based data store versions 350, 370, and 400 are migrated to the version 410 format. (If you need to upgrade a version 200 data store, you will need to step through an intermediate Message Queue 3.5 or 3.6 release.)
The upgrade leaves the older copy of the persistent data store intact, allowing you to roll back the upgrade if necessary. To do so, you can uninstall the current version of Message Queue and reinstall the earlier version you were previously running. The older version’s message brokers will locate and use the older copy of the data store.
This chapter describes Message Queue’s facilities for security-related administration tasks, such as configuring user authentication, defining access control, configuring a Secure Socket Layer (SSL) connection service to encrypt client-broker communication, and setting up a password file for administrator account passwords. In addition to Message Queue’s own built-in authentication mechanisms, you can also plug in a preferred external service based on the Java Authentication and Authorization Service (JAAS) API.
This chapter includes the following sections:
Message Queue provides security services for user access control (authentication and authorization) and for encryption:
Authentication ensures that only verified users can establish a connection to a broker.
Authorization specifies which users or groups have the right to access resources and to perform specific operations.
Encryption protects messages from being tampered with during delivery over a connection.
As a Message Queue administrator, you are responsible for setting up the information the broker needs to authenticate users and authorize their actions. The broker properties pertaining to security services are listed under Security Properties. The boolean property imq.accesscontrol.enabled acts as a master switch that controls whether access control is applied on a brokerwide basis; for finer control, you can override this setting for a particular connection service by setting the imq.serviceName.accesscontrol.enabled property, where serviceName is the name of the connection service, as shown in Table 6–1: for example, imq.httpjms.accesscontrol.enabled.
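As a sketch, the following config.properties lines would leave access control enabled brokerwide while turning it off for the httpjms connection service (whether you would actually want to do this depends on your deployment):
imq.accesscontrol.enabled=true
imq.httpjms.accesscontrol.enabled=false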
The following figure shows the components used by the broker to provide authentication and authorization services. These services depend on a user repository containing information about the users of the messaging system: their names, passwords, and group memberships. In addition, to authorize specific operations for a user or group, the broker consults an access control file that specifies which operations a user or group can perform. You can designate a single access control file for the broker as a whole, using the configuration property imq.accesscontrol.file.filename, or for a single connection service with imq.serviceName.accesscontrol.file.filename.
As Figure 9–1 shows, you can store user data in a flat file user repository that is provided with the Message Queue service, you can access an existing LDAP repository, or you can plug in a Java Authentication and Authorization Service (JAAS) module.
If you choose a flat-file repository, you must use the imqusermgr utility to manage the repository. This option is easy to use and built-in.
If you want to use an existing LDAP server, you use the tools provided by the LDAP vendor to populate and manage the user repository. You must also set properties in the broker instance configuration file to enable the broker to query the LDAP server for information about users and groups.
The LDAP option is better if scalability is important or if you need the repository to be shared by different brokers. This might be the case if you are using broker clusters.
If you want to plug in an existing JAAS authentication service, you need to set the corresponding properties in the broker instance configuration file.
The broker’s imq.authentication.basic.user_repository property specifies which type of repository to use. In general, an LDAP repository or JAAS authentication service is preferable if scalability is important or if you need the repository to be shared by different brokers (if you are using broker clusters, for instance). See User Authentication for more information on setting up a flat-file user repository, LDAP access, or JAAS authentication service.
A client requesting a connection to a broker must supply a user name and password, which the broker compares with those stored in the user repository. Passwords transmitted from client to broker are encoded using either message digest (MD5) hashing (for flat-file repositories) or base-64 encoding (for LDAP repositories). The choice is controlled by the imq.authentication.type property for the broker as a whole, or by imq.serviceName.authentication.type for a specific connection service. The imq.authentication.client.response.timeout property sets a timeout interval for authentication requests.
As described under Password Files, you can choose to put your passwords in a password file instead of being prompted for them interactively. The boolean broker property imq.passfile.enabled controls this option. If this property is true, the imq.passfile.dirpath and imq.passfile.name properties give the directory path and file name for the password file. The imq.imqcmd.password property (which can be embedded in the password file) specifies the password for authenticating an administrative user to use the Command utility (imqcmd) for managing brokers, connection services, connections, physical destinations, durable subscriptions, and transactions.
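The following sketch shows how these properties might fit together; the directory path, file name, and password are illustrative placeholders:
# In the broker's instance configuration file (config.properties)
imq.passfile.enabled=true
imq.passfile.dirpath=/var/imq/secure
imq.passfile.name=mypassfile

# In the password file /var/imq/secure/mypassfile
imq.imqcmd.password=myAdminPassword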
If you are using an LDAP-based user repository, a whole range of broker properties is available for configuring various aspects of the LDAP lookup. The address (host name and port number) of the LDAP server itself is specified by imq.user_repository.ldap.server. The imq.user_repository.ldap.principal property gives the distinguished name for binding to the LDAP repository, while imq.user_repository.ldap.password supplies the associated password. Other properties specify the directory bases and optional JNDI filters for individual user and group searches, the provider-specific attribute identifiers for user and group names, and so forth; see Security Properties for details.
Once authenticated, a user can be authorized to perform various Message Queue-related activities. As a Message Queue administrator, you can define user groups and assign individual users membership in them. The default access control file explicitly refers to only one group, admin (see User Groups and Status). A user in this group has connection permission for the admin connection service, which allows the user to perform administrative functions such as creating destinations and monitoring and controlling a broker. A user in any other group that you define cannot, by default, get an admin service connection.
When a user attempts to perform an operation, the broker checks the user’s name and group membership (from the user repository) against those specified for access to that operation (in the access control file). The access control file specifies permissions to users or groups for the following operations:
Connecting to a broker
Accessing destinations: creating a consumer, a producer, or a queue browser for any given destination or for all destinations
For information on configuring authorization, see User Authorization.
To encrypt messages sent between clients and broker, you need to use a connection service based on the Secure Socket Layer (SSL) standard. SSL provides security at the connection level by establishing an encrypted connection between an SSL-enabled broker and client.
To use an SSL-based Message Queue connection service, you generate a public/private key pair using the Message Queue Key Tool utility (imqkeytool). This utility embeds the public key in a self-signed certificate and places it in a Message Queue key store. The key store is itself password-protected; to unlock it, you must provide a key store password at startup time, specified by the imq.keystore.password property. Once the key store is unlocked, a broker can pass the certificate to any client requesting a connection. The client then uses the certificate to set up an encrypted connection to the broker.
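In outline, the setup looks something like the following; the password file path is an illustrative placeholder, and you could equally be prompted for the key store password at startup:
imqkeytool -broker
imqbrokerd -passfile /var/imq/secure/mypassfile
Here the password file would contain a line of the form imq.keystore.password=myKeystorePassword (the password itself is a placeholder).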
For information on configuring encryption, see Message Encryption.
Users attempting to connect to a Message Queue message broker must provide a user name and password for authentication. The broker will grant the connection only if the name and password match those in a broker-specific user repository listing the authorized users and their passwords. Each broker instance can have its own user repository, which you as an administrator are responsible for maintaining. This section tells how to create, populate, and manage the user repository.
Message Queue can support any of three types of authentication mechanism:
A flat-file repository that is shipped with Message Queue. This type of repository is very easy to populate and manage, using the Message Queue User Manager utility (imqusermgr). See Using a Flat-File User Repository.
A Lightweight Directory Access Protocol (LDAP) server. This could be a new or existing LDAP directory server using the LDAP v2 or v3 protocol. You use the tools provided by the LDAP vendor to populate and manage the user repository. This type of repository is not as easy to use as the flat-file repository, but it is more scalable and therefore better for production environments. See Using an LDAP User Repository.
An external authentication mechanism plugged into Message Queue by means of the Java Authentication and Authorization Service (JAAS) API. See Using JAAS-Based Authentication.
Message Queue provides a built-in flat-file user repository and a command line tool, the User Manager utility (imqusermgr), for populating and managing it. Each broker has its own flat-file user repository, created automatically when you start the broker. The user repository resides in a file named passwd, in a directory identified by the name of the broker instance with which the repository is associated:
…/instances/instanceName/etc/passwd
(See Appendix A, Platform-Specific Locations of Message Queue Data for the exact location of the instances directory, depending on your operating system platform.)
Each user in the repository can be assigned to a user group, which defines the default access privileges granted to all of its members. You can then specify authorization rules to further restrict these access privileges for specific users, as described in User Authorization. A user’s group is assigned when the user entry is first created, and cannot be changed thereafter. The only way to reassign a user to a different group is to delete the original user entry and add another entry specifying the new group.
The flat-file user repository provides three predefined groups:
admin: For broker administrators. By default, users in this group are granted the access privileges needed to configure, administer, and manage message brokers.
user: For normal (non-administrative) client users. Newly created user entries are assigned to this group unless otherwise specified. By default, users in this group can connect to all Message Queue connection services of type NORMAL, produce messages to or consume messages from all physical destinations, and browse messages in any queue.
anonymous: For Message Queue clients that do not wish to use a user name known to the broker (for instance, because they do not know of a real user name to use). This group is analogous to the anonymous account provided by most FTP servers. No more than one user at a time can be assigned to this group. You should restrict the access privileges of this group relative to the user group, or remove users from the group at deployment time.
You cannot rename or delete these predefined groups or create new ones.
In addition to its group, each user entry in the repository has a user status: either active or inactive. New user entries added to the repository are marked active by default. Changing a user’s status to inactive rescinds all of that user’s access privileges, making the user unable to open new broker connections. Such inactive entries are retained in the user repository, however, and can be reactivated at a later time. If you attempt to add a new user with the same name as an inactive user already in the repository, the operation will fail; you must either delete the inactive user entry or give the new user a different name.
To allow the broker to be used immediately after installation without further intervention by the administrator, the flat-file user repository is created with two initial entries, summarized in Table 9–1:
The admin entry (user name and password admin/admin) enables you to administer the broker with Command utility (imqcmd) commands. Immediately on installation, you should update this initial entry to change its password (see Changing a User’s Password).
The guest entry allows clients to connect to the broker using a default user name and password (guest/guest).
You can then proceed to add any additional user entries you need for individual users of your message service.
Table 9–1 Initial Entries in Flat-File User Repository
User Name | Password | Group | Status
---|---|---|---
admin | admin | admin | Active
guest | guest | anonymous | Active
The Message Queue User Manager utility (imqusermgr) enables you to populate or edit a flat-file user repository. See User Manager Utility for general reference information about the syntax, subcommands, and options of the imqusermgr command.
Before using the User Manager, keep the following things in mind:
The imqusermgr command must be run on the host where the broker is installed.
If a broker-specific user repository does not yet exist, you must start up the corresponding broker instance to create it.
You must have appropriate permissions to write to the repository; in particular, on Solaris and Linux platforms, you must be logged in as the root user or the user who first created the broker instance.
Table 9–2 lists the subcommands of the imqusermgr command. For full reference information about these subcommands, see Table 15–15.
Table 9–2 User Manager Subcommands
Subcommand | Description
---|---
add | Add user and password to repository
delete | Delete user from repository
update | Set user’s password or active status (or both)
list | Display user information
The general options listed in Table 9–3 apply to all subcommands of the imqusermgr command.
Table 9–3 General User Manager Options
To display the Message Queue product version, use the -v option. For example:
imqusermgr -v
If you enter an imqusermgr command line containing the -v option in addition to a subcommand or other options, the User Manager utility processes only the -v option. All other items on the command line are ignored.
To display help on the imqusermgr command, use the -h option, and do not use a subcommand. You cannot get help about specific subcommands.
For example, the following command displays help about imqusermgr:
imqusermgr -h
If you enter an imqusermgr command line containing the -h option in addition to a subcommand or other options, the User Manager utility processes only the -h option. All other items on the command line are ignored.
The subcommand imqusermgr add adds an entry to the user repository, consisting of a user name and password:
imqusermgr add [-i brokerName] -u userName -p password [-g group]
The -u and -p options specify the user name and password, respectively, for the new entry. These must conform to the following conventions:
All user names and passwords must be at least one character long. Their maximum length is limited only by command shell restrictions on the maximum number of characters that can be entered on a command line.
A user name cannot contain an asterisk (*), a comma (,), a colon (:), or a new-line or carriage-return character.
If a user name or password contains a space, the entire name or password must be enclosed in quotation marks (" ").
The optional -g option specifies the group (admin, user, or anonymous) to which the new user belongs; if no group is specified, the user is assigned to the user group by default. If the broker name (-i option) is omitted, the default broker imqbroker is assumed.
For example, the following command creates a user entry on broker imqbroker for a user named AliBaba, with password Sesame, in the admin group:
imqusermgr add -u AliBaba -p Sesame -g admin
The subcommand imqusermgr delete deletes a user entry from the repository:
imqusermgr delete [-i brokerName] -u userName
The -u option specifies the user name of the entry to be deleted. If the broker name (-i option) is omitted, the default broker imqbroker is assumed.
For example, the following command deletes the user named AliBaba from the user repository on broker imqbroker:
imqusermgr delete -u AliBaba
You can use the subcommand imqusermgr update to change a user’s password:
imqusermgr update [-i brokerName] -u userName -p password
The -u option identifies the user; the -p option specifies the new password. If the broker name (-i option) is omitted, the default broker imqbroker is assumed.
For example, the following command changes the password for user AliBaba to Shazam on broker imqbroker:
imqusermgr update -u AliBaba -p Shazam
For the sake of security, you should change the password of the admin user from its initial default value (admin) to one that is known only to you. The following command changes the default administrator password for broker mybroker to veeblefetzer:
imqusermgr update -i mybroker -u admin -p veeblefetzer
You can quickly confirm that this change is in effect by running any of the command line tools when the broker is running. For example, the following command will prompt you for a password:
imqcmd list svc mybroker -u admin
Entering the new password (veeblefetzer) should work; the old password should fail.
After changing the password, you should supply the new password whenever you use any of the Message Queue administration tools, including the Administration Console.
The imqusermgr update subcommand can also be used to change a user’s active status:
imqusermgr update [-i brokerName] -u userName -a activeStatus
The -u option identifies the user; the -a option is a boolean value specifying the user’s new status as active (true) or inactive (false). If the broker name (-i option) is omitted, the default broker imqbroker is assumed.
For example, the following command sets user AliBaba’s status to inactive on broker imqbroker:
imqusermgr update -u AliBaba -a false
This renders AliBaba unable to open new broker connections.
You can combine the -p (password) and -a (active status) options in the same imqusermgr update command. The options may appear in either order: for example, both of the following commands activate the user entry for AliBaba and set the password to plugh:
imqusermgr update -u AliBaba -p plugh -a true
imqusermgr update -u AliBaba -a true -p plugh
The imqusermgr list command displays information about a user in the user repository:
imqusermgr list [-i brokerName] [-u userName]
The command
imqusermgr list -u AliBaba
displays information about user AliBaba, as shown in Example 9–1.
If you omit the -u option
imqusermgr list
the command lists information about all users in the repository, as in Example 9–2.
You configure a broker to use an LDAP directory server by setting the values for certain configuration properties in the broker’s instance configuration file (config.properties). These properties enable the broker instance to query the LDAP server for information about users and groups when a user attempts to connect to the broker or perform messaging operations.
The imq.authentication.basic.user_repository property specifies the kind of user authentication the broker is to use. By default, this property is set to file, for a flat-file user repository. For LDAP authentication, set it to ldap instead:
imq.authentication.basic.user_repository=ldap
The imq.authentication.type property controls the type of encoding used when passing a password between client and broker. By default, this property is set to digest, denoting MD5 encoding, the form used by flat-file user repositories. For LDAP authentication, set it to basic instead:
imq.authentication.type=basic
This denotes base-64 encoding, the form used by LDAP user repositories.
The following properties control various aspects of LDAP access. See Table 16–8 for more detailed information:
imq.user_repository.ldap.server
imq.user_repository.ldap.principal
imq.user_repository.ldap.password
imq.user_repository.ldap.propertyName
imq.user_repository.ldap.base
imq.user_repository.ldap.uidattr
imq.user_repository.ldap.usrfilter
imq.user_repository.ldap.grpsearch
imq.user_repository.ldap.grpbase
imq.user_repository.ldap.gidattr
imq.user_repository.ldap.memattr
imq.user_repository.ldap.grpfilter
imq.user_repository.ldap.timeout
imq.user_repository.ldap.ssl.enabled
The imq.user_repository.ldap.userformat property, if set to a value of dn, specifies that the login username for authentication be in DN username format (for example: uid=mquser,ou=People,dc=red,dc=sun,dc=com). In this case, the broker extracts the value of the attribute named by the imq.user_repository.ldap.uidattr property from the DN username, and uses this value as the user name in access control operations (see User Authorization).
If you want the broker to use a secure, encrypted SSL (Secure Socket Layer) connection for communicating with the LDAP server, set the broker’s imq.user_repository.ldap.ssl.enabled property to true
imq.user_repository.ldap.ssl.enabled=true
and the imq.user_repository.ldap.server property to the port used by the LDAP server for SSL communication: for example,
imq.user_repository.ldap.server=myhost:7878
You will also need to activate SSL communication in the LDAP server.
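Putting these settings together, the LDAP-related portion of a broker's config.properties file might look like the following sketch. The directory server host name and port (ldapserver.example.com:389) and the search base shown are hypothetical placeholders; substitute values appropriate to your own LDAP directory.
imq.authentication.type=basic
imq.authentication.basic.user_repository=ldap
imq.user_repository.ldap.server=ldapserver.example.com:389
imq.user_repository.ldap.base=ou=People,dc=example,dc=com
imq.user_repository.ldap.uidattr=uid
imq.user_repository.ldap.ssl.enabled=false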
In addition, you may need to edit the user and group names in the broker’s access control file to match those defined in the LDAP user repository; see User Authorization for more information.
For example, to create administrative users, you use the access control file to specify those users and groups in the LDAP directory that can create ADMIN connections.
Any user or group that can create an ADMIN connection can issue administrative commands.
The following procedure makes use of a broker's access control file, which is described in User Authorization.
Enable the use of the access control file by setting the broker property imq.accesscontrol.enabled to true, which is the default value.
Open the access control file, accesscontrol.properties. The location of the file is listed in Appendix A, Platform-Specific Locations of Message Queue Data.
The file contains an entry such as the following:
# service connection access control
##################################
connection.NORMAL.allow.user=*
connection.ADMIN.allow.group=admin
The entries listed are examples. Note that the admin group exists by default in the file-based user repository but does not exist by default in the LDAP directory.
To grant Message Queue administrator privileges to users, enter the user names as follows:
connection.ADMIN.allow.user= userName[[,userName2] …]
The users must be defined in the LDAP directory.
To grant Message Queue administrator privileges to groups, enter the group names as follows:
connection.ADMIN.allow.group= groupName[[,groupName2] …]
The groups must be defined in the LDAP directory.
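For example, assuming a hypothetical LDAP user named jsmith and a hypothetical LDAP group named mqadmins, the corresponding entries in the access control file might read:
connection.ADMIN.allow.user=jsmith
connection.ADMIN.allow.group=mqadmins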
The Java Authentication and Authorization Service (JAAS) API allows you to plug an external authentication mechanism into Message Queue. This section describes the information that the Message Queue message broker makes available to a JAAS-compliant authentication service and explains how to configure the broker to use such a service. The following sources provide further information on JAAS:
For complete information about the JAAS API, see the Java™ Authentication and Authorization Service (JAAS) Reference Guide at the URL
For information about writing a JAAS login module, see the Java™ Authentication and Authorization Service (JAAS) LoginModule Developer’s Guide at
JAAS is a core API in Java 2 Standard Edition (J2SE), and is therefore an integral part of Message Queue’s runtime environment. It defines an abstraction layer between an application and an authentication mechanism, allowing the desired mechanism to be plugged in with no change to application code. In the case of the Message Queue service, the abstraction layer lies between the broker (application) and an authentication provider. By setting a few broker properties, it is possible to plug in any JAAS-compliant authentication service and to upgrade this service with no disruption or change to broker code.
You cannot use the Java Management Extensions (JMX) API to change JAAS-related broker properties. However, once JAAS-based authentication is configured, JMX client applications (like other clients) can be authenticated using this mechanism.
Figure 9–2 shows the basic elements of JAAS: a JAAS client, a JAAS-compliant authentication service, and a JAAS configuration file.
The JAAS client is an application wishing to perform authentication using a JAAS-compliant authentication service. The JAAS client communicates with the authentication service using one or more login modules and is responsible for providing a callback handler that the login module can call to obtain the user name, password, and other information needed for authentication.
The JAAS-compliant authentication service consists of one or more login modules along with logic to perform the needed authentication. The login module (LoginModule) may include the authentication logic itself, or it may use a private protocol or API to communicate with an external security service that provides the logic.
The JAAS configuration file is a text file that the JAAS client uses to locate the login module(s) to be used.
Figure 9–3 shows how JAAS is used by the Message Queue broker. It shows a more complex implementation of the JAAS model shown in Figure 9–2.
The authentication service layer, consisting of one or more login modules (if needed) and corresponding authentication logic, is separate from the broker. The login modules run in the same Java virtual machine as the broker. The broker is represented to the login module as a login context, and communicates with the login module by means of a callback handler that is part of the broker runtime code.
The authentication service also supplies a JAAS configuration file containing entries that reference the login modules. The configuration file specifies the order in which the login modules (if more than one) are to be used and any conditions for their use. When the broker starts up, it locates the configuration file by consulting either the Java system property java.security.auth.login.config or the Java security properties file. The broker then selects an entry in the JAAS configuration file according to the value of the broker property imq.user_repository.jaas.name. That entry specifies which login module(s) will be used for authentication. The classes for the login modules are found in the Message Queue external resource files directory, whose location depends on the operating system platform you are using; see Appendix A, Platform-Specific Locations of Message Queue Data for details.
The relation between the configuration file, the login module, and the broker is shown in Figure 9–4.
The fact that the broker uses a JAAS plug-in authentication service remains completely transparent to the Message Queue client. The client continues to connect to the broker as it did before, passing a user name and password. In turn, the broker uses a callback handler to pass login information to the authentication service, and the service uses the information to authenticate the user and return the results. If authentication succeeds, the broker grants the connection; if it fails, the client runtime returns a JMS security exception that the client must handle.
After the Message Queue client is authenticated, if there is further authorization to be done, the broker proceeds as it normally would, consulting the access control file to determine whether the authenticated client is authorized to perform the actions it undertakes: accessing a destination, consuming a message, browsing a queue, and so on.
Setting up JAAS-compliant authentication involves setting broker and system properties to select this type of authentication, to specify the location of the configuration file, and to specify the configuration file entries that reference the login modules to be used.
To set up JAAS support for Message Queue, you perform the following general steps. (These steps assume you are creating your own authentication service.)
Create one or more login module classes that implement the authentication service. The JAAS callback types that the broker supports are listed below; a minimal login module sketch follows the list.
javax.security.auth.callback.LanguageCallback: The broker uses this callback to pass to the authentication service the locale in which the broker is running. This value can be used for localization.
javax.security.auth.callback.NameCallback: The broker uses this callback to pass to the authentication service the user name specified by the Message Queue client when the connection was requested.
javax.security.auth.callback.TextInputCallback: The broker uses this callback to pass the following information to the login module (authentication service) when the module requests it with a TextInputCallback whose getPrompt() value is one of the following strings:
imq.authentication.type: The broker authentication type in effect at runtime
imq.accesscontrol.type: The broker access control type in effect at runtime
imq.authentication.clientip: The client IP address (null if unavailable)
imq.servicename: The name of the connection service (jms, ssljms, admin, or ssladmin) being used by the client
imq.servicetype: The type of the connection service (NORMAL or ADMIN) being used by the client
javax.security.auth.callback.PasswordCallback: The broker uses this callback to pass to the authentication service the password specified by the Message Queue client when the connection was requested.
javax.security.auth.callback.TextOutputCallback: The broker handles this callback to provide a logging service to the authentication service, writing the text output to the broker's log file. The callback's MessageType values ERROR, INFORMATION, and WARNING are mapped to the broker logging levels ERROR, INFO, and WARNING, respectively.
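As an illustration of the first step above, the following is a minimal sketch of a login module. The package and class names (com.example.auth.SampleLoginModule) are hypothetical, and the hard-coded credential check is a stand-in for real authentication logic; a production module would instead consult your security service and, in commit(), add java.security.Principal instances to the Subject (these are the classes referenced by the userPrincipalClass and groupPrincipalClass properties described below).

package com.example.auth;

import java.util.Map;
import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.login.LoginException;
import javax.security.auth.spi.LoginModule;

public class SampleLoginModule implements LoginModule {

    private CallbackHandler callbackHandler;
    private boolean succeeded = false;

    public void initialize(Subject subject, CallbackHandler callbackHandler,
                           Map<String, ?> sharedState, Map<String, ?> options) {
        this.callbackHandler = callbackHandler;
    }

    public boolean login() throws LoginException {
        // Ask the broker (the JAAS client) for the user name and password
        // supplied by the Message Queue client when the connection was requested.
        NameCallback nameCallback = new NameCallback("username: ");
        PasswordCallback passwordCallback = new PasswordCallback("password: ", false);
        try {
            callbackHandler.handle(new Callback[] { nameCallback, passwordCallback });
        } catch (Exception e) {
            throw new LoginException("Unable to obtain credentials: " + e);
        }
        String user = nameCallback.getName();
        char[] password = passwordCallback.getPassword();
        passwordCallback.clearPassword();

        // Placeholder check; a real module would call an external security service.
        succeeded = "mquser".equals(user)
                && password != null && "secret".equals(new String(password));
        if (!succeeded) {
            throw new LoginException("Authentication failed for user " + user);
        }
        return true;
    }

    // A real module would add Principal objects to the Subject here.
    public boolean commit() { return succeeded; }

    public boolean abort() { succeeded = false; return true; }

    public boolean logout() { succeeded = false; return true; }
}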
Create a JAAS configuration file with entries that reference the login module classes created in Step 1 and specify the location of this file.
Note the name of the entry in the JAAS configuration file (that references the login module implementation classes).
Archive the classes that implement the login modules to a jar file, and place the jar file in the Message Queue lib/ext directory.
Set the broker configuration properties that relate to JAAS support. These are described in Table 9–4.
Set the following system property (to specify the location of the JAAS configuration file).
java.security.auth.login.config=JAAS_Config_File_Location
For example, you can specify the location when you start the broker.
imqbrokerd -Djava.security.auth.login.config=JAAS_Config_File_Location
There are other ways to specify the location of the JAAS configuration file. For additional information, please see
http://java.sun.com/j2se/1.5.0/docs/guide/security/jaas/tutorials/LoginConfigFile.html
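For illustration, a JAAS configuration file entry and the matching broker settings might look like the following sketch; the entry name MQJAASEntry and the login module class com.example.auth.SampleLoginModule are hypothetical.
MQJAASEntry {
    com.example.auth.SampleLoginModule required;
};
The corresponding broker properties (described in Table 9–4) would then be set as follows:
imq.authentication.type=basic
imq.authentication.basic.user_repository=jaas
imq.user_repository.jaas.name=MQJAASEntry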
The following table lists the broker properties that must be set to configure JAAS support.
Table 9–4 Broker Properties for JAAS Support
Property | Description
---|---
imq.authentication.type | Set to basic to indicate base-64 password encoding. This is the only permissible value for JAAS authentication.
imq.authentication.basic.user_repository | Set to jaas to specify JAAS authentication.
imq.user_repository.jaas.name | Set to the name of the desired entry (in the JAAS configuration file) that references the login modules you want to use as the authentication mechanism. This is the name you noted in Step 3.
imq.user_repository.jaas.userPrincipalClass | This property, used by Message Queue access control, specifies the java.security.Principal implementation class in the login module(s) that the broker uses to extract the Principal name to represent the user entity in the Message Queue access control file. If it is not specified, the user name passed from the Message Queue client when a connection was requested is used instead.
imq.user_repository.jaas.groupPrincipalClass | This property, used by Message Queue access control, specifies the java.security.Principal implementation class in the login module(s) that the broker uses to extract the Principal name to represent the group entity in the Message Queue access control file. If it is not specified, the group rules, if any, in the Message Queue access control file are ignored.
An access control file contains rules that specify which users (or groups of users) are authorized to perform certain operations on a message broker. These operations include the following:
Creating a connection
Creating a message producer for a physical destination
Creating a message consumer for a physical destination
Browsing a queue destination
Auto-creating a physical destination
If access control is enabled (that is, if the broker’s imq.accesscontrol.enabled configuration property is set to true), the broker will consult its access control file whenever a client attempts one of these operations, to verify whether the user generating the request (or a group to which the user belongs) is authorized to perform the operation. By editing this file, you can restrict access to these operations to particular users and groups. Changes take effect immediately; there is no need to restart the broker after editing the file.
Each broker has its own access control file, created automatically when the broker is started. The file is named accesscontrol.properties and is located at a path of the form
…/instances/brokerInstanceName/etc/accesscontrol.properties
(See Appendix A, Platform-Specific Locations of Message Queue Data for the exact location, depending on your platform.)
The file is formatted as a Java properties file. It starts with a version property defining the version of the file:
version=JMQFileAccessControlModel/100
This is followed by three sections specifying the access control for three categories of operations:
Creating connections
Creating message producers or consumers, or browsing a queue destination
Auto-creating physical destinations
Each of these sections consists of a sequence of authorization rules specifying which users or groups are authorized to perform which specific operations. These rules have the following syntax:
resourceType.resourceVariant.operation.access.principalType=principals
Table 9–5 describes the various elements.
Table 9–5 Authorization Rule Elements
Rule: queue.q1.consume.allow.user=*
Description: allows all users to consume messages from the queue destination q1.
Rule: queue.*.consume.allow.user=Snoopy
Description: allows user Snoopy to consume messages from all queue destinations.
Rule: topic.t1.produce.deny.user=Snoopy
Description: prevents Snoopy from producing messages to the topic destination t1.
You can use Unicode escape (\uXXXX) notation to specify non-ASCII user, group, or destination names. If you have edited and saved the access control file with these names in a non-ASCII encoding, you can use the Java native2ascii tool to convert the file to ASCII. See the Java Internationalization FAQ at
http://java.sun.com/j2se/1.4/docs/guide/intl/faq.html
for more information.
Authorization rules in the access control file are applied according to the following principles:
Any operation not explicitly authorized through an authorization rule is implicitly prohibited. For example, if the access control file contains no authorization rules, all users are denied access to all operations.
Authorization rules for specific users override those applying generically to all users. For example, the rules
queue.q1.produce.allow.user=*
queue.q1.produce.deny.user=Snoopy
authorize all users except Snoopy to send messages to queue destination q1.
Authorization rules for a specific user override those for any group to which the user belongs. For example, if user Snoopy is a member of group user, the rules
queue.q1.consume.allow.group=user
queue.q1.consume.deny.user=Snoopy
authorize all members of user except Snoopy to receive messages from queue destination q1.
Authorization rules applying generically to all users override those applying to all groups. For example, the rules
topic.t1.produce.deny.group=*
topic.t1.produce.allow.user=*
authorize all users to publish messages to topic destination t1, overriding the rule denying such access to all groups.
Authorization rules for specific resources override those applying generically to all resources of a given type. For example, the rules
topic.*.consume.allow.user=Snoopy
topic.t1.consume.deny.user=Snoopy
authorize Snoopy to subscribe to all topic destinations except t1.
Authorization rules authorizing and denying access to the same resource and operation for the same user or group cancel each other out, resulting in authorization being denied. For example, the rules
queue.q1.browse.deny.user=Snoopy
queue.q1.browse.allow.user=Snoopy
prevent Snoopy from browsing queue q1. The rules
topic.t1.consume.deny.group=user
topic.t1.consume.allow.group=user
prevent all members of group user from subscribing to topic t1.
When multiple authorization rules are specified for the same resource, operation, and principal type, only the last rule applies. The rules
queue.q1.browse.allow.user=Snoopy,Linus
queue.q1.browse.allow.user=Snoopy
authorize user Snoopy, but not Linus, to browse queue destination q1.
Authorization rules with the resource type connection control access to the broker’s connection services. The rule’s resourceVariant element specifies the service type of the connection services to which the rule applies, as shown in Table 6–1; the only possible values are NORMAL or ADMIN. There is no operation element.
The default access control file contains the rules
connection.NORMAL.allow.user=*
connection.ADMIN.allow.group=admin
giving all users access to NORMAL connection services (jms, ssljms, httpjms, and httpsjms) and those in the admin group access to ADMIN connection services (admin and ssladmin). You can then add additional authorization rules to restrict the connection access privileges of specific users: for example, the rule
connection.NORMAL.deny.user=Snoopy
denies user Snoopy access privileges for connection services of type NORMAL.
If you are using a file-based user repository, the admin user group is created by the User Manager utility. If access control is disabled (imq.accesscontrol.enabled = false), all users in the admin group automatically have connection privileges for ADMIN connection services. If access control is enabled, access to these services is controlled by the authorization rules in the access control file.
If you are using an LDAP user repository, you must define your own user groups in the LDAP directory, using the tools provided by your LDAP vendor. You can either define a group named admin, which will then be governed by the default authorization rule shown above, or edit the access control file to refer to one or more other groups that you have defined in the LDAP directory. You must also explicitly enable access control by setting the broker’s imq.accesscontrol.enabled property to true.
Access to specific physical destinations on the broker is controlled by authorization rules with a resource type of queue or topic, as the case may be. These rules regulate access to the following operations:
Sending messages to a queue: produce operation
Receiving messages from a queue: consume operation
Publishing messages to a topic: produce operation
Subscribing to and consuming messages from a topic: consume operation
Browsing a queue: browse operation
By default, all users and groups are authorized to perform all of these operations on any physical destination. You can change this by editing the default authorization rules in the access control properties file or overriding them with more specific rules of your own. For example, the rule
topic.Admissions.consume.deny.group=user
denies all members of the user group the ability to subscribe to the topic Admissions.
The final section of the access control file includes authorization rules that specify for which users and groups the broker will auto-create a physical destination.
When a client creates a message producer or consumer for a physical destination that does not already exist, the broker will auto-create the destination (provided that the broker’s imq.autocreate.queue or imq.autocreate.topic property is set to true).
A separate section of the access control file controls the ability of users and groups to perform such auto-creation. This is governed by authorization rules with a resourceType of queue or topic and an operation element of create. The resourceVariant element is omitted, since these rules apply to all queues or all topics, rather than to any specific destination.
The default access control file contains the rules
queue.create.allow.user=*
topic.create.allow.user=*
authorizing all users to have physical destinations auto-created for them by the broker. You can edit the file to restrict such authorization for specific users. For example, the rule
topic.create.deny.user=Snoopy
denies user Snoopy the ability to auto-create topic destinations.
Note that the effect of such auto-creation rules must be congruent with that of other physical destination access rules. For example, if you change the destination authorization rule to prohibit any user from sending a message to a queue, but enable the auto-creation of queue destinations, the broker will create the physical destination if it does not exist, but will not deliver a message to it.
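For instance, the following hypothetical combination of rules (replacing the default produce rule for queues) allows queue destinations to be auto-created for all users while preventing any user from producing messages to any queue; the broker can still auto-create a queue on demand, but messages sent to it will not be delivered:
queue.create.allow.user=*
queue.*.produce.deny.user=*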
This section explains how to set up a connection service based on the Secure Socket Layer (SSL) standard, which enables delivery of encrypted messages over the connection. Message Queue supports the following SSL-based connection services:
The ssljms service delivers secure, encrypted messages between a client and a broker, using the TCP/IP transport protocol.
The httpsjms service delivers secure, encrypted messages between a client and a broker, using an HTTPS tunnel servlet with the HTTP transport protocol.
The ssladmin service creates a secure, encrypted connection between the Message Queue Command utility (imqcmd) and a broker, using the TCP/IP transport protocol. Encrypted connections are not supported for the Administration Console (imqadmin).
The cluster connection service is used internally to provide secure, encrypted communication between brokers in a cluster, using the TCP/IP transport protocol.
The ssljmxrmi connector supports secure, encrypted communication between a JMX client and a broker's MBean server, using the RMI transport protocol over TCP.
The remainder of this section describes how to set up secure connections over TCP/IP, using the ssljms, ssladmin, and cluster connection services. For information on setting up secure connections over HTTP with the httpsjms service, see Appendix C, HTTP/HTTPS Support.
To use an SSL-based connection service over TCP/IP, you generate a public/private key pair using the Key Tool utility (imqkeytool). This utility embeds the public key in a self-signed certificate that is passed to any client requesting a connection to the broker, and the client uses the certificate to set up an encrypted connection. This section describes how to set up an SSL-based service using such self-signed certificates.
For a stronger level of authentication, you can use signed certificates verified by a certification authority. The use of signed certificates involves some additional steps beyond those needed for self-signed certificates: you must first perform the procedures described in this section and then perform the additional steps in Using Signed Certificates.
Message Queue's support for SSL with self-signed certificates is oriented toward securing on-the-wire data, on the assumption that the client is communicating with a known and trusted server. Configuring SSL with self-signed certificates requires configuration on both the broker and client:
Setting Up an SSL-Based Connection Service Using Self-Signed Certificates
Configuring and Running an SSL-Based Client Using Self-Signed Certificates
The following sequence of procedures is needed to set up an SSL-based connection service using self-signed certificates:
Starting with release 4.0, the default value for the client connection factory property imqSSLIsHostTrusted is false. If your application depends on the prior default value of true, you need to reconfigure it and set the property explicitly to true. In particular, old or new clients using self-signed certificates should set this property to true; for example:
java -DimqConnectionType=TLS -DimqSSLIsHostTrusted=true MyApp
The administration tool imqcmd is also affected by this change. In addition to using the -secure option to specify that it uses an SSL-based admin connection service, the imqSSLIsHostTrusted attribute should be set to true when connecting to a broker configured with a self-signed certificate. You can do this as follows:
imqcmd list svc -secure -DimqSSLIsHostTrusted=true
Alternatively, you can import the broker's self-signed certificate into the client runtime trust store. Use the procedure in To Install a Signed Certificate.
Generate a self-signed certificate.
Enable the desired SSL-based connection services in the broker. These can include the ssljms, ssladmin, or cluster connection services.
Start the broker.
Run the Key Tool utility (imqkeytool) to generate a self-signed certificate for the broker. (On Solaris and Linux operating systems, you may need to run the utility as the root user in order to have permission to create the keystore file.) The same certificate can be used for all SSL-based connection services (ssljms, ssladmin, cluster connection services, and the ssljmxrmi connector).
Enter the following at the command prompt:
imqkeytool -broker
The Key Tool utility prompts you for a key store password:
At the prompt, type a keystore password.
The Key Tool utility then prompts you for identifying information from which to construct an X.500 distinguished name. The following table shows the prompts and the values to be provided for each. Values are case-insensitive and can include spaces.
Prompt | X.500 Attribute | Description | Example
---|---|---|---
What is your first and last name? | commonName (CN) | Fully qualified name of server running the broker | mqserver.sun.com
What is the name of your organizational unit? | organizationalUnit (OU) | Name of department or division | purchasing
What is the name of your organization? | organizationName (O) | Name of larger organization, such as a company or government entity | Acme Widgets, Inc.
What is the name of your city or locality? | localityName (L) | Name of city or locality | San Francisco
What is the name of your state or province? | stateName (ST) | Full (unabbreviated) name of state or province | California
What is the two-letter country code for this unit? | country (C) | Standard two-letter country code | US
The Key Tool utility displays the information you entered for confirmation. For example,
Is CN=mqserver.sun.com, OU=purchasing, O=Acme Widgets, Inc., L=San Francisco, ST=California, C=US correct?
Accept the current values and proceed by typing yes.
To reenter values, accept the default or enter no. After you confirm, the utility pauses while it generates a key pair.
The utility asks for a password to lock the key pair (key password).
Press return.
This will set the same password for both the key password and the keystore password.
Be sure to remember the password you specify. You must provide this password when you start the broker, to allow the broker to open the keystore file. You can store the keystore password in a password file (see Password Files).
The Key Tool utility generates a self-signed certificate and places it in Message Queue’s keystore file. The keystore file is located in a directory whose location depends upon the operating system platform, as shown in Appendix A, Platform-Specific Locations of Message Queue Data.
The following are the configurable properties for the Message Queue keystore for SSL-based connection services:
Path to directory containing keystore file (see Appendix A, Platform-Specific Locations of Message Queue Data for default value)
In some circumstances, you may need to regenerate a key pair in order to solve certain problems: for example, if you forget the key store password or if the SSL-based service fails to initialize when you start a broker and you get the exception:
java.security.UnrecoverableKeyException: Cannot recover key
(This exception may result if you provided a key password different from the keystore password when you generated the self-signed certificate.)
Remove the broker’s keystore file.
The file is located as shown in Appendix A, Platform-Specific Locations of Message Queue Data.
Run imqkeytool again.
The command will generate a new key pair, as described above.
To enable an SSL-based connection service in the broker, you need to add the corresponding service or services to the imq.service.activelist property.
Open the broker’s instance configuration file.
The instance configuration file is located in a directory identified by the name of the broker instance (instanceName) with which the configuration file is associated (see Appendix A, Platform-Specific Locations of Message Queue Data):
…/instances/instanceName/props/config.properties
Add an entry (if one does not already exist) for the imq.service.activelist property and include the desired SSL-based service(s) in the list.
By default, the property includes the jms and admin connection services. Add the SSL-based service or services you wish to activate (ssljms, ssladmin, or both):
imq.service.activelist=jms,admin,ssljms,ssladmin
The SSL-based cluster connection service is enabled using the imq.cluster.transport property rather than the imq.service.activelist property (see Cluster Connection Service Properties). To enable SSL for RMI-based JMX connectors, see SSL-Based JMX Connections.
Save and close the instance configuration file.
Start the broker, providing the key store password.
When you start a broker or client with SSL, you may notice a sharp increase in CPU usage for a few seconds. This is because the JSSE (Java Secure Socket Extension) method java.security.SecureRandom, which Message Queue uses to generate random numbers, takes a significant amount of time to create the initial random number seed. Once the seed is created, the CPU usage level will drop to normal.
Start the broker, providing the keystore password.
Put the keystore password in a password file, as described in Password Files, and set the imq.passfile.enabled property to true. You can then do one of the following:
Pass the location of the password file to the imqbrokerd command:
imqbrokerd -passfile /passfileDirectory/passfileName
Start the broker without the -passfile option, but specify the location of the password file using the following two broker configuration properties:
imq.passfile.dirpath=/passfileDirectory
imq.passfile.name=passfileName
If you are not using a password file, enter the keystore password at the prompt.
imqbrokerd
You are prompted for the keystore password.
The procedure for configuring a client to use an SSL-based connection service differs depending on whether it is an application client (using the ssljms connection service) or a Message Queue administrative client such as imqcmd (using the ssladmin connection service).
For application clients, you must make sure the client has the following .jar files specified in its CLASSPATH variable:
imq.jar
jms.jar
Once the CLASSPATH variable is properly set, one way to start the client and connect to the broker’s ssljms connection service is by entering a command like the following:
java -DimqConnectionType=TLS clientAppName
This tells the client runtime to make the connection over an SSL-based connection service.
For administrative clients, you can establish a secure connection by including the -secure option when you invoke the imqcmd command: for example,
imqcmd list svc -b hostName:portNumber -u userName -secure
where userName is a valid ADMIN entry in the Message Queue user repository. The command will prompt you for the password.
Listing the connection services is a way to verify that the ssladmin service is running and that you can successfully make a secure administrative connection, as shown in Example 9–6.
Signed certificates provide a stronger level of server authentication than self-signed certificates. You can implement signed certificates only between a client and broker, and currently not between multiple brokers in a cluster. This requires the following extra procedures in addition to the ones described in Using Self-Signed Certificates. Using signed certificates requires configuration on both the broker and client:
The following procedures explain how to obtain and install a signed certificate.
Use the J2SE keytool command to generate a certificate signing request (CSR) for the self-signed certificate you generated in the preceding section.
Information about the keytool command can be found at
Here is an example:
keytool -certreq -keyalg RSA -alias imq -file certreq.csr -keystore /etc/imq/keystore -storepass myStorePassword
This generates a CSR encapsulating the certificate in the specified file (certreq.csr in the example).
Use the CSR to generate or request a signed certificate.
You can do this by either of the following methods:
Have the certificate signed by a well known certification authority (CA), such as Thawte or Verisign. See your CA’s documentation for more information on how to do this.
Sign the certificate yourself, using an SSL signing software package.
The resulting signed certificate is a sequence of ASCII characters. If you receive the signed certificate from a CA, it may arrive as an e-mail attachment or in the text of a message.
Save the signed certificate in a file.
The instructions below use the example name broker.cer to represent the broker certificate.
Check whether J2SE supports your certification authority by default.
The following command lists the root CAs in the system key store:
keytool -v -list -keystore $JAVA_HOME/lib/security/cacerts
If your CA is listed, skip the next step.
If your certification authority is not supported in J2SE, import the CA’s root certificate into the Message Queue key store.
Here is an example:
keytool -import -alias ca -file ca.cer -noprompt -trustcacerts -keystore /etc/imq/keystore -storepass myStorePassword
where ca.cer is the file containing the root certificate obtained from the CA.
If you are using a CA test certificate, you probably need to import the test CA root certificate. Your CA should have instructions on how to obtain a copy.
Import the signed certificate into the key store to replace the original self-signed certificate.
Here is an example:
keytool -import -alias imq -file broker.cer -noprompt -trustcacerts -keystore /etc/imq/keystore -storepass myStorePassword
where broker.cer is the file containing the signed certificate that you received from the CA.
The Message Queue key store now contains a signed certificate to use for SSL connections.
You must now configure the Message Queue client runtime to require signed certificates, and ensure that it trusts the certification authority that signed the certificate.
By default, starting with release 4.0, the connection factory object that the client will be using to establish broker connections has its imqSSLIsHostTrusted attribute set to false, meaning that the client runtime will attempt to validate all certificates. Validation will fail if the signer of the certificate is not in the client's trust store.
Verify whether the signing authority is registered in the client's trust store.
To test whether the client will accept certificates signed by your certification authority, try to establish an SSL connection, as described above under Configuring and Running an SSL-Based Client Using Self-Signed Certificates. If the CA is in the client's trust store, the connection will succeed and you can skip the next step. If the connection fails with a certificate validation error, go on to the next step.
Install the signing CA’s root certificate in the client’s trust store.
The client searches the key store files cacerts and jssecacerts by default, so no further configuration is necessary if you install the certificate in either of those files. The following example installs a test root certificate, obtained from the Verisign certification authority, from a file named testrootca.cer into the default system certificate file, cacerts. The example assumes that J2SE is installed in the directory /usr/j2se:
keytool -import -keystore /usr/j2se/jre/lib/security/cacerts -alias VerisignTestCA -file testrootca.cer -noprompt -trustcacerts -storepass myStorePassword
An alternative (and recommended) option is to install the root certificate into the alternative system certificate file, jssecacerts:
keytool -import -keystore /usr/j2se/jre/lib/security/jssecacerts -alias VerisignTestCA -file testrootca.cer -noprompt -trustcacerts -storepass myStorePassword
A third possibility is to install the root certificate into some other key store file and configure the client to use that as its trust store. The following example installs into the file /home/smith/.keystore:
keytool -import -keystore /home/smith/.keystore -alias VerisignTestCA -file testrootca.cer -noprompt -trustcacerts -storepass myStorePassword
Since the client does not search this key store by default, you must explicitly provide its location to the client to use as a trust store. You do this by setting the Java system property javax.net.ssl.trustStore when you run the client:
javax.net.ssl.trustStore=/home/smith/.keystore
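For example, one way to supply this property is on the java command line when launching the client (reusing the hypothetical application name MyApp from the earlier example):
java -DimqConnectionType=TLS -Djavax.net.ssl.trustStore=/home/smith/.keystore MyApp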
Several types of command require passwords. Table 9–6 lists the commands that require passwords and the reason that passwords are needed.
Table 9–6 Commands That Use Passwords
Command | Description | Purpose of Password
---|---|---
imqbrokerd | Start broker | Access a JDBC-based persistent data store, an SSL certificate key store, or an LDAP user repository
imqcmd | Manage broker | Authenticate an administrative user who is authorized to use the command
imqdbmgr | Manage JDBC-based data store | Access the data store
You can specify these passwords in a password file and use the -passfile option to specify the name of the file. This is the format for the -passfile option:
imqbrokerd -passfile filePath
In previous versions of Message Queue, you could use the -p, -password, -dbpassword, and -ldappassword options to specify passwords on the command line. As of Message Queue 4.0, these options are deprecated and are no longer supported; you must use a password file instead.
Typing a password interactively, in response to a prompt, is the most secure method of specifying a password (provided that your monitor is not visible to other people). You can also specify a password file on the command line. For non-interactive use of commands, however, you must use a password file.
A password file is unencrypted, so you must set its permissions to protect it from unauthorized access. Set the permissions so that they limit the users who can view the file, but provide read access to the user who starts the broker.
A password file is a simple text file containing a set of properties and values. Each value is a password used by a command. Table 9–7 shows the types of passwords that a password file can contain.
Table 9–7 Passwords in a Password File
Password | Affected Commands | Description
---|---|---
imq.imqcmd.password | imqcmd | Administrator password for Message Queue Command utility (authenticated for each command)
imq.keystore.password | imqbrokerd | Key store password for SSL-based services
imq.persist.jdbc.<vendorName>.password | imqbrokerd, imqdbmgr | Password for opening a database connection, if required
imq.user_repository.ldap.password | imqbrokerd | Password associated with the distinguished name assigned to a broker for binding to a configured LDAP user repository
A sample password file is provided as part of your Message Queue installation; see Appendix A, Platform-Specific Locations of Message Queue Data for the location of this file, depending on your platform.
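For illustration, a password file supplying the administrator and keystore passwords might contain entries like the following; the password values shown (reused from earlier examples) are placeholders only.
imq.imqcmd.password=veeblefetzer
imq.keystore.password=myStorePassword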
When a client application is separated from the broker by a firewall, special measures are needed in order to establish a connection. One approach is to use the httpjms or httpsjms connection service, which can “tunnel” through the firewall; see Appendix C, HTTP/HTTPS Support for details. HTTP connections are slower than other connection services, however; a faster alternative is to bypass the Message Queue Port Mapper and explicitly assign a static port address to the desired connection service, and then open that specific port in the firewall. This approach can be used to connect through a firewall using the jms or ssljms connection service (or, in unusual cases, admin or ssladmin).
Table 9–8 Broker Configuration Properties for Static Port Addresses
Connection Service | Configuration Property
---|---
jms | imq.jms.tcp.port
ssljms | imq.ssljms.tls.port
admin | imq.admin.tcp.port
ssladmin | imq.ssladmin.tls.port
Assign a static port address to the connection service you wish to use.
To bypass the Port Mapper and assign a static port number directly to a connection service, set the broker configuration property imq.serviceName.protocolType.port, where serviceName is the name of the connection service and protocolType is its protocol type (see Table 9–8). As with all broker configuration properties, you can specify this property either in the broker's instance configuration file or from the command line when starting the broker. For example, to assign port number 10234 to the jms connection service, either include the line
imq.jms.tcp.port=10234
in the configuration file or start the broker with the command
imqbrokerd -name brokerName -Dimq.jms.tcp.port=10234
where brokerName is the name of the broker to be started.
Configure the firewall to allow connections to the port number you assigned to the connection service.
You must also allow connections through the firewall to Message Queue's Port Mapper port (normally 7676, unless you have reassigned it to some other port). In the example above, for instance, you would need to open the firewall for ports 10234 and 7676.
Message Queue supports the use of broker clusters: groups of brokers working together to provide message delivery services to clients. Clusters enable a message service to scale its operations to meet an increasing volume of message traffic by distributing client connections among multiple brokers.
In addition, clusters provide for message service availability. In the case of a conventional cluster, if a broker fails, clients connected to that broker can reconnect to another broker in the cluster and continue producing and consuming messages. In the case of an enhanced cluster, if a broker fails, clients connected to that broker reconnect to a failover broker that takes over the pending work of the failed broker, delivering messages without interruption of service.
See the Chapter 4, Broker Clusters, in Sun Java System Message Queue 4.3 Technical Overview for a description of conventional and enhanced broker clusters and how they operate.
This chapter describes how to configure and manage both conventional and enhanced broker clusters:
You create a broker cluster by specifying cluster configuration properties for each of its member brokers. Except where noted in this chapter, cluster configuration properties must be set to the same value for each broker in a cluster. This section introduces these properties and the use of a cluster configuration file to specify them:
Like all broker properties, cluster configuration properties can be set individually for each broker in a cluster, either in its instance configuration file (config.properties) or by using the -D option on the command line when you start the broker. However, except where noted in this chapter, each cluster configuration property must be set to the same value for each broker in a cluster.
For example, to specify the transport protocol for the cluster connection service, you can include the following property in the instance configuration file for each broker in the cluster: imq.cluster.transport=ssl. If you need to change the value of this property, you must change its value in the instance configuration file for every broker in the cluster.
For consistency and ease of maintenance, it is generally more convenient to collect all of the common cluster configuration properties into a central cluster configuration file that all of the individual brokers in a cluster reference. Using a cluster configuration file prevents the settings from getting out of synch and ensures that all brokers in a cluster use the same, consistent cluster configuration information.
When using a cluster configuration file, each broker’s instance configuration file must point to the location of the cluster configuration file by setting the imq.cluster.url property. For example,
imq.cluster.url=file:/home/cluster.properties
A cluster configuration file can also include broker properties that are not used specifically for cluster configuration. For example, you can place any broker property in the cluster configuration file that has the same value for all brokers in a cluster. For more information, see Connecting Brokers into a Conventional Cluster.
This section reviews the most important cluster configuration properties, grouped into the following categories:
A complete list of cluster configuration properties can be found in Table 16–11.
The following properties are used to configure the cluster connection service used for internal communication between brokers in the cluster. These properties are used by both conventional and enhanced clusters.
imq.cluster.transport specifies the transport protocol used by the cluster connection service, such as tcp or ssl.
imq.cluster.port specifies the port number for the cluster connection service. You might need to set this property, for instance, to specify a static port number for connecting to the broker through a firewall.
imq.cluster.hostname specifies the host name or IP address for the cluster connection service, used for internal communication between brokers in the cluster. The default setting works fine; however, explicitly setting the property can be useful if there is more than one network interface card installed in a computer. If you set the value of this property to localhost, the value will be ignored and the default will be used.
The following properties, in addition to those listed in Cluster Connection Service Properties, are used to configure conventional clusters:
imq.cluster.brokerlist specifies a list of broker addresses defining the membership of the cluster; all brokers in the cluster must have the same value for this property.
For example, to create a conventional cluster consisting of brokers at port 9876 on host1, port 5000 on host2, and the default port (7676) on ctrlhost, use the following value:
imq.cluster.brokerlist=host1:9876,host2:5000,ctrlhost
imq.cluster.masterbroker specifies which broker in a conventional cluster is the master broker that maintains the configuration change record that tracks the addition and deletion of destinations and durable subscribers. For example:
imq.cluster.masterbroker=host2:5000
While specifying a master broker using the imq.cluster.masterbroker property is not mandatory for a conventional cluster to function, it guarantees that persistent information propagated across brokers (destinations and durable subscriptions) is always synchronized. See Conventional Clusters in Sun Java System Message Queue 4.3 Technical Overview.
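Putting these properties together, the cluster configuration file referenced earlier (for example, /home/cluster.properties) might contain the following entries for the conventional cluster used in the preceding examples, with the broker at host2:5000 acting as master broker:
imq.cluster.brokerlist=host1:9876,host2:5000,ctrlhost
imq.cluster.masterbroker=host2:5000
imq.cluster.transport=tcp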
Enhanced broker clusters, which share a JDBC-based data store, require more configuration than do conventional broker clusters. In addition to the properties listed in Cluster Connection Service Properties, the following categories of properties are used to configure an enhanced cluster:
imq.cluster.ha is a boolean value that specifies whether the cluster is an enhanced cluster (true) or a conventional cluster (false). The default value is false.
If set to true, mechanisms for failure detection and takeover of a failed broker are enabled. Enhanced clusters are self-configuring: any broker configured to use the cluster’s shared data store is automatically registered as part of the cluster, without further action on your part. If the conventional cluster property, imq.cluster.brokerlist, is specified for a high-availability broker, the property is ignored and a warning message is logged at broker startup.
imq.persist.store specifies the model for a broker's persistent data store. This property must be set to the value jdbc for every broker in an enhanced cluster.
imq.cluster.clusterid specifies the cluster identifier, which will be appended to the names of all database tables in the cluster’s shared persistent store. The value of this property must be the same for all brokers in a given cluster, but must be unique for each cluster: no two running clusters may have the same cluster identifier.
imq.brokerid is a broker identifier that must be unique for each broker in the cluster. Hence, this property must be set in each broker's instance configuration file rather than in a cluster configuration file.
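As a sketch, these enhanced-cluster entries might be divided between the shared cluster configuration file and each broker's own config.properties file as follows; the cluster identifier myCluster and the broker identifier brokerA are hypothetical.
In the cluster configuration file:
imq.cluster.ha=true
imq.persist.store=jdbc
imq.cluster.clusterid=myCluster
In each broker's instance configuration file (a different value for each broker):
imq.brokerid=brokerA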
The persistent data store for an enhanced cluster is maintained on a highly available JDBC database.
The highly available database may be Sun’s MySQL Cluster Edition or High Availability Session Store (HADB), or it may be an open-source or third-party product such as Oracle Corporation’s Real Application Clusters (RAC). As described in JDBC-Based Persistence Properties, the imq.persist.jdbc.dbVendor broker property specifies the name of the database vendor, and all of the remaining JDBC-related properties are qualified with this vendor name.
The JDBC-related properties are discussed under JDBC-Based Persistence Properties and summarized in Table 16–6. See the example configurations for MySQL and HADB in Example 8–1 and Example 8–2, respectively.
In setting JDBC-related properties for an enhanced cluster, note the following vendor-specific issues:
MySQL Cluster Edition
When using MySQL Cluster Edition as a highly-available database, you must specify the NDB Storage Engine rather than the InnoDB Storage Engine set by Message Queue by default. To specify the NDB Storage Engine, set the following broker property for all brokers in the cluster:
imq.persist.jdbc.mysql.tableoption=ENGINE=NDBCLUSTER
HADB
When using HADB in a Sun Java Application Server environment, if the integration between Message Queue and Application Server is local (that is, there is a one-to-one relationship between Application Server instances and Message Queue brokers), the Application Server will automatically propagate HADB-related properties to each broker in the cluster. However, if the integration is remote (a single Application Server instance using an externally configured broker cluster), then it is your responsibility to configure the needed HADB properties explicitly.
The following configuration properties (listed in Table 16–11) specify the parameters for the exchange of heartbeat and status information within an enhanced cluster:
imq.cluster.heartbeat.hostname specifies the host name (or IP address) for the heartbeat connection service.
imq.cluster.heartbeat.port specifies the port number for the heartbeat connection service.
imq.cluster.heartbeat.interval specifies the interval, in seconds, at which heartbeat packets are transmitted.
imq.cluster.heartbeat.threshold specifies the number of missed heartbeat intervals after which a broker is considered suspect of failure.
imq.cluster.monitor.interval specifies the interval, in seconds, at which to monitor a suspect broker’s state information to determine whether it has failed.
imq.cluster.monitor.threshold specifies the number of elapsed monitor intervals after which a suspect broker is considered to have failed.
Smaller values for these heartbeat and monitoring intervals will result in quicker reaction to broker failure, but at the cost of reduced performance and increased likelihood of false suspicions and erroneous failure detection.
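For example, a hypothetical tuning of these properties might look like the following sketch; the values shown are illustrative only and should be weighed against the performance considerations noted above:
imq.cluster.heartbeat.interval=2
imq.cluster.heartbeat.threshold=3
imq.cluster.monitor.interval=30
imq.cluster.monitor.threshold=2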
To display information about a cluster’s configuration, use the Command utility’s list bkr subcommand:
imqcmd list bkr
This lists the current state of all brokers included in the cluster to which a given broker belongs. The broker states are described in the following table:
Table 10–1 Broker States
The results of the imqcmd list bkr command are shown in Example 10–1 (for a conventional cluster) and Example 10–2 (for an enhanced cluster).
The following sections describe how to perform various administrative management tasks for conventional and enhanced clusters, respectively.
The procedures in this section show how to perform the following tasks for a conventional cluster:
There are two general methods of connecting brokers into a conventional cluster: from the command line (using the -cluster option) or by setting the imq.cluster.brokerlist property in the cluster configuration file.
Whichever method you use, each broker that you start attempts to connect to the other brokers in the cluster every five seconds; the connection will succeed once the master broker is started up (if one is configured). If a broker in the cluster starts before the master broker, it will remain in a suspended state, rejecting client connections, until the master broker starts; the suspended broker then will automatically become fully functional. It is therefore a good idea to start the master broker first and then the others, after the master broker has completed its startup.
When connecting brokers into a conventional cluster, you should be aware of the following issues:
Mixed broker versions. A conventional cluster can contain brokers of different versions if all brokers have a version at least as great as that of the master broker. If the cluster is not configured to use a master broker, then all brokers must be of the same version.
Matching broker property values. In addition to cluster configuration properties, the following broker properties also must have the same value for all brokers in a cluster:
imq.service.activelist
imq.autocreate.queue
imq.autocreate.topic
imq.autocreate.queue.maxNumActiveConsumers
imq.autocreate.queue.maxNumBackupConsumers
This restriction is particularly important when a cluster contains mixed broker versions whose properties might have different default values. For example, if you are clustering a Message Queue 4.1 or later broker with brokers from versions earlier than 4.1, you must explicitly set the imq.autocreate.queue.maxNumActiveConsumers property, which has different default values before and after version 4.1 (1 and -1, respectively), to the same value on all brokers. Otherwise the brokers will not be able to establish a cluster connection.
Multiple interface cards. On a multi-homed computer, in which there is more than one network interface card, be sure to explicitly set the network interface to be used by the broker for client connection services (imq.hostname) and for the cluster connection service (imq.cluster.hostname). If imq.cluster.hostname is not set, then connections between brokers might not succeed and as a result, the cluster will not be established.
Network loopback IP address. You must make sure that no broker in the cluster is given an address that resolves to the network loopback IP address (127.0.0.1). Any broker configured with this address will be unable to connect to other brokers in the cluster.
In particular, some Linux installers automatically set the localhost entry to the network loopback address. On such systems, you must modify the system IP address so that all brokers in the cluster can be addressed properly. For each Linux system participating in the cluster, check the /etc/hosts file as part of cluster setup. If the system uses a static IP address, edit the /etc/hosts file to specify the correct address for localhost. If the address is registered with Domain Name Service (DNS), edit the /etc/nsswitch.conf file to change the order of the entries so that DNS lookup is performed before consulting the local hosts file. The line in /etc/nsswitch.conf should read as follows:
hosts: dns files
The method best suited for production systems is to use a cluster configuration file to specify the configuration of the cluster:
Create a cluster configuration file that uses the imq.cluster.brokerlist property to specify the list of brokers to be connected.
If you are using a master broker, identify it with the imq.cluster.masterbroker property in the configuration file.
For each broker in the cluster, set the imq.cluster.url property in the broker’s instance configuration file to point to the cluster configuration file.
Use the imqbrokerd command to start each broker.
If there is a master broker, start it first, then the others after it has completed its startup.
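For illustration, a minimal cluster configuration file for a three-broker cluster might contain lines like the following; the host names, ports, and choice of master broker are hypothetical:
imq.cluster.brokerlist=host1:7676,host2:5000,host3:9876
imq.cluster.masterbroker=host1:7676
Each broker’s instance configuration file would then point to this file with a line such as imq.cluster.url=file:///home/mqadmin/cluster.properties (the path is likewise hypothetical).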
If you are using a master broker, start it with the imqbrokerd command, using the -cluster option to specify the complete list of brokers to be included in the cluster.
For example, the following command starts the broker as part of a cluster consisting of the brokers running at the default port (7676) on host1, at port 5000 on host2, and at port 9876 on the default host (localhost):
imqbrokerd -cluster host1,host2:5000,:9876
Once the master broker (if any) is running, start each of the other brokers in the cluster with the imqbrokerd command, using the same list of brokers with the -cluster option that you used for the master broker.
The value specified for the -cluster option must be the same for all brokers in the cluster.
If you want secure, encrypted message delivery between brokers in a cluster, configure the cluster connection service to use an SSL-based transport protocol:
For each broker in the cluster, set up SSL-based connection services, as described in Message Encryption.
Set each broker’s imq.cluster.transport property to ssl, either in the cluster configuration file or individually for each broker.
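In a cluster configuration file, for example, this amounts to a single line:
imq.cluster.transport=ssl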
The procedure for adding a new broker to a conventional cluster depends on whether the cluster uses a cluster configuration file.
Add the new broker to the imq.cluster.brokerlist property in the cluster configuration file.
Issue the following command to any broker in the cluster:
imqcmd reload cls
This forces each broker to reload the imq.cluster.brokerlist property. It is not necessary to issue this command to every broker in the cluster; executing it for any one broker will cause all of them to reload the cluster configuration.
(Optional) Set the value of the imq.cluster.url property in the new broker’s instance configuration file (config.properties) to point to the cluster configuration file.
Start the new broker.
If you did not perform step 3, use the -D option on the imqbrokerd command line to set the value of imq.cluster.url to the location of the cluster configuration file.
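For example (the file location shown is hypothetical):
imqbrokerd -Dimq.cluster.url=file:///home/mqadmin/cluster.properties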
(Optional) Set the values of the following properties in the new broker’s instance configuration file (config.properties) :
imq.cluster.masterbroker (if necessary)
imq.cluster.transport (if you are using a secure cluster connection service)
When the newly added broker starts, it connects and exchanges data with all the other brokers in the imq.cluster.brokerlist value.
Modify the imq.cluster.brokerlist property of other brokers in the cluster to include the new broker.
This step is not strictly necessary to add a broker to a functioning cluster. However, should any broker need to be restarted, its imq.cluster.brokerlist value must include all other brokers in the cluster, including the newly added broker.
Start the new broker.
If you did not perform step 1, use the -D option on the imqbrokerd command line to set the property values listed there.
The method you use to remove a broker from a conventional cluster depends on whether you originally created the cluster using a cluster configuration file or by means of command line options.
If you originally created a cluster by specifying its member brokers with the imq.cluster.brokerlist property in a central cluster configuration file, it isn’t necessary to stop the brokers in order to remove one of them. Instead, you can simply edit the configuration file to exclude the broker you want to remove, force the remaining cluster members to reload the cluster configuration, and reconfigure the excluded broker so that it no longer points to the same cluster configuration file:
Edit the cluster configuration file to remove the excluded broker from the list specified for the imq.cluster.brokerlist property.
Issue the following command to each broker remaining in the cluster:
imqcmd reload cls
This forces the brokers to reload the cluster configuration.
Stop the broker you’re removing from the cluster.
Edit that broker’s instance configuration file (config.properties), removing or specifying a different value for its imq.cluster.url property.
If you used the imqbrokerd command from the command line to connect the brokers into a cluster, you must stop each of the brokers and then restart them, specifying the new set of cluster members on the command line:
Stop each broker in the cluster, using the imqcmd command.
Restart the brokers that will remain in the cluster, using the imqbrokerd command’s -cluster option to specify only those remaining brokers.
For example, suppose you originally created a cluster consisting of brokers A, B, and C by starting each of the three with the command
imqbrokerd -cluster A,B,C
To remove broker A from the cluster, restart brokers B and C with the command
imqbrokerd -cluster B,C
As noted earlier, a conventional cluster can optionally have one master broker, which maintains a configuration change record to keep track of any changes in the cluster’s persistent state. The master broker is identified by the imq.cluster.masterbroker configuration property, either in the cluster configuration file or in the instance configuration files of the individual brokers.
Because of the important information that the configuration change record contains, you should back it up regularly so that it can be restored in case of failure. Although restoring from a backup will lose any changes in the cluster’s persistent state that have occurred since the backup was made, frequent backups can minimize this potential loss of information. The backup and restore operations also have the positive effect of compressing and optimizing the change history contained in the configuration change record, which can grow significantly over time.
Use the -backup option of the imqbrokerd command, specifying the name of the backup file.
For example:
imqbrokerd -backup mybackuplog
Shut down all brokers in the cluster.
Restore the master broker’s configuration change record from the backup file.
imqbrokerd -restore mybackuplog
If you assign a new name or port number to the master broker, update the imq.cluster.brokerlist and imq.cluster.masterbroker properties accordingly in the cluster configuration file.
Restart all brokers in the cluster.
This section presents step-by-step procedures for performing a variety of administrative tasks for an enhanced cluster:
Because enhanced clusters are self-configuring, there is no need to explicitly specify the list of brokers to be included in the cluster. Instead, all that is needed is to set each broker’s configuration properties appropriately and then start the broker; as long as its properties are set properly, it will automatically be incorporated into the cluster. Enhanced Broker Cluster Properties describes the required properties, which include vendor-specific JDBC database properties.
In addition to creating an enhanced cluster as described in this section, you must also configure clients to successfully reconnect to a failover broker in the event of broker or connection failure. You do this by setting the imqReconnectAttempts connection factory attribute to a value of -1.
The property values needed for brokers in an enhanced cluster can be set separately in each broker’s instance configuration file, or they can be specified in a cluster configuration file that all the brokers reference. The procedures are as follows:
The method best suited for production systems is to use a cluster configuration file to specify the configuration of the cluster.
Create a cluster configuration file specifying the cluster’s high-availability-related configuration properties.
Enhanced Broker Cluster Properties shows the required property values. However, do not include the imq.brokerid property in the cluster configuration file; this must be specified separately for each individual broker in the cluster.
Specify any additional, vendor-specific JDBC configuration properties that might be needed.
The vendor-specific properties required for MySQL and HADB are shown in Example 8–1 and Example 8–2, respectively.
For each broker in the cluster:
Start the broker at least once, using the imqbrokerd command.
The first time a broker instance is run, an instance configuration file (config.properties) is automatically created.
Shut down the broker.
Use the imqcmd shutdown bkr command.
Edit the instance configuration file to specify the location of the cluster configuration file.
In the broker’s instance configuration file, set the imq.cluster.url property to point to the location of the cluster configuration file you created in step 1.
Specify the broker identifier.
Set the imq.brokerid property in the instance configuration file to the broker’s unique broker identifier. This value must be different for each broker.
Place a copy of, or a symbolic link to, your JDBC driver’s .jar file in the Message Queue external resource files directory, depending on your platform (see Appendix A, Platform-Specific Locations of Message Queue Data):
Solaris: /usr/share/lib/imq/ext
Linux: /opt/sun/mq/share/lib/ext
AIX: IMQ_HOME/lib/ext
Windows: IMQ_HOME\lib\ext
Create the database schema needed for Message Queue persistence.
Use the imqdbmgr create tbl command; see Database Manager Utility.
Restart each broker with the imqbrokerd command.
The brokers will automatically register themselves into the cluster on startup.
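With this arrangement, the cluster-related portion of each broker’s instance configuration file can be as small as two lines; the file location and broker identifier shown here are hypothetical:
imq.cluster.url=file:///home/mqadmin/cluster.properties
imq.brokerid=brokerA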
For each broker in the cluster:
Start the broker at least once, using the imqbrokerd command.
The first time a broker instance is run, an instance configuration file (config.properties) is automatically created.
Shut down the broker.
Use the imqcmd shutdown bkr command.
Edit the instance configuration file to specify the broker’s high-availability-related configuration properties.
Enhanced Broker Cluster Properties shows the required property values. Be sure to set the brokerid property uniquely for each broker.
Specify any additional, vendor-specific JDBC configuration properties that might be needed.
The vendor-specific properties required for MySQL and HADB are shown in Example 8–1 and Example 8–2, respectively.
Place a copy of, or a symbolic link to, your JDBC driver’s .jar file in the Message Queue external resource files directory, depending on your platform (see Appendix A, Platform-Specific Locations of Message Queue Data):
Solaris: /usr/share/lib/imq/ext
Linux: /opt/sun/mq/share/lib/ext
AIX: IMQ_HOME/lib/ext
Windows: IMQ_HOME\lib\ext
Create the database schema needed for Message Queue persistence.
Use the imqdbmgr create tbl command; see Database Manager Utility.
Restart each broker with the imqbrokerd command.
The brokers will automatically register themselves into the cluster on startup.
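As a rough sketch, the high-availability-related portion of a broker’s instance configuration file might then look like the following. The property names are those described under Enhanced Broker Cluster Properties; the values, and the additional vendor-specific JDBC properties omitted here, are placeholders:
imq.cluster.ha=true
imq.cluster.clusterid=myCluster
imq.brokerid=brokerA
imq.persist.store=jdbc
imq.persist.jdbc.dbVendor=mysql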
Because enhanced clusters are self-configuring, the procedures for adding and removing brokers are simpler than for a conventional cluster.
Set the new broker’s high-availability-related properties, as described in the preceding section.
You can do this either by specifying the individual properties in the broker’s instance configuration file (config.properties) or, if there is a cluster configuration file, by setting the broker’s imq.cluster.url property to point to it.
Start the new broker with the imqbrokerd command.
The broker will automatically register itself into the cluster on startup.
Make sure the broker is not running.
If necessary, use the command
imqcmd shutdown bkr
to shut down the broker.
Remove the broker from the cluster with the command
imqdbmgr remove bkr
This command deletes all database tables for the corresponding broker.
Although the takeover of a failed broker’s persistent data by a failover broker in an enhanced cluster is normally automatic, there may be times when you want to prevent such failover from occurring. To suppress automatic failover when shutting down a broker, use the -nofailover option to the imqcmd shutdown bkr subcommand:
imqcmd shutdown bkr -nofailover -b hostName:portNumber
where hostName and portNumber are the host name and port number of the broker to be shut down.
Conversely, you may sometimes need to force a broker failover to occur manually. (This might be necessary, for instance, if a failover broker were to itself fail before completing the takeover process.) In such cases, you can initiate a failover manually from the command line: first shut down the broker to be taken over with the -nofailover option, as shown above, then issue the command
imqcmd takeover bkr -n brokerID
where brokerID is the broker identifier of the broker to be taken over. If the specified broker appears to be running, the Command utility will display a confirmation message:
The broker associated with brokerID last accessed the database # seconds ago. Do you want to take over for this broker?
You can suppress this message, and force the takeover to occur unconditionally, by using the -f option to the imqcmd takeover bkr command:
imqcmd takeover bkr -f -n brokerID
The imqcmd takeover bkr subcommand is intended only for use in failed-takeover situations. You should use it only as a last resort, and not as a general way of forcibly taking over a running broker.
For durability and reliability, it is a good idea to back up an enhanced cluster’s shared data store periodically to backup files. This creates a snapshot of the data store that you can then use to restore the data in case of catastrophic failure. The command for backing up the data store is
imqdbmgr backup -dir backupDir
where backupDir is the path to the directory in which to place the backup files. To restore the data store from these files, use the command
imqdbmgr restore -restore backupDir
The best approach to converting a conventional broker cluster to an enhanced broker cluster is to drain your messaging system of all persistent data before attempting the conversion. This lets you create a new shared data store without worrying about loss of data. However, if you are using individual JDBC-based data stores for your brokers, a utility is available for converting a standalone data store to a shared data store.
If the brokers in your conventional cluster are using file-based data stores, use the following procedure to convert to an enhanced cluster.
Drain your messaging system of all persistent data.
Stop all producer clients from producing messages, and wait for all messages in the system to be consumed.
Shut down all client applications.
Shut down all brokers in the conventional cluster.
Reconfigure all brokers for an enhanced cluster.
See Enhanced Broker Cluster Properties. It is recommended that you use a cluster configuration file to specify cluster configuration property values, such as the imq.cluster.clusterid, imq.persist.store, and additional shared JDBC database properties.
Start all brokers in the enhanced cluster.
Configure client applications to reconnect to failover brokers.
Client reconnection behavior is specified by the connection handling attributes of the connection factory administered objects (see Connection Handling). In the case of enhanced broker clusters, the imqAddressList, imqAddressListBehavior, and imqAddressListIterations attributes are ignored; the imqReconnectAttempts attribute, however, should be set to a value of -1 (unlimited).
Start all client applications.
Resume messaging operations.
If the brokers in your conventional cluster are using JDBC-based data stores, use the following procedure to convert to an enhanced cluster. The procedure assumes that individual standalone broker data stores reside on the same JDBC database server.
Back up all persistent data in the standalone JDBC-based data store of each broker.
Use proprietary JDBC database tools.
Shut down all client applications.
Shut down all brokers in the conventional cluster.
Convert each standalone data store to a shared data store.
Use the Message Queue Database Manager utility (imqdbmgr) subcommand
imqdbmgr upgrade hastore
to convert an existing standalone JDBC database to a shared JDBC database.
Reconfigure all brokers for an enhanced cluster.
See Enhanced Broker Cluster Properties. It is recommended that you use a cluster configuration file to specify cluster configuration property values, such as the imq.cluster.clusterid, imq.persist.store, and additional shared JDBC database properties.
Start all brokers in the enhanced cluster.
Configure client applications to reconnect to failover brokers.
Client reconnection behavior is specified by the connection handling attributes of the connection factory administered objects (see Connection Handling). In the case of enhanced broker clusters, the imqAddressList, imqAddressListBehavior, and imqAddressListIterations attributes are ignored; the imqReconnectAttempts attribute, however, should be set to a value of -1 (unlimited).
Start all client applications.
Resume messaging operations.
Administered objects encapsulate provider-specific configuration and naming information, enabling the development of client applications that are portable from one JMS provider to another. A Message Queue administrator typically creates administered objects for client applications to use in obtaining broker connections for sending and receiving messages.
This chapter tells how to use the Object Manager utility (imqobjmgr) to create and manage administered objects. It contains the following sections:
Administered objects are placed in a readily available object store where they can be accessed by client applications by means of the Java Naming and Directory Interface (JNDI). There are two types of object store you can use: a standard Lightweight Directory Access Protocol (LDAP) directory server or a directory in the local file system.
An LDAP server is the recommended object store for production messaging systems. LDAP servers are designed for use in distributed systems and provide security features that are useful in production environments.
LDAP implementations are available from a number of vendors. To manage an object store on an LDAP server with Message Queue administration tools, you may first need to configure the server to store Java objects and perform JNDI lookups; see the documentation provided with your LDAP implementation for details.
To use an LDAP server as your object store, you must specify the attributes shown in Table 11–1. These attributes fall into the following categories:
Initial context. The java.naming.factory.initial attribute specifies the initial context for JNDI lookups on the server. The value of this attribute is fixed for a given LDAP object store.
Location. The java.naming.provider.url attribute specifies the URL and directory path for the LDAP server. You must verify that the specified directory path exists.
Security. The java.naming.security.principal, java.naming.security.credentials, and java.naming.security.authentication attributes govern the authentication of callers attempting to access the object store. The exact format and values of these attributes depend on the LDAP service provider; see the documentation provided with your LDAP implementation for details and to determine whether security information is required on all operations or only on those that change the stored data.
Message Queue also supports the use of a directory in the local file system as an object store for administered objects. While this approach is not recommended for production systems, it has the advantage of being very easy to use in development environments. Note, however, that for a directory to be used as a centralized object store for clients deployed across multiple computer nodes, all of those clients must have access to the directory. In addition, any user with access to the directory can use Message Queue administration tools to create and manage administered objects.
To use a file-system directory as your object store, you must specify the attributes shown in Table 11–2. These attributes have the same general meanings described above for LDAP object stores; in particular, the java.naming.provider.url attribute specifies the directory path of the directory holding the object store. This directory must exist and have the proper access permissions for the user of Message Queue administration tools as well as the users of the client applications that will access the store.
Table 11–2 File-system Object Store Attributes
Attribute | Description
---|---
java.naming.factory.initial | Initial context for JNDI lookup. Example: com.sun.jndi.fscontext.RefFSContextFactory
java.naming.provider.url | Directory path. Example: file:///C:/myapp/mqobjs
Message Queue administered objects are of two basic kinds:
Connection factories are used by client applications to create connections to brokers.
Destinations represent locations on a broker with which client applications can exchange (send and retrieve) messages.
Each type of administered object has certain attributes that determine the object’s properties and behavior. This section describes how to use the Object Manager command line utility (imqobjmgr) to set these attributes; you can also set them with the GUI Administration Console, as described in Working With Administered Objects.
Client applications use connection factory administered objects to create connections with which to exchange messages with a broker. A connection factory’s attributes define the properties of all connections it creates. Once a connection has been created, its properties cannot be changed; thus the only way to configure a connection’s properties is by setting the attributes of the connection factory used to create it.
Message Queue defines two classes of connection factory objects:
ConnectionFactory objects support normal messaging and nondistributed transactions.
XAConnectionFactory objects support distributed transactions.
Both classes share the same configuration attributes, which you can use to optimize resources, performance, and message throughput. These attributes are listed and described in detail in Chapter 18, Administered Object Attribute Reference, and are discussed in the sections that follow:
Connection handling attributes specify the broker address to which to connect and, if required, how to detect connection failure and attempt reconnection. They are summarized in Table 18–1.
The most important connection handling attribute is imqAddressList, which specifies the broker or brokers to which to establish a connection. The value of this attribute is a string containing a broker address or (in the case of a broker cluster) multiple addresses separated by commas. Broker addresses can use a variety of addressing schemes, depending on the connection service to be used (see Configuring Connection Services) and the method of establishing a connection:
mq uses the broker’s Port Mapper to assign a port dynamically for either the jms or ssljms connection service.
mqtcp bypasses the Port Mapper and connects directly to a specified port, using the jms connection service.
mqssl makes a Secure Socket Layer (SSL) connection to a specified port, using the ssljms connection service.
http makes a Hypertext Transport Protocol (HTTP) connection to a Message Queue tunnel servlet at a specified URL, using the httpjms connection service.
https makes a Secure Hypertext Transport Protocol (HTTPS) connection to a Message Queue tunnel servlet at a specified URL, using the httpsjms connection service.
These addressing schemes are summarized in Table 18–2.
The general format for each broker address is
scheme://address
where scheme is one of the addressing schemes listed above and address denotes the broker address itself. The exact syntax for specifying the address varies depending on the addressing scheme, as shown in the “Description” column of Table 18–2. Table 18–3 shows examples of the various address formats.
In a multiple-broker cluster environment, the address list can contain more than one broker address. If the first connection attempt fails, the Message Queue client runtime will attempt to connect to another address in the list, and so on until the list is exhausted. Two additional connection factory attributes control the way this is done:
imqAddressListBehavior specifies the order in which to try the specified addresses. If this attribute is set to the string PRIORITY, addresses will be tried in the order in which they appear in the address list. If the attribute value is RANDOM, the addresses will instead be tried in random order; this is useful, for instance, when many Message Queue clients are sharing the same connection factory object, to prevent them from all attempting to connect to the same broker address.
imqAddressListIterations specifies how many times to cycle through the list before giving up and reporting failure. A value of -1 denotes an unlimited number of iterations: the client runtime will keep trying until it succeeds in establishing a connection or until the end of time, whichever occurs first.
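For instance, a connection factory intended for a three-broker conventional cluster might be given attribute values along the following lines; the host names are placeholders, and the addresses shown use the mq addressing scheme (see Table 18–3 for the exact formats):
imqAddressList=mq://host1:7676/jms,mq://host2:7676/jms,mq://host3:7676/jms
imqAddressListBehavior=RANDOM
imqAddressListIterations=3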
Because enhanced clusters are self-configuring (see Cluster Configuration Properties and Connecting Brokers into an Enhanced Cluster), their membership can change over time as brokers enter and leave the cluster. In this type of cluster, the value of each member broker’s imqAddressList attribute is updated dynamically so that it always reflects the cluster’s current membership.
By setting certain connection factory attributes, you can configure a client to reconnect automatically to a broker in the event of a failed connection. For standalone brokers or those belonging to a conventional broker cluster (see Conventional Clusters in Sun Java System Message Queue 4.3 Technical Overview), you enable this behavior by setting the connection factory’s imqReconnectEnabled attribute to true. The imqReconnectAttempts attribute controls the number of reconnection attempts to a given broker address; imqReconnectInterval specifies the interval, in milliseconds, to wait between attempts.
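For example, to have the client runtime retry each broker address five times at three-second intervals, you might set the following attribute values (the numbers are illustrative):
imqReconnectEnabled=true
imqReconnectAttempts=5
imqReconnectInterval=3000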
If the broker is part of a conventional cluster, the failed connection can be restored not only on the original broker, but also on a different one in the cluster. If reconnection to the original broker fails, the client runtime will try the other addresses in the connection factory’s broker address list (imqAddressList). The imqAddressListBehavior and imqAddressListIterations attributes control the order in which addresses are tried and the number of iterations through the list, as described in the preceding section. Each address is tried repeatedly at intervals of imqReconnectInterval milliseconds, up to the maximum number of attempts specified by imqReconnectAttempts.
Note, however, that in a conventional cluster, such automatic reconnection only provides connection failover and not data failover: persistent messages and other state information held by a failed or disconnected broker can be lost when the client is reconnected to a different broker instance. While attempting to reestablish a connection, Message Queue does maintain objects (such as sessions, message consumers, and message producers) provided by the client runtime. Temporary destinations are also maintained for a time when a connection fails, because clients might reconnect and access them again; after giving clients time to reconnect and use these destinations, the broker will delete them. In circumstances where the client-side state cannot be fully restored on the broker on reconnection (for instance, when using transacted sessions, which exist only for the duration of a connection), automatic reconnection will not take place and the connection’s exception handler will be called instead. It is then up to the client application to catch the exception, reconnect, and restore state.
By contrast, in an enhanced cluster (see High-Availability Clusters in Sun Java System Message Queue 4.3 Technical Overview), another broker can take over a failed broker’s persistent state and proceed to deliver its pending messages without interruption of service. In this type of cluster, automatic reconnection is always enabled. The connection factory’s imqReconnectEnabled, imqAddressList, and imqAddressListIterations attributes are ignored. The client runtime is automatically redirected to the failover broker. Because there might be a short time lag during which the failover broker takes over from the failed broker, the imqReconnectAttempts connection factory attribute should be set to a value of -1 (client runtime continues connect attempts until successful).
Automatic reconnection supports all client acknowledgment modes for message consumption. Once a connection has been reestablished, the broker will redeliver all unacknowledged messages it had previously delivered, marking them with a Redeliver flag. Client applications can use this flag to determine whether a message has already been consumed but not yet acknowledged. (In the case of nondurable subscribers, however, the broker does not hold messages once their connections have been closed. Thus any messages produced for such subscribers while the connection is down cannot be delivered after reconnection and will be lost.) Message production is blocked while automatic reconnection is in progress; message producers cannot send messages to the broker until after the connection has been reestablished.
The Message Queue client runtime can be configured to periodically test, or “ping,” a connection, allowing connection failures to be detected preemptively before an attempted message transmission fails. Such testing is particularly important for client applications that only consume messages and do not produce them, since such applications cannot otherwise detect when a connection has failed. Clients that produce messages only infrequently can also benefit from this feature.
The connection factory attribute imqPingInterval specifies the frequency, in seconds, with which to ping a connection. By default, this interval is set to 30 seconds; a value of -1 disables the ping operation.
The response to an unsuccessful ping varies from one operating-system platform to another. On some platforms, an exception is immediately thrown to the client application’s exception listener. (If the client does not have an exception listener, its next attempt to use the connection will fail.) Other platforms may continue trying to establish a connection to the broker, buffering successive pings until one succeeds or the buffer overflows.
The connection factory attributes listed in Table 18–4 support client authentication and the setting of client identifiers for durable subscribers.
All attempts to connect to a broker must be authenticated by user name and password against a user repository maintained by the message service. The connection factory attributes imqDefaultUsername and imqDefaultPassword specify a default user name and password to be used if the client does not supply them explicitly when creating a connection.
As a convenience for developers who do not wish to bother populating a user repository during application development and testing, Message Queue provides a guest user account with user name and password both equal to guest. This is also the default value for the imqDefaultUsername and imqDefaultPassword attributes, so that if they are not specified explicitly, clients can always obtain a connection under the guest account. In a production environment, access to broker connections should be restricted to users who are explicitly registered in the user repository.
The Java Message Service Specification requires that a connection provide a unique client identifier whenever the broker must maintain a persistent state on behalf of a client. Message Queue uses such client identifiers to keep track of durable subscribers to a topic destination. When a durable subscriber becomes inactive, the broker retains all incoming messages for the topic and delivers them when the subscriber becomes active again. The broker identifies the subscriber by means of its client identifier.
While it is possible for a client application to set its own client identifier programmatically using the connection object’s setClientID method, this makes it difficult to coordinate client identifiers to ensure that each is unique. It is generally better to have Message Queue automatically assign a unique identifier when creating a connection on behalf of a client. This can be done by setting the connection factory’s imqConfiguredClientID attribute to a value of the form
${u}factoryID
The characters ${u} must be the first four characters of the attribute value. (Any character other than u between the braces will cause an exception to be thrown on connection creation; in any other position, these characters have no special meaning and will be treated as plain text.) The value for factoryID is a character string uniquely associated with this connection factory object.
When creating a connection for a particular client, Message Queue will construct a client identifier by replacing the characters ${u} with ${u:userName}, where userName is the user name authenticated for the connection. This ensures that connections created by a given connection factory, although identical in all other respects, will each have their own unique client identifier. For example, if the user name is Calvin and the string specified for the connection factory’s imqConfiguredClientID attribute is ${u}Hobbes, the client identifier assigned will be ${u:Calvin}Hobbes.
This scheme will not work if two clients both attempt to obtain connections using the default user name guest, since each would have a client identifier with the same ${u} component. In this case, only the first client to request a connection will get one; the second client’s connection attempt will fail, because Message Queue cannot create two connections with the same client identifier.
Even if you specify a client identifier with imqConfiguredClientID, client applications can override this setting with the connection method setClientID. You can prevent this by setting the connection factory’s imqDisableSetClientID attribute to true. Note that for an application that uses durable subscribers, the client identifier must be set one way or the other: either administratively with imqConfiguredClientID or programmatically with setClientID.
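Putting these two attributes together, a connection factory that assigns client identifiers administratively and prevents clients from overriding them might carry the following values (the factory ID string is arbitrary):
imqConfiguredClientID=${u}Hobbes
imqDisableSetClientID=true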
Because “payload” messages sent and received by clients and control messages (such as broker acknowledgments) used by Message Queue itself pass over the same client-broker connection, excessive levels of payload traffic can interfere with the delivery of control messages. To help alleviate this problem, the connection factory attributes listed in Table 18–5 allow you to manage the relative flow of the two types of message. These attributes fall into four categories:
Acknowledgment timeout specifies the maximum time (imqAckTimeout) to wait for a broker acknowledgment before throwing an exception.
Connection flow metering limits the transmission of payload messages to batches of a specified size (imqConnectionFlowCount), ensuring periodic opportunities to deliver any accumulated control messages.
Connection flow control limits the number of payload messages (imqConnectionFlowLimit) that can be held pending on a connection, waiting to be consumed. When the limit is reached, delivery of payload messages to the connection is suspended until the number of messages awaiting consumption falls below the limit. Use of this feature is controlled by a boolean flag (imqConnectionFlowLimitEnabled).
Consumer flow control limits the number of payload messages (imqConsumerFlowLimit) that can be held pending for any single consumer, waiting to be consumed. (This limit can also be specified as a property of a specific queue destination, consumerFlowLimit.) When the limit is reached, delivery of payload messages to the consumer is suspended until the number of messages awaiting consumption, as a percentage of imqConsumerFlowLimit, falls below the limit specified by the imqConsumerFlowThreshold attribute. This helps improve load balancing among multiple consumers by preventing any one consumer from starving others on the same connection.
The use of any of these flow control techniques entails a trade-off between reliability and throughput; see Client Runtime Message Flow Adjustments for further discussion.
Table 18–6 lists connection factory attributes affecting client queue browsing and server sessions. The imqQueueBrowserMaxMessagesPerRetrieve attribute specifies the maximum number of messages to retrieve at one time when browsing the contents of a queue destination; imqQueueBrowserRetrieveTimeout gives the maximum waiting time for retrieving them. (Note that imqQueueBrowserMaxMessagesPerRetrieve does not affect the total number of messages browsed, only the way they are batched for delivery to the client runtime: fewer but larger batches or more but smaller ones. Changing the attribute's value may affect performance, but will not affect the total amount of data retrieved; the client application will always receive all messages in the queue.) The boolean attribute imqLoadMaxToServerSession governs the behavior of connection consumers in an application server session: if the value of this attribute is true, the client will load up to the maximum number of messages into a server session; if false, it will load only a single message at a time.
The Java Message Service Specification defines certain standard message properties, which JMS providers (such as Message Queue) may optionally choose to support. By convention, the names of all such standard properties begin with the letters JMSX. The connection factory attributes listed in Table 18–7 control whether the Message Queue client runtime sets certain of these standard properties. For produced messages, these include the following:
Identity of the user sending the message
Identity of the application sending the message
Transaction identifier of the transaction within which the message was produced
For consumed messages, they include
Transaction identifier of the transaction within which the message was consumed
Time the message was delivered to the consumer
You can use the connection factory attributes listed in Table 18–8 to override the values set by a client for certain JMS message header fields. The settings you specify will be used for all messages produced by connections obtained from that connection factory. Header fields that you can override in this way are
Delivery mode (persistent or nonpersistent)
Expiration time
Priority level
There are two attributes for each of these fields: one boolean, to control whether the field can be overridden, and another to specify its value. For instance, the attributes for setting the priority level are imqOverrideJMSPriority and imqJMSPriority. There is also an additional attribute, imqOverrideJMSHeadersToTemporaryDestinations, that controls whether override values apply to temporary destinations.
Because overriding message headers may interfere with the needs of specific applications, these attributes should only be used in consultation with an application’s designers or users.
The destination administered object that identifies a physical queue or topic destination has only two attributes, listed in Table 18–9. The important one is imqDestinationName, which gives the name of the physical destination that this administered object represents; this is the name that was specified with the -n option to the imqcmd create dst command that created the physical destination. (Note that there is not necessarily a one-to-one relationship between destination administered objects and the physical destinations they represent: a single physical destination can be referenced by more than one administered object, or by none at all.) There is also an optional descriptive string, imqDestinationDescription, which you can use to help identify the destination object and distinguish it from others you may have created.
The Message Queue Object Manager utility (imqobjmgr) allows you to create and manage administered objects. The imqobjmgr command provides the following subcommands for performing various operations on administered objects:
Add an administered object to an object store
Delete an administered object from an object store
List existing administered objects in an object store
Display information about an administered object
Modify the attributes of an administered object
See Object Manager Utility for reference information about the syntax, subcommands, and options of the imqobjmgr command.
Most Object Manager operations require you to specify the following information as options to the imqobjmgr command:
The JNDI lookup name (-l) of the administered object
This is the logical name by which client applications can look up the administered object in the object store, using the Java Naming and Directory Interface.
The attributes of the JNDI object store (-j)
See Object Stores for information on the possible attributes and their values.
The type (-t) of the administered object
Possible types include the following:
Queue destination
Topic destination
Connection factory
Queue connection factory
Topic connection factory
Connection factory for distributed transactions
Queue connection factory for distributed transactions
Topic connection factory for distributed transactions
The attributes (-o) of the administered object
See Administered Object Attributes for information on the possible attributes and their values.
The imqobjmgr command’s add subcommand adds administered objects for connection factories and topic or queue destinations to the object store. Administered objects stored in an LDAP object store must have lookup names beginning with the prefix cn=; lookup names in a file-system object store need not begin with any particular prefix, but must not include the slash character (/).
The Object Manager lists and displays only Message Queue administered objects. If an object store contains a non–Message Queue object with the same lookup name as an administered object that you wish to add, you will receive an error when you attempt the add operation.
To enable client applications to create broker connections, add a connection factory administered object for the type of connection to be created: a queue connection factory or a topic connection factory, as the case may be. Example 11–1 shows a command to add a queue connection factory (administered object type qf) to an LDAP object store. The object has lookup name cn=myQCF and connects to a broker running on host myHost at port number 7272, using the jms connection service.
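Such a command takes roughly the following form (shown here on several lines for readability); the LDAP context factory class and provider URL are placeholders that depend on your LDAP installation:
imqobjmgr add -t qf -l "cn=myQCF"
   -o "imqAddressList=mq://myHost:7272/jms"
   -j "java.naming.factory.initial=com.sun.jndi.ldap.LdapCtxFactory"
   -j "java.naming.provider.url=ldap://mydomain.com:389/o=imq"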
When creating an administered object representing a destination, it is good practice to create the physical destination first, before adding the administered object to the object store. Use the Command utility (imqcmd) to create the physical destination, as described in Creating and Destroying Physical Destinations.
The command shown in Example 11–2 adds an administered object to an LDAP object store representing a topic destination with lookup name myTopic and physical destination name physTopic. The command for adding a queue destination would be similar, except that the administered object type (-t option) would be q (for “queue destination”) instead of t (for “topic destination”).
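A sketch of such a command, reusing the placeholder LDAP object store attributes from the previous example:
imqobjmgr add -t t -l "cn=myTopic"
   -o "imqDestinationName=physTopic"
   -j "java.naming.factory.initial=com.sun.jndi.ldap.LdapCtxFactory"
   -j "java.naming.provider.url=ldap://mydomain.com:389/o=imq"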
Example 11–3 shows the same command, but with the administered object stored in a Solaris file system instead of an LDAP server.
To delete an administered object from the object store, use the imqobjmgr delete subcommand, specifying the lookup name, type, and location of the object to be deleted. The command shown in Example 11–4 deletes the object that was added in Adding a Destination above.
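With the same placeholder object store attributes, the deletion command might look like this:
imqobjmgr delete -t t -l "cn=myTopic"
   -j "java.naming.factory.initial=com.sun.jndi.ldap.LdapCtxFactory"
   -j "java.naming.provider.url=ldap://mydomain.com:389/o=imq"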
You can use the imqobjmgr list subcommand to get a list of all administered objects in an object store or those of a specific type. Example 11–5 shows how to list all administered objects on an LDAP server.
Example 11–6 lists all queue destinations (type q).
The imqobjmgr query subcommand displays information about a specified administered object, identified by its lookup name and the attributes of the object store containing it. Example 11–7 displays information about an object whose lookup name is cn=myTopic.
To modify the attributes of an administered object, use the imqobjmgr update subcommand. You supply the object’s lookup name and location, and use the -o option to specify the new attribute values.
Example 11–8 changes the value of the imqReconnectAttempts attribute for the queue connection factory that was added to the object store in Example 11–1.
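A sketch of such an update, again using the placeholder object store attributes (the new attribute value is arbitrary):
imqobjmgr update -t qf -l "cn=myQCF"
   -o "imqReconnectAttempts=3"
   -j "java.naming.factory.initial=com.sun.jndi.ldap.LdapCtxFactory"
   -j "java.naming.provider.url=ldap://mydomain.com:389/o=imq"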
The -i option to the imqobjmgr command allows you to specify the name of a command file that uses Java property file syntax to represent all or part of the subcommand clause. This feature is especially useful for specifying object store attributes, which typically require a lot of typing and are likely to be the same across multiple invocations of imqobjmgr. Using a command file can also allow you to avoid exceeding the maximum number of characters allowed for the command line.
Example 11–9 shows the general syntax for an Object Manager command file. Note that the version property is not a command line option: it refers to the version of the command file itself (not that of the Message Queue product) and must be set to the value 2.0.
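In outline, a command file uses Java property file entries along the following lines; this sketch reflects the usual command file layout, and the exact property names should be verified against Object Manager Utility:
version=2.0
cmdtype=add|delete|list|query|update
obj.lookupName=lookupName
objstore.attrs.objectStoreAttributeName=value
obj.type=objectType
obj.attrs.objectAttributeName=value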
As an example, consider the Object Manager command shown earlier in Example 11–1, which adds a queue connection factory to an LDAP object store. This command can be encapsulated in a command file as shown in Example 11–10. If the command file is named MyCmdFile, you can then execute the command with the command line
imqobjmgr -i MyCmdFile
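Under the same assumptions about the command file layout and the placeholder LDAP attributes, MyCmdFile might read as follows:
version=2.0
cmdtype=add
obj.lookupName=cn=myQCF
objstore.attrs.java.naming.factory.initial=com.sun.jndi.ldap.LdapCtxFactory
objstore.attrs.java.naming.provider.url=ldap://mydomain.com:389/o=imq
obj.type=qf
obj.attrs.imqAddressList=mq://myHost:7272/jms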
A command file can also be used to specify only part of the imqobjmgr subcommand clause, with the remainder supplied directly on the command line. For example, the command file shown in Example 11–11 specifies only the attribute values for an LDAP object store.
You could then use this command file to specify the object store in an imqobjmgr command while supplying the remaining options explicitly, as shown in Example 11–12.
Additional examples of command files can be found at the following locations, depending on your platform:
Solaris: /usr/demo/imq/imqobjmgr
Linux: /opt/sun/mq/examples/imqobjmgr
Windows: IMQ_HOME\demo\imqobjmgr
This chapter describes the tools you can use to monitor a broker and how you can get metrics data. The chapter has the following sections:
Reference information on specific metrics is available in Chapter 20, Metrics Information Reference.
The broker includes components for monitoring and diagnosing application and broker performance. These include the components and services shown in the following figure:
Broker code that logs broker events.
A metrics generator that provides information about broker activity.
The metrics generator provides information about broker activity, such as message flow in and out of the broker, the number of messages in broker memory and the memory they consume, the number of open connections, and the number of threads being used. The boolean broker property imq.metrics.enabled controls whether such information is logged and the imq.metrics.interval property specifies how often metrics information is generated.
A logger component that writes out information to a number of output channels.
A comprehensive set of Java Management Extensions (JMX) MBeans that expose broker resources using the JMX API.
Support for the Java ES Monitoring Framework.
A metrics message producer that sends JMS messages containing metrics information to topic destinations for consumption by JMS monitoring clients.
Broker properties for configuring the monitoring services are listed under Monitoring Properties.
There are five tools (or interfaces) for monitoring Message Queue information, as described briefly below:
Log files provide a long-term record of metrics data, but cannot easily be parsed.
The Command Utility (imqcmd metrics) lets you interactively sample information tailored to your needs, but does not provide historical information or allow you to manipulate the data programmatically.
The Java Management Extensions (JMX) Administration API lets you perform broker resource configuration and monitoring operations programmatically from within a Java application. You can write your own JMX administration application or use the standard Java Monitoring and Management Console (jconsole).
The Sun Java Enterprise System Monitoring Framework (JESMF) and Monitoring Console offers a common, Web-based graphical interface shared with other Java ES components, but can monitor only a subset of all Message Queue entities and operations.
The Message-based Monitoring API lets you extract metrics information from messages produced by the broker to metrics topic destinations. However, to use it, you must write a Message Queue client application to capture, analyze, and display the metrics data.
The following table compares the different tools.
Table 12–1 Benefits and Limitations of Metrics Monitoring Tools
In addition to the differences shown in the table, each tool gathers a somewhat different subset of the metrics information generated by the broker. For information on which metrics data is gathered by each monitoring tool, see Chapter 20, Metrics Information Reference.
The Message Queue Logger takes information generated by broker code, a debugger, and a metrics generator and writes that information to a number of output channels: to standard output (the console), to a log file, and, on Solaris platforms, to the syslog daemon process. You can specify the type of information gathered by the Logger as well as the type of information the Logger writes to each of the output channels. For example, you can specify that you want metrics information written out to a log file.
This section describes the configuration and use of the Logger for monitoring broker activity. It includes the following topics:
The imq.log.file.dirpath and imq.log.file.filename broker properties identify the log file to use and the imq.log.console.stream property specifies whether console output is directed to stdout or stderr.
The imq.log.level property controls the categories of logging information that the Logger gathers: ERROR, WARNING, or INFO. Each level includes those above it, so if you specify, for example, WARNING as the logging level, error messages will be logged as well.
There is also an imq.destination.logDeadMsgs property that specifies whether to log entries when dead messages are discarded or moved to the dead message queue.
The imq.log.console.output and imq.log.file.output properties control which of the specified categories the Logger writes to the console and the log file, respectively. In this case, however, the categories do not include those above them; so if you want, for instance, both errors and warnings written to the log file and informational messages to the console, you must explicitly set imq.log.file.output to ERROR|WARNING and imq.log.console.output to INFO.
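In the broker’s instance configuration file, that example corresponds to the following two lines:
imq.log.file.output=ERROR|WARNING
imq.log.console.output=INFO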
On Solaris platforms another property, imq.log.syslog.output, specifies the categories of logging information to be written to the syslog daemon.
In the case of a log file, you can specify the point at which the file is closed and output is rolled over to a new file. Once the log file reaches a specified size (imq.log.file.rolloverbytes) or age (imq.log.file.rolloversecs), it is saved and a new log file created.
See Monitoring Properties for additional broker properties related to logging and subsequent sections for details about how to configure the Logger and how to use it to obtain performance information.
A logged message consists of a time stamp, a message code, and the message itself. The volume of information included varies with the logging level you have set. The broker supports three logging levels: ERROR, WARNING, and INFO (see Table 12–2). Each level includes those above it (for example, WARNING includes ERROR).
Table 12–2 Logging Levels
Logging Level | Description
---|---
ERROR | Serious problems that could cause system failure
WARNING | Conditions that should be heeded but will not cause system failure
INFO | Metrics and other informational messages
The default logging level is INFO, so messages at all three levels are logged by default. The following is an example of an INFO message:
[13/Sep/2000:16:13:36 PDT] [B1004]: Starting the broker service using tcp [25374,100] with min threads 50 and max threads of 500
You can change the time zone used in the time stamp by setting the broker configuration property imq.log.timezone (see Table 16–10).
A broker is automatically configured to save log output to a set of rolling log files. The log files are located in a directory identified by the instance name of the associated broker (see Appendix A, Platform-Specific Locations of Message Queue Data):
…/instances/instanceName/log
For a broker whose life cycle is controlled by the Application Server, the log files are located in a subdirectory of the domain directory for the domain for which the broker was started:
…/appServerDomainDir/imq/instances/imqbroker/log
The log files are simple text files. The system maintains nine backup files named as follows, from earliest to latest:
log.txt log_1.txt log_2.txt … log_9.txt
By default, the log files are rolled over once a week. You can change this rollover interval, or the location or names of the log files, by setting appropriate configuration properties:
To change the directory in which the log files are kept, set the property imq.log.file.dirpath to the desired path.
To change the root name of the log files from log to something else, set the imq.log.file.filename property.
To change the frequency with which the log files are rolled over, set the property imq.log.file.rolloversecs.
See Table 16–10 for further information on these properties.
Log-related properties are described in Table 16–10.
Set the logging level.
Set the output channel (file, console, or both) for one or more logging categories.
If you log output to a file, configure the rollover criteria for the file.
You complete these steps by setting Logger properties. You can do this in one of two ways:
Change or add Logger properties in the config.properties file for a broker before you start the broker.
Specify Logger command line options in the imqbrokerd command that starts the broker. You can also use the broker option -D to change Logger properties (or any broker property).
Options passed on the command line override properties specified in the broker instance configuration files. The following imqbrokerd options affect logging:
-metrics interval: Logging interval for broker metrics, in seconds
-loglevel level: Logging level (ERROR, WARNING, INFO, or NONE)
-silent: Silent mode (no logging to console)
-tty: Log all messages to console
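For example, a command of the following form would start a broker that logs metrics every 60 seconds and writes all log messages to the console (the interval value is arbitrary):
imqbrokerd -metrics 60 -tty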
The following sections describe how you can change the default configuration in order to do the following:
Change the output channel (the destination of log messages)
Change rollover criteria
By default, error and warning messages are displayed on the terminal as well as being logged to a log file. (On Solaris, error messages are also written to the system’s syslog daemon.)
You can change the output channel for log messages in the following ways:
To have all log categories (for a given level) output displayed on the screen, use the -tty option to the imqbrokerd command.
To prevent log output from being displayed on the screen, use the -silent option to the imqbrokerd command.
Use the imq.log.file.output property to specify which categories of logging information should be written to the log file. For example,
imq.log.file.output=ERROR
Use the imq.log.console.output property to specify which categories of logging information should be written to the console. For example,
imq.log.console.output=INFO
On Solaris, use the imq.log.syslog.output property to specify which categories of logging information should be written to Solaris syslog. For example,
imq.log.syslog.output=NONE
Before changing Logger output channels, you must make sure that logging is set at a level that supports the information you are mapping to the output channel. For example, if you set the logging level to ERROR and then set the imq.log.console.output property to WARNING, no messages will be logged because you have not enabled the logging of WARNING messages.
There are two criteria for rolling over log files: time and size. By default, the time criterion is used, and files are rolled over every seven days.
To change the time interval, you need to change the property imq.log.file.rolloversecs. For example, the following property definition changes the time interval to ten days:
imq.log.file.rolloversecs=864000
To change the rollover criterion to file size, set the imq.log.file.rolloverbytes property. For example, the following definition directs the broker to roll over files after they reach a limit of 500,000 bytes:
imq.log.file.rolloverbytes=500000
If you set both the time-related and the size-related rollover properties, the first limit reached will trigger the rollover. As noted before, the broker maintains up to nine rollover files.
You can set or change the log file rollover properties when a broker is running. To set these properties, use the imqcmd update bkr command.
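For example, the following command changes the size-based rollover limit on a running broker. This is a sketch that assumes the -o attribute=value form of the imqcmd update bkr subcommand; see Chapter 15, Command Line Reference for the exact syntax:
imqcmd update bkr -o "imq.log.file.rolloverbytes=500000" -u admin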
This section describes the procedure for using broker log files to report metrics information. For general information on configuring the Logger, see Configuring and Using Broker Logging.
Configure the broker’s metrics generation capability:
Confirm imq.metrics.enabled=true
Generation of metrics for logging is turned on by default.
Set the metrics generation interval to a convenient number of seconds.
imq.metrics.interval=interval
This value can be set in the config.properties file or using the -metrics interval command line option when starting up the broker.
Confirm that the Logger gathers metrics information:
imq.log.level=INFO
This is the default value. This value can be set in the config.properties file or using the -loglevel level command line option when starting up the broker.
Confirm that the Logger is set to write metrics information to the log file:
imq.log.file.output=INFO
This is the default value. It can be set in the config.properties file.
Start up the broker.
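Taken together, the settings above might appear in config.properties as follows. The 20-second interval is an illustrative value; the other entries are the defaults:
imq.metrics.enabled=true
imq.metrics.interval=20
imq.log.level=INFO
imq.log.file.output=INFO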
The following shows sample broker metrics output to the log file:
[21/Jul/2004:11:21:18 PDT] Connections: 0 JVM Heap: 8323072 bytes (7226576 free) Threads: 0 (14-1010) In: 0 msgs (0bytes) 0 pkts (0 bytes) Out: 0 msgs (0bytes) 0 pkts (0 bytes) Rate In: 0 msgs/sec (0 bytes/sec) 0 pkts/sec (0 bytes/sec) Rate Out: 0 msgs/sec (0 bytes/sec) 0 pkts/sec (0 bytes/sec)
For reference information about metrics data, see Chapter 20, Metrics Information Reference.
You can monitor physical destinations by enabling dead message logging for a broker. You can log dead messages whether or not you are using a dead message queue.
If you enable dead message logging, the broker logs the following types of events:
A physical destination exceeded its maximum size.
The broker removed a message from a physical destination, for a reason such as the following:
The destination size limit has been reached.
The message time to live expired.
The message is too large.
An error occurred when the broker attempted to process the message.
If a dead message queue is in use, logging also includes the following types of events:
The broker moved a message to the dead message queue.
The broker removed a message from the dead message queue and discarded it.
The following is an example of the log format for dead messages:
[29/Mar/2006:15:35:39 PST] [B1147]: Message 8-129.145.180.87(e7:6b:dd:5d:98:aa)- 35251-1143675279400 from destination Q:q0 has been placed on the DMQ because [B0053]: Message on destination Q:q0 Expired: expiration time 1143675279402, arrival time 1143675279401, JMSTimestamp 1143675279400
Dead message logging is disabled by default. To enable it, set the broker property imq.destination.logDeadMsgs to true.
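For example, the following config.properties entry turns dead message logging on:
imq.destination.logDeadMsgs=true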
A Message Queue broker can report metrics of the following types:
Java Virtual Machine (JVM) metrics. Information about the JVM heap size.
Brokerwide metrics. Information about messages stored in a broker, message flows into and out of a broker, and memory use. Messages are tracked in terms of numbers of messages and numbers of bytes.
Connection Service metrics. Information about connections and connection thread resources, and information about message flows for a particular connection service.
Destination metrics. Information about message flows into and out of a particular physical destination, information about a physical destination’s consumers, and information about memory and disk space usage.
The imqcmd command can obtain metrics information for the broker as a whole, for individual connection services, and for individual physical destinations. To obtain metrics data, you generally use the metrics subcommand of imqcmd. Metrics data is written to the console screen at an interval you specify, or for the number of times you specify.
You can also use the query subcommand to view similar data that also includes configuration information. See imqcmd query for more information.
The syntax and options of imqcmd metrics are shown in Table 12–3 and Table 12–4, respectively.
Table 12–3 imqcmd metrics Subcommand Syntax
Table 12–4 imqcmd metrics Subcommand Options
Start the broker for which metrics information is desired.
See Starting Brokers.
Issue the appropriate imqcmd metrics subcommand and options as shown in Table 12–3 and Table 12–4.
This section contains examples of output for the imqcmd metrics subcommand. The examples show brokerwide, connection service, and physical destination metrics.
To get the rate of message and packet flow into and out of the broker at 10 second intervals, use the metrics bkr subcommand:
imqcmd metrics bkr -m rts -int 10 -u admin
This command produces output similar to the following (see data descriptions in Table 20–2):
--------------------------------------------------------
 Msgs/sec     Msg Bytes/sec    Pkts/sec     Pkt Bytes/sec
 In    Out     In      Out      In    Out    In      Out
--------------------------------------------------------
 0     0       27      56       0     0      38      66
 10    0       7365    56       10    10     7457    1132
 0     0       27      56       0     0      38      73
 0     10      27      7402     10    20     1400    8459
 0     0       27      56       0     0      38      73
To get cumulative totals for messages and packets handled by the jms connection service, use the metrics svc subcommand:
imqcmd metrics svc -n jms -m ttl -u admin
This command produces output similar to the following (see data descriptions in Table 20–3):
-------------------------------------------------
 Msgs         Msg Bytes        Pkts         Pkt Bytes
 In    Out     In      Out      In    Out    In      Out
-------------------------------------------------
 164   100     120704  73600    282   383    135967  102127
 657   100     483552  73600    775   876    498815  149948
To get metrics information about a physical destination, use the metrics dst subcommand:
imqcmd metrics dst -t q -n XQueue -m ttl -u admin
This command produces output similar to the following (see data descriptions in Table 20–4):
-----------------------------------------------------------------------------
 Msgs        Msg Bytes         Msg Count          Total Msg Bytes (k)  Largest
 In   Out     In      Out      Current Peak Avg   Current Peak Avg     Msg (k)
-----------------------------------------------------------------------------
 200  200     147200  147200   0       200  0     0       143  71      0
 300  200     220800  147200   100     200  10    71      143  64      0
 300  300     220800  220800   0       200  0     0       143  59      0
To get information about a physical destination’s consumers, use the following metrics dst subcommand:
imqcmd metrics dst -t q -n SimpleQueue -m con -u admin
This command produces output similar to the following (see data descriptions in Table 20–4):
------------------------------------------------------------------
 Active Consumers       Backup Consumers        Msg Count
 Current  Peak  Avg     Current  Peak  Avg      Current  Peak  Avg
------------------------------------------------------------------
 1        1     0       0        0     0        944      1000  525
The syntax and options of imqcmd query are shown in Table 12–5 along with a description of the metrics data provided by the command.
Table 12–5 imqcmd query Subcommand Syntax
Subcommand Syntax | Metrics Data Provided
---|---
imqcmd query bkr | Information on the current number of messages and message bytes stored in broker memory and persistent store (see Viewing Broker Information).
imqcmd query svc | Information on the current number of allocated threads and number of connections for a specified connection service (see Viewing Connection Service Information).
imqcmd query dst | Information on the current number of producers, active and backup consumers, and messages and message bytes stored in memory and persistent store for a specified destination (see Viewing Physical Destination Information).
Because of the limited metrics data provided by imqcmd query, this tool is not represented in the tables presented in Chapter 20, Metrics Information Reference.
The broker implements a comprehensive set of Java Management Extensions (JMX) MBeans that represent the broker's manageable resources. Using the JMX API, you can access these MBeans to perform broker configuration and monitoring operations programmatically from within a Java application.
In this way, the MBeans provide a Java application access to data values representing static or dynamic properties of a broker, connection, destination, or other resource. The application can also receive notifications of state changes or other significant events affecting the resource.
JMX-based administration provides dynamic, fine grained, programmatic access to the broker. You can use this kind of administration in a number of ways.
You can include JMX code in your JMS client application to monitor application performance and, based on the results, to reconfigure the Message Queue resources you use to improve performance.
You can write JMX client applications that monitor the broker to identify use patterns and performance problems, and you can use the JMX API to reconfigure the broker to optimize performance.
You can write a JMX client application to automate regular maintenance tasks.
You can write a JMX client application that constitutes your own version of the Command utility (imqcmd), and you can use it instead of imqcmd.
You can use the standard Java Monitoring and Management Console (jconsole) to access the broker’s MBeans.
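As a rough sketch of what such a JMX client can look like, the following program uses only the standard JMX API to connect to a broker’s MBean server and list its registered MBeans. The JMX service URL passed on the command line and the broker’s actual MBean object names are described in Appendix D, JMX Support, and are not hard-coded here:
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BrokerMBeanBrowser {
    public static void main(String[] args) throws Exception {
        // args[0] is the broker's JMX service URL (see Appendix D, JMX Support).
        JMXServiceURL url = new JMXServiceURL(args[0]);
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            // List every registered MBean; a real monitoring client would
            // query the broker's documented MBean object names directly.
            Set<ObjectName> names = mbsc.queryNames(null, null);
            for (ObjectName name : names) {
                System.out.println(name);
            }
        } finally {
            connector.close();
        }
    }
}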
For information on JMX infrastructure and configuring the broker's JMX support, see Appendix D, JMX Support. To manage a Message Queue broker using the JMX architecture, see Sun Java System Message Queue 4.3 Developer’s Guide for JMX Clients.
Message Queue supports the Sun Java System Monitoring Framework (JESMF), which allows Java Enterprise System (Java ES) components to be monitored using a common graphical interface, the Sun Java System Monitoring Console. Administrators can use the Monitoring Console to view performance statistics, create rules for automatic monitoring, and acknowledge alarms. If you are running Message Queue along with other Java ES components, you may find it more convenient to use a single interface to manage all of them.
The Java ES Monitoring Framework defines a common data model, the Common Monitoring Model (CMM), to be used by all Java ES component products. This model enables a centralized and uniform view of all Java ES components. Message Queue exposes the following objects through the Common Monitoring Model:
The installed product
The broker instance name
The broker Port Mapper
Each connection service
Each physical destination
The persistent data store
The user repository
Each of these objects is mapped to a CMM object whose attributes can be monitored using the Java ES Monitoring Console. The reference tables in Chapter 21, JES Monitoring Framework Reference identify those attributes that are available for JESMF monitoring. For detailed information about the mapping of Message Queue objects to CMM objects, see the Sun Java Enterprise System Monitoring Guide.
To enable JESMF monitoring, you must do the following:
Enable and configure the Monitoring Framework for all of your monitored components, as described in the Sun Java Enterprise System Monitoring Guide.
Install the Monitoring Console on a separate host, start the master agent, and then start the Web server, as described in the Sun Java Enterprise System Monitoring Guide.
Using the Java ES Monitoring Framework will not affect broker performance, because all the work of gathering metrics is done by the Monitoring Framework, which pulls data from the broker’s existing data monitoring infrastructure.
For information on metric information provided by the Java ES Monitoring Framework, see Chapter 21, JES Monitoring Framework Reference.
Message Queue provides a Metrics Message Producer, which receives information from the Metrics Generator at regular intervals and writes the information into metrics messages. The Metrics Message Producer then sends these messages to one of a number of metric topic destinations, depending on the type of metric information contained in the messages.
You can access this metrics information by writing a client application that subscribes to the metrics topic destinations, consumes the messages in these destinations, and processes the metrics information contained in the messages. This allows you to create custom monitoring tools to support messaging applications. For details of the metric quantities reported in each type of metrics message, see Chapter 4, Using the Metrics Monitoring API, in Sun Java System Message Queue 4.3 Developer’s Guide for Java Clients.
There are five metrics topic destinations, whose names are shown in Table 12–6, along with the type of metrics messages delivered to each destination.
Table 12–6 Metrics Topic Destinations
Topic Name | Type of Metrics Messages
---|---
mq.metrics.broker | Brokerwide metrics
mq.metrics.jvm | Java Virtual Machine (JVM) metrics
mq.metrics.destination_list | A list of the broker’s physical destinations and their types
mq.metrics.destination.queue.queueName | Destination metrics for queue queueName
mq.metrics.destination.topic.topicName | Destination metrics for topic topicName
The broker properties imq.metrics.topic.enabled and imq.metrics.topic.interval control, respectively, whether messages are sent to metric topic destinations and how often. The imq.metrics.topic.timetolive and imq.metrics.topic.persist properties specify the lifetime of such messages and whether they are persistent.
Besides the information contained in the body of a metrics message, the header of each message includes properties that provide the following additional information:
The message type
The address (host name and port number) of the broker that sent the message
The time the metric sample was taken
These properties are useful to client applications that process metrics messages of different types or from different brokers.
This section describes the procedure for using the message-based monitoring capability to gather metrics information. The procedure includes both client development and administration tasks.
Write a metrics monitoring client.
See the Message Queue Developer’s Guide for Java Clients for instructions on programming clients that subscribe to metrics topic destinations, consume metrics messages, and extract the metrics data from these messages.
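The following is a minimal sketch of such a client, using only the standard JMS API together with the Message Queue ConnectionFactory implementation. It assumes the Message Queue client library is on the classpath and that the connecting user is authorized to consume from the metrics topic; error handling and message parsing are omitted:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;

public class MetricsMonitor {
    public static void main(String[] args) throws Exception {
        // com.sun.messaging.ConnectionFactory is the Message Queue client
        // implementation of the JMS ConnectionFactory interface.
        ConnectionFactory factory = new com.sun.messaging.ConnectionFactory();
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Subscribe to the brokerwide metrics topic (see Table 12-6).
        Topic metricsTopic = session.createTopic("mq.metrics.broker");
        MessageConsumer consumer = session.createConsumer(metricsTopic);
        connection.start();

        // Print each metrics message as it arrives; extracting individual
        // metric values is covered in the Developer's Guide for Java Clients.
        while (true) {
            Message m = consumer.receive();
            System.out.println("Received metrics message: " + m);
        }
    }
}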
Configure the broker’s Metrics Message Producer by setting broker property values in the config.properties file (these settings are collected in a sketch following this procedure):
Enable metrics message production.
Set imq.metrics.topic.enabled=true
The default value is true.
Set the interval (in seconds) at which metrics messages are generated.
Set imq.metrics.topic.interval=interval.
The default is 60 seconds.
Specify whether you want metrics messages to be persistent (that is, whether they will survive a broker failure).
Set imq.metrics.topic.persist.
The default is false.
Specify how long you want metrics messages to remain in their respective destinations before being deleted.
Set imq.metrics.topic.timetolive.
The default value is 300 seconds.
Set any access control you desire on metrics topic destinations.
See the discussion in Security and Access Considerations below.
Start up your metrics monitoring client.
When a consumer subscribes to a metrics topic, the metrics topic destination is automatically created. Once the metrics topic has been created, the broker’s metrics message producer begins sending metrics messages to it.
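Collected together, the broker-side settings from the configuration step above might appear in config.properties as follows (the values shown are the defaults described in the procedure):
imq.metrics.topic.enabled=true
imq.metrics.topic.interval=60
imq.metrics.topic.persist=false
imq.metrics.topic.timetolive=300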
There are two reasons to restrict access to metrics topic destinations:
Metrics data might include sensitive information about a broker and its resources.
Excessive numbers of subscriptions to metrics topic destinations might increase broker overhead and negatively affect performance.
Because of these considerations, it is advisable to restrict access to metrics topic destinations.
Monitoring clients are subject to the same authentication and authorization control as any other client. Only users maintained in the Message Queue user repository are allowed to connect to the broker.
You can provide additional protections by restricting access to specific metrics topic destinations through an access control file, as described in User Authorization.
For example, the following entries in an accesscontrol.properties file deny access to the mq.metrics.broker metrics topic to everyone except user1 and user2:
topic.mq.metrics.broker.consume.deny.user=*
topic.mq.metrics.broker.consume.allow.user=user1,user2
The following entries allow only the user user3 to monitor topic t1:
topic.mq.metrics.destination.topic.t1.consume.deny.user=*
topic.mq.metrics.destination.topic.t1.consume.allow.user=user3
Depending on the sensitivity of metrics data, you can also connect your metrics monitoring client to a broker using an encrypted connection. For information on using encrypted connections, see Message Encryption.
The metrics data output you get using the message-based monitoring API is a function of the metrics monitoring client you write. You are limited only by the data provided by the metrics generator in the broker. For a complete list of this data, see Chapter 20, Metrics Information Reference.
This chapter covers a number of topics about how to analyze and tune a Message Queue service to optimize the performance of your messaging applications. It includes the following topics:
This section provides some background information on performance tuning.
The performance you get out of a messaging application depends on the interaction between the application and the Message Queue service. Hence, maximizing performance requires the combined efforts of both the application developer and the administrator.
The process of optimizing performance begins with application design and continues on through tuning the message service after the application has been deployed. The performance tuning process includes the following stages:
Defining performance requirements for the application
Designing the application taking into account factors that affect performance (especially tradeoffs between reliability and performance)
Establishing baseline performance measures
Tuning or reconfiguring the message service to optimize performance
The process outlined above is often iterative. During deployment of the application, a Message Queue administrator evaluates the suitability of the message service for the application’s general performance requirements. If the benchmark testing meets these requirements, the administrator can tune the system as described in this chapter. However, if benchmark testing does not meet performance requirements, a redesign of the application might be necessary or the deployment architecture might need to be modified.
In general, performance is a measure of the speed and efficiency with which a message service delivers messages from producer to consumer. However, there are several different aspects of performance that might be important to you, depending on your needs.
Connection load. The number of message producers, or message consumers, or the number of concurrent connections a system can support.
Message throughput. The number of messages or message bytes that can be pumped through a messaging system per second.
Latency. The time it takes a particular message to be delivered from message producer to message consumer.
Stability. The overall availability of the message service or how gracefully it degrades in cases of heavy load or failure.
Efficiency. The efficiency of message delivery; a measure of message throughput in relation to the computing resources employed.
These different aspects of performance are generally interrelated. If message throughput is high, that means messages are less likely to be backlogged in the broker, and as a result, latency should be low (a single message can be delivered very quickly). However, latency can depend on many factors: the speed of communication links, broker processing speed, and client processing speed, to name a few.
In any case, the aspects of performance that are most important to you generally depend on the requirements of a particular application.
Benchmarking is the process of creating a test suite for your messaging application and of measuring message throughput or other aspects of performance for this test suite.
For example, you could create a test suite by which some number of producing clients, using some number of connections, sessions, and message producers, send persistent or nonpersistent messages of a standard size to some number of queues or topics (all depending on your messaging application design) at some specified rate. Similarly, the test suite includes some number of consuming clients, using some number of connections, sessions, and message consumers (of a particular type) that consume the messages in the test suite’s physical destinations using a particular acknowledgment mode.
Using your standard test suite you can measure the time it takes between production and consumption of messages or the average message throughput rate, and you can monitor the system to observe connection thread usage, message storage data, message flow data, and other relevant metrics. You can then ramp up the rate of message production, or the number of message producers, or other variables, until performance is negatively affected. The maximum throughput you can achieve is a benchmark for your message service configuration.
Using this benchmark, you can modify some of the characteristics of your test suite. By carefully controlling all the factors that might have an effect on performance (see Application Design Factors Affecting Performance), you can note how changing some of these factors affects the benchmark. For example, you can increase the number of connections or the size of messages five-fold or ten-fold, and note the effect on performance.
Conversely, you can keep application-based factors constant and change your broker configuration in some controlled way (for example, change connection properties, thread pool properties, JVM memory limits, limit behaviors, file-based versus JDBC-based persistence, and so forth) and note how these changes affect performance.
This benchmarking of your application provides information that can be valuable when you want to increase the performance of a deployed application by tuning your message service. A benchmark allows the effect of a change or a set of changes to be more accurately predicted.
As a general rule, benchmarks should be run in a controlled test environment and for a long enough period of time for your message service to stabilize. (Performance is negatively affected at startup by the just-in-time compilation that turns Java code into machine code.)
Once a messaging application is deployed and running, it is important to establish baseline use patterns. You want to know when peak demand occurs and you want to be able to quantify that demand. For example, demand normally fluctuates by number of end users, activity levels, time of day, or all of these.
To establish baseline use patterns you need to monitor your message service over an extended period of time, looking at data such as the following:
Number of connections
Number of messages stored in the broker (or in particular physical destinations)
Message flows into and out of a broker (or particular physical destinations)
Numbers of active consumers
You can also use average and peak values provided in metrics data.
It is important to check these baseline metrics against design expectations. By doing so, you are checking that client code is behaving properly: for example, that connections are not being left open or that consumed messages are not being left unacknowledged. These coding errors consume broker resources and could significantly affect performance.
The baseline use patterns help you determine how to tune your system for optimal performance. For example:
If one physical destination is used significantly more than others, you might want to set higher message memory limits on that physical destination than on others, or to adjust limit behaviors accordingly.
If the number of connections needed is significantly greater than allowed by the maximum thread pool size, you might want to increase the thread pool size or adopt a shared thread model.
If peak message flows are substantially greater than average flows, that might influence the limit behaviors you employ when memory runs low.
In general, the more you know about use patterns, the better you are able to tune your system to those patterns and to plan for future needs.
Message latency and message throughput, two of the main performance indicators, generally depend on the time it takes a typical message to complete various steps in the message delivery process. These steps, described below, are for the case of a persistent, reliably delivered message.
The message is delivered from producing client to broker.
The broker reads in the message.
The message is placed in persistent storage (for reliability).
The broker confirms receipt of the message (for reliability).
The broker determines the routing for the message.
The broker writes out the message.
The message is delivered from broker to consuming client.
The consuming client acknowledges receipt of the message (for reliability).
The broker processes client acknowledgment (for reliability).
The broker confirms that client acknowledgment has been processed.
Since these steps are sequential, any one of them can be a potential bottleneck in the delivery of messages from producing clients to consuming clients. Most of the steps depend on physical characteristics of the messaging system: network bandwidth, computer processing speeds, message service architecture, and so forth. Some, however, also depend on characteristics of the messaging application and the level of reliability it requires.
The following subsections discuss the effect of both application design factors and messaging system factors on performance. While application design and messaging system factors closely interact in the delivery of messages, each category is considered separately.
Application design decisions can have a significant effect on overall messaging performance.
The most important factors affecting performance are those that affect the reliability of message delivery. Among these are the following:
Delivery mode (persistent or nonpersistent messages)
Use of transactions
Acknowledgment mode
Durable or nondurable subscriptions
Other application design factors affecting performance are the following:
Use of selectors (message filtering)
Message size
Message body type
The sections that follow describe the effect of each of these factors on messaging performance. As a general rule, there is a tradeoff between performance and reliability: factors that increase reliability tend to decrease performance.
Table 13–1 shows how the various application design factors generally affect messaging performance. The table shows two scenarios—one high-reliability, low-performance, and one high-performance, low-reliability—and the choices of application design factors that characterize each. Between these extremes, there are many choices and tradeoffs that affect both reliability and performance.
Table 13–1 Comparison of High-Reliability and High-Performance Scenarios
Application Design Factor | High-Reliability, Low-Performance Scenario | High-Performance, Low-Reliability Scenario
---|---|---
Delivery mode | Persistent messages | Nonpersistent messages
Use of transactions | Transacted sessions | No transactions
Acknowledgment mode | AUTO_ACKNOWLEDGE or CLIENT_ACKNOWLEDGE | DUPS_OK_ACKNOWLEDGE
Durable/nondurable subscriptions | Durable subscriptions | Nondurable subscriptions
Use of selectors | Message filtering | No message filtering
Message size | Large number of small messages | Small number of large messages
Message body type | Complex body types | Simple body types
Persistent messages guarantee message delivery in case of broker failure. The broker stores the message in a persistent store until all intended consumers acknowledge they have consumed the message.
Broker processing of persistent messages is slower than for nonpersistent messages for the following reasons:
A broker must reliably store a persistent message so that it will not be lost should the broker fail.
The broker must confirm receipt of each persistent message it receives. Delivery to the broker is guaranteed once the method producing the message returns without an exception.
Depending on the client acknowledgment mode, the broker might need to confirm a consuming client’s acknowledgment of a persistent message.
For both queues and topics with durable subscribers, performance was approximately 40% faster for nonpersistent messages. We obtained these results using 10k-sized messages and AUTO_ACKNOWLEDGE mode.
A transaction is a guarantee that all messages produced in a transacted session and all messages consumed in a transacted session will be either processed or not processed (rolled back) as a unit.
Message Queue supports both local and distributed transactions.
A message produced or acknowledged in a transacted session is slower than in a nontransacted session for the following reasons:
Additional information must be stored with each produced message.
In some situations, messages in a transaction are stored when normally they would not be. For example, a persistent message delivered to a topic destination with no subscriptions would normally be deleted; however, at the time the transaction is begun, information about subscriptions is not available.
Information on the consumption and acknowledgment of messages within a transaction must be stored and processed when the transaction is committed.
To improve performance, Message Queue message brokers are configured by default to use a memory-mapped file to store transaction data. On file systems that do not support memory-mapped files, you can disable this behavior by setting the broker property imq.persist.file.transaction.memorymappedfile.enabled to false.
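For example, the following config.properties entry (a non-default setting) disables the use of a memory-mapped file for transaction data:
imq.persist.file.transaction.memorymappedfile.enabled=false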
One mechanism for ensuring the reliability of JMS message delivery is for a client to acknowledge consumption of messages delivered to it by the Message Queue broker.
If a session is closed without the client acknowledging the message or if the broker fails before the acknowledgment is processed, the broker redelivers that message, setting a JMSRedelivered flag.
For a nontransacted session, the client can choose one of three acknowledgment modes, each of which has its own performance characteristics:
AUTO_ACKNOWLEDGE. The system automatically acknowledges a message once the consumer has processed it. This mode guarantees at most one redelivered message after a provider failure.
CLIENT_ACKNOWLEDGE. The application controls the point at which messages are acknowledged. All messages processed in that session since the previous acknowledgment are acknowledged. If the broker fails while processing a set of acknowledgments, one or more messages in that group might be redelivered.
DUPS_OK_ACKNOWLEDGE. This mode instructs the system to acknowledge messages in a lazy manner. Multiple messages can be redelivered after a provider failure.
(Using CLIENT_ACKNOWLEDGE mode is similar to using transactions, except there is no guarantee that all acknowledgments will be processed together if a provider fails during processing.)
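The acknowledgment mode is chosen when the client creates a session. The following minimal sketch uses the standard JMS API and assumes an existing Connection object and the Message Queue client library on the classpath:
import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.Session;

public class AckModeExample {
    // Create two sessions on the same connection with different
    // acknowledgment modes; the choice trades reliability for throughput.
    static void createSessions(Connection connection) throws JMSException {
        // Higher reliability: the application acknowledges explicitly.
        Session reliable = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);

        // Higher throughput: lazy acknowledgment, duplicates possible
        // after a provider failure.
        Session fast = connection.createSession(false, Session.DUPS_OK_ACKNOWLEDGE);
    }
}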
Acknowledgment mode affects performance for the following reasons:
Extra control messages between broker and client are required in AUTO_ACKNOWLEDGE and CLIENT_ACKNOWLEDGE modes. These control messages add processing overhead and can interfere with JMS payload messages, causing processing delays.
In AUTO_ACKNOWLEDGE and CLIENT_ACKNOWLEDGE modes, the client must wait until the broker confirms that it has processed the client’s acknowledgment before the client can consume additional messages. (This broker confirmation guarantees that the broker will not inadvertently redeliver these messages.)
The Message Queue persistent store must be updated with the acknowledgment information for all persistent messages received by consumers, thereby decreasing performance.
Subscribers to a topic destination fall into two categories: those with durable subscriptions and those with nondurable subscriptions.
Durable subscriptions provide increased reliability but slower throughput, for the following reasons:
The Message Queue message service must persistently store the list of messages assigned to each durable subscription so that should a broker fail, the list is available after recovery.
Persistent messages for durable subscriptions are stored persistently, so that should a broker fail, the messages can still be delivered after recovery, when the corresponding consumer becomes active. By contrast, persistent messages for nondurable subscriptions are not stored persistently (should a broker fail, the corresponding consumer connection is lost and the message would never be delivered).
We compared performance for durable and nondurable subscribers in two cases: persistent and nonpersistent 10k-sized messages. Both cases use AUTO_ACKNOWLEDGE acknowledgment mode. We found an effect on performance only in the case of persistent messages, which slowed durable subscribers by about 30%.
Application developers often want to target sets of messages to particular consumers. They can do so either by targeting each set of messages to a unique physical destination or by using a single physical destination and registering one or more selectors for each consumer.
A selector is a string requesting that only messages with property values that match the string are delivered to a particular consumer. For example, the selector NumberOfOrders >1 delivers only the messages with a NumberOfOrders property value of 2 or more.
Creating consumers with selectors lowers performance (as compared to using multiple physical destinations) because additional processing is required to handle each message. When a selector is used, it must be parsed so that it can be matched against future messages. Additionally, the message properties of each message must be retrieved and compared against the selector as each message is routed. However, using selectors provides more flexibility in a messaging application.
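For reference, a consumer with a selector is created by passing the selector string to the standard JMS createConsumer call, as in the following sketch (the session and queue are assumed to exist already):
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class SelectorConsumer {
    // Returns a consumer that receives only messages whose NumberOfOrders
    // property has a value of 2 or more (the selector from the example above).
    static MessageConsumer create(Session session, Queue queue) throws JMSException {
        return session.createConsumer(queue, "NumberOfOrders > 1");
    }
}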
Message size affects performance because more data must be passed from producing client to broker and from broker to consuming client, and because for persistent messages a larger message must be stored.
However, by batching smaller messages into a single message, the routing and processing of individual messages can be minimized, providing an overall performance gain. In this case, information about the state of individual messages is lost.
In our tests, which compared throughput in kilobytes per second for 1k, 10k, and 100k-sized messages to a queue destination and AUTO_ACKNOWLEDGE acknowledgment mode, we found that nonpersistent messaging was about 50% faster for 1k messages, about 20% faster for 10k messages, and about 5% faster for 100k messages. The size of the message affected performance significantly for both persistent and nonpersistent messages. 100k messages are about 10 times faster than 10k, and 10k are about 5 times faster than 1k.
JMS supports five message body types, shown below roughly in the order of complexity:
BytesMessage contains a set of bytes in a format determined by the application.
TextMessage is a simple Java string.
StreamMessage contains a stream of Java primitive values.
MapMessage contains a set of name-value pairs.
ObjectMessage contains a Java serialized object.
While, in general, the message type is dictated by the needs of an application, the more complicated types (MapMessage and ObjectMessage) carry a performance cost: the expense of serializing and deserializing the data. The performance cost depends on how simple or how complicated the data is.
The performance of a messaging application is affected not only by application design, but also by the message service performing the routing and delivery of messages.
The following sections discuss various message service factors that can affect performance. Understanding the effect of these factors is key to sizing a message service and diagnosing and resolving performance bottlenecks that might arise in a deployed application.
The most important factors affecting performance in a Message Queue service are the following:
Hardware
Operating system
Java Virtual Machine (JVM)
Connections
Transport protocols
Message service architecture
Broker limits and behaviors
Data store performance
Client runtime configuration
The sections below describe the effect of each of these factors on messaging performance.
For both the Message Queue broker and client applications, CPU processing speed and available memory are primary determinants of message service performance. Many software limitations can be eliminated by increasing processing power, while adding memory can increase both processing speed and capacity. However, it is generally expensive to overcome bottlenecks simply by upgrading your hardware.
Because of the efficiencies of different operating systems, performance can vary even on the same hardware platform. For example, the thread model employed by the operating system can have an important effect on the number of concurrent connections a broker can support. All hardware being equal, Solaris is generally faster than Linux, which in turn is generally faster than Windows.
The broker is a Java process that runs in and is supported by the host JVM. As a result, JVM processing is an important determinant of how fast and efficiently a broker can route and deliver messages.
In particular, the JVM’s management of memory resources can be critical. Sufficient memory has to be allocated to the JVM to accommodate increasing memory loads. In addition, the JVM periodically reclaims unused memory, and this memory reclamation can delay message processing. The larger the JVM memory heap, the longer the potential delay that might be experienced during memory reclamation.
The number and speed of connections between client and broker can affect the number of messages that a message service can handle as well as the speed of message delivery.
All access to the broker is by way of connections. Any limit on the number of concurrent connections can affect the number of producing or consuming clients that can concurrently use the broker.
The number of connections to a broker is generally limited by the number of threads available. Message Queue can be configured to support either a dedicated thread model or a shared thread model (see Thread Pool Management).
The dedicated thread model is very fast because each connection has dedicated threads; however, the number of connections is limited by the number of threads available (one input thread and one output thread for each connection). The shared thread model places no limit on the number of connections; however, sharing threads among a number of connections imposes significant overhead and throughput delays, especially when those connections are busy.
Message Queue software allows clients to communicate with the broker using various low-level transport protocols. Message Queue supports the connection services (and corresponding protocols) described in Configuring Connection Services.
The choice of protocols is based on application requirements (encrypted, accessible through a firewall), but the choice affects overall performance.
Our tests compared throughput for TCP and SSL for two cases: a high-reliability scenario (1k persistent messages sent to topic destinations with durable subscriptions and using AUTO_ACKNOWLEDGE acknowledgment mode) and a high-performance scenario (1k nonpersistent messages sent to topic destinations without durable subscriptions and using DUPS_OK_ACKNOWLEDGE acknowledgment mode).
In general we found that protocol has less effect in the high-reliability case. This is probably because the persistence overhead required in the high-reliability case is a more important factor in limiting throughput than the protocol speed. Additionally:
TCP provides the fastest method to communicate with the broker.
SSL is 50 to 70 percent slower than TCP when it comes to sending and receiving messages (50 percent for persistent messages, closer to 70 percent for nonpersistent messages). Additionally, establishing the initial connection is slower with SSL (it might take several seconds) because the client and broker (or Web Server in the case of HTTPS) need to establish a private key to be used when encrypting the data for transmission. The performance drop is caused by the additional processing required to encrypt and decrypt each low-level TCP packet.
HTTP is slower than either TCP or SSL. It uses a servlet that runs on a Web server as a proxy between the client and the broker. Performance overhead is involved in encapsulating packets in HTTP requests and in the requirement that messages make two hops (client to servlet, servlet to broker) to reach the broker.
HTTPS is slower than HTTP because of the additional overhead required to encrypt the packet between client and servlet and between servlet and broker.
A Message Queue message service can be implemented as a single broker or as a cluster consisting of multiple interconnected broker instances.
As the number of clients connected to a broker increases, and as the number of messages being delivered increases, a broker will eventually exceed resource limitations such as file descriptor, thread, and memory limits. One way to accommodate increasing loads is to add more broker instances to a Message Queue message service, distributing client connections and message routing and delivery across multiple brokers.
In general, this scaling works best if clients are evenly distributed across the cluster, especially message-producing clients. Because of the overhead involved in delivering messages between the brokers in a cluster, clusters with limited numbers of connections or limited message delivery rates might exhibit lower performance than a single broker.
You might also use a broker cluster to optimize network bandwidth. For example, you might want to use slower, long distance network links between a set of remote brokers within a cluster, while using higher speed links for connecting clients to their respective broker instances.
For more information on clusters, see Chapter 10, Configuring and Managing Broker Clusters
The message throughput that a broker might be required to handle is a function of the use patterns of the messaging applications the broker supports. However, the broker is limited in resources: memory, CPU cycles, and so forth. As a result, it would be possible for a broker to become overwhelmed to the point where it becomes unresponsive or unstable.
The Message Queue message broker has mechanisms built in for managing memory resources and preventing the broker from running out of memory. These mechanisms include configurable limits on the number of messages or message bytes that can be held by a broker or its individual physical destinations, and a set of behaviors that can be instituted when physical destination limits are reached.
With careful monitoring and tuning, these configurable mechanisms can be used to balance the inflow and outflow of messages so that system overload cannot occur. While these mechanisms consume overhead and can limit message throughput, they nevertheless maintain operational integrity.
Message Queue supports both file-based and JDBC-based persistence modules. File-based persistence uses individual files to store persistent data. JDBC-based persistence uses a Java Database Connectivity (JDBC) interface and requires a JDBC-compliant data store. File-based persistence is generally faster than JDBC-based; however, some users prefer the redundancy and administrative control provided by a JDBC-compliant store.
In the case of file-based persistence, you can maximize reliability by specifying that persistence operations synchronize the in-memory state with the data store. This helps eliminate data loss due to system crashes, but at the expense of performance.
The Message Queue client runtime provides client applications with an interface to the Message Queue message service. It supports all the operations needed for clients to send messages to physical destinations and to receive messages from such destinations. The client runtime is configurable (by setting connection factory attribute values), allowing you to control aspects of its behavior, such as connection flow metering, consumer flow limits, and connection flow limits, that can improve performance and message throughput. See Client Runtime Message Flow Adjustments for more information on these features and the attributes used to configure them.
The following sections explain how configuration adjustments can affect performance.
The following sections describe adjustments you can make to the operating system, JVM, communication protocols, and persistent data store.
See your system documentation for tuning your operating system.
By default, the broker uses a JVM heap size of 192MB. This is often too small for significant message loads and should be increased.
When the broker gets close to exhausting the JVM heap space used by Java objects, it uses various techniques such as flow control and message swapping to free memory. Under extreme circumstances it even closes client connections in order to free the memory and reduce the message inflow. Hence it is desirable to set the maximum JVM heap space high enough to avoid such circumstances.
However, if the maximum Java heap space is set too high, in relation to system physical memory, the broker can continue to grow the Java heap space until the entire system runs out of memory. This can result in diminished performance, unpredictable broker crashes, and/or affect the behavior of other applications and services running on the system. In general, you need to allow enough physical memory for the operating system and other applications to run on the machine.
In general it is a good idea to evaluate the normal and peak system memory footprints, and configure the Java heap size so that it is large enough to provide good performance, but not so large as to risk system memory problems.
To change the minimum and maximum heap size for the broker, use the -vmargs command line option when starting the broker. For example:
/usr/bin/imqbrokerd -vmargs "-Xms256m -Xmx1024m"
This command will set the starting Java heap size to 256MB and the maximum Java heap size to 1GB.
On Solaris or Linux, if starting the broker via /etc/rc* (that is, /etc/init.d/imq), specify broker command line arguments in the file /etc/imq/imqbrokerd.conf (Solaris) or /etc/opt/sun/mq/imqbrokerd.conf (Linux). See the comments in that file for more information.
On Windows, if starting the broker as a Windows service, specify JVM arguments using the -vmargs option to the imqsvcadmin install command. See Service Administrator Utility in Chapter 15, Command Line Reference.
In any case, verify settings by checking the broker’s log file or using the imqcmd metrics bkr -m cxn command.
Once a protocol that meets application needs has been chosen, additional tuning (based on the selected protocol) might improve performance.
A protocol’s performance can be modified using the following three broker properties:
nodelay
inbufsz
outbufsz
For TCP and SSL protocols, these properties affect the speed of message delivery between client and broker. For HTTP and HTTPS protocols, these properties affect the speed of message delivery between the Message Queue tunnel servlet (running on a Web server) and the broker. For HTTP/HTTPS protocols there are additional properties that can affect performance (see HTTP/HTTPS Tuning).
The protocol tuning properties are described in the following sections.
The nodelay property affects Nagle’s algorithm (the value of the TCP_NODELAY socket-level option on TCP/IP) for the given protocol. Nagle’s algorithm is used to improve TCP performance on systems using slow connections such as wide-area networks (WANs).
When the algorithm is used, TCP tries to prevent several small chunks of data from being sent to the remote system (by bundling the data in larger packets). If the data written to the socket does not fill the required buffer size, the protocol delays sending the packet until either the buffer is filled or a specific delay time has elapsed. Once the buffer is full or the timeout has occurred, the packet is sent.
For most messaging applications, performance is best if there is no delay in the sending of packets (Nagle’s algorithm is not enabled). This is because most interactions between client and broker are request/response interactions: the client sends a packet of data to the broker and waits for a response. For example, typical interactions include:
Creating a connection
Creating a producer or consumer
Sending a persistent message (the broker confirms receipt of the message)
Sending a client acknowledgment in an AUTO_ACKNOWLEDGE or CLIENT_ACKNOWLEDGE session (the broker confirms processing of the acknowledgment)
For these interactions, most packets are smaller than the buffer size. This means that if Nagle’s algorithm is used, the broker delays several milliseconds before sending a response to the consumer.
However, Nagle’s algorithm may improve performance in situations where connections are slow and broker responses are not required. This would be the case where a client sends a nonpersistent message or where a client acknowledgment is not confirmed by the broker (DUPS_OK_ACKNOWLEDGE session).
The inbufsz property sets the size of the buffer on the input stream reading data coming in from a socket. Similarly, outbufsz sets the buffer size of the output stream used by the broker to write data to the socket.
In general, both parameters should be set to values that are slightly larger than the average packet being received or sent. A good rule of thumb is to set these property values to the size of the average packet plus 1 kilobyte (rounded to the nearest kilobyte). For example, if the broker is receiving packets with a body size of 1 kilobyte, the overall size of the packet (message body plus header plus properties) is about 1200 bytes; an inbufsz of 2 kilobytes (2048 bytes) gives reasonable performance. Increasing inbufsz or outbufsz greater than that size may improve performance slightly, but increases the memory needed for each connection.
In addition to the general properties discussed in the previous two sections, HTTP/HTTPS performance is limited by how fast a client can make HTTP requests to the Web server hosting the Message Queue tunnel servlet.
A Web server might need to be optimized to handle multiple requests on a single socket. With JDK version 1.4 and later, HTTP connections to a Web server are kept alive (the socket to the Web server remains open) to minimize resources used by the Web server when it processes multiple HTTP requests. If the performance of a client application using JDK version 1.4 is slower than the same application running with an earlier JDK release, you might need to tune the Web server keep-alive configuration parameters to improve performance.
In addition to such Web server tuning, you can also adjust how often a client polls the Web server. HTTP is a request-based protocol. This means that clients using an HTTP-based protocol periodically need to check the Web server to see if messages are waiting. The imq.httpjms.http.pullPeriod broker property (and the corresponding imq.httpsjms.https.pullPeriod property) specifies how often the Message Queue client runtime polls the Web server.
If the pullPeriod value is -1 (the default value), the client runtime polls the server as soon as the previous request returns, maximizing the performance of the individual client. As a result, each client connection monopolizes a request thread in the Web server, possibly straining Web server resources.
If the pullPeriod value is a positive number, the client runtime periodically sends requests to the Web server to see if there is pending data. In this case, the client does not monopolize a request thread in the Web server. Hence, if large numbers of clients are using the Web server, you might conserve Web server resources by setting the pullPeriod to a positive value.
For information on tuning the file-based persistent store, see Configuring a File-Based Data Store.
You can improve performance and increase broker stability under load by properly managing broker memory. Memory management can be configured on a destination-by-destination basis or on a system-wide level (for all destinations, collectively).
To configure physical destination limits, see the properties described in Physical Destination Properties.
If message producers tend to overrun message consumers, messages can accumulate in the broker. The broker contains a mechanism for throttling back producers and swapping messages out of active memory under low memory conditions, but it is wise to set a hard limit on the total number of messages (and message bytes) that the broker can hold.
Control these limits by setting the imq.system.max_count and the imq.system.max_size broker properties.
For example:
imq.system.max_count=5000
With this value defined, the broker holds no more than 5000 undelivered or unacknowledged messages. If additional messages are sent, they are rejected by the broker. If a message is persistent, the client runtime throws an exception when the producer tries to send it. If the message is nonpersistent, the broker silently drops it.
When an exception is thrown in sending a message, the client should process the exception by pausing for a moment and retrying the send again. (Note that the exception will never be due to the broker’s failure to receive a message; the exception is thrown by the client runtime before the message is sent to the broker.)
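A minimal sketch of such retry handling, using only the standard JMS API (the retry count and one-second pause are illustrative choices, not recommendations from this guide):
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageProducer;

public class RetryingSender {
    // If the send fails (for example, because the broker has reached its
    // message limit), pause briefly and try again before giving up.
    static void sendWithRetry(MessageProducer producer, Message msg)
            throws JMSException, InterruptedException {
        int attempts = 0;
        while (true) {
            try {
                producer.send(msg);
                return;
            } catch (JMSException e) {
                if (++attempts >= 5) {
                    throw e; // give up after several attempts
                }
                Thread.sleep(1000); // pause for a moment before retrying
            }
        }
    }
}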
This section discusses client runtime flow control behaviors that affect performance. These behaviors are configured as attributes of connection factory administered objects. For information on setting connection factory attributes, see Chapter 11, Managing Administered Objects.
Messages sent and received by clients (payload messages), as well as Message Queue control messages, pass over the same client-broker connection. Delays in the delivery of control messages, such as broker acknowledgments, can result if control messages are held up by the delivery of payload messages. To prevent this type of congestion, Message Queue meters the flow of payload messages across a connection.
Payload messages are batched (as specified with the connection factory attribute imqConnectionFlowCount) so that only a set number are delivered. After the batch has been delivered, delivery of payload messages is suspended and only pending control messages are delivered. This cycle repeats, as additional batches of payload messages are delivered followed by pending control messages.
The value of imqConnectionFlowCount should be kept low if the client is doing operations that require many responses from the broker: for example, if the client is using CLIENT_ACKNOWLEDGE or AUTO_ACKNOWLEDGE mode, persistent messages, transactions, or queue browsers, or is adding or removing consumers. If, on the other hand, the client has only simple consumers on a connection using DUPS_OK_ACKNOWLEDGE mode, you can increase imqConnectionFlowCount without compromising performance.
There is a limit to the number of payload messages that the Message Queue client runtime can handle before encountering local resource limitations, such as memory. When this limit is approached, performance suffers. Hence, Message Queue lets you limit the number of messages per consumer (or messages per connection) that can be delivered over a connection and buffered in the client runtime, waiting to be consumed.
When the number of payload messages delivered to the client runtime exceeds the value of imqConsumerFlowLimit for any consumer, message delivery for that consumer stops. It is resumed only when the number of unconsumed messages for that consumer drops below the value set with imqConsumerFlowThreshold.
The following example illustrates the use of these limits: consider the default settings for topic consumers:
imqConsumerFlowLimit=1000
imqConsumerFlowThreshold=50
When the consumer is created, the broker delivers an initial batch of 1000 messages (provided they exist) to this consumer without pausing. After sending 1000 messages, the broker stops delivery until the client runtime asks for more messages. The client runtime holds these messages until the application processes them. The client runtime then allows the application to consume at least 50% (imqConsumerFlowThreshold) of the message buffer capacity (that is, 500 messages) before asking the broker to send the next batch.
In the same situation, if the threshold were 10%, the client runtime would wait for the application to consume at least 900 messages before asking for the next batch.
The next batch size is calculated as follows:
imqConsumerFlowLimit - (current number of pending msgs in buffer)
So if imqConsumerFlowThreshold is 50%, the next batch size can fluctuate between 500 and 1000, depending on how fast the application can process the messages.
If imqConsumerFlowThreshold is set too high (close to 100%), the broker will tend to send smaller batches, which can lower message throughput. If it is set too low (close to 0%), the client may finish processing the remaining buffered messages before the broker delivers the next set, again degrading message throughput. Generally speaking, unless you have specific performance or reliability concerns, you will not need to change the default value of the imqConsumerFlowThreshold attribute.
The consumer-based flow controls (in particular, imqConsumerFlowLimit) are the best way to manage memory in the client runtime. Generally, depending on the client application, you know the number of consumers you need to support on any connection, the size of the messages, and the total amount of memory that is available to the client runtime.
In the case of some client applications, however, the number of consumers may be indeterminate, depending on choices made by end users. In those cases, you can still manage memory using connection-level flow limits.
Connection-level flow controls limit the total number of messages buffered for all consumers on a connection. If this number exceeds the value of imqConnectionFlowLimit, delivery of messages through the connection stops until that total drops below the connection limit. (The imqConnectionFlowLimit attribute is enabled only if you set imqConnectionFlowLimitEnabled to true.)
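As an illustration, the following sketch sets these flow control attributes programmatically on a connection factory. It assumes the com.sun.messaging.ConnectionFactory class and its setProperty method, as described in the Message Queue Developer’s Guide for Java Clients; the attribute values shown are examples only. In most deployments you would instead set the same attributes on a connection factory administered object, as described in Chapter 11, Managing Administered Objects.

import javax.jms.Connection;
import javax.jms.JMSException;

// Assumes com.sun.messaging.ConnectionFactory and its
// setProperty(String, String) method; the attribute names are those
// discussed above, and the values are illustrative.
public class FlowControlSetup {
    public static Connection createFlowControlledConnection() throws JMSException {
        com.sun.messaging.ConnectionFactory factory =
                new com.sun.messaging.ConnectionFactory();

        // Meter payload messages in batches of 100 per connection.
        factory.setProperty("imqConnectionFlowCount", "100");

        // Buffer at most 500 messages per consumer; resume delivery when
        // the number of unconsumed messages drops below 50% of that limit.
        factory.setProperty("imqConsumerFlowLimit", "500");
        factory.setProperty("imqConsumerFlowThreshold", "50");

        // Cap the total number of messages buffered for all consumers
        // on the connection.
        factory.setProperty("imqConnectionFlowLimitEnabled", "true");
        factory.setProperty("imqConnectionFlowLimit", "2000");

        return factory.createConnection();
    }
}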
The number of messages queued up in a session is a function of the number of message consumers using the session and the message load for each consumer. If a client is exhibiting delays in producing or consuming messages, you can normally improve performance by redesigning the application to distribute message producers and consumers among a larger number of sessions or to distribute sessions among a larger number of connections.
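For example, the following sketch (standard javax.jms API; the class name SessionPerConsumer is illustrative) creates a separate session for each queue consumer instead of sharing a single session among them:

import javax.jms.Connection;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

// Give each consumer its own session so that slow processing by one
// consumer does not delay messages queued for the others.
public class SessionPerConsumer {
    public static MessageConsumer[] createConsumers(Connection connection,
            Queue queue, int count) throws JMSException {
        MessageConsumer[] consumers = new MessageConsumer[count];
        for (int i = 0; i < count; i++) {
            Session session =
                    connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            consumers[i] = session.createConsumer(queue);
        }
        return consumers;
    }
}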
The efficiency with which multiple queue consumers process messages in a queue destination depends on a number of factors. To achieve optimal message throughput there must be a sufficient number of consumers to keep up with the rate of message production for the queue, and the messages in the queue must be routed and then delivered to the active consumers in such a way as to maximize their rate of consumption.
The message delivery mechanism for multiple-consumer queues is that messages are delivered to consumers in batches as each consumer is ready to receive a new batch. The readiness of a consumer to receive a batch of messages depends upon configurable client runtime properties, such as imqConsumerFlowLimit and imqConsumerFlowThreshold, as described in Message Flow Limits. As new consumers are added to a queue, they are sent a batch of messages to consume, and receive subsequent batches as they become ready.
The message delivery mechanism for multiple-consumer queues described above can result in messages being consumed in an order different from the order in which they are produced.
If messages are accumulating in the queue, it is possible that there is an insufficient number of consumers to handle the message load. It is also possible that messages are being delivered to consumers in batch sizes that cause messages to back up on the consumers. For example, if the batch size (consumerFlowLimit) is too large, one consumer might receive all the messages in a queue while other consumers receive none. If consumers are very fast, this might not be a problem. However, if consumers are relatively slow, you want messages to be distributed to them evenly, and therefore you want the batch size to be small. Although smaller batch sizes require more overhead to deliver messages to consumers, for slow consumers there is generally a net performance gain in using small batch sizes. The value of consumerFlowLimit can be set on a destination as well as on the client runtime: the smaller value overrides the larger one.
This chapter explains how to understand and resolve the following problems:
When problems occur, it is useful to check the version number of the installed Message Queue software. Use the version number to ensure that you are using documentation whose version matches the software version. You also need the version number to report a problem to Sun. To check the version number, issue the following command:
imqcmd -v
Symptoms:
Client cannot make a new connection.
Client cannot auto-reconnect on failed connection.
Possible causes:
Broker is not running or there is a network connectivity problem.
Too few threads available for the number of connections required.
TCP backlog limits the number of simultaneous new connection requests that can be established.
Operating system limits the number of concurrent connections.
Possible cause: Client applications are not closing connections, causing the number of connections to exceed resource limitations.
To confirm this cause of the problem: List all connections to a broker:
imqcmd list cxn
The output will list all connections and the host from which each connection has been made, revealing an unusual number of open connections for specific clients.
To resolve the problem: Rewrite the offending clients to close unused connections.
Possible cause: Broker is not running or there is a network connectivity problem.
To confirm this cause of the problem:
Telnet to the broker’s primary port (for example, the default of 7676) and verify that the broker responds with Port Mapper output.
Verify that the broker process is running on the host.
To resolve the problem:
Start up the broker.
Fix the network connectivity problem.
Possible cause: Connection service is inactive or paused.
To confirm this cause of the problem: Check the status of all connection services:
imqcmd list svc
If the status of a connection service is shown as unknown or paused, clients will not be able to establish a connection using that service.
To resolve the problem:
If the status of a connection service is shown as unknown, it is missing from the active service list (imq.service.active). In the case of SSL-based services, the service might also be improperly configured, causing the broker to make the following entry in the broker log:
ERROR [B3009]: Unable to start service ssljms:[B4001]: Unable to open protocol tls for ssljms service...
followed by an explanation of the underlying cause of the exception.
To properly configure SSL services, see Message Encryption.
If the status of a connection service is shown as paused, resume the service (see Pausing and Resuming a Connection Service).
Possible cause: Too few threads available for the number of connections required.
To confirm this cause of the problem: Check for the following entry in the broker log:
WARNING [B3004]: No threads are available to process a new connection on service ...Closing the new connection.
Also check the number of connections on the connection service and the number of threads currently in use, using one of the following formats:
imqcmd query svc -n serviceName
imqcmd metrics svc -n serviceName -m cxn
Each connection requires two threads: one for incoming messages and one for outgoing messages (see Thread Pool Management).
To resolve the problem:
If you are using a dedicated thread pool model (imq.serviceName.threadpool_model=dedicated), the maximum number of connections is half the maximum number of threads in the thread pool. Therefore, to increase the number of connections, increase the size of the thread pool (imq.serviceName.max_threads) or switch to the shared thread pool model.
If you are using a shared thread pool model (imq.serviceName.threadpool_model=shared), the maximum number of connections is half the product of the connection monitor limit (imq.serviceName.connectionMonitor_limit) and the maximum number of threads (imq.serviceName.max_threads). Therefore, to increase the number of connections, increase the size of the thread pool or increase the connection monitor limit.
Ultimately, the number of supportable connections (or the throughput on connections) will reach input/output limits. In such cases, use a multiple-broker cluster to distribute connections among the broker instances within the cluster.
Possible cause: Too few file descriptors for the number of connections required on the Solaris or Linux platform.
For more information about this issue, see Setting the File Descriptor Limit.
To confirm this cause of the problem: Check for an entry in the broker log similar to the following:
Too many open files
To resolve the problem: Increase the file descriptor limit, as described in the man page for the ulimit command.
Possible cause: TCP backlog limits the number of simultaneous new connection requests that can be established.
The TCP backlog places a limit on the number of simultaneous connection requests that can be stored in the system backlog (imq.portmapper.backlog) before the Port Mapper rejects additional requests. (On the Windows platform there is a hard-coded backlog limit of 5 for Windows desktops and 200 for Windows servers.)
The rejection of requests because of backlog limits is usually a transient phenomenon, due to an unusually high number of simultaneous connection requests.
To confirm this cause of the problem: Examine the broker log. First, check to see whether the broker is accepting some connections during the same time period that it is rejecting others. Next, check for messages that explain rejected connections. If you find such messages, the TCP backlog is probably not the problem, because the broker does not log connection rejections due to the TCP backlog. If some successful connections are logged, and no connection rejections are logged, the TCP backlog is probably the problem.
To resolve the problem:
Program the client to retry the attempted connection after a short interval of time; this normally works because of the transient nature of the problem (see the sketch following this list).
Increase the value of imq.portmapper.backlog.
Check that clients are not closing and then opening connections too often.
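The following fragment is a minimal sketch of such a client-side retry, using the standard javax.jms API (the class name ConnectWithRetry, the retry count, and the interval are illustrative):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;

// Retry connection creation a few times, pausing between attempts, to
// ride out transient rejections such as a full TCP backlog.
public class ConnectWithRetry {
    public static Connection connect(ConnectionFactory factory)
            throws JMSException, InterruptedException {
        JMSException lastError = null;
        for (int attempt = 0; attempt < 5; attempt++) {
            try {
                return factory.createConnection();
            } catch (JMSException e) {
                lastError = e;
                Thread.sleep(2000);   // short interval before retrying
            }
        }
        throw lastError;              // all attempts failed
    }
}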
Possible cause: Operating system limits the number of concurrent connections.
The Windows operating system license places limits on the number of concurrent remote connections that are supported.
To confirm this cause of the problem: Check that there are plenty of threads available for connections (using imqcmd query svc) and check the terms of your Windows license agreement. If you can make connections from a local client, but not from a remote client, operating system limitations might be the cause of the problem.
To resolve the problem:
Upgrade the Windows license to allow more connections.
Distribute connections among a number of broker instances by setting up a multiple-broker cluster.
Possible cause: Authentication or authorization of the user is failing.
The authentication may be failing for any of the following reasons:
Incorrect password
No entry for user in user repository
User does not have access permission for connection service
To confirm this cause of the problem: Check entries in the broker log for the Forbidden error message. This will indicate an authentication error, but will not indicate the reason for it.
If you are using a file-based user repository, enter the following command:
imqusermgr list -i instanceName -u userName
If the output shows a user, the wrong password was probably submitted. If the output shows the following error, there is no entry for the user in the user repository:
Error [B3048]: User does not exist in the password file
If you are using an LDAP server user repository, use the appropriate tools to check whether there is an entry for the user.
Check the access control file to see whether there are restrictions on access to the connection service.
To resolve the problem:
If the wrong password was used, provide the correct password.
If there is no entry for the user in the user repository, add one (see Adding a User to the Repository).
If the user does not have access permission for the connection service, edit the access control file to grant such permission (see Authorization Rules for Connection Services).
Possible cause: Authentication or authorization of the user is failing.
Authentication may be failing for any of the following reasons:
No entry for user in user repository
Incorrect password
User does not have access permission for connection service
To confirm this cause of the problem
Check entries in the broker log for the error message Forbidden. This will indicate an authentication error, but will not indicate the reason for it.
Check the user repository for an entry for this user:
If you are using a flat-file user repository, enter the command
imqusermgr list -i instanceName -u userName
If the output shows the error
Error [B3048]: User does not exist in the password file
then there is no entry for the user in the user repository.
If you are using an LDAP user repository, use the appropriate tools to check whether there is an entry for the user.
If the output from step 2 does show a user entry, the wrong password was probably provided.
Check the access control file to see whether there are restrictions on access to the connection service.
To resolve the problem
If there is no entry for the user in the user repository, add one (see Adding a User to the Repository).
If the wrong password was used, provide the correct password.
If the user does not have access permission for the connection service, edit the access control file to grant such permission (see Authorization Rules for Connection Services).
Symptoms:
Message throughput does not meet expectations.
Message input/output rates are not limited by an insufficient number of supported connections (as described in A Client Cannot Establish a Connection).
Possible causes:
Possible cause: Network connection or WAN is too slow.
To confirm this cause of the problem:
Ping the network, to see how long it takes for the ping to return, and consult a network administrator.
Send and receive messages using local clients and compare the delivery time with that of remote clients (which use a network link).
To resolve the problem: Upgrade the network link.
Possible cause: Connection service protocol is inherently slow compared to TCP.
For example, SSL-based or HTTP-based protocols are slower than TCP (see Transport Protocols).
To confirm this cause of the problem: If you are using SSL-based or HTTP-based protocols, try using TCP and compare the delivery times.
To resolve the problem: Application requirements usually dictate the protocols being used, so there is little you can do other than attempt to tune the protocol as described in Tuning Transport Protocols.
Possible cause: Connection service protocol is not optimally tuned.
To confirm this cause of the problem: Try tuning the protocol to see whether it makes a difference.
To resolve the problem: Try tuning the protocol, as described in Tuning Transport Protocols.
Possible cause: Messages are so large that they consume too much bandwidth.
To confirm this cause of the problem: Try running your benchmark with smaller-sized messages.
To resolve the problem:
Have application developers modify the application to use the message compression feature, which is described in the Message Queue Developer’s Guide for Java Clients.
Use messages as notifications of data to be sent, but move the data using another protocol.
Possible cause: What appears to be slow connection throughput is actually a bottleneck in some other step of the message delivery process.
To confirm this cause of the problem: If what appears to be slow connection throughput cannot be explained by any of the causes above, see Factors Affecting Performance for other possible bottlenecks and check for symptoms associated with the following problems:
To resolve the problem: Follow the problem resolution guidelines provided in the troubleshooting sections listed above.
Symptom:
A message producer cannot be created for a physical destination; the client receives an exception.
Possible causes:
A physical destination has been configured to allow only a limited number of producers.
The user is not authorized to create a message producer due to settings in the access control file.
Possible cause: A physical destination has been configured to allow only a limited number of producers.
One of the ways of avoiding the accumulation of messages on a physical destination is to limit the number of producers (maxNumProducers) that it supports.
To confirm this cause of the problem: Check the physical destination:
imqcmd query dst
(see Viewing Physical Destination Information). The output will show the current number of producers and the value of maxNumProducers. If the two values are the same, the number of producers has reached its configured limit. When a new producer is rejected by the broker, the broker returns the exception
ResourceAllocationException [C4088]: A JMS destination limit was reached
and makes the following entry in the broker log:
[B4183]: Producer can not be added to destination
To resolve the problem: Increase the value of the maxNumProducers property (see Updating Physical Destination Properties).
Possible cause: The user is not authorized to create a message producer due to settings in the access control file.
To confirm this cause of the problem: When a new producer is rejected by the broker, the broker returns the exception
JMSSecurityException [C4076]: Client does not have permission to create producer on destination
and makes the following entries in the broker log:
[B2041]: Producer on destination denied
[B4051]: Forbidden guest.
To resolve the problem: Change the access control properties to allow the user to produce messages (see Authorization Rules for Physical Destinations).
Symptoms:
When sending persistent messages, the send method does not return and the client blocks.
When sending a persistent message, the client receives an exception.
A producing client slows down.
Possible causes:
The broker is backlogged and has responded by slowing message producers.
The broker cannot save a persistent message to the data store.
Possible cause: The broker is backlogged and has responded by slowing message producers.
A backlogged broker accumulates messages in broker memory. When the number of messages or message bytes in physical destination memory reaches configured limits, the broker attempts to conserve memory resources in accordance with the specified limit behavior. The following limit behaviors slow down message producers:
FLOW_CONTROL: The broker does not immediately acknowledge receipt of persistent messages (thereby blocking a producing client).
REJECT_NEWEST: The broker rejects new persistent messages.
Similarly, when the number of messages or message bytes in brokerwide memory (for all physical destinations) reaches configured limits, the broker will attempt to conserve memory resources by rejecting the newest messages. Also, when system memory limits are reached because physical destination or brokerwide limits have not been set properly, the broker takes increasingly serious action to prevent memory overload. These actions include throttling back message producers.
To confirm this cause of the problem: When a message is rejected by the broker because of configured message limits, the broker returns the exception
JMSException [C4036]: A server error occurred
and makes the following entry in the broker log:
[B2011]: Storing of JMS message from IMQconn failed
This message is followed by another indicating the limit that has been reached:
[B4120]: Cannot store message on destination destName because capacity of maxNumMsgs would be exceeded.
if the exceeded message limit is on a physical destination, or
[B4024]: The maximum number of messages currently in the system has been exceeded, rejecting message.
if the limit is brokerwide.
More generally, you can check for message limit conditions before the rejections occur as follows:
Query physical destinations and the broker and inspect their configured message limit settings.
Monitor the number of messages or message bytes currently in a physical destination or in the broker as a whole, using the appropriate imqcmd commands. See Chapter 20, Metrics Information Reference for information about metrics you can monitor and the commands you use to obtain them.
To resolve the problem:
Modify the message limits on a physical destination (or brokerwide), being careful not to exceed memory resources.
In general, you should manage memory at the individual destination level, so that brokerwide message limits are never reached. For more information, see Broker Memory Management Adjustments.
Change the limit behaviors on a destination so as not to slow message production when message limits are reached, but rather to discard messages in memory.
For example, you can specify the REMOVE_OLDEST and REMOVE_LOW_PRIORITY limit behaviors, which delete messages that accumulate in memory (see Table 17–1).
Possible cause: The broker cannot save a persistent message to the data store.
If the broker cannot access a data store or write a persistent message to it, the producing client is blocked. This condition can also occur if destination or brokerwide message limits are reached, as described above.
To confirm this cause of the problem: If the broker is unable to write to the data store, it makes one of the following entries in the broker log:
[B2011]: Storing of JMS message from connectionID failed
[B4004]: Failed to persist message messageID
To resolve the problem:
In the case of file-based persistence, try increasing the disk space of the file-based data store.
In the case of a JDBC-compliant data store, check that JDBC-based persistence is properly configured (see Configuring a JDBC-Based Data Store). If so, consult your database administrator to troubleshoot other database problems.
Possible cause: Broker acknowledgment timeout is too short.
Because of slow connections or a lethargic broker (caused by high CPU utilization or scarce memory resources), a broker may require more time to acknowledge receipt of a persistent message than allowed by the value of the connection factory’s imqAckTimeout attribute.
To confirm this cause of the problem: If the imqAckTimeout value is exceeded, the broker returns the exception
JMSException [C4000]: Packet acknowledge failed
To resolve the problem: Change the value of the imqAckTimeout connection factory attribute (see Reliability And Flow Control).
Possible cause: A producing client is encountering JVM limitations.
To confirm this cause of the problem:
Find out whether the client application receives an out-of-memory error.
Check the free memory available in the JVM heap, using runtime methods such as freeMemory, maxMemory, and totalMemory.
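For example, the client application can log these values using the standard java.lang.Runtime API (the class name HeapCheck is illustrative):

// Log the JVM heap figures mentioned above; freeMemory, totalMemory,
// and maxMemory are all part of the standard java.lang.Runtime API.
public class HeapCheck {
    public static void logHeapUsage() {
        Runtime rt = Runtime.getRuntime();
        long free = rt.freeMemory();      // free memory in the current heap
        long total = rt.totalMemory();    // current heap size
        long max = rt.maxMemory();        // maximum heap size the JVM may use
        System.out.println("JVM heap: free=" + free
                + " total=" + total + " max=" + max);
    }
}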
To resolve the problem: Adjust the JVM (see Java Virtual Machine Adjustments).
Symptoms:
Message production is delayed or produced messages are rejected by the broker.
Messages take an unusually long time to reach consumers.
The number of messages or message bytes in the broker (or in specific destinations) increases steadily over time.
To see whether messages are accumulating, check how the number of messages or message bytes in the broker changes over time and compare to configured limits. First check the configured limits:
imqcmd query bkr
The imqcmd metrics bkr subcommand does not display this information.
Then check for message accumulation in each destination:
imqcmd list dst
To see whether messages have exceeded configured destination or brokerwide limits, check the broker log for the entry
[B2011]: Storing of JMS message from … failed.
This entry will be followed by another identifying the limit that has been exceeded.
Possible causes:
There are inactive durable subscriptions on a topic destination.
Too few consumers are available to consume messages in a queue.
Message consumers are processing too slowly to keep up with message producers.
Client acknowledgment processing is slowing down message consumption.
Client code defects; consumers are not acknowledging messages.
Possible cause: There are inactive durable subscriptions on a topic destination.
If a durable subscription is inactive, messages are stored in a destination until the corresponding consumer becomes active and can consume the messages.
To confirm this cause of the problem: Check the state of durable subscriptions on each topic destination:
imqcmd list dur -d destName
To resolve the problem:
Purge all messages for the offending durable subscriptions (see Managing Durable Subscriptions).
Specify message limit and limit behavior attributes for the topic (see Table 17–1). For example, you can specify the REMOVE_OLDEST and REMOVE_LOW_PRIORITY limit behaviors, which delete messages that accumulate in memory.
Purge all messages from the corresponding destinations (see Purging a Physical Destination).
Limit the time messages can remain in memory by rewriting the producing client to set a time-to-live value on each message. You can override any such settings for all producers sharing a connection by setting the imqOverrideJMSExpiration and imqJMSExpiration connection factory attributes (see Message Header Overrides).
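For example, a producing client can set a default time-to-live on its producer so that every message it sends expires after a fixed interval (standard javax.jms API; the 60-second value and the class name ExpiringProducer are illustrative):

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageProducer;

// Limit how long produced messages can remain in memory by giving them
// a time-to-live, specified in milliseconds.
public class ExpiringProducer {
    public static void sendWithTimeToLive(MessageProducer producer,
            Message message) throws JMSException {
        producer.setTimeToLive(60000);   // messages expire after 60 seconds
        producer.send(message);
    }
}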
Possible cause: Too few consumers are available to consume messages in a multiple-consumer queue.
If there are too few active consumers to which messages can be delivered, a queue destination can become backlogged as messages accumulate. This condition can occur for any of the following reasons:
Too few active consumers exist for the destination.
Consuming clients have failed to establish connections.
No active consumers use a selector that matches messages in the queue.
To confirm this cause of the problem: To help determine the reason for unavailable consumers, check the number of active consumers on a destination:
imqcmd metrics dst -n destName -t q -m con
To resolve the problem: Depending on the reason for unavailable consumers,
Create more active consumers for the queue by starting up additional consuming clients.
Adjust the imq.consumerFlowLimit broker property to optimize queue delivery to multiple consumers (see Adjusting Multiple-Consumer Queue Delivery ).
Specify message limit and limit behavior attributes for the queue (see Table 17–1). For example, you can specify the REMOVE_OLDEST and REMOVE_LOW_PRIORITY limit behaviors, which delete messages that accumulate in memory.
Purge all messages from the corresponding destinations (see Purging a Physical Destination).
Limit the time messages can remain in memory by rewriting the producing client to set a time-to-live value on each message. You can override any such setting for all producers sharing a connection by setting the imqOverrideJMSExpiration and imqJMSExpiration connection factory attributes (see Message Header Overrides).
Possible cause: Message consumers are processing too slowly to keep up with message producers.
In this case, topic subscribers or queue receivers are consuming messages more slowly than the producers are sending messages. One or more destinations are getting backlogged with messages because of this imbalance.
To confirm this cause of the problem: Check for the rate of flow of messages into and out of the broker:
imqcmd metrics bkr -m rts
Then check flow rates for each of the individual destinations:
imqcmd metrics bkr -t destType -n destName -m rts
To resolve the problem:
Optimize consuming client code.
For queue destinations, increase the number of active consumers (see Adjusting Multiple-Consumer Queue Delivery ).
Possible cause: Client acknowledgment processing is slowing down message consumption.
Two factors affect the processing of client acknowledgments:
Significant broker resources can be consumed in processing client acknowledgments. As a result, message consumption may be slowed in those acknowledgment modes in which consuming clients block until the broker confirms client acknowledgments.
JMS payload messages and Message Queue control messages (such as client acknowledgments) share the same connection. As a result, control messages can be held up by JMS payload messages, slowing message consumption.
To confirm this cause of the problem:
Check the flow of messages relative to the flow of packets. If the number of packets per second is out of proportion to the number of messages, client acknowledgments may be a problem.
Check to see whether the client has received the following exception:
JMSException [C4000]: Packet acknowledge failed
To resolve the problem:
Modify the acknowledgment mode used by clients: for example, switch to DUPS_OK_ACKNOWLEDGE or CLIENT_ACKNOWLEDGE.
If using CLIENT_ACKNOWLEDGE or transacted sessions, group a larger number of messages into a single acknowledgment (see the sketch following this list).
Adjust consumer and connection flow control parameters (see Client Runtime Message Flow Adjustments ).
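The following sketch shows one way to group acknowledgments in CLIENT_ACKNOWLEDGE mode, using the standard javax.jms API (the class name BatchedAcknowledger and the batch size of 10 are illustrative). In this mode, calling Message.acknowledge acknowledges all messages consumed so far by the session, so acknowledging once per batch reduces acknowledgment traffic:

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;

// Assumes the consumer's session was created with
// Session.CLIENT_ACKNOWLEDGE. Acknowledge once per batch of messages
// rather than once per message.
public class BatchedAcknowledger {
    private static final int BATCH_SIZE = 10;

    public static void consume(MessageConsumer consumer) throws JMSException {
        int received = 0;
        Message message;
        while ((message = consumer.receive(1000)) != null) {
            // ... process the message here ...
            if (++received % BATCH_SIZE == 0) {
                message.acknowledge();   // acknowledges the whole batch
            }
        }
        // A production client should also acknowledge any remaining
        // messages after the loop ends.
    }
}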
Possible cause: The broker cannot keep up with produced messages.
In this case, messages are flowing into the broker faster than the broker can route and dispatch them to consumers. The sluggishness of the broker can be due to limitations in any or all of the following:
CPU
Network socket read/write operations
Disk read/write operations
Memory paging
Persistent store
JVM memory limits
To confirm this cause of the problem: Check that none of the other possible causes of this problem are responsible.
To resolve the problem:
Upgrade the speed of your computer or data store.
Use a broker cluster to distribute the load among multiple broker instances.
Possible cause: Client code defects; consumers are not acknowledging messages.
Messages are held in a destination until they have been acknowledged by all consumers to which they have been sent. If a client is not acknowledging consumed messages, the messages accumulate in the destination without being deleted.
For example, client code might have the following defects:
Consumers using the CLIENT_ACKNOWLEDGE acknowledgment mode or a transacted session may not be calling Message.acknowledge or Session.commit regularly.
Consumers using the AUTO_ACKNOWLEDGE acknowledgment mode may be hanging for some reason.
To confirm this cause of the problem: First check all other possible causes listed in this section. Next, list the destination with the following command:
imqcmd list dst
Notice whether the number of messages listed under the UnAcked header is the same as the number of messages in the destination. Messages under this header were sent to consumers but not acknowledged. If this number is the same as the total number of messages, then the broker has sent all the messages and is waiting for acknowledgment.
To resolve the problem: Request the help of application developers in debugging this problem.
Symptom:
Message throughput sporadically drops and then resumes normal performance.
Possible causes:
JVM memory reclamation (garbage collection) is taking place.
The JVM is using the just-in-time compiler to speed up performance.
Possible cause: The broker is very low on memory resources.
Because destination and broker limits were not properly set, the broker takes increasingly serious action to prevent memory overload; this can cause the broker to become sluggish until the message backlog is cleared.
To confirm this cause of the problem: Check the broker log for a low memory condition
[B1089]: In low memory condition, broker is attempting to free up resources
followed by an entry describing the new memory state and the amount of total memory being used. Also check the free memory available in the JVM heap:
imqcmd metrics bkr -m cxn
Free memory is low when the value of total JVM memory is close to the maximum JVM memory value.
To resolve the problem:
Adjust the JVM (see Java Virtual Machine Adjustments).
Increase system swap space.
Possible cause: JVM memory reclamation (garbage collection) is taking place.
Memory reclamation periodically sweeps through the system to free up memory. When this occurs, all threads are blocked. The larger the amount of memory to be freed up and the larger the JVM heap size, the longer the delay due to memory reclamation.
To confirm this cause of the problem: Monitor CPU usage on your computer. CPU usage drops when memory reclamation is taking place.
Also start your broker using the following command line options:
-vmargs -verbose:gc
Standard output indicates the time when memory reclamation takes place.
To resolve the problem: In computers with multiple CPUs, set the memory reclamation to take place in parallel:
-XX:+UseParallelGC
Possible cause: The JVM is using the just-in-time compiler to speed up performance.
To confirm this cause of the problem: Check that none of the other possible causes of this problem are responsible.
To resolve the problem: Let the system run for awhile; performance should improve.
Symptom:
Messages sent by producers are not received by consumers.
Possible causes:
Limit behaviors are causing messages to be deleted on the broker.
Consuming client failed to start message delivery on a connection.
Possible cause: Limit behaviors are causing messages to be deleted on the broker.
When the number of messages or message bytes in destination memory reaches configured limits, the broker attempts to conserve memory resources. Three of the configurable behaviors adopted by the broker when these limits are reached will cause messages to be lost:
REMOVE_OLDEST: Delete the oldest messages.
REMOVE_LOW_PRIORITY: Delete the lowest-priority messages according to age.
REJECT_NEWEST: Reject new persistent messages.
To confirm this cause of the problem: Use the QBrowser demo application to inspect the contents of the dead message queue (see To Inspect the Dead Message Queue).
Check whether the JMS_SUN_DMQ_UNDELIVERED_REASON property of messages in the queue has the value REMOVE_OLDEST or REMOVE_LOW_PRIORITY.
To resolve the problem: Increase the destination limits. For example:
imqcmd update dst -n MyDest -o maxNumMsgs=1000
Possible cause: Message timeout value is expiring.
The broker deletes messages whose timeout value has expired. If a destination gets sufficiently backlogged with messages, messages whose time-to-live value is too short might be deleted.
To confirm this cause of the problem: Use the QBrowser demo application to inspect the contents of the dead message queue (see To Inspect the Dead Message Queue).
Check whether the JMS_SUN_DMQ_UNDELIVERED_REASON property of messages in the queue has the value EXPIRED.
To resolve the problem: Contact the application developers and have them increase the time-to-live value.
Possible cause: The broker clock and producer clock are not synchronized.
If clocks are not synchronized, broker calculations of message lifetimes can be wrong, causing messages to exceed their expiration times and be deleted.
To confirm this cause of the problem: Use the QBrowser demo application to inspect the contents of the dead message queue (see To Inspect the Dead Message Queue).
Check whether the JMS_SUN_DMQ_UNDELIVERED_REASON property of messages in the queue has the value EXPIRED.
In the broker log file, look for any of the following messages: B2102, B2103, B2104. These messages all report that possible clock skew was detected.
To resolve the problem: Check that you are running a time synchronization program, as described in Preparing System Resources.
Possible cause: Consuming client failed to start message delivery on a connection.
Messages cannot be delivered until client code establishes a connection and starts message delivery on the connection.
To confirm this cause of the problem: Check that client code establishes a connection and starts message delivery.
To resolve the problem: Rewrite the client code to establish a connection and start message delivery.
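A minimal sketch of the required sequence, using the standard javax.jms API (the class name StartDelivery is illustrative):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

// A consuming client must call Connection.start() before the broker
// delivers any messages to its consumers.
public class StartDelivery {
    public static MessageConsumer startConsuming(ConnectionFactory factory,
            Queue queue) throws JMSException {
        Connection connection = factory.createConnection();
        Session session =
                connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(queue);
        connection.start();   // without this call, no messages are delivered
        return consumer;
    }
}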
Symptom:
When you list destinations, you see that the dead message queue contains messages. For example, issue a command like the following:
imqcmd list dst
After you supply a user name and password, output like the following appears:
Listing all the destinations on the broker specified by:

---------------------------------
Host                 Primary Port
---------------------------------
localhost            7676

-----------------------------------------------------------------------------
Name         Type    State     Producers  Consumers          Msgs
                                 Total      Total     Count  UnAck  Avg Size
-----------------------------------------------------------------------------
MyDest       Queue   RUNNING       0          0           5      0     1177.0
mq.sys.dmq   Queue   RUNNING       0          0          35      0     1422.0

Successfully listed destinations.
In this example, the dead message queue, mq.sys.dmq, contains 35 messages.
Possible causes:
The number of messages, or their sizes, exceed destination limits.
Consumers are not receiving messages before they time out.
There are a number of possible reasons for messages to time out:
Possible cause: The number of messages, or their sizes, exceed destination limits.
To confirm this cause of the problem: Use the QBrowser demo application to inspect the contents of the dead message queue (see To Inspect the Dead Message Queue).
Check the values for the following message properties:
JMS_SUN_DMQ_UNDELIVERED_REASON
JMS_SUN_DMQ_UNDELIVERED_COMMENT
JMS_SUN_DMQ_UNDELIVERED_TIMESTAMP
Under JMS Headers, scroll down to the value for JMSDestination to determine the destination whose messages are becoming dead.
To resolve the problem: Increase the destination limits. For example:
imqcmd update dst -n MyDest -o maxNumMsgs=1000
Possible cause: The broker clock and producer clock are not synchronized.
If clocks are not synchronized, broker calculations of message lifetimes can be wrong, causing messages to exceed their expiration times and be deleted.
To confirm this cause of the problem: Use the QBrowser demo application to inspect the contents of the dead message queue (see To Inspect the Dead Message Queue).
Check whether the JMS_SUN_DMQ_UNDELIVERED_REASON property of messages in the queue has the value EXPIRED.
In the broker log file, look for any of the following messages: B2102, B2103, B2104. These messages all report that possible clock skew was detected.
To resolve the problem: Check that you are running a time synchronization program, as described in Preparing System Resources.
Possible cause: An unexpected broker error has occurred.
To confirm this cause of the problem: Use the QBrowser demo application to inspect the contents of the dead message queue (see To Inspect the Dead Message Queue).
Check whether the JMS_SUN_DMQ_UNDELIVERED_REASON property of messages in the queue has the value ERROR.
To resolve the problem:
Examine the broker log file to find the associated error.
Contact Sun Technical Support to report the broker problem.
Possible cause: Consumers are not consuming messages before they time out.
To confirm this cause of the problem: Use the QBrowser demo application to inspect the contents of the dead message queue (see To Inspect the Dead Message Queue).
Check whether the JMS_SUN_DMQ_UNDELIVERED_REASON property of messages in the queue has the value EXPIRED.
Check to see whether there are any consumers on the destination and check the value for the Current Number of Active Consumers. For example:
imqcmd query dst -t q -n MyDest
If there are active consumers, then there might be any number of possible reasons why messages are timing out before being consumed. One is that the message timeout is too short for the speed at which the consumer executes. In that case, request that application developers increase message time-to-live values. Otherwise, investigate the following possible causes for messages to time out before being consumed:
Possible cause: There are too many producers for the number of consumers.
To confirm this cause of the problem: Use the QBrowser demo application to inspect the contents of the dead message queue (see To Inspect the Dead Message Queue).
Check whether the JMS_SUN_DMQ_UNDELIVERED_REASON property of messages in the queue has the value REMOVE_OLDEST or REMOVE_LOW_PRIORITY. If so, use the imqcmd query dst command to check the number of producers and consumers on the destination. If the number of producers exceeds the number of consumers, the production rate might be overwhelming the consumption rate.
To resolve the problem: Add more consumer clients or set the destination’s limit behavior to FLOW_CONTROL (which uses consumption rate to control production rate), using a command such as the following:
imqcmd update dst -n myDst -t q -o limitBehavior=FLOW_CONTROL
Possible cause: Producers are faster than consumers.
To confirm this cause of the problem: To determine whether slow consumers are causing producers to slow down, set the destination’s limit behavior to FLOW_CONTROL (which uses consumption rate to control production rate), using a command such as the following:
imqcmd update dst -n myDst -t q -o limitBehavior=FLOW_CONTROL
Use metrics to examine the destination’s input and output, using a command such as the following:
imqcmd metrics dst -n myDst -t q -m rts
In the metrics output, examine the following values:
Msgs/sec Out: Shows how many messages per second the broker is removing. The broker removes messages when all consumers acknowledge receiving them, so the metric reflects consumption rate.
Msgs/sec In: Shows how many messages per second the broker is receiving from producers. The metric reflects production rate.
Because flow control aligns production to consumption, note whether production slows or stops. If so, there is a discrepancy between the processing speeds of producers and consumers. You can also check the number of unacknowledged (UnAcked) messages sent, by using the imqcmd list dst command. If the number of unacknowledged messages is less than the size of the destination, the destination has additional capacity and is being held back by client flow control.
To resolve the problem: If production rate is consistently faster than consumption rate, consider using flow control regularly, to keep the system aligned. In addition, consider and attempt to resolve each of the following possible causes, which are subsequently described in more detail:
Possible cause: A consumer is too slow.
To confirm this cause of the problem: Use imqcmd metrics to determine the rate of production and consumption, as described above under “Producers are faster than consumers.”
To resolve the problem:
Set the destinations’ limit behavior to FLOW_CONTROL, using a command such as the following:
imqcmd update dst -n myDst -t q -o limitBehavior=FLOW_CONTROL
Use of flow control slows production to the rate of consumption and prevents the accumulation of messages in the destination. Producer applications hold messages until the destination can process them, with less risk of expiration.
Find out from application developers whether producers send messages at a steady rate or in periodic bursts. If an application sends bursts of messages, increase destination limits as described in the next item.
Increase destination limits based on number of messages or bytes, or both. To change the number of messages on a destination, enter a command with the following format:
imqcmd update dst -n destName -t {q|t} -o maxNumMsgs=number
To change the size of a destination, enter a command with the following format:
imqcmd update dst -n destName -t {q|t} -o maxTotalMsgBytes=number
Be aware that raising limits increases the amount of memory that the broker uses. If limits are too high, the broker could run out of memory and become unable to process messages.
Consider whether you can accept loss of messages during periods of high production load.
Possible cause: Clients are not committing transactions.
To confirm this cause of the problem: Check with application developers to find out whether the application uses transactions. If so, list the active transactions as follows:
imqcmd list txn
Here is an example of the command output:
----------------------------------------------------------------------
Transaction ID        State     User name   # Msgs/# Acks   Creation time
----------------------------------------------------------------------
6800151593984248832   STARTED   guest       3/2             7/19/04 11:03:08 AM
Note the numbers of messages and number of acknowledgments. If the number of messages is high, producers may be sending individual messages but failing to commit transactions. Until the broker receives a commit, it cannot route and deliver the messages for that transaction. If the number of acknowledgments is high, consumers may be sending acknowledgments for individual messages but failing to commit transactions. Until the broker receives a commit, it cannot remove the acknowledgments for that transaction.
To resolve the problem: Contact application developers to fix the coding error.
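For reference, a transacted producing client must call Session.commit before the broker will route the messages sent in that transaction. The following sketch illustrates the pattern using the standard javax.jms API (the class name TransactedSender is illustrative):

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageProducer;
import javax.jms.Session;

// In a transacted session, sent messages are not routed by the broker
// until the session is committed.
public class TransactedSender {
    public static void sendAll(Session transactedSession,
            MessageProducer producer, Message[] messages) throws JMSException {
        for (Message message : messages) {
            producer.send(message);
        }
        transactedSession.commit();   // makes the sent messages available
    }
}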
Possible cause: Consumers are failing to acknowledge messages.
To confirm this cause of the problem: Contact application developers to determine whether the application uses system-based acknowledgment (AUTO_ACKNOWLEDGE or DUPS_OK_ACKNOWLEDGE) or client-based acknowledgment (CLIENT_ACKNOWLEDGE). If the application uses system-based acknowledgment, skip this section. If it uses client-based acknowledgment, first decrease the number of messages stored on the client, using a command like the following:
imqcmd update dst -n myDst -t q -o consumerFlowLimit=1
Next, you will determine whether the broker is buffering messages because a consumer is slow, or whether the consumer processes messages quickly but does not acknowledge them. List the destination, using the following command:
imqcmd list dst
After you supply a user name and password, output like the following appears:
Listing all the destinations on the broker specified by:

---------------------------------
Host                 Primary Port
---------------------------------
localhost            7676

-----------------------------------------------------------------------------
Name         Type    State     Producers  Consumers          Msgs
                                 Total      Total     Count  UnAck  Avg Size
-----------------------------------------------------------------------------
MyDest       Queue   RUNNING       0          0           5    200     1177.0
mq.sys.dmq   Queue   RUNNING       0          0          35      0     1422.0

Successfully listed destinations.
The UnAck number represents messages that the broker has sent and for which it is waiting for acknowledgment. If this number is high or increasing, you know that the broker is sending messages, so it is not waiting for a slow consumer. You also know that the consumer is not acknowledging the messages.
To resolve the problem: Contact application developers to fix the coding error.
Possible cause: Durable subscribers are inactive.
To confirm this cause of the problem: Look at the topic’s durable subscribers, using the following command format:
imqcmd list dur -d topicName
To resolve the problem:
Purge the durable subscribers using the imqcmd purge dur command.
Restart the consumer applications.
A number of troubleshooting procedures involve an inspection of the dead message queue (mq.sys.dmq). The following procedure explains how to carry out such an inspection by using the QBrowser demo application.
Locate the QBrowser demo application.
See Appendix A, Platform-Specific Locations of Message Queue Data and look in the tables for “Example Applications and Locations.”
Run the QBrowser application.
Here is an example invocation on the Windows platform:
cd \MessageQueue3\demo\applications\qbrowser
java QBrowser
The QBrowser main window appears.
Select the queue name mq.sys.dmq and click Browse.
A list of the messages in the dead message queue appears.
Double-click any message to display details about that message, including its JMS headers and message properties.
You can inspect the Message Properties pane to determine the reason why the message was placed in the dead message queue.