Sun Java System Message Queue 4.1 Administration Guide

Chapter 8 Broker Clusters

Message Queue supports the use of broker clusters: groups of brokers working together to provide message delivery services to clients. Clusters enable a message service to scale its operations with the volume of message traffic by distributing client connections among multiple brokers. In addition, clusters help to maintain service availability: in the event of broker failure, clients can fail over to another broker in the cluster and continue receiving messages. High availability clusters provide an even greater degree of service availability: if one of the brokers within the cluster should fail, another can take over ownership of its pending messages and see that they are delivered to their destinations without interruption of service. See the Message Queue Technical Overview for a general discussion of clusters and how they operate.

This chapter describes how to configure broker clusters, connect brokers to them, and manage them.

Types of Cluster

Two types of cluster can be created: conventional and high availability (HA). The distinction between the two depends on the value of the imq.cluster.ha property of the brokers belonging to the cluster. All of the brokers in a given cluster must have the same value for this property: if the value is false, the cluster is a conventional one; if true, it is a high-availability cluster.
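For example, every broker in a high-availability cluster must carry the following line in its configuration (see Table 8–1); brokers in a conventional cluster either omit the property or set it to false:

   imq.cluster.ha=true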

Conventional Clusters

In a conventional cluster, each of the constituent brokers maintains its own separate persistent data store (see Persistence Services). Brokers within the cluster share information about one another’s persistent destinations, message consumers, and durable subscriptions. However, if one of the brokers should fail, none of the other brokers in the cluster can take over its operations, since none of them have access to the failed broker’s persistent messages, open transactions, and other aspects of its internal state.

Changes to a cluster’s destinations, consumers, or durable subscriptions are automatically propagated to all of the other brokers in the cluster. However, a broker that is offline at the time of the change (through failure, for instance) will not immediately receive this information. To keep such state information synchronized throughout the cluster, one of its brokers can optionally be designated as the master broker to track changes in the cluster’s persistent state. The master broker maintains a configuration change record containing information about changes in the persistent entities associated with the cluster, such as durable subscriptions and administrator-created physical destinations. All brokers in the cluster consult the master broker during startup to update their information about these persistent entities; thus a broker returning to operation after having been temporarily offline can update its information about changes that may have occurred during its absence.


Note –

While it is possible to mix brokers with different versions in the same cluster, all brokers must have a version at least as great as that of the master broker. If there is no master broker, all brokers in the cluster must have the same version.


Because all brokers in a conventional cluster need the master broker in order to perform persistent operations, the imqcmd subcommands that create, update, or destroy physical destinations or durable subscriptions will return an error for any broker in the cluster when a master broker has been configured but is unavailable.

Auto-created physical destinations and temporary destinations are unaffected.

In the absence of a master broker, any client application attempting to create a durable subscriber or unsubscribe from a durable subscription will get an error. However, a client can successfully specify and interact with an existing durable subscription.

High-Availability Clusters

In a high-availability cluster, all of the brokers share a common JDBC-based persistent data store holding dynamic state information (destinations, persistent messages, durable subscriptions, open transactions, and so forth) for each broker. In the event of broker failure, this enables another broker to assume ownership of the failed broker’s persistent state and provide uninterrupted service to its clients. Because they share a common JDBC-based data store, all brokers belonging to an HA cluster must have their imq.persist.store property (see Table 14–4) set to jdbc.

Brokers within an HA cluster inform each other at regular intervals that they are still in operation by exchanging heartbeat packets (using a special internal connection service, the cluster connection service) and by updating their state information in the cluster’s shared persistent store. When no heartbeat packet is detected from a broker for a specified number of heartbeat intervals, that broker is suspected of failure. The other brokers in the cluster then monitor the suspect broker’s state information in the persistent store to confirm whether it has indeed failed. If the suspect broker fails to update its state information within a certain threshold interval, it is considered to have failed. (The durations of these heartbeat and failure-detection intervals can be adjusted by means of broker configuration properties to balance the tradeoff between speed and accuracy of failure detection: shorter intervals result in quicker reaction to broker failure, but increase the likelihood of false suspicions and erroneous failure detection.)

When a broker in an HA cluster detects that another broker has failed, it will attempt to take over the failed broker’s persistent state (pending messages, destination definitions, durable subscriptions, pending acknowledgments, and open transactions), in order to provide uninterrupted service to the failed broker’s clients. If two or more brokers attempt such a takeover, only the first will succeed; that broker acquires a lock on the failed broker’s data in the persistent store, preventing subsequent takeover attempts by other brokers from succeeding. After an initial waiting period, the takeover broker will then clean up any transient resources (such as transactions and temporary destinations) belonging to the failed broker; these resources will be unavailable if the client later reconnects.

Configuring Clusters

You define a cluster by specifying cluster configuration properties for each of its member brokers. These properties are discussed below under Cluster Configuration Properties and are described in detail in Table 14–10.

Setting the Cluster Configuration

Like all broker properties, the cluster configuration properties can be set individually for each broker in a cluster, either in its instance configuration file (config.properties) or by using the -D option on the command line when you start the broker. For example, to create a conventional cluster consisting of brokers at port 9876 on host1, port 5000 on host2, and the default port (7676) on ctrlhost, you could include the following property in the instance configuration files for all three brokers:

   imq.cluster.brokerlist=host1:9876,host2:5000,ctrlhost
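Alternatively, you could supply the same property with the -D option when starting each broker from the command line:

   imqbrokerd  -Dimq.cluster.brokerlist=host1:9876,host2:5000,ctrlhost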

Notice, however, that if you need to change the cluster configuration, this method requires you to update the instance configuration file for every broker in the cluster. For consistency and ease of maintenance, it is generally more convenient to collect all of the shared cluster configuration properties into a central cluster configuration file that all of the individual brokers reference. This prevents the settings from getting out of agreement and ensures that all brokers in a cluster share the same, consistent configuration information. In this approach, each broker’s instance configuration file must set the imq.cluster.url property to point to the location of the cluster configuration file: for example,

   imq.cluster.url=file:/home/cluster.properties

The cluster configuration file then defines the shared configuration properties for all of the brokers in the cluster, such as the list of brokers to be connected (imq.cluster.brokerlist), the transport protocol to use for the cluster connection service (imq.cluster.transport), and optionally, for conventional clusters, the address of the master broker (imq.cluster.masterbroker). The following code defines the same conventional cluster as in the previous example, with the broker running on ctrlhost serving as the master broker:

   imq.cluster.brokerlist=host1:9876,host2:5000,ctrlhost
   imq.cluster.masterbroker=ctrlhost

Cluster Configuration Properties

As shown above, the most important cluster configuration property in a conventional cluster is imq.cluster.brokerlist, a list of broker addresses defining the membership of the cluster; all brokers in the cluster must have the same value for this property. (By contrast, high-availability clusters are self-configuring: any broker configured to use the cluster’s shared store is automatically registered as part of the cluster, without further action on your part. If imq.cluster.brokerlist is specified for an HA broker, it is ignored and a warning message is logged at broker startup.)

Additional cluster configuration properties, described in Table 14–10, include the cluster connection service’s host name and port (imq.cluster.hostname and imq.cluster.port), the transport protocol (imq.cluster.transport), and the location of the cluster configuration file (imq.cluster.url).


Caution –

While the hostname and port properties can be set independently for each individual broker, all of the other properties listed above must have the same values for all brokers in the cluster. In addition, in an HA cluster, you must specify a unique broker identifier for each broker by setting the broker’s imq.brokerid property (see Table 14–1); this value must be different for each broker in the cluster.


Brokers in a high-availability cluster have additional properties relating to persistent store configuration, failure detection, and takeover, which are discussed in the following sections.

JDBC Configuration Properties for HA Clusters

The persistent data store for an HA cluster is maintained on a high-availability database server, using the Java Database Connectivity (JDBC) API (see JDBC-Based Persistence). All brokers belonging to such a cluster must therefore have their imq.persist.store property (see Table 14–4) set to jdbc. The remaining persistent store properties are discussed under JDBC-Based Persistence and summarized in Table 14–6.

The database server may be Sun’s own High Availability Database (HADB) server, or it may be an open-source or third-party product such as Apache Software Foundation’s Derby (Java DB) or Oracle Corporation’s Real Application Clusters (RAC). As described in JDBC-Based Persistence, the imq.persist.jdbc.dbVendor broker property specifies the name of the database vendor, and all of the remaining JDBC-related properties are qualified with this vendor name: for instance, when using Sun’s HADB for the HA server, the Java class name of the JDBC driver is specified by a property named imq.persist.jdbc.hadb.driver.


Note –

If the integration between Message Queue and Application Server is local (that is, there is a one-to-one relationship between Application Server instances and Message Queue message brokers), the Application Server will automatically propagate these properties to each broker in the HA cluster. However, if the integration is remote (a single Application Server instance using an externally configured broker cluster), then it is your responsibility to configure the needed properties explicitly for each broker.


After setting all of the needed JDBC configuration properties for the brokers in an HA cluster, you must also install your JDBC driver’s .jar file in the appropriate directory location for your operating-system platform (as listed in Appendix A, Platform-Specific Locations of Message Queue Data), and then execute the Database Manager command

   imqdbmgr create tbl

to create the database schema for the HA persistent data store.

Failure Detection and Takeover Properties for HA Clusters

The configuration properties listed in Table 14–10 specify the parameters for the exchange of heartbeat and status information within an HA cluster.

Smaller values for these heartbeat and monitoring intervals will result in quicker reaction to broker failure, but at the cost of reduced performance and increased likelihood of false suspicions and erroneous failure detection.
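For example, a cluster configuration file might tune these intervals as follows. The property names are those defined in Table 14–10; the values shown here are illustrative, not necessarily the defaults:

   # Seconds between heartbeat packets sent by each broker
   imq.cluster.heartbeat.interval=2
   # Missed heartbeat intervals before a broker is suspected of failure
   imq.cluster.heartbeat.threshold=3
   # Seconds between updates to a broker's state information in the shared store
   imq.cluster.monitor.interval=30
   # Missed monitor intervals before a suspect broker is considered failed
   imq.cluster.monitor.threshold=3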

Displaying the Cluster Configuration

To display information about a cluster’s configuration, use the Command utility’s list bkr subcommand:

   imqcmd list bkr

This lists the current status of all brokers included in the cluster to which a given broker belongs, as shown in Example 8–1 (for a conventional cluster) or Example 8–2 (for an HA cluster).


Example 8–1 Configuration Listing for a Conventional Cluster


Listing all the brokers in the cluster that the following broker is a member of:

-------------------------
Host         Primary Port
-------------------------
localHost    7676

Cluster Is Highly Available             False

-----------------------------
Address           State
-----------------------------
whippet:7676      OPERATING
greyhound:7676    OPERATING



Example 8–2 Configuration Listing for an HA Cluster


Listing all the brokers in the cluster that the following broker is a member of:

----------------------------------------------
Host         Primary Port    Cluster Broker ID
----------------------------------------------
localHost    7676            brokerA

Cluster ID                              myClusterID
Cluster Is Highly Available             True

--------------------------------------------------------------------------------------------------------------
                                                                           ID of broker       Time since last
Broker ID       Address         State                   Msgs in store   performing takeover   status timestamp
--------------------------------------------------------------------------------------------------------------
brokerA         localhost:7676  OPERATING               121                                   30 sec
brokerB         greyhound:7676  TAKEOVER_STARTED        52              brokerA               3 hrs
brokerC         jpgserv:7676    SHUTDOWN_STARTED        12346                                 10 sec
brokerD         icdev:7676      TAKEOVER_COMPLETE       0               brokerA               2 min
brokerE         mrperf:7676     *unknown                12                                    0 sec
brokerG         iclab1:7676     QUIESCING               4                                     2 sec
brokerH         iclab2:7676     QUIESCE_COMPLETE        8                                     5 sec


Managing Clusters

The following sections describe how to perform various administrative tasks for conventional and high-availability clusters, respectively.

Managing Conventional Clusters

The procedures in this section show how to perform common administrative tasks for a conventional cluster.

Clustering Conventional Brokers

There are two general methods of connecting conventional brokers into a cluster: from the command line (using the -cluster option) or by setting the imq.cluster.brokerlist property in the cluster configuration file. Whichever method you use, each broker that you start attempts to connect to the other brokers in the cluster every five seconds; the connection succeeds once the master broker (if one is configured) has started. If a broker in the cluster starts before the master broker, it remains in a suspended state, rejecting client connections, until the master broker starts; it then automatically becomes fully functional. It is therefore a good idea to start the master broker first, and then the others after the master broker has completed its startup.


Note –

Whichever clustering method you use, you must make sure that no broker in the cluster is given an address that resolves to the network loopback IP address (127.0.0.1). Any broker configured with this address will be unable to connect to other brokers in the cluster.

In particular, some Linux installers automatically set the localhost entry to the network loopback address. On such systems, you must modify the system IP address so that all brokers in the cluster can be addressed properly: For each Linux system participating in the cluster, check the /etc/hosts file as part of cluster setup. If the system uses a static IP address, edit the /etc/hosts file to specify the correct address for localhost. If the address is registered with Domain Name Service (DNS), edit the file /etc/nsswitch.conf to change the order of the entries so that DNS lookup is performed before consulting the local hosts file. The line in /etc/nsswitch.conf should read as follows:

   hosts: dns files


Note –

If you are clustering a Message Queue 4.1 broker together with those from earlier versions of Message Queue, you must set the value of the 4.1 broker’s imq.autocreate.queue.maxNumActiveConsumers property to 1. Otherwise the brokers will not be able to establish a cluster connection.


To Cluster Conventional Brokers from the Command Line

  1. If you are using a master broker, start it with the imqbrokerd command, using the -cluster option to specify the complete list of brokers to be included in the cluster.

    For example, the following command starts the broker as part of a cluster consisting of the brokers running at the default port (7676) on host1, at port 5000 on host2, and at port 9876 on the default host (localhost):

       imqbrokerd  -cluster host1,host2:5000,:9876
    
  2. Once the master broker (if any) is running, start each of the other brokers in the cluster with the imqbrokerd command, using the same list of brokers with the -cluster option that you used for the master broker.

    The value specified for the -cluster option must be the same for all brokers in the cluster.

To Cluster Conventional Brokers Using a Cluster Configuration File

An alternative method, better suited for production systems, is to use a cluster configuration file to specify the composition of the cluster:

  1. Create a cluster configuration file that uses the imq.cluster.brokerlist property to specify the list of brokers to be connected.

    If you are using a master broker, identify it with the imq.cluster.masterbroker property in the configuration file.

  2. For each broker in the cluster, set the imq.cluster.url property in the broker’s instance configuration file to point to the cluster configuration file.

  3. Use the imqbrokerd command to start each broker.

    If there is a master broker, start it first, then the others after it has completed its startup.

To Establish Secure Connections Between Brokers

If you want secure, encrypted message delivery between brokers in a cluster, configure the cluster connection service to use an SSL-based transport protocol:

  1. For each broker in the cluster, set up SSL-based connection services, as described in Message Encryption.

  2. Set each broker’s imq.cluster.transport property to ssl, either in the cluster configuration file or individually for each broker.

Adding Brokers to a Conventional Cluster

The procedure for adding a new broker to a conventional cluster depends on whether the cluster uses a cluster configuration file.

To Add a New Broker to a Conventional Cluster Using a Cluster Configuration File

  1. Add the new broker to the imq.cluster.brokerlist property in the cluster configuration file.

  2. Issue the following command to any broker in the cluster:

       imqcmd reload cls
    

    This forces each broker to reload the cluster configuration, ensuring that all persistent information for brokers in the cluster is up to date. Note that it is not necessary to issue this command to every broker in the cluster; executing it for any one broker will cause all of them to reload the cluster configuration.

  3. (Optional) Set the value of the imq.cluster.url property in the new broker’s instance configuration file (config.properties) to point to the cluster configuration file.

  4. Start the new broker.

    If you did not perform step 3, use the -D option on the imqbrokerd command line to set the value of imq.cluster.url to the location of the cluster configuration file.
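For example, using the cluster configuration file location shown earlier in this chapter:

   imqbrokerd  -Dimq.cluster.url=file:/home/cluster.properties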

To Add a New Broker to a Conventional Cluster Without a Cluster Configuration File

  1. (Optional) Set the values of the following properties in the new broker’s instance configuration file (config.properties):

      imq.cluster.brokerlist


      imq.cluster.masterbroker (if necessary)


      imq.cluster.transport (if you are using a secure cluster connection service)


  2. Start the new broker.

    If you did not perform step 1, use the -D option on the imqbrokerd command line to set the property values listed there.
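For example, to start the new broker with the same conventional cluster membership used earlier in this chapter, you could set the properties on the command line (adding -Dimq.cluster.transport=ssl if the cluster uses a secure connection service):

   imqbrokerd  -Dimq.cluster.brokerlist=host1:9876,host2:5000,ctrlhost \
               -Dimq.cluster.masterbroker=ctrlhost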

Removing Brokers From a Conventional Cluster

The method you use to remove a broker from a conventional cluster depends on whether you originally created the cluster from the command line or by means of a central cluster configuration file.

To Remove a Broker From a Conventional Cluster Using the Command Line

If you used the imqbrokerd command from the command line to connect the brokers into a cluster, you must stop each of the brokers and then restart them, specifying the new set of cluster members on the command line:

  1. Stop each broker in the cluster, using the imqcmd command.

  2. Restart the brokers that will remain in the cluster, using the imqbrokerd command’s -cluster option to specify only those remaining brokers.

    For example, suppose you originally created a cluster consisting of brokers A, B, and C by starting each of the three with the command

       imqbrokerd  -cluster A,B,C
    

    To remove broker A from the cluster, restart brokers B and C with the command

       imqbrokerd  -cluster B,C
    

To Remove a Broker From a Conventional Cluster Using a Cluster Configuration File

If you originally created a cluster by specifying its member brokers with the imq.cluster.brokerlist property in a central cluster configuration file, it isn’t necessary to stop the brokers in order to remove one of them. Instead, you can simply edit the configuration file to exclude the broker you want to remove, force the remaining cluster members to reload the cluster configuration, and reconfigure the excluded broker so that it no longer points to the same cluster configuration file:

  1. Edit the cluster configuration file to remove the excluded broker from the list specified for the imq.cluster.brokerlist property.

  2. Issue the following command to each broker remaining in the cluster:

       imqcmd reload cls
    

    This forces the brokers to reload the cluster configuration.

  3. Stop the broker you’re removing from the cluster.

  4. Edit that broker’s instance configuration file (config.properties), removing or specifying a different value for its imq.cluster.url property.

Managing the Configuration Change Record

As noted earlier, a conventional cluster can optionally have one master broker, which maintains a configuration change record to keep track of any changes in the cluster’s persistent state. The master broker is identified by the imq.cluster.masterbroker configuration property, either in the cluster configuration file or in the instance configuration files of the individual brokers.

Because of the critical information that the configuration change record contains, it is important to back it up regularly so that it can be restored in case of failure. Although restoring from a backup will lose any changes in the cluster’s persistent state that have occurred since the backup was made, frequent backups can minimize this potential loss of information. The backup and restore operations also have the positive effect of compressing and optimizing the change history contained in the configuration change record, which can grow significantly over time.

To Back Up the Configuration Change Record

  1. Use the -backup option of the imqbrokerd command, specifying the name of the backup file.

    For example:

       imqbrokerd  -backup mybackuplog
    

To Restore the Configuration Change Record

  1. Shut down all brokers in the cluster.

  2. Restore the master broker’s configuration change record from the backup file.

    The command is

       imqbrokerd  -restore mybackuplog
    
  3. If you assign a new name or port number to the master broker, update the imq.cluster.brokerlist and imq.cluster.masterbroker properties accordingly in the cluster configuration file.

  4. Restart all brokers in the cluster.

Managing High-Availability Clusters

This section presents step-by-step procedures for performing a variety of administrative tasks for a high-availability cluster.

Clustering High-Availability Brokers

Because high-availability clusters are self-configuring, there is no need to explicitly specify the list of brokers to be included in the cluster. Instead, all that is needed is to set each broker’s configuration properties appropriately and then start the broker; as long as its properties are set properly, it will automatically be incorporated into the cluster. Table 8–1 shows the required settings. In addition, there may be vendor-specific settings required for a particular vendor’s database; Table 8–2 and Table 8–3 show these vendor-specific settings for Sun’s own HADB and for MySQL from MySQL AB, respectively.

Table 8–1 Required Configuration Properties for HA Clusters

Property                    Required Value   Description
-------------------------   --------------   --------------------------------------------
imq.cluster.ha              true             Broker is part of an HA cluster
imq.cluster.clusterid                        Cluster identifier; must be the same for
                                             all brokers in the cluster
imq.brokerid                                 Broker identifier; must be different for
                                             each broker in the cluster
imq.persist.store           jdbc             Model for persistent data storage; only
                                             JDBC-based persistence is supported for
                                             HA data stores
imq.persist.jdbc.dbVendor                    Database vendor for HA persistent store:
                                             hadb (HADB, Sun Microsystems, Inc.);
                                             derby (Java DB, Apache Software Foundation);
                                             oracle (Oracle Real Application Clusters,
                                             Oracle Corporation); mysql (MySQL AB)


Table 8–2 Vendor-Specific Configuration Properties for HADB Database

Property                                     Description
------------------------------------------   -----------------------------------------
imq.persist.jdbc.hadb.user                   User name for opening database connection
imq.persist.jdbc.hadb.password               Password for opening database connection
imq.persist.jdbc.hadb.property.serverList    JDBC URL of database

To obtain the serverList value, use the command

   hadbm get JdbcURL

to get the URL, remove the

   jdbc:sun:hadb

prefix, and use the remaining

   host:port,host:port...

list as the property value.

Table 8–3 Vendor-Specific Configuration Properties for MySQL Database

Property                               Description
------------------------------------   -----------------------------------------
imq.persist.jdbc.mysql.user            User name for opening database connection
imq.persist.jdbc.mysql.password        Password for opening database connection
imq.persist.jdbc.mysql.property.url    JDBC URL for opening database

The property values can be set separately in each broker’s instance configuration file, or they can be specified in a cluster configuration file that all the brokers share. The procedures are as follows:

To Cluster HA Brokers Using Instance Configuration Files

  1. For each broker in the cluster:

    1. Start the broker with the imqbrokerd command.

      The first time a broker instance is run, an instance configuration file (config.properties) is automatically created.

    2. Shut down the broker.

      Use the imqcmd shutdown bkr command.

    3. Edit the instance configuration file to specify the broker’s HA-related configuration properties.

      Table 8–1 shows the required property values.

    4. Specify any additional, vendor-specific properties that may be needed.

      Table 8–2 and Table 8–3 show the required properties for HADB and MySQL databases, respectively.

  2. Place a copy of, or a symbolic link to, your JDBC driver’s .jar file in the appropriate location, depending on your platform:

      Solaris: /usr/share/lib/imq/ext/


      Linux: /opt/sun/mq/share/lib/


      Windows: IMQ_VARHOME\lib\ext


  3. Create the database schema needed for Message Queue persistence.

    Use the imqdbmgr create tbl command; see Database Manager Utility.

  4. Restart each broker with the imqbrokerd command.

    The brokers will automatically register themselves into the cluster on startup.
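For example, after completing step 1 the edited instance configuration file (config.properties) for one broker in an HA cluster backed by a MySQL database might contain entries like these. The cluster and broker identifiers are taken from Example 8–2; the database user, password, and URL are purely illustrative:

   imq.cluster.ha=true
   imq.cluster.clusterid=myClusterID
   imq.brokerid=brokerA
   imq.persist.store=jdbc
   imq.persist.jdbc.dbVendor=mysql
   imq.persist.jdbc.mysql.user=mquser
   imq.persist.jdbc.mysql.password=mqpassword
   imq.persist.jdbc.mysql.property.url=jdbc:mysql://dbhost:3306/mqdb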

To Cluster HA Brokers Using a Cluster Configuration File

An alternative method, better suited for production systems, is to use a cluster configuration file to specify the composition of the cluster:

  1. Create a cluster configuration file specifying the cluster’s HA-related configuration properties.

    Table 8–1 shows the required property values. However, do not include the imq.brokerid property in the cluster configuration file; this must be specified separately for each individual broker in the cluster.

  2. Specify any additional, vendor-specific properties that may be needed.

    Table 8–2 and Table 8–3 show the required properties for HADB and MySQL databases, respectively.

  3. For each broker in the cluster:

    1. Start the broker with the imqbrokerd command.

      The first time a broker instance is run, an instance configuration file (config.properties) is automatically created.

    2. Shut down the broker.

      Use the imqcmd shutdown bkr command.

    3. Edit the instance configuration file to specify the location of the cluster configuration file.

      In the broker’s instance configuration file, set the imq.cluster.url property to point to the location of the cluster configuration file you created in step 1.

    4. Specify the broker identifier.

      Set the imq.brokerid property in the instance configuration file to the broker’s unique broker identifier. This value must be different for each broker.

  4. Place a copy of, or a symbolic link to, your JDBC driver’s .jar file in the appropriate location, depending on your platform:

      Solaris: /usr/share/lib/imq/ext/


      Linux: /opt/sun/mq/share/lib/


      Windows: IMQ_VARHOME\lib\ext


  5. Create the database schema needed for Message Queue persistence.

    Use the imqdbmgr create tbl command; see Database Manager Utility.

  6. Restart each broker with the imqbrokerd command.

    The brokers will automatically register themselves into the cluster on startup.
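For example, the shared cluster configuration file might hold the common HA settings, while each broker’s instance configuration file adds only the properties that must be unique to that broker. The identifiers are taken from Example 8–2; the database values are purely illustrative:

   # Shared cluster configuration file (for example, /home/cluster.properties)
   imq.cluster.ha=true
   imq.cluster.clusterid=myClusterID
   imq.persist.store=jdbc
   imq.persist.jdbc.dbVendor=mysql
   imq.persist.jdbc.mysql.user=mquser
   imq.persist.jdbc.mysql.password=mqpassword
   imq.persist.jdbc.mysql.property.url=jdbc:mysql://dbhost:3306/mqdb

   # Each broker's instance configuration file (config.properties)
   imq.cluster.url=file:/home/cluster.properties
   imq.brokerid=brokerA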

Adding and Removing Brokers in a High-Availability Cluster

Because HA clusters are self-configuring, the procedures for adding and removing brokers are simpler than for a conventional cluster:

To Add a New Broker to an HA Cluster

  1. Set the new broker’s HA-related properties, as described in the preceding section.

    You can do this either by specifying the individual properties in the broker’s instance configuration file (config.properties) or, if there is a cluster configuration file, by setting the broker’s imq.cluster.url property to point to it.

  2. Start the new broker with the imqbrokerd command.

    The broker will automatically register itself into the cluster on startup.

To Remove a Broker from an HA Cluster

  1. Make sure the broker is not running.

    If necessary, use the command

       imqcmd shutdown bkr
    

    to shut down the broker.

  2. Remove the broker from the cluster with the command

       imqdbmgr remove bkr
    

Preventing or Forcing Takeover of a Broker

Although the takeover of a failed broker’s persistent data by another broker in an HA cluster is normally automatic, there may be times when you want to prevent such a takeover from occurring. To suppress automatic takeover when shutting down a broker, use the -nofailover option to the imqcmd shutdown bkr subcommand:

   imqcmd shutdown bkr  -nofailover  -b hostName:portNumber

where hostName and portNumber are the host name and port number of the broker to be shut down.

Conversely, you may sometimes need to force a broker takeover to occur manually. (This might be necessary, for instance, if an automatic takeover broker were to fail before completing the takeover process.) In such cases, you can initiate a takeover manually from the command line: first shut down the broker to be taken over with the -nofailover option, as shown above, then issue the command

   imqcmd takeover bkr  -n brokerID

where brokerID is the broker identifier of the broker to be taken over. If the specified broker appears to be running, the Command utility will display a confirmation message:

   The broker associated with brokerID last accessed the database # seconds ago. 
   Do you want to take over for this broker?

You can suppress this message, and force the takeover to occur unconditionally, by using the -f option to the imqcmd takeover bkr command:

   imqcmd takeover bkr  -f  -n brokerID

Note –

The imqcmd takeover bkr subcommand is intended only for use in failed-takeover situations. You should use it only as a last resort, and not as a general way of forcibly taking over a running broker.


You may also find it useful to quiesce a broker before shutting it down, causing it to refuse any new client connections while continuing to service existing ones. This allows the broker’s operations to wind down gradually without triggering a takeover by another broker, for instance in preparation for an administrative shutdown for upgrade or maintenance; see Quiescing a Broker for more information.
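For example, using the same hostName:portNumber form as above, you could quiesce a broker and later shut it down without triggering a takeover:

   imqcmd quiesce bkr  -b hostName:portNumber
   imqcmd shutdown bkr  -nofailover  -b hostName:portNumber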

Managing the HA Data Store

When converting to high-availability operation, you can use the Message Queue Database Manager utility (imqdbmgr) subcommand

   imqdbmgr upgrade hastore

to convert an existing standalone HADB persistent data store to a shared HADB store.

Because this command only supports conversion of HADB stores, it cannot be used to convert file-based stores or other JDBC-based stores to a shared HADB store. If you were previously running a 3.x version of Message Queue, you must create an HADB store and then manually migrate your data to that store in order to use the high availability feature.

For durability and reliability, it is a good idea to back up a high-availability cluster’s shared persistent data store periodically to backup files. This creates a snapshot of the data store that you can then use to restore the data in case of catastrophic failure. The command for backing up the data store is

   imqdbmgr backup  -dir backupDir

where backupDir is the path to the directory in which to place the backup files. To restore the data store from these files, use the command

   imqdbmgr restore  -restore backupDir