Planning the HA for WebSphere MQ Installation and Configuration

This section contains the information you need to plan your HA for WebSphere MQ installation and configuration.

Configuration Restrictions

The configuration restrictions in the subsections that follow apply only to HA for WebSphere MQ.


Caution - Your data service configuration might not be supported if you do not observe these restrictions.


Restriction for the supported configurations of HA for WebSphere MQ

The Solaris Cluster HA for WebSphere MQ data service can only be configured as a failover service.

Single or multiple instances of WebSphere MQ can be deployed in the cluster.

WebSphere MQ can be deployed in the global zone, in a whole root non-global zone, or in a whole root failover non-global zone. See Restriction for multiple WebSphere MQ instances for more information.

The Solaris Cluster HA for WebSphere MQ data service supports different versions of WebSphere MQ; however, you must confirm that the data service has been verified against the version that you intend to deploy.
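
For example, you can display the installed WebSphere MQ version on each cluster node before you configure the data service. This is a minimal illustration; it assumes that the dspmqver command (available in WebSphere MQ 6.0 and later) is installed in /opt/mqm/bin.

# /opt/mqm/bin/dspmqver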

Restriction for the Location of WebSphere MQ files

The WebSphere MQ files are the queue manager data and log files, which are stored in /var/mqm/qmgrs/queue-manager and /var/mqm/log/queue-manager.

These WebSphere MQ files need to be placed on shared storage, either on a cluster file system or on a highly available local file system.

Refer to Step 5 and Step 6 in How to Install and Configure WebSphere MQ for more information.
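
As an illustration, the following commands show one way to place the WebSphere MQ files on a highly available local file system by using an SUNW.HAStoragePlus resource. The resource group name RG1, the resource name mqm-hasp-rs, and the mount point /var/mqm are assumptions for this sketch; substitute the names and mount points that apply to your configuration.

# clresourcetype register SUNW.HAStoragePlus
# clresource create -g RG1 -t SUNW.HAStoragePlus -p FilesystemMountPoints=/var/mqm mqm-hasp-rs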

Restriction for multiple WebSphere MQ instances

The HA for WebSphere MQ data service can support multiple WebSphere MQ instances, potentially with different versions.

If you intend to deploy multiple WebSphere MQ instances with different versions, you will need to consider deploying WebSphere MQ in separate whole root non-global zones.

The following examples are intended to help you decide how to use whole root non-global zones to deploy multiple WebSphere MQ instances and to determine which Nodelist entries are required.


Note - Although these examples show non-global zones z1 and z2, you may also use global as the zone name or omit the zone entry within the Nodelist property value to use the global zone.


Example 1-1 Run multiple WebSphere MQ instances in the same failover resource group.

Create a single failover resource group that will contain all the WebSphere MQ instances in the same non-global zones across node1 and node2.

# clresourcegroup create -n node1:z1,node2:z1 RG1

Example 1-2 Run multiple WebSphere MQ instances in separate failover resource groups.

Create multiple failover resource groups that will each contain one WebSphere MQ instance in the same non-global zones across node1 and node2.

# clresourcegroup create -n node1:z1,node2:z1 RG1
# clresourcegroup create -n node1:z1,node2:z1 RG2

Example 1-3 Run multiple WebSphere MQ instances within separate failover resource groups and zones.

Create multiple failover resource groups that will each contain one WebSphere MQ instance in separate non-global zones across node1 and node2.

# clresourcegroup create -n node1:z1,node2:z1 RG1
# clresourcegroup create -n node1:z2,node2:z2 RG2

Example 1-4 Run multiple WebSphere MQ instances in separate failover resource groups that contain separate HA containers across node1 and node2.

Create multiple failover resource groups that will each contain an HA container. Each HA container can then contain one or more WebSphere MQ instances.

# clresourcegroup create -n node1,node2 RG1
# clresourcegroup create -n node1,node2 RG2

Note - If your requirement is simply to make WebSphere MQ highly available you should consider choosing a global or non-global zone deployment over an HA container deployment. Deploying WebSphere MQ within an HA container will incur additional failover time to boot/halt the HA container.
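
Whichever layout you choose, you can confirm the Nodelist entries of the resource groups that you created. This is an illustrative check that assumes the resource group names used in the examples above.

# clresourcegroup show RG1 RG2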


Configuration Requirements

The configuration requirements in this section apply only to HA for WebSphere MQ.


Caution - If your data service configuration does not conform to these requirements, the data service configuration might not be supported.


Determine which Solaris zone WebSphere MQ will use

Solaris zones provide a means of creating virtualized operating system environments within an instance of the Solaris 10 OS. Solaris zones allow one or more applications to run in isolation from other activity on your system. For complete information about installing and configuring a Solaris Container, see System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones.

You must determine which Solaris zone WebSphere MQ will run in. WebSphere MQ can run in the global zone, in a non-global zone, or in an HA container configuration. Table 1-2 provides some reasons to help you decide.


Note - WebSphere MQ can be deployed within the global zone, a whole root non-global zone, or a whole root failover non-global zone, also referred to as an HA container.
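
For reference, the following commands sketch the creation of a whole root non-global zone. The zone name z1 and the zone path /zones/z1 are assumptions for this example, and additional configuration (networking, resource controls) is omitted; see the Solaris Containers documentation cited above for complete instructions. If the zone will be used as an HA container, autoboot is typically set to false because the cluster boots and halts the zone.

# zonecfg -z z1
zonecfg:z1> create -b
zonecfg:z1> set zonepath=/zones/z1
zonecfg:z1> set autoboot=true
zonecfg:z1> commit
zonecfg:z1> exit
# zoneadm -z z1 install
# zoneadm -z z1 boot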


Table 1-2 Choosing the appropriate Solaris Zone for WebSphere MQ

Zone type: Global Zone
Reasons for choosing this zone type:
  Only one instance of WebSphere MQ will be installed.
  Non-global zones are not required.

Zone type: Non-global Zone
Reasons for choosing this zone type:
  Several WebSphere MQ instances need to be consolidated and isolated from each other.
  Different versions of WebSphere MQ will be installed.
  Failover testing of WebSphere MQ between non-global zones on the same node is required.

Zone type: HA container
Reasons for choosing this zone type:
  You require WebSphere MQ to run in the same zone regardless of which node the HA container is running on.

Note - If your requirement is simply to make WebSphere MQ highly available you should consider choosing a global or non-global zone deployment over an HA container deployment. Deploying WebSphere MQ within an HA container will incur additional failover time to boot/halt the HA container.


Requirements if multiple WebSphere MQ instances are deployed on cluster file systems

If a cluster file system is being used for the WebSphere MQ files, it is possible to manually start the same queue manager on more than one node of the cluster at the same time.


Note - Although it is possible, you should not attempt this as doing so will cause severe damage to the WebSphere MQ files.


Although it is expected that no one will deliberately start the same queue manager on separate nodes of the cluster at the same time, HA for WebSphere MQ provides a mechanism to prevent someone from doing so by mistake.

To prevent this from happening, you must implement one of the following two solutions.

  1. Use a highly available local file system for the WebSphere MQ files.

    This is the recommended approach because the WebSphere MQ files are mounted on only one node of the cluster at a time, which limits starting the queue manager to that node.
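
    As an illustration, an /etc/vfstab entry for such a failover file system might resemble the following, with the mount-at-boot field set to no so that the file system is mounted by an SUNW.HAStoragePlus resource rather than at boot time. The Solaris Volume Manager device names are assumptions for this sketch.

    /dev/md/mqdg/dsk/d100  /dev/md/mqdg/rdsk/d100  /var/mqm  ufs  2  no  logging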

  2. Create a symbolic link for /opt/mqm/bin/strmqm and /opt/mqm/bin/endmqm to /opt/SUNWscmqs/mgr/bin/check-start.

    /opt/SUNWscmqs/mgr/bin/check-start provides a mechanism to prevent manually starting or stopping the queue manager, by verifying that the start or stop is being attempted by the Solaris Cluster HA for WebSphere MQ data service.

    /opt/SUNWscmqs/mgr/bin/check-start reports the following error if an attempt is made to manually start or stop the queue manager:

    $ strmqm qmgr1
    $ Request to run </usr/bin/strmqm qmgr1> within Solaris Cluster has been refused

    If a cluster file system is used for the WebSphere MQ files, you must create a symbolic link for strmqm and endmqm to /opt/SUNWscmqs/mgr/bin/check-start and inform the Solaris Cluster HA for WebSphere MQ data service of this change.

    To do this, you must perform the following on each node of the cluster.

    # cd /opt/mqm/bin
    #
    # mv strmqm strmqm_sc3
    # mv endmqm endmqm_sc3
    #
    # ln -s /opt/SUNWscmqs/mgr/bin/check-start strmqm
    # ln -s /opt/SUNWscmqs/mgr/bin/check-start endmqm
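
    You can then verify that the links are in place; each command should now point to /opt/SUNWscmqs/mgr/bin/check-start.

    # ls -l /opt/mqm/bin/strmqm /opt/mqm/bin/endmqm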

    After renaming strmqm and endmqm you must use these new program names (strmqm_sc3 and endmqm_sc3) for the START_CMD and STOP_CMD variables when you edit the /opt/SUNWscmqs/mgr/util/mgr_config file in Step 7 in How to Register and Configure Solaris Cluster HA for WebSphere MQ.
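
    As an illustration only, the relevant entries in /opt/SUNWscmqs/mgr/util/mgr_config would then resemble the following. The variable names and values come from the registration procedure; the rest of the file, and its exact syntax, are not shown here.

    START_CMD=strmqm_sc3
    STOP_CMD=endmqm_sc3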


    Note - If you implement this workaround, then you must back it out whenever you need to apply any maintenance to WebSphere MQ. Afterwards, you must again apply this workaround.

    Instead, the recommended approach is to use a highly available local file system for the WebSphere MQ files.