1. Installing and Configuring Solaris Cluster HA for WebSphere MQ
Overview of Installing and Configuring HA for WebSphere MQ
Planning the HA for WebSphere MQ Installation and Configuration
Restriction for the supported configurations of HA for WebSphere MQ
Restriction for the Location of WebSphere MQ files
Restriction for multiple WebSphere MQ instances
Determine which Solaris zone WebSphere MQ will use
Requirements if multiple WebSphere MQ instances are deployed on cluster file systems
Installing and Configuring WebSphere MQ
How to Install and Configure WebSphere MQ
Verifying the Installation and Configuration of WebSphere MQ
How to Verify the Installation and Configuration of WebSphere MQ
Installing the HA for WebSphere MQ Packages
How to Install the HA for WebSphere MQ Packages
Registering and Configuring Solaris Cluster HA for WebSphere MQ
How to Register and Configure Solaris Cluster HA for WebSphere MQ
How to Register and Configure Solaris Cluster HA for WebSphere MQ in a Failover Resource Group
How to Register and Configure Solaris Cluster HA for WebSphere MQ in an HA Container
Verifying the Solaris Cluster HA for WebSphere MQ Installation and Configuration
How to Verify the Solaris Cluster HA for WebSphere MQ Installation and Configuration
How to Migrate Existing Resources to a New Version of HA for WebSphere MQ
Understanding the Solaris Cluster HA for WebSphere MQ Fault Monitor
Probing Algorithm and Functionality
Operations of the queue manager probe
Operations of the channel initiator, command server, listener and trigger monitor probes
Debug Solaris Cluster HA for WebSphere MQ
How to turn on debug for Solaris Cluster HA for WebSphere MQ
A. Deployment Example: Installing a WebSphere MQ Queue Manager in Non-Global Zones
B. Deployment Example: Installing a WebSphere MQ Queue Manager in an HA Container
This section contains the information you need to plan your HA for WebSphere MQ installation and configuration.
The configuration restrictions in the subsections that follow apply only to HA for WebSphere MQ.
Caution - Your data service configuration might not be supported if you do not observe these restrictions.
The Solaris Cluster HA for WebSphere MQ data service can only be configured as a failover service.
Single or multiple instances of WebSphere MQ can be deployed in the cluster.
WebSphere MQ can be deployed in the global zone, a whole root non-global zone, or a whole root failover non-global zone. See Restriction for multiple WebSphere MQ instances for more information.
The Solaris Cluster HA for WebSphere MQ data service supports different versions of WebSphere MQ. However, you must check that the data service has been verified against the version you intend to deploy.
The WebSphere MQ files are the queue manager data files and log files, stored in /var/mqm/qmgrs/queue-manager and /var/mqm/log/queue-manager.
These WebSphere MQ files need to be placed on shared storage, as either a cluster file system or a highly available local file system.
Refer to Step 5 and Step 6 in How to Install and Configure WebSphere MQ for more information.
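As an illustration, a highly available local file system for the WebSphere MQ files might be defined in /etc/vfstab on each node, with the mount at boot field set to no so that the cluster controls the mount. The Solaris Volume Manager device names and mount point shown here are placeholders for your own configuration:

```
/dev/md/mqdg/dsk/d100  /dev/md/mqdg/rdsk/d100  /global/mqm  ufs  2  no  logging
```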
The HA for WebSphere MQ data service can support multiple WebSphere MQ instances, potentially with different versions.
If you intend to deploy multiple WebSphere MQ instances with different versions, you will need to consider deploying WebSphere MQ in separate whole root non-global zones.
The purpose of the following discussion is to help you decide how to use whole root non-global zones to deploy multiple WebSphere MQ instances and then to determine what Nodelist entries are required.
Within these examples:
There are two nodes within the cluster, node1 and node2.
Each node has two non-global zones, named z1 and z2.
Each example listed simply shows the required Nodelist property value, via the -n parameter, when creating a failover resource group.
Benefits and drawbacks are listed within each example as + and -.
Note - Although these examples show non-global zones z1 and z2, you may also use global as the zone name or omit the zone entry within the Nodelist property value to use the global zone.
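For instance, to create a failover resource group that uses the global zone on both nodes, the zone entry can simply be omitted from the Nodelist (RG1 is a placeholder resource group name, matching the naming used in the examples):

```
# clresourcegroup create -n node1,node2 RG1
```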
Example 1-1 Run multiple WebSphere MQ instances in the same failover resource group.
Create a single failover resource group that will contain all the WebSphere MQ instances in the same non-global zones across node1 and node2.
# clresourcegroup create -n node1:z1,node2:z1 RG1
+ Only one non-global zone per node is required.
- Multiple WebSphere MQ instances do not have independent failover as they are all within the same failover resource group.
Example 1-2 Run multiple WebSphere MQ instances in separate failover resource groups.
Create multiple failover resource groups that will each contain one WebSphere MQ instance in the same non-global zones across node1 and node2.
# clresourcegroup create -n node1:z1,node2:z1 RG1
# clresourcegroup create -n node1:z1,node2:z1 RG2
+ Only one non-global zone per node is required.
+ Multiple WebSphere MQ instances have independent failover in separate failover resource groups.
Example 1-3 Run multiple WebSphere MQ instances within separate failover resource groups and zones.
Create multiple failover resource groups that will each contain one WebSphere MQ instance in separate non-global zones across node1 and node2.
# clresourcegroup create -n node1:z1,node2:z1 RG1
# clresourcegroup create -n node1:z2,node2:z2 RG2
+ Multiple WebSphere MQ instances have independent failover in separate failover resource groups and separate non-global zones.
+ All WebSphere MQ instances are isolated within their own separate non-global zones.
- Each resource group requires a unique non-global zone per node.
Example 1-4 Run multiple WebSphere MQ instances in separate failover resource groups that contain separate HA containers across node1 and node2.
Create multiple failover resource groups that will each contain an HA container. Each HA container can then contain one or more WebSphere MQ instances.
# clresourcegroup create -n node1,node2 RG1
# clresourcegroup create -n node1,node2 RG2
+ Multiple WebSphere MQ instances have independent failover within separate failover resource groups and separate HA containers.
+ The same HA container is used on each node for a given resource group.
+ Each HA container is only active on one node at a time.
- Each resource group requires a unique HA container per node.
Note - If your requirement is simply to make WebSphere MQ highly available, you should consider choosing a global or non-global zone deployment over an HA container deployment. Deploying WebSphere MQ within an HA container will incur additional failover time to boot/halt the HA container.
The configuration requirements in this section apply only to HA for WebSphere MQ.
Caution - If your data service configuration does not conform to these requirements, the data service configuration might not be supported.
Solaris zones provide a means of creating virtualized operating system environments within an instance of the Solaris 10 OS. Solaris zones allow one or more applications to run in isolation from other activity on your system. For complete information about installing and configuring a Solaris Container, see System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones.
You must determine which Solaris zone WebSphere MQ will run in. WebSphere MQ can run within a global zone, non-global zone or in an HA container configuration. Table 1-2 provides some reasons to help you decide.
Note - WebSphere MQ can be deployed within the global zone, a whole root non-global zone, or a whole root failover non-global zone, also referred to as an HA container.
Table 1-2 Choosing the appropriate Solaris Zone for WebSphere MQ
If a cluster file system is being used for the WebSphere MQ files, it is possible to manually start the same queue manager on two nodes of the cluster at the same time.
Note - Although it is possible, you must not attempt this, as doing so will cause severe damage to the WebSphere MQ files.
Although no one is expected to manually start the same queue manager on separate nodes of the cluster at the same time, HA for WebSphere MQ provides a mechanism to prevent someone from doing so by mistake.
To prevent this from happening, you must implement one of the following two solutions.
Use a highly available local file system for the WebSphere MQ files.
This is the recommended approach, as the WebSphere MQ files are then mounted on only one node of the cluster at a time, which limits starting the queue manager to that node.
Create a symbolic link for /opt/mqm/bin/strmqm and /opt/mqm/bin/endmqm to /opt/SUNWscmqs/mgr/bin/check-start.
/opt/SUNWscmqs/mgr/bin/check-start provides a mechanism to prevent manually starting or stopping the queue manager, by verifying that the start or stop is being attempted by the Solaris Cluster HA for WebSphere MQ data service.
/opt/SUNWscmqs/mgr/bin/check-start reports the following error if an attempt is made to manually start or stop the queue manager.
$ strmqm qmgr1
Request to run </usr/bin/strmqm qmgr1> within Solaris Cluster has been refused
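The guard idea behind check-start can be sketched as follows. This is a hypothetical illustration only, not the actual implementation: the SC_INVOKED marker variable is an assumption that stands in for however the real check-start determines that the data service is the caller.

```shell
# Hypothetical sketch of the check-start guard logic.
# SC_INVOKED is an assumed marker, not part of the real data service.
cmd="strmqm"
qmgr="qmgr1"
if [ -z "$SC_INVOKED" ]; then
    # A manual invocation is refused, matching the error shown above.
    echo "Request to run </usr/bin/$cmd $qmgr> within Solaris Cluster has been refused"
else
    # A data service invocation would hand off to the renamed real binary.
    echo "would run /opt/mqm/bin/${cmd}_sc3 $qmgr"
fi
```

Because strmqm and endmqm are symbolic links to the same guard script, one script can cover both the start and stop paths by inspecting the name it was invoked under.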
If a cluster file system is used for the WebSphere MQ files, you must create a symbolic link for strmqm and endmqm to /opt/SUNWscmqs/mgr/bin/check-start and inform the Solaris Cluster HA for WebSphere MQ data service of this change.
To do this, you must perform the following on each node of the cluster.
# cd /opt/mqm/bin
#
# mv strmqm strmqm_sc3
# mv endmqm endmqm_sc3
#
# ln -s /opt/SUNWscmqs/mgr/bin/check-start strmqm
# ln -s /opt/SUNWscmqs/mgr/bin/check-start endmqm
After renaming strmqm and endmqm, you must use these new program names (strmqm_sc3 and endmqm_sc3) for the START_CMD and STOP_CMD variables when you edit the /opt/SUNWscmqs/mgr/util/mgr_config file in Step 7 in How to Register and Configure Solaris Cluster HA for WebSphere MQ.
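For example, after the renaming above, the relevant entries in /opt/SUNWscmqs/mgr/util/mgr_config would name the renamed programs. Only these two variables are shown, and shell-style assignment syntax is assumed here; the file contains other variables, which are omitted:

```
START_CMD=strmqm_sc3
STOP_CMD=endmqm_sc3
```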
Note - If you implement this workaround, then you must back it out whenever you need to apply any maintenance to WebSphere MQ. Afterwards, you must again apply this workaround.
Instead, the recommended approach is to use a highly available local file system for the WebSphere MQ files.
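As a sketch of this approach, a highly available local file system can be managed by a SUNW.HAStoragePlus resource in the same failover resource group as the queue manager resource. The resource group name RG1, resource name mqm-hasp-rs, and mount point /global/mqm below are placeholders for your own configuration:

```
# clresourcetype register SUNW.HAStoragePlus
# clresource create -g RG1 -t SUNW.HAStoragePlus \
  -p FilesystemMountPoints=/global/mqm mqm-hasp-rs
```

With this configuration, the file system is mounted only on the node where RG1 is online, so the queue manager can be started only there.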