1. Installing and Configuring Solaris Cluster HA for WebSphere MQ
Overview of Installing and Configuring HA for WebSphere MQ
Planning the HA for WebSphere MQ Installation and Configuration
Restriction for the supported configurations of HA for WebSphere MQ
Restriction for the Location of WebSphere MQ files
Restriction for multiple WebSphere MQ instances
Determine which Solaris zone WebSphere MQ will use
Requirements if multiple WebSphere MQ instances are deployed on cluster file systems
Verifying the Installation and Configuration of WebSphere MQ
How to Verify the Installation and Configuration of WebSphere MQ
Installing the HA for WebSphere MQ Packages
How to Install the HA for WebSphere MQ Packages
Registering and Configuring Solaris Cluster HA for WebSphere MQ
How to Register and Configure Solaris Cluster HA for WebSphere MQ
How to Register and Configure Solaris Cluster HA for WebSphere MQ in a Failover Resource Group
How to Register and Configure Solaris Cluster HA for WebSphere MQ in an HA Container
Verifying the Solaris Cluster HA for WebSphere MQ Installation and Configuration
How to Verify the Solaris Cluster HA for WebSphere MQ Installation and Configuration
How to Migrate Existing Resources to a New Version of HA for WebSphere MQ
Understanding the Solaris Cluster HA for WebSphere MQ Fault Monitor
Probing Algorithm and Functionality
Operations of the queue manager probe
Operations of the channel initiator, command server, listener and trigger monitor probes
Debug Solaris Cluster HA for WebSphere MQ
How to turn on debug for Solaris Cluster HA for WebSphere MQ
A. Deployment Example: Installing a WebSphere MQ Queue Manager in Non-Global Zones
B. Deployment Example: Installing a WebSphere MQ Queue Manager in an HA Container
This section contains the procedures you need to install and configure WebSphere MQ.
Refer to Restriction for multiple WebSphere MQ instances for more information.
Refer to Determine which Solaris zone WebSphere MQ will use for more information.
Refer to System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones for complete information about installing and configuring a zone.
Refer to Oracle Solaris Cluster Data Service for Solaris Containers Guide for complete information about creating an HA container.
Repeat this step on all nodes of the cluster for a non-global zone and on one node of the cluster if an HA container is being used.
Boot the zone if it is not running.
# zoneadm list -v
# zoneadm -z zonename boot
WebSphere MQ can be deployed onto a cluster file system or highly available file system on the cluster. The following discussion will help you determine the correct approach to take.
This section considers a single instance or multiple instances of WebSphere MQ within a global zone, a non-global zone, or an HA container.
In each scenario, file system options for /var/mqm and the WebSphere MQ files will be listed together with a recommendation.
Can be deployed on a cluster file system, highly available local file system or on local storage on each cluster node.
It is recommended to deploy /var/mqm on local storage on each cluster node.
Can be deployed on a cluster file system or highly available local file system.
It is recommended to deploy /var/mqm/qmgrs/queue-manager and /var/mqm/log/queue-manager on a highly available local file system.
Can be deployed on a highly available local file system or on non-global zone local storage on each cluster node.
It is recommended to deploy /var/mqm on non-global zone local storage on each cluster node.
Must be deployed on a highly available local file system.
If you are considering an HA container, be aware that booting and halting the HA container adds to the failover time.
Can be deployed on a highly available local file system or in the HA container's zonepath.
It is recommended to deploy /var/mqm on the HA container's zonepath.
Must be deployed on a highly available local file system.
Can be deployed on a cluster file system or on local storage on each cluster node.
It is recommended to deploy /var/mqm on local storage on each cluster node.
Can be deployed on a cluster file system or highly available local file system.
It is recommended to deploy /var/mqm/qmgrs/queue-manager and /var/mqm/log/queue-manager on a highly available local file system.
Must be deployed on non-global zone local storage on each cluster node.
Must be deployed on a highly available local file system.
If you are considering an HA container, be aware that booting and halting the HA container adds to the failover time.
Can be deployed on a highly available local file system or on the HA container's zonepath.
It is recommended to deploy /var/mqm on the HA container's zonepath.
Must be deployed on a highly available local file system.
Note - Refer to Appendix A, Deployment Example: Installing a WebSphere MQ Queue Manager in Non-Global Zones and Appendix B, Deployment Example: Installing a WebSphere MQ Queue Manager in an HA Container for examples of how to set up the WebSphere MQ files.
Within this step you will create file systems for the WebSphere MQ files and /var/mqm. Once you have determined how WebSphere MQ should be deployed in the cluster, choose one of the substeps below.
Create the WebSphere MQ files and /var/mqm on cluster file systems by using Step a.
Create the WebSphere MQ files on SVM highly available local file systems and /var/mqm on a cluster file system by using Step b.
Create the WebSphere MQ files on ZFS highly available local file systems and /var/mqm on local storage or within an HA container's zonepath by using Step c.
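Whichever substep you choose, it can be useful later to confirm how a given path is actually backed. The following sketch is illustrative only and is not part of this guide; the helper name backing and the scratch paths are hypothetical, and in practice you would point it at /var/mqm or a queue manager directory:

```shell
# Hypothetical helper: report whether a path is a symbolic link (and where
# it points) or an ordinary path on some file system.
backing() {
  if [ -L "$1" ]; then
    echo "symlink -> $(readlink "$1")"
  else
    df -k "$1" | awk 'NR==2 {print $1}'
  fi
}

# Demonstration against a scratch directory rather than real MQ paths.
demo=$(mktemp -d)
ln -s /tmp "$demo/linked"
backing "$demo/linked"    # prints: symlink -> /tmp
```

A symbolic link indicates the per-queue-manager layout used with highly available local file systems; a mounted file system name indicates a cluster file system or local storage deployment.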
Within this deployment:
The WebSphere MQ files are deployed on cluster file systems.
The WebSphere MQ instances are qmgr1 and qmgr2.
/var/mqm uses a cluster file system with a symbolic link for /var/mqm/qmgrs/@SYSTEM to a local directory (/var/mqm_local/qmgrs/@SYSTEM) on each node in the cluster.
# ls -l /var/mqm
lrwxrwxrwx   1 root     other         11 Jan  8 14:17 /var/mqm -> /global/mqm
#
# ls -l /global/mqm/qmgrs
total 6
lrwxrwxrwx   1 root     other        512 Dec 16 09:57 @SYSTEM -> /var/mqm_local/qmgrs/@SYSTEM
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr1
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr2
#
# ls -l /global/mqm/log
total 4
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr1
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr2
#
# more /etc/vfstab (Subset of the output)
/dev/md/dg_d4/dsk/d40 /dev/md/dg_d4/rdsk/d40 /global/mqm             ufs 3 yes logging,global
/dev/md/dg_d4/dsk/d43 /dev/md/dg_d4/rdsk/d43 /global/mqm/qmgrs/qmgr1 ufs 4 yes logging,global
/dev/md/dg_d4/dsk/d46 /dev/md/dg_d4/rdsk/d46 /global/mqm/log/qmgr1   ufs 4 yes logging,global
/dev/md/dg_d5/dsk/d53 /dev/md/dg_d5/rdsk/d53 /global/mqm/qmgrs/qmgr2 ufs 4 yes logging,global
/dev/md/dg_d5/dsk/d56 /dev/md/dg_d5/rdsk/d56 /global/mqm/log/qmgr2   ufs 4 yes logging,global
Within this deployment:
The WebSphere MQ files are deployed on SVM highly available local file systems.
The WebSphere MQ instances are qmgr1 and qmgr2.
/var/mqm uses a cluster file system with a symbolic link for /var/mqm/qmgrs/@SYSTEM to a local directory (/var/mqm_local/qmgrs/@SYSTEM) on each node in the cluster.
# ls -l /var/mqm
lrwxrwxrwx   1 root     other         11 Sep 17 16:53 /var/mqm -> /global/mqm
#
# ls -l /global/mqm/qmgrs
total 6
lrwxrwxrwx   1 root     other        512 Sep 17 09:57 @SYSTEM -> /var/mqm_local/qmgrs/@SYSTEM
lrwxrwxrwx   1 root     other         22 Sep 17 17:19 qmgr1 -> /local/mqm/qmgrs/qmgr1
lrwxrwxrwx   1 root     other         22 Sep 17 17:19 qmgr2 -> /local/mqm/qmgrs/qmgr2
#
# ls -l /global/mqm/log
total 4
lrwxrwxrwx   1 root     other         20 Sep 17 17:18 qmgr1 -> /local/mqm/log/qmgr1
lrwxrwxrwx   1 root     other         20 Sep 17 17:19 qmgr2 -> /local/mqm/log/qmgr2
#
# more /etc/vfstab (Subset of the output)
/dev/md/dg_d4/dsk/d40 /dev/md/dg_d4/rdsk/d40 /global/mqm            ufs 3 yes logging,global
/dev/md/dg_d4/dsk/d43 /dev/md/dg_d4/rdsk/d43 /local/mqm/qmgrs/qmgr1 ufs 4 no  logging
/dev/md/dg_d4/dsk/d46 /dev/md/dg_d4/rdsk/d46 /local/mqm/log/qmgr1   ufs 4 no  logging
/dev/md/dg_d5/dsk/d53 /dev/md/dg_d5/rdsk/d53 /local/mqm/qmgrs/qmgr2 ufs 4 no  logging
/dev/md/dg_d5/dsk/d56 /dev/md/dg_d5/rdsk/d56 /local/mqm/log/qmgr2   ufs 4 no  logging
Within this deployment:
The WebSphere MQ files are deployed on ZFS highly available local file systems.
The WebSphere MQ instances are qmgr1 and qmgr2.
/var/mqm uses local storage on each cluster node or the zonepath of a HA container.
Because /var/mqm is on a local file system, you must copy /var/mqm/mqs.ini from the node where the queue manager was created to all other nodes or zones in the cluster where the queue manager will run.
# df -k /var/mqm
Filesystem            kbytes    used   avail capacity  Mounted on
/                   59299764 25657791 33048976    44%  /
#
# ls -l /var/mqm/qmgrs
total 6
drwxrwsr-x   2 mqm      mqm          512 Sep 11 11:42 @SYSTEM
lrwxrwxrwx   1 mqm      mqm           14 Sep 11 11:45 qmgr1 -> /ZFSwmq1/qmgrs
lrwxrwxrwx   1 mqm      mqm           14 Sep 11 11:50 qmgr2 -> /ZFSwmq2/qmgrs
#
# ls -l /var/mqm/log
total 4
lrwxrwxrwx   1 mqm      mqm           12 Sep 11 11:44 qmgr1 -> /ZFSwmq1/log
lrwxrwxrwx   1 mqm      mqm           12 Sep 11 11:54 qmgr2 -> /ZFSwmq2/log
#
# df -k /ZFSwmq1
Filesystem            kbytes    used   avail capacity  Mounted on
HAZpool1             4096453   13180 4083273     1%    /ZFSwmq1
#
# df -k /ZFSwmq2
Filesystem            kbytes    used   avail capacity  Mounted on
HAZpool2             4096453   13133 4083320     1%    /ZFSwmq2
Within this deployment:
If /var/mqm is placed on shared storage as a cluster file system, a symbolic link is made from /var/mqm/qmgrs/@SYSTEM to the local directory /var/mqm_local/qmgrs/@SYSTEM.
You must perform this step on all nodes in the cluster only if /var/mqm is a cluster file system.
# mkdir -p /var/mqm_local/qmgrs/@SYSTEM
# mkdir -p /var/mqm/qmgrs
# ln -s /var/mqm_local/qmgrs/@SYSTEM /var/mqm/qmgrs/@SYSTEM
This restriction is required because WebSphere MQ uses keys to build internal control structures. Mounting /var/mqm as a cluster file system with a symbolic link for /var/mqm/qmgrs/@SYSTEM to a local file ensures that any derived shared memory keys are unique on each node.
If multiple queue managers are required and your queue manager was created before you set up a symbolic link for /var/mqm/qmgrs/@SYSTEM, you must copy the contents of /var/mqm/qmgrs/@SYSTEM, with permissions, to /var/mqm_local/qmgrs/@SYSTEM before creating the symbolic link.
You must stop all queue managers before doing this, and you must perform this on each node of the cluster.
# mkdir -p /var/mqm_local/qmgrs/@SYSTEM
# cd /var/mqm/qmgrs
# cp -rp @SYSTEM/* /var/mqm_local/qmgrs/@SYSTEM
# rm -r @SYSTEM
# ln -s /var/mqm_local/qmgrs/@SYSTEM @SYSTEM
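Because this sequence removes the original @SYSTEM directory, it can be worth rehearsing it first. The following sketch performs the same copy/remove/link sequence against scratch paths created with mktemp; all paths and the stand-in file are illustrative, and the real procedure targets /var/mqm/qmgrs/@SYSTEM and /var/mqm_local/qmgrs/@SYSTEM on each node:

```shell
# Dry run of the @SYSTEM relocation in a scratch directory.
scratch=$(mktemp -d)
mkdir -p "$scratch/var/mqm/qmgrs/@SYSTEM" "$scratch/var/mqm_local/qmgrs"
touch "$scratch/var/mqm/qmgrs/@SYSTEM/sample"   # stand-in content

# Copy the directory with permissions, remove the original, then replace it
# with a symbolic link to the node-local copy.
cp -rp "$scratch/var/mqm/qmgrs/@SYSTEM" "$scratch/var/mqm_local/qmgrs/@SYSTEM"
rm -r "$scratch/var/mqm/qmgrs/@SYSTEM"
ln -s "$scratch/var/mqm_local/qmgrs/@SYSTEM" "$scratch/var/mqm/qmgrs/@SYSTEM"

# The stand-in content is still reachable through the link.
ls "$scratch/var/mqm/qmgrs/@SYSTEM"             # prints: sample
```

If the listing through the link shows the original contents, the same sequence can be applied to the real paths with the queue managers stopped.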
Perform this step on one node of the cluster.
Ensure the node has ownership of the disk set or disk group.
For Solaris Volume Manager.
# metaset -s disk-set -t
For Veritas Volume Manager.
# vxdg -C import disk-group
# vxdg -g disk-group startall
# mount websphere-mq-highly-available-local-file-system
Create the mount point on all zones of the cluster that are being used for WebSphere MQ.
Mount the highly available local file system on one of the zones being used.
# zlogin zonename mkdir websphere-mq-highly-available-local-file-system
#
# mount -F lofs websphere-mq-highly-available-local-file-system \
> /zonepath/root/websphere-mq-highly-available-local-file-system
# zpool export -f HAZpool
# zpool import -R /zonepath/root HAZpool
After you have created and mounted the appropriate file systems for the WebSphere MQ files and /var/mqm, you must install WebSphere MQ on each node of the cluster, in the global zone, the non-global zone, or the HA container, as required.
Follow the IBM WebSphere MQ for Sun Solaris Quick Beginnings manual to install WebSphere MQ.
Note - You may choose to locate the mqm userid and group within /etc/passwd and /etc/group or within a name service such as NIS or NIS+. However, because Solaris Cluster HA for WebSphere MQ uses the su user command to start, stop, and probe WebSphere MQ, it is recommended that the mqm userid and group be located within /etc/passwd and /etc/group on each node of the cluster. This ensures that the su(1M) command is not impacted if a name service such as NIS or NIS+ is unavailable.
If you choose to locate the mqm userid and group within a network information name service such as NIS or NIS+, WebSphere MQ may be affected if that name service is unavailable.
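As a quick sanity check, you can confirm that an account resolves from the local files rather than only from a name service. The sketch below is illustrative and checks root instead of mqm, since mqm may not yet exist on the machine where you first try it; substitute mqm once the user and group have been created:

```shell
# Hedged check that an account and group are defined in the local files.
# "root" is a stand-in for the mqm userid and group.
user=root
grep "^${user}:" /etc/passwd >/dev/null && echo "${user} found in /etc/passwd"
grep "^${user}:" /etc/group  >/dev/null && echo "${user} found in /etc/group"
```

If either grep finds nothing for mqm, the account is coming only from the name service, and su against it will depend on that service being reachable.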
Follow the IBM WebSphere MQ for Sun Solaris Quick Beginnings manual to create a WebSphere MQ queue manager.
Within this deployment:
If /var/mqm is placed on local storage as a local file system, you must copy /var/mqm/mqs.ini from the node or zone where the queue manager was created to all other nodes or zones in the cluster where the queue manager will run.
You must perform this step on all nodes or zones in the cluster only if /var/mqm is a local file system.
# rcp /var/mqm/mqs.ini remote-node:/var/mqm/mqs.ini
# rcp /zonepath/root/var/mqm/mqs.ini \
> remote-node:/zonepath/root/var/mqm/mqs.ini
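When several nodes or zones are involved, the copy can be scripted as a loop. The following local simulation is illustrative only: the scratch directories stand in for each remote node's /var/mqm, node2 and node3 are placeholder node names, and the stand-in mqs.ini stanza is minimal. In production the loop body would be the rcp (or scp) command from the step above:

```shell
# Simulated fan-out of mqs.ini to the other nodes of the cluster.
src=$(mktemp -d)
printf 'QueueManager:\n   Name=qmgr1\n' > "$src/mqs.ini"   # stand-in file

for node in node2 node3; do
  dest=$(mktemp -d)                 # stands in for $node:/var/mqm
  cp "$src/mqs.ini" "$dest/mqs.ini"
  echo "$node: copied mqs.ini"
done
```

After the real copy, verify that each node's /var/mqm/mqs.ini lists every queue manager that can fail over to that node.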