1. Installing and Configuring Solaris Cluster HA for WebSphere MQ
Overview of Installing and Configuring HA for WebSphere MQ
Planning the HA for WebSphere MQ Installation and Configuration
Restriction for the supported configurations of HA for WebSphere MQ
Restriction for the Location of WebSphere MQ files
Restriction for multiple WebSphere MQ instances
Determine which Solaris zone WebSphere MQ will use
Requirements if multiple WebSphere MQ instances are deployed on cluster file systems
Installing and Configuring WebSphere MQ
How to Install and Configure WebSphere MQ
Verifying the Installation and Configuration of WebSphere MQ
How to Verify the Installation and Configuration of WebSphere MQ
Installing the HA for WebSphere MQ Packages
How to Install the HA for WebSphere MQ Packages
Registering and Configuring Solaris Cluster HA for WebSphere MQ
How to Register and Configure Solaris Cluster HA for WebSphere MQ
How to Register and Configure Solaris Cluster HA for WebSphere MQ in a Failover Resource Group
How to Register and Configure Solaris Cluster HA for WebSphere MQ in an HA Container
Verifying the Solaris Cluster HA for WebSphere MQ Installation and Configuration
How to Verify the Solaris Cluster HA for WebSphere MQ Installation and Configuration
How to Migrate Existing Resources to a New Version of HA for WebSphere MQ
Understanding the Solaris Cluster HA for WebSphere MQ Fault Monitor
Probing Algorithm and Functionality
Operations of the queue manager probe
Operations of the channel initiator, command server, listener and trigger monitor probes
Debug Solaris Cluster HA for WebSphere MQ
How to turn on debug for Solaris Cluster HA for WebSphere MQ
A. Deployment Example: Installing a WebSphere MQ Queue Manager in Non-Global Zones
B. Deployment Example: Installing a WebSphere MQ Queue Manager in an HA Container
This section contains the procedure you need to verify the installation and configuration.
This procedure does not verify that your application is highly available because you have not yet installed your data service.
Perform this procedure on one node or zone of the cluster unless a specific step indicates otherwise.
Repeat this step on all nodes of the cluster if non-global zones are being used, and on one node of the cluster if an HA container is being used.
Boot the zone if it is not running.
# zoneadm list -v
# zoneadm -z zonename boot
# zlogin zonename
# su - mqm
$ strmqm queue-manager
$ runmqsc queue-manager
def ql(sc3test) defpsist(yes)
end
$ /opt/mqm/samp/bin/amqsput SC3TEST queue-manager
test test test test test
^C
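Before ending the queue manager, you can optionally confirm that it is running with WebSphere MQ's dspmq command, which reports each queue manager as QMNAME(name) STATUS(state). A minimal sketch of checking that output follows; the sample output string is hard-coded here (an assumption, so the logic can be shown without a live queue manager), and the queue manager name is the placeholder used throughout this procedure:

```shell
#!/bin/sh
# Sketch: decide whether a queue manager is running from dspmq-style output.
# On a live node you would capture real output instead:  dspmq_output=$(dspmq)
qm="queue-manager"                                     # placeholder name
dspmq_output="QMNAME(queue-manager)  STATUS(Running)"  # sample dspmq output
status="not running"
case "$dspmq_output" in
  *"QMNAME($qm)"*"STATUS(Running)"*) status="running" ;;
esac
echo "$qm is $status"
```

If the queue manager is not reported as Running, repeat the strmqm step before continuing.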
$ endmqm -i queue-manager
$ exit
# exit
Perform this step in the global zone only.
Unmount the highly available local file system that you mounted in Step 7 of How to Install and Configure WebSphere MQ.
# umount websphere-mq-highly-available-local-file-system
Unmount the highly available local file system from the zone.
# umount /zonepath/root/websphere-mq-highly-available-local-file-system
# zpool export -f HAZpool
Perform this step on another node of the cluster.
Ensure the node has ownership of the disk set or disk group.
For Solaris Volume Manager.
# metaset -s disk-set -t
For Veritas Volume Manager.
# vxdg -C import disk-group
# vxdg -g disk-group startall
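For Solaris Volume Manager, running metaset -s disk-set without the -t option prints the hosts in the set and flags the current owner, which is a quick way to confirm ownership before mounting. A minimal parsing sketch follows; the sample output and node names are assumptions so the logic can be shown off-cluster:

```shell
#!/bin/sh
# Sketch: extract the current disk-set owner from metaset-style output.
# On a live node you would capture real output:  metaset_output=$(metaset -s disk-set)
metaset_output="Host                Owner
  node1              Yes
  node2"                                  # sample output, hypothetical nodes
owner=$(echo "$metaset_output" | awk '$2 == "Yes" { print $1 }')
echo "current owner: ${owner:-none}"
```

If no host is flagged as owner, take ownership with metaset -s disk-set -t as shown above.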
# mount websphere-mq-highly-available-local-file-system
Create the mount point on all zones of the cluster that are being used for WebSphere MQ.
Mount the highly available local file system on one of the zones being used.
# zlogin zonename mkdir websphere-mq-highly-available-local-file-system
# mount -F lofs websphere-mq-highly-available-local-file-system \
> /zonepath/root/websphere-mq-highly-available-local-file-system
# zpool import -R /zonepath/root HAZpool
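The mkdir step must be repeated in every zone that hosts WebSphere MQ. One way to script that is a simple loop over zlogin; the sketch below uses hypothetical zone names and is shown as a dry run (each command is echoed rather than executed), since zlogin only works on a live cluster node:

```shell
#!/bin/sh
# Sketch: create the same mount point in every WebSphere MQ zone.
# Hypothetical zone names and path; remove "echo" to execute for real.
mountpoint="/websphere-mq-highly-available-local-file-system"
zones="zone1 zone2"                       # assumption: your zone names
for zone in $zones; do
  echo zlogin "$zone" mkdir -p "$mountpoint"   # dry run only
done
```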
Perform this step on the other node of the cluster.
# zlogin zonename
Perform this step on the other node or zone of the cluster.
# su - mqm
$ strmqm queue-manager
$ /opt/mqm/samp/bin/amqsget SC3TEST queue-manager
^C
$ runmqsc queue-manager
delete ql(sc3test)
end
Perform this step on the other node or zone of the cluster.
$ endmqm -i queue-manager
$ exit
# exit
Perform this step in the global zone only.
Unmount the highly available local file system that you mounted in Step 7 of How to Install and Configure WebSphere MQ.
# umount websphere-mq-highly-available-local-file-system
Unmount the highly available local file system from the zone.
# umount /zonepath/root/websphere-mq-highly-available-local-file-system
# zpool export -f HAZpool
Note - This step is only required if an HA container is being used.
# zlogin zonename halt