This appendix presents a complete example of how to install and configure a WebSphere MQ queue manager in a failover zone. It presents a simple single-node cluster configuration. If you need to install the application in any other configuration, refer to the general-purpose procedures presented elsewhere in this manual.
This example uses a single-node cluster with the following node and zone names:
The physical node, named Vigor5, which owns the file system.
A whole root non-global zone named z3.
This deployment example uses the following software products and versions:
Solaris 10 06/06 software for SPARC or x86 platforms
Sun Cluster 3.2 core software
Sun Cluster HA for WebSphere MQ data service
Sun Cluster HA for Solaris Containers data service
WebSphere MQ v6 Solaris x86–64
This example assumes that you have already installed and established your cluster. It illustrates installation and configuration of the data service application only.
The instructions in this example were developed with the following assumptions:
Shell environment: All commands and the environment setup in this example are for the Korn shell environment. If you use a different shell, replace any Korn shell-specific information or instructions with the appropriate information for your preferred shell environment.
User login: Unless otherwise specified, perform all procedures as superuser or assume a role that provides solaris.cluster.admin, solaris.cluster.modify, and solaris.cluster.read RBAC authorization.
This deployment example is designed for a single-node cluster. It is provided as a concise reference for an installation and configuration of WebSphere MQ, not as a comprehensive installation guide.
If you need to install WebSphere MQ in any other configuration, refer to the general-purpose procedures elsewhere in this manual.
The instructions in this deployment example assume that you are using WebSphere MQ V6 for Solaris x86-64 and will configure WebSphere MQ on a ZFS highly available local file system.
The failover zonepath cannot use a ZFS highly available local file system; instead, the zonepath uses a Solaris Volume Manager (SVM) highly available local file system.
Because this deployment example is on a single-node cluster, the cluster resource group is simply brought online and is not failed over to another node.
The tasks you must perform to install and configure WebSphere MQ in the failover zone are as follows:
Example: Prepare the Cluster for WebSphere MQ
Example: Configure the Failover Zone
Example: Install WebSphere MQ in the failover zone
Example: Verify WebSphere MQ
Example: Configure Cluster Resources for WebSphere MQ
Example: Enable the WebSphere MQ Software to Run in the Cluster
Example: Verify the Sun Cluster HA for WebSphere MQ resource group
Install and configure the cluster as instructed in Sun Cluster Software Installation Guide for Solaris OS.
Install the following cluster software components on node Vigor5. A sample package installation command follows this list.
Sun Cluster core software
Sun Cluster data service for WebSphere MQ
Sun Cluster data service for Solaris Containers
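The Sun Cluster core software is installed with the Sun Cluster installer, as described in the installation guide referenced above. The two data service agents can then be added with pkgadd. A minimal sketch, assuming the agent packages SUNWscmqs and SUNWsczone have been copied to /var/tmp/packages (the directory path is an assumption for this example):
Vigor5# cd /var/tmp/packages
Vigor5# pkgadd -d . SUNWscmqs SUNWsczone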
Add the logical host name to /etc/hosts and /etc/inet/ipnodes in the global zone and failover zone.
The following output shows the logical host name entry for qmgr3 in the global zone.
Vigor5# grep qmgr3 /etc/hosts /etc/inet/ipnodes
/etc/hosts:192.168.1.152   qmgr3
/etc/inet/ipnodes:192.168.1.152   qmgr3
Install and configure a Zettabyte File System
The following zpool definitions represent a very basic configuration for deployment on a single-node cluster.
Do not use this configuration in a production deployment; it is intended for testing or development purposes only.
Create the ZFS pools
Vigor5# zpool create -m /ZFSwmq3/log HAZpool1 c1t1d0
Vigor5# zpool create -m /ZFSwmq3/qmgrs HAZpool2 c1t4d0
Install and Configure a Solaris Volume Manager File System
The following metaset definitions represent a very basic configuration for deployment on a single-node cluster.
Do not use this configuration in a production deployment; it is intended for testing or development purposes only.
Create an SVM Disk Set.
Vigor5# metaset -s dg_d1 -a -h Vigor5
Add a Disk to the SVM Disk Set
Vigor5# metaset -s dg_d1 -a /dev/did/rdsk/d2
Add the Disk Information to the metainit utility input file
Vigor5# cat >> /etc/lvm/md.tab <<-EOF
dg_d1/d100 -m dg_d1/d110
dg_d1/d110 1 1 /dev/did/rdsk/d2s0
EOF
Configure the metadevices
Vigor5# metainit -s dg_d1 -a
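You can optionally confirm that the mirror metadevice was created before building the file system; metastat reports the state of the metadevices in the disk set:
Vigor5# metastat -s dg_d1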
Create a Mount Point for the SVM Highly Available Local File System
Vigor5# mkdir /FOZones
Add the SVM highly available local file system to /etc/vfstab
Vigor5# cat >> /etc/vfstab <<-EOF
/dev/md/dg_d1/dsk/d100 /dev/md/dg_d1/rdsk/d100 /FOZones ufs 3 no logging
EOF
Create the File System
Vigor5# newfs /dev/md/dg_d1/rdsk/d100
Mount the File System
Vigor5# mount /FOZones
In this task you will create a whole root failover non-global zone on node Vigor5.
Create a non-global zone to be used as the failover zone
Vigor5# cat > /tmp/z3 <<-EOF
create -b
set zonepath=/FOZones/z3
set autoboot=false
add inherit-pkg-dir
set dir=/opt/SUNWscmqs
end
EOF
Configure the non-global failover zone, using the file you created.
Vigor5# zonecfg -z z3 -f /tmp/z3
Install the zone.
Vigor5# zoneadm -z z3 install
Boot the zone.
Perform this step after the installation of the zone is complete.
Vigor5# zoneadm -z z3 boot
Log in to the zone and complete the zone system identification.
Open another window and issue the following command.
Vigor5# zlogin -C z3
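Alternatively, you can avoid the interactive system identification prompts by creating a sysidcfg file in the zone's root path before the first boot. A minimal sketch; every value below is an assumption for this example, and the root_password entry must be replaced with an encrypted password hash:
Vigor5# cat > /FOZones/z3/root/etc/sysidcfg <<-EOF
system_locale=C
terminal=xterm
network_interface=NONE {hostname=z3}
security_policy=NONE
name_service=NONE
nfs4_domain=dynamic
timezone=US/Pacific
root_password=<encrypted-hash>
EOF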
Disconnect from the zone console and close the terminal window.
After you have completed the zone system identification, disconnect from the zone and close the window you previously opened.
Vigor5# ~.
Vigor5# exit
Create the appropriate mount points and symlinks for WebSphere MQ in the zone.
Vigor5# zlogin z3 mkdir -p /var/mqm/log /var/mqm/qmgrs
Vigor5# zlogin z3 ln -s /ZFSwmq3/log /var/mqm/log/qmgr3
Vigor5# zlogin z3 ln -s /ZFSwmq3/qmgrs /var/mqm/qmgrs/qmgr3
Create the WebSphere MQ userid in the zone.
Vigor5# zlogin z3 groupadd -g 1000 mqm
Vigor5# zlogin z3 useradd -u 1000 -g 1000 -d /var/mqm mqm
Add the logical host name to /etc/hosts and /etc/inet/ipnodes in the zone
The following output shows the logical host name entry for qmgr3 in zone z3.
Vigor5# zlogin z3 grep qmgr3 /etc/hosts /etc/inet/ipnodes
/etc/hosts:192.168.1.152   qmgr3
/etc/inet/ipnodes:192.168.1.152   qmgr3
Mount the WebSphere MQ software in the zone.
In this example, the WebSphere MQ software has been copied to node Vigor5 in directory /export/software/ibm/wmqsv6.
Vigor5# zlogin z3 mkdir -p /var/tmp/software
Vigor5# mount -F lofs /export/software /FOZones/z3/root/var/tmp/software
Mount the ZFS pools in the zone.
Vigor5# zpool export -f HAZpool1
Vigor5# zpool export -f HAZpool2
Vigor5# zpool import -R /FOZones/z3/root HAZpool1
Vigor5# zpool import -R /FOZones/z3/root HAZpool2
Set up the ZFS file systems for user and group mqm.
Vigor5# zlogin z3 chown -R mqm:mqm /ZFSwmq3
Log in to the failover zone in a separate window.
Vigor5# zlogin z3
Install the WebSphere MQ software in the failover zone.
Perform this step in the new window you used to log in to the zone.
# cd /var/tmp/software/ibm/wmqsv6
# ./mqlicense.sh
# pkgadd -d .
# exit
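You can confirm the installation from the global zone before proceeding; dspmqver displays the installed WebSphere MQ version:
Vigor5# zlogin z3 /opt/mqm/bin/dspmqver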
Create and start a queue manager.
Perform this step from the global zone.
Vigor5# zlogin z3
# su - mqm
$ crtmqm qmgr3
$ strmqm qmgr3
Create a persistent queue in the queue manager and put a message to the queue.
Perform this step in zone z3.
$ runmqsc qmgr3
def ql(sc3test) defpsist(yes)
end
$ /opt/mqm/samp/bin/amqsput SC3TEST qmgr3
test test test test test
^C
Stop the queue manager.
Perform this step in zone z3.
$ endmqm -i qmgr3
$ exit
# exit
Unmount and mount the ZFS file systems in the zone.
Perform this step in the global zone.
Vigor5# zpool export -f HAZpool1
Vigor5# zpool export -f HAZpool2
Vigor5# zpool import -R /FOZones/z3/root HAZpool1
Vigor5# zpool import -R /FOZones/z3/root HAZpool2
Start the queue manager.
Perform this step from the global zone.
Vigor5# zlogin z3
# su - mqm
$ strmqm qmgr3
Get the messages from the persistent queue in the queue manager and delete the queue.
Perform this step in zone z3.
$ /opt/mqm/samp/bin/amqsget SC3TEST qmgr3
^C
$ runmqsc qmgr3
delete ql(sc3test)
end
Stop the queue manager.
Perform this step in zone z3.
$ endmqm -i qmgr3
$ exit
# exit
Unmount the ZFS file systems from the zone.
Perform this step in the global zone.
Vigor5# zpool export -f HAZpool1
Vigor5# zpool export -f HAZpool2
Halt the failover zone.
Perform this step in the global zone.
Vigor5# zoneadm -z z3 halt
Unmount the SVM zonepath.
Perform this step in the global zone.
Vigor5# umount -f /FOZones
Register the necessary resource types on the single-node cluster.
Vigor5# clresourcetype register SUNW.HAStoragePlus
Vigor5# clresourcetype register SUNW.gds
Create the resource group.
Vigor5# clresourcegroup create -n Vigor5 wmq3-rg
Create the logical host.
Vigor5# clreslogicalhostname create -g wmq3-rg -h qmgr3 wmq3-lh
Create the SVM HAStoragePlus resource in the wmq3-rg resource group.
Vigor5# clresource create -g wmq3-rg -t SUNW.HAStoragePlus \
> -p FilesystemMountPoints=/FOZones wmq3-SVMhas
Create the ZFS HAStoragePlus resource in the wmq3-rg resource group.
Vigor5# clresource create -g wmq3-rg -t SUNW.HAStoragePlus \
> -p Zpools=HAZpool1,HAZpool2 wmq3-ZFShas
Enable the resource group.
Vigor5# clresourcegroup online -M wmq3-rg
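Before configuring the failover zone resource, you can optionally verify that the storage resources came online; these are the same status commands used at the end of this example:
Vigor5# clresourcegroup status wmq3-rg
Vigor5# clresource status wmq3-SVMhas wmq3-ZFShas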
Create the Sun Cluster HA for Solaris Container Configuration file.
Vigor5# cat > /var/tmp/sczbt_config <<-EOF
RS=wmq3-FOZ
RG=wmq3-rg
PARAMETERDIR=/FOZones
SC_NETWORK=true
SC_LH=wmq3-lh
FAILOVER=true
HAS_RS=wmq3-SVMhas,wmq3-ZFShas
Zonename=z3
Zonebootopt=
Milestone=multi-user-server
Mounts="/ZFSwmq3/log /ZFSwmq3/qmgrs"
EOF
Register the Sun Cluster HA for Solaris Container data service.
Vigor5# /opt/SUNWsczone/sczbt/util/sczbt_register -f /var/tmp/sczbt_config
Enable the failover zone resource
Vigor5# clresource enable wmq3-FOZ
Create the Sun Cluster HA for WebSphere MQ queue manager configuration file.
Vigor5# cat > /var/tmp/mgr3_config <<-EOF
# +++ Required parameters +++
RS=wmq3-qmgr
RG=wmq3-rg
QMGR=qmgr3
LH=wmq3-lh
HAS_RS=wmq3-ZFShas
LSR_RS=
CLEANUP=YES
SERVICES=NO
USERID=mqm
# +++ Optional parameters +++
DB2INSTANCE=
ORACLE_HOME=
ORACLE_SID=
START_CMD=
STOP_CMD=
# +++ Failover zone parameters +++
# These parameters are only required when WebSphere MQ should run
# within a failover zone managed by the Sun Cluster Data Service
# for Solaris Containers.
RS_ZONE=wmq3-FOZ
PROJECT=default
TIMEOUT=300
EOF
Register the Sun Cluster HA for WebSphere MQ data service.
Vigor5# /opt/SUNWscmqs/mgr/util/mgr_register -f /var/tmp/mgr3_config
Enable the resource.
Vigor5# clresource enable wmq3-qmgr
Check the status of the WebSphere MQ resources.
Vigor5# clrs status wmq3-FOZ
Vigor5# clrs status wmq3-qmgr
Vigor5# clrg status wmq3-rg
If another queue manager is required, you can repeat the following tasks. However, you must change the entries within each task to reflect your new queue manager.
Repeat the applicable steps from Example: Prepare the Cluster for WebSphere MQ.
Repeat the applicable steps from Example: Configure the Failover Zone.
Repeat the applicable steps from Example: Install WebSphere MQ in the failover zone.
Repeat the applicable steps from Example: Verify WebSphere MQ.
Repeat the applicable steps from Example: Configure Cluster Resources for WebSphere MQ.
After creating these resources, you must enable them by using clresource enable resource before continuing with the next step.
Repeat the applicable steps from Example: Enable the WebSphere MQ Software to Run in the Cluster.
Also repeat as required for any WebSphere MQ component.
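For example, a second queue manager named qmgr4 (a hypothetical name, used here only to illustrate the substitution pattern) would need its own logical host entry, ZFS pools, failover zone, and resource names throughout, for instance:
Vigor5# clresourcegroup create -n Vigor5 wmq4-rg
Vigor5# clreslogicalhostname create -g wmq4-rg -h qmgr4 wmq4-lh
Every other wmq3, qmgr3, and z3 entry in the preceding tasks would be renamed in the same way.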