This appendix presents a complete example of how to install and configure multiple WebSphere MQ queue managers in non-global zones. It presents a simple single-node cluster configuration. If you need to install the application in any other configuration, refer to the general-purpose procedures presented elsewhere in this manual.
This example uses a single-node cluster with the following node and zone names:
Vigor5 — the physical node, which owns the file system.
A whole root non-global zone named z1.
A whole root non-global zone named z2.
This deployment example uses the following software products and versions:
Solaris 10 06/06 software for SPARC or x86 platforms
Sun Cluster 3.2 core software
Sun Cluster HA for WebSphere MQ data service
WebSphere MQ v6 for Solaris x86-64
This example assumes that you have already installed and established your cluster. It illustrates installation and configuration of the data service application only.
The instructions in this example were developed with the following assumptions:
Shell environment: All commands and the environment setup in this example are for the Korn shell environment. If you use a different shell, replace any Korn shell-specific information or instructions with the appropriate information for your preferred shell environment.
User login: Unless otherwise specified, perform all procedures as superuser or assume a role that provides solaris.cluster.admin, solaris.cluster.modify, and solaris.cluster.read RBAC authorizations.
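Before you proceed, you can optionally confirm that your login carries the required RBAC authorizations. This is a minimal check that is not part of the original procedure, and it assumes a hypothetical administrative user named mqadmin; the auths command lists the authorizations granted to a user.

Vigor5# auths mqadmin

Verify that solaris.cluster.admin, solaris.cluster.modify, and solaris.cluster.read appear in the output. If they do not, assume a role that provides them before you continue.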
This deployment example is designed for a single-node cluster. It is provided as a concise guide to an example installation and configuration of WebSphere MQ, not as a precise or exhaustive installation procedure. If you need to install WebSphere MQ in any other configuration, refer to the general-purpose procedures elsewhere in this manual.
The instructions in this deployment example assume that you are using WebSphere MQ v6 for the Solaris x86-64 platform and will configure WebSphere MQ on a ZFS highly available local file system.
The cluster resource group will be configured to fail over between two non-global zones on a single-node cluster.
The tasks you must perform to install and configure WebSphere MQ in the non-global zones are as follows:
Example: Prepare the Cluster for WebSphere MQ
Example: Configure Two Non-Global Zones
Example: Install WebSphere MQ in the Non-Global Zones
Example: Verify WebSphere MQ
Example: Configure Cluster Resources for WebSphere MQ
Example: Enable the WebSphere MQ Software to Run in the Cluster
Example: Verify the Sun Cluster HA for WebSphere MQ Resource Group
Perform all steps within this example in the global zone.
Install and configure the cluster as instructed in Sun Cluster Software Installation Guide for Solaris OS.
Install the following cluster software components on node Vigor5.
Sun Cluster core software
Sun Cluster data service for WebSphere MQ
Add the logical host name to /etc/hosts and /etc/inet/ipnodes in the global zone.
The following output shows logical host name entries for qmgr1.
Vigor5# grep qmgr1 /etc/hosts /etc/inet/ipnodes
/etc/hosts:192.168.1.150   qmgr1
/etc/inet/ipnodes:192.168.1.150   qmgr1
Install and configure a Zettabyte file system.
Create two ZFS pools.
The following zpool definitions represent a very basic configuration for deployment on a single-node cluster.
Do not use this configuration in a production deployment; it is suitable for testing or development purposes only.
Vigor5# zpool create -m /ZFSwmq1/log HAZpool1 c1t1d0
Vigor5# zpool create -m /ZFSwmq1/qmgrs HAZpool2 c1t4d0
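You can optionally verify that the pools were created and mounted as expected. This check is an addition to the original procedure; zpool list and zpool status are standard ZFS commands.

Vigor5# zpool list
Vigor5# zpool status HAZpool1
Vigor5# zpool status HAZpool2

Confirm that both pools report ONLINE and that the mount points /ZFSwmq1/log and /ZFSwmq1/qmgrs exist.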
Perform all steps within this example in the global zone.
On local storage, create a directory for the non-global zones' root paths.
Vigor5# mkdir /zones
Create a temporary file for the whole root zones, for example /tmp/z1 and /tmp/z2, and include the following entries:
Vigor5# cat > /tmp/z1 <<-EOF
create -b
set zonepath=/zones/z1
EOF
Vigor5# cat > /tmp/z2 <<-EOF
create -b
set zonepath=/zones/z2
EOF
Configure the non-global zones, using the files you created.
Vigor5# zonecfg -z z1 -f /tmp/z1
Vigor5# zonecfg -z z2 -f /tmp/z2
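To confirm that the configuration took effect, you can display each zone's settings. This verification step is an addition to the original procedure; info is a standard zonecfg subcommand.

Vigor5# zonecfg -z z1 info
Vigor5# zonecfg -z z2 info

Check that zonepath is set to /zones/z1 and /zones/z2, respectively.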
Install the zones.
Open two windows and issue one of the following commands in each window.
Vigor5# zoneadm -z z1 install
Vigor5# zoneadm -z z2 install
Boot the zones.
Perform this step after the installation of both zones is complete.
Vigor5# zoneadm -z z1 boot
Vigor5# zoneadm -z z2 boot
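You can verify that both zones are running before you log in. This check is an addition to the original procedure; zoneadm list -v shows the state of all running zones.

Vigor5# zoneadm list -v

Both z1 and z2 should appear with a status of running.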
Log in to the zones and complete the zone system identification.
Vigor5# zlogin -C z1
Vigor5# zlogin -C z2
Close the terminal window and disconnect from the zone consoles.
After you have completed the zone system identification, disconnect from the zone consoles in the windows you previously opened.
Vigor5# ~.
Create the appropriate mount points and symlinks for the queue manager in the zone.
Vigor5# zlogin z1 mkdir -p /var/mqm/log /var/mqm/qmgrs
Vigor5# zlogin z1 ln -s /ZFSwmq1/log /var/mqm/log/qmgr1
Vigor5# zlogin z1 ln -s /ZFSwmq1/qmgrs /var/mqm/qmgrs/qmgr1
Vigor5#
Vigor5# zlogin z2 mkdir -p /var/mqm/log /var/mqm/qmgrs
Vigor5# zlogin z2 ln -s /ZFSwmq1/log /var/mqm/log/qmgr1
Vigor5# zlogin z2 ln -s /ZFSwmq1/qmgrs /var/mqm/qmgrs/qmgr1
Create the WebSphere MQ user ID in the zones.
Vigor5# zlogin z1 groupadd -g 1000 mqm
Vigor5# zlogin z1 useradd -u 1000 -g 1000 -d /var/mqm mqm
Vigor5#
Vigor5# zlogin z2 groupadd -g 1000 mqm
Vigor5# zlogin z2 useradd -u 1000 -g 1000 -d /var/mqm mqm
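You can confirm that the user and group were created identically in both zones. This matters because the queue manager's files must resolve to the same numeric IDs in whichever zone imports the ZFS pools. This verification is an addition to the original procedure.

Vigor5# zlogin z1 id mqm
uid=1000(mqm) gid=1000(mqm)
Vigor5# zlogin z2 id mqm
uid=1000(mqm) gid=1000(mqm)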
Add the logical host name to /etc/hosts and /etc/inet/ipnodes in the zones.
The following output shows the logical host name entry for qmgr1 in zones z1 and z2.
Vigor5# zlogin z1 grep qmgr1 /etc/hosts /etc/inet/ipnodes
/etc/hosts:192.168.1.150   qmgr1
/etc/inet/ipnodes:192.168.1.150   qmgr1
Vigor5# zlogin z2 grep qmgr1 /etc/hosts /etc/inet/ipnodes
/etc/hosts:192.168.1.150   qmgr1
/etc/inet/ipnodes:192.168.1.150   qmgr1
Mount the WebSphere MQ software in the zones.
Perform this step in the global zone.
In this example, the WebSphere MQ software has been copied to directory /export/software/ibm/wmqsv6 on node Vigor5.
Vigor5# zlogin z1 mkdir -p /var/tmp/software
Vigor5# zlogin z2 mkdir -p /var/tmp/software
Vigor5#
Vigor5# mount -F lofs /export/software /zones/z1/root/var/tmp/software
Vigor5# mount -F lofs /export/software /zones/z2/root/var/tmp/software
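You can confirm that the loopback mounts are visible from inside each zone. This check is an addition to the original procedure.

Vigor5# zlogin z1 ls /var/tmp/software/ibm/wmqsv6
Vigor5# zlogin z2 ls /var/tmp/software/ibm/wmqsv6

Each command should list the contents of the WebSphere MQ installation media.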
Mount the ZFS pools in non-global zone z1.
Perform this step in the global zone.
Vigor5# zpool export -f HAZpool1
Vigor5# zpool export -f HAZpool2
Vigor5# zpool import -R /zones/z1/root HAZpool1
Vigor5# zpool import -R /zones/z1/root HAZpool2
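The -R option imports each pool with an alternate root, so file systems that mount at /ZFSwmq1 in the global zone appear under /zones/z1/root and are therefore visible as /ZFSwmq1 inside zone z1. You can verify the alternate root and the mounts as follows; this check is an addition to the original procedure.

Vigor5# zpool get altroot HAZpool1 HAZpool2
Vigor5# zlogin z1 df -h /ZFSwmq1/log /ZFSwmq1/qmgrs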
Set up the ZFS file systems for user and group mqm.
Vigor5# zlogin z1 chown -R mqm:mqm /ZFSwmq1
Log in to each zone in two separate windows.
Perform this step from the global zone.
Vigor5# zlogin z1
Vigor5# zlogin z2
Install the WebSphere MQ software in each zone.
Perform this step within each new window you used to log in to a zone.
# cd /var/tmp/software/ibm/wmqsv6
# ./mqlicense.sh
# pkgadd -d .
# exit
Create and start the queue manager.
Perform this step from the global zone.
Vigor5# zlogin z1
# su - mqm
$ crtmqm qmgr1
$ strmqm qmgr1
Create a persistent queue in the queue manager and put test messages on the queue.
Perform this step in zone z1.
$ runmqsc qmgr1
def ql(sc3test) defpsist(yes)
end
$ /opt/mqm/samp/bin/amqsput SC3TEST qmgr1
test
test
test
test
test
^C
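Before you stop the queue manager, you can confirm that the messages were persisted on the queue. This check is an addition to the original procedure; DISPLAY QLOCAL with the CURDEPTH attribute is standard MQSC syntax.

$ runmqsc qmgr1
dis ql(sc3test) curdepth
end

The current depth reported should match the number of test messages you put on the queue.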
Stop the queue manager.
Perform this step in zone z1.
$ endmqm -i qmgr1
$ exit
# exit
Copy the mqs.ini file between the two zones.
Perform this step in the global zone.
Vigor5# cp /zones/z1/root/var/mqm/mqs.ini /zones/z2/root/var/mqm/mqs.ini
Export the ZFS pools from zone z1 and import them into zone z2.
Perform this step in the global zone.
Vigor5# zpool export -f HAZpool1
Vigor5# zpool export -f HAZpool2
Vigor5# zpool import -R /zones/z2/root HAZpool1
Vigor5# zpool import -R /zones/z2/root HAZpool2
Start the queue manager.
Perform this step from the global zone.
Vigor5# zlogin z2
# su - mqm
$ strmqm qmgr1
Get the messages from the persistent queue and delete the queue.
Perform this step in zone z2.
$ /opt/mqm/samp/bin/amqsget SC3TEST qmgr1
^C
$ runmqsc qmgr1
delete ql(sc3test)
end
Stop the queue manager.
Perform this step in zone z2.
$ endmqm -i qmgr1
$ exit
# exit
Unmount the ZFS file systems from zone z2.
Perform this step in the global zone.
Vigor5# zpool export -f HAZpool1
Vigor5# zpool export -f HAZpool2
Perform all steps within this example in the global zone.
Register the required resource types.
Vigor5# clresourcetype register SUNW.HAStoragePlus
Vigor5# clresourcetype register SUNW.gds
Create the resource group.
Vigor5# clresourcegroup create -n Vigor5:z1,Vigor5:z2 wmq1-rg
Create the logical host name resource.
Vigor5# clreslogicalhostname create -g wmq1-rg -h qmgr1 wmq1-lh
Create the HAStoragePlus resource in the wmq1-rg resource group.
Vigor5# clresource create -g wmq1-rg -t SUNW.HAStoragePlus \
> -p Zpools=HAZpool1,HAZpool2 wmq1-ZFShas
Enable the resource group.
Vigor5# clresourcegroup online -M wmq1-rg
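You can verify that the resource group and its resources came online in one of the zones. This verification is an addition to the original procedure; clresourcegroup status and clresource status are the standard status commands (abbreviated as clrg and clrs later in this example).

Vigor5# clresourcegroup status wmq1-rg
Vigor5# clresource status -g wmq1-rg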
Perform all steps within this example in the global zone.
Create the Sun Cluster HA for WebSphere MQ queue manager configuration file.
Either cat the following into /var/tmp/mgr1_config or edit /opt/SUNWscmqs/mgr/util/mgr_config and execute /opt/SUNWscmqs/mgr/util/mgr_register.
Vigor5# cat > /var/tmp/mgr1_config <<-EOF
# +++ Required parameters +++
RS=wmq1-qmgr
RG=wmq1-rg
QMGR=qmgr1
LH=wmq1-lh
HAS_RS=wmq1-ZFShas
LSR_RS=
CLEANUP=YES
SERVICES=NO
USERID=mqm
# +++ Optional parameters +++
DB2INSTANCE=
ORACLE_HOME=
ORACLE_SID=
START_CMD=
STOP_CMD=
# +++ Failover zone parameters +++
# These parameters are only required when WebSphere MQ should run
# within a failover zone managed by the Sun Cluster Data Service
# for Solaris Containers.
RS_ZONE=
PROJECT=default
TIMEOUT=300
EOF
Register the Sun Cluster HA for WebSphere MQ queue manager resource.
Vigor5# /opt/SUNWscmqs/mgr/util/mgr_register -f /var/tmp/mgr1_config
Enable the Sun Cluster HA for WebSphere MQ queue manager resource.
Vigor5# clresource enable wmq1-qmgr
Create the Sun Cluster HA for WebSphere MQ listener configuration file.
Either cat the following into /var/tmp/lsr1_config or edit /opt/SUNWscmqs/lsr/util/lsr_config and execute /opt/SUNWscmqs/lsr/util/lsr_register.
Vigor5# cat > /var/tmp/lsr1_config <<-EOF
# +++ Required parameters +++
RS=wmq1-lsr
RG=wmq1-rg
QMGR=qmgr1
PORT=1414
IPADDR=
BACKLOG=100
LH=wmq1-lh
QMGR_RS=wmq1-qmgr
USERID=mqm
# +++ Failover zone parameters +++
# These parameters are only required when WebSphere MQ should run
# within a failover zone managed by the Sun Cluster Data Service
# for Solaris Containers.
RS_ZONE=
PROJECT=default
EOF
Register the Sun Cluster HA for WebSphere MQ listener resource.
Vigor5# /opt/SUNWscmqs/lsr/util/lsr_register -f /var/tmp/lsr1_config
Enable the Sun Cluster HA for WebSphere MQ listener resource.
Vigor5# clresource enable wmq1-lsr
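You can confirm that the listener is accepting connections on the configured port. This check is an addition to the original procedure and assumes the resource group is currently online on zone z1; substitute z2 if the group is running there.

Vigor5# zlogin z1 netstat -an | grep 1414

The output should show a socket in the LISTEN state on port 1414.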
Perform this step in the global zone.
Switch the WebSphere MQ resource group between the two non-global zones.
Vigor5# for node in Vigor5:z2 Vigor5:z1
> do
> clrg switch -n $node wmq1-rg
> clrs status wmq1-qmgr
> clrs status wmq1-lsr
> clrg status wmq1-rg
> done
If another queue manager is required, you can repeat the following tasks. However, you must change the entries within each task to reflect your new queue manager.
Repeat the relevant steps from Example: Prepare the Cluster for WebSphere MQ.
Repeat the relevant steps from Example: Configure Two Non-Global Zones.
Repeat the relevant steps from Example: Install WebSphere MQ in the Non-Global Zones.
Repeat the relevant steps from Example: Verify WebSphere MQ.
Repeat the relevant steps from Example: Configure Cluster Resources for WebSphere MQ.
Repeat the relevant steps from Example: Enable the WebSphere MQ Software to Run in the Cluster.
Repeat as required for any WebSphere MQ component.