Sun Cluster Data Service for WebSphere MQ Guide for Solaris OS

Appendix A Deployment Example: Installing a WebSphere MQ Queue Manager in Non-Global Zones

This appendix presents a complete example of how to install and configure multiple WebSphere MQ queue managers in non-global zones. It presents a simple single-node cluster configuration. If you need to install the application in any other configuration, refer to the general-purpose procedures presented elsewhere in this manual.

Target Cluster Configuration

This example uses a single-node cluster with the following node and zone names:

Vigor5

The physical node, which owns the file system.

Vigor5:z1

A whole root non-global zone named z1.

Vigor5:z2

A whole root non-global zone named z2.

Software Configuration

This deployment example uses the following software products: WebSphere MQ v6 for the Solaris x86-64 platform, Sun Cluster core software, the Sun Cluster data service for WebSphere MQ, and ZFS for the highly available local file system.

This example assumes that you have already installed and established your cluster. It illustrates installation and configuration of the data service application only.

Assumptions

The instructions in this example were developed with the following assumptions:

  • You are using WebSphere MQ v6 on the Solaris x86-64 platform.

  • WebSphere MQ is configured on a ZFS highly available local file system.

  • The resource group fails over between two non-global zones on a single-node cluster.

Installing and Configuring WebSphere MQ


Note –

This deployment example is designed for a single-node cluster. It is provided as a concise reference for an installation and configuration of WebSphere MQ.

This deployment example is not meant to be a comprehensive guide to installing and configuring WebSphere MQ.

If you need to install WebSphere MQ in any other configuration, refer to the general-purpose procedures elsewhere in this manual.


The instructions in this deployment example assume that you are using WebSphere MQ v6 on the Solaris x86-64 platform and that you will configure WebSphere MQ on a ZFS highly available local file system.

The cluster resource group will be configured to fail over between two non-global zones on a single-node cluster.

The tasks you must perform to install and configure WebSphere MQ in the non-global zones are as follows:

Example: Prepare the Cluster for WebSphere MQ

Perform all steps within this example in the global zone.

  1. Install and configure the cluster as instructed in Sun Cluster Software Installation Guide for Solaris OS.

    Install the following cluster software components on node Vigor5.

    • Sun Cluster core software

    • Sun Cluster data service for WebSphere MQ
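
    To confirm that the packages are in place, you can query the package database. This check is only a sketch; it assumes that the data service installs under the package name SUNWscmqs, which is consistent with the /opt/SUNWscmqs paths used later in this example.


    Vigor5# pkginfo -l SUNWscmqs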

  2. Add the logical host name to /etc/hosts and /etc/inet/ipnodes in the global zone.

    The following output shows logical host name entries for qmgr1.


    Vigor5# grep qmgr1 /etc/hosts /etc/inet/ipnodes
    /etc/hosts:192.168.1.150	qmgr1
    /etc/inet/ipnodes:192.168.1.150	qmgr1
  3. Install and configure a Zettabyte file system.

    Create two ZFS pools.


    Note –

    The following zpool definitions represent a very basic configuration for deployment on a single-node cluster.

    Do not use this example in a production deployment. It is a very basic configuration for testing or development purposes only.



    Vigor5# zpool create -m /ZFSwmq1/log HAZpool1 c1t1d0
    Vigor5# zpool create -m /ZFSwmq1/qmgrs HAZpool2 c1t4d0
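
    You can optionally verify that both pools are online and mounted at the expected mount points:


    Vigor5# zpool list
    Vigor5# zpool status HAZpool1 HAZpool2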
    

Example: Configure two Non-Global Zones

Perform all steps within this example in the global zone.

  1. On local storage, create a directory for the non-global zones' root paths.


    Vigor5# mkdir /zones
    
  2. Create a temporary file for each whole root zone, for example /tmp/z1 and /tmp/z2, and include the following entries:


    Vigor5# cat > /tmp/z1 <<-EOF
    create -b
    set zonepath=/zones/z1
    EOF
    Vigor5# cat > /tmp/z2 <<-EOF
    create -b
    set zonepath=/zones/z2
    EOF
    
  3. Configure the non-global zones, using the files you created.


    Vigor5# zonecfg -z z1 -f /tmp/z1
    Vigor5# zonecfg -z z2 -f /tmp/z2
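
    You can optionally display the resulting zone configurations to confirm the zonepath settings:


    Vigor5# zonecfg -z z1 info
    Vigor5# zonecfg -z z2 info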
    
  4. Install the zones.

    Open two windows and issue one of the following commands in each window.


    Vigor5# zoneadm -z z1 install
    Vigor5# zoneadm -z z2 install
    
  5. Boot the zones.

    Perform this step after the installation of the zones is complete.


    Vigor5# zoneadm -z z1 boot
    Vigor5# zoneadm -z z2 boot
    
  6. Log in to the zones and complete the zone system identification.


    Vigor5# zlogin -C z1
    Vigor5# zlogin -C z2
    
  7. Close the terminal windows and disconnect from the zone consoles.

    After you have completed the zone system identification, disconnect from the windows that you previously opened.


    Vigor5# ~.
    
  8. Create the appropriate mount points and symlinks for the queue manager in each zone.


    Vigor5# zlogin z1 mkdir -p /var/mqm/log /var/mqm/qmgrs
    Vigor5# zlogin z1 ln -s /ZFSwmq1/log /var/mqm/log/qmgr1
    Vigor5# zlogin z1 ln -s /ZFSwmq1/qmgrs /var/mqm/qmgrs/qmgr1
    Vigor5#
    Vigor5# zlogin z2 mkdir -p /var/mqm/log /var/mqm/qmgrs
    Vigor5# zlogin z2 ln -s /ZFSwmq1/log /var/mqm/log/qmgr1
    Vigor5# zlogin z2 ln -s /ZFSwmq1/qmgrs /var/mqm/qmgrs/qmgr1
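
    You can optionally confirm that the mount points and symlinks resolve as expected in each zone:


    Vigor5# zlogin z1 ls -l /var/mqm/log/qmgr1 /var/mqm/qmgrs/qmgr1
    Vigor5# zlogin z2 ls -l /var/mqm/log/qmgr1 /var/mqm/qmgrs/qmgr1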
    
  9. Create the WebSphere MQ userid in the zones.


    Vigor5# zlogin z1 groupadd -g 1000 mqm
    Vigor5# zlogin z1 useradd -u 1000 -g 1000 -d /var/mqm mqm
    Vigor5#
    Vigor5# zlogin z2 groupadd -g 1000 mqm
    Vigor5# zlogin z2 useradd -u 1000 -g 1000 -d /var/mqm mqm
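
    You can optionally confirm that the mqm user and group were created identically in both zones:


    Vigor5# zlogin z1 id -a mqm
    Vigor5# zlogin z2 id -a mqm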
    
  10. Add the logical host name to /etc/hosts and /etc/inet/ipnodes in the zones.

    The following output shows the logical host name entry for qmgr1 in zones z1 and z2.


    Vigor5# zlogin z1 grep qmgr1 /etc/hosts /etc/inet/ipnodes
    /etc/hosts:192.168.1.150	qmgr1
    /etc/inet/ipnodes:192.168.1.150	qmgr1
    Vigor5# zlogin z2 grep qmgr1 /etc/hosts /etc/inet/ipnodes
    /etc/hosts:192.168.1.150	qmgr1
    /etc/inet/ipnodes:192.168.1.150	qmgr1

Example: Install WebSphere MQ in the Non-Global Zones

  1. Mount the WebSphere MQ software in the zones.

    Perform this step in the global zone.

    In this example, the WebSphere MQ software has been copied to node Vigor5 in the directory /export/software/ibm/wmqsv6.


    Vigor5# zlogin z1 mkdir -p /var/tmp/software
    Vigor5# zlogin z2 mkdir -p /var/tmp/software
    Vigor5#
    Vigor5# mount -F lofs /export/software /zones/z1/root/var/tmp/software
    Vigor5# mount -F lofs /export/software /zones/z2/root/var/tmp/software
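
    You can optionally confirm that the loopback mounts are in place:


    Vigor5# df -k /zones/z1/root/var/tmp/software /zones/z2/root/var/tmp/software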
    
  2. Mount the ZFS pools in non-global zone z1.

    Perform this step in the global zone.


    Vigor5# zpool export -f HAZpool1
    Vigor5# zpool export -f HAZpool2
    Vigor5# zpool import -R /zones/z1/root HAZpool1
    Vigor5# zpool import -R /zones/z1/root HAZpool2
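
    You can optionally confirm, from within zone z1, that both file systems are now available under the alternate root:


    Vigor5# zlogin z1 df -k /ZFSwmq1/log /ZFSwmq1/qmgrs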
    
  3. Set up the ZFS file systems for user and group mqm.


    Vigor5# zlogin z1 chown -R mqm:mqm /ZFSwmq1
    
  4. Log in to each zone in two separate windows.

    Perform this step from the global zone.


    Vigor5# zlogin z1
    Vigor5# zlogin z2
    
  5. Install the WebSphere MQ software in each zone.

    Perform this step within each new window that you used to log in to a zone.


    # cd /var/tmp/software/ibm/wmqsv6
    # ./mqlicense.sh
    # pkgadd -d .
    # exit
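
    From the global zone, you can optionally confirm the installed WebSphere MQ version in each zone. The dspmqver command ships with WebSphere MQ v6:


    Vigor5# zlogin z1 /opt/mqm/bin/dspmqver
    Vigor5# zlogin z2 /opt/mqm/bin/dspmqver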
    

Example: Verify WebSphere MQ

  1. Create and start the queue manager.

    Perform this step from the global zone.


    Vigor5# zlogin z1
    # su - mqm
    $ crtmqm qmgr1
    $ strmqm qmgr1
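
    You can optionally confirm that the queue manager is running before you proceed:


    $ dspmq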
    
  2. Create a persistent queue in the queue manager and put a message to the queue.

    Perform this step in zone z1.


    $ runmqsc qmgr1
    def ql(sc3test) defpsist(yes)
    end
    $ /opt/mqm/samp/bin/amqsput SC3TEST qmgr1
    test test test test test
    ^C
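
    To confirm that the test message was persisted, you can optionally display the current depth of the queue; it should show a depth of 1 after the single put above:


    $ echo "dis ql(sc3test) curdepth" | runmqsc qmgr1
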
  3. Stop the queue manager.

    Perform this step in zone z1.


    $ endmqm -i qmgr1
    $ exit
    # exit
    
  4. Copy the mqs.ini file between the two zones.

    Perform this step in the global zone.


    Vigor5# cp /zones/z1/root/var/mqm/mqs.ini /zones/z2/root/var/mqm/mqs.ini
    
  5. Export the ZFS pools and import them into the other zone.

    Perform this step in the global zone.


    Vigor5# zpool export -f HAZpool1
    Vigor5# zpool export -f HAZpool2
    Vigor5# zpool import -R /zones/z2/root HAZpool1
    Vigor5# zpool import -R /zones/z2/root HAZpool2
    
  6. Start the queue manager.

    Perform this step from the global zone.


    Vigor5# zlogin z2
    # su - mqm
    $ strmqm qmgr1
    
  7. Get the messages from the persistent queue and delete the queue.

    Perform this step in zone z2.


    $ /opt/mqm/samp/bin/amqsget SC3TEST qmgr1
    ^C
    $ runmqsc qmgr1
    delete ql(sc3test)
    end
    
  8. Stop the queue manager.

    Perform this step in zone z2.


    $ endmqm -i qmgr1
    $ exit
    # exit
    
  9. Export the ZFS pools from the zone.

    Perform this step in the global zone.


    Vigor5# zpool export -f HAZpool1
    Vigor5# zpool export -f HAZpool2
    

Example: Configure Cluster Resources for WebSphere MQ

Perform all steps within this example in the global zone.

  1. Register the required resource types.


    Vigor5# clresourcetype register SUNW.HAStoragePlus
    Vigor5# clresourcetype register SUNW.gds
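
    You can optionally confirm that both resource types are now registered:


    Vigor5# clresourcetype list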
    
  2. Create the resource group.


    Vigor5# clresourcegroup create -n Vigor5:z1,Vigor5:z2 wmq1-rg
    
  3. Create the logical hostname resource.


    Vigor5# clreslogicalhostname create -g wmq1-rg -h qmgr1 wmq1-lh
    
  4. Create the HAStoragePlus resource in the wmq1-rg resource group.


    Vigor5# clresource create -g wmq1-rg -t SUNW.HAStoragePlus \
    > -p Zpools=HAZpool1,HAZpool2 wmq1-ZFShas
    
  5. Enable the resource group.


    Vigor5# clresourcegroup online -M wmq1-rg
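
    You can optionally verify that the resource group and its resources are online:


    Vigor5# clresourcegroup status wmq1-rg
    Vigor5# clresource status -g wmq1-rg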
    

Example: Enable the WebSphere MQ Software to Run in the Cluster

Perform all steps within this example in the global zone.

  1. Create the Sun Cluster HA for WebSphere MQ queue manager configuration file.

    Either cat the following entries into /var/tmp/mgr1_config, or edit /opt/SUNWscmqs/mgr/util/mgr_config and then execute /opt/SUNWscmqs/mgr/util/mgr_register.


    Vigor5# cat > /var/tmp/mgr1_config <<-EOF
    # +++ Required parameters +++
    RS=wmq1-qmgr
    RG=wmq1-rg
    QMGR=qmgr1
    LH=wmq1-lh
    HAS_RS=wmq1-ZFShas
    LSR_RS=
    CLEANUP=YES
    SERVICES=NO
    USERID=mqm
    
    # +++ Optional parameters +++
    DB2INSTANCE=
    ORACLE_HOME=
    ORACLE_SID=
    START_CMD=
    STOP_CMD=
    
    # +++ Failover zone parameters +++
    # These parameters are only required when WebSphere MQ should run
    #  within a failover zone managed by the Sun Cluster Data Service
    # for Solaris Containers.
    RS_ZONE=
    PROJECT=default
    TIMEOUT=300
    EOF
    
  2. Register the Sun Cluster HA for WebSphere MQ queue manager resource.


    Vigor5# /opt/SUNWscmqs/mgr/util/mgr_register -f /var/tmp/mgr1_config
    
  3. Enable the Sun Cluster HA for WebSphere MQ queue manager resource.


    Vigor5# clresource enable wmq1-qmgr
    
  4. Create the Sun Cluster HA for WebSphere MQ listener configuration file.

    Either cat the following entries into /var/tmp/lsr1_config, or edit /opt/SUNWscmqs/lsr/util/lsr_config and then execute /opt/SUNWscmqs/lsr/util/lsr_register.


    Vigor5# cat > /var/tmp/lsr1_config <<-EOF
    # +++ Required parameters +++
    RS=wmq1-lsr
    RG=wmq1-rg
    QMGR=qmgr1
    PORT=1414
    IPADDR=
    BACKLOG=100
    LH=wmq1-lh
    QMGR_RS=wmq1-qmgr
    USERID=mqm
    
    # +++ Failover zone parameters +++
    # These parameters are only required when WebSphere MQ should run
    #  within a failover zone managed by the Sun Cluster Data Service
    # for Solaris Containers.
    RS_ZONE=
    PROJECT=default
    EOF
    
  5. Register the Sun Cluster HA for WebSphere MQ listener resource.


    Vigor5# /opt/SUNWscmqs/lsr/util/lsr_register -f /var/tmp/lsr1_config
    
  6. Enable the Sun Cluster HA for WebSphere MQ listener resource.


    Vigor5# clresource enable wmq1-lsr
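
    You can optionally confirm that both WebSphere MQ resources are now online:


    Vigor5# clresource status wmq1-qmgr wmq1-lsr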
    

Example: Verify the Sun Cluster HA for WebSphere MQ Resource Group

Perform this step in the global zone.

    Switch the WebSphere MQ resource group between the two non-global zones.


    Vigor5# for node in Vigor5:z2 Vigor5:z1
    do
       clrg switch -n $node wmq1-rg
       clrs status wmq1-qmgr
       clrs status wmq1-lsr
       clrg status wmq1-rg
    done
    

Example: Creating Multiple Instances

If another queue manager is required, you can repeat the following tasks. However, you must change the entries within each task to reflect your new queue manager.

  1. Repeat the following steps from Example: Prepare the Cluster for WebSphere MQ.

    Step 2 and Step 3.

  2. Repeat the following steps from Example: Configure two Non-Global Zones.

    Step 8 and Step 10.

  3. Repeat the following steps from Example: Install WebSphere MQ in the Non-Global Zones.

    Step 2.

  4. Repeat the following steps from Example: Verify WebSphere MQ.

    Step 1, Step 3, Step 4 and Step 9.

  5. Repeat the following steps from Example: Configure Cluster Resources for WebSphere MQ.

    Step 2, Step 3, Step 4 and Step 5.

  6. Repeat the following steps from Example: Enable the WebSphere MQ Software to Run in the Cluster.

    Step 1, Step 2 and Step 3.

    Repeat as required for any WebSphere MQ component.