Sun Cluster Data Service for WebSphere MQ Guide for Solaris OS

Appendix B Deployment Example: Installing a WebSphere MQ Queue Manager in a Failover Zone

This appendix presents a complete example of how to install and configure a WebSphere MQ queue manager in a failover zone. It presents a simple single-node cluster configuration. If you need to install the application in any other configuration, refer to the general-purpose procedures presented elsewhere in this manual.

Target Cluster Configuration

This example uses a single-node cluster with the following node and zone names:

Vigor5

The physical node, which owns the file system.

Vigor5:z3

A whole root non-global zone named z3.

Software Configuration

This deployment example uses the following software products:

• Sun Cluster core software

• Sun Cluster data service for WebSphere MQ

• Sun Cluster data service for Solaris Containers

• WebSphere MQ V6 for Solaris x86-64

This example assumes that you have already installed and established your cluster. It illustrates installation and configuration of the data service application only.

Assumptions

The instructions in this example were developed with the assumptions listed at the beginning of the next section, Installing and Configuring WebSphere MQ in a Failover Zone.

Installing and Configuring WebSphere MQ in a Failover Zone


Note –

This deployment example is designed for a single-node cluster. It is provided simply as a concise reference for an example installation and configuration of WebSphere MQ.

This deployment example is not meant to be a precise guide to installing and configuring WebSphere MQ.

If you need to install WebSphere MQ in any other configuration, refer to the general-purpose procedures elsewhere in this manual.


The instructions in this deployment example assume that you are using WebSphere MQ V6 for Solaris x86-64 and will configure WebSphere MQ on a ZFS highly available local file system.

The failover zonepath cannot use a ZFS highly available local file system; instead, the zonepath uses a Solaris Volume Manager (SVM) highly available local file system.

The cluster resource group is simply brought online and is not failed over to another node, because this deployment example uses a single-node cluster.

The tasks you must perform to install and configure WebSphere MQ in the failover zone are as follows:

• Example: Prepare the Cluster for WebSphere MQ

• Example: Configure the Failover Zone

• Example: Install WebSphere MQ in the failover zone

• Example: Verify WebSphere MQ

• Example: Configure Cluster Resources for WebSphere MQ

• Example: Enable the WebSphere MQ Software to Run in the Cluster

• Example: Verify the Sun Cluster HA for WebSphere MQ resource group

• Example: Creating Multiple Instances

Example: Prepare the Cluster for WebSphere MQ

  1. Install and configure the cluster as instructed in Sun Cluster Software Installation Guide for Solaris OS.

    Install the following cluster software components on node Vigor5.

    • Sun Cluster core software

    • Sun Cluster data service for WebSphere MQ

    • Sun Cluster data service for Solaris Containers

  2. Add the logical host name to /etc/hosts and /etc/inet/ipnodes in the global zone and failover zone.

    The following output shows the logical host name entry for qmgr3 in the global zone.


    Vigor5# grep qmgr3 /etc/hosts /etc/inet/ipnodes
    /etc/hosts:192.168.1.152	qmgr3
    /etc/inet/ipnodes:192.168.1.152	qmgr3
  3. Install and configure a Zettabyte File System


    Note –

    The following zpool definition represents a very basic configuration for deployment on a single-node cluster.

    You should not consider this example for use within a production deployment; it is a very basic configuration for testing or development purposes only.


    Create the ZFS pools.


    Vigor5# zpool create -m /ZFSwmq3/log HAZpool1 c1t1d0
    Vigor5# zpool create -m /ZFSwmq3/qmgrs HAZpool2 c1t4d0
    
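    To confirm that the pools were created with the expected mount points, you can list them. This is an optional check; the sizes and health reported will depend on your disks.


    Vigor5# zpool list
    Vigor5# zpool status HAZpool1
    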
  4. Install and Configure a Solaris Volume Manager File System


    Note –

    The following metaset definitions represent a very basic configuration for deployment on a single-node cluster.

    You should not consider this example for use within a production deployment; it is a very basic configuration for testing or development purposes only.


    1. Create a SVM Disk Set.


      Vigor5# metaset -s dg_d1 -a -h Vigor5
      
    2. Add a Disk to the SVM Disk Set


      Vigor5# metaset -s dg_d1 -a /dev/did/rdsk/d2
      
    3. Add the Disk Information to the metainit utility input file


      Vigor5# cat >> /etc/lvm/md.tab <<-EOF
      dg_d1/d100      -m      dg_d1/d110
      dg_d1/d110      1 1     /dev/did/rdsk/d2s0
      EOF
      
    4. Configure the metadevices


      Vigor5# metainit -s dg_d1 -a
      
    5. Create a Mount Point for the SVM Highly Available Local File System


      Vigor5# mkdir /FOZones
      
    6. Add the SVM highly available local file system to /etc/vfstab


      Vigor5# cat >> /etc/vfstab <<-EOF
      /dev/md/dg_d1/dsk/d100 /dev/md/dg_d1/rdsk/d100 /FOZones ufs 3 no logging
      EOF
      
    7. Create the File System


      Vigor5# newfs /dev/md/dg_d1/rdsk/d100
      
    8. Mount the File System


      Vigor5# mount /FOZones
      
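    As an optional check, you can confirm that the metadevices exist and that the file system is mounted. The metastat output will vary with your disk layout.


    Vigor5# metastat -s dg_d1
    Vigor5# df -k /FOZones
    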

Example: Configure the Failover Zone

In this task you will create a whole root failover non-global zone on node Vigor5.

  1. Create a non-global zone to be used as the failover zone


    Vigor5# cat > /tmp/z3 <<-EOF
    create -b
    set zonepath=/FOZones/z3
    set autoboot=false
    add inherit-pkg-dir
    set dir=/opt/SUNWscmqs
    end
    EOF
    
  2. Configure the non-global failover zone, using the file you created.


    Vigor5# zonecfg -z z3 -f /tmp/z3
    
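    If you want to review the resulting zone configuration before installing the zone, you can display it:


    Vigor5# zonecfg -z z3 info
    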
  3. Install the zone.


    Vigor5# zoneadm -z z3 install
    
  4. Boot the zone.

    Perform this step after the installation of the zone is complete.


    Vigor5# zoneadm -z z3 boot
    
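    You can confirm that the zone has reached the running state before continuing:


    Vigor5# zoneadm list -cv
    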
  5. Log in to the zone and complete the zone system identification.

    Open another window and issue the following command.


    Vigor5# zlogin -C z3
    
  6. Disconnect from the zone console and close the terminal window.

    After you have completed the zone system identification, disconnect from the zone and close the window you previously opened.


    Vigor5# ~.
    Vigor5# exit
    
  7. Create the appropriate mount points and symlinks for WebSphere MQ in the zone.


    Vigor5# zlogin z3 mkdir -p /var/mqm/log /var/mqm/qmgrs
    Vigor5# zlogin z3 ln -s /ZFSwmq3/log /var/mqm/log/qmgr3
    Vigor5# zlogin z3 ln -s /ZFSwmq3/qmgrs /var/mqm/qmgrs/qmgr3
    
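    As an optional check, you can verify that the mount points and symbolic links were created as expected:


    Vigor5# zlogin z3 ls -l /var/mqm/log /var/mqm/qmgrs
    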
  8. Create the WebSphere MQ user ID in the zone.


    Vigor5# zlogin z3 groupadd -g 1000 mqm
    Vigor5# zlogin z3 useradd -u 1000 -g 1000 -d /var/mqm mqm
    
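    You can verify the new user and group from the global zone:


    Vigor5# zlogin z3 id mqm
    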
  9. Add the logical host name to /etc/hosts and /etc/inet/ipnodes in the zone

    The following output shows the logical host name entry for qmgr3 in zone z3.


    Vigor5# zlogin z3 grep qmgr3 /etc/hosts /etc/inet/ipnodes
    /etc/hosts:192.168.1.152	qmgr3
    /etc/inet/ipnodes:192.168.1.152	qmgr3

Example: Install WebSphere MQ in the failover zone

  1. Mount the WebSphere MQ software in the zone.

    In this example, the WebSphere MQ software has been copied to node Vigor5 in directory /export/software/ibm/wmqsv6.


    Vigor5# zlogin z3 mkdir -p /var/tmp/software
    Vigor5#
    Vigor5# mount -F lofs /export/software /FOZones/z3/root/var/tmp/software
    
  2. Mount the ZFS pools in the zone.


    Vigor5# zpool export -f HAZpool1
    Vigor5# zpool export -f HAZpool2
    Vigor5# zpool import -R /FOZones/z3/root HAZpool1
    Vigor5# zpool import -R /FOZones/z3/root HAZpool2
    
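    As a quick check, and assuming the pools mounted under the zone root as shown above, you can confirm that the ZFS file systems are now visible inside the zone:


    Vigor5# zlogin z3 df -k /ZFSwmq3/log /ZFSwmq3/qmgrs
    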
  3. Set up the ZFS file systems for user and group mqm.


    Vigor5# zlogin z3 chown -R mqm:mqm /ZFSwmq3
    
  4. Log in to the failover zone in a separate window.


    Vigor5# zlogin z3
    
  5. Install the WebSphere MQ software in the failover zone.

    Perform this step within the new window you used to log in to the zone.


    # cd /var/tmp/software/ibm/wmqsv6
    # ./mqlicense.sh
    # pkgadd -d .
    # exit
    
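    To confirm that the WebSphere MQ packages installed successfully, you can display the installed version from the global zone. The dspmqver command is part of a standard WebSphere MQ V6 installation in /opt/mqm/bin.


    Vigor5# zlogin z3 /opt/mqm/bin/dspmqver
    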

Example: Verify WebSphere MQ

  1. Create and start a queue manager.

    Perform this step from the global zone.


    Vigor5# zlogin z3
    # su - mqm
    $ crtmqm qmgr3
    $ strmqm qmgr3
    
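    You can confirm that the queue manager is running with the dspmq command; qmgr3 should be reported with a status of Running.


    $ dspmq
    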
  2. Create a persistent queue in the queue manager and put a message to the queue.

    Perform this step in zone z3.


    $ runmqsc qmgr3
    def ql(sc3test) defpsist(yes)
    end
    $ /opt/mqm/samp/bin/amqsput SC3TEST qmgr3
    test test test test test
    ^C
  3. Stop the queue manager.

    Perform this step in zone z3.


    $ endmqm -i qmgr3
    $ exit
    # exit
    
  4. Unmount and mount the ZFS file systems in the zone.

    Perform this step in the global zone.


    Vigor5# zpool export -f HAZpool1
    Vigor5# zpool export -f HAZpool2
    Vigor5# zpool import -R /FOZones/z3/root HAZpool1
    Vigor5# zpool import -R /FOZones/z3/root HAZpool2
    
  5. Start the queue manager.

    Perform this step from the global zone.


    Vigor5# zlogin z3
    # su - mqm
    $ strmqm qmgr3
    
  6. Get the messages from the persistent queue in the queue manager and delete the queue.

    Perform this step in zone z3.


    $ /opt/mqm/samp/bin/amqsget SC3TEST qmgr3
    ^C
    $ runmqsc qmgr3
    delete ql(sc3test)
    end
    
  7. Stop the queue manager.

    Perform this step in zone z3.


    $ endmqm -i qmgr3
    $ exit
    # exit
    
  8. Unmount the ZFS file systems from the zone.

    Perform this step in the global zone.


    Vigor5# zpool export -f HAZpool1
    Vigor5# zpool export -f HAZpool2
    
  9. Halt the failover zone.

    Perform this step in the global zone.


    Vigor5# zoneadm -z z3 halt
    
  10. Unmount the SVM zonepath.

    Perform this step in the global zone.


    Vigor5# umount -f /FOZones
    

Example: Configure Cluster Resources for WebSphere MQ

  1. Register the necessary resource types on the single-node cluster.


    Vigor5# clresourcetype register SUNW.HAStoragePlus
    Vigor5# clresourcetype register SUNW.gds
    
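    Optionally, verify that both resource types are now registered:


    Vigor5# clresourcetype list
    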
  2. Create the resource group.


    Vigor5# clresourcegroup create -n Vigor5 wmq3-rg
    
  3. Create the logical host.


    Vigor5# clreslogicalhostname create -g wmq3-rg -h qmgr3 wmq3-lh
    
  4. Create the SVM HAStoragePlus resource in the wmq3-rg resource group.


    Vigor5# clresource create -g wmq3-rg -t SUNW.HAStoragePlus \
    > -p FilesystemMountPoints=/FOZones wmq3-SVMhas
    
  5. Create the ZFS HAStoragePlus resource in the wmq3-rg resource group.


    Vigor5# clresource create -g wmq3-rg -t SUNW.HAStoragePlus \
    > -p Zpools=HAZpool1,HAZpool2 wmq3-ZFShas
    
  6. Enable the resource group.


    Vigor5# clresourcegroup online -M wmq3-rg
    
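    You can check that the resource group and its resources are online before continuing:


    Vigor5# clresourcegroup status wmq3-rg
    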
  7. Create the Sun Cluster HA for Solaris Containers configuration file.


    Vigor5# cat > /var/tmp/sczbt_config <<-EOF
    RS=wmq3-FOZ
    RG=wmq3-rg
    PARAMETERDIR=/FOZones
    SC_NETWORK=true
    SC_LH=wmq3-lh
    FAILOVER=true
    HAS_RS=wmq3-SVMhas,wmq3-ZFShas
    
    Zonename=z3
    Zonebootopt=
    Milestone=multi-user-server
    Mounts="/ZFSwmq3/log /ZFSwmq3/qmgrs"
    EOF
    
  8. Register the Sun Cluster HA for Solaris Containers data service.


    Vigor5# /opt/SUNWsczone/sczbt/util/sczbt_register -f /var/tmp/sczbt_config
    
  9. Enable the failover zone resource


    Vigor5# clresource enable wmq3-FOZ
    

Example: Enable the WebSphere MQ Software to Run in the Cluster

  1. Create the Sun Cluster HA for WebSphere MQ queue manager configuration file.


    Vigor5# cat > /var/tmp/mgr3_config <<-EOF
    # +++ Required parameters +++
    RS=wmq3-qmgr
    RG=wmq3-rg
    QMGR=qmgr3
    LH=wmq3-lh
    HAS_RS=wmq3-ZFShas
    LSR_RS=
    CLEANUP=YES
    SERVICES=NO
    USERID=mqm
    
    # +++ Optional parameters +++
    DB2INSTANCE=
    ORACLE_HOME=
    ORACLE_SID=
    START_CMD=
    STOP_CMD=
    
    # +++ Failover zone parameters +++
    # These parameters are only required when WebSphere MQ should run
    #  within a failover zone managed by the Sun Cluster Data Service
    # for Solaris Containers.
    RS_ZONE=wmq3-FOZ
    PROJECT=default
    TIMEOUT=300
    EOF
    
  2. Register the Sun Cluster HA for WebSphere MQ data service.


    Vigor5# /opt/SUNWscmqs/mgr/util/mgr_register -f /var/tmp/mgr3_config
    
  3. Enable the resource.


    Vigor5# clresource enable wmq3-qmgr
    

Example: Verify the Sun Cluster HA for WebSphere MQ resource group

    Check the status of the WebSphere MQ resources.


    Vigor5# clrs status wmq3-FOZ
    Vigor5# clrs status wmq3-qmgr
    Vigor5# clrg status wmq3-rg
    

Example: Creating Multiple Instances

If another queue manager is required, you can repeat the following tasks. However, you must change the entries within each task to reflect your new queue manager.

  1. Repeat the following steps from Example: Prepare the Cluster for WebSphere MQ.

    Step 2 and Step 3.

  2. Repeat the following steps from Example: Configure the Failover Zone.

    Step 7 and Step 9.

  3. Repeat the following steps from Example: Install WebSphere MQ in the failover zone.

    Step 2.

  4. Repeat the following steps from Example: Verify WebSphere MQ.

    Step 1, Step 3 and Step 8.

  5. Repeat the following steps from Example: Configure Cluster Resources for WebSphere MQ.

    Step 3 and Step 5.

    After creating these resources you must enable them using clresource enable resource before continuing with the next step.

  6. Repeat the following steps from Example: Enable the WebSphere MQ Software to Run in the Cluster.

    Step 1, Step 2 and Step 3.

    Also repeat as required for any WebSphere MQ component.