Sun Cluster Data Service for WebSphere MQ Guide for Solaris OS

Installing and Configuring Sun Cluster HA for WebSphere MQ

This chapter explains how to install and configure Sun Cluster HA for WebSphere MQ.


Installing and Configuring Sun Cluster HA for WebSphere MQ

Table 1 lists the tasks for installing and configuring Sun Cluster HA for WebSphere MQ. Perform these tasks in the order that they are listed.

Table 1 Task Map: Installing and Configuring Sun Cluster HA for WebSphere MQ

• Plan the installation. See Sun Cluster HA for WebSphere MQ Overview and Planning the Sun Cluster HA for WebSphere MQ Installation and Configuration.

• Install and configure WebSphere MQ. See How to Install and Configure WebSphere MQ.

• Verify the installation and configuration. See How to Verify the Installation and Configuration of WebSphere MQ.

• Install the Sun Cluster HA for WebSphere MQ packages. See How to Install the Sun Cluster HA for WebSphere MQ Packages Using the scinstall Utility.

• Register and configure Sun Cluster HA for WebSphere MQ. See How to Register and Configure Sun Cluster HA for WebSphere MQ.

• Verify the Sun Cluster HA for WebSphere MQ installation and configuration. See How to Verify the Sun Cluster HA for WebSphere MQ Installation and Configuration.

• Upgrade Sun Cluster HA for WebSphere MQ. See Upgrading Sun Cluster HA for WebSphere MQ.

• Understand the Sun Cluster HA for WebSphere MQ fault monitor. See Understanding Sun Cluster HA for WebSphere MQ Fault Monitor.

• Turn on debug for Sun Cluster HA for WebSphere MQ. See How to turn on debug for Sun Cluster HA for WebSphere MQ.

Sun Cluster HA for WebSphere MQ Overview

WebSphere MQ messaging software enables business applications to exchange information across operating platforms in a way that is easy and straightforward for programmers to implement. Programs communicate by using the WebSphere MQ API, which assures once-only delivery and time-independent communication.

The Sun Cluster HA for WebSphere MQ data service provides a mechanism for orderly startup and shutdown, fault monitoring, and automatic failover of the WebSphere MQ service. Table 2 lists components protected by the Sun Cluster HA for WebSphere MQ data service.

Table 2 Protection of Components

Component            Protected by

Queue Manager        Sun Cluster HA for WebSphere MQ
Channel Initiator    Sun Cluster HA for WebSphere MQ
Command Server       Sun Cluster HA for WebSphere MQ
Listener             Sun Cluster HA for WebSphere MQ
Trigger Monitor      Sun Cluster HA for WebSphere MQ

Planning the Sun Cluster HA for WebSphere MQ Installation and Configuration

This section contains the information you need to plan your Sun Cluster HA for WebSphere MQ installation and configuration.


Note –

It is best practice to mount Global File Systems with the /global prefix and to mount Failover File Systems with the /local prefix.


Configuration Restrictions

This section provides a list of software and hardware configuration restrictions that apply to Sun Cluster HA for WebSphere MQ only. For restrictions that apply to all data services, see the Sun Cluster Release Notes.


Caution –

Your data service configuration might not be supported if you do not observe these restrictions.

Examples 1 and 2 show supported file-system layouts for WebSphere MQ queue managers on Failover File Systems and on Global File Systems, respectively.



Example 1 WebSphere MQ Managers with Failover File Systems


# ls -l /var/mqm
lrwxrwxrwx   1 root     other         11 Sep 17 16:53 /var/mqm -> /global/mqm
#
# ls -l /global/mqm/qmgrs
total 6
lrwxrwxrwx   1 root     other        512 Sep 17 09:57 @SYSTEM -> /var/mqm_local/qmgrs/@SYSTEM
lrwxrwxrwx   1 root     other         22 Sep 17 17:19 qmgr1 -> /local/mqm/qmgrs/qmgr1
lrwxrwxrwx   1 root     other         22 Sep 17 17:19 qmgr2 -> /local/mqm/qmgrs/qmgr2
#
# ls -l /global/mqm/log
total 4
lrwxrwxrwx   1 root     other         20 Sep 17 17:18 qmgr1 -> /local/mqm/log/qmgr1
lrwxrwxrwx   1 root     other         20 Sep 17 17:19 qmgr2 -> /local/mqm/log/qmgr2
#
# more /etc/vfstab (Subset of the output)
/dev/md/dg_d3/dsk/d30   /dev/md/dg_d3/rdsk/d30  /global/mqm             ufs  3  yes  logging,global
/dev/md/dg_d3/dsk/d33   /dev/md/dg_d3/rdsk/d33  /local/mqm/qmgrs/qmgr1  ufs  4  no   logging
/dev/md/dg_d3/dsk/d36   /dev/md/dg_d3/rdsk/d36  /local/mqm/log/qmgr1    ufs  4  no   logging
/dev/md/dg_d4/dsk/d43   /dev/md/dg_d4/rdsk/d43  /local/mqm/qmgrs/qmgr2  ufs  4  no   logging
/dev/md/dg_d4/dsk/d46   /dev/md/dg_d4/rdsk/d46  /local/mqm/log/qmgr2    ufs  4  no   logging
#


Example 2 WebSphere MQ Managers with Global File Systems


# ls -l /var/mqm
lrwxrwxrwx   1 root     other         11 Jan  8 14:17 /var/mqm -> /global/mqm
#
# ls -l /global/mqm/qmgrs
total 6
lrwxrwxrwx   1 root     other        512 Dec 16 09:57 @SYSTEM -> /var/mqm_local/qmgrs/@SYSTEM
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr1
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr2
#
# ls -l /global/mqm/log
total 4
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr1
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr2
#
# more /etc/vfstab (Subset of the output)
/dev/md/dg_d4/dsk/d40   /dev/md/dg_d4/rdsk/d40  /global/mqm             ufs  3  yes  logging,global
/dev/md/dg_d4/dsk/d43   /dev/md/dg_d4/rdsk/d43  /global/mqm/qmgrs/qmgr1 ufs  4  yes  logging,global
/dev/md/dg_d4/dsk/d46   /dev/md/dg_d4/rdsk/d46  /global/mqm/log/qmgr1   ufs  4  yes  logging,global
/dev/md/dg_d5/dsk/d53   /dev/md/dg_d5/rdsk/d53  /global/mqm/qmgrs/qmgr2 ufs  4  yes  logging,global
/dev/md/dg_d5/dsk/d56   /dev/md/dg_d5/rdsk/d56  /global/mqm/log/qmgr2   ufs  4  yes  logging,global

Configuration Requirements

The requirements in this section apply to Sun Cluster HA for WebSphere MQ only. You must meet these requirements before you proceed with your Sun Cluster HA for WebSphere MQ installation and configuration.


Caution –

Your data service configuration might not be supported if you do not adhere to these requirements.

If you deploy a WebSphere MQ queue manager's qmgr files and log files on a Global File System, you must prevent the queue manager from being started manually on another node. Example 3 shows a manual start attempt being refused after the protection described in Example 4 has been put in place.



Example 3 Mistaken manual attempt to start the WebSphere MQ Manager


# strmqm qmgr1
# Request to run </usr/bin/strmqm qmgr1> within SC3.0 has been refused
#

This protection is required only if you use a Global File System for the WebSphere MQ instance. Example 4 details the steps that you must take to implement it.



Example 4 Create symbolic links from strmqm and endmqm to check-start


# cd /opt/mqm/bin
#
# mv strmqm strmqm_sc3
# mv endmqm endmqm_sc3
#
# ln -s /opt/SUNWscmqs/mgr/bin/check-start strmqm
# ln -s /opt/SUNWscmqs/mgr/bin/check-start endmqm
#

Edit the /opt/SUNWscmqs/mgr/etc/config file and change the entries for START_COMMAND and STOP_COMMAND as follows. In this example, the renamed commands carry the suffix _sc3. You can choose another name.


# cat /opt/SUNWscmqs/mgr/etc/config
# Copyright 2003 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# Usage:
#       DEBUG=<RESOURCE_NAME> or ALL
#       START_COMMAND=/opt/mqm/bin/<renamed_strmqm_program>
#       STOP_COMMAND=/opt/mqm/bin/<renamed_endmqm_program>
#
DEBUG=
START_COMMAND=/opt/mqm/bin/strmqm_sc3
STOP_COMMAND=/opt/mqm/bin/endmqm_sc3
#

Installing and Configuring WebSphere MQ

This section contains the procedures you need to install and configure WebSphere MQ.

Procedure: How to Install and Configure WebSphere MQ

Steps
  1. Determine how WebSphere MQ will be deployed in Sun Cluster.

    • Determine how many WebSphere MQ instances will be deployed.

    • Determine which Cluster File System will be used by each WebSphere MQ instance.

  2. Mount WebSphere MQ Cluster File Systems.


    Note –

    If Failover File Systems will be used by the WebSphere MQ instance, you must mount these manually.

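    For example, a sketch that assumes the Failover File System vfstab entries shown in Example 1 (the mount points are illustrative). You would mount the file systems manually on the node that is to host the WebSphere MQ instance:

    # mount /local/mqm/qmgrs/qmgr1
    # mount /local/mqm/log/qmgr1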

  3. Install WebSphere MQ onto all nodes within Sun Cluster.

    It is recommended that you install WebSphere MQ onto local disks. For a discussion of the advantages and disadvantages of installing the software on a local disk versus a cluster file system, see “Determining the Location of the Application Binaries” on page 3 of the Sun Cluster Data Services Installation and Configuration Guide.

    • Install WebSphere MQ onto all nodes within Sun Cluster that will run WebSphere MQ, regardless of the location of the application binaries. This is required because the pkgadd for WebSphere MQ additionally sets up several symbolic links on the host.


      Note –

      Follow IBM's WebSphere MQ for Sun Solaris — Quick Beginnings manual to install WebSphere MQ.


  4. Create your WebSphere MQ Manager(s).

    WebSphere MQ V5.3 has a bug that occurs when the default setting LogDefaultPath=/var/mqm/log is in effect and you issue crtmqm to create your WebSphere MQ Manager: the crtmqm command fails with the message AMQ7064: Log path not valid or inaccessible.

    To work around this bug, specify the -ld parameter when creating the WebSphere MQ Manager, for example, crtmqm -ld /global/mqm/log/<qmgr> <qmgr>.

    This workaround causes another <qmgr> directory to appear, that is, /global/mqm/log/<qmgr>/<qmgr>, but it overcomes the bug.


    Note –

    This bug, which requires you to specify the -ld parameter when LogDefaultPath=/var/mqm/log is used, has been reported to IBM, and a fix is being developed.



Example 5 Create your WebSphere MQ V5.3 Manager with the -ld parameter


# crtmqm qmgr1
AMQ7064: Log path not valid or inaccessible.
#
# crtmqm -ld /global/mqm/log/qmgr1  qmgr1
WebSphere MQ queue manager created.
Creating or replacing default objects for qmgr1.
Default objects statistics : 31 created. 0 replaced. 0 failed.
Completing setup.
Setup completed.
#
# cd /global/mqm/log/qmgr1 
#
# ls -l
total 2
drwxrwx---   3 mqm      mqm          512 Jan 10 11:44 qmgr1 
#
# cd qmgr1 
#
# ls -l
total 12
drwxrwx---   2 mqm      mqm          512 Jan 10 11:44 active
-rw-rw----   1 mqm      mqm         4460 Jan 10 11:44 amqhlctl.lfh
#
# pwd
/global/mqm/log/qmgr1/qmgr1 
#
# cd /global/mqm/qmgrs/qmgr1
#
# more qm.ini
#*******************************************************************#
#* Module Name: qm.ini                                             *#
#* Type       : MQSeries queue manager configuration file          *#
#* Function   : Define the configuration of a single queue manager *#
#*                                                                 *#
#*******************************************************************#
#* Notes      :                                                    *#
#* 1) This file defines the configuration of the queue manager     *#
#*                                                                 *#
#*******************************************************************#
ExitPath:
   ExitsDefaultPath=/var/mqm/exits/
#*                                                                 *#
#*                                                                 *#
Log:
   LogPrimaryFiles=3
   LogSecondaryFiles=2
   LogFilePages=1024
   LogType=CIRCULAR
   LogBufferPages=0
   LogPath=/global/mqm/log/qmgr1/qmgr1/
   LogWriteIntegrity=TripleWrite
Service:
   Name=AuthorizationService
   EntryPoints=10
ServiceComponent:
   Service=AuthorizationService
   Name=MQSeries.UNIX.auth.service
   Module=/opt/mqm/lib/amqzfu
   ComponentDataSize=0
QueueManagerStartup: 
   Chinit=No
 # 

Verifying the Installation and Configuration of WebSphere MQ

This section contains the procedure you need to verify the installation and configuration.

Procedure: How to Verify the Installation and Configuration of WebSphere MQ

This procedure does not verify that your application is highly available because you have not installed your data service yet.

Steps
  1. Start the WebSphere MQ Manager, and check the installation.


    # su - mqm
    Sun Microsystems Inc.   SunOS 5.8       Generic February 2000
    $ strmqm qmgr1
    WebSphere MQ queue manager 'qmgr1' started.
    $ 
    $ runmqsc qmgr1
    5724-B41 (C) Copyright IBM Corp. 1994, 2002.  ALL RIGHTS RESERVED.
    Starting WebSphere MQ script Commands.
    
    
    def ql(test) defpsist(yes)
         1 : def ql(test) defpsist(yes)
    AMQ8006: WebSphere MQ queue created.
    end
         2 : end
    One MQSC command read.
    No commands have a syntax error.
    All valid MQSC commands were processed.
    $ 
    $ /opt/mqm/samp/bin/amqsput TEST qmgr1
    Sample AMQSPUT0 start
    target queue is TEST
    test test test test test test test
    
    Sample AMQSPUT0 end
    $ 
    $ /opt/mqm/samp/bin/amqsget TEST qmgr1
    Sample AMQSGET0 start
    message <test test test test test test test>
    ^C$ 
    $
    $ runmqsc qmgr1
    5724-B41 (C) Copyright IBM Corp. 1994, 2002.  ALL RIGHTS RESERVED.
    Starting WebSphere MQ script Commands.
    
    
    delete ql(test)
         1 : delete ql(test)
    AMQ8007: WebSphere MQ queue deleted.
    end
         2 : end
    One MQSC command read.
    No commands have a syntax error.
    All valid MQSC commands were processed.
    $ 
  2. Stop the WebSphere MQ Manager.


    # su - mqm
    Sun Microsystems Inc.   SunOS 5.8       Generic February 2000
    $ 
    $ endmqm -i qmgr1
    WebSphere MQ queue manager 'qmgr1' ending.
    WebSphere MQ queue manager 'qmgr1' ended.
    $

Installing the Sun Cluster HA for WebSphere MQ Packages

If you did not install the Sun Cluster HA for WebSphere MQ packages during your Sun Cluster installation, perform this procedure to install the packages. Perform this procedure on each cluster node where you are installing the Sun Cluster HA for WebSphere MQ packages. To complete this procedure, you need the Sun Cluster Agents CD-ROM.

If you are installing more than one data service simultaneously, perform the procedure in Installing the Software in Sun Cluster Software Installation Guide for Solaris OS.

Install the Sun Cluster HA for WebSphere MQ packages by using one of the following installation tools:

• The Web Start program

• The scinstall utility


Note –

If you are using Solaris 10, install these packages only in the global zone. To ensure that these packages are not propagated to any local zones that are created after you install the packages, use the scinstall utility to install these packages. Do not use the Web Start program.


Procedure: How to Install the Sun Cluster HA for WebSphere MQ Packages Using the Web Start Program

You can run the Web Start program with a command-line interface (CLI) or with a graphical user interface (GUI). The content and sequence of instructions in the CLI and the GUI are similar. For more information about the Web Start program, see the installer(1M) man page.

Steps
  1. On the cluster node where you are installing the Sun Cluster HA for WebSphere MQ packages, become superuser.

  2. (Optional) If you intend to run the Web Start program with a GUI, ensure that your DISPLAY environment variable is set.

  3. Insert the Sun Cluster Agents CD-ROM into the CD-ROM drive.

    If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/cdrom0 directory.

  4. Change to the Sun Cluster HA for WebSphere MQ component directory of the CD-ROM.

    The Web Start program for the Sun Cluster HA for WebSphere MQ data service resides in this directory.


    # cd /cdrom/cdrom0/components/SunCluster_HA_MQS_3.1
    
  5. Start the Web Start program.


    # ./installer
    
  6. When you are prompted, select the type of installation.

    • To install only the C locale, select Typical.

    • To install other locales, select Custom.

  7. Follow the instructions on the screen to install the Sun Cluster HA for WebSphere MQ packages on the node.

    After the installation is finished, the Web Start program provides an installation summary. This summary enables you to view logs that the Web Start program created during the installation. These logs are located in the /var/sadm/install/logs directory.

  8. Exit the Web Start program.

  9. Remove the Sun Cluster Agents CD-ROM from the CD-ROM drive.

    1. To ensure that the CD-ROM is not being used, change to a directory that does not reside on the CD-ROM.

    2. Eject the CD-ROM.


      # eject cdrom
      

Procedure: How to Install the Sun Cluster HA for WebSphere MQ Packages Using the scinstall Utility

Use this procedure to install the Sun Cluster HA for WebSphere MQ packages by using the scinstall utility. You need the Sun Cluster Agents CD-ROM to perform this procedure. This procedure assumes that you did not install the data service packages during your initial Sun Cluster installation.

If you installed the Sun Cluster HA for WebSphere MQ packages as part of your initial Sun Cluster installation, proceed to Registering and Configuring Sun Cluster HA for WebSphere MQ.

Otherwise, use this procedure to install the Sun Cluster HA for WebSphere MQ packages. Perform this procedure on all nodes that can run the Sun Cluster HA for WebSphere MQ data service.

Steps
  1. Load the Sun Cluster Agents CD-ROM into the CD-ROM drive.

  2. Run the scinstall utility with no options.

    This step starts the scinstall utility in interactive mode.

  3. Choose the menu option, Add Support for New Data Service to This Cluster Node.

    The scinstall utility prompts you for additional information.

  4. Provide the path to the Sun Cluster Agents CD-ROM.

    The utility refers to the CD as the “data services cd.”

  5. Specify the data service to install.

    The scinstall utility lists the data service that you selected and asks you to confirm your choice.

  6. Exit the scinstall utility.

  7. Unload the CD from the drive.

Registering and Configuring Sun Cluster HA for WebSphere MQ

This section contains the procedures you need to configure Sun Cluster HA for WebSphere MQ.

Procedure: How to Register and Configure Sun Cluster HA for WebSphere MQ

Use this procedure to configure Sun Cluster HA for WebSphere MQ as a failover data service. This procedure assumes that you installed the data service packages during your Sun Cluster installation.

If you did not install the Sun Cluster HA for WebSphere MQ packages as part of your initial Sun Cluster installation, go to How to Install the Sun Cluster HA for WebSphere MQ Packages using the scinstall Utility.

Steps
  1. Become superuser on one of the nodes in the cluster that will host WebSphere MQ.

  2. Register the SUNW.gds resource type.


    # scrgadm -a -t SUNW.gds
    
  3. Register the SUNW.HAStoragePlus resource type.


    # scrgadm -a -t SUNW.HAStoragePlus
    
  4. Create a failover resource group.


    # scrgadm -a -g WebSphere MQ-failover-resource-group
    
  5. Create a resource for the WebSphere MQ Disk Storage.


    # scrgadm -a -j WebSphere MQ-has-resource  \
    -g WebSphere MQ-failover-resource-group   \
    -t SUNW.HAStoragePlus  \
    -x FilesystemMountPoints=WebSphere MQ-instance-mount-points
    
  6. Create a resource for the WebSphere MQ Logical Hostname.


    # scrgadm -a -L -j WebSphere MQ-lh-resource  \
    -g WebSphere MQ-failover-resource-group  \
    -l WebSphere MQ-logical-hostname
    
  7. Enable the failover resource group that now includes the WebSphere MQ Disk Storage and Logical Hostname resources.


    # scswitch -Z -g WebSphere MQ-failover-resource-group
    
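    As a concrete sketch of steps 4 through 7, the following uses the resource and resource-group names from the mgr_config example in the next step (wmq-rg, wmq-has-res, wmq-lh-res), the Example 2 global file systems, and an illustrative logical hostname wmq-lh, which must resolve to an address on your network:

    # scrgadm -a -g wmq-rg
    # scrgadm -a -j wmq-has-res \
    -g wmq-rg \
    -t SUNW.HAStoragePlus \
    -x FilesystemMountPoints=/global/mqm,/global/mqm/qmgrs/qmgr1,/global/mqm/log/qmgr1
    # scrgadm -a -L -j wmq-lh-res \
    -g wmq-rg \
    -l wmq-lh
    # scswitch -Z -g wmq-rg
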
  8. Create and register each required WebSphere MQ component.

    Perform this step for the Queue Manager component (mgr), and repeat for each of the optional WebSphere MQ components that you use, replacing mgr with one of the following:

    chi - Channel Initiator

    csv - Command Server

    lsr - Dedicated Listener

    trm - Trigger Monitor


    Note –

    The chi component allows a channel initiator to be managed by Sun Cluster. However, by default WebSphere MQ starts the default channel initiation queue SYSTEM.CHANNEL.INITQ. If this channel initiation queue is to be managed by the chi component, you must code QueueManagerStartup: and Chinit=No on separate lines within the Queue Manager's qm.ini file. This prevents the Queue Manager from starting the default channel initiation queue; instead, the chi component starts it.

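    That is, the Queue Manager's qm.ini contains the following stanza, as also shown at the end of the qm.ini listing in Example 5:

    QueueManagerStartup:
       Chinit=No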


    Note –

    The lsr component allows for multiple ports. To configure more than one listener port, specify the port numbers separated by / in the PORT parameter within /opt/SUNWscmqs/lsr/util/lsr_config. The lsr component then starts a separate runmqlsr program for each port entry.

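    For example, a hypothetical lsr_config PORT entry for two listener ports (the port numbers are illustrative):

    PORT=1414/1415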


    Note –

    The trm component allows for multiple trigger monitors. You must specify file for the TRMQ parameter (that is, TRMQ=file) within /opt/SUNWscmqs/trm/util/trm_config before you run /opt/SUNWscmqs/trm/util/trm_register. This causes the trm component to start multiple trigger monitor entries from /opt/SUNWscmqs/trm/etc/<qmgr>_trm_queues, which must contain trigger monitor queue names, where <qmgr> is the name of your Queue Manager. You must create this file on each node within Sun Cluster that will run Sun Cluster HA for WebSphere MQ. Alternatively, the file can be a symbolic link to a file on a Global File System.

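    For example, a hypothetical /opt/SUNWscmqs/trm/etc/qmgr1_trm_queues file for Queue Manager qmgr1 (the queue names are illustrative):

    # cat /opt/SUNWscmqs/trm/etc/qmgr1_trm_queues
    SYSTEM.DEFAULT.INITIATION.QUEUE
    INITQ1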


    # cd /opt/SUNWscmqs/mgr/util
    

    Edit the mgr_config file and follow the comments within that file, for example:


    #
    # Copyright 2003 Sun Microsystems, Inc.  All rights reserved.
    # Use is subject to license terms.
    # 
    # This file will be sourced in by mgr_register and the parameters
    # listed below will be used.
    #
    # These parameters can be customized in (key=value) form
    #
    #          RS - name of the resource for the application
    #          RG - name of the resource group containing RS
    #        QMGR - name of the Queue Manager
    #        PORT - name of the Queue Manager port number
    #          LH - name of the LogicalHostname SC resource
    #      HAS_RS - name of the Queue Manager HAStoragePlus SC resource
    #     CLEANUP - Cleanup IPC entries YES or NO (Default CLEANUP=YES)
    #      USERID - name of userid to issue strmqm/endmqm commands 
    #               (Default USERID=mqm)
    #
    #       +++ Optional parameters +++
    #
    # DB2INSTANCE - name of the DB2 Instance name
    # ORACLE_HOME - name of the Oracle Home Directory
    #  ORACLE_SID - name of the Oracle SID
    #   START_CMD - pathname and name of the renamed strmqm program
    #    STOP_CMD - pathname and name of the renamed endmqm program
    #
    # Note 1: Optional parameters
    #
    #       Null entries for optional parameters are allowed if not used.
    #
    # Note 2: XAResourceManager processing
    #
    #       If DB2 will participate in global units of work then set
    #       DB2INSTANCE=
    #
    #       If Oracle will participate in global units of work then set
    #       ORACLE_HOME=
    #       ORACLE_SID=
    #
    # Note 3: Renamed strmqm/endmqm programs
    #
    #       This is only recommended if WebSphere MQ is deployed onto 
    #       Global File Systems for qmgr/log files. You should specify 
    #       the full pathname/program, i.e. /opt/mqm/bin/<renamed_strmqm>
    #
    # Note 4: Cleanup IPC
    #
    #       Under normal shutdown and startup WebSphere MQ manages its
    #       cleanup of IPC resources with the following fix packs.
    #
    #       MQSeries v5.2 Fix Pack 07 (CSD07) or later
    #       WebSphere MQ v5.3 Fix Pack 04 (CSD04) or later
    #
    #       Please refer to APAR number IY38428.
    #
    #       However, while running in a failover environment, the IPC keys
    #       that get generated will be different between nodes. As a result
    #       after a failover of a Queue Manager, some shared memory segments
    #       can remain allocated on the node although not used. 
    #
    #       Although this does not cause WebSphere MQ a problem when starting
    #       or stopping (with the above fix packs applied), it can deplete
    #       the available swap space and in extreme situations a node may 
    #       run out of swap space. 
    #
    #       To resolve this issue, setting CLEANUP=YES will ensure that 
    #       IPC shared memory segments for WebSphere MQ are removed whenever
    #       a Queue Manager is stopped. However IPC shared memory segments 
    #       are only removed under strict conditions, namely
    #
    #       - The shared memory segment(s) are owned by
    #               CREATOR=mqm and CGROUP=mqm
    #       - The shared memory segment has no attached processes
    #       - The CPID and LPID process ids are not running
    #       - The shared memory removal is performed by userid mqm
    #
    #       Setting CLEANUP=NO will not remove any shared memory segments.
    #
    #       Setting CLEANUP=YES will cleanup shared memory segments under the
    #       conditions described above.
    #

    The following is an example for WebSphere MQ Manager qmgr1.


    RS=wmq-qmgr-res
    RG=wmq-rg
    QMGR=qmgr1
    PORT=1414
    LH=wmq-lh-res
    HAS_RS=wmq-has-res
    CLEANUP=YES
    USERID=mqm
    DB2INSTANCE=
    ORACLE_HOME=
    ORACLE_SID=
    START_CMD=
    STOP_CMD=

    After editing mgr_config, register the resource.


    # ./mgr_register
    
  9. Enable WebSphere MQ Manager protection (if required).

    You should implement WebSphere MQ Manager protection only if you have deployed WebSphere MQ onto a Global File System. Refer to Configuration Requirements, in particular Example 4, for details about implementing WebSphere MQ Manager protection. Otherwise, skip to the next step.

    You must repeat this on each node within Sun Cluster that will host Sun Cluster HA for WebSphere MQ.

  10. Enable each WebSphere MQ resource.

    Repeat this step for each WebSphere MQ component as in the previous step.


    # scstat 
    

    # scswitch -e -j WebSphere MQ-resource
    

Verifying the Sun Cluster HA for WebSphere MQ Installation and Configuration

This section contains the procedure you need to verify that you installed and configured your data service correctly.

Procedure: How to Verify the Sun Cluster HA for WebSphere MQ Installation and Configuration

Steps
  1. Become superuser on one of the nodes in the cluster that will host WebSphere MQ.

  2. Ensure that all the WebSphere MQ resources are online by using scstat.


    # scstat 
    

    For each WebSphere MQ resource that is not online, use the scswitch command as follows:


    # scswitch -e -j WebSphere MQ-resource
    
  3. Run the scswitch command to switch the WebSphere MQ resource group to another cluster node, such as node2.


    # scswitch -z -g WebSphere MQ-failover-resource-group -h node2
    
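    For example, a sketch that uses the resource group name from the earlier mgr_config example (wmq-rg) and assumes a second cluster node named node2:

    # scswitch -z -g wmq-rg -h node2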

Upgrading Sun Cluster HA for WebSphere MQ

Additional configuration parameters for Sun Cluster HA for WebSphere MQ were introduced in Sun Cluster 3.1 9/04, as explained in the subsections that follow. If you need to modify the default value of a parameter, or set a value for a parameter without a default, you must upgrade Sun Cluster HA for WebSphere MQ.

Parameters for Configuring the MQ User

The following parameters for configuring the MQ user were introduced in Sun Cluster 3.1 9/04. Default values are defined for these parameters.

CLEANUP=YES

Specifies that unused shared memory segments that mqm creates are to be deleted.

USERID=mqm

Specifies that user ID mqm is to be used to issue mq commands.

Parameters for Configuring XAResourceManager Processing

XAResourceManager processing enables WebSphere MQ to manage global units of work with any combination of the following databases:

• DB2

• Oracle

The following parameters for configuring XAResourceManager processing were introduced in Sun Cluster 3.1 9/04. Null values are defined for these parameters.

DB2INSTANCE=name

Specifies the DB2 instance name for XAResourceManager.

ORACLE_HOME=directory

Specifies the Oracle home directory for XAResourceManager.

ORACLE_SID=identifier

Specifies the Oracle SID for XAResourceManager.
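
For example, hypothetical mgr_config entries for XAResourceManager processing (the instance name, directory, and SID are illustrative):

DB2INSTANCE=db2inst1
ORACLE_HOME=/oracle/product/9i
ORACLE_SID=ORCL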

Parameters for Enabling WebSphere MQ to Manage the Startup of WebSphere MQ Queue Manager

You might deploy a WebSphere MQ queue manager's qmgr files and log files on a global file system. In this situation, rename the strmqm program and the endmqm program to prevent the queue manager from being manually started on another node. If you rename these programs, the WebSphere MQ framework manages the startup of WebSphere MQ queue manager.

The following parameters for enabling WebSphere MQ to manage the startup of WebSphere MQ queue manager were introduced in Sun Cluster 3.1 9/04. Null values are defined for these parameters.

START_CMD=start-program

Specifies the full path name and filename of the renamed strmqm program.

STOP_CMD=stop-program

Specifies the full path name and filename of the renamed endmqm program.

Procedure: How to Upgrade Sun Cluster HA for WebSphere MQ

If you need to modify the default value of a parameter, or set a value for a parameter without a default, you must remove and reregister the Sun Cluster HA for WebSphere MQ resource for which you are changing the parameter.

Only the USERID=mqm parameter applies to the resources for all components, namely the Queue Manager, Channel Initiator, Command Server, Listener, and Trigger Monitor components.

The remaining parameters that were introduced in Sun Cluster 3.1 9/04 apply only to the resource for the Queue Manager component.

Perform this task for each WebSphere MQ resource that you are modifying.


Note –

Perform this task only if you are setting or modifying parameters that were introduced in Sun Cluster 3.1 9/04.


Steps
  1. Save the resource definitions.


    # scrgadm -pvv -j resource > file1
    
  2. Disable the resource.


    # scswitch -n -j resource
    
  3. Remove the resource.


    # scrgadm -r -j resource
    
  4. Configure and register the resource.

    1. Go to the directory that contains the configuration file and the registration file for the resource.


      # cd /opt/SUNWscmqs/prefix/util
      
    2. Edit the configuration file for the resource.


      # vi prefix_config
      
    3. Run the registration file for the resource.


      # ./prefix_register
      

    prefix denotes the component to which the file applies, as follows:

    • mgr denotes the Queue Manager component.

    • chi denotes the Channel Initiator component.

    • csv denotes the Command Server component.

    • lsr denotes the Listener component.

    • trm denotes the Trigger Monitor component.


    Note –

    Only the mgr_config file contains all the parameters that are introduced in Sun Cluster 3.1 9/04. The remaining files contain only the USERID=mqm parameter.


  5. Save the resource definitions.


    # scrgadm -pvv -j resource > file2
    
  6. Compare the updated definitions to the definitions that you saved before you updated the resource.

    Comparing these definitions enables you to determine if any existing extension properties have changed, for example, time-out values.


    # diff file1 file2
    
  7. Amend any resource properties that were reset.


    # scrgadm -c -j resource -x|-y property=value
    
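    For example, a sketch that assumes the diff output shows that the Start_timeout property was reset; the resource name comes from the earlier mgr_config example, and the value is illustrative:

    # scrgadm -c -j wmq-qmgr-res -y Start_timeout=300
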
  8. Bring the resource online.


    # scswitch -e -j resource
    

Understanding Sun Cluster HA for WebSphere MQ Fault Monitor

This section describes the Sun Cluster HA for WebSphere MQ fault monitor's probing algorithm and functionality, and states the conditions, messages, and recovery actions associated with unsuccessful probing.

For conceptual information on fault monitors, see the Sun Cluster Concepts Guide.

Resource Properties

Sun Cluster HA for WebSphere MQ fault monitor uses the same resource properties as resource type SUNW.gds. Refer to the SUNW.gds(5) man page for a complete list of resource properties used.

Probing Algorithm and Functionality

Debug Sun Cluster HA for WebSphere MQ

Procedure: How to turn on debug for Sun Cluster HA for WebSphere MQ

Sun Cluster HA for WebSphere MQ can be used by multiple WebSphere MQ instances. You can turn on debug for all WebSphere MQ instances or for a particular WebSphere MQ instance.

Each WebSphere MQ component has a DEBUG file in /opt/SUNWscmqs/xxx/etc, where xxx is a three-character abbreviation for the respective WebSphere MQ component.

These files allow you to turn on debug for all WebSphere MQ instances or for a specific WebSphere MQ instance on a particular node within Sun Cluster. If you require debug to be turned on for Sun Cluster HA for WebSphere MQ across the whole Sun Cluster, repeat this step on all nodes within Sun Cluster.

Perform this step for the Queue Manager component (mgr), then repeat for each of the optional WebSphere MQ components that requires debug output, on each node of Sun Cluster as required.

Steps
  1. Edit /etc/syslog.conf and change daemon.notice to daemon.debug.


    # grep daemon /etc/syslog.conf
    *.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
    *.alert;kern.err;daemon.err                     operator
    #

    Change daemon.notice to daemon.debug and restart syslogd. The following output from grep daemon /etc/syslog.conf shows that daemon.debug has been set.


    # grep daemon /etc/syslog.conf
    *.err;kern.debug;daemon.debug;mail.crit        /var/adm/messages
    *.alert;kern.err;daemon.err                    operator
    #
    # pkill -1 syslogd
    #
  2. Edit /opt/SUNWscmqs/mgr/etc/config and change DEBUG= to DEBUG=ALL or DEBUG=resource, where resource is the name of the WebSphere MQ resource.


    # cat /opt/SUNWscmqs/mgr/etc/config
    #
    # Copyright 2003 Sun Microsystems, Inc.  All rights reserved.
    # Use is subject to license terms.
    #
    # Usage:
    #       DEBUG=<RESOURCE_NAME> or ALL
    #       START_COMMAND=/opt/mqm/bin/<renamed_strmqm_program>
    #       STOP_COMMAND=/opt/mqm/bin/<renamed_endmqm_program>
    #
    DEBUG=ALL
    START_COMMAND=
    STOP_COMMAND=
    #

    Note –

    To turn off debug, reverse the steps above.