Sun Cluster Data Service for WebSphere MQ Guide for Solaris OS

Registering and Configuring Sun Cluster HA for WebSphere MQ

This section contains the procedures you need to configure Sun Cluster HA for WebSphere MQ.

Procedure: How to Register and Configure Sun Cluster HA for WebSphere MQ

Use this procedure to configure Sun Cluster HA for WebSphere MQ as a failover data service. This procedure assumes that you installed the data service packages during your Sun Cluster installation.

If you did not install the Sun Cluster HA for WebSphere MQ packages as part of your initial Sun Cluster installation, go to How to Install the Sun Cluster HA for WebSphere MQ Packages using the scinstall Utility.

  1. Become superuser on one of the nodes in the cluster that will host WebSphere MQ.

  2. Register the SUNW.gds resource type.

    # scrgadm -a -t SUNW.gds
  3. Register the SUNW.HAStoragePlus resource type.

    # scrgadm -a -t SUNW.HAStoragePlus
  4. Create a failover resource group.

    # scrgadm -a -g WebSphere-MQ-failover-resource-group
  5. Create a resource for the WebSphere MQ Disk Storage.

    # scrgadm -a -j WebSphere-MQ-has-resource  \
    -g WebSphere-MQ-failover-resource-group   \
    -t SUNW.HAStoragePlus  \
    -x FilesystemMountPoints=WebSphere-MQ-instance-mount-points
  6. Create a resource for the WebSphere MQ Logical Hostname.

    # scrgadm -a -L -j WebSphere-MQ-lh-resource  \
    -g WebSphere-MQ-failover-resource-group  \
    -l WebSphere-MQ-logical-hostname
  7. Enable the failover resource group that now includes the WebSphere MQ Disk Storage and Logical Hostname resources.

    # scswitch -Z -g WebSphere-MQ-failover-resource-group
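
    For example, for a Queue Manager named qmgr1, steps 4 through 7 might be run with concrete names as follows. The resource group wmq1-rg, the resource names wmq1-has-res and wmq1-lh-res, the logical hostname wmq1-lh, and the mount point /global/mqm/qmgr1 are illustrative assumptions, not values mandated by the data service.

    ```shell
    # Create the failover resource group (illustrative names throughout)
    scrgadm -a -g wmq1-rg

    # Create the HAStoragePlus resource for the Queue Manager file systems
    scrgadm -a -j wmq1-has-res -g wmq1-rg \
        -t SUNW.HAStoragePlus \
        -x FilesystemMountPoints=/global/mqm/qmgr1

    # Create the logical hostname resource
    scrgadm -a -L -j wmq1-lh-res -g wmq1-rg -l wmq1-lh

    # Bring the resource group online, enabling its resources
    scswitch -Z -g wmq1-rg
    ```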
  8. Create and register each required WebSphere MQ component.

    Perform this step for the Queue Manager component (mgr), and repeat for each of the optional WebSphere MQ components that you use, replacing mgr with one of the following:

    chi - Channel Initiator

    csv - Command Server

    lsr - Dedicated Listener

    trm - Trigger Monitor

    Note –

    The chi component allows a channel initiator to be managed by Sun Cluster. However, by default WebSphere MQ starts the default channel initiation queue SYSTEM.CHANNEL.INITQ. If this channel initiation queue is required to be managed by the chi component, then you must code QueueManagerStartup: and Chinit=No on separate lines within the Queue Manager's qm.ini file. This prevents the Queue Manager from starting the default channel initiation queue; the chi component starts it instead.

    Note –

    The lsr component allows for multiple ports. For each port entry required, specify the port numbers separated by / in the PORT parameter within /opt/SUNWscmqs/lsr/util/lsr_config. The lsr component then starts a separate runmqlsr program for each port entry.
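
    For example, to have the lsr component listen on two ports, the PORT entry in /opt/SUNWscmqs/lsr/util/lsr_config would read as follows. The port numbers 1414 and 1415 are illustrative.

    ```shell
    # lsr_config fragment: two listener ports, separated by /
    # (the file is sourced by lsr_register)
    PORT=1414/1415
    ```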

    Note –

    The trm component allows for multiple trigger monitors. You must set the TRMQ parameter to file within /opt/SUNWscmqs/trm/util/trm_config before you run /opt/SUNWscmqs/trm/util/trm_register. The trm component then starts a trigger monitor for each entry in /opt/SUNWscmqs/trm/etc/<qmgr>_trm_queues, which must contain trigger monitor queue names, where <qmgr> is the name of your Queue Manager. You must create this file on each node within Sun Cluster that will run Sun Cluster HA for WebSphere MQ. Alternatively, the file can be a symbolic link to a file on a Global File System.
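
    As a hypothetical illustration, the queue-name file for a Queue Manager named qmgr1 could be built as follows. The queue name PAYMENTS.INITQ is an assumption, and a temporary path is used here; in a real deployment the file lives at /opt/SUNWscmqs/trm/etc/qmgr1_trm_queues.

    ```shell
    # Build the trigger monitor queue list (one queue name per line).
    # /tmp is used for illustration only.
    QMGR=qmgr1
    TRMQ_FILE=/tmp/${QMGR}_trm_queues
    cat > "$TRMQ_FILE" <<'EOF'
    SYSTEM.DEFAULT.INITIATION.QUEUE
    PAYMENTS.INITQ
    EOF
    # The trm component starts one trigger monitor per queue listed:
    wc -l < "$TRMQ_FILE"
    ```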

    # cd /opt/SUNWscmqs/mgr/util

    Edit the mgr_config file and follow the comments within that file, for example:

    # Copyright 2003 Sun Microsystems, Inc.  All rights reserved.
    # Use is subject to license terms.
    # This file will be sourced in by mgr_register and the parameters
    # listed below will be used.
    # These parameters can be customized in (key=value) form
    #          RS - name of the resource for the application
    #          RG - name of the resource group containing RS
    #        QMGR - name of the Queue Manager
    #        PORT - name of the Queue Manager port number
    #          LH - name of the LogicalHostname SC resource
    #      HAS_RS - name of the Queue Manager HAStoragePlus SC resource
    #     CLEANUP - Cleanup IPC entries YES or NO (Default CLEANUP=YES)
    #      USERID - name of userid to issue strmqm/endmqm commands 
    #               (Default USERID=mqm)
    #       +++ Optional parameters +++
    # DB2INSTANCE - name of the DB2 Instance name
    # ORACLE_HOME - name of the Oracle Home Directory
    #  ORACLE_SID - name of the Oracle SID
    #   START_CMD - pathname and name of the renamed strmqm program
    #    STOP_CMD - pathname and name of the renamed endmqm program
    # Note 1: Optional parameters
    #       Null entries for optional parameters are allowed if not used.
    # Note 2: XAResourceManager processing
    #       If DB2 will participate in global units of work then set
    #       DB2INSTANCE=
    #       If Oracle will participate in global units of work then set
    #       ORACLE_HOME=
    #       ORACLE_SID=
    # Note 3: Renamed strmqm/endmqm programs
    #       This is only recommended if WebSphere MQ is deployed onto 
    #       Global File Systems for qmgr/log files. You should specify 
    #       the full pathname/program, i.e. /opt/mqm/bin/<renamed_strmqm>
    # Note 4: Cleanup IPC
    #       Under normal shutdown and startup WebSphere MQ manages its
    #       cleanup of IPC resources with the following fix packs.
    #       MQSeries v5.2 Fix Pack 07 (CSD07) or later
    #       WebSphere MQ v5.3 Fix Pack 04 (CSD04) or later
    #       Please refer to APAR number IY38428.
    #       However, while running in a failover environment, the IPC keys
    #       that get generated will be different between nodes. As a result
    #       after a failover of a Queue Manager, some shared memory segments
    #       can remain allocated on the node although not used. 
    #       Although this does not cause WebSphere MQ a problem when starting
    #       or stopping (with the above fix packs applied), it can deplete
    #       the available swap space and in extreme situations a node may 
    #       run out of swap space. 
    #       To resolve this issue, setting CLEANUP=YES will ensure that 
    #       IPC shared memory segments for WebSphere MQ are removed whenever
    #       a Queue Manager is stopped. However IPC shared memory segments 
    #       are only removed under strict conditions, namely
    #       - The shared memory segment(s) are owned by
    #               CREATOR=mqm and CGROUP=mqm
    #       - The shared memory segment has no attached processes
    #       - The CPID and LPID process ids are not running
    #       - The shared memory removal is performed by userid mqm
    #       Setting CLEANUP=NO will not remove any shared memory segments.
    #       Setting CLEANUP=YES will cleanup shared memory segments under the
    #       conditions described above.

    The following is an example for WebSphere MQ Manager qmgr1.
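
    The example values below are illustrative (the resource names, port number, and defaults are assumptions, not values from the original example):

    ```shell
    # mgr_config fragment for Queue Manager qmgr1 (sourced by mgr_register)
    RS=wmq1-mgr-res
    RG=wmq1-rg
    QMGR=qmgr1
    PORT=1414
    LH=wmq1-lh-res
    HAS_RS=wmq1-has-res
    CLEANUP=YES
    USERID=mqm
    # Optional parameters left null because they are not used here
    DB2INSTANCE=
    ORACLE_HOME=
    ORACLE_SID=
    START_CMD=
    STOP_CMD=
    ```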


    After editing mgr_config, register the resource.

    # ./mgr_register
  9. Enable WebSphere MQ Manager protection (if required).

    Implement WebSphere MQ Manager protection only if you have deployed WebSphere MQ onto a Global File System; otherwise, skip to the next step. Refer to Configuration Requirements, in particular Example 4, for details on implementing WebSphere MQ Manager protection.

    You must repeat this on each node within Sun Cluster that will host Sun Cluster HA for WebSphere MQ.

  10. Enable each WebSphere MQ resource.

    Repeat this step for each WebSphere MQ component that you created and registered in the previous step.

    # scstat 

    # scswitch -e -j WebSphere-MQ-resource
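
    For example, with the illustrative resource names used earlier, listing the resources and then enabling the Queue Manager and a dedicated listener resource might look like this (wmq1-mgr-res and wmq1-lsr-res are assumed names):

    ```shell
    # Verify the resource names and their current states
    scstat -g

    # Enable each registered WebSphere MQ component resource
    scswitch -e -j wmq1-mgr-res
    scswitch -e -j wmq1-lsr-res
    ```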