Sun Cluster Data Service for WebSphere MQ Guide for Solaris OS

Installing and Configuring Sun Cluster HA for WebSphere MQ

This chapter explains how to install and configure Sun Cluster HA for WebSphere MQ.

This chapter contains the following sections.

• Sun Cluster HA for WebSphere MQ Overview
• Overview of Installing and Configuring Sun Cluster HA for WebSphere MQ
• Planning the Sun Cluster HA for WebSphere MQ Installation and Configuration
• Installing and Configuring WebSphere MQ
• Verifying the Installation and Configuration of WebSphere MQ
• Installing the Sun Cluster HA for WebSphere MQ Packages
• Registering and Configuring Sun Cluster HA for WebSphere MQ
• Verifying the Sun Cluster HA for WebSphere MQ Installation and Configuration
• Upgrading Sun Cluster HA for WebSphere MQ
• Understanding the Sun Cluster HA for WebSphere MQ Fault Monitor
• Debug Sun Cluster HA for WebSphere MQ

Sun Cluster HA for WebSphere MQ Overview

The Sun Cluster HA for WebSphere MQ data service provides a mechanism for the orderly startup and shutdown, fault monitoring, and automatic failover of the WebSphere MQ service.

The following components can be protected by the Sun Cluster HA for WebSphere MQ data service within the global zone, a whole root non-global zone, or a whole root failover non-global zone.

Queue Manager
Channel Initiator
Command Server
Listener
Trigger Monitor

Overview of Installing and Configuring Sun Cluster HA for WebSphere MQ

The following table summarizes the tasks for installing and configuring Sun Cluster HA for WebSphere MQ and provides cross-references to detailed instructions for performing these tasks. Perform the tasks in the order that they are listed in the table.

Table 1 Tasks for Installing and Configuring Sun Cluster HA for WebSphere MQ

Task: Plan the installation
Instructions: Planning the Sun Cluster HA for WebSphere MQ Installation and Configuration

Task: Install and configure the WebSphere MQ software
Instructions: How to Install and Configure WebSphere MQ

Task: Verify the installation and configuration
Instructions: How to Verify the Installation and Configuration of WebSphere MQ

Task: Install Sun Cluster HA for WebSphere MQ packages
Instructions: Installing the Sun Cluster HA for WebSphere MQ Packages

Task: Register and configure Sun Cluster HA for WebSphere MQ resources
Instructions: How to Register and Configure Sun Cluster HA for WebSphere MQ

Task: Verify the Sun Cluster HA for WebSphere MQ installation and configuration
Instructions: How to Verify the Sun Cluster HA for WebSphere MQ Installation and Configuration

Task: Upgrade the Sun Cluster HA for WebSphere MQ data service
Instructions: Upgrading Sun Cluster HA for WebSphere MQ

Task: Tune the Sun Cluster HA for WebSphere MQ fault monitor
Instructions: Understanding the Sun Cluster HA for WebSphere MQ Fault Monitor

Task: Debug Sun Cluster HA for WebSphere MQ
Instructions: How to turn on debug for Sun Cluster HA for WebSphere MQ

Planning the Sun Cluster HA for WebSphere MQ Installation and Configuration

This section contains the information you need to plan your Sun Cluster HA for WebSphere MQ installation and configuration.

Configuration Restrictions

The configuration restrictions in the subsections that follow apply only to Sun Cluster HA for WebSphere MQ.


Caution –

Your data service configuration might not be supported if you do not observe these restrictions.


Restriction for the supported configurations of Sun Cluster HA for WebSphere MQ

The Sun Cluster HA for WebSphere MQ data service can only be configured as a failover service.

Single or multiple instances of WebSphere MQ can be deployed in the cluster.

WebSphere MQ can be deployed in the global zone, a whole root non-global zone, or a whole root failover non-global zone. See Restriction for multiple WebSphere MQ instances for more information.

The Sun Cluster HA for WebSphere MQ data service supports different versions of WebSphere MQ; however, you must verify that the Sun Cluster HA for WebSphere MQ data service has been qualified against the WebSphere MQ version that you intend to deploy.
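
If you are unsure which WebSphere MQ version is installed on a node, you can display it with the WebSphere MQ version command. A minimal sketch, assuming WebSphere MQ 6.0 or later (older releases provide the mqver command instead of dspmqver):


# su - mqm
$ dspmqver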

Restriction for the Location of WebSphere MQ files

The WebSphere MQ files are the queue manager data and log files, which are stored in /var/mqm/qmgrs/queue-manager and /var/mqm/log/queue-manager.

These WebSphere MQ files need to be placed on shared storage, as either a cluster file system or a highly available local file system.

Refer to Step 5 and Step 6 in How to Install and Configure WebSphere MQ for more information.

Restriction for multiple WebSphere MQ instances

The Sun Cluster HA for WebSphere MQ data service can support multiple WebSphere MQ instances, potentially with different versions.

If you intend to deploy multiple WebSphere MQ instances with different versions you will need to consider deploying WebSphere MQ in separate whole root non-global zones.

The purpose of the following discussion is to help you decide how to use whole root non-global zones to deploy multiple WebSphere MQ instances and then to determine what Nodelist entries are required.

Within these examples:


Note –

Although these examples show non-global zones z1 and z2, you may also use global as the zone name or omit the zone entry within the Nodelist property value to use the global zone.



Example 1 Run multiple WebSphere MQ instances in the same failover resource group.

Create a single failover resource group that will contain all the WebSphere MQ instances in the same non-global zones across node1 and node2.


# clresourcegroup create -n node1:z1,node2:z1 RG1


Example 2 Run multiple WebSphere MQ instances in separate failover resource groups.

Create multiple failover resource groups that will each contain one WebSphere MQ instance in the same non-global zones across node1 and node2.


# clresourcegroup create -n node1:z1,node2:z1 RG1
# clresourcegroup create -n node1:z1,node2:z1 RG2


Example 3 Run multiple WebSphere MQ instances within separate failover resource groups and zones.

Create multiple failover resource groups that will each contain one WebSphere MQ instance in separate non-global zones across node1 and node2.


# clresourcegroup create -n node1:z1,node2:z1 RG1
# clresourcegroup create -n node1:z2,node2:z2 RG2


Example 4 Run multiple WebSphere MQ instances in separate failover resource groups that contain separate failover zones across node1 and node2.

Create multiple failover resource groups that will each contain a failover zone. Each failover zone can then contain one or more WebSphere MQ instances.


# clresourcegroup create -n node1,node2 RG1
# clresourcegroup create -n node1,node2 RG2


Note –

If your requirement is simply to make WebSphere MQ highly available, you should consider choosing a global or non-global zone deployment over a failover zone deployment. Deploying WebSphere MQ within a failover zone incurs additional failover time to boot and halt the failover zone.


Configuration Requirements

The configuration requirements in this section apply only to Sun Cluster HA for WebSphere MQ.


Caution –

If your data service configuration does not conform to these requirements, the data service configuration might not be supported.


Determine which Solaris zone WebSphere MQ will use

Solaris zones provide a means of creating virtualized operating system environments within an instance of the Solaris 10 OS. Solaris zones allow one or more applications to run in isolation from other activity on your system. For complete information about installing and configuring a Solaris Container, see System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

You must determine which Solaris zone WebSphere MQ will run in. WebSphere MQ can run within a global zone, non-global zone or in a failover zone configuration. Table 2 provides some reasons to help you decide.


Note –

WebSphere MQ can be deployed within the global zone, whole root non-global zone or whole root failover non-global zone, also referred to as a failover zone.


Table 2 Choosing the appropriate Solaris Zone for WebSphere MQ

Zone type: Global Zone
Reasons:
  • Only one instance of WebSphere MQ will be installed.
  • Non-global zones are not required.

Zone type: Non-global Zone
Reasons:
  • Several WebSphere MQ instances need to be consolidated and isolated from each other.
  • Different versions of WebSphere MQ will be installed.
  • Failover testing of WebSphere MQ between non-global zones on the same node is required.

Zone type: Failover Zone
Reasons:
  • You require WebSphere MQ to run in the same zone regardless of which node the failover zone is running on.


Note –

If your requirement is simply to make WebSphere MQ highly available, you should consider choosing a global or non-global zone deployment over a failover zone deployment. Deploying WebSphere MQ within a failover zone incurs additional failover time to boot and halt the failover zone.


Requirements if multiple WebSphere MQ instances are deployed on cluster file systems

If a cluster file system is being used for the WebSphere MQ files, it is possible to manually start the same queue manager on more than one node of the cluster at the same time.


Note –

Although it is possible, you should not attempt this, because doing so will cause severe damage to the WebSphere MQ files.


Although it is expected that no one would manually start the same queue manager on separate nodes of the cluster at the same time, the Sun Cluster HA for WebSphere MQ data service does not by itself prevent someone from doing so by mistake.

To prevent this from happening, you must implement one of the following two solutions.

  1. Use a highly available local file system for the WebSphere MQ files.

    This is the recommended approach, because the WebSphere MQ files are mounted on only one node of the cluster at a time, which ensures that the queue manager can be started on only one node at a time.

  2. Create a symbolic link for /opt/mqm/bin/strmqm and /opt/mqm/bin/endmqm to /opt/SUNWscmqs/mgr/bin/check-start.

    /opt/SUNWscmqs/mgr/bin/check-start provides a mechanism to prevent manually starting or stopping the queue manager, by verifying that the start or stop is being attempted by the Sun Cluster HA for WebSphere MQ data service.

    /opt/SUNWscmqs/mgr/bin/check-start reports the following error if an attempt is made to manually start or stop the queue manager.


    $ strmqm qmgr1
    $ Request to run </usr/bin/strmqm qmgr1> within Sun Cluster has been refused

    If a cluster file system is used for the WebSphere MQ files, you must create a symbolic link for strmqm and endmqm to /opt/SUNWscmqs/mgr/bin/check-start and inform the Sun Cluster HA for WebSphere MQ data service of this change.

    To do this, you must perform the following on each node of the cluster.


    # cd /opt/mqm/bin
    #
    # mv strmqm strmqm_sc3
    # mv endmqm endmqm_sc3
    #
    # ln -s /opt/SUNWscmqs/mgr/bin/check-start strmqm
    # ln -s /opt/SUNWscmqs/mgr/bin/check-start endmqm
    

    After renaming strmqm and endmqm, you must use these new program names (strmqm_sc3 and endmqm_sc3) for the START_CMD and STOP_CMD variables when you edit the /opt/SUNWscmqs/mgr/util/mgr_config file in Step 7 in How to Register and Configure Sun Cluster HA for WebSphere MQ.
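
    For example, the relevant entries in mgr_config would then be set to the renamed programs. This is a minimal sketch only; the comments within mgr_config describe the exact form that the START_CMD and STOP_CMD variables expect:


    START_CMD=strmqm_sc3
    STOP_CMD=endmqm_sc3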


    Note –

    If you implement this workaround, then you must back it out whenever you need to apply any maintenance to WebSphere MQ. Afterwards, you must again apply this workaround.

    Instead, the recommended approach is to use a highly available local file system for the WebSphere MQ files.


Installing and Configuring WebSphere MQ

This section contains the procedures you need to install and configure WebSphere MQ.

How to Install and Configure WebSphere MQ

Use this procedure to install and configure WebSphere MQ in the cluster.

  1. Determine how many WebSphere MQ instances will be used.

    Refer to Restriction for multiple WebSphere MQ instances for more information.

  2. Determine which Solaris zone to use.

    Refer to Determine which Solaris zone WebSphere MQ will use for more information.

  3. If a zone will be used, create the whole root non-global zone or failover zone.

    Refer to System Administration Guide: Solaris Containers-Resource Management and Solaris Zones for complete information about installing and configuring a zone.

    Refer to Sun Cluster Data Service for Solaris Containers Guide for complete information about creating a failover zone.
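
    If you choose a whole root non-global zone, the zone can be created along the following lines. This is a minimal sketch only; the zone name z1 and the zonepath are assumptions, and the complete set of configuration options is described in the guides referenced above:


    # zonecfg -z z1
    zonecfg:z1> create -b
    zonecfg:z1> set zonepath=/zones/z1
    zonecfg:z1> set autoboot=false
    zonecfg:z1> commit
    zonecfg:z1> exit
    # zoneadm -z z1 install
    # zoneadm -z z1 boot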

  4. If a non-global zone or failover zone is being used, ensure the zone is booted.

    Repeat this step on all nodes of the cluster for a non-global zone and on one node of the cluster if a failover zone is being used.

    Boot the zone if it is not running.


    # zoneadm list -v
    # zoneadm -z zonename boot
    
  5. Determine how WebSphere MQ should be deployed in the cluster.

    WebSphere MQ can be deployed onto a cluster file system or a highly available local file system on the cluster. The following discussion will help you determine the correct approach to take.

    Within this section, a single instance or multiple instances of WebSphere MQ will be considered within a global zone, non-global zone, or failover zone.

    In each scenario, file system options for /var/mqm and the WebSphere MQ files will be listed together with a recommendation.

    1. Single Instance of WebSphere MQ

      1. Global zone deployment

        /var/mqm

        Can be deployed on a cluster file system, highly available local file system or on local storage on each cluster node.

        It is recommended to deploy /var/mqm on local storage on each cluster node.

        /var/mqm/qmgrs/queue-manager and /var/mqm/log/queue-manager

        Can be deployed on a cluster file system or highly available local file system.

        It is recommended to deploy /var/mqm/qmgrs/queue-manager and /var/mqm/log/queue-manager on a highly available local file system.

      2. Non-global zone deployment

        /var/mqm

        Can be deployed on a highly available local file system or on non-global zone local storage on each cluster node.

        It is recommended to deploy /var/mqm on non-global zone local storage on each cluster node.

        /var/mqm/qmgrs/queue-manager and /var/mqm/log/queue-manager

        Must be deployed on a highly available local file system.

      3. Failover zone deployment

        If considering a failover zone, you must be aware that a failover zone will incur additional failover time to boot/halt the failover zone.

        /var/mqm

        Can be deployed on a highly available local file system or in the failover zone's zonepath.

        It is recommended to deploy /var/mqm on the failover zone's zonepath.

        /var/mqm/qmgrs/queue-manager and /var/mqm/log/queue-manager

        Must be deployed on a highly available local file system.

    2. Multiple Instances of WebSphere MQ

      1. Global zone deployment

        /var/mqm

        Can be deployed on a cluster file system or on local storage on each cluster node.

        It is recommended to deploy /var/mqm on local storage on each cluster node.

        /var/mqm/qmgrs/queue-manager and /var/mqm/log/queue-manager

        Can be deployed on a cluster file system or highly available local file system.

        It is recommended to deploy /var/mqm/qmgrs/queue-manager and /var/mqm/log/queue-manager on a highly available local file system.

      2. Non-global zone deployment

        /var/mqm

        Must be deployed on non-global zone local storage on each cluster node.

        /var/mqm/qmgrs/queue-manager and /var/mqm/log/queue-manager

        Must be deployed on a highly available local file system.

      3. Failover zone deployment

        If considering a failover zone, you must be aware that a failover zone will incur additional failover time to boot/halt the failover zone.

        /var/mqm

        Can be deployed on a highly available local file system or on the failover zone's zonepath.

        It is recommended to deploy /var/mqm on the failover zone's zonepath.

        /var/mqm/qmgrs/queue-manager and /var/mqm/log/queue-manager

        Must be deployed on a highly available local file system.


    Note –

    Refer to Appendix A, Deployment Example: Installing a WebSphere MQ Queue Manager in Non-Global Zones and Appendix B, Deployment Example: Installing a WebSphere MQ Queue Manager in a Failover Zone for examples of how to set up the WebSphere MQ files.


  6. Create a cluster file system or highly available local file system for the WebSphere MQ files.

    Within this step you will create file systems for the WebSphere MQ files and /var/mqm. Once you have determined how WebSphere MQ should be deployed in the cluster, you can choose one of the substeps below.

    • Create the WebSphere MQ files and /var/mqm on cluster file systems by using Step a.

    • Create the WebSphere MQ files on SVM highly available local file systems and /var/mqm on cluster file system by using Step b.

    • Create the WebSphere MQ files on ZFS highly available local file systems and /var/mqm on local storage or within a failover zone's zonepath by using Step c.

    1. WebSphere MQ files and /var/mqm on cluster file systems.

      Within this deployment:

      • The WebSphere MQ files are deployed on cluster file systems.

      • The WebSphere MQ instances are qmgr1 and qmgr2.

      • /var/mqm uses a cluster file system with a symbolic link for /var/mqm/qmgrs/@SYSTEM to a local directory (/var/mqm_local/qmgrs/@SYSTEM) on each node in the cluster.


        Note –

        Refer to Step d for more information about setting up this symbolic link.



      # ls -l /var/mqm
      lrwxrwxrwx   1 root     other         11 Jan  8 14:17 /var/mqm ->
       /global/mqm
      #  
      # ls -l /global/mqm/qmgrs
      total 6
      lrwxrwxrwx   1 root      other          512 Dec 16 09:57 @SYSTEM -> 
       /var/mqm_local/qmgrs/@SYSTEM
      drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr1
      drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr2
      # 
      # ls -l /global/mqm/log
      total 4
      drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr1
      drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr2
      #
      # more /etc/vfstab (Subset of the output)
      /dev/md/dg_d4/dsk/d40   /dev/md/dg_d4/rdsk/d40  /global/mqm
           ufs     3       yes     logging,global
      /dev/md/dg_d4/dsk/d43   /dev/md/dg_d4/rdsk/d43  /global/mqm/qmgrs/qmgr1
       ufs     4       yes     logging,global
      /dev/md/dg_d4/dsk/d46   /dev/md/dg_d4/rdsk/d46  /global/mqm/log/qmgr1
         ufs     4       yes     logging,global
      /dev/md/dg_d5/dsk/d53   /dev/md/dg_d5/rdsk/d53  /global/mqm/qmgrs/qmgr2
       ufs     4       yes     logging,global
      /dev/md/dg_d5/dsk/d56   /dev/md/dg_d5/rdsk/d56  /global/mqm/log/qmgr2
         ufs     4       yes     logging,global
    2. WebSphere MQ files on SVM highly available local file systems and /var/mqm on cluster file system.

      Within this deployment:

      • The WebSphere MQ files are deployed on SVM highly available local file systems.

      • The WebSphere MQ instances are qmgr1 and qmgr2.

      • /var/mqm uses a cluster file system with a symbolic link for /var/mqm/qmgrs/@SYSTEM to a local directory (/var/mqm_local/qmgrs/@SYSTEM) on each node in the cluster.


        Note –

        Refer to Step d for more information about setting up this symbolic link.



      # ls -l /var/mqm
      lrwxrwxrwx   1 root     other         11 Sep 17 16:53 /var/mqm ->
       /global/mqm
      #
      # ls -l /global/mqm/qmgrs
      total 6
      lrwxrwxrwx   1 root      other          512 Sep 17 09:57 @SYSTEM -> 
       /var/mqm_local/qmgrs/@SYSTEM
      lrwxrwxrwx   1 root     other         22 Sep 17 17:19 qmgr1 ->
       /local/mqm/qmgrs/qmgr1
      lrwxrwxrwx   1 root     other         22 Sep 17 17:19 qmgr2 ->
       /local/mqm/qmgrs/qmgr2
      #
      # ls -l /global/mqm/log
      total 4
      lrwxrwxrwx   1 root     other         20 Sep 17 17:18 qmgr1 ->
       /local/mqm/log/qmgr1
      lrwxrwxrwx   1 root     other         20 Sep 17 17:19 qmgr2 ->
       /local/mqm/log/qmgr2
      #
      # more /etc/vfstab (Subset of the output)
      /dev/md/dg_d4/dsk/d40   /dev/md/dg_d4/rdsk/d40  /global/mqm
           ufs     3       yes     logging,global
      /dev/md/dg_d4/dsk/d43   /dev/md/dg_d4/rdsk/d43  /local/mqm/qmgrs/qmgr1
       ufs     4       no     logging
      /dev/md/dg_d4/dsk/d46   /dev/md/dg_d4/rdsk/d46  /local/mqm/log/qmgr1
         ufs     4       no     logging
      /dev/md/dg_d5/dsk/d53   /dev/md/dg_d5/rdsk/d53  /local/mqm/qmgrs/qmgr2
       ufs     4       no     logging
      /dev/md/dg_d5/dsk/d56   /dev/md/dg_d5/rdsk/d56  /local/mqm/log/qmgr2
         ufs     4       no     logging
    3. WebSphere MQ files on ZFS highly available local file systems and /var/mqm on local storage or within a failover zone's zonepath.

      Within this deployment:

      • The WebSphere MQ files are deployed on ZFS highly available local file systems.

      • The WebSphere MQ instances are qmgr1 and qmgr2.

      • /var/mqm uses local storage on each cluster node or the zonepath of a failover zone.

        As /var/mqm is on a local file system, you must copy /var/mqm/mqs.ini from the node where the queue manager was created to all other nodes or zones in the cluster where the queue manager will run.


        Note –

        Refer to Step 10 for more information about copying /var/mqm/mqs.ini.



      # df -k /var/mqm
      Filesystem            kbytes    used   avail capacity  Mounted on
      /                    59299764 25657791 33048976    44%    /
      #
      # ls -l /var/mqm/qmgrs
      total 6
      drwxrwsr-x   2 mqm      mqm          512 Sep 11 11:42 @SYSTEM
      lrwxrwxrwx   1 mqm      mqm           14 Sep 11 11:45 qmgr1 -> /ZFSwmq1/qmgrs
      lrwxrwxrwx   1 mqm      mqm           14 Sep 11 11:50 qmgr2 -> /ZFSwmq2/qmgrs
      #
      # ls -l /var/mqm/log
      total 4
      lrwxrwxrwx   1 mqm      mqm           12 Sep 11 11:44 qmgr1 -> /ZFSwmq1/log
      lrwxrwxrwx   1 mqm      mqm           12 Sep 11 11:54 qmgr2 -> /ZFSwmq2/log
      #
      # df -k /ZFSwmq1
      Filesystem            kbytes    used   avail capacity  Mounted on
      HAZpool1             4096453   13180 4083273     1%    /ZFSwmq1
      #
      # df -k /ZFSwmq2
      Filesystem            kbytes    used   avail capacity  Mounted on
      HAZpool2             4096453   13133 4083320     1%    /ZFSwmq2
    4. Cluster file system is used for /var/mqm.

      Within this deployment:

      • If /var/mqm is placed on shared storage as a cluster file system, a symbolic link is made from /var/mqm/qmgrs/@SYSTEM to the local directory /var/mqm_local/qmgrs/@SYSTEM.

      • You must perform this step on all nodes in the cluster only if /var/mqm is a cluster file system.


      # mkdir -p /var/mqm_local/qmgrs/@SYSTEM
      # mkdir -p /var/mqm/qmgrs
      # ln -s /var/mqm_local/qmgrs/@SYSTEM /var/mqm/qmgrs/@SYSTEM
      

      This restriction is required because WebSphere MQ uses keys to build internal control structures. Mounting /var/mqm as a cluster file system with a symbolic link for /var/mqm/qmgrs/@SYSTEM to a local directory ensures that any derived shared memory keys are unique on each node.

      If multiple queue managers are required and your queue manager was created before you set up a symbolic link for /var/mqm/qmgrs/@SYSTEM, you must copy the contents, with permissions, of /var/mqm/qmgrs/@SYSTEM to /var/mqm_local/qmgrs/@SYSTEM before creating the symbolic link.

      You must stop all queue managers before doing this and perform this on each node of the cluster.


      # mkdir -p /var/mqm_local/qmgrs/@SYSTEM
      # cd /var/mqm/qmgrs
      # cp -rp @SYSTEM/* /var/mqm_local/qmgrs/@SYSTEM
      # rm -r @SYSTEM
      # ln -s /var/mqm_local/qmgrs/@SYSTEM @SYSTEM
      
  7. Mount the highly available local file system.

    Perform this step on one node of the cluster.

    1. If a non-ZFS highly available file system is being used for the WebSphere MQ files.

      Ensure the node has ownership of the disk set or disk group.

      For Solaris Volume Manager.


      # metaset -s disk-set -t
      

      For Veritas Volume Manager.


      # vxdg -C import disk-group
      # vxdg -g disk-group startall
      
      1. If the global zone is being used for WebSphere MQ.


        # mount websphere-mq-highly-available-local-file-system
        
      2. If a non-global zone or failover zone is being used for WebSphere MQ.

        Create the mount point on all zones of the cluster that are being used for WebSphere MQ.

        Mount the highly available local file system on one of the zones being used.


        # zlogin zonename mkdir websphere-mq-highly-available-local-file-system
        #
        # mount -F lofs websphere-mq-highly-available-local-file-system \
        > /zonepath/root/websphere-mq-highly-available-local-file-system
        
    2. If a ZFS highly available file system is being used for WebSphere MQ.


      # zpool export -f HAZpool
      # zpool import -R /zonepath/root HAZpool
      
  8. Install WebSphere MQ on all nodes or zones of the cluster.

    After you have created and mounted the appropriate file systems for the WebSphere MQ files and /var/mqm, you must install WebSphere MQ on each node of the cluster, in the global zone, non-global zone, or failover zone as required.

    Follow the IBM WebSphere MQ for Sun Solaris Quick Beginnings manual to install WebSphere MQ.
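
    On Solaris, the installation is typically performed with the standard packaging tools from the WebSphere MQ installation media. The following is a minimal sketch only, with an assumed media path; follow the Quick Beginnings manual for the authoritative procedure and the complete package list:


    # cd /cdrom/cdrom0
    # ./mqlicense.sh
    # pkgadd -d . mqm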

  9. Create the WebSphere MQ queue manager.

    Follow the IBM WebSphere MQ for Sun Solaris Quick Beginnings manual to create a WebSphere MQ queue manager.
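
    For example, a queue manager named qmgr1 (one of the instance names used in the examples in this guide) could be created as the mqm user along the following lines. This is a minimal sketch; refer to the Quick Beginnings manual for the crtmqm options appropriate to your installation:


    # su - mqm
    $ crtmqm qmgr1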

  10. If a local file system is used for /var/mqm copy /var/mqm/mqs.ini to all nodes or zones of the cluster.

    Within this deployment:

    • If /var/mqm/mqs.ini is placed on local storage as a local file system, you must copy /var/mqm/mqs.ini from the node or zone where the queue manager was created to all other nodes or zones in the cluster where the queue manager will run.

    • You must perform this step on all nodes or zones in the cluster only if /var/mqm is a local file system.

    1. If the global zone is being used for WebSphere MQ.


      # rcp /var/mqm/mqs.ini remote-node:/var/mqm/mqs.ini
      
    2. If a non-global zone or failover zone is being used for WebSphere MQ.


      # rcp /zonepath/root/var/mqm/mqs.ini \
      > remote-node:/zonepath/root/var/mqm/mqs.ini
      

Verifying the Installation and Configuration of WebSphere MQ

This section contains the procedure you need to verify the installation and configuration.

How to Verify the Installation and Configuration of WebSphere MQ

This procedure does not verify that your application is highly available because you have not yet installed your data service.

Perform this procedure on one node or zone of the cluster unless a specific step indicates otherwise.

  1. Ensure the zone is booted, if a non-global zone or failover zone is being used.

    Repeat this step on all nodes of the cluster for a non-global zone and on one node of the cluster if a failover zone is being used.

    Boot the zone if it is not running.


    # zoneadm list -v
    # zoneadm -z zonename boot
    
  2. Log in to the zone, if a non-global zone or failover zone is being used.


    # zlogin zonename
    
  3. Start the queue manager, create a persistent queue and put a test message to that queue.


    # su - mqm
    $ strmqm queue-manager
    $ runmqsc queue-manager
    def ql(sc3test) defpsist(yes)
    end
    $
    $ /opt/mqm/samp/bin/amqsput SC3TEST queue-manager
    test test test test test
    ^C
  4. Stop the queue manager.


    $ endmqm -i queue-manager
    $ exit
    
  5. Log out of the zone, if a non-global zone or failover zone is being used.


    # exit
    
  6. Unmount the highly available local file system.

    Perform this step in the global zone only.

    You should unmount the highly available file system that you mounted in Step 7 in How to Install and Configure WebSphere MQ.

    1. If a non-ZFS highly available local file system is being used for WebSphere MQ.

      1. If the global zone is being used for WebSphere MQ.


        # umount websphere-mq-highly-available-local-file-system
        
      2. If a non-global zone or failover zone is being used for WebSphere MQ.

        Unmount the highly available local file system from the zone.


        # umount /zonepath/root/websphere-mq-highly-available-local-file-system
        
    2. If a ZFS highly available file system is being used for WebSphere MQ.


      # zpool export -f HAZpool
      
  7. Relocate the shared storage to the other node.

    Perform this step on another node of the cluster.

    1. If a non-ZFS highly available local file system is being used for the WebSphere MQ files.

      Ensure the node has ownership of the disk set or disk group.

      For Solaris Volume Manager.


      # metaset -s disk-set -t
      

      For Veritas Volume Manager.


      # vxdg -C import disk-group
      # vxdg -g disk-group startall
      
      1. If the global zone is being used for WebSphere MQ.


        # mount websphere-mq-highly-available-local-file-system
        
      2. If a non-global zone or failover zone is being used for WebSphere MQ.

        Create the mount point on all zones of the cluster that are being used for WebSphere MQ.

        Mount the highly available local file system on one of the zones being used.


        # zlogin zonename mkdir websphere-mq-highly-available-local-file-system
        #
        # mount -F lofs websphere-mq-highly-available-local-file-system \
        > /zonepath/root/websphere-mq-highly-available-local-file-system
        
    2. If a ZFS highly available file system is being used for WebSphere MQ.


      # zpool import -R /zonepath/root HAZpool
      
  8. Log in to the zone, if a non-global zone or failover zone is being used.

    Perform this step on the other node of the cluster.


    # zlogin zonename
    
  9. Start the queue manager, get the test message and delete the queue.

    Perform this step on the other node or zone of the cluster.


    # su - mqm
    $ strmqm queue-manager
    $ /opt/mqm/samp/bin/amqsget SC3TEST queue-manager
    ^C
    $ runmqsc queue-manager
    delete ql(sc3test)
    end
    
  10. Stop the queue manager.

    Perform this step on the other node or zone of the cluster.


    $ endmqm -i queue-manager
    $ exit
    
  11. Log out of the zone, if a non-global zone or failover zone is being used.


    # exit
    
  12. Unmount the highly available local file system.

    Perform this step in the global zone only.

    You should unmount the highly available file system that you mounted in Step 7 in How to Install and Configure WebSphere MQ.

    1. If a non-ZFS highly available local file system is being used for WebSphere MQ.

      1. If the global zone is being used for WebSphere MQ.


        # umount websphere-mq-highly-available-local-file-system
        
      2. If a non-global zone or failover zone is being used for WebSphere MQ.

        Unmount the highly available local file system from the zone.


        # umount /zonepath/root/websphere-mq-highly-available-local-file-system
        
    2. If a ZFS highly available file system is being used for WebSphere MQ.


      # zpool export -f HAZpool
      
  13. Shut down the zone, if a failover zone is being used.


    Note –

    This step is only required if a failover zone is being used.



    # zlogin zonename halt
    

Installing the Sun Cluster HA for WebSphere MQ Packages

If you did not install the Sun Cluster HA for WebSphere MQ packages during your initial Sun Cluster installation, perform this procedure to install the packages. To install the packages, use the Sun Java Enterprise System Installation Wizard.

How to Install the Sun Cluster HA for WebSphere MQ Packages

Perform this procedure on each cluster node where you are installing the Sun Cluster HA for WebSphere MQ packages.

You can run the Sun Java Enterprise System Installation Wizard with a command-line interface (CLI) or with a graphical user interface (GUI). The content and sequence of instructions in the CLI and the GUI are similar.


Note –

Even if you plan to configure this data service to run in non-global zones, install the packages for this data service in the global zone. The packages are propagated to any existing non-global zones and to any non-global zones that are created after you install the packages.


Before You Begin

Ensure that you have the Sun Java Availability Suite DVD-ROM.

If you intend to run the Sun Java Enterprise System Installation Wizard with a GUI, ensure that your DISPLAY environment variable is set.

  1. On the cluster node where you are installing the data service packages, become superuser.

  2. Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.

    If the Volume Management daemon vold(1M) is running and configured to manage DVD-ROM devices, the daemon automatically mounts the DVD-ROM on the /cdrom directory.

  3. Change to the Sun Java Enterprise System Installation Wizard directory of the DVD-ROM.

    • If you are installing the data service packages on the SPARC® platform, type the following command:


      # cd /cdrom/cdrom0/Solaris_sparc
      
    • If you are installing the data service packages on the x86 platform, type the following command:


      # cd /cdrom/cdrom0/Solaris_x86
      
  4. Start the Sun Java Enterprise System Installation Wizard.


    # ./installer
    
  5. When you are prompted, accept the license agreement.

    If any Sun Java Enterprise System components are installed, you are prompted to select whether to upgrade the components or install new software.

  6. From the list of Sun Cluster agents under Availability Services, select the data service for WebSphere MQ.

  7. If you require support for languages other than English, select the option to install multilingual packages.

    English language support is always installed.

  8. When prompted whether to configure the data service now or later, choose Configure Later.

    Choose Configure Later to perform the configuration after the installation.

  9. Follow the instructions on the screen to install the data service packages on the node.

    The Sun Java Enterprise System Installation Wizard displays the status of the installation. When the installation is complete, the wizard displays an installation summary and the installation logs.

  10. (GUI only) If you do not want to register the product and receive product updates, deselect the Product Registration option.

    The Product Registration option is not available with the CLI. If you are running the Sun Java Enterprise System Installation Wizard with the CLI, omit this step.

  11. Exit the Sun Java Enterprise System Installation Wizard.

  12. Unload the Sun Java Availability Suite DVD-ROM from the DVD-ROM drive.

    1. To ensure that the DVD-ROM is not being used, change to a directory that does not reside on the DVD-ROM.

    2. Eject the DVD-ROM.


      # eject cdrom
      
Next Steps

See Registering and Configuring Sun Cluster HA for WebSphere MQ to register Sun Cluster HA for WebSphere MQ and to configure the cluster for the data service.

Registering and Configuring Sun Cluster HA for WebSphere MQ

This section contains the procedures you need to configure Sun Cluster HA for WebSphere MQ.

Some procedures within this section require you to use certain Sun Cluster commands. Refer to the relevant Sun Cluster command man page for more information about these commands and their parameters.

How to Register and Configure Sun Cluster HA for WebSphere MQ

Determine if a single or multiple WebSphere MQ instances will be deployed.

Refer to Restriction for multiple WebSphere MQ instances to determine how to deploy a single or multiple WebSphere MQ instances.

Once you have determined how WebSphere MQ will be deployed, you can choose one or more of the steps below.

  1. Register and Configure Sun Cluster HA for WebSphere MQ in a Failover Resource Group.

    Use How to Register and Configure Sun Cluster HA for WebSphere MQ in a Failover Resource Group for Example 1, Example 2 and Example 3.

  2. Register and Configure Sun Cluster HA for WebSphere MQ in a Failover Zone.

    Use How to Register and Configure Sun Cluster HA for WebSphere MQ in a Failover Zone for Example 4.

How to Register and Configure Sun Cluster HA for WebSphere MQ in a Failover Resource Group

This procedure assumes that you installed the data service packages during your initial Sun Cluster installation.

If you did not install the Sun Cluster HA for WebSphere MQ packages as part of your initial Sun Cluster installation, go to How to Install the Sun Cluster HA for WebSphere MQ Packages.


Note –

Perform this procedure on one node of the cluster only.


  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Register the following resource types.


    # clresourcetype register SUNW.HAStoragePlus
    # clresourcetype register SUNW.gds
    
  3. Create a failover resource group for WebSphere MQ.


    Note –

    Refer to Restriction for multiple WebSphere MQ instances for more information on the nodelist entry.



    # clresourcegroup create -n nodelist websphere-mq-resource-group
    
  4. Create a resource for the WebSphere MQ Logical Hostname.


    # clreslogicalhostname create -g websphere-mq-resource-group \
    > -h websphere-mq-logical-hostname \
    > websphere-mq-logical-hostname-resource
    
  5. Create a resource for the WebSphere MQ Disk Storage.

    1. If a ZFS highly available local file system is being used.


      # clresource create -g websphere-mq-resource-group  \
      > -t SUNW.HAStoragePlus \
      > -p Zpools=websphere-mq-zspool \
      > websphere-mq-hastorage-resource
      
    2. If a cluster file system or a non ZFS highly available local file system is being used.


      # clresource create -g websphere-mq-resource-group  \
      > -t SUNW.HAStoragePlus \
      > -p FilesystemMountPoints=websphere-mq-filesystem-mountpoint \
      > websphere-mq-hastorage-resource
      
  6. Bring online the failover resource group for WebSphere MQ that now includes the Logical Hostname and Disk Storage resources.


    # clresourcegroup online -M websphere-mq-resource-group
    
  7. Create a resource for the WebSphere MQ queue manager.

    Edit /opt/SUNWscmqs/mgr/util/mgr_config and follow the comments within that file. After you have edited mgr_config, you must register the resource.


    # cd /opt/SUNWscmqs/mgr/util
    # vi mgr_config
    # ./mgr_register
    

    The following deployment example has been taken from Step 1 in Appendix A, Deployment Example: Installing a WebSphere MQ Queue Manager in Non-Global Zones and shows /opt/SUNWscmqs/mgr/util/mgr_config that has been edited to configure a queue manager resource.


    Vigor5# cat > /var/tmp/mgr1_config <<-EOF
    # +++ Required parameters +++
    RS=wmq1-qmgr
    RG=wmq1-rg
    QMGR=qmgr1
    LH=wmq1-lh
    HAS_RS=wmq1-ZFShas
    LSR_RS=
    CLEANUP=YES
    SERVICES=NO
    USERID=mqm
    
    # +++ Optional parameters +++
    DB2INSTANCE=
    ORACLE_HOME=
    ORACLE_SID=
    START_CMD=
    STOP_CMD=
    
    # +++ Failover zone parameters +++
    # These parameters are only required when WebSphere MQ should run
    #  within a failover zone managed by the Sun Cluster Data Service
    # for Solaris Containers.
    RS_ZONE=
    PROJECT=default
    TIMEOUT=300
    EOF
    

    Vigor5# /opt/SUNWscmqs/mgr/util/mgr_register -f /var/tmp/mgr1_config
    
  8. Enable the resource.


    # clresource enable websphere-mq-resource
    
  9. Create and register a resource for any other WebSphere MQ components.

    Repeat this step for each WebSphere MQ component that is required.

    Edit /opt/SUNWscmqs/xxx/util/xxx_config and follow the comments within that file. Where xxx represents one of the following WebSphere MQ components:

    chi		Channel Initiator
    csv		Command Server
    lsr		Listener
    trm		Trigger Monitor

    After you have edited xxx_config, you must register the resource.


    # cd /opt/SUNWscmqs/xxx/util/
    # vi xxx_config
    # ./xxx_register
    

    The following deployment example has been taken from Step 4 in Appendix A, Deployment Example: Installing a WebSphere MQ Queue Manager in Non-Global Zones and shows /opt/SUNWscmqs/lsr/util/lsr_config that has been edited to configure a listener resource.


    Vigor5# cat > /var/tmp/lsr1_config <<-EOF
    # +++ Required parameters +++
    RS=wmq1-lsr
    RG=wmq1-rg
    QMGR=qmgr1
    PORT=1414
    IPADDR=
    BACKLOG=100
    LH=wmq1-lh
    QMGR_RS=wmq1-qmgr
    USERID=mqm
    
    # +++ Failover zone parameters +++
    # These parameters are only required when WebSphere MQ should run
    #  within a failover zone managed by the Sun Cluster Data Service
    # for Solaris Containers.
    RS_ZONE=
    PROJECT=default
    EOF
    

    Vigor5# /opt/SUNWscmqs/lsr/util/lsr_register -f /var/tmp/lsr1_config
    
  10. Enable the WebSphere MQ component resources.


    # clresource enable websphere-mq-resource
    
Next Steps

See Verifying the Sun Cluster HA for WebSphere MQ Installation and Configuration

How to Register and Configure Sun Cluster HA for WebSphere MQ in a Failover Zone

This procedure assumes that you installed the data service packages during your initial Sun Cluster installation.

If you did not install the Sun Cluster HA for WebSphere MQ packages as part of your initial Sun Cluster installation, go to How to Install the Sun Cluster HA for WebSphere MQ Packages.


Note –

Perform this procedure on one node of the cluster only.


  1. Create a failover resource group for WebSphere MQ.

    Follow steps 1, 2, 3, 4, 5 and 6 in How to Register and Configure Sun Cluster HA for WebSphere MQ in a Failover Resource Group.

  2. Register the failover zone in the failover resource group for WebSphere MQ.

    Refer to Sun Cluster Data Service for Solaris Containers Guide for complete information about failover zones.

    Edit the sczbt_config file and follow the comments within that file. Ensure that you specify the websphere-mq-resource-group for the RG= parameter within sczbt_config.

    After you have edited sczbt_config, you must register the resource.


    # cd /opt/SUNWsczone/sczbt/util
    # vi sczbt_config
    # ./sczbt_register
    

    The following deployment example has been taken from Step 7 in Appendix B, Deployment Example: Installing aWebSphere MQ Queue Manager in a Failover Zone and shows /opt/SUNWsczone/sczbt/util/sczbt_config that has been edited to configure a failover zone resource.


    Vigor5# cat > /var/tmp/sczbt_config <<-EOF
    RS=wmq3-FOZ
    RG=wmq3-rg
    PARAMETERDIR=/FOZones
    SC_NETWORK=true
    SC_LH=wmq3-lh
    FAILOVER=true
    HAS_RS=wmq3-SVMhas,wmq3-ZFShas
    
    Zonename=z3
    Zonebootopt=
    Milestone=multi-user-server
    Mounts="/ZFSwmq3/log /ZFSwmq3/qmgrs"
    EOF
    Vigor5#
    Vigor5# /opt/SUNWsczone/sczbt/util/sczbt_register -f /var/tmp/sczbt_config
    
  3. Enable the failover zone resource.


    # clresource enable websphere-mq-failover-zone-resource
    
  4. Create a resource for the WebSphere MQ queue manager.

    Edit /opt/SUNWscmqs/mgr/util/mgr_config and follow the comments within that file. Ensure that the RS_ZONE variable specifies the cluster resource for the failover zone. After you have edited mgr_config, you must register the resource.


    # cd /opt/SUNWscmqs/mgr/util
    # vi mgr_config
    # ./mgr_register
    

    The following deployment example has been taken from Step 1 in Appendix B, Deployment Example: Installing aWebSphere MQ Queue Manager in a Failover Zone and shows /opt/SUNWscmqs/mgr/util/mgr_config that has been edited to configure a queue manager resource within a failover zone resource.


    Vigor5# cat > /var/tmp/mgr3_config <<-EOF
    # +++ Required parameters +++
    RS=wmq3-qmgr
    RG=wmq3-rg
    QMGR=qmgr3
    LH=wmq3-lh
    HAS_RS=wmq3-ZFShas
    LSR_RS=
    CLEANUP=YES
    SERVICES=NO
    USERID=mqm
    
    # +++ Optional parameters +++
    DB2INSTANCE=
    ORACLE_HOME=
    ORACLE_SID=
    START_CMD=
    STOP_CMD=
    
    # +++ Failover zone parameters +++
    # These parameters are only required when WebSphere MQ should run
    #  within a failover zone managed by the Sun Cluster Data Service
    # for Solaris Containers.
    RS_ZONE=wmq3-FOZ
    PROJECT=default
    TIMEOUT=300
    EOF
    Vigor5#
    Vigor5# /opt/SUNWscmqs/mgr/util/mgr_register -f /var/tmp/mgr3_config
    
  5. Enable the WebSphere MQ resource.


    # clresource enable websphere-mq-resource
    
  6. Create and register a resource for any other WebSphere MQ components.

    Repeat this step for each WebSphere MQ component that is required.

    Edit /opt/SUNWscmqs/xxx/util/xxx_config and follow the comments within that file. Where xxx represents one of the following WebSphere MQ components:

    chi		Channel Initiator
    csv		Command Server
    lsr		Listener
    trm		Trigger Monitor

    Ensure that the RS_ZONE variable specifies the cluster resource for the failover zone. After you have edited xxx_config, you must register the resource.


    # cd /opt/SUNWscmqs/xxx/util
    # vi xxx_config 
    # ./xxx_register
    

    The following deployment example has been taken from Step 4 in Appendix A, Deployment Example: Installing a WebSphere MQ Queue Manager in Non-Global Zones and shows a modified /opt/SUNWscmqs/lsr/util/lsr_config that has been edited to configure a listener resource in a failover zone resource.


    Vigor5# cat > /var/tmp/lsr3_config <<-EOF
    # +++ Required parameters +++
    RS=wmq3-lsr
    RG=wmq3-rg
    QMGR=qmgr3
    PORT=1420
    IPADDR=
    BACKLOG=100
    LH=wmq3-lh
    QMGR_RS=wmq3-qmgr
    USERID=mqm
    
    # +++ Failover zone parameters +++
    # These parameters are only required when WebSphere MQ should run
    #  within a failover zone managed by the Sun Cluster Data Service
    # for Solaris Containers.
    RS_ZONE=wmq3-FOZ
    PROJECT=default
    EOF
    

    Vigor5# /opt/SUNWscmqs/lsr/util/lsr_register -f /var/tmp/lsr3_config
    
  7. Enable the WebSphere MQ component resources.


    # clresource enable websphere-mq-resource
    
Next Steps

See Verifying the Sun Cluster HA for WebSphere MQ Installation and Configuration

Verifying the Sun Cluster HA for WebSphere MQ Installation and Configuration

This section contains the procedure you need to verify that you installed and configured your data service correctly.

How to Verify the Sun Cluster HA for WebSphere MQ Installation and Configuration

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Ensure all the WebSphere MQ resources are online.


    # cluster status 
    

    Enable any WebSphere MQ resources that are not online.


    # clresource enable websphere-mq-resource
    
  3. Switch the WebSphere MQ resource group to another cluster node or node:zone.


    # clresourcegroup switch -n node[:zone] websphere-mq-resource-group
    

Upgrading Sun Cluster HA for WebSphere MQ

Upgrade the Sun Cluster HA for WebSphere MQ data service if any of the following conditions apply:

• You have an existing Sun Cluster HA for WebSphere MQ deployment and wish to upgrade to the new version of the data service.

• You need to use the new features of this data service.

How to Migrate Existing Resources to a New Version of Sun Cluster HA for WebSphere MQ

Perform steps 1, 2, 3 and 6 if you have an existing Sun Cluster HA for WebSphere MQ deployment and wish to upgrade to the new version. Complete all steps if you need to use the new features of this data service.

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Disable the WebSphere MQ resources.


    # clresource disable websphere-mq-resource
    
  3. Install the new version of Sun Cluster HA for WebSphere MQ on each cluster node.

    Refer to How to Install the Sun Cluster HA for WebSphere MQ Packages for more information.

  4. Delete the WebSphere MQ resources, if you want to use new features that have been introduced in the new version of Sun Cluster HA for WebSphere MQ.


    # clresource delete websphere-mq-resource
    
  5. Reregister the WebSphere MQ resources, if you want to use new features that have been introduced in the new version of Sun Cluster HA for WebSphere MQ.

    Refer to How to Register and Configure Sun Cluster HA for WebSphere MQ for more information.

  6. Enable the WebSphere MQ resources.

    If you have only performed steps 1, 2 and 3 you will need to re-enable the WebSphere MQ resources.


    # clresource enable websphere-mq-resource
    

Understanding the Sun Cluster HA for WebSphere MQ Fault Monitor

This section describes the Sun Cluster HA for WebSphere MQ fault monitor's probing algorithm and functionality, and states the conditions and recovery actions associated with unsuccessful probing.

For conceptual information on fault monitors, see the Sun Cluster Concepts Guide.

Resource Properties

The Sun Cluster HA for WebSphere MQ fault monitor uses the same resource properties as resource type SUNW.gds. Refer to the SUNW.gds(5) man page for a complete list of resource properties used.
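
To review the property values that are currently in effect for a configured WebSphere MQ resource, you can display the resource with the clresource command. A minimal sketch, using the queue manager resource name from the earlier deployment examples:


# clresource show -v wmq1-qmgr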

Probing Algorithm and Functionality

The Sun Cluster HA for WebSphere MQ fault monitor is controlled by the extension properties that control the probing frequency. The default values of these properties determine the preset behavior of the fault monitor. The preset behavior should be suitable for most Sun Cluster installations. Therefore, you should tune the Sun Cluster HA for WebSphere MQ fault monitor only if you need to modify this preset behavior.

The Sun Cluster HA for WebSphere MQ fault monitor checks the queue manager and other components within an infinite loop. During each cycle the fault monitor will check the relevant component and report either a failure or success.

If the fault monitor is successful it returns to its infinite loop and continues the next cycle of probing and sleeping.

If the fault monitor reports a failure, a request is made to the cluster to restart the resource. If the fault monitor reports another failure, another request is made to the cluster to restart the resource. This behavior continues whenever the fault monitor reports a failure.

If successive restarts exceed the Retry_count within the Retry_interval, a request is made to fail over the resource group onto a different node or zone.
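
If you do need to tune the fault monitor, the probing frequency and restart behavior are adjusted through standard resource properties on the WebSphere MQ resource. A minimal sketch, again using the queue manager resource name from the earlier deployment examples:


# clresource set -p Thorough_probe_interval=120 -p Retry_count=2 wmq1-qmgr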

Operations of the queue manager probe

The WebSphere MQ queue manager probe checks the queue manager by using a program named create_tdq which is included in the Sun Cluster HA for WebSphere MQ data service.

The create_tdq program connects to the queue manager, creates a temporary dynamic queue, puts a message to the queue and then disconnects from the queue manager.

Operations of the channel initiator, command server, listener and trigger monitor probes

The WebSphere MQ probes for the channel initiator, command server, listener and trigger monitor all operate in a similar manner and simply restart any component that has failed.

The process monitor facility will request a restart of the resource as soon as any component has failed.

The channel initiator, command server and trigger monitor are all dependent on the queue manager being available. The listener has an optional dependency on the queue manager that is set when the listener resource is configured and registered. Therefore, if the queue manager fails, the channel initiator, command server, trigger monitor and optionally dependent listener will be restarted when the queue manager is available again.
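
The restart requests for these components are made through the Sun Cluster process monitor facility. A minimal sketch of how to list the name tags currently registered with the process monitor facility on a node, which can help confirm that the WebSphere MQ component processes are under its control:


# pmfadm -L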

Debug Sun Cluster HA for WebSphere MQ

How to turn on debug for Sun Cluster HA for WebSphere MQ

Sun Cluster HA for WebSphere MQ can be used by multiple WebSphere MQ instances. It is possible to turn debug on for all WebSphere MQ instances or a particular WebSphere MQ instance.

A config file exists under /opt/SUNWscmqs/xxx/etc, where xxx can be mgr (Queue Manager), chi (Channel Initiator), csv (Command Server), lsr (Listener) and trm (Trigger Monitor).

These files allow you to turn on debug for all WebSphere MQ instances or for a specific WebSphere MQ instance on a particular node or zone within the cluster. If you require debug to be turned on for Sun Cluster HA for WebSphere MQ across the whole cluster, repeat these steps on all nodes within the cluster.

  1. Edit /etc/syslog.conf and change daemon.notice to daemon.debug.


    # grep daemon /etc/syslog.conf
    *.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
    *.alert;kern.err;daemon.err                     operator
    #

    Change the daemon.notice to daemon.debug and restart syslogd. Note that the output below, from grep daemon /etc/syslog.conf, shows that daemon.debug has been set.


    # grep daemon /etc/syslog.conf
    *.err;kern.debug;daemon.debug;mail.crit        /var/adm/messages
    *.alert;kern.err;daemon.err                    operator

    Restart the syslog daemon.

    1. If running Solaris 9


      # pkill -1 syslogd
      
    2. If running Solaris 10


      # svcadm disable system-log
      # svcadm enable system-log
      
  2. Edit /opt/SUNWscmqs/xxx/etc/config.

    Perform this step for each component that requires debug output, on each node of Sun Cluster as required.

    Edit /opt/SUNWscmqs/xxx/etc/config and change DEBUG= to DEBUG=ALL or DEBUG=resource.


    # cat /opt/SUNWscmqs/mgr/etc/config
    #
    # Copyright 2006 Sun Microsystems, Inc.  All rights reserved.
    # Use is subject to license terms.
    #
    ##ident   "@(#)config 1.2     06/03/08 SMI"
    #
    # Usage:
    #       DEBUG=<RESOURCE_NAME> or ALL
    #
    DEBUG=ALL

    Note –

    To turn off debug, reverse the steps above.