This chapter explains how to install and configure Sun Cluster HA for WebSphere MQ Integrator.
This chapter contains the following sections.
Installing and Configuring Sun Cluster HA for WebSphere MQ Integrator
Planning the Sun Cluster HA for WebSphere MQ Integrator Installation and Configuration
Installing the Sun Cluster HA for WebSphere MQ Integrator Packages
Registering and Configuring Sun Cluster HA for WebSphere MQ Integrator
Verifying the Sun Cluster HA for WebSphere MQ Integrator Installation and Configuration
Understanding Sun Cluster HA for WebSphere MQ Integrator Fault Monitor
Table 1 lists the tasks for installing and configuring Sun Cluster HA for WebSphere MQ Integrator. Perform these tasks in the order that they are listed.
Table 1 Task Map: Installing and Configuring Sun Cluster HA for WebSphere MQ Integrator

| Task | For Instructions, Go To |
|---|---|
| Plan the installation | Sun Cluster HA for WebSphere MQ Integrator Overview; Planning the Sun Cluster HA for WebSphere MQ Integrator Installation and Configuration |
| Install and configure WebSphere MQ Integrator | How to Install and Configure WebSphere MQ Integrator |
| Verify the installation and configuration | How to Verify the Installation and Configuration of WebSphere MQ Integrator |
| Install the Sun Cluster HA for WebSphere MQ Integrator packages | How to Install the Sun Cluster HA for WebSphere MQ Integrator Packages using the scinstall Utility |
| Register and configure Sun Cluster HA for WebSphere MQ Integrator | How to Register and Configure Sun Cluster HA for WebSphere MQ Integrator |
| Verify the Sun Cluster HA for WebSphere MQ Integrator installation and configuration | How to Verify the Sun Cluster HA for WebSphere MQ Integrator Installation and Configuration |
| Upgrade Sun Cluster HA for WebSphere MQ Integrator | Upgrading Sun Cluster HA for WebSphere MQ Integrator Installation and Configuration |
| Understand the Sun Cluster HA for WebSphere MQ Integrator fault monitor | Understanding Sun Cluster HA for WebSphere MQ Integrator Fault Monitor |
| Debug Sun Cluster HA for WebSphere MQ Integrator | |
WebSphere MQ Integrator works with WebSphere MQ messaging, extending its basic connectivity and transport capabilities to provide a powerful message broker solution. Messages are formed, routed, and transformed according to the rules defined by an easy-to-use graphical user interface (GUI).
The Sun Cluster HA for WebSphere MQ Integrator data service provides a mechanism for orderly startup and shutdown, fault monitoring, and automatic failover for WebSphere MQ Integrator. The following WebSphere MQ Integrator components are protected by the Sun Cluster HA for WebSphere MQ Integrator data service.
Table 2 Protection of Components

| Component | Protected by |
|---|---|
| Broker | Sun Cluster HA for WebSphere MQ Integrator |
| User Name Server | Sun Cluster HA for WebSphere MQ Integrator |
This section contains the information you need to plan your Sun Cluster HA for WebSphere MQ Integrator installation and configuration.
This section provides a list of software and hardware configuration restrictions that apply to Sun Cluster HA for WebSphere MQ Integrator only.
Your data service configuration might not be supported if you do not observe these restrictions.
For restrictions that apply to all data services, see the Sun Cluster Release Notes.
The Sun Cluster HA for WebSphere MQ Integrator data service can be configured only as a failover service – WebSphere MQ Integrator cannot operate as a scalable service, and therefore the Sun Cluster HA for WebSphere MQ Integrator data service can be configured only to run as a failover service.
Installing WebSphere MQ Integrator onto Cluster File Systems – Initially, you install the WebSphere MQ Integrator software into /opt/mqsi and /var/mqsi.
You must mount /var/mqsi as a Global File System with a symbolic link for /var/mqsi/locks to a Local File System. It is recommended that /opt/mqsi be on a local disk. For a discussion of the advantages and disadvantages of installing the software on a local versus a cluster file system, see “Determining the Location of the Application Binaries” on page 3 of the Sun Cluster Data Services Installation and Configuration Guide.
Mount /var/mqsi as a Global File System – WebSphere MQ Integrator uses several directories within /var/mqsi, which must be available on all nodes within Sun Cluster as a Global File System. However, generated locks must be located within a Local File System. Because of this, you must set up /var/mqsi/locks as a symbolic link to a Local File System.
It is best practice to mount Global File Systems with the /global prefix and to mount Failover File Systems with the /local prefix.
The following example shows WebSphere MQ Integrator with /var/mqsi mounted as a Global File System through a symbolic link to /global/mqsi, and with /var/mqsi/locks set up as a symbolic link to /var/mqsi_locks on the root file system, that is, on local disk.
# ls -l /var/mqsi
lrwxrwxrwx   1 root     other         12 Sep  5 15:32 /var/mqsi -> /global/mqsi
#
# ls -l /global/mqsi/locks
lrwxrwxrwx   1 root     other         15 Sep 18 15:37 /global/mqsi/locks -> /var/mqsi_locks
#
# df -k /global/mqsi/locks
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/dsk/c0t0d0s0    12731708 5792269 6812122    46%    /
#
# more /etc/vfstab (Subset of the output)
/dev/md/dg_d6/dsk/d60 /dev/md/dg_d6/rdsk/d60 /global/mqsi ufs 4 yes logging,global
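This layout can be rehearsed in a scratch directory before touching the real file systems. The sketch below is illustrative only: a temporary sandbox stands in for /var, /global/mqsi, and /var/mqsi_locks, and the script simply verifies that writes through /var/mqsi/locks land on the "local" side of the link.

```shell
#!/bin/sh
# Rehearse the /var/mqsi link layout in a scratch directory.
# $SANDBOX/global/mqsi stands in for the Global File System;
# $SANDBOX/var/mqsi_locks stands in for the Local File System.
SANDBOX=$(mktemp -d)

mkdir -p "$SANDBOX/global/mqsi"        # pretend Global File System
mkdir -p "$SANDBOX/var/mqsi_locks"     # pretend Local File System

# /var/mqsi -> /global/mqsi
ln -s "$SANDBOX/global/mqsi" "$SANDBOX/var/mqsi"

# /var/mqsi/locks -> /var/mqsi_locks (locks must live on local disk)
ln -s "$SANDBOX/var/mqsi_locks" "$SANDBOX/global/mqsi/locks"

# Writes through /var/mqsi/locks should land on the local side.
touch "$SANDBOX/var/mqsi/locks/probe.lck"
ls -l "$SANDBOX/var/mqsi_locks"
readlink "$SANDBOX/var/mqsi"
```

On a real cluster the same pattern is applied with the actual mount points, not a sandbox, and /global/mqsi must already be mounted globally on all nodes.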
The Sun Cluster HA for WebSphere MQ Integrator RDBMS – The Sun Cluster HA for WebSphere MQ Integrator data service can operate only with a local RDBMS (that is, not a remote RDBMS), and more specifically only with DB2 or Oracle.
This restriction exists because the Sun Cluster HA for WebSphere MQ Integrator data service needs to manage the restart scenarios for WebSphere MQ Integrator whenever the RDBMS restarts.
The requirements in this section apply to Sun Cluster HA for WebSphere MQ Integrator only. You must meet these requirements before you proceed with your Sun Cluster HA for WebSphere MQ Integrator installation and configuration.
Your data service configuration might not be supported if you do not adhere to these requirements.
WebSphere MQ Integrator components and their dependencies – You can configure the Sun Cluster HA for WebSphere MQ Integrator data service to protect a WebSphere MQ Integrator Broker and UserNameServer. These components and their dependencies are described in Table 3.
Table 3 WebSphere MQ Integrator components and their dependencies (shown by the -> symbol)

| Component | Description |
|---|---|
| Broker (Mandatory) | -> SUNW.HAStoragePlus resource -> WebSphere MQ Queue Manager and Listener resources -> RDBMS resource. The SUNW.HAStoragePlus resource manages the WebSphere MQ Integrator Cluster File System mount point, i.e. /global/mqsi. The dependency on the WebSphere MQ Queue Manager resource ensures that the WebSphere MQ Queue Manager is available. The dependency on the WebSphere MQ Listener resource is required only if runmqlsr is used instead of inetd. The dependency on the RDBMS resource ensures that the RDBMS is available. Together these dependencies ensure that the Broker is not started until these services are available. |
| UserNameServer (Optional) | -> SUNW.HAStoragePlus resource -> WebSphere MQ Queue Manager and Listener resources. The SUNW.HAStoragePlus resource manages the WebSphere MQ Integrator Cluster File System mount point, i.e. /global/mqsi. The dependency on the WebSphere MQ Queue Manager resource ensures that the WebSphere MQ Queue Manager is available. The dependency on the WebSphere MQ Listener resource is required only if runmqlsr is used instead of inetd. |
The WebSphere MQ Integrator Broker component and its dependencies must all reside within the same Resource Group. Likewise, the WebSphere MQ Integrator UserNameServer and its dependencies must all reside within the same Resource Group.
However, the WebSphere MQ Integrator Broker and UserNameServer do not have to reside within the same Resource Group; they can reside in separate Resource Groups. Likewise, multiple instances of the WebSphere MQ Integrator Broker can reside in separate Resource Groups. However, only one instance of the WebSphere MQ Integrator UserNameServer is allowed.
Example 1 shows two WebSphere MQ Integrator Brokers (XXX and YYY) and a WebSphere MQ Integrator UserNameServer within different Resource Groups, and shows that all WebSphere MQ Integrator components (Broker and UserNameServer) use the same Global File System /global/mqsi.
Resource Group 1 with the following resources

    SUNW.HAStoragePlus resource with
      -x FilesystemMountPoints=/local/db2,/global/mqm,/global/mqsi,/local/mqm/qmgrs/qmgr1,/local/mqm/log/qmgr1
    RDBMS resource for DB2
    WebSphere MQ resource for Queue Manager qmgr1
    WebSphere MQ Integrator resource for Broker XXX

Resource Group 2 with the following resources

    SUNW.HAStoragePlus resource with
      -x FilesystemMountPoints=/global/mqm,/global/mqsi -x AffinityOn=FALSE
    SUNW.HAStoragePlus resource with
      -x FilesystemMountPoints=/local/oracle,/local/mqm/qmgrs/qmgr2,/local/mqm/log/qmgr2
    RDBMS resource for Oracle
    RDBMS resource for Oracle Listener
    WebSphere MQ resource for Queue Manager qmgr2
    WebSphere MQ Integrator resource for Broker YYY

Resource Group 3 with the following resources

    SUNW.HAStoragePlus resource with
      -x FilesystemMountPoints=/global/mqm,/global/mqsi -x AffinityOn=FALSE
    SUNW.HAStoragePlus resource with
      -x FilesystemMountPoints=/local/mqm/qmgrs/qmgr3,/local/mqm/log/qmgr3
    WebSphere MQ resource for Queue Manager qmgr3
    WebSphere MQ Integrator resource for UserNameServer
For detailed information about these WebSphere MQ Integrator components, refer to IBM's WebSphere MQ Integrator Introduction and Planning manual.
Each WebSphere MQ Integrator component has a configuration and registration file in /opt/SUNWscmqi/xxx/util, where xxx is a three-character abbreviation for the WebSphere MQ Integrator component. These files allow you to register the WebSphere MQ Integrator components with Sun Cluster.
Within these files, the appropriate dependencies have been applied.
# cd /opt/SUNWscmqi
#
# ls -l sib/util
total 6
-rwxr-xr-x   1 root     sys         1032 Dec 20 14:44 sib_config
-rwxr-xr-x   1 root     sys          720 Dec 20 14:44 sib_register
#
# ls -l siu/util
-rwxr-xr-x   1 root     sys          733 Dec 20 14:44 siu_config
-rwxr-xr-x   1 root     sys          554 Dec 20 14:44 siu_register
#
# more sib/util/*
::::::::::::::
sib/util/sib_config
::::::::::::::
# Copyright 2003 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# This file will be sourced in by sib_register and the parameters
# listed below will be used.
#
# These parameters can be customized in (key=value) form
#
#       RS       - name of the resource for the application
#       RG       - name of the resource group containing RS
#       QMGR     - name of the Queue Manager
#       PORT     - name of the Queue Manager port number
#       LH       - name of the LogicalHostname SC resource
#       HAS_RS   - name of the Queue Manager HAStoragePlus SC resource
#       SC3_IN   - name of the Test Message Flow (Inbound)
#       SC3_OUT  - name of the Test Message Flow (Outbound)
#       MQSI_ID  - name of the WebSphere MQI userid
#       BROKER   - name of the WebSphere MQI Broker
#       RDBMS_ID - name of the WebSphere MQI RDBMS userid
#       QMGR_RS  - name of the Queue Manager SC resource
#       RDBMS_RS - name of the RDBMS SC resource and listener (if Oracle)
#                  e.g. RDBMS_RS=<ora-rs>,<lsr-rs>
#
#       +++ Optional parameters +++
#
#       START_CMD - pathname and name of the renamed strmqm program
#       STOP_CMD  - pathname and name of the renamed endmqm program
#
# Note 1: Optional parameters
#
#       Null entries for optional parameters are allowed if not used.
#
# Note 2: Renamed strmqm/endmqm programs
#
#       This is only recommended if WebSphere MQ is deployed onto
#       Global File Systems for qmgr/log files. You should specify
#       the full pathname/program, i.e. /opt/mqm/bin/<renamed_strmqm>
#
RS=
RG=
QMGR=
PORT=
LH=
HAS_RS=
SC3_IN=
SC3_OUT=
MQSI_ID=
BROKER=
RDBMS_ID=
QMGR_RS=
RDBMS_RS=
START_CMD=
STOP_CMD=
::::::::::::::
sib_register
::::::::::::::
#
# Copyright 2003 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
. `dirname $0`/sib_config

scrgadm -a -j $RS -g $RG -t SUNW.gds \
-x Start_command="/opt/SUNWscmqi/sib/bin/start-broker \
-R $RS -G $RG -Q $QMGR -I $SC3_IN -O $SC3_OUT \
-U $MQSI_ID -B $BROKER -D $RDBMS_ID \
-S '$START_CMD' -E '$STOP_CMD' " \
-x Stop_command="/opt/SUNWscmqi/sib/bin/stop-broker \
-R $RS -G $RG -Q $QMGR -I $SC3_IN -O $SC3_OUT \
-U $MQSI_ID -B $BROKER -D $RDBMS_ID \
-S '$START_CMD' -E '$STOP_CMD' " \
-x Probe_command="/opt/SUNWscmqi/sib/bin/test-broker \
-R $RS -G $RG -Q $QMGR -I $SC3_IN -O $SC3_OUT \
-U $MQSI_ID -B $BROKER -D $RDBMS_ID \
-S '$START_CMD' -E '$STOP_CMD' " \
-y Port_list=$PORT/tcp -y Network_resources_used=$LH \
-x Stop_signal=9 \
-y Resource_dependencies=$HAS_RS,$QMGR_RS,$RDBMS_RS
Use this procedure to install and configure WebSphere MQ Integrator.
For this section, follow IBM's WebSphere MQ Integrator for Sun Solaris — Installation Guide to install and create a Broker and UserNameServer.
Mount the WebSphere MQ Integrator Cluster File Systems.
Before installing WebSphere MQ Integrator within Sun Cluster, ensure that the Cluster File System /var/mqsi (or /global/mqsi, if you have set up a symbolic link) is mounted as a Global File System.
Install WebSphere MQ Integrator onto all nodes within Sun Cluster.
It is recommended that you install the WebSphere MQ Integrator binaries onto local disks in /opt/mqsi. For a discussion of the advantages and disadvantages of installing the software on a local versus a cluster file system, see “Determining the Location of the Application Binaries” on page 3 of the Sun Cluster Data Services Installation and Configuration Guide.
Create your WebSphere MQ Integrator Broker.
After you have installed WebSphere MQ Integrator onto all nodes within Sun Cluster that will run it, create your WebSphere MQ Integrator Broker.
This section contains the procedure you need to verify the installation and configuration.
Refer to IBM's WebSphere MQ Intercommunication and IBM's WebSphere MQ Command Reference manuals to create queues and channels for communication between the Broker(s) and UserNameServer within Sun Cluster and the Configuration Manager on Windows NT.
Use this procedure to verify the installation and configuration. This procedure does not verify that your application is highly available because you have not installed your data service yet.
The Sun Cluster HA for WebSphere MQ Integrator data service requires that a message flow has been set up within the Broker.
This section requires that the WebSphere MQ queue manager Logical Hostname IP address be available. This should have been set up when you completed the Sun Cluster HA for IBM WebSphere MQ data service installation. Ensure that you have completed the installation of the Sun Cluster HA for IBM WebSphere MQ data service before you continue with the next steps.
Create the communication links between the Broker queue manager and Configuration Manager queue manager.
Set up queues and channels between the Broker queue manager(s) and the Configuration Manager queue manager, so that the message flows and rules set up on the Configuration Manager can be deployed from the Configuration Manager to the Broker queue manager(s) within Sun Cluster.
See Chapter 4 in IBM's WebSphere MQ Integrator for Sun Solaris — Installation Guide. Refer to the section Starting your broker domain.
Create the communication links between the Broker queue manager and UserNameServer (UNS) queue manager.
If you are using a UNS, you need to set up queues and channels between the Broker queue manager(s) and the UserNameServer.
Test the communication links between the queue managers.
After you set up all the queues and channels between the Broker, UserNameServer, and Configuration Manager, test that all the queue managers can communicate with each other.
Create and deploy a message flow on the Configuration Manager.
After you set up and test all queues between the Broker, UserNameServer, and Configuration Manager, create a message flow and deploy it to the Broker queue manager. You need a separate message flow for each Broker queue manager.
Create a message flow.
Create a simple message flow that uses two queues to receive a message from an input queue and put it to an output queue. Within the Control Center on Windows NT, you can use the IBM Primitives MQInput and MQOutput to achieve this message flow.
See Chapter 5 — Verifying your installation within IBM's WebSphere MQ Integrator for Sun Solaris — Installation Guide. In particular, refer to the section Building and using a message flow.
Deploy the message flow to the broker.
The message flow and message flow queues that you create will be used by the Sun Cluster HA for WebSphere MQ Integrator data service to probe the WebSphere MQ Integrator Broker.
If you did not install the Sun Cluster HA for WebSphere MQ Integrator packages during your Sun Cluster installation, perform this procedure to install the packages. Perform this procedure on each cluster node where you are installing the Sun Cluster HA for WebSphere MQ Integrator packages. To complete this procedure, you need the Sun Cluster Agents CD-ROM.
If you are installing more than one data service simultaneously, perform the procedure in Installing the Software in Sun Cluster Software Installation Guide for Solaris OS.
Install the Sun Cluster HA for WebSphere MQ Integrator packages by using one of the following installation tools:
Web Start program
scinstall utility
If you are using Solaris 10, install these packages only in the global zone. To ensure that these packages are not propagated to any local zones that are created after you install the packages, use the scinstall utility to install these packages. Do not use the Web Start program.
You can run the Web Start program with a command-line interface (CLI) or with a graphical user interface (GUI). The content and sequence of instructions in the CLI and the GUI are similar. For more information about the Web Start program, see the installer(1M) man page.
On the cluster node where you are installing the Sun Cluster HA for WebSphere MQ Integrator packages, become superuser.
(Optional) If you intend to run the Web Start program with a GUI, ensure that your DISPLAY environment variable is set.
Insert the Sun Cluster Agents CD-ROM into the CD-ROM drive.
If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/cdrom0 directory.
Change to the Sun Cluster HA for WebSphere MQ Integrator component directory of the CD-ROM.
The Web Start program for the Sun Cluster HA for WebSphere MQ Integrator data service resides in this directory.
# cd /cdrom/cdrom0/components/SunCluster_HA_MQI_3.1
Start the Web Start program.
# ./installer
When you are prompted, select the type of installation.
Follow the instructions on the screen to install the Sun Cluster HA for WebSphere MQ Integrator packages on the node.
After the installation is finished, the Web Start program provides an installation summary. This summary enables you to view logs that the Web Start program created during the installation. These logs are located in the /var/sadm/install/logs directory.
Exit the Web Start program.
Remove the Sun Cluster Agents CD-ROM from the CD-ROM drive.
Use this procedure to install the Sun Cluster HA for WebSphere MQ Integrator packages by using the scinstall utility. You need the Sun Java Enterprise System Accessory CD Volume 3 to perform this procedure. This procedure assumes that you did not install the data service packages during your initial Sun Cluster installation.
If you installed the Sun Cluster HA for WebSphere MQ Integrator packages as part of your initial Sun Cluster installation, proceed to Registering and Configuring Sun Cluster HA for WebSphere MQ Integrator.
Otherwise, use this procedure to install the Sun Cluster HA for WebSphere MQ Integrator packages. Perform this procedure on all nodes that can run the Sun Cluster HA for WebSphere MQ Integrator data service.
Load the Sun Cluster Agents CD-ROM into the CD-ROM drive.
Run the scinstall utility with no options.
This step starts the scinstall utility in interactive mode.
Choose the menu option, Add Support for New Data Service to This Cluster Node.
The scinstall utility prompts you for additional information.
Provide the path to the Sun Cluster Agents CD-ROM.
The utility refers to the CD as the “data services cd.”
Specify the data service to install.
The scinstall utility lists the data service that you selected and asks you to confirm your choice.
Exit the scinstall utility.
Unload the CD from the drive.
This section contains the procedures you need to configure Sun Cluster HA for WebSphere MQ Integrator.
This procedure assumes that you installed the data service packages during your initial Sun Cluster installation.
If you did not install the Sun Cluster HA for WebSphere MQ Integrator packages as part of your initial Sun Cluster installation, go to How to Install the Sun Cluster HA for WebSphere MQ Integrator Packages using the scinstall Utility.
The WebSphere MQ Integrator Broker component is dependent on WebSphere MQ and an RDBMS. All resources for the WebSphere MQ Integrator Broker component, the WebSphere MQ components, and the RDBMS must reside within the same Resource Group. For an example, refer to Example 1.
The WebSphere MQ Integrator UserNameServer component is dependent only on WebSphere MQ. All resources for the WebSphere MQ Integrator UserNameServer component and the WebSphere MQ components must reside within the same Resource Group. For an example, refer to Example 1.
Currently, only a local DB2 or Oracle RDBMS is supported. Refer to Configuration Restrictions, in particular to The Sun Cluster HA for WebSphere MQ Integrator RDBMS, for a description of this restriction.
Become superuser on one of the nodes in the cluster that will host WebSphere MQ Integrator.
Register the SUNW.gds resource type.

# scrgadm -a -t SUNW.gds

Register the SUNW.HAStoragePlus resource type.

# scrgadm -a -t SUNW.HAStoragePlus

Create a failover resource group.

# scrgadm -a -g WebSphere MQ-failover-resource-group

Create a resource for the WebSphere MQ Integrator Disk Storage.

# scrgadm -a -j WebSphere MQ Integrator-has-resource \
-g WebSphere MQ-failover-resource-group \
-t SUNW.HAStoragePlus \
-x FilesystemMountPoints=WebSphere MQ Integrator-instance-mount-points

Enable the failover resource group that now includes the WebSphere MQ Integrator Disk Storage resource.

# scswitch -Z -g WebSphere MQ-failover-resource-group
Create and register each required WebSphere MQ Integrator component.
This section requires that you have installed the Sun Cluster HA for WebSphere MQ and RDBMS data services and that their resources are online within Sun Cluster. Ensure that you have done this before you continue with this step.
Perform this step for the Broker component (sib), then repeat it for the optional UserNameServer component, replacing sib with:
siu - UserNameServer

# cd /opt/SUNWscmqi/sib/util
Edit the sib_config file and follow the comments within that file. For example:
# Copyright 2003 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# This file will be sourced in by sib_register and the parameters
# listed below will be used.
#
# These parameters can be customized in (key=value) form
#
#       RS       - name of the resource for the application
#       RG       - name of the resource group containing RS
#       QMGR     - name of the Queue Manager
#       PORT     - name of the Queue Manager port number
#       LH       - name of the LogicalHostname SC resource
#       HAS_RS   - name of the Queue Manager HAStoragePlus SC resource
#       SC3_IN   - name of the Test Message Flow (Inbound)
#       SC3_OUT  - name of the Test Message Flow (Outbound)
#       MQSI_ID  - name of the WebSphere MQI userid
#       BROKER   - name of the WebSphere MQI Broker
#       RDBMS_ID - name of the WebSphere MQI RDBMS userid
#       QMGR_RS  - name of the Queue Manager SC resource
#       RDBMS_RS - name of the RDBMS SC resource and listener (if Oracle)
#                  e.g. RDBMS_RS=<ora-rs>,<lsr-rs>
#
#       +++ Optional parameters +++
#
#       START_CMD - pathname and name of the renamed strmqm program
#       STOP_CMD  - pathname and name of the renamed endmqm program
#
# Note 1: Optional parameters
#
#       Null entries for optional parameters are allowed if not used.
#
# Note 2: Renamed strmqm/endmqm programs
#
#       This is only recommended if WebSphere MQ is deployed onto
#       Global File Systems for qmgr/log files. You should specify
#       the full pathname/program, i.e. /opt/mqm/bin/<renamed_strmqm>
#
The following is an example for WebSphere MQ Integrator Broker XXX, with WebSphere MQ Queue Manager qmgr1.
RS=wmq-broker-res
RG=wmq-rg
QMGR=qmgr1
PORT=1414
LH=wmq-lh-res
HAS_RS=wmqi-has-res
SC3_IN=SC3_IN
SC3_OUT=SC3_OUT
MQSI_ID=mqsi1
BROKER=XXX
RDBMS_ID=db2
QMGR_RS=wmq-qmgr-res
RDBMS_RS=wmq-rdbms-res
START_CMD=
STOP_CMD=
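Before running sib_register, it can be worth checking that every mandatory parameter is set, since the shipped file contains only empty key=value entries. The helper below is a hypothetical pre-flight check, not part of the shipped agent: it sources a sample sib_config written to a temporary file and reports any empty mandatory keys (START_CMD and STOP_CMD remain optional).

```shell
#!/bin/sh
# Hypothetical pre-flight check for a sib_config-style file.
# Writes a sample config to a temp file, sources it, and verifies
# that all mandatory key=value parameters are non-empty.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
RS=wmq-broker-res
RG=wmq-rg
QMGR=qmgr1
PORT=1414
LH=wmq-lh-res
HAS_RS=wmqi-has-res
SC3_IN=SC3_IN
SC3_OUT=SC3_OUT
MQSI_ID=mqsi1
BROKER=XXX
RDBMS_ID=db2
QMGR_RS=wmq-qmgr-res
RDBMS_RS=wmq-rdbms-res
START_CMD=
STOP_CMD=
EOF

. "$CFG"

# START_CMD and STOP_CMD are optional; everything else must be set.
MISSING=""
for name in RS RG QMGR PORT LH HAS_RS SC3_IN SC3_OUT \
            MQSI_ID BROKER RDBMS_ID QMGR_RS RDBMS_RS; do
    eval value=\$$name
    if [ -z "$value" ]; then
        MISSING="$MISSING $name"
    fi
done

if [ -z "$MISSING" ]; then
    echo "sib_config OK: all mandatory parameters set"
else
    echo "sib_config incomplete:$MISSING" >&2
fi
```

In practice you would point the check at /opt/SUNWscmqi/sib/util/sib_config instead of a sample file and run it just before ./sib_register.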
After editing sib_config, you must register the resource.
# ./sib_register
Enable each WebSphere MQ Integrator resource.
Repeat this step for each WebSphere MQ Integrator component.

# scstat

# scswitch -e -j WebSphere MQ Integrator-resource
This section contains the procedure you need to verify that you installed and configured your data service correctly.
Become superuser on one of the nodes in the cluster that will host WebSphere MQ Integrator.
Ensure that all the WebSphere MQ Integrator resources are online with scstat.

# scstat

For each WebSphere MQ Integrator resource that is not online, use the scswitch command as follows.

# scswitch -e -j WebSphere MQ Integrator-resource

Run the scswitch command to switch the WebSphere MQ Integrator resource group to another cluster node, such as node2.

# scswitch -z -g WebSphere MQ Integrator-failover-resource-group -h node2
Additional configuration parameters for Sun Cluster HA for WebSphere MQ Integrator were introduced in Sun Cluster 3.1 9/04. If you need to set a value for a parameter, you must upgrade Sun Cluster HA for WebSphere MQ Integrator.
You might deploy a WebSphere MQ queue manager's qmgr files and log files on a global file system. In this situation, rename the strmqm program and the endmqm program to prevent the queue manager from being started manually on another node. If you rename these programs, the Sun Cluster framework manages the startup of the WebSphere MQ queue manager. For more information, see Sun Cluster Data Service for WebSphere MQ Guide for Solaris OS.
The following parameters for enabling Sun Cluster to manage the startup of WebSphere MQ queue manager were introduced in Sun Cluster 3.1 9/04. Null values are defined for these parameters.
START_CMD – Specifies the full path name and file name of the renamed strmqm program.
STOP_CMD – Specifies the full path name and file name of the renamed endmqm program.
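The renaming convention can be sketched as follows. Everything here is illustrative: a stub stands in for strmqm, the renamed name strmqm_sc3 is invented for the example, and the guard script that replaces strmqm is a hypothetical safeguard, not shipped code.

```shell
#!/bin/sh
# Sketch of the renamed-strmqm convention, in a scratch directory.
# A stub stands in for /opt/mqm/bin/strmqm; the "real" program is
# renamed (to the illustrative name strmqm_sc3) and a guard script
# takes its place so a manual strmqm fails instead of starting the
# queue manager. START_CMD would then point at the renamed program.
BIN=$(mktemp -d)

# Stub for the real strmqm program.
printf '#!/bin/sh\necho "starting queue manager $1"\n' > "$BIN/strmqm"
chmod +x "$BIN/strmqm"

# Rename the real program; only the data service knows the new name.
mv "$BIN/strmqm" "$BIN/strmqm_sc3"

# Hypothetical guard in place of the original name.
cat > "$BIN/strmqm" <<'EOF'
#!/bin/sh
echo "strmqm: queue manager is under Sun Cluster control" >&2
exit 1
EOF
chmod +x "$BIN/strmqm"

"$BIN/strmqm_sc3" qmgr1   # prints "starting queue manager qmgr1"
"$BIN/strmqm" qmgr1 || echo "manual start blocked"
```

On a real node the rename would be applied to /opt/mqm/bin/strmqm and /opt/mqm/bin/endmqm, and the renamed paths would be supplied through START_CMD and STOP_CMD.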
If you need to set a value for a parameter, you must remove and reregister the Sun Cluster HA for WebSphere MQ Integrator resource for which you are changing the parameter.
The parameters that are introduced in Sun Cluster 3.1 9/04 apply to the resources for all components, namely:
Broker component
User Name Server component
Perform this task for each WebSphere MQ Integrator resource that you are modifying.
Perform this task only if you are setting or modifying parameters that are introduced in Sun Cluster 3.1 9/04.
Save the resource definitions.
# scrgadm -pvv -j resource > file1

Disable the resource.

# scswitch -n -j resource

Remove the resource.

# scrgadm -r -j resource
Configure and register the resource.
Go to the directory that contains the configuration file and the registration file for the resource.
# cd /opt/SUNWscmqi/prefix/util
Edit the configuration file for the resource.
vi prefix_config
Run the registration file for the resource.
# ./prefix_register
prefix denotes the component to which the file applies, as follows:
sib denotes the Broker component.
siu denotes the User Name Server component.
Save the resource definitions.
# scrgadm -pvv -j resource > file2
Compare the updated definitions to the definitions that you saved before you updated the resource.
Comparing these definitions enables you to determine if any existing extension properties have changed, for example, time-out values.
# diff file1 file2
Amend any resource properties that were reset.
# scrgadm -c -j resource -x|y property=value
Bring the resource online.

# scswitch -e -j resource
This section describes the Sun Cluster HA for WebSphere MQ Integrator fault monitor's probing algorithm and functionality, and states the conditions, messages, and recovery actions associated with unsuccessful probing.
For conceptual information on fault monitors, see the Sun Cluster Concepts Guide.
Sun Cluster HA for WebSphere MQ Integrator fault monitor uses the same resource properties as resource type SUNW.gds. Refer to the SUNW.gds(5) man page for a complete list of resource properties used.
WebSphere MQ Integrator Broker
Sleeps for Thorough_probe_interval.
Tests whether the RDBMS or the Queue Manager has been restarted. If the RDBMS has been restarted, the whole Resource Group is restarted. If the Queue Manager has been restarted, the Broker is stopped and the probe waits until the Queue Manager is restarted, after which the Broker is restarted.
If the RDBMS and Queue Manager have not been restarted, a check against bipservice is made. If bipservice is lost, the probe restarts the Broker.
If bipservice is available, the probe checks that the queue names for SC3_IN and SC3_OUT are valid and empty, puts a test message to SC3_IN, and checks that the message flows to SC3_OUT by verifying that the CURDEPTH for SC3_OUT is equal to 1. If this test fails, the probe restarts the Broker.
If the Broker is repeatedly restarted and subsequently exhausts the Retry_count within the Retry_interval, a failover of the Resource Group onto another node is initiated.
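The restart-versus-failover decision in the last step can be sketched as a small counting function. This is a toy model under assumed semantics (restart timestamps inside Retry_interval are counted against Retry_count); it is not the actual probe implementation, and the timestamps are simulated integers rather than real clock reads.

```shell
#!/bin/sh
# Toy model of the probe's restart/failover decision: count
# probe-triggered restarts and request a failover once Retry_count
# is exhausted within Retry_interval.
RETRY_COUNT=2
RETRY_INTERVAL=370          # seconds

RESTARTS=""                 # space-separated restart timestamps

decide() {
    now=$1
    # keep only restarts that fall inside the current Retry_interval
    recent=""
    n=0
    for t in $RESTARTS; do
        if [ $((now - t)) -le $RETRY_INTERVAL ]; then
            recent="$recent $t"
            n=$((n + 1))
        fi
    done
    RESTARTS="$recent $now"
    if [ $n -ge $RETRY_COUNT ]; then
        echo "failover"
    else
        echo "restart"
    fi
}

decide 0      # first failure: restart the Broker
decide 60     # second failure inside the interval: restart again
decide 120    # Retry_count exhausted: fail over the Resource Group
```

The three calls print restart, restart, failover, mirroring the behavior described above: restarts are tolerated until Retry_count is exhausted within Retry_interval, at which point the Resource Group fails over.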
WebSphere MQ Integrator UserNameServer
Sleeps for Thorough_probe_interval.
If bipservice for the UserNameServer is lost, the probe restarts the UserNameServer.
If the UserNameServer is repeatedly restarted and subsequently exhausts the Retry_count within the Retry_interval, a failover of the Resource Group onto another node is initiated.
Sun Cluster HA for WebSphere MQ Integrator can be used by multiple WebSphere MQ Integrator instances. It is possible to turn on debug for all WebSphere MQ Integrator instances or for a particular instance.
Each WebSphere MQ Integrator component has a DEBUG file under /opt/SUNWscmqi/xxx/etc, where xxx is a three-character abbreviation for the respective WebSphere MQ Integrator component.
These files allow you to turn on debug for all WebSphere MQ Integrator instances or for a specific WebSphere MQ Integrator instance on a particular node within Sun Cluster. If you require debug to be turned on for Sun Cluster HA for WebSphere MQ Integrator across the whole cluster, repeat these steps on all nodes within Sun Cluster.
Edit /etc/syslog.conf and change daemon.notice to daemon.debug
# grep daemon /etc/syslog.conf
*.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
*.alert;kern.err;daemon.err                     operator
#
Change the daemon.notice to daemon.debug and restart syslogd. The output below, from the command grep daemon /etc/syslog.conf, shows that daemon.debug has been set.
# grep daemon /etc/syslog.conf
*.err;kern.debug;daemon.debug;mail.crit         /var/adm/messages
*.alert;kern.err;daemon.err                     operator
#
# pkill -1 syslogd
#
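The same edit can be made non-interactively with sed. The sketch below works on a copy of syslog.conf in a temporary file rather than on the live /etc/syslog.conf, and the pkill step is shown only as a comment because it applies to a real system. Note that the real /etc/syslog.conf separates the selector and action fields with tabs; the sed substitution touches only the selector list, so the separators are preserved.

```shell
#!/bin/sh
# Apply the daemon.notice -> daemon.debug change with sed, using a
# copy of syslog.conf in a temp file instead of /etc/syslog.conf.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
*.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
*.alert;kern.err;daemon.err     operator
EOF

# Rewrite only the daemon selector level.
sed 's/daemon\.notice/daemon.debug/' "$CONF" > "$CONF.new" && mv "$CONF.new" "$CONF"

grep daemon "$CONF"

# On the live system you would then restart syslogd:
#   pkill -1 syslogd
```

Reversing the substitution (daemon.debug back to daemon.notice) and sending syslogd SIGHUP again turns the extra logging off.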
Edit /opt/SUNWscmqi/sib/etc/config
Perform this step for the Broker component (sib), then repeat for the optional UserNameServer (siu) that requires debug output, on each node of Sun Cluster.
Edit /opt/SUNWscmqi/sib/etc/config and change DEBUG= to DEBUG=ALL or DEBUG=resource
# cat /opt/SUNWscmqi/sib/etc/config
#
# Copyright 2003 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# Usage:
#       DEBUG=<RESOURCE_NAME> or ALL
#
DEBUG=ALL
#
To turn off debug, reverse the steps above.