Table 1–1 lists the tasks for installing and configuring Sun Cluster HA for WebSphere MQ. Perform these tasks in the order that they are listed.
Table 1–1 Task Map: Installing and Configuring Sun Cluster HA for WebSphere MQ
| Task | For Instructions, Go To |
|---|---|
| Plan the installation | Sun Cluster HA for WebSphere MQ Overview; Planning the Sun Cluster HA for WebSphere MQ Installation and Configuration |
| Install and configure WebSphere MQ | How to Install and Configure WebSphere MQ |
| Verify the installation and configuration | How to Verify the Installation and Configuration of WebSphere MQ |
| Install Sun Cluster HA for WebSphere MQ packages | How to Install the Sun Cluster HA for WebSphere MQ Packages by Using the scinstall Utility |
| Register and configure Sun Cluster HA for WebSphere MQ | How to Register and Configure Sun Cluster HA for WebSphere MQ |
| Verify the Sun Cluster HA for WebSphere MQ installation and configuration | How to Verify the Sun Cluster HA for WebSphere MQ Installation and Configuration |
| Understand the Sun Cluster HA for WebSphere MQ fault monitor | Understanding the Sun Cluster HA for WebSphere MQ Fault Monitor |
| Debug Sun Cluster HA for WebSphere MQ | How to turn on debug for Sun Cluster HA for WebSphere MQ |
WebSphere MQ messaging software enables business applications to exchange information across operating platforms in a way that is easy and straightforward for programmers to implement. Programs communicate using the WebSphere MQ API that assures once-only delivery and time-independent communications.
The Sun Cluster HA for WebSphere MQ data service provides a mechanism for orderly startup and shutdown, fault monitoring, and automatic failover of the WebSphere MQ service. Table 1–2 lists components protected by the Sun Cluster HA for WebSphere MQ data service.
Table 1–2 Protection of Components
| Component | Protected by |
|---|---|
| Queue Manager | Sun Cluster HA for WebSphere MQ |
| Channel Initiator | Sun Cluster HA for WebSphere MQ |
| Command Server | Sun Cluster HA for WebSphere MQ |
| Listener | Sun Cluster HA for WebSphere MQ |
| Trigger Monitor | Sun Cluster HA for WebSphere MQ |
This section contains the information you need to plan your Sun Cluster HA for WebSphere MQ installation and configuration.
It is best practice to mount Global File Systems with the /global prefix and to mount Failover File Systems with the /local prefix.
This section provides a list of software and hardware configuration restrictions that apply to Sun Cluster HA for WebSphere MQ only. For restrictions that apply to all data services, see the Sun Cluster Release Notes.
Your data service configuration might not be supported if you do not observe these restrictions.
The Sun Cluster HA for WebSphere MQ data service can be configured only as a failover service – WebSphere MQ cannot operate as a scalable service.
Mounting /var/mqm as a Global File System – If you intend to install multiple WebSphere MQ Managers, then you must mount /var/mqm as a Global File System.
After mounting /var/mqm as a Global File System, you must also create a symbolic link for /var/mqm/qmgrs/@SYSTEM to a Local File System on each node within Sun Cluster that will run WebSphere MQ, for example:
# mkdir -p /var/mqm_local/qmgrs/@SYSTEM
# mkdir -p /var/mqm/qmgrs
# ln -s /var/mqm_local/qmgrs/@SYSTEM /var/mqm/qmgrs/@SYSTEM
#
This restriction is required because WebSphere MQ uses keys to build internal control structures. These keys are derived from the ftok() function call and need to be unique on each node. Mounting /var/mqm as a Global File System, with a symbolic link for /var/mqm/qmgrs/@SYSTEM to a Local File System, ensures that any derived shared memory segment keys are unique on each node.
If your Queue Managers were created before you set up a symbolic link for /var/mqm/qmgrs/@SYSTEM, you must stop all Queue Managers and then copy the contents, with permissions, of /var/mqm/qmgrs/@SYSTEM to /var/mqm_local/qmgrs/@SYSTEM before creating the symbolic link.
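A minimal sketch of that copy, assuming the /var/mqm_local location used above and a single Queue Manager named qmgr1 (repeat the endmqm step for every Queue Manager, and adapt names and paths to your installation):

# endmqm -i qmgr1
# mkdir -p /var/mqm_local/qmgrs
# cp -rp /var/mqm/qmgrs/@SYSTEM /var/mqm_local/qmgrs
# rm -rf /var/mqm/qmgrs/@SYSTEM
# ln -s /var/mqm_local/qmgrs/@SYSTEM /var/mqm/qmgrs/@SYSTEM
#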
Mounting /var/mqm as a Failover File System – If you intend to install only one WebSphere MQ Manager, then you can mount /var/mqm as a Failover File System. However, we recommend that you still mount /var/mqm as a Global File System so that you can install multiple WebSphere MQ Managers in the future.
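For reference only, such a single Queue Manager layout could use a vfstab entry in the style of Example 1–1, with /var/mqm itself on a Failover File System (the metadevice names below are placeholders):

/dev/md/dg_d3/dsk/d30 /dev/md/dg_d3/rdsk/d30 /var/mqm ufs 4 no logging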
Multiple WebSphere MQ Managers with Failover File Systems – If you are installing multiple WebSphere MQ Managers, you must mount /var/mqm as a Global File System, as described earlier. However, the data files for each Queue Manager can be mounted as Failover File Systems through a symbolic link from /var/mqm to the Failover File System. Refer to Example 1–1.
Multiple WebSphere MQ Managers with Global File Systems – If you are installing multiple WebSphere MQ Managers, you must mount /var/mqm as a Global File System, as described earlier. However, the data files for each Queue Manager can be mounted as Global File Systems. Refer to Example 1–2.
Installing WebSphere MQ onto Cluster File Systems – Initially, the WebSphere MQ product is installed into /opt/mqm and /var/mqm. When a WebSphere MQ Manager is created, the default directory locations created are /var/mqm/qmgrs/<qmgr_name> and /var/mqm/log/<qmgr_name>. On all nodes within Sun Cluster that will run WebSphere MQ, you must mount these locations as either Failover File Systems or Global File Systems before you pkgadd mqm.
Example 1–1 shows two WebSphere MQ Managers with Failover File Systems. /var/mqm is mounted, via a symbolic link, as a Global File System. A subset of the /etc/vfstab entries for WebSphere MQ is shown.
# ls -l /var/mqm
lrwxrwxrwx   1 root     other         11 Sep 17 16:53 /var/mqm -> /global/mqm
#
# ls -l /global/mqm/qmgrs
total 6
lrwxrwxrwx   1 root     other        512 Sep 17 09:57 @SYSTEM -> /var/mqm_local/qmgrs/@SYSTEM
lrwxrwxrwx   1 root     other         22 Sep 17 17:19 qmgr1 -> /local/mqm/qmgrs/qmgr1
lrwxrwxrwx   1 root     other         22 Sep 17 17:19 qmgr2 -> /local/mqm/qmgrs/qmgr2
#
# ls -l /global/mqm/log
total 4
lrwxrwxrwx   1 root     other         20 Sep 17 17:18 qmgr1 -> /local/mqm/log/qmgr1
lrwxrwxrwx   1 root     other         20 Sep 17 17:19 qmgr2 -> /local/mqm/log/qmgr2
#
# more /etc/vfstab (Subset of the output)
/dev/md/dg_d3/dsk/d30 /dev/md/dg_d3/rdsk/d30 /global/mqm            ufs 3 yes logging,global
/dev/md/dg_d3/dsk/d33 /dev/md/dg_d3/rdsk/d33 /local/mqm/qmgrs/qmgr1 ufs 4 no  logging
/dev/md/dg_d3/dsk/d36 /dev/md/dg_d3/rdsk/d36 /local/mqm/log/qmgr1   ufs 4 no  logging
/dev/md/dg_d4/dsk/d43 /dev/md/dg_d4/rdsk/d43 /local/mqm/qmgrs/qmgr2 ufs 4 no  logging
/dev/md/dg_d4/dsk/d46 /dev/md/dg_d4/rdsk/d46 /local/mqm/log/qmgr2   ufs 4 no  logging
#
Example 1–2 shows two WebSphere MQ Managers with Global File Systems. /var/mqm is mounted, via a symbolic link, as a Global File System. A subset of the /etc/vfstab entries for WebSphere MQ is shown.
# ls -l /var/mqm
lrwxrwxrwx   1 root     other         11 Jan  8 14:17 /var/mqm -> /global/mqm
#
# ls -l /global/mqm/qmgrs
total 6
lrwxrwxrwx   1 root     other        512 Dec 16 09:57 @SYSTEM -> /var/mqm_local/qmgrs/@SYSTEM
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr1
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr2
#
# ls -l /global/mqm/log
total 4
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr1
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr2
#
# more /etc/vfstab (Subset of the output)
/dev/md/dg_d4/dsk/d40 /dev/md/dg_d4/rdsk/d40 /global/mqm             ufs 3 yes logging,global
/dev/md/dg_d4/dsk/d43 /dev/md/dg_d4/rdsk/d43 /global/mqm/qmgrs/qmgr1 ufs 4 yes logging,global
/dev/md/dg_d4/dsk/d46 /dev/md/dg_d4/rdsk/d46 /global/mqm/log/qmgr1   ufs 4 yes logging,global
/dev/md/dg_d5/dsk/d53 /dev/md/dg_d5/rdsk/d53 /global/mqm/qmgrs/qmgr2 ufs 4 yes logging,global
/dev/md/dg_d5/dsk/d56 /dev/md/dg_d5/rdsk/d56 /global/mqm/log/qmgr2   ufs 4 yes logging,global
The requirements in this section apply to Sun Cluster HA for WebSphere MQ only. You must meet these requirements before you proceed with your Sun Cluster HA for WebSphere MQ installation and configuration.
Your data service configuration might not be supported if you do not adhere to these requirements.
WebSphere MQ components and their dependencies – You can configure the Sun Cluster HA for WebSphere MQ data service to protect a WebSphere MQ instance and its respective components. These components and their dependencies are described in Table 1–3.
Table 1–3 WebSphere MQ components and their dependencies (via -> symbol)
| Component | Description |
|---|---|
| Queue Manager (Mandatory) | -> SUNW.HAStoragePlus resource. The SUNW.HAStoragePlus resource manages the WebSphere MQ file system mount points and ensures that WebSphere MQ is not started until these are mounted. |
| Channel Initiator (Optional) | -> Queue_Manager and Listener resources. Dependency on the Listener is required only if runmqlsr is used instead of inetd. By default, a channel initiator is started by WebSphere MQ. However, if you want a different or additional channel initiation queue, other than the default (SYSTEM.CHANNEL.INITQ), you should deploy this component. |
| Command Server (Optional) | -> Queue_Manager and Listener resources. Dependency on the Listener is required only if runmqlsr is used instead of inetd. Deploy this component if you want WebSphere MQ to process commands sent to the command queue. |
| Listener (Optional) | -> Queue_Manager resource. Deploy this component if you want a dedicated listener (runmqlsr) and will not use the inetd listener. |
| Trigger Monitor (Optional) | -> Queue_Manager and Listener resources. Dependency on the Listener is required only if runmqlsr is used instead of inetd. Deploy this component if you want a trigger monitor. |
For detailed information about these WebSphere MQ components, refer to IBM's WebSphere MQ Application Programming manual.
Each WebSphere MQ component has a configuration and registration file in /opt/SUNWscmqs/xxx/util, where xxx is a three-character abbreviation for the respective WebSphere MQ component. These files allow you to register the WebSphere MQ components with Sun Cluster.
Within these files, the appropriate dependencies have been applied.
# cd /opt/SUNWscmqs
#
# ls -l chi/util
total 4
-rwxr-xr-x   1 root     sys          720 Dec 20 14:44 chi_config
-rwxr-xr-x   1 root     sys          586 Dec 20 14:44 chi_register
#
# ls -l csv/util
total 4
-rwxr-xr-x   1 root     sys          645 Dec 20 14:44 csv_config
-rwxr-xr-x   1 root     sys          562 Dec 20 14:44 csv_register
#
# ls -l lsr/util
total 4
-rwxr-xr-x   1 root     sys          640 Dec 20 14:44 lsr_config
-rwxr-xr-x   1 root     sys          624 Dec 20 14:44 lsr_register
#
# ls -l mgr/util
total 4
-rwxr-xr-x   1 root     sys          603 Dec 20 14:44 mgr_config
-rwxr-xr-x   1 root     sys          515 Dec 20 14:44 mgr_register
#
# ls -l trm/util
total 4
-rwxr-xr-x   1 root     sys          717 Dec 20 14:44 trm_config
-rwxr-xr-x   1 root     sys          586 Dec 20 14:44 trm_register
#
# more mgr/util/*
::::::::::::::
mgr/util/mgr_config
::::::::::::::
#
# Copyright 2003 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# This file will be sourced in by mgr_register and the parameters
# listed below will be used.
#
# These parameters can be customized in (key=value) form
#
#       RS - name of the resource for the application
#       RG - name of the resource group containing RS
#       QMGR - name of the Queue Manager
#       PORT - name of the Queue Manager port number
#       LH - name of the LogicalHostname SC resource
#       HAS_RS - name of the Queue Manager HAStoragePlus SC resource
#       CLEANUP - Cleanup IPC entries YES or NO (Default CLEANUP=YES)
#
# Under normal shutdown and startup WebSphere MQ manages it's
# cleanup of IPC resources with the following fix packs.
#
#       MQSeries v5.2 Fix Pack 07 (CSD07)
#       WebSphere MQ v5.3 Fix Pack 04 (CSD04)
#
# Please refer to APAR number IY38428.
#
# However, while running in a failover environment, the IPC keys
# that get generated will be different between nodes. As a result
# after a failover of a Queue Manager, some shared memory segments
# can remain allocated on the node although not used.
#
# Although this does not cause WebSphere MQ a problem when starting
# or stopping (with the above fix packs applied), it can deplete
# the available swap space and in extreme situations a node may
# run out of swap space.
#
# To resolve this issue, setting CLEANUP=YES will ensure that
# IPC shared memory segments for WebSphere MQ are removed whenever
# a Queue Manager is stopped. However IPC shared memory segments
# are only removed under strict conditions, namely
#
#       - The shared memory segment(s) are owned by
#         CREATOR=mqm and CGROUP=mqm
#       - The shared memory segment has no attached processes
#       - The CPID and LPID process ids are not running
#       - The shared memory removal is performed by userid mqm
#
# Setting CLEANUP=NO will not remove any shared memory segments.
#
# Setting CLEANUP=YES will cleanup shared memory segments under the
# conditions described above.
#
RS=
RG=
QMGR=
PORT=
LH=
HAS_RS=
CLEANUP=YES
::::::::::::::
mgr/util/mgr_register
::::::::::::::
#
# Copyright 2003 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#

. `dirname $0`/mgr_config

scrgadm -a -j $RS -g $RG -t SUNW.gds \
        -x Start_command="/opt/SUNWscmqs/mgr/bin/start-qmgr \
        -R $RS -G $RG -Q $QMGR -C $CLEANUP " \
        -x Stop_command="/opt/SUNWscmqs/mgr/bin/stop-qmgr \
        -R $RS -G $RG -Q $QMGR -C $CLEANUP " \
        -x Probe_command="/opt/SUNWscmqs/mgr/bin/test-qmgr \
        -R $RS -G $RG -Q $QMGR -C $CLEANUP " \
        -y Port_list=$PORT/tcp -y Network_resources_used=$LH \
        -x Stop_signal=9 \
        -y Resource_dependencies=$HAS_RS
#
WebSphere MQ Manager protection – WebSphere MQ is unable to determine whether a Queue Manager is already running on another node within Sun Cluster if Global File Systems are being used for the WebSphere MQ instance, that is, /global/mqm/qmgrs/<qmgr> and /global/mqm/log/<qmgr>.
Under normal conditions, the Sun Cluster HA for WebSphere MQ data service manages the startup and shutdown of the Queue Manager, regardless of which type of Cluster File System is being used (Failover File System or Global File System).
However, it is possible that someone could manually start the Queue Manager on another node within Sun Cluster if the WebSphere MQ instance is running on a Global File System.
This has been reported to IBM and a fix is being worked on.
To protect against this happening, two options are available.
Use Failover File Systems for the WebSphere MQ instance
This is the recommended approach because the WebSphere MQ instance files would be mounted only on one node at a time. With this configuration, WebSphere MQ is able to determine whether the Queue Manager is running.
Create symbolic links for strmqm and endmqm to the provided check-start script.
The script /opt/SUNWscmqs/mgr/bin/check-start provides a mechanism to prevent the WebSphere MQ Manager from being started or stopped outside of Sun Cluster.
The check-start script will verify that the WebSphere MQ Manager is being started or stopped by Sun Cluster and will report an error if an attempt is made to start or stop the WebSphere MQ Manager manually.
Example 1–4 shows a manual attempt to start the WebSphere MQ Manager. The response was generated by the check-start script.
# strmqm qmgr1
Request to run </usr/bin/strmqm qmgr1> within SC3.0 has been refused
#
This solution is required only if you require a Global File System for the WebSphere MQ instance. Example 1–5 details the steps that you must take to achieve this.
# cd /opt/mqm/bin
#
# mv strmqm strmqm_sc3
# mv endmqm endmqm_sc3
#
# ln -s /opt/SUNWscmqs/mgr/bin/check-start strmqm
# ln -s /opt/SUNWscmqs/mgr/bin/check-start endmqm
#
Edit the /opt/SUNWscmqs/mgr/etc/config file and change the entries for START_COMMAND and STOP_COMMAND as follows. In this example, we have chosen to suffix the command names with _sc3; you can choose another name.
# cat /opt/SUNWscmqs/mgr/etc/config
#
# Copyright 2003 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# Usage:
#       DEBUG=<RESOURCE_NAME> or ALL
#       START_COMMAND=/opt/mqm/bin/<renamed_strmqm_program>
#       STOP_COMMAND=/opt/mqm/bin/<renamed_endmqm_program>
#
DEBUG=
START_COMMAND=/opt/mqm/bin/strmqm_sc3
STOP_COMMAND=/opt/mqm/bin/endmqm_sc3
#
The above steps need to be done on each node within the cluster that will host the Sun Cluster HA for WebSphere MQ data service. Do not perform this procedure until you have created your Queue Manager(s), because crtmqm calls strmqm and endmqm itself.
If you implement this workaround, then you must back it out whenever you need to apply any maintenance to WebSphere MQ. Afterwards, you would need to reapply this workaround. The recommended approach is to use Failover File Systems for the WebSphere MQ instance, until a fix has been made to WebSphere MQ.
This section contains the procedures you need to install and configure WebSphere MQ.
Determine how WebSphere MQ will be deployed in Sun Cluster.
Determine how many WebSphere MQ instances will be deployed.
Determine which Cluster File System will be used by each WebSphere MQ instance.
Mount WebSphere MQ Cluster File Systems.
If Failover File Systems will be used by the WebSphere MQ instance, you must mount these manually.
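As an illustration only (using the hypothetical qmgr1 Failover File Systems from Example 1–1, which already have /etc/vfstab entries), the manual mounts on the node that will initially host the Queue Manager might look like the following. Global File Systems, by contrast, are mounted automatically at boot.

# mount /local/mqm/qmgrs/qmgr1
# mount /local/mqm/log/qmgr1
#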
Install WebSphere MQ onto all nodes within Sun Cluster. It is recommended that you install WebSphere MQ onto local disks. For a discussion of the advantages and disadvantages of installing the software on a local versus a cluster file system, see “Determining the Location of the Application Binaries” on page 3 of the Sun Cluster Data Services Installation and Configuration Guide.
Install WebSphere MQ onto all nodes within Sun Cluster that will run WebSphere MQ, regardless of the location of the application binaries. This is required because the pkgadd for WebSphere MQ additionally sets up several symbolic links on the host.
Follow IBM's WebSphere MQ for Sun Solaris — Quick Beginnings manual to install WebSphere MQ.
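The details are in the Quick Beginnings manual. As a hedged sketch only, the core steps on each node typically include creating the mqm user and group and then adding the mqm package from your installation media (the media path shown is a placeholder):

# groupadd mqm
# useradd -g mqm -d /var/mqm mqm
# pkgadd -d /cdrom/cdrom0 mqm
#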
Create your WebSphere MQ Manager(s).
WebSphere MQ V5.3 has a bug that occurs when you use the default setting, LogDefaultPath=/var/mqm/log, while issuing crtmqm to create your WebSphere MQ Manager. In this case, the crtmqm command displays AMQ7064: Log path not valid or inaccessible.
To work around this, specify the -ld parameter when creating the WebSphere MQ Manager, for example, crtmqm -ld /global/mqm/log/<qmgr> <qmgr>
This will cause another <qmgr> directory to appear, that is /global/mqm/log/<qmgr>/<qmgr>. However, it overcomes this bug.
# crtmqm qmgr1
AMQ7064: Log path not valid or inaccessible.
#
# crtmqm -ld /global/mqm/log/qmgr1 qmgr1
WebSphere MQ queue manager created.
Creating or replacing default objects for qmgr1.
Default objects statistics : 31 created. 0 replaced. 0 failed.
Completing setup.
Setup completed.
#
# cd /global/mqm/log/qmgr1
#
# ls -l
total 2
drwxrwx---   3 mqm      mqm          512 Jan 10 11:44 qmgr1
#
# cd qmgr1
#
# ls -l
total 12
drwxrwx---   2 mqm      mqm          512 Jan 10 11:44 active
-rw-rw----   1 mqm      mqm         4460 Jan 10 11:44 amqhlctl.lfh
#
# pwd
/global/mqm/log/qmgr1/qmgr1
#
# cd /global/mqm/qmgrs/qmgr1
#
# more qm.ini
#*******************************************************************#
#* Module Name: qm.ini                                              *#
#* Type        : MQSeries queue manager configuration file          *#
#  Function    : Define the configuration of a single queue manager *#
#*                                                                  *#
#*******************************************************************#
#* Notes       :                                                    *#
#* 1) This file defines the configuration of the queue manager     *#
#*                                                                  *#
#*******************************************************************#
ExitPath:
   ExitsDefaultPath=/var/mqm/exits/
#*                                                                  *#
#*                                                                  *#
Log:
   LogPrimaryFiles=3
   LogSecondaryFiles=2
   LogFilePages=1024
   LogType=CIRCULAR
   LogBufferPages=0
   LogPath=/global/mqm/log/qmgr1/qmgr1/
   LogWriteIntegrity=TripleWrite
Service:
   Name=AuthorizationService
   EntryPoints=10
ServiceComponent:
   Service=AuthorizationService
   Name=MQSeries.UNIX.auth.service
   Module=/opt/mqm/lib/amqzfu
   ComponentDataSize=0
#
This bug, of having to specify the -ld parameter when LogDefaultPath=/var/mqm/log is being used, has been reported to IBM and a fix is being worked on.
This section contains the procedure you need to verify the installation and configuration.
This procedure does not verify that your application is highly available because you have not installed your data service yet.
Start the WebSphere MQ Manager, and check the installation.
# su - mqm
Sun Microsystems Inc.   SunOS 5.8       Generic February 2000
$ strmqm qmgr1
WebSphere MQ queue manager 'qmgr1' started.
$
$ runmqsc qmgr1
5724-B41 (C) Copyright IBM Corp. 1994, 2002.  ALL RIGHTS RESERVED.
Starting WebSphere MQ script Commands.

def ql(test) defpsist(yes)
     1 : def ql(test) defpsist(yes)
AMQ8006: WebSphere MQ queue created.
end
     2 : end
One MQSC command read.
No commands have a syntax error.
All valid MQSC commands were processed.
$
$ /opt/mqm/samp/bin/amqsput TEST qmgr1
Sample AMQSPUT0 start
target queue is TEST
test test test test test test test

Sample AMQSPUT0 end
$
$ /opt/mqm/samp/bin/amqsget TEST qmgr1
Sample AMQSGET0 start
message <test test test test test test test>
^C$
$
$ runmqsc qmgr1
5724-B41 (C) Copyright IBM Corp. 1994, 2002.  ALL RIGHTS RESERVED.
Starting WebSphere MQ script Commands.

delete ql(test)
     1 : delete ql(test)
AMQ8007: WebSphere MQ queue deleted.
end
     2 : end
One MQSC command read.
No commands have a syntax error.
All valid MQSC commands were processed.
$
Stop the WebSphere MQ Manager.
# su - mqm
Sun Microsystems Inc.   SunOS 5.8       Generic February 2000
$
$ endmqm -i qmgr1
WebSphere MQ queue manager 'qmgr1' ending.
WebSphere MQ queue manager 'qmgr1' ended.
$
If you did not install the Sun Cluster HA for WebSphere MQ packages during your Sun Cluster installation, perform this procedure to install the packages. Perform this procedure on each cluster node where you are installing the Sun Cluster HA for WebSphere MQ packages. To complete this procedure, you need the Sun Java Enterprise System Accessory CD Volume 3.
If you are installing more than one data service simultaneously, perform the procedure in “Installing the Software” in Sun Cluster 3.1 10/03 Software Installation Guide.
Install the Sun Cluster HA for WebSphere MQ packages using one of the following installation tools:
The Web Start program
The scinstall utility
The Web Start program is not available in releases earlier than Sun Cluster 3.1 Data Services 10/03.
You can run the Web Start program with a command-line interface (CLI) or with a graphical user interface (GUI). The content and sequence of instructions in the CLI and the GUI are similar. For more information about the Web Start program, see the installer(1M) man page.
Become superuser on the cluster node where you are installing the Sun Cluster HA for WebSphere MQ packages.
(Optional) If you intend to run the Web Start program with a GUI, ensure that your DISPLAY environment variable is set.
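For example, in the Bourne shell (myworkstation:0.0 is a placeholder for your own display):

# DISPLAY=myworkstation:0.0
# export DISPLAY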
Load the Sun Java Enterprise System Accessory CD Volume 3 into the CD-ROM drive.
If the Volume Management daemon vold(1M) is running and configured to manage CD-ROM devices, it automatically mounts the CD-ROM on the /cdrom/cdrom0 directory.
Change to the Sun Cluster HA for WebSphere MQ component directory of the CD-ROM.
The Web Start program for the Sun Cluster HA for WebSphere MQ data service resides in this directory.
# cd /cdrom/cdrom0/components/SunCluster_HA_MQS_3.1
Start the Web Start program.
# ./installer
When you are prompted, select the type of installation.
Follow instructions on the screen to install the Sun Cluster HA for WebSphere MQ packages on the node.
After the installation is finished, the Web Start program provides an installation summary. This summary enables you to view logs that the Web Start program created during the installation. These logs are located in the /var/sadm/install/logs directory.
Exit the Web Start program.
Unload the Sun Java Enterprise System Accessory CD Volume 3 from the CD-ROM drive.
Use this procedure to install the Sun Cluster HA for WebSphere MQ packages. You need the Sun Java Enterprise System Accessory CD Volume 3 to perform this procedure. This procedure assumes that you did not install the data service packages during your initial Sun Cluster installation.
If you installed the Sun Cluster HA for WebSphere MQ packages as part of your initial Sun Cluster installation, proceed to Registering and Configuring Sun Cluster HA for WebSphere MQ.
Otherwise, use this procedure to install the Sun Cluster HA for WebSphere MQ packages. Perform this procedure on all nodes that can run Sun Cluster HA for WebSphere MQ.
Load the Sun Java Enterprise System Accessory CD Volume 3 into the CD-ROM drive.
Run the scinstall utility with no options.
This step starts the scinstall utility in interactive mode.
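For example (assuming /usr/cluster/bin, where scinstall resides, is in your PATH):

# scinstall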
Choose the menu option, Add Support for New Data Service to This Cluster Node.
The scinstall utility prompts you for additional information.
Provide the path to the Sun Java Enterprise System Accessory CD Volume 3.
The utility refers to the CD as the “data services cd.”
Specify the data service to install.
The scinstall utility lists the data service that you selected and asks you to confirm your choice.
Exit the scinstall utility.
Unload the CD from the drive.
This section contains the procedures you need to configure Sun Cluster HA for WebSphere MQ.
Use this procedure to configure Sun Cluster HA for WebSphere MQ as a failover data service. This procedure assumes that you installed the data service packages during your Sun Cluster installation.
If you did not install the Sun Cluster HA for WebSphere MQ packages as part of your initial Sun Cluster installation, go to How to Install the Sun Cluster HA for WebSphere MQ Packages by Using the scinstall Utility.
Become superuser on one of the nodes in the cluster that will host WebSphere MQ.
Register the SUNW.gds resource type.
# scrgadm -a -t SUNW.gds
Register the SUNW.HAStoragePlus resource type.
# scrgadm -a -t SUNW.HAStoragePlus
Create a failover resource group.
# scrgadm -a -g WebSphere MQ-failover-resource-group
Create a resource for the WebSphere MQ Disk Storage.
# scrgadm -a -j WebSphere MQ-has-resource \
-g WebSphere MQ-failover-resource-group \
-t SUNW.HAStoragePlus \
-x FilesystemMountPoints=WebSphere MQ-instance-mount-points
Create a resource for the WebSphere MQ Logical Hostname.
# scrgadm -a -L -j WebSphere MQ-lh-resource \
-g WebSphere MQ-failover-resource-group \
-l WebSphere MQ-logical-hostname
Enable the failover resource group that now includes the WebSphere MQ Disk Storage and Logical Hostname resources.
# scswitch -Z -g WebSphere MQ-failover-resource-group
Create and register each required WebSphere MQ component.
Perform this step for the Queue Manager component (mgr), and repeat for each of the optional WebSphere MQ components that you use, replacing mgr with one of the following:
chi - Channel Initiator
csv - Command Server
lsr - Dedicated Listener
trm - Trigger monitor
The lsr component allows for multiple ports. If you require more than one port, specify the port numbers for the PORT parameter within /opt/SUNWscmqs/lsr/util/lsr_config, separated by /. This causes the lsr component to start multiple runmqlsr programs, one for each port entry.
The trm component allows for multiple trigger monitors. You must set the TRMQ parameter to file within /opt/SUNWscmqs/trm/util/trm_config before you run /opt/SUNWscmqs/trm/util/trm_register. This causes the trm component to start multiple trigger monitor entries from /opt/SUNWscmqs/trm/etc/<qmgr>_trm_queues, which must contain the trigger monitor queue names, where <qmgr> is the name of your Queue Manager. You must create this file on each node within Sun Cluster that will run Sun Cluster HA for WebSphere MQ. Alternatively, this could be a symbolic link to a Global File System.
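To illustrate these two settings (the port numbers and queue names below are hypothetical; only the PORT and TRMQ parameter names and the <qmgr>_trm_queues file location come from the configuration described above):

# grep ^PORT /opt/SUNWscmqs/lsr/util/lsr_config
PORT=1414/1415
# grep ^TRMQ /opt/SUNWscmqs/trm/util/trm_config
TRMQ=file
# cat /opt/SUNWscmqs/trm/etc/qmgr1_trm_queues
SYSTEM.DEFAULT.INITIATION.QUEUE
APP1.INITQ
#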
# cd /opt/SUNWscmqs/mgr/util
Edit the mgr_config file and follow the comments within that file, for example:
# These parameters can be customized in (key=value) form
#
#       RS - name of the resource for the application
#       RG - name of the resource group containing RS
#       QMGR - name of the Queue Manager
#       PORT - name of the Queue Manager port number
#       LH - name of the LogicalHostname SC resource
#       HAS_RS - name of the Queue Manager HAStoragePlus SC resource
#       CLEANUP - Cleanup IPC entries YES or NO (Default CLEANUP=YES)
#
# Under normal shutdown and startup WebSphere MQ manages it's
# cleanup of IPC resources with the following fix packs.
#
#       MQSeries v5.2 Fix Pack 07 (CSD07)
#       WebSphere MQ v5.3 Fix Pack 04 (CSD04)
#
# Please refer to APAR number IY38428.
#
# However, while running in a failover environment, the IPC keys
# that get generated will be different between nodes. As a result
# after a failover of a Queue Manager, some shared memory segments
# can remain allocated on the node although not used.
#
# Although this does not cause WebSphere MQ a problem when starting
# or stopping (with the above fix packs applied), it can deplete
# the available swap space and in extreme situations a node may
# run out of swap space.
#
# To resolve this issue, setting CLEANUP=YES will ensure that
# IPC shared memory segments for WebSphere MQ are removed whenever
# a Queue Manager is stopped. However IPC shared memory segments
# are only removed under strict conditions, namely
#
#       - The shared memory segment(s) are owned by
#         CREATOR=mqm and CGROUP=mqm
#       - The shared memory segment has no attached processes
#       - The CPID and LPID process ids are not running
#       - The shared memory removal is performed by userid mqm
#
# Setting CLEANUP=NO will not remove any shared memory segments.
#
# Setting CLEANUP=YES will cleanup shared memory segments under the
# conditions described above.
#
The following is an example for WebSphere MQ Manager qmgr1.
RS=wmq-qmgr-res
RG=wmq-rg
QMGR=qmgr1
PORT=1414
LH=wmq-lh-res
HAS_RS=wmq-has-res
CLEANUP=YES
After editing mgr_config, register the resource.
# ./mgr_register
Enable WebSphere MQ Manager protection (if required).
You should implement WebSphere MQ Manager protection only if you have deployed WebSphere MQ onto a Global File System. Refer to Configuration Requirements, and in particular to Example 1–5, for details on implementing WebSphere MQ Manager protection. Otherwise, skip to the next step.
You must repeat this on each node within Sun Cluster that will host Sun Cluster HA for WebSphere MQ.
Enable each WebSphere MQ resource.
Repeat this step for each WebSphere MQ component as in the previous step.
# scstat
# scswitch -e -j WebSphere MQ-resource
This section contains the procedure you need to verify that you installed and configured your data service correctly.
Become superuser on one of the nodes in the cluster that will host WebSphere MQ.
Ensure all the WebSphere MQ resources are online with scstat.
# scstat
For each WebSphere MQ resource that is not online, use the scswitch command as follows:
# scswitch -e -j WebSphere MQ-resource
Run the scswitch command to switch the WebSphere MQ resource group to another cluster node, such as node2.
# scswitch -z -g WebSphere MQ-failover-resource-group -h node2
This section describes the Sun Cluster HA for WebSphere MQ fault monitor's probing algorithm and functionality, and states the conditions, messages, and recovery actions associated with unsuccessful probing.
For conceptual information on fault monitors, see the Sun Cluster Concepts Guide.
Sun Cluster HA for WebSphere MQ fault monitor uses the same resource properties as resource type SUNW.gds. Refer to the SUNW.gds(5) man page for a complete list of resource properties used.
WebSphere MQ Manager
Sleeps for Thorough_probe_interval.
Connects to the Queue Manager, creates a temporary dynamic queue, puts a message to the queue, and then disconnects from the Queue Manager. If this fails, then the probe will restart the Queue Manager.
If all Queue Manager processes have died, pmf will interrupt the probe to immediately restart the Queue Manager.
If the Queue Manager is repeatedly restarted and subsequently exhausts the Retry_count within the Retry_interval, then a failover is initiated for the Resource Group onto another node.
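The probe for the Queue Manager is implemented by /opt/SUNWscmqs/mgr/bin/test-qmgr, which mgr_register configures as the Probe_command. As an illustration only, you can approximate a similar health check by hand by connecting to the Queue Manager and issuing a command through runmqsc; this is not the data service's probe code, which drives the connect, put, and disconnect cycle through the WebSphere MQ API:

# su - mqm
$ echo "dis qmgr" | runmqsc qmgr1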
Other WebSphere MQ components (chi, csv & trm)
The probing algorithm and functionality for the Channel Initiator, Command Server and Trigger Monitor all behave the same. Therefore the following text simply refers to these components as resource.
Sleeps for Thorough_probe_interval.
Dependent on the Queue Manager, if the Queue Manager fails the resource will fail and get restarted after the Queue Manager is available again.
If the resource has died, pmf will interrupt the probe to immediately restart the process.
If the resource is repeatedly restarted and subsequently exhausts the Retry_count within the Retry_interval then a failover is not initiated onto another node because Failover_enabled=FALSE has been set. The resource will be restarted.
WebSphere MQ Listener
Sleeps for Thorough_probe_interval.
Checks whether the runmqlsr process associated with the Queue Manager and Port is running.
The listener can accommodate several port numbers under the same pmftag. If a listener for a particular port is found to be missing, the probe will initiate a restart of that listener without affecting the other listeners.
Although the resource can accommodate several listeners, all listeners would need to fail before the resource is restarted. This provides a granular restart mechanism for a resource that has several listeners running.
If the resource is repeatedly restarted and subsequently exhausts the Retry_count within the Retry_interval, then a failover is not initiated onto another node because Failover_enabled=FALSE has been set. The resource will be restarted.
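As an illustration of what the probe looks for (the process details shown are hypothetical), you can confirm by hand that a runmqlsr process exists for the expected Queue Manager and port:

# ps -ef | grep runmqlsr | grep -v grep
     mqm  2147     1  0 10:15:02 ?        0:00 /opt/mqm/bin/runmqlsr -t tcp -p 1414 -m qmgr1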
Sun Cluster HA for WebSphere MQ can be used by multiple WebSphere MQ instances. You can turn on debug for all WebSphere MQ instances or for a particular WebSphere MQ instance, as described below.
Each WebSphere MQ component has a DEBUG file in /opt/SUNWscmqs/xxx/etc, where xxx is a three-character abbreviation for the respective WebSphere MQ component.
These files allow you to turn on debug for all WebSphere MQ instances or for a specific WebSphere MQ instance on a particular node within Sun Cluster. If you require debug to be turned on for Sun Cluster HA for WebSphere MQ across the whole Sun Cluster, repeat this step on all nodes within Sun Cluster.
Perform this step for the Queue Manager component (mgr), then repeat for each of the optional WebSphere MQ components that requires debug output, on each node of Sun Cluster as required.
Edit /etc/syslog.conf and change daemon.notice to daemon.debug
# grep daemon /etc/syslog.conf
*.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
*.alert;kern.err;daemon.err                     operator
#
Change the daemon.notice to daemon.debug and restart syslogd. The output below, from the command grep daemon /etc/syslog.conf, shows that daemon.debug has been set.
# grep daemon /etc/syslog.conf
*.err;kern.debug;daemon.debug;mail.crit         /var/adm/messages
*.alert;kern.err;daemon.err                     operator
#
# pkill -1 syslogd
#
Edit /opt/SUNWscmqs/mgr/etc/config and change DEBUG= to DEBUG=ALL or DEBUG=resource.
# cat /opt/SUNWscmqs/mgr/etc/config
#
# Copyright 2003 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# Usage:
#       DEBUG=<RESOURCE_NAME> or ALL
#       START_COMMAND=/opt/mqm/bin/<renamed_strmqm_program>
#       STOP_COMMAND=/opt/mqm/bin/<renamed_endmqm_program>
#
DEBUG=ALL
START_COMMAND=
STOP_COMMAND=
#
To turn off debug, reverse the steps above.