This section contains the information you need to plan your Sun Cluster HA for WebSphere MQ installation and configuration.
Your data service configuration might not be supported if you do not observe these restrictions.
Use the restrictions in this section to plan the installation and configuration of Sun Cluster HA for WebSphere MQ. This section lists the software and hardware configuration restrictions that apply to Sun Cluster HA for WebSphere MQ only.
For restrictions that apply to all data services, see the Sun Cluster Release Notes.
The Sun Cluster HA for WebSphere MQ data service can be configured only as a failover service – WebSphere MQ cannot operate as a scalable service, so the Sun Cluster HA for WebSphere MQ data service must run as a failover service.
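As a minimal sketch, the failover resource group that will later contain the WebSphere MQ resources might be created as follows. The resource group name wmq-rg and logical hostname wmq-lh are illustrative only and must match your own environment:

# scrgadm -a -g wmq-rg
# scrgadm -a -L -g wmq-rg -l wmq-lh
# scswitch -Z -g wmq-rg
#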
Mount /var/mqm as a Global File System – If you intend to install multiple instances of WebSphere MQ, then /var/mqm must be mounted as a Global File System.
This restriction is required because WebSphere MQ uses keys to build internal control structures. These keys are derived from the ftok() function call and are based on inode numbers within a file system. If the WebSphere MQ instances reside on different file systems, their inode numbers can coincide and key clashes can occur. Mounting /var/mqm as a Global File System avoids this issue.
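As an illustration, each UFS file system numbers its inodes independently, so the mount points of two separate Failover File Systems typically report the same inode number. The paths and output below are hypothetical:

# ls -id /local/mqm/qmgrs/qmgr1 /local/mqm/qmgrs/qmgr2
         2 /local/mqm/qmgrs/qmgr1
         2 /local/mqm/qmgrs/qmgr2

Duplicate inode numbers across file systems, as shown here, are the condition that this restriction guards against.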
If you do not intend to install multiple instances of WebSphere MQ, then you do not need to mount /var/mqm as a Global File System. However, we recommend that you still do so, because if you later deploy another WebSphere MQ Manager you will have to meet this configuration restriction anyway.
Installing WebSphere MQ onto Cluster File Systems – Initially, the WebSphere MQ product is installed into /opt/mqm and /var/mqm. However, whenever a WebSphere MQ Manager is created, the default directory locations are /var/mqm/qmgrs/<qmgr_name> and /var/mqm/log/<qmgr_name>. These locations can be mounted as either Failover File Systems (FFS) or Global File Systems (GFS).
If a WebSphere MQ Manager is to be deployed onto a Failover File System and you intend to deploy multiple WebSphere MQ Managers, then you must create symbolic links from the Global File System /var/mqm to the Failover File System. Local nested mounts onto a Global File System are not allowed; however, symbolic links to a Failover File System overcome this restriction.
It is considered best practice to mount Global File Systems with the /global prefix and Failover File Systems with the /local prefix. However, this is a convention only and is not enforced.
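For example, with /var/mqm linked to /global/mqm, the symbolic links for a Queue Manager named qmgr1 could be created as follows (the /local mount points must match your own vfstab entries):

# ln -s /local/mqm/qmgrs/qmgr1 /global/mqm/qmgrs/qmgr1
# ln -s /local/mqm/log/qmgr1 /global/mqm/log/qmgr1
#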
The following example shows two WebSphere MQ Managers with Failover File Systems and /var/mqm symbolically linked to a Global File System. The final output shows a subset of the /etc/vfstab entries for WebSphere MQ deployed using Solaris Volume Manager.
# ls -l /var/mqm
lrwxrwxrwx   1 root     other         11 Sep 17 16:53 /var/mqm -> /global/mqm
#
# ls -l /global/mqm/qmgrs
total 6
drwxrwxr-x   8 mqm      mqm          512 Sep 17 09:57 @SYSTEM
lrwxrwxrwx   1 root     other         22 Sep 17 17:19 qmgr1 -> /local/mqm/qmgrs/qmgr1
lrwxrwxrwx   1 root     other         22 Sep 17 17:19 qmgr2 -> /local/mqm/qmgrs/qmgr2
#
# ls -l /global/mqm/log
total 4
lrwxrwxrwx   1 root     other         20 Sep 17 17:18 qmgr1 -> /local/mqm/log/qmgr1
lrwxrwxrwx   1 root     other         20 Sep 17 17:19 qmgr2 -> /local/mqm/log/qmgr2
#
# more /etc/vfstab (Subset of the output)
/dev/md/dg_d3/dsk/d30 /dev/md/dg_d3/rdsk/d30 /global/mqm            ufs 3 yes logging,global
/dev/md/dg_d3/dsk/d33 /dev/md/dg_d3/rdsk/d33 /local/mqm/qmgrs/qmgr1 ufs 4 no  logging
/dev/md/dg_d3/dsk/d36 /dev/md/dg_d3/rdsk/d36 /local/mqm/log/qmgr1   ufs 4 no  logging
/dev/md/dg_d4/dsk/d43 /dev/md/dg_d4/rdsk/d43 /local/mqm/qmgrs/qmgr2 ufs 4 no  logging
/dev/md/dg_d4/dsk/d46 /dev/md/dg_d4/rdsk/d46 /local/mqm/log/qmgr2   ufs 4 no  logging
#
The following example shows two WebSphere MQ Managers with Global File Systems and /var/mqm symbolically linked to a Global File System. The final output shows a subset of the /etc/vfstab entries for WebSphere MQ deployed using Solaris Volume Manager.
# ls -l /var/mqm
lrwxrwxrwx   1 root     other         11 Jan  8 14:17 /var/mqm -> /global/mqm
#
# ls -l /global/mqm/qmgrs
total 6
drwxrwxr-x   8 mqm      mqm          512 Dec 16 09:57 @SYSTEM
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr1
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr2
#
# ls -l /global/mqm/log
total 4
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr1
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr2
#
# more /etc/vfstab (Subset of the output)
/dev/md/dg_d4/dsk/d40 /dev/md/dg_d4/rdsk/d40 /global/mqm             ufs 3 yes logging,global
/dev/md/dg_d4/dsk/d43 /dev/md/dg_d4/rdsk/d43 /global/mqm/qmgrs/qmgr1 ufs 4 yes logging,global
/dev/md/dg_d4/dsk/d46 /dev/md/dg_d4/rdsk/d46 /global/mqm/log/qmgr1   ufs 4 yes logging,global
/dev/md/dg_d5/dsk/d53 /dev/md/dg_d5/rdsk/d53 /global/mqm/qmgrs/qmgr2 ufs 4 yes logging,global
/dev/md/dg_d5/dsk/d56 /dev/md/dg_d5/rdsk/d56 /global/mqm/log/qmgr2   ufs 4 yes logging,global
#
Your data service configuration might not be supported if you do not adhere to these requirements.
Use the requirements in this section to plan the installation and configuration of Sun Cluster HA for WebSphere MQ. These requirements apply to Sun Cluster HA for WebSphere MQ only. You must meet these requirements before you proceed with your Sun Cluster HA for WebSphere MQ installation and configuration.
WebSphere MQ components and their dependencies — The Sun Cluster HA for WebSphere MQ data service can be configured to protect a WebSphere MQ instance and its respective components. These components, and the dependencies between them, are briefly described in Table 1–3 below.
Table 1–3 WebSphere MQ components and their dependencies (dependencies are shown with the -> symbol)

| Component | Description |
|---|---|
| Queue Manager (Mandatory) | -> SUNW.HAStoragePlus resource. The SUNW.HAStoragePlus resource manages the WebSphere MQ file system mount points and ensures that WebSphere MQ is not started until these are mounted. (An example registration of this resource appears below.) |
| Channel Initiator (Optional) | -> Queue_Manager and Listener resources. The dependency on the Listener is required only if runmqlsr is used instead of inetd. By default, a channel initiator is started by WebSphere MQ; deploy this component only if you require a different or an additional channel initiation queue other than the default (SYSTEM.CHANNEL.INITQ). |
| Command Server (Optional) | -> Queue_Manager and Listener resources. The dependency on the Listener is required only if runmqlsr is used instead of inetd. Deploy this component if you require WebSphere MQ to process commands sent to the command queue. |
| Listener (Optional) | -> Queue_Manager resource. Deploy this component if you require a dedicated listener (runmqlsr) and will not use the inetd listener. |
| Trigger Monitor (Optional) | -> Queue_Manager and Listener resources. The dependency on the Listener is required only if runmqlsr is used instead of inetd. Deploy this component if you require a trigger monitor. |
For more detailed information about these WebSphere MQ components, refer to IBM's WebSphere MQ Application Programming manual.
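As a sketch of the SUNW.HAStoragePlus dependency listed in Table 1–3, the storage resource on which the Queue Manager depends might be registered as follows. The resource name wmq-has-rs, resource group wmq-rg, and mount points are illustrative only:

# scrgadm -a -t SUNW.HAStoragePlus
# scrgadm -a -j wmq-has-rs -g wmq-rg -t SUNW.HAStoragePlus \
    -x FilesystemMountPoints=/local/mqm/qmgrs/qmgr1,/local/mqm/log/qmgr1
#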
Each WebSphere MQ component has a configuration and registration file under /opt/SUNWscmqs/xxx/util, where xxx is a three-character abbreviation for the respective WebSphere MQ component. These files allow you to register the WebSphere MQ components with Sun Cluster.
Within these files, the appropriate dependencies have already been applied.
# cd /opt/SUNWscmqs
#
# ls -l chi/util
total 4
-rwxr-xr-x   1 root     sys          720 Dec 20 14:44 chi_config
-rwxr-xr-x   1 root     sys          586 Dec 20 14:44 chi_register
#
# ls -l csv/util
total 4
-rwxr-xr-x   1 root     sys          645 Dec 20 14:44 csv_config
-rwxr-xr-x   1 root     sys          562 Dec 20 14:44 csv_register
#
# ls -l lsr/util
total 4
-rwxr-xr-x   1 root     sys          640 Dec 20 14:44 lsr_config
-rwxr-xr-x   1 root     sys          624 Dec 20 14:44 lsr_register
#
# ls -l mgr/util
total 4
-rwxr-xr-x   1 root     sys          603 Dec 20 14:44 mgr_config
-rwxr-xr-x   1 root     sys          515 Dec 20 14:44 mgr_register
#
# ls -l trm/util
total 4
-rwxr-xr-x   1 root     sys          717 Dec 20 14:44 trm_config
-rwxr-xr-x   1 root     sys          586 Dec 20 14:44 trm_register
#
# more mgr/util/*
::::::::::::::
mgr/util/mgr_config
::::::::::::::
#
# Copyright 2003 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# This file will be sourced in by mgr_register and the parameters
# listed below will be used.
#
# These parameters can be customized in (key=value) form
#
#       RS - name of the resource for the application
#       RG - name of the resource group containing RS
#       QMGR - name of the Queue Manager
#       PORT - name of the Queue Manager port number
#       LH - name of the LogicalHostname SC resource
#       HAS_RS - name of the Queue Manager HAStoragePlus SC resource
#
RS=
RG=
QMGR=
PORT=
LH=
HAS_RS=
::::::::::::::
mgr/util/mgr_register
::::::::::::::
#
# Copyright 2003 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
. `dirname $0`/mgr_config

scrgadm -a -j $RS -g $RG -t SUNW.gds \
  -x Start_command="/opt/SUNWscmqs/mgr/bin/start-qmgr \
  -R $RS -G $RG -Q $QMGR " \
  -x Stop_command="/opt/SUNWscmqs/mgr/bin/stop-qmgr \
  -R $RS -G $RG -Q $QMGR " \
  -x Probe_command="/opt/SUNWscmqs/mgr/bin/test-qmgr \
  -R $RS -G $RG -Q $QMGR " \
  -y Port_list=$PORT/tcp -y Network_resources_used=$LH \
  -x Stop_signal=9 \
  -y Resource_dependencies=$HAS_RS
#
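For example, a completed mgr_config for a Queue Manager named qmgr1 might contain the following values, after which running mgr_register creates the resource. All values shown are illustrative; 1414 is simply the conventional WebSphere MQ listener port:

RS=wmq-qmgr1-rs
RG=wmq-rg
QMGR=qmgr1
PORT=1414
LH=wmq-lh
HAS_RS=wmq-has-rs

# /opt/SUNWscmqs/mgr/util/mgr_register
#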
WebSphere MQ Manager protection—
Currently, WebSphere MQ is unable to determine whether a Queue Manager is already running on another node within Sun Cluster if Global File Systems are used for the WebSphere MQ instance, that is, /global/mqm/qmgrs/<qmgr> and /global/mqm/log/<qmgr>.
Under normal conditions, the Sun Cluster HA for WebSphere MQ data service manages the startup and shutdown of the Queue Manager, regardless of which Cluster File System is used (that is, FFS or GFS).
However, if the WebSphere MQ instance is running on a Global File System, it is possible for someone to start the Queue Manager manually, by mistake, on another node within Sun Cluster.
This bug has been reported to IBM and a fix is being worked on.
To protect against this, two options are available.
Use Failover File Systems for the WebSphere MQ instance
This is the recommended approach because the WebSphere MQ instance files are mounted on only one node at a time. With this configuration, WebSphere MQ can determine whether the Queue Manager is already running.
Create symbolic links from strmqm and endmqm to the check-start script (provided)
The script /opt/SUNWscmqs/mgr/bin/check-start provides a mechanism to prevent the WebSphere MQ Manager from being started or stopped by mistake.
The check-start script verifies that the WebSphere MQ Manager is being started or stopped by Sun Cluster, and reports an error if an attempt is made to start or stop the WebSphere MQ Manager manually.
The following command shows a manual attempt to start the WebSphere MQ Manager. The response was generated by the check-start script.
# strmqm qmgr1
Request to run </usr/bin/strmqm qmgr1> within SC3.0 has been refused
#
This solution is required only if you use a Global File System for the WebSphere MQ instance. The following steps describe how to implement it.
# cd /opt/mqm/bin
#
# mv strmqm strmqm_sc3
# mv endmqm endmqm_sc3
#
# ln -s /opt/SUNWscmqs/mgr/bin/check-start strmqm
# ln -s /opt/SUNWscmqs/mgr/bin/check-start endmqm
#
Edit the /opt/SUNWscmqs/mgr/etc/config file and change the entries for START_COMMAND and STOP_COMMAND. In this example the command names are suffixed with _sc3; however, you can choose another name.
# cat /opt/SUNWscmqs/mgr/etc/config
#
# Copyright 2003 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# Usage:
#       DEBUG=<RESOURCE_NAME> or ALL
#       START_COMMAND=/opt/mqm/bin/<renamed_strmqm_program>
#       STOP_COMMAND=/opt/mqm/bin/<renamed_endmqm_program>
#
DEBUG=
START_COMMAND=/opt/mqm/bin/strmqm_sc3
STOP_COMMAND=/opt/mqm/bin/endmqm_sc3
#
The above steps must be performed on each node within the cluster that will host the Sun Cluster HA for WebSphere MQ data service. However, do not perform this procedure until you have created your Queue Manager(s), because crtmqm calls strmqm and endmqm on your behalf.
Be aware that if you implement this workaround, you must back it out before applying any maintenance to WebSphere MQ and reapply it afterwards. For this reason, the recommended approach is to use Failover File Systems for the WebSphere MQ instance until a fix has been made to WebSphere MQ.
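To back the workaround out before applying maintenance, remove the symbolic links and restore the original program names (assuming the _sc3 suffix used above):

# cd /opt/mqm/bin
# rm strmqm endmqm
# mv strmqm_sc3 strmqm
# mv endmqm_sc3 endmqm
#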