This section provides a list of software and hardware configuration restrictions that apply to Sun Cluster HA for WebSphere MQ only. For restrictions that apply to all data services, see the Sun Cluster Release Notes.
Your data service configuration might not be supported if you do not observe these restrictions.
The Sun Cluster HA for WebSphere MQ data service can be configured only as a failover service – WebSphere MQ cannot operate as a scalable service.
Mounting /var/mqm as a Global File System – If you intend to install multiple WebSphere MQ Managers, then you must mount /var/mqm as a Global File System.
After mounting /var/mqm as a Global File System, you must also create a symbolic link for /var/mqm/qmgrs/@SYSTEM to a Local File System on each node within Sun Cluster that will run WebSphere MQ, for example:
# mkdir -p /var/mqm_local/qmgrs/@SYSTEM
# mkdir -p /var/mqm/qmgrs
# ln -s /var/mqm_local/qmgrs/@SYSTEM /var/mqm/qmgrs/@SYSTEM
#
This restriction is required because WebSphere MQ uses keys to build internal control structures. These keys are derived from the ftok() function call and must be unique on each node. Mounting /var/mqm as a Global File System, with a symbolic link for /var/mqm/qmgrs/@SYSTEM to a Local File System, ensures that any derived shared memory segment keys are unique on each node.
If your Queue Managers were created before you set up a symbolic link for /var/mqm/qmgrs/@SYSTEM, you must first stop all Queue Managers, then copy the contents of /var/mqm/qmgrs/@SYSTEM, with permissions preserved, to /var/mqm_local/qmgrs/@SYSTEM before creating the symbolic link.
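The migration steps above can be sketched as follows. This is an illustration only: it runs in a scratch directory created with mktemp rather than against the live /var/mqm paths, and the Queue Manager shutdown (endmqm) is shown as a comment because it requires a running WebSphere MQ installation.

```shell
#!/bin/sh
# Illustrative sketch: migrate @SYSTEM to a Local File System location.
# BASE stands in for the real root directory; on a cluster node the paths
# would be /var/mqm/qmgrs/@SYSTEM and /var/mqm_local/qmgrs/@SYSTEM.
BASE=$(mktemp -d)
mkdir -p "$BASE/var/mqm/qmgrs/@SYSTEM" "$BASE/var/mqm_local/qmgrs"
echo data > "$BASE/var/mqm/qmgrs/@SYSTEM/control"
chmod 640 "$BASE/var/mqm/qmgrs/@SYSTEM/control"

# 1. Stop all Queue Managers first (on the real system: endmqm <qmgr_name>).
# 2. Copy the contents, preserving permissions.
cp -pR "$BASE/var/mqm/qmgrs/@SYSTEM" "$BASE/var/mqm_local/qmgrs/"
# 3. Replace the original directory with the symbolic link.
rm -r "$BASE/var/mqm/qmgrs/@SYSTEM"
ln -s "$BASE/var/mqm_local/qmgrs/@SYSTEM" "$BASE/var/mqm/qmgrs/@SYSTEM"
ls -l "$BASE/var/mqm/qmgrs"
```

After these steps, /var/mqm/qmgrs/@SYSTEM resolves to the Local File System copy, while the Queue Manager data remains reachable through the original path.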
Mounting /var/mqm as a Failover File System – If you intend to only install one WebSphere MQ Manager, then you can mount /var/mqm as a Failover File System. However, we recommend that you still mount /var/mqm as a Global File System to allow you to install multiple WebSphere MQ Managers in the future.
Multiple WebSphere MQ Managers with Failover File Systems – As you are installing multiple WebSphere MQ Managers, you must mount /var/mqm as a Global File System, as described earlier. However, the data files for each Queue Manager can be mounted as Failover File Systems through a symbolic link from /var/mqm to the Failover File System. Refer to Example 1.
Multiple WebSphere MQ Managers with Global File Systems – As you are installing multiple WebSphere MQ Managers, you must mount /var/mqm as a Global File System, as described earlier. However, the data files for each Queue Manager can be mounted as Global File Systems. Refer to Example 2.
Installing WebSphere MQ onto Cluster File Systems – Initially, the WebSphere MQ product is installed into /opt/mqm and /var/mqm. When a WebSphere MQ Manager is created, the default directory locations created are /var/mqm/qmgrs/<qmgr_name> and /var/mqm/log/<qmgr_name>. Before you pkgadd mqm on any node within Sun Cluster that will run WebSphere MQ, you must mount these locations as either Failover File Systems or Global File Systems.
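Before running pkgadd, the mount requirement above can be verified with a short check. This is a sketch only: the is_mounted helper and the MQ_PATHS list are our own illustration (the paths follow the default locations named above), not part of Sun Cluster or WebSphere MQ.

```shell
#!/bin/sh
# Sketch: confirm the WebSphere MQ locations are mounted before pkgadd mqm.
# MQ_PATHS is illustrative; adjust it to your Queue Manager layout.
MQ_PATHS="/var/mqm /global/mqm/qmgrs/qmgr1 /global/mqm/log/qmgr1"

# A path is a mount point if df resolves it to itself.
is_mounted() {
    [ "$(df -P "$1" 2>/dev/null | awk 'NR==2 {print $6}')" = "$1" ]
}

for p in $MQ_PATHS; do
    if is_mounted "$p"; then
        echo "OK:      $p is a mounted file system"
    else
        echo "MISSING: $p is not mounted - mount it before pkgadd mqm"
    fi
done
```

Run this on each node before installing the mqm package; any MISSING line indicates a file system that still needs a vfstab entry and mount.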
Example 1 shows two WebSphere MQ Managers with Failover File Systems. /var/mqm is mounted, via a symbolic link, as a Global File System. A subset of the /etc/vfstab entries for WebSphere MQ is shown.
Example 2 shows two WebSphere MQ Managers with Global File Systems. /var/mqm is mounted, via a symbolic link, as a Global File System. A subset of the /etc/vfstab entries for WebSphere MQ is shown.
# ls -l /var/mqm
lrwxrwxrwx   1 root     other         11 Sep 17 16:53 /var/mqm -> /global/mqm
#
# ls -l /global/mqm/qmgrs
total 6
lrwxrwxrwx   1 root     other        512 Sep 17 09:57 @SYSTEM -> /var/mqm_local/qmgrs/@SYSTEM
lrwxrwxrwx   1 root     other         22 Sep 17 17:19 qmgr1 -> /local/mqm/qmgrs/qmgr1
lrwxrwxrwx   1 root     other         22 Sep 17 17:19 qmgr2 -> /local/mqm/qmgrs/qmgr2
#
# ls -l /global/mqm/log
total 4
lrwxrwxrwx   1 root     other         20 Sep 17 17:18 qmgr1 -> /local/mqm/log/qmgr1
lrwxrwxrwx   1 root     other         20 Sep 17 17:19 qmgr2 -> /local/mqm/log/qmgr2
#
# more /etc/vfstab (Subset of the output)
/dev/md/dg_d3/dsk/d30  /dev/md/dg_d3/rdsk/d30  /global/mqm             ufs  3  yes  logging,global
/dev/md/dg_d3/dsk/d33  /dev/md/dg_d3/rdsk/d33  /local/mqm/qmgrs/qmgr1  ufs  4  no   logging
/dev/md/dg_d3/dsk/d36  /dev/md/dg_d3/rdsk/d36  /local/mqm/log/qmgr1    ufs  4  no   logging
/dev/md/dg_d4/dsk/d43  /dev/md/dg_d4/rdsk/d43  /local/mqm/qmgrs/qmgr2  ufs  4  no   logging
/dev/md/dg_d4/dsk/d46  /dev/md/dg_d4/rdsk/d46  /local/mqm/log/qmgr2    ufs  4  no   logging
#
# ls -l /var/mqm
lrwxrwxrwx   1 root     other         11 Jan  8 14:17 /var/mqm -> /global/mqm
#
# ls -l /global/mqm/qmgrs
total 6
lrwxrwxrwx   1 root     other        512 Dec 16 09:57 @SYSTEM -> /var/mqm_local/qmgrs/@SYSTEM
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr1
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr2
#
# ls -l /global/mqm/log
total 4
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr1
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr2
#
# more /etc/vfstab (Subset of the output)
/dev/md/dg_d4/dsk/d40  /dev/md/dg_d4/rdsk/d40  /global/mqm              ufs  3  yes  logging,global
/dev/md/dg_d4/dsk/d43  /dev/md/dg_d4/rdsk/d43  /global/mqm/qmgrs/qmgr1  ufs  4  yes  logging,global
/dev/md/dg_d4/dsk/d46  /dev/md/dg_d4/rdsk/d46  /global/mqm/log/qmgr1    ufs  4  yes  logging,global
/dev/md/dg_d5/dsk/d53  /dev/md/dg_d5/rdsk/d53  /global/mqm/qmgrs/qmgr2  ufs  4  yes  logging,global
/dev/md/dg_d5/dsk/d56  /dev/md/dg_d5/rdsk/d56  /global/mqm/log/qmgr2    ufs  4  yes  logging,global