Consider the restrictions in this section when you plan the installation and configuration of Sun Cluster HA for WebSphere MQ. These software and hardware configuration restrictions apply to Sun Cluster HA for WebSphere MQ only. Your data service configuration might not be supported if you do not observe them.
For restrictions that apply to all data services, see the Sun Cluster Release Notes.
The Sun Cluster HA for WebSphere MQ data service can be configured only as a failover service – WebSphere MQ cannot operate as a scalable service, so a scalable configuration is not possible.
Mount /var/mqm as a Global File System – If you intend to install multiple instances of WebSphere MQ, then /var/mqm must be mounted as a Global File System.
This restriction is required because WebSphere MQ uses keys to build internal control structures. These keys are derived from the ftok() function call, which bases them on inode numbers. Because inode numbers are unique only within a single file system, the same inode number can appear on different file systems, and the derived keys can then clash. Mounting /var/mqm as a Global File System ensures that all queue managers derive their keys from a single file system, which avoids this issue.
If you do not intend to install multiple instances of WebSphere MQ, you do not need to mount /var/mqm as a Global File System. However, mounting it as a Global File System anyway is recommended: if you later deploy another WebSphere MQ Manager, your configuration will already meet this restriction.
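The inode dependency behind this restriction can be seen with ordinary commands. The following sketch (not from the original documentation; it uses temporary files rather than real queue manager paths) prints the inode numbers of two files with ls -i. Within one file system these numbers never collide, which is exactly the guarantee that disappears when queue manager data is spread across multiple file systems.

```shell
#!/bin/sh
# Sketch: ftok() derives its key in part from a file's inode number, and
# inode numbers are unique only within a single file system. Two files on
# *different* file systems can therefore share an inode number, producing
# the key clash described above.
#
# Within one file system, the inode numbers of two files always differ:
f1=$(mktemp)
f2=$(mktemp)
ino1=$(ls -i "$f1" | awk '{print $1}')
ino2=$(ls -i "$f2" | awk '{print $1}')
echo "inode of first file:  $ino1"
echo "inode of second file: $ino2"
rm -f "$f1" "$f2"
```

Across file systems no such guarantee exists, which is why a single Global File System for /var/mqm keeps the ftok()-derived keys of multiple queue managers from colliding.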
Installing WebSphere MQ onto Cluster File Systems – Initially, the WebSphere MQ product is installed into /opt/mqm and /var/mqm. However, whenever a WebSphere MQ Manager is created, the default directories created for it are /var/mqm/qmgrs/<qmgr_name> and /var/mqm/log/<qmgr_name>. These locations can be mounted as either Failover File Systems (FFS) or Global File Systems (GFS).
If a WebSphere MQ Manager is to be deployed onto a Failover File System and you intend to deploy multiple WebSphere MQ Managers, you must create a symbolic link from the Global File System /var/mqm to the Failover File System. Local nested mounts onto a Global File System are not allowed; however, symbolic links to a Failover File System overcome this restriction.
As a best practice, mount Global File Systems with the /global prefix and Failover File Systems with the /local prefix. Be aware, however, that this naming is a convention only and is not required.
The following example shows two WebSphere MQ Managers with Failover File Systems and /var/mqm symbolically linked to a Global File System. The final output shows a subset of the /etc/vfstab entries for WebSphere MQ deployed using Solaris Volume Manager.
# ls -l /var/mqm
lrwxrwxrwx   1 root     other         11 Sep 17 16:53 /var/mqm -> /global/mqm
#
# ls -l /global/mqm/qmgrs
total 6
drwxrwxr-x   8 mqm      mqm          512 Sep 17 09:57 @SYSTEM
lrwxrwxrwx   1 root     other         22 Sep 17 17:19 qmgr1 -> /local/mqm/qmgrs/qmgr1
lrwxrwxrwx   1 root     other         22 Sep 17 17:19 qmgr2 -> /local/mqm/qmgrs/qmgr2
#
# ls -l /global/mqm/log
total 4
lrwxrwxrwx   1 root     other         20 Sep 17 17:18 qmgr1 -> /local/mqm/log/qmgr1
lrwxrwxrwx   1 root     other         20 Sep 17 17:19 qmgr2 -> /local/mqm/log/qmgr2
#
# more /etc/vfstab (Subset of the output)
/dev/md/dg_d3/dsk/d30  /dev/md/dg_d3/rdsk/d30  /global/mqm             ufs  3  yes  logging,global
/dev/md/dg_d3/dsk/d33  /dev/md/dg_d3/rdsk/d33  /local/mqm/qmgrs/qmgr1  ufs  4  no   logging
/dev/md/dg_d3/dsk/d36  /dev/md/dg_d3/rdsk/d36  /local/mqm/log/qmgr1    ufs  4  no   logging
/dev/md/dg_d4/dsk/d43  /dev/md/dg_d4/rdsk/d43  /local/mqm/qmgrs/qmgr2  ufs  4  no   logging
/dev/md/dg_d4/dsk/d46  /dev/md/dg_d4/rdsk/d46  /local/mqm/log/qmgr2    ufs  4  no   logging
#
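The symbolic link layout in the preceding example can be rehearsed without root access or real cluster mount points. The following sketch (hypothetical; it substitutes a scratch directory for the actual /global and /local mount points, which on a cluster would come from /etc/vfstab) creates the same structure for a single queue manager, qmgr1.

```shell
#!/bin/sh
# Sketch: reproduce the symbolic link layout under a scratch directory.
# On a real cluster, $ROOT/global/mqm would be the Global File System and
# $ROOT/local/mqm/... the Failover File Systems.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/global/mqm/qmgrs" "$ROOT/global/mqm/log"
mkdir -p "$ROOT/local/mqm/qmgrs/qmgr1" "$ROOT/local/mqm/log/qmgr1"

# /var/mqm -> Global File System (stands in for: ln -s /global/mqm /var/mqm)
ln -s "$ROOT/global/mqm" "$ROOT/var_mqm"

# Queue manager data and log directories -> Failover File System
ln -s "$ROOT/local/mqm/qmgrs/qmgr1" "$ROOT/global/mqm/qmgrs/qmgr1"
ln -s "$ROOT/local/mqm/log/qmgr1"   "$ROOT/global/mqm/log/qmgr1"

ls -l "$ROOT/global/mqm/qmgrs"
```

Because the per-manager directories are links rather than nested mounts, the Global File System at /var/mqm stays valid while each Failover File System moves with its queue manager's resource group.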
The following example shows two WebSphere MQ Managers with Global File Systems and /var/mqm symbolically linked to a Global File System. The final output shows a subset of the /etc/vfstab entries for WebSphere MQ deployed using Solaris Volume Manager.
# ls -l /var/mqm
lrwxrwxrwx   1 root     other         11 Jan  8 14:17 /var/mqm -> /global/mqm
#
# ls -l /global/mqm/qmgrs
total 6
drwxrwxr-x   8 mqm      mqm          512 Dec 16 09:57 @SYSTEM
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr1
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr2
#
# ls -l /global/mqm/log
total 4
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr1
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr2
#
# more /etc/vfstab (Subset of the output)
/dev/md/dg_d4/dsk/d40  /dev/md/dg_d4/rdsk/d40  /global/mqm              ufs  3  yes  logging,global
/dev/md/dg_d4/dsk/d43  /dev/md/dg_d4/rdsk/d43  /global/mqm/qmgrs/qmgr1  ufs  4  yes  logging,global
/dev/md/dg_d4/dsk/d46  /dev/md/dg_d4/rdsk/d46  /global/mqm/log/qmgr1    ufs  4  yes  logging,global
/dev/md/dg_d5/dsk/d53  /dev/md/dg_d5/rdsk/d53  /global/mqm/qmgrs/qmgr2  ufs  4  yes  logging,global
/dev/md/dg_d5/dsk/d56  /dev/md/dg_d5/rdsk/d56  /global/mqm/log/qmgr2    ufs  4  yes  logging,global
#