This section contains the procedures you need to install and configure IBM MQ.
Refer to Restriction for Multiple IBM MQ Instances for more information.
Refer to Determine Which Oracle Solaris Zone IBM MQ Will Use for more information.
IBM MQ can be deployed on a cluster file system or on a highly available file system on the cluster. Decide where to place each component:

The IBM MQ files: can be deployed on a cluster file system or on a highly available local file system.
/var/mqm: can be deployed on a cluster file system, on a highly available local file system, or on local storage on each cluster node. Deploying /var/mqm on local storage on each cluster node is recommended.
/var/mqm/qmgrs/queue-manager and /var/mqm/log/queue-manager: can be deployed on a cluster file system or on a highly available local file system. Deploying these directories on a highly available local file system is recommended.
In this step you create file systems for the IBM MQ files and /var/mqm. After you have determined how IBM MQ will be deployed in the cluster, choose one of the substeps below.
Create the IBM MQ files and /var/mqm on cluster file systems by using Step 4.a.
Create the IBM MQ files on SVM highly available local file systems and /var/mqm on a cluster file system by using Step 4.b.
Create the IBM MQ files on ZFS highly available local file systems and /var/mqm on local storage by using Step 4.c.
Within this deployment:
The IBM MQ files are deployed on cluster file systems.
The IBM MQ instances are qmgr1 and qmgr2.
/var/mqm uses a cluster file system, with a symbolic link from /var/mqm/qmgrs/@SYSTEM to a local directory (/var/mqm_local/qmgrs/@SYSTEM) on each node in the cluster.
# ls -l /var/mqm
lrwxrwxrwx   1 root     other         11 Jan  8 14:17 /var/mqm -> /global/mqm
#
# ls -l /global/mqm/qmgrs
total 6
lrwxrwxrwx   1 root     other        512 Dec 16 09:57 @SYSTEM -> /var/mqm_local/qmgrs/@SYSTEM
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr1
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr2
#
# ls -l /global/mqm/log
total 4
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr1
drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr2
#
# more /etc/vfstab (Subset of the output)
/dev/md/dg_d4/dsk/d40 /dev/md/dg_d4/rdsk/d40 /global/mqm             ufs 3 yes logging,global
/dev/md/dg_d4/dsk/d43 /dev/md/dg_d4/rdsk/d43 /global/mqm/qmgrs/qmgr1 ufs 4 yes logging,global
/dev/md/dg_d4/dsk/d46 /dev/md/dg_d4/rdsk/d46 /global/mqm/log/qmgr1   ufs 4 yes logging,global
/dev/md/dg_d5/dsk/d53 /dev/md/dg_d5/rdsk/d53 /global/mqm/qmgrs/qmgr2 ufs 4 yes logging,global
/dev/md/dg_d5/dsk/d56 /dev/md/dg_d5/rdsk/d56 /global/mqm/log/qmgr2   ufs 4 yes logging,global
Within this deployment:
The IBM MQ files are deployed on SVM highly available local file systems.
The IBM MQ instances are qmgr1 and qmgr2.
/var/mqm uses a cluster file system, with a symbolic link from /var/mqm/qmgrs/@SYSTEM to a local directory (/var/mqm_local/qmgrs/@SYSTEM) on each node in the cluster.
# ls -l /var/mqm
lrwxrwxrwx   1 root     other         11 Sep 17 16:53 /var/mqm -> /global/mqm
#
# ls -l /global/mqm/qmgrs
total 6
lrwxrwxrwx   1 root     other        512 Sep 17 09:57 @SYSTEM -> /var/mqm_local/qmgrs/@SYSTEM
lrwxrwxrwx   1 root     other         22 Sep 17 17:19 qmgr1 -> /local/mqm/qmgrs/qmgr1
lrwxrwxrwx   1 root     other         22 Sep 17 17:19 qmgr2 -> /local/mqm/qmgrs/qmgr2
#
# ls -l /global/mqm/log
total 4
lrwxrwxrwx   1 root     other         20 Sep 17 17:18 qmgr1 -> /local/mqm/log/qmgr1
lrwxrwxrwx   1 root     other         20 Sep 17 17:19 qmgr2 -> /local/mqm/log/qmgr2
#
# more /etc/vfstab (Subset of the output)
/dev/md/dg_d4/dsk/d40 /dev/md/dg_d4/rdsk/d40 /global/mqm            ufs 3 yes logging,global
/dev/md/dg_d4/dsk/d43 /dev/md/dg_d4/rdsk/d43 /local/mqm/qmgrs/qmgr1 ufs 4 no  logging
/dev/md/dg_d4/dsk/d46 /dev/md/dg_d4/rdsk/d46 /local/mqm/log/qmgr1   ufs 4 no  logging
/dev/md/dg_d5/dsk/d53 /dev/md/dg_d5/rdsk/d53 /local/mqm/qmgrs/qmgr2 ufs 4 no  logging
/dev/md/dg_d5/dsk/d56 /dev/md/dg_d5/rdsk/d56 /local/mqm/log/qmgr2   ufs 4 no  logging
Within this deployment:
The IBM MQ files are deployed on ZFS highly available local file systems.
The IBM MQ instances are qmgr1 and qmgr2.
/var/mqm uses local storage on each cluster node.
Because /var/mqm is on a local file system, you must copy /var/mqm/mqs.ini from the node where the queue manager was created to all other nodes in the cluster where the queue manager will run.
# df -k /var/mqm
Filesystem            kbytes    used   avail capacity  Mounted on
/                   59299764 25657791 33048976    44%  /
#
# ls -l /var/mqm/qmgrs
total 6
drwxrwsr-x   2 mqm      mqm          512 Sep 11 11:42 @SYSTEM
lrwxrwxrwx   1 mqm      mqm           14 Sep 11 11:45 qmgr1 -> /ZFSwmq1/qmgrs
lrwxrwxrwx   1 mqm      mqm           14 Sep 11 11:50 qmgr2 -> /ZFSwmq2/qmgrs
#
# ls -l /var/mqm/log
total 4
lrwxrwxrwx   1 mqm      mqm           12 Sep 11 11:44 qmgr1 -> /ZFSwmq1/log
lrwxrwxrwx   1 mqm      mqm           12 Sep 11 11:54 qmgr2 -> /ZFSwmq2/log
#
# df -k /ZFSwmq1
Filesystem            kbytes    used   avail capacity  Mounted on
HAZpool1             4096453   13180 4083273     1%    /ZFSwmq1
#
# df -k /ZFSwmq2
Filesystem            kbytes    used   avail capacity  Mounted on
HAZpool2             4096453   13133 4083320     1%    /ZFSwmq2
Within this deployment:
If /var/mqm is placed on shared storage as a cluster file system, a symbolic link is made from /var/mqm/qmgrs/@SYSTEM to the local directory /var/mqm_local/qmgrs/@SYSTEM.
You must perform this step on all nodes in the cluster only if /var/mqm is a cluster file system.
# mkdir -p /var/mqm_local/qmgrs/@SYSTEM
# mkdir -p /var/mqm/qmgrs
# ln -s /var/mqm_local/qmgrs/@SYSTEM /var/mqm/qmgrs/@SYSTEM
This restriction is required because IBM MQ uses keys to build internal control structures. Mounting /var/mqm as a cluster file system with a symbolic link for /var/mqm/qmgrs/@SYSTEM to a local file ensures that any derived shared memory keys are unique on each node.
If multiple queue managers are required and your queue manager was created before you set up the symbolic link for /var/mqm/qmgrs/@SYSTEM, you must copy the contents of /var/mqm/qmgrs/@SYSTEM, preserving permissions, to /var/mqm_local/qmgrs/@SYSTEM before creating the symbolic link.
You must stop all queue managers before doing this, and you must perform this step on each node of the cluster.
# mkdir -p /var/mqm_local/qmgrs/@SYSTEM
# cd /var/mqm/qmgrs
# cp -rp @SYSTEM/* /var/mqm_local/qmgrs/@SYSTEM
# rm -r @SYSTEM
# ln -s /var/mqm_local/qmgrs/@SYSTEM @SYSTEM
Perform this step on one node of the cluster.
Ensure the node has ownership of the disk set or disk group.
For Solaris Volume Manager:
# metaset -s disk-set -t
# mount websphere-mq-highly-available-local-file-system
Create the mount point on all zones of the cluster that are being used for IBM MQ.
Mount the highly available local file system in one of the zones being used.
# zlogin zonename mkdir websphere-mq-highly-available-local-file-system
#
# mount -F lofs websphere-mq-highly-available-local-file-system \
/zonepath/root/websphere-mq-highly-available-local-file-system
For a ZFS highly available local file system, export the zpool and import it with an altroot of the zonepath instead:

# zpool export -f HAZpool
# zpool import -R /zonepath/root HAZpool
After you have created and mounted the appropriate file systems for the IBM MQ files and /var/mqm, you must install IBM MQ on each node of the cluster, either in the global zone or zone cluster, as required.
Follow the IBM MQ documentation to install IBM MQ.
If you choose to locate the mqm user ID and group within a network information name service such as NIS or NIS+, IBM MQ may be affected if the network information name service becomes unavailable.
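As a minimal sketch only, installation on Solaris typically involves creating the mqm user and group locally on each node before running the installer. The media path /mnt/mq below is a hypothetical mount point, and package names vary by IBM MQ release, so consult the IBM MQ documentation for the exact procedure.

# groupadd mqm
# useradd -g mqm -d /var/mqm mqm
# cd /mnt/mq
# ./mqlicense.sh -accept
# pkgadd -d . mqm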
Follow the IBM MQ documentation to create a queue manager.
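For example, a queue manager named qmgr1 can be created, started, and verified with the standard IBM MQ control commands, run as the mqm user on the node that currently owns the storage. This is a sketch; your queue manager name and creation options will differ.

# su - mqm
$ crtmqm qmgr1
$ strmqm qmgr1
$ dspmq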
Within this deployment:
If /var/mqm is placed on local storage as a local file system, you must copy /var/mqm/mqs.ini from the node where the queue manager was created to all other nodes in the cluster where the queue manager will run.
You must perform this step on all nodes in the cluster only if /var/mqm is a local file system.
# rcp /var/mqm/mqs.ini remote-node:/var/mqm/mqs.ini
# rcp /zonepath/root/var/mqm/mqs.ini \
remote-node:/zonepath/root/var/mqm/mqs.ini