Oracle® Solaris Cluster Data Service for IBM WebSphere MQ Guide

Updated: September 2015

Installing and Configuring IBM MQ

This section contains the procedures you need to install and configure IBM MQ.

How to Install and Configure IBM MQ

Perform the following steps to install and configure IBM MQ.

  1. Determine how many IBM MQ instances will be used.

    Refer to Restriction for Multiple IBM MQ Instances for more information.

  2. Determine which Oracle Solaris Zone to use.

    Refer to Determine Which Oracle Solaris Zone IBM MQ Will Use for more information.

  3. Determine how IBM MQ should be deployed in the cluster.

    IBM MQ can be deployed on a cluster file system or on a highly available local file system in the cluster.

    /var/mqm

    Can be deployed on a cluster file system, a highly available local file system, or local storage on each cluster node.

    It is recommended to deploy /var/mqm on local storage on each cluster node.

    /var/mqm/qmgrs/queue-manager and /var/mqm/log/queue-manager

    Can be deployed on a cluster file system or a highly available local file system.

    It is recommended to deploy /var/mqm/qmgrs/queue-manager and /var/mqm/log/queue-manager on a highly available local file system.

  4. Create a cluster file system or highly available local file system for the IBM MQ files.

    Within this step you will create file systems for the IBM MQ files and /var/mqm. Once you have determined how IBM MQ should be deployed in the cluster, choose one of the substeps below.

    • Create the IBM MQ files and /var/mqm on cluster file systems by using Step a.

    • Create the IBM MQ files on SVM highly available local file systems and /var/mqm on a cluster file system by using Step b.

    • Create the IBM MQ files on ZFS highly available local file systems and /var/mqm on local storage by using Step c.

    a. IBM MQ files and /var/mqm on cluster file systems.

        Within this deployment:

      • The IBM MQ files are deployed on cluster file systems.

      • The IBM MQ instances are qmgr1 and qmgr2.

      • /var/mqm uses a cluster file system with a symbolic link for /var/mqm/qmgrs/@SYSTEM to a local file (/var/mqm_local/qmgrs/@SYSTEM) on each node in the cluster.


        Note -  Refer to Step d for more information about setting up this symbolic link.
      # ls -l /var/mqm
      lrwxrwxrwx   1 root     other         11 Jan  8 14:17 /var/mqm ->
       /global/mqm
      #  
      # ls -l /global/mqm/qmgrs
      total 6
      lrwxrwxrwx   1 root      other          512 Dec 16 09:57 @SYSTEM -> 
       /var/mqm_local/qmgrs/@SYSTEM
      drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr1
      drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr2
      # 
      # ls -l /global/mqm/log
      total 4
      drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr1
      drwxr-xr-x   4 root     root         512 Dec 18 14:20 qmgr2
      #
      # more /etc/vfstab (Subset of the output)
      /dev/md/dg_d4/dsk/d40   /dev/md/dg_d4/rdsk/d40  /global/mqm
           ufs     3       yes     logging,global
      /dev/md/dg_d4/dsk/d43   /dev/md/dg_d4/rdsk/d43  /global/mqm/qmgrs/qmgr1
       ufs     4       yes     logging,global
      /dev/md/dg_d4/dsk/d46   /dev/md/dg_d4/rdsk/d46  /global/mqm/log/qmgr1
         ufs     4       yes     logging,global
      /dev/md/dg_d5/dsk/d53   /dev/md/dg_d5/rdsk/d53  /global/mqm/qmgrs/qmgr2
       ufs     4       yes     logging,global
      /dev/md/dg_d5/dsk/d56   /dev/md/dg_d5/rdsk/d56  /global/mqm/log/qmgr2
         ufs     4       yes     logging,global
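
        The file systems above are created in the usual way for a UFS cluster file system. The following is only a minimal sketch for the qmgr1 queue manager directory; it assumes that the d43 metadevice shown in the vfstab output already exists and that the matching vfstab entry is present on every node:
      # newfs /dev/md/dg_d4/rdsk/d43
      # mkdir -p /global/mqm/qmgrs/qmgr1 (on each node)
      # mount /global/mqm/qmgrs/qmgr1
        The same pattern applies to the remaining log and qmgr2 file systems.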
    b. IBM MQ files on SVM highly available local file systems and /var/mqm on a cluster file system.

        Within this deployment:

      • The IBM MQ files are deployed on SVM highly available local file systems.

      • The IBM MQ instances are qmgr1 and qmgr2.

      • /var/mqm uses a cluster file system with a symbolic link for /var/mqm/qmgrs/@SYSTEM to a local file (/var/mqm_local/qmgrs/@SYSTEM) on each node in the cluster.


        Note -  Refer to Step d for more information about setting up this symbolic link.
      # ls -l /var/mqm
      lrwxrwxrwx   1 root     other         11 Sep 17 16:53 /var/mqm ->
       /global/mqm
      #
      # ls -l /global/mqm/qmgrs
      total 6
      lrwxrwxrwx   1 root      other          512 Sep 17 09:57 @SYSTEM -> 
       /var/mqm_local/qmgrs/@SYSTEM
      lrwxrwxrwx   1 root     other         22 Sep 17 17:19 qmgr1 ->
       /local/mqm/qmgrs/qmgr1
      lrwxrwxrwx   1 root     other         22 Sep 17 17:19 qmgr2 ->
       /local/mqm/qmgrs/qmgr2
      #
      # ls -l /global/mqm/log
      total 4
      lrwxrwxrwx   1 root     other         20 Sep 17 17:18 qmgr1 ->
       /local/mqm/log/qmgr1
      lrwxrwxrwx   1 root     other         20 Sep 17 17:19 qmgr2 ->
       /local/mqm/log/qmgr2
      #
      # more /etc/vfstab (Subset of the output)
      /dev/md/dg_d4/dsk/d40   /dev/md/dg_d4/rdsk/d40  /global/mqm
           ufs     3       yes     logging,global
      /dev/md/dg_d4/dsk/d43   /dev/md/dg_d4/rdsk/d43  /local/mqm/qmgrs/qmgr1
       ufs     4       no     logging
      /dev/md/dg_d4/dsk/d46   /dev/md/dg_d4/rdsk/d46  /local/mqm/log/qmgr1
         ufs     4       no     logging
      /dev/md/dg_d5/dsk/d53   /dev/md/dg_d5/rdsk/d53  /local/mqm/qmgrs/qmgr2
       ufs     4       no     logging
      /dev/md/dg_d5/dsk/d56   /dev/md/dg_d5/rdsk/d56  /local/mqm/log/qmgr2
         ufs     4       no     logging
    c. IBM MQ files on ZFS highly available local file systems and /var/mqm on local storage.

        Within this deployment:

      • The IBM MQ files are deployed on ZFS highly available local file systems.

      • The IBM MQ instances are qmgr1 and qmgr2.

      • /var/mqm uses local storage on each cluster node.

        As /var/mqm is on a local file system, you must copy /var/mqm/mqs.ini from the node where the queue manager was created to all other nodes in the cluster where the queue manager will run.


        Note -  Refer to Step 8 for more information about copying /var/mqm/mqs.ini.
      # df -k /var/mqm
      Filesystem            kbytes    used   avail capacity  Mounted on
      /                    59299764 25657791 33048976    44%    /
      #
      # ls -l /var/mqm/qmgrs
      total 6
      drwxrwsr-x   2 mqm      mqm          512 Sep 11 11:42 @SYSTEM
      lrwxrwxrwx   1 mqm      mqm           14 Sep 11 11:45 qmgr1 -> /ZFSwmq1/qmgrs
      lrwxrwxrwx   1 mqm      mqm           14 Sep 11 11:50 qmgr2 -> /ZFSwmq2/qmgrs
      #
      # ls -l /var/mqm/log
      total 4
      lrwxrwxrwx   1 mqm      mqm           12 Sep 11 11:44 qmgr1 -> /ZFSwmq1/log
      lrwxrwxrwx   1 mqm      mqm           12 Sep 11 11:54 qmgr2 -> /ZFSwmq2/log
      #
      # df -k /ZFSwmq1
      Filesystem            kbytes    used   avail capacity  Mounted on
      HAZpool1             4096453   13180 4083273     1%    /ZFSwmq1
      #
      # df -k /ZFSwmq2
      Filesystem            kbytes    used   avail capacity  Mounted on
      HAZpool2             4096453   13133 4083320     1%    /ZFSwmq2
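
        The ZFS pools above could be created along the following lines. This is only a minimal sketch: the device c1t1d0 is a hypothetical shared disk, and the equivalent commands would be repeated with HAZpool2 and /ZFSwmq2 for qmgr2.
      # zpool create -m /ZFSwmq1 HAZpool1 c1t1d0
      # mkdir /ZFSwmq1/qmgrs /ZFSwmq1/log
        Because /var/mqm is local to each node in this deployment, the qmgr1 and qmgr2 symbolic links shown above must exist on every node where the queue managers can run, and the directories should be owned by the mqm user and group once they exist.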
    d. Cluster file system is used for /var/mqm.

        Within this deployment:

      • If /var/mqm is placed on shared storage as a cluster file system, a symbolic link is made from /var/mqm/qmgrs/@SYSTEM to local file /var/mqm_local/qmgrs/@SYSTEM.

      • You must perform this step on all nodes in the cluster only if /var/mqm is a cluster file system.

      # mkdir -p /var/mqm_local/qmgrs/@SYSTEM
      # mkdir -p /var/mqm/qmgrs
      # ln -s /var/mqm_local/qmgrs/@SYSTEM /var/mqm/qmgrs/@SYSTEM

      This configuration is required because IBM MQ uses keys to build internal control structures. Mounting /var/mqm as a cluster file system with a symbolic link for /var/mqm/qmgrs/@SYSTEM to a local file ensures that any derived shared memory keys are unique on each node.

      If multiple queue managers are required and your queue manager was created before you set up a symbolic link for /var/mqm/qmgrs/@SYSTEM, you must copy the contents, with permissions, of /var/mqm/qmgrs/@SYSTEM to /var/mqm_local/qmgrs/@SYSTEM before creating the symbolic link.

      You must stop all queue managers before doing this, and you must perform these steps on each node of the cluster.

      # mkdir -p /var/mqm_local/qmgrs/@SYSTEM
      # cd /var/mqm/qmgrs
      # cp -rp @SYSTEM/* /var/mqm_local/qmgrs/@SYSTEM
      # rm -r @SYSTEM
      # ln -s /var/mqm_local/qmgrs/@SYSTEM @SYSTEM
  5. Mount the highly available local file system.

    Perform this step on one node of the cluster.

    a. If a non-ZFS highly available file system is being used for the IBM MQ files.

      Ensure the node has ownership of the disk set or disk group.

      For Solaris Volume Manager:

      # metaset -s disk-set -t
      i. If the global zone is being used for IBM MQ.
        # mount websphere-mq-highly-available-local-file-system
      ii. If a zone cluster is being used for IBM MQ.

        Create the mount point on all zones of the cluster that are being used for IBM MQ.

        Mount the highly available local file system on one of the zones being used.

        # zlogin zonename mkdir websphere-mq-highly-available-local-file-system
        #
        # mount -F lofs websphere-mq-highly-available-local-file-system \
        /zonepath/root/websphere-mq-highly-available-local-file-system
    b. If a ZFS highly available file system is being used for IBM MQ.
      # zpool export -f HAZpool
      # zpool import -R /zonepath/root HAZpool
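
    For example, with the SVM layout shown in Step b of Step 4 and IBM MQ in the global zone, the sequence for qmgr1 might look like the following sketch:
      # metaset -s dg_d4 -t
      # mount /local/mqm/qmgrs/qmgr1
      # mount /local/mqm/log/qmgr1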
  6. Install IBM MQ on all nodes of the cluster.

    After you have created and mounted the appropriate file systems for the IBM MQ files and /var/mqm, you must install IBM MQ on each node of the cluster, either in the global zone or zone cluster, as required.

    Follow the IBM MQ documentation to install IBM MQ.


    Note -  You may choose to locate the mqm userid and group within /etc/passwd and /etc/group or within a name service such as NIS or NIS+. However, because HA for WebSphere MQ uses the su command to start, stop, and probe IBM MQ, it is recommended that the mqm userid and group be located within /etc/passwd and /etc/group in the cluster. This ensures that the su(1M) command is not affected if a name service such as NIS or NIS+ is unavailable.

    If you choose to locate the mqm userid and group within a network information name service such as NIS or NIS+, IBM MQ may be affected if that name service is unavailable.

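    If you create the mqm userid and group locally, commands along the following lines can be used. This is only a sketch; the numeric IDs shown (1000) are hypothetical and must be identical on every node of the cluster.
      # groupadd -g 1000 mqm
      # useradd -u 1000 -g mqm -d /var/mqm -s /bin/ksh mqm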

  7. Create the IBM MQ queue manager.

    Follow the IBM MQ documentation to create a queue manager.
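
    For example, the qmgr1 queue manager used throughout this section could be created and verified as follows. This is only a minimal sketch; it assumes that the IBM MQ commands are in the mqm user's PATH, and any crtmqm options that your deployment requires are described in the IBM MQ documentation.
      # su - mqm -c "crtmqm qmgr1"
      # su - mqm -c "strmqm qmgr1"
      # su - mqm -c "endmqm qmgr1"

    Starting and then ending the queue manager simply confirms that it runs correctly before it is placed under cluster control.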

  8. If a local file system is used for /var/mqm, copy /var/mqm/mqs.ini to all nodes of the cluster.

      Within this deployment:

    • If /var/mqm/mqs.ini is placed on local storage as a local file system, you must copy /var/mqm/mqs.ini from the node where the queue manager was created to all other nodes in the cluster where the queue manager will run.

    • You must perform this step on all nodes in the cluster only if /var/mqm is a local file system.

    a. If the global zone is being used for IBM MQ.
      # rcp /var/mqm/mqs.ini remote-node:/var/mqm/mqs.ini
    b. If a zone cluster is being used for IBM MQ.
      # rcp /zonepath/root/var/mqm/mqs.ini \
      remote-node:/zonepath/root/var/mqm/mqs.ini
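
    After copying the file, you can verify that every node lists the queue managers; the following check is a sketch that assumes the standard mqs.ini format:
      # grep Name= /var/mqm/mqs.ini
    For a zone cluster, check /zonepath/root/var/mqm/mqs.ini instead.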