Oracle® Solaris Cluster Data Service for IBM WebSphere MQ Guide

Updated: November 2019
 
 

Deployment Example: Installing an IBM MQ Queue Manager in the Global Cluster

How to Install an IBM MQ Queue Manager in the Global Cluster

Target Cluster Configuration 

This example uses a two-node cluster, pnode1 and pnode2.

Software Configuration 

This deployment example uses the following software products:

- Oracle Solaris for SPARC or x86 platforms
- Oracle Solaris Cluster 4.4 software
- HA for WebSphere MQ data service
- IBM MQ v8.0 for Oracle Solaris 

This example assumes that you have already installed and configured Oracle Solaris Cluster. It illustrates the installation and configuration of the data service only.

Assumptions

The instructions in this example have been developed with the following assumptions:
 
- Shell environment: All commands and the environment setup in this example are for the Korn shell environment. If you use a different shell, replace any Korn shell-specific information or instructions with the appropriate information for your preferred shell environment.
- User login: Unless otherwise specified, perform all procedures by assuming a root role that provides solaris.cluster.admin, solaris.cluster.modify, and solaris.cluster.read authorization. 

Installing and Configuring IBM MQ v8.0 

The instructions in this deployment example install IBM MQ v8.0 for Oracle Solaris on Oracle Solaris 11.2 and configure IBM MQ on a highly available local ZFS file system.

1. Install and configure the cluster as instructed in the Oracle Solaris Cluster 4.4 Software Installation Guide. 
It is recommended that you install the ha-cluster-full IPS package.

2. Create the IBM MQ group and user ID.

Perform on each node of the global cluster.

root@pnode1:~# groupadd -g 1002 mqm
root@pnode1:~# useradd -u 1002 -g 1002 -d /var/mqm mqm
root@pnode1:~# 
root@pnode1:~# projadd -c "WebSphere MQ default settings" -K "process.max-file-descriptor=(basic,10000,deny)" -K "project.max-shm-memory=(priv,4GB,deny)" -K "project.max-shm-ids=(priv,1024,deny)" -K "project.max-sem-ids=(priv,128,deny)" group.mqm
root@pnode1:~#
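
Optionally, you can confirm that the group, user, and project were created with the expected values. For example:

root@pnode1:~# getent group mqm          # the mqm group should exist with GID 1002
root@pnode1:~# getent passwd mqm         # the mqm user should exist with UID 1002 and home directory /var/mqm
root@pnode1:~# projects -l group.mqm     # display the resource controls set for the group.mqm project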

3. Add a logical host name to /etc/hosts.

Perform on each node of the global cluster.

The following output shows a logical host entry for Queue Manager qmgr1.

root@pnode1:~# grep qmgr1 /etc/hosts
10.134.84.62    qmgr1
root@pnode1:~#
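
Optionally, confirm that the logical host name resolves to the same address on every node. For example:

root@pnode1:~# getent hosts qmgr1        # should return 10.134.84.62 qmgr1 on each node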

4. Create a ZFS pool and ZFS file systems on the shared storage.

root@pnode1:~# scdidadm -L d1
1        pnode2:/dev/rdsk/c0t60000970000196800795533030303146d0 /dev/did/rdsk/d1     
1        pnode1:/dev/rdsk/c0t60000970000196800795533030303146d0 /dev/did/rdsk/d1     
root@pnode1:~# 
root@pnode1:~# zpool create -m /ZFSwmq1 wmq1 /dev/did/dsk/d1s0
root@pnode1:~# zfs create wmq1/log
root@pnode1:~# zfs create wmq1/qmgrs
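
Optionally, verify the pool and file systems before you continue. For example:

root@pnode1:~# zpool list wmq1           # the wmq1 pool should be reported as ONLINE
root@pnode1:~# zfs list -r wmq1          # wmq1, wmq1/log, and wmq1/qmgrs should be mounted under /ZFSwmq1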

5. Create appropriate mount points and symlinks for the queue manager.

Note: Some commands are issued on both nodes of the global cluster.

root@pnode1:~# mkdir -p /var/mqm/log /var/mqm/qmgrs
root@pnode1:~# ln -s /ZFSwmq1/log /var/mqm/log/qmgr1
root@pnode1:~# ln -s /ZFSwmq1/qmgrs /var/mqm/qmgrs/qmgr1
root@pnode1:~# chown -R mqm:mqm /var/mqm
root@pnode1:~# chown -R mqm:mqm /ZFSwmq1

root@pnode2:~# mkdir -p /var/mqm/log /var/mqm/qmgrs
root@pnode2:~# ln -s /ZFSwmq1/log /var/mqm/log/qmgr1
root@pnode2:~# ln -s /ZFSwmq1/qmgrs /var/mqm/qmgrs/qmgr1
root@pnode2:~# chown -R mqm:mqm /var/mqm
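
Optionally, confirm that the symbolic links resolve on the node where the wmq1 pool is currently imported. For example:

root@pnode1:~# ls -l /var/mqm/log/qmgr1 /var/mqm/qmgrs/qmgr1    # both links should point to directories under /ZFSwmq1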

6. Install the IBM MQ software.

Note: You must install IBM MQ in /opt/mqm.

Perform on each node of the global cluster.

root@pnode1:~# cd <software-install-directory>
root@pnode1:~# ./mqlicense.sh 
root@pnode1:~# pkgadd -d  

7. Set the primary instance to use /opt/mqm.

Perform on each node of the global cluster.

root@pnode1:~# /opt/mqm/bin/setmqinst -i -p /opt/mqm
90 of 90 tasks have been completed successfully.
'Installation1' (/opt/mqm) set as the primary installation.
root@pnode1:~# 
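
Optionally, verify the installation on each node. For example:

root@pnode1:~# /opt/mqm/bin/dspmqver     # display the IBM MQ version and installation details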

8. Verify the IBM MQ installation.

a) Create the Queue Manager qmgr1 on pnode1.

root@pnode1:~# su - mqm
-bash-4.1$             
-bash-4.1$ crtmqm qmgr1
WebSphere MQ queue manager created.
Directory '/var/mqm/qmgrs/qmgr1' created.
The queue manager is associated with installation 'Installation1'.
Creating or replacing default objects for queue manager 'qmgr1'.
Default objects statistics : 79 created. 0 replaced. 0 failed.
Completing setup.
Setup completed.
-bash-4.1$ 

b) Start the Queue Manager qmgr1 on pnode1.

-bash-4.1$ strmqm qmgr1
WebSphere MQ queue manager 'qmgr1' starting.
The queue manager is associated with installation 'Installation1'.
5 log records accessed on queue manager 'qmgr1' during the log replay phase.
Log replay for queue manager 'qmgr1' complete.
Transaction manager state recovered for queue manager 'qmgr1'.
WebSphere MQ queue manager 'qmgr1' started using V8.0.0.0.
-bash-4.1$ 
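
Optionally, confirm that the queue manager is running. For example:

-bash-4.1$ dspmq -m qmgr1                # the status should be reported as Running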

c) Create a test local queue on pnode1.

-bash-4.1$ runmqsc qmgr1
5724-H72 (C) Copyright IBM Corp. 1994, 2014.
Starting MQSC for queue manager qmgr1.


def ql(sc3test) defpsist(yes)
     1 : def ql(sc3test) defpsist(yes)
AMQ8006: WebSphere MQ queue created.
end 
     2 : end
One MQSC command read.
No commands have a syntax error.
All valid MQSC commands were processed.
-bash-4.1$ 

d) Put a message to the test local queue on pnode1.

-bash-4.1$ /opt/mqm/samp/bin/amqsput SC3TEST qmgr1
Sample AMQSPUT0 start
target queue is SC3TEST
test test test
?^C
-bash-4.1$ 

e) Stop the Queue Manager qmgr1 on pnode1.

-bash-4.1$ endmqm -i qmgr1
WebSphere MQ queue manager 'qmgr1' ending.
WebSphere MQ queue manager 'qmgr1' ended.
-bash-4.1$ 
-bash-4.1$ exit
logout
root@pnode1:~#

f) Export the zpool from pnode1.

root@pnode1:~# zpool export wmq1

g) Copy the queue manager definition from pnode1 to pnode2.

Because the Queue Manager qmgr1 was created on node pnode1, you must ensure that qmgr1 is also known on node pnode2. Either copy /var/mqm/mqs.ini between nodes or copy just the Queue Manager definition.

root@pnode1:~# su - mqm
-bash-4.1$
-bash-4.1$ cat /var/mqm/mqs.ini


<... snipped ...>

QueueManager:
   Name=qmgr1
   Prefix=/var/mqm
   Directory=qmgr1
   InstallationName=Installation1
-bash-4.1$
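
If you prefer to append the stanza rather than edit the file by hand, one possible approach on pnode2 is shown below; the stanza must match the output shown above exactly.

root@pnode2:~# su - mqm
-bash-4.1$ cat >> /var/mqm/mqs.ini << 'EOF'    # append the qmgr1 stanza to the IBM MQ configuration file
QueueManager:
   Name=qmgr1
   Prefix=/var/mqm
   Directory=qmgr1
   InstallationName=Installation1
EOF
-bash-4.1$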

h) Add the Queue Manager entry to /var/mqm/mqs.ini on node pnode2.

root@pnode2:~# su - mqm
-bash-4.1$
-bash-4.1$ cat /var/mqm/mqs.ini

<... snipped ...>

QueueManager:
   Name=qmgr1
   Prefix=/var/mqm
   Directory=qmgr1
   InstallationName=Installation1
-bash-4.1$

i) Import the zpool on pnode2.

root@pnode2:~# zpool import wmq1

j) Start the Queue Manager qmgr1 on pnode2.

root@pnode2:~# su - mqm
-bash-4.1$
-bash-4.1$ strmqm qmgr1
WebSphere MQ queue manager 'qmgr1' starting.
The queue manager is associated with installation 'Installation1'.
5 log records accessed on queue manager 'qmgr1' during the log replay phase.
Log replay for queue manager 'qmgr1' complete.
Transaction manager state recovered for queue manager 'qmgr1'.
WebSphere MQ queue manager 'qmgr1' started using V8.0.0.0.
-bash-4.1$ 

k) Get the message from the test local queue on pnode2.

-bash-4.1$ /opt/mqm/samp/bin/amqsget SC3TEST qmgr1
Sample AMQSGET0 start
message test
?^C
-bash-4.1$

l) Delete the test local queue on pnode2.

-bash-4.1$ runmqsc qmgr1
5724-H72 (C) Copyright IBM Corp. 1994, 2014.
Starting MQSC for queue manager qmgr1.


delete ql(sc3test)
     1 : delete ql(sc3test)
AMQ8007: WebSphere MQ queue deleted.
end
     2 : end
One MQSC command read.
No commands have a syntax error.
All valid MQSC commands were processed.
-bash-4.1$

m) Stop the Queue Manager qmgr1 on pnode2.

-bash-4.1$ endmqm -i qmgr1
WebSphere MQ queue manager 'qmgr1' ending.
WebSphere MQ queue manager 'qmgr1' ended.
-bash-4.1$ 
-bash-4.1$ exit
logout
root@pnode2:~#

n) Export the zpool from pnode2.

root@pnode2:~# zpool export wmq1

9. Create the Oracle Solaris Cluster resources for IBM MQ.

Perform on one node of the global cluster.

a) Register the required resource types.

root@pnode1:~# clrt register SUNW.HAStoragePlus
root@pnode1:~# clrt register SUNW.gds
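
Optionally, confirm that both resource types are registered. For example:

root@pnode1:~# clrt list                 # SUNW.HAStoragePlus and SUNW.gds should appear in the output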

b) Create a failover resource group.

root@pnode1:~# clrg create wmq1-rg
 
c) Create a logical host resource.

root@pnode1:~# clrslh create -g wmq1-rg -h qmgr1 wmq1-lh

d) Create a storage resource.

root@pnode1:~# clrs create -g wmq1-rg -t SUNW.HAStoragePlus -p Zpools=wmq1 wmq1-has

e) Enable the resource group and its resources.

root@pnode1:~# clrg online -eM wmq1-rg
root@pnode1:~# clrs status -g wmq1-rg

=== Cluster Resources ===

Resource Name       Node Name      State        Status Message
-------------       ---------      -----        --------------
wmq1-has            pnode2          Online       Online
                    pnode1          Offline      Offline

wmq1-lh             pnode2          Online       Online - LogicalHostname online.
                    pnode1          Offline      Offline

root@pnode1:~# 

f) Create a Queue Manager resource.

root@pnode1:~# cd /opt/SUNWscmqs/mgr/util
root@pnode1:/opt/SUNWscmqs/mgr/util# vi mgr_config

g) Set the following entries, where RS is the name of the queue manager resource to create, RG is the resource group, QMGR is the queue manager name, LH is the logical host resource, and HAS_RS is the storage resource:

RS=wmq1-rs
RG=wmq1-rg
QMGR=qmgr1
LH=wmq1-lh
HAS_RS=wmq1-has

h) Register the Queue Manager resource.

root@pnode1:/opt/SUNWscmqs/mgr/util# ./mgr_register
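
Optionally, confirm that the queue manager resource was created; it remains disabled until you enable it in the next step. For example:

root@pnode1:/opt/SUNWscmqs/mgr/util# clrs list -g wmq1-rg    # wmq1-rs should now appear in the resource group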

i) Enable the Queue Manager resource.

root@pnode1:/opt/SUNWscmqs/mgr/util# clrs enable wmq1-rs
root@pnode1:/opt/SUNWscmqs/mgr/util# clrs status -g wmq1-rg

=== Cluster Resources ===

Resource Name       Node Name      State        Status Message
-------------       ---------      -----        --------------
wmq1-rs             pnode2         Online       Online
                    pnode1         Offline      Offline

wmq1-has            pnode2         Online       Online
                    pnode1         Offline      Offline

wmq1-lh             pnode2         Online       Online - LogicalHostname online.
                    pnode1         Offline      Offline

root@pnode1:/opt/SUNWscmqs/mgr/util#

j) Create a Listener resource.

root@pnode1:~# cd /opt/SUNWscmqs/lsr/util
root@pnode1:/opt/SUNWscmqs/lsr/util# vi lsr_config

k) Set the following entries, where PORT is the listener port, IPADDR is the IP address of the logical host, BACKLOG is the listener backlog, QMGR_RS is the queue manager resource, and USERID is the user that the listener runs as:

RS=wmq1-lsr
RG=wmq1-rg
QMGR=qmgr1
PORT=1414
IPADDR=10.134.84.62
BACKLOG=100
LH=wmq1-lh
QMGR_RS=wmq1-rs
USERID=mqm

l) Register the Listener resource.

root@pnode1:/opt/SUNWscmqs/lsr/util# ./lsr_register

m) Enable the Listener resource.

root@pnode1:/opt/SUNWscmqs/lsr/util# clrs enable wmq1-lsr
root@pnode1:/opt/SUNWscmqs/lsr/util# clrs status -g wmq1-rg

=== Cluster Resources ===

Resource Name       Node Name      State        Status Message
-------------       ---------      -----        --------------
wmq1-lsr            pnode2         Online       Online
                    pnode1         Offline      Offline

wmq1-rs             pnode2         Online       Online
                    pnode1         Offline      Offline

wmq1-has            pnode2         Online       Online
                    pnode1         Offline      Offline

wmq1-lh             pnode2         Online       Online - LogicalHostname online.
                    pnode1         Offline      Offline

root@pnode1:/opt/SUNWscmqs/lsr/util#
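
Optionally, confirm that the listener is accepting connections on the node where the resource group is online. For example:

root@pnode2:~# netstat -an | grep 1414   # an entry in LISTEN state on port 1414 indicates the listener is up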

10. Switch the resource group between nodes.

root@pnode1:~# clrg switch -n pnode1 wmq1-rg
root@pnode1:~# clrs status -g wmq1-rg

=== Cluster Resources ===

Resource Name       Node Name      State        Status Message
-------------       ---------      -----        --------------
wmq1-lsr            pnode2         Offline      Offline
                    pnode1         Online       Online

wmq1-rs             pnode2         Offline      Offline
                    pnode1         Online       Online

wmq1-has            pnode2         Offline      Offline
                    pnode1         Online       Online

wmq1-lh             pnode2         Offline      Offline - LogicalHostname offline.
                    pnode1         Online       Online - LogicalHostname online.

root@pnode1:~# 
root@pnode1:~# clrg switch -n pnode2 wmq1-rg
root@pnode1:~# clrs status -g wmq1-rg

=== Cluster Resources ===

Resource Name       Node Name      State        Status Message
-------------       ---------      -----        --------------
wmq1-lsr            pnode2         Online       Online
                    pnode1         Offline      Offline

wmq1-rs             pnode2         Online       Online
                    pnode1         Offline      Offline

wmq1-has            pnode2         Online       Online
                    pnode1         Offline      Offline

wmq1-lh             pnode2         Online       Online - LogicalHostname online.
                    pnode1         Offline      Offline - LogicalHostname offline.

root@pnode1:~# 
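
Optionally, confirm that the queue manager is running on the node that now hosts the resource group. For example:

root@pnode2:~# su - mqm -c "dspmq -m qmgr1"    # the status should be reported as Running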

(Optional) Repeat this procedure for each additional IBM MQ component that is required. Edit /opt/SUNWscmqs/xxx/util/xxx_config and follow the comments within that file, where xxx represents one of the following IBM MQ components:

                  chi  Channel Initiator
                  csv  Command Server
                  trm  Trigger Monitor

After you edit the xxx_config file, you must register the resource.
# cd /opt/SUNWscmqs/xxx/util/
# vi xxx_config
# ./xxx_register
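
For example, the following sketch applies the same pattern to the Channel Initiator (chi) component. The resource name wmq1-chi is only an illustration and must match the RS value that you set in chi_config.

# cd /opt/SUNWscmqs/chi/util/
# vi chi_config
# ./chi_register
# clrs enable wmq1-chi                   # wmq1-chi is an example RS value set in chi_config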