Oracle® Solaris Cluster Data Service for IBM WebSphere MQ Guide

Updated: September 2015
 
 

Deployment Example: Installing an IBM MQ Queue Manager in the Zone Cluster

How to Install an IBM MQ Queue Manager in the Zone Cluster

Target Cluster Configuration 

This example uses a two-node cluster, pnode1 and pnode2, that has a zone cluster named zc1. Zone cluster nodes are named vznode1 and vznode2.

root@pnode1:~# clzc status

=== Zone Clusters ===

--- Zone Cluster Status ---

Name   Brand     Node Name   Zone Host Name   Status   Zone Status
----   -----     ---------   --------------   ------   -----------
zc1    solaris   pnode1      vznode1          Online   Running
                 pnode2      vznode2          Online   Running

root@pnode1:~#

Software Configuration 

This deployment example uses the following software products:

- Oracle Solaris 11.2 for SPARC or x86 platforms
- Oracle Solaris Cluster 4.3 software
- HA for WebSphere MQ data service
- IBM MQ v8.0 for Oracle Solaris 

This example assumes that you have already installed and configured Oracle Solaris Cluster. It shows the installation and configuration of the data service only.

Assumptions

The instructions in this example have been developed with the following assumptions:
 
- Shell environment: All commands and the environment setup in this example are for the Korn shell environment. If you use a different shell, replace any Korn shell-specific information or instructions with the appropriate information for your preferred shell environment. A sample profile entry is shown after this list.
- User login: Unless otherwise specified, perform all procedures by assuming a root role that provides solaris.cluster.admin, solaris.cluster.modify, and solaris.cluster.read authorization. 
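
For example, to make the Oracle Solaris Cluster commands used throughout this example available without typing their full path names, you might add the following line to the root role's $HOME/.profile. This is a minimal Korn shell sketch; step 6 of the procedure shows the same setting being exported interactively.

export PATH=$PATH:/usr/cluster/bin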

Installing and Configuring IBM WebSphere MQ v8.0 

This deployment example installs IBM MQ v8.0 for Solaris on Oracle Solaris 11.2 and configures IBM MQ on a highly available Oracle Solaris ZFS local file system.

1. Install and configure the cluster as instructed in the Oracle Solaris Cluster 4.3 Software Installation Guide. 
It is recommended that you install the ha-cluster-full IPS package.
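
For example, you might install the full group package on each global-cluster node as follows:

root@pnode1:~# pkg install ha-cluster-full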

2. Create the zone cluster.

Queue Manager qmgr1 will later be deployed within this zone cluster. After the zone cluster is created, IBM MQ is installed on the zone cluster nodes.
Note that the logical host qmgr1 and the Oracle ZFS storage pool wmq1 are included in the zone cluster configuration.

Perform on one node of the global cluster.

root@pnode1:~# cat /var/tmp/zc1.txt
create
set zonepath=/zones/zc1
set autoboot=true
add node
set physical-host=pnode1
set hostname=vznode1
add net
set address=10.134.84.88
set physical=sc_ipmp0
end
end
add node
set physical-host=pnode2
set hostname=vznode2
add net
set address=10.134.84.90
set physical=sc_ipmp0
end
end
add net
set address=qmgr1
end
add dataset
set name=wmq1
end
commit
exit
root@pnode1:~#
root@pnode1:~# clzc configure -f /var/tmp/zc1.txt zc1
root@pnode1:~# clzc install zc1
Waiting for zone install commands to complete on all the nodes of the zone cluster "zc1"...
root@pnode1:~# 
root@pnode1:~# clzc boot zc1
Waiting for zone boot commands to complete on all the nodes of the zone cluster "zc1"...
root@pnode1:~#
root@pnode1:~# clzc status

=== Zone Clusters ===

--- Zone Cluster Status ---

Name   Brand     Node Name   Zone Host Name   Status   Zone Status
----   -----     ---------   --------------   ------   -----------
zc1    solaris   pnode1      vznode1          Online   Running
                 pnode2      vznode2          Online   Running

root@pnode1:~# 

3. Create the IBM MQ group and userid.

Perform on each node of the zone cluster.

root@pnode1:~# zlogin zc1
[Connected to zone 'zc1' pts/2]
Oracle Corporation      SunOS 5.11      11.2    July 2015
root@vznode1:~#
root@vznode1:~# groupadd -g 1002 mqm
root@vznode1:~# useradd -u 1002 -g 1002 -d /var/mqm mqm
root@vznode1:~# 
root@vznode1:~# projadd -c "WebSphere MQ default settings" -K "process.max-file-descriptor=(basic,10000,deny)" -K "project.max-shm-memory=(priv,4GB,deny)" -K "project.max-shm-ids=(priv,1024,deny)" -K "project.max-sem-ids=(priv,128,deny)" group.mqm
root@vznode1:~#
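
A minimal check to confirm the group, user, and resource-control project on each zone cluster node might look like the following:

root@vznode1:~# id mqm
root@vznode1:~# projects -l group.mqm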

4. Add a logical host name to /etc/hosts.

Perform on each node of the zone cluster.

The following output shows a logical host entry for Queue Manager qmgr1.

root@vznode1:~# grep qmgr1 /etc/hosts
10.134.84.62    qmgr1
root@vznode1:~#
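
If the entry does not already exist, you might add it with a command such as the following; the address shown is the one used for qmgr1 throughout this example:

root@vznode1:~# echo "10.134.84.62    qmgr1" >> /etc/hosts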

5. Create a ZFS pool and ZFS file systems on shared storage.

Perform this step on one node of the global cluster.

root@pnode1:~# scdidadm -L d1
1        pnode2:/dev/rdsk/c0t60000970000196800795533030303146d0 /dev/did/rdsk/d1     
1        pnode1:/dev/rdsk/c0t60000970000196800795533030303146d0 /dev/did/rdsk/d1     
root@pnode1:~# 
root@pnode1:~# zpool create -m /ZFSwmq1 wmq1 /dev/did/dsk/d1s0
root@pnode1:~# zfs create wmq1/log
root@pnode1:~# zfs create wmq1/qmgrs
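
Optionally, you can verify the pool and file systems before placing them under cluster control, for example:

root@pnode1:~# zpool list wmq1
root@pnode1:~# zfs list -r wmq1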

6. Create Oracle Solaris Cluster logical host and storage resources.

Perform on one node of the zone cluster.

a) Register the required resource types.

root@vznode1:~# export PATH=$PATH:/usr/cluster/bin
root@vznode1:~# clrt register SUNW.HAStoragePlus
root@vznode1:~# clrt register SUNW.gds

b) Create a failover resource group.

root@vznode1:~# clrg create wmq1-rg
 
c) Create a logical host resource.

root@vznode1:~# clrslh create -g wmq1-rg -h qmgr1 wmq1-lh

d) Create a storage resource.

root@vznode1:~# clrs create -g wmq1-rg -t SUNW.HAStoragePlus -p Zpools=wmq1 wmq1-has

e) Enable the resource group and its resources.

root@vznode1:~# clrg online -eM wmq1-rg
root@vznode1:~# clrg online -eM -n vznode1 wmq1-rg
root@vznode1:~# clrs status -g wmq1-rg

=== Cluster Resources ===

Resource Name       Node Name      State        Status Message
-------------       ---------      -----        --------------
wmq1-has            vznode2        Offline      Offline
                    vznode1        Online       Online

wmq1-lh             vznode2        Offline      Offline - LogicalHostname offline.
                    vznode1        Online       Online - LogicalHostname online.

root@vznode1:~#


7. Create appropriate mount points and symlinks for the queue manager.

Note: Issue the following commands on both nodes of the zone cluster. The chown of /ZFSwmq1 is performed only on the node where the ZFS pool is currently mounted (vznode1 in this example).

root@vznode1:~# mkdir -p /var/mqm/log /var/mqm/qmgrs
root@vznode1:~# ln -s /ZFSwmq1/log /var/mqm/log/qmgr1
root@vznode1:~# ln -s /ZFSwmq1/qmgrs /var/mqm/qmgrs/qmgr1
root@vznode1:~# chown -R mqm:mqm /var/mqm
root@vznode1:~# chown -R mqm:mqm /ZFSwmq1

root@vznode2:~# mkdir -p /var/mqm/log /var/mqm/qmgrs
root@vznode2:~# ln -s /ZFSwmq1/log /var/mqm/log/qmgr1
root@vznode2:~# ln -s /ZFSwmq1/qmgrs /var/mqm/qmgrs/qmgr1
root@vznode2:~# chown -R mqm:mqm /var/mqm
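
Optionally, on the node that currently hosts the wmq1-rg resource group (vznode1 at this point in the example), you can confirm that the symbolic links resolve to directories on the mounted ZFS file system:

root@vznode1:~# ls -l /var/mqm/log/qmgr1 /var/mqm/qmgrs/qmgr1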

8. Install IBM WebSphere MQ software.

Note: You must install IBM MQ in /opt/mqm.

Perform on each node of the zone cluster.

root@vznode1:~# cd <software install directory>
root@vznode1:~# ./mqlicense.sh 
root@vznode1:~# pkgadd -d . 
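
To confirm that the packages installed correctly, you can optionally display the IBM MQ version on each node; dspmqver should report a V8.0.0 level, for example:

root@vznode1:~# /opt/mqm/bin/dspmqver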


9. Set the primary instance to use /opt/mqm.

Perform on each node of the zone cluster.

root@vznode1:~# /opt/mqm/bin/setmqinst -i -p /opt/mqm
90 of 90 tasks have been completed successfully.
'Installation1' (/opt/mqm) set as the primary installation
root@vznode1:~# 
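
Optionally, you can list the IBM MQ installation entries and confirm that /opt/mqm is flagged as the primary installation, for example:

root@vznode1:~# /opt/mqm/bin/dspmqinst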

10. Verify the IBM MQ installation.

a) Create the Queue Manager qmgr1 on vznode1.

root@vznode1:~# su - mqm
-bash-4.1$             
-bash-4.1$ crtmqm qmgr1
WebSphere MQ queue manager created.
Directory '/var/mqm/qmgrs/qmgr1' created.
The queue manager is associated with installation 'Installation1'.
Creating or replacing default objects for queue manager 'qmgr1'.
Default objects statistics : 79 created. 0 replaced. 0 failed.
Completing setup.
Setup completed.
-bash-4.1$ 

b) Start the Queue Manager qmgr1 on vznode1.

-bash-4.1$ strmqm qmgr1
WebSphere MQ queue manager 'qmgr1' starting.
The queue manager is associated with installation 'Installation1'.
5 log records accessed on queue manager 'qmgr1' during the log replay phase.
Log replay for queue manager 'qmgr1' complete.
Transaction manager state recovered for queue manager 'qmgr1'.
WebSphere MQ queue manager 'qmgr1' started using V8.0.0.0.
-bash-4.1$ 

c) Create a test local queue on vznode1.

-bash-4.1$ runmqsc qmgr1
5724-H72 (C) Copyright IBM Corp. 1994, 2014.
Starting MQSC for queue manager qmgr1.


def ql(sc3test) defpsist(yes)
     1 : def ql(sc3test) defpsist(yes)
AMQ8006: WebSphere MQ queue created.
end 
     2 : end
One MQSC command read.
No commands have a syntax error.
All valid MQSC commands were processed.
-bash-4.1$ 

d) Put a message to the test local queue on vznode1.

-bash-4.1$ /opt/mqm/samp/bin/amqsput SC3TEST qmgr1
Sample AMQSPUT0 start
target queue is SC3TEST
test test test
^C
-bash-4.1$ 

e) Stop the Queue Manager qmgr1 on vznode1.

-bash-4.1$ endmqm -i qmgr1
WebSphere MQ queue manager 'qmgr1' ending.
WebSphere MQ queue manager 'qmgr1' ended.
-bash-4.1$ 
-bash-4.1$ exit
logout
root@vznode1:~#

f) Switch the Oracle Solaris Cluster resource group to vznode2.

root@vznode1:~# clrg switch -n vznode2 wmq1-rg
root@vznode1:~# clrs status -g wmq1-rg +

=== Cluster Resources ===

Resource Name       Node Name      State        Status Message
-------------       ---------      -----        --------------
wmq1-has            vznode2        Online       Online
                    vznode1        Offline      Offline

wmq1-lh             vznode2        Online       Online - LogicalHostname online.
                    vznode1        Offline      Offline - LogicalHostname offline.

root@vznode1:~#

g) Copy the queue manager definition from vznode1 to vznode2.

Because Queue Manager qmgr1 was created on node vznode1, you must ensure that qmgr1 is also known on node vznode2. Either copy /var/mqm/mqs.ini between the nodes or copy just the Queue Manager definition.

root@vznode1:~# su - mqm
-bash-4.1$
-bash-4.1$ cat /var/mqm/mqs.ini


<... snipped ...>

QueueManager:
   Name=qmgr1
   Prefix=/var/mqm
   Directory=qmgr1
   InstallationName=Installation1
-bash-4.1$

h) Add the Queue Manager entry to /var/mqm/mqs.ini on node vznode2.

root@vznode2:~# su - mqm
-bash-4.1$
-bash-4.1$ cat /var/mqm/mqs.ini


<... snipped ...>

QueueManager:
   Name=qmgr1
   Prefix=/var/mqm
   Directory=qmgr1
   InstallationName=Installation1
-bash-4.1$


i) Start the Queue Manager qmgr1 on vznode2.

-bash-4.1$ strmqm qmgr1
WebSphere MQ queue manager 'qmgr1' starting.
The queue manager is associated with installation 'Installation1'.
5 log records accessed on queue manager 'qmgr1' during the log replay phase.
Log replay for queue manager 'qmgr1' complete.
Transaction manager state recovered for queue manager 'qmgr1'.
WebSphere MQ queue manager 'qmgr1' started using V8.0.0.0.
-bash-4.1$ 

j) Get the message from the test local queue on vznode2.

-bash-4.1$ /opt/mqm/samp/bin/amqsget SC3TEST qmgr1
Sample AMQSGET0 start
message test
^C
-bash-4.1$

k) Delete the test local queue on vznode2.

-bash-4.1$ runmqsc qmgr1
5724-H72 (C) Copyright IBM Corp. 1994, 2014.
Starting MQSC for queue manager qmgr1.


delete ql(sc3test)
     1 : delete ql(sc3test)
AMQ8007: WebSphere MQ queue deleted.
end
     2 : end
One MQSC command read.
No commands have a syntax error.
All valid MQSC commands were processed.
-bash-4.1$

l) Stop the Queue Manager qmgr1 on vznode2.

-bash-4.1$ endmqm -i qmgr1
WebSphere MQ queue manager 'qmgr1' ending.
WebSphere MQ queue manager 'qmgr1' ended.
-bash-4.1$ 
-bash-4.1$ exit
logout
root@vznode2:~#


11. Create Oracle Solaris Cluster resources for IBM MQ.

Perform on one node of the zone cluster.

a) Create a Queue Manager resource.

root@vznode2:~# cd /opt/SUNWscmqs/mgr/util
root@vznode2:/opt/SUNWscmqs/mgr/util# vi mgr_config

b) Set the following entries:

RS=wmq1-rs
RG=wmq1-rg
QMGR=qmgr1
LH=wmq1-lh
HAS_RS=wmq1-has

c) Register the Queue Manager resource.

root@vznode2:/opt/SUNWscmqs/mgr/util# ./mgr_register

d) Enable the Queue Manager resource.

root@vznode2:/opt/SUNWscmqs/mgr/util# clrs enable wmq1-rs
root@vznode2:/opt/SUNWscmqs/mgr/util# clrs status -g wmq1-rg

=== Cluster Resources ===

Resource Name       Node Name      State        Status Message
-------------       ---------      -----        --------------
wmq1-rs             vznode2        Online       Online
                    vznode1        Offline      Offline

wmq1-has            vznode2        Online       Online
                    vznode1        Offline      Offline

wmq1-lh             vznode2        Online       Online - LogicalHostname online.
                    vznode1        Offline      Offline

root@vznode2:/opt/SUNWscmqs/mgr/util#
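
Optionally, you can confirm from IBM MQ itself that the agent has started the queue manager on the node where the resource group is online; dspmq should report qmgr1 with a status of Running, for example:

root@vznode2:~# su - mqm -c "/opt/mqm/bin/dspmq -m qmgr1"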

e) Create a Listener resource.

root@vznode2:~# cd /opt/SUNWscmqs/lsr/util
root@vznode2:/opt/SUNWscmqs/lsr/util# vi lsr_config

f) Set the following entries:

RS=wmq1-lsr
RG=wmq1-rg
QMGR=qmgr1
PORT=1414
IPADDR=10.134.84.62
BACKLOG=100
LH=wmq1-lh
QMGR_RS=wmq1-rs
USERID=mqm

g) Register the Listener resource.

root@vznode2:/opt/SUNWscmqs/lsr/util# ./lsr_register

h) Enable the Listener resource.

root@vznode2:/opt/SUNWscmqs/lsr/util# clrs enable wmq1-lsr
root@vznode2:/opt/SUNWscmqs/lsr/util# clrs status -g wmq1-rg

=== Cluster Resources ===

Resource Name       Node Name      State        Status Message
-------------       ---------      -----        --------------
wmq1-rs             vznode2        Online       Online
                    vznode1        Offline      Offline

wmq1-has            vznode2        Online       Online
                    vznode1        Offline      Offline

wmq1-lh             vznode2        Online       Online - LogicalHostname online.
                    vznode1        Offline      Offline

root@vznode2:/opt/SUNWscmqs/lsr/util#
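
Optionally, confirm that the listener is accepting connections on port 1414 on the node where the resource group is online, for example:

root@vznode2:~# netstat -an | grep 1414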


12. Switch the resource group between nodes.

root@vznode2:~# clrg switch -n vznode1 wmq1-rg
root@vznode2:~# clrs status -g wmq1-rg

=== Cluster Resources ===

Resource Name       Node Name      State        Status Message
-------------       ---------      -----        --------------
wmq1-lsr            vznode2        Offline      Offline
                    vznode1        Online       Online

wmq1-rs             vznode2        Offline      Offline
                    vznode1        Online       Online

wmq1-has            vznode2        Offline      Offline
                    vznode1        Online       Online

wmq1-lh             vznode2        Offline      Offline - LogicalHostname offline.
                    vznode1        Online       Online - LogicalHostname online.

root@vznode2:~# 
root@vznode2:~# clrg switch -n vznode2 wmq1-rg
root@vznode2:~# clrs status -g wmq1-rg

=== Cluster Resources ===

Resource Name       Node Name      State        Status Message
-------------       ---------      -----        --------------
wmq1-lsr            vznode2        Online       Online
                    vznode1        Offline      Offline

wmq1-rs             vznode2        Online       Online
                    vznode1        Offline      Offline

wmq1-has            vznode2        Online       Online
                    vznode1        Offline      Offline

wmq1-lh             vznode2        Online       Online - LogicalHostname online.
                    vznode1        Offline      Offline - LogicalHostname offline.

root@vznode2:~#

(Optional) Repeat for each additional IBM MQ component that is required. Edit /opt/SUNWscmqs/xxx/util/xxx_config and follow the comments within that file, where xxx represents one of the following IBM MQ components:

                  chi  Channel Initiator
                  csv  Command Server
                  trm  Trigger Monitor

After you edit the xxx_config file, you must register the resource. 
# cd /opt/SUNWscmqs/xxx/util/ 
# vi xxx_config 
# ./xxx_register