This chapter explains how to install and configure Sun Cluster HA for WebSphere Message Broker.
This chapter contains the following sections.
Overview of Installing and Configuring Sun Cluster HA for WebSphere Message Broker
Planning the Sun Cluster HA for WebSphere Message Broker Installation and Configuration
Verifying the Installation and Configuration of WebSphere Message Broker
Installing the Sun Cluster HA for WebSphere Message Broker Packages
Registering and Configuring Sun Cluster HA for WebSphere Message Broker
Verifying the Sun Cluster HA for WebSphere Message Broker Installation and Configuration
Understanding the Sun Cluster HA for WebSphere Message Broker Fault Monitor
Throughout this document, the term zone refers to a non-global Solaris zone. The term global zone is used explicitly when the global zone is meant.
The Sun Cluster HA for WebSphere Message Broker data service provides a mechanism for the orderly startup and shutdown, fault monitoring, and automatic failover of the WebSphere Message Broker service.
The following components can be protected by the Sun Cluster HA for WebSphere Message Broker data service within the global zone or whole root zone.
Broker
Configuration Manager
UserNameServer
The following table summarizes the tasks for installing and configuring Sun Cluster HA for WebSphere Message Broker and provides cross-references to detailed instructions for performing these tasks. Perform the tasks in the order that they are listed in the table.
Table 1 Tasks for Installing and Configuring Sun Cluster HA for WebSphere Message Broker
This section contains the information you need to plan your Sun Cluster HA for WebSphere Message Broker installation and configuration.
The configuration restrictions in the subsections that follow apply only to Sun Cluster HA for WebSphere Message Broker.
Your data service configuration might not be supported if you do not observe these restrictions.
The Sun Cluster HA for WebSphere Message Broker data service can only be configured as a failover service.
Single or multiple instances of WebSphere Message Broker can be deployed in the cluster.
WebSphere Message Broker can be deployed in the global zone or a whole root zone. See Restriction for multiple WebSphere Message Broker instances for more information about deploying in a zone.
The Sun Cluster HA for WebSphere Message Broker data service supports different versions of WebSphere Message Broker. Before proceeding with the installation of WebSphere Message Broker you must check that the Sun Cluster HA for WebSphere Message Broker data service has been verified against that version.
The WebSphere Message Broker files are the data files used by the broker in /var/mqsi. Within this document references will be made to the WebSphere Message Broker files which implies all of the contents of /var/mqsi, unless specified otherwise.
These WebSphere Message Broker files need to be placed on shared storage, as either a cluster file system or a highly available local file system. However, this placement depends on how WebSphere Message Broker is being deployed: whether a single instance or multiple instances are being deployed, and whether that deployment will be in the global zone or in zones.
Refer to Step 5 and Step 6 in How to Install and Configure WebSphere Message Broker for more information.
WebSphere Message Broker requires WebSphere MQ and a database.
If you are installing WebSphere Business Integration Message Broker v5, the Sun Cluster HA for WebSphere Message Broker requires that the broker, queue manager and database are all registered within the same resource group. This implies that a remote database cannot be used for WebSphere Business Integration Message Broker v5.
This restriction is required because WebSphere Business Integration Message Broker v5 has very specific restart dependencies if the queue manager or database fails. More specifically, it is not possible for the cluster to manage the restart of a remote database that is outside of the cluster.
Table 2 describes the restart dependencies that the WebSphere Business Integration Message Broker v5 software has on additional software.
Table 2 WebSphere Business Integration Message Broker v5 restart dependencies
| Failure | Intended Action | Actual Action |
|---|---|---|
| Broker | Broker Start | Sun Cluster Broker resource restarted |
| Broker Queue Manager | Broker Stop, Broker Queue Manager Start, Broker Start | Sun Cluster Queue Manager resource restarted, Sun Cluster Broker resource restarted |
| Broker Database | Broker Stop, Broker Queue Manager Stop, Broker Database Start, Broker Queue Manager Start, Broker Start | Sun Cluster Database resource restarted, Sun Cluster Queue Manager resource restarted, Sun Cluster Broker resource restarted |
If you are installing WebSphere Message Broker v6, the restart dependency for the broker database listed in Table 2 is no longer required. This implies that a remote database can be used for WebSphere Message Broker v6. WebSphere Message Broker and WebSphere MQ are still required to be registered within the same resource group.
The broker database needs to be available for WebSphere Message Broker v6 to fully initialize. Therefore, if you deploy a remote broker database, you must consider its availability and the impact on the broker if the broker database is not available.
The Sun Cluster HA for WebSphere Message Broker data service can support multiple WebSphere Message Broker instances, potentially with different versions.
If you intend to deploy multiple WebSphere Message Broker instances you will need to consider how you deploy WebSphere Message Broker in the global zone or whole root zones.
The purpose of the following discussion is to help you decide how to use the global zone or whole root zones to deploy multiple WebSphere Message Broker instances and then to determine what Nodelist entries are required.
The Nodelist entry is used when the resource group is defined using the clresourcegroup command. The Sun Cluster HA for WebSphere Message Broker must use the same resource group that is used for the WebSphere MQ and database resources.
You must therefore determine how the WebSphere Message Broker will be deployed in the cluster before the WebSphere MQ resource group is created so that you can specify the appropriate Nodelist entry.
Within these examples:
There are two nodes within the cluster, node1 and node2.
Each node has two zones, named z1 and z2.
Each example shows the required Nodelist property value, specified with the -n parameter when the failover resource group is created.
Benefits and drawbacks are listed within each example as + and -.
Create a single failover resource group that will contain all the WebSphere Message Broker instances that will run in the global zones across node1 and node2.
# clresourcegroup create -n node1,node2 RG1
+ Only the global zone per node is required.
- Multiple WebSphere Message Broker instances do not have independent failover as they are all within the same failover resource group.
- Under normal operation, only one node of the cluster at any time is actively processing the WebSphere Message Broker workload.
Create multiple failover resource groups that will each contain one WebSphere Message Broker instance that will run in the global zones across node1 and node2.
# clresourcegroup create -n node1,node2 RG1
# clresourcegroup create -n node2,node1 RG2
+ Only the global zone per node is required.
+ Multiple WebSphere Message Broker instances have independent failover in separate failover resource groups.
+ Under normal operation, each node of the cluster is actively processing a WebSphere Message Broker workload, thereby utilizing each node of the cluster.
Create a single failover resource group that will contain all the WebSphere Message Broker instances that will run in the same zones across node1 and node2.
# clresourcegroup create -n node1:z1,node2:z1 RG1
+ Only one zone per node is required.
- Although all zones are booted, only one zone at any time is actively processing the WebSphere Message Broker workload.
- Multiple WebSphere Message Broker instances do not have independent failover as they are all within the same failover resource group.
- Multiple WebSphere Message Broker instances are not isolated within their own separate zones.
Create multiple zones, where each zone pair will contain just one WebSphere Message Broker instance that will run in the same zones across node1 and node2.
# clresourcegroup create -n node1:z1,node2:z1 RG1
# clresourcegroup create -n node2:z2,node1:z2 RG2
+ Multiple WebSphere Message Broker instances have independent failover in separate failover resource groups and separate zones.
+ All WebSphere Message Broker instances are isolated within their own separate zones.
- Each resource group requires a unique zone per node.
The configuration requirements in this section apply only to Sun Cluster HA for WebSphere Message Broker.
If your data service configuration does not conform to these requirements, the data service configuration might not be supported.
Solaris zones provide a means of creating virtualized operating system environments within an instance of the Solaris 10 OS. Solaris zones allow one or more applications to run in isolation from other activity on your system. For complete information about installing and configuring a Solaris Container, see System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
You must determine which Solaris zone WebSphere Message Broker will run in. WebSphere Message Broker can run within a global zone or zone configuration. Table 3 provides some reasons to help you decide which zone is appropriate.
Table 3 Choosing the appropriate Solaris zone for WebSphere Message Broker
| Zone type | Reasons for choosing the appropriate Solaris zone for WebSphere Message Broker |
|---|---|
| Global Zone | Only one instance of WebSphere Message Broker will be installed. You are upgrading your cluster where previously a single or multiple WebSphere Message Broker instances were deployed on the cluster nodes. Zones are not required. |
| Non-global Zone | Multiple WebSphere Message Broker instances need to be consolidated and isolated from each other. Different versions of WebSphere Message Broker will be installed. Failover testing of WebSphere Message Broker between zones on a single-node cluster is required. |
This section contains the procedures you need to install and configure WebSphere Message Broker.
Determine how many WebSphere Message Broker instances will be used.
Refer to Restriction for multiple WebSphere Message Broker instances for more information.
Determine which Solaris zone to use.
Refer to Determine which Solaris zone WebSphere Message Broker will use for more information.
If a zone will be used, create the whole root zone.
Refer to System Administration Guide: Solaris Containers-Resource Management and Solaris Zones for complete information about installing and configuring a zone.
When creating a zone for use by the cluster, autoboot=true must be used.
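As an illustration only, a whole root zone with autoboot=true could be created along these lines; the zone name z1 and the zonepath are hypothetical and must be adapted to your environment:

```shell
# Illustrative sketch: create a whole root zone for cluster use.
# Zone name (z1) and zonepath are assumptions; run as superuser on each node.
zonecfg -z z1 <<'EOF'
create -b
set zonepath=/zones/z1
set autoboot=true
commit
EOF
zoneadm -z z1 install
zoneadm -z z1 boot
```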
If a zone is being used, ensure the zone is booted.
Repeat this step on all nodes of the cluster if a zone is being used.
Boot the zone if it is not running.
# zoneadm list -v
# zoneadm -z zonename boot
Determine how WebSphere Message Broker should be deployed in the cluster.
The WebSphere Message Broker files can be deployed onto a cluster file system or a highly available local file system in the cluster. The following discussion will help you determine the correct approach to take.
Within this section, a single instance or multiple instances of WebSphere Message Broker will be considered within a global zone or zone.
In each scenario, file system options for the WebSphere Message Broker files (/var/mqsi) will be listed together with a recommendation where appropriate.
Refer to Appendix A, Deployment Example: Installing WebSphere Message Broker in Zones for an example of how to set up the WebSphere Message Broker files.
Create a cluster file system or highly available local file system for the WebSphere Message Broker files.
Within this step you will create a file system for the WebSphere Message Broker files (/var/mqsi). Once you have determined how WebSphere Message Broker should be deployed in the cluster, choose one of the substeps below.
Create the WebSphere Message Broker files on a cluster file system by using Step a.
Create the WebSphere Message Broker files on a highly available local file system by using Step b.
WebSphere Message Broker files on a cluster file system.
Within this deployment:
The WebSphere Message Broker files (/var/mqsi) are deployed on a cluster file system.
However, /var/mqsi/locks or /var/mqsi/common/locks requires a symbolic link to a local file system. This is required because WebSphere Message Broker generates specific locks that require the locks directory to be located on local storage on each node.
If WebSphere Business Integration Message Broker v5 is being deployed you must create a symbolic link for /var/mqsi/locks to a local file system, e.g. /local/mqsi/locks on each node in the cluster.
If WebSphere Message Broker v6 is being deployed you must create a symbolic link for /var/mqsi/common/locks to a local file system, e.g. /local/mqsi/locks on each node in the cluster.
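As an illustration, the v6 symbolic link could be created as follows. The BASE variable is added only so the sketch can be rehearsed safely outside a cluster node; on a real node you would run the equivalent commands as superuser against the actual /var/mqsi and /local paths:

```shell
# Rehearsal of the WebSphere Message Broker v6 locks symlink.
# BASE is a hypothetical rehearsal root -- on a cluster node these are real paths.
BASE="${BASE:-/tmp/wmb-demo}"
mkdir -p "${BASE}/local/mqsi/locks"      # local (per-node) storage for the locks
mkdir -p "${BASE}/var/mqsi/common"       # broker files on shared storage
ln -sfn "${BASE}/local/mqsi/locks" "${BASE}/var/mqsi/common/locks"
ls -ld "${BASE}/var/mqsi/common/locks"   # shows the symbolic link
```

For WebSphere Business Integration Message Broker v5, the same pattern applies with /var/mqsi/locks as the link instead of /var/mqsi/common/locks.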
WebSphere Message Broker files on a highly available local file system.
Within this deployment:
The WebSphere Message Broker files (/var/mqsi) are deployed on a highly available local file system.
Highly available local file systems can include the Zettabyte File System (ZFS).
A symbolic link for the locks directory is not required, regardless of whether you are deploying WebSphere Business Integration Message Broker v5 or WebSphere Message Broker v6.
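For a ZFS deployment, the highly available local file system might be created along these lines; the pool name HAZpool matches later examples in this chapter, but the device name and mount point are assumptions for illustration:

```shell
# Hypothetical ZFS setup for the broker files; the device (c1t1d0) is a
# placeholder. Run as superuser on the node that owns the shared storage.
zpool create HAZpool c1t1d0
zfs create HAZpool/mqsi
zfs set mountpoint=/var/mqsi HAZpool/mqsi
```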
Mount the highly available local file system.
Perform this step on one node of the cluster.
If a non-ZFS highly available local file system is being used for the WebSphere Message Broker files.
Ensure the node has ownership of the disk set or disk group.
For Solaris Volume Manager.
# metaset -s disk-set -t
For Veritas Volume Manager.
# vxdg -C import disk-group
# vxdg -g disk-group startall
If the global zone is being used for WebSphere Message Broker.
# mount websphere-message-broker-highly-available-local-file-system
If a zone is being used for WebSphere Message Broker.
Create the mount point on all zones of the cluster that are being used for WebSphere Message Broker.
# zlogin zonename mkdir websphere-message-broker-highly-available-local-file-system
Mount the highly available local file system on one of the zones being used.
# mount -F lofs websphere-message-broker-highly-available-local-file-system \
> /zonepath/root/websphere-message-broker-highly-available-local-file-system
If a ZFS highly available local file system is being used for WebSphere Message Broker.
If the global zone is being used for WebSphere Message Broker.
# zpool import -R / HAZpool
If a zone is being used for WebSphere Message Broker.
# zpool import -R /zonepath/root HAZpool
If you are repeating this step to mount the ZFS highly available local file system on another node or zone before installing the WebSphere Message Broker software, you must first export the ZFS pool from the node that currently has the ZFS pool imported.
To export the ZFS pool, issue the following command:
# zpool export -f HAZpool
Install WebSphere Message Broker on all nodes or zones of the cluster.
After you have created and mounted the appropriate file system for the WebSphere Message Broker files, you must install WebSphere Message Broker on each node of the cluster, either in the global zone or zone as required.
For compatibility reasons, the Sun Cluster HA for WebSphere Message Broker data service requires that /opt/mqsi exists on all nodes or zones in the cluster, even if WebSphere Message Broker v6 is being deployed. Therefore you must create the directory /opt/mqsi.
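Assuming superuser access, the compatibility directory can be created with a command such as:

```shell
# Create the compatibility directory on every node (and zone) of the cluster
mkdir -p /opt/mqsi
```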
Follow IBM's WebSphere Message Broker Installation Guide to install WebSphere Message Broker.
If the WebSphere Message Broker files will use a highly available local file system, you will need to mount the highly available local file system on each node or zone before installing the WebSphere Message Broker software.
Repeat Step 7 as required.
Ensure that WebSphere MQ and the appropriate database are running.
WebSphere Message Broker requires that a queue manager and the appropriate database are running when you create a Broker, Configuration Manager, or UserNameServer. You must ensure that the queue manager and database are running on the node where you will create the Broker, Configuration Manager, or UserNameServer.
Create the WebSphere Message Broker, Configuration Manager or UserNameServer as required.
Follow IBM's WebSphere Message Broker Installation Guide to create a WebSphere Message Broker.
This section contains the procedure you need to verify the installation and configuration.
This procedure does not verify that your application is highly available because you have not yet installed your data service.
Perform this procedure on one node or zone of the cluster unless a specific step indicates otherwise.
Ensure the zone is booted, if a zone is being used.
Repeat this step on all nodes of the cluster if a zone is being used.
Boot the zone if it is not running.
# zoneadm list -v
# zoneadm -z zonename boot
Login to the zone, if a zone is being used.
# zlogin zonename
Start the WebSphere Message Broker, Configuration Manager or UserNameServer.
# su - message-broker-userid
$ mqsistart message-broker
List all WebSphere Message Brokers that are running.
Perform this step as the message-broker-userid.
$ mqsilist
Stop the WebSphere Message Broker, Configuration Manager or UserNameServer.
Perform this step as the message-broker-userid.
$ mqsistop -i message-broker
$ exit
Logout from the zone, if a zone is being used.
# exit
Unmount the highly available local file system.
Perform this step in the global zone only.
You should unmount the highly available local file system that you mounted in Step 7 of How to Install and Configure WebSphere Message Broker.
If a non-ZFS highly available local file system is being used for WebSphere Message Broker.
If the global zone is being used for WebSphere Message Broker.
# umount websphere-message-broker-highly-available-local-file-system
If a zone is being used for WebSphere Message Broker.
Unmount the highly available local file system from the zone.
# umount /zonepath/root/websphere-message-broker-highly-available-local-file-system
If a ZFS highly available file system is being used for WebSphere Message Broker.
# zpool export -f HAZpool
Relocate the shared storage to another node.
Perform this step on another node of the cluster.
If a non-ZFS highly available local file system is being used for the WebSphere Message Broker files.
Ensure the node has ownership of the disk set or disk group.
For Solaris Volume Manager.
# metaset -s disk-set -t
For Veritas Volume Manager.
# vxdg -C import disk-group
# vxdg -g disk-group startall
If the global zone is being used for WebSphere Message Broker.
# mount websphere-message-broker-highly-available-local-file-system
If a zone is being used for WebSphere Message Broker.
Create the mount point on all zones of the cluster that are being used for WebSphere Message Broker.
# zlogin zonename mkdir websphere-message-broker-highly-available-local-file-system
Mount the highly available local file system on one of the zones being used.
# mount -F lofs websphere-message-broker-highly-available-local-file-system \
> /zonepath/root/websphere-message-broker-highly-available-local-file-system
If a ZFS highly available file system is being used for WebSphere Message Broker, import the ZFS pool as described in Step 7 of How to Install and Configure WebSphere Message Broker.
Login to the zone, if a zone is being used.
Perform this step on the other node of the cluster.
# zlogin zonename
Start the WebSphere Message Broker, Configuration Manager or UserNameServer.
Perform this step on the other node or zone of the cluster.
# su - message-broker-userid
$ mqsistart message-broker
List all WebSphere Message Brokers that are running.
Perform this step as the message-broker-userid.
$ mqsilist
Stop the WebSphere Message Broker, Configuration Manager or UserNameServer.
Perform this step as the message-broker-userid.
$ mqsistop -i message-broker
$ exit
Logout from the zone, if a zone is being used.
# exit
Unmount the highly available local file system.
Perform this step in the global zone only.
You should unmount the highly available local file system that you mounted in Step 7 of How to Install and Configure WebSphere Message Broker.
If a non-ZFS highly available local file system is being used for WebSphere Message Broker.
If the global zone is being used for WebSphere Message Broker.
# umount websphere-message-broker-highly-available-local-file-system
If a zone is being used for WebSphere Message Broker.
Unmount the highly available local file system from the zone.
# umount /zonepath/root/websphere-message-broker-highly-available-local-file-system
If a ZFS highly available file system is being used for WebSphere Message Broker.
# zpool export -f HAZpool
If you did not install the Sun Cluster HA for WebSphere Message Broker packages during your initial Sun Cluster installation, perform this procedure to install the packages. To install the packages, use the Sun Java™ Enterprise System Installation Wizard.
Perform this procedure on each cluster node where you are installing the Sun Cluster HA for WebSphere Message Broker packages.
You can run the Sun Java Enterprise System Installation Wizard with a command-line interface (CLI) or with a graphical user interface (GUI). The content and sequence of instructions in the CLI and the GUI are similar.
Even if you plan to configure this data service to run in non-global zones, install the packages for this data service in the global zone. The packages are propagated to any existing non-global zones and to any non-global zones that are created after you install the packages.
Ensure that you have the Sun Java Availability Suite DVD-ROM.
If you intend to run the Sun Java Enterprise System Installation Wizard with a GUI, ensure that your DISPLAY environment variable is set.
On the cluster node where you are installing the data service packages, become superuser.
Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.
If the Volume Management daemon vold(1M) is running and configured to manage DVD-ROM devices, the daemon automatically mounts the DVD-ROM on the /cdrom directory.
Change to the Sun Java Enterprise System Installation Wizard directory of the DVD-ROM.
Start the Sun Java Enterprise System Installation Wizard.
# ./installer
When you are prompted, accept the license agreement.
If any Sun Java Enterprise System components are installed, you are prompted to select whether to upgrade the components or install new software.
From the list of Sun Cluster agents under Availability Services, select the data service for WebSphere Message Broker.
If you require support for languages other than English, select the option to install multilingual packages.
English language support is always installed.
When prompted whether to configure the data service now or later, choose Configure Later.
Choose Configure Later to perform the configuration after the installation.
Follow the instructions on the screen to install the data service packages on the node.
The Sun Java Enterprise System Installation Wizard displays the status of the installation. When the installation is complete, the wizard displays an installation summary and the installation logs.
(GUI only) If you do not want to register the product and receive product updates, deselect the Product Registration option.
The Product Registration option is not available with the CLI. If you are running the Sun Java Enterprise System Installation Wizard with the CLI, omit this step.
Exit the Sun Java Enterprise System Installation Wizard.
Unload the Sun Java Availability Suite DVD-ROM from the DVD-ROM drive.
See Registering and Configuring Sun Cluster HA for WebSphere Message Broker to register Sun Cluster HA for WebSphere Message Broker and to configure the cluster for the data service.
This section contains the procedures you need to configure Sun Cluster HA for WebSphere Message Broker.
Some procedures within this section require you to use certain Sun Cluster commands. Refer to the relevant Sun Cluster command man page for more information about these commands and their parameters.
How to Register and Configure Sun Cluster HA for WebSphere Message Broker
Perform this procedure on one node of the cluster only.
This procedure assumes that you installed the data service packages during your initial Sun Cluster installation.
If you did not install the Sun Cluster HA for WebSphere Message Broker packages as part of your initial Sun Cluster installation, go to How to Install the Sun Cluster HA for WebSphere Message Broker Packages.
This procedure requires that WebSphere MQ and a broker database have been installed and that the Sun Cluster HA for WebSphere MQ data service and database data service have been registered and configured.
The registration and configuration of Sun Cluster HA for WebSphere Message Broker must use the same resource group that WebSphere MQ and the broker database use.
You must therefore have completed the installation of the Sun Cluster Data Service for WebSphere MQ data service and the database data service before continuing with this procedure.
On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
Create a resource for the WebSphere Message Broker Disk Storage.
If a ZFS highly available local file system is being used.
# clresource create -g websphere-mq-resource-group \
> -t SUNW.HAStoragePlus \
> -p Zpools=websphere-message-broker-zspool \
> websphere-message-broker-hastorage-resource
Alternatively, you can simply add the websphere-message-broker-zspool to the existing websphere-mq-hastorage-resource.
# clresource set \
> -p Zpools=websphere-mq-zspools,websphere-message-broker-zspool \
> websphere-mq-hastorage-resource
If a cluster file system or a non ZFS highly available local file system is being used.
# clresource create -g websphere-mq-resource-group \
> -t SUNW.HAStoragePlus \
> -p FilesystemMountPoints=websphere-message-broker-filesystem-mountpoint \
> websphere-message-broker-hastorage-resource
Alternatively, you can simply add the websphere-message-broker-filesystem-mountpoint to the existing websphere-mq-hastorage-resource.
# clresource set \
> -p FilesystemMountPoints=mq-filesystem-mountpoints,message-broker-filesystem-mountpoint \
> websphere-mq-hastorage-resource
Enable the Disk Storage resource.
# clresource enable websphere-message-broker-hastorage-resource
Create and register a resource for the Broker.
Edit /opt/SUNWscmqi/sib/util/sib_config and follow the comments within that file. After you have edited sib_config, you must register the resource.
If you require the broker probe to perform a simple message flow test, you must create a message flow and specify the inbound queue in the SC3_IN variable and the outbound queue in the SC3_OUT variable.
Refer to IBM's WebSphere Message Broker Message Flows to create a simple message flow.
Alternatively, the default values for SC3_IN and SC3_OUT are set to NONE, which causes the broker probe to skip the message flow test and simply check that the bipservice program is running.
A value for the RDBMS_RS parameter is not required if WebSphere Message Broker v6 is being deployed. This implies that a remote database can be used for WebSphere Message Broker v6 and that the broker does not need to be restarted if the broker database is restarted.
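If you do choose the message flow probe, the relevant sib_config lines might look like the following fragment; the queue names are hypothetical examples, not defaults, and must match the inbound and outbound queues of the message flow you created:

```shell
# Hypothetical sib_config fragment enabling the message flow probe.
# BROKER.SC3.IN and BROKER.SC3.OUT are example queue names only.
SC3_IN=BROKER.SC3.IN     # inbound queue of the test message flow
SC3_OUT=BROKER.SC3.OUT   # outbound queue of the test message flow
```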
# cd /opt/SUNWscmqi/sib/util
# vi sib_config
# ./sib_register
The following listing has been taken from the deployment example, Step 2, which can be found in Appendix A, Deployment Example: Installing WebSphere Message Broker in Zones and shows /opt/SUNWscmqi/sib/util/sib_config that has been edited to configure a broker resource.
Vigor5# cat > /var/tmp/brk_config <<-EOF
RS=wmq1-brk
RG=wmq1-rg
QMGR=qmgr1
LH=wmq1-lh
HAS_RS=wmq1-ZFShas
SC3_IN=NONE
SC3_OUT=NONE
MQSI_ID=mqsiuser
BROKER=brk
QMGR_RS=wmq1-qmgr
RDBMS_RS=
START_CMD=
STOP_CMD=
EOF
Vigor5# /opt/SUNWscmqi/sib/util/sib_register -f /var/tmp/brk_config
Enable the Broker resource.
Before you enable the Broker resource, ensure that /opt/mqsi exists.
For compatibility reasons, the Sun Cluster HA for WebSphere Message Broker data service requires that /opt/mqsi exists on all nodes or zones in the cluster.
# clresource enable websphere-message-broker-resource
(Optional) Create and register a resource for the Configuration Manager.
Edit /opt/SUNWscmqi/sib/util/sib_config and follow the comments within that file. After you have edited sib_config, you must register the resource.
The configuration manager resource must specify NONE for the SC3_IN and SC3_OUT variables.
A value for the RDBMS_RS parameter is not required if WebSphere Message Broker v6 Configuration Manager is being deployed.
# cd /opt/SUNWscmqi/sib/util
# vi sib_config
# ./sib_register
The following listing has been taken from the deployment example, Step 4, which can be found in Appendix A, Deployment Example: Installing WebSphere Message Broker in Zones and shows /opt/SUNWscmqi/sib/util/sib_config that has been edited to configure a configuration manager resource.
Vigor5# cat > /var/tmp/cmg_config <<-EOF
RS=wmq1-cmg
RG=wmq1-rg
QMGR=qmgr1
LH=wmq1-lh
HAS_RS=wmq1-ZFShas
SC3_IN=NONE
SC3_OUT=NONE
MQSI_ID=mqsiuser
BROKER=cmg
QMGR_RS=wmq1-qmgr
RDBMS_RS=
START_CMD=
STOP_CMD=
EOF
Vigor5# /opt/SUNWscmqi/sib/util/sib_register -f /var/tmp/cmg_config
(Optional) Enable the Configuration Manager resource.
# clresource enable websphere-message-broker-configuration-manager-resource
(Optional) Create and register a resource for the UserNameServer.
Edit /opt/SUNWscmqi/siu/util/siu_config and follow the comments within that file. After you have edited siu_config, you must register the resource.
# cd /opt/SUNWscmqi/siu/util
# vi siu_config
# ./siu_register
The following listing has been taken from the deployment example, Step 6, which can be found in Appendix A, Deployment Example: Installing WebSphere Message Broker in Zones and shows /opt/SUNWscmqi/siu/util/siu_config that has been edited to configure a UserNameServer resource.
Vigor5# cat > /var/tmp/uns_config <<-EOF
RS=wmq1-uns
RG=wmq1-rg
QMGR=qmgr1
LH=wmq1-lh
HAS_RS=wmq1-ZFShas
MQSI_ID=mqsiuser
QMGR_RS=wmq1-qmgr
START_CMD=
STOP_CMD=
EOF
Vigor5# /opt/SUNWscmqi/siu/util/siu_register -f /var/tmp/uns_config
(Optional) Enable the UserNameServer resource.
# clresource enable websphere-message-broker-usernameserver-resource
This section contains the procedure you need to verify that you installed and configured your data service correctly.
On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
Ensure all the WebSphere Message Broker resources are online.
# cluster status
Enable any WebSphere Message Broker resources that are not online.
# clresource enable websphere-message-broker-resource
Switch the WebSphere Message Broker resource group to another cluster node or node:zone.
# clresourcegroup switch -n node[:zone] websphere-mq-resource-group
Upgrade the Sun Cluster HA for WebSphere Message Broker data service if the following conditions apply:
You are upgrading from an earlier version of the Sun Cluster HA for WebSphere Message Broker data service, which was previously known as Sun Cluster HA for WebSphere MQ Integrator.
You need to use the new features of this data service.
Perform steps 1, 2, 3 and 6 if you have an existing Sun Cluster HA for WebSphere Message Broker deployment and wish to upgrade to the new version. Complete all steps if you need to use the new features of this data service.
If you intend to perform all steps, consider whether your current WebSphere Message Broker resources have been modified with specific timeout values that suit your deployment. If timeout values were previously adjusted, reapply those timeout values to your new WebSphere Message Broker resources.
On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
Disable the WebSphere Message Broker resources.
# clresource disable websphere-message-broker-resource
Install the new version of Sun Cluster HA for WebSphere Message Broker on each cluster node.
Refer to How to Install the Sun Cluster HA for WebSphere Message Broker Packages for more information.
Delete the WebSphere Message Broker resources, if you want to use new features that have been introduced in the new version of Sun Cluster HA for WebSphere Message Broker.
# clresource delete websphere-message-broker-resource
Reregister the WebSphere Message Broker resources, if you want to use new features that have been introduced in the new version of Sun Cluster HA for WebSphere Message Broker.
Refer to How to Register and Configure Sun Cluster HA for WebSphere Message Broker for more information.
Enable the WebSphere Message Broker resources.
If you performed only steps 1, 2, and 3, you must re-enable the WebSphere Message Broker resources.
# clresource enable websphere-message-broker-resource
This section describes the probing algorithm of the Sun Cluster HA for WebSphere Message Broker fault monitor, and states the conditions and recovery actions associated with unsuccessful probing.
For conceptual information on fault monitors, see the Sun Cluster Concepts Guide.
The Sun Cluster HA for WebSphere Message Broker fault monitor uses the same resource properties as resource type SUNW.gds. Refer to the SUNW.gds(5) man page for a complete list of resource properties used.
The Sun Cluster HA for WebSphere Message Broker fault monitor is controlled by the extension properties that control the probing frequency. The default values of these properties determine the preset behavior of the fault monitor. The preset behavior should be suitable for most Sun Cluster installations. Therefore, you should tune the Sun Cluster HA for WebSphere Message Broker fault monitor only if you need to modify this preset behavior.
Setting the interval between fault monitor probes (Thorough_probe_interval)
Setting the timeout for fault monitor probes (Probe_timeout)
Setting the number of times the fault monitor attempts to restart the resource (Retry_count)
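For example, these properties could be tuned with the clresource command. This is a hedged sketch only: the resource name websphere-message-broker-resource and the values shown are placeholders, so substitute names and values that suit your deployment.

```shell
# Hypothetical tuning example -- resource name and values are placeholders.
# Probe every 120 seconds, allow each probe 300 seconds to complete,
# and permit two restarts before a failover is requested.
clresource set -p Thorough_probe_interval=120 \
               -p Probe_timeout=300 \
               -p Retry_count=2 \
               websphere-message-broker-resource
```

Because this command requires a running cluster, verify the resulting values afterwards with clresource show on one of the cluster nodes.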
The Sun Cluster HA for WebSphere Message Broker fault monitor checks the broker and other components in an infinite loop. During each cycle the fault monitor checks the relevant component and reports either success or failure.
If the probe succeeds, the fault monitor returns to its infinite loop and continues with the next cycle of probing and sleeping.
If the probe reports a failure, a request is made to the cluster to restart the resource. Each subsequent failure triggers another restart request.
If the number of successive restarts exceeds the Retry_count within the Thorough_probe_interval, a request is made to fail over the resource group to a different node or zone.
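The restart-then-failover decision described above can be sketched as the following simulation. This is an illustration only, not the actual fault monitor code: the probe function is a stub that always fails, and Retry_count is assumed to be 2.

```shell
# Illustrative simulation of the fault monitor's restart/failover decision.
RETRY_COUNT=2                 # stands in for the Retry_count property
failures=0
actions=""
probe() { return 1; }         # stub: the component check fails every cycle

for cycle in 1 2 3 4; do
    if probe; then
        failures=0            # success: continue the probe loop
    else
        failures=$((failures + 1))
        if [ "$failures" -gt "$RETRY_COUNT" ]; then
            actions="$actions failover"   # restarts exceeded Retry_count
            break
        else
            actions="$actions restart"    # ask the cluster to restart the resource
        fi
    fi
done
echo "$actions"
```

With a probe that fails on every cycle, two restart requests are made before the third failure triggers a failover request.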
The broker probe can check the broker by using a simple message flow test, if SC3_IN and SC3_OUT are set to the inbound and outbound queues.
If set, the broker probe puts a message to the inbound queue referenced by the SC3_IN variable. After waiting two seconds, the broker probe checks that the message has arrived at the outbound queue referenced by the SC3_OUT variable.
If SC3_IN and SC3_OUT are set to NONE, the simple message flow test is not performed. Instead, the broker probe checks that the bipservice process is still running.
SC3_IN and SC3_OUT are set when the broker resource is configured and registered within /opt/SUNWscmqi/sib/util/sib_config.
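The round-trip check performed by the broker probe can be pictured with the following toy illustration. Files stand in for the SC3_IN and SC3_OUT queues and a simple copy stands in for the deployed message flow; none of this is the real probe code, which operates on WebSphere MQ queues.

```shell
# Toy stand-in for the SC3_IN/SC3_OUT round-trip check (illustration only).
SC3_IN=/tmp/sc3_in.$$
SC3_OUT=/tmp/sc3_out.$$
echo "probe-message" > "$SC3_IN"     # the probe puts a message on the inbound queue
cp "$SC3_IN" "$SC3_OUT"              # the deployed message flow would move it
sleep 2                              # the real probe waits two seconds
if grep -q "probe-message" "$SC3_OUT"; then
    result=ok                        # message arrived: the broker is healthy
else
    result=fail                      # message missing: the probe reports a failure
fi
rm -f "$SC3_IN" "$SC3_OUT"
echo "$result"
```

The essential point is the contract: a message put on the inbound queue must appear on the outbound queue within the wait period, otherwise the probe reports a failure.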
The configuration manager probe checks that the bipservice process for the Configuration Manager is still running.
The configuration manager resource must set SC3_IN and SC3_OUT to NONE. This ensures that the simple message flow test is not performed.
The UserNameServer probe checks that the bipservice process for the UserNameServer is still running.
Sun Cluster HA for WebSphere Message Broker can be used by multiple WebSphere Message Broker instances. It is possible to turn debug on for all WebSphere Message Broker instances or a particular WebSphere Message Broker instance.
A config file exists under /opt/SUNWscmqi/xxx/etc, where xxx can be sib (Broker or Configuration Manager) or siu (UserNameServer).
These files allow you to turn on debug for all WebSphere Message Broker instances or for a specific WebSphere Message Broker instance on a particular node or zone within the cluster. If you require debug to be turned on for Sun Cluster HA for WebSphere Message Broker across the whole cluster, repeat this step on all nodes within the cluster.
Edit /etc/syslog.conf and change daemon.notice to daemon.debug.
# grep daemon /etc/syslog.conf
*.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
*.alert;kern.err;daemon.err                     operator
#
Change daemon.notice to daemon.debug and restart syslogd. The following output from grep daemon /etc/syslog.conf shows that daemon.debug has been set.
# grep daemon /etc/syslog.conf
*.err;kern.debug;daemon.debug;mail.crit         /var/adm/messages
*.alert;kern.err;daemon.err                     operator
Restart the syslog daemon.
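On Solaris 10, for example, the syslog daemon can be restarted through SMF as shown below. This is a sketch for that release; on earlier releases without SMF, stop and start syslogd through its init script instead.

```shell
# Restart syslogd via SMF so that the daemon.debug change takes effect.
svcadm restart svc:/system/system-log:default
```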
Edit /opt/SUNWscmqi/xxx/etc/config.
Perform this step for each component that requires debug output, on each node of Sun Cluster as required.
Edit /opt/SUNWscmqi/xxx/etc/config and change DEBUG= to DEBUG=ALL or DEBUG=resource.
# cat /opt/SUNWscmqi/sib/etc/config
#
# Copyright 2006 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
##ident "@(#)config 1.2 06/03/21 SMI"
#
# Usage:
#       DEBUG=<RESOURCE_NAME> or ALL
#
DEBUG=ALL
To turn off debug, reverse the steps above.