Sun Cluster Data Service for WebSphere Message Broker Guide for Solaris OS

Installing and Configuring Sun Cluster HA for WebSphere Message Broker

This chapter explains how to install and configure Sun Cluster HA for WebSphere Message Broker.

This chapter contains the following sections.

Sun Cluster HA for WebSphere Message Broker Overview


Note –

Throughout this document, the term zone refers to a non-global Solaris zone. The term global zone is used unchanged.


The Sun Cluster HA for WebSphere Message Broker data service provides a mechanism for the orderly startup and shutdown, fault monitoring, and automatic failover of the WebSphere Message Broker service.

The following components can be protected by the Sun Cluster HA for WebSphere Message Broker data service within the global zone or whole root zone.

Broker
Configuration Manager
UserNameServer

Overview of Installing and Configuring Sun Cluster HA for WebSphere Message Broker

The following table summarizes the tasks for installing and configuring Sun Cluster HA for WebSphere Message Broker and provides cross-references to detailed instructions for performing these tasks. Perform the tasks in the order that they are listed in the table.

Table 1 Tasks for Installing and Configuring Sun Cluster HA for WebSphere Message Broker

Task 

Instructions 

Plan the installation 

Planning the Sun Cluster HA for WebSphere Message Broker Installation and Configuration

Install and configure the WebSphere Message Broker software 

How to Install and Configure WebSphere Message Broker

Verify the installation and configuration 

How to Verify the Installation and Configuration of WebSphere Message Broker

Install Sun Cluster HA for WebSphere Message Broker packages 

How to Install the Sun Cluster HA for WebSphere Message Broker Packages

Register and configure Sun Cluster HA for WebSphere Message Broker resources 

How to Register and Configure Sun Cluster HA for WebSphere Message Broker

Verify the Sun Cluster HA for WebSphere Message Broker installation and configuration 

How to Verify the Sun Cluster HA for WebSphere Message Broker Installation and Configuration

Upgrade the Sun Cluster HA for WebSphere Message Broker data service 

How to Upgrade to the New Version of Sun Cluster HA for WebSphere Message Broker

Tune the Sun Cluster HA for WebSphere Message Broker fault monitor 

Understanding the Sun Cluster HA for WebSphere Message Broker Fault Monitor

Debug Sun Cluster HA for WebSphere Message Broker 

How to turn on debug for Sun Cluster HA for WebSphere Message Broker

Planning the Sun Cluster HA for WebSphere Message Broker Installation and Configuration

This section contains the information you need to plan your Sun Cluster HA for WebSphere Message Broker installation and configuration.

Configuration Restrictions

The configuration restrictions in the subsections that follow apply only to Sun Cluster HA for WebSphere Message Broker.


Caution –

Your data service configuration might not be supported if you do not observe these restrictions.


Restriction for the supported configurations of Sun Cluster HA for WebSphere Message Broker

The Sun Cluster HA for WebSphere Message Broker data service can only be configured as a failover service.

Single or multiple instances of WebSphere Message Broker can be deployed in the cluster.

WebSphere Message Broker can be deployed in the global zone or in a whole root zone. See Restriction for multiple WebSphere Message Broker instances for more information about deploying in a zone.

The Sun Cluster HA for WebSphere Message Broker data service supports different versions of WebSphere Message Broker. Before proceeding with the installation of WebSphere Message Broker you must check that the Sun Cluster HA for WebSphere Message Broker data service has been verified against that version.

Restriction for the location of WebSphere Message Broker files

The WebSphere Message Broker files are the data files used by the broker in /var/mqsi. Within this document, references to the WebSphere Message Broker files mean all of the contents of /var/mqsi, unless specified otherwise.

These WebSphere Message Broker files must be placed on shared storage, as either a cluster file system or a highly available local file system. However, this placement depends on how WebSphere Message Broker is being deployed: whether a single instance or multiple instances are deployed, and whether that deployment is in the global zone or in zones.

Refer to Step 5 and Step 6 in How to Install and Configure WebSphere Message Broker for more information.

Restriction for the WebSphere Message Broker additional software

WebSphere Message Broker requires WebSphere MQ and a database.

If you are installing WebSphere Business Integration Message Broker v5, the Sun Cluster HA for WebSphere Message Broker requires that the broker, queue manager and database are all registered within the same resource group. This implies that a remote database cannot be used for WebSphere Business Integration Message Broker v5.

This restriction is required because WebSphere Business Integration Message Broker v5 has very specific restart dependencies if the queue manager or database fails. More specifically, it is not possible for the cluster to manage the restart of a remote database that is outside of the cluster.

Table 2 describes the restart dependencies that the WebSphere Business Integration Message Broker v5 software has on additional software.

Table 2 WebSphere Business Integration Message Broker v5 restart dependencies

Failure 

Intended Action 

Actual Action 

Broker 

Broker Start 

Sun Cluster Broker resource restarted 

Broker Queue Manager 

Broker Stop 

Broker Queue Manager Start 

Broker Start 

Sun Cluster Queue Manager resource restarted 

Sun Cluster Broker resource restarted 

Broker Database 

Broker Stop 

Broker Queue Manager Stop 

Broker Database Start 

Broker Queue Manager Start 

Broker Start 

Sun Cluster Database resource restarted 

Sun Cluster Queue Manager resource restarted 

Sun Cluster Broker resource restarted 

If you are installing WebSphere Message Broker v6, the restart dependency for the broker database listed in Table 2 is no longer required. This implies that a remote database can be used for WebSphere Message Broker v6. WebSphere Message Broker and WebSphere MQ are still required to be registered within the same resource group.


Note –

The broker database needs to be available for WebSphere Message Broker v6 to fully initialize. Therefore if you are deploying a remote broker database you must consider the availability of the broker database and the impact that can have on the broker if the broker database is not available.


Restriction for multiple WebSphere Message Broker instances

The Sun Cluster HA for WebSphere Message Broker data service can support multiple WebSphere Message Broker instances, potentially with different versions.

If you intend to deploy multiple WebSphere Message Broker instances you will need to consider how you deploy WebSphere Message Broker in the global zone or whole root zones.

The purpose of the following discussion is to help you decide how to use the global zone or whole root zones to deploy multiple WebSphere Message Broker instances and then to determine what Nodelist entries are required.

The Nodelist entry is used when the resource group is defined using the clresourcegroup command. The Sun Cluster HA for WebSphere Message Broker must use the same resource group that is used for the WebSphere MQ and database resources.

You must therefore determine how the WebSphere Message Broker will be deployed in the cluster before the WebSphere MQ resource group is created so that you can specify the appropriate Nodelist entry.

Within these examples, node1 and node2 are the cluster nodes, and z1 and z2 are whole root zones.


Example 1 Run multiple WebSphere Message Broker instances in the global zone in one resource group.

Create a single failover resource group that will contain all the WebSphere Message Broker instances that will run in the global zones across node1 and node2.


# clresourcegroup create -n node1,node2 RG1


Example 2 Run multiple WebSphere Message Broker instances in the global zone in separate resource groups.

Create multiple failover resource groups that will each contain one WebSphere Message Broker instance that will run in the global zones across node1 and node2.


# clresourcegroup create -n node1,node2 RG1
# clresourcegroup create -n node2,node1 RG2


Example 3 Run multiple WebSphere Message Broker instances in zones in one resource group.

Create a single failover resource group that will contain all the WebSphere Message Broker instances that will run in the same zones across node1 and node2.


# clresourcegroup create -n node1:z1,node2:z1 RG1


Example 4 Run multiple WebSphere Message Broker instances in zones in separate resource groups.

Create multiple zones, where each zone pair will contain just one WebSphere Message Broker instance that will run in the same zones across node1 and node2.


# clresourcegroup create -n node1:z1,node2:z1 RG1
# clresourcegroup create -n node2:z2,node1:z2 RG2

Configuration Requirements

The configuration requirements in this section apply only to Sun Cluster HA for WebSphere Message Broker.


Caution –

If your data service configuration does not conform to these requirements, the data service configuration might not be supported.


Determine which Solaris zone WebSphere Message Broker will use

Solaris zones provide a means of creating virtualized operating system environments within an instance of the Solaris 10 OS. Solaris zones allow one or more applications to run in isolation from other activity on your system. For complete information about installing and configuring a Solaris zone, see System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

You must determine which Solaris zone WebSphere Message Broker will run in. WebSphere Message Broker can run in the global zone or in a non-global zone. Table 3 provides some reasons to help you decide which zone is appropriate.

Table 3 Choosing the appropriate Solaris zone for WebSphere Message Broker

Zone type 

Reasons for choosing the appropriate Solaris Zone for WebSphere Message Broker 

Global Zone 

Only one instance of WebSphere Message Broker will be installed. 

You are upgrading a cluster on which one or more WebSphere Message Broker instances were previously deployed on the cluster nodes. 

Zones are not required. 

Non-global Zone 

Multiple WebSphere Message Broker instances need to be consolidated and isolated from each other. 

Different versions of WebSphere Message Broker will be installed. 

Failover testing of WebSphere Message Broker between zones on a single node cluster is required. 

Installing and Configuring WebSphere Message Broker

This section contains the procedures you need to install and configure WebSphere Message Broker.

Procedure: How to Install and Configure WebSphere Message Broker


  1. Determine how many WebSphere Message Broker instances will be used.

    Refer to Restriction for multiple WebSphere Message Broker instances for more information.

  2. Determine which Solaris zone to use.

    Refer to Determine which Solaris zone WebSphere Message Broker will use for more information.

  3. If a zone will be used, create the whole root zone.

    Refer to System Administration Guide: Solaris Containers-Resource Management and Solaris Zones for complete information about installing and configuring a zone.


    Note –

    When creating a zone for use by the cluster, autoboot=true must be used.
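
    As a sketch, a whole root zone that satisfies this requirement could be created from a zonecfg command file along the following lines. The zone name z1 and the zonepath are illustrative assumptions; only the autoboot=true setting is mandated here.

```
create -b
set zonepath=/zones/z1
set autoboot=true
commit
```

    The file would be applied with zonecfg -z z1 -f command-file, and the zone then installed with zoneadm -z z1 install.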


  4. If a zone is being used, ensure the zone is booted.

    Repeat this step on all nodes of the cluster if a zone is being used.

    Boot the zone if it is not running.


    # zoneadm list -v
    # zoneadm -z zonename boot
    
  5. Determine how WebSphere Message Broker should be deployed in the cluster.

    The WebSphere Message Broker files can be deployed onto a cluster file system or a highly available local file system in the cluster. The following discussion will help you determine the correct approach to take.

    Within this section, a single instance or multiple instances of WebSphere Message Broker will be considered within a global zone or zone.

    In each scenario, file system options for the WebSphere Message Broker files (/var/mqsi) will be listed together with a recommendation where appropriate.

    1. Single Instance of WebSphere Message Broker

      1. Global zone deployment

        /var/mqsi

        Can be deployed on a cluster file system; however, deploying on a highly available local file system is recommended.

      2. Zone deployment

        /var/mqsi

        Must be deployed on a highly available local file system.

    2. Multiple Instances of WebSphere Message Broker

      1. Global zone deployment

        /var/mqsi

        Must be deployed on a cluster file system.

      2. Zone deployment

        /var/mqsi

        Must be deployed on a highly available local file system.


    Note –

    Refer to Appendix A, Deployment Example: Installing WebSphere Message Broker in Zones for an example of how to set up the WebSphere Message Broker files.


  6. Create a cluster file system or highly available local file system for the WebSphere Message Broker files.

    Within this step you will create a file system for the WebSphere Message Broker files (/var/mqsi). Once you have determined how WebSphere Message Broker should be deployed in the cluster, choose one of the substeps below.

    • Create the WebSphere Message Broker files on a cluster file system by using Step a.

    • Create the WebSphere Message Broker files on a highly available local file system by using Step b.

    1. WebSphere Message Broker files on a cluster file system.

      Within this deployment:

      • The WebSphere Message Broker files (/var/mqsi) are deployed on a cluster file system.

      • However, /var/mqsi/locks or /var/mqsi/common/locks requires a symbolic link to a local file system. This is required because WebSphere Message Broker generates specific locks that require the locks directory to be located on local storage within each node.

        If WebSphere Business Integration Message Broker v5 is being deployed you must create a symbolic link for /var/mqsi/locks to a local file system, e.g. /local/mqsi/locks on each node in the cluster.

        If WebSphere Message Broker v6 is being deployed you must create a symbolic link for /var/mqsi/common/locks to a local file system, e.g. /local/mqsi/locks on each node in the cluster.
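
        As a sketch of the WebSphere Message Broker v6 case, the relocation can be performed with commands like the following. They are shown against a scratch root directory so they can be tried safely; on a cluster node you would drop ROOT and operate on /var/mqsi/common/locks and a local directory such as /local/mqsi/locks directly, repeating the symbolic link on each node.

```shell
# Demonstration against a scratch root; on a real node, omit ROOT and use
# /var/mqsi/common/locks and /local/mqsi/locks (v6) on each cluster node.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/local/mqsi/locks"   # locks directory on local storage
mkdir -p "$ROOT/var/mqsi/common"    # broker files (shared storage in practice)
ln -s "$ROOT/local/mqsi/locks" "$ROOT/var/mqsi/common/locks"
readlink "$ROOT/var/mqsi/common/locks"
```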

    2. WebSphere Message Broker files on a highly available local file system.

      Within this deployment:

      • The WebSphere Message Broker files (/var/mqsi) are deployed on a highly available local file system.

      • Highly available local file systems can include the Zettabyte File System (ZFS).

      • A symbolic link for the locks directory is not required, regardless of whether you are deploying WebSphere Business Integration Message Broker v5 or WebSphere Message Broker v6.

  7. Mount the highly available local file system.

    Perform this step on one node of the cluster.

    1. If a non-ZFS highly available local file system is being used for the WebSphere Message Broker files.

      Ensure the node has ownership of the disk set or disk group.

      For Solaris Volume Manager.


      # metaset -s disk-set -t
      

      For Veritas Volume Manager.


      # vxdg -C import disk-group
      # vxdg -g disk-group startall
      
      1. If the global zone is being used for WebSphere Message Broker.


        # mount websphere-message-broker-highly-available-local-file-system
        
      2. If a zone is being used for WebSphere Message Broker.

        Create the mount point on all zones of the cluster that are being used for WebSphere Message Broker.


        # zlogin zonename mkdir websphere-message-broker-highly-available-local-file-system
        

        Mount the highly available local file system on one of the zones being used.


        # mount -F lofs websphere-message-broker-highly-available-local-file-system \
        > /zonepath/root/websphere-message-broker-highly-available-local-file-system
        
    2. If a ZFS highly available local file system is being used for WebSphere Message Broker.

      1. If the global zone is being used for WebSphere Message Broker.


        # zpool import -R / HAZpool
        
      2. If a zone is being used for WebSphere Message Broker.


        # zpool import -R /zonepath/root HAZpool
        

        Note –

        If you are repeating this step to mount the ZFS highly available local file system on another node or zone before installing the WebSphere Message Broker software, you must first export the ZFS pool from the node that currently has the ZFS pool imported.

        To export the ZFS pool, issue the following,


        # zpool export -f HAZpool
        

  8. Install WebSphere Message Broker on all nodes or zones of the cluster.

    After you have created and mounted the appropriate file system for the WebSphere Message Broker files, you must install WebSphere Message Broker on each node of the cluster, either in the global zone or zone as required.

    For compatibility reasons, the Sun Cluster HA for WebSphere Message Broker data service requires that /opt/mqsi exists on all nodes or zones in the cluster, even if WebSphere Message Broker v6 is being deployed. Therefore you must create the directory /opt/mqsi.
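
    For instance, the directory could be created with a command along these lines, run on every node or zone. The PREFIX override is illustrative only, so the sketch can be exercised without root privileges; on a cluster node the path is simply /opt/mqsi.

```shell
# Create the compatibility directory the data service expects.
# PREFIX is only for safe experimentation; leave it unset on a real node
# so that the directory created is /opt/mqsi itself.
PREFIX=${PREFIX:-$(mktemp -d)}
mkdir -p "$PREFIX/opt/mqsi"
ls -d "$PREFIX/opt/mqsi"
```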

    Follow IBM's WebSphere Message Broker Installation Guide to install WebSphere Message Broker.


    Note –

    If the WebSphere Message Broker files will use a highly available local file system, you will need to mount the highly available local file system on each node or zone before installing the WebSphere Message Broker software.

    Repeat Step 7 as required.


  9. Ensure that WebSphere MQ and the appropriate database are running.

    WebSphere Message Broker requires that a queue manager and the appropriate database are running when you create a Broker, Configuration Manager, or UserNameServer. You must ensure that the queue manager and database are running on the node where you will create a Broker, Configuration Manager, or UserNameServer.

  10. Create the WebSphere Message Broker, Configuration Manager or UserNameServer as required.

    Follow IBM's WebSphere Message Broker Installation Guide to create a WebSphere Message Broker.

Verifying the Installation and Configuration of WebSphere Message Broker

This section contains the procedure you need to verify the installation and configuration.

Procedure: How to Verify the Installation and Configuration of WebSphere Message Broker

This procedure does not verify that your application is highly available because you have not yet installed your data service.

Perform this procedure on one node or zone of the cluster unless a specific step indicates otherwise.

  1. Ensure the zone is booted, if a zone is being used.

    Repeat this step on all nodes of the cluster if a zone is being used.

    Boot the zone if it is not running.


    # zoneadm list -v
    # zoneadm -z zonename boot
    
  2. Log in to the zone, if a zone is being used.


    # zlogin zonename
    
  3. Start the WebSphere Message Broker, Configuration Manager or UserNameServer.


    # su - message-broker-userid
    $ mqsistart message-broker
    
  4. List all WebSphere Message Brokers that are running.

    Perform this step as the message-broker-userid.


    $ mqsilist
    
  5. Stop the WebSphere Message Broker, Configuration Manager or UserNameServer.

    Perform this step as the message-broker-userid.


    $ mqsistop -i message-broker
    $ exit
    
  6. Log out of the zone, if a zone is being used.


    # exit
    
  7. Unmount the highly available local file system.

    Perform this step in the global zone only.

    You should unmount the highly available file system you mounted in Step 7 in How to Install and Configure WebSphere Message Broker.

    1. If a non-ZFS highly available local file system is being used for WebSphere Message Broker.

      1. If the global zone is being used for WebSphere Message Broker.


        # umount websphere-message-broker-highly-available-local-file-system
        
      2. If a zone is being used for WebSphere Message Broker.

        Unmount the highly available local file system from the zone.


        # umount /zonepath/root/websphere-message-broker-highly-available-local-file-system
        
    2. If a ZFS highly available local file system is being used for WebSphere Message Broker.


      # zpool export -f HAZpool
      
  8. Relocate the shared storage to the other node.

    Perform this step on another node of the cluster.

    1. If a non-ZFS highly available local file system is being used for the WebSphere Message Broker files.

      Ensure the node has ownership of the disk set or disk group.

      For Solaris Volume Manager.


      # metaset -s disk-set -t
      

      For Veritas Volume Manager.


      # vxdg -C import disk-group
      # vxdg -g disk-group startall
      
      1. If the global zone is being used for WebSphere Message Broker.


        # mount websphere-message-broker-highly-available-local-file-system
        
      2. If a zone is being used for WebSphere Message Broker.

        Create the mount point on all zones of the cluster that are being used for WebSphere Message Broker.

        Mount the highly available local file system on one of the zones being used.


        # zlogin zonename mkdir websphere-message-broker-highly-available-local-file-system
        #
        # mount -F lofs websphere-message-broker-highly-available-local-file-system \
        > /zonepath/root/websphere-message-broker-highly-available-local-file-system
        
    2. If a ZFS highly available local file system is being used for WebSphere Message Broker.

      1. If the global zone is being used for WebSphere Message Broker.


        # zpool import -R / HAZpool
        
      2. If a zone is being used for WebSphere Message Broker.


        # zpool import -R /zonepath/root HAZpool
        
  9. Log in to the zone, if a zone is being used.

    Perform this step on the other node of the cluster.


    # zlogin zonename
    
  10. Start the WebSphere Message Broker, Configuration Manager or UserNameServer.

    Perform this step on the other node or zone of the cluster.


    # su - message-broker-userid
    $ mqsistart message-broker
    
  11. List all WebSphere Message Brokers that are running.

    Perform this step as the message-broker-userid.


    $ mqsilist
    
  12. Stop the WebSphere Message Broker, Configuration Manager or UserNameServer.

    Perform this step as the message-broker-userid.


    $ mqsistop -i message-broker
    $ exit
    
  13. Log out of the zone, if a zone is being used.


    # exit
    
  14. Unmount the highly available local file system.

    Perform this step in the global zone only.

    You should unmount the highly available file system you mounted in Step 7 in How to Install and Configure WebSphere Message Broker.

    1. If a non-ZFS highly available local file system is being used for WebSphere Message Broker.

      1. If the global zone is being used for WebSphere Message Broker.


        # umount websphere-message-broker-highly-available-local-file-system
        
      2. If a zone is being used for WebSphere Message Broker.

        Unmount the highly available local file system from the zone.


        # umount /zonepath/root/websphere-message-broker-highly-available-local-file-system
        
    2. If a ZFS highly available local file system is being used for WebSphere Message Broker.


      # zpool export -f HAZpool
      

Installing the Sun Cluster HA for WebSphere Message Broker Packages

If you did not install the Sun Cluster HA for WebSphere Message Broker packages during your initial Sun Cluster installation, perform this procedure to install the packages. To install the packages, use the Sun Java™ Enterprise System Installation Wizard.

Procedure: How to Install the Sun Cluster HA for WebSphere Message Broker Packages

Perform this procedure on each cluster node where you are installing the Sun Cluster HA for WebSphere Message Broker packages.

You can run the Sun Java Enterprise System Installation Wizard with a command-line interface (CLI) or with a graphical user interface (GUI). The content and sequence of instructions in the CLI and the GUI are similar.


Note –

Even if you plan to configure this data service to run in non-global zones, install the packages for this data service in the global zone. The packages are propagated to any existing non-global zones and to any non-global zones that are created after you install the packages.


Before You Begin

Ensure that you have the Sun Java Availability Suite DVD-ROM.

If you intend to run the Sun Java Enterprise System Installation Wizard with a GUI, ensure that your DISPLAY environment variable is set.

  1. On the cluster node where you are installing the data service packages, become superuser.

  2. Load the Sun Java Availability Suite DVD-ROM into the DVD-ROM drive.

    If the Volume Management daemon vold(1M) is running and configured to manage DVD-ROM devices, the daemon automatically mounts the DVD-ROM on the /cdrom directory.

  3. Change to the Sun Java Enterprise System Installation Wizard directory of the DVD-ROM.

    • If you are installing the data service packages on the SPARC® platform, type the following command:


      # cd /cdrom/cdrom0/Solaris_sparc
      
    • If you are installing the data service packages on the x86 platform, type the following command:


      # cd /cdrom/cdrom0/Solaris_x86
      
  4. Start the Sun Java Enterprise System Installation Wizard.


    # ./installer
    
  5. When you are prompted, accept the license agreement.

    If any Sun Java Enterprise System components are installed, you are prompted to select whether to upgrade the components or install new software.

  6. From the list of Sun Cluster agents under Availability Services, select the data service for WebSphere Message Broker.

  7. If you require support for languages other than English, select the option to install multilingual packages.

    English language support is always installed.

  8. When prompted whether to configure the data service now or later, choose Configure Later.

    Choose Configure Later to perform the configuration after the installation.

  9. Follow the instructions on the screen to install the data service packages on the node.

    The Sun Java Enterprise System Installation Wizard displays the status of the installation. When the installation is complete, the wizard displays an installation summary and the installation logs.

  10. (GUI only) If you do not want to register the product and receive product updates, deselect the Product Registration option.

    The Product Registration option is not available with the CLI. If you are running the Sun Java Enterprise System Installation Wizard with the CLI, omit this step.

  11. Exit the Sun Java Enterprise System Installation Wizard.

  12. Unload the Sun Java Availability Suite DVD-ROM from the DVD-ROM drive.

    1. To ensure that the DVD-ROM is not being used, change to a directory that does not reside on the DVD-ROM.

    2. Eject the DVD-ROM.


      # eject cdrom
      
Next Steps

See Registering and Configuring Sun Cluster HA for WebSphere Message Broker to register Sun Cluster HA for WebSphere Message Broker and to configure the cluster for the data service.

Registering and Configuring Sun Cluster HA for WebSphere Message Broker

This section contains the procedures you need to configure Sun Cluster HA for WebSphere Message Broker.

Some procedures within this section require you to use certain Sun Cluster commands. Refer to the relevant Sun Cluster command man page for more information about these commands and their parameters.


Procedure: How to Register and Configure Sun Cluster HA for WebSphere Message Broker

Perform this procedure on one node of the cluster only.

This procedure assumes that you installed the data service packages during your initial Sun Cluster installation.

If you did not install the Sun Cluster HA for WebSphere Message Broker packages as part of your initial Sun Cluster installation, go to How to Install the Sun Cluster HA for WebSphere Message Broker Packages.


Note –

This procedure requires that WebSphere MQ and a broker database have been installed and that the Sun Cluster HA for WebSphere MQ data service and database data service have been registered and configured.

The registration and configuration of Sun Cluster HA for WebSphere Message Broker must use the same resource group that WebSphere MQ and the broker database use.

You must therefore have completed the installation of the Sun Cluster Data Service for WebSphere MQ data service and the database data service before continuing with this procedure.


  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Create a resource for the WebSphere Message Broker Disk Storage.

    1. If a ZFS highly available local file system is being used.


      # clresource create -g websphere-mq-resource-group  \
      > -t SUNW.HAStoragePlus \
      > -p Zpools=websphere-message-broker-zspool \
      > websphere-message-broker-hastorage-resource
      

      Alternatively, you can simply add the websphere-message-broker-zspool to the existing websphere-mq-hastorage-resource.


      # clresource set \
      > -p Zpools=websphere-mq-zspools,websphere-message-broker-zspool \
      > websphere-mq-hastorage-resource
      
    2. If a cluster file system or a non-ZFS highly available local file system is being used.


      # clresource create -g websphere-mq-resource-group  \
      > -t SUNW.HAStoragePlus \
      > -p FilesystemMountPoints=websphere-message-broker-filesystem-mountpoint \
      > websphere-message-broker-hastorage-resource
      

      Alternatively, you can simply add the websphere-message-broker-filesystem-mountpoint to the existing websphere-mq-hastorage-resource.


      # clresource set \
      > -p FilesystemMountPoints=mq-filesystem-mountpoints,message-broker-filesystem-mountpoint \
      > websphere-mq-hastorage-resource
      
  3. Enable the Disk Storage resource.


    # clresource enable websphere-message-broker-hastorage-resource
    
  4. Create and register a resource for the Broker.

    Edit /opt/SUNWscmqi/sib/util/sib_config and follow the comments within that file. After you have edited sib_config, you must register the resource.

    If you require the broker probe to perform a simple message flow test, you must create a message flow and specify the inbound queue in the SC3_IN variable and the outbound queue in the SC3_OUT variable.

    Refer to IBM's WebSphere Message Broker Message Flows to create a simple message flow.

    Alternatively, the default values for SC3_IN and SC3_OUT are NONE, which causes the broker probe to skip the simple message flow test and only check that the bipservice program is running.

    A value for the RDBMS_RS parameter is not required if WebSphere Message Broker v6 is being deployed. This implies that a remote database can be used for WebSphere Message Broker v6 and that the broker does not need to be restarted if the broker database is restarted.


    # cd /opt/SUNWscmqi/sib/util
    # vi sib_config
    # ./sib_register
    

    The following listing has been taken from the deployment example, Step 2, which can be found in Appendix A, Deployment Example: Installing WebSphere Message Broker in Zones, and shows a copy of /opt/SUNWscmqi/sib/util/sib_config that has been edited to configure a broker resource.


    Vigor5# cat > /var/tmp/brk_config <<-EOF
    RS=wmq1-brk
    RG=wmq1-rg
    QMGR=qmgr1
    LH=wmq1-lh
    HAS_RS=wmq1-ZFShas
    SC3_IN=NONE
    SC3_OUT=NONE
    MQSI_ID=mqsiuser
    BROKER=brk
    QMGR_RS=wmq1-qmgr
    RDBMS_RS=
    START_CMD=
    STOP_CMD=
    EOF
    

    Vigor5# /opt/SUNWscmqi/sib/util/sib_register -f /var/tmp/brk_config
    
  5. Enable the Broker resource.


    Note –

    Before you enable the Broker resource, ensure that /opt/mqsi exists.

    For compatibility reasons, the Sun Cluster HA for WebSphere Message Broker data service requires that /opt/mqsi exists on all nodes or zones in the cluster.
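A minimal pre-flight sketch of this check follows; run it on every node or zone in the cluster and, if the directory is missing, create it as root before enabling the resource.

```shell
# Check whether /opt/mqsi exists on this node; the data service requires
# the directory on all nodes or zones in the cluster.
if [ -d /opt/mqsi ]; then
  state=present
  echo "/opt/mqsi exists"
else
  state=absent
  echo "/opt/mqsi is missing; create it as root with: mkdir -p /opt/mqsi"
fi
```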



    # clresource enable websphere-message-broker-resource
    
  6. (Optional) Create and register a resource for the Configuration Manager.

    Edit /opt/SUNWscmqi/sib/util/sib_config and follow the comments within that file. After you have edited sib_config, you must register the resource.

    The configuration manager resource must specify NONE for the SC3_IN and SC3_OUT variables.

    A value for the RDBMS_RS parameter is not required if WebSphere Message Broker v6 Configuration Manager is being deployed.


    # cd /opt/SUNWscmqi/sib/util
    # vi sib_config
    # ./sib_register
    

    The following listing has been taken from the deployment example, Step 4, which can be found in Appendix A, Deployment Example: Installing WebSphere Message Broker in Zones and shows /opt/SUNWscmqi/sib/util/sib_config that has been edited to configure a configuration manager resource.


    Vigor5# cat > /var/tmp/cmg_config <<-EOF
    RS=wmq1-cmg
    RG=wmq1-rg
    QMGR=qmgr1
    LH=wmq1-lh
    HAS_RS=wmq1-ZFShas
    SC3_IN=NONE
    SC3_OUT=NONE
    MQSI_ID=mqsiuser
    BROKER=cmg
    QMGR_RS=wmq1-qmgr
    RDBMS_RS=
    START_CMD=
    STOP_CMD=
    EOF
    

    Vigor5# /opt/SUNWscmqi/sib/util/sib_register -f /var/tmp/cmg_config
    
  7. (Optional) Enable the Configuration Manager resource.


    # clresource enable websphere-message-broker-configuration-manager-resource
    
  8. (Optional) Create and register a resource for the UserNameServer.

    Edit /opt/SUNWscmqi/siu/util/siu_config and follow the comments within that file. After you have edited siu_config, you must register the resource.


    # cd /opt/SUNWscmqi/siu/util
    # vi siu_config
    # ./siu_register
    

    The following listing has been taken from the deployment example, Step 6, which can be found in Appendix A, Deployment Example: Installing WebSphere Message Broker in Zones and shows /opt/SUNWscmqi/siu/util/siu_config that has been edited to configure a UserNameServer resource.


    Vigor5# cat > /var/tmp/uns_config <<-EOF
    RS=wmq1-uns
    RG=wmq1-rg
    QMGR=qmgr1
    LH=wmq1-lh
    HAS_RS=wmq1-ZFShas
    MQSI_ID=mqsiuser
    QMGR_RS=wmq1-qmgr
    START_CMD=
    STOP_CMD=
    EOF
    

    Vigor5# /opt/SUNWscmqi/siu/util/siu_register -f /var/tmp/uns_config
    
  9. (Optional) Enable the UserNameServer resource.


    # clresource enable websphere-message-broker-usernameserver-resource
    

Verifying the Sun Cluster HA for WebSphere Message Broker Installation and Configuration

This section contains the procedure you need to verify that you installed and configured your data service correctly.

How to Verify the Sun Cluster HA for WebSphere Message Broker Installation and Configuration

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Ensure all the WebSphere Message Broker resources are online.


    # cluster status 
    

    Enable any WebSphere Message Broker resources that are not online.


    # clresource enable websphere-message-broker-resource
    
  3. Switch the WebSphere Message Broker resource group to another cluster node or node:zone.


    # clresourcegroup switch -n node[:zone] websphere-mq-resource-group
    

Upgrading Sun Cluster HA for WebSphere Message Broker

Upgrade the Sun Cluster HA for WebSphere Message Broker data service if the following conditions apply:

How to Upgrade to the New Version of Sun Cluster HA for WebSphere Message Broker

Perform steps 1, 2, 3 and 6 if you have an existing Sun Cluster HA for WebSphere Message Broker deployment and wish to upgrade to the new version. Complete all steps if you need to use the new features of this data service.


Note –

If you intend to run all steps, you should consider if your current WebSphere Message Broker resources have been modified to have specific timeout values that suit your deployment. If timeout values were previously adjusted you should reapply those timeout values to your new WebSphere Message Broker resources.


  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Disable the WebSphere Message Broker resources.


    # clresource disable websphere-message-broker-resource
    
  3. Install the new version of Sun Cluster HA for WebSphere Message Broker on each cluster node.

    Refer to How to Install the Sun Cluster HA for WebSphere Message Broker Packages for more information.

  4. Delete the WebSphere Message Broker resources, if you want to use new features that have been introduced in the new version of Sun Cluster HA for WebSphere Message Broker.


    # clresource delete websphere-message-broker-resource
    
  5. Reregister the WebSphere Message Broker resources, if you want to use new features that have been introduced in the new version of Sun Cluster HA for WebSphere Message Broker.

    Refer to How to Register and Configure Sun Cluster HA for WebSphere Message Broker for more information.

  6. Enable the WebSphere Message Broker resources.

    If you performed only steps 1, 2, and 3, you must re-enable the WebSphere Message Broker resources.


    # clresource enable websphere-message-broker-resource
    

Understanding the Sun Cluster HA for WebSphere Message Broker Fault Monitor

This section describes the Sun Cluster HA for WebSphere Message Broker fault monitor's probing algorithm and functionality, and states the conditions and recovery actions associated with unsuccessful probing.

For conceptual information on fault monitors, see the Sun Cluster Concepts Guide.

Resource Properties

The Sun Cluster HA for WebSphere Message Broker fault monitor uses the same resource properties as resource type SUNW.gds. Refer to the SUNW.gds(5) man page for a complete list of resource properties used.

Probing Algorithm and Functionality

The Sun Cluster HA for WebSphere Message Broker fault monitor is controlled by the extension properties that control the probing frequency. The default values of these properties determine the preset behavior of the fault monitor. The preset behavior should be suitable for most Sun Cluster installations. Therefore, you should tune the Sun Cluster HA for WebSphere Message Broker fault monitor only if you need to modify this preset behavior.

The Sun Cluster HA for WebSphere Message Broker fault monitor checks the broker and other components within an infinite loop. During each cycle the fault monitor will check the relevant component and report either a failure or success.

If the fault monitor is successful it returns to its infinite loop and continues the next cycle of probing and sleeping.

If the fault monitor reports a failure, a request is made to the cluster to restart the resource. If the fault monitor reports another failure, another request is made to the cluster to restart the resource. This behavior continues whenever the fault monitor reports a failure.

If successive restarts exceed the Retry_count within the Thorough_probe_interval, a request is made to fail over the resource group onto a different node or zone.
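The restart-then-failover decision can be pictured with the following sketch. This is purely illustrative and is not the real fault monitor, which is driven by the SUNW.gds extension properties; here three failing probe cycles are simulated against a Retry_count of 2.

```shell
# Illustrative sketch of the restart-then-failover decision: simulated probe
# failures accumulate until Retry_count is exceeded, then a failover is requested.
retry_count=2                       # stands in for the Retry_count property
failures=0
actions=""
for cycle in 1 2 3; do
  probe_ok=false                    # simulate a failing probe on every cycle
  if [ "$probe_ok" = true ]; then
    failures=0                      # success: loop back and keep probing
  else
    failures=$((failures + 1))
    if [ "$failures" -gt "$retry_count" ]; then
      actions="$actions failover"   # restarts exhausted: request a failover
    else
      actions="$actions restart"    # ask the cluster to restart the resource
    fi
  fi
done
echo "decisions:$actions"
```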

Operations of the Broker probe

The broker probe can check the broker by using a simple message flow test, if SC3_IN and SC3_OUT are set to the inbound and outbound queues.

If set, the broker probe puts a message to the inbound queue referenced by the SC3_IN variable. After waiting two seconds, the broker probe checks that the message has arrived at the outbound queue referenced by the SC3_OUT variable.
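The put/wait/get cycle can be pictured with a file-based stand-in. This is only an illustration of the sequence; the real probe uses the WebSphere MQ queues named by SC3_IN and SC3_OUT, not files, and the running message flow (not a copy command) moves the message.

```shell
# File-based stand-in for the probe's put/wait/get cycle (illustrative only)
in_q=$(mktemp)                          # stands in for the SC3_IN queue
out_q=$(mktemp)                         # stands in for the SC3_OUT queue
echo "probe-message" > "$in_q"          # 1. put a message on the inbound queue
cp "$in_q" "$out_q"                     # 2. a healthy message flow moves it
sleep 2                                 # 3. the probe waits two seconds
if grep -q "probe-message" "$out_q"; then   # 4. check the outbound queue
  flow=ok
else
  flow=failed
fi
rm -f "$in_q" "$out_q"
echo "message flow test: $flow"
```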

If SC3_IN and SC3_OUT are set to NONE the simple message flow is not performed. Instead the broker probe checks that the bipservice process is still running.

SC3_IN and SC3_OUT were set when the broker resource was configured and registered within /opt/SUNWscmqi/sib/util/sib_config.

Operations of the Configuration Manager probe

The configuration manager probe checks whether the bipservice process is still running.

The configuration manager resource must set SC3_IN and SC3_OUT to NONE. This ensures that the simple message flow test is not performed.

Operations of the UserNameServer probe

The UserNameServer probe checks whether the bipservice process is still running.

Debug Sun Cluster HA for WebSphere Message Broker

How to Turn On Debug for Sun Cluster HA for WebSphere Message Broker

Sun Cluster HA for WebSphere Message Broker can be used by multiple WebSphere Message Broker instances. It is possible to turn debug on for all WebSphere Message Broker instances or a particular WebSphere Message Broker instance.

A config file exists under /opt/SUNWscmqi/xxx/etc, where xxx can be sib (Broker or Configuration Manager) or siu (UserNameServer).

These files allow you to turn on debug for all WebSphere Message Broker instances or for a specific WebSphere Message Broker instance on a particular node or zone within the cluster. If you require debug to be turned on for Sun Cluster HA for WebSphere Message Broker across the whole cluster, repeat this step on all nodes within the cluster.

  1. Edit /etc/syslog.conf and change daemon.notice to daemon.debug.


    # grep daemon /etc/syslog.conf
    *.err;kern.debug;daemon.notice;mail.crit        /var/adm/messages
    *.alert;kern.err;daemon.err                     operator
    #

    Change the daemon.notice to daemon.debug and restart syslogd. Note that the output below, from grep daemon /etc/syslog.conf, shows that daemon.debug has been set.


    # grep daemon /etc/syslog.conf
    *.err;kern.debug;daemon.debug;mail.crit        /var/adm/messages
    *.alert;kern.err;daemon.err                    operator
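The same change can also be made non-interactively with sed. The sketch below applies the substitution to a sample line rather than to /etc/syslog.conf itself; on a real node you would run the same sed expression against /etc/syslog.conf as root, keeping a backup copy first.

```shell
# Demonstrate the daemon.notice -> daemon.debug substitution on a sample line;
# apply the same sed to /etc/syslog.conf (as root, with a backup) on a real node.
line='*.err;kern.debug;daemon.notice;mail.crit /var/adm/messages'
newline=$(printf '%s\n' "$line" | sed 's/daemon\.notice/daemon.debug/')
echo "$newline"
```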

    Restart the syslog daemon.

    1. If running Solaris 9


      # pkill -1 syslogd
      
    2. If running Solaris 10


      # svcadm disable system-log
      # svcadm enable system-log
      
  2. Edit /opt/SUNWscmqi/xxx/etc/config.

    Perform this step for each component that requires debug output, on each node of Sun Cluster as required.

    Edit /opt/SUNWscmqi/xxx/etc/config and change DEBUG= to DEBUG=ALL or DEBUG=resource.


    # cat /opt/SUNWscmqi/sib/etc/config
    #
    # Copyright 2006 Sun Microsystems, Inc.  All rights reserved.
    # Use is subject to license terms.
    #
    ##ident   "@(#)config 1.2     06/03/21 SMI"
    #
    # Usage:
    #       DEBUG=<RESOURCE_NAME> or ALL
    #
    DEBUG=ALL
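To restrict debug output to a single resource, set DEBUG to that resource's name instead of ALL. For example, using the hypothetical broker resource name from the earlier deployment listings:

```shell
# /opt/SUNWscmqi/sib/etc/config fragment: debug only the wmq1-brk resource
# (wmq1-brk is the example broker resource name from the deployment listings)
DEBUG=wmq1-brk
```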

    Note –

    To turn off debug, reverse the steps above.