Oracle Solaris Cluster Data Service for WebSphere MQ Guide

Verifying the Installation and Configuration of WebSphere MQ

This section contains the procedure you need to verify the installation and configuration.

How to Verify the Installation and Configuration of WebSphere MQ

This procedure does not verify that your application is highly available because you have not yet installed your data service.

Perform this procedure on one node or zone of the cluster unless a specific step indicates otherwise.

  1. Ensure the zone is booted, if a non-global zone or HA container is being used.

    Repeat this step on all nodes of the cluster if a non-global zone is being used, and on one node of the cluster if an HA container is being used.

    Boot the zone if it is not running.

    # zoneadm list -v
    # zoneadm -z zonename boot
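
    Because zoneadm list -v shows only zones that are running, a quick way to confirm that your zone is booted is to check for it in the output. This is a minimal sketch; zonename is a placeholder for your zone name.

    # zoneadm list -v | grep zonename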
  2. Log in to the zone, if a non-global zone or HA container is being used.
    # zlogin zonename
  3. Start the queue manager, create a persistent queue, and put a test message on that queue.
    # su - mqm
    $ strmqm queue-manager
    $ runmqsc queue-manager
    def ql(sc3test) defpsist(yes)
    end
    $
    $ /opt/mqm/samp/bin/amqsput SC3TEST queue-manager
    test test test test test
    ^C
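
    Optionally, before stopping the queue manager, you can confirm that the test message is on the queue by displaying the current queue depth. This is a minimal sketch using the queue created above; CURDEPTH should report 1 after the single test message is put.

    $ runmqsc queue-manager
    dis ql(sc3test) curdepth
    end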
  4. Stop the queue manager.
    $ endmqm -i queue-manager
    $ exit
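
    If you want to verify that the queue manager ended cleanly, the dspmq command reports queue manager status. A hedged sketch, run as root after the exit above and assuming dspmq is on the mqm user's PATH:

    # su - mqm -c "dspmq -m queue-manager"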
  5. Log out of the zone, if a non-global zone or HA container is being used.
    # exit
  6. Unmount the highly available local file system.

    Perform this step in the global zone only.

    You should unmount the highly available file system that you mounted in Step 7 of How to Install and Configure WebSphere MQ.

    1. If a non-ZFS highly available local file system is being used for WebSphere MQ.
      1. If the global zone is being used for WebSphere MQ.
        # umount websphere-mq-highly-available-local-file-system
      2. If a non-global zone or HA container is being used for WebSphere MQ.

        Unmount the highly available local file system from the zone.

        # umount /zonepath/root/websphere-mq-highly-available-local-file-system
    2. If a ZFS highly available file system is being used for WebSphere MQ.
      # zpool export -f HAZpool
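
    Before relocating the storage, you can confirm that the file system is no longer mounted or that the pool is exported. These hedged checks use the placeholder names from above; df should no longer list the mount point, and zpool list should report that HAZpool cannot be opened.

    # df -h | grep websphere-mq
    # zpool list HAZpool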
  7. Relocate the shared storage to the other node.

    Perform this step on another node of the cluster.

    1. If a non-ZFS highly available local file system is being used for the WebSphere MQ files.

      Ensure the node has ownership of the disk set or disk group.

      For Solaris Volume Manager.

      # metaset -s disk-set -t

      For Veritas Volume Manager.

      # vxdg -C import disk-group
      # vxdg -g disk-group startall
      1. If the global zone is being used for WebSphere MQ.
        # mount websphere-mq-highly-available-local-file-system
      2. If a non-global zone or HA container is being used for WebSphere MQ.

        Create the mount point on all zones of the cluster that are being used for WebSphere MQ.

        Mount the highly available local file system on one of the zones being used.

        # zlogin zonename mkdir websphere-mq-highly-available-local-file-system
        #
        # mount -F lofs websphere-mq-highly-available-local-file-system \
        > /zonepath/root/websphere-mq-highly-available-local-file-system
    2. If a ZFS highly available file system is being used for WebSphere MQ.
      # zpool import -R /zonepath/root HAZpool
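
    To confirm that the relocated file system is visible where WebSphere MQ expects it, you can check the mount point from the global zone, or from within the zone if one is being used. A minimal sketch with the placeholder names used above:

    # df -h websphere-mq-highly-available-local-file-system
    # zlogin zonename df -h websphere-mq-highly-available-local-file-system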
  8. Log in to the zone, if a non-global zone or HA container is being used.

    Perform this step on the other node of the cluster.

    # zlogin zonename
  9. Start the queue manager, get the test message, and delete the queue.

    Perform this step on the other node or zone of the cluster. The amqsget sample displays the test message that you put on the queue in Step 3, which confirms that the queue data survived the relocation.

    # su - mqm
    $ strmqm queue-manager
    $ /opt/mqm/samp/bin/amqsget SC3TEST queue-manager
    ^C
    $ runmqsc queue-manager
    delete ql(sc3test)
    end
  10. Stop the queue manager.

    Perform this step on the other node or zone of the cluster.

    $ endmqm -i queue-manager
    $ exit
  11. Log out of the zone, if a non-global zone or HA container is being used.
    # exit
  12. Unmount the highly available local file system.

    Perform this step in the global zone only.

    You should unmount the highly available file system that you mounted in Step 7 of How to Install and Configure WebSphere MQ.

    1. If a non-ZFS highly available local file system is being used for WebSphere MQ.
      1. If the global zone is being used for WebSphere MQ.
        # umount websphere-mq-highly-available-local-file-system
      2. If a non-global zone or HA container is being used for WebSphere MQ.

        Unmount the highly available local file system from the zone.

        # umount /zonepath/root/websphere-mq-highly-available-local-file-system
    2. If a ZFS highly available file system is being used for WebSphere MQ.
      # zpool export -f HAZpool
  13. Shut down the zone, if an HA container is being used.

    Note - This step is only required if an HA container is being used.


    # zlogin zonename halt
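
    After the halt, the HA container should be reported in the installed state rather than running. A quick check from the global zone (the -c option is needed to list zones that are not running):

    # zoneadm list -cv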