Oracle Solaris Cluster Data Service for WebSphere MQ Guide

Understanding the Solaris Cluster HA for WebSphere MQ Fault Monitor

This section describes the probing algorithm and functionality of the Solaris Cluster HA for WebSphere MQ fault monitor, and states the conditions and recovery actions that are associated with unsuccessful probing.

For conceptual information on fault monitors, see the Oracle Solaris Cluster Concepts Guide.

Resource Properties

The Solaris Cluster HA for WebSphere MQ fault monitor uses the same resource properties as resource type SUNW.gds. Refer to the SUNW.gds(5) man page for a complete list of resource properties used.
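For example, the probing-related properties of a configured WebSphere MQ resource can be displayed with the clresource command. The resource name wmq-qmgr-rs in this sketch is only a placeholder; substitute the name of your own queue manager resource.

   # clresource show -p Thorough_probe_interval,Retry_count wmq-qmgr-rs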

Probing Algorithm and Functionality

The HA for WebSphere MQ fault monitor is controlled by extension properties that determine the probing frequency. The default values of these properties define the preset behavior of the fault monitor, which is suitable for most Oracle Solaris Cluster installations. Therefore, you should tune the HA for WebSphere MQ fault monitor only if you need to modify this preset behavior.
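For example, the following commands illustrate one way that you might lengthen the probe interval and reduce the restart count. The resource name wmq-qmgr-rs is a placeholder, and the values shown are illustrative rather than recommended settings.

   # clresource set -p Thorough_probe_interval=120 wmq-qmgr-rs
   # clresource set -p Retry_count=2 wmq-qmgr-rs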

The HA for WebSphere MQ fault monitor checks the queue manager and other components in an infinite loop. During each cycle, the fault monitor checks the relevant component and reports either success or failure.

If the probe succeeds, the fault monitor returns to its loop and continues the next cycle of sleeping and probing.

If the probe reports a failure, the fault monitor requests that the cluster restart the resource. Each subsequent failure results in another restart request, and this behavior continues for as long as the probe keeps failing.

If successive restarts exceed Retry_count within Thorough_probe_interval, the fault monitor requests that the resource group fail over to a different node or zone.
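The following shell fragment sketches this cycle in simplified form. It is illustrative only and is not the actual fault monitor source: probe_component is a hypothetical stand-in for the component-specific check, and RG, RS, and PROBE_INTERVAL are assumed to hold the resource group name, resource name, and probe interval.

   while true
   do
       sleep ${PROBE_INTERVAL}       # Sleep between probe cycles.
       if probe_component ; then     # Hypothetical component check.
           continue                  # Success: begin the next cycle.
       fi
       # Failure: request a restart of the resource. If restarts are
       # exhausted, request a failover of the resource group instead.
       scha_control -O RESOURCE_RESTART -G ${RG} -R ${RS} ||
       scha_control -O GIVEOVER -G ${RG} -R ${RS}
   done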

Operations of the queue manager probe

The WebSphere MQ queue manager probe checks the queue manager by using a program named create_tdq, which is included in the Solaris Cluster HA for WebSphere MQ data service.

The create_tdq program connects to the queue manager, creates a temporary dynamic queue, puts a message to that queue, and then disconnects from the queue manager.
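A roughly equivalent manual check can be performed with the amqsput sample program that ships with WebSphere MQ. This is an approximation for illustration, not the actual create_tdq program: opening SYSTEM.DEFAULT.MODEL.QUEUE for output creates a temporary dynamic queue, the message is put to that queue, and the queue is deleted when amqsput disconnects. QMGR is a placeholder queue manager name.

   # echo "probe message" | amqsput SYSTEM.DEFAULT.MODEL.QUEUE QMGR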

Operations of the channel initiator, command server, listener and trigger monitor probes

The WebSphere MQ probes for the channel initiator, command server, listener, and trigger monitor all operate in a similar manner: each simply restarts its component if the component fails.

The process monitor facility (PMF) requests a restart of the resource as soon as the monitored component fails.
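For example, you can list the tags of the processes that are currently running under PMF control with the pmfadm command:

   # pmfadm -L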

The channel initiator, command server, and trigger monitor all depend on the queue manager being available. The listener has an optional dependency on the queue manager, which is set when the listener resource is configured and registered. Therefore, if the queue manager fails, the channel initiator, command server, trigger monitor, and any dependent listener are restarted when the queue manager becomes available again.
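As an illustration, the optional listener dependency could be expressed through the standard Resource_dependencies property, where wmq-lsr-rs and wmq-qmgr-rs are placeholder resource names:

   # clresource set -p Resource_dependencies=wmq-qmgr-rs wmq-lsr-rs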