Sun Cluster Data Service for WebSphere MQ Guide for Solaris OS

Configuration Requirements

The configuration requirements in this section apply only to Sun Cluster HA for WebSphere MQ.


Caution –

If your data service configuration does not conform to these requirements, the data service configuration might not be supported.


Determine which Solaris zone WebSphere MQ will use

Solaris zones provide a means of creating virtualized operating system environments within an instance of the Solaris 10 OS. Solaris zones allow one or more applications to run in isolation from other activity on your system. For complete information about installing and configuring a Solaris Container, see System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

You must determine which Solaris zone WebSphere MQ will run in. WebSphere MQ can run in the global zone, in a non-global zone, or in a failover zone configuration. Table 2 provides some reasons to help you decide.


Note –

WebSphere MQ can be deployed within the global zone, a whole root non-global zone, or a whole root failover non-global zone, also referred to as a failover zone.


Table 2 Choosing the appropriate Solaris Zone for WebSphere MQ

Zone type            Reasons for choosing this zone type for WebSphere MQ

Global Zone          Only one instance of WebSphere MQ will be installed.
                     Non-global zones are not required.

Non-global Zone      Several WebSphere MQ instances need to be consolidated and
                     isolated from each other.
                     Different versions of WebSphere MQ will be installed.
                     Failover testing of WebSphere MQ between non-global zones
                     on the same node is required.

Failover Zone        You require WebSphere MQ to run in the same zone regardless
                     of which node the failover zone is running on.


Note –

If your requirement is simply to make WebSphere MQ highly available, you should consider choosing a global or non-global zone deployment over a failover zone deployment. Deploying WebSphere MQ within a failover zone incurs additional failover time to boot or halt the failover zone.
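
If you choose a non-global zone deployment, the following is a minimal sketch of how a whole root non-global zone for WebSphere MQ might be created. The zone name mqzone1 and the zonepath /zones/mqzone1 are examples only; adapt them to your environment. For the complete procedure, see System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.


    # zonecfg -z mqzone1
    zonecfg:mqzone1> create -b
    zonecfg:mqzone1> set zonepath=/zones/mqzone1
    zonecfg:mqzone1> commit
    zonecfg:mqzone1> exit
    # zoneadm -z mqzone1 install
    # zoneadm -z mqzone1 boot

Using create -b produces a blank configuration with no inherit-pkg-dir entries, which results in a whole root zone.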


Requirements if multiple WebSphere MQ instances are deployed on cluster file systems

If a cluster file system is used for the WebSphere MQ files, it is possible to manually start the same queue manager on more than one node of the cluster at the same time.


Note –

Although it is possible, you should not attempt this, because doing so will cause severe damage to the WebSphere MQ files.


Although it is expected that no-one will manually start the same queue manager on separate nodes of the cluster at the same time, Sun Cluster HA for WebSphere MQ provides a mechanism to prevent someone from doing so by mistake.

To prevent this from happening, you must implement one of the following two solutions.

  1. Use a highly available local file system for the WebSphere MQ files.

    This is the recommended approach, because the WebSphere MQ files are mounted on only one node of the cluster at a time, which in turn means that the queue manager can be started on only one node of the cluster at a time. A sketch of configuring such a file system appears at the end of this section.

  2. Create symbolic links from /opt/mqm/bin/strmqm and /opt/mqm/bin/endmqm to /opt/SUNWscmqs/mgr/bin/check-start.

    /opt/SUNWscmqs/mgr/bin/check-start prevents the queue manager from being started or stopped manually, by verifying that the start or stop request was issued by the Sun Cluster HA for WebSphere MQ data service.

    /opt/SUNWscmqs/mgr/bin/check-start reports the following error if an attempt is made to manually start or stop the queue manager.


    $ strmqm qmgr1
    Request to run </usr/bin/strmqm qmgr1> within Sun Cluster has been refused

    If a cluster file system is used for the WebSphere MQ files, you must create symbolic links from strmqm and endmqm to /opt/SUNWscmqs/mgr/bin/check-start and inform the Sun Cluster HA for WebSphere MQ data service of this change.

    To do this, you must perform the following on each node of the cluster.


    # cd /opt/mqm/bin
    #
    # mv strmqm strmqm_sc3
    # mv endmqm endmqm_sc3
    #
    # ln -s /opt/SUNWscmqs/mgr/bin/check-start strmqm
    # ln -s /opt/SUNWscmqs/mgr/bin/check-start endmqm
    

    After renaming strmqm and endmqm, you must use these new program names (strmqm_sc3 and endmqm_sc3) for the START_CMD and STOP_CMD variables when you edit the /opt/SUNWscmqs/mgr/util/mgr_config file in Step 7 of How to Register and Configure Sun Cluster HA for WebSphere MQ. A sketch of these entries appears at the end of this section.


    Note –

    If you implement this workaround, you must back it out whenever you need to apply any maintenance to WebSphere MQ, and then apply the workaround again afterwards.

    Instead, the recommended approach is to use a highly available local file system for the WebSphere MQ files, as sketched below.
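
The following is a minimal sketch of how a highly available local file system for the WebSphere MQ files might be configured with the SUNW.HAStoragePlus resource type, using the Sun Cluster 3.2 command syntax (earlier releases use the equivalent scrgadm commands). The resource group name wsmq-rg, the mount point /local/mqm, and the resource name wsmq-hasp-rs are examples only; typically, the file system is also listed in /etc/vfstab on each node with the mount at boot option set to no.


    # clresourcegroup create wsmq-rg
    # clresourcetype register SUNW.HAStoragePlus
    # clresource create -g wsmq-rg -t SUNW.HAStoragePlus \
    > -p FilesystemMountPoints=/local/mqm wsmq-hasp-rs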
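
If you instead implement the symbolic link workaround described in solution 2, the corresponding entries in the /opt/SUNWscmqs/mgr/util/mgr_config file would be similar to the following sketch. Only the START_CMD and STOP_CMD variable names are taken from this guide; the rest of the file is not shown and its exact format may differ.


    START_CMD=strmqm_sc3
    STOP_CMD=endmqm_sc3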