Oracle Solaris Cluster Data Service for WebSphere MQ Guide

Document Information

Preface

1.  Installing and Configuring Solaris Cluster HA for WebSphere MQ

A.  Deployment Example: Installing a WebSphere MQ Queue Manager in Non-Global Zones

B.  Deployment Example: Installing a WebSphere MQ Queue Manager in an HA Container

Target Cluster Configuration

Software Configuration

Assumptions

Installing and Configuring WebSphere MQ in an HA Container

Example: Prepare the Cluster for WebSphere MQ

Example: Configure the HA Container

Example: Install WebSphere MQ in the HA Container

Example: Verify WebSphere MQ

Example: Configure Cluster Resources for WebSphere MQ

Example: Enable the WebSphere MQ Software to Run in the Cluster

Example: Verify the HA for WebSphere MQ Resource Group

Example: Creating Multiple Instances

Index

Example: Prepare the Cluster for WebSphere MQ

  1. Install and configure the cluster as instructed in the Oracle Solaris Cluster Software Installation Guide.

    Install the following cluster software components on node Vigor5.

    • Oracle Solaris Cluster core software

    • Oracle Solaris Cluster data service for WebSphere MQ

    • Oracle Solaris Cluster data service for Solaris Containers

  2. Add the logical host name to /etc/hosts and /etc/inet/ipnodes in the global zone and HA container.

    The following output shows the logical host name entry for qmgr1 in the global zone.

    Vigor5# grep qmgr1 /etc/hosts /etc/inet/ipnodes
    /etc/hosts:192.168.1.150    qmgr1
    /etc/inet/ipnodes:192.168.1.150    qmgr1
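    The entry added in this step can be sketched portably as follows. This is illustrative only: a scratch file stands in for /etc/hosts and /etc/inet/ipnodes, and the address and name come from the example output above.

    ```shell
    # Sketch only: a scratch file stands in for /etc/hosts.
    hosts=$(mktemp)

    # Add the logical host name entry (address and name from the example above).
    printf '192.168.1.150\tqmgr1\n' >> "$hosts"

    # Confirm the entry is present, as the grep in the step above does.
    entry=$(grep -w qmgr1 "$hosts")
    echo "$entry"

    rm -f "$hosts"
    ```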
  3. Install and configure a Zettabyte File System (ZFS).

    Note - The following zpool definition represents a very basic configuration for deployment on a single-node cluster.

    Do not use this example in a production deployment; it is a very basic configuration intended for testing or development purposes only.


    Create the ZFS pools.

    Vigor5# zpool create -m /ZFSwmq3/log HAZpool1 c1t1d0
    Vigor5# zpool create -m /ZFSwmq3/qmgr HAZpool2 c1t4d0
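
    To confirm that the pools came up with the intended mount points, a quick check can be run (a sketch; expect both HAZpool1 and HAZpool2 to be listed, with the mount points /ZFSwmq3/log and /ZFSwmq3/qmgr shown above).

    Vigor5# zpool list
    Vigor5# zfs list -o name,mountpoint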
  4. Install and configure a Solaris Volume Manager (SVM) file system.

    Note - The following metaset definitions represent a very basic configuration for deployment on a single-node cluster.

    Do not use this example in a production deployment; it is a very basic configuration intended for testing or development purposes only.


    1. Create an SVM Disk Set.
      Vigor5# metaset -s dg_d1 -a -h Vigor5
    2. Add a Disk to the SVM Disk Set.
      Vigor5# metaset -s dg_d1 -a /dev/did/rdsk/d2
    3. Add the Disk Information to the metainit utility input file.
      Vigor5# cat >> /etc/lvm/md.tab <<-EOF
      	dg_d1/d100 -m dg_d1/d110
      	dg_d1/d110 1 1 /dev/did/rdsk/d2s0
      EOF
    4. Configure the metadevices.
      Vigor5# metainit -s dg_d1 -a
    5. Create a Mount Point for the SVM Highly Available Local File System.
      Vigor5# mkdir /FOZones
    6. Add the SVM highly available local file system to /etc/vfstab.
      Vigor5# cat >> /etc/vfstab <<-EOF
      	/dev/md/dg_d1/dsk/d100 /dev/md/dg_d1/rdsk/d100 /FOZones ufs 3 no logging
      EOF
    7. Create the File System.
      Vigor5# newfs /dev/md/dg_d1/rdsk/d100
    8. Mount the File System.
      Vigor5# mount /FOZones
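
    With the metadevices built and the file system mounted, the configuration can be checked as follows (a sketch; d100 should appear as a mirror with submirror d110, and /FOZones should show as mounted).

    Vigor5# metastat -s dg_d1 -p
    Vigor5# df -h /FOZones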