Oracle Solaris Cluster 4.1 Hardware Administration Manual

Testing Node Redundancy

This section provides the procedure for testing node redundancy and high availability of device groups. Perform the following procedure to confirm that the secondary node takes over the device group that is mastered by the primary node when the primary node fails.
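
Before you run the test, you can optionally confirm that every cluster node is online so that a failover target is available. The following check is a minimal sketch that uses the standard clnode status command; it is not a required part of this procedure.

    # clnode status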

How to Test Device Group Redundancy Using Resource Group Failover

Before You Begin

To perform this procedure, assume a role that provides solaris.cluster.modify RBAC authorization.
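
If you are unsure whether your current role provides this authorization, one way to check is with the Oracle Solaris auths(1) command; the exact output format depends on your Oracle Solaris release, so treat this as a sketch.

    # auths | grep solaris.cluster.modify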

  1. Create an HAStoragePlus resource group with which to test.

    Use the following commands:

    # clresourcegroup create testgroup
    # clresourcetype register SUNW.HAStoragePlus
    # clresource create -t HAStoragePlus -g testgroup \
      -p GlobalDevicePaths=/dev/md/red/dsk/d0 \
      -p AffinityOn=true testresource

    clresourcetype register SUNW.HAStoragePlus
      If the HAStoragePlus resource type is not already registered, register it.

    /dev/md/red/dsk/d0
      Replace this path with your device path.
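
    A resource group created this way starts in the unmanaged state, so it might not come online on its own. If it does not, one way to bring it online before the test is with the standard clresourcegroup online options -e (enable resources) and -M (manage the group); this check is an optional addition to the procedure.

    # clresourcegroup online -eM testgroup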

  2. Identify the node that masters the testgroup.
    # clresourcegroup status testgroup
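
    Because the resource group failover also moves the underlying device group, you can optionally note which node currently masters that device group. In this sketch, the device group name red is taken from the /dev/md/red/dsk/d0 example path; substitute your own device group name.

    # cldevicegroup status red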
  3. Power off the primary node for the testgroup.

    Cluster interconnect error messages appear on the consoles of the existing nodes.
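
    How you power off the node depends on your hardware; removing power or using the service processor is a valid test. As a software-only alternative, the following sketch runs the Oracle Solaris shutdown command on the primary node itself (run level 5 powers the system off); adapt it to your environment.

    # shutdown -g0 -y -i5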

  4. On another node, verify that the secondary node took ownership of the resource group that was mastered by the failed primary node.

    Use the following command to check the output for the resource group ownership:

    # clresourcegroup status testgroup
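
    As an optional cross-check, you can also confirm that the test resource itself reports an online state on the secondary node:

    # clresource status testresource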
  5. Power on the initial primary node. Boot the node into cluster mode.

    Wait for the system to boot. The system automatically starts the membership monitor software. The node then rejoins the cluster.
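
    On a SPARC based system, booting into cluster mode means booting without the -x option, which would boot the node into noncluster mode; an x86 based system boots into cluster mode by default. For example, from the OpenBoot PROM prompt on a SPARC based system:

    ok boot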

  6. From the initial primary node, return ownership of the resource group to the initial primary node.
    # clresourcegroup switch -n nodename testgroup

    In this command, nodename is the name of the initial primary node.
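
    For example, if the initial primary node is named phys-schost-1 (a hypothetical node name used only for illustration):

    # clresourcegroup switch -n phys-schost-1 testgroup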

  7. Verify that the initial primary node has ownership of the resource group.

    Use the following command to check the output for the resource group ownership:

    # clresourcegroup status testgroup
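
    When you have finished testing, you can remove the test resource group and its resource. The following cleanup sketch uses standard clresource and clresourcegroup subcommands; depending on your configuration and software release, additional steps might be needed (for example, unmanaging the group before deleting it), so consult the clresourcegroup(1CL) man page.

    # clresourcegroup offline testgroup
    # clresource disable testresource
    # clresource delete testresource
    # clresourcegroup delete testgroup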