Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS

Testing Node Redundancy

This section provides the procedure for testing node redundancy and high availability of device groups. Perform the following procedure to confirm that the secondary node takes over the device group that is mastered by the primary node when the primary node fails.

How to Test Device Group Redundancy Using Resource Group Failover

Before You Begin

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  1. Create an HAStoragePlus resource group with which to test.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup create testgroup
      # clresourcetype register SUNW.HAStoragePlus
      # clresource create -t HAStoragePlus -g testgroup \
        -p GlobalDevicePaths=/dev/md/red/dsk/d0 \
        -p AffinityOn=true testresource
      # clresourcegroup online -M testgroup
      
      clresourcetype register

      If the HAStoragePlus resource type is not already registered, this command registers it.

      clresourcegroup online -M

      Brings the resource group to the managed state and brings it online on its primary node.

      /dev/md/red/dsk/d0

      Replace this path with your device path.

    • If you are using Sun Cluster 3.1, use the following commands:


      # scrgadm -a -g testgroup
      # scrgadm -a -t SUNW.HAStoragePlus
      # scrgadm -a -j testresource -g testgroup -t SUNW.HAStoragePlus \
        -x GlobalDevicePaths=/dev/md/red/dsk/d0
      # scswitch -Z -g testgroup
      
      /dev/md/red/dsk/d0

      Replace this path with your device path.
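
    To confirm that the test resource group and its resource were created and brought online, you can optionally run one of the following checks. These commands assume the resource group and resource names used in the examples above.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresource status -g testgroup
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat -g
      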

  2. Identify the node that masters the testgroup.

    Run one of the following commands.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup status testgroup
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat -g
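
    For example, on a Sun Cluster 3.2 system the status output resembles the following. The node names phys-schost-1 and phys-schost-2 are placeholders, and the exact column layout can differ between releases. In this sample, phys-schost-1 is the node that currently masters the testgroup.


      === Cluster Resource Groups ===

      Group Name       Node Name          Suspended     Status
      ----------       ---------          ---------     ------
      testgroup        phys-schost-1      No            Online
                       phys-schost-2      No            Offline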
      
  3. Power off the primary node for the testgroup.

    Cluster interconnect error messages appear on the consoles of the remaining nodes.

  4. On another node, verify that the secondary node took ownership of the resource group that was mastered by the failed primary node.

    Check the output for the resource group ownership.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup status testgroup
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat -g
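
    Continuing the sample from Step 2, after the failover the Sun Cluster 3.2 status output shows the roles reversed. The node names remain placeholders, and the layout can differ between releases.


      Group Name       Node Name          Suspended     Status
      ----------       ---------          ---------     ------
      testgroup        phys-schost-1      No            Offline
                       phys-schost-2      No            Online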
      
  5. Power on the initial primary node. Boot the node into cluster mode.

    Wait for the system to boot. The system automatically starts the membership monitor software. The node then rejoins the cluster.

  6. From the initial primary node, return ownership of the resource group to the initial primary node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename testgroup
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g testgroup -h nodename
      

    In these commands, nodename is the name of the initial primary node.
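
    For example, if the initial primary node is named phys-schost-1 (a placeholder node name), the Sun Cluster 3.2 command is:


      # clresourcegroup switch -n phys-schost-1 testgroup
      
    The equivalent Sun Cluster 3.1 command is:


      # scswitch -z -g testgroup -h phys-schost-1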

  7. Verify that the initial primary node has ownership of the resource group.

    Check the output for the resource group ownership.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup status testgroup
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat -g
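
    The output should again show the initial primary node as the node on which the testgroup is online. Using the placeholder node names from the earlier samples, the Sun Cluster 3.2 output resembles:


      Group Name       Node Name          Suspended     Status
      ----------       ---------          ---------     ------
      testgroup        phys-schost-1      No            Online
                       phys-schost-2      No            Offline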