Sun Cluster 3.1 - 3.2 With Sun StorEdge T3 or T3+ Array Manual for Solaris OS

How to Remove a Partner Group

Use this procedure to permanently remove storage array partner groups and their submirrors from a running cluster. To remove a storage array in single-controller configuration, see How to Remove a Storage Array in a Single-Controller Configuration.


Note –

When you upgrade firmware on a storage device or on an enclosure, redefine the stripe size of a LUN, or perform other LUN operations, a device ID might change unexpectedly. When you perform a check of the device ID configuration by running the cldevice check or scdidadm -c command, the following error message appears on your console if the device ID changed unexpectedly.


device id for nodename:/dev/rdsk/cXtYdZsN does not match physical 
device's id for ddecimalnumber, device may have been replaced.

To fix device IDs that report this error, run the cldevice repair or scdidadm -R command for each affected device.
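
For example, assuming the error message identified the device /dev/rdsk/c1t3d0 (an illustrative path; substitute the device from your own message), you would run the cldevice repair command (Sun Cluster 3.2) or the scdidadm -R command (Sun Cluster 3.1) against that device:


# cldevice repair /dev/rdsk/c1t3d0
# scdidadm -R /dev/rdsk/c1t3d0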


This procedure defines Node A as the cluster node that you begin working with. Node B is another node in the cluster.


Caution –

During this procedure, you lose access to the data that resides on each storage array partner group that you are removing.


This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

Before You Begin

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. If necessary, back up all database tables, data services, and volumes associated with each partner group that you are removing.

  2. If necessary, detach the submirrors from each storage array or partner group that you are removing. Detach the submirrors to stop all I/O activity to the storage array or partner group.

    For more information, see your Solstice DiskSuite or Veritas Volume Manager documentation.
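
    For example, with Solstice DiskSuite you might detach a submirror from its mirror, or with Veritas Volume Manager dissociate a plex from its volume. The metadevice names d10 and d12, the disk group dg01, and the plex vol01-02 below are illustrative:

      # metadetach d10 d12
      # vxplex -g dg01 dis vol01-02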

  3. Remove references to each LUN that belongs to the storage array or partner group that you are removing.

    For more information, see your Solstice DiskSuite or Veritas Volume Manager documentation.
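
    For example, after the submirrors are detached, you might clear the corresponding Solstice DiskSuite metadevice or remove the disk from its Veritas Volume Manager disk group; the names d12, dg01, and disk01 are illustrative:

      # metaclear d12
      # vxdg -g dg01 rmdisk disk01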

  4. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you will use it in Step 19 and Step 20 of this procedure to return resource groups and device groups to these nodes.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status + 
      # cldevicegroup status +
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
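
    To limit the scstat output to resource groups or device groups only, you can use the -g and -D options:

      # scstat -g
      # scstat -D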
      
  5. Shut down Node A.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.
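
    For example, one common sequence is to evacuate the resource groups and device groups from the node, and then shut the node down; the node name nodeA is illustrative:

    • If you are using Sun Cluster 3.2, use the following commands:

      # clnode evacuate nodeA
      # shutdown -g0 -y -i0

    • If you are using Sun Cluster 3.1, use the following commands:

      # scswitch -S -h nodeA
      # shutdown -g0 -y -i0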

  6. Disconnect the fiber-optic cables that connect both storage arrays to the FC switches, and then disconnect the Ethernet cables.

  7. If any storage array that you are removing is the last storage array connected to an FC switch on Node A, disconnect the fiber-optic cable between Node A and that FC switch.

    If no array that you are removing is the last array connected to the node, skip to Step 11.

    For the procedure about how to remove a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.


    Note –

    If you are using your storage arrays in a SAN-configured cluster, you must keep two FC switches configured in parallel. This configuration maintains cluster availability. See SAN Considerations for more information.


  8. If you want to remove the host adapters from Node A, power off the node.

    If you do not want to remove host adapters, skip to Step 11.

  9. Remove the host adapters from Node A.

    For the procedure about how to remove host adapters, see the documentation that shipped with your host adapter and nodes.

  10. Without enabling the node to boot, power on Node A.

    For more information, see your Sun Cluster system administration documentation.

  11. Boot Node A into cluster mode.

    For more information on booting nodes, see your Sun Cluster system administration documentation.
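
    For example, on a SPARC based node, you can boot into cluster mode from the OpenBoot PROM prompt; booting without the -x option brings the node up as a cluster member:

      ok boot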

  12. Shut down Node B.

    For the procedure about how to shut down and power off a node, see your Sun Cluster system administration documentation.

  13. If any storage array that you are removing is the last storage array connected to an FC switch on Node B, disconnect the fiber-optic cable that connects this FC switch to Node B.

    If no array that you are removing is the last array connected to the node, skip to Step 17.

    For the procedure about how to remove a fiber-optic cable, see the Sun StorEdge T3 and T3+ Array Configuration Guide.


    Note –

    If you are using your storage arrays in a SAN-configured cluster, you must keep two FC switches configured in parallel. This configuration maintains cluster availability. See SAN Considerations for more information.


  14. If you want to remove the host adapters from Node B, power off the node.

    If you do not want to remove host adapters, skip to Step 17.

  15. Remove the host adapters from Node B.

    For the procedure about how to remove host adapters, see the documentation that shipped with your nodes.

  16. Without enabling the node to boot, power on Node B.

    For more information, see your Sun Cluster system administration documentation.

  17. Boot Node B into cluster mode.

    For more information about how to boot nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.

  18. On all nodes, update the /devices and /dev entries.

    • If you are using Sun Cluster 3.2, use the following commands:


      # devfsadm -C
      # cldevice clear
      
    • If you are using Sun Cluster 3.1, use the following commands:


      # devfsadm -C
      # scdidadm -C
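
    To verify that the stale device IDs were removed, you can list the remaining DID devices.

    • If you are using Sun Cluster 3.2, use the following command:

      # cldevice list -v

    • If you are using Sun Cluster 3.1, use the following command:

      # scdidadm -L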
      
  19. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
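
    For example, to return a device group to a node, assuming an illustrative node name phys-schost-1 and device group dg-schost-1:

    • Sun Cluster 3.2:

      # cldevicegroup switch -n phys-schost-1 dg-schost-1

    • Sun Cluster 3.1:

      # scswitch -z -D dg-schost-1 -h phys-schost-1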
      
  20. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -g resourcegroup -h nodename
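
    For example, to return a resource group to a node, assuming an illustrative node name phys-schost-1 and resource group rg-schost-1:

    • Sun Cluster 3.2:

      # clresourcegroup switch -n phys-schost-1 rg-schost-1

    • Sun Cluster 3.1:

      # scswitch -z -g rg-schost-1 -h phys-schost-1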