Sun Cluster 3.1 - 3.2 With Sun StorEdge 3510 or 3511 FC RAID Array Manual for Solaris OS

Procedure: How to Remove a Storage Array From a Running Cluster

Use this procedure to permanently remove storage arrays and their submirrors from a running cluster.

If you need to remove a storage array from more than two nodes, repeat Step 6 to Step 13 for each additional node that connects to the storage array.


Caution –

During this procedure, you lose access to the data that resides on each storage array that you are removing.


Before You Begin

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify RBAC authorization.

  1. If the storage array you are removing contains any quorum devices, choose another disk drive to configure as the quorum device. Then remove the old quorum device.

    To determine whether any LUN in the storage array is configured as a quorum device, use one of the following commands.

    • If you are using Sun Cluster 3.2, use the following command:


      # clquorum show 
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat -q
      

    For procedures on adding and removing quorum devices, see Chapter 6, Administering Quorum, in Sun Cluster System Administration Guide for Solaris OS.
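    For example, on Sun Cluster 3.2 you might configure a replacement quorum device and then remove the old one with commands similar to the following; the DID device names d20 and d3 are hypothetical:

      # clquorum add d20
      # clquorum remove d3

    On Sun Cluster 3.1, the equivalent commands are scconf -a -q globaldev=d20 and scconf -r -q globaldev=d3.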

  2. If necessary, back up all database tables, data services, and drives associated with each storage array that you are removing.

  3. Determine the resource groups and device groups that are running on all nodes.

    Record this information because you will use it in Step 17 and Step 18 of this procedure to return resource groups and device groups to these nodes.

    • If you are using Sun Cluster 3.2, use the following commands:


      # clresourcegroup status + 
      # cldevicegroup status + 
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat
      
  4. If necessary, run the appropriate Solstice DiskSuite or Veritas Volume Manager commands to detach the submirrors from each storage array that you are removing. Detaching the submirrors stops all I/O activity to the storage array.

    For more information, see your Solstice DiskSuite or Veritas Volume Manager documentation.
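    For example, for a Solstice DiskSuite mirror d10 with submirror d12 on the storage array, or a Veritas Volume Manager plex in disk group datadg, the detach commands look similar to the following. The metadevice, disk group, and plex names are hypothetical:

      # metadetach d10 d12
      # vxplex -g datadg dis vol01-02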

  5. Run the appropriate volume manager commands to remove references to each LUN that belongs to the storage array that you are removing.

    For more information, see your Solstice DiskSuite or Veritas Volume Manager documentation.
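    As an illustration only, the following commands clear a Solstice DiskSuite metadevice and remove Veritas Volume Manager records for a LUN that is being withdrawn; the names d12, datadg, disk01, and c2t0d0s2 are hypothetical:

      # metaclear d12
      # vxdg -g datadg rmdisk disk01
      # vxdisk rm c2t0d0s2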

  6. Shut down the node that is connected to the storage array that you are removing.

    For the full procedure on shutting down and powering off a node, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.
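    For example, on Sun Cluster 3.2 you would typically evacuate the node before shutting it down; the node name phys-schost-1 is hypothetical:

      # clnode evacuate phys-schost-1
      # shutdown -g0 -y -i0

    On Sun Cluster 3.1, use scswitch -S -h phys-schost-1 to evacuate the node before running shutdown.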

  7. If necessary, disconnect the storage arrays from the nodes or the FC switches.

  8. If the storage array that you are removing is not the last storage array connected to the node, skip to Step 10.

  9. If the storage array that you are removing is the last storage array connected to the node, disconnect the fiber-optic cable between the node and the FC switch that was connected to this storage array.

  10. If you do not want to remove the host adapters from the node, skip to Step 13.

  11. If you want to remove the host adapters from the node, power off the node.

  12. Remove the host adapters from the node.

    For the procedure on removing host adapters, see the documentation that shipped with your host adapter and nodes.

  13. Boot the node into cluster mode.

    For more information on booting nodes, see Chapter 3, Shutting Down and Booting a Cluster, in Sun Cluster System Administration Guide for Solaris OS.
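    For example, on SPARC based systems you can boot the node from the OpenBoot PROM prompt; by default the node boots into cluster mode:

      ok boot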

  14. Repeat Step 6 through Step 13 on each additional node that you need to disconnect from the storage array.

  15. On all cluster nodes, remove the paths to the devices that you are deleting.


    # devfsadm -C
    
  16. On all cluster nodes, remove all obsolete device IDs.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice clear
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -C
      
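    To optionally confirm that the obsolete device IDs are gone, list the remaining DID devices with whichever command matches your release:

      # cldevice list -v
      # scdidadm -l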
  17. (Optional) Restore the device groups to the original node.

    Perform the following step for each device group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevicegroup switch -n nodename devicegroup1[ devicegroup2 …]
      
      -n nodename

      The node to which you are restoring device groups.

      devicegroup1[ devicegroup2 …]

      The device group or groups that you are restoring to the node.

    • If you are using Sun Cluster 3.1, use the following command:


      # scswitch -z -D devicegroup -h nodename
      
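    For example, to return a hypothetical device group named dg-schost-1 to the node phys-schost-1 on Sun Cluster 3.2:

      # cldevicegroup switch -n phys-schost-1 dg-schost-1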
  18. (Optional) Restore the resource groups to the original node.

    Perform the following step for each resource group you want to return to the original node.

    • If you are using Sun Cluster 3.2, use the following command:


      # clresourcegroup switch -n nodename resourcegroup1[ resourcegroup2 …]
      
      nodename

      For failover resource groups, the node to which the groups are returned. For scalable resource groups, the node list to which the groups are returned.

      resourcegroup1[ resourcegroup2 …]

      The resource group or groups that you are returning to the node or nodes.

    • If you are using Sun Cluster 3.1, use the following command:


       # scswitch -z -g resourcegroup -h nodename
      
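    For example, to return a hypothetical resource group named rg-schost-1 to the node phys-schost-1 on Sun Cluster 3.2:

      # clresourcegroup switch -n phys-schost-1 rg-schost-1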
See Also

To prepare the storage array for later use, unmap and delete all LUNs and logical drives. See How to Unmap and Remove a LUN for information about LUN removal. For more information about removing logical drives, see the Sun StorEdge 3000 Family Installation, Operation, and Service Manual.