Sun Cluster 3.1 - 3.2 With SCSI JBOD Storage Device Manual for Solaris OS

Procedure: How to Remove a Storage Array

Removing a storage array enables you to downsize or reallocate your existing storage pool.

Before You Begin

This procedure relies on the following prerequisites and assumptions.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS.

To perform this procedure, become superuser or assume a role that provides solaris.cluster.read and solaris.cluster.modify role-based access control (RBAC) authorization.

  1. If the storage array that you want to remove contains a quorum device, add a new quorum device that will not be affected by this procedure. Then remove the old quorum device.

    To determine whether the affected array contains a quorum device, use one of the following commands.

    • If you are using Sun Cluster 3.2, use the following command:


      # clquorum show
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scstat -q
      

    For procedures about how to add and remove quorum devices, see the Sun Cluster system administration documentation.
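    As an illustration of the quorum swap in this step, the following sketch assumes Sun Cluster 3.2, with d3 as the old quorum device in the array being removed and d4 as a device on an unaffected array; both device names are hypothetical. The script only echoes the commands so that they can be reviewed before being run as superuser. Sun Cluster 3.1 administrators would use the equivalent scconf options instead; see the system administration documentation.

    ```shell
    #!/bin/sh
    # Hypothetical sketch: replace a quorum device before removing the array.
    # d3 (old device, in the array being removed) and d4 (new device, on an
    # unaffected array) are assumed names; substitute the device IDs reported
    # by clquorum show on your cluster.
    OLD_QD=d3
    NEW_QD=d4
    # Print the commands for review rather than running them directly.
    echo "clquorum add ${NEW_QD}"     # register the new quorum device first
    echo "clquorum remove ${OLD_QD}"  # then release the old quorum device
    ```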

  2. If necessary, back up the metadevice or volume.

    For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation.

  3. On each node that is connected to the storage array, perform volume management administration to remove the storage array from the configuration.

    If a volume manager manages the disk drives, run the appropriate volume manager commands to remove the disk drives from any diskset or disk group. For more information, see your Solaris Volume Manager or Veritas Volume Manager documentation. The note that follows lists the additional Veritas Volume Manager commands that are required.


    Note –

    Disk drives that were managed by Veritas Volume Manager must be completely removed from Veritas Volume Manager control before you can remove the disk drives from the Sun Cluster environment. After you delete the disk drives from any disk group, use the following commands on both nodes to remove the disk drives from Veritas Volume Manager control.



    # vxdisk offline cNtXdY
    # vxdisk rm cNtXdY
    
  4. Identify the disk drives that you plan to remove.


    # cfgadm -al
    
  5. On all nodes, remove references to the disk drives in the storage array that you plan to remove.


    # cfgadm -c unconfigure cN::dsk/cNtXdY
    
  6. Disconnect the SCSI cables from the storage array.

  7. On all nodes, update the device namespace.


    # devfsadm -C
    
  8. On all nodes, remove all obsolete device IDs.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice clear
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -C
      
  9. Power off the storage array. Disconnect the storage array from the power source.

    For the procedure about how to power off a storage array, see your storage documentation. For a list of storage documentation, see Related Documentation.

  10. Remove the storage array.

    For the procedure about how to remove a storage array, see your storage documentation. For a list of storage documentation, see Related Documentation.

  11. If you plan to remove a host adapter that has an entry in the nvramrc script, delete the references to the host adapter from the nvramrc script.


    Note –

    If no other parallel SCSI devices are connected to the nodes, you can delete the contents of the nvramrc script. Then, at the OpenBoot PROM, run setenv use-nvramrc? false. Afterward, reset the scsi-initiator-id to 7 as outlined in Installing a Storage Array.

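    As a hedged illustration of the note above, a session at the OpenBoot PROM ok prompt might look like the following. The set-default command is shown here as one way to clear the nvramrc contents; confirm the full sequence against Installing a Storage Array and your server documentation before resetting the node.

    ```
    ok set-default nvramrc
    ok setenv use-nvramrc? false
    ok setenv scsi-initiator-id 7
    ok reset-all
    ```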

  12. If necessary, remove any unused host adapters from the nodes.

    For the procedure about how to remove a host adapter, see your host adapter and server documentation.

  13. From any node, verify that the configuration is correct.

    • If you are using Sun Cluster 3.2, use the following command:


      # cldevice list -v
      
    • If you are using Sun Cluster 3.1, use the following command:


      # scdidadm -l