Sun Cluster 3.1 10/03 System Administration Guide

How to Remove Connectivity Between an Array and a Single Node, in a Cluster With Greater Than Two-Node Connectivity

Use this procedure to detach a storage array from a single cluster node in a cluster that has three-node or four-node connectivity.

  1. Back up all database tables, data services, and volumes that are associated with the storage array that you are removing.

  2. Determine the resource groups and device groups that are running on the node to be disconnected.


    # scstat
    
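    For example, to limit the output to resource group status and device group status, you can pass the -g and -D options to scstat. The exact output depends on your configuration.


    # scstat -g
    # scstat -D
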
  3. If necessary, move all resource groups and device groups off the node to be disconnected.


    Caution –

    If your cluster is running OPS/RAC software, shut down the OPS/RAC database instance that is running on the node before you move the groups off the node. For instructions, see the Oracle Database Administration Guide.



    # scswitch -S -h from-node
    
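    For example, if the node being disconnected were named phys-schost-2 (a hypothetical node name), the following command would evacuate all resource groups and device groups from it:


    # scswitch -S -h phys-schost-2
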
  4. Put the device groups into maintenance state.

    For the procedure on acquiescing I/O activity to VERITAS shared disk groups, see your VERITAS Volume Manager documentation.

    For the procedure on putting a device group in maintenance state, see “Administering the Cluster” in the Sun Cluster 3.1 10/03 System Administration Guide.
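
    For example, assuming a device group named dg-schost-1 (a hypothetical name), the following scswitch command is one way to place the group in maintenance state:


    # scswitch -m -D dg-schost-1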

  5. Remove the node from the device groups.

    • If you use VERITAS Volume Manager or raw disk, use the scconf(1M) command to remove the node from the device groups, as shown in the example that follows this list.

    • If you use Solstice DiskSuite, use the metaset command to remove the node from the disk sets.
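
    The following sketch shows both forms, assuming a device group (or disk set) named dg-schost-1 and a disconnected node named phys-schost-2. Both names are hypothetical, and metaset might also require the -f option in some configurations.


    # scconf -r -D name=dg-schost-1,nodelist=phys-schost-2
    # metaset -s dg-schost-1 -d -h phys-schost-2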

  6. If the cluster is running HAStorage or HAStoragePlus, remove the node from the resource group's nodelist.


    # scrgadm -c -g resource-group -y Nodelist=nodelist
    

    See the Sun Cluster 3.1 Data Service Planning and Administration Guide for more information on changing a resource group's nodelist.
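
    The nodelist value is the new comma-separated list of nodes and omits the node that you are disconnecting. For example, if a hypothetical resource group named nfs-rg should remain only on nodes phys-schost-1 and phys-schost-3, the command would look like the following:


    # scrgadm -c -g nfs-rg -y Nodelist=phys-schost-1,phys-schost-3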


    Note –

    Resource type, resource group, and resource property names are case insensitive when executing scrgadm.


  7. If the storage array you are removing is the last storage array that is connected to the node, disconnect the fiber-optic cable between the node and the hub or switch that is connected to this storage array (otherwise, skip this step).

  8. Do you want to remove the host adapter from the node you are disconnecting?

    • If yes, shut down and power off the node, as shown in the example that follows this list.

    • If no, skip to Step 11.
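
    If you are removing the host adapter, a typical way to shut the node down to the OpenBoot PROM prompt before powering it off is the standard Solaris shutdown command, with the grace period adjusted as needed:


    # shutdown -g0 -y -i0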

  9. Remove the host adapter from the node.

    For the procedure on removing host adapters, see the documentation that shipped with your node.

  10. Without allowing the node to boot, power on the node.
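
    How you keep the node from booting depends on your console and firmware. One common approach is to send a break from the console as the node powers on; another is to disable automatic booting from the ok prompt beforehand, for example before the node is powered off:


    ok setenv auto-boot? false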

  11. Boot the node into non-cluster mode.


    ok boot -x 
    

    Caution –

    The node must be in non-cluster mode before you remove the OPS/RAC software in the next step. Otherwise, the node panics, which can cause a loss of data availability.


  12. If OPS/RAC software has been installed, remove the OPS/RAC software package from the node that you are disconnecting.


    # pkgrm SUNWscucm 
    

    Caution –

    If you do not remove the OPS/RAC software from the node that you disconnected, the node panics when it is reintroduced to the cluster, which can cause a loss of data availability.


  13. Boot the node into cluster mode.
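
    If the node is still running Solaris in non-cluster mode after the previous step, you can either reboot it directly or shut it down to the OpenBoot PROM prompt and issue the boot command that follows. A direct reboot, assuming the default grace period, looks like this:


    # shutdown -g0 -y -i6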


    ok boot
    

  14. On the node, update the device namespace by updating the /devices and /dev entries.


    # devfsadm -C 
    # scdidadm -C
    
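    To verify the result, you can list the device ID (DID) instances that remain visible on the node. The output depends on your configuration.


    # scdidadm -l
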
  15. Bring the device groups back online.

    For procedures on bringing a VERITAS shared disk group online, see your VERITAS Volume Manager documentation.

    For the procedure on bringing a device group online, see the procedure on putting a device group into maintenance state.
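
    For example, assuming a device group named dg-schost-1 and a remaining attached node named phys-schost-1 (both names are hypothetical), the following scswitch command brings the group back online on that node:


    # scswitch -z -D dg-schost-1 -h phys-schost-1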