Sun Cluster System Administration Guide for Solaris OS

How to Remove Connectivity Between an Array and a Single Node in a Cluster With Greater Than Two-Node Connectivity

Use this procedure to detach a storage array from a single cluster node in a cluster that has three- or four-node connectivity.

Steps
  1. Back up all database tables, data services, and volumes that are associated with the storage array that you are removing.

  2. Determine the resource groups and device groups that are running on the node to be disconnected.


    # scstat
    
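    To narrow the output to the information this step needs, you can query resource groups and device groups separately:


    # scstat -g
    # scstat -D
    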
  3. If necessary, move all resource groups and device groups off the node to be disconnected.


    Caution (SPARC only) –

    If your cluster is running Oracle Parallel Server/Real Application Clusters software, shut down the Oracle Parallel Server/Real Application Clusters database instance that is running on the node before you move the groups off the node. For instructions, see the Oracle Database Administration Guide.



    # scswitch -S -h from-node
    
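    In the preceding command, -S moves all resource groups and device groups off the specified node, and -h from-node names the node to be evacuated. For example, with a hypothetical node name phys-schost-2:


    # scswitch -S -h phys-schost-2
    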
  4. Put the device groups into maintenance state.

    For the procedure on quiescing I/O activity to Veritas shared disk groups, see your VxVM documentation.

    For the procedure on putting a device group into maintenance state, see Chapter 7, Administering the Cluster.
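    As a minimal sketch, assuming a device group named device-group (a placeholder), the scswitch command can place the group into maintenance state:


    # scswitch -m -D device-group
    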

  5. Remove the node from the device groups.

    • If you use VxVM or raw disk, use the scconf(1M) command to remove the node from the device groups, as shown in the sketch after this list.

    • If you use Solstice DiskSuite, use the metaset command to remove the node from the disk set.
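    The following commands are sketches; devicegroup, setname, and from-node are placeholders for names on your cluster. For a VxVM or raw-disk device group:


    # scconf -r -D name=devicegroup,nodelist=from-node
    

    For a Solstice DiskSuite disk set:


    # metaset -s setname -d -h from-node
    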

  6. If the cluster is running HAStorage or HAStoragePlus, remove the node from the resource group's nodelist by setting the nodelist to the remaining nodes.


    # scrgadm -c -g resource-group -h nodelist
    

    See the Sun Cluster Data Services Planning and Administration Guide for Solaris OS for more information on changing a resource group's nodelist.
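    For example, assuming a hypothetical resource group rg-hastp whose remaining nodes are phys-schost-1 and phys-schost-3:


    # scrgadm -c -g rg-hastp -h phys-schost-1,phys-schost-3
    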


    Note –

    Resource type, resource group, and resource property names are case insensitive when executing scrgadm.


  7. If the storage array that you are removing is the last storage array connected to the node, disconnect the fiber-optic cable between the node and the hub or switch that is connected to this storage array. Otherwise, skip this step.

  8. Do you want to remove the host adapter from the node you are disconnecting?

    • If yes, shut down and power off the node.

    • If no, skip to Step 11.

  9. Remove the host adapter from the node.

    For the procedure on removing host adapters, see the documentation that shipped with your node.

  10. Without allowing the node to boot, power on the node.

  11. Boot the node into non-cluster mode.

    • SPARC:


      ok boot -x
      
    • x86:


                            <<< Current Boot Parameters >>>
      Boot path: /pci@0,0/pci8086,2545@3/pci8086,1460@1d/pci8086,341a@7,1/
      sd@0,0:a
      Boot args:
      
      Type    b [file-name] [boot-flags] <ENTER>  to boot with options
      or      i <ENTER>                           to enter boot interpreter
      or      <ENTER>                             to boot with defaults
      
                        <<< timeout in 5 seconds >>>
      Select (b)oot or (i)nterpreter: b -x
      

    Caution (SPARC only) –

    The node must be in non-cluster mode before you remove the Oracle Parallel Server/Real Application Clusters software in the next step. Otherwise, the node panics and potentially causes a loss of data availability.


  12. SPARC: If Oracle Parallel Server/Real Application Clusters software has been installed, remove the Oracle Parallel Server/Real Application Clusters software package from the node that you are disconnecting.


    # pkgrm SUNWscucm 
    
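    To verify that the package was removed, you can query it with pkginfo; if the package is absent, the command reports an error:


    # pkginfo SUNWscucm
    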

    Caution (SPARC only) –

    If you do not remove the Oracle Parallel Server/Real Application Clusters software from the node that you disconnected, the node panics when it is reintroduced to the cluster, potentially causing a loss of data availability.


  13. Boot the node into cluster mode.

    • SPARC:


      ok boot
      
    • x86:


                            <<< Current Boot Parameters >>>
      Boot path: /pci@0,0/pci8086,2545@3/pci8086,1460@1d/pci8086,341a@7,1/
      sd@0,0:a
      Boot args:
      
      Type    b [file-name] [boot-flags] <ENTER>  to boot with options
      or      i <ENTER>                           to enter boot interpreter
      or      <ENTER>                             to boot with defaults
      
                        <<< timeout in 5 seconds >>>
      Select (b)oot or (i)nterpreter: b
      
  14. On the node, update the device namespace by updating the /devices and /dev entries.


    # devfsadm -C 
    # scdidadm -C
    
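    In these commands, devfsadm -C runs in cleanup mode to remove dangling /dev links, and scdidadm -C removes DID instances for devices that are no longer attached.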
  15. Bring the device groups back online.

    For procedures about bringing a VERITAS shared disk group online, see your VERITAS Volume Manager documentation.

    For the procedure on bringing a device group online, see the procedure on putting a device group into maintenance state in Chapter 7, Administering the Cluster.
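
    As a sketch, assuming a device group named device-group and a node named node (both placeholders), the scswitch command can bring the group back online:


    # scswitch -z -D device-group -h node
    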