Sun Cluster System Administration Guide for Solaris OS

How to Remove Connectivity Between an Array and a Single Node, in a Cluster With Greater Than Two-Node Connectivity

Use this procedure to detach a storage array from a single cluster node in a cluster that has three-node or four-node connectivity.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.
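
For example, the short form of the clresourcegroup command is clrg, and the short form of the cldevicegroup command is cldg. The following two commands are therefore equivalent:


    phys-schost# clresourcegroup status
    phys-schost# clrg status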

  1. Back up all database tables, data services, and volumes that are associated with the storage array that you are removing.

  2. Determine the resource groups and device groups that are running on the node to be disconnected.


    phys-schost# clresourcegroup status
    phys-schost# cldevicegroup status
    
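    Output similar to the following is displayed. The resource group, device group, and node names shown here are examples only; your configuration will differ.


    === Cluster Resource Groups ===

    Group Name     Node Name        Suspended    Status
    ----------     ---------        ---------    ------
    rg-1           phys-schost-1    No           Online
                   phys-schost-2    No           Offline

    === Cluster Device Groups ===

    Device Group Name    Primary          Secondary        Status
    -----------------    -------          ---------        ------
    dg-schost-1          phys-schost-1    phys-schost-2    Online
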
  3. If necessary, move all resource groups and device groups off the node to be disconnected.


    Caution (SPARC only) –

    If your cluster is running Oracle RAC software, shut down the Oracle RAC database instance that is running on the node before you move the groups off the node. For instructions, see the Oracle Database Administration Guide.



    phys-schost# clnode evacuate node
    

    The clnode evacuate command switches over all device groups from the specified node to the next-preferred node. The command also switches all resource groups from voting or non-voting nodes on the specified node to the next-preferred voting or non-voting node.
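
    For example, if the node that you are disconnecting is named phys-schost-2 (a placeholder name), you would run the following command:


    phys-schost# clnode evacuate phys-schost-2
    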

  4. Put the device groups into maintenance state.

    For the procedure on acquiescing I/O activity to Veritas shared disk groups, see your VxVM documentation.

    For the procedure on putting a device group in maintenance state, see How to Put a Node Into Maintenance State.
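
    As a sketch only, and assuming a device group named dg-schost-1, you can typically disable the device group and then take it offline. See the referenced procedure for the complete steps.


    phys-schost# cldevicegroup disable dg-schost-1
    phys-schost# cldevicegroup offline dg-schost-1
    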

  5. Remove the node from the device groups.

    • If you use VxVM or a raw disk, use the cldevicegroup(1CL) command to remove the node from the device groups, as shown in the first example after this list.

    • If you use Solstice DiskSuite, use the metaset command to remove the node from the device groups, as shown in the second example after this list.
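
    The following examples are sketches only. They assume a device group (or Solstice DiskSuite disk set) named dg-schost-1 and a node named phys-schost-2; substitute the names from your configuration.


    phys-schost# cldevicegroup remove-node -n phys-schost-2 dg-schost-1

    phys-schost# metaset -s dg-schost-1 -d -h phys-schost-2
    

    If metaset refuses to remove the last host from the disk set, the -f (force) option can be added to the metaset command.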

  6. For each resource group that contains an HAStoragePlus resource, remove the node from the resource group's node list.


    phys-schost# clresourcegroup remove-node -z zone -n node + | resourcegroup
    
    node

    The name of the node.

    zone

    The name of the non-voting node that can master the resource group. Specify zone only if you specified a non-voting node when you created the resource group.

    See the Sun Cluster Data Services Planning and Administration Guide for Solaris OS for more information about changing a resource group's node list.
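
    For example, the following command removes the node phys-schost-2 (a placeholder name) from the node list of the resource group rg-1:


    phys-schost# clresourcegroup remove-node -n phys-schost-2 rg-1
    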


    Note –

    Resource type, resource group, and resource property names are case sensitive when clresourcegroup is executed.


  7. If the storage array that you are removing is the last storage array that is connected to the node, disconnect the fiber-optic cable between the node and the hub or switch that is connected to this storage array (otherwise, skip this step).

  8. If you are removing the host adapter from the node that you are disconnecting, shut down and power off the node. If you are not removing the host adapter from the node that you are disconnecting, skip to Step 11.

  9. Remove the host adapter from the node.

    For the procedure on removing host adapters, see the documentation for the node.

  10. Without booting the node, power on the node.

  11. If Oracle RAC software has been installed, remove the Oracle RAC software package from the node that you are disconnecting.


    phys-schost# pkgrm SUNWscucm 
    
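    To confirm the removal, you can query the package with the pkginfo command. An error message indicates that the package is no longer installed:


    phys-schost# pkginfo SUNWscucm
    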

    Caution (SPARC only) –

    If you do not remove the Oracle RAC software from the node that you disconnected, the node panics when it is reintroduced to the cluster, potentially causing a loss of data availability.


  12. Boot the node in cluster mode.

    • On SPARC based systems, run the following command.


      ok boot
      
    • On x86 based systems, run the following commands.

      When the GRUB menu is displayed, select the appropriate Solaris entry and press Enter. The GRUB menu appears similar to the following:


      GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
      +-------------------------------------------------------------------------+
      | Solaris 10 /sol_10_x86                                                  |
      | Solaris failsafe                                                        |
      |                                                                         |
      +-------------------------------------------------------------------------+
      Use the ^ and v keys to select which entry is highlighted.
      Press enter to boot the selected OS, 'e' to edit the
      commands before booting, or 'c' for a command-line.
  13. On the node, update the device namespace by updating the /devices and /dev entries.


    phys-schost# devfsadm -C
    phys-schost# cldevice refresh
    
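    To verify that stale device entries were removed, you can list the remaining device paths; the output depends on your configuration:


    phys-schost# cldevice list -v
    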
  14. Bring the device groups back online.

    For procedures about bringing a Veritas shared disk group online, see your Veritas Volume Manager documentation.

    For information about bringing a device group online, see How to Bring a Node Out of Maintenance State.