Oracle® Solaris Cluster System Administration Guide

Updated: October 2015

How to Remove Connectivity Between an Array and a Single Node, in a Cluster With Greater Than Two-Node Connectivity

Use this procedure to detach a storage array from a single cluster node, in a cluster that has three-node or four-node connectivity.

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Oracle Solaris Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical.

  1. Back up all database tables, data services, and volumes that are associated with the storage array that you are removing.
  2. Determine the resource groups and device groups that are running on the node to be disconnected.
    phys-schost# clresourcegroup status
    phys-schost# cldevicegroup status
  3. If necessary, move all resource groups and device groups off the node to be disconnected.

    Caution (SPARC only) -  If your cluster is running Oracle RAC software, shut down the Oracle RAC database instance that is running on the node before you move the groups off the node. For instructions, see the Oracle Database Administration Guide.


    phys-schost# clnode evacuate node

    The clnode evacuate command switches over all device groups from the specified node to the next-preferred node. The command also switches all resource groups from the specified node to the next-preferred node.
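
    For example, if the node that you are disconnecting is named phys-schost-3 (substitute your own node name), you would run the following command.

    phys-schost# clnode evacuate phys-schost-3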

  4. Put the device groups into maintenance state.

    For the procedure on putting a device group in maintenance state, see How to Put a Node Into Maintenance State.
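
    As a sketch only, assuming a device group named dg-schost-1 (a placeholder name), the commands typically take the following form; follow the referenced procedure for the complete steps.

    phys-schost# cldevicegroup disable dg-schost-1
    phys-schost# cldevicegroup offline dg-schost-1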

  5. Remove the node from the device groups.

    If the device groups are raw-disk device groups, use the cldevicegroup(1CL) command to remove the node from the device groups.
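
    For example, assuming the node is named phys-schost-3 and the raw-disk device group is named dsk/d4 (both names are placeholders), the command might look like the following.

    phys-schost# cldevicegroup remove-node -n phys-schost-3 dsk/d4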

  6. For each resource group that contains an HAStoragePlus resource, remove the node from the resource group's node list.
    phys-schost# clresourcegroup remove-node -n node + | resourcegroup
    node

    The name of the node.

    See the Oracle Solaris Cluster Data Services Planning and Administration Guide for more information about changing a resource group's node list.


    Note -  Resource type, resource group, and resource property names are case sensitive when clresourcegroup is executed.
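
    For example, to remove a node named phys-schost-3 from the node list of a resource group named rg-hasp (both names are placeholders), you would run the following command.

    phys-schost# clresourcegroup remove-node -n phys-schost-3 rg-hasp
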
  7. If the storage array that you are removing is the last storage array that is connected to the node, disconnect the fiber-optic cable between the node and the hub or switch that is connected to this storage array.

    Otherwise, skip this step.

  8. If you are removing the host adapter from the node that you are disconnecting, power off the node.

    Otherwise, skip to Step 11.

  9. Remove the host adapter from the node.

    For the procedure on removing host adapters, see the documentation for the node.

  10. Without booting the node, power on the node.
  11. If Oracle RAC software has been installed, remove the Oracle RAC software package from the node that you are disconnecting.
    phys-schost# pkg uninstall /ha-cluster/library/ucmm 

    Caution (SPARC only) -  If you do not remove the Oracle RAC software from the node that you disconnected, the node panics when it is reintroduced to the cluster, potentially causing a loss of data availability.


  12. Boot the node in cluster mode.
    • On SPARC based systems, run the following command.

      ok boot
    • On x86 based systems, do the following.

      When the GRUB menu is displayed, select the appropriate Oracle Solaris entry and press Enter.

  13. On the node, update the device namespace by updating the /devices and /dev entries.
    phys-schost# devfsadm -C
    phys-schost# cldevice refresh
  14. Bring the device groups back online.

    For information about bringing a device group online, see How to Bring a Node Out of Maintenance State.
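
    For example, assuming a device group named dg-schost-1 (a placeholder name), a command of the following form brings the device group back online; follow the referenced procedure for the complete steps.

    phys-schost# cldevicegroup online dg-schost-1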