Deleting a Cluster Node on Linux and UNIX Systems

This section describes the procedure for deleting a node from a cluster.

Note:

  • You can remove the Oracle RAC database instance from the node before removing the node from the cluster, but this step is not required. If you do not remove the instance, then the instance remains configured but never runs. Deleting a node from a cluster does not remove the node's configuration information from the cluster, but this residual configuration information does not interfere with the operation of the cluster.

    See Also: Oracle Real Application Clusters Administration and Deployment Guide for more information about deleting an Oracle RAC database instance

  • If you delete the last node of a cluster that is serviced by GNS, then you must delete the entries for that cluster from GNS.

  • If you have nodes in the cluster that are unpinned, then Oracle Clusterware ignores those nodes after a time and there is no need for you to remove them.

  • If you created node-specific configuration for a node (such as disabling a service on a specific node, or adding the node to the candidate list for a server pool), then that node-specific configuration is not removed when the node is deleted from the cluster. You must remove such node-specific configuration manually.

  • Voting files are automatically backed up in OCR after any changes you make to the cluster.

  • When you want to delete a Leaf Node from an Oracle Flex Cluster, you need only complete steps 1 through 4 of this procedure.

To delete a node from a cluster:

  1. Ensure that Grid_home, the location of the installed Oracle Clusterware software, correctly specifies the full directory path for the Oracle Clusterware home on each node.
  2. Run the following command as either root or the user that installed Oracle Clusterware to determine whether the node you want to delete is active and whether it is pinned:
    $ olsnodes -s -t
    

    If the node is pinned, then run the crsctl unpin css command to unpin it. Otherwise, proceed to the next step.
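    The pinned check in this step can be scripted by parsing the olsnodes output. The following is a minimal sketch, assuming the `olsnodes -s -t` columns are node name, status, and pinned state; the node names and sample output are hypothetical:

    ```shell
    # Hypothetical sample of `olsnodes -s -t` output; the column layout
    # (node name, status, pinned state) is an assumption.
    olsnodes_output="node1 Active Pinned
    node2 Active Unpinned"

    node="node2"   # hypothetical name of the node to delete

    # Extract the pinned state reported for the target node.
    state=$(printf '%s\n' "$olsnodes_output" | awk -v n="$node" '$1 == n { print $3 }')

    if [ "$state" = "Pinned" ]; then
        echo "Node $node is pinned; run: crsctl unpin css -n $node"
    else
        echo "Node $node is not pinned; proceed to the next step"
    fi
    ```

    On a live cluster, the sample string would be replaced with the actual command output, for example `olsnodes_output=$(olsnodes -s -t)`.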

  3. On the node you want to delete, run the following command as the user that installed Oracle Clusterware from the Grid_home/oui/bin directory, where node_to_be_deleted is the name of the node that you are deleting:
    $ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES=
    {node_to_be_deleted}" CRS=TRUE -silent -local
    
  4. On the node that you are deleting, depending on whether you have a shared or local Oracle home, complete one of the following procedures as the user that installed Oracle Clusterware:
    • For a local home, deinstall the Oracle Clusterware home from the node that you want to delete by running the following command, where Grid_home is the path defined for the Oracle Clusterware home:

      $ Grid_home/deinstall/deinstall -local
      

      Caution:

      • If you do not specify the -local flag, then the command removes the Oracle Grid Infrastructure home from every node in the cluster.

      • If you cut and paste the preceding command, then paste it into a text editor before pasting it to the command line to remove any formatting this document might contain.

    • If you have a shared home, then run the following commands, in order, on the node you want to delete.

      Run the following command to deconfigure Oracle Clusterware:

      $ Grid_home/perl/bin/perl Grid_home/crs/install/rootcrs.pl -deconfig -force
      

      Run the following command from the Grid_home/oui/bin directory to detach the Grid home:

      $ ./runInstaller -detachHome ORACLE_HOME=Grid_home -silent -local
      

      Manually delete any configuration files, as prompted by the installation utility.

  5. On any node other than the node you are deleting (except for a Leaf Node in an Oracle Flex Cluster), run the following command from the Grid_home/oui/bin directory, where remaining_nodes_list is a comma-delimited list of the nodes that are to remain part of your cluster:
    $ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES=
    {remaining_nodes_list}" CRS=TRUE -silent
    

    Note:

    • You must run this command a second time from the Oracle RAC home, where ORACLE_HOME=ORACLE_RAC_HOME and the CRS=TRUE and -silent options are omitted from the syntax, as follows:

      $ ./runInstaller -updateNodeList ORACLE_HOME=ORACLE_HOME
       "CLUSTER_NODES={remaining_nodes_list}"
      
    • Because you do not run this command when deleting a Leaf Node from an Oracle Flex Cluster, remaining_nodes_list must list only Hub Nodes.

    • If you have a shared Oracle Grid Infrastructure home, then append the -cfs option to the command example in this step and provide a complete path location for the cluster file system.
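    As a small sketch, the comma-delimited remaining_nodes_list for this step can be built from the full node list; the node names here are hypothetical, and the script only constructs the string, it does not invoke the installer:

    ```shell
    # Hypothetical cluster membership and the node being deleted.
    all_nodes="node1 node2 node3"
    deleted="node2"

    # Build remaining_nodes_list by filtering out the deleted node and
    # joining the remaining names with commas.
    remaining_nodes_list=$(
        for n in $all_nodes; do
            [ "$n" = "$deleted" ] || printf '%s\n' "$n"
        done | paste -sd, -
    )

    echo "$remaining_nodes_list"    # node1,node3
    ```

    The resulting string is what you would substitute for remaining_nodes_list in the runInstaller command shown above.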

  6. From any node that you are not deleting, run the following command from the Grid_home/bin directory as root to delete the node from the cluster:
    # crsctl delete node -n node_to_be_deleted
    
  7. Run the following CVU command to verify that the specified nodes have been successfully deleted from the cluster:
    $ cluvfy stage -post nodedel -n node_list [-verbose]
    

    See Also:

    "cluvfy stage -post nodedel" for more information about this CVU command

  8. If you removed a cluster node on which Oracle Clusterware was down, then determine whether the VIP for the deleted node still exists, as follows:
    $ srvctl config vip -node deleted_node_name
    

    If the VIP still exists, then delete it, as follows:

    $ srvctl stop vip -node deleted_node_name
    $ srvctl remove vip -node deleted_node_name
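    The check-and-remove sequence in this step can be wrapped in one conditional so that the stop and remove commands run only when a VIP resource still exists. The following is a sketch, assuming srvctl returns a nonzero exit status when no VIP is configured for the node; the node name is hypothetical:

    ```shell
    node="node_to_be_deleted"   # hypothetical node name

    # If `srvctl config vip` succeeds, a VIP resource still exists for the
    # node, so stop and remove it; a nonzero exit status is assumed to
    # mean no VIP remains.
    if srvctl config vip -node "$node" >/dev/null 2>&1; then
        vip_exists=yes
        srvctl stop vip -node "$node"
        srvctl remove vip -node "$node"
    else
        vip_exists=no
        echo "No VIP found for $node; nothing to remove"
    fi
    ```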