Oracle® Clusterware Administration and Deployment Guide
11g Release 2 (11.2)
Part Number E10717-08

4 Adding and Deleting Cluster Nodes

This chapter describes how to add nodes to an existing cluster and how to delete nodes from clusters. It provides procedures for these tasks on Linux and UNIX systems and on Windows systems.

The topics in this chapter include the following:

  • Prerequisite Steps for Adding Cluster Nodes

  • Adding and Deleting Cluster Nodes on Linux and UNIX Systems

  • Adding and Deleting Cluster Nodes on Windows Systems

Prerequisite Steps for Adding Cluster Nodes

Complete the following steps to prepare nodes to add to the cluster:

  1. Make physical connections.

    Connect the nodes' hardware to the network infrastructure of your cluster. This includes establishing electrical connections, configuring network interconnects, configuring shared disk subsystem connections, and so on. See your hardware vendor documentation for details about this step.

  2. Install the operating system.

    Install a cloned image of the operating system that matches the operating system on the other nodes in your cluster. This includes installing required service patches, updates, and drivers. See your operating system vendor documentation for details about this process.

    Note:

    Oracle recommends that you use a cloned image. However, if your installation meets the installation requirements, then you can use this procedure in your environment.
  3. Create Oracle users.

    Note:

    Perform this step only for Linux and UNIX systems.

    As root, create the Oracle users and groups using the same user ID and group ID as on the existing nodes.
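
    For example, the following is a minimal sketch for Linux. The group and user names (oinstall, dba, grid) and the numeric IDs are assumptions; run the id command on an existing node to determine the actual values and substitute them:

    # /usr/sbin/groupadd -g 1000 oinstall
    # /usr/sbin/groupadd -g 1001 dba
    # /usr/sbin/useradd -u 1100 -g oinstall -G dba grid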

  4. Ensure that SSH is configured on the node.

    Note:

    SSH configuration is done when you install Oracle Clusterware 11g release 2 (11.2). If, however, SSH is not configured, see Oracle Grid Infrastructure Installation Guide for information about configuring SSH.
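
    As a quick check, you can run a remote command over SSH from an existing node, where node3 is an example name for the node you are adding. If the command completes without prompting for a password or passphrase, then user equivalence is in place for that node:

    $ ssh node3 "date; hostname"
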
  5. Verify the hardware and operating system installations with the Cluster Verification Utility (CVU).

    After you configure the hardware and operating systems on the nodes you want to add, you can run the following commands to verify that the nodes you want to add are reachable by the other nodes in the cluster. You can also use these commands to verify user equivalence to all given nodes from the local node, node connectivity among all of the given nodes, accessibility to shared storage from all of the given nodes, and so on.

    1. From the Grid_home/bin directory on an existing node, run the CVU command to verify your installation at the post-hardware installation stage as shown in the following example, where node_list is a comma-delimited list of nodes you want to add to your cluster:

      $ cluvfy stage -post hwos -n {node_list | all} [-verbose]
      

      See Also:

      Appendix A, "Cluster Verification Utility Reference" for more information about CVU command usage
    2. From the Grid_home/bin directory on an existing node, run the CVU command to obtain a detailed comparison of the properties of the reference node with all of the other nodes that are part of your current cluster environment. Replace ref_node with the name of a node in your existing cluster against which you want CVU to compare the nodes to be added. Specify a comma-delimited list of nodes after the -n option. In the following example, orainventory_group is the name of the Oracle inventory group, and osdba_group is the name of the OSDBA group:

      $ cluvfy comp peer [-refnode ref_node] -n node_list
      [-orainv orainventory_group] [-osdba osdba_group] [-verbose]
      

    Note:

    For the reference node, select a cluster node against which you want CVU to compare the nodes to be added that you specify with the -n option.
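
    For example, assuming an existing cluster node named node1 and a new node named node3, the following commands run both verifications; the oinstall and dba group names are assumptions and should match your environment:

      $ cluvfy stage -post hwos -n node3 -verbose
      $ cluvfy comp peer -refnode node1 -n node3 -orainv oinstall -osdba dba -verbose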

After completing the procedures in this section, you are ready to add the nodes to the cluster.

Note:

Avoid changing host names after you complete the Oracle Clusterware installation, including adding or deleting domain qualifications. Nodes with changed host names must be deleted from the cluster and added back with the new name.

Adding and Deleting Cluster Nodes on Linux and UNIX Systems

This section explains cluster node addition and deletion on Linux and UNIX systems. The procedures in this section assume that you have performed the steps in the "Prerequisite Steps for Adding Cluster Nodes" section.

The last step of the node addition process includes extending the Oracle Clusterware home from an Oracle Clusterware home on an existing node to the nodes that you want to add.

This section includes the following topics:

  • Adding a Cluster Node on Linux and UNIX Systems

  • Deleting a Cluster Node on Linux and UNIX Systems

Adding a Cluster Node on Linux and UNIX Systems

This procedure describes how to add a node to your cluster. This procedure assumes that:

  • There is an existing cluster with two nodes named node1 and node2

  • You are adding a node named node3

  • You have successfully installed Oracle Clusterware on node1 and node2 in a non-shared home, where Grid_home represents the successfully installed home

To add a node:

  1. Ensure that you have successfully installed Oracle Clusterware on at least one node in your cluster environment. To perform the following procedure, Grid_home must identify your successfully installed Oracle Clusterware home.

    See Also:

    Oracle Grid Infrastructure Installation Guide for Oracle Clusterware installation instructions
  2. Verify the integrity of the cluster and node3:

    $ cluvfy stage -pre nodeadd -n node3 [-fixup [-fixupdir fixup_dir]] [-verbose]
    

    You can specify the -fixup option and a directory into which CVU prints instructions to fix the cluster or node if the verification fails.

  3. Navigate to the Grid_home/oui/bin directory on node1 and run the addNode.sh script using the following syntax, where node3 is the name of the node that you are adding and node3-vip is the VIP name for the node:

    If you are using Grid Naming Service (GNS):

    $ ./addNode.sh -silent "CLUSTER_NEW_NODES={node3}"
    

    If you are not using GNS:

    $ ./addNode.sh -silent "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}"
    

    Alternatively, you can specify the entries shown in Example 4-1 in a response file, where file_name is the name of the file, and run the addNode.sh script, as follows:

    $ addNode.sh -silent -responseFile file_name
    

    When the addNode.sh script completes, a message window displays a list of nodes in the cluster and root scripts that must be run on those nodes.

    Example 4-1 Response File Entries for Adding Oracle Clusterware Home

    RESPONSEFILE_VERSION=2.2.1.0.0
    
    CLUSTER_NEW_NODES = {"node3"}
    CLUSTER_NEW_VIRTUAL_HOSTNAMES = {"node3-vip"}
    

    See Also:

    Oracle Universal Installer and OPatch User's Guide for details about how to configure command-line response files

    Notes:

    • If you are not using Oracle Grid Naming Service (GNS), then you must add the name and address of node3 to DNS.

    • Command-line values always override response file values.

  4. Check that your cluster is integrated and that the cluster is not divided into separate parts by running the following CVU command. This command verifies that any number of specific nodes has been successfully added to the cluster at the network, shared storage, and clusterware levels:

    $ cluvfy stage -post nodeadd -n node3 [-verbose]
    

    See Also:

    "cluvfy stage [-pre | -post] nodeadd" for more information about this CVU command

After you complete the procedure in this section for adding nodes, you can optionally extend Oracle Database with Oracle Real Application Clusters (Oracle RAC) components to the new nodes, making them members of an existing Oracle RAC database.

See Also:

Oracle Real Application Clusters Administration and Deployment Guide for more information about extending Oracle Database with Oracle RAC to new nodes

Deleting a Cluster Node on Linux and UNIX Systems

This section describes the procedure for deleting a node from a cluster.

Notes:

  • If you run a dynamic Grid Plug and Play cluster using DHCP and GNS, then you need only perform step 3 (remove the VIP resource), step 4 (delete the node), and step 7 (update the inventory on the remaining nodes).

  • Voting disks are automatically backed up in OCR after any changes you make to the cluster.

To delete a node from a cluster:

  1. Ensure that Grid_home correctly specifies the full directory path for the Oracle Clusterware home on each node, where Grid_home is the location of the installed Oracle Clusterware software.

  2. If Cluster Synchronization Services (CSS) is not running on the node you are deleting, then the crsctl unpin css command in this step fails. Before you run the crsctl unpin css command, run the following command as either root or the user that installed Oracle Clusterware:

    $ olsnodes -s -t
    

    This command shows whether each node is active and whether it is pinned. If the node that you are deleting is unpinned, then you do not need to run the crsctl unpin css command; otherwise, unpin it as shown below.
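
    For illustration, the output resembles the following (the exact format can vary by version); here node3 is pinned:

    node1   Active  Unpinned
    node2   Active  Unpinned
    node3   Active  Pinned

    If the node is pinned, then run the following command as root to unpin it, where node3 is the node that you are deleting:

    # crsctl unpin css -n node3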

  3. Disable the Oracle Clusterware applications and daemons running on the node. Run the rootcrs.pl script as root from the Grid_home/crs/install directory on the node to be deleted, as follows:

    Note:

    Before you run this command, you must stop the EMAGENT.

    # ./rootcrs.pl -deconfig -force
    

    If you are deleting multiple nodes, then run this script on each node that you are deleting.

    If this is the last node in a cluster that you are deleting, then append the -lastnode option to the preceding command, as follows:

    # ./rootcrs.pl -deconfig -force -lastnode
    

    Notes:

    • If you do not use the -force option in the preceding command or the node you are deleting is not accessible for you to execute the preceding command, then the VIP resource remains running on the node. You must manually stop and remove the VIP resource using the following commands as root from any node that you are not deleting:

      # srvctl stop vip -i vip_name -f
      # srvctl remove vip -i vip_name -f
      

      In the preceding commands, vip_name is the VIP for the node to be deleted. If you specify multiple VIP names, then separate the names with commas and surround the list in double quotation marks (""). An example follows this note.

    • If the node you are deleting is not accessible, then you can skip steps 5 and 6.
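
    For example, assuming the VIP name node3-vip used in the node addition procedure, you might run the following commands as root from a node that you are not deleting:

      # srvctl stop vip -i node3-vip -f
      # srvctl remove vip -i node3-vip -f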

  4. From any node that you are not deleting, run the following command from the Grid_home/bin directory as root to delete the node from the cluster:

    # crsctl delete node -n node_to_be_deleted
    
  5. On the node you want to delete, run the following command as the user that installed Oracle Clusterware from the Grid_home/oui/bin directory where node_to_be_deleted is the name of the node that you are deleting:

    $ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={node_to_be_deleted}" CRS=TRUE -local
    
  6. On the node that you are deleting, run the runInstaller command as the user that installed Oracle Clusterware. Depending on whether you have a shared or non-shared Oracle home, complete one of the following procedures:

    • If you have a shared home, then run the following command from the Grid_home/oui/bin directory on the node you want to delete:

      $ ./runInstaller -detachHome ORACLE_HOME=Grid_home
      
    • For a non-shared home, deinstall the Oracle Clusterware home from the node that you want to delete by running the following command from the Grid_home/deinstall directory, where Grid_home is the path defined for the Oracle Clusterware home:

      $ ./deinstall -local
      

      Caution:

      If you do not specify the -local flag, then the command removes the Oracle Grid Infrastructure homes from every node in the cluster.
  7. On any node other than the node you are deleting, run the following command from the Grid_home/oui/bin directory where remaining_nodes_list is a comma-delimited list of the nodes that are going to remain part of your cluster:

    $ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home
    "CLUSTER_NODES={remaining_nodes_list}" CRS=TRUE
    
  8. Run the following CVU command to verify that the specified nodes have been successfully deleted from the cluster:

    $ cluvfy stage -post nodedel -n node_list [-verbose]
    

    See Also:

    "cluvfy stage -post nodedel" for more information about this CVU command

Adding and Deleting Cluster Nodes on Windows Systems

This section explains cluster node addition and deletion on Windows systems. This section includes the following topics:

  • Adding a Node to a Cluster on Windows Systems

  • Deleting a Cluster Node on Windows Systems

See Also:

Oracle Grid Infrastructure Installation Guide for more information about deleting an entire cluster

Adding a Node to a Cluster on Windows Systems

Ensure that you complete the prerequisites listed in "Prerequisite Steps for Adding Cluster Nodes" before adding nodes.

This procedure describes how to add a node to your cluster. This procedure assumes that:

  • There is an existing cluster with two nodes named node1 and node2

  • You are adding a node named node3

  • You have successfully installed Oracle Clusterware on node1 and node2 in a non-shared home, where Grid_home represents the successfully installed home

Note:

Do not use the procedures described in this section to add cluster nodes in configurations where the Oracle database has been upgraded from Oracle Database 10g release 1 (10.1) on Windows systems.

To add a node:

  1. Verify the integrity of the cluster and node3:

    C:\>cluvfy stage -pre nodeadd -n node3 [-fixup [-fixupdir fixup_dir]] [-verbose]
    

    You can specify the -fixup option and a directory into which CVU prints instructions to fix the cluster or node if the verification fails.

  2. On node1, go to the Grid_home\oui\bin directory and run the addNode.bat script, as follows:

    C:\>addNode.bat -silent "CLUSTER_NEW_NODES={node3}"
    "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}"
    

    You can alternatively specify the entries shown in Example 4-2 in a response file and run addNode.bat as follows:

    C:\>addNode.bat -silent -responseFile filename
    

    Example 4-2 Response File Entries for Adding a Node

    CLUSTER_NEW_NODES = {"node3"}
    CLUSTER_NEW_VIRTUAL_HOSTNAMES = {"node3-vip"}
    

    Command-line values always override response file values.

    See Also:

    Oracle Universal Installer and OPatch User's Guide for details about how to configure command-line response files
  3. Run the following command on the new node:

    C:\>Grid_home\install\utl\root.bat
    
  4. Run the following command to verify the integrity of the Oracle Clusterware components on all of the configured nodes, both the preexisting nodes and the nodes that you have added:

    C:\>cluvfy stage -post crsinst -n all [-verbose]
    

After you complete the procedure in this section for adding nodes, you can optionally extend Oracle Database with Oracle Real Application Clusters (Oracle RAC) components to the new nodes, making them members of an existing Oracle RAC database.

See Also:

Oracle Real Application Clusters Administration and Deployment Guide for more information about extending Oracle Database with Oracle RAC to new nodes

Deleting a Cluster Node on Windows Systems

This section describes how to delete a cluster node on Windows systems. This procedure assumes that Oracle Clusterware is installed on node1, node2, and node3, and that you are deleting node3 from the cluster.

Notes:

  • Oracle does not support using Oracle Enterprise Manager to delete nodes on Windows systems.

  • Oracle recommends that you back up your OCRs after you complete any node addition or deletion procedures.

To delete a cluster node on Windows systems:

  1. Delete the database homes on the nodes you want to remove as described in Oracle Real Application Clusters Administration and Deployment Guide.

  2. Before you delete a node, you must disable the Oracle Clusterware applications and services running on the node. On the node you want to delete, run the rootcrs.pl -delete script as a member of the Administrators group from the Grid_home\crs\install directory. If you are deleting multiple nodes, then run this script on each node that you are deleting.
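
    For illustration only, a minimal sketch of this step follows; it assumes that you use the Perl installation shipped in the Grid home to run the script, and the exact invocation in your environment may differ:

    C:\>cd Grid_home\crs\install
    C:\Grid_home\crs\install>Grid_home\perl\bin\perl rootcrs.pl -delete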

  3. On each node to delete (node3 in this case), run the following command from the Grid_home\oui\bin directory:

    C:\>setup.exe -updateNodeList ORACLE_HOME=Grid_home
    "CLUSTER_NODES={node3}" CRS=TRUE -local
    
  4. On node3, run the deinstall tool located in the %ORACLE_HOME%\deinstall directory to deinstall and deconfigure the Oracle Clusterware home, as follows:

    C:\>deinstall.bat -local
    
  5. On node1, or on any node that you are not deleting, run the following command from the Grid_home\oui\bin directory, where node_list is a comma-delimited list of nodes that are to remain part of the cluster:

    C:\>setup.exe -updateNodeList ORACLE_HOME=Grid_home
    "CLUSTER_NODES={node_list}" CRS=TRUE
    
  6. Run the following CVU command to verify that the specified nodes have been successfully deleted from the cluster:

    C:\>cluvfy stage -post nodedel -n node_list [-verbose]
    

    See Also:

    "cluvfy stage -post nodedel" for more information about this CVU command