Oracle® Collaboration Suite Administrator's Guide
10g Release 1 (10.1.1) for Windows or UNIX

Part Number B14476-03

13 Managing High-Availability Environments

This chapter describes how to manage high-availability environments in Oracle Collaboration Suite. Managing high-availability environments primarily consists of managing the nodes of Real Application Clusters (RAC) databases and of Oracle Collaboration Suite Applications.

Adding and Removing RAC Nodes

This section describes how to add and delete nodes and instances in Oracle RAC databases.

Overview of Node Addition Procedures

This section explains how to add nodes to clusters. The steps to add nodes to a cluster are as follows:

  • Set up the new nodes to be part of your cluster at the network level.

  • Extend the Cluster Ready Services (CRS) home from an existing CRS home to the new nodes.

  • Extend the Oracle database software with RAC components to the new nodes.

  • Finally, make the new nodes members of the existing cluster database.

Note:

If your clusterware supports it, you can add nodes on some UNIX-based platforms without stopping the existing nodes. Refer to your vendor-specific clusterware documentation for more information.

If the nodes that you are adding to your cluster do not have clusterware or Oracle software installed, then you must complete the following five steps. The procedures in these steps assume that you already have an operational UNIX-based or Windows-based RAC environment. The details of the steps appear in the following subsections.

Step 1: Connecting New Nodes to the Cluster

Complete the following procedures to connect the new nodes to the cluster and to prepare them to support your cluster database:

  • Making Physical Connections

  • Installing the Operating System

  • Creating Oracle Users

  • Checking the Installation

Making Physical Connections

Connect the hardware of the new nodes to the network infrastructure of your cluster. This includes establishing electrical connections, configuring network interconnects, configuring shared disk subsystem connections, and so on. Refer to your hardware vendor documentation for details about this step.

Installing the Operating System

Install a cloned image of the operating system that matches the operating system on the other nodes in your cluster. This includes installing required service patches and drivers. Refer to your hardware vendor documentation for details about this process.

Creating Oracle Users

As the root user on UNIX-based systems, create the Oracle users and groups using the same user IDs and group IDs as on the existing nodes. On Windows-based systems, perform the installation as an Administrator.
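
The following is a minimal sketch for a UNIX-based system. The group name dba, the user name oracle, and the numeric IDs 500/500 are assumptions; substitute the values reported by the id command on an existing node:

id oracle                                    # run on an existing node to find the IDs in use
/usr/sbin/groupadd -g 500 dba                # create the group with the same group ID
/usr/sbin/useradd -u 500 -g dba -m oracle    # create the user with the same user ID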

Checking the Installation

To verify that your installation is configured correctly, perform the following steps:

  1. Ensure that the new nodes can access the private interconnect. This interconnect must be properly configured before you can complete the procedures in Step 2: Extending Clusterware and Oracle Software to New Nodes.

  2. If you are not using a cluster file system, then determine the location in which your cluster software was installed on the existing nodes. Make sure that you have at least 250MB of free space in the same location on each of the new nodes to install the CRS software. In addition, ensure that you have enough free space on each new node to install the Oracle binaries.

  3. Ensure that user equivalence is established on the new nodes.

  4. Run the following platform-specific procedures:

    • On UNIX-based systems:

      Verify user equivalence to and from an existing node to the new nodes using rsh or ssh (a brief example appears at the end of this section).

    • On Windows-based systems:

      Make sure that you can run the following command from each of the existing nodes of your cluster, where host_name is the public network name of the new node:

      NET USE \\host_name\C$
      
      

      You have the required administrative privileges on each node if the operating system responds with the following message:

      Command completed successfully.
      
      

After completing the procedures in this section, your new nodes are connected to the cluster and configured with the required software to make them visible to the clusterware. Configure the new nodes as members of the cluster by extending the cluster software to the new nodes as described in Step 2: Extending Clusterware and Oracle Software to New Nodes.

Note:

Do not change a host name after CRS installation. This includes adding or deleting a domain qualification.
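
As a quick check of the user equivalence requirement in step 4, the following commands, run from an existing node, should complete without prompting for a password. The node name new_node1 is an example only; repeat the check in the opposite direction from the new node:

ssh new_node1 date
rsh new_node1 date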

Step 2: Extending Clusterware and Oracle Software to New Nodes

The following topics describe how to add new nodes to the clusterware and to the Oracle database software layers using Oracle Universal Installer.

  • Adding Nodes at the Vendor Clusterware Layer (UNIX Only)

  • Adding Nodes at the Oracle Clusterware Layer (UNIX and Windows)

Adding Nodes at the Vendor Clusterware Layer (UNIX Only)

If you are using a Windows-based system, then skip this subsection and proceed to the next section. For UNIX-based systems, add the new nodes at the clusterware layer according to the vendor clusterware documentation. For systems using shared storage for the CRS home, ensure that the existing clusterware is accessible by the new nodes. Also ensure that the new nodes can be brought online as part of the existing cluster. Then proceed to the next section to add the nodes at the Oracle clusterware layer.

Adding Nodes at the Oracle Clusterware Layer (UNIX and Windows)

On all platforms, complete the following steps. Oracle Universal Installer requires access to the private interconnect that you checked as part of the installation validation in Step 1.

  1. On one of the existing nodes, navigate to the CRS home/oui/bin directory on UNIX-based systems or to the CRS home\oui\bin directory on Windows-based systems. On UNIX, run the addNode.sh script and on Windows run the addNode.bat script to start Oracle Universal Installer.

  2. Oracle Universal Installer runs in the add node mode and the Oracle Universal Installer Welcome page appears. Click Next and the Specify Cluster Nodes for Node Addition page appears.

  3. The upper table on the Specify Cluster Nodes for Node Addition page shows the existing nodes associated with the CRS home from which you launched Oracle Universal Installer. Use the lower table to enter the public and private node names of the new nodes.

  4. If you are using vendor clusterware, then the public node names automatically appear in the lower table. Click Next and Oracle Universal Installer verifies connectivity on the existing nodes and on the new nodes. The verifications that Oracle Universal Installer performs include determining if the following conditions are true:

    • The nodes are up

    • The nodes are accessible by way of the network

    • The user has write permission to create the CRS home on the new nodes

    • The user has write permission to the Oracle Universal Installer inventory in the oraInventory directory on UNIX or Inventory directory on Windows

  5. If Oracle Universal Installer detects that the new nodes do not have an inventory location, then:

    • On UNIX platforms Oracle Universal Installer displays a dialog box asking you to run the oraInstRoot.sh script on the new nodes.

    • On Windows platforms Oracle Universal Installer automatically updates the inventory location in the Registry key.

    If any verifications fail, then Oracle Universal Installer redisplays the Specify Cluster Nodes for Node Addition page with a Status column in both tables indicating errors. Correct the errors or deselect the nodes that have errors and proceed. However, you cannot deselect existing nodes. You must correct problems on nodes that are already part of your CRS cluster before you can proceed with node addition. If all the checks succeed, then Oracle Universal Installer displays the Node Addition Summary page.

    Note:

    Oracle recommends that you install CRS on every node in the cluster that has vendor clusterware installed.
  6. The Node Addition Summary page displays the following information about the products that are installed in the CRS home that you are extending to the new nodes:

    • The source for the add node process, which in this case is the CRS home

    • The private node names that you entered for the new nodes

    • The new nodes that you entered

    • The required and available space on the new nodes

    • The installed products listing the products that are already installed on the existing CRS home

  7. The Cluster Node Addition Progress page shows the status of the cluster node addition process. The table on this page has two columns showing the phase of the node addition process and the phase's status according to the following platform-specific content.

    On UNIX-based systems, this page shows the following four Oracle Universal Installer phases:

    • Instantiate Root Scripts: The root script rootaddnode.sh is instantiated with the public and private node names that you entered on the Cluster Node Addition page.

    • Copy the CRS Home to the New Nodes: CRS home is copied to the new nodes unless the CRS home is on a cluster file system.

    • Run rootaddnode.sh and root.sh: A dialog box is displayed prompting you to run rootaddnode.sh on the local node from which you are running Oracle Universal Installer. Then you are prompted to run root.sh on the new nodes.

    • Save Cluster Inventory: The node list, associated with the CRS home and its inventory, is updated.

    On Windows-based systems, this page shows the following three Oracle Universal Installer phases:

    • Copy CRS Home to New Nodes: CRS home is copied to the new nodes unless the CRS home is on the Oracle Cluster File System.

    • Performs Oracle Home Setup: The Registry entries are updated for the new nodes, services are created, and folder entries are created.

    • Save Cluster Inventory: The node list, associated with the CRS home and its inventory, is updated.

    For all platforms, the Status column of the Cluster Node Addition Progress page displays In Progress while the phase is in progress, Suspended when the phase is pending execution, and Succeeded after the phase completes. After Oracle Universal Installer displays the End of Node Addition page, click Exit to end the Oracle Universal Installer session.

  8. On Windows-based systems, run the following command to identify the node names and node numbers that are currently in use:

    CRS home\bin\olsnodes -n
    
    

    Run the crssetup.exe command using the next available node names and node numbers to add CRS information for the new nodes. Use the following syntax for crssetup.exe, where I is the first new node number, nodeI through nodeI+n is the list of nodes that you are adding, nodeI-number through nodeI+n-number are the node numbers assigned to the new nodes, and pnI through pnI+n is the list of private network names for the new nodes:

    CRS home\bin\crssetup.exe add
    -nn nodeI,nodeI-number,nodeI+1,nodeI+1-number,...nodeI+n,nodeI+n-number
    -pn pnI,nodeI-number,pnI+1,nodeI+1-number,...pnI+n,nodeI+n-number
    
    crssetup.exe add -nn nodeI,nodeI-number -pn pnI,nodeI-number
    
    

    These are the private network names or Internet Protocol (IP) addresses that you entered in Step 3 of this procedure in the Specify Cluster Nodes for Node Addition page. For example:

    crssetup.exe add -nn node3,3,node4,4 -pn node3_pvt,3,node4_pvt,4
    
    

    On all platforms, run the racgons utility from the bin subdirectory of the CRS home to configure the Oracle Notification Services port number as follows:

    racgons add_config new_node_name:4948
    
    

    After you have completed the procedures in this section for adding nodes at the Oracle clusterware layer, you have successfully extended the CRS home from your existing CRS home to the new nodes. Proceed to Step 3: Preparing Storage for RAC on New Nodes to prepare storage for RAC on the new nodes.

Step 3: Preparing Storage for RAC on New Nodes

To extend an existing RAC database to your new nodes, configure storage for the new nodes so that the storage is the same as on the existing nodes. For example, the Oracle Cluster Registry (OCR) and the voting disk must be accessible by the new nodes using the same path as the other nodes use. In addition, the OCR and voting disk devices must have the same permissions on the new node as those on the existing nodes. Prepare the same type of storage on the new nodes as you are using on the other nodes in the RAC environment that you want to extend as follows:

  • Automatic Storage Management (ASM)

    If you are using ASM, then make sure that the new nodes can access the ASM disks with the same permissions as the existing nodes.

  • Oracle Cluster File System (OCFS)

    If you are using OCFS, then make sure that the new nodes can access the cluster file systems in the same way that the other nodes access them.

  • Vendor Cluster File Systems

    If your cluster database uses vendor cluster file systems, then configure the new nodes to use the vendor cluster file systems. Refer to the vendor clusterware documentation for the pre-installation steps for your platform.

  • Raw Device Storage

    If your cluster database uses raw devices, then prepare the new raw devices by following the procedures described in the following section.

Raw Device Storage Preparation for New Nodes

To prepare raw device storage on the new nodes, you need at least two new disk partitions to accommodate the redo logs for each new instance. Make these disk partitions the same size as the redo log partitions that you configured for the instances of existing nodes. Also create an additional logical partition for the undo tablespace for automatic undo management.

Depending on your operating system, you can create symbolic links to your raw devices. Optionally, on all platforms, you can create a raw device mapping file and set the DBCA_RAW_CONFIG environment variable so that it points to the raw device mapping file.
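
The following is a minimal sketch of the mapping-file approach. The entry names, raw device paths, and file location are assumptions and must match the devices that you actually prepared for the new instance:

# Hypothetical excerpt from a raw device mapping file for a third instance
redo3_1=/dev/raw/raw_db_redo3_1
redo3_2=/dev/raw/raw_db_redo3_2
undotbs3=/dev/raw/raw_db_undotbs3

# Point DBCA at the mapping file before starting it (Bourne shell syntax)
DBCA_RAW_CONFIG=/u01/app/oracle/admin/db/dbca_raw_config.txt
export DBCA_RAW_CONFIG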

Configure Raw Storage on UNIX-Based Systems

Use your vendor-supplied tools to configure the required raw storage.

Configure Raw Partitions on Windows-Based Systems

Perform the following steps from one of the existing nodes of the cluster:

  1. Create or identify an extended partition.

  2. Click inside an unallocated part of the extended partition.

  3. Select Create from the Partition menu. A dialog box appears in which you should enter the size of the partition. Ensure you use the same sizes as those you used on your existing nodes.

  4. Click the newly created partition and select Assign Drive Letter from the Tool menu.

  5. Select Don't Assign Drive Letter, and click OK.

  6. Repeat steps 2 through 5 for the second and any additional partitions.

  7. Select Commit Changes Now from the Partition menu to save the new partition information.

  8. Create symbolic links so that the existing nodes and the new nodes can recognize the new partitions that you just created, and so that the new nodes can recognize the pre-existing symbolic links to logical drives. Follow these steps:

    1. Start the Object Link Manager (OLM) by entering the following command from the CRS home\bin directory on one of the existing nodes:

      GUIOracleOBJManager
      
      
    2. The OLM starts and automatically detects the symbolic links to the logical drives and displays them in the graphical user interface of OLM.

    3. Recall the disk and partition numbers for the partitions that you created in the previous section. Look for the disk and partition numbers in the OLM page and perform the following tasks:

      • Right-click next to the box under the New Link column and enter the link name for the first partition.

      • Repeat the previous step for the second and any additional partitions.

        For example, if your RAC database name is db and it consists of two instances running on two nodes and you are adding a third instance on the third node, then your link names for your redo logs should be db_redo3_1, db_redo3_2, and so on.

      • To enable automatic undo management for the instance of a new node, enter the link name for the logical partition for the undo tablespace that you created in the previous section. For example, if your RAC database name is db and if it has two instances running on two nodes and you are adding a third instance on a third node, then your link name for the undo tablespace should be db_undotbs3.

      • Select Commit from the Options menu. This creates the new links on the current node.

      • Select Sync Nodes from the Options menu. This makes the new links visible to all of the nodes in the cluster, including the new nodes.

      • Select Exit from the Options menu to exit Object Link Manager.

After completing the procedures in this section, you have configured your cluster storage so that the new nodes can access the Oracle software. Additionally, the existing nodes can access the new nodes and instances. Use Oracle Universal Installer as described in the procedures in Step 4 to configure the new nodes at the RAC database layer.

Step 4: Adding Nodes at the Oracle RAC Database Layer

To add nodes at the Oracle RAC database layer, run Oracle Universal Installer in add node mode to configure your new nodes. If you have multiple Oracle homes, then perform the following steps for each Oracle home that you want to include on the new nodes:

  1. On an existing node, from the $ORACLE_HOME/oui/bin directory on UNIX-based systems, run the addNode.sh script. From the %ORACLE_HOME%\oui\bin directory on Windows-based systems, run the addNode.bat script. This starts Oracle Universal Installer in the add node mode and displays the Oracle Universal Installer Welcome page. Click Next on the Welcome page and Oracle Universal Installer displays the Specify Cluster Nodes for Node Addition page.

  2. The Specify Cluster Nodes for Node Addition page has a table showing the existing nodes associated with the Oracle home from which you launched Oracle Universal Installer. A node selection table appears on the bottom of this page showing the nodes that are available for addition. Select the nodes that you want to add and click Next.

  3. Oracle Universal Installer verifies connectivity and performs availability checks on both the existing nodes and on the nodes that you want to add. Some of the checks performed determine whether:

    • The nodes are up

    • The nodes are accessible by way of the network

    • The user has write permission to create the Oracle home on the new nodes

    • The user has write permission to the Oracle Universal Installer inventory in the oraInventory directory on UNIX or the Inventory directory on Windows on the existing nodes and on the new nodes

  4. If the new nodes do not have an inventory set up, then on UNIX-based systems Oracle Universal Installer displays a dialog box asking you to run the oraInstRoot.sh script on the new nodes. On Windows-based systems Oracle Universal Installer automatically updates the Registry entries for the inventory location. If any of the other checks fail, then fix the problem and proceed or deselect the node that has the error and proceed. You cannot deselect existing nodes. You must correct problems on the existing nodes before proceeding with node addition. If all the checks succeed then Oracle Universal Installer displays the Node Addition Summary page.

  5. The Node Addition Summary page has the following information about the products that are installed in the Oracle home that you are going to extend to the new nodes:

    • The source for the add node process, which in this case is the Oracle home

    • The existing nodes and new nodes

    • The new nodes that you selected

    • The required and available space on the new nodes

    • The installed products listing all the products that are already installed in the existing Oracle home

    Click Finish and Oracle Universal Installer displays the Cluster Node Addition Progress page.

  6. The Cluster Node Addition Progress page shows the status of the cluster node addition process. The table on this page has two columns showing the phase of the node addition process and the phase's status according to the following platform-specific content.

    On UNIX-based systems, the Cluster Node Addition Progress page shows the following four Oracle Universal Installer phases:

    • Instantiate Root Scripts: The root.sh script in the Oracle home is instantiated by copying it from the local node

    • Copy the Oracle Home to the New Nodes: The entire Oracle home is copied from the local node to the new nodes unless the Oracle home is on a cluster file system

    • Run root.sh: A dialog box is displayed prompting you to run root.sh on the new nodes

    • Save Cluster Inventory: The node list, associated with the Oracle home and its inventory, is updated

    On Windows-based systems, the Cluster Node Addition Progress page shows the following three Oracle Universal Installer phases:

    • Copy the Oracle Home To New Nodes: The entire Oracle home is copied to the new nodes unless the Oracle home is on a cluster file system

    • Performs Oracle Home Setup: The Registry entries are updated for the new nodes, services are created, and folder entries are created

    • Save Cluster Inventory: The node list, associated with the Oracle home and its inventory, is updated

    For all platforms, the Status column of the Cluster Node Addition Progress page displays Succeeded if the phase completes, In Progress if the phase is in progress, and Suspended when the phase is pending execution. After Oracle Universal Installer displays the End of Node Addition page, click Exit to end the Oracle Universal Installer session.

  7. On UNIX-based systems only, run the root.sh script.

  8. Run the Virtual IP Configuration Assistant (VIPCA) utility from the bin subdirectory of the Oracle home using the -nodelist option with the following syntax that identifies the complete set of nodes that are now part of your RAC database beginning with Node1 and ending with NodeN:

    vipca -nodelist Node1,Node2,Node3,...NodeN
    
    

    Note:

    You must run the VIPCA utility as root user on UNIX-based systems, and as a user with Administrative privileges on Windows-based systems.
  9. If the private interconnect interface names on the new nodes are not the same as the interconnect names on the existing nodes, then change the private interconnect configuration for the new nodes as described in this step; otherwise, proceed to step 10. Change the configuration by executing the oifcfg utility with the setif option from the bin directory of the Oracle home, where subnet is the subnet for the private interconnect of the RAC database to which you are adding nodes. Specify the -node nodename option to enter node-specific configuration information for the new nodes. The syntax for the oifcfg commands is as follows:

    oifcfg iflist
    
    oifcfg setif {-node nodename | -global} {if_name/subnet:if_type}...
    
    oifcfg getif [-node nodename | -global] [-if if_name[/subnet] [-type if_type]]
    
    oifcfg delif [-node nodename | -global] [if_name[/subnet]]
    
    oifcfg [-help]
    
    

    A Cluster Ready Services (CRS) installation issues the oifcfg command as in the following example:

    oifcfg setif -global eth0/146.56.76.0:public eth1/192.0.0.0:cluster_interconnect
    
    

    This sets both networks to global. Therefore, you do not need to run the oifcfg command manually after you add a node unless the network interfaces differ.

  10. Add a listener to the new node by running the Net Configuration Assistant (NetCA).

After completing the procedures in this section, you have defined the new nodes at the cluster database layer. You can now add database instances to the new nodes as described in Step 5.

Step 5: Adding Database Instances to New Nodes

Run the following procedures for each new node to add instances:

  1. Start the Database Configuration Assistant (DBCA) by entering dbca at the system prompt from the bin directory in the $ORACLE_HOME on UNIX. On Windows-based systems, select Start, Programs, Oracle - HOME_NAME, Configuration and Migration Tools, Database Configuration Assistant.

    The DBCA displays the Welcome page for RAC. Click Help on any DBCA page for additional information.

  2. Select Real Application Clusters database, click Next, and the DBCA displays the Operations page.

  3. Select Instance Management, click Next, and the DBCA displays the Instance Management page.

  4. Select Add Instance and click Next. The DBCA displays the List of Cluster Databases page that shows the databases and their current status, such as ACTIVE or INACTIVE.

  5. From the List of Cluster Databases page, select the active RAC database to which you want to add an instance. If your user ID is not operating-system authenticated, then the DBCA prompts you for a user ID and password for a database user that has SYSDBA privileges. If the DBCA prompts you, then enter a valid user ID and password and click Next. The DBCA displays the List of Cluster Database Instances page showing the names of the existing instances for the RAC database that you selected.

  6. Click Next to add a new instance and the DBCA displays the Adding an Instance page.

  7. On the Adding an Instance page, enter the instance name in the field at the top of the page if the instance name that the DBCA provides does not match your existing instance name sequence. Then select the new node name from the list, click Next, and the DBCA displays the Services page.

  8. Enter the services information for the instance of the new node, click Next, and the DBCA displays the Instance Storage page.

  9. If you are using raw devices or raw partitions, then on the Instance Storage page select the Tablespaces folder and expand it. Then select the undo tablespace storage object and a dialog box appears on the right-hand side. Change the default data file name to the raw device name for the tablespace.

  10. If you are using raw devices or raw partitions or if you want to change the default redo log group file name, then on the Instance Storage page select and expand the Redo Log Groups folder. For each redo log group number that you select, the DBCA displays another dialog box.

    • For UNIX-based systems, enter the raw device name.

    • On Windows-based systems, enter the symbolic link name.

  11. If you are using a cluster file system, then click Finish on the Instance Storage page. If you are using raw devices on UNIX-based systems or disk partitions on Windows-based systems, then repeat step 10 for all of the other redo log groups, click Finish, and the DBCA displays a Summary dialog box.

  12. Review the information on the Summary dialog box and click OK, or click Cancel to end the instance addition operation. If you click OK, then the DBCA displays a progress dialog box while it performs the instance addition operation. When the DBCA completes the instance addition operation, the DBCA displays a dialog box asking whether you want to perform another operation.

  13. Click No and exit the DBCA, or click Yes to perform another operation. If you click Yes, then the DBCA displays the Operations page.

    After you have completed the procedures in this section, the DBCA has successfully added the new instance to the new node and completed the following steps:

    • Created and started an ASM instance on each new node if the existing instances were using ASM

    • Created a new database instance on each new node

    • For Windows-based systems, created and started the required services

    • Created and configured high-availability components

    • Configured and started node applications for the GSD, Oracle Net Services listener, and Enterprise Manager agent

    • Created the Oracle Net configuration

    • Started the new instance

    • Created and started services if you entered services information on the Services Configuration page

After adding the instances to the new nodes using the steps described in this section, perform any needed service configuration procedures.

Updating Path Environment Variables on New Nodes on Windows-Based Systems

When you add a new node, you must update the Path environment variable on each new node on Windows-based systems.

  1. Navigate to Start, Settings, Control Panel, System, Advanced, Environment Variables.

  2. In the System variables dialog box, select the Path variable and ensure that the value for the Path variable contains Oracle home\BIN, where Oracle home is your new Oracle home. If the variable does not contain this value, then click Edit and add this value to the start of the Path variable definition in the Edit System Variable dialog box.

  3. Click OK in the Environment Variables page, then click OK in the System Properties page, and then close the Control Panel.

Connecting to iSQL*Plus after Adding a Node on Windows-Based Platforms

After you add a node to a RAC database on Windows platforms, you must manually create the following directories in the ORACLE_BASE\ORACLE_HOME\oc4j\j2ee\isqlplus directory before you can run iSQL*Plus on the new node:

  • connectors

  • log

  • persistence

  • tldcache

After you create these directories, you can start iSQL*Plus either by running isqlplusctl start at the command prompt or by starting iSQL*Plus from the Windows Control Panel Services tool. If you try to connect to the iSQL*Plus URL without creating these directories, then you will not be able to connect.
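
The following Windows commands sketch this setup; they assume that the ORACLE_HOME environment variable points to the new Oracle home on the new node:

mkdir %ORACLE_HOME%\oc4j\j2ee\isqlplus\connectors
mkdir %ORACLE_HOME%\oc4j\j2ee\isqlplus\log
mkdir %ORACLE_HOME%\oc4j\j2ee\isqlplus\persistence
mkdir %ORACLE_HOME%\oc4j\j2ee\isqlplus\tldcache
isqlplusctl start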

Adding Nodes that Already Have Clusterware and Oracle Software to a Cluster

To add nodes to a cluster that already have clusterware and Oracle software installed on them, you must configure the new nodes with the Oracle software that is on the existing nodes of the cluster. To do this, run Oracle Universal Installer twice: once for the clusterware layer and once for the database layer, as described in the following procedures:

  1. Add new nodes at the Oracle clusterware layer by running Oracle Universal Installer from the CRS home on an existing node according to the following platform-specific procedures.

    • On UNIX run the following command:

      CRS home/oui/bin/addNode.sh -noCopy
      
      
    • On Windows run the following command:

      CRS home\oui\bin\addNode.bat -noCopy
      
      
  2. Add new nodes at the Oracle software layer by running Oracle Universal Installer from the Oracle home as follows:

    • On UNIX, run the following command:

      $ORACLE_HOME/oui/bin/addNode.sh -noCopy
      
      
    • On Windows, run the following command:

      %ORACLE_HOME%\oui\bin\addNode.bat -noCopy
      
      

    In the -noCopy mode, Oracle Universal Installer performs all add node operations except for the copying of software to the new nodes.

Note:

Oracle recommends that you back up your voting disk and OCR files after you complete the node addition process.

Adding a Node on a Shared Oracle Home

If you are using Oracle Universal Installer to add a node on a shared Oracle home, then an error similar to the following may appear:

Alert: The following file(s) have been modified on the disk:
y:\oracle\rac\inventory\ContentsXML\comps.xml y:\oracle\rac\inventory
\ContentsXML\libs.xml
Proceeding with the installation may corrupt some important data. You should
stop this session and restart OUI. Do you want to stop this session now?

You can ignore this error. Click No and continue.

Deleting Instances from Real Application Clusters Databases

The procedures in this section explain how to use the DBCA to delete an instance from a RAC database. To delete an instance:

  1. Start the DBCA on a node other than the node that hosts the instance that you want to delete. On the DBCA Welcome page select Oracle Real Application Clusters Database, click Next, and the DBCA displays the Operations page.

  2. On the DBCA Operations page, select Instance Management, click Next, and the DBCA displays the Instance Management page.

  3. On the Instance Management page, select Delete Instance, click Next, and the DBCA displays the List of Cluster Databases page.

  4. Select a RAC database from which to delete an instance. If your user ID is not operating-system authenticated, then the DBCA also prompts you for a user ID and password for a database user that has SYSDBA privileges. If the DBCA prompts you for this, then enter a valid user ID and password. Click Next and the DBCA displays the List of Cluster Database Instances page. The List of Cluster Database Instances page shows the instances associated with the RAC database that you selected and the status of each instance.

  5. Select a remote instance to delete and click Finish.

  6. If you have services assigned to this instance, then the DBCA Services Management page appears. Use this feature to reassign services from this instance to other instances in the cluster database.

  7. Review the information about the instance deletion operation on the Summary page and click OK. Otherwise, click Cancel to cancel the instance deletion operation. If you click OK, then the DBCA displays a Confirmation dialog box.

  8. Click OK on the Confirmation dialog box to proceed with the instance deletion operation and the DBCA displays a progress dialog box showing that the DBCA is performing the instance deletion operation. During this operation, the DBCA removes the instance and the Oracle Net configuration of the instance. When the DBCA completes this operation, the DBCA displays a dialog box asking whether you want to perform another operation.

  9. Click No and exit the DBCA or click Yes to perform another operation. If you click Yes, then the DBCA displays the Operations page.

At this point, you have accomplished the following:

  • Deregistered the selected instance from its associated Oracle Net Services listeners

  • Deleted the selected database instance from the instance's configured node

  • Deleted the selected instance's services for Windows-based systems

  • Removed the Oracle Net configuration

  • Deleted the Optimal Flexible Architecture (OFA) directory structure from the instance's configured node

Deleting Nodes from Oracle Clusters on UNIX-Based Systems

Use the following procedures to delete nodes from Oracle clusters on UNIX-based systems:

  1. If there are instances on the node that you want to delete, then run the procedures in the Deleting Instances from Real Application Clusters Databases before executing these procedures. If you are deleting more than one node, then delete the instances from all the nodes that you are going to delete.

  2. If you use ASM, then perform the procedures in the following section, ASM Instance Clean-Up Procedures for Node Deletion.

  3. If this is the Oracle home from which the node-specific listener named LISTENER_nodename runs, then use NetCA to remove this listener and its CRS resources. If necessary, re-create this listener in another home.

  4. After you have deleted the instances from the nodes that you want to delete, delete the node applications for each node by running the following command from the Oracle home (not the CRS home) where node1, node2 are the nodes that you are removing from your cluster:

    rootdeletenode.sh node1,node2
    
    
  5. On the same node that you are deleting, run the command runInstaller -updateNodeList ORACLE_HOME=Home location CLUSTER_NODES=node1,node2,...nodeN, where node1 through nodeN is a comma-delimited list of the nodes that remain in the cluster. This list must exclude the nodes that you are deleting. The runInstaller command is located in the $ORACLE_HOME/oui/bin directory. Executing this command does not launch an installer GUI. (A worked example appears at the end of this procedure.)

  6. If you are not using a cluster file system for the Oracle home, then on the node that you are deleting, remove the Oracle database software by executing the rm command. Make sure that you are in the correct Oracle home of the node that you are deleting when you run the rm command. Run this command on all the nodes that you are deleting.

  7. On the node that you are deleting, run the command CRS Home/install/rootdelete.sh to disable the CRS applications that are on the node. Only run this command once and use the nosharedhome argument if you are using a local file system. The default for this command is sharedhome which prevents you from updating the permissions of local files such that they can be removed by the oracle user.

    If the ocr.loc file is on a shared file system, then run the command CRS home/install/rootdelete.sh remote sharedvar. If the ocr.loc file is not on a shared file system, then run the CRS home/install/rootdelete.sh remote nosharedvar command. If you are deleting more than one node from your cluster, then repeat this step on each node that you are deleting.

  8. Run CRS Home/install/rootdeletenode.sh on any remaining node in the cluster to delete the nodes from the Oracle cluster and to update the Oracle Cluster Registry (OCR). If you are deleting multiple nodes, then run the command CRS Home/install/rootdeletenode.sh node1,node1-number,node2,node2-number,... nodeN,nodeN-number where node1 through nodeN is a list of the nodes that you want to delete, and node1-number through nodeN-number represents the node number. To determine the node number of any node, run the command CRS Home/bin/olsnodes -n. To delete only one node, enter the node name and number of the node that you want to delete.

  9. For example, to delete a single node, run CRS Home/install/rootdeletenode.sh node1,node1-number.

  10. On the same node and as the oracle user, run the command CRS home/oui/bin/runInstaller -updateNodeList ORACLE_HOME=CRS home CLUSTER_NODES=node1,node2,... nodeN where node1 through nodeN is a comma-delimited list of nodes that are remaining in the cluster.

  11. If you are not using a cluster file system, then on the node that you are deleting, remove the Oracle CRS software by executing the rm command. Make sure that you run the rm command from the correct Oracle CRS home. Run the rm command on every node that you are deleting.
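
To make these commands concrete, the following is a minimal sketch that assumes you are deleting node3 (node number 3), that node1 and node2 remain in the cluster, and that the CRS_HOME variable (an assumption used here for readability) points to the CRS home:

# Step 5, on node3, as the oracle user
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME CLUSTER_NODES=node1,node2

# Step 8, from a remaining node, as root
$CRS_HOME/bin/olsnodes -n
$CRS_HOME/install/rootdeletenode.sh node3,3

# Step 10, on the same remaining node, as the oracle user
$CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$CRS_HOME CLUSTER_NODES=node1,node2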

ASM Instance Clean-Up Procedures for Node Deletion

Perform the following procedure to remove the ASM instances:

  1. If this is the Oracle home from which the ASM instance runs, then remove the ASM configuration by completing the following steps. Run the command srvctl stop asm -n node for all nodes on which this Oracle home exists. Then run the command srvctl remove asm -n node for all nodes on which this Oracle home exists. (An example appears after this list.)

  2. If you are using a cluster file system for your ASM Oracle home, then run the following commands on the local node:

    rm -r $ORACLE_BASE/admin/+ASM
    rm -f $ORACLE_HOME/dbs/*ASM*
    
    
  3. If you are not using a cluster file system for your ASM Oracle home, then run the rm commands shown in the previous step on each node on which the Oracle home exists.

  4. Remove oratab entries beginning with +ASM.
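
For example, the commands in step 1 take the following form; the node name node3 is illustrative only:

srvctl stop asm -n node3
srvctl remove asm -n node3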

Note:

Oracle recommends that you back up your voting disk and OCR files after you complete the node deletion process.

Deleting Nodes from Oracle Clusters on Windows-Based Platforms

The following procedures are used to delete nodes from Oracle clusters on Windows-based platforms.

Perform the following steps on a node other than the node you want to delete:

  1. Use the Database Configuration Assistant (DBCA) to delete the instance.

  2. Use NetCA to delete the listener.

  3. If the node that you are deleting has an ASM instance, then delete the ASM instance using the srvctl stop asm and srvctl remove asm commands.

  4. Run the command srvctl stop nodeapps -n nodename, where nodename is the name of the node to be deleted, to stop the node applications.

  5. Run the command srvctl remove nodeapps -n nodename, where nodename is the name of the node to be deleted, to remove the node applications.

  6. Stop iSQL*Plus if it is running.

  7. Run the command setup.exe -updateNodeList ORACLE_HOME=Oracle_home ORACLE_HOME_NAME=Oracle_home_name CLUSTER_NODES=remaining nodes, where remaining nodes is a list of the nodes that are to remain part of the cluster. (A worked example appears after this list.)
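
As a worked example of this sequence, assume that node3 is being deleted, that node1 and node2 remain, and that the Oracle home path and name shown are placeholders for your own values; run the commands from a remaining node:

srvctl stop asm -n node3
srvctl remove asm -n node3
srvctl stop nodeapps -n node3
srvctl remove nodeapps -n node3
setup.exe -updateNodeList ORACLE_HOME=C:\oracle\product\10.1.0\db_1 ORACLE_HOME_NAME=OraDb10g_home1 CLUSTER_NODES=node1,node2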

Perform the following steps on the deleted RAC node:

  1. Run the command setup.exe -updateNodeList -local -noClusterEnabled ORACLE_HOME=Oracle_home ORACLE_HOME_NAME=Oracle_home_name CLUSTER_NODES="". Note that the CLUSTER_NODES="" entry does not require a value. If you delete more than one node, then you must run this command on every deleted node to remove the Oracle home if you have a non-shared Oracle home (non-cluster file system) installation.

  2. On the same node, delete the Windows Registry entries and ASM services using Oradim.

  3. From the deleted RAC node, run the command Oracle_home\oui\bin\setup.exe to start Oracle Universal Installer. Select Deinstall Products and select the Oracle home that you want to de-install.

  4. To delete the CRS node, run the command crssetup del -nn node_name,node_number from a remaining node, where node_name is the name of the deleted node and node_number is its node number. (A worked example appears after this list.)

  5. Then run the command setup.exe -updateNodeList ORACLE_HOME=CRS home ORACLE_HOME_NAME=CRS home name CLUSTER_NODES=remaining nodes where remaining nodes is a list of the nodes that are to remain in the cluster.

  6. Then on the deleted CRS node, run the command setup.exe -updateNodeList -local -noClusterEnabled ORACLE_HOME=CRS home ORACLE_HOME_NAME=CRS home name CLUSTER_NODES=""

  7. If the home is not shared, then manually remove the Oracle home from the deleted node, and then manually remove the HKLM\Software\Oracle Registry keys and the Oracle services.

  8. After adding or deleting nodes from your Oracle Database 10g with RAC environment, and after you are sure that your system is functioning properly, make a backup of the contents of the voting disk using the dd.exe utility. The dd.exe utility is part of the MKS toolkit.
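
The following sketch ties these commands together. It assumes that the deleted node is node3 (node number 3), that node1 and node2 remain, and that the home paths and names shown are placeholders for your own values:

rem On the deleted node node3 (non-shared Oracle home)
setup.exe -updateNodeList -local -noClusterEnabled ORACLE_HOME=C:\oracle\product\10.1.0\db_1 ORACLE_HOME_NAME=OraDb10g_home1 CLUSTER_NODES=""

rem On a remaining node, remove node3 from CRS
crssetup del -nn node3,3
setup.exe -updateNodeList ORACLE_HOME=C:\oracle\product\10.1.0\crs ORACLE_HOME_NAME=OraCr10g_home1 CLUSTER_NODES=node1,node2

rem On the deleted node node3
setup.exe -updateNodeList -local -noClusterEnabled ORACLE_HOME=C:\oracle\product\10.1.0\crs ORACLE_HOME_NAME=OraCr10g_home1 CLUSTER_NODES=""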

ASM Instance Cleanup Procedures after Node Deletion on Windows-Based Platforms

Node deletion on Windows-based systems requires the following additional steps to remove the ASM instances:

  1. If this is the Oracle home from which the node-specific listener named LISTENER_nodename runs, then use NetCA to remove this listener and its CRS resources. If necessary, re-create this listener in another home.

  2. If this is the Oracle home from which the ASM instance runs, then remove the ASM configuration by running the following command for all nodes on which this Oracle home exists:

    srvctl stop asm -n node
    
    

    Then run the following command for the nodes that you are removing:

    srvctl remove asm -n node
    
    
  3. If you are using a cluster file system for your ASM Oracle home, then run the following commands on the local node:

    rd -s -q %ORACLE_BASE%\admin\+ASM
    del %ORACLE_HOME%\database\*ASM*
    
    
  4. If you are not using a cluster file system for your ASM Oracle home, then run the rd and del commands shown in the previous step on each node on which the Oracle home exists.

  5. Run the following command on each node that has an ASM instance:

    oradim -delete -asmsid +ASMnode_number
    

    Note:

    Oracle recommends that you back up your voting disk and OCR files after you complete the node deletion process.

Adding or Deleting Nodes from Oracle Collaboration Suite Database

This section describes the steps to perform after adding nodes to or deleting nodes from the Oracle Collaboration Suite Database. The steps are as follows:

Step 1: Modify the RAC Database Connect String in Oracle Internet Directory

The steps to modify the RAC database connect string in Oracle Internet Directory to reflect the new node list are as follows:

  1. On the infrastructure machine, open $ORACLE_HOME/bin/oidadmin and delete the cn=OracleContext, cn=<RAC database name> entry under Entry Management. You must delete its subentries one by one first.

  2. On the RAC database system, populate $ORACLE_HOME/install/OCSdbSchemaReg.ini with the correct values after the addition or deletion.

    You can use your original OCSdbSchemaReg.ini file, but you must modify the our $hostList line appropriately to reflect your new node configuration before running the script in step 3. For example, before any changes are made, the node list in OCSdbSchemaReg.ini is as follows:

    our $hostList = "rac1vip1.us.oracle.com:1521,rac2vip1.us.oracle.com:1521,rac3vip1.us.oracle.com:1521";
    
    

    After adding node rac4vip1.us.oracle.com, the node list in OCSdbSchemaReg.ini is as follows:

    our $hostList = "rac1vip1.us.oracle.com:1521,rac2vip1.us.oracle.com:1521,rac3vip1.us.oracle.com:1521,rac4vip1.us.oracle.com:1521";
    
    

    After deleting node rac3vip1.us.oracle.com, the node list in OCSdbSchemaReg.ini is as follows:

    our $hostList = "rac1vip1.us.oracle.com:1521,rac2vip1.us.oracle.com:1521";
    
    
  3. Next, run OCSdbSchemaReg.sh -f OCSdbSchemaReg.ini. This puts the new, correct connect string in Oracle Internet Directory.

Step 2: Modify the Crawler's Connect String Through the Search Admin Application

When RAC uses the Cluster File System (CFS), the Ultra Search crawler can be launched from any of the RAC nodes, as long as at least one RAC node is up and running. When RAC is not using CFS, the Ultra Search crawler always runs on a specified node. If this node stops operating, then you must run the wk0reconfig.sql script to move Ultra Search to another RAC node. In the wk0reconfig.sql script, instance_name is the name of the RAC instance that Ultra Search uses for crawling. After connecting to the database, to get the name of the current instance, run the following command:

SELECT instance_name FROM v$instance

connect_url is the JDBC connection string that guarantees a connection only to the specified instance. An example is as follows:

(DESCRIPTION= 
        (ADDRESS_LIST= 
          (ADDRESS=(PROTOCOL=TCP) 
                   (HOST=<nodename>) 
                   (PORT=<listener_port>))) 
        (CONNECT_DATA=(SERVICE_NAME=<service_name>)))

Step 3: Bounce the Oracle Collaboration Suite Applications Tier Processes

From the Oracle Collaboration Suite Applications tier, run the following commands:

$ORACLE_HOME/opmn/bin/opmnctl stopall
$ORACLE_HOME/ocas/bin/ocasctl -stopall
$ORACLE_HOME/opmn/bin/opmnctl startall
$ORACLE_HOME/ocas/bin/ocasctl -start -t ochecklet -p 8020 -n 1
$ORACLE_HOME/ocas/bin/ocasctl -start -t ocas -p 8010 -n 5

The default ports are 8010 and 8020; the valid port range is 8010 to 8020.

Removing Nodes from Oracle Collaboration Suite Applications

The steps for removing nodes from multiple Oracle Collaboration Suite Applications tiers that are front ended by a load balancer are as follows:

  1. Shut down all Oracle Collaboration Suite Applications processes on the Applications tier node that is to be removed, as follows:

    opmnctl stopall
    
    

    After you shut down Oracle Collaboration Suite Applications, the load balancer routes all requests to the active nodes, assuming that the load balancer monitors have been set up correctly.

  2. Remove the node from the load balancer virtual server node pool.

Configuring Manual Cold Failover for Oracle Calendar Server

In an Oracle Collaboration Suite high-availability environment, Oracle Calendar Server can be set up in an active-passive configuration, also known as a cold failover cluster configuration.

In an Oracle Calendar Server cold failover cluster configuration, automated cold failover in the event of a failure of the active node requires vendor clusterware. Perform a manual cold failover when vendor clusterware is not installed. The steps for a manual cold failover are as follows (a brief command sketch follows the list):

  1. Verify that the Oracle Calendar Server virtual IP address is inaccessible by pinging it. If it is still accessible, then a cold failover may not be necessary; validate that the system is truly down and that it requires a cold failover before proceeding. If a failover is necessary and the Calendar Server virtual IP address is still accessible, then bring down the virtual IP address on the node where it is active.

  2. Mount the shared storage to the destination node which is the original passive node.

  3. Bring up the virtual IP address on the destination node.

  4. Mount the Oracle Calendar Server shared storage containing the ORACLE_HOME of Oracle Calendar Server on the destination node.

  5. Start the Oracle Calendar Server.
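
The following is a minimal sketch of steps 1, 3, and 5 on a Linux destination node. The interface name eth0:1, the address 192.0.2.10, and the netmask are assumptions, and your operating system or clusterware tools may manage the virtual IP address differently:

ping -c 3 192.0.2.10                                         # step 1: confirm the virtual IP is down
/sbin/ifconfig eth0:1 192.0.2.10 netmask 255.255.255.0 up    # step 3: bring up the virtual IP
$ORACLE_HOME/opmn/bin/opmnctl startall                       # step 5: start the Oracle Calendar Server processes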

Starting and Stopping Oracle Collaboration Suite in a High-Availability Environment

This section describes how to stop and start Oracle Collaboration Suite in a high-availability environment. Set up the environment for the correct ORACLE_HOME when you log on to each tier before the stop and start processes. On Infrastructure tiers, also set the ORACLE_SID.

See Also:

Chapter 2, "Starting and Stopping Oracle Collaboration Suite" for detailed instructions for starting and stopping non-high-availability environments.

Stopping Oracle Collaboration Suite

The steps for stopping Oracle Collaboration Suite in a high-availability environment are as follows:

  1. Log on to each Applications tier node and run the following commands:

    ORACLE_HOME/opmn/bin/opmnctl stopall
    ORACLE_HOME/ocas/bin/ocasctl -stopall
    ORACLE_HOME/bin/emctl stop iasconsole
    ORACLE_HOME/bin/lsnrctl stop listener_es
    
    
  2. For the Oracle Calendar Server node, run the following commands:

    ORACLE_HOME/opmn/bin/opmnctl stopall
    ORACLE_HOME/bin/emctl stop iasconsole
    
    
  3. For each Identity Management Infrastructure tier, run the following commands:

    ORACLE_HOME/opmn/bin/opmnctl stopall
    ORACLE_HOME/bin/emctl stop iasconsole
    
    
  4. For each RAC database tier, run the following command:

    emctl stop dbconsole
    
    

    From any RAC database node, run the following commands:

    ORACLE_HOME/bin/srvctl stop database -d $dbname
    ORACLE_HOME/bin/srvctl stop nodeapps -n $nodename
    
    

    In the preceding commands, $dbname is the name of the database and $nodename is the machine name.

Starting Oracle Collaboration Suite

The steps for starting Oracle Collaboration Suite in a high-availability environment are as follows:

  1. In each RAC database tier, run the following command:

    emctl start dbconsole
    
    

    From any RAC database node, run the following commands:

    ORACLE_HOME/bin/srvctl start database -d $dbname
    ORACLE_HOME/bin/srvctl start nodeapps -n $nodename
    
    

    In the preceding commands, $dbname is the name of the database and $nodename is the machine name.

  2. Log on to each Identity Management Infrastructure tier and run the following commands:

    ORACLE_HOME/opmn/bin/opmnctl startall
    ORACLE_HOME/bin/emctl start iasconsole
    
    

    In a Distributed Identity Management Architecture, ensure that the Oracle Internet Directory tier is started before the OracleAS Single Sign-On tier.

  3. Oracle Calendar Server is set up separately from the Applications tier in its own ORACLE_HOME. For the Oracle Calendar Server node, ensure that the virtual IP is up and run the following commands:

    ORACLE_HOME/opmn/bin/opmnctl startall
    ORACLE_HOME/bin/emctl start iasconsole
    
    
  4. For each of the Applications tier instances, log in as root, ensure that port 25 is not in use, and run the following commands:

    ORACLE_HOME/bin/tnslsnr listener_es -user $uid -group $gid
    ORACLE_HOME/opmn/bin/opmnctl startall
    ORACLE_HOME/ocas/bin/ocasctl -start -t ochecklet -p 8020 -n 1
    ORACLE_HOME/ocas/bin/ocasctl -start -t ocas -p 8010 -n 5
    ORACLE_HOME/bin/emctl start iasconsole
    
    

    Note:

    This step assumes that you are using the default port (port 25) for the Oracle Mail listener. On UNIX systems, port 25 is a privileged port, so you must log in as root. If you have installed the listener on a non-privileged port, you do not need to log in as root. On Windows systems, you do not need to log in as root.