Oracle® Database 2 Day + Real Application Clusters Guide
11g Release 1 (11.1)

B28252-06

3 Installing and Configuring Oracle Clusterware and Oracle RAC

This chapter explains how to install Oracle Real Application Clusters (Oracle RAC) using Oracle Universal Installer (OUI). You must install Oracle Clusterware before installing Oracle RAC. After your Oracle Clusterware is operational, you can use OUI to install the Oracle Database software with the Oracle RAC components.

The example Oracle RAC environment described in this guide uses Automatic Storage Management (ASM), so this chapter also includes instructions on how to install ASM in its own home directory.

This chapter includes the following sections:

Preparing the Oracle Media Installation File

Oracle Clusterware is installed as part of Oracle Database 11g. OUI installs Oracle Clusterware into a directory structure that is referred to as CRS_home. This home is separate from the home directories of other Oracle software products installed on the same server. Because Oracle Clusterware works closely with the operating system, system administrator access is required for some of the installation tasks. In addition, some of the Oracle Clusterware processes must run as the special operating system user, root.

The Oracle RAC database software is installed from the same Oracle Database 11g installation media. By default, the standard Oracle Database 11g software installation process installs the Oracle RAC option when OUI recognizes that you are performing the installation on a cluster. OUI installs Oracle RAC into a directory structure that is referred to as Oracle_home. This home is separate from the home directories of other Oracle software products installed on the same server.

To prepare the Oracle Media installation files:

  1. If you have the Oracle Database software on CD or DVD, insert the distribution media for the database into a disk drive on your computer. Make sure the disk drive has been mounted at the operating system level.

    If you do not have installation disks, but are instead installing from ZIP files, continue on to Step 2.

  2. If the Oracle Database installation software is in one or more ZIP files, create a staging directory on one node, for example, docrac1, to store the unzipped files, as shown here:

    mkdir -p /stage/oracle/11.1.0
    
  3. Copy the ZIP files to this staging directory. For example, if the files were downloaded to a directory named /home/user1, and the ZIP file is named 11100_linux_db.zip, you would use the following commands to copy the ZIP file to the staging directory:

    cd /home/user1
    cp 11100_linux_db.zip /stage/oracle/11.1.0
    
  4. As the oracle user on docrac1, unzip the Oracle media, as shown in the following example:

    cd /stage/oracle/11.1.0
    unzip 11100_linux_db.zip
    
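
After unzipping, the staging area can be sanity-checked before moving on. A minimal sketch, assuming the staging path used in this guide:

```shell
# Verify that the staging directory exists and list its contents (path assumed)
STAGE=/stage/oracle/11.1.0
if [ -d "$STAGE" ]; then
  echo "staging directory exists: $STAGE"
  ls "$STAGE"
else
  echo "staging directory missing: $STAGE"
fi
```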

Installing Oracle Clusterware 11g

The following topics describe the process of installing Oracle Clusterware:

Configuring the Operating System Environment

You run OUI from the oracle user account. Before you start OUI, you must configure the environment of the oracle user. You must set the ORACLE_BASE environment variable to the directory in which you want the Oracle central inventory files located.

For example, if you want the central inventory files located on the mount point directory /opt/oracle, you might set ORACLE_BASE to the directory /opt/oracle/11gR1.

Prior to installing the Oracle Database software and creating an Oracle database, you should also set the ORACLE_HOME environment variable to the location in which you want to install the Oracle Database software. Optionally, you can also set the ORACLE_SID environment variable to the name you have chosen for your database.

To modify the user environment prior to installing Oracle Clusterware on Red Hat Linux:

  1. As the oracle user, execute the following commands:

    [oracle]$ unset ORACLE_HOME
    [oracle]$ unset ORACLE_SID
    [oracle]$ unset ORACLE_BASE
    [oracle]$ export ORACLE_BASE=/opt/oracle/11gR1
    
  2. Verify the changes have been made by executing the following commands:

    [oracle]$ echo $ORACLE_SID
    
    [oracle]$ echo $ORACLE_HOME
    
    [oracle]$ echo $ORACLE_BASE
    /opt/oracle/11gR1
    

To modify the user environment prior to installing Oracle Database on Red Hat Linux:

  1. As the oracle user, modify the user profile in the /home/oracle directory on both nodes using the following commands:

    [oracle] $ cd $HOME
    [oracle] $ vi .bash_profile
    

    Add the following lines at the end of the file:

    export ORACLE_SID=sales
    export ORACLE_BASE=/opt/oracle/11gR1
    export ORACLE_HOME=/opt/oracle/11gR1/db
    
  2. Read and implement the changes made to the .bash_profile file:

    source .bash_profile
    
  3. Verify the changes have been made by executing the following commands:

    [oracle]$ echo $ORACLE_SID
    sales
    [oracle]$ echo $ORACLE_HOME
    /opt/oracle/11gR1/db
    [oracle]$ echo $ORACLE_BASE
    /opt/oracle/11gR1
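
The three exports can also be set and verified in one pass. A minimal sketch, using the values from this guide:

```shell
# Set and verify the environment variables used in this guide (values assumed)
export ORACLE_SID=sales
export ORACLE_BASE=/opt/oracle/11gR1
export ORACLE_HOME=/opt/oracle/11gR1/db
for var in ORACLE_SID ORACLE_BASE ORACLE_HOME; do
  echo "$var=${!var}"   # ${!var} is bash indirect expansion
done
```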
    

Verifying the Configuration Using the Cluster Verification Utility

If you have not configured your nodes, network, and operating system correctly, your installation of the Oracle Clusterware or Oracle Database 11g software will not complete successfully.

To verify your hardware and operating system setup:

  1. As the oracle user, change directories to the staging directory for the Oracle Clusterware software, or to the mounted installation disk. In the following example, staging_area represents the location of the installation media (for example, /home/oracle/downloads/11gR1/11.1.0 or /dev/dvdrom):

    [oracle] $ cd /staging_area
    
  2. Run the runcluvfy.sh script, as shown in the following example, where docrac1 and docrac2 are the names of the nodes in your cluster:

    [oracle] $ ./runcluvfy.sh stage -pre crsinst -n docrac1,docrac2 -verbose
    

    The preceding command instructs the Cluster Verification Utility (CVU) to verify that the system meets all the criteria for an Oracle Clusterware installation. It checks that all the nodes are reachable from the local node, that proper user equivalence exists, that connectivity exists between all the nodes through the public and private interconnects, that the user has proper permissions to install the software, and that all system requirements (including kernel version, kernel parameters, memory, swap space, temporary directory space, and required software packages) are met.
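
If any check fails, you can correct the problem and re-run a narrower verification rather than the full stage check. A minimal sketch, assuming you are still in the staging area (the -post hwos stage and the nodecon component are CVU check names; confirm they match your CVU version):

```shell
# Re-run narrower CVU checks after fixing a reported problem (node names assumed)
NODES=docrac1,docrac2
if [ -x ./runcluvfy.sh ]; then
  ./runcluvfy.sh stage -post hwos -n "$NODES" -verbose   # hardware and OS checks only
  ./runcluvfy.sh comp nodecon -n "$NODES" -verbose       # node connectivity only
else
  echo "run this from the staging area that contains runcluvfy.sh"
fi
```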

Using Oracle Universal Installer to Install Oracle Clusterware

As the oracle user on the docrac1 node, install Oracle Clusterware. Note that OUI uses Secure Shell (SSH) to copy the binary files from docrac1 to docrac2 during the installation. Make sure SSH is configured before starting the installer.
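
Because OUI depends on SSH user equivalence, it is worth confirming passwordless SSH before launching the installer. A minimal sketch, assuming the node names used in this guide:

```shell
# Confirm SSH to the remote node works without a password prompt (node name assumed)
REMOTE_NODE=docrac2
if command -v ssh >/dev/null 2>&1; then
  # BatchMode=yes fails instead of prompting, so a password requirement is detected
  ssh -o BatchMode=yes -o ConnectTimeout=5 "$REMOTE_NODE" date \
    && echo "passwordless SSH to $REMOTE_NODE works" \
    || echo "SSH to $REMOTE_NODE requires a password or failed"
fi
```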

Note:

If you are installing Oracle Clusterware on a server that already has a single-instance Oracle Database 11g installation, then stop the existing ASM instances, if any. After Oracle Clusterware is installed, start the ASM instances again. When you restart the single-instance Oracle database and then the ASM instances, the ASM instances use the Cluster Synchronization Services Daemon (CSSD) instead of the daemon for the single-instance Oracle database.

To install Oracle Clusterware:

  1. Use the following command to start OUI, where staging_area is the location of the staging area on disk, or the location of the mounted installation disk:

    cd /staging_area/Disk1
    ./runInstaller
    

    The Select a Product to Install window appears.

  2. Select Oracle Clusterware from the list, then click Next.


    If you have not installed any Oracle software previously on this server, the Specify Inventory directory and credentials window appears.

  3. Change the path for the inventory location, if required. Select oinstall for the operating system group name. Click Next.

    The path displayed for the inventory directory should be the oraInventory subdirectory of your Oracle base directory. For example, if you set the ORACLE_BASE environment variable to /opt/oracle/11gR1 before starting OUI, then the path displayed is /opt/oracle/11gR1/oraInventory.


    The Specify Home Details window appears.

  4. Accept the default value for the Name field, which is the name of the Oracle home directory for this product. For the Path field, click Browse. In the Choose Directory window, go up the path until you reach the root directory (/), click /u01/app/crs, and then click Choose Directory.

    After you have selected the path, click Next. The next window, Product-Specific Prerequisite Checks, appears after a short period of time.

  5. When you see the message "Check complete. The overall result of this check is: Passed", click Next.


    The Specify Cluster Configuration window appears.

  6. Change the default cluster name to a name that is unique throughout your entire enterprise network. For example, you might choose a name based on the common prefix of the node names. This guide uses the cluster name docrac.

    The local node, docrac1, appears in the Cluster Nodes section. If the private node name includes the domain name, click Edit and remove the domain name from the private node name. For example, if the private node name is docrac1-priv.us.oracle.com, edit the entry so that it is displayed as docrac1-priv.

    When you have finished removing the domain name in the "Modify a node in the existing cluster" window, click OK.

  7. When you are returned to the Specify Cluster Configuration window, click Add.

  8. In the "Add a new node to the existing cluster" dialog window, enter the second node's public name (docrac2.us.oracle.com), private name (docrac2-priv), and virtual IP name (docrac2-vip.us.oracle.com), and then click OK.

    The Specify Cluster Configuration window now displays both nodes in the Cluster Nodes section.


    Click Next. The Specify Network Interface Usage window appears.

  9. Verify eth0 and eth1 are configured correctly (proper subnet and interface type displayed), then click Next.

    The Specify Oracle Cluster Registry (OCR) Location window appears.

  10. Select Normal Redundancy for the OCR Configuration. You will be prompted for two file locations. In the Specify OCR Location field, enter the name of the device configured for the first OCR file, for example, /dev/sda1.

    In the Specify OCR Mirror Location field, enter the name of the device configured for the OCR mirror file, for example, /dev/sdb1. When finished, click Next.

    During installation, the OCR data will be written to the specified locations.


    The Specify Voting Disk Location window appears.

  11. Select Normal Redundancy for the voting disk location. You will be prompted for three file locations. For the Voting Disk Location, enter the name of the device configured for the first voting disk file, for example, /dev/sda2. Repeat this process for the other two Voting Disk Location fields.


    When finished, click Next. The OUI Summary window appears.

  12. Review the contents of the Summary window and then click Install.

    OUI displays a progress indicator during the installation process.

  13. During the installation process, the Execute Configuration Scripts window appears. Do not click OK until you have run the scripts.


    The Execute Configuration Scripts window shows the configuration scripts and the path where they are located. Run the scripts on all nodes as directed, in the order shown. For example, on Red Hat Linux, you would perform the following steps (note that for clarity, the examples show the current user, node, and directory in the prompt):

    1. As the oracle user on docrac1, open a terminal window, and enter the following commands:

      [oracle@docrac1 oracle]$ cd /opt/oracle/11gR1/oraInventory
      [oracle@docrac1 oraInventory]$ su
      
    2. Enter the password for the root user, and then enter the following command to run the first script on docrac1:

      [root@docrac1 oraInventory]# ./orainstRoot.sh
      
    3. After the orainstRoot.sh script finishes on docrac1, open another terminal window, and as the oracle user, enter the following commands:

      [oracle@docrac1 oracle]$ ssh docrac2
      [oracle@docrac2 oracle]$ cd /opt/oracle/11gR1/oraInventory
      [oracle@docrac2 oraInventory]$ su
      
    4. Enter the password for the root user, and then enter the following command to run the first script on docrac2:

      [root@docrac2 oraInventory]# ./orainstRoot.sh
      
    5. After the orainstRoot.sh script finishes on docrac2, go to the terminal window you opened in substep 1. As the root user on docrac1, enter the following commands to run the second script, root.sh:

      [root@docrac1 oraInventory]# cd /u01/app/crs
      [root@docrac1 crs]# ./root.sh
      

      Note:

      Do not run the root.sh script on the other nodes until it has finished running on the local node, or the script might fail.

      At the completion of this script, a message is displayed indicating that Oracle Clusterware was successfully configured and started on this node.

    6. After the root.sh script finishes on docrac1, go to the terminal window you opened in substep 3. As the root user on docrac2, enter the following commands:

      [root@docrac2 oraInventory]# cd /u01/app/crs
      [root@docrac2 crs]# ./root.sh
      

    After the root.sh script completes, return to the OUI window where the Installer prompted you to run the orainstRoot.sh and root.sh scripts. Click OK.

    The Configuration Assistants window appears. When the configuration assistants finish, OUI displays the End of Installation window.

  14. Click Exit to complete the installation process, then Yes to confirm you want to exit the installer.

    If you encounter any problems, refer to the configuration log for information. The path to the configuration log is displayed on the Configuration Assistants window.

Completing the Oracle Clusterware Configuration

After you have installed Oracle Clusterware, verify that the node applications are running. Depending on which operating system you use, you may need to perform some postinstallation tasks to configure the Oracle Clusterware components properly.

To complete the Oracle Clusterware configuration on Red Hat Linux:

  1. As the oracle user on docrac1, check the status of the Oracle Clusterware targets by entering the following command:

    /u01/app/crs/bin/crs_stat -t
    

    This command produces output showing whether all the important cluster services, such as gsd, ons, and vip, are running on the nodes of your cluster.

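
Beyond crs_stat, the overall health of the Clusterware daemons can be checked with crsctl. A minimal sketch, assuming the Clusterware home used in this guide:

```shell
# Check the health of the Clusterware daemons (CRS home path assumed)
CRS_HOME=/u01/app/crs
if [ -x "$CRS_HOME/bin/crsctl" ]; then
  "$CRS_HOME/bin/crsctl" check crs
else
  echo "crsctl not found under $CRS_HOME"
fi
```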

Configuring Automatic Storage Management in an ASM Home Directory

This section explains how to install the ASM software in its own home directory. Installing ASM in its own home directory enables you to keep the ASM home separate from the database home directory (Oracle_home). By using separate home directories, you can upgrade and patch ASM and the Oracle Database software independently, and you can deinstall Oracle Database software without affecting the ASM instance.

As the oracle user, install ASM by installing the Oracle Database 11g Release 1 software on the docrac1 node. Note that the Installer copies the binary files from docrac1 to docrac2 during the installation.

During the installation process, you are asked to configure ASM. You configure ASM by creating disk groups that become the default location for files created in the database. The disk group type determines how ASM mirrors files. When you create a disk group, indicate whether it is a normal redundancy disk group (2-way mirroring for most files by default), a high redundancy disk group (3-way mirroring), or an external redundancy disk group (no mirroring by ASM). Use an external redundancy disk group only if your storage system already provides mirroring at the hardware level, or if you have no need for redundant data. The default disk group type is normal redundancy.
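
The effect of the redundancy type on usable space can be estimated with simple arithmetic. A minimal sketch, with an assumed total raw capacity:

```shell
# Rough usable-space estimate per redundancy type (raw capacity assumed)
RAW_MB=409600               # total raw capacity across all disks in the group
NORMAL_MB=$((RAW_MB / 2))   # normal redundancy: 2-way mirroring
HIGH_MB=$((RAW_MB / 3))     # high redundancy: 3-way mirroring
EXTERNAL_MB=$RAW_MB         # external redundancy: no mirroring by ASM
echo "$NORMAL_MB $HIGH_MB $EXTERNAL_MB"
```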

To install ASM in a home directory separate from the home directory used by Oracle Database:

  1. Use the following commands to start OUI, where staging_area is the location of the staging area on disk, or the location of the mounted installation disk:

    cd /staging_area/database
    ./runInstaller
    

    When you start Oracle Universal Installer, the Select a Product to Install window appears.

  2. Select Oracle Database 11g from the list, then click Next.


    The Select Installation Type window appears.

  3. Select either Enterprise Edition or Standard Edition and then click Next.

  4. In the Specify Home Details window, specify a name for the ASM home directory, for example, OraASM11g_home. Select a directory that is a subdirectory of your Oracle Base directory, for example, /opt/oracle/11gR1/asm. Click Browse to change the directory in which ASM will be installed.


    After you have specified the ASM home directory, click Next.

    The Specify Hardware Cluster Installation Mode window appears.

  5. Click Select All to select all nodes for installation, and then click Next.

    If your Oracle Clusterware installation was successful, then the Specify Hardware Cluster Installation Mode window lists the nodes that you identified for your cluster, such as docrac1 and docrac2.

    After you click Next, the Product-Specific Prerequisites Checks window appears.

  6. When you see the message "Check complete. The overall result of this check is: Passed", click Next.


    The Select Configuration Option window appears.

  7. Select the Configure Automatic Storage Management (ASM) option to install and configure ASM. The ASM instance is managed by a privileged role called SYSASM, which grants full access to ASM disk groups.

    Enter a password for the SYSASM user account. The password should be at least 8 characters in length and include at least one alphabetic and one numeric character.

    Confirm the password by typing it in again in the Confirm ASM SYS Password field.


    When finished, click Next.

    The Configure Automatic Storage Management window appears.

  8. In the Configure Automatic Storage Management window, the Disk Group Name defaults to DATA. You can enter a new name for the disk group, or use the default name.

    Check with your system administrator to determine if the disks used by ASM are mirrored at the storage level. If so, select External for the redundancy. If the disks are not mirrored at the storage level, then choose Normal for the redundancy.

  9. At the bottom right of the Add Disks section, click Change Disk Discovery Path to select any devices that will be used by ASM but are not listed.

    In the Change Disk Discovery Path window, enter a string to use to search for devices that ASM will use, such as /dev/sd*, and then click OK.


    You are returned to the Configure Automatic Storage Management window.

  10. Select the disks to be used by ASM, for example, /dev/sdd and /dev/sde.


    After you have finished selecting the disks to be used by ASM, click Next. The Privileged Operating Systems Groups window appears.

  11. Select the name of the operating system group you created in the previous chapter for the OSDBA group, the OSASM group, and the database operator group. If you choose to create only the dba group, then you can use that group for all three privileged groups. If you created a separate asm group, then use that value for the OSASM group.


    After you have supplied values for the privileged groups, click Next. The Oracle Configuration Manager Registration window appears.

  12. The Oracle Configuration Manager Registration window enables you to configure the credentials used for connecting to OracleMetaLink. You can provide this information now, or configure it after the database has been installed. Click Next to continue.

    OUI displays the Summary window.

  13. Review the information displayed in the Summary window. If any of the information appears incorrect, then click Back to return to a previous window and change it. When you are ready to proceed, click Install.

    OUI displays a progress window indicating that the installation has started.

  14. The installation takes several minutes to complete. During this time, OUI configures ASM on the specified nodes, and then configures a listener on each node.

    After ASM has been installed, OUI runs the Configuration Assistants. When the assistants have finished successfully, click Next to continue.

    The Execute Configuration Scripts window appears.

  15. Run the scripts as instructed in the Execute Configuration scripts window. For the installation demonstrated in this guide, only one script, root.sh, must be run, and it must be run on both nodes.


    The following steps demonstrate how to complete this task on a Linux system (note that for clarity, the examples show the user, node name, and directory in the prompt):

    1. Open a terminal window. As the oracle user on docrac1, change directories to the ASM home directory, and then switch to the root user:

      [oracle@docrac1 oracle]$ cd /opt/oracle/11gR1/asm
      [oracle@docrac1 oracle]$ su
      
    2. Enter the password for the root user, and then run the script specified in the Execute Configuration scripts window:

      [root@docrac1 oracle]# ./root.sh
      
    3. As the root.sh script runs, it prompts you for the path to the local bin directory. The information displayed in the brackets is the information it has obtained from your system configuration. Press the Enter key each time you are prompted for input to accept the default choices.

    4. After the script completes and the prompt returns, open another terminal window, and enter the following commands:

      [oracle@docrac1 oracle]$ ssh docrac2
      Enter the passphrase for key '/home/oracle/.ssh/id_rsa':
      [oracle@docrac2 oracle]$ cd /opt/oracle/11gR1/asm
      [oracle@docrac2 asm]$ su
      Password:
      
    5. Enter the password for the root user, and then run the script specified in the Execute Configuration scripts window:

      [root@docrac2 asm]# ./root.sh
      
    6. Accept all default choices by pressing the Enter key.

    7. After you finish executing the script on all nodes, return to the Execute Configuration Scripts window and click OK to continue.

    OUI displays the End of Installation window.

  16. Review the information in the End of Installation window. The Web addresses displayed are not used in this guide, but may be needed for your business applications.

  17. Click Exit, and then click Yes to verify that you want to exit the installation.

Verifying Your ASM Installation

Verify that all the database services for ASM are up and running.

To verify ASM is operational following the installation:

  1. Change directories to the bin directory in the CRS home directory:

    cd /u01/app/crs/bin
    
  2. Run the following command as the oracle user, where docrac1 is the name of the node you want to check:

    ./srvctl status asm -n docrac1
    ASM instance +ASM1 is running on node docrac1.
    

    The example output shows that there is one ASM instance running on the local node.

  3. Repeat the command shown in Step 2, substituting docrac2 for docrac1 to verify the successful installation on the other node in your cluster.
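
Steps 2 and 3 can be combined into a single loop over the node names. A minimal sketch, assuming the CRS home and node names used in this guide:

```shell
# Check the ASM status on every node in the cluster (path and node names assumed)
CRS_HOME=/u01/app/crs
for node in docrac1 docrac2; do
  if [ -x "$CRS_HOME/bin/srvctl" ]; then
    "$CRS_HOME/bin/srvctl" status asm -n "$node"
  else
    echo "srvctl not found under $CRS_HOME (would check node $node)"
  fi
done
```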

Installing the Oracle Database Software and Creating a Cluster Database

The next step is to install the Oracle Database 11g Release 1 software on the docrac1 node. OUI copies the binary files from docrac1 to docrac2, the other node in the cluster, during the installation process.

Before you start OUI you must configure the environment of the oracle user. You must set the ORACLE_SID, ORACLE_BASE, and ORACLE_HOME environment variables to the desired values for your environment.

For example, if you want to create a clustered database named sales and install the Oracle Database software in the /opt/oracle/11gR1/db directory, you would set ORACLE_SID to sales, ORACLE_BASE to the directory /opt/oracle/11gR1, and ORACLE_HOME to the directory /opt/oracle/11gR1/db. See "Configuring the Operating System Environment" for more information on configuring the environment variables.

Note:

The value of ORACLE_SID cannot be more than 12 characters and can only contain alphanumeric characters.
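
The restriction in the note can be expressed as a quick shell test. A minimal sketch, using the SID chosen in this guide:

```shell
# Validate that an ORACLE_SID candidate is alphanumeric and at most 12 characters
SID=sales
if [ ${#SID} -le 12 ] && [[ $SID =~ ^[[:alnum:]]+$ ]]; then
  RESULT=valid
else
  RESULT=invalid
fi
echo "$RESULT SID: $SID"
```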

To install Oracle Database on your cluster:

  1. As the oracle user, use the following commands to start OUI, where staging_area is the location of the staging area on disk, or the location of the mounted installation disk:

    cd /staging_area/database
    ./runInstaller
    

    When you start Oracle Universal Installer, the Select a Product to Install window appears.

  2. Select Oracle Database 11g from the list, then click Next.

    The Select Installation Type window appears.

  3. Select either Enterprise Edition or Standard Edition. The Enterprise Edition option is selected by default. When finished, click Next.

    The Install Location window appears.

  4. Specify a name for the Oracle home, for example, OraDb11g_home.

  5. Select an Oracle home directory that is a subdirectory of your Oracle base directory, for example, /opt/oracle/11gR1/db_1.

    You can click Browse to change the directory in which the Oracle Database software will be installed. After you have selected the directory, click Choose Directory to close the Choose Directory window.

    If the directory does not exist, you can type in the directory path in the File Name field, and then click Choose Directory. If a window appears asking if you want to create the directory, click Yes.


    After you have verified the information on the Install Location window, click Next.

    The Specify Hardware Cluster Installation Mode window appears.

  6. Select the nodes on which the Oracle Database software will be installed. You can also click Select All to select all available nodes. After you have selected the nodes on which to install the Oracle Database software, click Next.

    The Product-Specific Prerequisite Checks window appears.

    Note:

    In the Product-Specific Prerequisite Checks window, you might see a warning that says the host IP addresses are generated by the dynamic host configuration protocol (DHCP), which is not a recommended best practice. You can ignore this warning.
  7. When you see the confirmation message that your system has passed the prerequisite checks, click Next.

    The Select Configuration Option window appears.

  8. In the Select Configuration Option window, accept the default option of Create a Database and click Next.

    The Select Database Configuration window appears.

  9. Select one of the following types of database to create:

    • General Purpose

    • Transaction Processing

    • Data Warehouse

    • Advanced (for customized database creation)

    The General Purpose database type is selected by default. Select the type of database that best suits your business needs. For the example used by this guide, the default value is sufficient. After you have selected the database type, click Next.

    The Specify Database Configuration Options window appears.

  10. In the Global Database Name field, enter a fully qualified name for your database, such as sales.mycompany.com. Ensure that the SID field contains the first part of the database name, for example, sales.


    After you have entered the database name and SID, click Next. The Specify Database Config Details window appears.

    Note:

    The value for the system identifier (SID) will be used as a prefix for the instance names. Thus if the SID is set to sales, the instance names will be sales1, sales2, and so on.
  11. Check the settings on each of the tabs. If you are not sure what values to use, then accept the default values. On the Sample Schemas tab, if you want sample data and schemas to be created in your database, then select the Create database with sample schemas option. When finished, click Next to continue.

    The Select Database Management Option window appears.

  12. By default, the Use Database Control for Database Management option is selected instead of the Use Grid Control for Database Management option. The examples in this guide use Database Control, which is the default value.

    Do not select the option Enable Email Notifications if your cluster is not connected to a mail server.


    After you have made your selections, click Next.

    The Specify Database Storage Option window appears.

  13. If you configured ASM on the cluster, select the option Automatic Storage Management (ASM) for the database storage. Otherwise, select File System and enter the location of your shared storage, then click Next.

    The Specify Backup and Recovery Options window appears.

  14. Select the default option Do not enable Automated backup, and then click Next. You can modify the backup settings at a later time.

    If you selected ASM as your storage solution, the Select ASM Disk Group window appears.

    Note:

    If you want to use ASM as the backup area, you must create an additional ASM disk group when configuring ASM.
  15. The Select ASM Disk Group window shows you where the database files will be created. Select the disk group that was created during the ASM installation, and then click Next.


    The Specify Database Schema Passwords window appears.

  16. Assign and confirm a password for each of the Oracle database schemas.

    Unless you are performing a database installation for testing purposes only, do not select the Use the same password for all the accounts option, as this can compromise the security of your data. Each password should be at least 8 characters in length and include at least one alphabetic, one numeric, and one punctuation mark character.

    When finished entering passwords, click Next. OUI displays the Privileged Operating System Groups window.

  17. Select the name of the operating system group you created in the previous chapter for the OSDBA group, the OSASM group, and the database operator group. If you choose to create only the dba group, then you can use that group for all three privileged groups. If you created a separate asm group, then use that value for the OSASM group.


    After you have supplied values for the privileged groups, click Next. The Oracle Configuration Manager Registration window appears.

  18. The Oracle Configuration Manager Registration window enables you to configure the credentials used for connecting to OracleMetaLink. You can provide this information now, or configure it after the database has been installed. Click Next to continue.

    OUI displays the Summary window.

  19. Review the information displayed in the Summary window. If any of the information is incorrect, click Back to return to a previous window and correct it. When you are ready to proceed, click Install.

    OUI displays a progress indicator to show that the installation has begun. This step takes several minutes to complete.

  20. As part of the software installation process, the sales database is created. At the end of the database creation, you will see the Oracle Database Configuration Assistant (DBCA) window with the URL for the Database Control console displayed.


    Make note of the URL, and then click OK. Wait for DBCA to start the cluster database and its instances.

  21. After the installation, you are prompted to perform the postinstallation task of running the root.sh script on both nodes.


    On each node, run the scripts listed in the Execute Configuration scripts window before you click OK. Perform the following steps to run the root.sh script:

    1. Open a terminal window. As the oracle user on docrac1, change directories to your Oracle home directory, and then switch to the root user by entering the following commands:

      [oracle@docrac1 oracle]$ cd /opt/oracle/11gR1/db_1
      [oracle@docrac1 db_1]$ su
      
    2. Enter the password for the root user, and then run the script specified in the Execute Configuration scripts window:

      [root@docrac1 db_1]# ./root.sh
      
    3. As the root.sh script runs, it prompts you for the path to the local bin directory. The value displayed in brackets is obtained from your system configuration. Press the Enter key each time you are prompted for input to accept the default choices.

    4. After the script has completed and the prompt returns, exit the root shell. Then connect to the second node, change to the Oracle home directory, and switch to the root user by entering the following commands:

      [oracle@docrac1 oracle]$ ssh docrac2
      [oracle@docrac2 oracle]$ cd /opt/oracle/11gR1/db_1
      [oracle@docrac2 db_1]$ su
      
    5. Enter the password for the root user, and then run the script specified in the Execute Configuration scripts window:

      [root@docrac2 db_1]# ./root.sh
      
    6. Accept all default choices by pressing the Enter key.

    After you finish executing the script on all nodes, return to the Execute Configuration scripts window and click OK.

    OUI displays the End of Installation window.

  22. Click Exit and then click Yes to verify that you want to exit OUI.

Verifying Your Oracle RAC Database Installation

At this point, you should verify that all the database services are up and running.

To verify the Oracle RAC database services are running:

  1. Log in as the oracle user and go to the CRS_home/bin directory:

    [oracle] $ cd /u01/app/crs/bin
    
  2. Run the following command to view the status of the applications managed by Oracle Clusterware:

    [oracle] $ ./crs_stat -t
    

    The output of the command should show that the database instances are available (online) for each host.

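As a rough illustration of what to look for in that output, the following sketch parses a captured sample of crs_stat -t output and flags any resource whose State is not ONLINE. The resource names and hosts are assumptions based on this guide's example cluster, not real command output:

```shell
# Hypothetical crs_stat -t output for the example sales database
crs_out='Name            Type           Target    State     Host
ora.sales.db    application    ONLINE    ONLINE    docrac1
ora.sales1.inst application    ONLINE    ONLINE    docrac1
ora.sales2.inst application    ONLINE    ONLINE    docrac2'

# Flag any resource whose State column (field 4) is not ONLINE
echo "$crs_out" | awk 'NR > 1 && $4 != "ONLINE" { print "offline:", $1; bad = 1 }
                       END { exit bad }' && echo "all resources ONLINE"
```

On a real cluster you would pipe the output of ./crs_stat -t through the same filter instead of using a captured sample.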

Configuring the Operating System Environment for Database Management

After you have installed the Oracle RAC software and created a cluster database, there are two additional tasks to perform to configure your operating system environment for easier database management:

Updating the oratab File

Several of the Oracle Database utilities use the oratab file to determine the available Oracle homes and instances on each node. The oratab file is created by the root.sh script and is updated by Oracle Database Configuration Assistant when creating or deleting a database.

The following is an example of the oratab file:

# This file is used by ORACLE utilities. It is created by root.sh
# and updated by the Database Configuration Assistant when creating
# a database.

# A colon, ':', is used as the field terminator. A new line terminates
# the entry. Lines beginning with a pound sign, '#', are comments.
#
# Entries are of the form:
# $ORACLE_SID:$ORACLE_HOME:<N|Y>:
#
# The first and second fields are the system identifier and home
# directory of the database respectively. The third field indicates
# to the dbstart utility that the database should, "Y", or should not, 
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
+ASM1:/opt/oracle/11gR1/asm:N
sales:/opt/oracle/11gR1/db_1:N
sales1:/opt/oracle/11gR1/db_1:N

To update the oratab file on Red Hat Linux after creating an Oracle RAC database:

  1. Open the /etc/oratab file for editing by using the following command on the docrac1 node:

    vi /etc/oratab
    
  2. Add the Oracle_sid and Oracle_home for the local instance to the end of the /etc/oratab file, for example:

    sales1:/opt/oracle/11gR1/db_1:N
    
  3. Save the file and exit the vi editor.

  4. Modify the /etc/oratab file on each node in the cluster, adding in the appropriate instance information.

    Note:

    In a single-instance database, setting the last field of each entry to N disables the automatic startup of a database when the server it runs on is restarted. For an Oracle RAC database, these fields are set to N because Oracle Clusterware starts the instances and processes, not the dbstart utility.
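To see how utilities consume this file, the following sketch parses an oratab-format file and prints each SID with its Oracle home and dbstart flag. It works on a throwaway copy of the example entries above (the paths are this guide's assumed Oracle homes), not on a real /etc/oratab:

```shell
# Create a throwaway copy of the example oratab entries shown above
cat > /tmp/oratab.sample <<'EOF'
# comment lines and blank lines are ignored
+ASM1:/opt/oracle/11gR1/asm:N
sales1:/opt/oracle/11gR1/db_1:N
EOF

# Print each SID and its Oracle home; the third field is the dbstart flag
awk -F: '!/^#/ && NF >= 3 { printf "%s -> %s (start at boot: %s)\n", $1, $2, $3 }' \
    /tmp/oratab.sample
```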

Reconfiguring the User Shell Profile

There are several environment variables that can be used with Oracle RAC or Oracle Database. These variables can be set manually in your current operating system session, using shell commands such as set and export.

You can also have these variables set automatically when you log in as a specific operating system user. To do this, modify the Bourne, Bash, or Korn shell configuration file (for example .profile or .login) for that operating system user.

To modify the oracle user's profile for the bash shell on Red Hat Linux:

  1. As the oracle user, open the user profile in the /home/oracle directory for editing using the following commands:

    [oracle] $ cd $HOME
    [oracle] $ vi .bash_profile
    
  2. Modify the following lines in the file so they point to the location of the newly created Oracle RAC database:

    export ORACLE_BASE=/opt/oracle/11gR1
    export ORACLE_HOME=/opt/oracle/11gR1/db_1
    export PATH=$ORACLE_HOME/bin:$PATH
    

    Note:

    For the RMAN utility to work properly, the $ORACLE_HOME/bin directory must appear in the PATH variable before the /usr/X11R6/bin directory on Linux platforms.
  3. On each node, modify the .bash_profile file to set the ORACLE_SID environment variable to the name of the local instance. For example, on the host docrac1 you would add the following line to the .bash_profile file:

    export ORACLE_SID=sales1
    

    On the host docrac2 you would set ORACLE_SID to the value sales2.

  4. Read and implement the changes made to the .bash_profile file on each instance:

    source .bash_profile
    
  5. On each client computer, configure user access to use a service name, such as sales, for connecting to the database.
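A quick way to confirm the profile took effect is to check the variables after sourcing it. The sketch below sets the values from this guide's docrac1 example directly (rather than sourcing a real profile) and then verifies that they are consistent:

```shell
# Values from the example profile above (docrac1); adjust per node
export ORACLE_BASE=/opt/oracle/11gR1
export ORACLE_HOME=$ORACLE_BASE/db_1
export ORACLE_SID=sales1
export PATH=$ORACLE_HOME/bin:$PATH

# Confirm ORACLE_HOME/bin is on the PATH and the instance SID is set
case ":$PATH:" in
  *":$ORACLE_HOME/bin:"*) echo "PATH includes $ORACLE_HOME/bin" ;;
  *)                      echo "warning: PATH is missing $ORACLE_HOME/bin" ;;
esac
echo "instance: $ORACLE_SID"
```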

Performing Postinstallation Tasks

After you have installed the Oracle RAC software, there are additional tasks that you can perform to finish configuring your cluster database. These tasks are recommended, but not required.

About Verifying the Oracle Clusterware Installation

After the Oracle Clusterware installation is complete, OUI automatically runs the Cluster Verification Utility (CVU, the cluvfy command) as a configuration assistant to verify that the Oracle Clusterware installation completed successfully.

If the CVU reports problems with your configuration, correct these errors before proceeding.

About Backing Up the Voting Disk

After your Oracle Database 11g with Oracle RAC installation is complete, and after you are sure that your system is functioning properly, make a backup of the contents of the voting disk. Use the dd utility, as described in the section "About Backing Up and Recovering Voting Disks".

Also, make a backup copy of the voting disk contents after you complete any node additions or deletions, and after running any deinstallation procedures.
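The dd backup itself is a plain block copy. The sketch below demonstrates the pattern on a throwaway file standing in for the voting disk device; on a real cluster you would substitute the actual voting disk path (the device name and block sizes here are assumptions for illustration):

```shell
# Stand-in for the voting disk device; replace with the real device path
voting_disk=/tmp/demo_voting_disk
dd if=/dev/zero of="$voting_disk" bs=1024 count=8 2>/dev/null   # fake contents

# Back up the voting disk contents with dd, then verify the copy
dd if="$voting_disk" of=/tmp/voting_disk.bak bs=1024 2>/dev/null
cmp -s "$voting_disk" /tmp/voting_disk.bak && echo "backup matches source"
```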

About Downloading and Installing RDBMS Patches

Periodically, Oracle issues bug fixes for its software, called patches. Patch sets are collections of bug fixes produced up to the time of the patch set release, and are fully tested product fixes. Applying a patch set affects the software residing in your Oracle home. Ensure that you are running the latest patch set of the installed software. You might also need to apply individual patches that are not included in a patch set. Information about downloading and installing patches and patch sets is covered in Chapter 10, "Managing Oracle Software and Applying Patches".

Verifying Oracle Enterprise Manager Operations

When you create an Oracle RAC database and choose Database Control for your database management, the Enterprise Manager Database Control utility is installed and configured automatically.

To verify Oracle Enterprise Manager Database Control has been started in your new Oracle RAC environment:

  1. Make sure the ORACLE_SID environment variable is set to the name of the instance to which you want to connect, for example sales1. Also make sure the ORACLE_HOME environment variable is set to the location of the installed Oracle Database software.

    $ echo $ORACLE_SID
    sales
    $ export ORACLE_SID=sales1
    $ echo $ORACLE_HOME
    /opt/oracle/11gR1/db_1
    
  2. Go to the Oracle_home/bin directory.

  3. Run the following command as the oracle user:

    ./emctl status dbconsole
    

    The Enterprise Manager Control (EMCTL) utility displays the current status of the Database Control console on the current node.

  4. If the EMCTL utility reports that Database Control is not started, use the following command to start it:

    ./emctl start dbconsole
    
  5. Repeat Step 1 through Step 3 for each node in the cluster.
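Rather than repeating the check by hand on every node, the per-node status check can be scripted. This is only a sketch: the node names come from this guide's example, and the ssh line is commented out because it requires a live cluster with passwordless ssh configured for the oracle user:

```shell
# Example node list from this guide; adjust for your cluster
nodes="docrac1 docrac2"

for node in $nodes; do
  echo "checking Database Control on $node"
  # On a real cluster, uncomment the next line (assumes the same
  # ORACLE_HOME on every node):
  # ssh "$node" "$ORACLE_HOME/bin/emctl status dbconsole"
done
```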

Recommended Postinstallation Tasks

Oracle recommends that you complete the following tasks after installing Oracle RAC:

About Backing Up the root.sh Script

Oracle recommends that you back up the root.sh script after you complete an installation. If you install other products in the same Oracle home directory, OUI updates the contents of the existing root.sh script during the installation. If you require information contained in the original root.sh script, then you can recover it from the root.sh backup copy.
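A minimal backup is just a timestamped copy of the script. The sketch below uses a throwaway directory standing in for the Oracle home; on a real system you would copy the root.sh in your actual Oracle home:

```shell
# Stand-in Oracle home with an empty root.sh for demonstration
demo_home=/tmp/demo_db_home
mkdir -p "$demo_home"
: > "$demo_home/root.sh"

# Keep a dated copy so a later installation cannot silently overwrite it
backup="$demo_home/root.sh.$(date +%Y%m%d)"
cp "$demo_home/root.sh" "$backup"
ls "$demo_home"
```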

About Configuring User Accounts

The oracle user operating system account is the account that you used to install the Oracle software. You can use different operating system accounts for accessing and managing your Oracle RAC database.

Converting an Oracle Database to an Oracle RAC Database

You can use either the rconfig utility or Oracle Enterprise Manager to convert a single-instance database installation to an Oracle RAC database. The rconfig utility is a command-line tool. In Oracle Enterprise Manager Grid Control, the database administration option Convert to Cluster Database provides a GUI-based conversion tool.

Preparing for Database Conversion

Before you start the process of converting your database to a cluster database, your database environment must meet certain prerequisites:

  • The existing database and the target Oracle RAC database must be on the same release of Oracle Database 11g and must be running on the same platform.

  • The hardware and operating system software used to implement your Oracle RAC database must be certified for use with the release of the Oracle RAC software you are installing.

  • You must configure shared storage for your Oracle RAC database.

  • You must verify that any applications that will run against the Oracle RAC database do not need any additional configuration before they can be used successfully with the cluster database. This applies to both Oracle applications and database features, such as Oracle Streams, and applications and products that do not come from Oracle.

  • Backup procedures should be available before converting from a single-instance Oracle Database to Oracle RAC.

  • For archiving in Oracle RAC environments, the archive log file format requires a thread number.

  • The archived redo log files from all instances of an Oracle RAC database are required for media recovery. Therefore, if you archive to a file and do not use a cluster file system, or some other means of providing shared file systems, then you need a method of accessing the archived redo log files from all nodes on which the cluster database has instances.
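For example, an archive log format such as log_%t_%s_%r.arc embeds the thread number (%t) along with the log sequence number (%s) and resetlogs ID (%r), so each instance's archived logs get distinct names. The sketch below simply expands such a format string for one hypothetical instance to show the effect; the numeric values are made up for illustration:

```shell
# Hypothetical values for instance 1 of the example cluster
thread=1
sequence=25
resetlogs_id=658793257

# Expand a RAC-style archive log format string; %t is the thread number
fmt='log_%t_%s_%r.arc'
name=$(printf '%s\n' "$fmt" | sed -e "s/%t/$thread/" \
                                  -e "s/%s/$sequence/" \
                                  -e "s/%r/$resetlogs_id/")
echo "$name"
```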

Note:

Before using individual Oracle Database 11g database products or options, refer to the product documentation library, which is available in the DOC directory on the 11g Release 1 (11.1) installation media, or on the OTN Web site at http://www.oracle.com/technetwork/indexes/documentation/index.html

Overview of the Database Conversion Process Using Grid Control

The following list provides an outline of the process of converting a single-instance database to an Oracle RAC database using Oracle Enterprise Manager Grid Control:

  • Complete the prerequisite tasks for converting to an Oracle RAC database:

    • Oracle Clusterware and Oracle Database software are installed on all target nodes.

    • Oracle Clusterware is started.

    • The Oracle Database binary is enabled for Oracle RAC on all target nodes.

    • Shared storage is configured and accessible from all nodes.

    • User equivalency is configured for the operating system user performing the conversion.

    • Enterprise Manager agents are configured and running on all nodes, and are configured with the cluster and host information.

    • The database being converted has been backed up successfully.

  • Access the Database Home page for the database you want to convert.

  • Go to the Server subpage and select Convert to Cluster Database.

  • Provide the necessary credentials.

  • Select the host nodes that will contain instances of the new database.

  • Provide listener and instance configuration information.

  • Specify the location of the shared storage to be used for the datafiles.

  • Submit the job.

  • Complete the post-conversion tasks.

See Also:

Oracle Real Application Clusters Installation Guide for Linux and UNIX, or for a different platform, for a complete description of this process

Overview of the Database Conversion Process Using rconfig

The following list provides an outline of the process of converting a single-instance database to an Oracle RAC database using the rconfig utility:

  • Complete the prerequisite tasks for converting to an Oracle RAC database.

    • Oracle Clusterware and Oracle Database software are installed on all target nodes.

    • Oracle Clusterware is started.

    • The Oracle Database binary is enabled for Oracle RAC on all target nodes.

    • Shared storage is configured and accessible from all nodes.

    • User equivalency is configured for the operating system user performing the conversion.

    • The database being converted has been backed up successfully.

  • Modify the parameters in the Oracle_home/assistants/rconfig/sampleXMLs/ConvertToRAC.xml file as appropriate for your environment, then save the file.

  • Run the rconfig command, supplying the name of the modified ConvertToRAC.xml file as input.

  • Complete the post-conversion tasks.
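Put together, running the conversion amounts to a single command that takes the edited XML file as input. The sketch below only echoes the intended invocation, because rconfig cannot run outside a real cluster; the paths are this guide's example Oracle home:

```shell
# Example Oracle home from this guide; adjust for your installation
ORACLE_HOME=/opt/oracle/11gR1/db_1
xml="$ORACLE_HOME/assistants/rconfig/sampleXMLs/ConvertToRAC.xml"

# On a real cluster, drop the echo to run the conversion
echo "$ORACLE_HOME/bin/rconfig $xml"
```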

Note:

When converting a single-instance database to an Oracle RAC database using the rconfig utility, if the single-instance database has Database Control configured, Oracle recommends deconfiguring Database Control before the conversion so that the converted database will have Oracle RAC Database Control configured. Use the following steps:
  • De-configure Database Control on the single-instance database using the following command:

    emca -deconfig dbcontrol db
    
  • Run the rconfig utility to convert the single-instance database to an Oracle RAC database.

  • Run DBCA to configure Database Control for the cluster database.

See Also:

Oracle Real Application Clusters Installation Guide for Linux and UNIX, or for a different platform, for a complete description of this process