3 Installing Oracle Grid Infrastructure

Before installing Oracle Real Application Clusters (Oracle RAC) and Oracle RAC One Node using Oracle Universal Installer (OUI), you must first install the Oracle Grid Infrastructure for a cluster software, which consists of Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM).

3.1 Using Rapid Home Provisioning to Install Oracle Grid Infrastructure

Rapid Home Provisioning is a method of deploying software homes to any number of nodes in a data center from a single cluster, and also facilitates patching and upgrading software.

With Rapid Home Provisioning, you create, store, and manage templates of Oracle homes as images (called gold images) of Oracle software, such as databases, middleware, and applications. You can make a working copy of any gold image, and then you can provision that working copy to any node in the data center.

You store the gold images in a repository located on a Rapid Home Provisioning Server, which runs on one server in the Rapid Home Provisioning Server cluster, a highly available provisioning system. With a single command, Rapid Home Provisioning can provision new Grid homes to servers for Oracle Grid Infrastructure 11.2.0.4, 12.1.0.2, and 12.2. There are no prerequisites for the target servers: you do not need to install any client or agent software on the servers before provisioning the Oracle Grid Infrastructure software.
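
For example, on the Rapid Home Provisioning Server you might import a gold image from a zipped Grid home and then provision a working copy of it, using rhpctl commands along the following lines. This is only a sketch: the image and working copy names are illustrative, additional options (such as a response file) are required when provisioning Oracle Grid Infrastructure, and the exact syntax varies by release; see the Oracle Clusterware Administration and Deployment Guide for the complete syntax.

# Illustrative sketch: import a gold image from a zipped Grid home
rhpctl import image -image GI_12201 -zip /u01/stage/grid_home_image.zip
# Provision a working copy of that gold image
rhpctl add workingcopy -workingcopy GI_HOME_1 -image GI_12201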

See Also:

Oracle Clusterware Administration and Deployment Guide for more information about Rapid Home Provisioning

3.2 Installing Oracle Grid Infrastructure for a Cluster

The software for Oracle Grid Infrastructure for a cluster consists of Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM).

3.2.1 About Cluster Verification Utility Fixup Scripts on Linux and UNIX

During installation on Linux and UNIX platforms, for certain prerequisite check failures, you can click Fix & Check Again to generate a fixup script.

The installer detects when the minimum requirements for an installation are not met, and creates shell scripts, called Fixup scripts, to finish incomplete system configuration steps. If the installer detects an incomplete task, then it generates a Fixup script (runfixup.sh). You also can have CVU generate Fixup scripts before installation.

Fixup scripts do the following:

  • If necessary, set kernel parameters to values required for successful installation, including:

    • Shared memory parameters.

    • Open file descriptor and UDP send/receive parameters.

  • Create and set permissions on the Oracle Inventory (central inventory) directory.

  • Create or reconfigure primary and secondary group memberships for the installation owner, if necessary, for the Oracle Inventory directory and the operating system privileges groups.

  • Set shell limits if necessary to required values.

You can run the script after you click Fix & Check Again. The installer prompts you to run the fixup script as the root user in a separate session, or you can specify through the installer interface that the script should be run automatically. You must run the script on all the nodes specified by the installer.

Modifying the contents of the generated fixup script is not recommended.
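
For example, the following Cluster Verification Utility command, run from the directory where you staged the installation software, performs the preinstallation checks for an Oracle Clusterware installation and generates fixup scripts for any failed checks it can correct (the node names are illustrative):

./runcluvfy.sh stage -pre crsinst -n racnode1,racnode2 -fixup -verbose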

Note:

Using fixup scripts does not ensure that all the required prerequisites for installing Oracle Grid Infrastructure for a cluster and Oracle RAC are satisfied. You must still verify that all the requirements listed in Preparing Your Cluster are met to ensure a successful installation.

3.2.2 Installing Oracle Grid Infrastructure for a New Cluster

Complete this procedure to install Oracle Grid Infrastructure (Oracle Clusterware and Oracle Automatic Storage Management) on your cluster.

Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), the installation media is replaced with a zip file for the Oracle Grid Infrastructure installer. Run the installation wizard after extracting the zip file into the target home path.

At any time during installation, if you have a question about what you are being asked to do, or what input you are required to provide during installation, click the Help button on the installer page.

You should have your network information, storage information, and operating system users and groups available to you before you start installation, and you should be prepared to run root scripts.

As the user that owns the software for Oracle Grid Infrastructure for a cluster (grid) on the first node, install Oracle Grid Infrastructure for a cluster. Note that the installer uses Secure Shell (SSH) to copy the binary files from this node to the other nodes during the installation. During installation, in the Cluster Node Information window, in which you specify the nodes in your cluster, you can click SSH Connectivity and the installer configures SSH connectivity between the specified nodes for you.

Note:

These installation instructions assume you do not already have any Oracle software installed on your system. If you have already installed Oracle ASMLIB, then you will not be able to install Oracle ASM Filter Driver (Oracle ASMFD) until you uninstall Oracle ASMLIB. You can use Oracle ASMLIB instead of Oracle ASMFD for managing the disks used by Oracle ASM, but those instructions are not included in this guide.

To install the software for Oracle Grid Infrastructure for a cluster:

  1. As the grid user, download the Oracle Grid Infrastructure image files and unzip the files into the Grid home. For example:
    mkdir -p /u01/app/12.2.0/grid
    chown grid:oinstall /u01/app/12.2.0/grid
    cd /u01/app/12.2.0/grid
    unzip -q download_location/grid_home_image.zip

    Note:

    • You must extract the zip image software into the directory where you want your Grid home to be located.

    • Download and copy the Oracle Grid Infrastructure image files to the local node only. During installation, the software is copied and installed on all other nodes in the cluster.

  2. Configure the shared disks for use with Oracle ASM Filter Driver:
    1. Log in as the root user and set the environment variable $ORACLE_HOME to the location of the Grid home and the environment variable $ORACLE_BASE to a temporary location.
      su root
      export ORACLE_HOME=/u01/app/12.2.0/grid
      export ORACLE_BASE=/tmp
      You set $ORACLE_BASE to a temporary location to avoid creating diagnostic or trace files in the Grid home before the Oracle Grid Infrastructure installation.
    2. Use the Oracle ASM command line tool (ASMCMD) to provision the disk devices for use with Oracle ASM Filter Driver.
      /u01/app/12.2.0/grid/bin/asmcmd afd_label DATA1 /dev/sdb --init
      /u01/app/12.2.0/grid/bin/asmcmd afd_label DATA2 /dev/sdc --init
      /u01/app/12.2.0/grid/bin/asmcmd afd_label DATA3 /dev/sdd --init
      
    3. Verify the devices have been marked for use with Oracle ASMFD; sample output is shown at the end of this step.
      /u01/app/12.2.0/grid/bin/asmcmd afd_lslbl /dev/sdb
      /u01/app/12.2.0/grid/bin/asmcmd afd_lslbl /dev/sdc
      /u01/app/12.2.0/grid/bin/asmcmd afd_lslbl /dev/sdd
    4. Unset the ORACLE_BASE environment variable.
      unset ORACLE_BASE
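    If the disks were labeled successfully, the afd_lslbl commands in substep 3 display output similar to the following (shown here for /dev/sdb; the exact layout can vary by release):
      --------------------------------------------------------------------------------
      Label                     Duplicate  Path
      ================================================================================
      DATA1                                 /dev/sdb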
  3. Start the Oracle Grid Infrastructure wizard by running the following command:
    Grid_home/gridSetup.sh
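    For example, using the Grid home created in step 1:
    cd /u01/app/12.2.0/grid
    ./gridSetup.sh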
    The installer starts and the Select Configuration Option window appears.
  4. Choose the option Configure Grid Infrastructure for a New Cluster, then click Next.
    The Select Cluster Type window appears.
  5. Choose the option Configure an Oracle Standalone Cluster, then click Next.
    The Grid Plug and Play Information window appears.
  6. In the Cluster Name and SCAN Name fields, enter names for your cluster and cluster SCAN that are unique throughout your entire enterprise network, then click Next.

    For example, you might choose a name that is based on the node names' common prefix. This guide uses the cluster name docrac and the cluster SCAN name of docrac-scan.
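
    If the SCAN is registered in DNS, you can verify before installation that it resolves to the planned addresses, for example (the server and addresses shown are illustrative):

    nslookup docrac-scan
    Server:         192.0.2.10
    Address:        192.0.2.10#53

    Name:    docrac-scan.example.com
    Address: 192.0.2.101
    Name:    docrac-scan.example.com
    Address: 192.0.2.102
    Name:    docrac-scan.example.com
    Address: 192.0.2.103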

    The Cluster Node Information window appears.

  7. In the Public Hostname column of the table of cluster nodes, you should see your local node, for example racnode1.example.com.
    1. Click Add to add another node to the cluster.
    2. Enter the second node's public name (racnode2), and virtual IP name (racnode2-vip), then click OK.
      You are returned to the Cluster Node Information window. You should now see both nodes listed in the table of cluster nodes. Make sure the Role column is set to HUB for both nodes.
    3. Make sure both nodes are selected, then click the SSH Connectivity button at the bottom of the window.
      The bottom panel of the window displays the SSH Connectivity information.
    4. Enter the operating system user name and password for the Oracle software owner (grid). If you have configured SSH connectivity between the nodes, then select the option Reuse private and public keys existing in user home. Click Setup.
      A message window appears, indicating that it might take several minutes to configure SSH connectivity between the nodes. After a short period, another message window appears indicating that passwordless SSH connectivity has been established between the cluster nodes. Click OK to continue.
    5. When returned to the Cluster Node Information window, click Next to continue.
    The Specify Network Interface Usage page appears.
  8. Select the usage type for each network interface displayed, then click Next.
    Verify that each interface has the correct interface type associated with it. If you have network interfaces that should not be used by Oracle Clusterware, then set the network interface type to Do Not Use. For example, if you have only two network interfaces, then set the public interface to have a Use For value of Public and set the private network interface to have a Use For value of ASM & Private.
    The Storage Option Information window appears.
  9. Select the option Use Oracle Flex ASM for storage, then click Next to continue.
    The Grid Infrastructure Management Repository Option window appears.
  10. Choose whether you want to store the Grid Infrastructure Management Repository in a separate Oracle ASM disk group.
  11. Select the Local option, then click Next to continue.
    The Create ASM Disk Group window appears.
  12. Provide the name and specifications for the Oracle ASM disk group.
    1. In the Disk Group Name field, enter a name for the disk group, for example DATA.
    2. Choose the Redundancy level for this disk group. Normal is the recommended option.
    3. In the Add Disks section, choose the disks to add to this disk group.
      In the Add Disks section you should see the disks that you labeled in Step 2. If you do not see the disks, click the Change Discovery Path button and provide a path and pattern match for the disk, for example, /dev/sd*
    4. Check the option Configure Oracle ASM Filter Driver.
    When you have finished providing the information for the disk group, click Next.
    The Specify ASM Password window appears.
  13. Choose to use the same password for the Oracle ASM SYS and ASMSNMP accounts, or specify different passwords for each account, then click Next.
    The Failure Isolation Support window appears.
  14. Select the option Do not use Intelligent Platform Management Interface (IPMI), then click Next.
    The Specify Management Options window appears.
  15. If you have Enterprise Manager Cloud Control installed in your enterprise, then choose the option Register with Enterprise Manager (EM) Cloud Control and provide the EM configuration information. If you do not have Enterprise Manager Cloud Control installed in your enterprise, then click Next to continue.
    The Privileged Operating System Groups window appears.
  16. Accept the default operating system group names for Oracle ASM administration and click Next.
    The Specify Install Location window appears.
  17. Specify the directory to use for the Oracle base for the Oracle Grid Infrastructure installation, then click Next.
    If you copied the Oracle Grid Infrastructure installation files into the Oracle Grid home directory as directed in Step 1, then the default location for the Oracle base directory should display as /u01/app/grid.
    If you have not installed Oracle software previously on this computer, then the Create Inventory window appears.
  18. Change the path for the inventory directory, if required. Then, click Next.
    If you are using the same directory names as the examples in this book, then it should show a value of /u01/app/oraInventory. The group name for the oraInventory directory should show oinstall.
    The Root Script Execution Configuration window appears.
  19. Select the option to Automatically run configuration scripts. Enter the credentials for the root user or a sudo account, then click Next.
    Alternatively, you can Run the scripts manually as the root user at the end of the installation process when prompted by the installer.
    The Perform Prerequisite Checks window appears.
  20. If any of the checks have a status of Failed and are not Fixable, then you must manually correct these issues. After you have fixed the issue, you can click the Check Again button to have the installer recheck the requirement and update the status. Repeat as needed until all the checks have a status of Succeeded. Click Next.

    Figure 3-1 Perform Prerequisite Checks Window


    The Summary window appears.

  21. Review the contents of the Summary window and then click Install.
    The installer displays a progress indicator enabling you to monitor the installation process.
  22. If you did not configure automation of the root scripts, then you must run certain scripts as the root user, as specified in the Execute Configuration Scripts window that appears. Do not click OK until you have run the scripts. Run the scripts on all nodes as directed, in the order shown.

    For example, on Oracle Linux you perform the following steps (note that for clarity, the examples show the current user, node and directory in the prompt):

    1. As the oracle user on racnode1, open a terminal window, and enter the following commands:

      [oracle@racnode1 oracle]$ cd /u01/app/oraInventory
      [oracle@racnode1 oraInventory]$ su
      
    2. Enter the password for the root user, and then enter the following command to run the first script on racnode1:

      [root@racnode1 oraInventory]# ./orainstRoot.sh
      
    3. After the orainstRoot.sh script finishes on racnode1, open another terminal window, and as the oracle user, enter the following commands:

      [oracle@racnode1 oracle]$ ssh racnode2
      [oracle@racnode2 oracle]$ cd /u01/app/oraInventory
      [oracle@racnode2 oraInventory]$ su
      
    4. Enter the password for the root user, and then enter the following command to run the first script on racnode2:

      [root@racnode2 oraInventory]# ./orainstRoot.sh
      
    5. After the orainstRoot.sh script finishes on racnode2, go to the terminal window you opened in part 1 of this step. As the root user on racnode1, enter the following commands to run the second script, root.sh:

      [root@racnode1 oraInventory]# cd /u01/app/12.2.0/grid
      [root@racnode1 grid]# ./root.sh
      

      Press Enter at the prompt to accept the default value.

      Note:

      You must run the root.sh script on the first node and wait for it to finish. You can run root.sh scripts concurrently on all other nodes except for the last node on which you run the script. Like the first node, the root.sh script on the last node must be run separately.

    6. After the root.sh script finishes on racnode1, go to the terminal window you opened in part 3 of this step. As the root user on racnode2, enter the following commands:

      [root@racnode2 oraInventory]# cd /u01/app/12.2.0/grid
      [root@racnode2 grid]# ./root.sh
      

      After the root.sh script completes, return to the OUI window where the Installer prompted you to run the orainstRoot.sh and root.sh scripts. Click OK.

      The software installation monitoring window reappears.

  23. Continue monitoring the installation until the Finish window appears. Then click Close to complete the installation process and exit the installer.

Caution:

After installation is complete, do not manually remove, and do not run cron jobs that remove, the /tmp/.oracle or /var/tmp/.oracle directories or their files while Oracle software is running on the server. If you remove these files, then the Oracle software can encounter intermittent hangs. Oracle Clusterware installations can fail with the error:

CRS-0184: Cannot communicate with the CRS daemon.

3.2.3 Completing the Oracle Clusterware Configuration

After you have installed Oracle Clusterware, verify that the node applications are started.

Depending on which operating system you use, you may have to perform some postinstallation tasks to configure the Oracle Clusterware components properly.

To complete the Oracle Clusterware configuration on Oracle Linux:

  1. As the grid user on the first node, check the status of the Oracle Clusterware targets by entering the following command:
    /u01/app/12.2.0/grid/bin/crsctl check cluster -all
    

    This command shows whether the required Oracle Clusterware services, such as Cluster Ready Services (CRS), Cluster Synchronization Services (CSS), and the Event Manager (EVM), are running on the nodes of your cluster.

  2. In the displayed output, you should see the Oracle Clusterware daemons are online for each node in the cluster.
    ******************************************************************
    racnode1:
    CRS-4537: Cluster Ready Services is online
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online
    ******************************************************************
    racnode2:
    CRS-4537: Cluster Ready Services is online
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online
    ******************************************************************
    

    If you see that one or more Oracle Clusterware resources are offline, or are missing, then the Oracle Clusterware software did not install properly.
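
    To identify which resources are offline, you can list the status of all Oracle Clusterware resources, for example:

    /u01/app/12.2.0/grid/bin/crsctl stat res -t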

Note:

Avoid changing host names after you complete the Oracle Clusterware installation, including adding or deleting domain qualifications. Nodes with changed host names must be deleted from the cluster and added back with the new name.

3.3 About Verifying the Oracle Clusterware Installation

Use Cluster Verification Utility (CVU) to verify that your installation is configured correctly.

After the Oracle Clusterware installation is complete, OUI automatically runs Cluster Verification Utility (CVU) as a configuration assistant to verify that the Oracle Clusterware installation has been completed successfully.

If CVU reports problems with your configuration, then correct these errors before proceeding.
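
You can also rerun the verification manually at any time after installation completes. For example, as the grid user (the node names are illustrative):

/u01/app/12.2.0/grid/bin/cluvfy stage -post crsinst -n racnode1,racnode2 -verbose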

See Also:

Oracle Clusterware Administration and Deployment Guide for more information about using CVU and resolving configuration problems

3.4 Confirming Oracle ASM Function for Oracle Clusterware Files

Confirm Oracle ASM is running after installing Oracle Grid Infrastructure.

After Oracle Grid Infrastructure installation, Oracle Clusterware files are stored on Oracle ASM. Use the following command syntax as the Oracle Grid Infrastructure installation owner (grid) to confirm that your Oracle ASM installation is running:

srvctl status asm

For example:

srvctl status asm
ASM is running on node1,node2,node3,node4

Note:

To manage Oracle ASM or Oracle Net 11g Release 2 (11.2) or later installations, use the srvctl binary in the Oracle Grid Infrastructure home for a cluster (Grid home). If you have Oracle Real Application Clusters or Oracle Database installed, then you cannot use the srvctl binary in the database home to manage Oracle ASM or Oracle Net.
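
For example, to be sure that you are using the correct binary, you can invoke srvctl by its full path in the Grid home used in this guide:

/u01/app/12.2.0/grid/bin/srvctl status asm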