Installing Oracle Grid Infrastructure to Manage Generic Applications

Complete this procedure to install and configure Oracle Grid Infrastructure software to manage generic applications, or to perform single-server rolling database maintenance.

  1. As the grid user, download the Oracle Grid Infrastructure image files and extract the files into the Grid home. For example:
    mkdir -p /u01/app/23.0.0/grid
    chown grid:oinstall /u01/app/23.0.0/grid
    cd /u01/app/23.0.0/grid
    unzip -q download_location/grid.zip

    grid.zip is the name of the Oracle Grid Infrastructure image zip file.

    Note:

    • You must extract the zip image software into the directory where you want your Grid home to be located.

    • Download and copy the Oracle Grid Infrastructure image files to the local node only. During installation, the software is copied and installed on all other nodes in the cluster.

    • Neither the Oracle home nor the Oracle base can be a symbolic link, nor can any of their parent directories, all the way up to the root directory.
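    To confirm that no component of the path is a symbolic link, you can resolve the Grid home path and compare it to the path you specified (a quick check using the example paths in this guide):

      readlink -f /u01/app/23.0.0/grid

    If the resolved path matches the path you typed, then no symbolic links are involved.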

  2. Configure the shared disks for use with Oracle ASM Filter Driver:
    1. Log in as the root user and set the environment variable ORACLE_HOME to the location of the Grid home.

      For C shell:

      su root
      setenv ORACLE_HOME /u01/app/23.0.0/grid
      

      For bash shell:

      su root
      export ORACLE_HOME=/u01/app/23.0.0/grid
      
    2. Use the Oracle ASM command line tool (ASMCMD) to provision the disk devices for use with Oracle ASM Filter Driver (Oracle ASMFD).
      cd /u01/app/23.0.0/grid/bin
      ./asmcmd afd_label DATA1 /dev/sdb --init
      ./asmcmd afd_label DATA2 /dev/sdc --init
      ./asmcmd afd_label DATA3 /dev/sdd --init
    3. Verify that the devices have been marked for use with Oracle ASMFD.
      ./asmcmd afd_lslbl /dev/sdb
      ./asmcmd afd_lslbl /dev/sdc
      ./asmcmd afd_lslbl /dev/sdd
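      Each command should return output similar to the following, confirming the label you assigned (the exact formatting can vary by release):

      --------------------------------------------------------------------------------
      Label                     Duplicate  Path
      ================================================================================
      DATA1                                 /dev/sdb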
  3. Log in as the grid user, and start the Oracle Grid Infrastructure installer by running the following command:
    /u01/app/23.0.0/grid/gridSetup.sh

    Note:

    You can run the gridSetup.sh command with the oracle_install_crs_AHF_InstallLoc=path and oracle_install_crs_AHF_RepositoryLoc=path flags to change the Autonomous Health Framework (AHF) installation location and repository location, respectively.
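    For example, to place the AHF installation and repository under directories of your choosing (the paths shown here are hypothetical):

      /u01/app/23.0.0/grid/gridSetup.sh oracle_install_crs_AHF_InstallLoc=/opt/oracle.ahf oracle_install_crs_AHF_RepositoryLoc=/opt/oracle.ahf/data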
    The installer starts and the Select Configuration Option window appears.
  4. Choose the option Configure Oracle Grid Infrastructure for a New Cluster, then click Next.
    The Select Cluster Configuration window appears.
  5. Choose the option Configure cluster to manage generic applications and provide a name for your cluster.
    Optionally, select the Configure as Extended Cluster option to extend the cluster across two or more separate sites, each equipped with its own storage. Then click Next.
    The Cluster Node Information window appears.
  6. In the Public Hostname column of the table of cluster nodes, you should see your local node, for example node1.example.com.

    Note the following additional information about node public hostnames:

    • For the local node only, OUI automatically fills in the public hostname field. When you enter the public node name, use the primary host name of each node; that is, the name displayed by the /bin/hostname command.

    • Virtual host names are not required. You can use a single network interface for Oracle ASM, private interconnect, and public communication.

    1. Click Add to add another node to the cluster.
    2. Enter the second node's public name (node2), then click OK.
      You are returned to the Cluster Node Information window. You should now see all nodes listed in the table of cluster nodes.
    3. Make sure all nodes are selected, then click the SSH Connectivity button at the bottom of the window.
      The bottom panel of the window displays the SSH Connectivity information.
    4. Enter the operating system user name and password for the Oracle software owner (grid). If you have configured SSH connectivity between the nodes, then select the Reuse private and public keys existing in user home option. Click Setup.
      A message window appears, indicating that it might take several minutes to configure SSH connectivity between the nodes. After a short period, another message window appears indicating that passwordless SSH connectivity has been established between the cluster nodes. Click OK to continue.
    5. When returned to the Cluster Node Information window, click Next to continue.
    The Specify Network Interface Usage window appears.
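    As an optional check after completing the SSH setup in the previous step, you can verify passwordless connectivity from the command line (a quick test using the node names from this example):

      ssh node2 date

    The command should print the date on node2 without prompting for a password.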
  7. Select the usage type Public, Private & ASM for one network interface.

    Note:

    You can use only one network interface for an Oracle Grid Infrastructure deployment that manages generic applications. If you have additional network interfaces, then set their interface type to Do Not Use.
    Click Next. The Storage Option Information window appears.
  8. Select a storage option for Oracle Cluster Registry (OCR) and voting files:
    1. Select Use Oracle Flex ASM for storage to store OCR and voting files on an Oracle ASM disk group.
      You can also select Configure a separate disk group to store backups of OCR to create another Oracle ASM disk group for OCR backups.
    2. Select Configure as ASM Client Cluster to store OCR and voting files on an Oracle ASM disk group configured on a storage server cluster. Specify the complete path to the ASM client data file in the ASM Client Data field.
    3. Select Use Shared File System to store OCR and voting files on a shared file system.
    Click Next. The Create ASM Disk Group window appears.
  9. Provide the name and specifications for the Oracle ASM disk group.
    1. In the Disk Group Name field, enter a name for the disk group, for example DATA.
    2. Choose the Redundancy level for this disk group. Normal is the recommended option.
    3. In the Select Disks section, choose the disks to add to this disk group.

      In the Add Disks section you should see the disks that you labeled in Step 2. If you do not see the disks, click the Change Discovery Path button and provide a path and pattern match for the disk. For example, /dev/sd* for local Oracle ASM disks and n:/*/* for NVMe over Fabrics disks.

      During installation, disks labeled as Oracle ASMFD disks or Oracle ASMLIB disks are listed as candidate disks when you use the default discovery string. However, if a disk has a header status of MEMBER, then it is not a candidate disk.

    4. If you want to use Oracle ASM Filter Driver (Oracle ASMFD) to manage your Oracle ASM disk devices, then select the option Configure Oracle ASM Filter Driver.
      If you are installing on Linux systems and you want to use Oracle ASMFD to manage your Oracle ASM disk devices, then you must deinstall the Oracle ASM library driver (Oracle ASMLIB) before starting the Oracle Grid Infrastructure installation; a quick way to check for Oracle ASMLIB is shown after this step.
    When you have finished providing the information for the disk group, click Next.
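    If you are not sure whether Oracle ASMLIB is installed on your Linux system, a quick package query is (assuming an RPM-based distribution):

      rpm -qa | grep oracleasm

    If the query lists packages such as oracleasmlib or oracleasm-support, remove them before starting the installation.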
  10. If you selected the ASM client cluster option in Step 8, then the ASM Client Storage Option window appears. Select an Oracle ASM disk group from the storage server cluster to store the OCR and voting files.
  11. If you selected to use a separate disk group for OCR backup, then the Backup Data Disk Group window appears. Provide the name and specifications for the OCR backup disk group.
    1. In the Disk Group Name field, enter a name for the disk group, for example RECO.
    2. Choose the Redundancy level for this disk group. Normal is the recommended option.
    3. In the Add Disks section, choose the disks to add to this disk group.
    When you have finished providing the information for the disk group, click Next.
    The Specify ASM Password window appears.
  12. Choose the same password for the Oracle ASM SYS and ASMSNMP accounts, or specify a different password for each account, then click Next.
    The Automatic Self Correction window appears.
  13. Select the Enable Automatic Self Correction option if you want to configure automatic self correction for your installation, then click Next.
    The automated fixup framework for Cluster Verification Utility (CVU) identifies and corrects any configuration errors.
    The Failure Isolation Support window appears.
  14. Select the option Do not use Intelligent Platform Management Interface (IPMI), then click Next.
    The Specify Management Options window appears.
  15. If you have Enterprise Manager Cloud Control installed in your enterprise, then choose the option Register with Enterprise Manager (EM) Cloud Control and provide the EM configuration information. If you do not have Enterprise Manager Cloud Control installed in your enterprise, then click Next to continue.
    The Privileged Operating System Groups window appears.
  16. Accept the default operating system group names for Oracle ASM administration and click Next.
    The Specify Installation Location window appears.
  17. Specify the directory to use for the Oracle base for the Oracle Grid Infrastructure installation, then click Next. The Oracle base directory must be different from the Oracle home directory.
    If you copied the Oracle Grid Infrastructure installation files into the Oracle Grid home directory as directed in Step 1, then the default location for the Oracle base directory should display as /u01/app/grid.
    If you have not installed Oracle software previously on this computer, then the Create Inventory window appears.
  18. Change the path for the inventory directory, if required. Then, click Next.
    If you are using the same directory names as the examples in this book, then it should show a value of /u01/app/oraInventory. The group name for the oraInventory directory should show oinstall.
    The Root Script Execution Configuration window appears.
  19. Select the option to Automatically run configuration scripts. Enter the credentials for the root user or a sudo account, then click Next.
    Alternatively, you can Run the scripts manually as the root user at the end of the installation process when prompted by the installer.
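    If you use a sudo account, that account must be able to run the configuration scripts as root. A hypothetical /etc/sudoers.d entry might look like the following (a sketch only; your security policy, and the installer, may require broader privileges):

      # Hypothetical entry allowing the grid user to run the root scripts as root
      grid ALL=(root) /u01/app/oraInventory/orainstRoot.sh, /u01/app/23.0.0/grid/root.sh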
    The Perform Prerequisite Checks window appears.
  20. If any of the checks have a status of Failed and are not Fixable, then you must manually correct these issues. After you have fixed the issue, you can click the Check Again button to have the installer recheck the requirement and update the status. Repeat as needed until all the checks have a status of Succeeded. Click Next.
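    If you prefer to re-run the prerequisite checks from the command line before retrying them in the installer, you can use the Cluster Verification Utility script included in the Grid home (shown here with the node names used in this example):

      cd /u01/app/23.0.0/grid
      ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose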

    The Summary window appears.

  21. Review the contents of the Summary window and then click Install.
    The installer displays a progress indicator enabling you to monitor the installation process.
  22. If you did not configure automation of the root scripts, then you are required to run certain scripts as the root user, as specified in the Run Configuration Scripts window. Do not click OK until you have run all the scripts. Run the scripts on all nodes as directed, in the order shown.

    For example, on Oracle Linux you perform the following steps:

    1. As the grid user on node1, open a terminal window, and enter the following commands:

      cd /u01/app/oraInventory
      su
      
    2. Enter the password for the root user, and then enter the following command to run the first script on node1:

      ./orainstRoot.sh
      
    3. After the orainstRoot.sh script finishes on node1, open another terminal window, and as the grid user, enter the following commands:

      ssh node2
      cd /u01/app/oraInventory
      su
      
    4. Enter the password for the root user, and then enter the following command to run the first script on node2:

      ./orainstRoot.sh
      
    5. After the orainstRoot.sh script finishes on node2, go to the terminal window you opened in substep 1 of this step. As the root user on node1, enter the following commands to run the second script, root.sh:

      cd /u01/app/23.0.0/grid
      ./root.sh
      

      Press Enter at the prompt to accept the default value.

      Note:

      You must run the root.sh script on the first node and wait for it to finish. You can then run the root.sh script concurrently on all other nodes except the last node. Like the first node, run the root.sh script on the last node separately, after it has finished on all other nodes.

    6. After the root.sh script finishes on node1, go to the terminal window you opened in substep 3 of this step. As the root user on node2, enter the following commands:

      cd /u01/app/23.0.0/grid
      ./root.sh
      

      After the root.sh script completes, return to the Oracle Universal Installer window where the Installer prompted you to run the orainstRoot.sh and root.sh scripts. Click OK.

      The software installation monitoring window reappears.

  23. Continue monitoring the installation until the Finish window appears. Then click Close to complete the installation process and exit the installer.
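    To confirm that Oracle Clusterware is running on all nodes after the installer exits, you can run a quick status check from the Grid home (using the example paths in this guide):

      /u01/app/23.0.0/grid/bin/crsctl check cluster -all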

Caution:

After installation is complete, do not manually remove, and do not run cron jobs that remove, the /tmp/.oracle or /var/tmp/.oracle directories or their files while Oracle software is running on the server. If you remove these files, then the Oracle software can encounter intermittent delays. Oracle Clusterware installations can fail with the error:

CRS-0184: Cannot communicate with the CRS daemon.
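On Linux systems where systemd-tmpfiles periodically cleans /tmp, one way to protect these directories is an exclusion file (a sketch that assumes systemd-tmpfiles manages /tmp cleanup on your system; the file name is hypothetical):

# /etc/tmpfiles.d/oracle.conf
x /tmp/.oracle*
x /var/tmp/.oracle*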

After your Oracle Grid Infrastructure installation is complete, you can install Oracle Database or other generic applications on a cluster node for high availability.
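For example, a generic application can be registered with Oracle Clusterware as a resource managed through an action script (a sketch with a hypothetical resource name and script path; see the crsctl documentation for the full attribute list):

/u01/app/23.0.0/grid/bin/crsctl add resource myapp -type cluster_resource -attr "ACTION_SCRIPT=/u01/app/scripts/myapp.scr,PLACEMENT=favored,CHECK_INTERVAL=30,RESTART_ATTEMPTS=2"
/u01/app/23.0.0/grid/bin/crsctl start resource myapp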

If you are converting a single-instance database for single-server rolling database maintenance, then you can continue with the next steps of the conversion procedure.