Start the Oracle Grid Infrastructure Installer
Use this procedure to start up the Oracle Grid Infrastructure installer, and provide information for the configuration.
Note:
The instructions in the "Oracle Database Patch 34130714 - GI Release Update 19.16.0.0.220719" patch notes tell you to use opatchauto to install the patch. However, this patch should be applied by the Oracle Grid Infrastructure installer, using the -applyRU argument.
- From your client connection to the Podman container as the Grid user on racnode1, start the installer using the following command:

  /u01/app/19c/grid/gridSetup.sh -applyRU /software/stage/34130714 \
    -applyOneOffs /software/stage/34339952,/software/stage/32869666
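Before launching the installer, it can help to confirm that the staged patch directories referenced by -applyRU and -applyOneOffs actually exist. The following is a minimal sketch: the check_stage helper is hypothetical, and the staging root is passed as an argument so the same check works for other layouts.

```shell
# check_stage: hypothetical helper that verifies each staged patch directory
# exists under the given staging root before gridSetup.sh is launched.
# The patch numbers are the ones used in the gridSetup.sh command above.
check_stage() {
    root=$1
    rc=0
    for patch in 34130714 34339952 32869666; do
        if [ ! -d "$root/$patch" ]; then
            echo "missing: $root/$patch" >&2
            rc=1
        fi
    done
    return $rc
}

# Example (not run here): only launch the installer when all patches are staged.
#   check_stage /software/stage && /u01/app/19c/grid/gridSetup.sh ...
```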
- Choose the option Configure Grid Infrastructure for a New Cluster, and click Next.
  The Select Cluster Configuration window appears.
- Choose the option Configure an Oracle Standalone Cluster, and click Next.
- In the Cluster Name and SCAN Name fields, enter the names for your cluster and for the cluster Single Client Access Names (SCANs) that are unique throughout your entire enterprise network. For this example, we used these names:
  - Cluster Name: raccluster01
  - SCAN Name: racnode-scan
  - SCAN Port: 1521
- If you have configured your domain name server (DNS) to send name resolution requests for the subdomain to the GNS virtual IP address, then you can select Configure GNS. Click Next.
- In the Public Hostname column of the table of cluster nodes, check that you have the following values set:
  - Public Hostname: racnode1.example.info, racnode2.example.info
  - Virtual Hostname: racnode1-vip.example.info, racnode2-vip.example.info
- Click SSH connectivity, and set up SSH between racnode1 and racnode2. When SSH is configured, click Next.
- On the Network Interface Usage window, select the following:
  - eth0 10.0.20.0 for the Public network
  - eth1 192.168.17.0 for the first Oracle ASM and Private network
  - eth2 192.168.18.0 for the second Oracle ASM and Private network
  After you make those selections, click Next.
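If you want to double-check which subnet each interface carries before assigning network roles in the installer, the interface-to-address pairs can be pulled out of the one-line output of "ip -o -4 addr show". This is a small sketch; the show_iface_subnets name is an assumption, and the column positions follow iproute2's -o (oneline) format.

```shell
# show_iface_subnets: print "interface address/prefix" pairs from the
# one-line output of "ip -o -4 addr show" supplied on stdin.
# Column positions ($2 = device name, $4 = address/prefix) follow the
# iproute2 -o format.
show_iface_subnets() {
    awk '{ print $2, $4 }'
}

# Example usage on a live node (not run here):
#   ip -o -4 addr show | show_iface_subnets
```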
- On the Storage Option window, select Use Oracle Flex ASM for Storage and click Next.
- On the GIMR Option window, select the default, and click Next.
- On the Create ASM Disk Group window, click Change Discovery Path, set the value for Disk Discovery Path to /dev/asm*, and click OK. Provide the following values:
  - Disk group name: DATA
  - Redundancy: External
  - Allocation Unit Size: select the default
  - Select Disks, and provide the following values:
    /dev/asm-disk1
    /dev/asm-disk2
  When you have entered these values, click Next.
- On the ASM Password window, provide the passwords for the SYS and ASMSNMP users, and click Next.
- On the Failure Isolation window, select the default, and click Next.
- On the Management Options window, select the default, and click Next.
- On the Operating System Group window, select the default, and click Next.
- On the Installation Location window, for Oracle base, enter the path /u01/app/grid, and click Next.
- On the Oracle Inventory window, for Inventory Directory, enter /u01/app/oraInventory, and click Next.
- On the Root Script Execution window, select the default, and click Next.
- On the Prerequisite Checks window, under Verification Results, you may see a Systemd status warning. You can ignore this warning, and proceed. If you encounter an unexpected warning, then refer to Known Issues in My Oracle Support ID 2885873.1.
- On the Prerequisite Checks window, it is possible that you see a warning indicating that the package cvuqdisk-1.0.10-1 is missing, along with the failure message "Device Checks for ASM" failed. If this warning appears, then you must install the cvuqdisk-1.0.10-1 package on both containers. The Fix & Check Again button is disabled in this case, so you need to install the package manually. Complete the following steps:
  - Open a terminal, and log in as root on racnode1.
  - Run the following command to install the cvuqdisk RPM package:
    rpm -ivh /tmp/GridSetupActions*/CVU_*/cvuqdisk-1.0.10-1.rpm
  - Click Check Again.
  - Repeat the preceding steps to install the RPM on racnode2.
  You should not see any further warnings or failure messages. The installer should automatically proceed to the next window.
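The RPM path in the step above contains wildcards. One way to resolve them to the actual staged file before invoking rpm is sketched below; the find_cvuqdisk helper is hypothetical, and it simply expands the glob under a given root and echoes the first match.

```shell
# find_cvuqdisk: expand the staged cvuqdisk RPM path under the given root.
# The glob pattern matches the path used in the rpm -ivh step above;
# returns nonzero if no matching file exists.
find_cvuqdisk() {
    set -- "$1"/GridSetupActions*/CVU_*/cvuqdisk-1.0.10-1.rpm
    [ -e "$1" ] && echo "$1"
}

# Example (not run here): install the resolved RPM as root.
#   rpm -ivh "$(find_cvuqdisk /tmp)"
```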
- Click Install.
- When prompted, run orainstRoot.sh and root.sh on racnode1 and racnode2.
- After installation is complete, confirm that the CRS stack is up:
  $ORACLE_HOME/bin/crsctl stat res -t
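As a rough sanity check, the crsctl output can be scanned for resources that are not online. The sketch below assumes that healthy output contains no OFFLINE state fields; the crs_all_online helper is hypothetical, and real crsctl output varies by version and configuration.

```shell
# crs_all_online: read "crsctl stat res -t" output on stdin and fail if any
# resource line reports the OFFLINE state.
crs_all_online() {
    if grep -q 'OFFLINE'; then
        echo "warning: some resources are OFFLINE" >&2
        return 1
    fi
    echo "no OFFLINE resources reported"
}

# Example (not run here):
#   $ORACLE_HOME/bin/crsctl stat res -t | crs_all_online
```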
Parent topic: Run the Oracle Grid Infrastructure Installer