This chapter describes how to complete the postinstallation tasks after you have installed the Oracle Grid Infrastructure software.
This chapter contains the following topics:
You must perform the following tasks after completing your installation:
Note: In prior releases, backing up the voting disks using a dd command was a required postinstallation task. With Oracle Clusterware release 11.2 and later, backing up and restoring a voting disk using the dd command may result in the loss of the voting disk, so this procedure is not supported.
Note: Browsers require an Adobe Flash plug-in, version 9.0.115 or later, to use My Oracle Support. To verify that your browser has the correct version of the Flash plug-in, go to the Adobe Flash checker page and install the latest version of Adobe Flash if necessary.
If you do not have Flash installed, then download the latest version of the Flash Player from the Adobe Web site:
To download required patch updates:
Use a Web browser to view the My Oracle Support Web site:
Log in to My Oracle Support Web site.
Note: If you are not a My Oracle Support registered user, then click Register for My Oracle Support and register.
On the main My Oracle Support page, click Patches & Updates.
On the Patches & Updates page, click Advanced Search.
On the Advanced Search page, click the search icon next to the Product or Product Family field.
In the Search and Select: Product Family field, select Database and Tools in the Search list field, enter RDBMS Server in the text field, and click Go.
RDBMS Server appears in the Product or Product Family field. The current release appears in the Release field.
Select your platform from the list in the Platform field, and at the bottom of the selection list, click Go.
Any available patch updates appear under the Results heading.
Click the patch number to download the patch.
On the Patch Set page, click View README and read the page that appears. The README page contains information about the patch set and how to apply the patches to your installation.
Return to the Patch Set page, click Download, and save the file on your system.
Use the unzip utility provided with Oracle Database 11g release 2 (11.2) to uncompress the Oracle patch updates that you downloaded from My Oracle Support. The unzip utility is located in the
Refer to Appendix E for information about how to stop database processes in preparation for installing patches.
Oracle recommends that you complete the following tasks as needed after installing Oracle Grid Infrastructure:
Oracle recommends that you back up the root.sh script after you complete an installation. If you install other products in the same Oracle home directory, then the installer updates the contents of the existing root.sh script during the installation. If you require information contained in the original root.sh script, then you can recover it from the root.sh file copy.
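The backup amounts to copying root.sh aside before any later installation overwrites it. A minimal sketch, using a scratch directory to stand in for the Oracle home (in practice, copy the real root.sh in your Oracle home):

```shell
# Demonstration in a scratch directory; substitute your actual Oracle home.
ORACLE_HOME=$(mktemp -d)
echo '# sample root.sh contents' > "$ORACLE_HOME/root.sh"

# Keep a pristine copy before any later installation rewrites root.sh.
cp "$ORACLE_HOME/root.sh" "$ORACLE_HOME/root.sh.bak"
```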
On Oracle Solaris platforms, where Oracle does not currently support the native IPMI driver, DHCP addressing is not supported and manual configuration is required for IPMI support. OUI will not collect the administrator credentials, so failure isolation must be manually configured, the BMC must be configured with a static IP address, and the address must be manually stored in the OLR.
To configure Failure Isolation using IPMI, complete the following steps on each cluster member node:
If necessary, start Oracle Clusterware using the following command:
$ crsctl start crs
Use the BMC management utility to obtain the BMC's IP address, and then use the cluster control utility crsctl to store the BMC's IP address in the Oracle Local Registry (OLR) by issuing the crsctl set css ipmiaddr address command. For example:
$ crsctl set css ipmiaddr 192.168.10.45
Enter the following crsctl command to store the user ID and password for the resident BMC in the OLR, where youradminacct is the IPMI administrator user account, and provide the password when prompted:
$ crsctl set css ipmiadmin youradminacct
IPMI BMC Password:
This command attempts to validate the credentials you enter by sending them to another cluster node. The command fails if that cluster node is unable to access the local BMC using the credentials.
When you store the IPMI credentials in the OLR, you must specify the anonymous user explicitly, or a parsing error is reported.
Refer to the following guidelines only if the default semaphore parameter values are too low to accommodate all Oracle processes:
Note: Oracle recommends that you refer to the operating system documentation for more information about setting semaphore parameters.
Calculate the minimum total semaphore requirements using the following formula:
2 * sum (process parameters of all database instances on the system) + overhead for background processes + system and other application requirements
Set semmns (total semaphores systemwide) to this total.
Set semmsl (semaphores for each set) to 250.
Set semmni (total semaphore sets) to semmns divided by semmsl, rounded up to the nearest multiple of 1024.
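As an illustration of the formula, assume two database instances with PROCESSES set to 500 and 300, and an estimated overhead of 200 semaphores for background processes and other applications (all values here are assumed examples, not recommendations):

```shell
# Example instance process counts and overhead (assumed values).
PROC_SUM=$((500 + 300))          # sum of PROCESSES across all instances
OVERHEAD=200                     # background + system/application estimate

SEMMNS=$((2 * PROC_SUM + OVERHEAD))          # total semaphores systemwide
SEMMSL=250                                   # semaphores for each set
SETS=$(( (SEMMNS + SEMMSL - 1) / SEMMSL ))   # semmns / semmsl, rounded up
SEMMNI=$(( (SETS + 1023) / 1024 * 1024 ))    # round up to a multiple of 1024

echo "semmns=$SEMMNS semmsl=$SEMMSL semmni=$SEMMNI"
# → semmns=1800 semmsl=250 semmni=1024
```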
During installation, by default you can create one disk group. If you plan to add an Oracle Database for a standalone server or an Oracle RAC database, then you should create the Fast Recovery Area for database files.
The Fast Recovery Area is a unified storage location for all Oracle Database files related to recovery. Database administrators can define the DB_RECOVERY_FILE_DEST parameter to the path for the Fast Recovery Area to enable on-disk backups, and rapid recovery of data. Enabling rapid backups for recent data can reduce requests to system administrators to retrieve backup tapes for recovery operations.
When you enable Fast Recovery in the
init.ora file, all RMAN backups, archive logs, control file automatic backups, and database copies are written to the Fast Recovery Area. RMAN automatically manages files in the Fast Recovery Area by deleting obsolete backups and archive files no longer required for recovery.
Oracle recommends that you create a Fast Recovery Area disk group. Oracle Clusterware files and Oracle Database files can be placed on the same disk group, and you can also place Fast Recovery Area files in the same disk group. However, Oracle recommends that you create a separate Fast Recovery Area disk group to reduce storage device contention.
The Fast Recovery Area is enabled by setting DB_RECOVERY_FILE_DEST. The size of the Fast Recovery Area is set with DB_RECOVERY_FILE_DEST_SIZE. As a general rule, the larger the Fast Recovery Area, the more useful it becomes. For ease of use, Oracle recommends that you create a Fast Recovery Area disk group on storage devices that can contain at least three days of recovery information. Ideally, the Fast Recovery Area should be large enough to hold a copy of all of your data files and control files, the online redo logs, and the archived redo log files needed to recover your database using the data file backups kept under your retention policy.
Multiple databases can use the same Fast Recovery Area. For example, assume you have created one Fast Recovery Area disk group on disks with 150 GB of storage, shared by three different databases. You can set the size of the Fast Recovery Area for each database depending on the importance of each database. For example, if database1 is your least important database, database 2 is of greater importance and database 3 is of greatest importance, then you can set different DB_RECOVERY_FILE_DEST_SIZE settings for each database to meet your retention target for each database: 30 GB for database 1, 50 GB for database 2, and 70 GB for database 3.
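The two parameters described above are typically set from SQL*Plus for each database. A sketch only, assuming an already-created disk group named +FRA and an illustrative 50 GB size (the size must be set before the destination):

```shell
# Illustrative only: '+FRA' and 50G are assumed values, not recommendations.
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 50G SCOPE=BOTH SID='*';
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '+FRA' SCOPE=BOTH SID='*';
EOF
```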
To create a Fast Recovery Area disk group:
$ cd /u01/app/11.2.0/grid/bin
$ ./asmca
ASMCA opens at the Disk Groups tab. Click Create to create a new disk group.
The Create Disk Groups window opens.
In the Disk Group Name field, enter a descriptive name for the Fast Recovery Area group. For example: FRA.
In the Redundancy section, select the level of redundancy you want to use.
In the Select Member Disks field, select eligible disks to be added to the Fast Recovery Area, and click OK.
The Diskgroup Creation window opens to inform you when disk group creation is complete. Click OK.
Oracle recommends that you run the Oracle RAC Configuration Audit tool (RACcheck) to check your Oracle RAC installation. RACcheck is an Oracle RAC auditing tool that checks various important configuration settings within Oracle Real Application Clusters, Oracle Clusterware, Oracle Automatic Storage Management and the Oracle Grid Infrastructure environment.
Oracle recommends that you download and run the latest version of RACcheck from My Oracle Support. For information about downloading, configuring, and running the RACcheck configuration audit tool, refer to My Oracle Support note 1268927.1, which is available at the following URL:
Review the following sections for information about using older Oracle Database releases with 11g release 2 (11.2) Oracle Grid Infrastructure installations:
You can use Oracle Database release 9.2, release 10.x and release 11.1 with Oracle Clusterware 11g release 2 (11.2).
However, placing Oracle Database homes for releases prior to Oracle Database 11.2 on Oracle ACFS is not supported, because earlier releases were not designed to use Oracle ACFS.
If you upgrade an existing version of Oracle Clusterware, then required configuration of existing databases is completed automatically. However, if you complete a new installation of Oracle Grid Infrastructure for a cluster, and then want to install a version of Oracle Database prior to 11.2, then you must complete additional manual configuration tasks.
Note: Before you start an Oracle RAC or Oracle Database installation on an Oracle Clusterware release 11.2 installation, if you are upgrading from releases 11.1.0.7, 11.1.0.6, or 10.2.0.4, Oracle recommends that you check for the latest recommended patches for the release you are upgrading from, and install those patches as needed before the upgrade.
For more information on recommended patches, refer to "Oracle Upgrade Companion," which is available through Note 785351.1 on My Oracle Support:
You may also refer to Notes 756388.1 and 756671.1 for the current list of recommended patches for each release.
Use Oracle ASM Configuration Assistant (ASMCA) to create and modify disk groups when you install older Oracle databases and Oracle RAC databases on Oracle Grid Infrastructure installations. Starting with 11g release 2, Oracle ASM is installed as part of an Oracle Grid Infrastructure installation, with Oracle Clusterware. You can no longer use Database Configuration Assistant (DBCA) to perform administrative tasks on Oracle ASM.
When Oracle Clusterware 11g release 2 (11.2) is installed on a cluster with no previous Oracle software version, it configures the cluster nodes dynamically. Dynamic configuration is compatible with Oracle Database release 11.2 and later, but Oracle Database releases 10g and 11.1 require a persistent configuration. This association of a node name with a node number is called pinning.
Note: During an upgrade, all cluster member nodes are pinned automatically, and no manual pinning is required for existing databases. This procedure is required only if you install older database versions after installing Oracle Grid Infrastructure release 11.2 software.
To pin a node in preparation for installing or using an older Oracle Database version, use
/bin/crsctl with the following command syntax, where
nodes is a space-delimited list of one or more nodes in the cluster whose configuration you want to pin:
crsctl pin css -n nodes
For example, to pin nodes node3 and node4, log in as root and enter the following command:
# crsctl pin css -n node3 node4
To determine if a node is in a pinned or unpinned state, use
/bin/olsnodes with the following command syntax:
To list all pinned nodes:
olsnodes -t -n
# /u01/app/11.2.0/grid/bin/olsnodes -t -n
node1   1   Pinned
node2   2   Pinned
node3   3   Pinned
node4   4   Pinned
To list the state of a particular node:
olsnodes -t -n node3
# /u01/app/11.2.0/grid/bin/olsnodes -t -n node3
node3   3   Pinned
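If a script needs to test the pinned state, the third column of the olsnodes -t -n output carries it. A small sketch that parses a captured sample line rather than invoking olsnodes, so it can be shown standalone:

```shell
# Sample output line in the form produced by: olsnodes -t -n node3
sample="node3   3   Pinned"

# The third whitespace-separated field is the pin state.
state=$(echo "$sample" | awk '{print $3}')
echo "$state"   # → Pinned
```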
See Also: Oracle Clusterware Administration and Deployment Guide for more information about pinning and unpinning nodes
By default, the Global Services daemon (GSD) is disabled. If you install Oracle Database 9i release 2 (9.2) on Oracle Grid Infrastructure for a Cluster 11g release 2 (11.2), then you must enable the GSD. Use the following commands to enable the GSD before you install Oracle Database release 9.2:
srvctl enable nodeapps -g
srvctl start nodeapps
To administer 11g release 2 local and SCAN listeners using the
lsnrctl command, set your
$ORACLE_HOME environment variable to the path for the Oracle Grid Infrastructure home (Grid home). Do not attempt to use the
lsnrctl commands from Oracle home locations for previous releases, as they cannot be used with the new release.
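For example, before running lsnrctl you might set the environment as follows (the Grid home path and the SCAN listener name are assumed examples):

```shell
# Point ORACLE_HOME and PATH at the Grid home (example path).
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=$ORACLE_HOME/bin:$PATH

# lsnrctl now resolves to the Grid home copy, for example:
#   lsnrctl status
#   lsnrctl status LISTENER_SCAN1
```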
For example, if you want to apply a one-off patch, or if you want to modify an Oracle Exadata configuration to run IPC traffic over RDS on the interconnect instead of using the default UDP, then you must unlock the Grid home.
Caution: Before relinking executables, you must shut down all executables that run in the Oracle home directory that you are relinking. In addition, shut down applications linked with Oracle shared libraries.
Unlock the home using the following procedure:
Change directory to Grid_home/crs/install, where Grid_home is the path to the Grid home, and unlock the Grid home using the command rootcrs.pl -unlock -crshome Grid_home. For example, with the Grid home /u01/app/11.2.0/grid, enter the following commands:
# cd /u01/app/11.2.0/grid/crs/install
# perl rootcrs.pl -unlock -crshome /u01/app/11.2.0/grid
Change user to the Oracle Grid Infrastructure software owner, and relink binaries using the command syntax make -f Grid_home/rdbms/lib/ins_rdbms.mk target, where Grid_home is the Grid home and target is the binaries that you want to relink. For example, where the grid installation owner is grid, $ORACLE_HOME is set to the Grid home, and you are updating the interconnect protocol from UDP to RDS, enter the following commands:
# su grid
$ make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk ipc_rds ioracle
Note:To relink binaries, you can also change to the grid installation owner and run the command
Relock the Grid home and restart the cluster using the following command:
# perl rootcrs.pl -patch
Repeat steps 1 through 3 on each cluster member node.