Oracle® Grid Infrastructure Installation Guide
12c Release 1 (12.1) for Microsoft Windows x64 (64-Bit)
This chapter describes how to complete the postinstallation tasks after you have installed the Oracle Grid Infrastructure software.
On a regular basis, Oracle provides patch sets that include generic and port-specific fixes for problems encountered by customers since the base product was released. Patch sets increment the fourth digit of the release number (for example, from 12.1.0.1.0 to 12.1.0.2.0) and are fully regression tested in the same way as the base release (12.1.0.1.0). Customers are encouraged to apply these fixes.
If a customer encounters a critical problem that requires a fix before the next patch set becomes available, the customer can request that a one-off fix be made available on top of the latest patch set. This delivery mechanism is similar to Microsoft hot fixes and is known as an Oracle patch set exception (or interim patch). Unlike on UNIX platforms, these patch set exceptions are delivered in a patch set exception bundle (a cumulative patch bundle), which includes all fixes since the current patch set. For example, bug 12393432 is a patch set exception bundle, Patch 12, for Oracle Database release 11.2.0.2 for Microsoft Windows (x64). You should always apply the latest patch bundle available for your release.
The patch set exception bundles also include the fixes for the Critical Patch Update (CPU), Daylight Saving Time (DST), Patch Set Update (PSU), and Recommended Patch Bundles. You are not required to have previous security patches applied before applying the patch set exception bundle. However, you must be at the stated patch set level for a given product home before applying the patch set exception bundle for that release.
Refer to Appendix D, "How to Upgrade to Oracle Grid Infrastructure 12c Release 1" for information about how to stop database processes in preparation for installing patches.
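As an illustrative sketch of the patching workflow described above (the patch number and paths here are placeholders, not values from this guide), a patch set exception bundle is typically applied with the OPatch utility after the stack is stopped:

```shell
REM Illustrative only: paths and patch number are placeholders.
REM Check the inventory of installed patches in the Grid home first.
cd C:\app\12.1.0\grid\OPatch
opatch lsinventory

REM Then apply a downloaded bundle patch from its unzipped directory.
cd C:\patches\12345678
C:\app\12.1.0\grid\OPatch\opatch apply
```

`opatch lsinventory` is also useful afterward to confirm that the bundle appears in the inventory.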
Oracle Grid Infrastructure on Windows Server 2008 or Windows Server 2008 R2 requires the Oracle Address Resolution Protocol (ARP) driver.
Start the Oracle Grid Infrastructure stack, if it is not started.
To check the status of the Oracle Grid Infrastructure stack, use:
crsctl check crs
To start the Oracle Grid Infrastructure stack, use:
crsctl start crs
Use the netcfg utility to install the Oracle ARP driver, as shown in the following command:
%systemroot%\system32\netcfg.exe -l Grid_home\bin\oraarpdrv.inf -c p -i orcl_ndisprot
If "Oracle America, Inc." is not in the list of trusted publishers for the node, then select Install to allow the installation to continue.
Start the Oracle ARP Protocol Driver.
net.exe start oraarpdrv
Stop the following Oracle Clusterware resources:
If GNS is configured for your cluster, then stop the GNS resource:
Grid_home\bin\srvctl stop gns
To determine if GNS is configured for your cluster, use the following command:
Grid_home\bin\srvctl config gns
Stop the SCAN resource.
Grid_home\bin\srvctl stop scan -f
Stop all node applications running on the cluster.
Grid_home\bin\srvctl stop nodeapps -n nodename -f
Restart the Oracle Clusterware resources in the opposite order.
Start all node applications running on the cluster.
Grid_home\bin\srvctl start nodeapps -n nodename
Start the SCAN resource.
Grid_home\bin\srvctl start scan
If GNS is configured for your cluster, then start the GNS resource:
Grid_home\bin\srvctl start gns
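The stop-and-restart sequence above can be sketched as a single batch script. The Grid home path and node name are assumptions you must adapt, and the GNS commands apply only if GNS is configured for your cluster:

```shell
REM Illustrative batch-file sketch; adapt GRID_HOME and NODE to your environment.
set GRID_HOME=C:\app\12.1.0\grid
set NODE=node1

REM Stop resources: GNS (if configured), then SCAN, then node applications.
%GRID_HOME%\bin\srvctl stop gns
%GRID_HOME%\bin\srvctl stop scan -f
%GRID_HOME%\bin\srvctl stop nodeapps -n %NODE% -f

REM Restart in the opposite order: node applications, then SCAN, then GNS.
%GRID_HOME%\bin\srvctl start nodeapps -n %NODE%
%GRID_HOME%\bin\srvctl start scan
%GRID_HOME%\bin\srvctl start gns
```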
If the Windows Firewall feature is enabled on one or more of the nodes in your cluster, then virtually all transmission control protocol (TCP) network ports are blocked to incoming connections. As a result, any Oracle product that listens for incoming connections on a TCP port will not receive any of those connection requests and the clients making those connections will report errors.
You must configure exceptions for the Windows Firewall if your system meets all of the following conditions:
Oracle server-side components are installed on a computer running a supported version of Microsoft Windows. The list of components includes the Oracle Database, Oracle Grid Infrastructure, Oracle Real Application Clusters (Oracle RAC), network listeners, or any web servers or services.
The Windows computer in question accepts connections from other computers over the network. If no other computers connect to the Windows computer to access the Oracle software, then no post-installation configuration steps are required and the Oracle software functions as expected.
The Windows computer in question is configured to run the Windows Firewall. If the Windows Firewall is not enabled, then no post-installation configuration steps are required.
If all of the above conditions are met, then the Windows Firewall must be configured to allow successful incoming connections to the Oracle software. To enable Oracle software to accept connection requests, Windows Firewall must be configured by either opening up specific static TCP ports in the firewall or by creating exceptions for specific executables so they can receive connection requests on any ports they choose. This firewall configuration can be done by one of the following methods:
Start the Windows Firewall application, select the Exceptions tab and then click either Add Program or Add Port to create exceptions for the Oracle software.
From the command prompt, use the netsh firewall add... command.
Windows can also notify you that a foreground application is attempting to listen on a port, and give you the opportunity to create an exception for that executable. If you choose to create the exception in this way, the effect is the same as creating an exception for the executable either through Control Panel or from the command line.
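As a hedged illustration of the command-line method (the executable path and rule names are placeholders, not from this guide), exceptions can be created for either a program or a static port. On Windows Server 2008 and later, the advfirewall context supersedes the older netsh firewall context:

```shell
REM Illustrative only: the program path and rule names are placeholders.
REM Allow a specific executable to receive connections on any port it opens.
netsh advfirewall firewall add rule name="Oracle Database" dir=in action=allow program="C:\app\oracle\product\12.1.0\dbhome_1\bin\oracle.exe"

REM Alternatively, open a specific static TCP port (1521 is the default listener port).
netsh advfirewall firewall add rule name="Oracle Listener" dir=in action=allow protocol=TCP localport=1521
```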
The following sections list the Oracle Database executables that listen on TCP ports on Windows, along with a brief description of each executable. It is recommended that these executables (if in use and accepting connections from a remote client computer) be added to the exceptions list for the Windows Firewall to ensure correct operation. In addition, if multiple Oracle homes are in use, firewall exceptions may have to be created for the same executable, for example, oracle.exe, multiple times: once for each Oracle home from which that executable loads.
For basic database operation and connectivity from remote clients, such as SQL*Plus, Oracle Call Interface (OCI), Open Database Connectivity (ODBC), and so on, the following executables must be added to the Windows Firewall exception list:
\bin\oracle.exe - Oracle Database executable
\bin\tnslsnr.exe - Oracle Listener
If you use remote monitoring capabilities for your database, the following executables must be added to the Windows Firewall exception list:
emagent.exe - Oracle Enterprise Manager
\jdk\bin\java.exe - Java Virtual Machine (JVM) for Oracle Enterprise Manager
After installing the Oracle Database Companion CD, the following executables must be added to the Windows Firewall exception list:
\opmn\bin\opmn.exe - Oracle Process Manager
\jdk\bin\java.exe - JVM
If your Oracle database interacts with non-Oracle software through a gateway, then you must add the gateway executable to the Windows Firewall exception list. Table 8-1 lists the gateway executables used to access non-Oracle software.
Oracle Services for Microsoft Transaction Server
Oracle Database Gateway for Sybase
Oracle Database Gateway for Teradata
Oracle Database Gateway for SQL Server
Oracle Database Gateway for Distributed Relational Database Architecture (DRDA)
Oracle Database Gateway for Advanced Program to Program Communication (APPC)
Oracle Database Gateway for WebSphere MQ
Oracle Database Gateway for ODBC
If you installed the Oracle Grid Infrastructure software on the nodes in your cluster, then you can enable the Windows Firewall only after adding the following executables and ports to the Firewall exception list. The Firewall Exception list must be updated on each node.
\bin\gpnpd.exe - Grid Plug and Play daemon
\bin\oracle.exe - Oracle Automatic Storage Management (Oracle ASM) executable (if using Oracle ASM for storage)
\bin\racgvip.exe - Virtual Internet Protocol Configuration Assistant
\bin\evmd.exe - OracleEVMService
\bin\crsd.exe - OracleCRService
\bin\ocssd.exe - OracleCSService
\bin\octssd.exe - Cluster Time Synchronization Service daemon
\bin\mDNSResponder.exe - multicast Domain Name System (mDNS) responder daemon
\bin\gipcd.exe - Grid inter-process communication (IPC) daemon
\bin\gnsd.exe - Grid Naming Service (GNS) daemon
\bin\ohasd.exe - OracleOHService
\bin\TNSLSNR.EXE - single client access name (SCAN) listener and local listener for Oracle RAC database and Oracle ASM
\opmn\bin\ons.exe - Oracle Notification Service (ONS)
\jdk\jre\bin\java.exe - JVM
\bin\oracle.exe - Oracle RAC database instance
\bin\emagent.exe - Oracle Enterprise Manager agent
\jdk\bin\java.exe - For the Oracle Enterprise Manager Database Console
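Creating the exceptions for the Grid home executables listed above can be scripted; the following batch-file sketch covers a subset (the Grid home path and rule names are assumptions, and you should extend the list to match the executables you actually use):

```shell
REM Illustrative batch-file sketch; adapt GRID_HOME and extend the executable list.
set GRID_HOME=C:\app\12.1.0\grid

REM Add a program exception for each Grid home executable that accepts connections.
for %%E in (gpnpd.exe oracle.exe evmd.exe crsd.exe ocssd.exe octssd.exe gipcd.exe gnsd.exe ohasd.exe TNSLSNR.EXE) do (
    netsh advfirewall firewall add rule name="Oracle Grid %%E" dir=in action=allow program="%GRID_HOME%\bin\%%E"
)
```

Remember that the exception list must be updated on each node of the cluster.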
In addition, the following ports should be added to the Windows Firewall exception list:
Microsoft file sharing Server Message Block (SMB)
TCP ports from 135 through 139
Direct-hosted SMB traffic without the network basic input/output system (NetBIOS)
port 445 (TCP)
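The port exceptions above can be sketched as follows (the rule names are illustrative):

```shell
REM Illustrative only: open the SMB-related TCP ports listed above.
netsh advfirewall firewall add rule name="SMB 135-139" dir=in action=allow protocol=TCP localport=135-139
netsh advfirewall firewall add rule name="SMB 445" dir=in action=allow protocol=TCP localport=445
```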
In addition to all the previously listed exceptions, if you use any other Oracle software that accepts incoming connections, then you must create an exception in Windows Firewall for the associated executable.
The Windows operating system should be optimized for memory usage of 'Programs' instead of 'System Caching'. To modify the memory optimization settings, perform the following steps:
From the Start Menu, select Control Panel, then System.
In the System Properties window, click the Advanced tab.
In the Performance section, click Settings.
In the Performance Options window, click the Advanced tab.
In the Memory Usage section, ensure Programs is selected.
During installation of Oracle Grid Infrastructure, if you select Oracle ASM for storage, a single disk group is created to store the Oracle Clusterware files. If you plan to create a single-instance database, an Oracle RAC database, or an Oracle RAC One Node database, then this disk group can also be used to store the data files for the database. However, you should create a separate disk group for the fast recovery area.
The fast recovery area is a unified storage location for all Oracle Database files related to recovery. Database administrators can define the
DB_RECOVERY_FILE_DEST parameter to the path for the fast recovery area to enable on-disk backups, and rapid recovery of data. Enabling rapid backups for recent data can reduce requests to system administrators to retrieve backup tapes for recovery operations.
When you enable the fast recovery area in the database initialization parameter file, all RMAN backups, archive logs, control file automatic backups, and database copies are written to the fast recovery area. RMAN automatically manages files in the fast recovery area by deleting obsolete backups and archive files that are no longer required for recovery.
To use a fast recovery area in Oracle RAC, you must place it on an Oracle ASM disk group, a cluster file system, or on a shared directory that is configured through Direct network file system (NFS) for each Oracle RAC instance. In other words, the fast recovery area must be shared among all of the instances of an Oracle RAC database. Oracle Clusterware files and Oracle Database files can be placed on the same disk group as fast recovery area files. However, Oracle recommends that you create a separate fast recovery area disk group to reduce storage device contention.
The fast recovery area is enabled by setting the parameter
DB_RECOVERY_FILE_DEST to the same value on all instances. The size of the fast recovery area is set with the parameter
DB_RECOVERY_FILE_DEST_SIZE. As a general rule, the larger the fast recovery area, the more useful it becomes. For ease of use, Oracle recommends that you create a fast recovery area disk group on storage devices that can contain at least three days of recovery information. Ideally, the fast recovery area should be large enough to hold a copy of all of your data files and control files, the online redo logs, and the archived redo log files needed to recover your database using the data file backups kept under your retention policy.
Multiple databases can use the same fast recovery area. For example, assume you have created one fast recovery area disk group on disks with 150 gigabytes (GB) of storage, shared by three different databases. You can set the size of the fast recovery area for each database depending on the importance of each database. For example, if test1 is your least important database, products is of greater importance, and orders is of greatest importance, then you can set different DB_RECOVERY_FILE_DEST_SIZE settings for each database to meet your retention target for each database: 30 GB for test1, 50 GB for products, and 70 GB for orders.
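As an illustrative sketch (the disk group name +FRA, the 30 GB size, and the connection method are assumptions, not values mandated by this guide), the fast recovery area parameters are set per database, for example from SQL*Plus:

```shell
REM Illustrative transcript: enable a fast recovery area on an assumed +FRA
REM disk group for one database. Note that DB_RECOVERY_FILE_DEST_SIZE must
REM be set before DB_RECOVERY_FILE_DEST.
sqlplus / as sysdba
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 30G SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '+FRA' SCOPE=BOTH SID='*';
```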
To create an Oracle ASM disk group for the fast recovery area:
C:\> cd app\12.1.0\grid\bin
C:\> asmca
ASMCA opens at the Disk Groups tab. Click Create to create a new disk group.
The Create Disk Groups window opens.
In the Disk Group Name field, enter a descriptive name for the fast recovery area disk group, for example, FRA.
In the Redundancy section, select the level of redundancy you want to use.
In the Select Member Disks field, select eligible disks to be added to the fast recovery area, then click OK.
The Diskgroup Creation window opens to inform you when disk group creation is complete.
Click OK to acknowledge the message, then click Exit to quit the application.
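ASMCA also has a silent mode, so the interactive steps above can be sketched as a single command. The disk names and redundancy level here are placeholders, and you should verify the exact flags for your release before relying on them:

```shell
REM Illustrative only: disk names and redundancy are placeholders;
REM verify the silent-mode flags for your ASMCA release.
C:\app\12.1.0\grid\bin\asmca -silent -createDiskGroup ^
  -diskGroupName FRA -redundancy EXTERNAL ^
  -disk "\\.\ORCLDISKFRA0" -disk "\\.\ORCLDISKFRA1"
```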
The SCAN is a name that provides service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than to a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes run a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
You can use the cluvfy comp scan command (located in Grid_home\bin) to confirm that the DNS is correctly associating the SCAN with the addresses. For example:
C:\> cluvfy comp scan

Verifying scan
Checking Single Client Access Name (SCAN)...
Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes
Checking name resolution setup for "node1.example.com"...
Verification of SCAN VIP and Listener setup passed
Verification of scan was successful.
After installation, when a client sends a request to the cluster, the Oracle Clusterware SCAN listeners redirect client requests to servers in the cluster.
When upgrading Oracle Grid Infrastructure, one of the scripts run during the installation sets the state of the Windows services for the Oracle Enterprise Manager Database Control and Cloud Control agent to
MANUAL. So, when you reboot a node, Windows does not restart the Database Control and Cloud Control agents automatically.
If you are managing Oracle Clusterware as a target from Database Control or Cloud Control, then you must log in to all Database Control and Cloud Control consoles and update the
ORACLE_HOME property for the Oracle Clusterware, Oracle ASM, and Oracle Grid Infrastructure Listener targets to point to the new Grid home, using the Monitoring Configuration user interface. If the Windows services for the Database Control or Cloud Control agents are not started, then you may need to manually start these services before logging in to the consoles to update the target properties.
After you modify and save the
ORACLE_HOME target property of the Oracle Clusterware and Oracle ASM targets, you can set the state of the Windows services for the Oracle Enterprise Manager Database Control and Cloud Control agents back to Automatic.
Review the following sections for information about using earlier Oracle Database releases with Oracle Grid Infrastructure 12c Release 1 (12.1) installations:
You can use Oracle Database 10g and Oracle Database 11g with Oracle Clusterware and Oracle ASM 12c Release 1 (12.1). If you upgrade an existing release of Oracle Clusterware and Oracle ASM to Oracle Grid Infrastructure 12c Release 1 (12.1) (which includes Oracle Clusterware and Oracle ASM), and you also plan to upgrade your Oracle RAC database to Oracle Database 12c Release 1 (12.1), then the required configuration of the existing databases is completed automatically when you complete the Oracle RAC upgrade, and this section does not concern you.
However, if you upgrade to Oracle Grid Infrastructure 12c Release 1 (12.1), and you have existing Oracle RAC installations that you do not plan to upgrade, or if you install an earlier release of Oracle RAC (11.2) on a cluster running Oracle Grid Infrastructure 12c Release 1 (12.1), then you must complete additional configuration tasks or apply patches, or both, before the earlier database releases will work correctly with Oracle Grid Infrastructure.
Oracle Database homes can only be stored on Oracle ACFS if the database release is Oracle Database 11g Release 2 or higher. Earlier releases of Oracle Database cannot be installed on Oracle ACFS because these releases were not designed to use Oracle ACFS.
Note:Before you start an Oracle RAC or Oracle Database install on an Oracle Clusterware 12c Release 1 (12.1) installation, if you are upgrading from Oracle Clusterware 11g Release 1 or Oracle Clusterware 10g Release 2, you must first upgrade to Oracle Clusterware 11g Release 2. Then, you must move the OCR and voting files to Oracle ASM storage before upgrading to Oracle Clusterware 12c.
"Oracle 12c Upgrade Companion," which is available through Note 1462240.1 on My Oracle Support:
To use Oracle ASM with Oracle Database releases earlier than Oracle Database 12c, you must use local Oracle ASM or set the cardinality for Oracle Flex ASM to ALL, instead of the default of 3. After you install Oracle Grid Infrastructure 12c, to use Oracle ASM to provide storage for Oracle Database releases earlier than Oracle Database 12c, you must use the following command to modify the Oracle ASM resource (ora.asm):
srvctl modify asm -count ALL
This setting changes the cardinality of the Oracle ASM resource so that Oracle Flex ASM instances run on all cluster nodes. You must change the setting even if your cluster has three or fewer nodes, to ensure that previous release databases can find an Oracle ASM instance alias.
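To illustrate (the Grid home path is a placeholder, and the srvctl config output is not quoted from this guide), you can run the modify command and then confirm the new cardinality:

```shell
REM Modify the Oracle ASM resource so an instance runs on every node,
REM then display the resulting Oracle ASM configuration to verify.
C:\app\12.1.0\grid\bin\srvctl modify asm -count ALL
C:\app\12.1.0\grid\bin\srvctl config asm
```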
Use Oracle ASM Configuration Assistant (ASMCA) to create and modify disk groups when you install earlier Oracle Database and Oracle RAC releases on Oracle Grid Infrastructure 12c installations. Starting with Oracle Grid Infrastructure 11g Release 2, Oracle ASM is installed as part of an Oracle Grid Infrastructure installation, with Oracle Clusterware. You can no longer use Database Configuration Assistant (DBCA) to perform administrative tasks on Oracle ASM.
See Also:Oracle Automatic Storage Management Administrator's Guide for details on configuring disk group compatibility for databases using Oracle Database 11g Release 2 with Oracle Grid Infrastructure 12c Release 1 (12.1)
To administer local and SCAN listeners using the Listener Control utility (LSNRCTL), use the
lsnrctl program located in the Grid home. Do not attempt to use the
lsnrctl programs from Oracle home locations for previous releases, because they cannot be used with the new release.
Before shutting down Oracle Clusterware 12c Release 1 (12.1), if you have an Oracle Database 11g Release 2 (11.2) database registered with Oracle Clusterware, then you must do one of the following:
Stop the Oracle Database 11g Release 2 database instances first, then stop the Oracle Clusterware stack
Use the crsctl stop crs -f command to shut down the Oracle Clusterware stack and ignore any errors that are raised
If you need to shut down a cluster node that currently has Oracle Database and Oracle Grid Infrastructure running on that node, then you must perform the following steps to cleanly shut down the cluster node:
Use the crsctl stop crs command to shut down the Oracle Clusterware stack
After Oracle Clusterware has been stopped, you can shut down the Windows server.
After installation, if you must modify the software installed in your Grid home, then you must first stop the Oracle Clusterware stack. For example, to apply a one-off patch, or modify any of the dynamic-link libraries (DLLs) used by Oracle Clusterware or Oracle ASM, you must follow these steps to stop and restart Oracle Clusterware.
Caution:To put the changes you make to the Oracle Grid Infrastructure home into effect, you must shut down all executables that run in the Grid home directory and then restart them. In addition, shut down any applications that use Oracle shared libraries or DLL files in the Grid home.
Prepare the Oracle Grid Infrastructure home for modification using the following procedure:
Log in as a member of the Administrators group and go to the directory Grid_home\bin, where
Grid_home is the path to the Oracle Grid Infrastructure home.
Shut down Oracle Clusterware using the following command:
C:\..\bin> crsctl stop crs -f
After Oracle Clusterware is completely shut down, perform the updates to the software installed in the Grid home.
Use the following command to restart Oracle Clusterware:
C:\..\bin> crsctl start crs
Note: Do not delete directories in the Grid home. For example, do not delete the directory
Grid_home\OPatch. If you delete the directory, then the Grid Infrastructure installation owner cannot use OPatch to patch the Grid home, and OPatch displays the error message "
checkdir error: cannot create