Oracle® Grid Infrastructure Installation Guide
12c Release 1 (12.1) for Microsoft Windows x64 (64-Bit)

E17618-07

8 Oracle Grid Infrastructure Postinstallation Procedures

This chapter describes how to complete the postinstallation tasks after you have installed the Oracle Grid Infrastructure software.

This chapter contains the following topics:

  • Required Postinstallation Tasks

  • Recommended Postinstallation Tasks

  • Configuring Oracle Enterprise Manager Cloud Control After an Upgrade

  • Using Earlier Oracle Database Releases with Grid Infrastructure

  • Modifying Oracle Clusterware Binaries After Installation

8.1 Required Postinstallation Tasks

You must perform the following tasks after completing your installation:

Note:

Backing up a voting file is no longer required.

8.1.1 Download and Install Patch Updates

Refer to the My Oracle Support website for required patch updates for your installation.

https://support.oracle.com

On a regular basis, Oracle provides patch sets that include generic and port-specific fixes for problems encountered by customers since the base product was released. Patch sets increment the fourth digit of the release number, for example, from 11.2.0.1.0 to 11.2.0.3.0. These patch sets are fully regression tested in the same way as the base release (in this example, 11.2.0.1.0). Customers are encouraged to apply these fixes.

If a customer encounters a critical problem that requires a fix before the next patch set becomes available, the customer can request that a one-off fix be made available on top of the latest patch set. This delivery mechanism is similar to Microsoft hot fixes and is known as an Oracle patch set exception (or interim patch). Unlike on UNIX platforms, these patch set exceptions are delivered in a patch set exception bundle (a cumulative patch bundle), which includes all fixes since the current patch set. For example, bug 12393432 is a patch set exception bundle, Patch 12, for Oracle Database Release 11.2.0.1 for Microsoft Windows (x64). You should always apply the latest patch bundle available for your release.

The patch set exception bundles also include the fixes for the CPU (Critical Patch Update), DST (Daylight Saving Time), PSU (Patch Set Update) and Recommended Patch Bundles. It is not required to have previous security patches applied before applying the patch set exception bundle. However, you must be on the stated patch set level for a given product home before applying the patch set exception bundle for that release.

Refer to Appendix D, "How to Upgrade to Oracle Grid Infrastructure 12c Release 1" for information about how to stop database processes in preparation for installing patches.
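
Patch bundles are typically applied with the OPatch utility shipped in each Oracle home. The following is a minimal sketch only; the patch number and staging directory shown here are hypothetical examples, and you should always follow the README supplied with the actual patch:

```
rem Sketch: apply a downloaded patch bundle with OPatch.
rem C:\patches\12393432 and the Grid home path are example values.
C:\> set ORACLE_HOME=C:\app\12.1.0\grid
C:\> cd C:\patches\12393432

rem List the patches currently installed in this home.
C:\> %ORACLE_HOME%\OPatch\opatch lsinventory

rem Apply the patch from its unzipped staging directory.
C:\> %ORACLE_HOME%\OPatch\opatch apply
```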

8.1.2 Configure the Oracle ARP Driver

Oracle Grid Infrastructure on Windows Server 2008 or Windows Server 2008 R2 requires the Oracle Address Resolution Protocol (ARP) driver.

  1. Start the Oracle Grid Infrastructure stack, if it is not started.

    To check the status of the Oracle Grid Infrastructure stack, use:

    crsctl check crs
    

    To start the Oracle Grid Infrastructure stack, use:

    crsctl start crs
    
  2. Use the netcfg utility to install the Oracle ARP driver, as shown in the following command:

    %systemroot%\system32\netcfg.exe -l Grid_home\bin\oraarpdrv.inf -c p -i orcl_ndisprot
    

    If "Oracle America, Inc." is not in the list of trusted publishers for the node, then select Install to allow the installation to continue.

  3. Start the Oracle ARP Protocol Driver.

    net.exe start oraarpdrv
    
  4. Stop the following Oracle Clusterware resources:

    1. If GNS is configured for your cluster, then stop the GNS resource:

      Grid_home\bin\srvctl stop gns
      

      To determine if GNS is configured for your cluster, use the following command:

      Grid_home\bin\srvctl config gns
      
    2. Stop the SCAN resource.

      Grid_home\bin\srvctl stop scan -f
      
    3. Stop all node applications running on the cluster.

      Grid_home\bin\srvctl stop nodeapps -n nodename -f
      
  5. Restart the Oracle Clusterware resources in the opposite order.

    1. Start all node applications on the cluster.

      Grid_home\bin\srvctl start nodeapps -n nodename
      
    2. Start the SCAN resource.

      Grid_home\bin\srvctl start scan
      
    3. If GNS is configured for your cluster, then start the GNS resource:

      Grid_home\bin\srvctl start gns
      

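The stop-and-restart sequence in steps 4 and 5 can be sketched as a single batch script. This is a sketch only: the Grid home path and node name node1 are assumptions, and the GNS commands apply only if GNS is configured for your cluster:

```
rem Sketch: restart cluster resources after starting the Oracle ARP driver.
rem Adjust GRID_HOME and the node name for your environment.
set GRID_HOME=C:\app\12.1.0\grid

rem Stop the resources (GNS only if configured; check with "srvctl config gns").
%GRID_HOME%\bin\srvctl stop gns
%GRID_HOME%\bin\srvctl stop scan -f
%GRID_HOME%\bin\srvctl stop nodeapps -n node1 -f

rem Restart the resources in the reverse order.
%GRID_HOME%\bin\srvctl start nodeapps -n node1
%GRID_HOME%\bin\srvctl start scan
%GRID_HOME%\bin\srvctl start gns
```
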
8.1.3 Configure Exceptions for the Windows Firewall

If the Windows Firewall feature is enabled on one or more of the nodes in your cluster, then virtually all transmission control protocol (TCP) network ports are blocked to incoming connections. As a result, any Oracle product that listens for incoming connections on a TCP port will not receive any of those connection requests and the clients making those connections will report errors.

You must configure exceptions for the Windows Firewall if your system meets all of the following conditions:

  • Oracle server-side components are installed on a computer running a supported version of Microsoft Windows. The list of components includes the Oracle Database, Oracle Grid Infrastructure, Oracle Real Application Clusters (Oracle RAC), network listeners, or any web servers or services.

  • The Windows computer in question accepts connections from other computers over the network. If no other computers connect to the Windows computer to access the Oracle software, then no post-installation configuration steps are required and the Oracle software functions as expected.

  • The Windows computer in question is configured to run the Windows Firewall. If the Windows Firewall is not enabled, then no post-installation configuration steps are required.

If all of the above conditions are met, then the Windows Firewall must be configured to allow successful incoming connections to the Oracle software. To enable Oracle software to accept connection requests, Windows Firewall must be configured by either opening up specific static TCP ports in the firewall or by creating exceptions for specific executables so they can receive connection requests on any ports they choose. This firewall configuration can be done by one of the following methods:

  • Start the Windows Firewall application, select the Exceptions tab and then click either Add Program or Add Port to create exceptions for the Oracle software.

  • From the command prompt, use the netsh firewall add... command.

  • When Windows notifies you that a foreground application is attempting to listen on a port, it gives you the opportunity to create an exception for that executable. If you choose to create the exception in this way, the effect is the same as creating an exception for the executable either through Control Panel or from the command line.
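
As a sketch of the command-line method, the following commands create one program exception and one port exception. The Oracle home path and port number shown are examples only; substitute values for your installation. (On recent Windows releases, the netsh firewall context is deprecated in favor of netsh advfirewall firewall.)

```
rem Example only: allow an executable to accept connections on any port it chooses.
netsh firewall add allowedprogram C:\app\oracle\product\12.1.0\dbhome_1\bin\oracle.exe "Oracle Database" ENABLE

rem Example only: open a specific static TCP port (1521 is the default listener port).
netsh firewall add portopening TCP 1521 "Oracle Listener"
```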

The following sections list the Oracle Database executables that listen on TCP ports on Windows, along with a brief description of each executable. Oracle recommends that these executables (if in use and accepting connections from a remote client computer) be added to the Windows Firewall exception list to ensure correct operation. In addition, if multiple Oracle homes are in use, you may have to create firewall exceptions for the same executable, for example, oracle.exe, multiple times: once for each Oracle home from which that executable loads.

8.1.3.1 Firewall Exceptions for Oracle Database

For basic database operation and connectivity from remote clients, such as SQL*Plus, Oracle Call Interface (OCI), Open Database Connectivity (ODBC), and so on, the following executables must be added to the Windows Firewall exception list:

  • Oracle_home\bin\oracle.exe - Oracle Database executable

  • Oracle_home\bin\tnslsnr.exe - Oracle Listener

If you use remote monitoring capabilities for your database, the following executables must be added to the Windows Firewall exception list:

  • Oracle_home\bin\emagent.exe - Oracle Enterprise Manager

  • Oracle_home\jdk\bin\java.exe - Java Virtual Machine (JVM) for Oracle Enterprise Manager

8.1.3.2 Firewall Exceptions for Oracle Database Examples (or the Companion CD)

After installing the Oracle Database Companion CD, the following executables must be added to the Windows Firewall exception list:

  • Oracle_home\opmn\bin\opmn.exe - Oracle Process Manager

  • Oracle_home\jdk\bin\java.exe - JVM

8.1.3.3 Firewall Exceptions for Oracle Gateways

If your Oracle database interacts with non-Oracle software through a gateway, then you must add the gateway executable to the Windows Firewall exception list. Table 8-1 lists the gateway executables used to access non-Oracle software.

Table 8-1 Oracle Executables Used to Access Non-Oracle Software

  • omtsreco.exe - Oracle Services for Microsoft Transaction Server

  • dg4sybs.exe - Oracle Database Gateway for Sybase

  • dg4tera.exe - Oracle Database Gateway for Teradata

  • dg4msql.exe - Oracle Database Gateway for SQL Server

  • dg4db2.exe - Oracle Database Gateway for Distributed Relational Database Architecture (DRDA)

  • pg4arv.exe - Oracle Database Gateway for Advanced Program to Program Communication (APPC)

  • pg4t4ic.exe - Oracle Database Gateway for APPC

  • dg4mqs.exe - Oracle Database Gateway for WebSphere MQ

  • dg4mqc.exe - Oracle Database Gateway for WebSphere MQ

  • dg4odbc.exe - Oracle Database Gateway for ODBC


8.1.3.4 Firewall Exceptions for Oracle Clusterware and Oracle ASM

If you installed the Oracle Grid Infrastructure software on the nodes in your cluster, then you can enable the Windows Firewall only after adding the following executables and ports to the firewall exception list. The exception list must be updated on each node.

  • Grid_home\bin\gpnpd.exe - Grid Plug and Play daemon

  • Grid_home\bin\oracle.exe - Oracle Automatic Storage Management (Oracle ASM) executable (if using Oracle ASM for storage)

  • Grid_home\bin\racgvip.exe - Virtual Internet Protocol Configuration Assistant

  • Grid_home\bin\evmd.exe - OracleEVMService

  • Grid_home\bin\crsd.exe - OracleCRService

  • Grid_home\bin\ocssd.exe - OracleCSService

  • Grid_home\bin\octssd.exe - Cluster Time Synchronization Service daemon

  • Grid_home\bin\mDNSResponder.exe - multicast-domain name system (DNS) Responder Daemon

  • Grid_home\bin\gipcd.exe - Grid inter-process communication (IPC) daemon

  • Grid_home\bin\gnsd.exe - Grid Naming Service (GNS) daemon

  • Grid_home\bin\ohasd.exe - OracleOHService

  • Grid_home\bin\TNSLSNR.EXE - single client access name (SCAN) listener and local listener for Oracle RAC database and Oracle ASM

  • Grid_home\opmn\bin\ons.exe - Oracle Notification Service (ONS)

  • Grid_home\jdk\jre\bin\java.exe - JVM
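
The list above can be added to the firewall exception list with a short batch script. This is a sketch only: the Grid home path is an assumption, the rule names are arbitrary labels, and the script must be run on every node:

```bat
rem Sketch: add the Grid home executables listed above to the Windows Firewall
rem exception list. Adjust GRID_HOME for your installation; run on each node.
set GRID_HOME=C:\app\12.1.0\grid

for %%E in (gpnpd.exe oracle.exe racgvip.exe evmd.exe crsd.exe ocssd.exe octssd.exe mDNSResponder.exe gipcd.exe gnsd.exe ohasd.exe TNSLSNR.EXE) do (
    netsh firewall add allowedprogram %GRID_HOME%\bin\%%E "Oracle Grid %%E" ENABLE
)
netsh firewall add allowedprogram %GRID_HOME%\opmn\bin\ons.exe "Oracle ONS" ENABLE
netsh firewall add allowedprogram %GRID_HOME%\jdk\jre\bin\java.exe "Oracle Grid JVM" ENABLE
```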

8.1.3.5 Firewall Exceptions for Oracle RAC Database

For the Oracle RAC database, the executables that require exceptions are:

  • Oracle_home\bin\oracle.exe - Oracle RAC database instance

  • Oracle_home\bin\emagent.exe - Oracle Enterprise Manager agent

  • Oracle_home\jdk\bin\java.exe - For the Oracle Enterprise Manager Database Console

In addition, the following ports should be added to the Windows Firewall exception list:

  • Microsoft file sharing system management bus (SMB)

    • TCP ports from 135 through 139

  • Direct-hosted SMB traffic without a network basic I/O system (NetBIOS)

    • port 445 (TCP)
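
The port exceptions above can also be added from the command line; a sketch (the rule names are arbitrary labels):

```bat
rem Example only: open the SMB-related TCP ports used by Oracle RAC.
for %%P in (135 136 137 138 139 445) do (
    netsh firewall add portopening TCP %%P "SMB port %%P"
)
```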

8.1.3.6 Firewall Exceptions for Other Oracle Products

In addition to all the previously listed exceptions, if you use any of the Oracle software listed in Table 8-2, then you must create a Windows Firewall exception for the associated executable.

Table 8-2 Other Oracle Software Products Requiring Windows Firewall Exceptions

  • Data Guard Manager - dgmgrl.exe

  • Oracle Internet Directory lightweight directory access protocol (LDAP) Server - oidldapd.exe

  • External Procedural Calls - extproc.exe


8.2 Recommended Postinstallation Tasks

Oracle recommends that you complete the following tasks as needed after installing Oracle Grid Infrastructure:

8.2.1 Optimize Memory Usage for Programs

The Windows operating system should be optimized for memory usage of 'Programs' instead of 'System Caching'. To modify the memory optimization settings, perform the following steps:

  1. From the Start Menu, select Control Panel, then System.

  2. In the System Properties window, click the Advanced tab.

  3. In the Performance section, click Settings.

  4. In the Performance Options window, click the Advanced tab.

  5. In the Memory Usage section, ensure Programs is selected.

8.2.2 Create a Fast Recovery Area Disk Group

During installation of Oracle Grid Infrastructure, if you select Oracle ASM for storage, a single disk group is created to store the Oracle Clusterware files. If you plan to create a single-instance database, an Oracle RAC database, or an Oracle RAC One Node database, then this disk group can also be used to store the data files for the database. However, you should create a separate disk group for the fast recovery area.

8.2.2.1 About the Fast Recovery Area and the Fast Recovery Area Disk Group

The fast recovery area is a unified storage location for all Oracle Database files related to recovery. Database administrators can set the DB_RECOVERY_FILE_DEST parameter to the path of the fast recovery area to enable on-disk backups and rapid recovery of data. Enabling rapid backups for recent data can reduce requests to system administrators to retrieve backup tapes for recovery operations.

When you enable the fast recovery area in the database initialization parameter file, all RMAN backups, archive logs, control file automatic backups, and database copies are written to the fast recovery area. RMAN automatically manages files in the fast recovery area by deleting obsolete backups and archive files that are no longer required for recovery.

To use a fast recovery area in Oracle RAC, you must place it on an Oracle ASM disk group, a cluster file system, or on a shared directory that is configured through Direct network file system (NFS) for each Oracle RAC instance. In other words, the fast recovery area must be shared among all of the instances of an Oracle RAC database. Oracle Clusterware files and Oracle Database files can be placed on the same disk group as fast recovery area files. However, Oracle recommends that you create a separate fast recovery area disk group to reduce storage device contention.

The fast recovery area is enabled by setting the parameter DB_RECOVERY_FILE_DEST to the same value on all instances. The size of the fast recovery area is set with the parameter DB_RECOVERY_FILE_DEST_SIZE. As a general rule, the larger the fast recovery area, the more useful it becomes. For ease of use, Oracle recommends that you create a fast recovery area disk group on storage devices that can contain at least three days of recovery information. Ideally, the fast recovery area should be large enough to hold a copy of all of your data files and control files, the online redo logs, and the archived redo log files needed to recover your database using the data file backups kept under your retention policy.

Multiple databases can use the same fast recovery area. For example, assume you have created one fast recovery area disk group on disks with 150 gigabyte (GB) of storage, shared by three different databases. You can set the size of the fast recovery area for each database depending on the importance of each database. For example, if test1 is your least important database, products is of greater importance and orders is of greatest importance, then you can set different DB_RECOVERY_FILE_DEST_SIZE settings for each database to meet your retention target for each database: 30 GB for test1, 50 GB for products, and 70 GB for orders.
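
The two initialization parameters described above are set for each database. A minimal sketch in SQL*Plus, assuming a disk group named +FRA and a 50 GB quota (both values are examples only):

```
C:\> sqlplus / as sysdba

SQL> -- The size quota must be set before the destination can be enabled.
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 50G SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '+FRA' SCOPE=BOTH SID='*';
```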

8.2.2.2 Creating the Fast Recovery Area Disk Group

To create an Oracle ASM disk group for the fast recovery area:

  1. Navigate to the bin directory in the Grid home and start Oracle ASM Configuration Assistant (ASMCA). For example:

    C:\> cd app\12.1.0\grid\bin
    C:\> asmca
    
  2. ASMCA opens at the Disk Groups tab. Click Create to create a new disk group.

    The Create Disk Groups window opens.

  3. In the Disk Group Name field, enter a descriptive name for the fast recovery area disk group, for example, FRA.

    In the Redundancy section, select the level of redundancy you want to use.

    In the Select Member Disks field, select eligible disks to be added to the fast recovery area, then click OK.

    The Diskgroup Creation window opens to inform you when disk group creation is complete.

  4. Click OK to acknowledge the message, then click Exit to quit the application.
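
As an alternative to ASMCA, the same disk group can be created from SQL*Plus connected to an Oracle ASM instance. This is a sketch only; the disk group name and disk strings are hypothetical and depend on how your disks were stamped for Oracle ASM:

```
C:\> sqlplus / as sysasm

SQL> -- Example only: substitute the disk names used in your environment.
SQL> CREATE DISKGROUP FRA NORMAL REDUNDANCY
  2  DISK '\\.\ORCLDISKFRA0', '\\.\ORCLDISKFRA1';
```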

8.2.3 Checking the SCAN Configuration

The SCAN is a name that provides service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than to a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes run a particular database instance. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.

You can use the command cluvfy comp scan (located in Grid_home\bin) to confirm that the DNS is correctly associating the SCAN with the addresses. For example:

C:\> cluvfy comp scan
Verifying scan

Checking Single Client Access Name (SCAN)...

Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes

Checking name resolution setup for "node1.example.com"...

Verification of SCAN VIP and Listener setup passed

Verification of scan was successful.

After installation, when a client sends a request to the cluster, the Oracle Clusterware SCAN listeners redirect client requests to servers in the cluster.
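
You can also confirm the DNS configuration directly with nslookup. A sketch, assuming a SCAN name of mycluster-scan.example.com (a hypothetical name); the SCAN should typically resolve to three IP addresses:

```
C:\> nslookup mycluster-scan.example.com
```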

8.3 Configuring Oracle Enterprise Manager Cloud Control After an Upgrade

When upgrading Oracle Grid Infrastructure, one of the scripts run during the installation sets the state of the Windows services for the Oracle Enterprise Manager Database Control and Cloud Control agent to MANUAL. So, when you reboot a node, Windows does not restart the Database Control and Cloud Control agents automatically.

If you are managing Oracle Clusterware as a target from Database Control or Cloud Control, then you must log in to all Database Control and Cloud Control consoles and update the ORACLE_HOME property of the Oracle Clusterware, Oracle ASM, and Oracle Grid Infrastructure Listener targets to point to the new Grid home, using the Monitoring Configuration user interface. If the Windows services for the Database Control or Cloud Control agents are not started, then you may need to start these services manually before you can log in to the consoles and update the target properties.

After you modify and save the ORACLE_HOME target property of Oracle Clusterware and Oracle ASM targets, you can set the state of the Windows services for the Oracle Enterprise Manager Database Control and Cloud Control agent to AUTOMATIC.
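
The service startup type can be changed from the command line with the sc utility. A sketch only: the service name shown here is hypothetical, so use the actual agent service name displayed in the Services control panel on your system (note that the space after start= is required by sc):

```
C:\> sc config "OracleDBConsoleorcl" start= auto
```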

8.4 Using Earlier Oracle Database Releases with Grid Infrastructure

Review the following sections for information about using earlier Oracle Database releases with Oracle Grid Infrastructure 12c Release 1 (12.1) installations:

8.4.1 General Restrictions for Using Earlier Oracle Database Releases

You can use Oracle Database 10g and Oracle Database 11g with Oracle Clusterware and Oracle ASM 12c Release 1 (12.1). If you upgrade an existing release of Oracle Clusterware and Oracle ASM to Oracle Grid Infrastructure 12c Release 1 (12.1) (which includes Oracle Clusterware and Oracle ASM), and you also plan to upgrade your Oracle RAC database to Oracle Database 12c Release 1 (12.1), then the required configuration of the existing databases is completed automatically when you complete the Oracle RAC upgrade, and this section does not concern you.

However, if you upgrade to Oracle Grid Infrastructure 12c Release 1 (12.1), and you have existing Oracle RAC installations that you do not plan to upgrade, or if you install an earlier release of Oracle RAC (11.2) on a cluster running Oracle Grid Infrastructure 12c Release 1 (12.1), then you must complete additional configuration tasks or apply patches, or both, before the earlier database releases will work correctly with Oracle Grid Infrastructure.

Oracle Database homes can only be stored on Oracle ACFS if the database release is Oracle Database 11g Release 2 or higher. Earlier releases of Oracle Database cannot be installed on Oracle ACFS because these releases were not designed to use Oracle ACFS.

Note:

Before you start an Oracle RAC or Oracle Database install on an Oracle Clusterware 12c Release 1 (12.1) installation, if you are upgrading from Oracle Clusterware 11g Release 1 or Oracle Clusterware 10g Release 2, you must first upgrade to Oracle Clusterware 11g Release 2. Then, you must move the OCR and voting files to Oracle ASM storage before upgrading to Oracle Clusterware 12c.

8.4.2 Making Oracle ASM Available to Earlier Oracle Database Releases

To use Oracle ASM with Oracle Database releases earlier than Oracle Database 12c, you must use Local ASM or set the cardinality for Oracle Flex ASM to ALL, instead of the default of 3. After you install Oracle Grid Infrastructure 12c, to use Oracle ASM to provide storage for Oracle Database releases earlier than Oracle Database 12c, use the following command to modify the Oracle ASM resource (ora.asm):

srvctl modify asm -count ALL

This setting changes the cardinality of the Oracle ASM resource so that Oracle Flex ASM instances run on all cluster nodes. You must change the setting even if your cluster has three or fewer nodes, to ensure that databases from earlier releases can find the ora.node.sid.inst instance alias.
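
After modifying the resource, you can confirm the new cardinality with the srvctl configuration command; a sketch:

```
C:\> srvctl config asm
```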

8.4.3 Using ASMCA to Administer Disk Groups for Earlier Database Releases

Use Oracle ASM Configuration Assistant (ASMCA) to create and modify disk groups when you install earlier Oracle Database and Oracle RAC releases on Oracle Grid Infrastructure 12c Release 1 (12.1) installations. Starting with Oracle Grid Infrastructure 11g Release 2, Oracle ASM is installed as part of an Oracle Grid Infrastructure installation, with Oracle Clusterware. You can no longer use Database Configuration Assistant (DBCA) to perform administrative tasks on Oracle ASM.

See Also:

Oracle Automatic Storage Management Administrator's Guide for details on configuring disk group compatibility for databases using Oracle Database 11g Release 2 with Oracle Grid Infrastructure 12c Release 1 (12.1)

8.4.4 Using the Correct LSNRCTL Commands

To administer local and SCAN listeners using the Listener Control utility (LSNRCTL) in Oracle Clusterware and Oracle ASM 12c Release 1 (12.1), use the lsnrctl program located in the Grid home. Do not attempt to use the lsnrctl programs from Oracle home locations for previous releases, because they cannot be used with the new release.

8.4.5 Starting and Stopping Cluster Nodes or Oracle Clusterware Resources

Before shutting down Oracle Clusterware 12c Release 1 (12.1), if you have an Oracle Database 11g Release 2 (11.2) database registered with Oracle Clusterware, then you must do one of the following:

  • Stop the Oracle Database 11g Release 2 database instances first, then stop the Oracle Clusterware stack

  • Use the crsctl stop crs -f command to shut down the Oracle Clusterware stack and ignore any errors that are raised

If you need to shut down a cluster node that currently has Oracle Database and Oracle Grid Infrastructure running, then you must perform the following steps to cleanly shut down the node:

  • Use the crsctl stop crs command to shut down the Oracle Clusterware stack

  • After Oracle Clusterware has been stopped, you can restart the Windows server using shutdown -r.

8.5 Modifying Oracle Clusterware Binaries After Installation

After installation, if you must modify the software installed in your Grid home, then you must first stop the Oracle Clusterware stack. For example, to apply a one-off patch, or modify any of the dynamic-link libraries (DLLs) used by Oracle Clusterware or Oracle ASM, you must follow these steps to stop and restart Oracle Clusterware.

Caution:

To put the changes you make to the Oracle Grid Infrastructure home into effect, you must shut down all executables that run in the Grid home directory and then restart them. In addition, shut down any applications that use Oracle shared libraries or DLL files in the Grid home.

Prepare the Oracle Grid Infrastructure home for modification using the following procedure:

  1. Log in as a member of the Administrators group and go to the directory Grid_home\bin, where Grid_home is the path to the Oracle Grid Infrastructure home.

  2. Shut down Oracle Clusterware using the following command:

    C:\..\bin> crsctl stop crs -f
    
  3. After Oracle Clusterware is completely shut down, perform the updates to the software installed in the Grid home.

  4. Use the following command to restart Oracle Clusterware:

    C:\..\bin> crsctl start crs
    
  5. Repeat steps 1 through 4 on each cluster member node.

Note:

Do not delete directories in the Grid home. For example, do not delete the directory Grid_home\OPatch. If you delete this directory, then the Grid Infrastructure installation owner cannot use OPatch to patch the Grid home, and OPatch displays the error message "checkdir error: cannot create Grid_home\OPatch".