Oracle® Grid Infrastructure Installation Guide
11g Release 2 (11.2) for Microsoft Windows

Part Number E10817-01
3 Configuring Storage for Grid Infrastructure for a Cluster and Oracle RAC

This chapter describes the storage configuration tasks that you must complete before you start the installer to install Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM), and that you must complete before adding an Oracle Real Application Clusters (Oracle RAC) installation to the cluster.

This chapter contains the following topics:

3.1 Reviewing Storage Options

This section describes the supported storage options for Oracle Grid Infrastructure for a cluster. It contains the following sections:

See Also:

The Oracle Certify site for a list of supported vendors for Network Attached Storage options:
http://www.oracle.com/technology/support/

Refer also to the Certify site on My Oracle Support for the most current information about certified storage options:

https://support.oracle.com/

3.1.1 General Storage Considerations for Oracle Grid Infrastructure

Oracle Clusterware voting disks are used to monitor cluster node status, and Oracle Cluster Registry (OCR) files contain configuration information about the cluster. You can place voting disks and OCR files either in an Oracle ASM disk group, or on a cluster file system or shared network file system. Storage must be shared; any node that cannot access an absolute majority of the voting disks (more than half) is restarted.

For a storage option to meet high availability requirements, the files stored on the disk must be protected by data redundancy, so that if one or more disks fail, the data stored on the failed disks can be recovered. This redundancy can be provided externally, using Redundant Array of Independent Disks (RAID) devices or logical volumes that span more than one physical device and implement the stripe-and-mirror-everything (SAME) methodology. If you do not have RAID devices or logical volumes, then you can create additional copies, or mirrors, of the files on different file systems. If you choose to mirror the files, then you must provide disk space for additional Oracle Cluster Registry (OCR) files and at least two additional voting disk files.

Each OCR location should be placed on a different disk. For voting disk file placement, ensure that each file is configured so that it does not share any hardware device, disk, or other single point of failure with the other voting disks. Any node that cannot access an absolute majority of the configured voting disks (more than half) is restarted.
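The absolute-majority rule described above reduces to a one-line check. The following sketch is illustrative only, not Oracle code:

```python
# Illustrative sketch of the voting-disk majority rule: a node stays in
# the cluster only if it can access more than half of the configured
# voting disks; otherwise Oracle Clusterware restarts it.

def node_survives(accessible: int, configured: int) -> bool:
    """Return True if the node can access an absolute majority of voting disks."""
    return accessible > configured / 2

# With three voting disks, a node survives the loss of one disk but not two:
print(node_survives(2, 3))  # True  (2 of 3 is a majority)
print(node_survives(1, 3))  # False (1 of 3 is not; the node is restarted)
```

This is why a minimum of three voting disks is recommended when Oracle Clusterware provides the redundancy: with only two, the loss of either disk leaves no majority.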

Use the following guidelines when choosing storage options:

  • You can choose any combination of the supported storage options for each file type provided that you satisfy all requirements listed for the chosen storage options.

  • You can use Oracle ASM 11g release 2 (11.2) to store Oracle Clusterware files. You cannot use prior Oracle ASM releases to do this.

  • If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk locations and at least three Oracle Cluster Registry locations to provide redundancy.

3.1.2 General Storage Considerations for Oracle RAC

For all Oracle RAC installations, you must choose the storage options that you want to use for Oracle Database files. Oracle Database files include data files, control files, redo log files, the server parameter file, and the password file.

If you want to enable automated backups during the installation, then you must also choose the shared storage option that you want to use for recovery files (the fast recovery area). Use the following guidelines when choosing the storage options to use for each file type:

  • The shared storage option that you choose for recovery files can be the same as or different from the option that you choose for the database files. However, you cannot use raw storage to store recovery files.

  • You can choose any combination of the supported shared storage options for each file type provided that you satisfy all requirements listed for the chosen storage options.

  • Oracle recommends that you choose Oracle ASM as the shared storage option for database and recovery files.

  • For Standard Edition Oracle RAC installations, Oracle ASM is the only supported shared storage option for database or recovery files. You must use Oracle ASM for the storage of Oracle RAC data files, online redo logs, archived redo logs, control files, server parameter files (SPFILE), and the fast recovery area.

  • If you intend to use Oracle ASM with Oracle RAC, and you are configuring a new Oracle ASM instance, then your system must meet the following conditions:

    • All nodes on the cluster have Oracle Clusterware and Oracle ASM 11g release 2 (11.2) installed as part of an Oracle grid infrastructure for a cluster installation.

    • Any existing Oracle ASM instance on any node in the cluster is shut down.

  • Raw devices are supported only when upgrading an existing installation using the partitions already configured. On new installations, using raw device partitions is not supported by Oracle Automatic Storage Management Configuration Assistant (ASMCA) or Oracle Universal Installer (OUI), but is supported by the software if you perform manual configuration.

3.1.2.1 Guidelines for Placing Oracle Data Files on a File System

If you decide to place the Oracle data files on Oracle Cluster File System (OCFS) for Windows, then use the following guidelines when deciding where to place them:

  • You can choose either a single cluster file system or more than one cluster file system to store the data files:

    • If you want to use a single cluster file system, then choose a cluster file system on a physical device that is dedicated to the database.

      For best performance and reliability, choose a RAID device or a logical volume on more than one physical device and implement the stripe-and-mirror-everything methodology, also known as SAME.

    • If you want to use more than one cluster file system, then choose cluster file systems on separate physical devices or partitions that are dedicated to the database.

      This method enables you to distribute physical I/O and create separate control files on different devices for increased reliability. It also enables you to fully implement Oracle Optimal Flexible Architecture (OFA) guidelines. To implement this method, you must choose the Advanced database creation option.

  • If you intend to create a preconfigured database during the installation, then the cluster file system (or systems) that you choose must have at least 4 GB of free disk space.

    For production databases, you must estimate the disk space requirement depending on how you use the database.

  • For optimum performance, the cluster file systems that you choose should be on physical devices that are used by only the database.

    Note:

    You must not create an NTFS partition on a disk that you are using for OCFS for Windows.

3.1.2.2 Guidelines for Placing Oracle Recovery Files on a File System

You must choose a location for recovery files prior to installation only if you intend to enable automated backups during installation.

If you choose to place the Oracle recovery files on a cluster file system, then use the following guidelines when deciding where to place them:

  • To prevent disk failure from making the database files as well as the recovery files unavailable, place the recovery files on a cluster file system that is on a different physical disk from the database files.

    Note:

    Alternatively use an Oracle ASM disk group with a normal or high redundancy level for either or both file types, or use external redundancy.
  • The cluster file system that you choose should have at least 3 GB of free disk space.

    The disk space requirement is the default disk quota configured for the fast recovery area (specified by the DB_RECOVERY_FILE_DEST_SIZE initialization parameter).

    If you choose the Advanced database configuration option, then you can specify a different disk quota value. After you create the database, you can also use Oracle Enterprise Manager to specify a different value.

    See Also:

    Oracle Database Backup and Recovery Basics for more information about sizing the fast recovery area.

3.1.3 Supported Storage Options for Oracle Clusterware and Oracle RAC

There are two ways of storing Oracle Clusterware files:

  • Oracle Automatic Storage Management (Oracle ASM): You can install Oracle Clusterware files (OCR and voting disks) in Oracle ASM disk groups.

    Oracle ASM is an integrated, high-performance database file system and disk manager for Oracle Clusterware and Oracle Database files. It performs striping and mirroring of database files automatically.

    Note:

    You can no longer use OUI to install Oracle Clusterware or Oracle Database files directly on raw devices.

    Only one Oracle ASM instance is permitted for each node regardless of the number of database instances on the node.

  • OCFS for Windows: OCFS for Windows is a cluster file system used to store Oracle Clusterware and Oracle RAC files on the Microsoft Windows platforms. OCFS for Windows is not the same as OCFS2, which is available on Linux.

    Note:

    You cannot put Oracle Clusterware files on Oracle Automatic Storage Management Cluster File System (Oracle ACFS). You cannot put Oracle Clusterware binaries on a cluster file system.

    See Also:

    The Certify page on My Oracle Support for supported cluster file systems

You cannot install the Oracle Grid Infrastructure software on a cluster file system. The Oracle Clusterware home must be on a local, NTFS-formatted disk.

There are several ways of storing Oracle Database (Oracle RAC) files:

  • Oracle Automatic Storage Management (Oracle ASM): You can create Oracle Database files in Oracle ASM disk groups.

    Oracle ASM is the required database storage option for Typical installations, and for Standard Edition Oracle RAC installations.

    Note:

    You can no longer use OUI to install Oracle Clusterware or Oracle Database files or binaries directly on raw devices.

    Only one Oracle ASM instance is permitted for each node regardless of the number of database instances on the node.

  • A supported shared file system: Supported file systems include the following:

    • Oracle Cluster File System (OCFS) for Windows: OCFS for Windows is a cluster file system used to store Oracle Database binary and data files. If you intend to use OCFS for Windows for your database files, then you should create partitions large enough for all the database and recovery files when you create partitions for use by Oracle Database.

      See Also:

      The Certify page on My Oracle Support for supported cluster file systems
    • Oracle Automatic Storage Management Cluster File System (Oracle ACFS): Oracle ACFS provides a general purpose file system that can be used to store the Oracle Database binary files.

      Note:

      You cannot put Oracle Database files on Oracle ACFS.
  • Network File System (NFS) with Oracle Direct NFS client: You can configure Oracle RAC to access NFS V3 servers directly using an Oracle internal Direct NFS client.

    Note:

    You cannot use Direct NFS to store Oracle Clusterware files. You can only use Direct NFS to store Oracle Database files. To install Oracle Real Application Clusters (Oracle RAC) on Windows using Direct NFS, you must have access to a shared storage method other than NFS for the Oracle Clusterware files.

    See Also:

    "About Direct NFS Storage" for more information on using Direct NFS

The following table shows the storage options supported for storing Oracle Clusterware and Oracle RAC files.

Table 3-1 Supported Storage Options for Oracle Clusterware and Oracle RAC Files and Binaries

  • Oracle Automatic Storage Management:
    OCR and voting disks: Yes
    Oracle Clusterware binaries: No
    Oracle RAC binaries: No
    Oracle RAC database files: Yes
    Oracle recovery files: Yes

  • Oracle Automatic Storage Management Cluster File System (Oracle ACFS):
    OCR and voting disks: No
    Oracle Clusterware binaries: No
    Oracle RAC binaries: Yes
    Oracle RAC database files: No
    Oracle recovery files: No

  • OCFS for Windows:
    OCR and voting disks: Yes
    Oracle Clusterware binaries: No
    Oracle RAC binaries: Yes
    Oracle RAC database files: Yes
    Oracle recovery files: Yes

  • Direct NFS access to a certified NAS filer (Note: Direct NFS does not support Oracle Clusterware files):
    OCR and voting disks: No
    Oracle Clusterware binaries: No
    Oracle RAC binaries: No
    Oracle RAC database files: Yes
    Oracle recovery files: Yes

  • Shared disk partitions (raw devices):
    OCR and voting disks: Not supported by OUI or ASMCA, but supported by the software; they can be added or removed after installation
    Oracle Clusterware binaries: No
    Oracle RAC binaries: No
    Oracle RAC database files: Not supported by OUI or ASMCA, but supported by the software; they can be added or removed after installation
    Oracle recovery files: No

  • Local storage:
    OCR and voting disks: No
    Oracle Clusterware binaries: Yes
    Oracle RAC binaries: Yes
    Oracle RAC database files: No
    Oracle recovery files: No


Note:

For the most up-to-date information about supported storage options for Oracle Clusterware and Oracle RAC installations, refer to the Certify pages on the My Oracle Support Web site:
https://support.oracle.com
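For quick reference, Table 3-1 can be encoded as a small lookup. The helper below is purely illustrative (not an Oracle API); the raw-device row is omitted because its support depends on whether OUI/ASMCA or manual configuration is used:

```python
# Illustrative encoding of Table 3-1: each storage option maps to a tuple
# of supported-file-type flags, in the column order of the table.

SUPPORTED = {
    # option: (OCR/voting, Clusterware binaries, RAC binaries,
    #          RAC database files, recovery files)
    "Oracle ASM":       (True,  False, False, True,  True),
    "Oracle ACFS":      (False, False, True,  False, False),
    "OCFS for Windows": (True,  False, True,  True,  True),
    "Direct NFS":       (False, False, False, True,  True),
    "Local storage":    (False, True,  True,  False, False),
}

COLUMNS = ["ocr_voting", "cw_binaries", "rac_binaries",
           "db_files", "recovery_files"]

def supports(option: str, file_type: str) -> bool:
    """Look up one cell of Table 3-1."""
    return SUPPORTED[option][COLUMNS.index(file_type)]

print(supports("Oracle ASM", "ocr_voting"))  # True
print(supports("Oracle ACFS", "db_files"))   # False
```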

3.1.4 After You Have Selected Disk Storage Options

When you have determined your disk storage options, first perform the steps listed in the section "Preliminary Shared Disk Preparation", then configure the shared storage:

3.2 Preliminary Shared Disk Preparation

Complete the following steps to prepare shared disks for storage:

3.2.1 Disabling Write Caching

You must disable write caching on all disks that will be used to share data between the nodes in your cluster. Perform these steps to disable write caching:

  1. Click Start, then select Control Panel, then Administrative Tools, then Computer Management, then Device Manager, and then Disk drives.

  2. Expand the Disk drives and double-click the first drive listed.

  3. Under the Policies tab for the selected drive, uncheck the option that enables write caching.

  4. Double-click each of the other drives that will be used by Oracle Clusterware and Oracle RAC and disable write caching as described in the previous step.

Caution:

Any disks that you use to store files, including database files, that will be shared between nodes, must have write caching disabled.

3.2.2 Enabling Automounting for Windows

If you are using Windows Server 2003 R2 Enterprise Edition or Datacenter Edition, then you must enable disk automounting, as it is disabled by default. For other Windows releases, even though the automount feature is enabled by default, you should verify that it is enabled.

You must enable automounting when using:

  • Raw partitions for Oracle Real Application Clusters (Oracle RAC)

  • Oracle Cluster File System for Windows (OCFS for Windows)

  • Oracle Clusterware

  • Raw partitions for single-node database installations

  • Logical drives for Oracle Automatic Storage Management (Oracle ASM)

Note:

Raw partitions are supported only when upgrading an existing installation using the partitions already configured. On new installations, using raw partitions is not supported by Oracle Automatic Storage Management Configuration Assistant (ASMCA) or Oracle Universal Installer (OUI), but is supported by the software if you perform manual configuration.

If you upgrade the operating system from one version of Windows to another (for example, Windows Server 2003 to Windows Advanced Server 2003), then you must repeat this procedure after the upgrade is finished.

To determine if automatic mounting of new volumes is enabled, use the following commands:

c:\> diskpart
DISKPART> automount
Automatic mounting of new volumes disabled.

To enable automounting:

  1. Enter the following commands at a command prompt:

    c:\> diskpart
    DISKPART> automount enable
    Automatic mounting of new volumes enabled.
    
  2. Type exit to end the diskpart session

  3. Repeat steps 1 and 2 for each node in the cluster.

  4. When you have prepared all of the cluster nodes in your Windows Server 2003 R2 system as described in the previous steps, restart all of the nodes.

Note:

All nodes in the cluster must have automatic mounting enabled in order to correctly install Oracle RAC and Oracle Clusterware. Oracle recommends that you enable automatic mounting before creating any logical partitions for use by the database, Oracle ASM, or the Oracle Cluster File System.

You must restart each node after enabling disk automounting. After it is enabled and the node is restarted, automatic mounting remains active until it is disabled.

3.3 Storage Requirements for Oracle Clusterware and Oracle RAC

Each supported file system type has additional requirements that must be met to support Oracle Clusterware and Oracle RAC. Use the following sections to help you select your storage option:

3.3.1 Requirements for Using a Cluster File System for Shared Storage

To use OCFS for Windows for Oracle Clusterware files, you must comply with the following requirements:

  • If you choose to place your Oracle Cluster Registry (OCR) files on a shared file system, then Oracle recommends that one of the following is true:

    • The disks used for the file system are on a highly available storage device, (for example, a RAID device that implements file redundancy)

    • At least three file systems are mounted, and use the features of Oracle Clusterware 11g release 2 (11.2) to provide redundancy for the OCR and voting disks

  • If you use a RAID device to store the Oracle Clusterware files, then you must have a partition that has at least 560 MB of available space for the OCR and voting disk.

  • If you use the redundancy features of Oracle Clusterware to provide high availability for the OCR and voting disk files, then you need a minimum of three file systems, and each one must have 560 MB of available space for the OCR and voting disk.

    Note:

    The smallest partition size that OCFS for Windows can use is 500 MB

The total required volume size listed in the previous paragraph is cumulative. For example, to store all OCR and voting disk files on a shared file system that does not provide redundancy at the hardware level (external redundancy), you should have at least 1.7 GB of storage available over a minimum of three volumes (three separate volume locations for the OCR and voting disk files, one on each volume). If you use a file system that provides data redundancy, then you need only one physical disk with 560 MB of available space to store the OCR and voting disk files.
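As a quick arithmetic check of the cumulative sizing above (the 560 MB per-location figure and the three-volume layout come from this section):

```python
# Illustrative sizing check: when Oracle Clusterware provides redundancy,
# each mirrored location needs 560 MB for an OCR plus a voting disk file.

MB_PER_LOCATION = 560  # OCR + voting disk, per location (from this section)

def required_mb(mirrored_locations: int) -> int:
    """Total space needed across all mirrored OCR/voting disk locations."""
    return MB_PER_LOCATION * mirrored_locations

print(required_mb(1))  # 560  -> external (hardware) redundancy, one volume
print(required_mb(3))  # 1680 -> roughly 1.7 GB over three volumes, as stated
```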

Note:

If you are upgrading from a previous release of Oracle Clusterware, and the existing OCR and voting disk files are less than 280 MB in size, then you do not need to increase their size before performing the upgrade.

3.3.2 Identifying Storage Requirements for Using Oracle ASM for Shared Storage

To identify the storage requirements for using Oracle ASM, you must determine the number of devices and the amount of free disk space that you require. To complete this task, follow these steps:

Tip:

As you progress through the following steps, make a list of the raw device names you intend to use and have it available during your database or Oracle ASM installation.
  1. Determine whether you want to use Oracle Automatic Storage Management for Oracle Clusterware files (OCR and voting disks), Oracle Database files, recovery files, or all files except for Oracle Clusterware binaries. Oracle Database files include data files, control files, redo log files, the server parameter file, and the password file.

    Note:

    • You do not have to use the same storage mechanism for data files and recovery files. You can store one type of file in a cluster file system while storing the other file type within Oracle ASM. If you plan to use Oracle ASM for both data files and recovery files, then you should create separate Oracle ASM disk groups for the data files and the recovery files.

    • Oracle Clusterware files must use either Oracle ASM or a cluster file system. You cannot have some of the Oracle Clusterware files in Oracle ASM and other Oracle Clusterware files in a cluster file system.

    • If you choose to store Oracle Clusterware files on Oracle ASM and use redundancy for the disk group, then redundant voting files are created automatically on Oracle ASM; you cannot create extra voting files after the installation is complete. Oracle ASM automatically adds or migrates the voting files to maintain the ideal number of voting files based on the redundancy of the disk group.

    If you plan to enable automated backups during the installation, then you can choose Oracle ASM as the shared storage mechanism for recovery files by specifying an Oracle ASM disk group for the fast recovery area. Depending on how you choose to create a database during the installation, you have the following options:

    • If you select an installation method that runs DBCA in interactive mode (for example, by choosing the Advanced database configuration option), then you can decide whether you want to use the same Oracle ASM disk group for data files and recovery files. You can also choose to use different disk groups for each file type. Ideally, you should create separate Oracle ASM disk groups for data files and recovery files.

      The same choice is available to you if you use DBCA after the installation to create a database.

    • If you select an installation type that runs DBCA in non-interactive mode, then you must use the same Oracle ASM disk group for data files and recovery files.

  2. Choose the Oracle ASM redundancy level that you want to use for the Oracle ASM disk group.

    The redundancy level that you choose for the Oracle ASM disk group determines how Oracle ASM mirrors files in the disk group, and determines the number of disks and amount of disk space that you require. The redundancy levels are as follows:

    • External redundancy

      An external redundancy disk group requires a minimum of one disk device. The effective disk space in an external redundancy disk group is the sum of the disk space in all of its devices.

      Because Oracle Automatic Storage Management does not mirror data in an external redundancy disk group, Oracle recommends that you select external redundancy only if you use RAID or similar devices that provide their own data protection mechanisms for disk devices.

      Even if you select external redundancy, you must have at least three voting disks configured, as each voting disk is an independent entity, and cannot be mirrored.

    • Normal redundancy

      A normal redundancy disk group requires a minimum of two disk devices (or two failure groups). The effective disk space in a normal redundancy disk group is half the sum of the disk space in all of its devices.

      For most installations, Oracle recommends that you select normal redundancy disk groups.

    • High redundancy

      In a high redundancy disk group, Oracle ASM uses three-way mirroring to increase performance and provide the highest level of reliability. A high redundancy disk group requires a minimum of three disk devices (or three failure groups). The effective disk space in a high redundancy disk group is one-third the sum of the disk space in all of the devices.

      While high redundancy disk groups provide a high level of data protection, you must consider the higher cost of additional storage devices before deciding to use this redundancy level.
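The effective-capacity rules for the three redundancy levels reduce to simple arithmetic. The following sketch (illustrative only) makes the relationship explicit:

```python
# Illustrative effective-capacity arithmetic for Oracle ASM disk groups:
# external keeps one copy of the data, normal two copies, high three.

MIRRORS = {"external": 1, "normal": 2, "high": 3}

def effective_gb(raw_gb_per_disk: float, disks: int, redundancy: str) -> float:
    """Usable space = total raw space divided by the number of data copies."""
    return raw_gb_per_disk * disks / MIRRORS[redundancy]

# Six 100 GB disks:
print(effective_gb(100, 6, "external"))  # 600.0 (sum of all devices)
print(effective_gb(100, 6, "normal"))    # 300.0 (half the raw space)
print(effective_gb(100, 6, "high"))      # 200.0 (one third of the raw space)
```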

  3. Determine the total amount of disk space that you require for the Oracle Clusterware files.

    Use the following table to determine the minimum number of disks and the minimum disk space requirements for installing Oracle Clusterware, where you have voting disks in separate disk groups:

    Redundancy Level   Minimum Number of Disks   Oracle Cluster Registry (OCR) Files   Voting Disks   Both File Types
    External           1                         280 MB                                280 MB         560 MB
    Normal             3                         560 MB                                840 MB         1.4 GB*
    High               5                         840 MB                                1.4 GB         2.3 GB

    * If you create a disk group during installation, then it must be at least 2 GB in size.

    Note:

    If the voting disk files are in a disk group, then be aware that disk groups with Oracle Clusterware files (OCR and voting disks) have a higher minimum number of failure groups than other disk groups.

    If you intend to place database files and Oracle Clusterware files in the same disk group, then refer to the section "Identifying Storage Requirements for Using Oracle ASM for Shared Storage".

    To ensure high availability of Oracle Clusterware files on Oracle ASM, you need to have at least 2 GB of disk space for Oracle Clusterware files in three separate failure groups, with at least three physical disks. Each disk must have at least 1 GB of capacity to ensure that there is sufficient space to create Oracle Clusterware files.

    For Oracle Clusterware installations, you must also add additional disk space for the Oracle Automatic Storage Management metadata. You can use the following formula to calculate the additional disk space requirements (in MB):

    total = [2 * ausize * disks] + [redundancy * (ausize * (nodes * (clients + 1) + 30) + (64 * nodes) + 533)]

    Where:

    • redundancy = Number of mirrors: external = 1, normal = 2, high = 3.

    • ausize = Metadata AU size in megabytes.

    • nodes = Number of nodes in cluster.

    • clients = Number of database instances for each node.

    • disks = Number of disks in disk group.

    For example, for a four-node Oracle Grid Infrastructure installation, using three disks in a normal redundancy disk group, you require an additional 1684 MB of space:

    [2 * 1 * 3] + [2 * (1 * (4 * (4 + 1)+ 30)+ (64 * 4)+ 533)] = 1684 MB
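The formula and worked example above can be checked with a short sketch (sizes in MB; parameter names mirror the variable definitions in this section):

```python
# The Oracle ASM metadata formula from this section, transcribed directly.

def asm_metadata_mb(ausize: int, disks: int, redundancy: int,
                    nodes: int, clients: int) -> int:
    """Additional disk space (MB) for Oracle ASM metadata.

    redundancy: external = 1, normal = 2, high = 3
    ausize:     metadata AU size in MB
    nodes:      number of nodes in the cluster
    clients:    number of database instances per node
    disks:      number of disks in the disk group
    """
    return (2 * ausize * disks
            + redundancy * (ausize * (nodes * (clients + 1) + 30)
                            + 64 * nodes + 533))

# Four nodes, three disks, normal redundancy, 1 MB AU, four instances per node:
print(asm_metadata_mb(ausize=1, disks=3, redundancy=2, nodes=4, clients=4))
# 1684, matching the worked example above
```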
    
  4. Determine the total amount of disk space that you require for the Oracle database files and recovery files.

    Use the following table to determine the minimum number of disks and the minimum disk space requirements for installing the starter database:

    Redundancy Level   Minimum Number of Disks   Data Files   Recovery Files   Both File Types
    External           1                         1.5 GB       3 GB             4.5 GB
    Normal             2                         3 GB         6 GB             9 GB
    High               3                         4.5 GB       9 GB             13.5 GB

    Note:

    The file sizes listed in the previous table are estimates of minimum requirements for a new installation (or a database without any user data). The file sizes for your database will be larger.

    For Oracle RAC installations, you must also add additional disk space for the Oracle Automatic Storage Management metadata. You can use the following formula to calculate the additional disk space requirements (in MB):

    total = [2 * ausize * disks] + [redundancy * (ausize * (nodes * (clients + 1) + 30) + (64 * nodes) + 533)]

    Where:

    • redundancy = Number of mirrors: external = 1, normal = 2, high = 3.

    • ausize = Metadata AU size in megabytes.

    • nodes = Number of nodes in cluster.

    • clients = Number of database instances for each node.

    • disks = Number of disks in disk group.

    For example, for a four-node Oracle RAC installation, using three disks in a normal redundancy disk group, you require an additional 1684 MB of space:

    [2 * 1 * 3] + [2 * (1 * (4 * (4 + 1)+ 30)+ (64 * 4)+ 533)] = 1684 MB
    
  5. Determine if you can use an existing disk group.

    If an Oracle ASM instance already exists on the system, then you can use an existing disk group to meet these storage requirements. If necessary, you can add disks to an existing disk group during the installation.

    See "Using an Existing Oracle Automatic Storage Management Disk Group" for more information about using an existing disk group.

  6. Optionally, identify failure groups for the Oracle ASM disk group devices.

    Note:

    You need to complete this step only if you use an installation method that runs DBCA in interactive mode (for example, if you choose the Advanced database configuration option). Other installation types do not enable you to specify failure groups.

    If you intend to use a normal or high redundancy disk group, then you can further protect your database against hardware failure by associating a set of disk devices in a custom failure group. Failure groups define Oracle ASM disks that share a common potential failure mechanism. By default, each device comprises its own failure group.

    If two disk devices in a normal redundancy disk group are attached to the same SCSI controller, then the disk group becomes unavailable if the controller fails. The controller in this example is a single point of failure. To protect against failures of this type, you could use two SCSI controllers, each with two disks, and define a failure group for the disks attached to each controller. This configuration would enable the disk group to tolerate the failure of one SCSI controller.

    Note:

    If you define custom failure groups, then you must specify a minimum of two failure groups for normal redundancy disk groups and three failure groups for high redundancy disk groups.

    For more information about Oracle ASM failure groups, refer to Oracle Database Storage Administrator's Guide.
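The controller example above can be modeled in a few lines. This sketch is illustrative only, not Oracle code:

```python
# Illustrative model of the SCSI-controller example: each copy of an
# extent is placed in a different failure group, so a disk group keeps
# at least one copy of its data after losing (copies - 1) failure groups.

def tolerable_group_failures(redundancy: str) -> int:
    """Number of whole failure groups whose loss still leaves one copy."""
    copies = {"normal": 2, "high": 3}[redundancy]
    return copies - 1

# Two SCSI controllers, two disks each, one failure group per controller:
failure_groups = {"controller1": ["disk1", "disk2"],
                  "controller2": ["disk3", "disk4"]}

# With normal redundancy, the disk group tolerates losing one controller:
print(tolerable_group_failures("normal"))  # 1
print(tolerable_group_failures("high"))    # 2
```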

  7. If you are sure that a suitable disk group does not exist on the system, then install or identify appropriate disk devices to add to a new disk group. Use the following guidelines when identifying appropriate disk devices:

    • All of the devices in an Oracle ASM disk group should be the same size and have the same performance characteristics.

    • Do not specify more than one partition on a single physical disk as a disk group device. Oracle ASM expects each disk group device to be on a separate physical disk.

    • Although you can specify a logical volume as a device in an Oracle ASM disk group, Oracle does not recommend using logical volumes. Logical volume managers can hide the physical disk architecture, preventing Oracle ASM from optimizing I/O across the physical devices. Logical volumes are not supported with Oracle RAC.

3.3.2.1 Using an Existing Oracle Automatic Storage Management Disk Group

To use Oracle ASM as the storage option for either database or recovery files, you must use an existing Oracle ASM disk group, or use Oracle ASM Configuration Assistant (ASMCA) to create the necessary disk groups prior to installing Oracle Database 11g release 2.

To determine if an Oracle ASM disk group already exists, or to determine whether there is sufficient disk space in an existing disk group, you can use Oracle Enterprise Manager, either Grid Control or Database Control. Alternatively, you can use the following procedure:

  1. In the Services Control Panel, make sure that the OracleASMService+ASMn service, where n is the node number, has started.

  2. Open a Windows command prompt and temporarily set the ORACLE_SID environment variable to specify the appropriate value for the Oracle ASM instance that you want to use.

    For example, if the Oracle ASM SID is named +ASM1, then enter a setting similar to the following:

    C:\> set ORACLE_SID=+ASM1
    
  3. Use SQL*Plus to connect to the Oracle ASM instance as the SYS user with the SYSASM privilege and start the instance if necessary with a command similar to the following:

    C:\> sqlplus /nolog
    SQL> CONNECT SYS AS SYSASM
    Enter password: sys_password
    Connected to an idle instance.
    
    SQL> STARTUP
    
  4. Enter the following command to view the existing disk groups, their redundancy level, and the amount of free disk space in each disk group:

    SQL> SELECT NAME,TYPE,TOTAL_MB,FREE_MB FROM V$ASM_DISKGROUP;
    
  5. From the output, identify a disk group with the appropriate redundancy level and note the free space that it contains.

  6. If necessary, install, or identify the additional disk devices required to meet the storage requirements listed in the previous section.

    Note:

    If you are adding devices to an existing disk group, then Oracle recommends that you use devices that have the same size and performance characteristics as the existing devices in that disk group.
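After running the query in step 4, you might filter the results against your requirements. The following sketch is illustrative only (the sample rows are hypothetical):

```python
# Illustrative post-processing of the V$ASM_DISKGROUP query from step 4:
# select disk groups that match the wanted redundancy level (TYPE column)
# and have enough free space (FREE_MB column).

def candidate_groups(rows, wanted_type: str, needed_mb: int):
    """rows: (NAME, TYPE, TOTAL_MB, FREE_MB) tuples as returned by the query."""
    return [name for name, gtype, total, free in rows
            if gtype == wanted_type and free >= needed_mb]

# Hypothetical query output:
rows = [("DATA", "NORMAL", 40960, 12000),
        ("FRA",  "EXTERN", 20480,  1500),
        ("GRID", "NORMAL",  4096,  3000)]

print(candidate_groups(rows, "NORMAL", 2048))  # ['DATA', 'GRID']
```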

3.3.3 Restrictions for Disk Partitions Used By Oracle ASM

Be aware of the following restrictions when configuring disk partitions for use with Oracle ASM:

  • You cannot use primary partitions for storing Oracle Clusterware files while running the OUI to install Oracle Clusterware as described in Chapter 4, "Installing Oracle Grid Infrastructure for a Cluster". You must create logical drives inside extended partitions for the disks to be used by Oracle Clusterware files and Oracle ASM.

  • With 64-bit Windows, you can create up to 128 primary partitions for each disk.

  • You can create shared directories only on primary partitions and logical drives.

  • Oracle recommends that you limit the number of partitions you create on a single disk to prevent disk contention.

For these reasons, you might prefer to use extended partitions for storing Oracle software files and not primary partitions.

3.3.4 Requirements for Using a Shared File System

To use a shared file system for Oracle Clusterware, Oracle ASM, and Oracle RAC, the file system must comply with the following requirements:

  • To use a cluster file system, it must be a supported cluster file system, as listed in the section "Supported Storage Options for Oracle Clusterware and Oracle RAC".

  • To use an NFS file system, it must be on a certified NAS device. Log in to My Oracle Support at the following URL, and click the Certify tab to find a list of certified NAS devices.

    https://support.oracle.com/

  • If you choose to place your Oracle Cluster Registry (OCR) files on a shared file system, then Oracle recommends that one of the following is true:

    • The disks used for the file system are on a highly available storage device, for example, a RAID device.

    • At least two file systems are mounted, and you use the features of Oracle Clusterware 11g release 2 (11.2) to provide redundancy for the OCR.

  • If you choose to place your database files on a shared file system, then one of the following should be true:

    • The disks used for the file system are on a highly available storage device (for example, a RAID device).

    • The file systems consist of at least two independent file systems, with the database files on one file system, and the recovery files on a different file system.

  • The user account with which you perform the installation (oracle or grid) must have write permissions to create the files in the path that you specify.

Note:

Upgrading from Oracle9i release 2 is not supported if you use the raw device or shared file for the OCR that was used for the SRVM configuration repository.

If you are upgrading Oracle Clusterware, and your existing cluster uses 100 MB OCR and 20 MB voting disk partitions, then you can continue to use those partition sizes.

All storage products must be supported by both your server and storage vendors.

Use Table 3-2 and Table 3-3 to determine the minimum size for shared file systems:

Table 3-2 Oracle Clusterware Shared File System Volume Size Requirements

  • Voting disks with external redundancy: 3 volumes; at least 280 MB for each voting disk volume.

  • Oracle Cluster Registry (OCR) with external redundancy: 1 volume; at least 280 MB for each OCR volume.

  • Oracle Clusterware files (OCR and voting disks) with redundancy provided by Oracle software: 1 volume; at least 280 MB for each OCR volume and at least 280 MB for each voting disk volume.


Table 3-3 Oracle RAC Shared File System Volume Size Requirements

  • Oracle Database files: 1 volume; at least 1.5 GB for each volume.

  • Recovery files: 1 volume; at least 2 GB for each volume. Note: Recovery files must be on a different volume than database files.


In Table 3-2 and Table 3-3, the total required volume size is cumulative. For example, to store all Oracle Clusterware files on a shared file system with normal redundancy, you should have at least 2 GB of storage available over a minimum of three volumes (three separate volume locations for the OCR and its two mirrors, and one voting disk on each volume). You should have a minimum of three physical disks, each at least 500 MB, to ensure that voting disks and OCR files are on separate physical disks. If you add Oracle RAC using one volume for database files and one volume for recovery files, then you should have at least 3.5 GB of available storage over those two volumes, and at least 5.5 GB available in total across all volumes.
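The cumulative sizing described above can be sketched as a back-of-the-envelope calculation. The per-volume minimums come from Tables 3-2 and 3-3; note that the text rounds the exact totals up to 2 GB, 3.5 GB, and 5.5 GB.

```python
# Illustrative sketch (not an official formula): total the per-volume minimums
# for a normal-redundancy Oracle Clusterware layout plus the Oracle RAC
# database and recovery volumes. Sizes in MB are taken from the tables above.
OCR_MB = 280          # per OCR location (one OCR plus two mirrors)
VOTING_MB = 280       # per voting disk (three voting disks)
DB_FILES_MB = 1536    # at least 1.5 GB for the database-files volume
RECOVERY_MB = 2048    # at least 2 GB for the recovery-files volume

clusterware_mb = 3 * OCR_MB + 3 * VOTING_MB   # spread over three volumes
rac_mb = DB_FILES_MB + RECOVERY_MB            # two additional volumes
print(clusterware_mb, rac_mb, clusterware_mb + rac_mb)  # 1680 3584 5264
```

The raw minimums (about 1.7 GB for Clusterware, 3.5 GB for Oracle RAC, 5.3 GB total) are below the rounded figures in the text, which include headroom.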

3.3.5 Requirements for Files Managed by Oracle

If you use OCFS for Windows or Oracle ASM for your database files, then your database is created by default with files managed by Oracle Database. When using the Oracle Managed Files feature, you need only specify the database object name instead of file names when creating or deleting database files.

You must complete additional configuration procedures to enable Oracle Managed Files.

See Also:

"Using Oracle-Managed Files" in Oracle Database Administrator's Guide

3.4 Configuring the Shared Storage Used by Oracle ASM

The installer does not suggest a default location for the Oracle Cluster Registry (OCR) or the Oracle Clusterware voting disk. If you choose to create these files on Oracle ASM, then you must first create and configure disk partitions to be used by Oracle ASM.

The following sections describe how to create and configure disk partitions to be used by Oracle ASM for storing Oracle Clusterware files or Oracle Database data files, how to configure the Oracle ASM Cluster File System to store other file types, and what to do if you already have storage configured for a previous release of Oracle ASM:

Note:

The OCR is a file that contains the configuration information and status of the cluster. The installer automatically initializes the OCR during the Oracle Clusterware installation.

3.4.1 Creating Disk Partitions for Oracle ASM

To use direct-attached storage (DAS) or storage area network (SAN) disks for Oracle ASM, each disk must have a partition table. Oracle recommends creating exactly one partition for each disk that encompasses the entire disk.

Note:

You can use any physical disk for Oracle ASM, as long as it is partitioned. However, you cannot use network-attached storage (NAS) or Microsoft dynamic disks.

Use the Microsoft Computer Management utility or the command-line tool diskpart to create the partitions. Ensure that you create the partitions without drive letters. After you have created the partitions, the disks can be configured.

See Also:

"Stamp Disks for Oracle ASM" for more information about using diskpart to create a partition

3.4.2 Marking Disk Partitions for Oracle ASM Prior to Installation

The only partitions that OUI displays for Windows systems are logical drives that are on disks that do not contain a primary partition, and that have been stamped with asmtool. Configure the disks before installation either with asmtoolg (GUI version) or with asmtool (command-line version). You can also run the asmtoolg utility during Oracle Grid Infrastructure for a cluster installation.

The asmtoolg and asmtool utilities only work on partitioned disks; you cannot use Oracle ASM on unpartitioned disks. You can also use these tools to reconfigure the disks after installation.

The following section describes the asmtoolg and asmtool functions and commands.

Note:

Refer to Oracle Database Storage Administrator's Guide for more information about using asmtool.

3.4.2.1 Overview of asmtoolg and asmtool

The asmtoolg and asmtool tools associate meaningful, persistent names with disks to facilitate using those disks with Oracle ASM. Oracle ASM uses disk strings to operate more easily on groups of disks at once. The names that asmtoolg or asmtool create make this easier than using Windows drive letters.

All disk names created by asmtoolg or asmtool begin with the prefix ORCLDISK followed by a user-defined prefix (the default is DATA), and by a disk number for identification purposes. You can use them as raw devices in the Oracle ASM instance by specifying a name \\.\ORCLDISKprefixn, where prefix either can be DATA, or can be a value you supply, and where n represents the disk number.

To configure your disks with asmtoolg, refer to the section "Using asmtoolg (Graphical User Interface)". To configure the disks with asmtool, refer to the section "Using asmtool (Command Line)".
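The naming scheme described above can be sketched in a few lines. This is illustrative only: asmtool writes the stamp to the disk header, rather than merely formatting a string.

```python
# Sketch of the stamp naming scheme: names begin with ORCLDISK, followed by a
# user-defined prefix (default DATA) and a disk number, and the device is then
# referenced as \\.\ORCLDISKprefixn.
def asm_device_name(disk_number: int, prefix: str = "DATA") -> str:
    return r"\\.\ORCLDISK" + prefix + str(disk_number)

print(asm_device_name(0))         # \\.\ORCLDISKDATA0
print(asm_device_name(2, "ASM"))  # \\.\ORCLDISKASM2
```

Because every stamped name shares the ORCLDISK prefix, a single disk string can match a whole group of disks at once, which is what makes these names more convenient than Windows drive letters.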

3.4.2.2 Using asmtoolg (Graphical User Interface)

Use asmtoolg, a graphical interface, to create device names; use asmtoolg to add, change, delete, and examine the devices available for use in Oracle ASM.

To add or change disk stamps:

  1. In the installation media for Oracle Grid Infrastructure, go to the asmtool folder and double-click asmtoolg.

    If Oracle Clusterware is already installed, then go to the Grid_home\bin folder and double-click asmtoolg.exe.

    On Windows Server 2008 and Windows Server 2008 R2, if User Account Control (UAC) is enabled, then you must create a desktop shortcut to a DOS command window. Open the command window using Run as Administrator from the right-click context menu, and then launch asmtoolg.

  2. Select the Add or change label option, and then click Next.

    asmtoolg shows the devices available on the system. Unrecognized disks have a status of "Candidate device", stamped disks have a status of "Stamped ASM device," and disks that have had their stamp deleted have a status of "Unstamped ASM device." The tool also shows disks that are recognized by Windows as a file system (such as NTFS). These disks are not available for use as Oracle ASM disks, and cannot be selected. In addition, Microsoft Dynamic disks are not available for use as Oracle ASM disks.

    If necessary, follow the steps under "Stamp Disks for Oracle ASM" to create disk partitions for the Oracle ASM instance.

  3. On the Stamp Disks window, select the disks to stamp.

    For ease of use, Oracle ASM can generate unique stamps for all of the devices selected for a given prefix. The stamps are generated by concatenating a number with the prefix specified. For example, if the prefix is DATA, then the first Oracle ASM link name is ORCLDISKDATA0.

    You can also specify the stamps of individual devices.

  4. Optionally, select a disk to edit the individual stamp (Oracle ASM link name).

  5. Click Next.

  6. Click Finish.

To delete disk stamps:

  1. Select the Delete labels option, then click Next.

    The delete option is only available if disks exist with stamps. The delete screen shows all stamped Oracle ASM disks.

  2. On the Delete Stamps screen, select the disks to unstamp.

  3. Click Next.

  4. Click Finish.

3.4.2.3 Using asmtool (Command Line)

asmtool is a command-line interface for stamping disks. It has the following options:

Option: -add

Adds or changes stamps. You must specify the hard disk, partition, and new stamp name. If the disk is a raw device or has an existing Oracle ASM stamp, then you must specify the -force option. If necessary, follow the steps under "Stamp Disks for Oracle ASM" to create disk partitions for the Oracle ASM instance.

Example:

    asmtool -add [-force]
    \Device\Harddisk1\Partition1 ORCLDISKASM0
    \Device\Harddisk2\Partition1 ORCLDISKASM2...

Option: -addprefix

Adds or changes stamps using a common prefix to generate stamps automatically. The stamps are generated by concatenating a number with the specified prefix. If the disk is a raw device or has an existing Oracle ASM stamp, then you must specify the -force option.

Example:

    asmtool -addprefix ORCLDISKASM [-force]
    \Device\Harddisk1\Partition1
    \Device\Harddisk2\Partition1...

Option: -create

Creates an Oracle ASM disk device from a file instead of a partition.

Note: Usage of this command is not supported for production environments.

Example:

    asmtool -create \\server\share\file 1000
    asmtool -create D:\asm\asmfile02.asm 240

Option: -list

Lists available disks. The stamp, Windows device name, and disk size in megabytes are shown.

Example:

    asmtool -list

Option: -delete

Removes existing stamps from disks.

Example:

    asmtool -delete ORCLDISKASM0 ORCLDISKASM1...

Note:

If you use -add, -addprefix, or -delete, asmtool notifies the Oracle ASM instance on the local machine and, if available, on other nodes in the cluster, to rescan the available disks.

3.4.3 Configuring Oracle Automatic Storage Management Cluster File System (Oracle ACFS)

Oracle ACFS is installed as part of an Oracle Grid Infrastructure installation (Oracle Clusterware and Oracle Automatic Storage Management) for 11g release 2 (11.2).

Note:

Oracle ACFS is supported only on Windows Server 2003 64-bit and Windows Server 2003 R2 64-bit. The other Windows releases that are supported for Oracle Grid Infrastructure and Oracle Clusterware 11g release 2 (11.2) are not supported for Oracle ACFS.

To configure Oracle Automatic Storage Management Cluster File System for an Oracle Database home for an Oracle RAC database, perform the following steps:

  1. Install Oracle Grid Infrastructure for a cluster (Oracle Clusterware and Oracle ASM).

  2. Start Oracle ASM Configuration Assistant as the grid installation owner.

  3. The Configure ASM: ASM Disk Groups page shows you the Oracle ASM disk group you created during installation. Click the ASM Cluster File Systems tab.

  4. On the ASM Cluster File Systems page, right-click the Data disk, then select Create ACFS for Database Home.

  5. In the Create ACFS Hosted Database Home window, enter the following information:

    • Database Home ADVM Volume Device Name: Enter the name of the database home. The name must be unique in your enterprise. For example: racdb_01

    • Database Home Mountpoint: Enter the directory path for the mountpoint. For example: M:\acfsdisks\racdb_01

      Make a note of this mountpoint for future reference.

    • Database Home Size (GB): Enter in gigabytes the size you want the database home to be.

    • Click OK when you have completed your entries.

  6. During Oracle RAC installation, ensure that you or the DBA who installs Oracle RAC selects for the Oracle home the mountpoint you provided in the Database Home Mountpoint field (in the preceding example, M:\acfsdisks\racdb_01).

See Also:

Oracle Database Storage Administrator's Guide for more information about configuring and managing your storage with Oracle ACFS

3.4.4 Migrating Existing Oracle ASM Instances

If you have an Oracle ASM installation from a prior release installed on your server, or in an existing Oracle Clusterware installation, then you can use Oracle Automatic Storage Management Configuration Assistant (ASMCA, located in the path Grid_home\bin) to upgrade the existing Oracle ASM instance to Oracle ASM 11g release 2 (11.2), and subsequently configure failure groups, Oracle ASM volumes and Oracle Automatic Storage Management Cluster File System (Oracle ACFS).

Note:

You must first shut down all database instances and applications on the node with the existing Oracle ASM instance before upgrading it.

During installation, if you chose to use Oracle ASM and ASMCA detects that there is a prior Oracle ASM version installed in another Oracle ASM home, then after installing the Oracle ASM 11g release 2 (11.2) binaries, you can start ASMCA to upgrade the existing Oracle ASM instance. You can then choose to configure an Oracle ACFS deployment by creating Oracle ASM volumes and using the upgraded Oracle ASM to create the Oracle ACFS.

On an existing Oracle Clusterware or Oracle RAC installation, if the prior version of the software on all nodes is Oracle ASM 11g release 1, then you are provided with the option to perform a rolling upgrade of Oracle ASM instances. If the prior version of the software for an Oracle RAC installation is from a release prior to Oracle ASM 11g release 1, then rolling upgrades cannot be performed. Oracle ASM on all nodes will be upgraded to Oracle ASM 11g release 2 (11.2).

3.5 Configuring Storage for Oracle Database Files on OCFS for Windows

To use OCFS for Windows for your Oracle home and data files, you must create, at a minimum, the partitions described in this section before you run OUI to install Oracle Clusterware.

Log in to Windows as a member of the Administrators group and perform the steps described in this section to set up the shared disk raw partitions for OCFS for Windows. Windows refers to raw partitions as logical drives. If you need more information about creating partitions, then refer to the Windows online help from within the Disk Management utility.

  1. Run the Windows Disk Management utility from one node to create an extended partition. Use a basic disk; dynamic disks are not supported.

  2. Create a partition for the Oracle Database data files and recovery files, and optionally create a second partition for the Oracle home.

    The number of partitions used for OCFS for Windows affects performance. Therefore, you should create the minimum number of partitions needed for the OCFS for Windows option you choose.

Note:

Oracle supports installing the database into multiple Oracle Homes on a single system. This allows flexibility in deployment and maintenance of the database software. For example, it allows you to run different versions of the database simultaneously on the same system, or it allows you to upgrade specific database or Oracle Automatic Storage Management instances on a system without affecting other running databases.

However, when you have installed multiple Oracle Homes on a single system, there is also some added complexity introduced that you may need to take into account to allow these Oracle Homes to coexist. For more information on this topic, refer to Oracle Database Platform Guide for Microsoft Windows and Oracle Real Application Clusters Installation Guide

To create the required partitions, perform the following steps:

  1. From one of the existing nodes of the cluster, run the DiskPart utility as follows:

    C:\> diskpart
    DISKPART>
    
  2. List the available disks. By specifying its disk number (n), select the disk on which you want to create a partition.

    DISKPART> list disk
    DISKPART> select disk n
    
  3. Create an extended partition:

    DISKPART> create part ext
    
  4. Create a logical drive of the desired size after the extended partition is created using the following syntax:

    DISKPART> create part log [size=n] [offset=n] [noerr]
    
  5. Repeat steps 2 through 4 for the second and any additional partitions. An optimal configuration is one partition for the Oracle home and one partition for Oracle Database files.

  6. List the available volumes, and remove any drive letters from the logical drives you plan to use.

    DISKPART> list volume
    DISKPART> select volume n
    DISKPART> remove
    
  7. If you are preparing drives on a Windows Server 2003 R2 system, then restart all nodes in the cluster after you have created the logical drives.

  8. Check all nodes in the cluster to ensure that the partitions are visible on all the nodes and to ensure that none of the Oracle partitions have drive letters assigned. If any partitions have drive letters assigned, then remove them by performing these steps:

    • Right-click the partition in the Windows Disk Management utility

    • Select "Change Drive Letters and Paths..." from the menu

    • Click Remove in the "Change Drive Letter and Paths" window

3.5.1 Formatting Drives to Use OCFS for Windows after Installation

If you have already installed Oracle Grid Infrastructure, and you want to use OCFS for Windows for storage for Oracle RAC, then run the ocfsformat.exe command from the Grid_home\cfs directory using the following syntax:

Grid_home\cfs\OcfsFormat /m link_name /c ClusterSize_in_KB /v volume_label /f /a

Where:

  • /m link_name is the mountpoint for the file system that you want to format with OCFS for Windows. On Windows, provide a drive letter corresponding to the logical drive.

  • /c ClusterSize_in_KB is the cluster size, or allocation size, for the OCFS for Windows volume (this option must be used with the /a option, or else the default size of 4 KB is used).

    Note:

    The cluster size is essentially the block size. Recommended values are 1024 (1 MB) if the OCFS for Windows disk partition will be used for Oracle data files, and 4 (4 KB) if it will be used for the Oracle home.
  • /v volume_label is an optional volume label

  • The /f option forces the format of the specified volume

  • The /a option, if specified, forces OcfsFormat to use the clustersize specified with the /c option

For example, to create an OCFS for Windows formatted shared disk partition named DATA, mounted as U:, using a shared disk with a non-default cluster size of 1 MB, you would use the following command:

ocfsformat /m U: /c 1024 /v DATA /f /a

3.6 Configuring Direct NFS Storage for Oracle RAC Data Files

This section contains the following information about Direct NFS:

3.6.1 About Direct NFS Storage

Oracle Disk Manager (ODM) can manage network file systems (NFS) on its own. This is referred to as Direct NFS. Direct NFS implements NFS version 3 protocol within the Oracle RDBMS kernel. This change enables monitoring of NFS status using the ODM interface. The Oracle RDBMS kernel driver tunes itself to obtain optimal use of available resources.

Starting with Oracle Database 11g release 1 (11.1), you can configure Oracle Database to access NFS version 3 servers directly using Direct NFS. This allows you to store data files on a supported NFS system.

If Oracle Database is unable to open an NFS server using Direct NFS, then an informational message is logged into the Oracle alert and trace files indicating that Direct NFS could not be established.

Note:

Direct NFS does not work if the backend NFS server does not support a write size (wtmax) of 32768 or larger.

The Oracle files resident on the NFS server that are served by the Direct NFS Client can also be accessed through a third party NFS client. Management of Oracle data files created with Direct NFS should be done according to the guidelines specified in Oracle Database Administrator's Guide.

Use the following views for Direct NFS management:

  • V$DNFS_SERVERS: Lists the servers that are accessed using Direct NFS.

  • V$DNFS_FILES: Lists the files that are currently open using Direct NFS.

  • V$DNFS_CHANNELS: Shows the open network paths, or channels, to servers for which Direct NFS is providing files.

  • V$DNFS_STATS: Lists performance statistics for Direct NFS.

3.6.2 About the Oranfstab File for Direct NFS

If you use Direct NFS, then you must create a new configuration file, oranfstab, to specify the options, attributes, and parameters that enable Oracle Database to use Direct NFS. Direct NFS looks for the mount point entries in Oracle_home\database\oranfstab. It uses the first matched entry as the mount point. You must add the oranfstab file to the Oracle_home\database directory.

For Oracle RAC installations, if you want to use Direct NFS, then you must replicate the oranfstab file on all of the nodes. You must also keep all of the oranfstab files synchronized on all nodes.

When the oranfstab file is placed in Oracle_home\database, the entries in the file are specific to a single database. All nodes running an Oracle RAC database should use the same Oracle_home\database\oranfstab file.

Note:

If you remove an NFS path from oranfstab that Oracle Database is using, then you must restart the database for the change to be effective. In addition, the mount point that you use for the file system must be identical on each node.

See Also:

"Enabling the Direct NFS Client" for more information about creating the oranfstab file

3.6.3 Mounting NFS Storage Devices with Direct NFS

Direct NFS determines mount point settings to NFS storage devices based on the configuration information in oranfstab. If Oracle Database is unable to open an NFS server using Direct NFS, then an error message is written into the Oracle alert and trace files indicating that Direct NFS could not be established.

3.6.4 Specifying Network Paths for a NFS Server

Direct NFS can use up to four network paths defined in the oranfstab file for an NFS server. The Direct NFS client performs load balancing across all specified paths. If a specified path fails, then Direct NFS re-issues all outstanding requests over any remaining paths.

Note:

You can have only one active Direct NFS implementation for each instance. Using Direct NFS on an instance prevents the use of another Direct NFS implementation.
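The multipath behavior described in this section can be sketched with a toy scheduler. This is a hypothetical illustration of load balancing with failover over up to four configured paths, not Oracle's actual Direct NFS implementation.

```python
# Hypothetical sketch: round-robin over the configured Direct NFS paths,
# skipping any path that has failed, as described in the text above.
from itertools import cycle

def next_path(paths, failed):
    """Return an iterator over usable paths in round-robin order."""
    usable = [p for p in paths if p not in failed]
    if not usable:
        raise RuntimeError("no Direct NFS paths remaining")
    return cycle(usable)

paths = ["NfsPath1", "NfsPath2", "NfsPath3", "NfsPath4"]
rr = next_path(paths, failed={"NfsPath2"})
print([next(rr) for _ in range(4)])
```

When NfsPath2 fails, requests simply rotate over the three remaining paths; outstanding requests on the failed path would be re-issued over these survivors.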

3.6.5 Enabling the Direct NFS Client

To enable the Direct NFS Client, you must add an oranfstab file to Oracle_home\database. When oranfstab is placed in this directory, the entries in this file are specific to one particular database. The Direct NFS Client searches for the mount point entries as they appear in oranfstab. The Direct NFS Client uses the first matched entry as the mount point.

Complete the following procedure to enable the Direct NFS Client:

  1. Create an oranfstab file with the following attributes for each NFS server that you want to access using Direct NFS:

    • server: The NFS server name.

    • path: Up to four network paths to the NFS server, specified either by IP address, or by name, as displayed using the ifconfig command on the NFS server.

    • local: Up to four network interfaces on the database host, specified by IP address or by name, as displayed by the ipconfig command on the database host.

    • export: The exported path from the NFS server. Use a UNIX-style path.

    • mount: The corresponding local mount point for the exported volume. Use a Windows-style path.

    • mnt_timeout: (Optional) Specifies the time (in seconds) for which Direct NFS client should wait for a successful mount before timing out. The default timeout is 10 minutes.

    • uid: (Optional) The UNIX user ID to be used by Direct NFS to access all NFS servers listed in oranfstab. The default value is uid:65534, which corresponds to user:nobody on the NFS server.

    • gid: (Optional) The UNIX group ID to be used by Direct NFS to access all NFS servers listed in oranfstab. The default value is gid:65534, which corresponds to group:nogroup on the NFS server.

    Note:

    Direct NFS ignores a uid or gid value of 0.

    The following is an example of an oranfstab file with two NFS server entries, where the first NFS server uses 2 network paths and the second NFS server uses 4 network paths:

    server: MyDataServer1
    local: 132.34.35.10
    path: 132.34.35.12
    local: 132.34.55.10
    path: 132.34.55.12
    export: /vol/oradata1 mount: C:\APP\ORACLE\ORADATA\ORCL
     
    server: MyDataServer2
    local: LocalInterface1
    path: NfsPath1
    local: LocalInterface2
    path: NfsPath2
    local: LocalInterface3
    path: NfsPath3
    local: LocalInterface4
    path: NfsPath4
    export: /vol/oradata2 mount: C:\APP\ORACLE\ORADATA\ORCL2
    export: /vol/oradata3 mount: C:\APP\ORACLE\ORADATA\ORCL3
    

    The mount point specified in the oranfstab file represents the local path where the database files would reside normally, as if Direct NFS were not used. For example, if a database that does not use Direct NFS would have data files located in the C:\app\oracle\oradata\orcl directory, then you specify C:\app\oracle\oradata\orcl for the NFS virtual mount point in the corresponding oranfstab file.

    Note:

    The exported path from the NFS server must be accessible for read/write/execute by the user with the uid, gid specified in oranfstab. If neither uid nor gid is listed, then the exported path must be accessible by the user with uid:65534 and gid:65534.
  2. Oracle Database uses the Oracle Disk Manager (ODM) library, oranfsodm11.dll, to enable Direct NFS. To replace the standard ODM library, oraodm11.dll, with the ODM NFS library, complete the following steps:

    1. Change directory to Oracle_home\bin.

    2. Shut down the Oracle Database instance on a node using SRVCTL.

    3. Enter the following commands:

      copy oraodm11.dll oraodm11.dll.orig
      copy /Y oranfsodm11.dll oraodm11.dll 
      
    4. Restart the Oracle Database instance using SRVCTL.

    5. Repeat steps 1 through 4 for each node in the cluster.
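As a rough illustration of how the mount point entries in oranfstab are matched, here is a minimal parser for the simple "key: value" layout shown in the example above (an export line may carry its mount on the same line). The parsing rules are an assumption for illustration only; the actual Direct NFS client may interpret the file differently.

```python
# Minimal sketch of parsing the oranfstab layout shown earlier. "local" and
# other attribute lines are skipped here for brevity; only server, path, and
# export/mount pairs are collected.
def parse_oranfstab(text):
    servers, current = [], None
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("server:"):
            current = {"server": line.split(":", 1)[1].strip(),
                       "paths": [], "exports": []}
            servers.append(current)
        elif line.startswith("path:"):
            current["paths"].append(line.split(":", 1)[1].strip())
        elif line.startswith("export:"):
            export_part, _, mount_part = line.partition("mount:")
            current["exports"].append(
                (export_part.split(":", 1)[1].strip(), mount_part.strip()))
    return servers

sample = """server: MyDataServer1
local: 132.34.35.10
path: 132.34.35.12
export: /vol/oradata1 mount: C:\\APP\\ORACLE\\ORADATA\\ORCL
"""
print(parse_oranfstab(sample))
```

Because Direct NFS uses the first matched entry as the mount point, the order of entries in the file matters, which is one reason the oranfstab files on all Oracle RAC nodes must be kept synchronized.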

3.6.6 Performing Basic File Operations Using the ORADNFS Utility

ORADNFS is a utility that enables database administrators to perform basic file operations over the Direct NFS Client on Microsoft Windows platforms.

ORADNFS is a multi-call binary: a single binary that acts as a number of utilities. You must be a member of the local ORA_DBA group to use ORADNFS. To execute a command with ORADNFS, issue the command as an argument on the command line.

The following command prints a list of commands available with ORADNFS:

C:\> oradnfs help

To display the list of files in the NFS directory mounted as C:\ORACLE\ORADATA, use the following command:

C:\> oradnfs ls C:\ORACLE\ORADATA\ORCL

Note:

A valid copy of the oranfstab configuration file must be present in Oracle_home\database for ORADNFS to operate.

3.6.7 Disabling Direct NFS Client

Use one of the following methods to disable the Direct NFS client:

  • Remove the oranfstab file.

  • Restore the original oraodm11.dll file by reversing the process you completed in "Enabling the Direct NFS Client".

  • Remove the specific NFS server or export paths in the oranfstab file.

3.7 Desupport of Raw Devices

With Oracle Database 11g and Oracle RAC 11g, writing data files directly to raw devices using Database Configuration Assistant or Oracle Universal Installer is not supported. You can still use raw devices with Oracle ASM.