5 Configuring Oracle Real Application Clusters Storage

This chapter includes storage administration tasks that you should complete if you intend to use Oracle Clusterware with Oracle Real Application Clusters (Oracle RAC).

This chapter contains the following topics:

  • Reviewing Storage Options for Oracle Database and Recovery Files

  • Checking for Available Shared Storage with CVU

  • Choosing a Storage Option for Oracle Database Files

  • Configuring Storage for Oracle Database Files on a Supported Shared File System

  • Configuring Disks for Automatic Storage Management

  • Configuring Storage for Oracle Database Files on Shared Storage Devices

  • Configuring Disks for Database Files on Raw Devices

  • Checking the System Setup with CVU

5.1 Reviewing Storage Options for Oracle Database and Recovery Files

This section describes supported options for storing Oracle Database files and recovery files.

See Also:

The Oracle Certify site for a list of supported vendors for Network Attached Storage options:
http://www.oracle.com/technology/support/metalink/

Refer also to the Certify site on OracleMetalink for the most current information about certified storage options:

https://metalink.oracle.com/

5.1.1 Overview of Oracle Database and Recovery File Options

There are three ways of storing Oracle Database and recovery files:

  • Automatic Storage Management: Automatic Storage Management (ASM) is an integrated, high-performance database file system and disk manager for Oracle Database files. It performs striping and mirroring of database files automatically.

    Note:

    For Standard Edition Oracle Database installations using Oracle RAC, ASM is the only supported storage option.

    Only one ASM instance is permitted for each node regardless of the number of database instances on the node.

  • A supported shared file system: Supported file systems include the following:

    • A supported cluster file system: Note that if you intend to use a cluster file system for your data files, then you should create partitions large enough for the database files when you create partitions for Oracle Clusterware.

      See Also:

      The Certify page on OracleMetalink for supported cluster file systems
    • NAS Network File System (NFS) listed on Oracle Certify: Note that if you intend to use NFS for your data files, then you should create partitions large enough for the database files when you create partitions for Oracle Clusterware.

    See Also:

    The Certify page on OracleMetalink for supported Network Attached Storage (NAS) devices, and supported cluster file systems
  • Raw Devices: A partition is required for each database file. If you do not use ASM, then for new installations on raw devices, you must use a custom installation.

5.1.2 General Storage Considerations for Oracle RAC

For all installations, you must choose the storage option that you want to use for Oracle Database files, or for Oracle Clusterware with Oracle RAC. If you want to enable automated backups during the installation, then you must also choose the storage option that you want to use for recovery files (the Fast recovery area). You do not have to use the same storage option for each file type.

For single-instance Oracle Database installations using Oracle Clusterware for failover, you must use ASM or shared raw disks if you do not want failover processing to include dismounting and remounting local file systems.

The following table shows the storage options supported for storing Oracle Database files and Oracle Database recovery files. Oracle Database files include data files, control files, redo log files, the server parameter file, and the password file.

Note:

For the most up-to-date information about supported storage options for Oracle RAC installations, refer to the Certify pages on the OracleMetaLink Web site:
https://metalink.oracle.com

Table 5-1 Supported Storage Options for Oracle Database and Recovery Files

Storage Option                                      Database Files   Recovery Files

Automatic Storage Management                        Yes              Yes

Local storage                                       No               No

NFS file system (requires a certified NAS device)   Yes              Yes

Shared raw devices                                  Yes              No


Use the following guidelines when choosing the storage options that you want to use for each file type:

  • You can choose any combination of the supported storage options for each file type provided that you satisfy all requirements listed for the chosen storage options.

  • Oracle recommends that you choose Automatic Storage Management (ASM) as the storage option for database and recovery files.

  • For Standard Edition Oracle RAC installations, ASM is the only supported storage option for database or recovery files.

  • If you intend to use ASM with Oracle RAC, and you are configuring a new ASM instance, then your system must meet the following conditions:

    • All nodes on the cluster have the 11g release 1 (11.1) version of Oracle Clusterware installed.

    • Any existing ASM instance on any node in the cluster is shut down.

  • If you intend to upgrade an existing Oracle RAC database, or an Oracle RAC database with ASM instances, then you must ensure that your system meets the following conditions:

    • Oracle Universal Installer (OUI) and Database Configuration Assistant (DBCA) are run on the node where the Oracle RAC database or Oracle RAC database with ASM instance is located.

    • The Oracle RAC database or Oracle RAC database with an ASM instance is running on the same nodes that you intend to make members of the new cluster installation. For example, if you have an existing Oracle RAC database running on a three-node cluster, then you must install the upgrade on all three nodes. You cannot upgrade only two nodes of the cluster and remove the third instance as part of the upgrade.

    See Also:

    Oracle Database Upgrade Guide for information about how to prepare for upgrading an existing database
  • If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk areas to provide voting disk redundancy.

5.1.3 After You Have Selected Disk Storage Options

After you have installed and configured Oracle Clusterware storage, and after you have reviewed your disk storage options for Oracle Database files, you must perform the following tasks in the order listed:

  1. Check for available shared storage with CVU

     Refer to "Checking for Available Shared Storage with CVU".

  2. Configure storage for Oracle Database files and recovery files

5.2 Checking for Available Shared Storage with CVU

To check for all shared file systems available across all nodes on the cluster on a supported shared file system, log in as the installation owner user (oracle or crs), and use the following syntax:

/mountpoint/runcluvfy.sh comp ssa -n node_list

If you want to check the shared accessibility of a specific shared storage type to specific nodes in your cluster, then use the following command syntax:

/mountpoint/runcluvfy.sh comp ssa -n node_list -s storageID_list

In the preceding syntax examples, the variable mountpoint is the mountpoint path of the installation media, the variable node_list is the list of nodes you want to check, separated by commas, and the variable storageID_list is the list of storage device IDs for the storage devices managed by the file system type that you want to check.

For example, if you want to check the shared accessibility from node1 and node2 of storage devices /dw/dsk/c1t2d3 and /dw/dsk/c2t4d5, and your mountpoint is /dev/dvdrom/, then enter the following command:

/dev/dvdrom/clusterware/cluvfy/runcluvfy.sh comp ssa -n node1,node2 -s /dw/dsk/c1t2d3,/dw/dsk/c2t4d5

If you do not specify storage device IDs in the command, then the command searches for all available storage devices connected to the nodes on the list.
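
For example, to check all available shared file systems across node1 and node2 using the same installation media mount point as the preceding example, you could enter a command similar to the following (the node names are hypothetical):

/dev/dvdrom/clusterware/cluvfy/runcluvfy.sh comp ssa -n node1,node2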

5.3 Choosing a Storage Option for Oracle Database Files

Database files consist of the files that make up the database and the recovery area files. There are three options for storing database files:

  • Network File System (NFS)

  • Automatic Storage Management (ASM)

  • Raw devices (database files only; not supported for the recovery area)

During configuration of Oracle Clusterware, if you selected NFS, and the volumes that you created are large enough to hold the database files and recovery files, then you have completed required preinstallation steps. You can proceed to Chapter 6, "Installing Oracle Clusterware".

If you want to place your database files on ASM, then proceed to "Configuring Disks for Automatic Storage Management".

If you want to place your database files on raw devices, and manually provide storage management for your database and recovery files, then proceed to "Configuring Storage for Oracle Database Files on Shared Storage Devices".

Note:

Databases can consist of a mixture of ASM files and non-ASM files. Refer to Oracle Database Administrator's Guide for additional information about ASM. For NFS certification status, refer to the Certify page on OracleMetaLink.

5.4 Configuring Storage for Oracle Database Files on a Supported Shared File System

Review the following sections to complete storage requirements for Oracle Database files:

5.4.1 Requirements for Using a File System for Oracle Database Files

To use a file system for Oracle Database files, the file system must comply with the following requirements:

  • To use a cluster file system, it must be a supported cluster file system. At the time of this release, no cluster file system is supported.

  • To use an NFS file system, it must be on a certified NAS device.

  • If you choose to place your database files on a shared file system, then one of the following must be true:

    • The disks used for the file system are on a highly available storage device, (for example, a RAID device that implements file redundancy).

    • The file systems consist of at least two independent file systems, with the database files on one file system, and the recovery files on a different file system.

  • The oracle user must have write permissions to create the files in the path that you specify.

Use Table 5-2 to determine the partition size for shared file systems.

Table 5-2 Shared File System Volume Size Requirements

File Types Stored       Number of Volumes   Volume Size

Oracle Database files   1                   At least 1.5 GB for each volume

Recovery files          1                   At least 2 GB for each volume
(Note: Recovery files must be on a different volume than database files)


In Table 5-2, the total required volume size is cumulative. For example, to store all database files on the shared file system, you should have at least 3.5 GB of storage available (1.5 GB for database files plus 2 GB for recovery files) over a minimum of two volumes.

5.4.2 Deciding to Use NFS for Data Files

Network-attached storage (NAS) systems use NFS to access data. You can store data files on a supported NFS system.

NFS file systems must be mounted and available over NFS mounts before you start installation. Refer to your vendor documentation to complete NFS configuration and mounting.

5.4.3 Deciding to Use Direct NFS for Datafiles

This section contains the following information about Direct NFS:

5.4.3.1 About Direct NFS Storage

With Oracle Database 11g release 1 (11.1), instead of using the operating system kernel NFS client, you can configure Oracle Database to access NFS V3 servers directly using an Oracle internal Direct NFS client.

To enable Oracle Database to use Direct NFS, the NFS file systems must be mounted and available over regular NFS mounts before you start installation. The mount options used in mounting the file systems are not relevant, as Direct NFS manages settings after installation. Refer to your vendor documentation to complete NFS configuration and mounting.

Some NFS file servers require NFS clients to connect using reserved ports. If your filer is running with reserved port checking, then you must disable it for Direct NFS to operate. To disable reserved port checking, consult your NFS file server documentation.

5.4.3.2 Using the Oranfstab File with Direct NFS

If you use Direct NFS, then you can choose to use a new file, oranfstab, that is specific to Oracle datafile management, to specify additional Direct NFS options for Oracle Database. For example, you can use oranfstab to specify additional paths for a mount point. You can add the oranfstab file either to /etc or to $ORACLE_HOME/dbs. The oranfstab file is not required to use NFS or Direct NFS.

With Oracle RAC installations, if you want to use Direct NFS, then you must replicate the file /etc/oranfstab on all nodes, and keep each /etc/oranfstab file synchronized on all nodes.

When the oranfstab file is placed in $ORACLE_HOME/dbs, the entries in the file are specific to a single database. In this case, all nodes running an Oracle RAC database use the same $ORACLE_HOME/dbs/oranfstab file.

When the oranfstab file is placed in /etc, it is globally available to all Oracle databases, and can contain mount points used by all Oracle databases running on nodes in the cluster, including single-instance databases. However, on Oracle RAC systems, if the oranfstab file is placed in /etc, then you must replicate the /etc/oranfstab file on all nodes, and keep each node's copy synchronized, just as you must with the /etc/fstab file.

In all cases, mount points must be mounted by the kernel NFS system, even when they are being served using Direct NFS.
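
The following is a sketch of one way to keep the /etc/oranfstab file synchronized across cluster nodes. It assumes that remote shell access (remsh and rcp) is configured between the nodes, and the node names are hypothetical:

for node in node2 node3 node4
do
    rcp /etc/oranfstab ${node}:/etc/oranfstab    # copy the master copy to each remote node
done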

5.4.3.3 Mounting NFS Storage Devices with Direct NFS

Direct NFS determines mount point settings for NFS storage devices based on the configuration in /etc/mtab, which changes when you configure the /etc/fstab file.

Direct NFS searches for mount entries in the following order:

  1. $ORACLE_HOME/dbs/oranfstab

  2. /etc/oranfstab

  3. /etc/mtab

Direct NFS uses the first matching entry found.

Note:

You can have only one active Direct NFS implementation for each instance. Enabling Direct NFS on an instance prevents the use of another Direct NFS implementation on that instance.

If Oracle Database uses Direct NFS mount points configured using oranfstab, then it first verifies kernel NFS mounts by cross-checking entries in oranfstab with operating system NFS mount points. If a mismatch exists, then Direct NFS logs an informational message, and does not serve the NFS server.

If Oracle Database is unable to open an NFS server using Direct NFS, then Oracle Database uses the platform operating system kernel NFS client. In this case, the kernel NFS mount options must be set up as defined in "Checking NFS Mount Buffer Size Parameters for Oracle RAC Binaries". Additionally, an informational message is logged in the Oracle alert and trace files indicating that Direct NFS could not be established.

The Oracle files resident on the NFS server that are served by the Direct NFS client are also accessible through the operating system kernel NFS client. The usual considerations for maintaining integrity of the Oracle files apply in this situation.
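
As a quick check of which configuration Direct NFS will use, you can inspect the three locations in their search order. This is a sketch; the grep pattern assumes an NFS mount point containing the string oradata:

ls -l $ORACLE_HOME/dbs/oranfstab     # searched first: database-specific entries
ls -l /etc/oranfstab                 # searched second: global entries
grep oradata /etc/mtab               # searched last: kernel NFS mount entries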

5.4.3.4 Specifying Network Paths with the Oranfstab File

Direct NFS can use up to four network paths defined in the oranfstab file for an NFS server. The Direct NFS client performs load balancing across all specified paths. If a specified path fails, then Direct NFS reissues I/O commands over any remaining paths.

Use the following views for Direct NFS management:

  • v$dnfs_servers: Shows a table of servers accessed using Direct NFS.

  • v$dnfs_files: Shows a table of files currently open using Direct NFS.

  • v$dnfs_channels: Shows a table of open network paths (or channels) to servers for which Direct NFS is providing files.

  • v$dnfs_stats: Shows a table of performance statistics for Direct NFS.
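
For example, after the database is running with Direct NFS enabled, you can query these views from SQL*Plus to confirm which servers and files are being served. This is a minimal sketch using SELECT *; the exact columns available are described in Oracle Database Reference:

SQL> SELECT * FROM v$dnfs_servers;
SQL> SELECT * FROM v$dnfs_files;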

5.4.4 Enabling Direct NFS Client Oracle Disk Manager Control of NFS

Complete the following procedure to enable Direct NFS:

  1. Create an oranfstab file with the following attributes for each NFS server to be accessed using Direct NFS:

    • Server: The NFS server name.

    • Path: Up to four network paths to the NFS server, specified either by IP address, or by name, as displayed using the ifconfig command.

    • Export: The exported path from the NFS server.

    • Mount: The local mount point for the NFS server.

    Note:

    On Linux and UNIX platforms, the location of the oranfstab file is $ORACLE_HOME/dbs.

    The following is an example of an oranfstab file with two NFS server entries:

    server:  MyDataServer1
    path:  132.34.35.12
    path:  132.34.35.13
    export: /vol/oradata1 mount: /mnt/oradata1
     
    server: MyDataServer2
    path:  NfsPath1
    path:  NfsPath2
    path:  NfsPath3
    path:  NfsPath4
    export: /vol/oradata2 mount: /mnt/oradata2
    export: /vol/oradata3 mount: /mnt/oradata3
    export: /vol/oradata4 mount: /mnt/oradata4
    export: /vol/oradata5 mount: /mnt/oradata5
    
  2. Oracle Database uses an ODM library, libnfsodm11.so, to enable Direct NFS. To replace the standard ODM library, $ORACLE_HOME/lib/libodm11.so, with the ODM NFS library, libnfsodm11.so, complete the following steps:

    1. Change directory to $ORACLE_HOME/lib.

    2. Enter the following commands:

      cp libodm11.so libodm11.so_stub
      ln -s libnfsodm11.so libodm11.so
      

5.4.5 Disabling Direct NFS Client Oracle Disk Management Control of NFS

Use one of the following methods to disable the Direct NFS client:

  • Remove the oranfstab file.

  • Restore the stub libodm11.so file by reversing the process you completed in "Enabling Direct NFS Client Oracle Disk Manager Control of NFS".

  • Remove the specific NFS server or export paths in the oranfstab file.

Note:

If you remove an NFS path that Oracle Database is using, then you must restart the database for the change to be effective.
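
For example, to restore the standard ODM library, you can reverse the steps shown in the previous section. This is a sketch that assumes you created the stub copy as described there:

cd $ORACLE_HOME/lib
rm libodm11.so                      # remove the symbolic link to libnfsodm11.so
cp libodm11.so_stub libodm11.so     # restore the saved stub library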

5.4.6 Checking NFS Mount Buffer Size Parameters for Oracle RAC Binaries

If you are using NFS, then you must set the values for the NFS buffer size parameters rsize and wsize to 32768.

If you are using Direct NFS, note that it will not serve an NFS server with write size values (wtmax) less than 32768.

Update the /etc/fstab file on each node with an entry similar to the following:

nfs_server:/vol/DATA/oradata  /u02/oradata  nfs \
rw,bg,vers=3,proto=tcp,noac,forcedirectio,hard,nointr,timeo=600,rsize=32768,wsize=32768,suid

Note:

Refer to your storage vendor documentation for additional information about mount options.

If you use NFS mounts, then Oracle recommends that you use the option forcedirectio to force direct I/O for better performance. However, if you add forcedirectio to the mount option, then the same mount point cannot be used for Oracle software binaries, executables, shared libraries, and objects.
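
To confirm the mount options in effect on an NFS file system that is already mounted, a listing similar to the following sketch can help; it assumes the mount point used in the example above, and that your HP-UX mount command supports the -v (verbose) listing:

/usr/sbin/mount -v | grep /u02/oradata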

5.4.7 Creating Required Directories for Oracle Database Files on Shared File Systems

Use the following instructions to create directories for shared file systems for Oracle Database and recovery files (for example, for a RAC database).

  1. If necessary, configure the shared file systems that you want to use and mount them on each node.

    Note:

    The mount point that you use for the file system must be identical on each node. Ensure that the file systems are configured to mount automatically when a node restarts.
  2. Use the bdf command to determine the free disk space on each mounted file system.

  3. From the display, identify the file systems that you want to use:

    File Type        File System Requirements

    Database files   Choose either:
                     • A single file system with at least 1.5 GB of free disk space.
                     • Two or more file systems with at least 1.5 GB of free disk space in total.

    Recovery files   Choose a file system with at least 2 GB of free disk space.

    If you are using the same file system for more than one type of file, then add the disk space requirements for each type to determine the total disk space requirement.

  4. Note the names of the mount point directories for the file systems that you identified.

  5. If the user performing installation (typically, oracle) has permissions to create directories on the disks where you plan to install Oracle Database, then DBCA creates the Oracle Database file directory, and the Recovery file directory.

    If the user performing installation does not have write access, then you must create these directories manually using commands similar to the following to create the recommended subdirectories in each of the mount point directories and set the appropriate owner, group, and permissions on them:

    • Database file directory:

      # mkdir /mount_point/oradata
      # chown oracle:oinstall /mount_point/oradata
      # chmod 775 /mount_point/oradata
      
    • Recovery file directory (Fast recovery area):

      # mkdir /mount_point/Fast_recovery_area
      # chown oracle:oinstall /mount_point/Fast_recovery_area
      # chmod 775 /mount_point/Fast_recovery_area
      

Making the oracle user the owner of these directories permits them to be read by multiple Oracle homes, including those with different OSDBA groups.

5.5 Configuring Disks for Automatic Storage Management

This section describes how to configure disks for use with Automatic Storage Management. Before you configure the disks, you must determine the number of disks and the amount of free disk space that you require. The following sections describe how to identify the requirements and configure the disks on each platform:

Note:

Although this section refers to disks, you can also use zero-padded files on a certified NAS storage device in an Automatic Storage Management disk group. Refer to Oracle Database Installation Guide for HP-UX for information about creating and configuring NAS-based files for use in an Automatic Storage Management disk group.

5.5.1 Identifying Storage Requirements for Automatic Storage Management

Note:

For the most up-to-date information about supported configurations, refer to the Certify pages on the OracleMetaLink Web site at the following URL:
https://metalink.oracle.com

To identify the storage requirements for using Automatic Storage Management, you must determine how many devices and the amount of free disk space that you require. To complete this task, follow these steps:

  1. Determine whether you want to use Automatic Storage Management for Oracle Database files, recovery files, or both.

    Note:

    You do not have to use the same storage mechanism for database files and recovery files. You can use the file system for one file type and Automatic Storage Management for the other.

    If you choose to enable automated backups and you do not have a shared file system available, then you must choose Automatic Storage Management for recovery file storage.

    If you enable automated backups during the installation, you can choose Automatic Storage Management as the storage mechanism for recovery files by specifying an Automatic Storage Management disk group for the Fast recovery area. Depending on how you choose to create a database during the installation, you have the following options:

    • If you select an installation method that runs Database Configuration Assistant in interactive mode (for example, by choosing the Advanced database configuration option) then you can decide whether you want to use the same Automatic Storage Management disk group for database files and recovery files, or you can choose to use different disk groups for each file type.

      The same choice is available to you if you use Database Configuration Assistant after the installation to create a database.

    • If you select an installation method that runs Database Configuration Assistant in noninteractive mode, then you must use the same Automatic Storage Management disk group for database files and recovery files.

  2. Choose the Automatic Storage Management redundancy level that you want to use for the Automatic Storage Management disk group.

    The redundancy level that you choose for the Automatic Storage Management disk group determines how Automatic Storage Management mirrors files in the disk group and determines the number of disks and amount of disk space that you require, as follows:

    • External redundancy

      An external redundancy disk group requires a minimum of one disk device. The effective disk space in an external redundancy disk group is the sum of the disk space in all of its devices.

      Because Automatic Storage Management does not mirror data in an external redundancy disk group, Oracle recommends that you use only RAID or similar devices that provide their own data protection mechanisms as disk devices in this type of disk group.

    • Normal redundancy

      In a normal redundancy disk group, Automatic Storage Management uses two-way mirroring by default, to increase performance and reliability. A normal redundancy disk group requires a minimum of two disk devices (or two failure groups). The effective disk space in a normal redundancy disk group is half the sum of the disk space in all of its devices.

      For most installations, Oracle recommends that you select normal redundancy disk groups.

    • High redundancy

      In a high redundancy disk group, Automatic Storage Management uses three-way mirroring to increase performance and provide the highest level of reliability. A high redundancy disk group requires a minimum of three disk devices (or three failure groups). The effective disk space in a high redundancy disk group is one-third the sum of the disk space in all of its devices.

      While high redundancy disk groups do provide a high level of data protection, you must consider the greater cost of additional storage devices before deciding to select high redundancy disk groups.

  3. Determine the total amount of disk space that you require for the database files and recovery files.

    Use the following table to determine the minimum number of disks and the minimum disk space requirements for the installation:

    Redundancy Level   Minimum Number of Disks   Database Files   Recovery Files   Both File Types

    External           1                         1.15 GB          2.3 GB           3.45 GB
    Normal             2                         2.3 GB           4.6 GB           6.9 GB
    High               3                         3.45 GB          6.9 GB           10.35 GB

    For Oracle RAC installations, you must also add disk space for the Automatic Storage Management metadata. You can use the following formula to calculate the additional disk space requirements (in MB):

    15 + (2 * number_of_disks) + (126 * number_of_Automatic_Storage_Management_instances)

    For example, for a four-node RAC installation, using three disks in a high redundancy disk group, you require an additional 525 MB of disk space:

    15 + (2 * 3) + (126 * 4) = 525

    If an Automatic Storage Management instance is already running on the system, then you can use an existing disk group to meet these storage requirements. If necessary, you can add disks to an existing disk group during the installation.

    The following section describes how to identify existing disk groups and determine the free disk space that they contain.

  4. Optionally, identify failure groups for the Automatic Storage Management disk group devices.

    Note:

    Complete this step only if you intend to use an installation method that runs Database Configuration Assistant in interactive mode, for example, if you intend to choose the Custom installation type or the Advanced database configuration option. Other installation types do not enable you to specify failure groups.

    If you intend to use a normal or high redundancy disk group, then you can further protect your database against hardware failure by associating a set of disk devices in a custom failure group. By default, each device comprises its own failure group. However, if two disk devices in a normal redundancy disk group are attached to the same SCSI controller, then the disk group becomes unavailable if the controller fails. The controller in this example is a single point of failure.

    To protect against failures of this type, you could use two SCSI controllers, each with two disks, and define a failure group for the disks attached to each controller. This configuration would enable the disk group to tolerate the failure of one SCSI controller (see the example following this list).

    Note:

    If you define custom failure groups, then you must specify a minimum of two failure groups for normal redundancy disk groups and three failure groups for high redundancy disk groups.
  5. If you are sure that a suitable disk group does not exist on the system, then install or identify appropriate disk devices to add to a new disk group. Use the following guidelines when identifying appropriate disk devices:

    • All of the devices in an Automatic Storage Management disk group should be the same size and have the same performance characteristics.

    • Do not specify more than one partition on a single physical disk as a disk group device. Automatic Storage Management expects each disk group device to be on a separate physical disk.

    • Although you can specify a logical volume as a device in an Automatic Storage Management disk group, Oracle does not recommend their use. Logical volume managers can hide the physical disk architecture, preventing Automatic Storage Management from optimizing I/O across the physical devices.

    See Also:

    The "Configuring Disks for Automatic Storage Management" section for information about completing this task

5.5.2 Using an Existing Automatic Storage Management Disk Group

If you want to store either database or recovery files in an existing Automatic Storage Management disk group, then you have the following choices, depending on the installation method that you select:

  • If you select an installation method that runs Database Configuration Assistant in interactive mode (for example, by choosing the Advanced database configuration option), then you can decide whether you want to create a disk group, or use an existing one.

    The same choice is available to you if you use Database Configuration Assistant after the installation to create a database.

  • If you select an installation method that runs Database Configuration Assistant in noninteractive mode, then you must choose an existing disk group for the new database; you cannot create a disk group. However, you can add disk devices to an existing disk group if it has insufficient free space for your requirements.

Note:

The Automatic Storage Management instance that manages the existing disk group can be running in a different Oracle home directory.

To determine whether an existing Automatic Storage Management disk group exists, or to determine whether there is sufficient disk space in a disk group, you can use Oracle Enterprise Manager Grid Control or Database Control. Alternatively, you can use the following procedure:

  1. View the contents of the oratab file to determine whether an Automatic Storage Management instance is configured on the system:

    # more /etc/oratab
    

    If an Automatic Storage Management instance is configured on the system, then the oratab file should contain a line similar to the following:

    +ASM2:oracle_home_path
    

    In this example, +ASM2 is the system identifier (SID) of the Automatic Storage Management instance, with the node number appended, and oracle_home_path is the Oracle home directory where it is installed. By convention, the SID for an Automatic Storage Management instance begins with a plus sign.

  2. Set the ORACLE_SID and ORACLE_HOME environment variables to specify the appropriate values for the Automatic Storage Management instance that you want to use.

  3. Connect to the Automatic Storage Management instance as the SYS user with SYSDBA privilege and start the instance if necessary:

    $ $ORACLE_HOME/bin/sqlplus "SYS/SYS_password as SYSDBA"
    SQL> STARTUP
    
  4. Enter the following command to view the existing disk groups, their redundancy level, and the amount of free disk space in each one:

    SQL> SELECT NAME,TYPE,TOTAL_MB,FREE_MB FROM V$ASM_DISKGROUP;
    
  5. From the output, identify a disk group with the appropriate redundancy level and note the free space that it contains.

  6. If necessary, install or identify the additional disk devices required to meet the storage requirements listed in the previous section.

    Note:

    If you are adding devices to an existing disk group, then Oracle recommends that you use devices that have the same size and performance characteristics as the existing devices in that disk group.
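
To identify disks that the Automatic Storage Management instance has discovered but that are not yet members of any disk group, a query similar to the following sketch can help; disks with a group number of 0 are unassigned (the column names are from the V$ASM_DISK view):

SQL> SELECT path, header_status FROM v$asm_disk WHERE group_number = 0;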

5.5.3 Configuring Disks for Automatic Storage Management

To configure disks for use with ASM on HP-UX, follow these steps:

  1. If necessary, install the shared disks that you intend to use for the ASM disk group.

  2. To make sure that the disks are available, enter the following command on every node:

    # /usr/sbin/ioscan -fun -C disk
    

    The output from this command is similar to the following:

    Class  I  H/W Path    Driver S/W State   H/W Type     Description
    ==========================================================================
    disk    0  0/0/1/0.6.0 sdisk  CLAIMED     DEVICE       HP   DVD-ROM 6x/32x
                           /dev/dsk/c0t6d0    /dev/rdsk/c0t6d0
    disk    1  0/0/1/1.2.0 sdisk  CLAIMED     DEVICE      SEAGATE ST39103LC
                           /dev/dsk/c1t2d0    /dev/rdsk/c1t2d0
    

    This command displays information about each disk attached to the system, including character raw device names (/dev/rdsk/).

    Note:

    On HP-UX 11i v3, you can also use agile view to review mass storage devices, including character raw devices (/dev/rdisk/diskxyz). For example:
    # ioscan -funN -C disk
    Class     I  H/W Path  Driver S/W State   H/W Type     Description
    ===================================================================
    disk      4  64000/0xfa00/0x1   esdisk   CLAIMED     DEVICE       HP 73.4G ST373454LC
                     /dev/disk/disk4   /dev/rdisk/disk4
    disk    907  64000/0xfa00/0x2f  esdisk   CLAIMED     DEVICE       COMPAQ  MSA1000 VOLUME
                     /dev/disk/disk907   /dev/rdisk/disk907
    
  3. If the ioscan command does not display device name information for a device that you want to use, enter the following command to install the special device files for any new devices:

    # /usr/sbin/insf -e
    
  4. For each disk that you want to add to a disk group, enter the following command on any node to verify that it is not already part of an LVM volume group:

    # /sbin/pvdisplay /dev/dsk/cxtydz
    

    If this command displays volume group information, the disk is already part of a volume group. The disks that you choose must not be part of an LVM volume group.

    Note:

    If you are using different volume management software, for example VERITAS Volume Manager, refer to the appropriate documentation for information about verifying that a disk is not in use.
  5. Enter commands similar to the following on every node to change the owner, group, and permissions on the character raw device file for each disk that you want to add to a disk group:

    # chown oracle:dba /dev/rdsk/cxtydz
    # chmod 660 /dev/rdsk/cxtydz
    

    Note:

    If you are using a multi-pathing disk driver with ASM, make sure that you set the permissions only on the correct logical device name for the disk.

    If the nodes are configured differently, the device name for a particular device might be different on some nodes. Make sure that you specify the correct device names on each node.

  6. If you also want to use raw devices for storage, then refer to the following section, "Configuring Disks for Database Files on Raw Devices".

5.6 Configuring Storage for Oracle Database Files on Shared Storage Devices

The following subsections describe how to configure Oracle Database files on raw devices.

5.6.1 Planning Your Shared Storage Device Creation Strategy

Before installing the Oracle Database 11g release 1 (11.1) software with Oracle RAC, create enough partitions of specific sizes to support your database, and also leave a few spare partitions of the same size for future expansion. For example, if you have space on your shared disk array, then select a limited set of standard partition sizes for your entire database. Partition sizes of 50 MB, 100 MB, 500 MB, and 1 GB are suitable for most databases. Also, create a few very small and a few very large spare partitions that are (for example) 1 MB and perhaps 5 GB or greater in size. Based on your plans for using each partition, determine the placement of these spare partitions by combining different sizes on one disk, or by segmenting each disk into same-sized partitions.

Note:

Be aware that each instance has its own redo log files, but all instances in a cluster share the control files and data files. In addition, each instance's online redo log files must be readable by all other instances to enable recovery.

In addition to the minimum required number of partitions, you should configure spare partitions. Doing this enables you to perform emergency file relocations or additions if a tablespace data file becomes full.

5.6.2 Identifying Required Shared Partitions for Database Files

Table 5-3 lists the number and size of the shared partitions that you must configure for database files.

Table 5-3 Shared Devices or Logical Volumes Required for Database Files on HP-UX

Number                    Partition Size (MB)                 Purpose

1                         800                                 SYSTEM tablespace

1                         400 + (Number of instances * 250)   SYSAUX tablespace

Number of instances       500                                 UNDOTBSn tablespace (one
                                                              tablespace for each instance)

1                         250                                 TEMP tablespace

1                         160                                 EXAMPLE tablespace

1                         120                                 USERS tablespace

2 * number of instances   120                                 Two online redo log files for
                                                              each instance

2                         110                                 First and second control files

1                         5                                   Server parameter file (SPFILE)

1                         5                                   Password file
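
For example, for a four-instance Oracle RAC database, the SYSAUX partition must be at least 400 + (4 * 250) = 1400 MB, and you require four 500 MB UNDOTBSn partitions and eight (2 * 4) 120 MB redo log partitions.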


Note:

If you prefer to use manual undo management, instead of automatic undo management, then, instead of the UNDOTBSn shared storage devices, you must create a single rollback segment tablespace (RBS) on a shared storage device partition that is at least 500 MB in size.

5.6.3 Desupport of the Database Configuration Assistant Raw Device Mapping File

With the release of Oracle Database 11g and Oracle RAC release 11g, configuring raw devices using Database Configuration Assistant is not supported.

5.7 Configuring Disks for Database Files on Raw Devices

The following subsections describe how to configure raw partitions for database files:

5.7.1 Identifying Partitions and Configuring Raw Devices for Database Files

Table 5-4 lists the number and size of the raw disk devices that you must configure for database files.

Note:

Because each file requires exclusive use of a complete disk device, Oracle recommends that, if possible, you use disk devices with sizes that closely match the size requirements of the files that they will store. You cannot use the disks that you choose for these files for any other purpose.

Table 5-4 Raw Disk Devices Required for Database Files on HP-UX

Number                    Size (MB)                           Purpose and Sample Alternative Device File Name

1                         800                                 SYSTEM tablespace:
                                                              dbname_system_raw_800m

1                         400 + (Number of instances * 250)   SYSAUX tablespace:
                                                              dbname_sysaux_raw_900m

Number of instances       500                                 UNDOTBSn tablespace (one tablespace for each
                                                              instance, where n is the number of the instance):
                                                              dbname_undotbsn_raw_500m

1                         250                                 TEMP tablespace:
                                                              dbname_temp_raw_250m

1                         160                                 EXAMPLE tablespace:
                                                              dbname_example_raw_160m

1                         120                                 USERS tablespace:
                                                              dbname_users_raw_120m

2 * number of instances   120                                 Two online redo log files for each instance
                                                              (where n is the instance number and m is the
                                                              log number, 1 or 2):
                                                              dbname_redon_m_raw_120m

2                         110                                 First and second control files:
                                                              dbname_control{1|2}_raw_110m

1                         5                                   Server parameter file (SPFILE):
                                                              dbname_spfile_raw_5m

1                         5                                   Password file:
                                                              dbname_pwdfile_raw_5m

To configure raw disk devices for database files, follow these steps:

  1. If you intend to use raw disk devices for database file storage, then choose a name for the database that you want to create.

    The name that you choose must start with a letter and have no more than four characters, for example, orcl.

  2. Identify or configure the required disk devices.

    The disk devices must be shared on all of the cluster nodes.

  3. To ensure that the disks are available, enter the following command on every node:

    # /usr/sbin/ioscan -fun -C disk
    

    The output from this command is similar to the following:

    Class  I  H/W Path    Driver S/W State   H/W Type     Description
    ==========================================================================
    disk    0  0/0/1/0.6.0 sdisk  CLAIMED     DEVICE       HP   DVD-ROM 6x/32x
                           /dev/dsk/c0t6d0   /dev/rdsk/c0t6d0
    disk    1  0/0/1/1.2.0 sdisk  CLAIMED     DEVICE      SEAGATE ST39103LC
                           /dev/dsk/c1t2d0   /dev/rdsk/c1t2d0
    

    This command displays information about each disk attached to the system, including the character raw device name (/dev/rdsk/cxtydz).

  4. If the ioscan command does not display device name information for a device that you want to use, then enter the following command to install the special device files for any new devices:

    # /usr/sbin/insf -e
    
  5. For each disk that you want to use, enter the following command on any node to verify that it is not already part of an LVM volume group:

    # /sbin/pvdisplay /dev/dsk/cxtydz
    

    If this command displays volume group information, then the disk is already part of a volume group. The disks that you choose must not be part of an LVM volume group.

    Note:

    If you are using different volume management software, for example VERITAS Volume Manager, then refer to the appropriate documentation for information about verifying that a disk is not in use.
  6. If the ioscan command shows different device names for the same device on any node, then:

    1. Change directory to the /dev/rdsk directory.

    2. Enter the following command to list the raw disk device names and their associated major and minor numbers:

      # ls -la
      

      The output from this command is similar to the following for each disk device:

      crw-r--r--   1 bin        sys        188 0x032000 Nov  4  2003 c3t2d0
      

      In this example, 188 is the device major number and 0x032000 is the device minor number.

    3. Enter the following command to create a new device file for the disk that you want to use, specifying the same major and minor number as the existing device file:

      Note:

      Oracle recommends that you use the alternative device file names shown in the previous table.
      # mknod dbname_system_raw_800m c 188 0x032000
      
    4. Repeat these steps on each node, specifying the correct major and minor numbers for the new device files on each node.

  7. Enter commands similar to the following on every node to change the owner, group, and permissions on the character raw device file for each disk device that you want to use:

    Note:

    If you are using a multi-pathing disk driver with Automatic Storage Management, then ensure that you set the permissions only on the correct logical device name for the disk.

    If you created an alternative device file for the device, then set the permissions on that device file.

    • OCR:

      # chown root:oinstall /dev/rdsk/cxtydz
      # chmod 640 /dev/rdsk/cxtydz
      
    • Oracle Clusterware voting disk or database files:

      # chown oracle:dba /dev/rdsk/cxtydz
      # chmod 660 /dev/rdsk/cxtydz
      
  8. If you are using raw disk devices for database files, then follow these steps to create the Database Configuration Assistant raw device mapping file:

    Note:

    You must complete this procedure only if you are using raw devices for database files. The Database Configuration Assistant raw device mapping file enables Database Configuration Assistant to identify the appropriate raw disk device for each database file. You do not specify the raw devices for the Oracle Clusterware files in the Database Configuration Assistant raw device mapping file.
    1. Set the ORACLE_BASE environment variable to specify the Oracle base directory that you identified or created previously:

      • Bourne or Korn shell:

        $ ORACLE_BASE=/u01/app/oracle ; export ORACLE_BASE
        
      • C shell:

        % setenv ORACLE_BASE /u01/app/oracle
        
    2. Create a database file subdirectory under the Oracle base directory and set the appropriate owner, group, and permissions on it:

      # mkdir -p $ORACLE_BASE/oradata/dbname
      # chown -R oracle:oinstall $ORACLE_BASE/oradata
      # chmod -R 775 $ORACLE_BASE/oradata
      

      In this example, dbname is the name of the database that you chose previously.

    3. Change directory to the $ORACLE_BASE/oradata/dbname directory.

    4. Using any text editor, create a text file similar to the following that identifies the disk device file name associated with each database file.

      Oracle recommends that you use a file name similar to dbname_raw.conf for this file.

      Note:

      The following example shows a sample mapping file for a two-instance RAC cluster. Some of the devices use alternative disk device file names. Ensure that the device file name that you specify identifies the same disk device on all nodes.
      system=/dev/rdsk/c2t1d1
      sysaux=/dev/rdsk/c2t1d2
      example=/dev/rdsk/c2t1d3
      users=/dev/rdsk/c2t1d4
      temp=/dev/rdsk/c2t1d5
      undotbs1=/dev/rdsk/c2t1d6
      undotbs2=/dev/rdsk/c2t1d7
      redo1_1=/dev/rdsk/c2t1d8
      redo1_2=/dev/rdsk/c2t1d9
      redo2_1=/dev/rdsk/c2t1d10
      redo2_2=/dev/rdsk/c2t1d11
      control1=/dev/rdsk/c2t1d12
      control2=/dev/rdsk/c2t1d13
      spfile=/dev/rdsk/dbname_spfile_raw_5m
      pwdfile=/dev/rdsk/dbname_pwdfile_raw_5m
      

      In this example, dbname is the name of the database.

      Use the following guidelines when creating or editing this file:

      • Each line in the file must have the following format:

        database_object_identifier=device_file_name
        

        The alternative device file names suggested in the previous table include the database object identifier that you must use in this mapping file. For example, in the following alternative disk device file name, redo1_1 is the database object identifier:

        rac_redo1_1_raw_120m
        
      • For a RAC database, the file must specify one automatic undo tablespace datafile (undotbsn) and two redo log files (redon_1, redon_2) for each instance.

      • Specify at least two control files (control1, control2).

      • To use manual instead of automatic undo management, specify a single RBS tablespace datafile (rbs) instead of the automatic undo management tablespace data files.

    5. Save the file and note the file name that you specified.

    6. When you are configuring the oracle user's environment later in this chapter, set the DBCA_RAW_CONFIG environment variable to specify the full path to this file.

  9. When you are installing Oracle Clusterware, you must enter the paths to the appropriate device files when prompted for the path of the OCR and Oracle Clusterware voting disk, for example:

    /dev/rdsk/cxtydz
    

5.7.2 Creating the Database Configuration Assistant Raw Device Mapping File

Note:

You must complete this procedure only if you are using raw logical volumes for database files.

To enable Database Configuration Assistant to identify the appropriate raw device for each database file, you must create a raw device mapping file, as follows:

  1. Set the ORACLE_BASE environment variable to specify the Oracle base directory that you identified or created previously:

    • Bourne or Korn shell:

      $ ORACLE_BASE=/u01/app/oracle ; export ORACLE_BASE
      
    • C shell:

      % setenv ORACLE_BASE /u01/app/oracle
      
  2. Create a database file subdirectory under the Oracle base directory and set the appropriate owner, group, and permissions on it:

    # mkdir -p $ORACLE_BASE/oradata/dbname
    # chown -R oracle:oinstall $ORACLE_BASE/oradata
    # chmod -R 775 $ORACLE_BASE/oradata
    

    In this example, dbname is the name of the database that you chose previously.

  3. Change directory to the $ORACLE_BASE/oradata/dbname directory.

  4. Enter a command similar to the following to create a text file that you can use to create the raw device mapping file:

    # find /dev/vg_name -user oracle -name 'r*' -print > dbname_raw.conf
    
  5. Edit the dbname_raw.conf file in any text editor to create a file similar to the following:

    Note:

    The following example shows a sample mapping file for a two-instance RAC cluster.
    system=/dev/vg_name/rdbname_system_raw_800m
    sysaux=/dev/vg_name/rdbname_sysaux_raw_900m
    example=/dev/vg_name/rdbname_example_raw_160m
    users=/dev/vg_name/rdbname_users_raw_120m
    temp=/dev/vg_name/rdbname_temp_raw_250m
    undotbs1=/dev/vg_name/rdbname_undotbs1_raw_500m
    undotbs2=/dev/vg_name/rdbname_undotbs2_raw_500m
    redo1_1=/dev/vg_name/rdbname_redo1_1_raw_120m
    redo1_2=/dev/vg_name/rdbname_redo1_2_raw_120m
    redo2_1=/dev/vg_name/rdbname_redo2_1_raw_120m
    redo2_2=/dev/vg_name/rdbname_redo2_2_raw_120m
    control1=/dev/vg_name/rdbname_control1_raw_110m
    control2=/dev/vg_name/rdbname_control2_raw_110m
    spfile=/dev/vg_name/rdbname_spfile_raw_5m
    pwdfile=/dev/vg_name/rdbname_pwdfile_raw_5m
    

    In this example:

    • vg_name is the name of the volume group

    • dbname is the name of the database

    Use the following guidelines when creating or editing this file:

    • Each line in the file must have the following format:

      database_object_identifier=logical_volume
      

      The logical volume names suggested in this manual include the database object identifier that you must use in this mapping file. For example, in the following logical volume name, redo1_1 is the database object identifier:

      /dev/oracle_vg/rrac_redo1_1_raw_120m
      
    • The file must specify one automatic undo tablespace datafile (undotbsn) and two redo log files (redon_1, redon_2) for each instance.

    • Specify at least two control files (control1, control2).

    • To use manual instead of automatic undo management, specify a single RBS tablespace datafile (rbs) instead of the automatic undo management tablespace data files.

  6. Save the file and note the file name that you specified.

  7. When you are configuring the oracle user's environment later in this chapter, set the DBCA_RAW_CONFIG environment variable to specify the full path to this file.
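
     For example (a sketch; the path assumes the directory and file name used in this procedure):

    • Bourne or Korn shell:

      $ DBCA_RAW_CONFIG=$ORACLE_BASE/oradata/dbname/dbname_raw.conf ; export DBCA_RAW_CONFIG

    • C shell:

      % setenv DBCA_RAW_CONFIG $ORACLE_BASE/oradata/dbname/dbname_raw.conf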


5.8 Checking the System Setup with CVU

As the oracle user, use the following command syntax to start Cluster Verification Utility (CVU) stage verification to check hardware, operating system, and storage setup:

/mountpoint/runcluvfy.sh stage -post hwos -n node_list [-verbose]

In the preceding syntax example, replace the variable node_list with the names of the nodes in your cluster, separated by commas. For example, to check the hardware and operating system of a two-node cluster with nodes node1 and node2, with the mountpoint /mnt/dvdrom/ and with the option to limit the output to the test results, enter the following command:

$ /mnt/dvdrom/runcluvfy.sh stage -post hwos -n node1,node2

Select the option -verbose to receive detailed reports of the test results, and progress updates about the system checks performed by Cluster Verification Utility.
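
For example, to run the same check with detailed output:

$ /mnt/dvdrom/runcluvfy.sh stage -post hwos -n node1,node2 -verbose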