This chapter describes the storage configuration tasks that you must complete before you start Oracle Universal Installer. It includes information about the following tasks:
This section describes supported options for storing Oracle Clusterware files, Oracle Database files, and data files. It includes the following sections:
Use the information in this overview to help you select your storage option.
There are two ways of storing Oracle Clusterware files:
A supported shared file system: Supported file systems include the following:
Cluster File System: A supported cluster file system.
Network File System (NFS): A file-level protocol that enables access and sharing of files
See Also: The Certify page on OracleMetaLink for supported Network Attached Storage (NAS) devices, and your storage vendor documentation
Raw Devices: Oracle Clusterware files can be placed on raw devices based on shared disk partitions.
Oracle Clusterware files include voting disks, used to monitor cluster node status, and Oracle Cluster Registry (OCR) which contains configuration information about the cluster. The voting disks and OCR are shared files on a cluster or network file system environment. If you do not use a cluster file system, then you must place these files on shared raw devices. Oracle Universal Installer (OUI) automatically initializes the OCR during the Oracle Clusterware installation.
For voting disk file placement, ensure that each voting disk is configured so that it does not share any hardware device or disk, or other single point of failure. An absolute majority of voting disks configured (more than half) must be available and responsive at all times for Oracle Clusterware to operate.
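The majority rule above can be sketched as simple arithmetic (the disk counts are illustrative, not a configuration recommendation):

```shell
# Illustrative arithmetic only: an absolute majority of the configured
# voting disks (more than half) must be available and responsive.
n_votedisks=3                          # example: three voting disks configured
majority=$(( n_votedisks / 2 + 1 ))    # smallest count that is more than half
tolerated=$(( n_votedisks - majority ))
echo "With ${n_votedisks} voting disks, ${majority} must remain available;"
echo "the cluster tolerates the loss of ${tolerated} disk(s)."
```

With three voting disks the cluster therefore survives the loss of one disk; with five, the loss of two.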
For single-instance Oracle Database installations using Oracle Clusterware for failover, you must use ASM, or shared raw disks if you do not want the failover processing to include dismounting and remounting disks.
The following table shows the storage options supported for storing Oracle Clusterware files. Oracle Clusterware files include the Oracle Cluster Registry (OCR), a mirrored OCR file (optional), the Oracle Clusterware voting disk, and additional voting disk files (optional).
Note:For the most up-to-date information about supported storage options, refer to the Certify pages on the OracleMetaLink Web site:
|Storage Option||OCR and Voting Disks||Oracle Software|
|Automatic Storage Management||No||No|
|NFS file system (Note: Requires a certified NAS device)||Yes||Yes|
|Shared raw device partitions||Yes||No|
Use the following guidelines when choosing the storage options that you want to use for each file type:
You can choose any combination of the supported storage options for each file type, provided that you satisfy all requirements listed for the chosen storage options.
You cannot use ASM to store Oracle Clusterware files, because these files must be accessible before any ASM instance starts.
If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk areas to provide voting disk redundancy.
When upgrading your Oracle9i release 9.2 Oracle RAC environment to Oracle Database 11g Release 1 (11.1), you are prompted to specify one or more voting disks during the Oracle Clusterware installation. You must specify a new location for the voting disk in Oracle Database 11g Release 1 (11.1). You cannot reuse the old Oracle9i release 9.2 quorum disk for this purpose.
When you have determined your disk storage options, you must perform the following tasks in the order listed:
To use a file system (NFS) for Oracle Clusterware files, refer to "Configuring Storage for Oracle Clusterware Files on a Supported Shared File System".
To use raw devices (partitions) for Oracle Clusterware files, refer to "Configuring Storage for Oracle Clusterware Files on Raw Devices".
To check for all shared file systems available across all nodes in the cluster, use the following command:
/mountpoint/clusterware/cluvfy/runcluvfy.sh comp ssa -n node_list
If you want to check the shared accessibility of a specific shared storage type to specific nodes in your cluster, then use the following command syntax:
/mountpoint/clusterware/cluvfy/runcluvfy.sh comp ssa -n node_list -s storageID_list
In the preceding syntax examples, the variable mountpoint is the mount point path of the installation media, the variable node_list is the comma-separated list of nodes that you want to check, and the variable storageID_list is the comma-separated list of storage device IDs for the storage devices managed by the file system type that you want to check.
For example, if you want to check the shared accessibility from node1 and node2 of storage devices /dev/c0t0d0s2 and /dev/c0t0d0s3, and your mount point is /dev/dvdrom/, then enter the following command:
/dev/dvdrom/clusterware/cluvfy/runcluvfy.sh comp ssa -n node1,node2 -s /dev/c0t0d0s2,/dev/c0t0d0s3
If you do not specify storage device IDs in the command, then the command searches for all available storage devices connected to the nodes on the list.
Oracle Universal Installer (OUI) does not suggest a default location for the Oracle Cluster Registry (OCR) or the Oracle Clusterware voting disk. If you choose to create these files on a file system, then review the following sections to complete storage requirements for Oracle Clusterware files:
Note:Database Configuration Assistant uses the OCR for storing the configurations for the cluster databases that it creates. The OCR is a shared file in a cluster file system environment. If you do not use a cluster file system, then you must make this file a shared raw device. Oracle Universal Installer (OUI) automatically initializes the OCR during the Oracle Clusterware installation.
To use a file system for Oracle Clusterware files, the file system must comply with the following requirements:
To use an NFS file system, it must be on a certified NAS device.
Note:If you are using a shared file system on a NAS device to store a shared Oracle home directory for Oracle Clusterware or RAC, then you must use the same NAS device for Oracle Clusterware file storage.
If you choose to place your Oracle Cluster Registry (OCR) files on a shared file system, then one of the following must be true:
At least two file systems are mounted, and you use the features of Oracle Database 11g Release 1 (11.1) to provide redundancy for the OCR.
In addition, if you put the OCR and voting disk files on a shared file system, then that shared file system must be a shared QFS file system, and not a globally mounted UFS or VxFS file system.
If you intend to use a shared file system to store database files, then use at least two independent file systems, with the database files on one file system, and the recovery files on a different file system.
The oracle user must have write permissions to create the files in the path that you specify.
Note:If you are upgrading from Oracle9i release 2, then you can continue to use the raw device or shared file that you used for the SRVM configuration repository instead of creating a new file for the OCR.
Use Table 4-1 to determine the partition size for shared file systems.
|File Types Stored||Number of Volumes||Volume Size|
|Oracle Clusterware files (OCR and voting disks) with external redundancy||1 each||At least 280 MB for each volume|
|Oracle Clusterware files (OCR and voting disks) with redundancy provided by Oracle software||1 each||At least 280 MB for each volume|
|Redundant Oracle Clusterware files with redundancy provided by Oracle software (mirrored OCR and two additional voting disks)||1 each||At least 280 MB of free space for each OCR location, whether the OCR is configured on a file system or on raw devices, and at least 280 MB for each voting disk location, with a minimum of three disks|
In Table 4-1, the total required volume size is cumulative. For example, to store all files on the shared file system with normal redundancy, you should have at least 1.3 GB of storage available over a minimum of three volumes (two separate volume locations for the OCR and OCR mirror, and one voting disk on each volume).
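The arithmetic behind that minimum can be sketched as follows (volume counts taken from the normal-redundancy case above):

```shell
# Normal redundancy: OCR plus mirrored OCR, plus three voting disks,
# each file requiring at least 280 MB.
ocr_copies=2
voting_disks=3
mb_per_file=280
total_mb=$(( (ocr_copies + voting_disks) * mb_per_file ))
echo "Minimum total: ${total_mb} MB"
```

The total, 1400 MB, is consistent with the "at least 1.3 GB" figure above.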
Note:When you create partitions with fdisk by specifying a device size, such as +256M, the actual device created may be smaller than the size requested, based on the cylinder geometry of the disk. This is due to current fdisk restrictions.
Oracle configuration software checks to ensure that devices contain a minimum of 256MB of available disk space. Therefore, Oracle recommends using at least 280MB for the device size. You can check partition sizes by using the command syntax fdisk -s partition. For example:
[root@node1]$ fdisk -s /dev/sdb1
281106
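Because fdisk -s reports the size in 1024-byte blocks, you can check the reported value against the 256 MB minimum that the Oracle configuration software enforces; a sketch using the example value above:

```shell
# fdisk -s prints the partition size in 1024-byte (1 KB) blocks.
blocks=281106                     # example value reported for /dev/sdb1
min_blocks=$(( 256 * 1024 ))      # 256 MB minimum checked by Oracle software
if [ "$blocks" -ge "$min_blocks" ]; then
    echo "partition meets the 256 MB minimum"
else
    echo "partition is too small"
fi
```

In this example the partition is about 274 MB: smaller than the 280 MB requested at creation (because of the fdisk cylinder-geometry restriction noted above), but still large enough to pass the 256 MB check.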
The User Datagram Protocol (UDP) parameter settings define the amount of send and receive buffer space for sending and receiving datagrams over an IP network. These settings affect cluster interconnect transmissions. If the buffers set by these parameters are too small, then incoming UDP datagrams can be dropped due to insufficient space, which requires send-side retransmission. This can result in poor cluster performance.
On Solaris, the UDP parameters are udp_recv_hiwat and udp_xmit_hiwat. On Solaris 10, the default values for these parameters are 57344 bytes. Oracle recommends that you set these parameters to at least 65536 bytes.
To check the current settings for udp_recv_hiwat and udp_xmit_hiwat, enter the following commands:
# ndd /dev/udp udp_xmit_hiwat
# ndd /dev/udp udp_recv_hiwat
On Solaris 10, to set the values of these parameters to 65536 bytes in current memory, enter the following commands:
# ndd -set /dev/udp udp_xmit_hiwat 65536
# ndd -set /dev/udp udp_recv_hiwat 65536
On Solaris 9, to set the values of these parameters to 65536 bytes on system restarts, open the
/etc/system file, and enter the following lines:
set udp:udp_xmit_hiwat=65536
set udp:udp_recv_hiwat=65536
On Solaris 10, to set the UDP values for when the system restarts, the ndd commands must be included in a system startup script. For example, the following script in /etc/rc2.d/S99ndd sets the parameters:
ndd -set /dev/udp udp_xmit_hiwat 65536
ndd -set /dev/udp udp_recv_hiwat 65536
See Also:"Overview of Tuning IP Suite Parameters" in Solaris Tunable Parameters Reference Manual, in the Sun documentation set available at the following URL:
If you are using NFS, then you must set the values for the NFS buffer size parameters rsize and wsize to at least 32768.
The NFS mount options for Oracle Clusterware files are similar to the following /etc/vfstab entry:
nfs_server:/vol/CWfiles - /u01/oracle/cwfiles nfs - yes rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,noac,forcedirectio
Note that mount point options are different for Oracle software binaries, Oracle Clusterware files (OCR and voting disks), and data files.
If you want to create a mount point for binaries only, then enter the following line for a binaries mount point:
nfs_server:/vol/crshome - /u01/oracle/crs nfs - yes rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,noac,vers=3,suid
See Also:OracleMetaLink bulletin 359515.1, "Mount Options for Oracle Files When Used with NAS Devices" for the most current information about mount options, available from the following URL:
Note:Refer to your storage vendor documentation for additional information about mount options.
Use the following instructions to create directories for Oracle Clusterware files. You can also configure shared file systems for the Oracle Database and recovery files.
Note: For NFS storage, you must complete this procedure only if you want to place the Oracle Clusterware files on a separate file system from the Oracle base directory.
For Storage Area Network (SAN) storage configured without Sun Cluster, Oracle recommends the following:
To ensure that devices are mapped to the same controller on all nodes, install the HBA cards in the same slots on all nodes before you install the operating system.
To create directories for the Oracle Clusterware files on separate file systems from the Oracle base directory, follow these steps:
If necessary, configure the shared file systems that you want to use and mount them on each node.
Note:The mount point that you use for the file system must be identical on each node. Ensure that the file systems are configured to mount automatically when a node restarts.
Use the df -k command to determine the free disk space on each mounted file system.
From the display, identify the file systems that you want to use:
|File Type||File System Requirements|
|Oracle Clusterware files||Choose a file system with at least 560 MB of free disk space (one OCR and one voting disk, with external redundancy).|
If you are using the same file system for more than one type of file, then add the disk space requirements for each type to determine the total disk space requirement.
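For example, a sketch of the cumulative requirement when one file system stores both Oracle Clusterware files and database files (the 560 MB figure is from the table above; the database figure is hypothetical):

```shell
# Illustrative only: sum the per-file-type requirements (in MB) to get
# the total free space needed on a single shared file system.
clusterware_mb=560      # one OCR and one voting disk, external redundancy
database_mb=1500        # hypothetical requirement for database files
total_mb=$(( clusterware_mb + database_mb ))
echo "Required free space: ${total_mb} MB"
```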
Note the names of the mount point directories for the file systems that you identified.
If the user performing installation (oracle) has permissions to create directories on the disks where you plan to install Oracle Clusterware and Oracle Database, then OUI creates the Oracle Clusterware file directory.
If the user performing installation does not have write access, then you must create these directories manually using commands similar to the following to create the recommended subdirectories in each of the mount point directories and set the appropriate owner, group, and permissions on them:
Example of creating Oracle Clusterware file directory owned by the installation user
# mkdir /mount_point/oracrs
# chown oracle:oinstall /mount_point/oracrs
# chmod 750 /mount_point/oracrs
When you have completed creating subdirectories in each of the mount point directories, and set the appropriate owner, group, and permissions, you have completed CFS or NFS configuration.
The following subsections describe how to configure Oracle Clusterware files on raw partitions.
Table 4-2 lists the number and size of the raw partitions that you must configure for Oracle Clusterware files.
|Number||Size for Each Partition (MB)||Purpose|
|2 (or 1, if you have external redundancy support for this file)||280||Oracle Cluster Registry (OCR) and mirrored OCR|
|3 (or 1, if you have external redundancy support for this file)||280||Oracle Clusterware voting disks|
Note: Create the OCR partitions only once on the cluster. If you create more than one database on the cluster, then they all share the same Oracle Cluster Registry (OCR). You should create two partitions: one for the OCR, and one for a mirrored OCR. If you are upgrading from Oracle9i release 2, then you can continue to use the raw device that you used for the SRVM configuration repository instead of creating this new raw device.
Note: Create the voting disk partitions only once on the cluster. If you create more than one database on the cluster, then they all share the same Oracle Clusterware voting disk. You should create three partitions: one for the voting disk, and two for additional voting disks.
Note: If you put Oracle Clusterware files on a Cluster File System (CFS), then you should ensure that the CFS volumes are at least 500 MB in size.