Oracle® Clusterware Installation Guide
11g Release 1 (11.1) for Linux
This chapter describes the storage configuration tasks that you must complete before you start Oracle Universal Installer to install Oracle Clusterware.
This chapter contains the following topics:
This section describes supported options for storing Oracle Clusterware.
There are two ways of storing Oracle Clusterware files:
A supported shared file system: Supported file systems include the following:
A supported cluster file system
Note: For information about how to download and configure OCFS2, refer to the following URL:
See Also: The Certify page on OracleMetaLink for supported cluster file systems
Network File System (NFS): A file-level protocol that enables access and sharing of files
See Also: The Certify page on OracleMetaLink for supported Network Attached Storage (NAS) devices
Block or Raw Devices: Oracle Clusterware files can be placed on either block or raw devices based on shared disk partitions. Oracle recommends using block devices for easier administration.
For all installations, you must choose the storage option that you want to use for Oracle Clusterware files and Oracle Database files, or for Oracle Clusterware with Oracle Real Application Clusters databases (Oracle RAC). You do not have to use the same storage option for each file type.
Oracle Clusterware files include voting disks, used to monitor cluster node status, and Oracle Cluster Registry (OCR) which contains configuration information about the cluster. The voting disks and OCR are shared files on a cluster or network file system environment. If you do not use a cluster file system, then you must place these files on shared block devices or shared raw devices. Oracle Universal Installer (OUI) automatically initializes the OCR during the Oracle Clusterware installation.
For voting disk file placement, Oracle recommends that each voting disk be configured so that it does not share any hardware device or disk, or other single point of failure. Any node that cannot access an absolute majority of the configured voting disks (more than half) is restarted.
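The majority rule above is simple arithmetic; this sketch (plain shell, with an illustrative disk count) shows how many voting disks a node must be able to access to survive:

```shell
# Majority rule for voting disks: a node must access more than half of the
# configured voting disks or it is evicted and restarted.
# The value of total is an assumption for illustration.
total=3
needed=$(( total / 2 + 1 ))
echo "With $total voting disks, a node needs access to at least $needed"
```

With three voting disks, the loss of any single disk still leaves a node with a majority of two, which is why L-shaped configurations across separate physical disks are recommended.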
The following table shows the storage options supported for storing Oracle Clusterware files. Oracle Clusterware files include the Oracle Cluster Registry (OCR), a mirrored OCR file (optional), the Oracle Clusterware voting disk, and additional voting disk files (optional).
Note: For information about OCFS2, refer to the following Web site:
For OCFS2 certification status, refer to the Certify page on OracleMetaLink.
|Storage Option||File Types Supported|
| ||OCR and Voting Disks||Oracle Software|
|Automatic Storage Management||No||No|
|OCFS2||Yes||Yes|
|NFS file system||Yes||Yes|
|Shared disk partitions (block devices or raw devices)||Yes||No|
Note: NFS requires a certified NAS device. Direct NFS is not certified for Oracle Clusterware files.
Use the following guidelines when choosing the storage options that you want to use for Oracle Clusterware:
You can choose any combination of the supported storage options for each file type provided that you satisfy all requirements listed for the chosen storage options.
You cannot use ASM to store Oracle Clusterware files, because these files must be accessible before any ASM instance starts.
If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk areas to provide voting disk redundancy.
When upgrading your Oracle9i release 9.2 Oracle RAC environment to Oracle Database 11g release 1 (11.1), you are prompted to specify one or more voting disks during the Oracle Clusterware installation. You must specify a new location for the voting disk in Oracle Database 11g release 1 (11.1). You cannot reuse the old Oracle9i release 9.2 quorum disk for this purpose.
When you have determined your disk storage options, you must perform the following tasks in the order listed:
To use a file system for Oracle Clusterware files, refer to Configuring Storage for Oracle Clusterware Files on a Supported Shared File System.
To use block devices for Oracle Clusterware files, refer to Configuring Disk Devices for Oracle Clusterware Files.
To check for all shared file systems available across all nodes on the cluster, log in as the installation owner user (crs), and use the following syntax:

/mountpoint/runcluvfy.sh comp ssa -n node_list
If you want to check the shared accessibility of a specific shared storage type to specific nodes in your cluster, then use the following command syntax:
/mountpoint/runcluvfy.sh comp ssa -n node_list -s storageID_list
In the preceding syntax examples, the variable mountpoint is the mountpoint path of the installation media, the variable node_list is the comma-separated list of nodes that you want to check, and the variable storageID_list is the comma-separated list of paths for the storage devices that you want to check.
For example, if you want to check the shared accessibility from node1 and node2 of storage devices /dev/sdb and /dev/sdc, and your mountpoint is /mnt/dvdrom/, then enter the following command:
$ /mnt/dvdrom/runcluvfy.sh comp ssa -n node1,node2 -s /dev/sdb,/dev/sdc
If you do not specify storage device IDs in the command, then the command searches for all available storage devices connected to the nodes on the list.
Oracle Universal Installer (OUI) does not suggest a default location for the Oracle Cluster Registry (OCR) or the Oracle Clusterware voting disk. If you choose to create these files on a file system, then review the following sections to complete storage requirements for Oracle Clusterware files:
Note: The OCR is a file that contains the configuration information and status of the cluster. Oracle Universal Installer (OUI) automatically initializes the OCR during the Oracle Clusterware installation. Database Configuration Assistant uses the OCR for storing the configurations for the cluster databases that it creates.
The OCR is a shared file in a cluster file system environment. If you do not use a cluster file system, then you must place this file on a shared storage device.
To use a file system for Oracle Clusterware files, the file system must comply with the following requirements:
To use a cluster file system, it must be a supported cluster file system, as listed in the section "Deciding to Use a Cluster File System for Oracle Clusterware Files".
If you choose to place your Oracle Cluster Registry (OCR) files on a shared file system, then Oracle recommends that one of the following is true:
The user account with which you perform the installation (crs) must have write permissions to create the files in the path that you specify.
Note: If you are upgrading from Oracle9i release 2, then you can continue to use the raw device or shared file that you used for the SRVM configuration repository instead of creating a new file for the OCR.
Use Table 4-2 to determine the partition size for shared file systems.
|File Types Stored||Number of Volumes||Volume Size|
|Oracle Clusterware files (OCR and voting disks) with external redundancy||1 OCR volume, 1 voting disk volume||At least 280 MB for each OCR volume; at least 280 MB for each voting disk volume|
|Oracle Clusterware files (OCR and voting disks) with redundancy provided by Oracle software||2 OCR volumes, 3 voting disk volumes||At least 280 MB for each OCR volume; at least 280 MB for each voting disk volume|
In Table 4-2, the total required volume size is cumulative. For example, to store all Oracle Clusterware files on the shared file system with normal redundancy, you should have at least 1.3 GB of storage available over a minimum of three volumes (two separate volume locations for the OCR and OCR mirror, and one voting disk on each volume).
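The cumulative sizing described above can be sketched in shell; the volume counts (two OCR volumes, three voting disk volumes) follow the normal-redundancy example:

```shell
# Normal redundancy: OCR plus one mirror, and three voting disks,
# each on a volume of at least 280 MB.
ocr_volumes=2
voting_volumes=3
min_mb=280
total_mb=$(( (ocr_volumes + voting_volumes) * min_mb ))
echo "Minimum total storage: ${total_mb} MB"   # 1400 MB, roughly 1.3 GB
```

Note that 1400 MB is approximately 1.3 GB when expressed in binary gigabytes (1400/1024 GiB), which matches the figure quoted above.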
Note: When you create partitions with fdisk by specifying a device size, such as +256M, the actual device created may be smaller than the size requested, based on the cylinder geometry of the disk. This is due to current fdisk restrictions.

Oracle configuration software checks to ensure that devices contain a minimum of 256 MB of available disk space. Therefore, Oracle recommends using at least 280 MB for the device size. You can check partition sizes by using the command syntax fdisk -s partition. For example:

[root@node1]# fdisk -s /dev/sdb1
281106
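As a sketch, the size reported by fdisk -s (in 1024-byte blocks) can be compared against the 256 MB minimum that the installer enforces; the helper name check_size_kb and the sample value are illustrative:

```shell
# fdisk -s reports the partition size in 1024-byte blocks.
# Oracle's installer requires at least 256 MB of available space;
# check_size_kb is a hypothetical helper for that comparison.
check_size_kb() {
  if [ "$1" -ge $(( 256 * 1024 )) ]; then
    echo "ok: meets the 256 MB minimum"
  else
    echo "too small for Oracle Clusterware files"
  fi
}
check_size_kb 281106   # the size reported for /dev/sdb1 above
```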
For Linux x86 (32-bit) and x86-64 (64-bit) platforms, Oracle provides a cluster file system, OCFS2. You can have a shared Oracle home on OCFS2.
To check whether the OCFS2 kernel module is installed, enter the following command:

# /sbin/modinfo ocfs2
If you want to install Oracle Clusterware files on an OCFS2 file system, and the packages are not installed, then download them from the following Web site. Follow the instructions listed with the kit to install the packages and configure the file system:
Note: For OCFS2 certification status, refer to the Certify page on OracleMetaLink.
To use NFS for Oracle Clusterware files, update the /etc/fstab file on each node with an entry containing the NFS mount options for your platform. For example, if your platform is x86-64, then update the /etc/fstab files with an entry similar to the following:
nfs_server:/vol/CWfiles /u01/oracle/cwfiles nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,noac
Note that mount point options are different for Oracle software binaries, Oracle Clusterware files (OCR and voting disks), and data files.
If you want to create a mount point for binaries only, then provide an entry similar to the following for a binaries mount point:
nfs_server:/vol/crshome /u01/oracle/crs nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0
See Also: OracleMetaLink bulletin 359515.1, "Mount Options for Oracle Files When Used with NAS Devices" for the most current information about mount options, available from the following URL:
Note: Refer to your storage vendor documentation for additional information about mount options.
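One way to sanity-check a mounted Clusterware file system is to look for the required options in the output of the mount command; this sketch parses a sample line (the server and paths are the examples used above):

```shell
# A sample line as printed by `mount` for the Clusterware NFS file system
# (illustrative; on a real node, capture the line with `mount | grep cwfiles`).
line='nfs_server:/vol/CWfiles on /u01/oracle/cwfiles type nfs (rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,noac)'

# Clusterware files require noac (no attribute caching) on the mount.
case "$line" in
  *noac*) echo "noac is set" ;;
  *)      echo "WARNING: noac is missing" ;;
esac
```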
Use the following instructions to create directories for Oracle Clusterware files. You can also configure shared file systems for the Oracle Database and recovery files.
Note: For both NFS and OCFS2 storage, you must complete this procedure only if you want to place the Oracle Clusterware files on a separate file system from the Oracle base directory.
To create directories for the Oracle Clusterware files on separate file systems from the Oracle base directory, follow these steps:
If necessary, configure the shared file systems that you want to use and mount them on each node.
Note: The mount point that you use for the file system must be identical on each node. Ensure that the file systems are configured to mount automatically when a node restarts.
Use the df command to determine the free disk space on each mounted file system.
From the display, identify the file systems that you want to use. Choose a file system with a minimum of 560 MB of free disk space (one OCR and one voting disk, with external redundancy).
If you are using the same file system for more than one type of file, then add the disk space requirements for each type to determine the total disk space requirement.
Note the names of the mount point directories for the file systems that you identified.
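The free-space check in the steps above can be sketched by parsing df output; the device name, sizes, and mount point below are illustrative sample values:

```shell
# Sample `df -m` data line for a candidate file system (values illustrative;
# on a real node take the line from `df -m /your/mountpoint`).
df_line='/dev/sdd1  10016  2048  7968  21% /u02/oracle/cwfiles'

# Field 4 is the available space in MB; 560 MB covers one OCR plus one
# voting disk with external redundancy (280 MB each).
avail_mb=$(echo "$df_line" | awk '{print $4}')
if [ "$avail_mb" -ge 560 ]; then
  echo "OK: ${avail_mb} MB free"
else
  echo "too small for Oracle Clusterware files"
fi
```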
If the user performing installation (typically, oracle) has permissions to create directories on the disks where you plan to install Oracle Clusterware and Oracle Database, then OUI creates the Oracle Clusterware file directory, and DBCA creates the Oracle Database file directory and the Recovery file directory.
If the user performing installation does not have write access, then you must create these directories manually. Use commands similar to the following to create the recommended subdirectories in each of the mount point directories and set the appropriate owner, group, and permissions on the Oracle Clusterware home (or CRS home). For example, where the user is oracle and the Oracle Clusterware file directory is /mount_point/oracrs:

# mkdir /mount_point/oracrs
# chown oracle:oinstall /mount_point/oracrs
# chmod 750 /mount_point/oracrs
Note: After installation, directories in the installation path for the Oracle Cluster Registry (OCR) files should be owned by root, and not writable by any account other than root.
When you have completed creating a subdirectory in the mount point directory, and set the appropriate owner, group, and permissions, you have completed OCFS2 or NFS configuration for Oracle Clusterware.
On Linux systems, O_DIRECT enables direct reads and writes to block devices, avoiding kernel overhead. With Oracle Clusterware release 10.2 and later, Oracle Clusterware files are configured by default to use direct input/output.
With the 2.6 kernel or later for Red Hat Enterprise Linux, Oracle Linux, and SUSE Enterprise Server, you must create a permissions file to maintain permissions on Oracle Cluster Registry (OCR) and voting disk partitions. If you do not create this permissions file, then permissions on disk devices revert to their default values, root:disk, and Oracle Clusterware fails to start.
On Asianux 2, Red Hat Enterprise Linux 4, and Oracle Linux 4, you must create a permissions file with a name that starts with a number lower than 50.
On Asianux Server 3, Red Hat Enterprise Linux 5, Oracle Linux 5, and SUSE Enterprise Linux 10, you must create a permissions file with a name that starts with a number higher than 50.
To configure a permissions file for disk devices, complete the following tasks:
Create a permissions file in /etc/udev/permissions.d to change the permissions from default root ownership to root and members of the oinstall group. Name the file 49-oracle.permissions or 51-oracle.permissions, depending on your Linux distribution. In each case, the contents of the xx-oracle.permissions file use the syntax device[partitions]:owner:group:mode. For example, to set permissions for an OCR partition on block device /dev/sda1, create the following entry:

sda1:root:oinstall:0640
Use the section "Example of Creating a Udev Permissions File for Oracle Clusterware" for a step-by-step example of how to perform this task.
Configure the block devices from the local node with the required partition space for Oracle Clusterware files. Use the section "Example of Configuring Block Device Storage for Oracle Clusterware" to help you configure block devices, if you are unfamiliar with creating partitions.
Change the ownership of OCR partitions to the installation owner on all member nodes of the cluster. In the session where you run the Installer, the OCR partitions must be owned by the installation owner (oracle) that performs the Oracle Clusterware installation, so that the Installer can write to them. During installation, the Installer changes ownership of the OCR partitions back to root. With subsequent system restarts, ownership is set correctly by the udev permissions file.
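The mode that the permissions file enforces can be rehearsed on a scratch file before touching real block devices such as /dev/sda1 (the real chown and chmod on devices require root):

```shell
# Rehearse the 0640 mode on a temporary file; a real installation applies
# the same mode to the OCR and voting disk block devices as root.
f=$(mktemp)
chmod 0640 "$f"
stat -c '%a' "$f"   # prints 640
rm -f "$f"
```

Mode 0640 lets the owner write, the oinstall group read, and denies all other access, matching the entries shown in the permissions file.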
The procedure to create a permissions file to grant oinstall group members write privileges to block devices is as follows:
Log in as root.
Change to the /etc/udev/permissions.d directory:

# cd /etc/udev/permissions.d
Start a text editor, such as vi, and enter the partition information where you want to place the OCR and voting disk files, using the syntax device[partitions]:root:oinstall:0640. Note that Oracle recommends that you place the OCR and the voting disk files on separate physical disks. For example, to grant oinstall members access to SCSI disks to place OCR files on sda1 and sdb2, and to grant the Oracle Clusterware owner (in this example crs) permissions to place voting disks on sdb3, sdc1 and sda2, add the following information to the file:
# OCR disks
sda1:root:oinstall:0640
sdb2:root:oinstall:0640
# Voting disks
sda2:crs:oinstall:0640
sdb3:crs:oinstall:0640
sdc1:crs:oinstall:0640
Save the file:
On Asianux 2, Oracle Linux 4, and Red Hat Enterprise Linux 4 systems, save the file as 49-oracle.permissions.
On Asianux Server 3, Oracle Linux 5, Red Hat Enterprise Linux 5, and SUSE Enterprise Server 10 systems, save the file as 51-oracle.permissions.
Using the following command, assign the permissions in the udev file to the devices:
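Each entry in the permissions file has four colon-separated fields; this sketch splits one of the entries shown above to illustrate the device:owner:group:mode layout:

```shell
# Split a udev permissions entry into its four fields.
entry='sda1:root:oinstall:0640'
oldIFS=$IFS
IFS=:
set -- $entry
IFS=$oldIFS
echo "device=$1 owner=$2 group=$3 mode=$4"
```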
The procedure to create partitions for Oracle Clusterware files on block devices is as follows:
Log in as root.
Enter the fdisk command to format a specific storage disk (for example, /sbin/fdisk /dev/sdb).
Create a new partition, and make the partition 280 MB in size for both OCR and voting disk partitions.
Use the command syntax partprobe diskpath on each node in the cluster to update the kernel partition table for the shared storage device on each node.
The following is an example of how to use fdisk to create one partition on a shared storage block disk device for an OCR file:
[crs@localnode /]$ su
Password:
[root@localnode /]# /sbin/fdisk /dev/sdb
The number of cylinders for this disk is set to 1024.
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1024, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1024, default 1024): +280M
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@localnode /]# exit
[crs@localnode /]$ ssh remotenode
Last login Wed Feb 21 20:23:01 from localnode
[crs@remotenode ~]$ su
Password:
[root@remotenode /]# /sbin/partprobe /dev/sdb1
Note: Oracle recommends that you create partitions for Oracle Clusterware files on physically separate disks.