Oracle® Grid Infrastructure Installation Guide
12c Release 1 (12.1) for Linux

E17888-19

7 Configuring Storage for Oracle Grid Infrastructure and Oracle RAC

This chapter describes the storage configuration tasks that you must complete before you start the installer to install Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM), and that you must complete before adding an Oracle Real Application Clusters (Oracle RAC) installation to the cluster.

This chapter contains the following topics:

7.1 Reviewing Oracle Grid Infrastructure Storage Options

This section describes the supported storage options for Oracle Grid Infrastructure for a cluster, and for features running on Oracle Grid Infrastructure. It includes the following topics:

See Also:

The Oracle Certification site on My Oracle Support for the most current information about certified storage options:
https://support.oracle.com

7.1.1 Supported Storage Options

The following table shows the storage options supported for storing Oracle Clusterware and Oracle RAC files.

Note:

For information about OCFS2, see the following website:
http://oss.oracle.com/projects/ocfs2/

If you plan to install an Oracle RAC home on a shared OCFS2 location, then you must upgrade OCFS2 to at least version 1.4.1, which supports shared writable mmaps.

For OCFS2 certification status, and for other cluster file system support, see the Certify page on My Oracle Support.

Table 7-1 Supported Storage Options for Oracle Clusterware and Oracle RAC

Storage Option: Oracle Automatic Storage Management (Oracle ASM)
Note: Loopback devices are not supported for use with Oracle ASM.

  • OCR and Voting Files: Yes
  • Oracle Clusterware binaries: No
  • Oracle RAC binaries: No
  • Oracle Database Files: Yes
  • Oracle Recovery Files: Yes

Storage Option: Oracle Automatic Storage Management Cluster File System (Oracle ACFS)

  • OCR and Voting Files: No
  • Oracle Clusterware binaries: No
  • Oracle RAC binaries: Yes for running Oracle Database on Hub Nodes for Oracle Database 11g Release 2 (11.2) and later; No for running Oracle Database on Leaf Nodes
  • Oracle Database Files: Yes (Oracle Database 12c Release 1 (12.1) and later)
  • Oracle Recovery Files: Yes (Oracle Database 12c Release 1 (12.1) and later)

Storage Option: Local file system

  • OCR and Voting Files: No
  • Oracle Clusterware binaries: Yes
  • Oracle RAC binaries: Yes
  • Oracle Database Files: No
  • Oracle Recovery Files: No

Storage Option: Network file system (NFS) on a certified network-attached storage (NAS) filer
Note: Direct NFS Client does not support Oracle Clusterware files.

  • OCR and Voting Files: Yes
  • Oracle Clusterware binaries: Yes
  • Oracle RAC binaries: Yes
  • Oracle Database Files: Yes
  • Oracle Recovery Files: Yes

Storage Option: Shared disk partitions (block devices or raw devices)

  • OCR and Voting Files: No
  • Oracle Clusterware binaries: No
  • Oracle RAC binaries: No
  • Oracle Database Files: No
  • Oracle Recovery Files: No


Use the following guidelines when choosing storage options:

  • You can choose any combination of the supported storage options for each file type provided that you satisfy all requirements listed for the chosen storage options.

  • You can use Oracle ASM to store Oracle Clusterware files.

  • Direct use of raw or block devices is not supported. You can only use raw or block devices under Oracle ASM.

    See Also:

    Oracle Database Upgrade Guide for information about how to prepare for upgrading an existing database
  • If you do not have a storage option that provides external file redundancy, then you must configure at least three voting files locations and at least two Oracle Cluster Registry locations to provide redundancy.

7.1.2 About Oracle ACFS and Oracle ADVM

This section contains information about Oracle Automatic Storage Management Cluster File System (Oracle ACFS) and Oracle Automatic Storage Management Dynamic Volume Manager (Oracle ADVM). It contains the following topics:

7.1.2.1 About Oracle ACFS and Oracle ADVM

Oracle ACFS extends Oracle ASM technology to support all of your application data in both single instance and cluster configurations. Oracle ADVM provides volume management services and a standard disk device driver interface to clients. Oracle Automatic Storage Management Cluster File System communicates with Oracle ASM through the Oracle Automatic Storage Management Dynamic Volume Manager interface.
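
For example, the following sketch shows one way to create an Oracle ACFS file system on an Oracle ADVM volume after installation. The disk group name (data), volume name (appvol), size, and mount point are assumptions, and the volume device name under /dev/asm is generated on your system, so check it with the volinfo command. On a cluster, you typically register the file system with the Oracle ACFS mount registry or as a clusterware resource instead of mounting it manually:

$ asmcmd volcreate -G data -s 10G appvol
$ asmcmd volinfo -G data appvol
# mkfs -t acfs /dev/asm/appvol-123
# mkdir -p /u02/app/acfsmounts/appvol
# mount -t acfs /dev/asm/appvol-123 /u02/app/acfsmounts/appvol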

7.1.2.2 Oracle ACFS and Oracle ADVM Support on Linux

Oracle ACFS and Oracle ADVM are supported on Oracle Linux, Red Hat Enterprise Linux, and SUSE Linux Enterprise Server. Table 7-2 lists the releases, platforms and kernel versions that support Oracle ACFS and Oracle ADVM.

Table 7-2 Platforms That Support Oracle ACFS and Oracle ADVM

Platform / Operating System Kernel

Oracle Linux 6

  • Oracle Linux 6 with Red Hat Compatible Kernel

  • Unbreakable Enterprise Kernel Release 1:

    2.6.32-300 and later updates to -300 kernels

    2.6.32-200 and later updates to -200 kernels

    2.6.32-100.34.1 and later updates to -100 kernels

  • Unbreakable Enterprise Kernel Release 2:

    2.6.39-400 and later updates to -400 kernels

    2.6.39-300 and later updates to -300 kernels

    2.6.39-200 and later updates to -200 kernels

    2.6.39-100 and later updates to -100 kernels

Oracle Linux 5

  • Oracle Linux 5 Update 3 with Red Hat Compatible Kernel: 2.6.18 or later

  • Unbreakable Enterprise Kernel Release 1:

    2.6.32-300 and later updates to -300 kernels

    2.6.32-200 and later updates to -200 kernels

    2.6.32-100.34.1 and later updates to -100 kernels

  • Unbreakable Enterprise Kernel Release 2:

    2.6.39-400 and later updates to -400 kernels

    2.6.39-300 and later updates to -300 kernels

    2.6.39-200 and later updates to -200 kernels

    2.6.39-100 and later updates to -100 kernels

Red Hat Enterprise Linux 6

  • Red Hat Enterprise Linux 6 kernels

Red Hat Enterprise Linux 5

  • Red Hat Enterprise Linux 5 Update 3: 2.6.18-238.el5 or later

SUSE Linux Enterprise Server 11

  • SUSE Linux Enterprise Server 11 Service Pack 2 (SP2)


Note:

Security Enhanced Linux (SELinux) is not supported on Oracle ACFS file systems.

7.1.2.3 Restrictions and Guidelines for Oracle ACFS

Note the following general restrictions and guidelines about Oracle ACFS:

  • Oracle Automatic Storage Management Cluster File System (Oracle ACFS) provides a general purpose file system. You can place Oracle Database binaries and Oracle Database files on this system, but you cannot place Oracle Clusterware files on Oracle ACFS.

    For policy-managed Oracle Flex Cluster databases, be aware that Oracle ACFS can run on Hub Nodes, but cannot run on Leaf Nodes. For this reason, Oracle RAC binaries cannot be placed on Oracle ACFS on Leaf Nodes.

  • You cannot store Oracle Clusterware binaries and files on Oracle ACFS.

  • Starting with Oracle Grid Infrastructure 12c Release 1 (12.1) for a cluster, creating Oracle data files on an Oracle ACFS file system is supported.

  • You can store Oracle Database binaries, data files, and administrative files (for example, trace files) on Oracle ACFS.

7.1.3 General Storage Considerations for Oracle Grid Infrastructure and Oracle RAC

For all installations, you must choose the storage option to use for Oracle Grid Infrastructure (Oracle Clusterware and Oracle ASM), and Oracle Real Application Clusters (Oracle RAC) databases.

7.1.3.1 General Storage Considerations for Oracle Clusterware

Oracle Clusterware voting files are used to monitor cluster node status, and Oracle Cluster Registry (OCR) files contain configuration information about the cluster. You can store Oracle Cluster Registry (OCR) and voting files in Oracle ASM disk groups. You can also store a backup of the OCR file in a disk group. Storage must be shared; any node that does not have access to an absolute majority of voting files (more than half) will be restarted.
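
For example, after Oracle Clusterware is running, you can confirm the current voting file and OCR locations with the following commands. This is a quick sketch that assumes the Grid home bin directory is in your path; run ocrcheck as root for a complete integrity check:

$ crsctl query css votedisk
# ocrcheck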

7.1.3.2 General Storage Considerations for Oracle RAC

For Standard Edition Oracle RAC installations, Oracle ASM is the only supported storage option for database and recovery files. For all installations, Oracle recommends that you create at least two separate Oracle ASM disk groups: One for Oracle Database data files, and one for recovery files. Oracle recommends that you place the Oracle Database disk group and the recovery files disk group in separate failure groups.

If you do not use Oracle ASM, then Oracle recommends that you place the data files and the Fast Recovery Area in shared storage located outside of the Oracle home, in separate locations, so that a hardware failure does not affect availability.
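
For example, the following sketch shows how a database can be directed to separate Oracle ASM disk groups for data files and for the Fast Recovery Area. The disk group names +DATA and +FRA and the recovery area size are placeholder values for illustration only:

$ sqlplus / as sysdba
SQL> ALTER SYSTEM SET db_create_file_dest = '+DATA' SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 100G SCOPE=BOTH SID='*';
SQL> ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=BOTH SID='*';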

Note the following additional guidelines for supported storage options:

  • You can choose any combination of the supported storage options for each file type provided that you satisfy all requirements listed for the chosen storage options.

  • If you plan to install an Oracle RAC home on a shared OCFS2 location, then you must upgrade OCFS2 to at least version 1.4.1, which supports shared writable mmaps.

  • If you intend to use Oracle ASM with Oracle RAC, and you are configuring a new Oracle ASM instance, then your system must meet the following conditions:

    • All nodes on the cluster have Oracle Clusterware and Oracle ASM 12c Release 1 (12.1) installed as part of an Oracle Grid Infrastructure for a cluster installation.

    • Any existing Oracle ASM instance on any node in the cluster is shut down.

  • If you do not have a storage option that provides external file redundancy, then you must configure at least three voting file areas to provide voting file redundancy.

7.1.4 Guidelines for Using Oracle ASM Disk Groups for Storage

During Oracle Grid Infrastructure installation, you can create one disk group. After the Oracle Grid Infrastructure installation, you can create additional disk groups using Oracle Automatic Storage Management Configuration Assistant (ASMCA), SQL*Plus, or Automatic Storage Management Command-Line Utility (ASMCMD). Note that with Oracle Database 11g Release 2 (11.2) and later releases, Oracle Database Configuration Assistant (DBCA) does not have the functionality to create disk groups for Oracle ASM.
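
For example, the following sketch creates an additional disk group for recovery files with SQL*Plus connected to the Oracle ASM instance in the Grid home. The disk group name, disk path, and attribute value are assumptions for illustration:

$ sqlplus / as sysasm
SQL> CREATE DISKGROUP fra EXTERNAL REDUNDANCY
     DISK '/dev/mapper/fra_disk1'
     ATTRIBUTE 'compatible.asm' = '12.1';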

If you install Oracle Database or Oracle RAC after you install Oracle Grid Infrastructure, then you can either use the same disk group for database files, OCR, and voting files, or you can use different disk groups. If you create multiple disk groups before installing Oracle RAC or before creating a database, then you can do one of the following:

  • Place the data files in the same disk group as the Oracle Clusterware files.

  • Use the same Oracle ASM disk group for data files and recovery files.

  • Use different disk groups for each file type.

If you create only one disk group for storage, then the OCR and voting files, database files, and recovery files are contained in the one disk group. If you create multiple disk groups for storage, then you can place files in different disk groups.

Note:

The Oracle ASM instance that manages the existing disk group should be running in the Grid home.

See Also:

Oracle Automatic Storage Management Administrator's Guide for information about creating disk groups

7.1.5 Using Logical Volume Managers with Oracle Grid Infrastructure and Oracle RAC

Oracle Grid Infrastructure and Oracle RAC only support cluster-aware volume managers. Some third-party volume managers are not cluster-aware, and so are not supported. To confirm that a volume manager you want to use is supported, click Certifications on My Oracle Support to determine if your volume manager is certified for Oracle RAC. My Oracle Support is available at the following URL:

https://support.oracle.com

7.1.6 After You Have Selected Disk Storage Options

When you have determined your disk storage options, configure shared storage:

7.2 About Shared File System Storage Configuration

The installer suggests default locations for the Oracle Cluster Registry (OCR) and the Oracle Clusterware voting files, based on the shared storage locations detected on the server. If you choose to create these files on a file system, then review the following sections to ensure that you meet the storage requirements for Oracle Clusterware files:

Note:

The OCR is a file that contains the configuration information and status of the cluster. The installer automatically initializes the OCR during the Oracle Clusterware installation. Database Configuration Assistant uses the OCR for storing the configurations for the cluster databases that it creates.

7.2.1 Guidelines for Using a Shared File System with Oracle Grid Infrastructure

To use a shared file system for Oracle Clusterware, Oracle ASM, and Oracle RAC, the file system must comply with the following requirements:

  • To use an NFS file system, it must be on a supported NAS device. Log in to My Oracle Support at the following URL, and click Certifications to find the most current information about supported NAS devices:

    https://support.oracle.com/

  • If you choose to place your Oracle Cluster Registry (OCR) files on a shared file system, then Oracle recommends that you configure your shared file systems in one of the following ways:

    • The disks used for the file system are on a highly available storage device, (for example, a RAID device).

    • At least two file systems are mounted, and use the features of Oracle Clusterware 12c Release 1 (12.1) to provide redundancy for the OCR.

  • If you choose to place your database files on a shared file system, then one of the following should be true:

    • The disks used for the file system are on a highly available storage device, (for example, a RAID device).

    • The file systems consist of at least two independent file systems, with the database files on one file system, and the recovery files on a different file system.

  • Oracle recommends that you create a Grid Infrastructure Management Repository. This repository is an optional component, but if you do not select this feature during installation, then you lose access to Oracle Database Quality of Service management, Memory Guard, and Cluster Health Monitor. You cannot enable these features after installation except by reinstalling Oracle Grid Infrastructure.

  • The user account with which you perform the installation (oracle or grid) must have write permissions to create the files in the path that you specify.

Note:

Upgrading from Oracle9i Release 2 using the raw device or shared file for the OCR that you used for the SRVM configuration repository is not supported.

If you are upgrading Oracle Clusterware, and your existing cluster uses 100 MB OCR and 20 MB voting file partitions, then you must extend the OCR partition to at least 400 MB, and you should extend the voting file partition to 300 MB. Oracle recommends that you do not use partitions, but instead place OCR and voting files in a special type of failure group, called a quorum failure group.

All storage products must be supported by both your server and storage vendors.

See Also:

Oracle Database Quality of Service Management User's Guide for more information about features requiring the Grid Infrastructure Management Repository

7.2.2 Requirements for Oracle Grid Infrastructure Shared File System Volume Sizes

Use Table 7-3 and Table 7-4 to determine the minimum size for shared file systems:

Table 7-3 Oracle Clusterware Shared File System Volume Size Requirements

File Types Stored: Voting files with external redundancy
Number of Volumes: 1
Volume Size: At least 300 MB for each voting file volume

File Types Stored: Oracle Cluster Registry (OCR) with external redundancy, without the Grid Infrastructure Management Repository
Number of Volumes: 1
Volume Size: At least 400 MB for each OCR volume

File Types Stored: Oracle Cluster Registry (OCR) with external redundancy and the Grid Infrastructure Management Repository
Number of Volumes: 1
Volume Size: At least 4 GB for the OCR volume that contains the Grid Infrastructure Management Repository (3.3 GB + 300 MB voting file + 400 MB OCR), plus 500 MB for each node for clusters greater than four nodes. For example, a six-node cluster allocation should be 5 GB.

File Types Stored: Oracle Clusterware files (OCR and voting files) and Grid Infrastructure Management Repository, with redundancy provided by Oracle software
Number of Volumes: 3
Volume Size: At least 400 MB for each OCR volume, at least 300 MB for each voting file volume, and 2 x 4 GB for the Grid Infrastructure Management Repository (normal redundancy). For 5 nodes and beyond, add 500 MB for each additional node. For example, for a six-node cluster the size is 10.3 GB:

  • Grid Infrastructure Management Repository = 2 x (3.3 GB + 500 MB + 500 MB) = 8.6 GB

  • 2 OCRs (2 x 400 MB) = 0.8 GB

  • 3 voting files (3 x 300 MB) = 0.9 GB

  Total = 10.3 GB


Table 7-4 Oracle RAC Shared File System Volume Size Requirements

File Types Stored: Oracle Database files
Number of Volumes: 1
Volume Size: At least 1.5 GB for each volume

File Types Stored: Recovery files
Note: Recovery files must be on a different volume than database files.
Number of Volumes: 1
Volume Size: At least 2 GB for each volume


In Table 7-3 and Table 7-4, the total required volume size is cumulative. For example, to store all Oracle Clusterware files on the shared file system with normal redundancy, you should have at least 2 GB of storage available over a minimum of three volumes (three separate volume locations for the OCR and two OCR mirrors, and one voting file on each volume). You should have a minimum of three physical disks, each at least 500 MB, to ensure that voting files and OCR files are on separate physical disks. If you add Oracle RAC using one volume for database files and one volume for recovery files, then you should have at least 3.5 GB available storage over two volumes, and at least 6.9 GB available total for all volumes.

Note:

If you create partitions on shared partitions with fdisk by specifying a device size, such as +400M, then the actual device created may be smaller than the size requested, based on the cylinder geometry of the disk. This is due to current fdisk restrictions. Oracle recommends that you partition the entire disk that you allocate for use by Oracle ASM.

7.2.3 Deciding to Use a Cluster File System for Oracle Clusterware Files

For new installations, Oracle recommends that you use Oracle Automatic Storage Management (Oracle ASM) to store voting files and OCR files. For Linux x86-64 (64-bit) platforms, Oracle provides a cluster file system, OCFS2. However, Oracle does not recommend using OCFS2 for Oracle Clusterware files.

7.2.4 About Direct NFS Client and Data File Storage

Direct NFS Client is an alternative to using kernel-managed NFS. This section contains the following information about Direct NFS Client:

7.2.4.1 About Direct NFS Client Storage

With Oracle Database, instead of using the operating system kernel NFS client, you can configure Oracle Database to access NFS servers directly using an Oracle internal client called Direct NFS Client. Direct NFS Client supports NFSv3, NFSv4 and NFSv4.1 protocols (excluding the Parallel NFS extension) to access the NFS server.

To enable Oracle Database to use Direct NFS Client, the NFS file systems must be mounted and available over regular NFS mounts before you start installation. Direct NFS Client manages settings after installation. If Oracle Database cannot open an NFS server using Direct NFS Client, then Oracle Database uses the platform operating system kernel NFS client. You should still set the kernel mount options as a backup, but for normal operation, Direct NFS Client uses its own NFS client.

Direct NFS Client supports up to four network paths to the NFS server. Direct NFS Client performs load balancing across all specified paths. If a specified path fails, then Direct NFS Client reissues I/O commands over any remaining paths.

Some NFS file servers require NFS clients to connect using reserved ports. If your filer is running with reserved port checking, then you must disable reserved port checking for Direct NFS Client to operate. To disable reserved port checking, consult your NFS file server documentation.

For NFS servers that restrict port range, you can use the insecure option to enable clients other than root to connect to the NFS server. Alternatively, you can disable Direct NFS Client as described in Section 7.3.9, "Disabling Direct NFS Client Oracle Disk Management Control of NFS".
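
For example, a hypothetical /etc/exports entry on such a server might add the insecure option; the export path and client name shown here are placeholders, and the equivalent setting on your filer may differ, so check your NFS server documentation:

/vol/oradata  node1.mycluster.example.com(rw,insecure)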

Note:

Use NFS servers supported for Oracle RAC. See the following URL for support information:

https://support.oracle.com

7.2.4.2 About Direct NFS Client Configuration

Direct NFS Client uses either the configuration file $ORACLE_HOME/dbs/oranfstab or the operating system mount tab file /etc/mtab to find out what mount points are available. If oranfstab is not present, then by default Direct NFS Client serves mount entries found in /etc/mtab. No other configuration is required. You can use oranfstab to specify additional Oracle Database options for Direct NFS Client. For example, you can use oranfstab to specify additional paths for a mount point.

Direct NFS Client supports up to four network paths to the NFS server. Direct NFS Client performs load balancing across all specified paths. If a specified path fails, then Direct NFS Client reissues I/O commands over any remaining paths.

7.2.4.3 About the oranfstab File and Direct NFS Client

If you use Direct NFS Client, then you can use oranfstab, a file specific to Oracle data file management, to specify additional options for Direct NFS Client that are specific to Oracle Database. For example, you can use oranfstab to specify additional paths for a mount point. You can add the oranfstab file either to /etc or to $ORACLE_HOME/dbs.

With shared Oracle homes, when the oranfstab file is placed in $ORACLE_HOME/dbs, the entries in the file are specific to a single database. In this case, all nodes running an Oracle RAC database use the same $ORACLE_HOME/dbs/oranfstab file. In non-shared Oracle RAC installations, oranfstab must be replicated on all nodes.

When the oranfstab file is placed in /etc, it is globally available to all Oracle databases, and can contain mount points used by all Oracle databases running on nodes in the cluster, including standalone databases. However, on Oracle RAC systems, if the oranfstab file is placed in /etc, then you must replicate the /etc/oranfstab file on all nodes, and keep each /etc/oranfstab file synchronized on all nodes, just as you must with the /etc/fstab file.
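
For example, a minimal way to copy the file from the local node to the other cluster member nodes, assuming placeholder node names node2 and node3 and that root secure copy between nodes is configured, is:

# for node in node2 node3; do scp /etc/oranfstab ${node}:/etc/oranfstab; done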

See Also:

Section 7.3.1, "Configuring Operating System NFS Mount and Buffer Size Parameters" for information about configuring /etc/fstab

In all cases, mount points must be mounted by the kernel NFS system, even when they are being served using Direct NFS Client. Refer to your vendor documentation to complete operating system NFS configuration and mounting.

Caution:

Direct NFS Client cannot serve an NFS server with write size values (wtmax) less than 32768.

7.2.4.4 About Mounting NFS Storage Devices with Direct NFS Client

Direct NFS Client determines mount point settings for NFS storage devices based on the configurations in /etc/mtab, which are changed by configuring the /etc/fstab file.

Direct NFS Client searches for mount entries in the following order:

  1. $ORACLE_HOME/dbs/oranfstab

  2. /etc/oranfstab

  3. /etc/mtab

Direct NFS Client uses the first matching entry as the mount point.

Oracle Database requires that mount points be mounted by the kernel NFS system even when served through Direct NFS Client.

Note:

You can have only one active Direct NFS Client implementation for each instance. Using Direct NFS Client on an instance prevents the use of another Direct NFS Client implementation on that instance.

If Oracle Database uses Direct NFS Client mount points configured using oranfstab, then it first verifies kernel NFS mounts by cross-checking entries in oranfstab with operating system NFS mount points. If a mismatch exists, then Direct NFS Client logs an informational message, and does not operate.

If Oracle Database cannot open an NFS server using Direct NFS Client, then Oracle Database uses the platform operating system kernel NFS client. In this case, the kernel NFS mount options must be set up as defined in "Checking NFS Mount and Buffer Size Parameters for Oracle RAC". Additionally, an informational message is logged into the Oracle alert and trace files indicating that Direct NFS Client could not connect to an NFS server.

Section 7.1.1, "Supported Storage Options" lists the file types that are supported by Direct NFS Client.

The Oracle files resident on the NFS server that are served by Direct NFS Client are also accessible through the operating system kernel NFS client.

See Also:

Oracle Automatic Storage Management Administrator's Guide for guidelines to follow regarding managing Oracle database data files created with Direct NFS Client or kernel NFS

7.2.5 Deciding to Use NFS for Data Files

Network-attached storage (NAS) systems use NFS to access data. You can store data files on a supported NFS system.

NFS file systems must be mounted and available over NFS mounts before you start installation. Refer to your vendor documentation to complete NFS configuration and mounting.

Be aware that the performance of Oracle software and databases stored on NAS devices depends on the performance of the network connection between the Oracle server and the NAS device.

For this reason, Oracle recommends that you connect the server to the NAS device using a private dedicated network connection, which should be Gigabit Ethernet or better.

7.3 Configuring Operating System and Direct NFS Client

Refer to the following sections to configure your operating system and Direct NFS Client:

7.3.1 Configuring Operating System NFS Mount and Buffer Size Parameters

If you are using NFS for the Grid home or Oracle RAC home, then you must set up the NFS mounts on the storage to enable the following:

  • The root user on the clients mounting to the storage can be considered as the root user on the file server, instead of being mapped to an anonymous user.

  • The root user on the client server can create files on the NFS filesystem that are owned by root on the file server.

On NFS, you can obtain root access for clients writing to the storage by enabling no_root_squash on the server side. For example, to set up Oracle Clusterware file storage in the path /vol/grid, with nodes node1, node2, and node3 in the domain mycluster.example.com, add a line similar to the following to the /etc/exports file:

/vol/grid/ node1.mycluster.example.com(rw,no_root_squash)
node2.mycluster.example.com(rw,no_root_squash)
node3.mycluster.example.com(rw,no_root_squash)

If the domain or DNS is secure so that no unauthorized system can obtain an IP address on it, then you can grant root access by domain, rather than specifying particular cluster member nodes:

For example:

/vol/grid/ *.mycluster.example.com(rw,no_root_squash)

Oracle recommends that you use a secure DNS or domain, and grant root access to cluster member nodes using the domain, because using this syntax enables you to add or remove nodes without the need to reconfigure the NFS server.

If you use Grid Naming Service (GNS), then the subdomain allocated for resolution by GNS within the cluster is a secure domain. Any server without a correctly signed Grid Plug and Play (GPnP) profile cannot join the cluster, so an unauthorized system cannot obtain or use names inside the GNS subdomain.

Caution:

Granting root access by domain can be used to obtain unauthorized access to systems. System administrators should see their operating system documentation for the risks associated with using no_root_squash.

After changing /etc/exports, reload the file system mount using the following command:

# /usr/sbin/exportfs -avr

7.3.2 Checking Operating System NFS Mount and Buffer Size Parameters

On Oracle Grid Infrastructure cluster member nodes, you must set the values for the NFS buffer size parameters rsize and wsize to 32768.

The NFS client-side mount options for binaries are:

rw,bg,hard,nointr,tcp,nfsvers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0

Note:

The intr and nointr mount options are deprecated with Oracle Unbreakable Enterprise Linux and Oracle Linux kernels, 2.6.32 and later.

If you have Oracle Grid Infrastructure binaries on an NFS mount, then you must not include the nosuid option.

The NFS client-side mount options for Oracle Clusterware files (OCR and voting files) are:

rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0

Update the /etc/fstab file on each node with an entry containing the NFS mount options for your platform. For example, if your platform is x86-64, and you are creating a mount point for Oracle Clusterware files, then update the /etc/fstab files with an entry similar to the following:

nfs_server:/vol/grid  /u02/oracle/cwfiles nfs \
rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0

Note that mount point options are different for Oracle software binaries, Oracle Clusterware files (OCR and voting files), and data files.

To create a mount point for binaries only, provide an entry similar to the following for a binaries mount point:

nfs_server:/vol/bin /u02/oracle/grid nfs \
rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0,suid 0 0

See Also:

My Oracle Support bulletin 359515.1, "Mount Options for Oracle Files When Used with NAS Devices" for the most current information about mount options, available from the following URL:

https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=359515.1

Note:

Refer to your storage vendor documentation for additional information about mount options.

7.3.3 Checking NFS Mount and Buffer Size Parameters for Oracle RAC

If you use NFS mounts for Oracle RAC files, then you must mount NFS volumes used for storing database files with special mount options on each node that has an Oracle RAC instance. When mounting an NFS file system, Oracle recommends that you use the same mount point options that your NAS vendor used when certifying the device. Refer to your device documentation or contact your vendor for information about recommended mount-point options.

Update the /etc/fstab file on each node with an entry similar to the following:

nfs_server:/vol/DATA/oradata  /u02/oradata  nfs \
rw,bg,hard,nointr,tcp,nfsvers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0

The mandatory mount options comprise the minimum set of mount options that you must use while mounting the NFS volumes. These mount options are essential to protect the integrity of the data and to prevent any database corruption. Failure to use these mount options may result in file access errors. See your operating system or NAS device documentation for more information about the specific options supported on your platform.

See Also:

My Oracle Support Note 359515.1 for updated NAS mount option information, available at the following URL:
https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=359515.1

7.3.4 Checking TCP Network Protocol Buffer for Direct NFS Client

By default, the network buffer size is set to 1 MB for TCP, and 2 MB for UDP. The TCP buffer size can set a limit on file transfers, which can negatively affect performance for Direct NFS Client users.

To check the current TCP buffer size, enter the following command:

# sysctl -a | grep -e 'net.ipv4.tcp_[rw]mem'

The output of this command is similar to the following:

net.ipv4.tcp_rmem = 4096        87380   1048576
net.ipv4.tcp_wmem = 4096        16384   1048576

Oracle recommends that you set the value based on the link speed of your servers. For example, perform the following steps:

  1. As root, use a text editor to open /etc/sysctl.conf, and add or change the following:

    net.ipv4.tcp_rmem = 4096        87380   4194304
    net.ipv4.tcp_wmem = 4096        16384   4194304
    
  2. Apply your changes by running the following command:

    # sysctl -p
    
  3. Restart the network:

    # /etc/rc.d/init.d/network restart
    

7.3.5 Enabling Direct NFS Client Oracle Disk Manager Control of NFS

Complete the following procedure to enable Direct NFS Client:

  1. Create an oranfstab file with the following attributes for each NFS server you configure for access using Direct NFS Client:

    • server: The NFS server name.

    • local: Up to four paths on the database host, specified by IP address or by name, as displayed using the ifconfig command run on the database host.

    • path: Up to four network paths to the NFS server, specified either by IP address, or by name, as displayed using the ifconfig command on the NFS server.

    • export: The exported path from the NFS server.

    • mount: The corresponding local mount point for the exported volume.

    • mnt_timeout: Specifies (in seconds) the time Direct NFS Client should wait for a successful mount before timing out. This parameter is optional. The default timeout is 10 minutes (600).

    • nfs_version: Specifies the NFS protocol version that Direct NFS Client uses. Possible values are NFSv3, NFSv4, and NFSv4.1. The default version is NFSv3. To use NFSv4 or NFSv4.1, you must explicitly set nfs_version in oranfstab (a sketch showing this parameter follows Example 7-3).

    • dontroute: Specifies that outgoing messages should not be routed by the operating system, but instead sent using the IP address to which they are bound. Note that this POSIX option sometimes does not work on Linux systems with multiple paths in the same subnet.

    See Also:

    Oracle Database Performance Tuning Guide for more information about limiting asynchronous I/O

    Example 7-1, Example 7-2, and Example 7-3 show three possible NFS server entries in oranfstab. A single oranfstab can have multiple NFS server entries.

  2. By default, Direct NFS Client is installed in an enabled state. However, if Direct NFS Client is disabled and you want to enable it, complete the following steps on each node. If you use a shared Grid home for the cluster, then complete the following steps in the shared Grid home:

    1. Log in as the Oracle Grid Infrastructure installation owner.

    2. Change directory to Grid_home/rdbms/lib.

    3. Enter the following commands:

      $ make -f ins_rdbms.mk dnfs_on
      

Example 7-1 Using Local and Path NFS Server Entries

The following example uses both local and path. Because local and path are in different subnets, there is no need to specify dontroute.

server: MyDataServer1
local: 192.0.2.0
path: 192.0.2.1
local: 192.0.100.0
path: 192.0.100.1
export: /vol/oradata1 mount: /mnt/oradata1

Example 7-2 Using Local and Path in the Same Subnet, with dontroute

The following example shows local and path in the same subnet. dontroute is specified in this case:

server: MyDataServer2
local: 192.0.2.0
path: 192.0.2.128
local: 192.0.2.1
path: 192.0.2.129
dontroute
export: /vol/oradata2 mount: /mnt/oradata2

Example 7-3 Using Names in Place of IP Addresses, with Multiple Exports

server: MyDataServer3
local: LocalPath1
path: NfsPath1
local: LocalPath2
path: NfsPath2
local: LocalPath3
path: NfsPath3
local: LocalPath4
path: NfsPath4
dontroute
export: /vol/oradata3 mount: /mnt/oradata3
export: /vol/oradata4 mount: /mnt/oradata4
export: /vol/oradata5 mount: /mnt/oradata5
export: /vol/oradata6 mount: /mnt/oradata6
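
The following additional sketch shows the optional nfs_version and mnt_timeout parameters described in the preceding procedure. The server name, addresses, and paths are placeholders, and you should confirm the exact value syntax for nfs_version against your platform documentation:

server: MyDataServer4
local: 192.0.2.0
path: 192.0.2.2
nfs_version: nfsv4
mnt_timeout: 300
export: /vol/oradata7 mount: /mnt/oradata7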

7.3.6 Specifying Network Paths with the Oranfstab File

Direct NFS Client can use up to four network paths defined in the oranfstab file for an NFS server. Direct NFS Client performs load balancing across all specified paths. If a specified path fails, then Direct NFS Client reissues I/O commands over any remaining paths.

Use the following SQL*Plus views for managing Direct NFS Client in a cluster environment:

  • gv$dnfs_servers: Shows a table of servers accessed using Direct NFS Client.

  • gv$dnfs_files: Shows a table of files currently open using Direct NFS Client.

  • gv$dnfs_channels: Shows a table of open network paths (or channels) to servers for which Direct NFS Client is providing files.

  • gv$dnfs_stats: Shows a table of performance statistics for Direct NFS Client.

Note:

Use v$ views for single instances, and gv$ views for Oracle Clusterware and Oracle RAC storage.
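
For example, the following queries show which NFS servers and files Direct NFS Client is currently serving across the cluster; this is a sketch that assumes you are connected to a database instance with SYSDBA privileges:

$ sqlplus / as sysdba
SQL> SELECT inst_id, svrname, dirname FROM gv$dnfs_servers;
SQL> SELECT inst_id, filename FROM gv$dnfs_files;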

7.3.7 Creating Directories for Oracle Clusterware Files on Shared File Systems

Use the following instructions to create directories for Oracle Clusterware files. You can also configure shared file systems for the Oracle Database and recovery files.

Note:

For both NFS and OCFS2 storage, you must complete this procedure only if you want to place the Oracle Clusterware files on a separate file system from the Oracle base directory.

To create directories for the Oracle Clusterware files on separate file systems from the Oracle base directory, follow these steps:

  1. If necessary, configure the shared file systems to use and mount them on each node.

    Note:

    The mount point that you use for the file system must be identical on each node. Ensure that the file systems are configured to mount automatically when a node restarts.
  2. Use the df command to determine the free disk space on each mounted file system.

  3. From the display, identify the file systems to use. Choose a file system with a minimum of 700 MB of free disk space (one 400 MB OCR and one 300 MB voting file, with external redundancy).

    If you are using the same file system for multiple file types, then add the disk space requirements for each type to determine the total disk space requirement.

  4. Note the names of the mount point directories for the file systems that you identified.

  5. If the user performing installation (typically, grid or oracle) has permissions to create directories on the storage location where you plan to install Oracle Clusterware files, then OUI creates the Oracle Clusterware file directory.

    If the user performing installation does not have write access, then you must create these directories manually using commands similar to the following to create the recommended subdirectories in each of the mount point directories and set the appropriate owner, group, and permissions on the directory. For example, where the user is oracle, and the Oracle Clusterware file storage area is cluster:

    # mkdir /mount_point/cluster
    # chown oracle:oinstall /mount_point/cluster
    # chmod 775 /mount_point/cluster
    

    Note:

    After installation, directories in the installation path for the OCR files should be owned by root, and not writable by any account other than root.

When you have completed creating a subdirectory in the mount point directory, and set the appropriate owner, group, and permissions, you have completed OCFS2 or NFS configuration for Oracle Grid Infrastructure.

7.3.8 Creating Directories for Oracle Database Files on Shared File Systems

Use the following instructions to create directories for shared file systems for Oracle Database and recovery files (for example, for an Oracle RAC database).

  1. If necessary, configure the shared file systems and mount them on each node.

    Note:

    The mount point that you use for the file system must be identical on each node. Ensure that the file systems are configured to mount automatically when a node restarts.
  2. Use the df -h command to determine the free disk space on each mounted file system.

  3. From the display, identify the file systems:

    Database files: Choose either of the following:

    • A single file system with at least 1.5 GB of free disk space.

    • Two or more file systems with at least 1.5 GB of free disk space in total.

    Recovery files: Choose a file system with at least 2 GB of free disk space.

    If you are using the same file system for multiple file types, then add the disk space requirements for each type to determine the total disk space requirement.

  4. Note the names of the mount point directories for the file systems that you identified.

  5. If the user performing installation (typically, oracle) has permissions to create directories on the disks where you plan to install Oracle Database, then DBCA creates the Oracle Database file directory, and the Recovery file directory.

    If the user performing installation does not have write access, then you must create these directories manually using commands similar to the following to create the recommended subdirectories in each of the mount point directories and set the appropriate owner, group, and permissions on them:

    • Database file directory:

      # mkdir /mount_point/oradata
      # chown oracle:oinstall /mount_point/oradata
      # chmod 775 /mount_point/oradata
      
    • Recovery file directory (Fast Recovery Area):

      # mkdir /mount_point/recovery_area
      # chown oracle:oinstall /mount_point/recovery_area
      # chmod 775 /mount_point/recovery_area
      

Making members of the oinstall group the owners of these directories permits the directories to be read by multiple Oracle homes, including those with different OSDBA groups.

When you have completed creating subdirectories in each of the mount point directories, and set the appropriate owner, group, and permissions, you have completed OCFS2 or NFS configuration for Oracle Database shared storage.

7.3.9 Disabling Direct NFS Client Oracle Disk Management Control of NFS

Complete the following steps to disable Direct NFS Client:

  1. Log in as the Oracle Grid Infrastructure installation owner, and disable Direct NFS Client using the following commands, where Grid_home is the path to the Oracle Grid Infrastructure home:

    $ cd Grid_home/rdbms/lib
    $ make -f ins_rdbms.mk dnfs_off
    

    Enter these commands on each node in the cluster, or on the shared Grid home if you are using a shared home for the Oracle Grid Infrastructure installation.

  2. Remove the oranfstab file.

Note:

If you remove an NFS path that an Oracle Database is using, then you must restart the database for the change to be effective.

7.4 Oracle Automatic Storage Management Storage Configuration

Review the following sections to configure storage for Oracle Automatic Storage Management:

7.4.1 Configuring Storage for Oracle Automatic Storage Management

This section describes how to configure storage for use with Oracle Automatic Storage Management.

7.4.1.1 Identifying Storage Requirements for Oracle Automatic Storage Management

To identify the storage requirements for using Oracle ASM, you must determine how many devices and the amount of free disk space that you require. To complete this task, follow these steps:

  1. Determine whether you want to use Oracle ASM for Oracle Clusterware files (OCR and voting files), Oracle Database files, recovery files, or all files except for Oracle Clusterware or Oracle Database binaries. Oracle Database files include data files, control files, redo log files, the server parameter file, and the password file.

    Note:

    • You do not have to use the same storage mechanism for Oracle Clusterware, Oracle Database files and recovery files. You can use a shared file system for one file type and Oracle ASM for the other.
    • There are two types of Oracle Clusterware files: OCR files and voting files. Each type of file can be stored on either Oracle ASM or a cluster file system. All the OCR files or all the voting files must use the same type of storage. You cannot have some OCR files stored in Oracle ASM and other OCR files in a cluster file system. However, you can use one type of storage for the OCR files and a different type of storage for the voting files if all files of each type use the same type of storage.

  2. Choose the Oracle ASM redundancy level to use for the Oracle ASM disk group.

    Except when using external redundancy, Oracle ASM mirrors all Oracle Clusterware files in separate failure groups within a disk group. A quorum failure group, a special type of failure group, contains mirror copies of voting files when voting files are stored in normal or high redundancy disk groups. If the voting files are in a disk group, then the disk groups that contain Oracle Clusterware files (OCR and voting files) have a higher minimum number of failure groups than other disk groups because the voting files are stored in quorum failure groups.

    A quorum failure group is a special type of failure group that is used to store the Oracle Clusterware voting files. The quorum failure group is used to ensure that a quorum of the specified failure groups are available. When Oracle ASM mounts a disk group that contains Oracle Clusterware files, the quorum failure group is used to determine if the disk group can be mounted in the event of the loss of one or more failure groups. Disks in the quorum failure group do not contain user data, therefore a quorum failure group is not considered when determining redundancy requirements in respect to storing user data.

    The redundancy levels are as follows:

    • External redundancy

      An external redundancy disk group requires a minimum of one disk device. The effective disk space in an external redundancy disk group is the sum of the disk space in all of its devices.

      Because Oracle ASM does not mirror data in an external redundancy disk group, Oracle recommends that you use external redundancy with storage devices such as RAID, or other similar devices that provide their own data protection mechanisms.

    • Normal redundancy

      In a normal redundancy disk group, to increase performance and reliability, Oracle ASM by default uses two-way mirroring. A normal redundancy disk group requires a minimum of two disk devices (or two failure groups). The effective disk space in a normal redundancy disk group is half the sum of the disk space in all of its devices.

      For Oracle Clusterware files, a normal redundancy disk group requires a minimum of three disk devices (two of the three disks are used by failure groups and all three disks are used by the quorum failure group) and provides three voting files and one OCR (one primary and one secondary copy). With normal redundancy, the cluster can survive the loss of one failure group.

      For most installations, Oracle recommends that you select normal redundancy.

    • High redundancy

      In a high redundancy disk group, Oracle ASM uses three-way mirroring to increase performance and provide the highest level of reliability. A high redundancy disk group requires a minimum of three disk devices (or three failure groups). The effective disk space in a high redundancy disk group is one-third the sum of the disk space in all of its devices.

      For Oracle Clusterware files, a high redundancy disk group requires a minimum of five disk devices (three of the five disks are used by failure groups and all five disks are used by the quorum failure group) and provides five voting files and one OCR (one primary and two secondary copies). With high redundancy, the cluster can survive the loss of two failure groups.

      While high redundancy disk groups do provide a high level of data protection, you should consider the greater cost of additional storage devices before deciding to select high redundancy disk groups.

    Note:

    After a disk group is created, you cannot alter the redundancy level of the disk group.
  3. Determine the total amount of disk space that you require for Oracle Clusterware files, and for the database files and recovery files.

    Use Table 7-5 and Table 7-6 to determine the minimum number of disks and the minimum disk space requirements for installing Oracle Clusterware files, and installing the starter database, where you have voting files in a separate disk group:

    Table 7-5 Total Oracle Clusterware Storage Space Required by Redundancy Type

    Redundancy Level: External
    Minimum Number of Disks: 1
    Oracle Cluster Registry (OCR) Files: 400 MB
    Voting Files: 300 MB
    Both File Types: 700 MB
    Total Storage When Grid Infrastructure Management Repository is Selected: At least 4 GB for a cluster with 4 nodes or less (3.3 GB + 400 MB + 300 MB). Additional space is required for clusters with 5 or more nodes. For example, a six-node cluster allocation should be at least 5 GB: (3.3 GB + 2 * 500 MB + 400 MB + 300 MB).

    Redundancy Level: Normal
    Minimum Number of Disks: 3
    Oracle Cluster Registry (OCR) Files: At least 400 MB for each failure group, or 800 MB
    Voting Files: 900 MB
    Both File Types: 1.7 GB (see Footnote 1)
    Total Storage When Grid Infrastructure Management Repository is Selected: At least 8.3 GB for a cluster with 4 nodes or less (6.6 GB + 2 * 400 MB + 3 * 300 MB). Additional space is required for clusters with 5 or more nodes. For example, a six-node cluster allocation should be at least 10.3 GB: (2 * (3.3 GB + 2 * 500 MB)) + (2 * 400 MB) + (3 * 300 MB).

    Redundancy Level: High
    Minimum Number of Disks: 5
    Oracle Cluster Registry (OCR) Files: At least 400 MB for each failure group, or 1.2 GB
    Voting Files: 1.5 GB
    Both File Types: 2.7 GB
    Total Storage When Grid Infrastructure Management Repository is Selected: At least 12.6 GB for a cluster with 4 nodes or less (3 * 3.3 GB + 3 * 400 MB + 5 * 300 MB). Additional space is required for clusters with 5 or more nodes. For example, a six-node cluster allocation should be at least 15.6 GB: (3 * (3.3 GB + 2 * 500 MB)) + (3 * 400 MB) + (5 * 300 MB).


    Footnote 1 If you create a disk group during installation, then it must be at least 2 GB.

    Note:

    If the voting files are in a disk group, be aware that disk groups with Oracle Clusterware files (OCR and voting files) have a higher minimum number of failure groups than other disk groups.

    If you create a disk group as part of the installation in order to install the OCR and voting files, then the installer requires that you create these files on a disk group with at least 2 GB of available space.

    Table 7-6 Total Oracle Database Storage Space Required by Redundancy Type

    Redundancy Level: External
    Minimum Number of Disks: 1
    Database Files: 1.5 GB
    Recovery Files: 3 GB
    Both File Types: 4.5 GB

    Redundancy Level: Normal
    Minimum Number of Disks: 2
    Database Files: 3 GB
    Recovery Files: 6 GB
    Both File Types: 9 GB

    Redundancy Level: High
    Minimum Number of Disks: 3
    Database Files: 4.5 GB
    Recovery Files: 9 GB
    Both File Types: 13.5 GB


  4. Determine an allocation unit size. Every Oracle ASM disk is divided into allocation units (AU). An allocation unit is the fundamental unit of allocation within a disk group. You can select the AU Size value from 1, 2, 4, 8, 16, 32 or 64 MB, depending on the specific disk group compatibility level. The default value is set to 1 MB.

  5. For Oracle Clusterware installations, you must also add additional disk space for the Oracle ASM metadata. You can use the following formula to calculate the disk space requirements (in MB) for OCR and voting files, and the Oracle ASM metadata:

    total = [2 * ausize * disks] + [redundancy * (ausize * (nodes * (clients + 1) + 30) + (64 * nodes) + 533)]

    Where:

    • redundancy = Number of mirrors: external = 1, normal = 2, high = 3.

    • ausize = Metadata AU size in megabytes (default is 1 MB)

    • nodes = Number of nodes in cluster.

    • clients = Number of database instances for each node.

    • disks = Number of disks in the disk group.

    For example, for a four-node Oracle RAC installation, using three disks in a normal redundancy disk group, you require an additional 1684 MB of space:

    [2 * 1 * 3] + [2 * (1 * (4 * (4 + 1) + 30) + (64 * 4) + 533)] = 1684 MB

    To ensure high availability of Oracle Clusterware files on Oracle ASM, you need to have at least 2 GB of disk space for Oracle Clusterware files in three separate failure groups, with at least three physical disks. Each disk must have at least 1 GB of capacity to ensure that there is sufficient space to create Oracle Clusterware files.

  6. Optionally, identify failure groups for the Oracle ASM disk group devices.

    If you intend to use a normal or high redundancy disk group, then you can further protect your database against hardware failure by associating a set of disk devices in a custom failure group. By default, each device comprises its own failure group. However, if two disk devices in a normal redundancy disk group are attached to the same SCSI controller, then the disk group becomes unavailable if the controller fails. The controller in this example is a single point of failure.

    To protect against failures of this type, you could use two SCSI controllers, each with two disks, and define a failure group for the disks attached to each controller. This configuration would enable the disk group to tolerate the failure of one SCSI controller.

    Note:

    Define custom failure groups after installation, using the GUI tool ASMCA, the command line tool asmcmd, or SQL commands (a SQL sketch appears at the end of this section).

    If you define custom failure groups, then for failure groups containing database files only, you must specify a minimum of two failure groups for normal redundancy disk groups and three failure groups for high redundancy disk groups.

    For failure groups containing database files and clusterware files, including voting files, you must specify a minimum of three failure groups for normal redundancy disk groups, and five failure groups for high redundancy disk groups.

    Disk groups containing voting files must have at least 3 failure groups for normal redundancy or at least 5 failure groups for high redundancy. Otherwise, the minimum is 2 and 3 respectively. The minimum number of failure groups applies whether or not they are custom failure groups.

  7. If you are sure that a suitable disk group does not exist on the system, then install or identify appropriate disk devices to add to a new disk group. Use the following guidelines when identifying appropriate disk devices:

    • All of the devices in an Oracle ASM disk group should be the same size and have the same performance characteristics.

    • Do not specify multiple partitions on a single physical disk as a disk group device. Each disk group device should be on a separate physical disk.

    • Although you can specify a logical volume as a device in an Oracle ASM disk group, Oracle does not recommend their use because it adds a layer of complexity that is unnecessary with Oracle ASM. In addition, Oracle Grid Infrastructure and Oracle RAC require a cluster logical volume manager in case you decide to use a logical volume with Oracle ASM and Oracle RAC.

      Oracle recommends that if you choose to use a logical volume manager, then use the logical volume manager to represent a single LUN without striping or mirroring, so that you can minimize the impact of the additional storage layer.

See Also:

Oracle Automatic Storage Management Administrator's Guide for information about allocation units
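
The following sketch, referenced from step 6, creates a normal redundancy disk group for Oracle Clusterware files with custom failure groups, a quorum failure group, and a nondefault allocation unit size. The disk group name, disk paths, and attribute values are assumptions for illustration; run the statement against the Oracle ASM instance with the SYSASM privilege:

$ sqlplus / as sysasm
SQL> CREATE DISKGROUP ocrvote NORMAL REDUNDANCY
     FAILGROUP fg1 DISK '/dev/mapper/ocrvote1'
     FAILGROUP fg2 DISK '/dev/mapper/ocrvote2'
     QUORUM FAILGROUP fg3 DISK '/dev/mapper/ocrvote3'
     ATTRIBUTE 'compatible.asm' = '12.1', 'au_size' = '4M';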

7.4.1.2 Creating Files on a NAS Device for Use with Oracle ASM

If you have a certified NAS storage device, then you can create zero-padded files in an NFS mounted directory and use those files as disk devices in an Oracle ASM disk group.

To create these files, follow these steps:

  1. If necessary, create an exported directory for the disk group files on the NAS device.

    Refer to the NAS device documentation for more information about completing this step.

  2. Switch user to root.

  3. Create a mount point directory on the local system. For example:

    # mkdir -p /mnt/oracleasm
    
  4. To ensure that the NFS file system is mounted when the system restarts, add an entry for the file system in the mount file /etc/fstab.

    See Also:

    My Oracle Support Note 359515.1 for updated NAS mount option information, available at the following URL:
    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=359515.1
    

    For more information about editing the mount file for the operating system, see the man pages. For more information about recommended mount options, see the section "Checking NFS Mount and Buffer Size Parameters for Oracle RAC".

  5. Enter a command similar to the following to mount the NFS file system on the local system:

    # mount /mnt/oracleasm
    
  6. Choose a name for the disk group to create. For example: sales1.

  7. Create a directory for the files on the NFS file system, using the disk group name as the directory name. For example:

    # mkdir /mnt/oracleasm/sales1
    
  8. Use commands similar to the following to create the required number of zero-padded files in this directory:

    # dd if=/dev/zero of=/mnt/oracleasm/sales1/disk1 bs=1024k count=1000 oflag=direct
    

    This example creates 1 GB files on the NFS file system. You must create one, two, or three files respectively to create an external, normal, or high redundancy disk group.

  9. Enter commands similar to the following to change the owner, group, and permissions on the directory and files that you created, where the installation owner is grid, and the OSASM group is asmadmin:

    # chown -R grid:asmadmin /mnt/oracleasm
    # chmod -R 660 /mnt/oracleasm
    
  10. If you plan to install Oracle RAC or a standalone Oracle Database, then during installation, edit the Oracle ASM disk discovery string to specify a regular expression that matches the file names you created. For example:

    /mnt/oracleasm/sales1/
    

    Note:

    During installation, disks labelled as ASMLIB disks are listed as candidate disks when using the default discovery string. However, if the disk has a header status of MEMBER, then it is not a candidate disk.

7.4.1.3 Using an Existing Oracle ASM Disk Group

Select from the following choices to store either database or recovery files in an existing Oracle ASM disk group, depending on installation method:

  • If you select an installation method that runs Database Configuration Assistant in interactive mode, then you can decide whether you want to create a disk group, or to use an existing one.

    The same choice is available to you if you use Database Configuration Assistant after the installation to create a database.

  • If you select an installation method that runs Database Configuration Assistant in noninteractive mode, then you must choose an existing disk group for the new database; you cannot create a disk group. However, you can add disk devices to an existing disk group if it has insufficient free space for your requirements.

Note:

The Oracle ASM instance that manages the existing disk group can be running in a different Oracle home directory.

To determine whether a suitable disk group exists, or whether a disk group has sufficient free disk space, you can use Oracle Enterprise Manager Cloud Control or the Oracle ASM command-line tool (asmcmd), as follows:

  1. Connect to the Oracle ASM instance and start the instance if necessary:

    $ $ORACLE_HOME/bin/asmcmd
    ASMCMD> startup
    
  2. Enter one of the following commands to view the existing disk groups, their redundancy level, and the amount of free disk space in each one:

    ASMCMD> lsdg
    

    or:

    $ $ORACLE_HOME/bin/asmcmd -p lsdg
    
  3. From the output, identify a disk group with the appropriate redundancy level and note the free space that it contains.

  4. If necessary, install or identify the additional disk devices required to meet the storage requirements listed in the previous section.

    Note:

    If you are adding devices to an existing disk group, then Oracle recommends that you use devices that have the same size and performance characteristics as the existing devices in that disk group.
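
If you decide to add devices to an existing disk group, the following SQL is a minimal sketch of the operation, run from the Oracle ASM instance as a user with the SYSASM privilege. The disk group name (sales1) and device path (/dev/sdd1) are placeholders for your own values, and the device must already be discoverable through your Oracle ASM disk discovery string.

$ sqlplus / as sysasm
-- Add one candidate device to the existing disk group (placeholder names)
SQL> ALTER DISKGROUP sales1 ADD DISK '/dev/sdd1';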

7.4.2 Configuring Storage Device Path Persistence Using ASMLIB

Review the following section to configure ASMLIB:

Note:

Oracle ASMLIB is not supported on IBM:Linux on System z.

7.4.2.1 About Oracle ASM with ASMLIB

The Oracle Automatic Storage Management (Oracle ASM) library driver (ASMLIB) simplifies the configuration and management of block disk devices by eliminating the need to rebind block disk devices used with Oracle ASM each time the system is restarted.

With ASMLIB, you define the range of disks that you want to make available as Oracle ASM disks. ASMLIB maintains permissions and disk labels that are persistent on the storage device, so that the disk label remains available even after an operating system upgrade. You can update storage paths on all cluster member nodes by running one oracleasm command on each node, without manually modifying udev rules files to provide permissions and path persistence.

Note:

If you configure disks using ASMLIB, then you must change the disk discovery string to ORCL:*. If the disk discovery string is set to ORCL:*, or is left empty (""), then the installer discovers these disks.
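
For example, if you need to set the discovery string on an existing Oracle ASM instance rather than in the installer, the following statement is a sketch only; it is run from the Oracle ASM instance and assumes the instance uses a server parameter file (SPFILE):

-- Set the Oracle ASM disk discovery string to the ASMLIB prefix (sketch)
SQL> ALTER SYSTEM SET asm_diskstring = 'ORCL:*' SCOPE=BOTH;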

7.4.2.2 Configuring ASMLIB to Maintain Block Devices

To use the Oracle Automatic Storage Management library driver (ASMLIB) to configure Oracle ASM devices, complete the following tasks.

Note:

To create a database during the installation using the Oracle ASM library driver, you must choose an installation method that runs ASMCA in interactive mode. You must also change the default disk discovery string to ORCL:*.
7.4.2.2.1 Installing and Configuring the Oracle ASM Library Driver Software

ASMLIB is already included with Oracle Linux packages, and with SUSE Linux Enterprise Server. If you are a member of the Unbreakable Linux Network, then you can install the ASMLIB RPMs by subscribing to the Oracle Linux channel, and using yum to retrieve the most current package for your system and kernel. For additional information, see the following URL:

http://www.oracle.com/technetwork/topics/linux/asmlib/index-101839.html

To install and configure the ASMLIB driver software manually, follow these steps:

  1. Enter the following command to determine the kernel version and architecture of the system:

    # uname -rm
    
  2. Download the required ASMLIB packages from the Oracle Technology Network website:

    http://www.oracle.com/technetwork/server-storage/linux/downloads/index-088143.html
    

    Note:

    You must install oracleasm-support package version 2.0.1 or later to use ASMLIB on Red Hat Enterprise Linux 5 Advanced Server. ASMLIB is already included with SUSE Linux Enterprise Server distributions.

    See Also:

    My Oracle Support Note 1089399.1 for information about ASMLIB support with Red Hat distributions:

    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1089399.1

  3. Switch user to the root user:

    $ su -
    
  4. Install the following packages in sequence, where version is the version of the ASMLIB driver, arch is the system architecture, and kernel is the version of the kernel that you are using:

    oracleasm-support-version.arch.rpm
    oracleasm-kernel-version.arch.rpm
    oracleasmlib-version.arch.rpm
    

    Enter a command similar to the following to install the packages:

    # rpm -ivh oracleasm-support-version.arch.rpm \
               oracleasm-kernel-version.arch.rpm \
               oracleasmlib-version.arch.rpm
    

    For example, if you are using the Red Hat Enterprise Linux 5 AS kernel on an AMD64 system, then enter a command similar to the following:

    # rpm -ivh oracleasm-support-2.1.3-1.el5.x86_64.rpm \
         oracleasm-2.6.18-194.26.1.el5xen-2.0.5-1.el5.x86_64.rpm \
         oracleasmlib-2.0.4-1.el5.x86_64.rpm
    
  5. Enter the following command to run the oracleasm initialization script with the configure option:

    # /usr/sbin/oracleasm configure -i
    

    Note:

    The oracleasm command in /usr/sbin is the command you should use. The /etc/init.d path is not deprecated, but the oracleasm binary in that path is now typically used for internal commands.
  6. Enter the following information in response to the prompts that the script displays (a sample session appears after this procedure):

    Table 7-7 ORACLEASM Configure Prompts and Responses

    Prompt Suggested Response

    Default user to own the driver interface:

    Standard groups and users configuration: Specify the Oracle software owner user (for example, oracle).

    Job role separation groups and users configuration: Specify the Oracle Grid Infrastructure software owner user (for example, grid).

    Default group to own the driver interface:

    Standard groups and users configuration: Specify the OSDBA group for the database (for example, dba).

    Job role separation groups and users configuration: Specify the OSASM group for storage administration (for example, asmadmin).

    Start Oracle ASM Library driver on boot (y/n):

    Enter y to start the Oracle Automatic Storage Management library driver when the system starts.

    Scan for Oracle ASM disks on boot (y/n)

    Enter y to scan for Oracle ASM disks when the system starts.


    The script completes the following tasks:

    • Creates the /etc/sysconfig/oracleasm configuration file

    • Creates the /dev/oracleasm mount point

    • Mounts the ASMLIB driver file system

      Note:

      The ASMLIB driver file system is not a regular file system. It is used only by the Oracle ASM library to communicate with the Oracle ASM driver.
  7. Enter the following command to load the oracleasm kernel module:

    # /usr/sbin/oracleasm init
    
  8. Repeat this procedure on all nodes in the cluster where you want to install Oracle RAC.
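
The following transcript is an illustrative sketch of the prompts listed in Table 7-7 for a job role separation configuration, where the Oracle Grid Infrastructure software owner is grid and the OSASM group is asmadmin; the exact prompt text and defaults can vary between ASMLIB versions.

# /usr/sbin/oracleasm configure -i
Default user to own the driver interface: grid
Default group to own the driver interface: asmadmin
Start Oracle ASM library driver on boot (y/n): y
Scan for Oracle ASM disks on boot (y/n): y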

7.4.2.2.2 Configuring Disk Devices to Use Oracle ASM Library Driver on x86 Systems

To configure the disk devices to use in an Oracle ASM disk group, follow these steps:

  1. If you intend to use IDE, SCSI, or RAID devices in the Oracle ASM disk group, then follow these steps:

    1. If necessary, install or configure the shared disk devices that you intend to use for the disk group and restart the system.

    2. To identify the device name for the disks that you want to use, enter the following command:

      # /sbin/fdisk -l
      

      Depending on the type of disk, the device name can vary. Table 7-8 describes some types of disk paths:

      Table 7-8 Types of Linux Storage Disk Paths

      Disk Type Device Name Format Description

      IDE disk

      /dev/hdxn
      

      In this example, x is a letter that identifies the IDE disk and n is the partition number. For example, /dev/hda is the first disk on the first IDE bus.

      SCSI disk

      /dev/sdxn
      

      In this example, x is a letter that identifies the SCSI disk and n is the partition number. For example, /dev/sda is the first disk on the first SCSI bus.

      RAID disk

      /dev/rd/cxdypz
      /dev/ida/cxdypz
      

      Depending on the RAID controller, RAID devices can have different device names. In the examples shown, x is a number that identifies the controller, y is a number that identifies the disk, and z is a number that identifies the partition. For example, /dev/ida/c0d1 is the second logical drive on the first controller.


      To include devices in a disk group, you can specify either whole-drive device names or partition device names.

      Note:

      Oracle recommends that you create a single whole-disk partition on each disk.
    3. Use either fdisk or parted to create a single whole-disk partition on each disk device; an example parted command appears after this procedure.

  2. Enter a command similar to the following to mark a disk as an Oracle ASM disk:

    # /usr/sbin/oracleasm createdisk DISK1 /dev/sdb1
    

    In this example, DISK1 is the name you assign to the disk.

    Note:

    The disk names that you specify can contain uppercase letters, numbers, and the underscore character. They must start with an uppercase letter.

    If you are using a multi-pathing disk driver with Oracle ASM, then make sure that you specify the correct logical device name for the disk.

  3. To make the disk available on the other nodes in the cluster, enter the following command as root on each node:

    # /usr/sbin/oracleasm scandisks
    

    This command identifies shared disks attached to the node that are marked as Oracle ASM disks.
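
As an illustration of creating a single whole-disk partition in step 1, the following parted commands are a sketch only; /dev/sdb is a placeholder for the device name that you identified with fdisk -l, and partitioning destroys any existing data on the device.

# parted -s /dev/sdb mklabel gpt
# parted -s -a optimal /dev/sdb mkpart primary 1MiB 100%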

7.4.2.2.3 Configuring Disk Devices to Use Oracle ASM Library Driver on IBM: Linux on System z Systems

To configure the disk devices to use in an Oracle ASM disk group on IBM: Linux on System z, follow these steps:

  1. If you formatted the DASD with the compatible disk layout, then enter a command similar to the following to create a single whole-disk partition on the device:

    # /sbin/fdasd -a /dev/dasdxxxx
    
  2. Enter a command similar to the following to mark a disk as an ASM disk:

    # /etc/init.d/oracleasm createdisk DISK1 /dev/dasdxxxx
    

    In this example, DISK1 is a name that you want to assign to the disk.

    Note:

    The disk names that you specify can contain uppercase letters, numbers, and the underscore character. They must start with an uppercase letter.

    If you are using a multi-pathing disk driver with ASM, then make sure that you specify the correct logical device name for the disk.

  3. To make the disk available on the other cluster nodes, enter the following command as root on each node:

    # /etc/init.d/oracleasm scandisks
    

    This command identifies shared disks attached to the node that are marked as ASM disks.

7.4.2.2.4 Administering the Oracle ASM Library Driver and Disks

To administer the Oracle Automatic Storage Management library driver (ASMLIB) and disks, use the /usr/sbin/oracleasm initialization script with different options, as described in Table 7-9:

Table 7-9 Disk Management Tasks Using ORACLEASM

Task Command Example Description

Configure or reconfigure ASMLIB

oracleasm configure -i

Use the configure option to reconfigure the Oracle Automatic Storage Management library driver, if necessary.

To see command options, enter oracleasm configure without the -i flag.

Change system restart load options for ASMLIB

oracleasm enable

Options are disable and enable.

Use the disable and enable options to change the actions of the Oracle Automatic Storage Management library driver when the system starts. The enable option causes the Oracle Automatic Storage Management library driver to load when the system starts.

Load or unload ASMLIB without restarting the system

oracleasm restart

Options are start, stop and restart.

Use the start, stop, and restart options to load or unload the Oracle Automatic Storage Management library driver without restarting the system.

Mark a disk for use with ASMLIB

oracleasm createdisk VOL1 /dev/sda1

Use the createdisk option to mark a disk device for use with the Oracle Automatic Storage Management library driver and give it a name, where labelname is the name you want to use to mark the device, and devicepath is the path to the device:

oracleasm createdisk labelname devicepath

Unmark a named disk device

oracleasm deletedisk VOL1

Use the deletedisk option to unmark a named disk device, where diskname is the name of the disk:

oracleasm deletedisk diskname

Caution: Do not use this command to unmark disks that are being used by an Oracle Automatic Storage Management disk group. You must delete the disk from the Oracle Automatic Storage Management disk group before you unmark it.

Determine if ASMLIB is using a disk device

oracleasm querydisk

Use the querydisk option to determine if a disk device or disk name is being used by the Oracle Automatic Storage Management library driver, where diskname_devicename is the name of the disk or device that you want to query:

oracleasm querydisk diskname_devicename

List Oracle ASMLIB disks

oracleasm listdisks

Use the listdisks option to list the disk names of marked ASMLIB disks.

Identify disks marked as ASMLIB disks

oracleasm scandisks

Use the scandisks option to enable cluster nodes to identify which shared disks have been marked as ASMLIB disks on another node.

Rename ASMLIB disks

oracleasm renamedisk VOL1 VOL2

Use the renamedisk option to change the label of an Oracle ASM library driver disk or device by using the following syntax, where manager specifies the manager device, label_device specifies the disk you intend to rename, as specified either by OracleASM label name or by the device path, and new_label specifies the new label you want to use for the disk:

oracleasm renamedisk [-l manager] [-v] label_device new_label

Use the -v flag to provide a verbose output for debugging.

Caution: You must ensure that all Oracle Database and Oracle ASM instances have ceased using the disk before you relabel the disk. If you do not do this, then you may lose data.
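
For example, a short administrative sequence using the options in Table 7-9 might look like the following, where VOL1 is a placeholder label name and /dev/sdd1 is a placeholder device path:

# /usr/sbin/oracleasm createdisk VOL1 /dev/sdd1
# /usr/sbin/oracleasm querydisk VOL1
# /usr/sbin/oracleasm listdisks
# /usr/sbin/oracleasm deletedisk VOL1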


7.4.2.3 Configuring ASMLIB for Multipath Disks

Additional configuration is required to use the Oracle Automatic Storage Management library Driver (ASMLIB) with third party vendor multipath disks.

See Also:

My Oracle Support site for updates to supported storage options:
https://support.oracle.com/
7.4.2.3.1 About Using Oracle ASM with Multipath Disks

Oracle ASM requires that each disk is uniquely identified. If the same disk appears under multiple paths, then Oracle ASM reports errors. In a multipath disk configuration, the same disk can appear three times:

  1. The initial path to the disk

  2. The second path to the disk

  3. The multipath disk access point

For example: If you have one local disk, /dev/sda, and one disk attached with external storage, then your server shows two connections, or paths, to that external storage. The Linux SCSI driver shows both paths. They appear as /dev/sdb and /dev/sdc. The system may access either /dev/sdb or /dev/sdc, but the access is to the same disk.

If you enable multipathing, then you have a multipath disk (for example, /dev/multipatha), which can access both /dev/sdb and /dev/sdc; any I/O to multipatha can use either the sdb or sdc path. If a system is using the /dev/sdb path, and that cable is unplugged, then the system shows an error, but the multipath disk switches from the /dev/sdb path to the /dev/sdc path.

Most system software is unaware of multipath configurations and can use any of the paths (sdb, sdc, or multipatha). ASMLIB also is unaware of multipath configurations.

By default, ASMLIB recognizes the first disk path that Linux reports to it, but because it imprints an identity on that disk, it recognizes that disk only under one path. Depending on your storage driver, it may recognize the multipath disk, or it may recognize one of the single disk paths.

Instead of relying on the default, you should configure Oracle ASM to recognize the multipath disk. Ensure that the /etc/multipath.conf file is readable (has +r access), so that multipath shared storage devices can be recognized for Oracle ASM.

7.4.2.3.2 Disk Scan Ordering

The ASMLIB configuration file is located in the path /etc/sysconfig/oracleasm. It contains all the startup configuration you specified with the command /etc/init.d/oracleasm configure. That command cannot configure scan ordering.

The configuration file contains many configuration variables. The ORACLEASM_SCANORDER variable specifies disks to be scanned first. The ORACLEASM_SCANEXCLUDE variable specifies the disks that are to be ignored.

Configure values for ORACLEASM_SCANORDER using space-delimited prefix strings. A prefix string is the common string associated with a type of disk. For example, if you use the prefix string sd, then this string matches all SCSI devices, including /dev/sda, /dev/sdb, /dev/sdc and so on. Note that these are not globs. They do not use wild cards. They are simple prefixes. Also note that the path is not a part of the prefix. For example, the /dev/ path is not part of the prefix for SCSI disks that are in the path /dev/sd*.

For Oracle Linux and Red Hat Enterprise Linux version 5, when scanning, the kernel sees the devices as /dev/mapper/XXX entries. By default, the device file naming scheme udev creates the /dev/mapper/XXX names for human readability. Any configuration using ORACLEASM_SCANORDER should use the /dev/mapper/XXX entries.

7.4.2.3.3 Configuring Disk Scan Ordering to Select Multipath Disks

To configure ASMLIB to select multipath disks first, complete the following procedure:

  1. Using a text editor, open the ASMLIB configuration file /etc/sysconfig/oracleasm.

  2. Edit the ORACLEASM_SCANORDER variable to provide the prefix path of the multipath disks. For example, if the multipath disks use the prefix multipath (/dev/mapper/multipatha, /dev/mapper/multipathb and so on), and the multipath disks mount SCSI disks, then provide a prefix path similar to the following:

    ORACLEASM_SCANORDER="multipath sd"
    
  3. Save the file.

When you have completed this procedure, then when ASMLIB scans disks, it first scans all disks with the prefix string multipath, and labels these disks as Oracle ASM disks using the /dev/mapper/multipathX value. It then scans all disks with the prefix string sd. However, because ASMLIB recognizes that these disks have already been labeled with the /dev/mapper/multipath string values, it ignores these disks. After scanning for the prefix strings multipath and sd, Oracle ASM then scans for any other disks that do not match the scan order.

In the example in step 2, the key word multipath is actually the alias for multipath devices configured in /etc/multipath.conf under the multipaths section. For example:

multipaths {
       multipath {
               wwid                    3600508b4000156d700012000000b0000
               alias                   multipath
               ...
       }
       multipath {
               ...
               alias                   mympath
               ...
       }
       ...
}

The default device name is in the format /dev/mapper/mpath* (or a similar path).

7.4.2.3.4 Configuring Disk Order Scan to Exclude Single Path Disks

To configure ASMLIB to exclude particular single path disks, complete the following procedure:

  1. Using a text editor, open the ASMLIB configuration file /etc/sysconfig/oracleasm.

  2. Edit the ORACLEASM_SCANEXCLUDE variable to provide the prefix path of the single path disks. For example, if you want to exclude the single path disks /dev/sdb and /dev/sdc, then provide a prefix path similar to the following:

    ORACLEASM_SCANEXCLUDE="sdb sdc"
    
  3. Save the file.

When you have completed this procedure, then when ASMLIB scans disks, it scans all disks except for the disks with the sdb and sdc prefixes, so that it ignores /dev/sdb and /dev/sdc. It does not ignore other SCSI disks or multipath disks. If you have a multipath disk (for example, /dev/multipatha) that accesses both /dev/sdb and /dev/sdc, but you have configured ASMLIB to ignore sdb and sdc, then ASMLIB ignores these disks and instead marks only the multipath disk as an Oracle ASM disk.

7.4.3 Configuring Disk Devices Manually for Oracle ASM

This section contains the following information about preparing disk devices for use by Oracle ASM:

Note:

The operation of udev depends on the Linux version, vendor, and storage configuration.

7.4.3.1 About Device File Names and Ownership for Linux

By default, the device file naming scheme udev dynamically creates device file names when the server is started, and assigns ownership of them to root. If udev applies default settings, then it changes device file names and owners for voting files or Oracle Cluster Registry partitions, corrupting them when the server is restarted. For example, a voting file on a device named /dev/sdd owned by the user grid may be on a device named /dev/sdf owned by root after restarting the server.

If you use ASMLIB, then you do not need to ensure permissions and device path persistency in udev.

If you do not use ASMLIB, then you must create a custom rules file. When udev is started, it sequentially carries out rules (configuration directives) defined in rules files. These files are in the path /etc/udev/rules.d/. Rules files are read in lexical order. For example, rules in the file 10-wacom.rules are parsed and carried out before rules in the rules file 90-ib.rules.

When specifying the device information in the UDEV rules file, ensure that the OWNER, GROUP and MODE are specified before all other characteristics in the order shown. For example, if you want to include the characteristic ACTION on the UDEV line, then specify ACTION after OWNER, GROUP, and MODE.
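
For example, the following sketch shows a rule that also matches on ACTION, with OWNER, GROUP, and MODE placed before ACTION and the other characteristics; the RESULT value is a placeholder for the scsi_id value of your own device:

KERNEL=="sd?1", OWNER="grid", GROUP="asmadmin", MODE="0660", ACTION=="add|change", BUS=="scsi", PROGRAM=="/sbin/scsi_id", RESULT=="14f70656e66696c00000000"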

Where rules files describe the same devices, on the supported Linux kernel versions, the last file read is the one that is applied.

7.4.3.2 Configuring a Permissions File for Disk Devices for Oracle ASM

To configure a permissions file for disk devices for Oracle ASM, complete the following tasks:

  1. To obtain information about existing block devices, run the command scsi_id (/sbin/scsi_id) on storage devices from one cluster node to obtain their unique device identifiers. When you run the scsi_id command with the -s argument, specify the device path and name relative to the sysfs directory /sys; that is, specify /block/device to refer to /sys/block/device. For example:

    # /sbin/scsi_id -g -s /block/sdb/sdb1
    360a98000686f6959684a453333524174
     
    # /sbin/scsi_id -g -s /block/sde/sde1
    360a98000686f6959684a453333524179
    

    Record the unique SCSI identifiers of clusterware devices, so you can provide them when required.

    Note:

    The command scsi_id should return the same device identifier value for a given device, regardless of which node the command is run from.
  2. Configure SCSI devices as trusted devices (white listed) by editing the /etc/scsi_id.config file and adding options=-g to the file. For example:

    # cat > /etc/scsi_id.config
    vendor="ATA",options=-p 0x80
    options=-g
    
  3. Using a text editor, create a UDEV rules file for the Oracle ASM devices, setting permissions to 0660 for the installation owner and the group whose members are administrators of the Oracle Grid Infrastructure software. For example, on Oracle Linux, to create a role-based configuration rules.d file where the installation owner is grid and the OSASM group asmadmin, enter commands similar to the following:

    # vi /etc/udev/rules.d/99-oracle-asmdevices.rules
     
    KERNEL=="sd?1", OWNER="grid", GROUP="asmadmin", MODE="0660", 
    BUS=="scsi", PROGRAM=="/sbin/scsi_id", RESULT=="14f70656e66696c00000000"
    KERNEL=="sd?2", OWNER="grid", GROUP="asmadmin", MODE="0660",
    BUS=="scsi", PROGRAM=="/sbin/scsi_id", RESULT=="14f70656e66696c00000001"
    KERNEL=="sd?3", OWNER="grid", GROUP="asmadmin", MODE="0660",
    BUS=="scsi", PROGRAM=="/sbin/scsi_id", RESULT=="14f70656e66696c00000002"
    
  4. Copy the rules.d file to all other nodes on the cluster. For example:

    # scp 99-oracle-asmdevices.rules root@node2:/etc/udev/rules.d/99-oracle-asmdevices.rules
    
  5. Load updated block device partition tables on all member nodes of the cluster, using /sbin/partprobe devicename. You must do this as root.

  6. Run the command udevtest (/sbin/udevtest) to test the UDEV rules configuration you have created. The output should indicate that the devices are available and the rules are applied as expected. For example:

    # udevtest /block/sdb/sdb1
    main: looking at device '/block/sdb/sdb1' from subsystem 'block'
    udev_rules_get_name: add symlink
    'disk/by-id/scsi-360a98000686f6959684a453333524174-part1'
    udev_rules_get_name: add symlink
    'disk/by-path/ip-192.168.1.1:3260-iscsi-iqn.1992-08.com.netapp:sn.887085-part1'
    udev_node_mknod: preserve file '/dev/.tmp-8-17', because it has correct dev_t
    run_program: '/lib/udev/vol_id --export /dev/.tmp-8-17'
    run_program: '/lib/udev/vol_id' returned with status 4
    run_program: '/sbin/scsi_id'
    run_program: '/sbin/scsi_id' (stdout) '360a98000686f6959684a453333524174'
    run_program: '/sbin/scsi_id' returned with status 0
    udev_rules_get_name: rule applied, 'sdb1' becomes 'data1'
    udev_device_event: device '/block/sdb/sdb1' validate currently present symlinks
    udev_node_add: creating device node '/dev/data1', major = '8', minor = '17', 
    mode = '0640', uid = '0', gid = '500'
    udev_node_add: creating symlink
    '/dev/disk/by-id/scsi-360a98000686f6959684a453333524174-part1' to '../../data1'
    udev_node_add: creating symlink
    '/dev/disk/by-path/ip-192.168.1.1:3260-iscsi-iqn.1992-08.com.netapp:sn.84187085
    -part1' to '../../data1'
    main: run: 'socket:/org/kernel/udev/monitor'
    main: run: '/lib/udev/udev_run_devd'
    main: run: 'socket:/org/freedesktop/hal/udev_event'
    main: run: '/sbin/pam_console_apply /dev/data1
    /dev/disk/by-id/scsi-360a98000686f6959684a453333524174-part1
    /dev/disk/by-path/ip-192.168.1.1:3260-iscsi-iqn.1992-08.com.netapp:sn.84187085-
    part1'
    

    In the example output, note that applying the rules renames OCR device /dev/sdb1 to /dev/data1.

  7. Enter the command to restart the UDEV service.

    On Oracle Linux and Red Hat Enterprise Linux, the commands are:

    # /sbin/udevcontrol reload_rules
    # /sbin/start_udev
    

    On SUSE Linux Enterprise Server, the command is:

    # /etc/init.d/boot.udev restart
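
After the UDEV service restarts, you can optionally confirm that the rules were applied by checking the ownership and permissions of the Oracle ASM device files; the device names depend on your rules file. For example:

# ls -l /dev/sd*1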
    

7.4.4 Using Disk Groups with Oracle Database Files on Oracle ASM

Review the following sections to configure Oracle Automatic Storage Management (Oracle ASM) storage for Oracle Clusterware and Oracle Database Files:

7.4.4.1 Identifying and Using Existing Oracle Database Disk Groups on Oracle ASM

The following section describes how to identify existing disk groups and determine the free disk space that they contain. Optionally, identify failure groups for the Oracle ASM disk group devices. You can use the kfod op command to set the Oracle ASM disk discovery path. The kfod utility is located in the path Grid_home/bin. For more information about Oracle ASM disk discovery, see Oracle Automatic Storage Management Administrator's Guide.

If you intend to use a normal or high redundancy disk group, then you can further protect your database against hardware failure by associating a set of disk devices in a custom failure group. By default, each device comprises its own failure group. However, if two disk devices in a normal redundancy disk group are attached to the same SCSI controller, then the disk group becomes unavailable if the controller fails. The controller in this example is a single point of failure.

To protect against failures of this type, you could use two SCSI controllers, each with two disks, and define a failure group for the disks attached to each controller. This configuration would enable the disk group to tolerate the failure of one SCSI controller.

Note:

If you define custom failure groups, then you must specify a minimum of two failure groups for normal redundancy and three failure groups for high redundancy.
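
As a sketch of the two-controller example above, the following SQL creates a normal redundancy disk group with one failure group for each controller; the disk group name, failure group names, and device paths are placeholders for your own values, and the statement is run from the Oracle ASM instance as a user with the SYSASM privilege:

CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP controller1 DISK '/dev/sdc1', '/dev/sdd1'
  FAILGROUP controller2 DISK '/dev/sde1', '/dev/sdf1';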

7.4.4.2 Creating Disk Groups for Oracle Database Data Files

If you are sure that a suitable disk group does not exist on the system, then install or identify appropriate disk devices to add to a new disk group. Use the following guidelines when identifying appropriate disk devices:

  • All of the devices in an Oracle ASM disk group should be the same size and have the same performance characteristics.

  • Do not specify multiple partitions on a single physical disk as a disk group device. Oracle ASM expects each disk group device to be on a separate physical disk.

  • Although you can specify a logical volume as a device in an Oracle ASM disk group, Oracle does not recommend its use, because it adds a layer of complexity that is unnecessary with Oracle ASM. In addition, Oracle RAC requires a cluster logical volume manager if you decide to use a logical volume with Oracle ASM and Oracle RAC.

7.4.5 Configuring Oracle Automatic Storage Management Cluster File System

Oracle ACFS is installed as part of an Oracle Grid Infrastructure 12c Release 1 (12.1) installation.

You can also create a General Purpose File System configuration of ACFS using ASMCA.

See Also:

Section 7.1.2.3, "Restrictions and Guidelines for Oracle ACFS" for supported deployment options

To configure Oracle ACFS for an Oracle Database home for an Oracle RAC database:

  1. Install Oracle Grid Infrastructure for a cluster.

  2. Change directory to the Oracle Grid Infrastructure home. For example:

    $ cd /u01/app/12.1.0/grid
    
  3. Ensure that the Oracle Grid Infrastructure installation owner has read and write permissions on the storage mountpoint you want to use. For example, if you want to use the mountpoint /u02/acfsmounts/:

    $ ls -l /u02/acfsmounts
    
  4. Start Oracle ASM Configuration Assistant as the grid installation owner. For example:

    $ ./asmca
    
  5. The Configure ASM: ASM Disk Groups page shows you the Oracle ASM disk group you created during installation. Click the ASM Cluster File Systems tab.

  6. On the ASM Cluster File Systems page, right-click the Data disk, then select Create ACFS for Database Home.

  7. In the Create ACFS Hosted Database Home window, enter the following information:

    • Database Home ADVM Volume Device Name: Enter the name of the database home. The name must be unique in your enterprise. For example: dbase_01

    • Database Home Mountpoint: Enter the directory path for the mount point. For example: /u02/acfsmounts/dbase_01

      Make a note of this mount point for future reference.

    • Database Home Size (GB): Enter in gigabytes the size you want the database home to be.

    • Database Home Owner Name: Enter the name of the Oracle Database installation owner you plan to use to install the database. For example: oracle1

    • Database Home Owner Group: Enter the OSDBA group whose members you plan to provide when you install the database. Members of this group are given operating system authentication for the SYSDBA privileges on the database. For example: dba1

    • Click OK when you have completed your entries.

  8. Run the script generated by Oracle ASM Configuration Assistant as a privileged user (root). On an Oracle Clusterware environment, the script registers the ACFS as a resource managed by Oracle Clusterware. Registering ACFS as a resource helps Oracle Clusterware to mount the ACFS automatically in proper order when ACFS is used for an Oracle RAC database Home.

  9. During Oracle RAC installation, ensure that you or the DBA who installs Oracle RAC selects for the Oracle home the mount point you provided in the Database Home Mountpoint field (in the preceding example, /u02/acfsmounts/dbase_01).
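
After you run the script in step 8, you can optionally verify the Oracle ACFS registration and the mount before starting the Oracle RAC installation. The mount point shown is the one from the preceding example, and on Linux the acfsutil utility is typically located in /sbin:

# /sbin/acfsutil registry
$ df -h /u02/acfsmounts/dbase_01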

See Also:

Oracle Automatic Storage Management Administrator's Guide for more information about configuring and managing your storage with Oracle ACFS

7.4.6 Upgrading Existing Oracle ASM Instances

If you have an Oracle ASM installation from a prior release installed on your server, or in an existing Oracle Clusterware installation, then you can use Oracle Automatic Storage Management Configuration Assistant (ASMCA, located in the path Grid_home/bin) to upgrade the existing Oracle ASM instance to 12c Release 1 (12.1), and subsequently configure failure groups, Oracle ASM volumes and Oracle Automatic Storage Management Cluster File System (Oracle ACFS).

Note:

You must first shut down all database instances and applications on the node with the existing Oracle ASM instance before upgrading it.

During installation, if you are upgrading from an Oracle ASM release before 11.2, you chose to use Oracle ASM, and ASMCA detects a prior Oracle ASM version installed in another Oracle ASM home, then after the Oracle ASM 12c Release 1 (12.1) binaries are installed, you can start ASMCA to upgrade the existing Oracle ASM instance. You can then choose to configure an Oracle ACFS deployment by creating Oracle ASM volumes and using the upgraded Oracle ASM to create the Oracle ACFS.

If you are upgrading from Oracle ASM 11g Release 2 (11.2.0.1) or later, then Oracle ASM is always upgraded with Oracle Grid Infrastructure as part of the rolling upgrade, and ASMCA is started by the root scripts during upgrade. ASMCA cannot perform a separate upgrade of Oracle ASM from a prior release to the current release.

On an existing Oracle Clusterware or Oracle RAC installation, if the prior version of Oracle ASM instances on all nodes is 11g Release 1 or later, then you are provided with the option to perform a rolling upgrade of Oracle ASM instances. If the earlier Oracle ASM instances on an Oracle RAC installation are from a release before 11g Release 1, then rolling upgrades cannot be performed; in that case, Oracle ASM on all nodes is upgraded to 12c Release 1 (12.1).