Oracle® Grid Infrastructure Installation Guide
11g Release 2 (11.2) for IBM AIX on POWER Systems (64-Bit)

Part Number E10814-01

3 Configuring Storage for Grid Infrastructure for a Cluster and Oracle Real Application Clusters (Oracle RAC)

This chapter describes the storage configuration tasks that you must complete before you start the installer to install Oracle Clusterware and Oracle Automatic Storage Management (ASM), and that you must complete before adding an Oracle Real Application Clusters (Oracle RAC) installation to the cluster.

This chapter contains the following topics:

  • Reviewing Oracle Grid Infrastructure Storage Options

  • Shared File System Storage Configuration

  • Oracle Automatic Storage Management Storage Configuration

3.1 Reviewing Oracle Grid Infrastructure Storage Options

This section describes the supported storage options for Oracle grid infrastructure for a cluster. It contains the following sections:

3.1.1 Overview of Oracle Clusterware and Oracle RAC Storage Options

There are two ways of storing Oracle Clusterware files:

  • Oracle Automatic Storage Management (Oracle ASM): You can install Oracle Clusterware files (OCR and voting disks) in Oracle ASM diskgroups.

    Oracle ASM is the required database storage option for Typical installations, and for Standard Edition Oracle RAC installations. It is an integrated, high-performance database file system and disk manager for Oracle Clusterware and Oracle Database files. It performs striping and mirroring of database files automatically.

    Only one Oracle ASM instance is permitted for each node regardless of the number of database instances on the node.

  • A supported shared file system: Supported file systems include the following:

    • General Parallel File System (GPFS): A cluster file system for AIX that provides concurrent file access

    • A supported cluster file system. Note that if you intend to use a cluster file system for your data files, then you should create partitions large enough for the database files when you create partitions for Oracle Clusterware.

      See Also:

      The Certify page on My Oracle Support for supported cluster file systems
    • Network File System (NFS): Note that if you intend to use NFS for your data files, then you should create partitions large enough for the database files when you create partitions for Oracle grid infrastructure. NFS mounts differ for software binaries, Oracle Clusterware files, and database files.

      Note:

      You can no longer use OUI to install Oracle Clusterware or Oracle Database files on raw disks.

      See Also:

      My Oracle Support for supported file systems and NFS or NAS filers

3.1.1.1 Quorum Disk Location Restriction with Existing 9.2 Clusterware Installations

When upgrading your Oracle9i release 9.2 Oracle RAC environment to Oracle Database 11g Release 2 (11.2), you are prompted to specify one or more voting disks during the Oracle Clusterware installation. You must specify a new location for the voting disk in Oracle Database 11g Release 2 (11.2). You cannot reuse the old Oracle9i release 9.2 quorum disk for this purpose.

3.1.1.2 After You Have Selected Disk Storage Options

When you have determined your disk storage options, you must perform the following tasks in the order listed:

1. Configure shared storage for Oracle Clusterware files

2. Configure storage for Oracle Database files and recovery files

3.1.2 General Storage Considerations for Oracle Grid Infrastructure and Oracle RAC

For all installations, you must choose the storage option to use for Oracle grid infrastructure (Oracle Clusterware and Oracle ASM), and Oracle Real Application Clusters databases (Oracle RAC). To enable automated backups during the installation, you must also choose the storage option to use for recovery files (the Fast Recovery Area). You do not have to use the same storage option for each file type.

3.1.2.1 General Storage Considerations for Oracle Clusterware

Oracle Clusterware voting disks are used to monitor cluster node status, and Oracle Cluster Registry (OCR) files contain configuration information about the cluster. You can place voting disks and OCR files either in an ASM diskgroup, or on a cluster file system or shared network file system. Storage must be shared; any node that does not have access to an absolute majority of voting disks (more than half) will be restarted.
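
For orientation only (this is not a required preinstallation step), after Oracle Clusterware is installed you can confirm where these files were placed. A minimal sketch, run as root with the Grid home bin directory in the path:

# ocrcheck
# crsctl query css votedisk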

3.1.2.2 General Storage Considerations for Oracle RAC

Use the following guidelines when choosing the storage options to use for each file type:

  • You can choose any combination of the supported storage options for each file type provided that you satisfy all requirements listed for the chosen storage options.

  • Oracle recommends that you choose Oracle ASM as the storage option for database and recovery files.

  • For Standard Edition Oracle RAC installations, Oracle ASM is the only supported storage option for database or recovery files.

  • If you intend to use Oracle ASM with Oracle RAC, and you are configuring a new Oracle ASM instance, then your system must meet the following conditions:

    • All nodes on the cluster have Oracle Clusterware and Oracle ASM 11g release 2 (11.2) installed as part of an Oracle grid infrastructure for a cluster installation.

    • Any existing Oracle ASM instance on any node in the cluster is shut down.

  • Raw disks are supported only when upgrading an existing installation using the disks already configured. On new installations, using raw disks is not supported by Oracle Automatic Storage Management Configuration Assistant (ASMCA) or Oracle Universal Installer (OUI), but is supported by the software if you perform manual configuration.

    See Also:

    Oracle Database Upgrade Guide for information about how to prepare for upgrading an existing database
  • If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk areas to provide voting disk redundancy.

3.1.3 Supported Storage Options

The following table shows the storage options supported for storing Oracle Clusterware and Oracle RAC files.

Table 3-1 Supported Storage Options for Oracle Clusterware and Oracle RAC

Storage Option                          OCR and        Oracle        Oracle     Oracle     Oracle
                                        Voting Disks   Clusterware   RAC        Database   Recovery
                                                       binaries      binaries   Files      Files

Oracle Automatic Storage Management     Yes            No            No         Yes        Yes

General Parallel File System (GPFS)     Yes            Yes           Yes        Yes        Yes
(see notes below)

Local file system                       No             Yes           Yes        No         No

NFS file system on a certified NAS      Yes            Yes           Yes        Yes        Yes
filer (see notes below)

Shared disk partitions (raw disks),     See notes      No            No         See notes  No
including raw logical volumes           below                                   below
managed by HACMP

Notes:

  • GPFS: You cannot place Oracle ASM files on GPFS. Oracle does not recommend the use of GPFS for voting disks if HACMP is used.

  • NFS file system: Requires a certified NAS device. Oracle does not recommend the use of NFS for voting disks if HACMP is used.

  • Shared disk partitions (raw disks): For OCR and voting disks, and for Oracle Database files, raw disks are not supported by OUI or ASMCA, but are supported by the software. They can be added or removed after installation.


Use the following guidelines when choosing storage options:

  • You can choose any combination of the supported storage options for each file type provided that you satisfy all requirements listed for the chosen storage options.

  • You can use Oracle ASM 11g release 2 (11.2) to store Oracle Clusterware files. You cannot use prior Oracle ASM releases to do this.

  • If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk locations and at least three Oracle Cluster Registry locations to provide redundancy.

3.1.4 After You Have Selected Disk Storage Options

When you have determined your disk storage options, configure shared storage:

3.2 Shared File System Storage Configuration

The installer does not suggest a default location for the Oracle Cluster Registry (OCR) or the Oracle Clusterware voting disk. If you choose to create these files on a file system, then review the following sections to complete storage requirements for Oracle Clusterware files:

3.2.1 Requirements for Using a Shared File System

To use a shared file system for Oracle Clusterware, Oracle ASM, and Oracle RAC, the file system must comply with the following requirements:

  • To use a cluster file system, it must be a supported cluster file system. Refer to My Oracle Support (https://metalink.oracle.com) for a list of supported cluster file systems.

  • To use an NFS file system, it must be on a certified NAS device. Log in to My Oracle Support, and click the Certify tab to find a list of certified NAS devices:

    https://metalink.oracle.com/

  • If you choose to place your Oracle Cluster Registry (OCR) files on a shared file system, then Oracle recommends that one of the following is true:

    • The disks used for the file system are on a highly available storage device (for example, a RAID device).

    • At least two file systems are mounted on separate disks, and use the features of Oracle Clusterware 11g release 2 (11.2) to provide redundancy for the OCR.

  • If you choose to place your database files on a shared file system, then one of the following should be true:

    • The disks used for the file system are on a highly available storage device (for example, a RAID device).

    • The file systems consist of at least two independent file systems on separate physical disks, with the database files on one file system, and the recovery files on a different file system.

  • The user account with which you perform the installation (oracle or grid) must have write permissions to create the files in the path that you specify.

Note:

When you upgrade from Oracle9i release 2, using the raw disk or shared file that you used for the SRVM configuration repository as the OCR location is not supported.

If you are upgrading Oracle Clusterware, and your existing cluster uses 100 MB OCR and 20 MB voting disks, then you can continue to use those sizes.

All storage products must be supported by both your server and storage vendors.

Use Table 3-2 and Table 3-3 to determine the minimum size for shared file systems:

Table 3-2 Oracle Clusterware Shared File System Volume Size Requirements

File Types Stored                                      Number of Volumes   Volume Size

Voting disks with external redundancy                  3                   At least 280 MB for each voting disk volume

Oracle Cluster Registry (OCR) with external            1                   At least 280 MB for each OCR volume
redundancy

Oracle Clusterware files (OCR and voting disks)        1                   At least 280 MB for each OCR volume, and at
with redundancy provided by Oracle software                                least 280 MB for each voting disk volume


Table 3-3 Oracle RAC Shared File System Volume Size Requirements

File Types Stored       Number of Volumes   Volume Size

Oracle Database files   1                   At least 1.5 GB for each volume

Recovery files          1                   At least 2 GB for each volume
                                            (Note: Recovery files must be on a different volume than database files)


In Table 3-2 and Table 3-3, the total required volume size is cumulative. For example, to store all Oracle Clusterware files on the shared file system with normal redundancy, you should have at least 2 GB of storage available over a minimum of three volumes (three separate volume locations for the OCR and two OCR mirrors, and one voting disk on each volume). You should have a minimum of three physical disks, each at least 500 MB, to ensure that voting disks and OCR files are on separate physical disks. If you add Oracle RAC using one volume for database files and one volume for recovery files, then you should have at least 3.5 GB available storage over two volumes, and at least 5.5 GB available total for all volumes.

Note:

If you create partitions on shared partitions with fdisk by specifying a device size, such as +300M, the actual device created may be smaller than the size requested, based on the cylinder geometry of the disk. This is due to current fdisk restrictions. Oracle recommends that you partition the entire disk that you allocate for use by Oracle ASM.

3.2.2 Deciding to Use a Cluster File System for Oracle Clusterware Files

For new installations, Oracle recommends that you use Oracle Automatic Storage Management (Oracle ASM) to store voting disk and OCR files.

3.2.3 Deciding to Use NFS for Data Files

Network-attached storage (NAS) systems use NFS to access data. You can store data files on a supported NFS system.

NFS file systems must be mounted and available over NFS mounts before you start installation. Refer to your vendor documentation to complete NFS configuration and mounting.

Be aware that the performance of Oracle software and databases stored on NAS devices depends on the performance of the network connection between the Oracle server and the NAS device.

For this reason, Oracle recommends that you connect the server to the NAS device using a private dedicated network connection, which should be Gigabit Ethernet or better.
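
As a quick check of the link speed on the interface that you plan to dedicate to NAS traffic, you can query the adapter statistics. This is a sketch only; ent0 is an assumed adapter name, so substitute the adapter that connects to your NAS device:

# /usr/bin/entstat -d ent0 | grep -i "Media Speed"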

3.2.4 Configuring Storage NFS Mount and Buffer Size Parameters

If you are using NFS for the Grid home or Oracle RAC home, then you must set up the NFS mounts on the storage so that root on the client nodes is treated as root rather than being mapped to an anonymous user, and so that root on the client nodes can create files on the NFS file system that are owned by root.

If you are using NFS, then you must set the values for the NFS buffer size parameters rsize and wsize to 32768. The NFS mount options for binaries are:

rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,timeo=600 

The NFS client-side mount options for Oracle Clusterware files (OCR and voting disk files) are:

cio,rw,bg,hard,intr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600 

The NFS client-side mount options for Oracle Database datafiles are:

cio,rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,timeo=600 

Update the /etc/filesystems file on each node with stanzas similar to the following. In each stanza, the stanza name (shown here as /NFS_mount) is the local mount point, dev is the directory exported by the NFS server, and nodename is the host name of the NFS server:

/NFS_mount:
dev = "/vol/gridhome"
vfs = nfs
nodename = nfs_server
mount = true
options = rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,timeo=600
account = false
/NFS_mount:
dev = "/vol/CWfiles"
vfs = nfs
nodename = nfs_server
mount = true
options = cio,rw,bg,hard,intr,rsize=32768,wsize=32768,tcp,noac,vers=3,timeo=600
account = false
/NFS_mount:
dev = "/vol/datafiles"
vfs = nfs
nodename = nfs_server
mount = true
options = cio,rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,timeo=600
account = false

Note that mount point options are different for Oracle software binaries, Oracle Clusterware files (OCR and voting disks), and data files.

If you want to create a mount point for binaries only, then add a stanza similar to the following for the binaries mount point:

/u01/oracle/grid:
dev = "/vol/grid"
vfs = nfs
nodename = nfs_server
mount = true
options = rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,timeo=600
account = false

On NFS, you can obtain root access for clients writing to the storage by enabling no_root_squash on the server side. For example, to set up Oracle Clusterware file storage in the path /vol/grid, with nodes node1, node2, and node3 in the domain mycluster.example.com, add a line similar to the following to the /etc/exports file:

/vol/grid/ node1.mycluster.example.com(rw,no_root_squash) node2.mycluster.example.com(rw,no_root_squash) node3.mycluster.example.com(rw,no_root_squash)

If the domain or DNS is secure so that no unauthorized system can obtain an IP address on it, then you can grant root access by domain, rather than specifying particular cluster member nodes:

For example:

/vol/grid/ *.mycluster.example.com(rw,no_root_squash)

Oracle recommends that you use a secure DNS or domain, and grant root access to cluster member nodes using the domain, as using this syntax allows you to add or remove nodes without the need to reconfigure the NFS server.

If you use Grid Naming Service (GNS), then the subdomain allocated for resolution by GNS within the cluster is a secure domain. Any server without a correctly signed Grid Plug and Play (GPnP) profile cannot join the cluster, so an unauthorized system cannot obtain or use names inside the GNS subdomain.

Caution:

Granting root access by domain can be used to obtain unauthorized access to systems. System administrators should refer to their operating system documentation for the risks associated with using no_root_squash.

After changing /etc/exports, reload the exports on the NFS server using the following command:

# /usr/sbin/exportfs -avr
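
After the exports are in place and the file systems are mounted on the cluster nodes, you can confirm the NFS mount options that are actually in effect. The following is a sketch only, using the example binaries mount point /u01/oracle/grid from earlier in this section:

# mount /u01/oracle/grid
# nfsstat -m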

See Also:

OracleMetaLink bulletin 359515.1, "Mount Options for Oracle Files When Used with NAS Devices" for the most current information about mount options, available from the following URL:

http://metalink.oracle.com

Note:

Refer to your storage vendor documentation for additional information about mount options.

3.2.5 Configuring HACMP Multinode Disk Heartbeat (MNDHB) for Oracle Clusterware

This section contains the following topics:

See Also:

OracleMetaLink for additional information about HACMP deployment and HACMP certification

3.2.5.1 Overview of Requirements for Using HACMP with Oracle Clusterware

You must define one Multi-node Disk Heartbeat (MNDHB) network for each Oracle Clusterware voting disk. Each MNDHB and voting disk pair must be located on a single hard disk, separate from the other pairs. You must also configure MNDHB so that the node is halted if access is lost to a quorum of the MNDHB networks in the enhanced concurrent volume group.

To reduce the likelihood of a cluster partition, IBM recommends that HACMP is deployed with multiple IP networks and at least one non-IP network. The non-IP networks can be implemented using RS232 or disk heart-beating. For systems using Oracle RAC and HACMP enhanced concurrent resources (enhanced concurrent logical volumes) for database storage, you must configure MNDHB networks.

Install and configure HACMP, and verify that it is running, before you install Oracle Clusterware. For an Oracle RAC configuration, do not use HACMP for IP failover on the Oracle RAC network interfaces (public, VIP, or private). These network interfaces should not be configured to use HACMP IP failover, because Oracle Clusterware manages VIP failover for Oracle RAC. The Oracle RAC network interfaces are bound to individual nodes and Oracle RAC instances. Problems can occur with Oracle Clusterware if HACMP reconfigures IP addresses over different interfaces, or fails over addresses across nodes. You can use HACMP for failover of IP addresses on Oracle RAC nodes only if Oracle RAC does not use those addresses.

3.2.5.2 Deploying HACMP and MNDHB for Oracle Clusterware

Complete the following tasks, replacing each term in italics with the appropriate value for your system, or carrying out the action described and entering the appropriate response for your system:

  1. Start HACMP.

  2. Enter the following command to ensure that the HACMP clcomdES daemon is running:

    # lssrc -s clcomdES
    

    If the daemon is not running, then start it using the following command:

    # startsrc -s clcomdES
    
  3. Ensure that your versions of HACMP and AIX meet the system requirements listed in Section 2.7, "Identifying the Software Requirements".

  4. Create an HACMP cluster and add the Oracle Clusterware nodes. For example:

    # smitty cm_add_change_show_an_hacmp_cluster.dialog
    * Cluster Name [mycluster] 
    
  5. Create an HACMP cluster node for each Oracle Clusterware node. For example:

    # smitty cm_add_a_node_to_the_hacmp_cluster_dialog 
    * Node Name [mycluster_node1]
    Communication Path to Node [] 
    
  6. Create HACMP Ethernet heartbeat networks. The HACMP configuration requires network definitions. Select NO for the IP address takeover for these networks, since they are used by Oracle Clusterware.

    Create at least two network definitions: one for the Oracle public interface and a second one for the Oracle private (cluster interconnect) network. Additional Ethernet heartbeat networks can be added if desired.

    For example:

    # smitty cm_add_a_network_to_the_hacmp_cluster_select 
    - select ether network 
    * Network Name [my_network_name] 
    * Network Type ether 
    * Netmask [my.network.netmask.here] 
    * Enable IP Address Takeover via IP Aliases [No] 
    IP Address Offset for Heart beating over IP Aliases [] 
    
  7. For each of the networks added in the previous step, define all of the IP names for each Oracle Clusterware node associated with that network, including the public, private and VIP names for each Oracle Clusterware node. For example:

    # smitty cm_add_communication_interfaces_devices.select 
    - select: Add Pre-defined Communication Interfaces and Devices / Communication Interfaces / desired network 
    * IP Label/Address [node_ip_address] 
    * Network Type ether 
    * Network Name some_network_name 
    * Node Name [my_node_name] 
    Network Interface [] 
    
  8. Create an HACMP resource group for the enhanced concurrent volume group resource with the following options:

    # smitty config_resource_group.dialog.custom 
    * Resource Group Name [my_resource_group_name] 
    * Participating Nodes (Default Node Priority) [mynode1,mynode2,mynode3] 
    Startup Policy Online On All Available Nodes 
    Fallover Policy Bring Offline (On Error Node Only) 
    Fallback Policy Never Fallback 
    
  9. Create an AIX enhanced concurrent volume group (Big VG, or Scalable VG) using either the smitty mkvg command or the mkvg command line. The VG must contain at least one hard disk for each voting disk. You must configure at least three voting disks.

    In the following example, where you see default, accept the default response:

    # smitty _mksvg 
    VOLUME GROUP name [my_vg_name]
    PP SIZE in MB
    * PHYSICAL VOLUME names [mydisk1,mydisk2,mydisk3] 
    Force the creation of a volume group? no 
    Activate volume group AUTOMATICALLY at system restart? no
    Volume Group MAJOR NUMBER [] 
    Create VG Concurrent Capable? enhanced concurrent 
    Max PPs per VG in kilobytes default
    Max Logical Volumes default
    
  10. Under "Change/Show Resources for a Resource Group (standard)", add the concurrent volume group to the resource group added in the preceding steps.

    For example:

    # smitty cm_change_show_resources_std_resource_group_menu_dmn.select 
    - select_resource_group_from_step_8
    Resource Group Name shared_storage 
    Participating Nodes (Default Node Priority) mynode1,mynode2,mynode3
    Startup Policy Online On All Available Nodes 
    Fallover Policy Bring Offline (On Error Node Only) 
    Fallback Policy Never Fallback 
    Concurrent Volume Groups [enter_VG_from_step_9]
    Use forced varyon of volume groups, if necessary false 
    Application Servers [] 
    
  11. Using the following command, ensure that one MNDHB network is defined for each Oracle Clusterware voting disk. Each MNDHB and voting disk pair must be collocated on a single hard disk, separate from the other pairs. The MNDHB networks and voting disks exist on shared logical volumes in an enhanced concurrent volume group managed by HACMP as an enhanced concurrent resource. For each of the hard disks in the VG created in step 9 on which you want to place a voting disk logical volume (LV), create an MNDHB LV.

    # smitty cl_add_mndhb_lv 
    - select_resource_group_defined_in_step_8
    * Physical Volume name enter F4, then select a hard disk
    Logical Volume Name [] 
    Logical Volume Label [] 
    Volume Group name ccvg 
    Resource Group Name shared_storage 
    Network Name [n]
    

    Note:

    When you define the LVs for the Oracle Clusterware voting disks, define them on the same disks, one LV for each disk, that you used in this step for the MNDHB LVs.
  12. Configure MNDHB so that the node is halted if access is lost to a quorum of the MNDHB networks in the enhanced concurrent volume group. For example:

    # smitty cl_set_mndhb_response 
    - select_the_VG_created_in_step_9 
    On loss of access Halt the node 
    Optional notification method [] 
    Volume Group ccvg 
    
  13. Verify and Synchronize HACMP configuration. For example:

    # smitty cm_initialization_and_standard_config_menu_dmn 
    - select "Verify and Synchronize HACMP Configuration" 
    

    Enter Yes if prompted: "Would you like to import shared VG: ccvg, in resource group my_resource_group onto node: mynode to node: racha702 [Yes / No]:"

  14. Add the HACMP cluster node IP names to the file /usr/es/sbin/cluster/etc/rhosts. A brief verification sketch follows this procedure.
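
The following is a brief verification sketch rather than a required step. It assumes the example volume group name ccvg used in this procedure, and confirms, on the node where you created the volume group, that the HACMP subsystems are active and that the volume group was created as enhanced concurrent capable:

# lssrc -g cluster
# /usr/sbin/lsvg ccvg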

3.2.5.3 Upgrading an Existing Oracle Clusterware and HACMP Installation

Complete the following procedure:

  1. Back up all databases, and back up the Oracle Cluster Registry (OCR).

  2. Shut down all Oracle RAC databases, all node applications, and Oracle Clusterware on all nodes.

  3. Enter the following command to disable Oracle Clusterware from starting when nodes are restarted:

    # crsctl disable crs
    
  4. Shut down HACMP on all nodes.

  5. Install HACMP APAR IZ01809, following the directions in the README included with that APAR.

  6. Determine whether the existing voting disk LVs are already on separate hard disks, and whether each of these disks has sufficient space (at least 256 MB) for the MNDHB LVs. If so, then create an MNDHB LV on each of the hard disks. If not, then create new MNDHB LVs and new voting disk LVs, located on separate hard disks, using the following command, responding to the sections in italics with the appropriate information for your system:

    # smitty cl_add_mndhb_lv 
    - Select_resource_group
    * Physical Volume name Enter F4, then select disk for the MNDHB and Voting Disk pair
    Logical Volume Name [] 
    Logical Volume Label [] 
    Volume Group name ccvg 
    Resource Group Name shared_storage 
    Network Name [net_diskhbmulti_01] 
    
  7. Verify and Synchronize HACMP configuration.

  8. Start HACMP on all nodes.

  9. If you added new LVs for voting disks in step 6, then replace each of the existing voting disks with the new ones.

  10. Enter the following command to re-enable Oracle Clusterware:

    # crsctl enable crs
    
  11. Start Oracle Clusterware on all nodes, and verify that all resources start correctly.

3.2.6 Configuring Raw Logical Volumes for Oracle Clusterware

Note:

To use raw logical volumes for Oracle Clusterware, HACMP must be installed and configured on all cluster nodes.

This section describes how to configure raw logical volumes for Oracle Clusterware and database file storage. The procedures in this section describe how to create a new volume group that contains the logical volumes required for both types of files.

Before you continue, review the following guidelines which contain important information about using volume groups with this release of Oracle RAC:

  • You must use concurrent-capable volume groups for Oracle Clusterware.

  • The Oracle Clusterware files require less than 560 MB of disk space, with external redundancy. To make efficient use of the disk space in a volume group, Oracle recommends that you use the same volume group for the logical volumes for both the Oracle Clusterware files and the database files.

  • If you are upgrading an existing Oracle9i release 2 Oracle RAC installation that uses raw logical volumes, then you can use the existing SRVM configuration repository logical volume for the OCR and create a new logical volume in the same volume group for the Oracle Clusterware voting disk. However, you must remove this volume group from the HACMP concurrent resource group that activates it before you install Oracle Clusterware.

    See Also:

    The HACMP documentation for information about removing a volume group from a concurrent resource group.

    Note:

    If you are upgrading a database, then you must also create a new logical volume for the SYSAUX tablespace. Refer to the "Configuring Raw Logical Volumes in the New Oracle Clusterware Volume Group" section for more information about the requirements for the Oracle Clusterware voting disk and SYSAUX logical volumes.
  • You must use a HACMP concurrent resource group to activate new or existing volume groups that contain only database files (not Oracle Clusterware files).

    See Also:

    The HACMP documentation for information about adding a volume group to a new or existing concurrent resource group.
  • All volume groups that you intend to use for Oracle Clusterware must be activated in concurrent mode before you start the installation.

  • The procedures in this section describe how to create basic volume groups and volumes. If you want to configure more complex volumes (using mirroring, for example), then use this section in conjunction with the HACMP documentation.

3.2.7 Configuring Raw Logical Volumes in the New Oracle Clusterware Volume Group

To create the required raw logical volumes in the new Oracle Clusterware volume group:

  1. Identify the logical volumes that you must create.

  2. Create the logical volumes using the /usr/sbin/mklv command. If you prefer, you can also use the command smit mklv to create raw logical volumes.

    The following example shows the command used to create a logical volume for the SYSAUX tablespace in the ocr volume group, with a physical partition size of 256 MB (1792/7 = 256):

    # /usr/sbin/mklv -y test_sysaux_raw_1792m -T O -w n -s n -r n ocr 7
    
  3. Change the owner, group, and permissions on the character device files associated with the logical volumes that you created, as follows:

    Note:

    The device file associated with the Oracle Cluster Registry must be owned by root. All other device files must be owned by the Oracle software owner user (oracle).
    # chown oracle:dba /dev/rora_vote_raw_280m
    # chmod 660 /dev/rora_vote_raw_280m
    # chown root:oinstall /dev/rora_ocr_raw_280m
    # chmod 640 /dev/rora_ocr_raw_280m
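
    To confirm that the ownership and permissions took effect, you can list the character device files. This is a quick check using the example device names shown above:

    # ls -l /dev/rora_vote_raw_280m /dev/rora_ocr_raw_280m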
    

3.2.8 Creating a Volume Group for Database Files

To create a volume group for the Oracle Database files:

  1. If necessary, install the shared disks that you intend to use.

  2. To ensure that the disks are available, enter the following command on every node:

    # /usr/sbin/lsdev -Cc disk
    

    The output from this command is similar to the following:

    hdisk0 Available 1A-09-00-8,0  16 Bit LVD SCSI Disk Drive
    hdisk1 Available 1A-09-00-9,0  16 Bit LVD SCSI Disk Drive
    hdisk2 Available 17-08-L       SSA Logical Disk Drive
    
  3. If a disk is not listed as available on any node, then enter the following command to configure the new disks:

    # /usr/sbin/cfgmgr
    
  4. Enter the following command on any node to identify the device names and any associated volume group for each disk:

    # /usr/sbin/lspv
    

    The output from this command is similar to the following:

    hdisk0     0000078752249812   rootvg
    hdisk1     none               none
    hdisk4     00034b6fd4ac1d71   ccvg1
    

    For each disk, this command shows:

    • The disk device name

    • Either the 16 character physical volume identifier (PVID) if the disk has one, or none

    • Either the volume group to which the disk belongs, or none

    The disks that you want to use may have a PVID, but they must not belong to existing volume groups.

  5. If a disk that you want to use for the volume group does not have a PVID, then enter a command similar to the following to assign one to it:

    # /usr/sbin/chdev -l hdiskn -a pv=yes
    
  6. To identify used device major numbers, enter the following command on each node of the cluster:

    # ls -la /dev | more
    

    This command displays information about all configured devices, similar to the following:

    crw-rw----   1 root     system    45,  0 Jul 19 11:56 vg1
    

    In this example, 45 is the major number of the vg1 volume group device.

  7. Identify an appropriate major number that is unused on all nodes in the cluster.

  8. To create a volume group, enter a command similar to the following, or use SMIT (smit mkvg):

    # /usr/sbin/mkvg -y VGname -B -s PPsize -V majornum -n \
    -C PhysicalVolumes
    
  9. The following table describes the options and variables used in this example. Refer to the mkvg man page for more information about these options.

    -y VGname
      SMIT field: VOLUME GROUP name
      Sample value: oracle_vg1
      Specify the name for the volume group. The name that you specify could be a generic name, as shown, or for a database volume group, it could specify the name of the database that you intend to create.

    -B
      SMIT field: Create a big VG format Volume Group
      Specify this option to create a big VG format volume group.
      Note: If you are using SMIT, then choose yes for this field.

    -s PPsize
      SMIT field: Physical partition SIZE in megabytes
      Sample value: 32
      Specify the size of the physical partitions for the database. The sample value shown enables you to include a disk up to 32 GB in size (32 MB * 1016).

    -V Majornum
      SMIT field: Volume Group MAJOR NUMBER
      Sample value: 46
      Specify the device major number for the volume group that you identified in Step 7.

    -n
      SMIT field: Activate volume group AUTOMATICALLY at system restart
      Specify this option to prevent the volume group from being activated at system restart.
      Note: If you are using SMIT, then choose no for this field.

    -C
      SMIT field: Create VG Concurrent Capable
      Specify this option to create a concurrent capable volume group.
      Note: If you are using SMIT, then choose yes for this field.

    PhysicalVolumes
      SMIT field: PHYSICAL VOLUME names
      Sample value: hdisk3 hdisk4
      Specify the device names of the disks that you want to add to the volume group.

  10. Enter a command similar to the following to vary on the volume group that you created:

    # /usr/sbin/varyonvg VGname
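
A worked sketch of steps 8 through 10, using the sample values from the option descriptions above (substitute your own disk names and the unused major number that you identified in step 7):

# /usr/sbin/mkvg -y oracle_vg1 -B -s 32 -V 46 -n -C hdisk3 hdisk4
# /usr/sbin/varyonvg oracle_vg1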
    

3.2.9 Creating a Volume Group for Oracle Clusterware

To create a volume group for the Oracle Clusterware files:

  1. If necessary, install the shared disks that you intend to use.

  2. To ensure that the disks are available, enter the following command on every node:

    # /usr/sbin/lsdev -Cc disk
    

    The output from this command is similar to the following:

    hdisk0 Available 1A-09-00-8,0  16 Bit LVD SCSI Disk Drive
    hdisk1 Available 1A-09-00-9,0  16 Bit LVD SCSI Disk Drive
    hdisk2 Available 17-08-L       SSA Logical Disk Drive
    
  3. If a disk is not listed as available on any node, then enter the following command to configure the new disks:

    # /usr/sbin/cfgmgr
    
  4. Enter the following command on any node to identify the device names and any associated volume group for each disk:

    # /usr/sbin/lspv
    

    The output from this command is similar to the following:

    hdisk0     0000078752249812   rootvg
    hdisk1     none               none
    hdisk4     00034b6fd4ac1d71   ccvg1
    

    For each disk, this command shows:

    • The disk device name

    • Either the 16 character physical volume identifier (PVID) if the disk has one, or none

    • Either the volume group to which the disk belongs, or none

    The disks that you want to use may have a PVID, but they must not belong to existing volume groups.

  5. If a disk that you want to use for the volume group does not have a PVID, then enter a command similar to the following to assign one to it:

    # /usr/sbin/chdev -l hdiskn -a pv=yes
    
  6. To identify used device major numbers, enter the following command on each node of the cluster:

    # ls -la /dev | more
    

    This command displays information about all configured devices, similar to the following:

    crw-rw----   1 root     system    45,  0 Jul 19 11:56 vg1
    

    In this example, 45 is the major number of the vg1 volume group device.

  7. Identify an appropriate major number that is unused on all nodes in the cluster.

  8. To create a volume group, enter a command similar to the following, or use SMIT (smit mkvg):

    # /usr/sbin/mkvg -y VGname -B -s PPsize -V majornum -n \
    -C PhysicalVolumes
    
  9. The following table describes the options and variables used in this example. Refer to the mkvg man page for more information about these options.

    -y VGname
      SMIT field: VOLUME GROUP name
      Sample value: oracle_vg1
      Specify the name for the volume group. The name that you specify could be a generic name, as shown, or for a database volume group, it could specify the name of the database that you intend to create.

    -B
      SMIT field: Create a big VG format Volume Group
      Specify this option to create a big VG format volume group.
      Note: If you are using SMIT, then choose yes for this field.

    -s PPsize
      SMIT field: Physical partition SIZE in megabytes
      Sample value: 32
      Specify the size of the physical partitions for the database. The sample value shown enables you to include a disk up to 32 GB in size (32 MB * 1016).

    -V Majornum
      SMIT field: Volume Group MAJOR NUMBER
      Sample value: 46
      Specify the device major number for the volume group that you identified in Step 7.

    -n
      SMIT field: Activate volume group AUTOMATICALLY at system restart
      Specify this option to prevent the volume group from being activated at system restart.
      Note: If you are using SMIT, then choose no for this field.

    -C
      SMIT field: Create VG Concurrent Capable
      Specify this option to create a concurrent capable volume group.
      Note: If you are using SMIT, then choose yes for this field.

    PhysicalVolumes
      SMIT field: PHYSICAL VOLUME names
      Sample value: hdisk3 hdisk4
      Specify the device names of the disks that you want to add to the volume group.

  10. Enter a command similar to the following to vary on the volume group that you created:

    # /usr/sbin/varyonvg VGname
    

3.2.10 Importing the Volume Group on the Other Cluster Nodes

To make the volume group available to all nodes in the cluster, you must import it on each node, as follows:

  1. Because the physical volume names may be different on the other nodes, enter the following command to determine the PVID of the physical volumes used by the volume group:

    # /usr/sbin/lspv
    
  2. Note the PVIDs of the physical devices used by the volume group.

  3. To vary off the volume group that you want to use, enter a command similar to the following on the node where you created it:

    # /usr/sbin/varyoffvg VGname
    
  4. On each cluster node, complete the following steps:

    1. Enter the following command to determine the physical volume names associated with the PVIDs you noted previously:

      # /usr/sbin/lspv
      
    2. On each node of the cluster, enter commands similar to the following to import the volume group definitions:

      # /usr/sbin/importvg -y VGname -V MajorNumber PhysicalVolume
      

      In this example, MajorNumber is the device major number for the volume group and PhysicalVolume is the name of one of the physical volumes in the volume group.

      For example, to import the definition of the oracle_vg1 volume group with device major number 45 on the hdisk3 and hdisk4 physical volumes, enter the following command:

      # /usr/sbin/importvg -y oracle_vg1 -V 45 hdisk3
      
    3. Change the owner, group, and permissions on the character device files associated with the logical volumes you created, as follows:

      # chown oracle:dba /dev/rora_vote_raw_280m
      # chmod 660 /dev/rora_vote_raw_280m
      # chown root:oinstall /dev/rora_ocr_raw_280m
      # chmod 640 /dev/rora_ocr_raw_280m
      
    4. Enter the following command to ensure that the volume group will not be activated by the operating system when the node starts:

      # /usr/sbin/chvg -a n VGname
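
A brief sketch of this procedure on a second node, using the sample values from the examples earlier in this chapter (the PVID and device names shown are examples only, so substitute the PVID you noted in step 2 and the device name that lspv reports on that node):

# /usr/sbin/lspv | grep 00034b6fd4ac1d71
# /usr/sbin/importvg -y oracle_vg1 -V 45 hdisk3
# /usr/sbin/chvg -a n oracle_vg1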
      

3.2.11 Activating the Volume Group in Concurrent Mode on All Cluster Nodes

To activate the volume group in concurrent mode on all cluster nodes, enter the following command on each node:

# /usr/sbin/varyonvg -c VGname
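
To confirm that the volume group is active in concurrent mode on a node, you can check the volume group characteristics. A sketch, assuming the sample volume group name oracle_vg1 from the previous sections:

# /usr/sbin/lsvg oracle_vg1 | grep -i concurrent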

3.2.12 Creating Directories for Oracle Clusterware Files on Shared File Systems

Use the following instructions to create directories for Oracle Clusterware files. You can also configure shared file systems for the Oracle Database and recovery files.

Note:

For NFS or GPFS storage, you must complete this procedure only if you want to place the Oracle Clusterware files on a file system separate from the Oracle base directory.

To create directories for the Oracle Clusterware files on separate file systems from the Oracle base directory, follow these steps:

  1. If necessary, configure the shared file systems that you want to use and mount them on each node.

    Note:

    The mount point that you use for the file system must be identical on each node. Make sure that the file systems are configured to mount automatically when a node restarts.
  2. Use the df -k command to determine the free disk space on each mounted file system.

  3. From the display, identify the file systems that you want to use:

    • Oracle Clusterware files: Choose a file system with at least 560 MB of free disk space (one OCR and one voting disk, with external redundancy).

    • Database files: Choose either a single file system with at least 1.5 GB of free disk space, or two or more file systems with at least 1.5 GB of free disk space in total.

    • Recovery files: Choose a file system with at least 2 GB of free disk space.

    If you are using the same file system for more than one type of file, then add the disk space requirements for each type to determine the total disk space requirement.

  4. Note the names of the mount point directories for the file systems that you identified.

  5. If the user performing installation (typically, grid or oracle) has permissions to create directories on the storage location where you plan to install Oracle Clusterware files, then OUI creates the Oracle Clusterware file directory.

    If the user performing installation does not have write access, then you must create these directories manually. Use commands similar to the following to create the recommended subdirectories in each of the mount point directories and set the appropriate owner, group, and permissions on the directory. For example, where the user is oracle, and the Oracle Clusterware file storage area is cluster:

      # mkdir /mount_point/cluster
      # chown oracle:oinstall /mount_point/cluster
      # chmod 775 /mount_point/cluster
      

      Note:

      After installation, directories in the installation path for the Oracle Cluster Registry (OCR) files should be owned by root, and not writable by any account other than root.

When you have completed creating subdirectories in each of the mount point directories, and set the appropriate owner, group, and permissions, you have completed the GPFS or NFS configuration for Oracle Clusterware shared storage.

3.2.13 Creating Directories for Oracle Database Files on Shared File Systems

Use the following instructions to create directories for shared file systems for Oracle Database and recovery files (for example, for an Oracle RAC database).

  1. If necessary, configure the shared file systems and mount them on each node.

    Note:

    The mount point that you use for the file system must be identical on each node. Ensure that the file systems are configured to mount automatically when a node restarts.
  2. Use the df -k command to determine the free disk space on each mounted file system.

  3. From the display, identify the file systems:

    • Database files: Choose either a single file system with at least 1.5 GB of free disk space, or two or more file systems with at least 1.5 GB of free disk space in total.

    • Recovery files: Choose a file system with at least 2 GB of free disk space.

    If you are using the same file system for multiple file types, then add the disk space requirements for each type to determine the total disk space requirement.

  4. Note the names of the mount point directories for the file systems that you identified.

  5. If the user performing installation (typically, oracle) has permissions to create directories on the disks where you plan to install Oracle Database, then DBCA creates the Oracle Database file directory, and the Recovery file directory.

    If the user performing installation does not have write access, then you must create these directories manually using commands similar to the following to create the recommended subdirectories in each of the mount point directories and set the appropriate owner, group, and permissions on them:

    • Database file directory:

      # mkdir /mount_point/oradata
      # chown oracle:oinstall /mount_point/oradata
      # chmod 775 /mount_point/oradata
      
    • Recovery file directory (Fast Recovery Area):

      # mkdir /mount_point/fast_recovery_area
      # chown oracle:oinstall /mount_point/fast_recovery_area
      # chmod 775 /mount_point/fast_recovery_area
      

Making members of the oinstall group owners of these directories permits them to be read by multiple Oracle homes, including those with different OSDBA groups.

When you have completed creating subdirectories in each of the mount point directories, and set the appropriate owner, group, and permissions, you have completed NFS configuration for Oracle Database shared storage.

3.3 Oracle Automatic Storage Management Storage Configuration

Review the following sections to configure storage for Oracle Automatic Storage Management:

3.3.1 Configuring Storage for Oracle Automatic Storage Management

This section describes how to configure storage for use with Oracle Automatic Storage Management (Oracle ASM).

3.3.1.1 Identifying Storage Requirements for Oracle ASM

To identify the storage requirements for using Oracle ASM, you must determine how many devices and the amount of free disk space that you require. To complete this task, follow these steps:

  1. Determine whether you want to use Oracle ASM for Oracle Clusterware files (OCR and voting disks), Oracle Database files, recovery files, or all files except for Oracle Clusterware or Oracle Database binaries. Oracle Database files include data files, control files, redo log files, the server parameter file, and the password file.

    Note:

    You do not have to use the same storage mechanism for Oracle Clusterware, Oracle Database files and recovery files. You can use a shared file system for one file type and Oracle ASM for the other.

    If you choose to enable automated backups and you do not have a shared file system available, then you must choose Oracle ASM for recovery file storage.

    If you enable automated backups during the installation, then you can select Oracle ASM as the storage mechanism for recovery files by specifying an Oracle ASM disk group for the Fast Recovery Area. Depending on how you choose to create a database during the installation, you have the following options:

    • If you select an installation method that runs ASMCA in interactive mode, then you can decide whether you want to use the same Oracle ASM disk group for database files and recovery files, or use different disk groups for each file type.

    • If you select an installation method that runs DBCA in noninteractive mode, then you must use the same Oracle ASM disk group for database files and recovery files.

  2. Choose the Oracle ASM redundancy level to use for the Oracle ASM disk group.

    The redundancy level that you choose for the Oracle ASM disk group determines how Oracle ASM mirrors files in the disk group and determines the number of disks and amount of free disk space that you require, as follows:

    • External redundancy

      An external redundancy disk group requires a minimum of one disk device. The effective disk space in an external redundancy disk group is the sum of the disk space in all of its devices.

      Because Oracle ASM does not mirror data in an external redundancy disk group, Oracle recommends that you use external redundancy with storage devices such as RAID, or other similar devices that provide their own data protection mechanisms.

    • Normal redundancy

      In a normal redundancy disk group, to increase performance and reliability, Oracle ASM by default uses two-way mirroring. A normal redundancy disk group requires a minimum of two disk devices (or two failure groups). The effective disk space in a normal redundancy disk group is half the sum of the disk space in all of its devices.

      For Oracle Clusterware files, Normal redundancy disk groups provide 3 voting disk files, 1 OCR and 2 copies (one primary and one secondary mirror). With normal redundancy, the cluster can survive the loss of one failure group.

      For most installations, Oracle recommends that you select normal redundancy.

    • High redundancy

      In a high redundancy disk group, Oracle ASM uses three-way mirroring to increase performance and provide the highest level of reliability. A high redundancy disk group requires a minimum of three disk devices (or three failure groups). The effective disk space in a high redundancy disk group is one-third the sum of the disk space in all of its devices.

      For Oracle Clusterware files, High redundancy disk groups provide 5 voting disk files, 1 OCR and 3 copies (one primary and two secondary mirrors). With high redundancy, the cluster can survive the loss of two failure groups.

      While high redundancy disk groups do provide a high level of data protection, you should consider the greater cost of additional storage devices before deciding to select high redundancy disk groups.

  3. Determine the total amount of disk space that you require for Oracle Clusterware files, and for the database files and recovery files.

    Use Table 3-4 and Table 3-5 to determine the minimum number of disks and the minimum disk space requirements for installing Oracle Clusterware files, and installing the starter database, where you have voting disks in a separate disk group:

    Table 3-4 Total Oracle Clusterware Storage Space Required by Redundancy Type

    Redundancy Level   Minimum Number of Disks   Oracle Cluster Registry (OCR) Files   Voting Disk Files   Both File Types

    External           1                         280 MB                                280 MB              560 MB

    Normal             3                         560 MB                                840 MB              1.4 GB (see Footnote 1)

    High               5                         840 MB                                1.4 GB              2.3 GB

    Footnote 1: If you create a disk group during installation, then it must be at least 2 GB.

    Note:

    If the voting disk files are in a disk group, be aware that disk groups with Oracle Clusterware files (OCR and voting disks) have a higher minimum number of failure groups than other disk groups.

    If you create a diskgroup as part of the installation in order to install the OCR and voting disk files, then the installer requires that you create these files on a diskgroup with at least 2 GB of available space.

    Table 3-5 Total Oracle Database Storage Space Required by Redundancy Type

    Redundancy Level   Minimum Number of Disks   Database Files   Recovery Files   Both File Types

    External           1                         1.5 GB           3 GB             4.5 GB

    Normal             2                         3 GB             6 GB             9 GB

    High               3                         4.5 GB           9 GB             13.5 GB


  4. For Oracle Clusterware installations, you must also add additional disk space for the Oracle ASM metadata. You can use the following formula to calculate the additional disk space requirements (in MB):

    total = [2 * ausize * disks] + [redundancy * (ausize * (nodes * (clients + 1) + 30) + (64 * nodes) + 533)]

    Where:

    • redundancy = Number of mirrors: external = 1, normal = 2, high = 3.

    • ausize = Metadata AU size in megabytes.

    • nodes = Number of nodes in cluster.

    • clients = Number of database instances for each node.

    • disks = Number of disks in disk group.

    For example, for a four-node Oracle RAC installation, using three disks in a normal redundancy disk group, you require an additional 1684 MB of space:

    [2 * 1 * 3] + [2 * (1 * (4 * (4 + 1)+ 30)+ (64 * 4)+ 533)] = 1684 MB

    To ensure high availability of Oracle Clusterware files on Oracle ASM, you need to have at least 2 GB of disk space for Oracle Clusterware files in three separate failure groups, with at least three physical disks. Each disk must have at least 1 GB of capacity to ensure that there is sufficient space to create Oracle Clusterware files.

  5. For Oracle RAC installations, you must also add additional disk space for the Oracle ASM metadata. You can use the following formula to calculate the additional disk space requirements (in MB):

    total = [2 * ausize * disks] + [redundancy * (ausize * (nodes * (clients + 1) + 30) + (64 * nodes) + 533)]

    Where:

    • ausize = Metadata AU size in megabytes.

    • clients = Number of database instances for each node.

    • disks = Number of disks in disk group.

    • nodes = Number of nodes in cluster.

    • redundancy = Number of mirrors: external = 1, normal = 2, high = 3.

    For example, for a four-node Oracle RAC installation, using three disks in a normal redundancy disk group, you require an additional 1684 MB of disk space:

    [2 * 1 * 3] + [2 * (1 * (4 * (4+1) + 30) + (64 * 4) + 533)] = 1684 MB

    If an Oracle ASM instance is already running on the system, then you can use an existing disk group to meet these storage requirements. If necessary, you can add disks to an existing disk group during the installation.

  6. Optionally, identify failure groups for the Oracle ASM disk group devices.

    If you intend to use a normal or high redundancy disk group, then you can further protect your database against hardware failure by associating a set of disk devices in a custom failure group. By default, each device comprises its own failure group. However, if two disk devices in a normal redundancy disk group are attached to the same SCSI controller, then the disk group becomes unavailable if the controller fails. The controller in this example is a single point of failure.

    To protect against failures of this type, you could use two SCSI controllers, each with two disks, and define a failure group for the disks attached to each controller. This configuration would enable the disk group to tolerate the failure of one SCSI controller.

    Note:

    Define custom failure groups after installation, using the GUI tool ASMCA, the command line tool asmcmd, or SQL commands (a brief SQL sketch follows this list).

    If you define custom failure groups, then for failure groups containing database files only, you must specify a minimum of two failure groups for normal redundancy disk groups and three failure groups for high redundancy disk groups.

    For failure groups containing database files and clusterware files, including voting disks, you must specify a minimum of three failure groups for normal redundancy disk groups, and five failure groups for high redundancy disk groups.

    Disk groups containing voting files must have at least 3 failure groups for normal redundancy or at least 5 failure groups for high redundancy. Otherwise, the minimum is 2 and 3 respectively. The minimum number of failure groups applies whether or not they are custom failure groups.

  7. If you are sure that a suitable disk group does not exist on the system, then install or identify appropriate disk devices to add to a new disk group. Use the following guidelines when identifying appropriate disk devices:

    • All of the devices in an Oracle ASM disk group should be the same size and have the same performance characteristics.

    • Do not specify multiple partitions on a single physical disk as a disk group device. Oracle ASM expects each disk group device to be on a separate physical disk.

    • Although you can specify a logical volume as a device in an Oracle ASM disk group, Oracle does not recommend using logical volumes. Logical volume managers can hide the physical disk architecture, preventing Oracle ASM from optimizing I/O across the physical devices. Logical volumes are not supported with Oracle RAC.

3.3.1.2 Creating Files on a NAS Device for Use with Oracle ASM

If you have a certified NAS storage device, then you can create zero-padded files in an NFS mounted directory and use those files as disk devices in an Oracle ASM disk group.

To create these files, follow these steps:

  1. If necessary, create an exported directory for the disk group files on the NAS device.

    Refer to the NAS device documentation for more information about completing this step.

  2. Switch user to root.

  3. Create a mount point directory on the local system. For example:

    # mkdir -p /mnt/asm
    
  4. To ensure that the NFS file system is mounted when the system restarts, add an entry for the file system in the AIX mount file /etc/filesystems, using a stanza similar to the example shown after this step.

    See Also:

    My Oracle Support note 359515.1 for updated NAS mount option information, available at the following URL:
    https://metalink.oracle.com
    

    For more information about editing the mount file for the operating system, refer to the man pages. For more information about recommended mount options, refer to Section 3.2.4, "Configuring Storage NFS Mount and Buffer Size Parameters".
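
    For example, a stanza similar to the following could be added to /etc/filesystems. This is only a sketch: the NAS host name (nashost), the exported path (/vol/asmdisks), and the mount options shown are placeholders, and you should take the actual mount options from Section 3.2.4 and My Oracle Support note 359515.1:

    /mnt/asm:
            dev             = "/vol/asmdisks"
            vfs             = nfs
            nodename        = nashost
            mount           = true
            options         = rw,bg,hard,intr,rsize=32768,wsize=32768,timeo=600,vers=3
            account         = false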

  5. Enter a command similar to the following to mount the NFS file system on the local system:

    # mount /mnt/asm
    
  6. Choose a name for the disk group to create. For example: sales1.

  7. Create a directory for the files on the NFS file system, using the disk group name as the directory name. For example:

    # mkdir /mnt/asm/sales1
    
  8. Use commands similar to the following to create the required number of zero-padded files in this directory:

    # dd if=/dev/zero of=/mnt/asm/sales1/disk1 bs=1024k count=1000
    

    This example creates a single 1 GB file on the NFS file system. You must create one, two, or three such files respectively to create an external, normal, or high redundancy disk group; the sketch after this step shows one way to create multiple files in a loop.
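
    For example, the following loop is a sketch of one way to create three such files (as required for a high redundancy disk group); adjust the number of files for your redundancy level and the path for your directory:

    # for i in 1 2 3
    > do
    >   dd if=/dev/zero of=/mnt/asm/sales1/disk$i bs=1024k count=1000
    > done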

  9. Enter commands similar to the following to change the owner, group, and permissions on the directory and files that you created, where the installation owner is grid, and the OSASM group is asmadmin:

    # chown -R grid:asmadmin /mnt/asm
    # chmod -R 660 /mnt/asm
    
  10. If you plan to install Oracle RAC or a standalone Oracle Database, then during installation, edit the Oracle ASM disk discovery string to specify a regular expression that matches the file names you created. For example:

    /mnt/asm/sales1/
    

    Note:

    During installation, disk paths that match the Oracle ASM disk discovery string are listed as default database storage candidate disks.

3.3.2 Using an Existing Oracle ASM Disk Group

To store either database or recovery files in an existing Oracle ASM disk group, you have the following choices, depending on the installation method that you select:

  • If you select an installation method that runs Database Configuration Assistant in interactive mode, then you can decide whether you want to create a disk group, or to use an existing one.

    The same choice is available to you if you use Database Configuration Assistant after the installation to create a database.

  • If you select an installation method that runs Database Configuration Assistant in noninteractive mode, then you must choose an existing disk group for the new database; you cannot create a disk group. However, you can add disk devices to an existing disk group if it has insufficient free space for your requirements.

Note:

The Oracle ASM instance that manages the existing disk group can be running in a different Oracle home directory.

To determine whether an Oracle ASM disk group already exists, or whether there is sufficient disk space in a disk group, you can use the ASM command line tool (asmcmd), Oracle Enterprise Manager Grid Control, or Database Control. Alternatively, you can use the following procedure:

  1. View the contents of the oratab file to determine if an Oracle ASM instance is configured on the system:

    $ more /etc/oratab
    

    If an Oracle ASM instance is configured on the system, then the oratab file should contain a line similar to the following:

    +ASM2:oracle_home_path
    

    In this example, +ASM2 is the system identifier (SID) of the Oracle ASM instance, with the node number appended, and oracle_home_path is the Oracle home directory where it is installed. By convention, the SID for an Oracle ASM instance begins with a plus sign.

  2. Set the ORACLE_SID and ORACLE_HOME environment variables to specify the appropriate values for the Oracle ASM instance.
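
    For example, in a POSIX shell, with the +ASM2 instance from the previous step and an illustrative Grid home path (your Oracle home path will differ):

    $ export ORACLE_SID=+ASM2
    $ export ORACLE_HOME=/u01/app/11.2.0/grid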

  3. Connect to the Oracle ASM instance and start the instance if necessary:

    $ $ORACLE_HOME/bin/asmcmd
    ASMCMD> startup
    
  4. Enter one of the following commands to view the existing disk groups, their redundancy level, and the amount of free disk space in each one:

    ASMCMD> lsdg
    

    or:

    $ORACLE_HOME/bin/asmcmd -p lsdg
    
  5. From the output, identify a disk group with the appropriate redundancy level and note the free space that it contains.

  6. If necessary, install or identify the additional disk devices required to meet the storage requirements listed in the previous section.

    Note:

    If you are adding devices to an existing disk group, then Oracle recommends that you use devices that have the same size and performance characteristics as the existing devices in that disk group.

3.3.3 Configuring Disk Devices for Oracle ASM

You can configure raw disks for use as Oracle Automatic Storage Management (Oracle ASM) disk groups. To use Oracle ASM with raw disks, you must create sufficient partitions for your data files, and then bind the partitions to raw disks. Make a list of the raw disk names you create for the data files, and have the list available during database installation.

In the following procedure, you are directed to set physical volume IDs (PVIDs) for raw disks. Oracle recommends that you complete the entire procedure, even if you are certain that you do not have PVIDs configured on your system, to prevent the possibility of configuration issues.

Note:

If you intend to use Hitachi HDLM (dlmf devices) for storage, then ASM instances do not automatically discover the physical disks, but instead discover only the logical volume manager (LVM) disks. This is because the physical disks can only be opened by programs running as root.

Physical disk paths have path names similar to the following:

/dev/rdlmfdrv8
/dev/rdlmfdrv9

Use the following procedure to configure disks:

  1. If necessary, install the disks that you intend to use for the disk group and restart the system.

  2. Identify or create the disks that you want to include in the Oracle ASM disk group. As the root user, enter the following command on any node to identify the device names for the disk devices that you want to use:

    # /usr/sbin/lspv | grep -i none 
    

    This command displays information similar to the following for each disk device that is not configured in a volume group:

    hdisk17         0009005fb9c23648                    None  
    

    In this example, hdisk17 is the device name of the disk and 0009005fb9c23648 is the physical volume ID (PVID).

  3. If a disk device that you want to use does not have a PVID, then enter a command similar to the following to assign one to it, where n is the number of the hdisk:

    # chdev -l hdiskn -a pv=yes
    

    Note:

    If the disk already has a PVID, then this chdev command overwrites it. Any applications that depend on the previous PVID will fail.

  4. On each of the other nodes, enter a command similar to the following to identify the device name associated with each PVID on that node:

    # /usr/sbin/lspv | grep -i "0009005fb9c23648"
    

    The output from this command should be similar to the following:

    hdisk18         0009005fb9c23648                    None
    

    In this example, the device name associated with the disk device (hdisk18) is different on this node.

  5. On each node, enter commands similar to the following to change the owner, group, and permissions on the character raw device files for the disk devices, using the device name identified for that node, where grid is the grid infrastructure installation owner and asmadmin is the OSASM group:

    # chown grid:asmadmin /dev/rhdiskn
    # chmod 660 /dev/rhdiskn
    
  6. To enable simultaneous access to a disk device from multiple nodes, you must set the appropriate Object Data Manager (ODM) attribute, depending on the type of reserve attribute used by your disks. The following section describes how to perform this task using hdisk logical names. Refer to your operating system documentation to find logical device names.

    To determine the reserve setting your disks use, enter the following command, where n is the hdisk device number:

    # lsattr -E -l hdiskn | grep reserve_
    

    The response is either a reserve_lock setting, or a reserve_policy setting. If the attribute is reserve_lock, then ensure that the setting is reserve_lock = no. If the attribute is reserve_policy, then ensure that the setting is reserve_policy = no_reserve.

    If necessary, change the setting with the chdev command using the following syntax, where n is the hdisk device number:

    chdev -l hdiskn -a [ reserve_lock=no | reserve_policy=no_reserve ]
    

    For example, to change a setting for the device hdisk4 from reserve_lock=yes to reserve_lock=no, enter the following command:

    # chdev -l hdisk4  -a  reserve_lock=no
    

    To verify that the setting is correct on all disk devices, enter the following command:

    # lsattr -El hdiskn | grep reserve
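
    As a convenience, a loop similar to the following sketch checks the reserve setting on several disks at once; the hdisk names are placeholders for the devices you identified earlier:

    # for d in hdisk17 hdisk18 hdisk19
    > do
    >   echo $d; lsattr -E -l $d | grep reserve_
    > done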
    
  7. Enter commands similar to the following on any node to clear the PVID from each disk device that you want to use:

    # /usr/sbin/chdev -l hdiskn -a pv=clear
    

    When you are installing Oracle Clusterware, you must enter the paths to the appropriate device files when prompted for the path of the OCR and Oracle Clusterware voting disk. For example:

    /dev/rhdisk10
    

3.3.4 Using Diskgroups with Oracle Database Files on Oracle ASM

Review the following sections to configure Oracle Automatic Storage Management storage for Oracle Clusterware and Oracle Database Files:

3.3.4.1 Identifying and Using Existing Oracle Database Diskgroups on Oracle ASM

The following section describes how to identify existing diskgroups and determine the free disk space that they contain.

  • Optionally, identify failure groups for the Oracle Automatic Storage Management disk group devices.

    If you intend to use a normal or high redundancy disk group, then you can further protect your database against hardware failure by associating a set of disk devices in a custom failure group. By default, each device comprises its own failure group. However, if two disk devices in a normal redundancy disk group are attached to the same SCSI controller, then the disk group becomes unavailable if the controller fails. The controller in this example is a single point of failure.

    To protect against failures of this type, you could use two SCSI controllers, each with two disks, and define a failure group for the disks attached to each controller. This configuration would enable the disk group to tolerate the failure of one SCSI controller.

    Note:

    If you define custom failure groups, then you must specify a minimum of two failure groups for normal redundancy and three failure groups for high redundancy.

3.3.4.2 Creating Diskgroups for Oracle Database Data Files

If you are sure that a suitable disk group does not exist on the system, then install or identify appropriate disk devices to add to a new disk group. Use the following guidelines when identifying appropriate disk devices:

  • All of the devices in an Oracle Automatic Storage Management disk group should be the same size and have the same performance characteristics.

  • Do not specify multiple partitions on a single physical disk as a disk group device. Oracle Automatic Storage Management expects each disk group device to be on a separate physical disk.

  • Although you can specify a logical volume as a device in an Oracle Automatic Storage Management disk group, Oracle does not recommend using logical volumes. Logical volume managers can hide the physical disk architecture, preventing Oracle Automatic Storage Management from optimizing I/O across the physical devices. Logical volumes are not supported with Oracle RAC.

3.3.5 Migrating Existing Oracle ASM Instances

If you have an Oracle ASM installation from a prior release installed on your server, or in an existing Oracle Clusterware installation, then you can use Oracle Automatic Storage Management Configuration Assistant (ASMCA, located in the path Grid_home/bin) to upgrade the existing Oracle ASM instance to 11g release 2 (11.2), and subsequently configure failure groups and ASM volumes.
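
For example, assuming an illustrative Grid home of /u01/app/11.2.0/grid, you would start ASMCA as the grid infrastructure installation owner with a command similar to the following:

    $ /u01/app/11.2.0/grid/bin/asmca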

Note:

You must first shut down all database instances and applications on the node with the existing Oracle ASM instance before upgrading it.

During installation, if you choose to use Oracle ASM and ASMCA detects that a prior Oracle ASM version is installed in another ASM home, then after installing the Oracle ASM 11g release 2 (11.2) binaries, you can start ASMCA to upgrade the existing Oracle ASM instance.

On an existing Oracle Clusterware or Oracle RAC installation, if the prior version of the Oracle ASM instances on all nodes is 11g release 1, then you are provided with the option to perform a rolling upgrade of the Oracle ASM instances. If the prior version of the Oracle ASM instances on an Oracle RAC installation is earlier than 11g release 1, then rolling upgrades cannot be performed; Oracle ASM on all nodes is upgraded to 11g release 2 (11.2).

3.3.6 Converting Standalone Oracle ASM Installations to Clustered Installations

If you have existing standalone Oracle ASM installations on one or more nodes you select as member nodes of the cluster, then OUI proceeds to install Oracle grid infrastructure for a cluster.

If you place Oracle Clusterware files (OCR and voting disks) on Oracle ASM, then ASMCA is started at the end of the clusterware installation, and provides prompts for you to migrate and upgrade the Oracle ASM instance on the local node, so that you have an Oracle ASM 11g release 2 (11.2) installation.

On remote nodes, ASMCA identifies any standalone Oracle ASM instances that are running, and prompts you to shut down those Oracle ASM instances, and any database instances that use them. ASMCA then extends clustered Oracle ASM instances to all nodes in the cluster. However, diskgroup names on the cluster-enabled Oracle ASM instances must be different from existing standalone diskgroup names.

3.4 Desupport of Raw Disks

With the release of Oracle Database 11g release 2 (11.2) and Oracle RAC 11g release 2 (11.2), using Database Configuration Assistant or the installer to store Oracle Clusterware or Oracle Database files directly on raw disks is not supported.

If you intend to upgrade an existing Oracle RAC database, or an Oracle RAC database with Oracle ASM instances, then you can continue to use existing raw disks or raw logical volumes and perform a rolling upgrade of your existing installation. Performing a new installation using raw disks or raw logical volumes is not allowed.