Oracle Solaris Cluster Data Service for Oracle Real Application Clusters Guide (Oracle Solaris Cluster 4.0)

Installing Storage Management Software With Support for Oracle RAC

Install the software for the storage management schemes that you are using for Oracle files. For more information, see Storage Management Requirements.


Note - For information about how to install and configure qualified NAS devices with Support for Oracle RAC, see Oracle Solaris Cluster 4.0 With Network-Attached Storage Device Manual.


This section contains the following information:

  - Using Solaris Volume Manager for Sun Cluster
  - Using Hardware RAID Support
  - Using Oracle ASM
  - Using a Cluster File System

Using Solaris Volume Manager for Sun Cluster

Always install Solaris Volume Manager software, which includes the Solaris Volume Manager for Sun Cluster feature, in the global cluster, even when supporting zone clusters. Solaris Volume Manager software is not automatically installed as part of an Oracle Solaris 11 software installation. You must install it manually by using the following command:

# pkg install system/svm
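
If you are not sure whether the package is already present, the following command is a minimal check that reports whether the package is installed:

# pkg list system/svm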

The clzonecluster command configures Solaris Volume Manager for Sun Cluster devices from the global-cluster voting node into the zone cluster. All administration tasks for Solaris Volume Manager for Sun Cluster are performed in the global-cluster voting node, even when the Solaris Volume Manager for Sun Cluster volume is used in a zone cluster.

When an Oracle RAC installation inside a zone cluster uses a file system that exists on top of a Solaris Volume Manager for Sun Cluster volume, you should still configure the Solaris Volume Manager for Sun Cluster volume in the global cluster. In this case, the scalable device group resource belongs to this zone cluster.

When an Oracle RAC installation inside a zone cluster runs directly on the Solaris Volume Manager for Sun Cluster volume, you must first configure the Solaris Volume Manager for Sun Cluster in the global cluster and then configure the Solaris Volume Manager for Sun Cluster volume into the zone cluster. In this case, the scalable device group belongs to this zone cluster.

For information about the types of Oracle files that you can store by using Solaris Volume Manager for Sun Cluster, see Storage Management Requirements.

How to Use Solaris Volume Manager for Sun Cluster

To use the Solaris Volume Manager for Sun Cluster software with Support for Oracle RAC, perform the following tasks. Solaris Volume Manager for Sun Cluster is part of the Solaris Volume Manager software, which you install as described in Using Solaris Volume Manager for Sun Cluster.

  1. Configure the Solaris Volume Manager for Sun Cluster software on the global-cluster nodes.

    For information about configuring Solaris Volume Manager for Sun Cluster in the global cluster, see Configuring Solaris Volume Manager Software in Oracle Solaris Cluster Software Installation Guide.

  2. If you are using a zone cluster, configure the Solaris Volume Manager for Sun Cluster volume into the zone cluster.

    For information about configuring a Solaris Volume Manager for Sun Cluster volume into a zone cluster, see How to Add a Disk Set to a Zone Cluster (Solaris Volume Manager) in Oracle Solaris Cluster Software Installation Guide.
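
    The following is a minimal sketch of this step. It assumes a hypothetical zone cluster named zc-rac and a hypothetical multi-owner disk set named oraset whose set number is 1. Substitute the names and the set number from your own configuration, and see the referenced procedure for the complete steps.

    # clzonecluster configure zc-rac
    clzc:zc-rac> add device
    clzc:zc-rac:device> set match=/dev/md/oraset/*dsk/*
    clzc:zc-rac:device> end
    clzc:zc-rac> add device
    clzc:zc-rac:device> set match=/dev/md/shared/1/*dsk/*
    clzc:zc-rac:device> end
    clzc:zc-rac> exit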

Next Steps

Ensure that all other storage management schemes that you are using for Oracle files are installed.

After all storage management schemes that you are using for Oracle files are installed, go to Chapter 3, Registering and Configuring the Resource Groups.

Using Hardware RAID Support

For information about the types of Oracle files that you can store by using hardware RAID support, see Storage Management Requirements.

Oracle Solaris Cluster software provides hardware RAID support for several storage devices. To use this combination, configure raw device identities (/dev/did/rdsk*) on top of the disk arrays' logical unit numbers (LUNs). To set up the raw devices for Oracle RAC on a cluster that uses StorEdge SE9960 disk arrays with hardware RAID, perform the following task.

How to Use Hardware RAID Support

  1. Create LUNs on the disk arrays.

    See the Oracle Solaris Cluster hardware documentation for information about how to create LUNs.

  2. After you create the LUNs, run the format(1M) command to partition the disk arrays' LUNs into as many slices as you need.

    The following example lists output from the format command.

    # format
    
    0. c0t2d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
       /sbus@3,0/SUNW,fas@3,8800000/sd@2,0
    1. c0t3d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
       /sbus@3,0/SUNW,fas@3,8800000/sd@3,0
    2. c1t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@1/rdriver@5,0
    3. c1t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@1/rdriver@5,1
    4. c2t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@2/rdriver@5,0
    5. c2t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@2/rdriver@5,1
    6. c3t4d2 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@3/rdriver@4,2

    Note - To prevent a loss of disk partition information, do not start the partition at cylinder 0 for any disk slice that is used for raw data. The disk partition table is stored in cylinder 0 of the disk.


  3. Determine the raw device identity (DID) that corresponds to the LUNs that you created in Step 1.

    Use the cldevice(1CL) command for this purpose.

    The following example lists output from the cldevice list -v command.

    # cldevice list -v
    
    DID Device     Full Device Path
    ----------     ----------------
    d1             phys-schost-1:/dev/rdsk/c0t2d0
    d2             phys-schost-1:/dev/rdsk/c0t3d0
    d3             phys-schost-2:/dev/rdsk/c4t4d0
    d3             phys-schost-1:/dev/rdsk/c1t5d0
    d4             phys-schost-2:/dev/rdsk/c3t5d0
    d4             phys-schost-1:/dev/rdsk/c2t5d0
    d5             phys-schost-2:/dev/rdsk/c4t4d1
    d5             phys-schost-1:/dev/rdsk/c1t5d1
    d6             phys-schost-2:/dev/rdsk/c3t5d1
    d6             phys-schost-1:/dev/rdsk/c2t5d1
    d7             phys-schost-2:/dev/rdsk/c0t2d0
    d8             phys-schost-2:/dev/rdsk/c0t3d0

    In this example, the cldevice output identifies that the raw DID that corresponds to the disk arrays' shared LUNs is d4.

  4. Obtain the full DID device name that corresponds to the DID device that you identified in Step 3.

    The following example shows the output from the cldevice show command for the DID device that was identified in the example in Step 3. The command is run from node phys-schost-1.

    # cldevice show d4
    
    === DID Device Instances ===                   
    
    DID Device Name:                                /dev/did/rdsk/d4
      Full Device Path:                                phys-schost-1:/dev/rdsk/c2t5d0
      Replication:                                     none
      default_fencing:                                 global
  5. If you are using a zone cluster, configure the DID devices into the zone cluster. Otherwise, proceed to Step 6.

    For information about configuring DID devices into a zone cluster, see How to Add a DID Device to a Zone Cluster in Oracle Solaris Cluster Software Installation Guide.

  6. Create or modify a slice on each DID device to contain the disk-space allocation for the raw device.

    Use the format(1M) command, or the prtvtoc(1M) and fmthard(1M) commands, for this purpose. Specify the full device path from the node where you are running the command to create or modify the slice.

    For example, if you choose to use slice s0, you might choose to allocate 100 GB of disk space in slice s0.

  7. Change the ownership and permissions of the raw devices that you are using to allow access to these devices.

    To specify the raw device, append sN to the DID device name that you obtained in Step 4, where N is the slice number.

    For example, the cldevice output in Step 4 identifies that the raw DID that corresponds to the disk is /dev/did/rdsk/d4. If you choose to use slice s0 on these devices, specify the raw device /dev/did/rdsk/d4s0.
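
    For example, the following commands are a sketch that assumes that the Oracle database software runs as the hypothetical user oracle in the group dba and that you are using slice s0 of DID device d4. Substitute the owner, group, and device from your own configuration.

    # chown oracle:dba /dev/did/rdsk/d4s0
    # chmod 660 /dev/did/rdsk/d4s0

    For the ownership and permissions that your Oracle software requires, see your Oracle documentation.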

Next Steps

Ensure that all other storage management schemes that you are using for Oracle files are installed.

After all storage management schemes that you are using for Oracle files are installed, go to Chapter 3, Registering and Configuring the Resource Groups.

Using Oracle ASM

Use Oracle ASM with a storage management scheme that is supported for use with Oracle ASM, such as hardware RAID.

For information about the types of Oracle files that you can store by using Oracle ASM, see Storage Management Requirements.


Note - When an Oracle RAC installation in a zone cluster uses Oracle ASM, you must configure all the devices needed by that Oracle RAC installation into that zone cluster by using the clzonecluster command. When Oracle ASM runs inside a zone cluster, the administration of Oracle ASM occurs entirely within the same zone cluster.
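
For example, the following sketch adds one DID device to a hypothetical zone cluster named zc-rac. Repeat the add device step for every device that the Oracle RAC installation needs, and adjust the match pattern to the device paths that your configuration uses. For the complete procedure, see How to Add a DID Device to a Zone Cluster in Oracle Solaris Cluster Software Installation Guide.

# clzonecluster configure zc-rac
clzc:zc-rac> add device
clzc:zc-rac:device> set match=/dev/did/rdsk/d5s0
clzc:zc-rac:device> end
clzc:zc-rac> exit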


How to Use Oracle ASM With Hardware RAID

  1. On a cluster member, log in as root or become superuser.
  2. Determine the device identity (DID) devices that correspond to the shared disks that are available in the cluster.

    Use the cldevice(1CL) command for this purpose.

    The following example shows an extract of the output from the cldevice list -v command.

    # cldevice list -v
    DID Device          Full Device Path
    ----------          ----------------
    …
    d5                  phys-schost-3:/dev/rdsk/c3t216000C0FF084E77d0
    d5                  phys-schost-1:/dev/rdsk/c5t216000C0FF084E77d0
    d5                  phys-schost-2:/dev/rdsk/c4t216000C0FF084E77d0
    d5                  phys-schost-4:/dev/rdsk/c2t216000C0FF084E77d0
    d6                  phys-schost-3:/dev/rdsk/c4t216000C0FF284E44d0
    d6                  phys-schost-1:/dev/rdsk/c6t216000C0FF284E44d0
    d6                  phys-schost-2:/dev/rdsk/c5t216000C0FF284E44d0
    d6                  phys-schost-4:/dev/rdsk/c3t216000C0FF284E44d0
    …

    In this example, DID devices d5 and d6 correspond to shared disks that are available in the cluster.

  3. Obtain the full DID device name for each DID device that you are using for the Oracle ASM disk group.

    The following example shows the output from the cldevice show command for the DID devices that were identified in the example in Step 2. The command is run from node phys-schost-1.

    # cldevice show d5 d6
    
    === DID Device Instances ===                   
    
    DID Device Name:                              /dev/did/rdsk/d5
      Full Device Path:                           phys-schost-1:/dev/rdsk/c5t216000C0FF084E77d0
      Replication:                                none
      default_fencing:                            global

    DID Device Name:                              /dev/did/rdsk/d6
      Full Device Path:                           phys-schost-1:/dev/rdsk/c6t216000C0FF284E44d0
      Replication:                                none
      default_fencing:                            global
  4. If you are using a zone cluster, configure the DID devices into the zone cluster. Otherwise, proceed to Step 5.

    For information about configuring DID devices in a zone cluster, see How to Add a DID Device to a Zone Cluster in Oracle Solaris Cluster Software Installation Guide.

  5. Create or modify a slice on each DID device to contain the disk-space allocation for the Oracle ASM disk group.

    Use the format(1M) command, or the prtvtoc(1M) and fmthard(1M) commands, for this purpose. Specify the full device path from the node where you are running the command to create or modify the slice.

    For example, if you choose to use slice s0 for the Oracle ASM disk group, you might choose to allocate 100 Gbytes of disk space in slice s0.

  6. Prepare the raw devices that you are using for Oracle ASM.
    1. Change the ownership and permissions of each raw device that you are using for Oracle ASM, to allow access by Oracle ASM to these devices.

      Note - If Oracle ASM on hardware RAID is configured for a zone cluster, perform this step in that zone cluster.


      To specify the raw device, append sX to the DID device name that you obtained in Step 3, where X is the slice number.

      # chown oraasm:oinstall /dev/did/rdsk/dNsX
      # chmod 660 /dev/did/rdsk/dNsX
      # ls -lhL /dev/did/rdsk/dNsX
      crw-rw----  1 oraasm  oinstall  239, 128 Jun 15 04:38 /dev/did/rdsk/dNsX

      For more information about changing the ownership and permissions of raw devices for use by Oracle ASM, see your Oracle documentation.

    2. Clean out the disk headers for each raw device that you are using for Oracle ASM.
      # dd if=/dev/zero of=/dev/did/rdsk/dNsX bs=1024k count=200
      200+0 records in
      200+0 records out
  7. Modify the ASM_DISKSTRING Oracle ASM instance-initialization parameter to specify the devices that you are using for the Oracle ASM disk group.

    For example, to use the /dev/did/ path for the Oracle ASM disk group, add the value /dev/did/rdsk/d* to the ASM_DISKSTRING parameter. If you are modifying this parameter by editing the Oracle initialization parameter file, edit the parameter as follows:

    ASM_DISKSTRING = '/dev/did/rdsk/d*'

    For more information, see your Oracle documentation.

Next Steps

Ensure that all other storage management schemes that you are using for Oracle files are installed.

After all storage management schemes that you are using for Oracle files are installed, go to Chapter 3, Registering and Configuring the Resource Groups.

Using a Cluster File System

Oracle RAC is supported on cluster file systems, including a PxFS-based cluster file system.

For information that is specific to the use of cluster file systems with Support for Oracle RAC, see the subsections that follow.

Types of Oracle Files That You Can Store on a PxFS-Based Cluster File System

You can store only the following files that are associated with Oracle RAC on a PxFS-based cluster file system:

  - Oracle RDBMS binary files
  - Oracle Grid Infrastructure binary files
  - Oracle configuration files
  - System parameter file (SPFILE)
  - Alert files
  - Trace files
  - Archived redo log files
  - Flashback log files
  - OCR files
  - Oracle Grid Infrastructure voting disk


Note - You must not store data files, control files, online redo log files, or Oracle recovery files on a PxFS-based cluster file system.


Optimizing Performance and Availability When Using a PxFS-Based Cluster File System

The I/O performance during the writing of archived redo log files is affected by the location of the device group for archived redo log files. For optimum performance, ensure that the primary of the device group for archived redo log files is located on the same node as the Oracle RAC database instance. This device group contains the file system that holds archived redo log files of the database instance.

To improve the availability of your cluster, consider increasing the desired number of secondary nodes for device groups. However, increasing the desired number of secondary nodes for device groups might also impair performance. To increase the desired number of secondary nodes for device groups, change the numsecondaries property. For more information, see Multiported Device Groups in Oracle Solaris Cluster Concepts Guide.
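
For example, the following commands are a sketch that assumes a hypothetical device group named oraarchdg that contains the file system for archived redo log files. The first command switches the primary of the device group to node phys-schost-1, where the Oracle RAC database instance is assumed to run. The second command sets the desired number of secondary nodes to two.

# cldevicegroup switch -n phys-schost-1 oraarchdg
# cldevicegroup set -p numsecondaries=2 oraarchdg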

How to Use a PxFS-Based Cluster File System

  1. Create and mount the cluster file system.

    See Creating Cluster File Systems in Oracle Solaris Cluster Software Installation Guide for information about how to create and mount the cluster file system.


    Note - Oracle Grid Infrastructure binaries cannot reside on a cluster file system.


  2. If you are using the UNIX file system (UFS), ensure that you specify the correct mount options for various types of Oracle files.

    For the correct options, see the table that follows. You set these options when you add an entry to the /etc/vfstab file for the mount point.


    File Type                                  Options
    ---------                                  -------
    Oracle RDBMS binary files                  global, logging
    Oracle Grid Infrastructure binary files    global, logging
    Oracle configuration files                 global, logging
    System parameter file (SPFILE)             global, logging
    Alert files                                global, logging
    Trace files                                global, logging
    Archived redo log files                    global, logging, forcedirectio
    Flashback log files                        global, logging, forcedirectio
    OCR files                                  global, logging, forcedirectio
    Oracle Grid Infrastructure voting disk     global, logging, forcedirectio
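
    For example, the following /etc/vfstab entry is a sketch for a UFS cluster file system that holds archived redo log files. The metadevice d100 in the disk set oraset and the mount point /global/oracle/archive are hypothetical. Substitute the devices and mount point from your own configuration.

    /dev/md/oraset/dsk/d100  /dev/md/oraset/rdsk/d100  /global/oracle/archive  ufs  2  yes  global,logging,forcedirectio

    After you add the entry on each node, mount the file system from one node. Because of the global mount option, the file system becomes accessible on all cluster nodes.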

Next Steps

Ensure that all other storage management schemes that you are using for Oracle files are installed.

After all storage management schemes that you are using for Oracle files are installed, go to Chapter 3, Registering and Configuring the Resource Groups.