Sun Cluster Data Service for Oracle RAC Guide for Solaris OS

Installing Storage Management Software With Sun Cluster Support for Oracle RAC

Install the software for the storage management schemes that you are using for Oracle files. For more information, see Storage Management Requirements for Oracle Files.


Note –

For information about how to install and configure qualified NAS devices with Sun Cluster Support for Oracle RAC, see Sun Cluster 3.1 - 3.2 With Network-Attached Storage Devices Manual for Solaris OS.


Using Solaris Volume Manager for Sun Cluster

The Solaris Volume Manager for Sun Cluster is always installed in the global cluster, even when supporting zone clusters. The clzc command configures Solaris Volume Manager for Sun Cluster devices from the global-cluster voting node into the zone cluster. All administration tasks for Solaris Volume Manager for Sun Cluster are performed in the global-cluster voting node, even when the Solaris Volume Manager for Sun Cluster volume is used in a zone cluster.

When an Oracle RAC installation inside a zone cluster uses a file system that exists on top of a Solaris Volume Manager for Sun Cluster volume, you should still configure the Solaris Volume Manager for Sun Cluster volume in the global cluster. In this case, the scalable device group resource belongs to this zone cluster.

When an Oracle RAC installation inside a zone cluster runs directly on the Solaris Volume Manager for Sun Cluster volume, you must first configure the Solaris Volume Manager for Sun Cluster in the global cluster and then configure the Solaris Volume Manager for Sun Cluster volume into the zone cluster. In this case, the scalable device group belongs to this zone cluster.

For information about the types of Oracle files that you can store by using Solaris Volume Manager for Sun Cluster, see Storage Management Requirements for Oracle Files.

How to Use Solaris Volume Manager for Sun Cluster

To use the Solaris Volume Manager for Sun Cluster software with Sun Cluster Support for Oracle RAC, perform the following tasks.

  1. Ensure that you are using Solaris 9 9/04, Solaris 10, or compatible versions.

    Solaris Volume Manager for Sun Cluster is installed during the installation of the Solaris Operating System.

  2. Configure the Solaris Volume Manager for Sun Cluster software on the cluster nodes.

    For information about configuring Solaris Volume Manager for Sun Cluster in the global cluster, see Configuring Solaris Volume Manager Software in Sun Cluster Software Installation Guide for Solaris OS.
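
    The following sketch shows one possible way to create a multi-owner disk set for Oracle RAC and to build a mirrored volume in it. The disk set name oraset, the node names phys-schost-1 and phys-schost-2, and the shared DID devices d4 and d5 are placeholders, so substitute the values for your configuration.


    # metaset -s oraset -M -a -h phys-schost-1 phys-schost-2
    # metaset -s oraset -a /dev/did/rdsk/d4 /dev/did/rdsk/d5
    # metainit -s oraset d11 1 1 /dev/did/rdsk/d4s0
    # metainit -s oraset d12 1 1 /dev/did/rdsk/d5s0
    # metainit -s oraset d10 -m d11
    # metattach -s oraset d10 d12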

  3. If you are using a zone cluster, configure the Solaris Volume Manager for Sun Cluster volume into the zone cluster.

    For information on configuring Solaris Volume Manager for Sun Cluster volume into a zone cluster, see How to Add a Disk Set to a Zone Cluster (Solaris Volume Manager) in Sun Cluster Software Installation Guide for Solaris OS.
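
    As a hedged illustration only, the zone cluster name zc-rac and the disk set name oraset in the following sketch are placeholders, and set-number stands for the disk set's numeric entry, which the referenced procedure explains how to determine. Run the commands from the global-cluster voting node.


    # clzc configure zc-rac
    clzc:zc-rac> add device
    clzc:zc-rac:device> set match=/dev/md/oraset/*dsk/*
    clzc:zc-rac:device> end
    clzc:zc-rac> add device
    clzc:zc-rac:device> set match=/dev/md/shared/set-number/*dsk/*
    clzc:zc-rac:device> end
    clzc:zc-rac> commit
    clzc:zc-rac> exit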

Next Steps

Ensure that all other storage management schemes that you are using for Oracle files are installed.

After all storage management schemes that you are using for Oracle files are installed, go to Registering and Configuring the RAC Framework Resource Group.

Using VxVM

For information about the types of Oracle files that you can store by using VxVM, see Storage Management Requirements for Oracle Files.


Note –

Using VxVM for Oracle RAC in zone clusters is not supported in this release.


SPARC: How to Use VxVM

To use the VxVM software with Sun Cluster Support for Oracle RAC, perform the following tasks.

  1. If you are using VxVM with the cluster feature, obtain a license for the Volume Manager cluster feature in addition to the basic VxVM license.

    See your VxVM documentation for more information about VxVM licensing requirements.


    Caution –

    Failure to correctly install the license for the Volume Manager cluster feature might cause a panic when you install Oracle RAC support. Before you install the Oracle RAC packages, run the vxlicense -p or vxlicrep command to ensure that you have installed a valid license for the Volume Manager cluster feature.
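
    For example, you might review the license report as follows. The cluster feature is typically reported with a name that includes CVM, but check your VxVM documentation for the exact feature name; on releases that use the older licensing utility, run vxlicense -p instead.


    # vxlicrep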


  2. Install and configure the VxVM software on the cluster nodes.

    See Chapter 5, Installing and Configuring Veritas Volume Manager, in Sun Cluster Software Installation Guide for Solaris OS and the VxVM documentation for more information.

Next Steps

Ensure that all other storage management schemes that you are using for Oracle files are installed.

After all storage management schemes that you are using for Oracle files are installed, go to Registering and Configuring the RAC Framework Resource Group.

Using Hardware RAID Support

For information about the types of Oracle files that you can store by using hardware RAID support, see Storage Management Requirements for Oracle Files.

Sun Cluster provides hardware RAID support for several storage devices. For example, you can use Sun StorEdge™ SE9960 disk arrays with hardware RAID support and without volume manager software. To use this combination, configure raw device identities (/dev/did/rdsk*) on top of the disk arrays' logical unit numbers (LUNs). To set up the raw devices for Oracle RAC on a cluster that uses StorEdge SE9960 disk arrays with hardware RAID, perform the following task.

How to Use Hardware RAID Support

This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

  1. Create LUNs on the disk arrays.

    See the Sun Cluster hardware documentation for information about how to create LUNs.

  2. After you create the LUNs, run the format(1M) command to partition the disk arrays' LUNs into as many slices as you need.

    The following example lists output from the format command.


    # format
    
    0. c0t2d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
       /sbus@3,0/SUNW,fas@3,8800000/sd@2,0
    1. c0t3d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
       /sbus@3,0/SUNW,fas@3,8800000/sd@3,0
    2. c1t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@1/rdriver@5,0
    3. c1t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@1/rdriver@5,1
    4. c2t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@2/rdriver@5,0
    5. c2t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@2/rdriver@5,1
    6. c3t4d2 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@3/rdriver@4,2

    Note –

    To prevent a loss of disk partition information, do not start the partition at cylinder 0 for any disk slice that is used for raw data. The disk partition table is stored in cylinder 0 of the disk.


  3. Determine the raw device identity (DID) that corresponds to the LUNs that you created in Step 1.

    Use the cldevice(1CL) command for this purpose.

    The following example lists output from the cldevice list -v command.


    # cldevice list -v
    
    DID Device     Full Device Path
    ----------     ----------------
    d1             phys-schost-1:/dev/rdsk/c0t2d0
    d2             phys-schost-1:/dev/rdsk/c0t3d0
    d3             phys-schost-2:/dev/rdsk/c4t4d0
    d3             phys-schost-1:/dev/rdsk/c1t5d0
    d4             phys-schost-2:/dev/rdsk/c3t5d0
    d4             phys-schost-1:/dev/rdsk/c2t5d0
    d5             phys-schost-2:/dev/rdsk/c4t4d1
    d5             phys-schost-1:/dev/rdsk/c1t5d1
    d6             phys-schost-2:/dev/rdsk/c3t5d1
    d6             phys-schost-1:/dev/rdsk/c2t5d1
    d7             phys-schost-2:/dev/rdsk/c0t2d0
    d8             phys-schost-2:/dev/rdsk/c0t3d0

    In this example, the cldevice output identifies that the raw DID that corresponds to the disk arrays' shared LUNs is d4.

  4. Obtain the full DID device name that corresponds to the DID device that you identified in Step 3.

    The following example shows the output from the cldevice show command for the DID device that was identified in the example in Step 3. The command is run from node phys-schost-1.


    # cldevice show d4
    
    === DID Device Instances ===                   
    
    DID Device Name:                                /dev/did/rdsk/d4
      Full Device Path:                                phys-schost-1:/dev/rdsk/c2t5d0
      Replication:                                     none
      default_fencing:                                 global

  5. If you are using a zone cluster, configure the DID devices into the zone cluster. Otherwise, go to Step 6.

    For information about configuring DID devices into a zone cluster, see How to Add a DID Device to a Zone Cluster in Sun Cluster Software Installation Guide for Solaris OS.
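
    As a hedged sketch, exporting the DID device d4 that was identified in Step 3 into a zone cluster named zc-rac (a placeholder name) might look like the following when run from the global-cluster voting node.


    # clzc configure zc-rac
    clzc:zc-rac> add device
    clzc:zc-rac:device> set match=/dev/did/*dsk/d4*
    clzc:zc-rac:device> end
    clzc:zc-rac> commit
    clzc:zc-rac> exit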

  6. Create or modify a slice on each DID device to contain the disk-space allocation for the raw device.

    Use the format(1M), fmthard(1M), or prtvtoc(1M) command for this purpose. Specify the full device path from the node where you are running the command to create or modify the slice.

    For example, if you choose to use slice s0, you might allocate 100 Gbytes of disk space in that slice.
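
    One possible non-interactive approach, sketched here for DID device d4 only as an illustration, captures the current label, lets you edit the slice table, and then writes the modified label back. As noted earlier, do not start a slice that is used for raw data at cylinder 0.


    # prtvtoc /dev/did/rdsk/d4s2 > /var/tmp/d4.vtoc
    # vi /var/tmp/d4.vtoc
    # fmthard -s /var/tmp/d4.vtoc /dev/did/rdsk/d4s2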

  7. Change the ownership and permissions of the raw devices that you are using to allow access to these devices.

    To specify the raw device, append sN to the DID device name that you obtained in Step 4, where N is the slice number.

    For example, the cldevice output in Step 4 identifies that the raw DID that corresponds to the disk is /dev/did/rdsk/d4. If you choose to use slice s0 on these devices, specify the raw device /dev/did/rdsk/d4s0.
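
    For example, assuming the DBA user oracle and the DBA group dba, and typical permissions of 660 (follow your Oracle documentation for the values that your installation requires), the commands might look like the following.


    # chown oracle:dba /dev/did/rdsk/d4s0
    # chmod 660 /dev/did/rdsk/d4s0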

Next Steps

Ensure that all other storage management schemes that you are using for Oracle files are installed.

After all storage management schemes that you are using for Oracle files are installed, go to Registering and Configuring the RAC Framework Resource Group.

Using the Sun StorageTek QFS Shared File System

The Sun StorageTek QFS shared file system is always installed in the global-cluster voting node, even when a file system is used by a zone cluster. You configure a specific Sun StorageTek QFS shared file system into a specific zone cluster by using the clzc command. The scalable mount-point resource belongs to that zone cluster. The metadata server resource, SUNW.qfs, belongs to the global cluster.

You must use the Sun StorageTek QFS shared file system with one storage management scheme from the following list:

Distributing Oracle Files Among Sun StorageTek QFS Shared File Systems

You can store all the files that are associated with Oracle RAC on the Sun StorageTek QFS shared file system.

Distribute these files among several file systems as explained in the subsections that follow.

Sun StorageTek QFS File Systems for RDBMS Binary Files and Related Files

For RDBMS binary files and related files, create one file system in the cluster to store the files.

The RDBMS binary files and related files are as follows:

Sun StorageTek QFS File Systems for Database Files and Related Files

For database files and related files, determine whether you require one file system for each database or multiple file systems for each database.


Note –

If you are adding storage for an existing database, you must create additional file systems for the storage that you are adding. In this situation, distribute the database files and related files among the file systems that you will use for the database.


Each file system that you create for database files and related files must have its own metadata server. For information about the resources that are required for the metadata servers, see Resources for the Sun StorageTek QFS Metadata Server.

The database files and related files are as follows:

Optimizing the Performance of the Sun StorageTek QFS Shared File System

For optimum performance with Solaris Volume Manager for Sun Cluster, configure the volume manager and the file system as follows:

Mirroring the LUNs of your disk arrays involves the following operations:

The input/output (I/O) load on your system might be heavy. In this situation, ensure that the LUN for Solaris Volume Manager metadata or hardware RAID metadata maps to a different physical disk than the LUN for data. Mapping these LUNs to different physical disks ensures that contention is minimized.

How to Install and Configure the Sun StorageTek QFS Shared File System

Before You Begin

You might use Solaris Volume Manager metadevices as devices for the shared file systems. In this situation, ensure that the metaset and its metadevices are created and available on all nodes before configuring the shared file systems.

  1. Ensure that the Sun StorageTek QFS software is installed on all nodes of the global cluster where Sun Cluster Support for Oracle RAC is to run.

    For information about how to install Sun StorageTek QFS, see Sun StorageTek QFS Installation and Upgrade Guide, Version 4, Update 6.

  2. Ensure that each Sun StorageTek QFS shared file system is correctly created for use with Sun Cluster Support for Oracle RAC.

    For information about how to create a Sun StorageTek QFS file system, see Sun StorageTek QFS File System Configuration and Administration Guide, Version 4, Update 6.

    For each Sun StorageTek QFS shared file system, set the correct mount options for the types of Oracle files that the file system is to store.

    • For the file system that contains binary files, configuration files, alert files, and trace files, use the default mount options.

    • For the file systems that contain data files, control files, online redo log files, and archived redo log files, set the mount options as follows:

      • In the /etc/vfstab file, set the shared option.

      • In the /etc/opt/SUNWsamfs/samfs.cmd file or the /etc/vfstab file, set the following options:

        fs=fs-name
        stripe=width
        sync_meta=1 (not required for Sun StorageTek QFS shared file system version 4.6)
        mh_write
        qwrite
        forcedirectio
        nstreams=1024 (not required for Sun StorageTek QFS shared file system version 4.6)
        rdlease=300 (set this value for optimum performance)
        wrlease=300 (set this value for optimum performance)
        aplease=300 (set this value for optimum performance)
        
        fs-name

        Specifies the name that uniquely identifies the file system.

        width

        Specifies the required stripe width for devices in the file system. The required stripe width is a multiple of the file system's disk allocation unit (DAU). width must be an integer that is greater than or equal to 1.
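
        The following fragment shows one way these options might be laid out. It is a hedged sketch only; the file system name Data1, the mountpoint /db_qfs/Data1, and the stripe width of 1 are placeholder values.


        Entry in the /etc/vfstab file:

        Data1   -   /db_qfs/Data1   samfs   -   no   shared

        Entries in the /etc/opt/SUNWsamfs/samfs.cmd file:

        fs = Data1
          stripe = 1
          mh_write
          qwrite
          forcedirectio
          sync_meta = 1
          rdlease = 300
          wrlease = 300
          aplease = 300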


      Note –

      Ensure that settings in the /etc/vfstab file do not conflict with settings in the /etc/opt/SUNWsamfs/samfs.cmd file. Settings in the /etc/vfstab file override settings in the /etc/opt/SUNWsamfs/samfs.cmd file.


  3. Mount each Sun StorageTek QFS shared file system that you are using for Oracle files.


    # mount mount-point
    
    mount-point

    Specifies the mountpoint of the file system that you are mounting.

  4. If you are using a zone cluster, configure the Sun StorageTek QFS shared file system into the zone cluster. Otherwise, go to Step 5.

    For information about configuring Sun StorageTek QFS shared file system into a zone cluster, see How to Add a QFS Shared File System to a Zone Cluster in Sun Cluster Software Installation Guide for Solaris OS.
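
    As a hedged sketch (the zone cluster name zc-rac, the file system name Data1, and the mountpoint /db_qfs/Data1 are placeholders), the file system might be added from the global-cluster voting node as follows.


    # clzc configure zc-rac
    clzc:zc-rac> add fs
    clzc:zc-rac:fs> set dir=/db_qfs/Data1
    clzc:zc-rac:fs> set special=Data1
    clzc:zc-rac:fs> set type=samfs
    clzc:zc-rac:fs> end
    clzc:zc-rac> commit
    clzc:zc-rac> exit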

  5. Change the ownership of each file system that you are using for Oracle files as follows:

    • Owner: the database administrator (DBA) user

    • Group: the DBA group

    The DBA user and the DBA group are created as explained in How to Create the DBA Group and the DBA User Accounts.


    # chown user-name:group-name mount-point
    
    user-name

    Specifies the user name of the DBA user. This user is normally named oracle.

    group-name

    Specifies the name of the DBA group. This group is normally named dba.

    mount-point

    Specifies the mountpoint of the file system whose ownership you are changing.


    Note –

    If you have configured the Sun StorageTek QFS shared file system for a zone cluster, perform this step in that zone cluster.


  6. Grant to the owner of each file system whose ownership you changed in Step 5 read access and write access to the file system.


    # chmod u+rw mount-point
    
    mount-point

    Specifies the mountpoint of the file system to whose owner you are granting read access and write access.


    Note –

    If the Sun StorageTek QFS shared file system is configured for a zone cluster, perform this step in that zone cluster.


Next Steps

Ensure that all other storage management schemes that you are using for Oracle files are installed.

After all storage management schemes that you are using for Oracle files are installed, go to Registering and Configuring the RAC Framework Resource Group.

Using ASM

When an Oracle RAC installation in a zone cluster uses ASM, you must configure all the devices needed by that Oracle RAC installation into that zone cluster by using the clzc command. When ASM runs inside a zone cluster, the administration of ASM occurs entirely within the same zone cluster.

For information about the types of Oracle files that you can store by using ASM, see Storage Management Requirements for Oracle Files.

Use ASM with one storage management scheme from the following list:

How to Use ASM With Hardware RAID

This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

  1. On a cluster member, log in as root or become superuser.

  2. Determine the identities of device identity (DID) devices that correspond to shared disks that are available in the cluster.

    Use the cldevice(1CL) command for this purpose.

    The following example shows an extract from output from the cldevice list -v command.


    # cldevice list -v
    DID Device          Full Device Path
    ----------          ----------------
    …
    d5                  phys-schost-3:/dev/rdsk/c3t216000C0FF084E77d0
    d5                  phys-schost-1:/dev/rdsk/c5t216000C0FF084E77d0
    d5                  phys-schost-2:/dev/rdsk/c4t216000C0FF084E77d0
    d5                  phys-schost-4:/dev/rdsk/c2t216000C0FF084E77d0
    d6                  phys-schost-3:/dev/rdsk/c4t216000C0FF284E44d0
    d6                  phys-schost-1:/dev/rdsk/c6t216000C0FF284E44d0
    d6                  phys-schost-2:/dev/rdsk/c5t216000C0FF284E44d0
    d6                  phys-schost-4:/dev/rdsk/c3t216000C0FF284E44d0
    …

    In this example, DID devices d5 and d6 correspond to shared disks that are available in the cluster.

  3. Obtain the full DID device name for each DID device that you are using for the ASM disk group.

    The following example shows the output from the cldevice show command for the DID devices that were identified in the example in Step 2. The command is run from node phys-schost-1.


    # cldevice show d5 d6
    
    === DID Device Instances ===                   
    
    DID Device Name:                                /dev/did/rdsk/d5
      Full Device Path:                                phys-schost-1:/dev/rdsk/c5t216000C0FF084E77d0
      Replication:                                     none
      default_fencing:                                 global

    DID Device Name:                                /dev/did/rdsk/d6
      Full Device Path:                                phys-schost-1:/dev/rdsk/c6t216000C0FF284E44d0
      Replication:                                     none
      default_fencing:                                 global

  4. If you are using a zone cluster, configure the DID devices into the zone cluster. Otherwise, go to Step 5.

    For information about configuring DID devices in a zone cluster, see How to Add a DID Device to a Zone Cluster in Sun Cluster Software Installation Guide for Solaris OS.

  5. Create or modify a slice on each DID device to contain the disk-space allocation for the ASM disk group.

    Use the format(1M), fmthard(1M), or prtvtoc(1M) command for this purpose. Specify the full device path from the node where you are running the command to create or modify the slice.

    For example, if you choose to use slice s0 for the ASM disk group, you might allocate 100 Gbytes of disk space in that slice.

  6. Change the ownership and permissions of the raw devices that you are using for ASM to allow ASM to access these devices.

    To specify the raw device, append sN to the DID device name that you obtained in Step 3, where N is the slice number.

    For example, the cldevice output in Step 3 identifies that the raw DIDs that correspond to the disk are /dev/did/rdsk/d5 and /dev/did/rdsk/d6. If you choose to use slice s0 on these devices, specify the raw devices /dev/did/rdsk/d5s0 and /dev/did/rdsk/d6s0.

    For more information about changing the ownership and permissions of raw devices for use by ASM, see your Oracle documentation.
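
    For example, assuming the Oracle software owner oracle, the group dba, and typical permissions of 660 (check your Oracle documentation for the values that your release requires), the commands might look like the following.


    # chown oracle:dba /dev/did/rdsk/d5s0 /dev/did/rdsk/d6s0
    # chmod 660 /dev/did/rdsk/d5s0 /dev/did/rdsk/d6s0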


    Note –

    If ASM on hardware RAID is configured for a zone cluster, perform this step in that zone cluster.


  7. Modify the ASM_DISKSTRING ASM instance-initialization parameter to specify the devices that you are using for the ASM disk group.

    For example, to use the /dev/did/ path for the ASM disk group, add the value /dev/did/rdsk/d* to the ASM_DISKSTRING parameter. If you are modifying this parameter by editing the Oracle initialization parameter file, edit the parameter as follows:

    ASM_DISKSTRING = '/dev/did/rdsk/d*'

    For more information, see your Oracle documentation.


    Note –

    If ASM on hardware RAID is configured for a zone cluster, perform this step in that zone cluster.


Next Steps

Ensure that all other storage management schemes that you are using for Oracle files are installed.

After all storage management schemes that you are using for Oracle files are installed, go to Registering and Configuring the RAC Framework Resource Group.

Using the Cluster File System

For general information about how to create and mount cluster file systems, see the following documentation:

For information that is specific to the use of the cluster file system with Sun Cluster Support for Oracle RAC, see the subsections that follow.

Types of Oracle Files That You Can Store on the Cluster File System

You can store only the following files that are associated with Oracle RAC on the cluster file system:


Note –

You must not store data files, control files, online redo log files, or Oracle recovery files on the cluster file system.


Optimizing Performance and Availability When Using the Cluster File System

The I/O performance during the writing of archived redo log files is affected by the location of the device group for archived redo log files. For optimum performance, ensure that the primary of the device group for archived redo log files is located on the same node as the Oracle RAC database instance. This device group contains the file system that holds archived redo log files of the database instance.

To improve the availability of your cluster, consider increasing the desired number of secondary nodes for device groups. However, increasing the desired number of secondary nodes for device groups might also impair performance. To increase the desired number of secondary nodes for device groups, change the numsecondaries property. For more information, see Multiported Device Groups in Sun Cluster Concepts Guide for Solaris OS.
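
The following hedged commands, in which the device group name oradg and the node name phys-schost-1 are placeholders, switch the primary of a device group to the node that runs the database instance and then raise the desired number of secondary nodes to 2.


    # cldevicegroup switch -n phys-schost-1 oradg
    # cldevicegroup set -p numsecondaries=2 oradg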

How to Use the Cluster File System

  1. Create and mount the cluster file system.

    See Creating Cluster File Systems in Sun Cluster Software Installation Guide for Solaris OS for information about how to create and mount the cluster file system.

  2. If you are using the UNIX file system (UFS), ensure that you specify the correct mount options for various types of Oracle files.

    For the correct options, see the table that follows. You set these options when you add an entry to the /etc/vfstab file for the mountpoint, as shown in the example after the table.

    File Type                             Options
    ---------                             -------
    Oracle RDBMS binary files             global, logging
    Oracle CRS binary files               global, logging
    Oracle configuration files            global, logging
    System parameter file (SPFILE)        global, logging
    Alert files                           global, logging
    Trace files                           global, logging
    Archived redo log files               global, logging, forcedirectio
    Flashback log files                   global, logging, forcedirectio
    OCR files                             global, logging, forcedirectio
    Oracle CRS voting disk                global, logging, forcedirectio
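
    For example, a hedged /etc/vfstab entry for a cluster file system that holds archived redo log files might look like the following; the metadevice name and the mountpoint are placeholders.


    /dev/md/oradg/dsk/d30  /dev/md/oradg/rdsk/d30  /global/oracle/archive  ufs  2  yes  global,logging,forcedirectio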

Next Steps

Ensure that all other storage management schemes that you are using for Oracle files are installed.

After all storage management schemes that you are using for Oracle files are installed, go to Registering and Configuring the RAC Framework Resource Group.