Sun Cluster Data Service for Oracle RAC Guide for Solaris OS

Chapter 2 Configuring Storage for Oracle Files

This chapter explains how to configure storage for Oracle files.

Summary of Configuration Tasks for Storage for Oracle Files

This section summarizes the tasks for configuring each storage management scheme for Oracle files.

Tasks for Configuring the Sun StorageTek QFS Shared File System for Oracle Files

The following tables summarize the tasks for configuring the Sun StorageTek QFS shared file system and provide cross-references to detailed instructions for performing the tasks. The first table provides information about Oracle RAC running in the global cluster, and the second table provides information about Oracle RAC running in a zone cluster.

Perform these tasks in the order in which they are listed in each table.

Table 2–1 Tasks for Configuring the Sun StorageTek QFS Shared File System for Oracle Files in a Global Cluster

Task 

Instructions 

Install and configure the Sun StorageTek QFS shared file system 

Using the Sun StorageTek QFS Shared File System

Install and configure the other storage management scheme that you are using with the Sun StorageTek QFS shared file system 

If you are using Solaris Volume Manager for Sun Cluster, see Using Solaris Volume Manager for Sun Cluster.

If you are using hardware RAID support, see Using Hardware RAID Support.

Register and configure the RAC framework resource group 

If you are using the clsetup utility for this task, see Registering and Configuring the RAC Framework Resource Group.

If you are using the Sun Cluster maintenance commands for this task, see How to Register and Configure the RAC Framework Resource Group in the Global Cluster by Using Sun Cluster Maintenance Commands.

If you are using Solaris Volume Manager for Sun Cluster, create a multi-owner disk set in Solaris Volume Manager for Sun Cluster for the Oracle RAC database 

How to Create a Multi-Owner Disk Set in Solaris Volume Manager for Sun Cluster for the Oracle RAC Database

Register and configure storage resources for Oracle files 

If you are using the clsetup utility for this task, see Registering and Configuring Storage Resources for Oracle Files.

If you are using the Sun Cluster maintenance commands for this task, see Creating Storage Management Resources by Using Sun Cluster Maintenance Commands.

Table 2–2 Tasks for Configuring the Sun StorageTek QFS Shared File System for Oracle Files in a Zone Cluster

Task 

Instructions 

Install and configure the Sun StorageTek QFS shared file system in the global cluster 

Using the Sun StorageTek QFS Shared File System

Install and configure the other storage management scheme that you are using with the Sun StorageTek QFS shared file system in the global cluster 

If you are using Solaris Volume Manager for Sun Cluster, see Using Solaris Volume Manager for Sun Cluster.

If you are using hardware RAID support, see Using Hardware RAID Support.

Register and configure the RAC framework resource group in the global cluster 

If you are using the clsetup utility for this task, see Registering and Configuring the RAC Framework Resource Group.

If you are using the Sun Cluster maintenance commands for this task, see How to Register and Configure the RAC Framework Resource Group in the Global Cluster by Using Sun Cluster Maintenance Commands.

If you are using Solaris Volume Manager for Sun Cluster, create a multi-owner disk set in Solaris Volume Manager for Sun Cluster for the Oracle RAC database in the global cluster 

How to Create a Multi-Owner Disk Set in Solaris Volume Manager for Sun Cluster for the Oracle RAC Database

Configure the Sun StorageTek QFS shared file system for the zone cluster 

See How to Add a QFS Shared File System to a Zone Cluster in Sun Cluster Software Installation Guide for Solaris OS.

Register and configure the storage resources for Oracle files in the zone cluster 

If you are using the clsetup utility for this task, see Registering and Configuring Storage Resources for Oracle Files.

If you are using the Sun Cluster maintenance commands for this task, see Creating Storage Management Resources by Using Sun Cluster Maintenance Commands.

Tasks for Configuring Solaris Volume Manager for Sun Cluster for Oracle Files

The following tables summarize the tasks for configuring Solaris Volume Manager for Sun Cluster and provide cross-references to detailed instructions for performing the tasks.

Perform these tasks in the order in which they are listed in each table.

Table 2–3 Tasks for Configuring Solaris Volume Manager for Sun Cluster for Oracle Files in a Global Cluster

Task 

Instructions 

Configure Solaris Volume Manager for Sun Cluster 

Using Solaris Volume Manager for Sun Cluster

Register and configure the RAC framework resource group 

If you are using the clsetup utility for this task, see Registering and Configuring the RAC Framework Resource Group.

If you are using Sun Cluster maintenance commands for this task, see How to Register and Configure the RAC Framework Resource Group in the Global Cluster by Using Sun Cluster Maintenance Commands.

Create a multi-owner disk set in Solaris Volume Manager for Sun Cluster for the Oracle RAC database 

How to Create a Multi-Owner Disk Set in Solaris Volume Manager for Sun Cluster for the Oracle RAC Database

Register and configure storage resources for Oracle files 

If you are using the clsetup utility for this task, see Registering and Configuring Storage Resources for Oracle Files.

If you are using the Sun Cluster maintenance commands for this task, see Creating Storage Management Resources by Using Sun Cluster Maintenance Commands.

Table 2–4 Tasks for Configuring Solaris Volume Manager for Sun Cluster for Oracle Files in a Zone Cluster

Task 

Instructions 

Configure Solaris Volume Manager for Sun Cluster in the global cluster 

Using Solaris Volume Manager for Sun Cluster

Register and configure the RAC framework resource group in the global cluster 

If you are using the clsetup utility for this task, see Registering and Configuring the RAC Framework Resource Group.

If you are using Sun Cluster maintenance commands for this task, see How to Register and Configure the RAC Framework Resource Group in the Global Cluster by Using Sun Cluster Maintenance Commands.

Create a multi-owner disk set in Solaris Volume Manager for Sun Cluster for the Oracle RAC database in the global cluster 

How to Create a Multi-Owner Disk Set in Solaris Volume Manager for Sun Cluster for the Oracle RAC Database

Configure Solaris Volume Manager for Sun Cluster devices in the zone cluster 

See How to Add a Disk Set to a Zone Cluster (Solaris Volume Manager) in Sun Cluster Software Installation Guide for Solaris OS.

Register and configure storage resources for Oracle files in the zone cluster 

If you are using the clsetup utility for this task, see Registering and Configuring Storage Resources for Oracle Files.

If you are using the Sun Cluster maintenance commands for this task, see Creating Storage Management Resources by Using Sun Cluster Maintenance Commands.

Tasks for Configuring VxVM for Oracle Files

The following table summarizes the tasks for configuring VxVM and provides cross-references to detailed instructions for performing the tasks.

Perform these tasks in the order in which they are listed in the table.

Table 2–5 Tasks for Configuring VxVM for Oracle Files

Task 

Instructions 

Install and configure VxVM 

Using VxVM

Register and configure the RAC framework resource group 

If you are using the clsetup utility for this task, see Registering and Configuring the RAC Framework Resource Group.

If you are using the Sun Cluster maintenance commands for this task, see How to Register and Configure the RAC Framework Resource Group in the Global Cluster by Using Sun Cluster Maintenance Commands.

Create a VxVM shared-disk group for the Oracle RAC database 

How to Create a VxVM Shared-Disk Group for the Oracle RAC Database

Register and configure storage resources for Oracle files 

If you are using the clsetup utility for this task, see Registering and Configuring Storage Resources for Oracle Files.

If you are using the Sun Cluster maintenance commands for this task, see Creating Storage Management Resources by Using Sun Cluster Maintenance Commands.


Note –

VxVM devices are currently not supported in zone clusters.


Tasks for Configuring Hardware RAID Support for Oracle Files

The following table summarizes the tasks for configuring hardware RAID support and provides cross-references to detailed instructions for performing the tasks.

Table 2–6 Tasks for Configuring Hardware RAID Support for Oracle Files

Task 

Instructions 

Configure hardware RAID support 

Using Hardware RAID Support


Note –

For information about configuring hardware RAID for a zone cluster, see Adding Storage Devices to a Zone Cluster in Sun Cluster Software Installation Guide for Solaris OS.


Tasks for Configuring ASM for Oracle Files

The following table summarizes the tasks for configuring ASM and provides cross-references to detailed instructions for performing the tasks.

Table 2–7 Tasks for Configuring ASM for Oracle Files

Task 

Instructions 

Configure devices for ASM 

Using ASM


Note –

For information about configuring ASM for a zone cluster, see Adding Storage Devices to a Zone Cluster in Sun Cluster Software Installation Guide for Solaris OS.


Tasks for Configuring Qualified NAS Devices for Oracle Files

The following table summarizes the tasks for configuring qualified NAS devices and provides cross-references to detailed instructions for performing the tasks.

Perform these tasks in the order in which they are listed in the table.

Table 2–8 Tasks for Configuring Qualified NAS Devices for Oracle Files

Task 

Instructions 

Install and configure the qualified NAS device 

Sun Cluster 3.1 - 3.2 With Network-Attached Storage Devices Manual for Solaris OS

Register and configure the RAC framework resource group 

If you are using the clsetup utility for this task, see Registering and Configuring the RAC Framework Resource Group.

If you are using the Sun Cluster maintenance commands for this task, see How to Register and Configure the RAC Framework Resource Group in the Global Cluster by Using Sun Cluster Maintenance Commands.

Register and configure storage resources for Oracle files 

If you are using the clsetup utility for this task, see Registering and Configuring Storage Resources for Oracle Files.

If you are using the Sun Cluster maintenance commands for this task, see Creating Storage Management Resources by Using Sun Cluster Maintenance Commands.


Note –

NAS devices are currently not supported in zone clusters.


Tasks for Configuring the Cluster File System for Oracle Files

The following table summarizes the tasks for configuring the cluster file system and provides cross-references to detailed instructions for performing the tasks.

Perform these tasks in the order in which they are listed in the table.

Table 2–9 Tasks for Configuring the Cluster File System for Oracle Files

Task 

Instructions 

Install and configure the cluster file system 

Using the Cluster File System

Register and configure the RAC framework resource group 

If you are using the clsetup utility for this task, see Registering and Configuring the RAC Framework Resource Group.

If you are using the Sun Cluster maintenance commands for this task, see How to Register and Configure the RAC Framework Resource Group in the Global Cluster by Using Sun Cluster Maintenance Commands.


Note –

The cluster file system is currently not supported for Oracle RAC in zone clusters.


Installing Storage Management Software With Sun Cluster Support for Oracle RAC

Install the software for the storage management schemes that you are using for Oracle files. For more information, see Storage Management Requirements for Oracle Files.


Note –

For information about how to install and configure qualified NAS devices with Sun Cluster Support for Oracle RAC, see Sun Cluster 3.1 - 3.2 With Network-Attached Storage Devices Manual for Solaris OS.


Using Solaris Volume Manager for Sun Cluster

Solaris Volume Manager for Sun Cluster is always installed in the global cluster, even when supporting zone clusters. The clzc command configures Solaris Volume Manager for Sun Cluster devices from the global-cluster voting node into the zone cluster. All administration tasks for Solaris Volume Manager for Sun Cluster are performed in the global-cluster voting node, even when the Solaris Volume Manager for Sun Cluster volume is used in a zone cluster.

When an Oracle RAC installation inside a zone cluster uses a file system that exists on top of a Solaris Volume Manager for Sun Cluster volume, you should still configure the Solaris Volume Manager for Sun Cluster volume in the global cluster. In this case, the scalable device group resource belongs to this zone cluster.

When an Oracle RAC installation inside a zone cluster runs directly on the Solaris Volume Manager for Sun Cluster volume, you must first configure the Solaris Volume Manager for Sun Cluster in the global cluster and then configure the Solaris Volume Manager for Sun Cluster volume into the zone cluster. In this case, the scalable device group belongs to this zone cluster.
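
For example, a minimal sketch of configuring a Solaris Volume Manager for Sun Cluster disk set into a zone cluster with the clzc command might look like the following. The zone cluster name rac-zc, the disk set name oraset, and the set number 1 are placeholders; How to Add a Disk Set to a Zone Cluster (Solaris Volume Manager) in Sun Cluster Software Installation Guide for Solaris OS is the authoritative procedure.

    # clzc configure rac-zc
    clzc:rac-zc> add device
    clzc:rac-zc:device> set match=/dev/md/oraset/*dsk/*
    clzc:rac-zc:device> end
    clzc:rac-zc> add device
    clzc:rac-zc:device> set match=/dev/md/shared/1/*dsk/*
    clzc:rac-zc:device> end
    clzc:rac-zc> commit
    clzc:rac-zc> exit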

For information about the types of Oracle files that you can store by using Solaris Volume Manager for Sun Cluster, see Storage Management Requirements for Oracle Files.

How to Use Solaris Volume Manager for Sun Cluster

To use the Solaris Volume Manager for Sun Cluster software with Sun Cluster Support for Oracle RAC, perform the following tasks.

  1. Ensure that you are using Solaris 9 9/04, Solaris 10, or compatible versions.

    Solaris Volume Manager for Sun Cluster is installed during the installation of the Solaris Operating System.

  2. Configure the Solaris Volume Manager for Sun Cluster software on the cluster nodes.

    For information about configuring Solaris Volume Manager for Sun Cluster in the global cluster, see Configuring Solaris Volume Manager Software in Sun Cluster Software Installation Guide for Solaris OS.

  3. If you are using a zone cluster, configure the Solaris Volume Manager for Sun Cluster volume into the zone cluster.

    For information on configuring Solaris Volume Manager for Sun Cluster volume into a zone cluster, see How to Add a Disk Set to a Zone Cluster (Solaris Volume Manager) in Sun Cluster Software Installation Guide for Solaris OS.

Next Steps

Ensure that all other storage management schemes that you are using for Oracle files are installed.

After all storage management schemes that you are using for Oracle files are installed, go to Registering and Configuring the RAC Framework Resource Group.

Using VxVM

For information about the types of Oracle files that you can store by using VxVM, see Storage Management Requirements for Oracle Files.


Note –

Using VxVM for Oracle RAC in zone clusters is not supported in this release.


SPARC: How to Use VxVM

To use the VxVM software with Sun Cluster Support for Oracle RAC, perform the following tasks.

  1. If you are using VxVM with the cluster feature, obtain a license for the Volume Manager cluster feature in addition to the basic VxVM license.

    See your VxVM documentation for more information about VxVM licensing requirements.


    Caution –

    Failure to correctly install the license for the Volume Manager cluster feature might cause a panic when you install Oracle RAC support. Before you install the Oracle RAC packages, run the vxlicense -p or vxlicrep command to ensure that you have installed a valid license for the Volume Manager cluster feature.
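
    For example, a hedged way to check that the license report mentions the cluster feature; the exact report format varies by VxVM release:

    # vxlicrep | grep -i cluster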


  2. Install and configure the VxVM software on the cluster nodes.

    See Chapter 5, Installing and Configuring Veritas Volume Manager, in Sun Cluster Software Installation Guide for Solaris OS and the VxVM documentation for more information.

Next Steps

Ensure that all other storage management schemes that you are using for Oracle files are installed.

After all storage management schemes that you are using for Oracle files are installed, go to Registering and Configuring the RAC Framework Resource Group.

Using Hardware RAID Support

For information about the types of Oracle files that you can store by using hardware RAID support, see Storage Management Requirements for Oracle Files.

Sun Cluster provides hardware RAID support for several storage devices. For example, you can use Sun StorEdge™ SE9960 disk arrays with hardware RAID support and without volume manager software. To use this combination, configure raw device identities (/dev/did/rdsk*) on top of the disk arrays' logical unit numbers (LUNs). To set up the raw devices for Oracle RAC on a cluster that uses StorEdge SE9960 disk arrays with hardware RAID, perform the following task.

How to Use Hardware RAID Support

This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

  1. Create LUNs on the disk arrays.

    See the Sun Cluster hardware documentation for information about how to create LUNs.

  2. After you create the LUNs, run the format(1M) command to partition the disk arrays' LUNs into as many slices as you need.

    The following example lists output from the format command.


    # format
    
    0. c0t2d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
       /sbus@3,0/SUNW,fas@3,8800000/sd@2,0
    1. c0t3d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
       /sbus@3,0/SUNW,fas@3,8800000/sd@3,0
    2. c1t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@1/rdriver@5,0
    3. c1t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@1/rdriver@5,1
    4. c2t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@2/rdriver@5,0
    5. c2t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@2/rdriver@5,1
    6. c3t4d2 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@3/rdriver@4,2

    Note –

    To prevent a loss of disk partition information, do not start the partition at cylinder 0 for any disk slice that is used for raw data. The disk partition table is stored in cylinder 0 of the disk.


  3. Determine the raw device identity (DID) that corresponds to the LUNs that you created in Step 1.

    Use the cldevice(1CL) command for this purpose.

    The following example lists output from the cldevice list -v command.


    # cldevice list -v
    
    DID Device     Full Device Path
    ----------     ----------------
    d1             phys-schost-1:/dev/rdsk/c0t2d0
    d2             phys-schost-1:/dev/rdsk/c0t3d0
    d3             phys-schost-2:/dev/rdsk/c4t4d0
    d3             phys-schost-1:/dev/rdsk/c1t5d0
    d4             phys-schost-2:/dev/rdsk/c3t5d0
    d4             phys-schost-1:/dev/rdsk/c2t5d0
    d5             phys-schost-2:/dev/rdsk/c4t4d1
    d5             phys-schost-1:/dev/rdsk/c1t5d1
    d6             phys-schost-2:/dev/rdsk/c3t5d1
    d6             phys-schost-1:/dev/rdsk/c2t5d1
    d7             phys-schost-2:/dev/rdsk/c0t2d0
    d8             phys-schost-2:/dev/rdsk/c0t3d0

In this example, the cldevice output identifies d4 as one of the raw DIDs that correspond to the disk arrays' shared LUNs.

  4. Obtain the full DID device name that corresponds to the DID device that you identified in Step 3.

The following example shows the output from the cldevice show command for the DID device that was identified in the example in Step 3. The command is run from node phys-schost-1.


    # cldevice show d4
    
    === DID Device Instances ===                   
    
    DID Device Name:                                /dev/did/rdsk/d4
      Full Device Path:                                phys-schost-1:/dev/rdsk/c2t5d0
      Replication:                                     none
      default_fencing:                                 global
  5. If you are using a zone cluster, configure the DID devices into the zone cluster. Otherwise, go to Step 6.

    For information about configuring DID devices into a zone cluster, see How to Add a DID Device to a Zone Cluster in Sun Cluster Software Installation Guide for Solaris OS.

  6. Create or modify a slice on each DID device to contain the disk-space allocation for the raw device.

    Use the format(1M), fmthard(1M), or prtvtoc(1M) command for this purpose. Specify the full device path from the node where you are running the command to create or modify the slice.

    For example, if you choose to use slice s0, you might allocate 100 Gbytes of disk space in slice s0.
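
    As a hedged sketch, you might copy the existing VTOC with prtvtoc, edit it so that slice s0 has the required size and does not start at cylinder 0, and reapply it with fmthard. The file name /tmp/d4.vtoc and DID device d4 are placeholders:

    # prtvtoc /dev/did/rdsk/d4s2 > /tmp/d4.vtoc
    (edit /tmp/d4.vtoc as required)
    # fmthard -s /tmp/d4.vtoc /dev/did/rdsk/d4s2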

  7. Change the ownership and permissions of the raw devices that you are using to allow access to these devices.

    To specify the raw device, append sN to the DID device name that you obtained in Step 4, where N is the slice number.

    For example, the cldevice output in Step 4 identifies that the raw DID that corresponds to the disk is /dev/did/rdsk/d4. If you choose to use slice s0 on these devices, specify the raw device /dev/did/rdsk/d4s0.
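
    As a hedged sketch, assuming slice s0 on DID device d4 and the typical oracle user and dba group:

    # chown oracle:dba /dev/did/rdsk/d4s0
    # chmod 660 /dev/did/rdsk/d4s0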

Next Steps

Ensure that all other storage management schemes that you are using for Oracle files are installed.

After all storage management schemes that you are using for Oracle files are installed, go to Registering and Configuring the RAC Framework Resource Group.

Using the Sun StorageTek QFS Shared File System

The Sun StorageTek QFS shared file system is always installed in the global-cluster voting node, even when a file system is used by a zone cluster. You configure a specific Sun StorageTek QFS shared file system into a specific zone cluster by using the clzc command. The scalable mount-point resource belongs to this zone cluster. The metadata server resource, SUNW.qfs, belongs to the global cluster.
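
For example, a hedged sketch of configuring a Sun StorageTek QFS shared file system into a zone cluster with the clzc command; the zone cluster name rac-zc, the mount point /db, and the family set name oradata are placeholders, and How to Add a QFS Shared File System to a Zone Cluster in Sun Cluster Software Installation Guide for Solaris OS is the authoritative procedure:

    # clzc configure rac-zc
    clzc:rac-zc> add fs
    clzc:rac-zc:fs> set dir=/db
    clzc:rac-zc:fs> set special=oradata
    clzc:rac-zc:fs> set type=samfs
    clzc:rac-zc:fs> end
    clzc:rac-zc> commit
    clzc:rac-zc> exit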

You must use the Sun StorageTek QFS shared file system with one storage management scheme from the following list:

Distributing Oracle Files Among Sun StorageTek QFS Shared File Systems

You can store all the files that are associated with Oracle RAC on the Sun StorageTek QFS shared file system.

Distribute these files among several file systems as explained in the subsections that follow.

Sun StorageTek QFS File Systems for RDBMS Binary Files and Related Files

For RDBMS binary files and related files, create one file system in the cluster to store the files.

The RDBMS binary files and related files are as follows:

Sun StorageTek QFS File Systems for Database Files and Related Files

For database files and related files, determine whether you require one file system for each database or multiple file systems for each database.


Note –

If you are adding storage for an existing database, you must create additional file systems for the storage that you are adding. In this situation, distribute the database files and related files among the file systems that you will use for the database.


Each file system that you create for database files and related files must have its own metadata server. For information about the resources that are required for the metadata servers, see Resources for the Sun StorageTek QFS Metadata Server.

The database files and related files are as follows:

Optimizing the Performance of the Sun StorageTek QFS Shared File System

For optimum performance with Solaris Volume Manager for Sun Cluster, configure the volume manager and the file system as follows:

Mirroring the LUNs of your disk arrays involves the following operations:

The input/output (I/O) load on your system might be heavy. In this situation, ensure that the LUN for Solaris Volume Manager metadata or hardware RAID metadata maps to a different physical disk than the LUN for data. Mapping these LUNs to different physical disks ensures that contention is minimized.

How to Install and Configure the Sun StorageTek QFS Shared File System

Before You Begin

You might use Solaris Volume Manager metadevices as devices for the shared file systems. In this situation, ensure that the metaset and its metadevices are created and available on all nodes before configuring the shared file systems.

  1. Ensure that the Sun StorageTek QFS software is installed on all nodes of the global cluster where Sun Cluster Support for Oracle RAC is to run.

    For information about how to install Sun StorageTek QFS, see Sun StorageTek QFS Installation and Upgrade Guide, Version 4, Update 6.

  2. Ensure that each Sun StorageTek QFS shared file system is correctly created for use with Sun Cluster Support for Oracle RAC.

    For information about how to create a Sun StorageTek QFS file system, see Sun StorageTek QFS File System Configuration and Administration Guide, Version 4, Update 6.

    For each Sun StorageTek QFS shared file system, set the correct mount options for the types of Oracle files that the file system is to store.

    • For the file system that contains binary files, configuration files, alert files, and trace files, use the default mount options.

    • For the file systems that contain data files, control files, online redo log files, and archived redo log files, set the mount options as follows:

      • In the /etc/vfstab file, set the shared option.

      • In the /etc/opt/SUNWsamfs/samfs.cmd file or the /etc/vfstab file, set the following options:

        fs=fs-name
        stripe=width
        sync_meta=1 (not required for Sun StorageTek QFS shared file system version 4.6)
        mh_write
        qwrite
        forcedirectio
        nstreams=1024 (not required for Sun StorageTek QFS shared file system version 4.6)
        rdlease=300 (set this value for optimum performance)
        wrlease=300 (set this value for optimum performance)
        aplease=300 (set this value for optimum performance)
        
        fs-name

        Specifies the name that uniquely identifies the file system.

        width

        Specifies the required stripe width for devices in the file system. The required stripe width is a multiple of the file system's disk allocation unit (DAU). width must be an integer that is greater than or equal to 1.


      Note –

      Ensure that settings in the /etc/vfstab file do not conflict with settings in the /etc/opt/SUNWsamfs/samfs.cmd file. Settings in the /etc/vfstab file override settings in the /etc/opt/SUNWsamfs/samfs.cmd file.

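      For example, hedged sample entries for a data-file file system; the family set name oradata and mount point /oradata are placeholders. In /etc/vfstab (a single line):

      oradata  -  /oradata  samfs  -  no  shared

      In /etc/opt/SUNWsamfs/samfs.cmd:

      fs=oradata
        stripe=1
        mh_write
        qwrite
        forcedirectio
        rdlease=300
        wrlease=300
        aplease=300
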

  3. Mount each Sun StorageTek QFS shared file system that you are using for Oracle files.


    # mount mount-point
    
    mount-point

    Specifies the mountpoint of the file system that you are mounting.

  4. If you are using a zone cluster, configure the Sun StorageTek QFS shared file system into the zone cluster. Otherwise, go to Step 5.

    For information about configuring Sun StorageTek QFS shared file system into a zone cluster, see How to Add a QFS Shared File System to a Zone Cluster in Sun Cluster Software Installation Guide for Solaris OS.

  5. Change the ownership of each file system that you are using for Oracle files as follows:

    • Owner: the database administrator (DBA) user

    • Group: the DBA group

    The DBA user and the DBA group are created as explained in How to Create the DBA Group and the DBA User Accounts.


    # chown user-name:group-name mount-point
    
    user-name

    Specifies the user name of the DBA user. This user is normally named oracle.

    group-name

    Specifies the name of the DBA group. This group is normally named dba.

    mount-point

    Specifies the mountpoint of the file system whose ownership you are changing.


    Note –

    If you have configured the Sun StorageTek QFS shared file system for a zone cluster, perform this step in that zone cluster.


  6. Grant read access and write access to the owner of each file system whose ownership you changed in Step 5.


    # chmod u+rw mount-point
    
    mount-point

    Specifies the mountpoint of the file system to whose owner you are granting read access and write access.


    Note –

    If the Sun StorageTek QFS shared file system is configured for a zone cluster, perform this step in that zone cluster.


Next Steps

Ensure that all other storage management schemes that you are using for Oracle files are installed.

After all storage management schemes that you are using for Oracle files are installed, go to Registering and Configuring the RAC Framework Resource Group.

Using ASM

When an Oracle RAC installation in a zone cluster uses ASM, you must configure all the devices needed by that Oracle RAC installation into that zone cluster by using the clzc command. When ASM runs inside a zone cluster, the administration of ASM occurs entirely within the same zone cluster.
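
For example, a hedged sketch of configuring DID devices d5 and d6 into a zone cluster named rac-zc (a placeholder) for use by ASM; How to Add a DID Device to a Zone Cluster in Sun Cluster Software Installation Guide for Solaris OS is the authoritative procedure:

    # clzc configure rac-zc
    clzc:rac-zc> add device
    clzc:rac-zc:device> set match=/dev/did/*dsk/d5*
    clzc:rac-zc:device> end
    clzc:rac-zc> add device
    clzc:rac-zc:device> set match=/dev/did/*dsk/d6*
    clzc:rac-zc:device> end
    clzc:rac-zc> commit
    clzc:rac-zc> exit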

For information about the types of Oracle files that you can store by using ASM, see Storage Management Requirements for Oracle Files.

Use ASM with one storage management scheme from the following list:

How to Use ASM With Hardware RAID

This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

  1. On a cluster member, log in as root or become superuser.

  2. Determine the device identity (DID) devices that correspond to shared disks that are available in the cluster.

    Use the cldevice(1CL) command for this purpose.

    The following example shows an extract of the output from the cldevice list -v command.


    # cldevice list -v
    DID Device          Full Device Path
    ----------          ----------------
    …
    d5                  phys-schost-3:/dev/rdsk/c3t216000C0FF084E77d0
    d5                  phys-schost-1:/dev/rdsk/c5t216000C0FF084E77d0
    d5                  phys-schost-2:/dev/rdsk/c4t216000C0FF084E77d0
    d5                  phys-schost-4:/dev/rdsk/c2t216000C0FF084E77d0
    d6                  phys-schost-3:/dev/rdsk/c4t216000C0FF284E44d0
    d6                  phys-schost-1:/dev/rdsk/c6t216000C0FF284E44d0
    d6                  phys-schost-2:/dev/rdsk/c5t216000C0FF284E44d0
    d6                  phys-schost-4:/dev/rdsk/c3t216000C0FF284E44d0
    …

    In this example, DID devices d5 and d6 correspond to shared disks that are available in the cluster.

  3. Obtain the full DID device name for each DID device that you are using for the ASM disk group.

    The following example shows the output from the cldevice show command for the DID devices that were identified in the example in Step 2. The command is run from node phys-schost-1.


    # cldevice show d5 d6
    
    === DID Device Instances ===                   
    
    DID Device Name:                                /dev/did/rdsk/d5
      Full Device Path:                                phys-schost-1:/dev/rdsk/c5t216000C0FF084E77d0
      Replication:                                     none
      default_fencing:                                 global
    
    DID Device Name:                                /dev/did/rdsk/d6
      Full Device Path:                                phys-schost-1:/dev/rdsk/c6t216000C0FF284E44d0
      Replication:                                     none
      default_fencing:                                 global
  4. If you are using a zone cluster, configure the DID devices into the zone cluster. Otherwise, go to Step 5.

    For information about configuring DID devices in a zone cluster, see How to Add a DID Device to a Zone Cluster in Sun Cluster Software Installation Guide for Solaris OS.

  5. Create or modify a slice on each DID device to contain the disk-space allocation for the ASM disk group.

    Use the format(1M), fmthard(1M), or prtvtoc(1M) command for this purpose. Specify the full device path from the node where you are running the command to create or modify the slice.

    For example, if you choose to use slice s0 for the ASM disk group, you might allocate 100 Gbytes of disk space in slice s0.

  6. Change the ownership and permissions of the raw devices that you are using for ASM to allow access by ASM to these devices.

    To specify the raw device, append sN to the DID device name that you obtained in Step 3, where N is the slice number.

    For example, the cldevice output in Step 3 identifies that the raw DIDs that correspond to the disks are /dev/did/rdsk/d5 and /dev/did/rdsk/d6. If you choose to use slice s0 on these devices, specify the raw devices /dev/did/rdsk/d5s0 and /dev/did/rdsk/d6s0.

    For more information about changing the ownership and permissions of raw devices for use by ASM, see your Oracle documentation.
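
    As a hedged sketch, assuming the typical oracle user and dba group; the exact ownership and mode that ASM requires are described in your Oracle documentation:

    # chown oracle:dba /dev/did/rdsk/d5s0 /dev/did/rdsk/d6s0
    # chmod 660 /dev/did/rdsk/d5s0 /dev/did/rdsk/d6s0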


    Note –

    If ASM on hardware RAID is configured for a zone cluster, perform this step in that zone cluster.


  7. Modify the ASM_DISKSTRING ASM instance-initialization parameter to specify the devices that you are using for the ASM disk group.

    For example, to use the /dev/did/ path for the ASM disk group, add the value /dev/did/rdsk/d* to the ASM_DISKSTRING parameter. If you are modifying this parameter by editing the Oracle initialization parameter file, edit the parameter as follows:

    ASM_DISKSTRING = '/dev/did/rdsk/d*'

    For more information, see your Oracle documentation.


    Note –

    If ASM on hardware RAID is configured for a zone cluster, perform this step in that zone cluster.


Next Steps

Ensure that all other storage management schemes that you are using for Oracle files are installed.

After all storage management schemes that you are using for Oracle files are installed, go to Registering and Configuring the RAC Framework Resource Group.

Using the Cluster File System

For general information about how to create and mount cluster file systems, see the following documentation:

For information that is specific to the use of the cluster file system with Sun Cluster Support for Oracle RAC, see the subsections that follow.

Types of Oracle Files That You Can Store on the Cluster File System

You can store only the following files that are associated with Oracle RAC on the cluster file system:


Note –

You must not store data files, control files, online redo log files, or Oracle recovery files on the cluster file system.


Optimizing Performance and Availability When Using the Cluster File System

The I/O performance during the writing of archived redo log files is affected by the location of the device group for archived redo log files. For optimum performance, ensure that the primary of the device group for archived redo log files is located on the same node as the Oracle RAC database instance. This device group contains the file system that holds archived redo log files of the database instance.

To improve the availability of your cluster, consider increasing the desired number of secondary nodes for device groups. However, increasing the desired number of secondary nodes for device groups might also impair performance. To increase the desired number of secondary nodes for device groups, change the numsecondaries property. For more information, see Multiported Device Groups in Sun Cluster Concepts Guide for Solaris OS.
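
For example, a hedged sketch that sets the desired number of secondary nodes to 2 for a hypothetical device group named oradg:

    # cldevicegroup set -p numsecondaries=2 oradg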

How to Use the Cluster File System

  1. Create and mount the cluster file system.

    See Creating Cluster File Systems in Sun Cluster Software Installation Guide for Solaris OS for information about how to create and mount the cluster file system.

  2. If you are using the UNIX file system (UFS), ensure that you specify the correct mount options for various types of Oracle files.

    For the correct options, see the table that follows. You set these options when you add an entry to the /etc/vfstab file for the mountpoint.

    File Type 

    Options 

    Oracle RDBMS binary files

    global, logging

    Oracle CRS binary files

    global, logging

    Oracle configuration files

    global, logging

    System parameter file (SPFILE)

    global, logging

    Alert files

    global, logging

    Trace files

    global, logging

    Archived redo log files

    global, logging, forcedirectio

    Flashback log files

    global, logging, forcedirectio

    OCR files

    global, logging, forcedirectio

    Oracle CRS voting disk

    global, logging, forcedirectio
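
    For example, a hypothetical /etc/vfstab entry for a UFS cluster file system that holds archived redo log files; the metadevice and mount point are placeholders:

    /dev/md/oradg/dsk/d1  /dev/md/oradg/rdsk/d1  /global/oracle/archive  ufs  2  yes  global,logging,forcedirectio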

Next Steps

Ensure that all other storage management schemes that you are using for Oracle files are installed.

After all storage management schemes that you are using for Oracle files are installed, go to Registering and Configuring the RAC Framework Resource Group.

Registering and Configuring the RAC Framework Resource Group

Registering and configuring the RAC framework resource group enables Oracle RAC to run with Sun Cluster.


Note –

You must register and configure the RAC framework resource group. Otherwise, Oracle RAC cannot run with Sun Cluster software.


On the Solaris 9 OS, only one RAC framework resource group can exist on a machine. On versions of the Solaris 10 OS where the zone cluster feature is supported, multiple RAC framework resource groups can exist. The RAC framework resource group in the global-cluster voting node supports any volume manager that RAC uses anywhere on the machine, including the global cluster and all zone clusters. The RAC framework resource group in the global-cluster voting node can also support any Oracle RAC installation that runs in the global cluster. The RAC framework resource group in a zone cluster supports the Oracle RAC installation that runs in that specific zone cluster.
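
As a hedged quick check, you can list the resource types that are registered in the cluster to see whether the RAC framework types are already present; the grep pattern is illustrative:

    # clresourcetype list | grep rac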

Tools for Registering and Configuring the RAC Framework Resource Group

Sun Cluster provides the following tools for registering and configuring the RAC framework resource group:

The clsetup utility and Sun Cluster Manager each provide a wizard for configuring resources for the RAC framework resource group. The wizards reduce the possibility of configuration errors that might result from command syntax errors or omissions. These wizards also ensure that all required resources are created and that all required dependencies between resources are set.


Note –

The Sun Cluster Manager and clsetup utility run only in the global-cluster voting node of the global cluster.


How to Register and Configure the RAC Framework Resource Group by Using clsetup

When you register and configure the RAC framework resource group for a cluster, the RAC framework resource group is created.

Perform this procedure during your initial setup of Sun Cluster Support for Oracle RAC. Perform this procedure from one node only.

This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.


Note –

The following instructions explain how to perform this operation by using the clsetup utility.


Before You Begin

Ensure that the following prerequisites are met:

Ensure that you have the following information:

  1. Become superuser on any cluster node.

  2. Start the clsetup utility.


    # clsetup
    

    The clsetup main menu is displayed.

  3. Type the number that corresponds to the option for data services and press Return.

    The Data Services menu is displayed.

  4. Type the number that corresponds to the option for configuring Sun Cluster Support for Oracle RAC and press Return.

    The clsetup utility displays information about Sun Cluster Support for Oracle RAC.

  5. Press Return to continue.

    The clsetup utility prompts you to select whether you are performing the initial configuration of Sun Cluster Support for Oracle RAC or administering an existing configuration.


    Note –

    The clsetup utility currently supports ongoing administration only of a RAC framework that is running in the global cluster. For ongoing administration of a RAC framework that is configured in a zone cluster, use the Sun Cluster maintenance commands.


  6. Type the number that corresponds to the option for performing the initial configuration of Sun Cluster Support for Oracle RAC and press Return.

    The clsetup utility displays a list of components of Oracle RAC to configure.

  7. Type the number that corresponds to the option for the RAC framework resource group and press Return.

    The clsetup utility prompts you to select the location of the Oracle RAC cluster. This location can be the global cluster or a zone cluster.

  8. Type the number that corresponds to the option for the location of the Oracle RAC cluster and press Return.

    • If you have selected the global cluster option, the clsetup utility displays the list of components of Oracle RAC to configure. Go to Step 10.

    • If you have selected the zone cluster option, the clsetup utility prompts you to select the required zone cluster. Go to Step 9.

  9. Type the number that corresponds to the option for the required zone cluster and press Return.

    The clsetup utility displays a list of components of Oracle RAC to configure.

  10. Type the number that corresponds to the option for the component of Oracle RAC and press Return.

    The clsetup utility displays the list of prerequisites for performing this task.

  11. Verify that the prerequisites are met, and press Return.

    The clsetup utility displays a list of the cluster nodes on which the Sun Cluster Support for Oracle RAC packages are installed.

  12. Select the nodes where you require Sun Cluster Support for Oracle RAC to run.

    • To accept the default selection of all listed nodes in an arbitrary order, type a and press Return.

    • To select a subset of the listed nodes, type a comma-separated or space-separated list of the numbers that correspond to the nodes and press Return.

      Ensure that the nodes are listed in the order in which the nodes are to appear in the RAC framework resource group's node list.

    • To select all nodes in a particular order, type a comma-separated or space-separated ordered list of the numbers that correspond to the nodes and press Return.

      Ensure that the nodes are listed in the order in which the nodes are to appear in the RAC framework resource group's node list.

  13. To confirm your selection of nodes, type d and press Return.

    The clsetup utility displays a list of storage management schemes for Oracle files.

  14. Type the numbers that correspond to the storage management schemes that you are using for Oracle files and press Return.

  15. To confirm your selection of storage management schemes, type d and press Return.

    The clsetup utility displays the names of the Sun Cluster objects that the utility will create.

  16. If you require a different name for any Sun Cluster objects, change each name as follows.

    1. Type the number that corresponds to the name that you are changing and press Return.

      The clsetup utility displays a screen where you can specify the new name.

    2. At the New Value prompt, type the new name and press Return.

    The clsetup utility returns you to the list of the names of the Sun Cluster objects that the utility will create.

  17. To confirm your selection of Sun Cluster object names, type d and press Return.

    The clsetup utility displays information about the Sun Cluster configuration that the utility will create.

  18. To create the configuration, type c and press Return.

    The clsetup utility displays a progress message to indicate that the utility is running commands to create the configuration. When configuration is complete, the clsetup utility displays the commands that the utility ran to create the configuration.

  19. Press Return to continue.

    The clsetup utility returns you to the list of options for configuring Sun Cluster Support for Oracle RAC.

  20. (Optional) Type q and press Return repeatedly until you quit the clsetup utility.

    If you prefer, you can leave the clsetup utility running while you perform other required tasks before using the utility again. If you choose to quit clsetup, the utility recognizes your existing RAC framework resource group when you restart the utility.

  21. Determine if the RAC framework resource group and its resources are online.

    Use the clresourcegroup(1CL) utility for this purpose. By default, the clsetup utility assigns the name rac-framework-rg to the RAC framework resource group.

    • If you are using the global cluster, type the following command.


      # clresourcegroup status rac-framework-rg
      
    • If you are using a zone cluster, type the following command.


      # clresourcegroup status -Z zoneclustername rac-framework-rg
      
  22. If the RAC framework resource group and its resources are not online, bring them online.

    • If you are using the global cluster, type the following command.


      # clresourcegroup online rac-framework-rg
      
    • If you are using a zone cluster, type the following command.


      # clresourcegroup online -Z zoneclustername rac-framework-rg
      
Resource Configuration

The following table lists the default resource configuration that the clsetup utility creates when you complete this task.

Resource Name, Resource Type, and Resource Group 

Dependencies 

Description 

Resource type: SUNW.rac_framework

Resource name: rac-framework-rs

Resource group: rac-framework-rg

None. 

RAC framework resource. 

SPARC:

Resource type: SUNW.rac_udlm

Resource name: rac-udlm-rs

Resource group: rac-framework-rg

Strong dependency on the RAC framework resource. 

Oracle UDLM resource. 

Resource type: SUNW.rac_svm

Resource name: rac-svm-rs

Resource group: rac-framework-rg

Strong dependency on the RAC framework resource. 

Solaris Volume Manager for Sun Cluster resource. Created only if Solaris Volume Manager for Sun Cluster was selected. 

Resource type: SUNW.rac_cvm

Resource name: rac-cvm-rs

Resource group: rac-framework-rg

Strong dependency on the RAC framework resource. 

VxVM resource. Created only if VxVM was selected. 


Note –

For a zone cluster, the framework resources are created based on the storage management scheme that you select. For detailed information about the resource configuration for zone clusters, see the figures in Appendix A, Sample Configurations of This Data Service.


Next Steps

The next step depends on the volume manager that you are using, as shown in the following table.

Volume Manager 

Next Step 

Solaris Volume Manager for Sun Cluster 

How to Create a Multi-Owner Disk Set in Solaris Volume Manager for Sun Cluster for the Oracle RAC Database

VxVM with the cluster feature 

How to Create a VxVM Shared-Disk Group for the Oracle RAC Database

None 

Registering and Configuring Storage Resources for Oracle Files

Creating a Global Device Group for the Oracle RAC Database

If you are using a volume manager for Oracle database files, the volume manager requires a global device group for the Oracle RAC database to use.

The type of global device group to create depends on the volume manager that you are using:

How to Create a Multi-Owner Disk Set in Solaris Volume Manager for Sun Cluster for the Oracle RAC Database


Note –

Perform this task only if you are using Solaris Volume Manager for Sun Cluster.


If you are using Solaris Volume Manager for Sun Cluster, Solaris Volume Manager requires a multi-owner disk set for the Oracle RAC database, the Sun StorageTek QFS shared file system, or ASM to use. For information about Solaris Volume Manager for Sun Cluster multi-owner disk sets, see Multi-Owner Disk Set Concepts in Solaris Volume Manager Administration Guide.

This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Before You Begin

Note the following points.

  1. Create a multi-owner disk set.

    Use the metaset(1M) command for this purpose.


    # metaset -s setname -M -a -h nodelist
    
    -s setname

    Specifies the name of the disk set that you are creating.

    -M

    Specifies that the disk set that you are creating is a multi-owner disk set.

    -a

    Specifies that the nodes that the -h option specifies are to be added to the disk set.

    -h nodelist

    Specifies a space-separated list of nodes that are to be added to the disk set. The Sun Cluster Support for Oracle RAC software packages must be installed on each node in the list.

  2. Add global devices to the disk set that you created in Step 1.


    # metaset -s setname -a devicelist
    
    -s setname

    Specifies that you are modifying the disk set that you created in Step 1.

    -a

    Specifies that the devices that devicelist specifies are to be added to the disk set.

    devicelist

    Specifies a space-separated list of full device ID path names for the global devices that are to be added to the disk set. To enable consistent access to each device from any node in the cluster, ensure that each device ID path name is of the form /dev/did/dsk/dN, where N is the device number.

  3. For the disk set that you created in Step 1, create the volumes that the Oracle RAC database or Sun StorageTek QFS shared file system will use.


    Tip –

    If you are creating many volumes for Oracle data files, you can simplify this step by using soft partitions. However, if you are using the Sun StorageTek QFS shared file system and the I/O load on your system is heavy, use separate partitions for data and metadata. Otherwise, the performance of your system might be impaired. For information about soft partitions, see Chapter 12, Soft Partitions (Overview), in Solaris Volume Manager Administration Guide and Chapter 13, Soft Partitions (Tasks), in Solaris Volume Manager Administration Guide.


    Create each volume by concatenating slices on global devices that you added in Step 2. Use the metainit(1M) command for this purpose.


    # metainit -s setname volume-abbrev numstripes width slicelist
    
    -s setname

    Specifies that you are creating a volume for the disk set that you created in Step 1.

    volume-abbrev

    Specifies the abbreviated name of the volume that you are creating. An abbreviated volume name has the format dV, where V is the volume number.

    numstripes

    Specifies the number of stripes in the volume.

    width

    Specifies the number of slices in each stripe. If you set width to greater than 1, the slices are striped.

    slicelist

    Specifies a space-separated list of slices that the volume contains. Each slice must reside on a global device that you added in Step 2.

  4. If you are using mirrored devices, create the mirrors by using volumes that you created in Step 3 as submirrors.

    If you are not using mirrored devices, omit this step.

    Use the metainit command to create each mirror as follows:


    # metainit -s setname mirror -m submirror-list
    
    -s setname

    Specifies that you are creating a mirror for the disk set that you created in Step 1.

    mirror

    Specifies the name of the mirror that you are creating in the form of an abbreviated volume name. An abbreviated volume name has the format dV, where V is the volume number.

    submirror-list

    Specifies a space-separated list of submirrors that the mirror is to contain. Each submirror must be a volume that you created in Step 3. Specify the name of each submirror in the form of an abbreviated volume name.


    Note –

    For information about configuring a Solaris Volume Manager disk set in a zone cluster, see How to Add a Disk Set to a Zone Cluster (Solaris Volume Manager) in Sun Cluster Software Installation Guide for Solaris OS.


  5. Verify that each node is correctly added to the multi-owner disk set.

    Use the metaset command for this purpose.


    # metaset -s setname
    
    -s setname

    Specifies that you are verifying the disk set that you created in Step 1.

    This command displays a table that contains the following information for each node that is correctly added to the disk set:

    • The Host column contains the node name.

    • The Owner column contains the text multi-owner.

    • The Member column contains the text Yes.

  6. Verify that the multi-owner disk set is correctly configured.


    # cldevicegroup show setname
    
    setname

    Specifies that configuration information only for the disk set that you created in Step 1 is displayed.

    This command displays the device group information for the disk set. For a multi-owner disk set, the device group type is Multi-owner_SVM.

  7. Verify the online status of the multi-owner disk set.


    # cldevicegroup status setname
    

    This command displays the status of the multi-owner disk set on each node in the multi-owner disk set.

  8. (Configurations without the Sun StorageTek QFS shared file system only) On each node that can own the disk set, change the ownership of each volume that you created in Step 3 as follows:

    • Owner: the DBA user

    • Group: the DBA group

    The DBA user and the DBA group are created as explained in How to Create the DBA Group and the DBA User Accounts.

    If you are using the Sun StorageTek QFS shared file system, omit this step.

    Ensure that you change ownership only of volumes that the Oracle RAC database will use.


    # chown user-name:group-name volume-list
    
    user-name

    Specifies the user name of the DBA user. This user is normally named oracle.

    group-name

    Specifies the name of the DBA group. This group is normally named dba.

    volume-list

    Specifies a space-separated list of the logical names of the volumes that you created for the disk set. The format of these names depends on the type of device where the volume resides, as follows:

    • For block devices: /dev/md/setname/dsk/dV

    • For raw devices: /dev/md/setname/rdsk/dV

    The replaceable items in these names are as follows:

    setname

    Specifies the name of the multi-owner disk set that you created in Step 1.

    V

    Specifies the volume number of a volume that you created in Step 3.

    Ensure that this list specifies each volume that you created in Step 3.


    Note –

    For a zone cluster, perform this step in the zone cluster.


  9. (Configurations without the Sun StorageTek QFS shared file system only) Grant to the owner of each volume whose ownership you changed in Step 8 read access and write access to the volume.

    If you are using the Sun StorageTek QFS shared file system, omit this step.

    Grant access to the volume on each node that can own the disk set.

    Ensure that you change access permissions only of volumes that the Oracle RAC database will use.


    # chmod u+rw volume-list
    
    volume-list

    Specifies a space-separated list of the logical names of the volumes to whose owners you are granting read access and write access. Ensure that this list contains the volumes that you specified in Step 8.


    Note –

    For a zone cluster, perform this step in the zone cluster.


  10. If you are using ASM, specify the raw devices that you are using for the ASM disk group.

    To specify the devices, modify the ASM_DISKSTRING ASM instance-initialization parameter.

    For example, to use the /dev/md/setname/rdsk/d path for the ASM disk group, add the value /dev/md/*/rdsk/d* to the ASM_DISKSTRING parameter. If you are modifying this parameter by editing the Oracle initialization parameter file, edit the parameter as follows:

    ASM_DISKSTRING = '/dev/md/*/rdsk/d*'

    If you are using mirrored devices, specify external redundancy in the ASM configuration.

    For more information, see your Oracle documentation.


Example 2–1 Creating a Multi-Owner Disk Set in Solaris Volume Manager for Sun Cluster

This example shows the sequence of operations that is required to create a multi-owner disk set in Solaris Volume Manager for Sun Cluster for a four-node cluster. The disk set uses mirrored devices.

The disk set is to be used with the Sun StorageTek QFS shared file system. This example does not show the creation of the Sun StorageTek QFS shared file system on the devices that are added to the disk set.

  1. To create the multi-owner disk set, the following command is run:


    # metaset -s oradg -M -a -h pclus1 pclus2 pclus3 pclus4
    

    The multi-owner disk set is named oradg. The nodes pclus1, pclus2, pclus3, and pclus4 are added to this disk set.

  2. To add global devices to the disk set, the following command is run:


    # metaset -s oradg -a /dev/did/dsk/d8 /dev/did/dsk/d9 /dev/did/dsk/d15 \
    /dev/did/dsk/d16
    

    The preceding command adds the following global devices to the disk set:

    • /dev/did/dsk/d8

    • /dev/did/dsk/d9

    • /dev/did/dsk/d15

    • /dev/did/dsk/d16

  3. To create volumes for the disk set, the following commands are run:


    # metainit -s oradg d10 1 1 /dev/did/dsk/d9s0
    # metainit -s oradg d11 1 1 /dev/did/dsk/d16s0
    # metainit -s oradg d20 1 1 /dev/did/dsk/d8s0
    # metainit -s oradg d21 1 1 /dev/did/dsk/d15s0
    

    Each volume is created by a one-to-one concatenation of a slice, as shown in the following table. The slices are not striped.

    Volume     Slice
    d10        /dev/did/dsk/d9s0
    d11        /dev/did/dsk/d16s0
    d20        /dev/did/dsk/d8s0
    d21        /dev/did/dsk/d15s0

  4. To create mirrors for the disk set, the following commands are run:


    # metainit -s oradg d1 -m d10 d11
    # metainit -s oradg d2 -m d20 d21
    

    The preceding commands create a mirror that is named d1 from volumes d10 and d11, and a mirror that is named d2 from volumes d20 and d21.

  5. To verify that each node is correctly added to the multi-owner disk set, the following command is run:


    # metaset -s oradg
    Multi-owner Set name = oradg, Set number = 1, Master = pclus2
    
    Host                Owner          Member
      pclus1             multi-owner   Yes
      pclus2             multi-owner   Yes
      pclus3             multi-owner   Yes
      pclus4             multi-owner   Yes
    
    Drive Dbase
    
    d8    Yes
    d9    Yes
    d15   Yes
    d16   Yes
  6. To verify that the multi-owner disk set is correctly configured, the following command is run:


    # cldevicegroup show oradg
    === Device Groups ===                          
    
    Device Group Name:                              oradg
      Type:                                            Multi-owner_SVM
      failback:                                        false
      Node List:                                       pclus1, pclus2, pclus3, pclus4
      preferenced:                                     false
      numsecondaries:                                  0
      diskset name:                                    oradg
  7. To verify the online status of the multi-owner disk set, the following command is run:


    # cldevicegroup status oradg
    
    === Cluster Device Groups ===
    
    --- Device Group Status ---
    
    Device Group Name     Primary     Secondary     Status
    -----------------     -------     ---------     ------
    
    
    --- Multi-owner Device Group Status ---
    
    Device Group Name           Node Name           Status
    -----------------           ---------           ------
    oradg                       pclus1              Online
                                pclus2              Online
                                pclus3              Online
                                pclus4              Online

Next Steps

Go to Registering and Configuring Storage Resources for Oracle Files.

How to Create a VxVM Shared-Disk Group for the Oracle RAC Database


Note –

Perform this task only if you are using VxVM with the cluster feature.


If you are using VxVM with the cluster feature, VxVM requires a shared-disk group for the Oracle RAC database or ASM to use.

Before You Begin

Note the following points.

  1. Use Veritas commands that are provided for creating a VxVM shared-disk group.

    For information about VxVM shared-disk groups, see your VxVM documentation. A minimal command sketch appears after this list.

  2. If you are using ASM, specify the raw devices that you are using for the ASM disk group.

    To specify the devices, modify the ASM_DISKSTRING ASM instance-initialization parameter.

    For example, to use raw volumes in the shared-disk group for the ASM disk group, add a value such as /dev/vx/rdsk/dg-name/* to the ASM_DISKSTRING parameter, where dg-name is the name of the shared-disk group. If you are modifying this parameter by editing the Oracle initialization parameter file, edit the parameter as follows:

    ASM_DISKSTRING = '/dev/vx/rdsk/dg-name/*'

    If you are using mirrored devices, specify external redundancy in the ASM configuration.

    For more information, see your Oracle documentation.
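
The following is a minimal sketch of creating a shared-disk group and a mirrored volume. The disk-group name ora-sharedg, the disk media names ora01 and ora02, the device names c1t1d0 and c1t2d0, and the volume name oravol01 are all hypothetical. Run the vxdg command from the CVM master node, which you can identify with the vxdctl command:

    # vxdctl -c mode
    # vxdg -s init ora-sharedg ora01=c1t1d0 ora02=c1t2d0
    # vxassist -g ora-sharedg make oravol01 2g layout=mirror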

Next Steps

Go to Registering and Configuring Storage Resources for Oracle Files.

Registering and Configuring Storage Resources for Oracle Files

Storage resources provide fault monitoring and automatic fault recovery for global device groups and file systems.

If you are using global device groups or shared file systems for Oracle files, configure storage resources to manage the availability of the storage on which the Oracle software depends.

Configure storage resources for the following types of global device groups:

    • Multi-owner disk sets in Solaris Volume Manager for Sun Cluster

    • Shared-disk groups in VxVM with the cluster feature

Configure storage resources for the following types of shared file systems:

    • The Sun StorageTek QFS shared file system

    • A shared file system on a qualified NAS device

Tools for Registering and Configuring Storage Resources for Oracle Files

Sun Cluster provides the following tools for registering and configuring storage resources for Oracle files:

    • The clsetup utility

    • Sun Cluster Manager

    • Sun Cluster maintenance commands

The clsetup utility and Sun Cluster Manager each provide a wizard for configuring storage resources for Oracle files. The wizards reduce the possibility of configuration errors that might result from command syntax errors or omissions. These wizards also ensure that all required resources are created and that all required dependencies between resources are set.

How to Register and Configure Storage Resources for Oracle Files by Using clsetup

This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Perform this procedure from only one node of the cluster.

Before You Begin

Ensure that the following prerequisites are met:

Ensure that you have the following information:

  1. On one node of the cluster, become superuser.

  2. Start the clsetup utility.


    # clsetup
    

    The clsetup main menu is displayed.

  3. Type the number that corresponds to the option for data services and press Return.

    The Data Services menu is displayed.

  4. Type the number that corresponds to the option for configuring Sun Cluster Support for Oracle RAC and press Return.

    The clsetup utility displays information about Sun Cluster Support for Oracle RAC.

  5. Press Return to continue.

    The clsetup utility prompts you to select whether you are performing the initial configuration of Sun Cluster Support for Oracle RAC or administering an existing configuration.

  6. Type the number that corresponds to the option for performing the initial configuration of Sun Cluster Support for Oracle RAC and press Return.

    The clsetup utility displays a list of components of Oracle RAC to configure.

  7. Type the number that corresponds to the option for storage resources for Oracle files and press Return.

    The clsetup utility prompts you to select the location of the Oracle RAC cluster. This location can be the global cluster or a zone cluster.

  8. Type the number that corresponds to the option for the location of the Oracle RAC cluster and press Return.

    • If you have selected the global cluster option, the clsetup utility displays the list of components of Oracle RAC to configure. Go to Step 10.

    • If you have selected the zone cluster option, the clsetup utility prompts you to select the required zone cluster. Go to Step 9.

  9. Type the number that corresponds to the option for the required zone cluster and press Return.

    The clsetup utility displays the list of components of Oracle RAC to configure.

  10. Type the number that corresponds to the option for the component of Oracle RAC and press Return.

    The clsetup utility displays the list of prerequisites for performing this task.

  11. Verify that the prerequisites are met, and press Return.

    The response of the clsetup utility depends on how the RAC framework resource group was configured.

    • By using the clsetup wizard or the Sun Cluster Manager wizard. The clsetup utility displays a list of the resources for scalable device groups that are configured on the cluster. If no suitable resources exist, this list is empty.

    • By using the scsetup utility or Sun Cluster maintenance commands. The clsetup utility displays a list of storage management schemes for Oracle files.

  12. If you are prompted to select storage management schemes for Oracle files, select the schemes.

    If you are prompted for resources for scalable device groups, omit this step.

    1. Type the numbers that correspond to the storage management schemes that you are using for Oracle files and press Return.

    2. To confirm your selection of storage management schemes, type d and press Return.

      The clsetup utility displays a list of the resources for scalable device groups that are configured on the cluster. If no suitable resources exist, this list is empty.

  13. If no suitable resources exist, or if no resource exists for a device group that you are using, add a resource to the list.

    If resources exist for all the device groups that you are using, omit this step.

    For each resource that you are adding, perform the following steps:

    1. Type c and press Return.

      The clsetup utility displays a list of the scalable device groups that are configured on the cluster.

    2. Type the number that corresponds to the device group that you are using for Oracle files and press Return.

      After you select the device group, you can either select the entire disk group or specify individual logical devices, or disks, within the disk group.

    3. Choose whether you want to specify logical devices.

      • If you choose to specify logical devices, type yes. Go to Step 4.

      • If you want to select the entire disk group, type no. Go to Step 5.

    4. Type a comma-separated list of the numbers that correspond to the logical devices or disks that you are using, or type a for all, and press Return.

      The clsetup utility returns you to the list of resources for scalable device groups that are configured on the cluster.

    5. To confirm your selection of device groups, type d and press Return.

      The clsetup utility returns you to the list of the resources for scalable device groups that are configured on the cluster. The resource that you are creating is added to the list.

  14. If a suitable existing resource that you intend to use is not listed, type r to refresh the list.

  15. When the list contains resources for all the device groups that you are using, type the numbers that correspond to the resources that you require.

    You can select existing resources, resources that are not yet created, or a combination of existing resources and new resources. If you select more than one existing resource, the selected resources must be in the same resource group.

  16. To confirm your selection of resources for device groups, type d and press Return.

    The clsetup utility displays a list of the resources for shared file-system mountpoints that are configured on the cluster. If no suitable resources exist, this list is empty.

  17. If no suitable resources exist, or if no resource exists for a file-system mountpoint that you are using, add a resource to the list.

    If resources exist for all the file-system mountpoints that you are using, omit this step.

    For each resource that you are adding, perform the following steps:

    1. Type c and press Return.

      The clsetup utility displays a list of the shared file systems that are configured on the cluster.

    2. Type a comma-separated or space-separated list of numbers that correspond to the file systems that you are using for Oracle files and press Return.

    3. To confirm your selection of file systems, type d and press Return.

      The clsetup utility returns you to the list of the resources for file-system mountpoints that are configured on the cluster. The resource that you are creating is added to the list.

  18. If a suitable existing resource that you intend to use is not listed, type r to refresh the list.

  19. When the list contains resources for all the file-system mountpoints that you are using, type the numbers that correspond to the resources that you require.

    You can select existing resources, resources that are not yet created, or a combination of existing resources and new resources. If you select more than one existing resource, the selected resources must be in the same resource group.

  20. To confirm your selection of resources for file-system mountpoints, type d and press Return.

    The clsetup utility displays the names of the Sun Cluster objects that the utility will create or add to your configuration.

  21. If you need to modify a Sun Cluster object that the utility will create, modify the object as follows:

    1. Type the number that corresponds to the Sun Cluster object that you are modifying and press Return.

      The clsetup utility displays a list of properties that are set for the object.

    2. Modify each property that you are changing as follows:

      1. Type the number that corresponds to the property that you are changing and press Return.

        The clsetup utility prompts you for the new value.

      2. At the prompt, type the new value and press Return.

        The clsetup utility returns you to the list of properties that are set for the object.

    3. When you have modified all the properties that you need to change, type d.

      The clsetup utility returns you to the list of the names of the Sun Cluster objects that the utility will create or add to your configuration.

  22. When you have modified all the Sun Cluster objects that you need to change, type d.

    The clsetup utility displays information about the RAC framework resource group for which storage resources will be configured.

  23. To create the configuration, type c and press Return.

    The clsetup utility displays a progress message to indicate that the utility is running commands to create the configuration. When configuration is complete, the clsetup utility displays the commands that the utility ran to create the configuration.

  24. Press Return to continue.

    The clsetup utility returns you to the list of options for configuring Sun Cluster Support for Oracle RAC.

  25. (Optional) Type q and press Return repeatedly until you quit the clsetup utility.

    If you prefer, you can leave the clsetup utility running while you perform other required tasks before using the utility again. If you choose to quit clsetup, the utility recognizes your existing RAC framework resource group when you restart the utility.

  26. Determine if the resource groups that the wizard created are online.


    # clresourcegroup status
    
  27. If a resource group that the wizard created is not online, bring the resource group online.

    For each resource group that you are bringing online, type the following command:


    # clresourcegroup online -emM rac-storage-rg
    
    rac-storage-rg

    Specifies the name of the resource group that you are bringing online.
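
    For example, to bring online a storage resource group that is named scalmnt-rg, as in the default configuration that is described in Resource Configuration, you might type:

    # clresourcegroup online -emM scalmnt-rg

    The -e option enables the resources in the group, the -m option enables resource monitoring, and the -M option places the resource group in the managed state.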

Resource Configuration

The following list describes the default resource configuration that the clsetup utility creates when you complete this task.

Resource type: SUNW.ScalDeviceGroup

Resource name: scaldg-name-rs, where dg-name is the name of the device group that the resource represents

Resource group: scaldg-rg

Dependencies: A strong dependency on the resource in the RAC framework resource group for the volume manager that is associated with the device group: either the Solaris Volume Manager for Sun Cluster resource or the VxVM resource.

Description: Scalable device-group resource. One resource is created for each scalable device group that you are using for Oracle files.

Resource type: SUNW.qfs

Resource name: qfs-mp-dir-rs, where mp-dir is the mountpoint of the file system, with each / replaced by -

Resource group: qfsmds-rg

Dependencies: A strong dependency on the scalable wait_zc_boot resource and on the scalable device-group resource, if any. If you are using Sun StorageTek QFS without a volume manager, this resource does not depend on any other resources.

Description: Resource for the Sun StorageTek QFS metadata server. One resource is created for each Sun StorageTek QFS shared file system that you are using for Oracle files.

Resource type: SUNW.ScalMountPoint

Resource name: scal-mp-dir-rs, where mp-dir is the mountpoint of the file system, with each / replaced by -

Resource group: scalmnt-rg

Dependencies: A strong dependency on the resource for the Sun StorageTek QFS metadata server, if any, and an offline-restart dependency on the scalable device-group resource, if any. If you are using a file system on a qualified NAS device without a volume manager, this resource does not depend on any other resources.

Description: Scalable file-system mountpoint resource. One resource is created for each shared file system that you are using for Oracle files.

Resource type: SUNW.wait_zc_boot

Resource name: wait-zc-rs, where zc is the name of the zone cluster

Resource group: scalmnt-rg

Dependencies: None

Description: Resource that ensures that the Sun StorageTek QFS shared file systems that are configured for a zone cluster are mounted only after the zone cluster has booted.
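
To verify the dependencies that are set on a resource that the utility created, you can display the resource's dependency properties. The resource names in the following sketch are illustrative only:

    # clresource show -p Resource_dependencies scal-oradg-rs
    # clresource show -p Resource_dependencies_offline_restart scal-db-qfs-Data-rs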


Note –

For detailed information about the resource configuration for zone clusters, see the figures in Appendix A, Sample Configurations of This Data Service.


Next Steps

Go to Chapter 3, Enabling Oracle RAC to Run in a Cluster.