Sun Cluster 3.1 Data Service for Oracle Parallel Server/Real Application Clusters

Installing Volume Management Software With Sun Cluster Support for Oracle Parallel Server/Real Application Clusters

Configure the disks that Sun Cluster Support for Oracle Parallel Server/Real Application Clusters uses in one of the following ways.

How to Use VxVM

To use the VxVM software with Sun Cluster Support for Oracle Parallel Server/Real Application Clusters, perform the following tasks.

  1. Obtain a license for the Volume Manager cluster feature in addition to the basic VxVM license.

    See your VxVM documentation for more information about VxVM licensing requirements.


    Caution –

    Failure to correctly install the license for the Volume Manager cluster feature might result in a panic when you install Oracle Parallel Server/Real Application Clusters support. Before you install the Oracle Parallel Server/Real Application Clusters packages, run the vxlicense -p command to verify that you have installed a valid license for the Volume Manager cluster feature.
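
    The following minimal check assumes that the cluster feature license is already installed. Run it on each cluster node and confirm that the output lists a valid, unexpired license for the cluster feature; the exact feature name in the output depends on your VxVM release.


    # vxlicense -p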


  2. Install and configure the VxVM software on the cluster nodes.

    See the VxVM appendix in the Sun Cluster 3.1 Software Installation Guide and the VxVM documentation for more information.
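
    As a rough sketch only, not the authoritative procedure, installation typically involves adding the VxVM packages on each node and then running the initial configuration. The package name shown is an assumption that might differ in your VxVM release; follow the Sun Cluster 3.1 Software Installation Guide for the supported method.


    # pkgadd -d . VRTSvxvm
    # vxinstall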

  3. Use VERITAS commands to create a separate shared disk group for the Oracle Parallel Server/Real Application Clusters database. See your VxVM documentation for details about shared disk groups; a command sketch follows the list below.

    Before you create the shared disk group, note the following points.

    • Do not register the shared disk group within the cluster.

    • Do not create any file systems in the shared disk group because only raw data files will use this disk group.

    • Create volumes with the gen usage type.

    • Disks that you add to the shared disk group must be directly attached to all of the cluster nodes.

    • Ensure that your VxVM license is current. If your license expires, the node will panic.
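
    The following sketch assumes two hypothetical shared devices (c1t5d0 and c1t5d1), a hypothetical shared disk group named rac_dg, and a hypothetical 500-Mbyte gen volume named raw_data for one raw data file. Run the commands from the cluster volume manager master node and substitute your own device names, disk group name, volume names, and sizes; see your VxVM documentation for the authoritative procedure.


    # vxdisksetup -i c1t5d0
    # vxdisksetup -i c1t5d1
    # vxdg -s init rac_dg rac01=c1t5d0 rac02=c1t5d1
    # vxassist -g rac_dg -U gen make raw_data 500m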

How to Use Hardware RAID Support

You can use Sun Cluster Support for Oracle Parallel Server/Real Application Clusters with hardware RAID support.

For example, you can use Sun StorEdge™ A3500/A3500FC disk arrays with hardware RAID support and without VxVM software. To do so, configure raw device IDs (/dev/did/rdsk*) on top of the disk arrays' logical unit numbers (LUNs). To set up the raw devices for Oracle Parallel Server/Real Application Clusters on a cluster that uses StorEdge A3500/A3500FC disk arrays with hardware RAID, perform the following steps.

  1. Create LUNs on the disk arrays.

    See the Sun Cluster 3.1 Hardware Guide for information on how to create LUNs.

  2. After you create the LUNs, run the format(1M) command to partition the disk arrays' LUNs into as many slices as you need.

    The following example lists output from the format command.


    # format
    
    0. c0t2d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
       /sbus@3,0/SUNW,fas@3,8800000/sd@2,0
    1. c0t3d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
       /sbus@3,0/SUNW,fas@3,8800000/sd@3,0
    2. c1t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@1/rdriver@5,0
    3. c1t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@1/rdriver@5,1
    4. c2t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@2/rdriver@5,0
    5. c2t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@2/rdriver@5,1
    6. c3t4d2 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@3/rdriver@4,2

    Note –

    If you use slice 0, do not start the partition at cylinder 0.
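
    To confirm that a slice does not begin at cylinder 0, you can print the disk label and check the first sector of each slice. The following is a verification sketch, assuming the first StorEdge LUN from the format listing above (c1t5d0):


    # prtvtoc /dev/rdsk/c1t5d0s2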


  3. Run the scdidadm(1M) command to find the raw device ID (DID) that corresponds to the LUNs that you created in Step 1.

    The following example lists output from the scdidadm -L command.


    # scdidadm -L
    
    1        phys-schost-1:/dev/rdsk/c0t2d0   /dev/did/rdsk/d1
    1        phys-schost-2:/dev/rdsk/c0t2d0   /dev/did/rdsk/d1
    2        phys-schost-1:/dev/rdsk/c0t3d0   /dev/did/rdsk/d2
    2        phys-schost-2:/dev/rdsk/c0t3d0   /dev/did/rdsk/d2
    3        phys-schost-2:/dev/rdsk/c4t4d0   /dev/did/rdsk/d3
    3        phys-schost-1:/dev/rdsk/c1t5d0   /dev/did/rdsk/d3
    4        phys-schost-2:/dev/rdsk/c3t5d0   /dev/did/rdsk/d4
    4        phys-schost-1:/dev/rdsk/c2t5d0   /dev/did/rdsk/d4
    5        phys-schost-2:/dev/rdsk/c4t4d1   /dev/did/rdsk/d5
    5        phys-schost-1:/dev/rdsk/c1t5d1   /dev/did/rdsk/d5
    6        phys-schost-2:/dev/rdsk/c3t5d1   /dev/did/rdsk/d6
    6        phys-schost-1:/dev/rdsk/c2t5d1   /dev/did/rdsk/d6

  4. Use the DID that the scdidadm output identifies to set up the raw devices.

    For example, the scdidadm output might identify that the raw DID that corresponds to one of the disk array LUNs is d4. In this instance, use the /dev/did/rdsk/d4sN raw device name, where N is the slice number.
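
    For example, if the database uses slice 1 of that LUN, the raw device path is /dev/did/rdsk/d4s1. The following quick check, a sketch rather than a required step, confirms that the DID device exists on a node:


    # ls -lL /dev/did/rdsk/d4s1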