Sun Cluster 3.0 U1 Data Services Installation and Configuration Guide

Installing Volume Management Software With Sun Cluster HA for Oracle Parallel Server

You can use either of the following configurations for Sun Cluster HA for Oracle Parallel Server disks: VxVM with the cluster feature, or Sun StorEdge A3500/A3500FC disk arrays with hardware RAID support.

How to Use VxVM

To use the VxVM software with the Sun Cluster HA for Oracle Parallel Server data service, perform the following tasks.

  1. Obtain a license for the Volume Manager cluster feature in addition to the basic VxVM license.

    See your VxVM documentation for more information about VxVM licensing requirements.


    Caution -

    Failure to install a valid license for the Volume Manager cluster feature might result in a panic when you install OPS support. Before you install the OPS packages, run the vxlicense command to verify that you have installed a valid license for the Volume Manager cluster feature.


  2. Install and configure the VxVM software on the cluster nodes.

    See the VxVM appendix in the Sun Cluster 3.0 U1 Installation Guide and the VxVM documentation for more information.
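
The license check from the caution in Step 1 can be sketched as a small shell filter. This is a minimal sketch, not part of Sun Cluster or VxVM: the grep pattern is an assumption, because the exact wording of the vxlicense output varies by VxVM release, so adjust it to match your installation.

```shell
# Hedged sketch: scan a license listing (for example, the output of
# "vxlicense -p") for the Volume Manager cluster feature. The pattern
# "cluster" is an assumed match string; verify it against your output.
has_cluster_license() {
  grep -i "cluster" >/dev/null
}

# Example invocation on a cluster node:
#   vxlicense -p | has_cluster_license || \
#     echo "WARNING: no cluster feature license detected" >&2
```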

How to Use Sun StorEdge A3500/A3500FC Disk Arrays With Hardware RAID Support

If you use StorEdge A3500/A3500FC disk arrays with hardware RAID support and without VxVM software, configure raw device IDs (/dev/did/rdsk*) on top of the disk arrays' logical unit numbers (LUNs).

To set up the raw devices for OPS on a cluster that uses StorEdge A3500/A3500FC disk arrays with hardware RAID, perform the following steps.

  1. Create LUNs on the disk arrays.

    See the Sun Cluster 3.0 U1 Hardware Guide for information on how to create LUNs.

  2. After you create the LUNs, run the format(1M) command to partition the disk arrays' LUNs into as many slices as you need.

    The following example lists output from the format command.


    # format
    
    0. c0t2d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
       /sbus@3,0/SUNW,fas@3,8800000/sd@2,0
    1. c0t3d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
       /sbus@3,0/SUNW,fas@3,8800000/sd@3,0
    2. c1t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@1/rdriver@5,0
    3. c1t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@1/rdriver@5,1
    4. c2t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@2/rdriver@5,0
    5. c2t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@2/rdriver@5,1
    6. c3t4d2 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@3/rdriver@4,2

    Note -

    If you use slice 0, do not start the partition at cylinder 0.
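
    The note above can be checked mechanically. The following is a minimal sketch, not a Sun Cluster tool, assuming prtvtoc(1M)-style output (partition number in column 1, first sector in column 4); pipe the VTOC listing for a LUN through it to flag a slice 0 that begins at the start of the disk.

```shell
# Hedged sketch: read prtvtoc(1M)-style output on stdin and warn if slice 0
# starts at sector 0, which places it at cylinder 0. This is a simplification:
# any first sector inside the first cylinder (heads x sectors-per-track
# sectors) also starts at cylinder 0, so adjust the check to your geometry.
check_slice0() {
  awk '$1 == "0" && $4 == "0" { print "WARNING: slice 0 starts at cylinder 0" }'
}

# Example invocation on a LUN's backup slice:
#   prtvtoc /dev/rdsk/c1t5d0s2 | check_slice0
```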


  3. Run the scdidadm(1M) command to find the raw device ID (DID) that corresponds to the LUNs that you created in Step 1.

    The following example lists output from the scdidadm -L command.


    # scdidadm -L
    
    1        phys-schost-1:/dev/rdsk/c0t2d0   /dev/did/rdsk/d1
    1        phys-schost-2:/dev/rdsk/c0t2d0   /dev/did/rdsk/d1
    2        phys-schost-1:/dev/rdsk/c0t3d0   /dev/did/rdsk/d2
    2        phys-schost-2:/dev/rdsk/c0t3d0   /dev/did/rdsk/d2
    3        phys-schost-2:/dev/rdsk/c4t4d0   /dev/did/rdsk/d3
    3        phys-schost-1:/dev/rdsk/c1t5d0   /dev/did/rdsk/d3
    4        phys-schost-2:/dev/rdsk/c3t5d0   /dev/did/rdsk/d4
    4        phys-schost-1:/dev/rdsk/c2t5d0   /dev/did/rdsk/d4
    5        phys-schost-2:/dev/rdsk/c4t4d1   /dev/did/rdsk/d5
    5        phys-schost-1:/dev/rdsk/c1t5d1   /dev/did/rdsk/d5
    6        phys-schost-2:/dev/rdsk/c3t5d1   /dev/did/rdsk/d6
    6        phys-schost-1:/dev/rdsk/c2t5d1   /dev/did/rdsk/d6
  4. Use the DID that the scdidadm output identifies to set up the raw devices.

    For example, the scdidadm output might identify that the raw DID that corresponds to the disk arrays' LUNs is d4. In this instance, use the /dev/did/rdsk/d4sx raw device, where x is the slice number.
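
The naming rule in Step 4 can be sketched as a trivial shell helper; the function name is illustrative and not part of Sun Cluster. To find the DID for a particular controller device, you can also filter the listing from Step 3, for example scdidadm -L | grep c2t5d0.

```shell
# Minimal sketch (illustrative helper, not a Sun Cluster command): compose
# the raw DID device path for a given DID instance number and slice number,
# following the /dev/did/rdsk/dNsX naming convention shown above.
did_raw_path() {
  printf '/dev/did/rdsk/d%ss%s\n' "$1" "$2"
}

# Example: did_raw_path 4 1 prints /dev/did/rdsk/d4s1
```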