For information about the types of Oracle files that you can store by using hardware RAID support, see Storage Management Requirements for Oracle Files.
Sun Cluster provides hardware RAID support for several storage devices. For example, you can use Sun StorEdge™ SE9960 disk arrays with hardware RAID support and without volume manager software. To use this combination, configure raw device identities (/dev/did/rdsk*) on top of the disk arrays' logical unit numbers (LUNs). To set up the raw devices for Oracle RAC on a cluster that uses StorEdge SE9960 disk arrays with hardware RAID, perform the following steps.
This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
Create LUNs on the disk arrays.
See the Sun Cluster hardware documentation for information about how to create LUNs.
After you create the LUNs, run the format(1M) command to partition the disk arrays' LUNs into as many slices as you need.
The following example lists output from the format command.
# format
0. c0t2d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
   /sbus@3,0/SUNW,fas@3,8800000/sd@2,0
1. c0t3d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
   /sbus@3,0/SUNW,fas@3,8800000/sd@3,0
2. c1t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@1/rdriver@5,0
3. c1t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@1/rdriver@5,1
4. c2t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@2/rdriver@5,0
5. c2t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@2/rdriver@5,1
6. c3t4d2 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@3/rdriver@4,2
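If the disk list is long, you can print it noninteractively and filter for the arrays' LUNs. This is only a convenience sketch; it assumes the Symbios-StorEDGEA3000 product string that appears in the example output above.

# echo | format | grep StorEDGE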
Determine the raw device identity (DID) that corresponds to the LUNs that you created in Step 1.
Use the cldevice(1CL) command for this purpose.
The following example lists output from the cldevice list -v command.
# cldevice list -v
DID Device          Full Device Path
----------          ----------------
d1                  phys-schost-1:/dev/rdsk/c0t2d0
d2                  phys-schost-1:/dev/rdsk/c0t3d0
d3                  phys-schost-2:/dev/rdsk/c4t4d0
d3                  phys-schost-1:/dev/rdsk/c1t5d0
d4                  phys-schost-2:/dev/rdsk/c3t5d0
d4                  phys-schost-1:/dev/rdsk/c2t5d0
d5                  phys-schost-2:/dev/rdsk/c4t4d1
d5                  phys-schost-1:/dev/rdsk/c1t5d1
d6                  phys-schost-2:/dev/rdsk/c3t5d1
d6                  phys-schost-1:/dev/rdsk/c2t5d1
d7                  phys-schost-2:/dev/rdsk/c0t2d0
d8                  phys-schost-2:/dev/rdsk/c0t3d0
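Because a shared LUN is visible from every attached node, its DID appears in the list with one Full Device Path entry per node. If you already know a LUN's controller path from the format output, you can filter the listing for it; the path c2t5d0 is carried over from the examples above:

# cldevice list -v | grep c2t5d0
d4                  phys-schost-1:/dev/rdsk/c2t5d0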
In this example, the cldevice output shows that DID devices d3 through d6 each have a path on both cluster nodes and therefore correspond to the disk arrays' shared LUNs. The remaining steps use d4 as the example device.
Obtain the full DID device name that corresponds to the DID device that you identified in Step 3.
The following example shows the output of the cldevice show command for the DID device that was identified in the example in Step 3. The command is run from node phys-schost-1.
# cldevice show d4

=== DID Device Instances ===

DID Device Name:                                /dev/did/rdsk/d4
  Full Device Path:                             phys-schost-1:/dev/rdsk/c2t5d0
  Replication:                                  none
  default_fencing:                              global
Create or modify a slice on each DID device to contain the disk-space allocation for the raw device.
For example, if you choose to use slice s0, you might allocate 100 Gbytes of disk space to that slice.
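The following is a minimal sketch of an interactive format(1M) session that allocates 100 Gbytes to slice 0 of one LUN. The disk name c2t5d0 is carried over from the earlier examples, and the empty responses for the tag, permission flags, and starting cylinder simply accept the defaults; adjust all of these values for your configuration.

# format c2t5d0
format> partition
partition> 0
Enter partition id tag[unassigned]: 
Enter partition permission flags[wm]: 
Enter new starting cyl[0]: 
Enter partition size[0b, 0c, 0.00mb, 0.00gb]: 100gb
partition> label
Ready to label disk, continue? y
partition> quit
format> quit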
Change the ownership and permissions of the raw devices that you are using to allow access to these devices.
To specify the raw device, append sN to the DID device name that you obtained in Step 4, where N is the slice number.
For example, the cldevice output in Step 4 identifies the raw DID device that corresponds to the disk as /dev/did/rdsk/d4. If you choose to use slice s0 on these devices, specify the raw device /dev/did/rdsk/d4s0.
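As a minimal sketch, the following commands assume that the Oracle software owner is the oracle user with primary group dba; substitute the owner, group, and permission mode that your installation requires, and repeat the commands on each node that runs Oracle RAC:

# chown oracle:dba /dev/did/rdsk/d4s0
# chmod 660 /dev/did/rdsk/d4s0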
Ensure that all other storage management schemes that you are using for Oracle files are installed.
After all storage management schemes that you are using for Oracle files are installed, go to Registering and Configuring the RAC Framework Resource Group.