Oracle Solaris Cluster Data Service for Oracle Real Application Clusters Guide, Oracle Solaris Cluster 4.0
1. Installing Support for Oracle RAC
2. Configuring Storage for Oracle Files
Summary of Configuration Tasks for Storage for Oracle Files
Tasks for Configuring Solaris Volume Manager for Sun Cluster for Oracle Files
Tasks for Configuring Hardware RAID Support for Oracle Files
Tasks for Configuring Oracle ASM for Oracle Files
Tasks for Configuring Qualified NAS Devices for Oracle Files
Tasks for Configuring a Cluster File System for Oracle Files
Installing Storage Management Software With Support for Oracle RAC
Using Solaris Volume Manager for Sun Cluster
How to Use Solaris Volume Manager for Sun Cluster
How to Use Hardware RAID Support
How to Use Oracle ASM With Hardware RAID
Types of Oracle Files That You Can Store on a PxFS-Based Cluster File System
Optimizing Performance and Availability When Using a PxFS-Based Cluster File System
3. Registering and Configuring the Resource Groups
4. Enabling Oracle RAC to Run in a Cluster
5. Administering Support for Oracle RAC
6. Troubleshooting Support for Oracle RAC
7. Modifying an Existing Configuration of Support for Oracle RAC
A. Sample Configurations of This Data Service
B. Preset Actions for DBMS Errors and Logged Alerts
Install the software for the storage management schemes that you are using for Oracle files. For more information, see Storage Management Requirements.
Note - For information about how to install and configure qualified NAS devices with Support for Oracle RAC, see Oracle Solaris Cluster 4.0 With Network-Attached Storage Device Manual.
This section contains the following information:
Always install Solaris Volume Manager software, which includes the Solaris Volume Manager for Sun Cluster feature, in the global cluster, even when supporting zone clusters. Solaris Volume Manager software is not automatically installed as part of an Oracle Solaris 11 software installation. You must install it manually by using the following command:
# pkg install system/svm
The clzonecluster command configures Solaris Volume Manager for Sun Cluster devices from the global-cluster voting node into the zone cluster. All administration tasks for Solaris Volume Manager for Sun Cluster are performed in the global-cluster voting node, even when the Solaris Volume Manager for Sun Cluster volume is used in a zone cluster.
When an Oracle RAC installation inside a zone cluster uses a file system that exists on top of a Solaris Volume Manager for Sun Cluster volume, you should still configure the Solaris Volume Manager for Sun Cluster volume in the global cluster. In this case, the scalable device group resource belongs to the global cluster.
When an Oracle RAC installation inside a zone cluster runs directly on the Solaris Volume Manager for Sun Cluster volume, you must first configure the Solaris Volume Manager for Sun Cluster in the global cluster and then configure the Solaris Volume Manager for Sun Cluster volume into the zone cluster. In this case, the scalable device group belongs to this zone cluster.
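Configuring the volume into the zone cluster is done with the clzonecluster configure subcommand from the global-cluster voting node. A minimal sketch, assuming a zone cluster named rac-zc, a multi-owner disk set named oraset, and shared disk set number 1 (all three names are examples; see How to Add a Disk Set to a Zone Cluster (Solaris Volume Manager) for the full procedure):

```shell
# Run from the global-cluster voting node.
# clzonecluster configure rac-zc
clzc:rac-zc> add device
clzc:rac-zc:device> set match=/dev/md/oraset/*dsk/*
clzc:rac-zc:device> end
clzc:rac-zc> add device
clzc:rac-zc:device> set match=/dev/md/shared/1/*dsk/*
clzc:rac-zc:device> end
clzc:rac-zc> commit
clzc:rac-zc> exit
```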
For information about the types of Oracle files that you can store by using Solaris Volume Manager for Sun Cluster, see Storage Management Requirements.
To use the Solaris Volume Manager for Sun Cluster software with Support for Oracle RAC, perform the following tasks. Solaris Volume Manager for Sun Cluster is delivered with the Solaris Volume Manager software, which you install manually as described in Using Solaris Volume Manager for Sun Cluster.
For information about configuring Solaris Volume Manager for Sun Cluster in the global cluster, see Configuring Solaris Volume Manager Software in Oracle Solaris Cluster Software Installation Guide.
For information about configuring a Solaris Volume Manager for Sun Cluster volume into a zone cluster, see How to Add a Disk Set to a Zone Cluster (Solaris Volume Manager) in Oracle Solaris Cluster Software Installation Guide.
Next Steps
Ensure that all other storage management schemes that you are using for Oracle files are installed.
After all storage management schemes that you are using for Oracle files are installed, go to Chapter 3, Registering and Configuring the Resource Groups.
For information about the types of Oracle files that you can store by using hardware RAID support, see Storage Management Requirements.
Oracle Solaris Cluster software provides hardware RAID support for several storage devices. To use this combination, configure raw device identities (/dev/did/rdsk*) on top of the disk arrays' logical unit numbers (LUNs). To set up the raw devices for Oracle RAC on a cluster that uses StorEdge SE9960 disk arrays with hardware RAID, perform the following task.
See the Oracle Solaris Cluster hardware documentation for information about how to create LUNs.
The following example lists output from the format command.
# format
0. c0t2d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
   /sbus@3,0/SUNW,fas@3,8800000/sd@2,0
1. c0t3d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
   /sbus@3,0/SUNW,fas@3,8800000/sd@3,0
2. c1t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@1/rdriver@5,0
3. c1t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@1/rdriver@5,1
4. c2t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@2/rdriver@5,0
5. c2t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@2/rdriver@5,1
6. c3t4d2 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@3/rdriver@4,2
Use the cldevice(1CL) command for this purpose.
The following example lists output from the cldevice list -v command.
# cldevice list -v
DID Device          Full Device Path
----------          ----------------
d1                  phys-schost-1:/dev/rdsk/c0t2d0
d2                  phys-schost-1:/dev/rdsk/c0t3d0
d3                  phys-schost-2:/dev/rdsk/c4t4d0
d3                  phys-schost-1:/dev/rdsk/c1t5d0
d4                  phys-schost-2:/dev/rdsk/c3t5d0
d4                  phys-schost-1:/dev/rdsk/c2t5d0
d5                  phys-schost-2:/dev/rdsk/c4t4d1
d5                  phys-schost-1:/dev/rdsk/c1t5d1
d6                  phys-schost-2:/dev/rdsk/c3t5d1
d6                  phys-schost-1:/dev/rdsk/c2t5d1
d7                  phys-schost-2:/dev/rdsk/c0t2d0
d8                  phys-schost-2:/dev/rdsk/c0t3d0
In this example, the cldevice output identifies that the raw DID that corresponds to the disk arrays' shared LUNs is d4.
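The shared LUNs can also be picked out mechanically: a DID instance that is listed under more than one node name is a shared device (in this abridged listing each DID has at most one path per node). The following sketch runs that counting logic against a sample copy of the cldevice list -v output; on a live cluster you would pipe the real command output instead of the sample file:

```shell
# Sample of "cldevice list -v" output (header lines omitted).
cat <<'EOF' > /tmp/cldevice.out
d1 phys-schost-1:/dev/rdsk/c0t2d0
d2 phys-schost-1:/dev/rdsk/c0t3d0
d3 phys-schost-2:/dev/rdsk/c4t4d0
d3 phys-schost-1:/dev/rdsk/c1t5d0
d4 phys-schost-2:/dev/rdsk/c3t5d0
d4 phys-schost-1:/dev/rdsk/c2t5d0
d7 phys-schost-2:/dev/rdsk/c0t2d0
EOF

# A DID that occurs more than once has paths from several nodes,
# so it is a shared device.
awk '{ paths[$1]++ } END { for (d in paths) if (paths[d] > 1) print d }' \
    /tmp/cldevice.out | sort
```

With this sample, the shared devices reported are d3 and d4.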
The following example shows the output from the cldevice show command for the DID device that was identified in the example in Step 3. The command is run from node phys-schost-1.
# cldevice show d4

=== DID Device Instances ===

DID Device Name:        /dev/did/rdsk/d4
  Full Device Path:        phys-schost-1:/dev/rdsk/c2t5d0
  Replication:             none
  default_fencing:         global
For information about configuring DID devices into a zone cluster, see How to Add a DID Device to a Zone Cluster in Oracle Solaris Cluster Software Installation Guide.
Use the format(1M), fmthard(1M), or prtvtoc(1M) command for this purpose. Specify the full device path from the node where you are running the command to create or modify the slice.
For example, if you choose to use slice s0, you might allocate 100 Gbytes of disk space in that slice.
To specify the raw device, append sN to the DID device name that you obtained in Step 4, where N is the slice number.
For example, the cldevice output in Step 4 identifies that the raw DID that corresponds to the disk is /dev/did/rdsk/d4. If you choose to use slice s0 on these devices, specify the raw device /dev/did/rdsk/d4s0.
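Assembling the raw device name is plain string concatenation; a minimal sketch using the example values from this procedure:

```shell
# DID device from Step 4 and chosen slice number (example values).
DID=/dev/did/rdsk/d4
SLICE=0

# Append sN to the DID device name to form the raw device path.
RAW=${DID}s${SLICE}
echo "$RAW"     # prints /dev/did/rdsk/d4s0
```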
Next Steps
Ensure that all other storage management schemes that you are using for Oracle files are installed.
After all storage management schemes that you are using for Oracle files are installed, go to Chapter 3, Registering and Configuring the Resource Groups.
Use Oracle ASM with one storage management scheme from the following list:
Hardware RAID. For more information, see How to Use Oracle ASM With Hardware RAID.
Solaris Volume Manager for Sun Cluster. For more information, see How to Create a Multi-Owner Disk Set in Solaris Volume Manager for Sun Cluster for the Oracle RAC Database.
For information about the types of Oracle files that you can store by using Oracle ASM, see Storage Management Requirements.
Note - When an Oracle RAC installation in a zone cluster uses Oracle ASM, you must configure all the devices needed by that Oracle RAC installation into that zone cluster by using the clzonecluster command. When Oracle ASM runs inside a zone cluster, the administration of Oracle ASM occurs entirely within the same zone cluster.
Use the cldevice(1CL) command for this purpose.
The following example shows an extract from output from the cldevice list -v command.
# cldevice list -v
DID Device          Full Device Path
----------          ----------------
…
d5                  phys-schost-3:/dev/rdsk/c3t216000C0FF084E77d0
d5                  phys-schost-1:/dev/rdsk/c5t216000C0FF084E77d0
d5                  phys-schost-2:/dev/rdsk/c4t216000C0FF084E77d0
d5                  phys-schost-4:/dev/rdsk/c2t216000C0FF084E77d0
d6                  phys-schost-3:/dev/rdsk/c4t216000C0FF284E44d0
d6                  phys-schost-1:/dev/rdsk/c6t216000C0FF284E44d0
d6                  phys-schost-2:/dev/rdsk/c5t216000C0FF284E44d0
d6                  phys-schost-4:/dev/rdsk/c3t216000C0FF284E44d0
…
In this example, DID devices d5 and d6 correspond to shared disks that are available in the cluster.
The following example shows the output from the cldevice show command for the DID devices that were identified in the example in Step 2. The command is run from node phys-schost-1.
# cldevice show d5 d6

=== DID Device Instances ===

DID Device Name:        /dev/did/rdsk/d5
  Full Device Path:        phys-schost-1:/dev/rdsk/c5t216000C0FF084E77d0
  Replication:             none
  default_fencing:         global
DID Device Name:        /dev/did/rdsk/d6
  Full Device Path:        phys-schost-1:/dev/rdsk/c6t216000C0FF284E44d0
  Replication:             none
  default_fencing:         global
For information about configuring DID devices in a zone cluster, see How to Add a DID Device to a Zone Cluster in Oracle Solaris Cluster Software Installation Guide.
Use the format(1M), fmthard(1M), or prtvtoc(1M) command for this purpose. Specify the full device path from the node where you are running the command to create or modify the slice.
For example, if you choose to use slice s0 for the Oracle ASM disk group, you might choose to allocate 100 Gbytes of disk space in slice s0.
Note - If Oracle ASM on hardware RAID is configured for a zone cluster, perform this step in that zone cluster.
To specify the raw device, append sX to the DID device name that you obtained in Step 3, where X is the slice number.
# chown oraasm:oinstall /dev/did/rdsk/dNsX
# chmod 660 /dev/did/rdsk/dNsX
# ls -lhL /dev/did/rdsk/dNsX
crw-rw----   1 oraasm   oinstall 239, 128 Jun 15 04:38 /dev/did/rdsk/dNsX
For more information about changing the ownership and permissions of raw devices for use by Oracle ASM, see your Oracle documentation.
# dd if=/dev/zero of=/dev/did/rdsk/dNsX bs=1024k count=200
200+0 records in
200+0 records out
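The bs and count operands multiply to give the number of bytes that dd writes, and the record counts that dd reports should equal count. You can check that arithmetic safely against a scratch file (the path /tmp/scratch.img is an example; never point dd at a slice that is in use):

```shell
# Write 2 records of 1 MiB each: 2 * 1048576 = 2097152 bytes.
dd if=/dev/zero of=/tmp/scratch.img bs=1024k count=2 2>&1

# Confirm the resulting size in bytes.
wc -c < /tmp/scratch.img
```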
For example, to use the /dev/did/ path for the Oracle ASM disk group, add the value /dev/did/rdsk/d* to the ASM_DISKSTRING parameter. If you are modifying this parameter by editing the Oracle initialization parameter file, edit the parameter as follows:
ASM_DISKSTRING = '/dev/did/rdsk/d*'
For more information, see your Oracle documentation.
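If the Oracle ASM instance runs from a server parameter file (SPFILE) rather than a text initialization file, the same change can be made from the ASM instance itself. A hedged sketch using standard Oracle syntax (verify the exact form against your Oracle release's documentation):

```shell
# sqlplus / as sysasm
SQL> ALTER SYSTEM SET asm_diskstring = '/dev/did/rdsk/d*' SCOPE=SPFILE SID='*';
```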
Next Steps
Ensure that all other storage management schemes that you are using for Oracle files are installed.
After all storage management schemes that you are using for Oracle files are installed, go to Chapter 3, Registering and Configuring the Resource Groups.
Oracle RAC is supported on cluster file systems that use the Oracle Solaris Cluster Proxy File System (PxFS).
For general information about how to create and mount PxFS-based cluster file systems, see Creating Cluster File Systems in Oracle Solaris Cluster Software Installation Guide.
For information that is specific to the use of cluster file systems with Support for Oracle RAC, see the subsections that follow.
Types of Oracle Files That You Can Store on a PxFS-Based Cluster File System
Optimizing Performance and Availability When Using a PxFS-Based Cluster File System
You can store only the following files that are associated with Oracle RAC on a PxFS-based cluster file system:
Oracle RDBMS binary files
Note - Oracle Grid Infrastructure binary files cannot reside on a PxFS-based cluster file system.
Oracle configuration files (for example, init.ora, tnsnames.ora, listener.ora, and sqlnet.ora)
System parameter file (SPFILE)
Alert files (for example, alert_sid.log)
Trace files (*.trc)
Archived redo log files
Flashback log files
Oracle cluster registry (OCR) files
Oracle Grid Infrastructure voting disk
Note - You must not store data files, control files, online redo log files, or Oracle recovery files on a PxFS-based cluster file system.
The I/O performance during the writing of archived redo log files is affected by the location of the device group for archived redo log files. For optimum performance, ensure that the primary of the device group for archived redo log files is located on the same node as the Oracle RAC database instance. This device group contains the file system that holds archived redo log files of the database instance.
To improve the availability of your cluster, consider increasing the desired number of secondary nodes for device groups. Note, however, that doing so might also impair performance. To change the desired number of secondary nodes, set the numsecondaries property. For more information, see Multiported Device Groups in Oracle Solaris Cluster Concepts Guide.
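Both tunings are made with the cldevicegroup command from the global cluster. A sketch, assuming a device group named oraset and a node named phys-schost-1 (both names are examples):

```shell
# Check which node is currently the primary of the device group.
# cldevicegroup status oraset

# Switch the primary to the node that runs the Oracle RAC database instance.
# cldevicegroup switch -n phys-schost-1 oraset

# Raise the desired number of secondary nodes to two.
# cldevicegroup set -p numsecondaries=2 oraset
```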
See Creating Cluster File Systems in Oracle Solaris Cluster Software Installation Guide for information about how to create and mount the cluster file system.
Note - Oracle Grid Infrastructure binaries cannot reside on a cluster file system.
For the correct options, see the table that follows. You set these options when you add an entry to the /etc/vfstab file for the mount point.
Next Steps
Ensure that all other storage management schemes that you are using for Oracle files are installed.
After all storage management schemes that you are using for Oracle files are installed, go to Chapter 3, Registering and Configuring the Resource Groups.