Oracle Solaris Cluster Data Service for Oracle Real Application Clusters Guide
1. Installing Support for Oracle RAC
2. Configuring Storage for Oracle Files
Summary of Configuration Tasks for Storage for Oracle Files
Tasks for Configuring the Sun QFS Shared File System for Oracle Files
Tasks for Configuring Solaris Volume Manager for Sun Cluster for Oracle Files
Tasks for Configuring VxVM for Oracle Files
Tasks for Configuring Hardware RAID Support for Oracle Files
Tasks for Configuring ASM for Oracle Files
Tasks for Configuring Qualified NAS Devices for Oracle Files
Tasks for Configuring a Cluster File System for Oracle Files
Installing Storage Management Software With Support for Oracle RAC
Using Solaris Volume Manager for Sun Cluster
How to Use Solaris Volume Manager for Sun Cluster
How to Use Hardware RAID Support
Using the Sun QFS Shared File System
Distributing Oracle Files Among Sun QFS Shared File Systems
Sun QFS File Systems for RDBMS Binary Files and Related Files
Sun QFS File Systems for Database Files and Related Files
Optimizing the Performance of the Sun QFS Shared File System
How to Install and Configure the Sun QFS Shared File System
How to Use Oracle ASM With Hardware RAID
Types of Oracle Files That You Can Store on a Cluster File System
Optimizing Performance and Availability When Using a Cluster File System
3. Registering and Configuring the Resource Groups
4. Enabling Oracle RAC to Run in a Cluster
5. Administering Support for Oracle RAC
6. Troubleshooting Support for Oracle RAC
7. Modifying an Existing Configuration of Support for Oracle RAC
8. Upgrading Support for Oracle RAC
A. Sample Configurations of This Data Service
B. Preset Actions for DBMS Errors and Logged Alerts
Install the software for the storage management schemes that you are using for Oracle files. For more information, see Storage Management Requirements for Oracle Files.
Note - For information about how to install and configure qualified NAS devices with Support for Oracle RAC, see Oracle Solaris Cluster 3.3 With Network-Attached Storage Devices Manual.
This section contains information about using the following storage management schemes for Oracle files:
Solaris Volume Manager for Sun Cluster
VxVM
Hardware RAID support
The Sun QFS shared file system
Oracle ASM
A cluster file system
Solaris Volume Manager for Sun Cluster is always installed in the global cluster, even when supporting zone clusters. The clzc command configures Solaris Volume Manager for Sun Cluster devices from the global-cluster voting node into the zone cluster. All administration tasks for Solaris Volume Manager for Sun Cluster are performed in the global-cluster voting node, even when the Solaris Volume Manager for Sun Cluster volume is used in a zone cluster.
When an Oracle RAC installation inside a zone cluster uses a file system that exists on top of a Solaris Volume Manager for Sun Cluster volume, you should still configure the Solaris Volume Manager for Sun Cluster volume in the global cluster. In this case, the scalable device group resource for that zone cluster belongs to the global cluster.
When an Oracle RAC installation inside a zone cluster runs directly on the Solaris Volume Manager for Sun Cluster volume, you must first configure the Solaris Volume Manager for Sun Cluster in the global cluster and then configure the Solaris Volume Manager for Sun Cluster volume into the zone cluster. In this case, the scalable device group belongs to this zone cluster.
For information about the types of Oracle files that you can store by using Solaris Volume Manager for Sun Cluster, see Storage Management Requirements for Oracle Files.
Solaris Volume Manager for Sun Cluster is installed during the installation of the Solaris Operating System. To use Solaris Volume Manager for Sun Cluster with Support for Oracle RAC, perform the following tasks.
For information about configuring Solaris Volume Manager for Sun Cluster in the global cluster, see Configuring Solaris Volume Manager Software in Oracle Solaris Cluster Software Installation Guide.
For information about configuring a Solaris Volume Manager for Sun Cluster volume into a zone cluster, see How to Add a Disk Set to a Zone Cluster (Solaris Volume Manager) in Oracle Solaris Cluster Software Installation Guide.
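For illustration only, the following sketch shows how a Solaris Volume Manager for Sun Cluster disk set might be configured into a zone cluster with the clzonecluster command. The zone-cluster name zc-rac, the disk set name oraset, and the set number N are placeholders, not values from your configuration:
# clzonecluster configure zc-rac
clzc:zc-rac> add device
clzc:zc-rac:device> set match=/dev/md/oraset/*dsk/*
clzc:zc-rac:device> end
clzc:zc-rac> add device
clzc:zc-rac:device> set match=/dev/md/shared/N/*dsk/*
clzc:zc-rac:device> end
clzc:zc-rac> commit
clzc:zc-rac> exit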
Next Steps
Ensure that all other storage management schemes that you are using for Oracle files are installed.
After all storage management schemes that you are using for Oracle files are installed, go to Chapter 3, Registering and Configuring the Resource Groups.
For information about the types of Oracle files that you can store by using VxVM, see Storage Management Requirements for Oracle Files.
Note - Using VxVM for Oracle RAC in zone clusters is not supported in this release.
To use the VxVM software with Support for Oracle RAC, perform the following tasks.
See your VxVM documentation for more information about VxVM licensing requirements.
See Chapter 5, Installing and Configuring Veritas Volume Manager, in Oracle Solaris Cluster Software Installation Guide and the VxVM documentation for more information.
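As a hedged sketch only, a VxVM shared disk group for Oracle RAC might be created as follows; the disk group name racdg and the disk names disk01 and disk02 are placeholders. Run vxdctl -c mode to identify the CVM master node, then create the shared disk group from that node:
# vxdctl -c mode
# vxdg -s init racdg disk01 disk02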
Next Steps
Ensure that all other storage management schemes that you are using for Oracle files are installed.
After all storage management schemes that you are using for Oracle files are installed, go to Chapter 3, Registering and Configuring the Resource Groups.
For information about the types of Oracle files that you can store by using hardware RAID support, see Storage Management Requirements for Oracle Files.
Oracle Solaris Cluster software provides hardware RAID support for several storage devices. To use this combination, configure raw device identities (/dev/did/rdsk*) on top of the disk arrays' logical unit numbers (LUNs). To set up the raw devices for Oracle RAC on a cluster that uses StorEdge SE9960 disk arrays with hardware RAID, perform the following task.
See the Oracle Solaris Cluster hardware documentation for information about how to create LUNs.
The following example lists output from the format command.
# format
0. c0t2d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
   /sbus@3,0/SUNW,fas@3,8800000/sd@2,0
1. c0t3d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
   /sbus@3,0/SUNW,fas@3,8800000/sd@3,0
2. c1t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@1/rdriver@5,0
3. c1t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@1/rdriver@5,1
4. c2t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@2/rdriver@5,0
5. c2t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@2/rdriver@5,1
6. c3t4d2 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@3/rdriver@4,2
Use the cldevice(1CL) command for this purpose.
The following example lists output from the cldevice list -v command.
# cldevice list -v
DID Device          Full Device Path
----------          ----------------
d1                  phys-schost-1:/dev/rdsk/c0t2d0
d2                  phys-schost-1:/dev/rdsk/c0t3d0
d3                  phys-schost-2:/dev/rdsk/c4t4d0
d3                  phys-schost-1:/dev/rdsk/c1t5d0
d4                  phys-schost-2:/dev/rdsk/c3t5d0
d4                  phys-schost-1:/dev/rdsk/c2t5d0
d5                  phys-schost-2:/dev/rdsk/c4t4d1
d5                  phys-schost-1:/dev/rdsk/c1t5d1
d6                  phys-schost-2:/dev/rdsk/c3t5d1
d6                  phys-schost-1:/dev/rdsk/c2t5d1
d7                  phys-schost-2:/dev/rdsk/c0t2d0
d8                  phys-schost-2:/dev/rdsk/c0t3d0
In this example, the cldevice output identifies that the raw DID that corresponds to the disk arrays' shared LUNs is d4.
The following example shows the output from the cldevice show command for the DID device that was identified in the example in Step 3. The command is run from node phys-schost-1.
# cldevice show d4

=== DID Device Instances ===

DID Device Name:        /dev/did/rdsk/d4
  Full Device Path:     phys-schost-1:/dev/rdsk/c2t5d0
  Replication:          none
  default_fencing:      global
For information about configuring DID devices into a zone cluster, see How to Add a DID Device to a Zone Cluster in Oracle Solaris Cluster Software Installation Guide.
Use the format(1M) command, fmthard(1M) command, or prtvtoc(1M) for this purpose. Specify the full device path from the node where you are running the command to create or modify the slice.
For example, if you choose to use slice s0, you might allocate 100 Gbytes of disk space in that slice.
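A hedged sketch of one way to create the slice, using prtvtoc(1M) and fmthard(1M) with the full device path for DID device d4 (/dev/rdsk/c2t5d0 on node phys-schost-1) from the preceding examples:
# prtvtoc /dev/rdsk/c2t5d0s2 > /tmp/c2t5d0.vtoc
(Edit /tmp/c2t5d0.vtoc to define slice 0 with the required size.)
# fmthard -s /tmp/c2t5d0.vtoc /dev/rdsk/c2t5d0s2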
To specify the raw device, append sN to the DID device name that you obtained in Step 4, where N is the slice number.
For example, the cldevice output in Step 4 identifies that the raw DID that corresponds to the disk is /dev/did/rdsk/d4. If you choose to use slice s0 on these devices, specify the raw device /dev/did/rdsk/d4s0.
Next Steps
Ensure that all other storage management schemes that you are using for Oracle files are installed.
After all storage management schemes that you are using for Oracle files are installed, go to Chapter 3, Registering and Configuring the Resource Groups.
The Sun QFS shared file system is always installed in the global-cluster voting node, even when a file system is used by a zone cluster. You configure a specific Sun QFS shared file system into a specific zone cluster by using the clzonecluster command. The scalable mount-point resource belongs to that zone cluster. The metadata server resource, SUNW.qfs, belongs to the global cluster.
You must use the Sun QFS shared file system with one storage management scheme from the following list:
Hardware RAID support
Solaris Volume Manager for Sun Cluster
You can store all the files that are associated with Oracle RAC on the Sun QFS shared file system.
Distribute these files among several file systems as explained in the subsections that follow.
For RDBMS binary files and related files, create one file system in the cluster to store the files.
The RDBMS binary files and related files are as follows:
Oracle relational database management system (RDBMS) binary files
Oracle configuration files (for example, init.ora, tnsnames.ora, listener.ora, and sqlnet.ora)
System parameter file (SPFILE)
Alert files (for example, alert_sid.log)
Trace files (*.trc)
Oracle Clusterware binary files
For database files and related files, determine whether you require one file system for each database or multiple file systems for each database.
For simplicity of configuration and maintenance, create one file system to store these files for all Oracle RAC instances of the database.
To facilitate future expansion, create multiple file systems to store these files for all Oracle RAC instances of the database.
Note - If you are adding storage for an existing database, you must create additional file systems for the storage that you are adding. In this situation, distribute the database files and related files among the file systems that you will use for the database.
Each file system that you create for database files and related files must have its own metadata server. For information about the resources that are required for the metadata servers, see Resources for the Sun QFS Metadata Server.
The database files and related files are as follows:
Data files
Control files
Online redo log files
Archived redo log files
Flashback log files
Recovery files
Oracle cluster registry (OCR) files
Oracle Clusterware voting disk
For optimum performance with Solaris Volume Manager for Sun Cluster, configure the volume manager and the file system as follows:
Use Solaris Volume Manager for Sun Cluster to mirror the logical unit numbers (LUNs) of your disk arrays.
If you require striping, configure the striping by using the file system's stripe option.
Mirroring the LUNs of your disk arrays involves the following operations:
Creating RAID-0 metadevices
Using the RAID-0 metadevices or Solaris Volume Manager soft partitions of such metadevices as Sun QFS devices
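A minimal sketch of these operations, assuming a multi-owner disk set named oraset, nodes phys-schost-1 and phys-schost-2, and shared DID devices d4 and d5; all of these names are placeholders:
# metaset -s oraset -M -a -h phys-schost-1 phys-schost-2
# metaset -s oraset -a /dev/did/rdsk/d4 /dev/did/rdsk/d5
# metainit -s oraset d11 1 1 /dev/did/rdsk/d4s0
# metainit -s oraset d12 1 1 /dev/did/rdsk/d5s0
# metainit -s oraset d10 -m d11
# metattach -s oraset d10 d12
The resulting mirror d10, or a soft partition that is created on it, can then serve as a Sun QFS device.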
The input/output (I/O) load on your system might be heavy. In this situation, ensure that the LUN for Solaris Volume Manager metadata or hardware RAID metadata maps to a different physical disk than the LUN for data. Mapping these LUNs to different physical disks ensures that contention is minimized.
Before You Begin
If you use Solaris Volume Manager metadevices as devices for the shared file systems, ensure that the metaset and its metadevices are created and available on all nodes before you configure the shared file systems.
For information about how to install Sun QFS, see Using SAM-QFS With Sun Cluster.
For information about how to create a Sun QFS file system, see Using SAM-QFS With Sun Cluster.
For each Sun QFS shared file system, set the correct mount options for the types of Oracle files that the file system is to store.
For the file system that contains binary files, configuration files, alert files, and trace files, use the default mount options.
For the file systems that contain data files, control files, online redo log files, and archived redo log files, set the mount options as follows:
In the /etc/opt/SUNWsamfs/samfs.cmd file or the /etc/vfstab file, set the following options:
fs=fs-name
  stripe=width
  mh_write
  qwrite
  forcedirectio
  rdlease=300
  wrlease=300
  aplease=300
Set rdlease, wrlease, and aplease to 300 for optimum performance.
Note - Ensure that settings in the /etc/vfstab file do not conflict with settings in the /etc/opt/SUNWsamfs/samfs.cmd file. Settings in the /etc/vfstab file override settings in the /etc/opt/SUNWsamfs/samfs.cmd file.
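For example, the same mount options might instead appear on the file system's /etc/vfstab entry; the family set name OraData and the mount point /db_qfs/OraData are placeholders:
OraData  -  /db_qfs/OraData  samfs  -  no  shared,stripe=width,mh_write,qwrite,forcedirectio,rdlease=300,wrlease=300,aplease=300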
# mount mount-point
Specifies the mount point of the file system that you are mounting.
For information about configuring Sun QFS shared file system into a zone cluster, see How to Add a QFS Shared File System to a Zone Cluster in Oracle Solaris Cluster Software Installation Guide.
Note - If you have configured Sun QFS shared file system for a zone cluster, perform this step in that zone cluster.
Change the file-system ownership as follows:
Owner: the database administrator (DBA) user
Group: the DBA group
The DBA user and the DBA group are created as explained in How to Create the DBA Group and the DBA User Accounts.
# chown user-name:group-name mount-point
Specifies the user name of the DBA user. This user is normally named oracle.
Specifies the name of the DBA group. This group is normally named dba.
Specifies the mount point of the file system whose ownership you are changing.
Note - When Sun QFS shared file system is configured for a zone cluster, you need to perform this step in that zone cluster.
# chmod u+rw mount-point
Specifies the mount point of the file system to whose owner you are granting read access and write access.
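For example, if the DBA user is oracle, the DBA group is dba, and the file system is mounted at /db_qfs/OraData (all placeholder values), the commands are:
# chown oracle:dba /db_qfs/OraData
# chmod u+rw /db_qfs/OraData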
Next Steps
Ensure that all other storage management schemes that you are using for Oracle files are installed.
After all storage management schemes that you are using for Oracle files are installed, go to Chapter 3, Registering and Configuring the Resource Groups.
Use Oracle ASM with one storage management scheme from the following list:
Hardware RAID. For more information, see How to Use Oracle ASM With Hardware RAID.
Solaris Volume Manager for Sun Cluster. For more information, see How to Create a Multi-Owner Disk Set in Solaris Volume Manager for Sun Cluster for the Oracle RAC Database.
VxVM. For more information, see How to Create a VxVM Shared-Disk Group for the Oracle RAC Database.
For information about the types of Oracle files that you can store by using Oracle ASM, see Storage Management Requirements for Oracle Files.
Note - When an Oracle RAC installation in a zone cluster uses Oracle ASM, you must configure all the devices needed by that Oracle RAC installation into that zone cluster by using the clzonecluster command. When Oracle ASM runs inside a zone cluster, the administration of Oracle ASM occurs entirely within the same zone cluster.
Use the cldevice(1CL) command for this purpose.
The following example shows an extract from output from the cldevice list -v command.
# cldevice list -v
DID Device          Full Device Path
----------          ----------------
…
d5                  phys-schost-3:/dev/rdsk/c3t216000C0FF084E77d0
d5                  phys-schost-1:/dev/rdsk/c5t216000C0FF084E77d0
d5                  phys-schost-2:/dev/rdsk/c4t216000C0FF084E77d0
d5                  phys-schost-4:/dev/rdsk/c2t216000C0FF084E77d0
d6                  phys-schost-3:/dev/rdsk/c4t216000C0FF284E44d0
d6                  phys-schost-1:/dev/rdsk/c6t216000C0FF284E44d0
d6                  phys-schost-2:/dev/rdsk/c5t216000C0FF284E44d0
d6                  phys-schost-4:/dev/rdsk/c3t216000C0FF284E44d0
…
In this example, DID devices d5 and d6 correspond to shared disks that are available in the cluster.
The following example shows the output from the cldevice show command for the DID devices that were identified in the example in Step 2. The command is run from node phys-schost-1.
# cldevice show d5 d6

=== DID Device Instances ===

DID Device Name:        /dev/did/rdsk/d5
  Full Device Path:     phys-schost-1:/dev/rdsk/c5t216000C0FF084E77d0
  Replication:          none
  default_fencing:      global

DID Device Name:        /dev/did/rdsk/d6
  Full Device Path:     phys-schost-1:/dev/rdsk/c6t216000C0FF284E44d0
  Replication:          none
  default_fencing:      global
For information about configuring DID devices in a zone cluster, see How to Add a DID Device to a Zone Cluster in Oracle Solaris Cluster Software Installation Guide.
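For illustration only, the DID devices from the preceding example might be configured into a zone cluster named zc-rac (a placeholder) as follows:
# clzonecluster configure zc-rac
clzc:zc-rac> add device
clzc:zc-rac:device> set match=/dev/did/*dsk/d5s*
clzc:zc-rac:device> end
clzc:zc-rac> add device
clzc:zc-rac:device> set match=/dev/did/*dsk/d6s*
clzc:zc-rac:device> end
clzc:zc-rac> commit
clzc:zc-rac> exit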
Use the format(1M) command, fmthard(1M) command, or prtvtoc(1M) for this purpose. Specify the full device path from the node where you are running the command to create or modify the slice.
For example, if you choose to use slice s0 for the Oracle ASM disk group, you might choose to allocate 100 Gbytes of disk space in slice s0.
Note - If Oracle ASM on hardware RAID is configured for a zone cluster, perform this step in that zone cluster.
To specify the raw device, append sX to the DID device name that you obtained in Step 3, where X is the slice number.
# chown oraasm:oinstall /dev/did/rdsk/dNsX
# chmod 660 /dev/did/rdsk/dNsX
# ls -lhL /dev/did/rdsk/dNsX
crw-rw----   1 oraasm   oinstall 239, 128 Jun 15 04:38 /dev/did/rdsk/dNsX
For more information about changing the ownership and permissions of raw devices for use by Oracle ASM, see your Oracle documentation.
# dd if=/dev/zero of=/dev/did/rdsk/dNsX bs=1024k count=200
200+0 records in
200+0 records out
Note - If Oracle ASM on hardware RAID is configured for a zone cluster, perform this step in that zone cluster.
For example, to use the /dev/did/ path for the Oracle ASM disk group, add the value /dev/did/rdsk/d* to the ASM_DISKSTRING parameter. If you are modifying this parameter by editing the Oracle initialization parameter file, edit the parameter as follows:
ASM_DISKSTRING = '/dev/did/rdsk/d*'
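If the Oracle ASM instance uses a server parameter file, the same change might instead be made from SQL*Plus; treat this as an illustrative sketch rather than the documented procedure:
SQL> ALTER SYSTEM SET ASM_DISKSTRING = '/dev/did/rdsk/d*' SCOPE=BOTH;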
For more information, see your Oracle documentation.
Next Steps
Ensure that all other storage management schemes that you are using for Oracle files are installed.
After all storage management schemes that you are using for Oracle files are installed, go to Chapter 3, Registering and Configuring the Resource Groups.
For general information about how to create and mount cluster file systems, see Creating Cluster File Systems in Oracle Solaris Cluster Software Installation Guide.
For information that is specific to the use of the cluster file system with Support for Oracle RAC, see the subsections that follow.
Types of Oracle Files That You Can Store on a Cluster File System
Optimizing Performance and Availability When Using a Cluster File System
You can store only the following files that are associated with Oracle RAC on the cluster file system:
Oracle RDBMS binary files
Oracle Clusterware binary files
Oracle configuration files (for example, init.ora, tnsnames.ora, listener.ora, and sqlnet.ora)
System parameter file (SPFILE)
Alert files (for example, alert_sid.log)
Trace files (*.trc)
Archived redo log files
Flashback log files
Oracle cluster registry (OCR) files
Oracle Clusterware voting disk
Note - You must not store data files, control files, online redo log files, or Oracle recovery files on the cluster file system.
The I/O performance during the writing of archived redo log files is affected by the location of the device group for archived redo log files. For optimum performance, ensure that the primary of the device group for archived redo log files is located on the same node as the Oracle RAC database instance. This device group contains the file system that holds archived redo log files of the database instance.
To improve the availability of your cluster, consider increasing the desired number of secondary nodes for device groups. However, increasing the desired number of secondary nodes for device groups might also impair performance. To increase the desired number of secondary nodes for device groups, change the numsecondaries property. For more information, see Multiported Device Groups in Oracle Solaris Cluster Concepts Guide.
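For example, the following command sets the desired number of secondary nodes for a device group named oradg (a placeholder) to two:
# cldevicegroup set -p numsecondaries=2 oradg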
See Creating Cluster File Systems in Oracle Solaris Cluster Software Installation Guide for information about how to create and mount the cluster file system.
Set the mount options that are correct for the types of Oracle files that the cluster file system is to store. You set these options when you add an entry to the /etc/vfstab file for the mount point.
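For example, an /etc/vfstab entry for a UFS cluster file system on a Solaris Volume Manager volume might resemble the following; the volume d1 in disk set oradg and the mount point are placeholders:
/dev/md/oradg/dsk/d1  /dev/md/oradg/rdsk/d1  /global/oracle  ufs  2  yes  global,logging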
Next Steps
Ensure that all other storage management schemes that you are using for Oracle files are installed.
After all storage management schemes that you are using for Oracle files are installed, go to Chapter 3, Registering and Configuring the Resource Groups.