This chapter explains how to configure storage for Oracle files.
Installing Storage Management Software With Sun Cluster Support for Oracle RAC
Registering and Configuring the RAC Framework Resource Group
Registering and Configuring Storage Resources for Oracle Files
This section summarizes the following tasks for configuring each storage management scheme for Oracle files:
Tasks for Configuring the Sun QFS Shared File System for Oracle Files
Tasks for Configuring Solaris Volume Manager for Sun Cluster for Oracle Files
Tasks for Configuring Hardware RAID Support for Oracle Files
Tasks for Configuring Qualified NAS Devices for Oracle Files
Tasks for Configuring a Cluster File System for Oracle Files
The following tables summarize the tasks for configuring the Sun QFS shared file system and provide cross-references to detailed instructions for performing the tasks. The first table provides information about Oracle RAC running in the global cluster, and the second table provides information about Oracle RAC running in a zone cluster.
Perform these tasks in the order in which they are listed in each table.
Table 2–1 Tasks for Configuring the Sun QFS Shared File System for Oracle Files in the Global Cluster
| Task | Instructions |
| --- | --- |
| Install and configure the Sun QFS shared file system | |
| Install and configure the other storage management scheme that you are using with the Sun QFS shared file system | If you are using Solaris Volume Manager for Sun Cluster, see Using Solaris Volume Manager for Sun Cluster. If you are using hardware RAID support, see Using Hardware RAID Support. |
| Register and configure the RAC framework resource group | If you are using the clsetup utility for this task, see Registering and Configuring the RAC Framework Resource Group. If you are using the Sun Cluster maintenance commands for this task, see How to Register and Configure the Framework Resource Groups in the Global Cluster by Using Sun Cluster Maintenance Commands. |
| If you are using Solaris Volume Manager for Sun Cluster, create a multi-owner disk set in Solaris Volume Manager for Sun Cluster for the Oracle RAC database | |
| Register and configure storage resources for Oracle files | If you are using the clsetup utility for this task, see Registering and Configuring Storage Resources for Oracle Files. If you are using the Sun Cluster maintenance commands for this task, see Creating Storage Management Resources by Using Sun Cluster Maintenance Commands. |
Table 2–2 Tasks for Configuring the Sun QFS Shared File System for Oracle Files in a Zone Cluster
| Task | Instructions |
| --- | --- |
| Install and configure the Sun QFS shared file system in the global cluster | |
| Install and configure the other storage management scheme that you are using with the Sun QFS shared file system in the global cluster | If you are using Solaris Volume Manager for Sun Cluster, see Using Solaris Volume Manager for Sun Cluster. If you are using hardware RAID support, see Using Hardware RAID Support. |
| Register and configure the RAC framework resource group in the global cluster | If you are using the clsetup utility for this task, see Registering and Configuring the RAC Framework Resource Group. If you are using the Sun Cluster maintenance commands for this task, see How to Register and Configure the Framework Resource Groups in the Global Cluster by Using Sun Cluster Maintenance Commands. |
| If you are using Solaris Volume Manager for Sun Cluster, create a multi-owner disk set in Solaris Volume Manager for Sun Cluster for the Oracle RAC database in the global cluster | |
| Configure the Sun QFS shared file system for the zone cluster | |
| Register and configure the storage resources for Oracle files in the zone cluster | If you are using the clsetup utility for this task, see Registering and Configuring Storage Resources for Oracle Files. If you are using the Sun Cluster maintenance commands for this task, see Creating Storage Management Resources by Using Sun Cluster Maintenance Commands. |
The following tables summarize the tasks for configuring Solaris Volume Manager for Sun Cluster and provide cross-references to detailed instructions for performing the tasks.
Perform these tasks in the order in which they are listed in each table.
Table 2–3 Tasks for Configuring Solaris Volume Manager for Sun Cluster for Oracle Files in the Global Cluster
| Task | Instructions |
| --- | --- |
| Configure Solaris Volume Manager for Sun Cluster | |
| Register and configure the RAC framework resource group | If you are using the clsetup utility for this task, see Registering and Configuring the RAC Framework Resource Group. If you are using Sun Cluster maintenance commands for this task, see How to Register and Configure the Framework Resource Groups in the Global Cluster by Using Sun Cluster Maintenance Commands. |
| Create a multi-owner disk set in Solaris Volume Manager for Sun Cluster for the Oracle RAC database | |
| Register and configure storage resources for Oracle files | If you are using the clsetup utility for this task, see Registering and Configuring Storage Resources for Oracle Files. If you are using the Sun Cluster maintenance commands for this task, see Creating Storage Management Resources by Using Sun Cluster Maintenance Commands. |
Table 2–4 Tasks for Configuring Solaris Volume Manager for Sun Cluster for Oracle Files in a Zone Cluster
| Task | Instructions |
| --- | --- |
| Configure Solaris Volume Manager for Sun Cluster in the global cluster | |
| Register and configure the RAC framework resource group in the global cluster | If you are using the clsetup utility for this task, see Registering and Configuring the RAC Framework Resource Group. If you are using Sun Cluster maintenance commands for this task, see How to Register and Configure the Framework Resource Groups in the Global Cluster by Using Sun Cluster Maintenance Commands. |
| Create a multi-owner disk set in Solaris Volume Manager for Sun Cluster for the Oracle RAC database in the global cluster | |
| Configure Solaris Volume Manager devices in a zone cluster | |
| Register and configure storage resources for Oracle files in the zone cluster | If you are using the clsetup utility for this task, see Registering and Configuring Storage Resources for Oracle Files. If you are using the Sun Cluster maintenance commands for this task, see Creating Storage Management Resources by Using Sun Cluster Maintenance Commands. |
The following table summarizes the tasks for configuring VxVM and provides cross-references to detailed instructions for performing the tasks.
Perform these tasks in the order in which they are listed in the table.
Table 2–5 Tasks for Configuring VxVM for Oracle Files
| Task | Instructions |
| --- | --- |
| Install and configure VxVM | |
| Register and configure the RAC framework resource group | If you are using the clsetup utility for this task, see Registering and Configuring the RAC Framework Resource Group. If you are using the Sun Cluster maintenance commands for this task, see How to Register and Configure the Framework Resource Groups in the Global Cluster by Using Sun Cluster Maintenance Commands. |
| Create a VxVM shared-disk group for the Oracle RAC database | How to Create a VxVM Shared-Disk Group for the Oracle RAC Database |
| Register and configure storage resources for Oracle files | If you are using the clsetup utility for this task, see Registering and Configuring Storage Resources for Oracle Files. If you are using the Sun Cluster maintenance commands for this task, see Creating Storage Management Resources by Using Sun Cluster Maintenance Commands. |
VxVM devices are currently not supported in zone clusters.
The following table summarizes the tasks for configuring hardware RAID support and provides cross-references to detailed instructions for performing the tasks.
Table 2–6 Tasks for Configuring Hardware RAID Support for Oracle Files
| Task | Instructions |
| --- | --- |
| Configure hardware RAID support | |
For information about configuring hardware RAID for a zone cluster, see Adding Storage Devices to a Zone Cluster in Sun Cluster Software Installation Guide for Solaris OS.
The following table summarizes the tasks for configuring ASM and provides cross-references to detailed instructions for performing the tasks.
Table 2–7 Tasks for Configuring ASM for Oracle Files
| Task | Instructions |
| --- | --- |
| Configure devices for ASM | |
For information about configuring ASM for a zone cluster, see Adding Storage Devices to a Zone Cluster in Sun Cluster Software Installation Guide for Solaris OS.
The following table summarizes the tasks for configuring qualified NAS devices and provides cross-references to detailed instructions for performing the tasks.
Perform these tasks in the order in which they are listed in the table.
Table 2–8 Tasks for Configuring Qualified NAS Devices for Oracle Files
| Task | Instructions |
| --- | --- |
| Install and configure the qualified NAS device | Sun Cluster 3.1 - 3.2 With Network-Attached Storage Devices Manual for Solaris OS |
| Register and configure the RAC framework resource group | If you are using the clsetup utility for this task, see Registering and Configuring the RAC Framework Resource Group. If you are using the Sun Cluster maintenance commands for this task, see How to Register and Configure the Framework Resource Groups in the Global Cluster by Using Sun Cluster Maintenance Commands. |
| Register and configure storage resources for Oracle files | If you are using the clsetup utility for this task, see Registering and Configuring Storage Resources for Oracle Files. If you are using the Sun Cluster maintenance commands for this task, see Creating Storage Management Resources by Using Sun Cluster Maintenance Commands. |
NAS devices are currently not supported in zone clusters.
The following table summarizes the tasks for configuring the cluster file system and provides cross-references to detailed instructions for performing the tasks.
Perform these tasks in the order in which they are listed in the table.
Table 2–9 Tasks for Configuring a Cluster File System for Oracle Files
| Task | Instructions |
| --- | --- |
| Install and configure the cluster file system | |
| Register and configure the RAC framework resource group | If you are using the clsetup utility for this task, see Registering and Configuring the RAC Framework Resource Group. If you are using the Sun Cluster maintenance commands for this task, see How to Register and Configure the Framework Resource Groups in the Global Cluster by Using Sun Cluster Maintenance Commands. |
A cluster file system is currently not supported for Oracle RAC in zone clusters.
Install the software for the storage management schemes that you are using for Oracle files. For more information, see Storage Management Requirements for Oracle Files.
For information about how to install and configure qualified NAS devices with Sun Cluster Support for Oracle RAC, see Sun Cluster 3.1 - 3.2 With Network-Attached Storage Devices Manual for Solaris OS.
This section contains the following information:
Solaris Volume Manager for Sun Cluster is always installed in the global cluster, even when supporting zone clusters. The clzc command configures Solaris Volume Manager for Sun Cluster devices from the global-cluster voting node into the zone cluster. All administration tasks for Solaris Volume Manager for Sun Cluster are performed in the global-cluster voting node, even when the Solaris Volume Manager for Sun Cluster volume is used in a zone cluster.
When an Oracle RAC installation inside a zone cluster uses a file system that exists on top of a Solaris Volume Manager for Sun Cluster volume, you should still configure the Solaris Volume Manager for Sun Cluster volume in the global cluster. In this case, the scalable device group resource belongs to this zone cluster.
When an Oracle RAC installation inside a zone cluster runs directly on the Solaris Volume Manager for Sun Cluster volume, you must first configure the Solaris Volume Manager for Sun Cluster in the global cluster and then configure the Solaris Volume Manager for Sun Cluster volume into the zone cluster. In this case, the scalable device group belongs to this zone cluster.
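As an illustration of this model, the following hedged transcript sketches how a Solaris Volume Manager for Sun Cluster disk set might be configured into a zone cluster from the global-cluster voting node. The zone-cluster name zc-rac, the disk set name oraset, and the set number 1 are assumptions for illustration only; the authoritative procedure is in the cross-referenced installation guide.

```
# clzc configure zc-rac
clzc:zc-rac> add device
clzc:zc-rac:device> set match=/dev/md/oraset/*dsk/*
clzc:zc-rac:device> end
clzc:zc-rac> add device
clzc:zc-rac:device> set match=/dev/md/shared/1/*dsk/*
clzc:zc-rac:device> end
clzc:zc-rac> commit
clzc:zc-rac> exit
```

After the commit, the matching device nodes become visible inside the zone cluster while administration of the volumes stays in the global-cluster voting node.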
For information about the types of Oracle files that you can store by using Solaris Volume Manager for Sun Cluster, see Storage Management Requirements for Oracle Files.
To use the Solaris Volume Manager for Sun Cluster software with Sun Cluster Support for Oracle RAC, perform the following tasks.
Ensure that you are using at least the Solaris 9 9/05 or Solaris 10 5/09 OS.
Solaris Volume Manager for Sun Cluster is installed during the installation of the Solaris Operating System.
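To confirm that the volume manager is present before you proceed, you can query the package database. The package names shown (SUNWmdr and SUNWmdu, the Solaris Volume Manager core packages) are given as an assumption to check against your OS release.

```
# Verify that the Solaris Volume Manager packages are installed
# (package names assumed for Solaris 10).
pkginfo SUNWmdr SUNWmdu
```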
Configure the Solaris Volume Manager for Sun Cluster software on the cluster nodes.
For information about configuring Solaris Volume Manager for Sun Cluster in the global cluster, see Configuring Solaris Volume Manager Software in Sun Cluster Software Installation Guide for Solaris OS.
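For orientation, a hedged sketch of creating a multi-owner disk set in the global cluster follows. The disk set name oraset, the node names, and the DID device are illustrative assumptions; follow the cross-referenced guide for the complete procedure.

```
# Create a multi-owner (-M) disk set and add both RAC nodes as hosts
# (all names are illustrative).
metaset -s oraset -M -a -h phys-schost-1 phys-schost-2

# Add a shared DID device to the disk set.
metaset -s oraset -a /dev/did/rdsk/d4
```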
If you are using a zone cluster, configure the Solaris Volume Manager for Sun Cluster volume into the zone cluster.
For information on configuring Solaris Volume Manager for Sun Cluster volume into a zone cluster, see How to Add a Disk Set to a Zone Cluster (Solaris Volume Manager) in Sun Cluster Software Installation Guide for Solaris OS.
Ensure that all other storage management schemes that you are using for Oracle files are installed.
After all storage management schemes that you are using for Oracle files are installed, go to Registering and Configuring the RAC Framework Resource Group.
For information about the types of Oracle files that you can store by using VxVM, see Storage Management Requirements for Oracle Files.
Using VxVM for Oracle RAC in zone clusters is not supported in this release.
To use the VxVM software with Sun Cluster Support for Oracle RAC, perform the following tasks.
If you are using VxVM with the cluster feature, obtain a license for the Volume Manager cluster feature in addition to the basic VxVM license.
See your VxVM documentation for more information about VxVM licensing requirements.
Failure to correctly install the license for the Volume Manager cluster feature might cause a panic when you install Oracle RAC support. Before you install the Oracle RAC packages, run the vxlicense -p or vxlicrep command to ensure that you have installed a valid license for the Volume Manager cluster feature.
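To perform the check that the caution describes, you can run one of the named commands before installing the Oracle RAC packages. The grep pattern below is an assumption, because the exact feature string for the cluster feature varies by VxVM release; inspect the full report if the pattern does not match.

```
# Report installed VxVM licenses and look for the cluster (CVM) feature.
vxlicrep | grep -i cvm
```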
Install and configure the VxVM software on the cluster nodes.
See Chapter 5, Installing and Configuring Veritas Volume Manager, in Sun Cluster Software Installation Guide for Solaris OS and the VxVM documentation for more information.
Ensure that all other storage management schemes that you are using for Oracle files are installed.
After all storage management schemes that you are using for Oracle files are installed, go to Registering and Configuring the RAC Framework Resource Group.
For information about the types of Oracle files that you can store by using hardware RAID support, see Storage Management Requirements for Oracle Files.
Sun Cluster software provides hardware RAID support for several storage devices. To use this combination, configure raw device identities (/dev/did/rdsk/*) on top of the disk arrays' logical unit numbers (LUNs). To set up the raw devices for Oracle RAC on a cluster that uses StorEdge SE9960 disk arrays with hardware RAID, perform the following task.
This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
Create LUNs on the disk arrays.
See the Sun Cluster hardware documentation for information about how to create LUNs.
After you create the LUNs, run the format(1M) command to partition the disk arrays' LUNs into as many slices as you need.
The following example lists output from the format command.
```
# format
0. c0t2d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
   /sbus@3,0/SUNW,fas@3,8800000/sd@2,0
1. c0t3d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
   /sbus@3,0/SUNW,fas@3,8800000/sd@3,0
2. c1t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@1/rdriver@5,0
3. c1t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@1/rdriver@5,1
4. c2t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@2/rdriver@5,0
5. c2t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@2/rdriver@5,1
6. c3t4d2 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@3/rdriver@4,2
```
To prevent a loss of disk partition information, do not start the partition at cylinder 0 for any disk slice that is used for raw data. The disk partition table is stored in cylinder 0 of the disk.
Determine the raw device identity (DID) that corresponds to the LUNs that you created in Step 1.
Use the cldevice(1CL) command for this purpose.
The following example lists output from the cldevice list -v command.
```
# cldevice list -v
DID Device          Full Device Path
----------          ----------------
d1                  phys-schost-1:/dev/rdsk/c0t2d0
d2                  phys-schost-1:/dev/rdsk/c0t3d0
d3                  phys-schost-2:/dev/rdsk/c4t4d0
d3                  phys-schost-1:/dev/rdsk/c1t5d0
d4                  phys-schost-2:/dev/rdsk/c3t5d0
d4                  phys-schost-1:/dev/rdsk/c2t5d0
d5                  phys-schost-2:/dev/rdsk/c4t4d1
d5                  phys-schost-1:/dev/rdsk/c1t5d1
d6                  phys-schost-2:/dev/rdsk/c3t5d1
d6                  phys-schost-1:/dev/rdsk/c2t5d1
d7                  phys-schost-2:/dev/rdsk/c0t2d0
d8                  phys-schost-2:/dev/rdsk/c0t3d0
```
In this example, the cldevice output identifies that the raw DID that corresponds to the disk arrays' shared LUNs is d4.
Obtain the full DID device name that corresponds to the DID device that you identified in Step 3.
The following example shows the output from the cldevice show command for the DID device that was identified in the example in Step 3. The command is run from node phys-schost-1.
```
# cldevice show d4

=== DID Device Instances ===

DID Device Name:                  /dev/did/rdsk/d4
  Full Device Path:                 phys-schost-1:/dev/rdsk/c2t5d0
  Replication:                      none
  default_fencing:                  global
```
If you are using a zone cluster, configure the DID devices into the zone cluster. Otherwise, proceed to Step 6.
For information about configuring DID devices into a zone cluster, see How to Add a DID Device to a Zone Cluster in Sun Cluster Software Installation Guide for Solaris OS.
Create or modify a slice on each DID device to contain the disk-space allocation for the raw device.
Use the format(1M) command, fmthard(1M) command, or prtvtoc(1M) for this purpose. Specify the full device path from the node where you are running the command to create or modify the slice.
For example, if you choose to use slice s0, you might choose to allocate 100 GB of disk space in slice s0.
Change the ownership and permissions of the raw devices that you are using to allow access to these devices.
To specify the raw device, append sN to the DID device name that you obtained in Step 4, where N is the slice number.
For example, the cldevice output in Step 4 identifies that the raw DID that corresponds to the disk is /dev/did/rdsk/d4. If you choose to use slice s0 on these devices, specify the raw device /dev/did/rdsk/d4s0.
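As a hedged illustration of this step, the following commands set a typical ownership and mode on the slice identified in the example. The oracle user, the dba group, and mode 660 are assumptions; consult your Oracle documentation for the values that your installation requires.

```
# Assumed owner oracle:dba and mode 660; adjust to your installation.
chown oracle:dba /dev/did/rdsk/d4s0
chmod 660 /dev/did/rdsk/d4s0
```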
Ensure that all other storage management schemes that you are using for Oracle files are installed.
After all storage management schemes that you are using for Oracle files are installed, go to Registering and Configuring the RAC Framework Resource Group.
The Sun QFS shared file system is always installed in the global-cluster voting node, even when a file system is used by a zone cluster. You configure a specific Sun QFS shared file system into a specific zone cluster by using the clzc command. The scalable mount-point resource belongs to this zone cluster. The metadata server resource, SUNW.qfs, belongs to the global cluster.
You must use the Sun QFS shared file system with one storage management scheme from the following list:
Hardware RAID support
Solaris Volume Manager for Sun Cluster
You can store all the files that are associated with Oracle RAC on the Sun QFS shared file system.
Distribute these files among several file systems as explained in the subsections that follow.
For RDBMS binary files and related files, create one file system in the cluster to store the files.
The RDBMS binary files and related files are as follows:
Oracle relational database management system (RDBMS) binary files
Oracle configuration files (for example, init.ora, tnsnames.ora, listener.ora, and sqlnet.ora)
System parameter file (SPFILE)
Alert files (for example, alert_sid.log)
Trace files (*.trc)
Oracle Cluster Ready Services (CRS) binary files
For database files and related files, determine whether you require one file system for each database or multiple file systems for each database.
For simplicity of configuration and maintenance, create one file system to store these files for all Oracle RAC instances of the database.
To facilitate future expansion, create multiple file systems to store these files for all Oracle RAC instances of the database.
If you are adding storage for an existing database, you must create additional file systems for the storage that you are adding. In this situation, distribute the database files and related files among the file systems that you will use for the database.
Each file system that you create for database files and related files must have its own metadata server. For information about the resources that are required for the metadata servers, see Resources for the Sun QFS Metadata Server.
The database files and related files are as follows:
Data files
Control files
Online redo log files
Archived redo log files
Flashback log files
Recovery files
Oracle cluster registry (OCR) files
Oracle CRS voting disk
For optimum performance with Solaris Volume Manager for Sun Cluster, configure the volume manager and the file system as follows:
Use Solaris Volume Manager for Sun Cluster to mirror the logical unit numbers (LUNs) of your disk arrays.
If you require striping, configure the striping by using the file system's stripe option.
Mirroring the LUNs of your disk arrays involves the following operations:
Creating RAID-0 metadevices
Using the RAID-0 metadevices or Solaris Volume Manager soft partitions of such metadevices as Sun QFS devices
The input/output (I/O) load on your system might be heavy. In this situation, ensure that the LUN for Solaris Volume Manager metadata or hardware RAID metadata maps to a different physical disk than the LUN for data. Mapping these LUNs to different physical disks ensures that contention is minimized.
You might use Solaris Volume Manager metadevices as devices for the shared file systems. In this situation, ensure that the metaset and its metadevices are created and available on all nodes before configuring the shared file systems.
Ensure that the Sun QFS software is installed on all nodes of the global cluster where Sun Cluster Support for Oracle RAC is to run.
For information about how to install Sun QFS, see Using SAM-QFS With Sun Cluster.
Ensure that each Sun QFS shared file system is correctly created for use with Sun Cluster Support for Oracle RAC.
For information about how to create a Sun QFS file system, see Using SAM-QFS With Sun Cluster.
For each Sun QFS shared file system, set the correct mount options for the types of Oracle files that the file system is to store.
For the file system that contains binary files, configuration files, alert files, and trace files, use the default mount options.
For the file systems that contain data files, control files, online redo log files, and archived redo log files, set the mount options as follows:
In the /etc/opt/SUNWsamfs/samfs.cmd file or the /etc/vfstab file, set the following options:
```
fs=fs-name
stripe=width
mh_write
qwrite
forcedirectio
rdlease=300
wrlease=300
aplease=300
```

`fs=fs-name`
Specifies the name that uniquely identifies the file system.

`stripe=width`
Specifies the required stripe width for devices in the file system. The required stripe width is a multiple of the file system's disk allocation unit (DAU). width must be an integer that is greater than or equal to 1.

`rdlease=300`, `wrlease=300`, `aplease=300`
Set these values for optimum performance.
Ensure that settings in the /etc/vfstab file do not conflict with settings in the /etc/opt/SUNWsamfs/samfs.cmd file. Settings in the /etc/vfstab file override settings in the /etc/opt/SUNWsamfs/samfs.cmd file.
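Put together, an /etc/opt/SUNWsamfs/samfs.cmd entry for a file system that stores data files might look like the following sketch. The file-system name oradata and the stripe width of 1 are illustrative assumptions; use the values that match your configuration.

```
fs = oradata
  stripe = 1
  mh_write
  qwrite
  forcedirectio
  rdlease = 300
  wrlease = 300
  aplease = 300
```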
Mount each Sun QFS shared file system that you are using for Oracle files.
```
# mount mount-point
```
`mount-point`
Specifies the mount point of the file system that you are mounting.
If you are using a zone cluster, configure the Sun QFS shared file system into the zone cluster. Otherwise, go to Step 5.
For information about configuring Sun QFS shared file system into a zone cluster, see How to Add a QFS Shared File System to a Zone Cluster in Sun Cluster Software Installation Guide for Solaris OS.
Change the ownership of each file system that you are using for Oracle files.
If you have configured the Sun QFS shared file system for a zone cluster, perform this step in that zone cluster.
Change the file-system ownership as follows:
Owner: the database administrator (DBA) user
Group: the DBA group
The DBA user and the DBA group are created as explained in How to Create the DBA Group and the DBA User Accounts.
```
# chown user-name:group-name mount-point
```
`user-name`
Specifies the user name of the DBA user. This user is normally named oracle.

`group-name`
Specifies the name of the DBA group. This group is normally named dba.

`mount-point`
Specifies the mount point of the file system whose ownership you are changing.
Grant to the owner of each file system whose ownership you changed in Step 5 read access and write access to the file system.
If the Sun QFS shared file system is configured for a zone cluster, perform this step in that zone cluster.
```
# chmod u+rw mount-point
```
`mount-point`
Specifies the mount point of the file system to whose owner you are granting read access and write access.
Ensure that all other storage management schemes that you are using for Oracle files are installed.
After all storage management schemes that you are using for Oracle files are installed, go to Registering and Configuring the RAC Framework Resource Group.
Use ASM with one storage management scheme from the following list:
Hardware RAID. For more information, see How to Use ASM With Hardware RAID.
Solaris Volume Manager for Sun Cluster. For more information, see How to Create a Multi-Owner Disk Set in Solaris Volume Manager for Sun Cluster for the Oracle RAC Database.
VxVM. For more information, see How to Create a VxVM Shared-Disk Group for the Oracle RAC Database.
For information about the types of Oracle files that you can store by using ASM, see Storage Management Requirements for Oracle Files.
When an Oracle RAC installation in a zone cluster uses ASM, you must configure all the devices needed by that Oracle RAC installation into that zone cluster by using the clzc command. When ASM runs inside a zone cluster, the administration of ASM occurs entirely within the same zone cluster.
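For example, a candidate DID slice for an ASM disk group might be configured into a zone cluster as in the following hedged transcript. The zone-cluster name zc-rac and the device d5s0 are assumptions for illustration; see the cross-referenced installation guide for the supported procedure.

```
# clzc configure zc-rac
clzc:zc-rac> add device
clzc:zc-rac:device> set match=/dev/did/rdsk/d5s0
clzc:zc-rac:device> end
clzc:zc-rac> commit
clzc:zc-rac> exit
```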
This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
On a cluster member, log in as root or become superuser.
Determine the device identity (DID) devices that correspond to the shared disks that are available in the cluster.
Use the cldevice(1CL) command for this purpose.
The following example shows an extract of the output from the cldevice list -v command.
```
# cldevice list -v
DID Device          Full Device Path
----------          ----------------
…
d5                  phys-schost-3:/dev/rdsk/c3t216000C0FF084E77d0
d5                  phys-schost-1:/dev/rdsk/c5t216000C0FF084E77d0
d5                  phys-schost-2:/dev/rdsk/c4t216000C0FF084E77d0
d5                  phys-schost-4:/dev/rdsk/c2t216000C0FF084E77d0
d6                  phys-schost-3:/dev/rdsk/c4t216000C0FF284E44d0
d6                  phys-schost-1:/dev/rdsk/c6t216000C0FF284E44d0
d6                  phys-schost-2:/dev/rdsk/c5t216000C0FF284E44d0
d6                  phys-schost-4:/dev/rdsk/c3t216000C0FF284E44d0
…
```
In this example, DID devices d5 and d6 correspond to shared disks that are available in the cluster.
Obtain the full DID device name for each DID device that you are using for the ASM disk group.
The following example shows the output from the cldevice show command for the DID devices that were identified in the example in Step 2. The command is run from node phys-schost-1.
```
# cldevice show d5 d6

=== DID Device Instances ===

DID Device Name:                  /dev/did/rdsk/d5
  Full Device Path:                 phys-schost-1:/dev/rdsk/c5t216000C0FF084E77d0
  Replication:                      none
  default_fencing:                  global

DID Device Name:                  /dev/did/rdsk/d6
  Full Device Path:                 phys-schost-1:/dev/rdsk/c6t216000C0FF284E44d0
  Replication:                      none
  default_fencing:                  global
```
If you are using a zone cluster, configure the DID devices into the zone cluster. Otherwise, proceed to Step 5.
For information about configuring DID devices in a zone cluster, see How to Add a DID Device to a Zone Cluster in Sun Cluster Software Installation Guide for Solaris OS.
Create or modify a slice on each DID device to contain the disk-space allocation for the ASM disk group.
Use the format(1M) command, fmthard(1M) command, or prtvtoc(1M) for this purpose. Specify the full device path from the node where you are running the command to create or modify the slice.
For example, if you choose to use slice s0 for the ASM disk group, you might choose to allocate 100 Gbytes of disk space in slice s0.
Change the ownership and permissions of the raw devices that you are using for ASM to allow ASM to access these devices.
If ASM on hardware RAID is configured for a zone cluster, perform this step in that zone cluster.
To specify the raw device, append sN to the DID device name that you obtained in Step 3, where N is the slice number.
For example, the cldevice output in Step 3 identifies that the raw DIDs that correspond to the disk are /dev/did/rdsk/d5 and /dev/did/rdsk/d6. If you choose to use slice s0 on these devices, specify the raw devices /dev/did/rdsk/d5s0 and /dev/did/rdsk/d6s0.
For more information about changing the ownership and permissions of raw devices for use by ASM, see your Oracle documentation.
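As a hedged sketch of this step, the following commands set a typical ownership and mode on the slices from the example. The oracle user, the dba group, and mode 660 are assumptions; your Oracle documentation states the exact requirements for your release.

```
# Assumed owner oracle:dba and mode 660; adjust to your installation.
chown oracle:dba /dev/did/rdsk/d5s0 /dev/did/rdsk/d6s0
chmod 660 /dev/did/rdsk/d5s0 /dev/did/rdsk/d6s0
```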
Modify the ASM_DISKSTRING ASM instance-initialization parameter to specify the devices that you are using for the ASM disk group.
If ASM on hardware RAID is configured for a zone cluster, perform this step in that zone cluster.
For example, to use the /dev/did/ path for the ASM disk group, add the value /dev/did/rdsk/d* to the ASM_DISKSTRING parameter. If you are modifying this parameter by editing the Oracle initialization parameter file, edit the parameter as follows:
```
ASM_DISKSTRING = '/dev/did/rdsk/d*'
```
For more information, see your Oracle documentation.
Ensure that all other storage management schemes that you are using for Oracle files are installed.
After all storage management schemes that you are using for Oracle files are installed, go to Registering and Configuring the RAC Framework Resource Group.
For general information about how to create and mount cluster file systems, see the following documentation:
For information that is specific to the use of the cluster file system with Sun Cluster Support for Oracle RAC, see the subsections that follow.
Types of Oracle Files That You Can Store on a Cluster File System
Optimizing Performance and Availability When Using a Cluster File System
You can store only the following files that are associated with Oracle RAC on the cluster file system:
Oracle RDBMS binary files
Oracle CRS binary files
Oracle configuration files (for example, init.ora, tnsnames.ora, listener.ora, and sqlnet.ora)
System parameter file (SPFILE)
Alert files (for example, alert_sid.log)
Trace files (*.trc)
Archived redo log files
Flashback log files
Oracle cluster registry (OCR) files
Oracle CRS voting disk
You must not store data files, control files, online redo log files, or Oracle recovery files on the cluster file system.
The I/O performance of writes to archived redo log files is affected by the location of the device group that contains the file system holding those files. For optimum performance, ensure that the primary of the device group for archived redo log files is located on the same node as the Oracle RAC database instance.
To improve the availability of your cluster, consider increasing the desired number of secondary nodes for device groups. However, increasing the desired number of secondary nodes for device groups might also impair performance. To increase the desired number of secondary nodes for device groups, change the numsecondaries property. For more information, see Multiported Device Groups in Sun Cluster Concepts Guide for Solaris OS.
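As an illustrative sketch (the device group name oracle-dg is hypothetical; substitute your own device group name), the numsecondaries property is changed with the cldevicegroup command:

```
# cldevicegroup set -p numsecondaries=2 oracle-dg
```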
Create and mount the cluster file system.
See Creating Cluster File Systems in Sun Cluster Software Installation Guide for Solaris OS for information about how to create and mount the cluster file system.
If you are using the UNIX file system (UFS), ensure that you specify the correct mount options for various types of Oracle files.
For the correct options, see the table that follows. You set these options when you add an entry to the /etc/vfstab file for the mount point.
File Type | Options
---|---
Oracle RDBMS binary files | global, logging
Oracle CRS binary files | global, logging
Oracle configuration files (for example, init.ora, tnsnames.ora, listener.ora, and sqlnet.ora) | global, logging
System parameter file (SPFILE) | global, logging
Alert files (for example, alert_sid.log) | global, logging
Trace files (*.trc) | global, logging
Archived redo log files | global, logging, forcedirectio
Flashback log files | global, logging, forcedirectio
Oracle cluster registry (OCR) files | global, logging, forcedirectio
Oracle CRS voting disk | global, logging, forcedirectio
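For example, an /etc/vfstab entry for a UFS cluster file system that holds archived redo log files might look like the following. The device paths and mount point here are hypothetical; adjust them to your configuration:

```
/dev/md/oradg/dsk/d1  /dev/md/oradg/rdsk/d1  /oracle/archlogs  ufs  2  no  global,logging,forcedirectio
```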
Ensure that all other storage management schemes that you are using for Oracle files are installed.
After all storage management schemes that you are using for Oracle files are installed, go to Registering and Configuring the RAC Framework Resource Group.
You must register and configure the RAC framework resource group to enable Oracle RAC to run with Sun Cluster software.
On the Solaris 9 OS, only one RAC framework resource can exist on a machine. On versions of the Solaris 10 OS that support the zone cluster feature, multiple RAC framework resource groups can exist on a machine.
The RAC framework resource in the global-cluster voting node supports any volume manager used by RAC anywhere on the machine, including the global cluster and all zone clusters. The RAC framework resource in the global-cluster voting node can also support any Oracle RAC installation running in the global cluster. The RAC framework resource in the zone cluster supports the Oracle RAC installation running in that specific zone cluster.
This section contains the following information about registering the RAC framework resource group:
Tools for Registering and Configuring the RAC Framework Resource Group
How to Register and Configure the RAC Framework Resource Group by Using clsetup
Sun Cluster software provides the following tools for registering and configuring the RAC framework resource group:
The clsetup utility. For more information, see How to Register and Configure the RAC Framework Resource Group by Using clsetup.
In the Sun Cluster 3.2 11/09 release, the clsetup utility configures volume-manager resources in the RAC framework (SUNW.rac_framework) resource group. To configure RAC to use the multiple-owner volume-manager framework (SUNW.vucmm_framework) resource group, instead perform How to Register and Configure the Framework Resource Groups in the Global Cluster by Using Sun Cluster Maintenance Commands.
Sun Cluster Manager. For more information, see the Sun Cluster Manager online help.
Sun Cluster maintenance commands. For more information, see Appendix D, Command-Line Alternatives.
The clsetup utility and Sun Cluster Manager each provide a wizard for configuring resources for the RAC framework resource group. The wizards reduce the possibility of configuration errors that might result from command syntax errors or omissions. These wizards also ensure that all required resources are created and that all required dependencies between resources are set.
Sun Cluster Manager and the clsetup utility run only in the global-cluster voting node of the global cluster.
When you register and configure the RAC framework resource group for a cluster, the RAC framework resource group is created.
Perform this procedure during your initial setup of Sun Cluster Support for Oracle RAC. Perform this procedure from one node only.
This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
The following instructions explain how to perform this operation by using the clsetup utility. The clsetup utility configures volume-manager resources in the RAC framework resource group. To configure the multiple-owner volume-manager framework resource group to contain volume-manager resources, instead use How to Register and Configure the Framework Resource Groups in the Global Cluster by Using Sun Cluster Maintenance Commands.
Ensure that the following prerequisites are met:
All preinstallation tasks for Oracle RAC are completed.
The Sun Cluster nodes are prepared.
The data services packages are installed.
All storage management software that you intend to use is installed and configured on all nodes where Oracle RAC is to run.
Ensure that you have the following information:
The names of the nodes where you require Sun Cluster Support for Oracle RAC to run.
The list of storage management schemes that you are using for Oracle files.
Become superuser on any cluster node.
Start the clsetup utility.
# clsetup
The clsetup main menu is displayed.
Type the number that corresponds to the option for data services and press Return.
The Data Services menu is displayed.
Type the number that corresponds to the option for configuring Sun Cluster Support for Oracle RAC and press Return.
The clsetup utility displays information about Sun Cluster Support for Oracle RAC.
Press Return to continue.
The clsetup utility prompts you to select whether you are performing the initial configuration of Sun Cluster Support for Oracle RAC or administering an existing configuration.
The clsetup utility currently supports ongoing administration only of a RAC framework that is running in the global cluster. For ongoing administration of a framework that is configured in a zone cluster, use the Sun Cluster maintenance commands.
Type the number that corresponds to the option for performing the initial configuration of Sun Cluster Support for Oracle RAC and press Return.
The clsetup utility displays a list of components of Oracle RAC to configure.
Type the number that corresponds to the option for the RAC framework resource group and press Return.
The clsetup utility prompts you to select the location of the Oracle RAC cluster. This location can be the global cluster or a zone cluster.
Type the number that corresponds to the option for the location of the Oracle RAC clusters and press Return.
Type the number that corresponds to the option for the required zone cluster and press Return.
The clsetup utility displays a list of components of Oracle RAC to configure.
Type the number that corresponds to the option for the component of Oracle RAC and press Return.
The clsetup utility displays the list of prerequisites for performing this task.
Verify that the prerequisites are met, and press Return.
The clsetup utility displays a list of the cluster nodes on which the Sun Cluster Support for Oracle RAC packages are installed.
Select the nodes where you require Sun Cluster Support for Oracle RAC to run.
To accept the default selection of all listed nodes in an arbitrary order, type a and press Return.
To select a subset of the listed nodes, type a comma-separated or space-separated list of the numbers that correspond to the nodes and press Return.
Ensure that the nodes are listed in the order in which the nodes are to appear in the RAC framework resource group's node list.
To select all nodes in a particular order, type a comma-separated or space-separated ordered list of the numbers that correspond to the nodes and press Return.
Ensure that the nodes are listed in the order in which the nodes are to appear in the RAC framework resource group's node list.
To confirm your selection of nodes, type d and press Return.
The clsetup utility displays a list of storage management schemes for Oracle files.
Type the numbers that correspond to the storage management schemes that you are using for Oracle files and press Return.
To confirm your selection of storage management schemes, type d and press Return.
The clsetup utility displays the names of the Sun Cluster objects that the utility will create.
If you require a different name for any Sun Cluster objects, change each name as follows.
Type the number that corresponds to the name that you are changing and press Return.
The clsetup utility displays a screen where you can specify the new name.
At the New Value prompt, type the new name and press Return.
The clsetup utility returns you to the list of the names of the Sun Cluster objects that the utility will create.
To confirm your selection of Sun Cluster object names, type d and press Return.
The clsetup utility displays information about the Sun Cluster configuration that the utility will create.
To create the configuration, type c and press Return.
The clsetup utility displays a progress message to indicate that the utility is running commands to create the configuration. When configuration is complete, the clsetup utility displays the commands that the utility ran to create the configuration.
Press Return to continue.
The clsetup utility returns you to the list of options for configuring Sun Cluster Support for Oracle RAC.
(Optional) Type q and press Return repeatedly until you quit the clsetup utility.
If you prefer, you can leave the clsetup utility running while you perform other required tasks before using the utility again. If you choose to quit clsetup, the utility recognizes your existing RAC framework resource group when you restart the utility.
Determine if the RAC framework resource group and its resources are online.
Use the clresourcegroup(1CL) command for this purpose. By default, the clsetup utility assigns the name rac-framework-rg to the RAC framework resource group.
If the RAC framework resource group and its resources are not online, bring them online.
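As a sketch (assuming the default resource group name rac-framework-rg), you might check the status of the group and bring it online as follows:

```
# clresourcegroup status rac-framework-rg
# clresourcegroup online -emM rac-framework-rg
```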
The following table lists the default resource configuration that the clsetup utility creates when you complete this task.
For a zone cluster, the framework resources are created based on the storage management scheme that you select. For detailed information about the resource configuration for zone clusters, see the figures in Appendix A, Sample Configurations of This Data Service.
The next step depends on the volume manager that you are using, as shown in the following table.
Volume Manager | Next Step
---|---
Solaris Volume Manager for Sun Cluster | How to Create a Multi-Owner Disk Set in Solaris Volume Manager for Sun Cluster for the Oracle RAC Database
VxVM with the cluster feature | How to Create a VxVM Shared-Disk Group for the Oracle RAC Database
None | Registering and Configuring Storage Resources for Oracle Files
If you are using a volume manager for Oracle database files, the volume manager requires a global device group for the Oracle RAC database to use.
The type of global device group to create depends on the volume manager that you are using:
If you are using Solaris Volume Manager for Sun Cluster, create a Solaris Volume Manager for Sun Cluster multi-owner disk set. See How to Create a Multi-Owner Disk Set in Solaris Volume Manager for Sun Cluster for the Oracle RAC Database.
If you are using VxVM, create a VxVM shared-disk group. See How to Create a VxVM Shared-Disk Group for the Oracle RAC Database.
Perform this task only if you are using Solaris Volume Manager for Sun Cluster.
If you are using Solaris Volume Manager for Sun Cluster, Solaris Volume Manager requires a multi-owner disk set for the Oracle RAC database, the Sun QFS shared file system, or ASM to use. For information about Solaris Volume Manager for Sun Cluster multi-owner disk sets, see Multi-Owner Disk Set Concepts in Solaris Volume Manager Administration Guide.
This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
Ensure that the required Sun Cluster Support for Oracle RAC software packages are installed on each node. For more information, see Installing the Sun Cluster Support for Oracle RAC Packages.
Unless you are using the Sun QFS shared file system, do not create any file systems in the multi-owner disk set. In configurations without the Sun QFS shared file system, only the raw data file uses this disk set.
Disk devices that you add to the multi-owner disk set must be directly attached to all the cluster nodes.
Create a multi-owner disk set.
Use the metaset(1M) command for this purpose.
# metaset -s setname -M -a -h nodelist

-s setname
Specifies the name of the disk set that you are creating.

-M
Specifies that the disk set that you are creating is a multi-owner disk set.

-a
Specifies that the nodes that the -h option specifies are to be added to the disk set.

-h nodelist
Specifies a space-separated list of nodes that are to be added to the disk set. The Sun Cluster Support for Oracle RAC software packages must be installed on each node in the list.
Add global devices to the disk set that you created in Step 1.
# metaset -s setname -a devicelist

-s setname
Specifies that you are modifying the disk set that you created in Step 1.

-a
Specifies that the devices that devicelist specifies are to be added to the disk set.

devicelist
Specifies a space-separated list of full device ID path names for the global devices that are to be added to the disk set. To enable consistent access to each device from any node in the cluster, ensure that each device ID path name is of the form /dev/did/dsk/dN, where N is the device number.
For the disk set that you created in Step 1, create the volumes that the Oracle RAC database or Sun QFS shared file system will use.
If you are creating many volumes for Oracle data files, you can simplify this step by using soft partitions. However, if you are using the Sun QFS shared file system and the I/O load on your system is heavy, use separate partitions for data and metadata. Otherwise, the performance of your system might be impaired. For information about soft partitions, see Chapter 12, Soft Partitions (Overview), in Solaris Volume Manager Administration Guide and Chapter 13, Soft Partitions (Tasks), in Solaris Volume Manager Administration Guide.
Create each volume by concatenating slices on global devices that you added in Step 2. Use the metainit(1M) command for this purpose.
# metainit -s setname volume-abbrev numstripes width slicelist

-s setname
Specifies that you are creating a volume for the disk set that you created in Step 1.

volume-abbrev
Specifies the abbreviated name of the volume that you are creating. An abbreviated volume name has the format dV, where V is the volume number.

numstripes
Specifies the number of stripes in the volume.

width
Specifies the number of slices in each stripe. If you set width to greater than 1, the slices are striped.

slicelist
Specifies a space-separated list of slices that the volume contains. Each slice must reside on a global device that you added in Step 2.
If you are using mirrored devices, create the mirrors by using volumes that you created in Step 3 as submirrors.
If you are not using mirrored devices, omit this step.
Use the metainit command to create each mirror as follows:
# metainit -s setname mirror -m submirror-list

-s setname
Specifies that you are creating a mirror for the disk set that you created in Step 1.

mirror
Specifies the name of the mirror that you are creating in the form of an abbreviated volume name. An abbreviated volume name has the format dV, where V is the volume number.

-m submirror-list
Specifies a space-separated list of submirrors that the mirror is to contain. Each submirror must be a volume that you created in Step 3. Specify the name of each submirror in the form of an abbreviated volume name.
For information on configuring a Solaris Volume Manager disk set in a zone cluster, see How to Add a Disk Set to a Zone Cluster (Solaris Volume Manager) in Sun Cluster Software Installation Guide for Solaris OS.
Verify that each node is correctly added to the multi-owner disk set.
Use the metaset command for this purpose.
# metaset -s setname

-s setname
Specifies that you are verifying the disk set that you created in Step 1.
This command displays a table that contains the following information for each node that is correctly added to the disk set:
The Host column contains the node name.
The Owner column contains the text multi-owner.
The Member column contains the text Yes.
Verify that the multi-owner disk set is correctly configured.
# cldevicegroup show setname

setname
Specifies that configuration information only for the disk set that you created in Step 1 is displayed.
This command displays the device group information for the disk set. For a multi-owner disk set, the device group type is Multi-owner_SVM.
Verify the online status of the multi-owner disk set.
# cldevicegroup status setname
This command displays the status of the multi-owner disk set on each node in the multi-owner disk set.
(Configurations without the Sun QFS shared file system only) On each node that can own the disk set, change the ownership of each volume that you created in Step 3.
If you are using the Sun QFS shared file system, omit this step.
For a zone cluster, perform this step in the zone cluster.
Change the volume ownership as follows:
Owner: the DBA user
Group: the DBA group
The DBA user and the DBA group are created as explained in How to Create the DBA Group and the DBA User Accounts.
Ensure that you change ownership only of volumes that the Oracle RAC database will use.
# chown user-name:group-name volume-list

user-name
Specifies the user name of the DBA user. This user is normally named oracle.

group-name
Specifies the name of the DBA group. This group is normally named dba.

volume-list
Specifies a space-separated list of the logical names of the volumes that you created for the disk set. The format of these names depends on the type of device where the volume resides, as follows:

For block devices: /dev/md/setname/dsk/dV
For raw devices: /dev/md/setname/rdsk/dV

The replaceable items in these names are as follows:

setname
Specifies the name of the multi-owner disk set that you created in Step 1.

V
Specifies the volume number of a volume that you created in Step 3.
Ensure that this list specifies each volume that you created in Step 3.
(Configurations without the Sun QFS shared file system only) Grant to the owner of each volume whose ownership you changed in Step 8 read access and write access to the volume.
If you are using the Sun QFS shared file system, omit this step.
For a zone cluster, perform this step in the zone cluster.
Grant access to the volume on each node that can own the disk set. Ensure that you change access permissions only of volumes that the Oracle RAC database will use.
# chmod u+rw volume-list

volume-list
Specifies a space-separated list of the logical names of the volumes to whose owners you are granting read access and write access. Ensure that this list contains the volumes that you specified in Step 8.
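The volume list for the chown and chmod commands can be built from the disk set name and the volume names. The following sketch assumes a hypothetical disk set named oradg with volumes d1 and d2, and echoes the commands rather than executing them, because running them requires superuser privileges and real devices:

```shell
# Build the space-separated list of raw volume paths for a
# hypothetical disk set "oradg" with volumes d1 and d2, then
# print the ownership and permission commands that would be run.
setname=oradg
volume_list=""
for v in d1 d2; do
  volume_list="$volume_list /dev/md/$setname/rdsk/$v"
done
echo "chown oracle:dba$volume_list"
echo "chmod u+rw$volume_list"
# Prints:
#   chown oracle:dba /dev/md/oradg/rdsk/d1 /dev/md/oradg/rdsk/d2
#   chmod u+rw /dev/md/oradg/rdsk/d1 /dev/md/oradg/rdsk/d2
```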
If you are using ASM, specify the raw devices that you are using for the ASM disk group.
To specify the devices, modify the ASM_DISKSTRING ASM instance-initialization parameter.
For example, to use the /dev/md/setname/rdsk/d path for the ASM disk group, add the value /dev/md/*/rdsk/d* to the ASM_DISKSTRING parameter. If you are modifying this parameter by editing the Oracle initialization parameter file, edit the parameter as follows:
ASM_DISKSTRING = '/dev/md/*/rdsk/d*'
If you are using mirrored devices, specify external redundancy in the ASM configuration.
For more information, see your Oracle documentation.
This example shows the sequence of operations that is required to create a multi-owner disk set in Solaris Volume Manager for Sun Cluster for a four-node cluster. The disk set uses mirrored devices.
The disk set is to be used with the Sun QFS shared file system. This example does not show the creation of the Sun QFS shared file system on the devices that are added to the disk set.
To create the multi-owner disk set, the following command is run:
# metaset -s oradg -M -a -h pclus1 pclus2 pclus3 pclus4
The multi-owner disk set is named oradg. The nodes pclus1, pclus2, pclus3, and pclus4 are added to this disk set.
To add global devices to the disk set, the following command is run:
# metaset -s oradg -a /dev/did/dsk/d8 /dev/did/dsk/d9 /dev/did/dsk/d15 \
/dev/did/dsk/d16
The preceding command adds the following global devices to the disk set:
/dev/did/dsk/d8
/dev/did/dsk/d9
/dev/did/dsk/d15
/dev/did/dsk/d16
To create volumes for the disk set, the following commands are run:
# metainit -s oradg d10 1 1 /dev/did/dsk/d9s0
# metainit -s oradg d11 1 1 /dev/did/dsk/d16s0
# metainit -s oradg d20 1 1 /dev/did/dsk/d8s0
# metainit -s oradg d21 1 1 /dev/did/dsk/d15s0
Each volume is created by a one-to-one concatenation of a slice as shown in the following table. The slices are not striped.
Volume | Slice
---|---
d10 | /dev/did/dsk/d9s0
d11 | /dev/did/dsk/d16s0
d20 | /dev/did/dsk/d8s0
d21 | /dev/did/dsk/d15s0
To create mirrors for the disk set, the following commands are run:
# metainit -s oradg d1 -m d10 d11
# metainit -s oradg d2 -m d20 d21
The preceding commands create a mirror that is named d1 from volumes d10 and d11, and a mirror that is named d2 from volumes d20 and d21.
To verify that each node is correctly added to the multi-owner disk set, the following command is run:
# metaset -s oradg

Multi-owner Set name = oradg, Set number = 1, Master = pclus2

Host                Owner          Member
  pclus1            multi-owner    Yes
  pclus2            multi-owner    Yes
  pclus3            multi-owner    Yes
  pclus4            multi-owner    Yes

Drive    Dbase
d8       Yes
d9       Yes
d15      Yes
d16      Yes
To verify that the multi-owner disk set is correctly configured, the following command is run:
# cldevicegroup show oradg

=== Device Groups ===

Device Group Name:        oradg
  Type:                   Multi-owner_SVM
  failback:               false
  Node List:              pclus1, pclus2, pclus3, pclus4
  preferenced:            false
  numsecondaries:         0
  diskset name:           oradg
To verify the online status of the multi-owner disk set, the following command is run:
# cldevicegroup status oradg

=== Cluster Device Groups ===

--- Device Group Status ---

Device Group Name     Primary     Secondary     Status
-----------------     -------     ---------     ------

--- Multi-owner Device Group Status ---

Device Group Name     Node Name     Status
-----------------     ---------     ------
oradg                 pclus1        Online
                      pclus2        Online
                      pclus3        Online
                      pclus4        Online
Go to Registering and Configuring Storage Resources for Oracle Files.
Perform this task only if you are using VxVM with the cluster feature.
If you are using VxVM with the cluster feature, VxVM requires a shared-disk group for the Oracle RAC database or ASM to use.
Ensure that the required Sun Cluster Support for Oracle RAC software packages are installed on each node. For more information, see Installing the Sun Cluster Support for Oracle RAC Packages.
Do not register the shared-disk group as a cluster device group with the cluster.
Do not create any file systems in the shared-disk group because only the raw data file uses this disk group.
Disks that you add to the shared-disk group must be directly attached to all the cluster nodes.
Ensure that your VxVM license is current. If your license expires, the node panics.
Use Veritas commands that are provided for creating a VxVM shared-disk group.
For information about VxVM shared-disk groups, see your VxVM documentation.
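As an illustrative sketch (the disk group name oradg and the disk names are hypothetical; consult your VxVM documentation for the exact procedure), a shared-disk group is typically initialized from the CVM master node with the -s (shared) option of the vxdg command:

```
# vxdg -s init oradg c1t1d0 c1t2d0
```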
If you are using ASM, specify the raw devices that you are using for the ASM disk group.
To specify the devices, modify the ASM_DISKSTRING ASM instance-initialization parameter.
For example, to use the /dev/vx/rdsk/disk-group path for the ASM disk group, add the value /dev/vx/rdsk/disk-group/* to the ASM_DISKSTRING parameter, where disk-group is the name of the shared-disk group that you created. If you are modifying this parameter by editing the Oracle initialization parameter file, edit the parameter as follows:
ASM_DISKSTRING = '/dev/vx/rdsk/disk-group/*'
If you are using mirrored devices, specify external redundancy in the ASM configuration.
For more information, see your Oracle documentation.
Go to Registering and Configuring Storage Resources for Oracle Files.
Storage resources provide fault monitoring and automatic fault recovery for global device groups and file systems.
If you are using global device groups or shared file systems for Oracle files, configure storage resources to manage the availability of the storage on which the Oracle software depends.
Configure storage resources for the following types of global device groups:
Solaris Volume Manager for Sun Cluster multi-owner disk sets
VxVM shared-disk groups
Configure storage resources for the following types of shared file systems:
A Sun QFS shared file system
A file system on a qualified NAS device
This section contains the following information about registering and configuring storage resources for Oracle files:
Tools for Registering and Configuring Storage Resources for Oracle Files
How to Register and Configure Storage Resources for Oracle Files by Using clsetup
Sun Cluster provides the following tools for registering and configuring storage resources for Oracle files:
The clsetup(1CL) utility. For more information, see How to Register and Configure Storage Resources for Oracle Files by Using clsetup.
Sun Cluster Manager. For more information, see the Sun Cluster Manager online help.
Sun Cluster maintenance commands. For more information, see Creating Storage Management Resources by Using Sun Cluster Maintenance Commands.
The clsetup utility and Sun Cluster Manager each provide a wizard for configuring storage resources for Oracle files. The wizards reduce the possibility of configuration errors that might result from command syntax errors or omissions. These wizards also ensure that all required resources are created and that all required dependencies between resources are set.
This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
Perform this procedure from only one node of the cluster.
Ensure that the following prerequisites are met:
The RAC framework resource group is created and is online. For more information, see Registering and Configuring the RAC Framework Resource Group.
Required volumes, global device groups, and file systems are created. For more information, see the following sections:
Required file systems are mounted.
Ensure that you have the following information:
The name of each scalable device group that you are using for Oracle files, if any
The mount point of each shared file system that you are using for Oracle files, if any
On one node of the cluster, become superuser.
Start the clsetup utility.
# clsetup
The clsetup main menu is displayed.
Type the number that corresponds to the option for data services and press Return.
The Data Services menu is displayed.
Type the number that corresponds to the option for configuring Sun Cluster Support for Oracle RAC and press Return.
The clsetup utility displays information about Sun Cluster Support for Oracle RAC.
Press Return to continue.
The clsetup utility prompts you to select whether you are performing the initial configuration of Sun Cluster Support for Oracle RAC or administering an existing configuration.
Type the number that corresponds to the option for performing the initial configuration of Sun Cluster Support for Oracle RAC and press Return.
The clsetup utility displays a list of components of Oracle RAC to configure.
Type the number that corresponds to the option for storage resources for Oracle files and press Return.
The clsetup utility prompts you to select the location of the Oracle RAC cluster. This location can be the global cluster or a zone cluster.
Type the number that corresponds to the option for the location of the Oracle RAC clusters and press Return.
Type the number that corresponds to the option for the required zone cluster and press Return.
The clsetup utility displays the list of components of Oracle RAC to configure.
Type the number that corresponds to the option for the component of Oracle RAC and press Return.
The clsetup utility displays the list of prerequisites for performing this task.
Verify that the prerequisites are met, and press Return.
The response of the clsetup utility depends on how the RAC framework resource group was configured.
By using the clsetup wizard or the Sun Cluster Manager wizard. The clsetup utility displays a list of the resources for scalable device groups that are configured on the cluster. If no suitable resources exist, this list is empty.
By using the scsetup utility or Sun Cluster maintenance commands. The clsetup utility displays a list of storage management schemes for Oracle files.
If you are prompted to select storage management schemes for Oracle files, select the schemes.
If you are prompted for resources for scalable device groups, omit this step.
Type the numbers that correspond to the storage management schemes that you are using for Oracle files and press Return.
To confirm your selection of storage management schemes, type d and press Return.
The clsetup utility displays a list of the resources for scalable device groups that are configured on the cluster. If no suitable resources exist, this list is empty.
If no suitable resources exist, or if no resource exists for a device group that you are using, add a resource to the list.
If resources exist for all the device groups that you are using, omit this step.
For each resource that you are adding, perform the following steps:
Type c and press Return.
The clsetup utility displays a list of the scalable device groups that are configured on the cluster.
Type the number that corresponds to the device group that you are using for Oracle files and press Return.
After you select the device group, you can either select the entire disk group or specify individual logical devices (disks) within the disk group.
Choose whether you want to specify logical devices.
Type a comma-separated list of the numbers that correspond to the logical devices or disks that you chose, or type a for all, and press Return.
The clsetup utility returns you to the list of resources for scalable device groups that are configured on the cluster.
To confirm your selection of device groups, type d and press Return.
The clsetup utility returns you to the list of the resources for scalable device groups that are configured on the cluster. The resource that you are creating is added to the list.
If a suitable existing resource that you intend to use is not listed, type r to refresh the list.
When the list contains resources for all the device groups that you are using, type the numbers that correspond to the resources that you require.
You can select existing resources, resources that are not yet created, or a combination of existing resources and new resources. If you select more than one existing resource, the selected resources must be in the same resource group.
To confirm your selection of resources for device groups, type d and press Return.
The clsetup utility displays a list of the resources for shared file-system mount points that are configured on the cluster. If no suitable resources exist, this list is empty.
If no suitable resources exist, or if no resource exists for a file-system mount point that you are using, add a resource to the list.
If resources exist for all the file-system mount points that you are using, omit this step.
For each resource that you are adding, perform the following steps:
Type c and press Return.
The clsetup utility displays a list of the shared file systems that are configured on the cluster.
Type a comma-separated or space-separated list of numbers that correspond to the file systems that you are using for Oracle files and press Return.
To confirm your selection of file systems, type d and press Return.
The clsetup utility returns you to the list of the resources for file-system mount points that are configured on the cluster. The resource that you are creating is added to the list.
If a suitable existing resource that you intend to use is not listed, type r to refresh the list.
When the list contains resources for all the file-system mount points that you are using, type the numbers that correspond to the resources that you require.
You can select existing resources, resources that are not yet created, or a combination of existing resources and new resources. If you select more than one existing resource, the selected resources must be in the same resource group.
To confirm your selection of resources for file-system mount points, type d and press Return.
The clsetup utility displays the names of the Sun Cluster objects that the utility will create or add to your configuration.
If you need to modify a Sun Cluster object that the utility will create, modify the object as follows:
Type the number that corresponds to the Sun Cluster object that you are modifying and press Return.
The clsetup utility displays a list of properties that are set for the object.
Modify each property that you are changing as follows:
When you have modified all the properties that you need to change, type d.
The clsetup utility returns you to the list of the names of the Sun Cluster objects that the utility will create or add to your configuration.
When you have modified all the Sun Cluster objects that you need to change, type d.
The clsetup utility displays information about the RAC framework resource group for which storage resources will be configured.
To create the configuration, type c and press Return.
The clsetup utility displays a progress message to indicate that the utility is running commands to create the configuration. When configuration is complete, the clsetup utility displays the commands that the utility ran to create the configuration.
Press Return to continue.
The clsetup utility returns you to the list of options for configuring Sun Cluster Support for Oracle RAC.
(Optional) Type q and press Return repeatedly until you quit the clsetup utility.
If you prefer, you can leave the clsetup utility running while you perform other required tasks before using the utility again. If you choose to quit clsetup, the utility recognizes your existing RAC framework resource group when you restart the utility.
Determine whether the resource groups that the wizard created are online.
# clresourcegroup status
If a resource group that the wizard created is not online, bring the resource group online.
For each resource group that you are bringing online, type the following command:
# clresourcegroup online -emM rac-storage-rg
rac-storage-rg
Specifies the name of the resource group that you are bringing online.
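The status check and the online command from the preceding steps can be combined into a short shell sketch. This is a minimal sketch, not part of the documented procedure: rac-storage-rg stands in for whatever resource group name clresourcegroup status reports on your cluster, and the guard is there only so the script fails gracefully on a machine that is not a Sun Cluster node.

```shell
# Sketch: bring a wizard-created resource group online if it is not already.
# Replace rac-storage-rg with the name reported by "clresourcegroup status".
RG=rac-storage-rg

if command -v clresourcegroup >/dev/null 2>&1; then
  clresourcegroup status "$RG"
  # -e enables the resources in the group, -M brings the group to a
  # managed state, -m enables monitoring for the resources
  clresourcegroup online -emM "$RG"
else
  echo "clresourcegroup not found; run this on a Sun Cluster node"
fi
```

Repeat the clresourcegroup online command for each resource group that the wizard created and that is not yet online.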
The following table lists the default resource configuration that the clsetup utility creates when you complete this task.
For detailed information about the resource configuration for zone clusters, see the figures in Appendix A, Sample Configurations of This Data Service.
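To confirm that the resources the clsetup utility created match the expected configuration, you can list them with the standard Sun Cluster maintenance commands. A minimal sketch, assuming you run it on a cluster node; the exact resource names vary with your configuration:

```shell
# Sketch: inspect the resource configuration that clsetup created.
if command -v clresourcegroup >/dev/null 2>&1; then
  clresourcegroup status   # state of each resource group on each node
  clresource status        # state of the individual resources
  clresource list -v       # resource names, types, and owning groups
else
  echo "Sun Cluster commands not found; run this on a cluster node"
fi
```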