This chapter explains how to install Sun Cluster Support for Oracle Real Application Clusters on your Sun Cluster nodes.
Overview of the Installation Process for Sun Cluster Support for Oracle Real Application Clusters
Installing Storage Management Software With Sun Cluster Support for Oracle Real Application Clusters
Installing Sun Cluster Support for Oracle Real Application Clusters Packages
The following table summarizes the installation tasks and provides cross-references to detailed instructions for performing the tasks.
Perform these tasks in the order in which they are listed in the table.
Table 1–1 Tasks for Installing Sun Cluster Support for Oracle Real Application Clusters
| Task | Instructions |
|---|---|
| Understand preinstallation considerations and special requirements | |
| Install storage management software | Installing Storage Management Software With Sun Cluster Support for Oracle Real Application Clusters |
| Install data service packages | Installing Sun Cluster Support for Oracle Real Application Clusters Packages |
| Prepare the Sun Cluster nodes | |
| SPARC: Install the UNIX Distributed Lock Manager | |
Oracle Real Application Clusters is an application that can run on more than one node concurrently. Sun Cluster Support for Oracle Real Application Clusters is a set of packages that, when installed, enables Oracle Real Application Clusters to run on Sun Cluster nodes. This data service also enables Oracle Real Application Clusters to be managed by using Sun Cluster commands.
In earlier versions of Oracle, this application is referred to as Oracle Parallel Server. In this book, references to Oracle Real Application Clusters also apply to Oracle Parallel Server unless this book explicitly states otherwise.
This data service provides fault monitoring only to enable the status of Oracle Real Application Clusters resources to be monitored by Sun Cluster utilities. This data service does not provide automatic fault recovery because the Oracle Real Application Clusters software provides similar functionality.
Before you begin the installation, note the hardware and software requirements in the subsections that follow.
Sun Cluster Support for Oracle Real Application Clusters requires a functioning cluster with the initial cluster framework already installed. See Sun Cluster Software Installation Guide for Solaris OS for details about initial installation of cluster software.
Verify that you have obtained and installed the appropriate licenses for your software. If you install your licenses incorrectly or incompletely, the nodes might fail to boot correctly.
For example, if you are using VxVM with the cluster feature, verify that you have installed a valid license for the Volume Manager cluster feature by running one of the following commands:
If you are using Sun StorEdge QFS shared file system version 4.2, verify that you have installed a valid license for Sun StorEdge QFS on each node. To verify that a valid license is installed on a node, run the samcmd l command on the node.
Check with a Sun Enterprise Services representative for the current supported topologies for Sun Cluster Support for Oracle Real Application Clusters, cluster interconnect, storage management scheme, and hardware configurations.
Ensure that you have installed all of the applicable software patches for the Solaris Operating System, Sun Cluster, Oracle, and your volume manager. If you need to install any Sun Cluster Support for Oracle Real Application Clusters patches, you must apply these patches after you install the data service packages.
Sun Cluster Support for Oracle Real Application Clusters enables you to use the storage management schemes for Oracle files that are listed in the following table. The table summarizes the types of Oracle files that each storage management scheme can store. Ensure that you choose a combination of storage management schemes that can store all types of Oracle files.
Table 1–2 Storage Management Schemes for Oracle Files
| Oracle File Type | Solaris Volume Manager for Sun Cluster | VxVM | Hardware RAID | Sun StorEdge QFS | Network Appliance NAS Devices | ASM | Cluster File System | Local Disks |
|---|---|---|---|---|---|---|---|---|
| RDBMS binary files | No | No | No | Yes | Yes | No | Yes | Yes |
| CRS binary files | No | No | No | Yes | Yes | No | Yes | Yes |
| Configuration files | No | No | No | Yes | Yes | No | Yes | Yes |
| System parameter file (SPFILE) | No | No | No | Yes | Yes | Yes | Yes | No |
| Alert files | No | No | No | Yes | Yes | No | Yes | Yes |
| Trace files | No | No | No | Yes | Yes | No | Yes | Yes |
| Data files | Yes | Yes | Yes | Yes | Yes | Yes | No | No |
| Control files | Yes | Yes | Yes | Yes | Yes | Yes | No | No |
| Online redo log files | Yes | Yes | Yes | Yes | Yes | Yes | No | No |
| Archived redo log files | No | No | No | Yes | Yes | Yes | Yes | No |
| Flashback log files | No | No | No | Yes | Yes | Yes | Yes | No |
| Recovery files | No | No | No | Yes | Yes | Yes | No | No |
| OCR files | Yes | Yes | Yes | Yes | Yes | No | Yes | No |
| CRS voting disk | Yes | Yes | Yes | Yes | Yes | No | Yes | No |
Some types of files are not included in all releases of Oracle Real Application Clusters. For information about which types of file are included in the release that you are using, see your Oracle documentation.
You can use the following storage management schemes for the Oracle Real Application Clusters database:
Solaris Volume Manager for Sun Cluster
Solaris Volume Manager for Sun Cluster is supported only with Oracle Real Application Clusters. Solaris Volume Manager for Sun Cluster with Oracle Parallel Server is not supported.
VERITAS Volume Manager (VxVM) with the cluster feature
VxVM is supported only on the SPARC platform.
Hardware redundant array of independent disks (RAID) support
Sun StorEdge QFS shared file system with hardware RAID support
Sun StorEdge QFS shared file system is supported only on the SPARC platform.
Network Appliance network-attached storage (NAS) devices
Oracle Automatic Storage Management (ASM)
You can install the Oracle binary files and Oracle configuration files on one of the following locations.
The local disks of each cluster node
A shared file system from the following list:
The Sun StorEdge QFS shared file system
The cluster file system
A file system on a Network Appliance NAS device
Placing the Oracle binary files and Oracle configuration files on the individual cluster nodes enables you to upgrade the Oracle application later without shutting down the data service.
The disadvantage is that you then have several copies of the Oracle application binary files and Oracle configuration files to maintain and administer.
To simplify the maintenance of your Oracle installation, you can install the Oracle binary files and Oracle configuration files on a shared file system. The following shared file systems are supported:
The Sun StorEdge QFS shared file system
The cluster file system
If you use the cluster file system, decide which volume manager to use:
Solaris Volume Manager
VxVM without the cluster feature
VxVM is supported only on the SPARC platform.
A file system on a Network Appliance NAS device
If you put the Oracle binary files and Oracle configuration files on a shared file system, you have only one copy to maintain and manage. However, you must shut down the data service in the entire cluster to upgrade the Oracle application. If a short period of downtime for upgrades is acceptable, place a single copy of the Oracle binary files and Oracle configuration files on a shared file system.
You can store all of the files that are associated with Oracle Real Application Clusters on the Sun StorEdge QFS shared file system.
For information about how to create a Sun StorEdge QFS shared file system, see the following documentation for Sun StorEdge QFS:
Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide
Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide
Distribute these files among several file systems as explained in the subsections that follow.
For RDBMS binary files and related files, create one file system in the cluster to store the files.
The RDBMS binary files and related files are as follows:
Oracle relational database management system (RDBMS) binary files
Oracle configuration files (for example, init.ora, tnsnames.ora, listener.ora, and sqlnet.ora)
System parameter file (SPFILE)
Alert files (for example, alert_sid.log)
Trace files (*.trc)
Oracle Cluster Ready Services (CRS) binary files
For database files and related files, determine whether you require one file system for each database or multiple file systems for each database.
For simplicity of configuration and maintenance, create one file system to store these files for all Oracle Real Application Clusters instances of the database.
To facilitate future expansion, create multiple file systems to store these files for all Oracle Real Application Clusters instances of the database.
If you are adding storage for an existing database, you must create additional file systems for the storage that you are adding. In this situation, distribute the database files and related files among the file systems that you will use for the database.
Each file system that you create for database files and related files must have its own metadata server. For information about the resources that are required for the metadata servers, see SPARC: Resources for the Sun StorEdge QFS Shared File System.
The database files and related files are as follows:
Data files
Control files
Online redo log files
Archived redo log files
Flashback log files
Recovery files
Oracle cluster registry (OCR) files
Oracle CRS voting disk
Only the following files that are associated with Oracle Real Application Clusters can be stored on the cluster file system:
Oracle RDBMS binary files
Oracle CRS binary files
Oracle configuration files (for example, init.ora, tnsnames.ora, listener.ora, and sqlnet.ora)
System parameter file (SPFILE)
Alert files (for example, alert_sid.log)
Trace files (*.trc)
Archived redo log files
Flashback log files
Oracle cluster registry (OCR) files
Oracle CRS voting disk
You must not store data files, control files, online redo log files, or Oracle recovery files on the cluster file system.
The input/output (I/O) performance during the writing of archived redo log files is affected by the location of the device group for archived redo log files. For optimum performance, ensure that the primary of the device group for archived redo log files is located on the same node as the Oracle Real Application Clusters database instance. This device group contains the file system that holds archived redo log files of the database instance.
If you are using the cluster file system with Sun Cluster 3.1, consider increasing the desired number of secondary nodes for device groups. By increasing the desired number of secondary nodes for device groups, you can improve the availability of your cluster. To increase the desired number of secondary nodes for device groups, change the numsecondaries property. For more information, see Multiported Disk Device Groups in Sun Cluster Concepts Guide for Solaris OS.
For information about how to create cluster file systems, see the following documentation:
Use the questions in the subsections that follow to plan the installation and configuration of Sun Cluster Support for Oracle Real Application Clusters. Write the answers to these questions in the space that is provided on the data service worksheets in Configuration Worksheets in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.
If you are using Oracle 10g, no Oracle RAC server resources are required. These resources are not required with Oracle 10g because Oracle CRS starts and shuts down Oracle Real Application Clusters database instances. In versions of Oracle earlier than 10g, these resources are required to enable Sun Cluster to start and shut down database instances.
Which resource groups will you use for the Oracle Real Application Clusters (RAC) server resources?
You require one resource group for each Oracle Real Application Clusters database instance. Each resource group contains the Oracle RAC server resource for the database instance.
Use the answer to this question when you perform the procedure in Registering and Configuring Oracle RAC Server Resources.
If you are using Oracle 10g, no Oracle listener resources are required. These resources are not required with Oracle 10g because Oracle CRS starts and shuts down the Oracle listeners. In versions of Oracle earlier than 10g, these resources are required to enable Sun Cluster to start and shut down the listeners.
Which resource groups will you use for the Oracle listener resources?
Use the answer to this question when you perform the procedure in Registering and Configuring Oracle Listener Resources.
The resource groups depend on your configuration of Oracle listeners with Real Application Clusters database instances. For general information about possible configurations of listeners for Real Application Clusters instances, see your Oracle documentation. Example configurations are described in the subsections that follow.
One listener serves only one Real Application Clusters instance. The listener listens on the fixed Internet Protocol (IP) address of the node. The listener cannot fail over.
In this situation, configure the listener resource as follows:
Configure the listener resource and the RAC server resource in the same resource group.
Ensure that this resource group is mastered on only one node.
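As a sketch of this configuration, a listener.ora fragment for such a listener might look as follows. The listener name, hostname, and port are hypothetical; see your Oracle documentation for the authoritative syntax for your release.

```
# Hypothetical listener.ora fragment: one listener on the node's
# fixed IP address, serving one Real Application Clusters instance.
# The listener cannot fail over because it is bound to this host.
LISTENER_N1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = phys-schost-1)(PORT = 1521)))
```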
One listener serves several Real Application Clusters instances on the same node. The listener uses Oracle's transparent application failover (TAF) and load balancing to distribute client connections across all Real Application Clusters instances. The listener cannot fail over.
In this situation, configure the listener resource as follows:
Configure the listener resource in its own resource group.
Ensure that the listener's resource group is mastered on only one node.
Create a dependency between the listener's resource group and RAC servers' resource groups.
One listener that can fail over serves several Real Application Clusters instances on the same node. When the listener fails over to another node, the listener serves several Real Application Clusters instances on the other node.
The listener uses Oracle's TAF and load balancing to distribute client connections across all Real Application Clusters instances. To ensure fast error detection and short failover times, the listener listens on an address that is represented by a LogicalHostname resource.
In this situation, configure the listener resource as follows:
Configure the listener resource and the LogicalHostname resource in the same resource group.
Ensure that this resource group is mastered on the nodes on which Oracle Real Application Clusters is running.
For more information, see LogicalHostname Resources for Oracle Listener Resources.
One listener serves all Real Application Clusters instances on all nodes. The listener listens on an address that is represented by a LogicalHostname resource. This configuration ensures that the address is plumbed very quickly on another node after a node fails.
You can use this configuration if you configure Real Application Clusters instances to use a multithreaded server (MTS). In such a configuration, the REMOTE_LISTENERS parameter in the init.ora file specifies that each dispatcher registers with the listener on a logical IP address.
All clients connect through the one listener. The listener redirects each client connection to the least busy dispatcher. The least busy dispatcher might be on a different node from the listener.
If the listener fails, the listener's fault monitor restarts the listener. If the node where the listener is running fails, the listener is restarted on a different node. In both situations the dispatchers reregister after the listener is restarted.
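The configuration above can be sketched as the following init.ora fragment. The listener alias and dispatcher count are hypothetical, and the parameter name follows the usage in this chapter; check the Oracle reference documentation for your release before using it.

```
# Hypothetical init.ora fragment for an MTS configuration in which
# each dispatcher registers with one listener on a logical IP address.
# LISTENERS_LOGICAL is an assumed tnsnames.ora alias that resolves to
# the address of the LogicalHostname resource.
REMOTE_LISTENERS = LISTENERS_LOGICAL
DISPATCHERS = "(PROTOCOL=TCP)(DISPATCHERS=3)"
```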
If you are using one listener for the entire cluster, configure the following resources in the same resource group:
The listener resource
The LogicalHostname resource
For more information, see LogicalHostname Resources for Oracle Listener Resources.
If you are using Oracle 10g, no LogicalHostname resources are required.
Which LogicalHostname resources will Oracle listener resources use?
Use the answer to this question when you perform the procedure in Registering and Configuring Oracle Listener Resources.
If a cluster node that is running an instance of Oracle Real Application Clusters fails, an operation that a client application attempted might be required to time out before the operation is attempted again on another instance. If the Transmission Control Protocol/Internet Protocol (TCP/IP) network timeout is high, the client application might require a significant length of time to detect the failure. Typically, client applications require between three and nine minutes to detect such failures.
In such situations, client applications can connect to listener resources that are listening on an address that is represented by the Sun Cluster LogicalHostname resource. Configure the LogicalHostname resource and the listener resource in a separate resource group. Ensure that this resource group is mastered on the nodes on which Oracle Real Application Clusters is running. If a node fails, the resource group that contains the LogicalHostname resource and the listener resource fails over to another surviving node on which Oracle Real Application Clusters is running. The failover of the LogicalHostname resource enables new connections to be directed to the other instance of Oracle Real Application Clusters.
If you are using the Sun StorEdge QFS shared file system, answer the following questions:
Which resources will you create to represent the metadata server for the Sun StorEdge QFS shared file system?
One resource is required for each Sun StorEdge QFS metadata server.
Which resource groups will you use for these resources?
You might use multiple file systems for database files and related files. For more information, see SPARC: Requirements for Using the Sun StorEdge QFS Shared File System.
If you are using Oracle 10g, Oracle CRS manages Real Application Clusters database instances. These database instances must be started only after all shared file systems are mounted. To meet this requirement, ensure that the file system that contains the Oracle CRS voting disk is mounted only after the file systems for other database files have been mounted. This behavior ensures that, when a node is booted, Oracle CRS is started only after all Sun StorEdge QFS file systems are mounted.
To enable Sun Cluster to mount the file systems in the required order, configure resource groups for the metadata servers of the file systems as follows:
Create the resources for the metadata servers in separate resource groups.
Set the resource group for the file system that contains the Oracle CRS voting disk to depend on the other resource groups.
For more information, see the following documentation for Sun StorEdge QFS:
Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide
Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide
Use the answers to these questions when you perform the procedure in Registering and Configuring Oracle RAC Server Resources.
If you plan to use the scrgadm utility to create the RAC framework resource group, what name will you assign to this resource group?
If you use the scsetup utility to create the RAC framework resource group, omit this question. The scsetup utility automatically assigns a name when you create the resource group.
For more information, see Registering and Configuring the RAC Framework Resource Group.
Where will the Oracle configuration files reside?
For the advantages and disadvantages of using the local file system instead of the cluster file system, see Storage Management Requirements for Oracle Binary Files and Oracle Configuration Files.
This section lists special requirements for Sun Cluster Support for Oracle Real Application Clusters.
Before you decide which architecture to use for the Oracle UDLM and the Oracle relational database management system (RDBMS), note the following points.
The architecture of both Oracle components must match. For example, if you have 64-bit architecture for your Oracle UDLM, you must have 64-bit architecture for your RDBMS.
If you have 32-bit architecture for your Oracle components, you can boot the node on which the components reside in either 32-bit mode or 64-bit mode. However, if you have 64-bit architecture for your Oracle components, you must boot the node on which the components reside in 64-bit mode.
You must use the same architecture when you boot all of the nodes. For example, if you boot one node to use 32-bit architecture, you must boot all of the nodes to use 32-bit architecture.
The following list shows the locations of the data service log files.
Current log: /var/cluster/ucmm/ucmm_reconf.log.
Previous logs: /var/cluster/ucmm/ucmm_reconf.log.0 (0,1,...) – This location is dependent on the Oracle UDLM package.
Oracle UDLM logs: /var/cluster/ucmm/dlm_nodename/logs – If you cannot find the Oracle log files at this location, contact Oracle support.
Oracle UDLM core files: /var/cluster/ucmm/dlm_nodename/cores – If you cannot find the Oracle log files at this location, contact Oracle support.
Logs for Oracle RAC server resource: /var/opt/SUNWscor/oracle_server/message_log.resource, where resource is the name of the RAC server resource.
For information about the installation, administration, and operation of the Oracle Real Application Clusters Guard option, see the Oracle documentation. If you plan to use this product option with Sun Cluster 3.1, note the points in the subsections that follow before you install Sun Cluster 3.1.
If you use the Oracle Real Application Clusters Guard option with Sun Cluster 3.1, the following restrictions apply to hostnames that you use in your cluster:
Hostnames cannot contain special characters.
You cannot change the hostnames after you install Sun Cluster 3.1.
For more information about these restrictions and any other requirements, see the Oracle documentation.
If you use the Oracle Real Application Clusters Guard option with Sun Cluster 3.1, do not use Sun Cluster commands to perform the following operations:
Manipulating the state of resources that Oracle Real Application Clusters Guard installs. Using Sun Cluster commands for this purpose might cause failures.
Querying the state of the resources that Oracle Real Application Clusters Guard installs. This state might not reflect the actual state. To check the state of the Oracle Real Application Clusters Guard, use the commands that Oracle supplies.
Install the software for the storage management schemes that you are using for Oracle files. For more information, see Storage Management Requirements for Oracle Files.
For information about how to install and configure Network Appliance NAS devices with Sun Cluster Support for Oracle Real Application Clusters, see Sun Cluster 3.1 With Network-Attached Storage Devices Manual for Solaris OS.
To use the Solaris Volume Manager for Sun Cluster software with Sun Cluster Support for Oracle Real Application Clusters, perform the following tasks.
Ensure that you are using Solaris 9 9/04, Solaris 10, or compatible versions.
Solaris Volume Manager for Sun Cluster is installed during the installation of the Solaris Operating System.
Configure the Solaris Volume Manager for Sun Cluster software on the cluster nodes.
For more information, see Installing and Configuring Solstice DiskSuite or Solaris Volume Manager Software in Sun Cluster Software Installation Guide for Solaris OS.
Ensure that all other storage management schemes that you are using for Oracle files are installed.
After all storage management schemes that you are using for Oracle files are installed, go to Preparing the Sun Cluster Nodes.
To use the VxVM software with Sun Cluster Support for Oracle Real Application Clusters, perform the following tasks.
If you are using VxVM with the cluster feature, obtain a license for the Volume Manager cluster feature in addition to the basic VxVM license.
See your VxVM documentation for more information about VxVM licensing requirements.
Failure to correctly install the license for the Volume Manager cluster feature might cause a panic when you install Oracle Real Application Clusters support. Before you install the Oracle Real Application Clusters packages, run the vxlicense -p or vxlicrep command to ensure that you have installed a valid license for the Volume Manager cluster feature.
Install and configure the VxVM software on the cluster nodes.
See Chapter 4, Installing and Configuring VERITAS Volume Manager, in Sun Cluster Software Installation Guide for Solaris OS and the VxVM documentation for more information.
Ensure that all other storage management schemes that you are using for Oracle files are installed.
After all storage management schemes that you are using for Oracle files are installed, go to Preparing the Sun Cluster Nodes.
You can use Sun Cluster Support for Oracle Real Application Clusters with hardware RAID support.
For example, you can use Sun StorEdge A3500/A3500FC disk arrays with hardware RAID support and without VxVM software. To use this combination, configure raw device identities (/dev/did/rdsk/d*) on top of the disk arrays' logical unit numbers (LUNs). To set up the raw devices for Oracle Real Application Clusters on a cluster that uses StorEdge A3500/A3500FC disk arrays with hardware RAID, perform the following steps.
Create LUNs on the disk arrays.
See the Sun Cluster hardware documentation for information about how to create LUNs.
After you create the LUNs, run the format(1M) command to partition the disk arrays' LUNs into as many slices as you need.
The following example lists output from the format command.
```
# format
 0. c0t2d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
    /sbus@3,0/SUNW,fas@3,8800000/sd@2,0
 1. c0t3d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
    /sbus@3,0/SUNW,fas@3,8800000/sd@3,0
 2. c1t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
    /pseudo/rdnexus@1/rdriver@5,0
 3. c1t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
    /pseudo/rdnexus@1/rdriver@5,1
 4. c2t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
    /pseudo/rdnexus@2/rdriver@5,0
 5. c2t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
    /pseudo/rdnexus@2/rdriver@5,1
 6. c3t4d2 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
    /pseudo/rdnexus@3/rdriver@4,2
```
To prevent a loss of disk partition information, do not start the partition at cylinder 0 for any disk slice that is used for raw data. The disk partition table is stored in cylinder 0 of the disk.
Run the scdidadm(1M) command to find the raw device identity (DID) that corresponds to the LUNs that you created in Step 1.
The following example lists output from the scdidadm -L command.
```
# scdidadm -L
1    phys-schost-1:/dev/rdsk/c0t2d0   /dev/did/rdsk/d1
1    phys-schost-2:/dev/rdsk/c0t2d0   /dev/did/rdsk/d1
2    phys-schost-1:/dev/rdsk/c0t3d0   /dev/did/rdsk/d2
2    phys-schost-2:/dev/rdsk/c0t3d0   /dev/did/rdsk/d2
3    phys-schost-2:/dev/rdsk/c4t4d0   /dev/did/rdsk/d3
3    phys-schost-1:/dev/rdsk/c1t5d0   /dev/did/rdsk/d3
4    phys-schost-2:/dev/rdsk/c3t5d0   /dev/did/rdsk/d4
4    phys-schost-1:/dev/rdsk/c2t5d0   /dev/did/rdsk/d4
5    phys-schost-2:/dev/rdsk/c4t4d1   /dev/did/rdsk/d5
5    phys-schost-1:/dev/rdsk/c1t5d1   /dev/did/rdsk/d5
6    phys-schost-2:/dev/rdsk/c3t5d1   /dev/did/rdsk/d6
6    phys-schost-1:/dev/rdsk/c2t5d1   /dev/did/rdsk/d6
```
Use the DID that the scdidadm output identifies to set up the raw devices.
For example, the scdidadm output might identify that the raw DID that corresponds to the disk arrays' LUNs is d4. In this instance, use the /dev/did/rdsk/d4sN raw device, where N is the slice number.
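The mapping from scdidadm output to a raw device name can be sketched in shell. The sample output and the slice number s6 are hypothetical; on a live cluster you would substitute the output of scdidadm -L.

```shell
# Hypothetical sample of `scdidadm -L` output. On a live cluster,
# substitute: scdidadm_output=$(scdidadm -L)
scdidadm_output='4 phys-schost-1:/dev/rdsk/c2t5d0 /dev/did/rdsk/d4
4 phys-schost-2:/dev/rdsk/c3t5d0 /dev/did/rdsk/d4'

# Pick the shared DID device for DID instance number 4, then append
# an assumed slice number (s6) to form the raw device name.
did_device=$(printf '%s\n' "$scdidadm_output" | awk '$1 == 4 {print $3; exit}')
raw_device="${did_device}s6"
echo "$raw_device"   # prints /dev/did/rdsk/d4s6
```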
Ensure that all other storage management schemes that you are using for Oracle files are installed.
After all storage management schemes that you are using for Oracle files are installed, go to Preparing the Sun Cluster Nodes.
You must use the Sun StorEdge QFS shared file system with hardware RAID support.
For detailed instructions for installing, configuring, and using Sun StorEdge QFS shared file system, see Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide and Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide.
Ensure that the Sun StorEdge QFS software is installed.
Ensure that each Sun StorEdge QFS shared file system is correctly configured for use with Sun Cluster Support for Oracle Real Application Clusters.
Ensure that each Sun StorEdge QFS shared file system is mounted with the correct options for use with Sun Cluster Support for Oracle Real Application Clusters.
For the file system that contains binary files, configuration files, alert files, and trace files, use the default mount options.
For the file systems that contain data files, control files, online redo log files, and archived redo log files, set the mount options as follows:
In the /etc/opt/SUNWsamfs/samfs.cmd file or the /etc/vfstab file, set the following options:
stripe=width – Specifies the required stripe width for devices in the file system. The required stripe width is a multiple of the file system's disk allocation unit (DAU). width must be an integer that is greater than or equal to 1.
sync_meta=1
mh_write
qwrite
forcedirectio
nstreams=1024
rdlease=300 – Set this value for optimum performance.
wrlease=300 – Set this value for optimum performance.
aplease=300 – Set this value for optimum performance.
Ensure that settings in the /etc/vfstab file do not conflict with settings in the /etc/opt/SUNWsamfs/samfs.cmd file. Settings in the /etc/vfstab file override settings in the /etc/opt/SUNWsamfs/samfs.cmd file.
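For example, a /etc/opt/SUNWsamfs/samfs.cmd fragment for such a file system might look as follows. The file system name oradata and the stripe value are hypothetical; the remaining option values are those listed above.

```
# Hypothetical samfs.cmd entry for an assumed shared file system named
# "oradata" that holds data files, control files, and redo log files.
fs = oradata
  stripe = 1
  sync_meta = 1
  mh_write
  qwrite
  forcedirectio
  nstreams = 1024
  rdlease = 300
  wrlease = 300
  aplease = 300
```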
Register and configure the data service for the Sun StorEdge QFS metadata server.
For detailed instructions, see Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide.
Ensure that all other storage management schemes that you are using for Oracle files are installed.
After all storage management schemes that you are using for Oracle files are installed, go to Preparing the Sun Cluster Nodes.
Run the scdidadm(1M) command to find the raw device identity (DID) that corresponds to shared disks that are available in the cluster.
The following example lists output from the scdidadm -L command.
```
# scdidadm -L
1    phys-schost-1:/dev/rdsk/c0t2d0   /dev/did/rdsk/d1
1    phys-schost-2:/dev/rdsk/c0t2d0   /dev/did/rdsk/d1
2    phys-schost-1:/dev/rdsk/c0t3d0   /dev/did/rdsk/d2
2    phys-schost-2:/dev/rdsk/c0t3d0   /dev/did/rdsk/d2
```
Use the DID that the scdidadm output identifies to set up the disk in the ASM disk group.
For example, the scdidadm output might identify that the raw DID that corresponds to the disk is d2. In this instance, use the /dev/did/rdsk/d2sN raw device, where N is the slice number.
Modify the ASM_DISKSTRING parameter to specify the devices that you are using for the ASM disk group.
For example, to use DID devices for the ASM disk group, add the value /dev/did/rdsk/d* to the ASM_DISKSTRING parameter as follows:

ASM_DISKSTRING = '/dev/did/rdsk/d*'
For more information, see your Oracle documentation.
Ensure that all other storage management schemes that you are using for Oracle files are installed.
After all storage management schemes that you are using for Oracle files are installed, go to Preparing the Sun Cluster Nodes.
Create and mount the cluster file system.
See Configuring the Cluster in Sun Cluster Software Installation Guide for Solaris OS for information about how to create and mount the cluster file system.
If you are using the UNIX file system (UFS), ensure that you specify the correct mount options for various types of Oracle files.
For the correct options, see the table that follows. You set these options when you add an entry to the /etc/vfstab file for the mount point.
File Type | Options
---|---
RDBMS data files, log files, and control files | global, logging, forcedirectio
Oracle application binary files, configuration files, alert files, and trace files | global, logging
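As an illustration, an /etc/vfstab entry for a cluster file system that holds RDBMS data files might look like the following. The Solaris Volume Manager device names and the mount point are hypothetical; substitute the devices and mount point for your configuration.

```
/dev/md/oracle/dsk/d100  /dev/md/oracle/rdsk/d100  /global/oracle-data  ufs  2  yes  global,logging,forcedirectio
```

The forcedirectio option applies only to file systems that hold data files, log files, and control files; omit it for file systems that hold Oracle binaries and configuration files.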
Ensure that all other storage management schemes that you are using for Oracle files are installed.
After all storage management schemes that you are using for Oracle files are installed, go to Preparing the Sun Cluster Nodes.
Preparing the Sun Cluster nodes modifies the configuration of the operating system to enable Oracle Real Application Clusters to run on Sun Cluster nodes. Preparing the Sun Cluster nodes and disks involves the following tasks:
Bypassing the NIS name service
Creating the database administrator group and the Oracle user account
Configuring shared memory for the Oracle Real Application Clusters software
Perform these tasks on all nodes where Sun Cluster Support for Oracle Real Application Clusters can run. If you do not perform these tasks on all nodes, the Oracle installation is incomplete. An incomplete Oracle installation causes Sun Cluster Support for Oracle Real Application Clusters to fail during startup.
Bypassing the NIS name service protects the Sun Cluster Support for Oracle Real Application Clusters data service against a failure of a cluster node's public network. A failure of a cluster node's public network might cause the NIS name service to become unavailable. If Sun Cluster Support for Oracle Real Application Clusters refers to the NIS name service, unavailability of the name service might cause the Sun Cluster Support for Oracle Real Application Clusters data service to fail.
Bypassing the NIS name service ensures that the Sun Cluster Support for Oracle Real Application Clusters data service does not refer to the NIS name service when the data service sets the user identifier (ID). The Sun Cluster Support for Oracle Real Application Clusters data service sets the user ID when the data service starts or stops the database.
Become superuser on all nodes where Sun Cluster Support for Oracle Real Application Clusters can run.
On each node, include the following entries in the /etc/nsswitch.conf file.
passwd:    files nis [TRYAGAIN=0]
publickey: files nis [TRYAGAIN=0]
project:   files nis [TRYAGAIN=0]
group:     files
For more information about the /etc/nsswitch.conf file, see the nsswitch.conf(4) man page.
Go to How to Create the Database Administrator Group and the Oracle User Account.
Perform the following steps as superuser on each cluster node.
On each node, create an entry for the database administrator group in the /etc/group file, and add potential users to the group.
This group normally is named dba. Verify that root and oracle are members of the dba group, and add entries as necessary for other database administrator (DBA) users. Verify that the group IDs are the same on all of the nodes that run Sun Cluster Support for Oracle Real Application Clusters. For example, add the following entry to the /etc/group file.
dba:*:520:root,oracle
You can create the name service entries in a network name service, such as the Network Information Service (NIS) or NIS+, so that the information is available to the data service clients. You can also create entries in the local /etc files to eliminate dependency on the network name service.
On each node, create the Oracle user account, which adds an entry to the /etc/passwd file, and run the pwconv(1M) command to create an entry in the /etc/shadow file.
This Oracle user ID is normally oracle. For example, run the following useradd(1M) command, which creates the oracle user with the dba group as its primary group.
# useradd -u 120 -g dba -d /oracle-home oracle
Ensure that the user IDs are the same on all of the nodes that run Sun Cluster Support for Oracle Real Application Clusters.
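A quick consistency check is to compare the numeric UID that each node reports for the oracle user. The sketch below uses sample values; on a live cluster you would collect them with a command such as `ssh node 'id -u oracle'`.

```shell
# Compare the numeric UID for the oracle user as reported by two nodes.
# Sample values stand in for: ssh <node> 'id -u oracle'
uid_node1=120
uid_node2=120

if [ "$uid_node1" = "$uid_node2" ]; then
    echo "oracle UID consistent: $uid_node1"
else
    echo "oracle UID mismatch: $uid_node1 vs $uid_node2" >&2
fi
```

Repeat the same comparison for the dba group ID with `getent group dba` on each node.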
Go to How to Configure Shared Memory for the Oracle Real Application Clusters Software.
To enable the Oracle Real Application Clusters software to run correctly, you must ensure that sufficient shared memory is available on all of the cluster nodes. Perform this task on each cluster node.
Become superuser on a cluster node.
Update the /etc/system file with the shared memory configuration information.
You must configure these parameters on the basis of the resources that are available in the cluster. However, the value of each parameter must be sufficient to enable the Oracle Real Application Clusters software to create a shared memory segment that conforms to its configuration requirements. For the minimum required value of each parameter, see your Oracle documentation.
The following example shows entries to configure in the /etc/system file.
* SHARED MEMORY/ORACLE
set shmsys:shminfo_shmmax=4294967295
set semsys:seminfo_semmap=1024
set semsys:seminfo_semmni=2048
set semsys:seminfo_semmns=2048
set semsys:seminfo_semmsl=2048
set semsys:seminfo_semmnu=2048
set semsys:seminfo_semume=200
set shmsys:shminfo_shmmin=200
set shmsys:shminfo_shmmni=200
set shmsys:shminfo_shmseg=200
set semsys:seminfo_semvmx=32767
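Before rebooting, it can be useful to confirm that each required tunable actually made it into /etc/system. The following sketch checks a sample copy of the file so that it is self-contained; on a node, point sysfile at /etc/system itself and extend the tunable list to match your configuration.

```shell
# Check that each expected shared-memory and semaphore tunable appears
# in the file. A sample file stands in for /etc/system here.
sysfile=/tmp/system.sample
cat > "$sysfile" <<'EOF'
set shmsys:shminfo_shmmax=4294967295
set semsys:seminfo_semmni=2048
set semsys:seminfo_semmns=2048
EOF

missing=0
for tunable in shminfo_shmmax seminfo_semmni seminfo_semmns; do
    grep -q "set .*:${tunable}=" "$sysfile" || { echo "missing: $tunable"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all tunables present"
```

A tunable that is absent from /etc/system silently falls back to its system default, which may be below Oracle's minimum requirement, so a check like this catches typos before the reboot.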
Shut down and reboot each node whose /etc/system file you updated in Step 2.
Before you reboot, you must ensure that you have correctly installed your volume manager packages. If you use VxVM, check that you have installed the software and that the license for the VxVM cluster feature is valid. Otherwise, a panic will occur. For information about how to recover from a node panic during installation, see Node Panic During Initialization of Sun Cluster Support for Oracle Real Application Clusters.
For detailed instructions, see Shutting Down and Booting a Single Cluster Node in Sun Cluster System Administration Guide for Solaris OS.
Go to Installing Sun Cluster Support for Oracle Real Application Clusters Packages.
If you did not install the Sun Cluster Support for Oracle Real Application Clusters packages during your initial Sun Cluster installation, perform this procedure to install the packages. Perform this procedure on all of the cluster nodes that can run Sun Cluster Support for Oracle Real Application Clusters. To complete this procedure, you need the Sun Cluster Agents CD-ROM.
The Sun Cluster Support for Oracle Real Application Clusters packages are as follows:
Packages for the RAC framework
Packages for the storage management scheme that you are using for the Oracle Real Application Clusters database
If you are using Solaris 10, install these packages only in the global zone. Also ensure that these packages are not propagated to any local zones that are created after you install the packages.
Install the Sun Cluster Support for Oracle Real Application Clusters packages by using the pkgadd utility.
The scinstall(1M) utility does not support automatic installation of the packages for this data service.
Load the Sun Cluster Agents CD-ROM into the CD-ROM drive.
Become superuser.
Change the current working directory to the directory that contains the packages for the RAC framework.
# cd /cdrom/cdrom0/components/SunCluster_Oracle_RAC_FRAMEWORK_3.1/Solaris_N/Packages
N is the version number of the Solaris OS that you are using. For example, if you are using Solaris 10, N is 10.
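The N component of the path can be derived from the running release rather than typed by hand. A sketch, assuming the Solaris release string has the form 5.N as reported by uname -r (the value is hardcoded here so the example is self-contained):

```shell
# Derive the Solaris_N package directory from the OS minor release.
# On Solaris: solaris_version=$(uname -r | cut -d. -f2)
solaris_version=10
pkgdir="/cdrom/cdrom0/components/SunCluster_Oracle_RAC_FRAMEWORK_3.1/Solaris_${solaris_version}/Packages"
echo "$pkgdir"
```

The same substitution applies to the storage management package directories shown later in this procedure.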
On each cluster node that can run Sun Cluster Support for Oracle Real Application Clusters, start the pkgadd utility.
Change the current working directory to the directory that contains the packages that the combination of storage management schemes requires.
If you are using Solaris Volume Manager for Sun Cluster, run the following command:
# cd /cdrom/cdrom0/components/SunCluster_Oracle_RAC_SVM_3.1/Solaris_N/Packages
N is the version number of the Solaris OS that you are using. For example, if you are using Solaris 10, N is 10.
If you are using VxVM with the cluster feature, run the following command:
# cd /cdrom/cdrom0/components/SunCluster_Oracle_RAC_CVM_3.1/Solaris_N/Packages
N is the version number of the Solaris OS that you are using. For example, if you are using Solaris 10, N is 10.
If you are using hardware RAID, Sun StorEdge QFS, Network Appliance NAS devices, or ASM without a volume manager, run the following command:
# cd /cdrom/cdrom0/components/SunCluster_Oracle_RAC_HWRAID_3.1/Solaris_N/Packages
N is the version number of the Solaris OS that you are using. For example, if you are using Solaris 10, N is 10.
On each cluster node that can run Sun Cluster Support for Oracle Real Application Clusters, start the pkgadd utility.
The next step depends on the platform that you are using, as shown in the following table.
Platform | Next Step
---|---
SPARC | SPARC: Install the UNIX Distributed Lock Manager
x86 | Registering and Configuring the RAC Framework Resource Group
For detailed instructions for installing the Oracle UDLM, see the Oracle Real Application Clusters documentation.
Before you install the Oracle UDLM, ensure that you have created entries for the database administrator group and the Oracle user ID. See How to Create the Database Administrator Group and the Oracle User Account for details.
You must install the Oracle UDLM software on the local disk of each node.
Become superuser on a cluster node.
Install the Oracle UDLM software.
See the appropriate Oracle Real Application Clusters installation documentation for instructions.
Ensure that you did not receive any error messages when you installed the Oracle UDLM packages. If an error occurred during package installation, correct the problem before you install the Oracle UDLM software.
Go to Registering and Configuring the RAC Framework Resource Group.