This chapter describes the steps to install Sun Cluster Support for Oracle Parallel Server/Real Application Clusters on your Sun Cluster nodes. This chapter contains the following procedures.
How to Create a Node-Specific Directory for the Cluster File System
How to Create a Node-Specific File for the Cluster File System
How to Install Sun Cluster Support for Oracle Parallel Server/Real Application Clusters Packages
The following table summarizes the installation tasks and provides cross-references to detailed instructions for performing the tasks.
Table 1–1 Task Map: Installing Sun Cluster Support for Oracle Parallel Server/Real Application Clusters
| Task | Cross-Reference |
|---|---|
| Understand preinstallation considerations and special requirements | |
| (Optional) Install volume management software | |
| (Optional) Create node-specific files and directories that the Sun Cluster Support for Oracle Parallel Server/Real Application Clusters software requires | Creating Node-Specific Files and Directories for the Cluster File System |
| Install data service packages | Installing Sun Cluster Support for Oracle Parallel Server/Real Application Clusters Packages |
| Install the UNIX Distributed Lock Manager | Installing the Oracle UDLM |
| (Optional) Create a shared-disk group for the Oracle Parallel Server/Real Application Clusters database | Creating a VxVM Shared-Disk Group for the Oracle Parallel Server/Real Application Clusters Database |
Oracle Parallel Server/Real Application Clusters is a scalable application that can run on more than one node concurrently. Before you install Sun Cluster Support for Oracle Parallel Server/Real Application Clusters, consider the points that are listed in the subsections that follow.
Sun Cluster Support for Oracle Parallel Server/Real Application Clusters is an atypical Sun Cluster high-availability data service. This data service is a set of packages that, when installed, enables Oracle Parallel Server/Real Application Clusters to run on Sun Cluster nodes. This data service also enables Sun Cluster Support for Oracle Parallel Server/Real Application Clusters to be managed by using Sun Cluster commands.
This data service does not provide automatic failover or fault monitoring because the Oracle Parallel Server/Real Application Clusters software already provides this functionality. The Oracle Parallel Server/Real Application Clusters software is not registered with or managed by the Sun Cluster Resource Group Manager (RGM).
You can configure Oracle Parallel Server/Real Application Clusters to use the shared-disk architecture of the Sun Cluster software. In this configuration, a single database is shared among multiple instances of the Oracle Parallel Server/Real Application Clusters software that access the database concurrently. The UNIX Distributed Lock Manager (Oracle UDLM) controls access to shared resources between cluster nodes.
Before you begin the installation, note the hardware and software requirements in the subsections that follow.
Sun Cluster Support for Oracle Parallel Server/Real Application Clusters requires a functioning cluster with the initial cluster framework already installed. See the Sun Cluster Software Installation Guide for Solaris OS for details about initial installation of cluster software.
Decide which storage management scheme to use:
VERITAS Volume Manager (VxVM) with the cluster feature
Hardware redundant array of independent disks (RAID) support
The cluster file system
If you use the cluster file system, decide which volume manager to use:
Solaris Volume Manager
VxVM without the cluster feature
Verify that you have obtained and installed the appropriate licenses for your software. If you install your licenses incorrectly or incompletely, the nodes might abort.
For example, if you use VxVM with the cluster feature, verify that you have installed a valid license for the Volume Manager cluster feature by running one of the following commands:
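Both commands are described later in this chapter; either one reports the VxVM licenses that are installed on the node:

```
# vxlicense -p
# vxlicrep
```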
Check with a Sun Enterprise Services representative for the current supported topologies for Sun Cluster Support for Oracle Parallel Server/Real Application Clusters, cluster interconnect, storage management scheme, and hardware configurations.
Ensure that you have installed all of the applicable software patches for the Solaris operating environment, Sun Cluster, Oracle, and your volume manager. If you need to install any Sun Cluster Support for Oracle Parallel Server/Real Application Clusters patches, you must apply these patches after you install the data service.
You can install the application binary files and application configuration files in one of the following locations.
The local disks of each cluster node. Placing the application binary files and application configuration files on the individual cluster nodes enables you to upgrade the application later without shutting down the data service.
The disadvantage is that you then have several copies of the application binary files and application configuration files to maintain and administer.
The cluster file system. If you put the application binary files and application configuration files on the cluster file system, you have only one copy to maintain and manage. However, you must shut down the data service in the entire cluster to upgrade the application. If a small amount of downtime for upgrades is acceptable, place a single copy of the application binary files and application configuration files on the cluster file system.
You can store only the following files that are associated with Oracle Parallel Server/Real Application Clusters on the cluster file system:
Application binary files
Configuration files (for example, init.ora, tnsnames.ora, listener.ora, and sqlnet.ora)
Archived redo log files
Alert files (for example, alert_sid.log)
Trace files (*.trc)
You must not store data files, control files, or online redo log files on the cluster file system.
The input/output (I/O) performance during the writing of archived redo log files is affected by the location of the device group for archived redo log files. For optimum performance, ensure that the primary of the device group for archived redo log files is located on the same node as the Oracle Parallel Server/Real Application Clusters database instance. This device group contains the cluster file system that holds archived redo log files of the database instance.
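For example, assuming a hypothetical device group named oracle-dg that contains the cluster file system for archived redo log files, the following command switches the primary of that device group to the node phys-schost-1, on which the database instance runs. The device group name and node name are illustrative only:

```
# scswitch -z -D oracle-dg -h phys-schost-1
```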
See the planning chapter of the Sun Cluster Software Installation Guide for Solaris OS for information about how to create cluster file systems.
This section lists special requirements for Sun Cluster Support for Oracle Parallel Server/Real Application Clusters.
Before you decide which architecture to use for the Oracle UDLM and the Oracle relational database management system (RDBMS), note the following points.
The architecture of both Oracle components must match. For example, if you have 64-bit architecture for your Oracle UDLM, you must have 64-bit architecture for your RDBMS.
If you have 32-bit architecture for your Oracle components, you can boot the node on which the components reside in either 32-bit mode or 64-bit mode. However, if you have 64-bit architecture for your Oracle components, you must boot the node on which the components reside in 64-bit mode.
You must use the same architecture when you boot all of the nodes. For example, if you boot one node to use 32-bit architecture, you must boot all of the nodes to use 32-bit architecture.
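For example, on SPARC based systems you can check the running kernel's architecture with the isainfo(1) command, and you can choose the boot mode explicitly from the OpenBoot PROM prompt. This is a sketch; the kernel paths may differ on your hardware:

```
# isainfo -kv
ok boot kernel/unix            (boots the node in 32-bit mode)
ok boot kernel/sparcv9/unix    (boots the node in 64-bit mode)
```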
The following list shows the locations of the data service log files.
Current log: /var/cluster/ucmm/ucmm_reconf.log
Previous logs: /var/cluster/ucmm/ucmm_reconf.log.N, where N is 0, 1, and so on
Oracle UDLM logs: /var/cluster/ucmm/dlm_nodename/logs – This location depends on the Oracle UDLM package. If you cannot find the Oracle UDLM log files at this location, contact Oracle support.
Oracle UDLM core files: /var/cluster/ucmm/dlm_nodename/cores – This location depends on the Oracle UDLM package. If you cannot find the Oracle UDLM core files at this location, contact Oracle support.
In an Oracle Parallel Server/Real Application Clusters environment, multiple Oracle instances cooperate to provide access to the same shared database. The Oracle clients can use any of the instances to access the database. Thus, if one or more instances have failed, clients can connect to a surviving instance and continue to access the database.
If a node fails, boot the node into maintenance mode to correct the problem. After you have corrected the problem, reboot the node. See the Sun Cluster System Administration Guide for Solaris OS for more information.
When you install this data service, ensure that you complete all steps of all procedures that precede installing the Oracle RDBMS software and creating your Oracle database before you reboot the nodes. Otherwise, the nodes will panic. If the nodes panic, you must boot into maintenance mode to correct the problem. After you have corrected the problem, you must reboot the nodes. The procedures that you must complete are listed in Table 2–1.
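One common way to boot a SPARC based node outside the cluster for repair is to boot it in noncluster mode from the OpenBoot PROM prompt. This is an assumption about your repair procedure; see the guide referenced above for the procedure that applies to your configuration:

```
ok boot -x
```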
If a cluster node that is running an instance of Oracle Parallel Server/Real Application Clusters fails, an operation that a client application attempted might be required to time out before the operation is attempted again on another instance. If the Transmission Control Protocol/Internet Protocol (TCP/IP) network timeout is high, the client application might require a significant length of time to detect the failure. Typically, client applications require between three and nine minutes to detect such failures.
In such situations, client applications can use the Sun Cluster LogicalHostname resource for connecting to an Oracle Parallel Server/Real Application Clusters database that is running on Sun Cluster. You can configure the LogicalHostname resource in a separate resource group that is mastered on the nodes on which Oracle Parallel Server/Real Application Clusters is running. If a node fails, the LogicalHostname resource fails over to another surviving node on which Oracle Parallel Server/Real Application Clusters is running. The failover of the LogicalHostname resource enables new connections to be directed to the other instance of Oracle Parallel Server/Real Application Clusters.
Before using the LogicalHostname resource for this purpose, consider the effect on existing user connections of failover or failback of the LogicalHostname resource.
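The following sketch shows how such a resource group might be created with Sun Cluster 3.1 commands. The resource group name lh-rg, the logical hostname oracle-lh, and the node list are hypothetical; oracle-lh must be a hostname that resolves on your network:

```
# scrgadm -a -g lh-rg -h phys-schost-1,phys-schost-2
# scrgadm -a -L -g lh-rg -l oracle-lh
# scswitch -Z -g lh-rg
```

After the resource group is online, clients connect to the oracle-lh address instead of a physical node address, so a node failure moves the address rather than invalidating the clients' connect descriptors.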
For information about the installation, administration, and operation of the Oracle Parallel Fail Safe/Real Application Clusters Guard option, see the Oracle documentation. If you plan to use this product option with Sun Cluster 3.1, note the points in the subsections that follow before you install Sun Cluster 3.1.
If you use the Oracle Parallel Fail Safe/Real Application Clusters Guard option with Sun Cluster 3.1, the following restrictions apply to hostnames that you use in your cluster:
Hostnames cannot contain special characters.
You cannot change the hostnames after you install Sun Cluster 3.1.
For more information about these restrictions and any other requirements, see the Oracle documentation.
If you use the Oracle Parallel Fail Safe/Real Application Clusters Guard option with Sun Cluster 3.1, do not use Sun Cluster commands to perform the following operations:
Manipulating the state of resources that Oracle Parallel Fail Safe/Real Application Clusters Guard installs. Using Sun Cluster commands for this purpose might cause failures.
Querying the state of the resources that Oracle Parallel Fail Safe/Real Application Clusters Guard installs. This state might not reflect the actual state. To check the state of the Oracle Parallel Fail Safe/Real Application Clusters Guard, use the commands that Oracle supplies.
For Sun Cluster Support for Oracle Parallel Server/Real Application Clusters, use one of the following disk configurations.
VxVM with the cluster feature
Hardware RAID support
The cluster file system
To use the VxVM software with Sun Cluster Support for Oracle Parallel Server/Real Application Clusters, perform the following tasks.
(Optional) If you are using VxVM with the cluster feature, obtain a license for the Volume Manager cluster feature in addition to the basic VxVM license.
See your VxVM documentation for more information about VxVM licensing requirements.
Failure to correctly install the license for the Volume Manager cluster feature might cause a panic when you install Oracle Parallel Server/Real Application Clusters support. Before you install the Oracle Parallel Server/Real Application Clusters packages, run the vxlicense -p or vxlicrep command to ensure that you have installed a valid license for the Volume Manager cluster feature.
Install and configure the VxVM software on the cluster nodes.
See “Installing and Configuring VERITAS Volume Manager” in Sun Cluster Software Installation Guide for Solaris OS and the VxVM documentation for more information.
Go to Installing Sun Cluster Support for Oracle Parallel Server/Real Application Clusters Packages to install the Sun Cluster Support for Oracle Parallel Server/Real Application Clusters software packages.
You can use Sun Cluster Support for Oracle Parallel Server/Real Application Clusters with hardware RAID support.
For example, you can use Sun StorEdge™ A3500/A3500FC disk arrays with hardware RAID support and without VxVM software. To use this combination, configure raw device identities (/dev/did/rdsk*) on top of the disk arrays' logical unit numbers (LUNs). To set up the raw devices for Oracle Parallel Server/Real Application Clusters on a cluster that uses StorEdge A3500/A3500FC disk arrays with hardware RAID, perform the following steps.
Create LUNs on the disk arrays.
See the Sun Cluster hardware documentation for information about how to create LUNs.
After you create the LUNs, run the format(1M) command to partition the disk arrays' LUNs into as many slices as you need.
The following example lists output from the format command.
```
# format
0. c0t2d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
   /sbus@3,0/SUNW,fas@3,8800000/sd@2,0
1. c0t3d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
   /sbus@3,0/SUNW,fas@3,8800000/sd@3,0
2. c1t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@1/rdriver@5,0
3. c1t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@1/rdriver@5,1
4. c2t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@2/rdriver@5,0
5. c2t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@2/rdriver@5,1
6. c3t4d2 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
   /pseudo/rdnexus@3/rdriver@4,2
```
To prevent a loss of disk partition information, do not start the partition at cylinder 0 for any disk slice that is used for raw data. The disk partition table is stored in cylinder 0 of the disk.
Run the scdidadm(1M) command to find the raw device identity (DID) that corresponds to the LUNs that you created in Step 1.
The following example lists output from the scdidadm -L command.
```
# scdidadm -L
1    phys-schost-1:/dev/rdsk/c0t2d0    /dev/did/rdsk/d1
1    phys-schost-2:/dev/rdsk/c0t2d0    /dev/did/rdsk/d1
2    phys-schost-1:/dev/rdsk/c0t3d0    /dev/did/rdsk/d2
2    phys-schost-2:/dev/rdsk/c0t3d0    /dev/did/rdsk/d2
3    phys-schost-2:/dev/rdsk/c4t4d0    /dev/did/rdsk/d3
3    phys-schost-1:/dev/rdsk/c1t5d0    /dev/did/rdsk/d3
4    phys-schost-2:/dev/rdsk/c3t5d0    /dev/did/rdsk/d4
4    phys-schost-1:/dev/rdsk/c2t5d0    /dev/did/rdsk/d4
5    phys-schost-2:/dev/rdsk/c4t4d1    /dev/did/rdsk/d5
5    phys-schost-1:/dev/rdsk/c1t5d1    /dev/did/rdsk/d5
6    phys-schost-2:/dev/rdsk/c3t5d1    /dev/did/rdsk/d6
6    phys-schost-1:/dev/rdsk/c2t5d1    /dev/did/rdsk/d6
```
Use the DID that the scdidadm output identifies to set up the raw devices.
For example, the scdidadm output might identify that the raw DID that corresponds to the disk arrays' LUNs is d4. In this instance, use the /dev/did/rdsk/d4sN raw device, where N is the slice number.
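As a sketch of what setting up such a raw device might involve, the following commands grant the Oracle user access to slice 1 of DID device d4. The slice number and the ownership settings are assumptions; apply the ownership and permissions that your Oracle installation requires:

```
# chown oracle:dba /dev/did/rdsk/d4s1
# chmod 600 /dev/did/rdsk/d4s1
```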
Go to Installing Sun Cluster Support for Oracle Parallel Server/Real Application Clusters Packages to install the Sun Cluster Support for Oracle Parallel Server/Real Application Clusters software packages.
Create and mount the cluster file system.
See “Configuring the Cluster” in Sun Cluster Software Installation Guide for Solaris OS for information about how to create and mount the cluster file system.
When you add an entry to the /etc/vfstab file for the mount point, set UNIX file system (UFS) file-system-specific options for various types of Oracle files.
See the following table.
| File Type | Options |
|---|---|
| Archived redo log files | global, logging, forcedirectio |
| Oracle application binary files, configuration files, alert files, and trace files | global, logging |
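For example, a /etc/vfstab entry for a cluster file system that holds archived redo log files might look like the following. The Solaris Volume Manager device names and the mount point are hypothetical; substitute the devices and mount point of your cluster file system:

```
/dev/md/oracle/dsk/d100  /dev/md/oracle/rdsk/d100  /global/oracle/archlogs  ufs  2  yes  global,logging,forcedirectio
```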
Go to Creating Node-Specific Files and Directories for the Cluster File System to create node-specific files and directories that the Sun Cluster Support for Oracle Parallel Server/Real Application Clusters software requires.
When Oracle software is installed on the cluster file system, all the files in the directory that the ORACLE_HOME environment variable specifies are accessible by all cluster nodes.
An installation might require that some Oracle files or directories maintain node-specific information. You can satisfy this requirement by using a symbolic link whose target is a file or a directory on a file system that is local to a node. Such a file system is not part of the cluster file system.
To use a symbolic link for this purpose, you must allocate an area on a local file system. To enable Oracle applications to create symbolic links to files in this area, the applications must be able to access files in this area. Because the symbolic links reside on the cluster file system, all references to the links from all nodes are the same. Therefore, all nodes must have the same namespace for the area on the local file system.
Perform this procedure for each directory that is to maintain node-specific information. The following directories are typically required to maintain node-specific information:
$ORACLE_HOME/network/agent
$ORACLE_HOME/network/log
$ORACLE_HOME/network/trace
$ORACLE_HOME/srvm/log
$ORACLE_HOME/apache
For information about other directories that might be required to maintain node-specific information, see your Oracle documentation.
On each cluster node, create the local directory that is to maintain node-specific information.
Ensure that the local directory structure that you create matches the global directory structure that contains the node-specific information. For example, the global directory /global/oracle/network/agent might contain node-specific information that you require to be stored locally under the /local directory. In this situation, you would create a directory that is named /local/oracle/network/agent.
```
# mkdir -p local-dir
```

-p – Specifies that all nonexistent parent directories are created first

local-dir – Specifies the full path name of the directory that you are creating
On each cluster node, make a local copy of the global directory that is to maintain node-specific information.
Ensure that the local copy of the node-specific information is contained in the local directory that you created in Step 1.
```
# cp -pr global-dir local-dir-parent
```

-p – Specifies that the owner, group, permissions modes, modification time, access time, and access control lists are preserved.

-r – Specifies that the directory and all its files, including any subdirectories and their files, are copied.

global-dir – Specifies the full path of the global directory that you are copying. This directory resides on the cluster file system under the directory that the ORACLE_HOME environment variable specifies.

local-dir-parent – Specifies the directory on the local node that is to contain the local copy. This directory is the parent directory of the directory that you created in Step 1.
Replace the global directory that you copied in Step 2 with a symbolic link to the local copy of the global directory.
From any cluster node, remove the global directory that you copied in Step 2.
```
# rm -r global-dir
```

-r – Specifies that the directory and all its files, including any subdirectories and their files, are removed.

global-dir – Specifies the file name and full path of the global directory that you are removing. This directory is the global directory that you copied in Step 2.
From any cluster node, create a symbolic link from the local copy of the directory to the global directory that you removed in Step a.
```
# ln -s local-dir global-dir
```
This example shows the sequence of operations that is required to create node-specific directories on a two-node cluster. This cluster is configured as follows:
The ORACLE_HOME environment variable specifies the /global/oracle directory.
The local file system on each node is located under the /local directory.
The following operations are performed on each node:
To create the required directories on the local file system, the following commands are run:
```
# mkdir -p /local/oracle/network/agent
# mkdir -p /local/oracle/network/log
# mkdir -p /local/oracle/network/trace
# mkdir -p /local/oracle/srvm/log
# mkdir -p /local/oracle/apache
```
To make local copies of the global directories that are to maintain node-specific information, the following commands are run:
```
# cp -pr $ORACLE_HOME/network/agent /local/oracle/network/.
# cp -pr $ORACLE_HOME/network/log /local/oracle/network/.
# cp -pr $ORACLE_HOME/network/trace /local/oracle/network/.
# cp -pr $ORACLE_HOME/srvm/log /local/oracle/srvm/.
# cp -pr $ORACLE_HOME/apache /local/oracle/.
```
The following operations are performed on only one node:
To remove the global directories, the following commands are run:
```
# rm -r $ORACLE_HOME/network/agent
# rm -r $ORACLE_HOME/network/log
# rm -r $ORACLE_HOME/network/trace
# rm -r $ORACLE_HOME/srvm/log
# rm -r $ORACLE_HOME/apache
```
To create symbolic links from the local directories to their corresponding global directories, the following commands are run:
```
# ln -s /local/oracle/network/agent $ORACLE_HOME/network/agent
# ln -s /local/oracle/network/log $ORACLE_HOME/network/log
# ln -s /local/oracle/network/trace $ORACLE_HOME/network/trace
# ln -s /local/oracle/srvm/log $ORACLE_HOME/srvm/log
# ln -s /local/oracle/apache $ORACLE_HOME/apache
```
Perform this procedure for each file that is to maintain node-specific information. The following files are typically required to maintain node-specific information:
$ORACLE_HOME/network/admin/snmp_ro.ora
$ORACLE_HOME/network/admin/snmp_rw.ora
For information about other files that might be required to maintain node-specific information, see your Oracle documentation.
On each cluster node, create the local directory that will contain the file that is to maintain node-specific information.
```
# mkdir -p local-dir
```

-p – Specifies that all nonexistent parent directories are created first

local-dir – Specifies the full path name of the directory that you are creating
On each cluster node, make a local copy of the global file that is to maintain node-specific information.
```
# cp -p global-file local-dir
```

-p – Specifies that the owner, group, permissions modes, modification time, access time, and access control lists are preserved.

global-file – Specifies the file name and full path of the global file that you are copying. This file was installed on the cluster file system under the directory that the ORACLE_HOME environment variable specifies.

local-dir – Specifies the directory that is to contain the local copy of the file. This directory is the directory that you created in Step 1.
Replace the global file that you copied in Step 2 with a symbolic link to the local copy of the file.
From any cluster node, remove the global file that you copied in Step 2.
```
# rm global-file
```

global-file – Specifies the file name and full path of the global file that you are removing. This file is the global file that you copied in Step 2.
From any cluster node, create a symbolic link from the local copy of the file to the global file that you removed in Step a.
```
# ln -s local-file global-file
```
This example shows the sequence of operations that is required to create node-specific files on a two-node cluster. This cluster is configured as follows:
The ORACLE_HOME environment variable specifies the /global/oracle directory.
The local file system on each node is located under the /local directory.
The following operations are performed on each node:
To create the local directory that will contain the files that are to maintain node-specific information, the following command is run:
```
# mkdir -p /local/oracle/network/admin
```
To make a local copy of the global files that are to maintain node-specific information, the following commands are run:
```
# cp -p $ORACLE_HOME/network/admin/snmp_ro.ora \
/local/oracle/network/admin/.
# cp -p $ORACLE_HOME/network/admin/snmp_rw.ora \
/local/oracle/network/admin/.
```
The following operations are performed on only one node:
To remove the global files, the following commands are run:
```
# rm $ORACLE_HOME/network/admin/snmp_ro.ora
# rm $ORACLE_HOME/network/admin/snmp_rw.ora
```
To create symbolic links from the local copies of the files to their corresponding global files, the following commands are run:
```
# ln -s /local/oracle/network/admin/snmp_ro.ora \
$ORACLE_HOME/network/admin/snmp_ro.ora
# ln -s /local/oracle/network/admin/snmp_rw.ora \
$ORACLE_HOME/network/admin/snmp_rw.ora
```
Go to Installing Sun Cluster Support for Oracle Parallel Server/Real Application Clusters Packages to install the Sun Cluster Support for Oracle Parallel Server/Real Application Clusters software packages.
If you did not install the Sun Cluster Support for Oracle Parallel Server/Real Application Clusters packages during your initial Sun Cluster installation, perform this procedure to install the packages. Perform this procedure on all of the cluster nodes that can run Sun Cluster Support for Oracle Parallel Server/Real Application Clusters. To complete this procedure, you need the Sun Java Enterprise System Accessory CD Volume 3.
Install the Sun Cluster Support for Oracle Parallel Server/Real Application Clusters packages by using the pkgadd utility.
Because of the preparation that is required before installation, the scinstall(1M) utility does not support automatic installation of the data service packages.
Load the Sun Java Enterprise System Accessory CD Volume 3 into the CD-ROM drive.
Become superuser.
Change the current working directory to the directory that contains the packages for the version of the Solaris operating environment that you are using.
If you are using Solaris 8, run the following command:

```
# cd /cdrom/cdrom0/components/SunCluster_Oracle_RAC/Solaris_8/Packages
```

If you are using Solaris 9, run the following command:

```
# cd /cdrom/cdrom0/components/SunCluster_Oracle_RAC/Solaris_9/Packages
```
On each cluster node that can run Sun Cluster Support for Oracle Parallel Server/Real Application Clusters, transfer the contents of the required software packages from the CD-ROM to the node.
The required software packages depend on the storage management scheme that you are using.
If you are using VxVM with the cluster feature, run the following command:

```
# pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr SUNWcvmr SUNWcvm
```

If you are using hardware RAID support, run the following command:

```
# pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr SUNWschwr
```

If you are using the cluster file system, run the following command:

```
# pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr
```
Before you reboot the nodes, you must ensure that you have correctly installed and configured the Oracle UDLM software. For more information, see Installing the Oracle UDLM. Also verify that you have correctly installed your volume manager packages. If you plan to use VxVM, check that you have installed the software and check that the license for the VxVM cluster feature is valid. Otherwise, a panic will occur.
Go to Installing the Oracle UDLM to install the Oracle UDLM.
Installing the Oracle UDLM involves the following tasks:
Preparing the nodes
Installing the Oracle UDLM software
For the Oracle UDLM software to run correctly, sufficient shared memory must be available on all of the cluster nodes. See the Oracle Parallel Server/Real Application Clusters CD-ROM for all of the installation instructions. To prepare the Sun Cluster nodes, check that you have completed the following tasks.
You have correctly set up the Oracle user account and database administration group.
You have configured the system to support the shared memory requirements of the Oracle UDLM.
Perform the following steps as superuser on each cluster node.
On each node, create an entry for the database administrator group in the /etc/group file, and add potential users to the group.
This group normally is named dba. Verify that root and oracle are members of the dba group, and add entries as necessary for other database administrator (DBA) users. Verify that the group IDs are the same on all of the nodes that run Sun Cluster Support for Oracle Parallel Server/Real Application Clusters. For example, add the following entry to the /etc/group file.
```
dba:*:520:root,oracle
```
You can make the name service entries in a network name service, such as the Network Information Service (NIS) or NIS+, so that the information is available to the data service clients. You can also make entries in the local /etc files to eliminate dependency on the network name service.
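For example, to make each node consult the local /etc files before NIS, the relevant entries in /etc/nsswitch.conf might read as follows. This is a sketch; adapt it to the name service that your site uses:

```
passwd: files nis
group:  files nis
```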
On each node, create an entry for the Oracle user ID (the group and password) in the /etc/passwd file, and run the pwconv(1M) command to create an entry in the /etc/shadow file.
This Oracle user ID is normally oracle. For example, run the following command to create the oracle user.

```
# useradd -u 120 -g dba -d /oracle-home oracle
```
Ensure that the user IDs are the same on all of the nodes that run Sun Cluster Support for Oracle Parallel Server/Real Application Clusters.
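For example, using the group and user entries shown above, you can confirm that the IDs match by running the id(1M) command on each node:

```
# id oracle
uid=120(oracle) gid=520(dba)
```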
After you set up the cluster environment for Oracle Parallel Server/Real Application Clusters, go to How to Install the Oracle UDLM Software to install the Oracle UDLM software on each cluster node.
You must install the Oracle UDLM software on the local disk of each node.
Before you install the Oracle UDLM software, ensure that you have created entries for the database administrator group and the Oracle user ID. See How to Prepare the Sun Cluster Nodes for details.
Become superuser on a cluster node.
Install the Oracle UDLM software.
See the appropriate Oracle Parallel Server/Real Application Clusters installation documentation for instructions.
Ensure that you did not receive any error messages when you installed the Oracle UDLM packages. If an error occurred during package installation, correct the problem before you install the Oracle UDLM software.
Update the /etc/system file with the shared memory configuration information.
You must configure these parameters on the basis of the resources that are available in the cluster. Decide the appropriate values, but ensure that the Oracle UDLM can create a shared memory segment that conforms to its configuration requirements.
The following example shows entries to configure in the /etc/system file.
```
* SHARED MEMORY/ORACLE
set shmsys:shminfo_shmmax=268435456
set semsys:seminfo_semmap=1024
set semsys:seminfo_semmni=2048
set semsys:seminfo_semmns=2048
set semsys:seminfo_semmsl=2048
set semsys:seminfo_semmnu=2048
set semsys:seminfo_semume=200
set shmsys:shminfo_shmmin=200
set shmsys:shminfo_shmmni=200
set shmsys:shminfo_shmseg=200
```
Shut down and reboot each node on which the Oracle UDLM software is installed.
Before you reboot, you must ensure that you have correctly installed and configured the Oracle UDLM software. Also verify that you have correctly installed your volume manager packages. If you use VxVM, check that you have installed the software and that the license for the VxVM cluster feature is valid. Otherwise, a panic will occur.
For detailed instructions, see “Shutting Down and Booting a Single Cluster Node” in Sun Cluster System Administration Guide for Solaris OS.
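A sketch of one way to reboot a single node with Sun Cluster 3.1 commands: evacuate all resource groups and device groups from the node, then reboot it. Replace nodename with the name of the node:

```
# scswitch -S -h nodename
# shutdown -g0 -y -i6
```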
After you have installed the Oracle UDLM software on each cluster node, the next step depends on your storage management scheme.
If you are using VxVM without the cluster file system, go to Creating a VxVM Shared-Disk Group for the Oracle Parallel Server/Real Application Clusters Database to create a shared-disk group for the Oracle Parallel Server/Real Application Clusters database.
Otherwise, go to Registering and Configuring Sun Cluster Support for Oracle Parallel Server/Real Application Clusters to register and configure Sun Cluster Support for Oracle Parallel Server/Real Application Clusters.
Perform this task only if you are using VxVM without the cluster file system.
If you are using VxVM without the cluster file system, VxVM requires a shared-disk group for the Oracle Parallel Server/Real Application Clusters database to use.
Before you create a VxVM shared-disk group for the Oracle Parallel Server/Real Application Clusters database, note the following points.
Do not register the shared-disk group as a cluster device group with the cluster.
Do not create any file systems in the shared-disk group because only raw data files use this disk group.
Disks that you add to the shared-disk group must be directly attached to all of the cluster nodes.
Ensure that your VxVM license is current. If your license expires, the node panics.
Use VERITAS commands that are provided for creating a VxVM shared-disk group.
For information about VxVM shared-disk groups, see your VxVM documentation.
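For example, a minimal sketch, assuming a hypothetical disk group named oracle-dg and hypothetical disk names: on the node that is the cluster feature (CVM) master, initialize a shared-disk group and create a raw volume for a database file. Size the volume according to your database layout:

```
# vxdg -s init oracle-dg c1t5d0 c2t5d0
# vxassist -g oracle-dg make oradata-vol 2g
```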
After you have created a shared-disk group for the Oracle Parallel Server/Real Application Clusters database, go to Registering and Configuring Sun Cluster Support for Oracle Parallel Server/Real Application Clusters to register and configure Sun Cluster Support for Oracle Parallel Server/Real Application Clusters.