Sun Cluster Data Service for Oracle Parallel Server/Real Application Clusters Guide for Solaris OS

Chapter 1 Installing Sun Cluster Support for Oracle Parallel Server/Real Application Clusters

This chapter describes the steps to install Sun Cluster Support for Oracle Parallel Server/Real Application Clusters on your Sun Cluster nodes. This chapter contains the following procedures.

Overview of the Installation Process for Sun Cluster Support for Oracle Parallel Server/Real Application Clusters

The following table summarizes the installation tasks and provides cross-references to detailed instructions for performing the tasks.

Table 1–1 Task Map: Installing Sun Cluster Support for Oracle Parallel Server/Real Application Clusters

Task: Understand preinstallation considerations and special requirements
Cross-Reference: Preinstallation Considerations; Special Requirements

Task: (Optional) Install volume management software
Cross-Reference: Installing Storage Management Software With Sun Cluster Support for Oracle Parallel Server/Real Application Clusters

Task: (Optional) Create node-specific files and directories that the Sun Cluster Support for Oracle Parallel Server/Real Application Clusters software requires
Cross-Reference: Creating Node-Specific Files and Directories for the Cluster File System

Task: Install data service packages
Cross-Reference: Installing Sun Cluster Support for Oracle Parallel Server/Real Application Clusters Packages

Task: Install the UNIX Distributed Lock Manager
Cross-Reference: Installing the Oracle UDLM

Task: (Optional) Create a shared-disk group for the Oracle Parallel Server/Real Application Clusters database
Cross-Reference: Creating a VxVM Shared-Disk Group for the Oracle Parallel Server/Real Application Clusters Database

Preinstallation Considerations

Oracle Parallel Server/Real Application Clusters is a scalable application that can run on more than one node concurrently. Before you install Sun Cluster Support for Oracle Parallel Server/Real Application Clusters, consider the points that are listed in the subsections that follow.

Atypical Features of This Data Service

Sun Cluster Support for Oracle Parallel Server/Real Application Clusters is an atypical Sun Cluster high-availability data service. This data service is a set of packages that, when installed, enables Oracle Parallel Server/Real Application Clusters to run on Sun Cluster nodes. This data service also enables Sun Cluster Support for Oracle Parallel Server/Real Application Clusters to be managed by using Sun Cluster commands.

This data service does not provide automatic failover or fault monitoring because the Oracle Parallel Server/Real Application Clusters software already provides this functionality. The Oracle Parallel Server/Real Application Clusters software is not registered with or managed by the Sun Cluster Resource Group Manager (RGM).

You can configure Oracle Parallel Server/Real Application Clusters to use the shared-disk architecture of the Sun Cluster software. In this configuration, a single database is shared among multiple instances of the Oracle Parallel Server/Real Application Clusters software that access the database concurrently. The UNIX Distributed Lock Manager (Oracle UDLM) controls access to shared resources between cluster nodes.

Hardware and Software Requirements

Before you begin the installation, note the hardware and software requirements in the subsections that follow.

Sun Cluster Framework Requirements

Sun Cluster Support for Oracle Parallel Server/Real Application Clusters requires a functioning cluster with the initial cluster framework already installed. See the Sun Cluster Software Installation Guide for Solaris OS for details about initial installation of cluster software.

Storage Management Requirements

Decide which storage management scheme to use:

Software License Requirements

Verify that you have obtained and installed the appropriate licenses for your software. If you install your licenses incorrectly or incompletely, the nodes might abort.

For example, if you use VxVM with the cluster feature, verify that you have installed a valid license for the Volume Manager cluster feature by running one of the following commands:
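
For example, either of the following commands, which are also referenced in the VxVM procedure later in this chapter, reports the VxVM licenses that are installed on a node. Which command applies depends on your VxVM release, so treat this as a quick check rather than a substitute for your VxVM documentation:

    # vxlicense -p
    # vxlicrep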

Supported Topology Requirements

Check with a Sun Enterprise Services representative for the current supported topologies for Sun Cluster Support for Oracle Parallel Server/Real Application Clusters, cluster interconnect, storage management scheme, and hardware configurations.

Patch Installation Requirements

Ensure that you have installed all of the applicable software patches for the Solaris operating environment, Sun Cluster, Oracle, and your volume manager. If you need to install any Sun Cluster Support for Oracle Parallel Server/Real Application Clusters patches, you must apply these patches after you install the data service.
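
For example, you can use the standard Solaris utilities to list the patches that are already installed on a node and to apply an additional patch. The patch ID 112233-01 below is a placeholder, not a real patch number:

    # showrev -p
    # patchadd /var/spool/patch/112233-01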

Location of Application Binary Files and Application Configuration Files

You can install the application binary files and application configuration files in one of the following locations.

Requirements for Using the Cluster File System

You can store only the following files that are associated with Oracle Parallel Server/Real Application Clusters on the cluster file system:


Note –

You must not store data files, control files, or online redo log files on the cluster file system.


The input/output (I/O) performance during the writing of archived redo log files is affected by the location of the device group for archived redo log files. For optimum performance, ensure that the primary of the device group for archived redo log files is located on the same node as the Oracle Parallel Server/Real Application Clusters database instance. This device group contains the cluster file system that holds archived redo log files of the database instance.
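
For example, you can check which node is currently the primary of a device group, and switch the primary to another node if necessary, by using the following Sun Cluster commands. The device group name rac-archive-dg and the node name phys-schost-1 are hypothetical; substitute the names from your own configuration:

    # scstat -D
    # scswitch -z -D rac-archive-dg -h phys-schost-1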

See the planning chapter of the Sun Cluster Software Installation Guide for Solaris OS for information about how to create cluster file systems.

Special Requirements

This section lists special requirements for Sun Cluster Support for Oracle Parallel Server/Real Application Clusters.

32-Bit Mode or 64-Bit Mode

Before you decide which architecture to use for the Oracle UDLM and the Oracle relational database management system (RDBMS), note the following points.

Log File Locations

The following list shows the locations of the data service log files.

Node Failures and Recovery Procedures

In an Oracle Parallel Server/Real Application Clusters environment, multiple Oracle instances cooperate to provide access to the same shared database. The Oracle clients can use any of the instances to access the database. Thus, if one or more instances have failed, clients can connect to a surviving instance and continue to access the database.


Note –

If a node fails, boot the node into maintenance mode to correct the problem. After you have corrected the problem, reboot the node. See the Sun Cluster System Administration Guide for Solaris OS for more information.



Note –

When you install this data service, ensure that you complete all steps of all procedures that precede installing the Oracle RDBMS software and creating your Oracle database before you reboot the nodes. Otherwise, the nodes will panic. If the nodes panic, you must boot into maintenance mode to correct the problem. After you have corrected the problem, you must reboot the nodes. The procedures that you must complete are listed in Table 2–1.


Using the Sun Cluster LogicalHostname Resource With Oracle Parallel Server/Real Application Clusters

If a cluster node that is running an instance of Oracle Parallel Server/Real Application Clusters fails, an operation that a client application has attempted might have to time out before the operation is attempted again on another instance. If the Transmission Control Protocol/Internet Protocol (TCP/IP) network timeout is high, the client application might require a significant length of time to detect the failure. Typically, client applications require between three and nine minutes to detect such failures.

In such situations, client applications can use the Sun Cluster LogicalHostname resource for connecting to an Oracle Parallel Server/Real Application Clusters database that is running on Sun Cluster. You can configure the LogicalHostname resource in a separate resource group that is mastered on the nodes on which Oracle Parallel Server/Real Application Clusters is running. If a node fails, the LogicalHostname resource fails over to another surviving node on which Oracle Parallel Server/Real Application Clusters is running. The failover of the LogicalHostname resource enables new connections to be directed to the other instance of Oracle Parallel Server/Real Application Clusters.
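
The following commands sketch one way to create such a resource group with the standard Sun Cluster 3.1 administration commands. The resource group name rac-lh-rg, the logical hostname rac-lh, and the node names are hypothetical, and the logical hostname must already be resolvable by your naming service:

    # scrgadm -a -g rac-lh-rg -h phys-schost-1,phys-schost-2
    # scrgadm -a -L -g rac-lh-rg -l rac-lh
    # scswitch -Z -g rac-lh-rg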


Caution –

Before using the LogicalHostname resource for this purpose, consider the effect on existing user connections of failover or failback of the LogicalHostname resource.


Using the Oracle Parallel Fail Safe/Real Application Clusters Guard Option With Sun Cluster 3.1

For information about the installation, administration, and operation of the Oracle Parallel Fail Safe/Real Application Clusters Guard option, see the Oracle documentation. If you plan to use this product option with Sun Cluster 3.1, note the points in the subsections that follow before you install Sun Cluster 3.1.

Hostname Restrictions

If you use the Oracle Parallel Fail Safe/Real Application Clusters Guard option with Sun Cluster 3.1, the following restrictions apply to hostnames that you use in your cluster:

For more information about these restrictions and any other requirements, see the Oracle documentation.

Sun Cluster Command Usage Restrictions

If you use the Oracle Parallel Fail Safe/Real Application Clusters Guard option with Sun Cluster 3.1, do not use Sun Cluster commands to perform the following operations:

Installing Storage Management Software With Sun Cluster Support for Oracle Parallel Server/Real Application Clusters

For the disks that you use with Sun Cluster Support for Oracle Parallel Server/Real Application Clusters, use one of the following configurations.

How to Use VxVM

To use the VxVM software with Sun Cluster Support for Oracle Parallel Server/Real Application Clusters, perform the following tasks.

  1. (Optional) If you are using VxVM with the cluster feature, obtain a license for the Volume Manager cluster feature in addition to the basic VxVM license.

    See your VxVM documentation for more information about VxVM licensing requirements.


    Caution –

    Failure to correctly install the license for the Volume Manager cluster feature might cause a panic when you install Oracle Parallel Server/Real Application Clusters support. Before you install the Oracle Parallel Server/Real Application Clusters packages, run the vxlicense -p or vxlicrep command to ensure that you have installed a valid license for the Volume Manager cluster feature.


  2. Install and configure the VxVM software on the cluster nodes.

    See “Installing and Configuring VERITAS Volume Manager” in Sun Cluster Software Installation Guide for Solaris OS and the VxVM documentation for more information.

Where to Go From Here

Go to Installing Sun Cluster Support for Oracle Parallel Server/Real Application Clusters Packages to install the Sun Cluster Support for Oracle Parallel Server/Real Application Clusters software packages.

How to Use Hardware RAID Support

You can use Sun Cluster Support for Oracle Parallel Server/Real Application Clusters with hardware RAID support.

For example, you can use Sun StorEdge™ A3500/A3500FC disk arrays with hardware RAID support and without VxVM software. To use this combination, configure raw device identities (/dev/did/rdsk*) on top of the disk arrays' logical unit numbers (LUNs). To set up the raw devices for Oracle Parallel Server/Real Application Clusters on a cluster that uses StorEdge A3500/A3500FC disk arrays with hardware RAID, perform the following steps.

  1. Create LUNs on the disk arrays.

    See the Sun Cluster hardware documentation for information about how to create LUNs.

  2. After you create the LUNs, run the format(1M) command to partition the disk arrays' LUNs into as many slices as you need.

    The following example lists output from the format command.


    # format
    
    0. c0t2d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
       /sbus@3,0/SUNW,fas@3,8800000/sd@2,0
    1. c0t3d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
       /sbus@3,0/SUNW,fas@3,8800000/sd@3,0
    2. c1t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@1/rdriver@5,0
    3. c1t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@1/rdriver@5,1
    4. c2t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@2/rdriver@5,0
    5. c2t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@2/rdriver@5,1
    6. c3t4d2 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@3/rdriver@4,2

    Note –

    To prevent a loss of disk partition information, do not start the partition at cylinder 0 for any disk slice that is used for raw data. The disk partition table is stored in cylinder 0 of the disk.


  3. Run the scdidadm(1M) command to find the raw device identity (DID) that corresponds to the LUNs that you created in Step 1.

    The following example lists output from the scdidadm -L command.


    # scdidadm -L
    
    1        phys-schost-1:/dev/rdsk/c0t2d0   /dev/did/rdsk/d1
    1        phys-schost-2:/dev/rdsk/c0t2d0   /dev/did/rdsk/d1
    2        phys-schost-1:/dev/rdsk/c0t3d0   /dev/did/rdsk/d2
    2        phys-schost-2:/dev/rdsk/c0t3d0   /dev/did/rdsk/d2
    3        phys-schost-2:/dev/rdsk/c4t4d0   /dev/did/rdsk/d3
    3        phys-schost-1:/dev/rdsk/c1t5d0   /dev/did/rdsk/d3
    4        phys-schost-2:/dev/rdsk/c3t5d0   /dev/did/rdsk/d4
    4        phys-schost-1:/dev/rdsk/c2t5d0   /dev/did/rdsk/d4
    5        phys-schost-2:/dev/rdsk/c4t4d1   /dev/did/rdsk/d5
    5        phys-schost-1:/dev/rdsk/c1t5d1   /dev/did/rdsk/d5
    6        phys-schost-2:/dev/rdsk/c3t5d1   /dev/did/rdsk/d6
    6        phys-schost-1:/dev/rdsk/c2t5d1   /dev/did/rdsk/d6
  4. Use the DID that the scdidadm output identifies to set up the raw devices.

    For example, the scdidadm output might identify that the raw DID that corresponds to the disk arrays' LUNs is d4. In this instance, use the /dev/did/rdsk/d4sN raw device, where N is the slice number.
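
    As a hedged example only, after you identify the slices you might change their ownership and permissions so that the Oracle user can open the raw devices. The slice number, user name, and group name shown here are assumptions; use the values that your Oracle installation requires.

    # chown oracle:dba /dev/did/rdsk/d4s1
    # chmod 600 /dev/did/rdsk/d4s1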

Where to Go From Here

Go to Installing Sun Cluster Support for Oracle Parallel Server/Real Application Clusters Packages to install the Sun Cluster Support for Oracle Parallel Server/Real Application Clusters software packages.

How to Use the Cluster File System

  1. Create and mount the cluster file system.

    See “Configuring the Cluster” in Sun Cluster Software Installation Guide for Solaris OS for information about how to create and mount the cluster file system.

  2. When you add an entry to the /etc/vfstab file for the mount point, set UNIX file system (UFS) file-system-specific options for various types of Oracle files.

    See the following table.

    File Type: Archived redo log files
    Options: global, logging, forcedirectio

    File Type: Oracle application binary files, configuration files, alert files, and trace files
    Options: global, logging
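
    For example, a /etc/vfstab entry for a cluster file system that holds archived redo log files might look like the following line. The Solaris Volume Manager device paths and the mount point are hypothetical; use the devices and mount point from your own configuration.

    /dev/md/oracle/dsk/d100 /dev/md/oracle/rdsk/d100 /global/oracle/archlogs ufs 2 yes global,logging,forcedirectio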

Where to Go From Here

Go to Creating Node-Specific Files and Directories for the Cluster File System to create node-specific files and directories that the Sun Cluster Support for Oracle Parallel Server/Real Application Clusters software requires.

Creating Node-Specific Files and Directories for the Cluster File System

When Oracle software is installed on the cluster file system, all the files in the directory that the ORACLE_HOME environment variable specifies are accessible by all cluster nodes.

An installation might require that some Oracle files or directories maintain node-specific information. You can satisfy this requirement by using a symbolic link whose target is a file or a directory on a file system that is local to a node. Such a file system is not part of the cluster file system.

To use a symbolic link for this purpose, you must allocate an area on a local file system. To enable Oracle applications to create symbolic links to files in this area, the applications must be able to access files in this area. Because the symbolic links reside on the cluster file system, all references to the links from all nodes are the same. Therefore, all nodes must have the same namespace for the area on the local file system.

Creating a Node-Specific Directory for the Cluster File System

Perform this procedure for each directory that is to maintain node-specific information. The following directories are typically required to maintain node-specific information:

For information about other directories that might be required to maintain node-specific information, see your Oracle documentation.

How to Create a Node-Specific Directory for the Cluster File System

  1. On each cluster node, create the local directory that is to maintain node-specific information.

    Ensure that the local directory structure that you create matches the global directory structure that contains the node-specific information. For example, the global directory /global/oracle/network/agent might contain node-specific information that you require to be stored locally under the /local directory. In this situation, you would create a directory that is named /local/oracle/network/agent.


    # mkdir -p local-dir
    
    -p

    Specifies that all nonexistent parent directories are created first

    local-dir

    Specifies the full path name of the directory that you are creating

  2. On each cluster node, make a local copy of the global directory that is to maintain node-specific information.

    Ensure that the local copy of the node-specific information is contained in the local directory that you created in Step 1.


    # cp -pr global-dir local-dir-parent
    
    -p

    Specifies that the owner, group, permission modes, modification time, access time, and access control lists are preserved.

    -r

    Specifies that the directory and all its files, including any subdirectories and their files, are copied.

    global-dir

    Specifies the full path of the global directory that you are copying. This directory resides on the cluster file system under the directory that the ORACLE_HOME environment variable specifies.

    local-dir-parent

    Specifies the directory on the local node that is to contain the local copy. This directory is the parent directory of the directory that you created in Step 1.

  3. Replace the global directory that you copied in Step 2 with a symbolic link to the local copy of the global directory.

    1. From any cluster node, remove the global directory that you copied in Step 2.


      # rm -r global-dir
      
      -r

      Specifies that the directory and all its files, including any subdirectories and their files, are removed.

      global-dir

      Specifies the file name and full path of the global directory that you are removing. This directory is the global directory that you copied in Step 2.

    2. From any cluster node, create a symbolic link from the local copy of the directory to the global directory that you removed in Step a.


      # ln -s local-dir global-dir
      
      -s

      Specifies that the link is a symbolic link

      local-dir

      Specifies that the local directory that you created in Step 1 is the source of the link

      global-dir

      Specifies that the global directory that you removed in Step a is the target of the link


Example 1–1 Creating Node-Specific Directories

This example shows the sequence of operations that is required to create node-specific directories on a two-node cluster. This cluster is configured as follows:

The following operations are performed on each node:

  1. To create the required directories on the local file system, the following commands are run:


    # mkdir -p /local/oracle/network/agent
    

    # mkdir -p /local/oracle/network/log
    

    # mkdir -p /local/oracle/network/trace
    

    # mkdir -p /local/oracle/srvm/log
    

    # mkdir -p /local/oracle/apache
    
  2. To make local copies of the global directories that are to maintain node-specific information, the following commands are run:


    # cp -pr $ORACLE_HOME/network/agent /local/oracle/network/.
    

    # cp -pr $ORACLE_HOME/network/log /local/oracle/network/.
    

    # cp -pr $ORACLE_HOME/network/trace /local/oracle/network/.
    

    # cp -pr $ORACLE_HOME/srvm/log /local/oracle/srvm/.
    

    # cp -pr $ORACLE_HOME/apache /local/oracle/.
    

The following operations are performed on only one node:

  1. To remove the global directories, the following commands are run:


    # rm -r $ORACLE_HOME/network/agent
    

    # rm -r $ORACLE_HOME/network/log
    

    # rm -r $ORACLE_HOME/network/trace
    

    # rm -r $ORACLE_HOME/srvm/log
    

    # rm -r $ORACLE_HOME/apache
    
  2. To create symbolic links from the local directories to their corresponding global directories, the following commands are run:


    # ln -s /local/oracle/network/agent $ORACLE_HOME/network/agent 
    

    # ln -s /local/oracle/network/log $ORACLE_HOME/network/log
    

    # ln -s /local/oracle/network/trace $ORACLE_HOME/network/trace
    

    # ln -s /local/oracle/srvm/log $ORACLE_HOME/srvm/log
    

    # ln -s /local/oracle/apache $ORACLE_HOME/apache
    

Creating a Node-Specific File for the Cluster File System

Perform this procedure for each file that is to maintain node-specific information. The following files are typically required to maintain node-specific information:

For information about other files that might be required to maintain node-specific information, see your Oracle documentation.

How to Create a Node-Specific File for the Cluster File System

  1. On each cluster node, create the local directory that will contain the file that is to maintain node-specific information.


    # mkdir -p local-dir
    
    -p

    Specifies that all nonexistent parent directories are created first

    local-dir

    Specifies the full path name of the directory that you are creating

  2. On each cluster node, make a local copy of the global file that is to maintain node-specific information.


    # cp -p global-file local-dir
    
    -p

    Specifies that the owner, group, permission modes, modification time, access time, and access control lists are preserved.

    global-file

    Specifies the file name and full path of the global file that you are copying. This file was installed on the cluster file system under the directory that the ORACLE_HOME environment variable specifies.

    local-dir

    Specifies the directory that is to contain the local copy of the file. This directory is the directory that you created in Step 1.

  3. Replace the global file that you copied in Step 2 with a symbolic link to the local copy of the file.

    1. From any cluster node, remove the global file that you copied in Step 2.


      # rm global-file
      
      global-file

      Specifies the file name and full path of the global file that you are removing. This file is the global file that you copied in Step 2.

    2. From any cluster node, create a symbolic link from the local copy of the file to the global file that you removed in Step a.


      # ln -s local-file global-file
      
      -s

      Specifies that the link is a symbolic link

      local-file

      Specifies that the file that you copied in Step 2 is the source of the link

      global-file

      Specifies that the global version of the file that you removed in Step a is the target of the link


Example 1–2 Creating Node-Specific Files

This example shows the sequence of operations that is required to create node-specific files on a two-node cluster. This cluster is configured as follows:

The following operations are performed on each node:

  1. To create the local directory that will contain the files that are to maintain node-specific information, the following command is run:


    # mkdir -p /local/oracle/network/admin
    
  2. To make a local copy of the global files that are to maintain node-specific information, the following commands are run:


    # cp -p $ORACLE_HOME/network/admin/snmp_ro.ora \
      /local/oracle/network/admin/.
    

    # cp -p $ORACLE_HOME/network/admin/snmp_rw.ora \
      /local/oracle/network/admin/.
    

The following operations are performed on only one node:

  1. To remove the global files, the following commands are run:


    # rm $ORACLE_HOME/network/admin/snmp_ro.ora
    

    # rm $ORACLE_HOME/network/admin/snmp_rw.ora
    
  2. To create symbolic links from the local copies of the files to their corresponding global files, the following commands are run:


    # ln -s /local/oracle/network/admin/snmp_ro.ora \
      $ORACLE_HOME/network/admin/snmp_ro.ora
    

    # ln -s /local/oracle/network/admin/snmp_rw.ora \
      $ORACLE_HOME/network/admin/snmp_rw.ora
    

Where to Go From Here

Go to Installing Sun Cluster Support for Oracle Parallel Server/Real Application Clusters Packages to install the Sun Cluster Support for Oracle Parallel Server/Real Application Clusters software packages.

Installing Sun Cluster Support for Oracle Parallel Server/Real Application Clusters Packages

If you did not install the Sun Cluster Support for Oracle Parallel Server/Real Application Clusters packages during your initial Sun Cluster installation, perform this procedure to install the packages. Perform this procedure on all of the cluster nodes that can run Sun Cluster Support for Oracle Parallel Server/Real Application Clusters. To complete this procedure, you need the Sun Java Enterprise System Accessory CD Volume 3.

Install the Sun Cluster Support for Oracle Parallel Server/Real Application Clusters packages by using the pkgadd utility.


Note –

Because of the preparation that is required before installation, the scinstall(1M) utility does not support automatic installation of the data service packages.


How to Install Sun Cluster Support for Oracle Parallel Server/Real Application Clusters Packages

  1. Load the Sun Java Enterprise System Accessory CD Volume 3 into the CD-ROM drive.

  2. Become superuser.

  3. Change the current working directory to the directory that contains the packages for the version of the Solaris operating environment that you are using.

    • If you are using Solaris 8, run the following command:


      # cd /cdrom/cdrom0/components/SunCluster_Oracle_RAC/Solaris_8/Packages
      
    • If you are using Solaris 9, run the following command:


      # cd /cdrom/cdrom0/components/SunCluster_Oracle_RAC/Solaris_9/Packages
      
  4. On each cluster node that can run Sun Cluster Support for Oracle Parallel Server/Real Application Clusters, transfer the contents of the required software packages from the CD-ROM to the node.

    The required software packages depend on the storage management scheme that you are using.

    • If you are using VxVM with the cluster feature, run the following command:


      # pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr SUNWcvmr SUNWcvm
      
    • If you are using hardware RAID support, run the following command:


      # pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr SUNWschwr
      
    • If you are using the cluster file system, run the following command:


      # pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr
      

Caution –

Before you reboot the nodes, you must ensure that you have correctly installed and configured the Oracle UDLM software. For more information, see Installing the Oracle UDLM. Also verify that you have correctly installed your volume manager packages. If you plan to use VxVM, check that you have installed the software and check that the license for the VxVM cluster feature is valid. Otherwise, a panic will occur.
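
To confirm that the data service packages were added on each node, you can use the standard Solaris pkginfo utility, as in the following example. List only the packages that apply to your storage management scheme:

    # pkginfo SUNWscucm SUNWudlm SUNWudlmr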


Where to Go From Here

Go to Installing the Oracle UDLM to install the Oracle UDLM.

Installing the Oracle UDLM

Installing the Oracle UDLM involves the following tasks:

How to Prepare the Sun Cluster Nodes

For the Oracle UDLM software to run correctly, sufficient shared memory must be available on all of the cluster nodes. See the Oracle Parallel Server/Real Application Clusters CD-ROM for all of the installation instructions. To prepare the Sun Cluster nodes, check that you have completed the following tasks.


Note –

Perform the following steps as superuser on each cluster node.


  1. On each node, create an entry for the database administrator group in the /etc/group file, and add potential users to the group.

    This group normally is named dba. Verify that root and oracle are members of the dba group, and add entries as necessary for other database administrator (DBA) users. Verify that the group IDs are the same on all of the nodes that run Sun Cluster Support for Oracle Parallel Server/Real Application Clusters. For example, add the following entry to the /etc/group file.


    dba:*:520:root,oracle

    You can make the name service entries in a network name service, such as the Network Information Service (NIS) or NIS+, so that the information is available to the data service clients. You can also make entries in the local /etc files to eliminate dependency on the network name service.

  2. On each node, create an entry for the Oracle user ID (the group and password) in the /etc/passwd file, and run the pwconv(1M) command to create an entry in the /etc/shadow file.

    This Oracle user ID is normally oracle. For example, run the following command to create the entry.


    # useradd -u 120 -g dba -d /oracle-home oracle
    

    Ensure that the user IDs are the same on all of the nodes that run Sun Cluster Support for Oracle Parallel Server/Real Application Clusters.
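
    For example, you can run the id command on each node to confirm that the oracle user has the same user ID and group ID everywhere. The output should be similar to the following, with identical numeric IDs on every node.

    # id oracle
    uid=120(oracle) gid=520(dba)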

Where to Go From Here

After you set up the cluster environment for Oracle Parallel Server/Real Application Clusters, go to How to Install the Oracle UDLM Software to install the Oracle UDLM software on each cluster node.

How to Install the Oracle UDLM Software


Note –

You must install the Oracle UDLM software on the local disk of each node.



Caution –

Before you install the Oracle UDLM software, ensure that you have created entries for the database administrator group and the Oracle user ID. See How to Prepare the Sun Cluster Nodes for details.


  1. Become superuser on a cluster node.

  2. Install the Oracle UDLM software.

    See the appropriate Oracle Parallel Server/Real Application Clusters installation documentation for instructions.


    Note –

    Ensure that you did not receive any error messages when you installed the Oracle UDLM packages. If an error occurred during package installation, correct the problem before you install the Oracle UDLM software.


  3. Update the /etc/system file with the shared memory configuration information.

    You must configure these parameters on the basis of the resources that are available in the cluster. Decide the appropriate values, but ensure that the Oracle UDLM can create a shared memory segment that conforms to its configuration requirements.

    The following example shows entries to configure in the /etc/system file.


    *SHARED MEMORY/ORACLE
    set shmsys:shminfo_shmmax=268435456
    set semsys:seminfo_semmap=1024
    set semsys:seminfo_semmni=2048
    set semsys:seminfo_semmns=2048
    set semsys:seminfo_semmsl=2048
    set semsys:seminfo_semmnu=2048
    set semsys:seminfo_semume=200
    set shmsys:shminfo_shmmin=200
    set shmsys:shminfo_shmmni=200
    set shmsys:shminfo_shmseg=200
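
    After the node is rebooted in the next step, you can optionally spot-check that the new values took effect, for example by searching the sysdef output for the shared memory and semaphore parameters. The exact output depends on your Solaris release and on which IPC modules are loaded.

    # sysdef | grep -i shm
    # sysdef | grep -i sem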

  4. Shut down and reboot each node on which the Oracle UDLM software is installed.


    Caution –

    Before you reboot, you must ensure that you have correctly installed and configured the Oracle UDLM software. Also verify that you have correctly installed your volume manager packages. If you use VxVM, check that you have installed the software and that the license for the VxVM cluster feature is valid. Otherwise, a panic will occur.


    For detailed instructions, see “Shutting Down and Booting a Single Cluster Node” in Sun Cluster System Administration Guide for Solaris OS.
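
    For example, the following generic Solaris command shuts down and reboots a node; run it on one node at a time and follow the referenced procedure for any cluster-specific considerations.

    # shutdown -g0 -y -i6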

Where to Go From Here

After you have installed the Oracle UDLM software on each cluster node, the next step depends on your storage management scheme.

Creating a VxVM Shared-Disk Group for the Oracle Parallel Server/Real Application Clusters Database


Note –

Perform this task only if you are using VxVM without the cluster file system.


If you are using VxVM without the cluster file system, VxVM requires a shared-disk group for the Oracle Parallel Server/Real Application Clusters database to use.

Before You Begin

Before you create a VxVM shared-disk group for the Oracle Parallel Server/Real Application Clusters database, note the following points.

How to Create a VxVM Shared-Disk Group for the Oracle Parallel Server/Real Application Clusters Database

    Use VERITAS commands that are provided for creating a VxVM shared-disk group.

    For information about VxVM shared-disk groups, see your VxVM documentation.
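
    As a hedged sketch only, the following VxVM commands create a shared-disk group and a raw volume within it. They assume that the disks have already been initialized for VxVM use, that you run the commands on the cluster volume manager master node, and that the disk group name oracle-rac-dg, the disk names, and the volume name and size are placeholders for your own values.

    # vxdg -s init oracle-rac-dg c1t5d0 c2t5d0
    # vxassist -g oracle-rac-dg make raclog 500m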

Where to Go From Here

After you have created a shared-disk group for the Oracle Parallel Server/Real Application Clusters database, go to Registering and Configuring Sun Cluster Support for Oracle Parallel Server/Real Application Clusters to register and configure Sun Cluster Support for Oracle Parallel Server/Real Application Clusters.