Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS

Chapter 1 Installing Sun Cluster Support for Oracle Real Application Clusters

This chapter describes the steps to install Sun Cluster Support for Oracle Real Application Clusters on your Sun Cluster nodes. This chapter contains the following procedures.

Overview of the Installation Process for Sun Cluster Support for Oracle Real Application Clusters

The following table summarizes the installation tasks and provides cross-references to detailed instructions for performing the tasks.

Perform these tasks in the order in which they are listed in the table.

Table 1–1 Tasks for Installing Sun Cluster Support for Oracle Real Application Clusters

Task: Understand preinstallation considerations and special requirements
Instructions: Preinstallation Considerations; Special Requirements

Task: Install storage management software
Instructions: Installing Storage Management Software With Sun Cluster Support for Oracle Real Application Clusters

Task: Create node-specific files and directories that the Sun Cluster Support for Oracle Real Application Clusters software requires
Instructions: Creating Node-Specific Files and Directories for a Shared File System

Task: Install data service packages
Instructions: Installing Sun Cluster Support for Oracle Real Application Clusters Packages

Task: Prepare the Sun Cluster nodes
Instructions: Preparing the Sun Cluster Nodes

Task: Install the UNIX Distributed Lock Manager
Instructions: Installing the Oracle UDLM

Task: Create a multi-owner disk set in Solaris Volume Manager for Sun Cluster for the Oracle Real Application Clusters database
Instructions: Creating a Multi-Owner Disk Set in Solaris Volume Manager for Sun Cluster for the Oracle Real Application Clusters Database

Task: Create a VxVM shared-disk group for the Oracle Real Application Clusters database
Instructions: Creating a VxVM Shared-Disk Group for the Oracle Real Application Clusters Database

Preinstallation Considerations

Oracle Real Application Clusters is a scalable application that can run on more than one node concurrently. Sun Cluster Support for Oracle Real Application Clusters is a set of packages that, when installed, enables Oracle Real Application Clusters to run on Sun Cluster nodes. This data service also enables Oracle Real Application Clusters to be managed by using Sun Cluster commands.


Note –

In earlier versions of Oracle, this scalable application is referred to as “Oracle Parallel Server”. In this book, references to “Oracle Real Application Clusters” also apply to Oracle Parallel Server unless this book explicitly states otherwise.


This data service provides fault monitoring only so that the status of Oracle Real Application Clusters resources can be monitored by Sun Cluster utilities. This data service does not provide automatic fault recovery because the Oracle Real Application Clusters software provides similar functionality.

Hardware and Software Requirements

Before you begin the installation, note the hardware and software requirements in the subsections that follow.

Sun Cluster Framework Requirements

Sun Cluster Support for Oracle Real Application Clusters requires a functioning cluster with the initial cluster framework already installed. See Sun Cluster Software Installation Guide for Solaris OS for details about initial installation of cluster software.

Storage Management Requirements for the Oracle Real Application Clusters Database

You must configure Oracle Real Application Clusters to use the shared-disk architecture of the Sun Cluster software. In this configuration, a single database is shared among multiple instances of the Oracle Real Application Clusters software that access the database concurrently. The UNIX Distributed Lock Manager (Oracle UDLM) controls access to shared resources between cluster nodes.

To satisfy these requirements, use one storage management scheme from the following list:

Software License Requirements

Verify that you have obtained and installed the appropriate licenses for your software. If you install your licenses incorrectly or incompletely, the nodes might fail to boot correctly.

For example, if you are using VxVM with the cluster feature, verify that you have installed a valid license for the Volume Manager cluster feature by running one of the following commands:
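
    # vxlicense -p

    # vxlicrep

In the output of either command, confirm that a valid license for the Volume Manager cluster feature is listed.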

Supported Topology Requirements

Check with a Sun Enterprise Services representative for the current supported topologies for Sun Cluster Support for Oracle Real Application Clusters, cluster interconnect, storage management scheme, and hardware configurations.

Patch Installation Requirements

Ensure that you have installed all of the applicable software patches for the Solaris Operating System, Sun Cluster, Oracle, and your volume manager. If you need to install any Sun Cluster Support for Oracle Real Application Clusters patches, you must apply these patches after you install the data service packages.

Location of Oracle Binary Files and Oracle Configuration Files

You can install the Oracle binary files and Oracle configuration files in one of the following locations.

Using Local Disks for Oracle Binary Files and Oracle Configuration Files

Placing the Oracle binary files and Oracle configuration files on the individual cluster nodes enables you to upgrade the Oracle application later without shutting down the data service.

The disadvantage is that you then have several copies of the Oracle application binary files and Oracle configuration files to maintain and administer.

Using a Shared File System for Oracle Binary Files and Oracle Configuration Files

To simplify the maintenance of your Oracle installation, you can install the Oracle binary files and Oracle configuration files on a shared file system. The following shared file systems are supported:

If you put the Oracle binary files and Oracle configuration files on a shared file system, you have only one copy to maintain and manage. However, you must shut down the data service in the entire cluster to upgrade the Oracle application. If a small amount of downtime for upgrades is acceptable, place a single copy of the Oracle binary files and Oracle configuration files on a shared file system.

Requirements for Using the Sun StorEdge QFS Shared File System

You can store all of the files that are associated with Oracle Real Application Clusters on the Sun StorEdge QFS shared file system.

Distribute these files among several file systems as follows:

For information about how to create a Sun StorEdge QFS shared file system, see the following documentation for Sun StorEdge QFS:

Requirements for Using the Cluster File System

You can store only the following files that are associated with Oracle Real Application Clusters on the cluster file system:


Note –

You must not store data files, control files, or online redo log files on the cluster file system.


The input/output (I/O) performance during the writing of archived redo log files is affected by the location of the device group for archived redo log files. For optimum performance, ensure that the primary of the device group for archived redo log files is located on the same node as the Oracle Real Application Clusters database instance. This device group contains the file system that holds archived redo log files of the database instance.
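
If the primary of this device group is not already on the node that runs the database instance, you can switch it with the scswitch command. The following is a minimal sketch, assuming a hypothetical device group named archive-dg and a node named phys-schost-1:

    # scswitch -z -D archive-dg -h phys-schost-1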

For information about how to create cluster file systems, see:

Configuration Planning Questions

Use the questions in the subsections that follow to plan the installation and configuration of Sun Cluster Support for Oracle Real Application Clusters. Write the answers to these questions in the space that is provided on the data service worksheets in “Configuration Worksheets” in Sun Cluster 3.1 Data Service Planning and Administration Guide.

Resource Groups for Oracle RAC Server Resources

Which resource groups will you use for the Oracle Real Application Clusters (RAC) server resources?

You require one resource group for each Oracle Real Application Clusters database instance. Each resource group contains the Oracle RAC server resource for the database instance.

Use the answer to this question when you perform the procedure in Registering and Configuring Oracle RAC Server Resources.

Resource Groups for Oracle Listener Resources

Which resource groups will you use for the Oracle listener resources?

Use the answer to this question when you perform the procedure in Registering and Configuring Oracle Listener Resources.

The resource groups depend on your configuration of Oracle listeners with Real Application Clusters database instances. For general information about possible configurations of listeners for Real Application Clusters instances, see your Oracle documentation. Example configurations are described in the subsections that follow.

One Listener For One Real Application Clusters Instance

One listener serves only one Real Application Clusters instance. The listener listens on the fixed Internet Protocol (IP) address of the node. The listener cannot fail over.

In this situation, configure the listener resource as follows:

One Listener That Cannot Fail Over for Several Real Application Clusters Instances

One listener serves several Real Application Clusters instances on the same node. The listener uses Oracle's transparent application failover (TAF) and load balancing to distribute client connections across all Real Application Clusters instances. The listener cannot fail over.

In this situation, configure the listener resource as follows:

One Listener That Can Fail Over for Several Real Application Clusters Instances

One listener that can fail over serves several Real Application Clusters instances on the same node. When the listener fails over to another node, the listener serves several Real Application Clusters instances on the other node.

The listener uses Oracle's TAF and load balancing to distribute client connections across all Real Application Clusters instances. To ensure fast error detection and short failover times, the listener listens on an address that is represented by a LogicalHostname resource.

In this situation, configure the listener resource as follows:

For more information, see LogicalHostname Resources for Oracle Listener Resources.

One Listener for the Entire Cluster

One listener serves all Real Application Clusters instances on all nodes. The listener listens on an address that is represented by a LogicalHostname resource. This configuration ensures that the address is plumbed very quickly on another node after a node fails.

You can use this configuration if you configure Real Application Clusters instances to use a multithreaded server (MTS). In such a configuration, the REMOTE_LISTENERS parameter in the init.ora file specifies that each dispatcher registers with the listener on a logical IP address.

All clients connect through the one listener. The listener redirects each client connection to the least busy dispatcher. The least busy dispatcher might be on a different node from the listener.

If the listener fails, the listener's fault monitor restarts the listener. If the node where the listener is running fails, the listener is restarted on a different node. In both situations, the dispatchers reregister after the listener is restarted.

If you are using one listener for the entire cluster, configure the following resources in the same resource group:

For more information, see LogicalHostname Resources for Oracle Listener Resources.

LogicalHostname Resources for Oracle Listener Resources

Which LogicalHostname resources will Oracle listener resources use?

Use the answer to this question when you perform the procedure in Registering and Configuring Oracle Listener Resources.

If a cluster node that is running an instance of Oracle Real Application Clusters fails, an operation that a client application attempted might be required to time out before the operation is attempted again on another instance. If the Transmission Control Protocol/Internet Protocol (TCP/IP) network timeout is high, the client application might require a significant length of time to detect the failure. Typically, client applications require between three and nine minutes to detect such failures.

In such situations, client applications can connect to listener resources that are listening on an address that is represented by the Sun Cluster LogicalHostname resource. Configure the LogicalHostname resource and the listener resource in a separate resource group. Ensure that this resource group is mastered on the nodes on which Oracle Real Application Clusters is running. If a node fails, the resource group that contains the LogicalHostname resource and the listener resource fails over to another surviving node on which Oracle Real Application Clusters is running. The failover of the LogicalHostname resource enables new connections to be directed to the other instance of Oracle Real Application Clusters.
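
The following commands are a minimal sketch of how such a resource group might be created, assuming hypothetical names lsnr-rg for the resource group and lsnr-lh for the logical hostname, and the node names phys-schost-1 and phys-schost-2. The Oracle listener resource is added to this resource group later, as described in Registering and Configuring Oracle Listener Resources.

    # scrgadm -a -g lsnr-rg -h phys-schost-1,phys-schost-2

    # scrgadm -a -L -g lsnr-rg -l lsnr-lh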

Resources for the Sun StorEdge QFS Shared File System

If you are using the Sun StorEdge QFS shared file system, answer the following questions:

For more information, see the following documentation for Sun StorEdge QFS:

Use the answers to these questions when you perform the procedure in Registering and Configuring Oracle RAC Server Resources.

Location of System Configuration Files

Where will the system configuration files reside?

For the advantages and disadvantages of using the local file system instead of the cluster file system, see Location of Oracle Binary Files and Oracle Configuration Files.

Special Requirements

This section lists special requirements for Sun Cluster Support for Oracle Real Application Clusters.

32-Bit Mode or 64-Bit Mode

Before you decide which architecture to use for the Oracle UDLM and the Oracle relational database management system (RDBMS), note the following points.

Log File Locations

The following list shows the locations of the data service log files.

Rebooting Nodes During the Installation of Sun Cluster Support for Oracle Real Application Clusters

During installation of this data service, reboot the nodes only after you have installed and configured the Oracle UDLM software, and satisfied the prerequisites for performing this task. Otherwise, the nodes panic.

For information about how to recover from a node panic during installation, see Node Panic During Initialization of Sun Cluster Support for Oracle Real Application Clusters.

Using the Oracle Real Application Clusters Guard Option With Sun Cluster 3.1

For information about the installation, administration, and operation of the Oracle Real Application Clusters Guard option, see the Oracle documentation. If you plan to use this product option with Sun Cluster 3.1, note the points in the subsections that follow before you install Sun Cluster 3.1.

Hostname Restrictions

If you use the Oracle Real Application Clusters Guard option with Sun Cluster 3.1, the following restrictions apply to hostnames that you use in your cluster:

For more information about these restrictions and any other requirements, see the Oracle documentation.

Sun Cluster Command Usage Restrictions

If you use the Oracle Real Application Clusters Guard option with Sun Cluster 3.1, do not use Sun Cluster commands to perform the following operations:

Installing Storage Management Software With Sun Cluster Support for Oracle Real Application Clusters

Install the software for the storage management schemes that you are using for the Oracle Real Application Clusters database and the Oracle software.

How to Use Solaris Volume Manager for Sun Cluster

To use the Solaris Volume Manager for Sun Cluster software with Sun Cluster Support for Oracle Real Application Clusters, perform the following tasks.

  1. Ensure that you are using Solaris 9 9/04 or compatible versions.

    Solaris Volume Manager for Sun Cluster is installed during the installation of the Solaris Operating System.

  2. Configure the Solaris Volume Manager for Sun Cluster software on the cluster nodes.

    For more information, see “Installing and Configuring Solstice DiskSuite/Solaris Volume Manager Software” in Sun Cluster Software Installation Guide for Solaris OS.

Where to Go From Here

Go to Installing Sun Cluster Support for Oracle Real Application Clusters Packages to install the Sun Cluster Support for Oracle Real Application Clusters software packages.

How to Use VxVM

To use the VxVM software with Sun Cluster Support for Oracle Real Application Clusters, perform the following tasks.

  1. If you are using VxVM with the cluster feature, obtain a license for the Volume Manager cluster feature in addition to the basic VxVM license.

    See your VxVM documentation for more information about VxVM licensing requirements.


    Caution –

    Failure to correctly install the license for the Volume Manager cluster feature might cause a panic when you install Oracle Real Application Clusters support. Before you install the Oracle Real Application Clusters packages, run the vxlicense -p or vxlicrep command to ensure that you have installed a valid license for the Volume Manager cluster feature.


  2. Install and configure the VxVM software on the cluster nodes.

    See “Installing and Configuring VERITAS Volume Manager” in Sun Cluster Software Installation Guide for Solaris OS and the VxVM documentation for more information.

Where to Go From Here

Go to Installing Sun Cluster Support for Oracle Real Application Clusters Packages to install the Sun Cluster Support for Oracle Real Application Clusters software packages.

How to Use Hardware RAID Support

You can use Sun Cluster Support for Oracle Real Application Clusters with hardware RAID support.

For example, you can use Sun StorEdge A3500/A3500FC disk arrays with hardware RAID support and without VxVM software. To use this combination, configure raw device identities (/dev/did/rdsk*) on top of the disk arrays' logical unit numbers (LUNs). To set up the raw devices for Oracle Real Application Clusters on a cluster that uses StorEdge A3500/A3500FC disk arrays with hardware RAID, perform the following steps.

  1. Create LUNs on the disk arrays.

    See the Sun Cluster hardware documentation for information about how to create LUNs.

  2. After you create the LUNs, run the format(1M) command to partition the disk arrays' LUNs into as many slices as you need.

    The following example lists output from the format command.


    # format
    
    0. c0t2d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
       /sbus@3,0/SUNW,fas@3,8800000/sd@2,0
    1. c0t3d0 <SUN18G cyl 7506 alt 2 hd 19 sec 248>
       /sbus@3,0/SUNW,fas@3,8800000/sd@3,0
    2. c1t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@1/rdriver@5,0
    3. c1t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@1/rdriver@5,1
    4. c2t5d0 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@2/rdriver@5,0
    5. c2t5d1 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@2/rdriver@5,1
    6. c3t4d2 <Symbios-StorEDGEA3000-0301 cyl 21541 alt 2 hd 64 sec 64>
       /pseudo/rdnexus@3/rdriver@4,2

    Note –

    To prevent a loss of disk partition information, do not start the partition at cylinder 0 for any disk slice that is used for raw data. The disk partition table is stored in cylinder 0 of the disk.


  3. Run the scdidadm(1M) command to find the raw device identity (DID) that corresponds to the LUNs that you created in Step 1.

    The following example lists output from the scdidadm -L command.


    # scdidadm -L
    
    1        phys-schost-1:/dev/rdsk/c0t2d0   /dev/did/rdsk/d1
    1        phys-schost-2:/dev/rdsk/c0t2d0   /dev/did/rdsk/d1
    2        phys-schost-1:/dev/rdsk/c0t3d0   /dev/did/rdsk/d2
    2        phys-schost-2:/dev/rdsk/c0t3d0   /dev/did/rdsk/d2
    3        phys-schost-2:/dev/rdsk/c4t4d0   /dev/did/rdsk/d3
    3        phys-schost-1:/dev/rdsk/c1t5d0   /dev/did/rdsk/d3
    4        phys-schost-2:/dev/rdsk/c3t5d0   /dev/did/rdsk/d4
    4        phys-schost-1:/dev/rdsk/c2t5d0   /dev/did/rdsk/d4
    5        phys-schost-2:/dev/rdsk/c4t4d1   /dev/did/rdsk/d5
    5        phys-schost-1:/dev/rdsk/c1t5d1   /dev/did/rdsk/d5
    6        phys-schost-2:/dev/rdsk/c3t5d1   /dev/did/rdsk/d6
    6        phys-schost-1:/dev/rdsk/c2t5d1   /dev/did/rdsk/d6
  4. Use the DID that the scdidadm output identifies to set up the raw devices.

    For example, the scdidadm output might identify that the raw DID that corresponds to the disk arrays' LUNs is d4. In this instance, use the /dev/did/rdsk/d4sN raw device, where N is the slice number.
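
    For example, to make such a raw device usable by the Oracle software, you might change its ownership and permissions as follows. This is a sketch only; the slice number is an assumption, and the oracle user and dba group are the accounts that are described in How to Create the Database Administrator Group and the Oracle User Account.

    # chown oracle:dba /dev/did/rdsk/d4s1

    # chmod 660 /dev/did/rdsk/d4s1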

Where to Go From Here

Go to Installing Sun Cluster Support for Oracle Real Application Clusters Packages to install the Sun Cluster Support for Oracle Real Application Clusters software packages.

How to Use Sun StorEdge QFS Shared File System

You must use the Sun StorEdge QFS shared file system with hardware RAID support.


Note –

For detailed instructions for installing, configuring, and using Sun StorEdge QFS shared file system, see Sun StorEdge QFS and Sun StorEdge SAM-FS Software Installation and Configuration Guide and Sun StorEdge QFS and Sun StorEdge SAM-FS File System Administration Guide.


  1. Ensure that the Sun StorEdge QFS software is installed.

  2. Ensure that each Sun StorEdge QFS shared file system is correctly configured for use with Sun Cluster Support for Oracle Real Application Clusters.

  3. Ensure that each Sun StorEdge QFS shared file system is mounted with the correct options for use with Sun Cluster Support for Oracle Real Application Clusters.

    • For the file system that contains binary files, configuration files, alert files, and trace files, use the default mount options.

    • For the file systems that contain data files, control files, online redo log files, and archived redo log files, set the mount options as follows:

      • In the /etc/vfstab file, set the shared option.

      • In the /etc/opt/SUNWsamfs/samfs.cmd file or the /etc/vfstab file, set the following options (a sample /etc/vfstab entry is shown after this list):

        stripe=width
        sync_meta=1
        mh_write
        qwrite
        forcedirectio
        nstreams=1024
        rdlease=300
        wrlease=300
        aplease=300

        Set rdlease, wrlease, and aplease to 300 for optimum performance.

        width

        Specifies the required stripe width for devices in the file system. The required stripe width is a multiple of the file system's disk allocation unit (DAU). width must be an integer that is greater than or equal to 1.


      Note –

      Ensure that settings in the /etc/vfstab file do not conflict with settings in the /etc/opt/SUNWsamfs/samfs.cmd file. Settings in the /etc/vfstab file override settings in the /etc/opt/SUNWsamfs/samfs.cmd file.
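
      The following /etc/vfstab entry is a minimal sketch of how the shared option might be applied, assuming a hypothetical family set name oradata and mount point /db/oradata. The remaining options in this sketch are assumed to be set in the /etc/opt/SUNWsamfs/samfs.cmd file.

      #device  device to fsck  mount point  FS type  fsck pass  mount at boot  mount options
      oradata  -               /db/oradata  samfs    -          no             shared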


  4. Register and configure the data service for the Sun StorEdge QFS metadata server.
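
    The following commands are a sketch of this registration. They assume that the SUNW.qfs resource type provides the metadata server data service and that QFSFileSystem is its extension property for the mount point; the resource group name qfs-mds-rg, the resource name qfs-mds-rs, the mount point /db/oradata, and the node names are hypothetical. See the Sun StorEdge QFS documentation for the authoritative procedure.

    # scrgadm -a -t SUNW.qfs

    # scrgadm -a -g qfs-mds-rg -h phys-schost-1,phys-schost-2

    # scrgadm -a -j qfs-mds-rs -g qfs-mds-rg -t SUNW.qfs -x QFSFileSystem=/db/oradata

    # scswitch -Z -g qfs-mds-rg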

Where to Go From Here

The next step depends on whether you are using the Sun StorEdge QFS file system for Oracle binary files and Oracle configuration files.

How to Use the Cluster File System

  1. Create and mount the cluster file system.

    See “Configuring the Cluster” in Sun Cluster Software Installation Guide for Solaris OS for information about how to create and mount the cluster file system.

  2. If you are using the UNIX file system (UFS), ensure that you specify the correct mount options for various types of Oracle files.

    For the correct options, see the table that follows. You set these options when you add an entry to the /etc/vfstab file for the mount point. A sample entry is shown after the table.

    File Type: Archived redo log files
    Options: global, logging, forcedirectio

    File Type: Oracle application binary files, configuration files, alert files, and trace files
    Options: global, logging
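
    The following /etc/vfstab entry is a minimal sketch for a cluster file system that holds Oracle application binary files. The Solaris Volume Manager device names and the mount point /global/oracle are hypothetical:

    /dev/md/oracle/dsk/d100 /dev/md/oracle/rdsk/d100 /global/oracle ufs 2 yes global,logging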

Where to Go From Here

Go to Creating Node-Specific Files and Directories for a Shared File System to create node-specific files and directories that the Sun Cluster Support for Oracle Real Application Clusters software requires.

Creating Node-Specific Files and Directories for a Shared File System

To simplify the maintenance of your Oracle installation, you can install the Oracle binary files and Oracle configuration files on a shared file system. The following shared file systems are supported:

When Oracle software is installed on a shared file system, all the files in the directory that the ORACLE_HOME environment variable specifies are accessible by all cluster nodes. However, some Oracle files and directories must maintain node-specific information.

If you install Oracle software on a shared file system, you must create local copies of files and directories that must maintain node-specific information. To ensure that these files and directories are accessible by all cluster nodes, use a symbolic link whose target is a file or a directory on a file system that is local to a node. Such a file system is not part of the shared file system.

To use a symbolic link for this purpose, you must allocate an area on a local file system. To enable Oracle applications to create symbolic links to files in this area, the applications must be able to access files in this area. Because the symbolic links reside on the shared file system, all references to the links from all nodes are the same. Therefore, all nodes must have the same namespace for the area on the local file system.

Creating a Node-Specific Directory for a Shared File System

Perform this procedure for each directory that is to maintain node-specific information. The following directories are typically required to maintain node-specific information:

For information about other directories that might be required to maintain node-specific information, see your Oracle documentation.

How to Create a Node-Specific Directory for a Shared File System

  1. On each cluster node, create the local directory that is to maintain node-specific information.

    Ensure that the local directory structure that you create matches the global directory structure that contains the node-specific information. For example, the global directory /global/oracle/network/agent might contain node-specific information that you require to be stored locally under the /local directory. In this situation, you would create a directory that is named /local/oracle/network/agent.


    # mkdir -p local-dir
    
    -p

    Specifies that all nonexistent parent directories are created first

    local-dir

    Specifies the full path name of the directory that you are creating

  2. On each cluster node, make a local copy of the global directory that is to maintain node-specific information.

    Ensure that the local copy of the node-specific information is contained in the local directory that you created in Step 1.


    # cp -pr global-dir local-dir-parent
    
    -p

    Specifies that the owner, group, permissions modes, modification time, access time, and access control lists are preserved.

    -r

    Specifies that the directory and all its files, including any subdirectories and their files, are copied.

    global-dir

    Specifies the full path of the global directory that you are copying. This directory resides on the shared file system under the directory that the ORACLE_HOME environment variable specifies.

    local-dir-parent

    Specifies the directory on the local node that is to contain the local copy. This directory is the parent directory of the directory that you created in Step 1.

  3. Replace the global directory that you copied in Step 2 with a symbolic link to the local copy of the global directory.

    1. From any cluster node, remove the global directory that you copied in Step 2.


      # rm -r global-dir
      
      -r

      Specifies that the directory and all its files, including any subdirectories and their files, are removed.

      global-dir

      Specifies the file name and full path of the global directory that you are removing. This directory is the global directory that you copied in Step 2.

    2. From any cluster node, create a symbolic link from the local copy of the directory to the global directory that you removed in Step a.


      # ln -s local-dir global-dir
      
      -s

      Specifies that the link is a symbolic link

      local-dir

      Specifies that the local directory that you created in Step 1 is the source of the link

      global-dir

      Specifies that the global directory that you removed in Step a is the target of the link


Example 1–1 Creating Node-Specific Directories

This example shows the sequence of operations that is required to create node-specific directories on a two-node cluster. This cluster is configured as follows:

The following operations are performed on each node:

  1. To create the required directories on the local file system, the following commands are run:


    # mkdir -p /local/oracle/network/agent
    

    # mkdir -p /local/oracle/network/log
    

    # mkdir -p /local/oracle/network/trace
    

    # mkdir -p /local/oracle/srvm/log
    

    # mkdir -p /local/oracle/apache
    
  2. To make local copies of the global directories that are to maintain node-specific information, the following commands are run:


    # cp -pr $ORACLE_HOME/network/agent /local/oracle/network/.
    

    # cp -pr $ORACLE_HOME/network/log /local/oracle/network/.
    

    # cp -pr $ORACLE_HOME/network/trace /local/oracle/network/.
    

    # cp -pr $ORACLE_HOME/srvm/log /local/oracle/srvm/.
    

    # cp -pr $ORACLE_HOME/apache /local/oracle/.
    

The following operations are performed on only one node:

  1. To remove the global directories, the following commands are run:


    # rm -r $ORACLE_HOME/network/agent
    

    # rm -r $ORACLE_HOME/network/log
    

    # rm -r $ORACLE_HOME/network/trace
    

    # rm -r $ORACLE_HOME/srvm/log
    

    # rm -r $ORACLE_HOME/apache
    
  2. To create symbolic links from the local directories to their corresponding global directories, the following commands are run:


    # ln -s /local/oracle/network/agent $ORACLE_HOME/network/agent 
    

    # ln -s /local/oracle/network/log $ORACLE_HOME/network/log
    

    # ln -s /local/oracle/network/trace $ORACLE_HOME/network/trace
    

    # ln -s /local/oracle/srvm/log $ORACLE_HOME/srvm/log
    

    # ln -s /local/oracle/apache $ORACLE_HOME/apache
    

Creating a Node-Specific File for a Shared File System

Perform this procedure for each file that is to maintain node-specific information. The following files are typically required to maintain node-specific information:

For information about other files that might be required to maintain node-specific information, see your Oracle documentation.

How to Create a Node-Specific File for a Shared File System

  1. On each cluster node, create the local directory that will contain the file that is to maintain node-specific information.


    # mkdir -p local-dir
    
    -p

    Specifies that all nonexistent parent directories are created first

    local-dir

    Specifies the full path name of the directory that you are creating

  2. On each cluster node, make a local copy of the global file that is to maintain node-specific information.


    # cp -p global-file local-dir
    
    -p

    Specifies that the owner, group, permissions modes, modification time, access time, and access control lists are preserved.

    global-file

    Specifies the file name and full path of the global file that you are copying. This file was installed on the shared file system under the directory that the ORACLE_HOME environment variable specifies.

    local-dir

    Specifies the directory that is to contain the local copy of the file. This directory is the directory that you created in Step 1.

  3. Replace the global file that you copied in Step 2 with a symbolic link to the local copy of the file.

    1. From any cluster node, remove the global file that you copied in Step 2.


      # rm global-file
      
      global-file

      Specifies the file name and full path of the global file that you are removing. This file is the global file that you copied in Step 2.

    2. From any cluster node, create a symbolic link from the local copy of the file to the global file that you removed in Step a.


      # ln -s local-file global-file
      
      -s

      Specifies that the link is a symbolic link

      local-file

      Specifies that the file that you copied in Step 2 is the source of the link

      global-file

      Specifies that the global version of the file that you removed in Step a is the target of the link


Example 1–2 Creating Node-Specific Files

This example shows the sequence of operations that is required to create node-specific files on a two-node cluster. This cluster is configured as follows:

The following operations are performed on each node:

  1. To create the local directory that will contain the files that are to maintain node-specific information, the following command is run:


    # mkdir -p /local/oracle/network/admin
    
  2. To make a local copy of the global files that are to maintain node-specific information, the following commands are run:


    # cp -p $ORACLE_HOME/network/admin/snmp_ro.ora \
      /local/oracle/network/admin/.
    

    # cp -p $ORACLE_HOME/network/admin/snmp_rw.ora \
      /local/oracle/network/admin/.
    

The following operations are performed on only one node:

  1. To remove the global files, the following commands are run:


    # rm $ORACLE_HOME/network/admin/snmp_ro.ora
    

    # rm $ORACLE_HOME/network/admin/snmp_rw.ora
    
  2. To create symbolic links from the local copies of the files to their corresponding global files, the following commands are run:


    # ln -s /local/oracle/network/admin/snmp_ro.ora \
      $ORACLE_HOME/network/admin/snmp_ro.ora
    

    # ln -s /local/oracle/network/admin/snmp_rw.ora \
      $ORACLE_HOME/network/admin/snmp_rw.ora
    

Where to Go From Here

Go to Installing Sun Cluster Support for Oracle Real Application Clusters Packages to install the Sun Cluster Support for Oracle Real Application Clusters software packages.

Installing Sun Cluster Support for Oracle Real Application Clusters Packages

If you did not install the Sun Cluster Support for Oracle Real Application Clusters packages during your initial Sun Cluster installation, perform this procedure to install the packages. Perform this procedure on all of the cluster nodes that can run Sun Cluster Support for Oracle Real Application Clusters. To complete this procedure, you need the Sun Java Enterprise System Accessory CD Volume 3.

The Sun Cluster Support for Oracle Real Application Clusters packages are as follows:

Install the Sun Cluster Support for Oracle Real Application Clusters packages by using the pkgadd utility.


Note –

Because of the preparation that is required before installation, the scinstall(1M) utility does not support automatic installation of the packages for the RAC framework resource group.


How to Install Sun Cluster Support for Oracle Real Application Clusters Packages

  1. Load the Sun Java Enterprise System Accessory CD Volume 3 into the CD-ROM drive.

  2. Become superuser.

  3. Change the current working directory to the directory that contains the packages for the RAC framework resource group.

    This directory depends on the version of the Solaris Operating System that you are using.

    • If you are using Solaris 8, run the following command:


      # cd /cdrom/cdrom0/components/SunCluster_Oracle_RAC/Solaris_8/Packages
      
    • If you are using Solaris 9, run the following command:


      # cd /cdrom/cdrom0/components/SunCluster_Oracle_RAC/Solaris_9/Packages
      
  4. On each cluster node that can run Sun Cluster Support for Oracle Real Application Clusters, transfer the contents of the required software packages from the CD-ROM to the node.

    The required software packages depend on the storage management scheme that you are using for the Oracle Real Application Clusters database.

    • If you are using Solaris Volume Manager for Sun Cluster, run the following command:


      # pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr SUNWscmd
      
    • If you are using VxVM with the cluster feature, run the following command:


      # pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr SUNWcvmr SUNWcvm
      
    • If you are using hardware RAID support, run the following command:


      # pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr SUNWschwr
      
    • If you are using Sun StorEdge QFS shared file system with hardware RAID support, run the following command:


      # pkgadd -d . SUNWscucm SUNWudlm SUNWudlmr SUNWschwr
      
  5. Change the current working directory to the directory that contains the packages for the Oracle RAC server resource and Oracle RAC listener resource.

    This directory depends on the version of the Solaris Operating System that you are using.

    • If you are using Solaris 8, run the following command:


      # cd /cdrom/cdrom0/components/SunCluster_HA_Oracle_3.1/Solaris_8/Packages
      
    • If you are using Solaris 9, run the following command:


      # cd /cdrom/cdrom0/components/SunCluster_HA_Oracle_3.1/Solaris_9/Packages
      
  6. On each cluster node that can run Sun Cluster Support for Oracle Real Application Clusters, transfer the contents of the required software packages from the CD-ROM to the node.

    The required software packages depend on the locale that you require.

    • To install the C locale, run the following command:


      # pkgadd -d . SUNWscor
      
    • To install the Simplified Chinese locale, run the following command:


      # pkgadd -d . SUNWcscor
      
    • To install the Japanese locale, run the following command:


      # pkgadd -d . SUNWjscor
      

Where to Go From Here

Go to Preparing the Sun Cluster Nodes to prepare the Sun Cluster nodes.

Preparing the Sun Cluster Nodes

Preparing the Sun Cluster nodes modifies the configuration of the operating system to enable Oracle Real Application Clusters to run on Sun Cluster nodes. Preparing the Sun Cluster nodes and disks involves the following tasks:


Caution –

Perform these tasks on all nodes where Sun Cluster Support for Oracle Real Application Clusters can run. If you do not perform these tasks on all nodes, the Oracle installation is incomplete. An incomplete Oracle installation causes Sun Cluster Support for Oracle Real Application Clusters to fail during startup.


How to Bypass the NIS Name Service

Bypassing the NIS name service protects the Sun Cluster Support for Oracle Real Application Clusters data service against a failure of a cluster node's public network. A failure of a cluster node's public network might cause the NIS name service to become unavailable. If Sun Cluster Support for Oracle Real Application Clusters refers to the NIS name service, unavailability of the name service might cause the Sun Cluster Support for Oracle Real Application Clusters data service to fail.

Bypassing the NIS name service ensures that the Sun Cluster Support for Oracle Real Application Clusters data service does not refer to the NIS name service when the data service sets the user identifier (ID). The Sun Cluster Support for Oracle Real Application Clusters data service sets the user ID when the data service starts or stops the database.

  1. Become superuser on all nodes where Sun Cluster Support for Oracle Real Application Clusters can run.

  2. On each node, include the following entries in the /etc/nsswitch.conf file.


    passwd:    files nis [TRYAGAIN=0]
    publickey: files nis [TRYAGAIN=0]
    project:   files nis [TRYAGAIN=0]
    group:     files

    For more information about the /etc/nsswitch.conf file, see the nsswitch.conf(4) man page.

Where to Go From Here

Go to How to Create the Database Administrator Group and the Oracle User Account.

How to Create the Database Administrator Group and the Oracle User Account


Note –

Perform the following steps as superuser on each cluster node.


  1. On each node, create an entry for the database administrator group in the /etc/group file, and add potential users to the group.

    This group normally is named dba. Verify that root and oracle are members of the dba group, and add entries as necessary for other database administrator (DBA) users. Verify that the group IDs are the same on all of the nodes that run Sun Cluster Support for Oracle Real Application Clusters. For example, add the following entry to the /etc/group file.


    dba:*:520:root,oracle

    You can create the name service entries in a network name service, such as the Network Information Service (NIS) or NIS+, so that the information is available to the data service clients. You can also create entries in the local /etc files to eliminate dependency on the network name service.

  2. On each node, create an entry for the Oracle user ID (the group and password) in the /etc/passwd file, and run the pwconv(1M) command to create an entry in the /etc/shadow file.

    This Oracle user ID is normally oracle. For example, run the following useradd(1M) command to create the user and its /etc/passwd entry.


    # useradd -u 120 -g dba -d /oracle-home oracle
    

    Ensure that the user IDs are the same on all of the nodes that run Sun Cluster Support for Oracle Real Application Clusters.

Where to Go From Here

After you set up the cluster environment for Oracle Real Application Clusters, go to How to Install the Oracle UDLM to install the Oracle UDLM software on each cluster node.

Installing the Oracle UDLM

To enable the Oracle UDLM software to run correctly, you must ensure that sufficient shared memory is available on all of the cluster nodes. For detailed instructions for installing the Oracle UDLM, see the Oracle Real Application Clusters CD-ROM.


Caution –

Before you install the Oracle UDLM, ensure that you have created entries for the database administrator group and the Oracle user ID. See How to Create the Database Administrator Group and the Oracle User Account for details.


How to Install the Oracle UDLM


Note –

You must install the Oracle UDLM software on the local disk of each node.


  1. Become superuser on a cluster node.

  2. Install the Oracle UDLM software.

    See the appropriate Oracle Real Application Clusters installation documentation for instructions.


    Note –

    Ensure that you did not receive any error messages when you installed the Oracle UDLM packages. If an error occurred during package installation, correct the problem before you install the Oracle UDLM software.


  3. Update the /etc/system file with the shared memory configuration information.

    You must configure these parameters on the basis of the resources that are available in the cluster. Decide the appropriate values, but ensure that the Oracle UDLM can create a shared memory segment that conforms to its configuration requirements.

    The following example shows entries to configure in the /etc/system file.


    *SHARED MEMORY/ORACLE
    set shmsys:shminfo_shmmax=268435456
    set semsys:seminfo_semmap=1024
    set semsys:seminfo_semmni=2048
    set semsys:seminfo_semmns=2048
    set semsys:seminfo_semmsl=2048
    set semsys:seminfo_semmnu=2048
    set semsys:seminfo_semume=200
    set shmsys:shminfo_shmmin=200
    set shmsys:shminfo_shmmni=200
    set shmsys:shminfo_shmseg=200

  4. Shut down and reboot each node on which the Oracle UDLM software is installed.


    Caution –

    Before you reboot, you must ensure that you have correctly installed and configured the Oracle UDLM software. Also verify that you have correctly installed your volume manager packages. If you use VxVM, check that you have installed the software and that the license for the VxVM cluster feature is valid. Otherwise, a panic will occur.


    For detailed instructions, see “Shutting Down and Booting a Single Cluster Node” in Sun Cluster System Administration Guide for Solaris OS.

Where to Go From Here

After you have installed the Oracle UDLM software on each cluster node, the next step depends on your storage management scheme as shown in the following table.

Storage Management Scheme: Solaris Volume Manager for Sun Cluster
Next Step: Creating a Multi-Owner Disk Set in Solaris Volume Manager for Sun Cluster for the Oracle Real Application Clusters Database

Storage Management Scheme: VxVM with the cluster feature
Next Step: Creating a VxVM Shared-Disk Group for the Oracle Real Application Clusters Database

Storage Management Scheme: Other
Next Step: Registering and Configuring the RAC Framework Resource Group

Creating a Multi-Owner Disk Set in Solaris Volume Manager for Sun Cluster for the Oracle Real Application Clusters Database


Note –

Perform this task only if you are using Solaris Volume Manager for Sun Cluster.


If you are using Solaris Volume Manager for Sun Cluster, Solaris Volume Manager requires a multi-owner disk set for the Oracle Real Application Clusters database to use. For information about Solaris Volume Manager for Sun Cluster multi–owner disk sets, see “Disk Set Concepts for Solaris Volume Manager for Sun Cluster” in Solaris Volume Manager Administration Guide.

Before You Begin

Before you create a multi-owner disk set in Solaris Volume Manager for Sun Cluster for the Oracle Real Application Clusters database, note the following points.

How to Create a Multi-Owner Disk Set in Solaris Volume Manager for Sun Cluster for the Oracle Real Application Clusters Database

  1. Create a multi-owner disk set.

    Use the metaset(1M) command for this purpose.


    # metaset -s setname -M -a -h nodelist
    
    -s setname

    Specifies the name of the disk set that you are creating.

    -M

    Specifies that the disk set that you are creating is a multi-owner disk set.

    -a

    Specifies that the nodes that the -h option specifies are to be added to the disk set.

    -h nodelist

    Specifies a space-separated list of nodes that are to be added to the disk set. The Sun Cluster Support for Oracle Real Application Clusters software packages must be installed on each node in the list.

  2. Add global devices to the disk set that you created in Step 1.


    # metaset -s setname -a devicelist
    
    -s setname

    Specifies that you are modifying the disk set that you created in Step 1.

    -a

    Specifies that the devices that devicelist specifies are to be added to the disk set.

    devicelist

    Specifies a space-separated list of full device ID path names for the global devices that are to be added to the disk set. To enable consistent access to each device from any node in the cluster, ensure that each device ID path name is of the form /dev/did/dsk/dN, where N is the device number.

  3. For the disk set that you created in Step 1, create the volumes that the Oracle Real Application Clusters database will use.


    Note –

    If you are creating many volumes for Oracle data files, you can simplify this step by using soft partitions. For more information, see “Soft Partitions (Overview)” in Solaris Volume Manager Administration Guide and “Soft Partitions (Tasks)” in Solaris Volume Manager Administration Guide.


    Create each volume by concatenating slices on global devices that you added in Step 2. Use the metainit(1M) command for this purpose.


    # metainit -s setname volume-abbrev numstripes width slicelist
    
    -s setname

    Specifies that you are creating a volume for the disk set that you created in Step 1.

    volume-abbrev

    Specifies the abbreviated name of the volume that you are creating. An abbreviated volume name has the format dV, where V is the volume number.

    numstripes

    Specifies the number of stripes in the volume.

    width

    Specifies the number of slices in each stripe. If you set width to greater than 1, the slices are striped.

    slicelist

    Specifies a space-separated list of slices that the volume contains. Each slice must reside on a global device that you added in Step 2.

  4. Verify that each node is correctly added to the multi-owner disk set.

    Use the metastat(1M) command for this purpose.


    # metastat -s setname
    
    -s setname

    Specifies that you are verifying the disk set that you created in Step 1

    This command displays a table that contains the following information for each node that is correctly added to the disk set:

    • The Host column contains the node name.

    • The Owner column contains the text multi-owner.

    • The Member column contains the text Yes.

  5. Verify that the multi-owner disk set is correctly configured.


    # scconf -pvv | grep setname
    
    setname

    Specifies that configuration information only for the disk set that you created in Step 1 is displayed

    This command displays the device group information for the disk set. For a multi-owner disk set, the device group type is Multi-owner_SVM.

  6. Verify the online status of the multi-owner disk set.


    # scstat -D
    

    This command displays the node names of nodes in the multi-owner disk set that are online.

  7. On each node that can own the disk set, change the ownership of each volume that you created in Step 3 as follows:

    • Owner: oracle

    • Group: dba

    Ensure that you change ownership only of volumes that the Oracle Real Application Clusters database will use.


    # chown oracle:dba volume-list
    
    volume-list

    Specifies a space-separated list of the logical names of the volumes that you created for the disk set. The format of these names depends on the type of device where the volume resides, as follows:

    • For block devices: /dev/md/setname/dsk/dV

    • For raw devices: /dev/md/setname/rdsk/dV

    The replaceable items in these names are as follows:

    setname

    Specifies the name of the multi-owner disk set that you created in Step 1

    V

    Specifies the volume number of a volume that you created in Step 3

    Ensure that this list specifies each volume that you created in Step 3.

  8. On each node that can own the disk set, grant the oracle user read access and write access to each volume for which you changed the ownership in Step 7.

    Ensure that you change access permissions only of volumes that the Oracle Real Application Clusters database will use.


    # chmod u+rw volume-list
    
    volume-list

    Specifies a space-separated list of the logical names of the volumes to which you are granting the oracle user read access and write access. Ensure that this list contains the volumes that you specified in Step 7.


Example 1–3 Creating a Multi-Owner Disk Set in Solaris Volume Manager for Sun Cluster

This example shows the sequence of operations that is required to create a multi-owner disk set in Solaris Volume Manager for Sun Cluster. This example assumes that the volumes reside on raw devices.

  1. To create the multi-owner disk set, the following command is run:


    # metaset -s racdbset -M -a -h rachost1 rachost2 rachost3 rachost4
    

    The multi-owner disk set is named racdbset. The nodes rachost1, rachost2, rachost3, and rachost4 are added to this disk set.

  2. To add the global device /dev/did/dsk/d0 to the disk set, the following command is run:


    # metaset -s racdbset -a /dev/did/dsk/d0
    
  3. To create a volume for the disk set, the following command is run:


    # metainit -s racdbset d0 1 1 /dev/did/dsk/d0s0 
    

     The volume is named d0. This volume is created by concatenating the single slice /dev/did/dsk/d0s0. The slice is not striped.

  4. To verify that each node is correctly added to the multi-owner disk set, the following command is run:


    # metastat -s racdbset
    Multi-owner Set name = racdbset, Set number = 1, Master = rachost2
    
    Host                Owner          Member
       rachost1           multi-owner   Yes
       rachost2           multi-owner   Yes
       rachost3           multi-owner   Yes
       rachost4           multi-owner   Yes
    
    Drive Dbase
    
    d6    Yes
    
    d10   Yes
  5. To verify that the multi-owner disk set is correctly configured, the following command is run:


    # scconf -pvv | grep racdbset
    Device group name:                                 racdbset
       (racdbset) Device group type:                       Multi-owner_SVM
       (racdbset) Device group failback enabled:           no
       (racdbset) Device group node list:       rachost1, rachost2, rachost3, rachost4
       (racdbset) Device group ordered node list:          no
       (racdbset) Device group desired number of secondaries: 0
       (racdbset) Device group diskset name:               racdbset
  6. To verify the online status of the multi-owner disk set, the following command is run:


    # scstat -D
    
    -- Device Group Servers --
    
                              Device Group        Primary             Secondary
                              ------------        -------             ---------
    
    
    -- Device Group Status --
    
                                   Device Group        Status
                                   ------------        ------
    
    
    -- Multi-owner Device Groups --
    
                                   Device Group        Online Status
                                   ------------        -------------
       Multi-owner device group:   racdbset            rachost1,rachost2,rachost3,rachost4
  7. To change the ownership of the volume in the disk set to owner oracle in group dba, the following command is run:


    # chown oracle:dba /dev/md/racdbset/rdsk/d0
    

    This command is run on each node that can own the disk set.

  8. To grant the oracle user read access and write access to the volume in the disk set, the following command is run:


    # chmod u+rw /dev/md/racdbset/rdsk/d0
    

    This command is run on each node that can own the disk set.


Where to Go From Here

After you have created a multi-owner disk set for the Oracle Real Application Clusters database, go to Registering and Configuring the RAC Framework Resource Group to register and configure Sun Cluster Support for Oracle Real Application Clusters.

Creating a VxVM Shared-Disk Group for the Oracle Real Application Clusters Database


Note –

Perform this task only if you are using VxVM with the cluster feature.


If you are using VxVM with the cluster feature, VxVM requires a shared-disk group for the Oracle Real Application Clusters database to use.

Before You Begin

Before you create a VxVM shared-disk group for the Oracle Real Application Clusters database, note the following points.

How to Create a VxVM Shared-Disk Group for the Oracle Real Application Clusters Database

  1. Use VERITAS commands that are provided for creating a VxVM shared-disk group.

    For information about VxVM shared-disk groups, see your VxVM documentation.
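
    The following commands are a minimal sketch of how a shared-disk group and a volume for the database might be created, typically from the node that is the cluster volume manager master. The disk group name oracle_dg, the disk media and access names, the volume name oradata_vol, and the volume size are hypothetical, and the oracle user and dba group are the accounts that are described in How to Create the Database Administrator Group and the Oracle User Account.

    # vxdg -s init oracle_dg oradg01=c1t5d0 oradg02=c2t5d0

    # vxassist -g oracle_dg make oradata_vol 10g

    # vxedit -g oracle_dg set user=oracle group=dba mode=660 oradata_vol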

Where to Go From Here

After you have created a shared-disk group for the Oracle Real Application Clusters database, go to Registering and Configuring the RAC Framework Resource Group to register and configure Sun Cluster Support for Oracle Real Application Clusters.