Sun Cluster Data Service for Oracle Real Application Clusters Guide for Solaris OS

Chapter 3 Enabling Oracle Real Application Clusters to Run in a Cluster

This chapter explains how to enable Oracle Real Application Clusters to run on your Sun Cluster nodes.

Overview of Tasks for Enabling Oracle Real Application Clusters to Run in a Cluster

Table 3–1 summarizes the tasks for enabling Oracle Real Application Clusters to run in a cluster.

Perform these tasks in the order in which they are listed in the table.

Table 3–1 Tasks for Enabling Oracle Real Application Clusters to Run in a Cluster

Task: Install the Oracle Real Application Clusters software
Instructions: Installing Oracle Real Application Clusters Software and your Oracle documentation

Task: Verify the installation of the Oracle Real Application Clusters software
Instructions: Verifying the Installation of Oracle Real Application Clusters

Task: Create your Oracle database
Instructions: Creating an Oracle Database

Task: Create node-specific files and directories that the Sun Cluster Support for Oracle Real Application Clusters software requires
Instructions: Creating Node-Specific Files and Directories for a Shared File System

Task: (Not required for Oracle 10g) Automate the startup and shutdown of Oracle Real Application Clusters database instances
Instructions: Automating the Startup and Shutdown of Oracle Real Application Clusters Database Instances

Task: Verify the Sun Cluster Support for Oracle Real Application Clusters installation and configuration
Instructions: Verifying the Sun Cluster Support for Oracle Real Application Clusters Installation and Configuration

Installing Oracle Real Application Clusters Software

For detailed instructions for installing Oracle Real Application Clusters, see your Oracle documentation.

By default, the Oracle installer installs Oracle Cluster Ready Services (CRS) on all nodes in a cluster. Instructions for installing CRS on only a subset of Sun Cluster nodes are available at the Oracle MetaLink web site. See Oracle MetaLink note 280589.1, How to install Oracle 10g CRS on a cluster where one or more nodes are not to be configured to run CRS.

After installing the Oracle Real Application Clusters software, verify the installation of the software. For more information, see Verifying the Installation of Oracle Real Application Clusters.

Verifying the Installation of Oracle Real Application Clusters

After you have installed Oracle Real Application Clusters, verify that the installation is correct. Perform this verification before you attempt to create your Oracle database. Note that this verification does not confirm that Real Application Clusters database instances can be started and stopped automatically.

How to Verify the Installation of Oracle Real Application Clusters

Steps
  1. Confirm that the owner, group, and mode of the $ORACLE_HOME/bin/oracle file are as follows:

    • Owner: oracle

    • Group: dba

    • Mode: -rwsr-s--x


    # ls -l $ORACLE_HOME/bin/oracle
    
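    The output should resemble the following sketch, in which the file size, time stamp, and the /oracle home directory are illustrative:

    -rwsr-s--x   1 oracle   dba   53272342 Apr 10 11:02 /oracle/bin/oracle
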
  2. Confirm that the binary files for the Oracle listener exist in the $ORACLE_HOME/bin directory.
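
    For example, assuming the standard names of the listener binary files, lsnrctl and tnslsnr, you might confirm their presence as follows:

    # ls -l $ORACLE_HOME/bin/lsnrctl $ORACLE_HOME/bin/tnslsnr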

Next Steps

Go to Creating an Oracle Database.

Creating an Oracle Database

Perform this task to configure and create the initial Oracle database in a Sun Cluster environment. If you create and configure additional databases, you do not need to repeat this task.

How to Create an Oracle Database

Steps
  1. Ensure that the init$ORACLE_SID.ora file or the config$ORACLE_SID.ora file specifies the correct locations of the control files and alert files.

    The locations of these files are specified as follows:

    • The location of control files is specified by the control_files keyword.

    • The location of alert files is specified by the background_dump_dest keyword.

  2. If you use Solaris authentication for database logins, set the remote_os_authent variable in the init$ORACLE_SID.ora file to True.
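
    For example, the relevant entries in the init$ORACLE_SID.ora file might resemble the following sketch, in which all path names and values are illustrative:

    control_files = (/global/oradata/control01.ctl, /global/oradata/control02.ctl)
    background_dump_dest = /global/oracle/admin/bdump
    remote_os_authent = true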

  3. Ensure that all files that are related to the database are in the correct location.

  4. Start the creation of the database by using one command from the following list:

    • The Oracle dbca command

    • The Oracle sqlplus command
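
    For example, a minimal sketch of starting the creation with the sqlplus command, run as the oracle user, follows. The script name create_rac_db.sql is hypothetical and stands for a script that contains your CREATE DATABASE statement.

    $ sqlplus /nolog
    SQL> connect / as sysdba
    SQL> @create_rac_db.sql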

  5. Ensure that the file names of your control files match the file names in your configuration files.
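
    For example, assuming that the parameter file resides in the default $ORACLE_HOME/dbs directory and that the control files reside under /global/oradata, both of which are illustrative locations, you might compare the configured file names with the actual file names as follows:

    # grep control_files $ORACLE_HOME/dbs/init$ORACLE_SID.ora
    # ls /global/oradata/*.ctl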

Next Steps

The next steps depend on the location of your Oracle binary files and Oracle configuration files. See the following table.

Location: Shared file system
Next Step: Creating Node-Specific Files and Directories for a Shared File System

Location: Local disks of each node
Next Step: Automating the Startup and Shutdown of Oracle Real Application Clusters Database Instances

Creating Node-Specific Files and Directories for a Shared File System

To simplify the maintenance of your Oracle installation, you can install the Oracle binary files and Oracle configuration files on a shared file system. The following shared file systems are supported:

  • Sun StorEdge QFS shared file system

  • Cluster file system

When Oracle software is installed on a shared file system, all the files in the directory that the ORACLE_HOME environment variable specifies are accessible by all cluster nodes. However, some Oracle files and directories must maintain node-specific information.

If you install Oracle software on a shared file system, you must create local copies of files and directories that must maintain node-specific information. To ensure that these files and directories are accessible by all cluster nodes, use a symbolic link whose target is a file or a directory on a file system that is local to a node. Such a file system is not part of the shared file system.

To use a symbolic link for this purpose, you must allocate an area on a local file system, and the Oracle applications must be able to access files in this area. Because the symbolic links reside on the shared file system, all references to the links from all nodes are the same. Therefore, all nodes must have the same namespace for the area on the local file system.
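
For example, if the directory that the ORACLE_HOME environment variable specifies is /global/oracle, and $ORACLE_HOME/network/log is replaced with a symbolic link whose target is /local/oracle/network/log, every node resolves the same path name to its own local copy. A minimal sketch of the resulting link follows, in which the size and time stamp are illustrative:

    # ls -ld $ORACLE_HOME/network/log
    lrwxrwxrwx   1 oracle   dba   25 Apr 10 11:02 /global/oracle/network/log -> /local/oracle/network/log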

Creating a Node-Specific Directory for a Shared File System

Perform this procedure for each directory that is to maintain node-specific information. The following directories are typically required to maintain node-specific information:

  • $ORACLE_HOME/network/agent

  • $ORACLE_HOME/network/log

  • $ORACLE_HOME/network/trace

  • $ORACLE_HOME/srvm/log

  • $ORACLE_HOME/apache

For information about other directories that might be required to maintain node-specific information, see your Oracle documentation.

How to Create a Node-Specific Directory for a Shared File System

Steps
  1. On each cluster node, create the local directory that is to maintain node-specific information.

    Ensure that the local directory structure that you create matches the global directory structure that contains the node-specific information. For example, the global directory /global/oracle/network/agent might contain node-specific information that you require to be stored locally under the /local directory. In this situation, you would create a directory that is named /local/oracle/network/agent.


    # mkdir -p local-dir
    
    -p

    Specifies that all nonexistent parent directories are created first

    local-dir

    Specifies the full path name of the directory that you are creating

  2. On each cluster node, make a local copy of the global directory that is to maintain node-specific information.

    Ensure that the local copy of the node-specific information is contained in the local directory that you created in Step 1.


    # cp -pr global-dir local-dir-parent
    
    -p

    Specifies that the owner, group, permissions modes, modification time, access time, and access control lists are preserved.

    -r

    Specifies that the directory and all its files, including any subdirectories and their files, are copied.

    global-dir

    Specifies the full path of the global directory that you are copying. This directory resides on the shared file system under the directory that the ORACLE_HOME environment variable specifies.

    local-dir-parent

    Specifies the directory on the local node that is to contain the local copy. This directory is the parent directory of the directory that you created in Step 1.

  3. Replace the global directory that you copied in Step 2 with a symbolic link to the local copy of the global directory.

    1. From any cluster node, remove the global directory that you copied in Step 2.


      # rm -r global-dir
      
      -r

      Specifies that the directory and all its files, including any subdirectories and their files, are removed.

      global-dir

      Specifies the full path of the global directory that you are removing. This directory is the global directory that you copied in Step 2.

    2. From any cluster node, create a symbolic link to the local copy of the directory. Create the link at the path of the global directory that you removed in Step a.


      # ln -s local-dir global-dir
      
      -s

      Specifies that the link is a symbolic link

      local-dir

      Specifies the local directory that you created in Step 1. This directory is the target to which the symbolic link points.

      global-dir

      Specifies the path name of the symbolic link that you are creating. The link replaces the global directory that you removed in Step a.


Example 3–1 Creating Node-Specific Directories

This example shows the sequence of operations that is required to create node-specific directories on a two-node cluster. This cluster is configured as follows:

  • The directory that the ORACLE_HOME environment variable specifies resides on the shared file system.

  • The area for node-specific files on the local file system of each node is under the /local directory.

The following operations are performed on each node:

  1. To create the required directories on the local file system, the following commands are run:


    # mkdir -p /local/oracle/network/agent
    

    # mkdir -p /local/oracle/network/log
    

    # mkdir -p /local/oracle/network/trace
    

    # mkdir -p /local/oracle/srvm/log
    

    # mkdir -p /local/oracle/apache
    
  2. To make local copies of the global directories that are to maintain node-specific information, the following commands are run:


    # cp -pr $ORACLE_HOME/network/agent /local/oracle/network/.
    

    # cp -pr $ORACLE_HOME/network/log /local/oracle/network/.
    

    # cp -pr $ORACLE_HOME/network/trace /local/oracle/network/.
    

    # cp -pr $ORACLE_HOME/srvm/log /local/oracle/srvm/.
    

    # cp -pr $ORACLE_HOME/apache /local/oracle/.
    

The following operations are performed on only one node:

  1. To remove the global directories, the following commands are run:


    # rm -r $ORACLE_HOME/network/agent
    

    # rm -r $ORACLE_HOME/network/log
    

    # rm -r $ORACLE_HOME/network/trace
    

    # rm -r $ORACLE_HOME/srvm/log
    

    # rm -r $ORACLE_HOME/apache
    
  2. To replace the global directories with symbolic links to their corresponding local directories, the following commands are run:


    # ln -s /local/oracle/network/agent $ORACLE_HOME/network/agent 
    

    # ln -s /local/oracle/network/log $ORACLE_HOME/network/log
    

    # ln -s /local/oracle/network/trace $ORACLE_HOME/network/trace
    

    # ln -s /local/oracle/srvm/log $ORACLE_HOME/srvm/log
    

    # ln -s /local/oracle/apache $ORACLE_HOME/apache
    

Creating a Node-Specific File for a Shared File System

Perform this procedure for each file that is to maintain node-specific information. The following files are typically required to maintain node-specific information:

  • $ORACLE_HOME/network/admin/snmp_ro.ora

  • $ORACLE_HOME/network/admin/snmp_rw.ora

For information about other files that might be required to maintain node-specific information, see your Oracle documentation.

How to Create a Node-Specific File for a Shared File System

Steps
  1. On each cluster node, create the local directory that will contain the file that is to maintain node-specific information.


    # mkdir -p local-dir
    
    -p

    Specifies that all nonexistent parent directories are created first

    local-dir

    Specifies the full path name of the directory that you are creating

  2. On each cluster node, make a local copy of the global file that is to maintain node-specific information.


    # cp -p global-file local-dir
    
    -p

    Specifies that the owner, group, permissions modes, modification time, access time, and access control lists are preserved.

    global-file

    Specifies the file name and full path of the global file that you are copying. This file was installed on the shared file system under the directory that the ORACLE_HOME environment variable specifies.

    local-dir

    Specifies the directory that is to contain the local copy of the file. This directory is the directory that you created in Step 1.

  3. Replace the global file that you copied in Step 2 with a symbolic link to the local copy of the file.

    1. From any cluster node, remove the global file that you copied in Step 2.


      # rm global-file
      
      global-file

      Specifies the file name and full path of the global file that you are removing. This file is the global file that you copied in Step 2.

    2. From any cluster node, create a symbolic link to the local copy of the file. Create the link at the path of the global file that you removed in Step a.


      # ln -s local-file global-file
      
      -s

      Specifies that the link is a symbolic link

      local-file

      Specifies the local copy of the file that you made in Step 2. This file is the target to which the symbolic link points.

      global-file

      Specifies the path name of the symbolic link that you are creating. The link replaces the global file that you removed in Step a.


Example 3–2 Creating Node-Specific Files

This example shows the sequence of operations that is required to create node-specific files on a two-node cluster. This cluster is configured as follows:

  • The directory that the ORACLE_HOME environment variable specifies resides on the shared file system.

  • The area for node-specific files on the local file system of each node is under the /local directory.

The following operations are performed on each node:

  1. To create the local directory that will contain the files that are to maintain node-specific information, the following command is run:


    # mkdir -p /local/oracle/network/admin
    
  2. To make a local copy of the global files that are to maintain node-specific information, the following commands are run:


    # cp -p $ORACLE_HOME/network/admin/snmp_ro.ora \
      /local/oracle/network/admin/.
    

    # cp -p $ORACLE_HOME/network/admin/snmp_rw.ora \
      /local/oracle/network/admin/.
    

The following operations are performed on only one node:

  1. To remove the global files, the following commands are run:


    # rm $ORACLE_HOME/network/admin/snmp_ro.ora
    

    # rm $ORACLE_HOME/network/admin/snmp_rw.ora
    
  2. To replace the global files with symbolic links to their corresponding local copies, the following commands are run:


    # ln -s /local/oracle/network/admin/snmp_ro.ora \
      $ORACLE_HOME/network/admin/snmp_ro.ora
    

    # ln -s /local/oracle/network/admin/snmp_rw.ora \
      $ORACLE_HOME/network/admin/snmp_rw.ora
    

Next Steps

Go to Automating the Startup and Shutdown of Oracle Real Application Clusters Database Instances.

Automating the Startup and Shutdown of Oracle Real Application Clusters Database Instances


Note –

If you are using Oracle 10g, omit this task. In Oracle 10g, Oracle CRS starts and shuts down Oracle Real Application Clusters database instances.


Automating the startup and shutdown of Oracle Real Application Clusters database instances involves registering and configuring the following resources:

  • Oracle RAC server resources

  • Oracle listener resources

The Oracle RAC server resources provide fault monitoring only so that Sun Cluster utilities can display the status of Oracle Real Application Clusters resources. These resources do not provide automatic fault recovery.

The procedures that follow contain instructions for registering and configuring resources. These instructions explain how to set only the extension properties that Sun Cluster Support for Oracle Real Application Clusters requires you to set. Optionally, you can set additional extension properties to override their default values. For more information, see the following sections:

  • SUNW.oracle_rac_server Extension Properties

  • SUNW.oracle_listener Extension Properties

Registering and Configuring Oracle RAC Server Resources

The SUNW.oracle_rac_server resource type represents the Oracle RAC server in a Sun Cluster configuration. Each instance of the Oracle RAC server is represented by a single SUNW.oracle_rac_server resource.

Configure each SUNW.oracle_rac_server resource as a single-instance resource that is restricted to run on only one node. You enforce this restriction as follows:

  • You create a failover resource group to contain the resource.

  • You include only one node in the node list of the resource group.

Oracle RAC server instances should be started only after the RAC framework is enabled on a cluster node. You ensure that this requirement is met by creating the following affinities and dependencies:

  • A strong positive affinity between the Oracle RAC server resource group and the RAC framework resource group

  • A dependency between the Oracle RAC server resource and the RAC framework resource

If you are using the Sun StorEdge QFS shared file system, ensure that each Oracle RAC server instance is started only after the Sun StorEdge QFS resources for this instance are started on a cluster node. You meet this requirement by creating a dependency between the Oracle RAC server resource and its related Sun StorEdge QFS resources.

How to Register and Configure Oracle RAC Server Resources

Steps
  1. On one node of the cluster, become superuser.

  2. Register the SUNW.oracle_rac_server resource type.


    # scrgadm -a -t SUNW.oracle_rac_server
    
  3. For each node where Sun Cluster Support for Oracle Real Application Clusters can run, create a resource group and a resource for the Oracle RAC server.

    1. Create a failover resource group to contain the Oracle RAC server resource.


      # scrgadm -a -g rac-server-rg -h node \
      -y RG_AFFINITIES=++rac-fmwk-rg \
      [-y RG_DEPENDENCIES=sqfs-rg-list]
      
      -g rac-server-rg

      Specifies the name that you are assigning to the resource group.

      -h node

      Specifies the node for which you are creating the resource group. You must specify only one node.

      -y RG_AFFINITIES=++rac-fmwk-rg

      Creates a strong positive affinity to the RAC framework resource group. If the RAC framework resource group was created by using the scsetup utility, the RAC framework resource group is named rac-framework-rg.

      -y RG_DEPENDENCIES=sqfs-rg-list

      Specifies a comma-separated list of Sun StorEdge QFS resource groups on which this Oracle RAC server instance depends. These resource groups are created when you register and configure the data service for the Sun StorEdge QFS metadata server. For more information about these resources, see Configuration Planning Questions. Create this dependency only if you are using the Sun StorEdge QFS shared file system.

    2. Add an instance of the SUNW.oracle_rac_server resource type to the resource group that you created in Step a.

      When you create this resource, specify the following information about the resource:

      • The Oracle home directory. The Oracle home directory contains the binary files, log files, and parameter files for the Oracle software.

      • The Oracle system identifier. This identifier is the name of the Oracle database instance.


      # scrgadm -a -j rac-server-resource -g rac-server-rg \
      -t SUNW.oracle_rac_server \
      -y RESOURCE_DEPENDENCIES=rac-fmwk-rs[,sqfs-rs-list] \
      -x ORACLE_SID=ora-sid \
      -x ORACLE_HOME=ora-home
      
      -j rac-server-resource

      Specifies the name that you are assigning to the SUNW.oracle_rac_server resource.

      -g rac-server-rg

      Specifies the resource group to which you are adding the resource. This resource group must be the resource group that you created in Step a.

      -y RESOURCE_DEPENDENCIES=rac-fmwk-rs[,sqfs-rs-list]

      Specifies the resources on which this Oracle RAC server instance depends.

      You must specify the RAC framework resource. If the RAC framework resource group was created by using the scsetup utility, this resource is named rac_framework.

      If you are using the Sun StorEdge QFS shared file system, you must also specify a comma-separated list of Sun StorEdge QFS resources. These resources are created when you register and configure the data service for the Sun StorEdge QFS metadata server. For more information about these resources, see SPARC: Resources for the Sun StorEdge QFS Shared File System.

      -x ORACLE_SID=ora-sid

      Specifies the Oracle system identifier. This identifier is the name of the Oracle database instance.

      -x ORACLE_HOME=ora-home

      Specifies the path to the Oracle home directory. The Oracle home directory contains the binary files, log files, and parameter files for the Oracle software.


Example 3–3 Registering and Configuring Oracle RAC Server Resources

This example shows the sequence of operations that is required to register and configure Oracle RAC server resources for a two-node cluster.

The example assumes that a RAC framework resource group named rac-framework-rg has been created. The example also assumes that this resource group contains a SUNW.rac_framework resource named rac_framework.

  1. To register the SUNW.oracle_rac_server resource type, the following command is run:


    # scrgadm -a -t SUNW.oracle_rac_server
    
  2. To create the RAC1-rg resource group for node node1, the following command is run:


    # scrgadm -a -g RAC1-rg -h node1 \
    -y RG_AFFINITIES=++rac-framework-rg
    
  3. To create the RAC2-rg resource group for node node2, the following command is run:


    # scrgadm -a -g RAC2-rg -h node2 \
    -y RG_AFFINITIES=++rac-framework-rg
    
  4. To create the RAC1-rs resource in the RAC1-rg resource group for node node1, the following command is run:


    # scrgadm -a -j RAC1-rs -g RAC1-rg \
    -t SUNW.oracle_rac_server \
    -y RESOURCE_DEPENDENCIES=rac_framework \
    -x ORACLE_SID=RAC1 \
    -x ORACLE_HOME=/oracle
    
  5. To create the RAC2-rs resource in the RAC2-rg resource group for node node2, the following command is run:


    # scrgadm -a -j RAC2-rs -g RAC2-rg \
    -t SUNW.oracle_rac_server \
    -y RESOURCE_DEPENDENCIES=rac_framework \
    -x ORACLE_SID=RAC2 \
    -x ORACLE_HOME=/oracle
    

Next Steps

Go to Registering and Configuring Oracle Listener Resources.

Registering and Configuring Oracle Listener Resources

How you configure Oracle listener resources depends on how you require Oracle listeners to serve Oracle Real Application Clusters database instances. For more information, see Resource Groups for Oracle Listener Resources.

How to Register and Configure Oracle Listener Resources

Steps
  1. On one node of the cluster, become superuser.

  2. Register the SUNW.oracle_listener resource type.


    # scrgadm -a -t SUNW.oracle_listener
    
  3. If your configuration of Oracle listeners requires a separate resource group, create a failover resource group for the listener resource.

    Create this resource group only if your configuration of Oracle listeners requires a separate resource group. When you create this resource group, create any dependencies on other resource groups that your configuration requires. For more information, see Resource Groups for Oracle Listener Resources.


    # scrgadm -a -g rac-listener-rg \
    [-y RG_DEPENDENCIES=rg-list] \
    -h nodelist
    
    -g rac-listener-rg

    Specifies the name that you are assigning to the resource group.

    -y RG_DEPENDENCIES=rg-list

    Specifies a comma-separated list of resource groups that this resource group depends on. If the Oracle home directory resides on a Sun StorEdge QFS shared file system, rg-list must specify the resource group for the Sun StorEdge QFS metadata server for the file system.

    If the resource group for the listener resource depends on no other resource groups, omit this option.

    -h nodelist

    Specifies a comma-separated list of nodes where the resource group can be brought online. The list may contain more than one node only if you are configuring the listener to use a LogicalHostname resource. Otherwise, you must specify only one node.
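
    For example, a listener resource group that can be brought online on two nodes and that contains a LogicalHostname resource might be created as follows. The resource group, node, and logical host names are illustrative.

    # scrgadm -a -g lsnr-rg -h node1,node2
    # scrgadm -a -L -g lsnr-rg -l lsnr-lh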

  4. Add an instance of the SUNW.oracle_listener resource to each resource group that is to contain a SUNW.oracle_listener resource.

    When you create this resource, specify the following information about the resource:

    • The name of the Oracle listener. This name must match the corresponding entry in the listener.ora file.

    • The Oracle home directory. The Oracle home directory contains the binary files, log files, and parameter files for the Oracle software.


    # scrgadm -a -j listener-resource -g listener-rg \
    -t SUNW.oracle_listener \
    [-y RESOURCE_DEPENDENCIES=sqfs-rs-list] \
    -x LISTENER_NAME=listener \
    -x ORACLE_HOME=oracle-home
    
    -j listener-resource

    Specifies the name that you are assigning to the SUNW.oracle_listener resource.

    -g listener-rg

    Specifies the resource group to which you are adding the resource.

    -y RESOURCE_DEPENDENCIES=sqfs-rs-list

    Specifies a comma-separated list of Sun StorEdge QFS resources on which this Oracle listener instance depends. These resources are created when you register and configure the data service for the Sun StorEdge QFS metadata server. For more information about these resources, see SPARC: Resources for the Sun StorEdge QFS Shared File System. Create this dependency only if the Oracle home directory resides on a Sun StorEdge QFS shared file system.

    -x LISTENER_NAME=listener

    Specifies the name of the Oracle listener instance. This name must match the corresponding entry in the listener.ora file.

    -x ORACLE_HOME=oracle-home

    Specifies the path to the Oracle home directory. The Oracle home directory contains the binary files, log files, and parameter files for the Oracle software.
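
    For example, a LISTENER_NAME value of LRAC1 corresponds to an entry in the listener.ora file that might resemble the following sketch, in which the host name and port are illustrative:

    LRAC1 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
      )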

  5. Bring online each RAC server resource group that you created in How to Register and Configure Oracle RAC Server Resources.

    For each resource group, type the following command:


    # scswitch -Z -g rac-server-rg
    
    -Z

    Moves the resource group to the MANAGED state, and brings online the resource group

    -g rac-server-rg

    Specifies that a resource group that you created in How to Register and Configure Oracle RAC Server Resources is to be moved to the MANAGED state and brought online

  6. If you created Oracle listener resource groups in Step 3, bring online these resource groups.

    For each resource group that you created, type the following command:


    # scswitch -Z -g rac-listener-rg
    
    -Z

    Moves the resource group to the MANAGED state, and brings online the resource group

    -g rac-listener-rg

    Specifies that a resource group that you created in Step 3 is to be moved to the MANAGED state and brought online


Example 3–4 Registering and Configuring Oracle Listener Resources

This example shows the sequence of operations that is required to register and configure Oracle RAC listener resources for a two-node cluster.

In this example, each listener serves only one Real Application Clusters instance. The listeners cannot fail over.

The example assumes that RAC server resource groups named RAC1-rg and RAC2-rg have been created as shown in Example 3–3.

  1. To register the SUNW.oracle_listener resource type, the following command is run:


    # scrgadm -a -t SUNW.oracle_listener
    
  2. To create the LRAC1-rs resource in the RAC1-rg resource group for node node1, the following command is run:


    # scrgadm -a -j LRAC1-rs -g RAC1-rg \
    -t SUNW.oracle_listener \
    -x LISTENER_NAME=LRAC1 \
    -x ORACLE_HOME=/oracle
    
  3. To create the LRAC2-rs resource in the RAC2-rg resource group for node node2, the following command is run:


    # scrgadm -a -j LRAC2-rs -g RAC2-rg \
    -t SUNW.oracle_listener \
    -x LISTENER_NAME=LRAC2 \
    -x ORACLE_HOME=/oracle
    

Next Steps

Go to Verifying the Sun Cluster Support for Oracle Real Application Clusters Installation and Configuration.

Verifying the Sun Cluster Support for Oracle Real Application Clusters Installation and Configuration

After you install, register, and configure Sun Cluster Support for Oracle Real Application Clusters, verify the installation and configuration. This verification determines whether Real Application Clusters database instances can be started and stopped automatically.

How to Verify the Sun Cluster Support for Oracle Real Application Clusters Installation and Configuration

Perform this task as superuser for each Oracle RAC server resource group that you created when you performed the procedure in Registering and Configuring Oracle RAC Server Resources.

Steps
  1. Verify that the Oracle RAC server resource group is correctly configured.


    # scrgadm -pv -g rac-server-rg
    
    -g rac-server-rg

    Specifies the name of the Oracle RAC server resource group for the node

  2. Bring online the Oracle RAC server resource group.


    # scswitch -Z -g rac-server-rg
    
    -g rac-server-rg

    Specifies the name of the Oracle RAC server resource group for the node

  3. Verify that the Oracle RAC server resource group and its resources are online.


    # scstat -g
    
  4. Take offline the Oracle RAC server resource group.


    # scswitch -F -g rac-server-rg
    
    -g rac-server-rg

    Specifies the name of the Oracle RAC server resource group for the node

  5. Verify that the Oracle RAC server resource group and its resources are offline.


    # scstat -g
    
  6. Bring online again the Oracle RAC server resource group.


    # scswitch -Z -g rac-server-rg
    
    -g rac-server-rg

    Specifies the name of the Oracle RAC server resource group for the node

  7. Verify that the Oracle RAC server resource group and its resources are online.


    # scstat -g
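
For example, to verify the RAC1-rg resource group that is created in Example 3–3, the following sequence of commands is run:

    # scrgadm -pv -g RAC1-rg
    # scswitch -Z -g RAC1-rg
    # scstat -g
    # scswitch -F -g RAC1-rg
    # scstat -g
    # scswitch -Z -g RAC1-rg
    # scstat -g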