Sun Cluster Data Service for Oracle RAC Guide for Solaris OS

Appendix D Command-Line Alternatives

Sun Cluster maintenance commands enable you to automate the creation, modification, and removal of the RAC framework resource group by using scripts. Automating this process reduces the time for propagating the same configuration information to many nodes in a cluster.

This appendix contains the following sections:

Setting Sun Cluster Support for Oracle RAC Extension Properties

The procedures in the sections that follow contain instructions for registering and configuring resources. These instructions explain how to set only extension properties that Sun Cluster Support for Oracle RAC requires you to set. Optionally, you can set additional extension properties to override their default values. For more information, see the following sections:

Registering and Configuring the RAC Framework Resource Group by Using Sun Cluster Maintenance Commands

The task in this section is an alternative for the resource-configuration steps in How to Register and Configure the RAC Framework Resource Group by Using clsetup.

Overview of the RAC Framework Resource Group

The RAC framework resource group enables Oracle RAC to run with Sun Cluster. This resource group contains an instance of each of the following single-instance resource types:

In addition, the RAC framework resource group contains an instance of a single-instance resource type that represents the volume manager that you are using for Oracle files, if any.


Note –

The resource types that are defined for the RAC framework resource group do not enable the Resource Group Manager (RGM) to manage instances of Oracle RAC.


Procedure: How to Register and Configure the RAC Framework Resource Group in the Global Cluster by Using Sun Cluster Maintenance Commands

This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Perform this procedure on only one node of the cluster.

  1. Become superuser or assume a role that provides solaris.cluster.admin and solaris.cluster.modify RBAC authorizations.

  2. Create a scalable resource group.


    Tip –

    If you require Sun Cluster Support for Oracle RAC to run on all cluster nodes, specify the -S option in the command that follows and omit the options -n, -p maximum_primaries, -p desired_primaries, and -p rg_mode.



    # clresourcegroup create -n nodelist \
      -p maximum_primaries=num-in-list \
      -p desired_primaries=num-in-list \
      [-p rg_description="description" \]
    -p rg_mode=Scalable rac-fmwk-rg
    
    -n nodelist

    Specifies a comma-separated list of cluster nodes on which Sun Cluster Support for Oracle RAC is to be enabled. The Sun Cluster Support for Oracle RAC software packages must be installed on each node in this list.

    -p maximum_primaries=num-in-list

    Specifies the number of nodes on which Sun Cluster Support for Oracle RAC is to be enabled. This number must equal the number of nodes in nodelist.

    -p desired_primaries=num-in-list

    Specifies the number of nodes on which Sun Cluster Support for Oracle RAC is to be enabled. This number must equal the number of nodes in nodelist.

    -p rg_description="description"

    Specifies an optional brief description of the resource group. This description is displayed when you use Sun Cluster maintenance commands to obtain information about the resource group.

    -p rg_mode=Scalable

    Specifies that the resource group is scalable.

    rac-fmwk-rg

    Specifies the name that you are assigning to the resource group.

  3. Register the SUNW.rac_framework resource type.


    # clresourcetype register SUNW.rac_framework
    
  4. Add an instance of the SUNW.rac_framework resource type to the resource group that you created in Step 2.


    # clresource create -g rac-fmwk-rg -t SUNW.rac_framework rac-fmwk-rs
    
    -g rac-fmwk-rg

    Specifies the resource group to which you are adding the resource. This resource group must be the resource group that you created in Step 2.

    rac-fmwk-rs

    Specifies the name that you are assigning to the SUNW.rac_framework resource.

  5. Register the SUNW.rac_udlm resource type.


    # clresourcetype register SUNW.rac_udlm
    

    Note –

    If you are performing the steps in this procedure to register and configure the RAC framework resource group for a zone cluster and RAC support is not required in the global cluster, you do not need to create the SUNW.rac_udlm resource in the global cluster. In that case, go to Step 7.


  6. Add an instance of the SUNW.rac_udlm resource type to the resource group that you created in Step 2.

    Ensure that this instance depends on the SUNW.rac_framework resource that you created in Step 4.


    # clresource create -g rac-fmwk-rg \
     -t SUNW.rac_udlm \
     -p resource_dependencies=rac-fmwk-rs rac-udlm-rs
    
    -g rac-fmwk-rg

    Specifies the resource group to which you are adding the resource. This resource group must be the resource group that you created in Step 2.

    -p resource_dependencies=rac-fmwk-rs

    Specifies that this instance depends on the SUNW.rac_framework resource that you created in Step 4.

    rac-udlm-rs

    Specifies the name that you are assigning to the SUNW.rac_udlm resource.

  7. Register and add an instance of the resource type that represents the volume manager that you are using for Oracle files, if any.

    If you are not using a volume manager, omit this step.

    • If you are using Solaris Volume Manager for Sun Cluster, register and add the instance as follows:

      1. Register the SUNW.rac_svm resource type.


        # clresourcetype register SUNW.rac_svm
        
      2. Add an instance of the SUNW.rac_svm resource type to the resource group that you created in Step 2.

        Ensure that this instance depends on the rac_framework resource that you created in Step 4.


        # clresource create -g rac-fmwk-rg \
          -t SUNW.rac_svm \
          -p resource_dependencies=rac-fmwk-rs rac-svm-rs
        
        -g rac-fmwk-rg

        Specifies the resource group to which you are adding the resource. This resource group must be the resource group that you created in Step 2.

        -p resource_dependencies=rac-fmwk-rs

        Specifies that this instance depends on the SUNW.rac_framework resource that you created in Step 4.

        rac-svm-rs

        Specifies the name that you are assigning to the SUNW.rac_svm resource.

    • If you are using VxVM with the cluster feature, register and add the instance as follows.

      1. Register the SUNW.rac_cvm resource type.


        # clresourcetype register SUNW.rac_cvm
        
      2. Add an instance of the SUNW.rac_cvm resource type to the resource group that you created in Step 2.

        Ensure that this instance depends on the rac_framework resource that you created in Step 4.


        # clresource create -g rac-fmwk-rg \
          -t SUNW.rac_cvm \
          -p resource_dependencies=rac-fmwk-rs rac-cvm-rs
        
        -g rac-fmwk-rg

        Specifies the resource group to which you are adding the resource. This resource group must be the resource group that you created in Step 2.

        -p resource_dependencies=rac-fmwk-rs

        Specifies that this instance depends on the SUNW.rac_framework resource that you created in Step 4.

        rac-cvm-rs

        Specifies the name that you are assigning to the SUNW.rac_cvm resource.

  8. Bring online and in a managed state the RAC framework resource group and its resources.


    # clresourcegroup online -emM rac-fmwk-rg
    
    rac-fmwk-rg

    Specifies that the resource group that you created in Step 2 is to be moved to the MANAGED state and brought online.
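
As noted at the beginning of this appendix, you can also combine these maintenance commands in a script. The following is a minimal sketch of such a script for this procedure. The node names, the number of nodes, and the use of Oracle UDLM are assumptions for illustration; the resource and resource-group names are the placeholders that are used in the preceding steps, and no volume-manager resource is created. Run such a script as superuser on only one node of the cluster.

    #!/bin/ksh
    # Sketch only: create and bring online the RAC framework resource group.
    # Node names phys-schost-1 and phys-schost-2 are hypothetical.

    # Register the resource types.
    clresourcetype register SUNW.rac_framework
    clresourcetype register SUNW.rac_udlm

    # Create the scalable RAC framework resource group.
    clresourcegroup create -n phys-schost-1,phys-schost-2 \
        -p maximum_primaries=2 \
        -p desired_primaries=2 \
        -p rg_mode=Scalable rac-fmwk-rg

    # Add the framework resource and the UDLM resource that depends on it.
    clresource create -g rac-fmwk-rg -t SUNW.rac_framework rac-fmwk-rs
    clresource create -g rac-fmwk-rg -t SUNW.rac_udlm \
        -p resource_dependencies=rac-fmwk-rs rac-udlm-rs

    # Bring the resource group to a managed state and online.
    clresourcegroup online -emM rac-fmwk-rg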

Procedure: How to Register and Configure the RAC Framework Resource Group in the Zone Cluster by Using Sun Cluster Maintenance Commands

This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Perform the steps in this procedure to register and configure the RAC framework resource group in a zone cluster for the Sun StorageTek QFS shared file system with SVM. For this configuration, you need to create the RAC framework resource group in both the global cluster and zone cluster.


Note –

When a step in the procedure requires running the Sun Cluster commands in a zone cluster, you should run the command from the global cluster and use the -Z option to specify the zone cluster.


Before You Begin

Perform the steps to register and configure the RAC framework resource group rac-fmwk-rg, with resources rac-fmwk-rs and rac-svm-rs in the global cluster.


Note –

For information on registering and configuring the RAC framework resource group in the global cluster, see How to Register and Configure the RAC Framework Resource Group in the Global Cluster by Using Sun Cluster Maintenance Commands.


  1. Become superuser or assume a role that provides solaris.cluster.admin and solaris.cluster.modify RBAC authorizations.

  2. Create a scalable resource group.


    Tip –

    If you require Sun Cluster Support for Oracle RAC to run on all cluster nodes, specify the -S option in the command that follows and omit the options -n, -p maximum_primaries, -p desired_primaries, and -p rg_mode.



    # clresourcegroup create -Z zcname -n nodelist \
     -p maximum_primaries=num-in-list \
     -p desired_primaries=num-in-list \
     [-p rg_description="description" \]
    -p rg_mode=Scalable rac-fmwk-rg
    
  3. Register the SUNW.rac_framework resource type.


    # clresourcetype register -Z zcname SUNW.rac_framework
    
  4. Add an instance of the SUNW.rac_framework resource type to the resource group that you created in Step 2.


    # clresource create -Z zcname -g rac-fmwk-rg -t SUNW.rac_framework rac-fmwk-rs
    
    -g rac-fmwk-rg

    Specifies the resource group to which you are adding the resource. This resource group must be the resource group that you created in Step 2.

    rac-fmwk-rs

    Specifies the name that you are assigning to the SUNW.rac_framework resource.

  5. Register the SUNW.rac_udlm resource type.


    # clresourcetype register -Z zcname SUNW.rac_udlm
    
  6. Add an instance of the SUNW.rac_udlm resource type to the resource group that you created in Step 2.

    Ensure that this instance depends on the SUNW.rac_framework resource that you created in Step 4.


    # clresource create -Z zcname -g rac-fmwk-rg \
     -t SUNW.rac_udlm \
     -p resource_dependencies=rac-fmwk-rs rac-udlm-rs
    
    -g rac-fmwk-rg

    Specifies the resource group to which you are adding the resource. This resource group must be the resource group that you created in Step 2.

    -p resource_dependencies=rac-fmwk-rs

    Specifies that this instance depends on the SUNW.rac_framework resource that you created in Step 4.

    rac-udlm-rs

    Specifies the name that you are assigning to the SUNW.rac_udlm resource.

  7. Bring online and in a managed state the RAC framework resource group and its resources.


    # clresourcegroup online -Z zcname -emM rac-fmwk-rg
    
    rac-fmwk-rg

    Specifies that the resource group that you created in Step 2 is to be moved to the MANAGED state and brought online.

Creating Storage Management Resources by Using Sun Cluster Maintenance Commands

The tasks in this section are alternatives for the resource-configuration steps in How to Register and Configure Storage Resources for Oracle Files by Using clsetup.

The following resources that represent storage for Oracle files are required:

Resources for Scalable Device Groups and Scalable File-System Mountpoints

If you are using Solaris Volume Manager for Sun Cluster or VxVM, configure storage resources as follows:

If you are using Sun StorageTek QFS or qualified NAS devices, configure storage resources as follows:

The resource that represents a Sun StorageTek QFS shared file system can start only if the file system's Sun StorageTek QFS metadata server is running. Similarly, the resource that represents a Sun StorageTek QFS shared file system can stop only if the file system's Sun StorageTek QFS metadata server is stopped. To meet this requirement, configure a resource for each Sun StorageTek QFS metadata server. For more information, see Resources for the Sun StorageTek QFS Metadata Server.
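
For example, this requirement is expressed as a resource dependency when the scalable mountpoint resource is created, as shown in the procedures later in this appendix. The following sketch assumes a hypothetical mount point (/db_qfs/OraData) and file-system name (Data); the other names are the placeholders that are used in those procedures.

    # Sketch only: the mountpoint resource for a shared QFS file system
    # can start only after the metadata-server resource qfs-mds-rs is online.
    clresource create -t SUNW.ScalMountPoint -g scal-mp-rg \
        -p resource_dependencies=qfs-mds-rs \
        -p mountpointdir=/db_qfs/OraData \
        -p filesystemtype=s-qfs \
        -p targetfilesystem=Data qfs-mp-rs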

Resources for the Sun StorageTek QFS Metadata Server

If you are using the Sun StorageTek QFS shared file system, create one resource for each Sun StorageTek QFS metadata server. The configuration of resource groups for these resources depends on the version of Oracle that you are using.

Configuration of Sun StorageTek QFS Resource Groups With Oracle 9i and Oracle 10g R2

If you are using Oracle 9i or Oracle 10g R2, the configuration of resource groups depends on the number of file systems in your configuration.

Configuration of Sun StorageTek QFS Resource Groups With Oracle 10g R1

If you are using Oracle 10g, Oracle CRS manages RAC database instances. These database instances must be started only after all shared file systems are mounted.

You might use multiple file systems for database files and related files. For more information, see Sun StorageTek QFS File Systems for Database Files and Related Files. In this situation, ensure that the file system that contains the Oracle CRS voting disk is mounted only after the file systems for other database files have been mounted. This behavior ensures that, when a node is booted, Oracle CRS is started only after all Sun StorageTek QFS file systems are mounted.

If you are using Oracle 10g R1, the configuration of resource groups must ensure that Sun Cluster mounts the file systems in the required order. To meet this requirement, configure resource groups for the metadata servers of the file systems as follows:

Procedure: How to Create a Resource for a Scalable Device Group in the Global Cluster

This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Perform this procedure on only one node of the cluster.

  1. Become superuser or assume a role that provides solaris.cluster.admin and solaris.cluster.modify RBAC authorizations.

  2. Create a scalable resource group to contain the scalable device group resource.

    Set a strong positive affinity by the resource group for the RAC framework resource group.


    Tip –

    If you require Sun Cluster Support for Oracle RAC to run on all cluster nodes, specify the -S option in the command that follows and omit the options -n, -p maximum_primaries, -p desired_primaries, and -p rg_mode.



    # clresourcegroup create -p nodelist=nodelist \
    -p desired_primaries=num-in-list \
    -p maximum_primaries=num-in-list \
    -p rg_affinities=++rac-fmwk-rg \
    [-p rg_description="description" \]
    -p rg_mode=Scalable \
    scal-dg-rg
    
  3. Register the SUNW.ScalDeviceGroup resource type.


    # clresourcetype register SUNW.ScalDeviceGroup
    
  4. For each scalable device group that you are using for Oracle files, add an instance of the SUNW.ScalDeviceGroup resource type to the resource group that you created in Step 2.

    Set a strong dependency for the instance of SUNW.ScalDeviceGroup on the resource in the RAC framework resource group that represents the volume manager for the device group. Limit the scope of this dependency to only the node where the SUNW.ScalDeviceGroup resource is running.


    # clresource create -t SUNW.ScalDeviceGroup -g scal-dg-rg \
    -p resource_dependencies=fm-vol-mgr-rs{local_node} \
    -p diskgroupname=disk-group scal-dg-rs
    
  5. Bring online and in a managed state the resource group that you created in Step 2.


    # clresourcegroup online -emM scal-dg-rg
    

Procedure: How to Create a Resource for a Scalable Device Group in a Zone Cluster

This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Perform this procedure on only one node of the cluster.

  1. Become superuser or assume a role that provides solaris.cluster.admin and solaris.cluster.modify RBAC authorizations.

  2. Create a scalable resource group to contain the scalable device group resource.

    Set a strong positive affinity by the resource group for the RAC framework resource group.


    Tip –

    If you require Sun Cluster Support for Oracle RAC to run on all cluster nodes, specify the -S option in the command that follows and omit the options -n, -p maximum_primaries, -p desired_primaries, and -p rg_mode.



    # clresourcegroup create -Z zcname -p nodelist=nodelist \
    -p desired_primaries=num-in-list \
    -p maximum_primaries=num-in-list \
    -p rg_affinities=++rac-fmwk-rg \
    [-p rg_description="description" \]
    -p rg_mode=Scalable \
    scal-dg-rg
    
  3. Register the SUNW.ScalDeviceGroup resource type.


    # clresourcetype register -Z zcname SUNW.ScalDeviceGroup
    
  4. For each scalable device group that you are using for Oracle files, add an instance of the SUNW.ScalDeviceGroup resource type to the resource group that you created in Step 2.

    Set a strong dependency for the instance of SUNW.ScalDeviceGroup on the resource in the RAC framework resource group that represents the volume manager for the device group. Limit the scope of this dependency to only the node where the SUNW.ScalDeviceGroup resource is running.


    # clresource create -Z zcname -t SUNW.ScalDeviceGroup -g scal-dg-rg \
    -p resource_dependencies=fm-vol-mgr-rs{local_node} \
    -p diskgroupname=disk-group scal-dg-rs
    
  5. Bring online and in a managed state the resource group that you created in Step 2.


    # clresourcegroup online -Z zcname -emM scal-dg-rg
    

Procedure: How to Register and Configure Resources for the Sun StorageTek QFS Metadata Server in the Global Cluster

Perform this task only if you are using the Sun StorageTek QFS shared file system.

This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Perform this procedure on only one node of the cluster.

  1. Become superuser or assume a role that provides solaris.cluster.admin and solaris.cluster.modify RBAC authorizations.

  2. Create a failover resource group to contain the resources for the Sun StorageTek QFS metadata server.

    If you are also using a volume manager, set a strong positive affinity by the resource group for the resource group that contains the volume manager's scalable device-group resource. This resource group is created in How to Create a Resource for a Scalable Device Group in the Global Cluster.


    # clresourcegroup create -n nodelist \
    [-p rg_affinities=++scal-dg-rg \]
    [-p rg_description="description" \]
    qfs-mds-rg
    
  3. Register the SUNW.qfs resource type.


    # clresourcetype register SUNW.qfs
    
  4. For each Sun StorageTek QFS shared file system that you are using, add an instance of the SUNW.qfs resource type to the resource group that you created in Step 2.

    Each instance of SUNW.qfs represents the metadata server of the file system.

    If you are also using a volume manager, set a strong dependency by the instance of SUNW.qfs on the resource for the scalable device group that is to store the file system. This resource is created in How to Create a Resource for a Scalable Device Group in the Global Cluster.


    # clresource create -t SUNW.qfs -g qfs-mds-rg \
    -p qfsfilesystem=path \
    [-p resource_dependencies=scal-dg-rs \]
    qfs-mds-rs
    
  5. Bring online and in a managed state the resource group that you created in Step 2.


    # clresourcegroup online -emM qfs-mds-rg
    

Procedure: How to Register and Configure Resources for the Sun StorageTek QFS Metadata Server for a Zone Cluster

Perform the steps in this procedure to register and configure resources for the Sun StorageTek QFS metadata server for a zone cluster. You must perform these steps in the global cluster.

  1. Become superuser or assume a role that provides solaris.cluster.admin and solaris.cluster.modify RBAC authorizations.

  2. Create a scalable resource group to contain the SUNW.wait_zc_boot resource in the global cluster.


    # clresourcegroup create -n nodelist \
    -p rg_mode=Scalable \
    -p maximum_primaries=num-in-list \
    -p desired_primaries=num-in-list \
    [-p rg_description="description" \]
    zc-wait-rg
    
  3. Register the SUNW.wait_zc_boot resource type.


    # clresourcetype register SUNW.wait_zc_boot
    
  4. Add an instance of the SUNW.wait_zc_boot resource type to the resource group that you created in Step 2.


    # clresource create -g zc-wait-rg -t SUNW.wait_zc_boot \
    -p ZCName=zcname zc-wait-rs
    
  5. Bring online and in a managed state the resource group that you created in Step 2.


    # clresourcegroup online -emM zc-wait-rg
    
  6. Create a failover resource group to contain the resources for the Sun StorageTek QFS metadata server.

    Set a strong positive affinity by the resource group for the resource group that contains the volume manager's scalable device-group resource. This resource group is created in How to Create a Resource for a Scalable Device Group in the Global Cluster.


    # clresourcegroup create -n nodelist \
    [-p rg_affinities=++scal-dg-rg \]
    [-p rg_description="description" \]
    qfs-mds-rg
    
  7. Register the SUNW.qfs resource type.


    # clresourcetype register SUNW.qfs
    
  8. Add an instance of the SUNW.qfs resource type to the resource group that you created in Step 6 for each Sun StorageTek QFS shared file system that you are using.

    Each instance of SUNW.qfs represents the metadata server of the file system.

    If you are also using a volume manager, set a strong dependency by the instance of SUNW.qfs on the resource for the scalable device group that is to store the file system. This resource is created in How to Create a Resource for a Scalable Device Group in the Global Cluster.


    # clresource create -t SUNW.qfs -g qfs-mds-rg \
    -p qfsfilesystem=path \
    -p resource_dependencies=zc-wait-rs[,scal-dg-rs] \
    qfs-mds-rs
    
  9. Bring online and in a managed state the resource group that you created in Step 6.


    # clresourcegroup online -emM qfs-mds-rg
    

Procedure: How to Create a Resource for a File-System Mountpoint in the Global Cluster

This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Perform this procedure on only one node of the cluster.

  1. Become superuser or assume a role that provides solaris.cluster.admin and solaris.cluster.modify RBAC authorizations.

  2. Create a scalable resource group to contain the resource for a scalable file-system mountpoint.

    If you are also using a volume manager, set a strong positive affinity by the resource group for the resource group that contains the volume manager's scalable device-group resource. This resource group is created in How to Create a Resource for a Scalable Device Group in the Global Cluster.


    Tip –

    If you require Sun Cluster Support for Oracle RAC to run on all cluster nodes, specify the -S option in the command that follows and omit the options -n, -p maximum_primaries, -p desired_primaries, and -p rg_mode.



    # clresourcegroup create -n nodelist \
    -p desired_primaries=num-in-list \
    -p maximum_primaries=num-in-list \
    [-p rg_affinities=++scal-dg-rg \]
    [-p rg_description="description" \]
    -p rg_mode=Scalable scal-mp-rg
    
  3. Register the SUNW.ScalMountPoint resource type.


    # clresourcetype register SUNW.ScalMountPoint
    
  4. For each shared file system that requires a scalable file-system mountpoint resource, add an instance of the SUNW.ScalMountPoint resource type to the resource group that you created in Step 2.

    • For each Sun StorageTek QFS shared file system, type the following command:

      Set a strong dependency by the instance of SUNW.ScalMountPoint on the resource for the Sun StorageTek QFS metadata server for the file system. The resource for the Sun StorageTek QFS metadata server is created in How to Register and Configure Resources for the Sun StorageTek QFS Metadata Server in the Global Cluster.

      If you are also using a volume manager, set an offline-restart dependency by the instance of SUNW.ScalMountPoint on the resource for the scalable device group that is to store the file system. This resource is created in How to Create a Resource for a Scalable Device Group in the Global Cluster.


      # clresource create -t SUNW.ScalMountPoint -g scal-mp-rg \
      -p resource_dependencies=qfs-mds-rs \
      [-p resource_dependencies_offline_restart=scal-dg-rs \]
      -p mountpointdir=mp-path \
      -p filesystemtype=s-qfs \
      -p targetfilesystem=fs-name qfs-mp-rs
      
    • For each file system on a qualified NAS device, type the following command:

      If you are also using a volume manager, set an offline-restart dependency by the instance of SUNW.ScalMountPoint on the resource for the scalable device group that is to store the file system. This resource is created in How to Create a Resource for a Scalable Device Group in the Global Cluster.


      # clresource create -t SUNW.ScalMountPoint -g scal-mp-rg \
      [-p resource_dependencies_offline_restart=scal-dg-rs \]
      -p mountpointdir=mp-path \
      -p filesystemtype=nas \
      -p targetfilesystem=nas-device:fs-name nas-mp-rs
      
  5. Bring online and in a managed state the resource group that you created in Step 2.


    # clresourcegroup online -emM scal-mp-rg
    

Procedure: How to Create a Resource for a File-System Mountpoint in a Zone Cluster

Perform the steps in this procedure to create a resource for a file-system mountpoint in a zone cluster. For RAC configurations that use the Sun StorageTek QFS shared file system on Solaris Volume Manager for Sun Cluster or the Sun StorageTek QFS shared file system on hardware RAID, you should create a scalable resource group to contain all the scalable mountpoint resources in the zone cluster.


Note –

The nodelist is the list of global-cluster voting nodes where the zone cluster is created.


  1. Become superuser or assume a role that provides solaris.cluster.admin and solaris.cluster.modify RBAC authorizations.

  2. Create a scalable resource group to contain the resource for a scalable file-system mountpoint in the zone cluster.

    If you are also using a volume manager, set a strong positive affinity by the resource group for the resource group that contains the volume manager's scalable device-group resource. This resource group is created in How to Create a Resource for a Scalable Device Group in the Global Cluster.


    Tip –

    If you require Sun Cluster Support for Oracle RAC to run on all cluster nodes, specify the -S option in the command that follows and omit the options -n, -p maximum_primaries, -p desired_primaries, and -p rg_mode.



    # clresourcegroup create -Z zcname -n nodelist \
    -p desired_primaries=num-in-list \
    -p maximum_primaries=num-in-list \
    [-p rg_affinities=++global:scal-dg-rg \]
    [-p rg_description="description" \]
    -p rg_mode=Scalable scal-mp-rg
    
  3. Register the SUNW.ScalMountPoint resource type.


    # clresourcetype register -Z zcname SUNW.ScalMountPoint
    
  4. For each shared file system that requires a scalable file-system mountpoint resource, add an instance of the SUNW.ScalMountPoint resource type to the resource group that you created in Step 2.

    • For each Sun StorageTek QFS shared file system, do the following:

      Set a strong dependency by the instance of SUNW.ScalMountPoint on the resource for the Sun StorageTek QFS metadata server for the file system. The resource for the Sun StorageTek QFS metadata server is created in How to Register and Configure Resources for the Sun StorageTek QFS Metadata Server in the Global Cluster.

      If you are also using a volume manager, set an offline-restart dependency by the instance of SUNW.ScalMountPoint on the resource for the scalable device group that is to store the file system. This resource is created in How to Create a Resource for a Scalable Device Group in the Global Cluster.


      # clresource create -Z zcname -t SUNW.ScalMountPoint -d -g scal-mp-rg \
      -p resource_dependencies=global:qfs-mds-rs \
      [-p resource_dependencies_offline_restart=global:scal-dg-rs \]
      -p mountpointdir=mp-path \
      -p filesystemtype=s-qfs \
      -p targetfilesystem=fs-name qfs-mp-rs
      
  5. Bring online and in a managed state the resource group that you created in Step 2.


    # clresourcegroup online -Z zcname -emM scal-mp-rg
    

Creating Resources for Interoperation With Oracle 10g by Using Sun Cluster Maintenance Commands

The task in this section is an alternative for the resource-configuration steps in How to Enable Sun Cluster and Oracle 10g R2 CRS to Interoperate.

Resources for interoperation with Oracle 10g R2 enable you to administer RAC database instances by using Sun Cluster interfaces. These resources also ensure that dependencies by Oracle CRS resources on Sun Cluster resources are met. These resources enable the high-availability frameworks that are provided by Sun Cluster and Oracle CRS to interoperate.

The following resources for interoperation with Oracle 10g are required:

You must assign a name in the following form to each Oracle CRS resource that represents a Sun Cluster resource:

sun.node.sc-rs

node

Specifies the name of the node where the Oracle CRS resource is to run.

sc-rs

Specifies the name of the Sun Cluster resource that the Oracle CRS resource represents.

For example, the name of the Oracle CRS resource for node pclus1 that represents the Sun Cluster resource scal-dg-rs must be as follows:

sun.pclus1.scal-dg-rs

Figure D–1 Proxy Resources for Configurations With a Volume Manager

Diagram showing proxy resources for Oracle 10g configurations with a volume manager

Figure D–2 Proxy Resources for Configurations With a Shared File System

Diagram showing proxy resources for Oracle 10g configurations with a shared file system

Procedure: How to Create Sun Cluster Resources for Interoperation With Oracle 10g

Perform this procedure on only one node of the cluster.

  1. Become superuser or assume a role that provides solaris.cluster.admin and solaris.cluster.modify RBAC authorizations.

  2. Register the SUNW.crs_framework resource type.


    # clresourcetype register SUNW.crs_framework
    
  3. Add an instance of the SUNW.crs_framework resource type to the RAC framework resource group.

    For information about this resource group, see Registering and Configuring the RAC Framework Resource Group.

    Set a strong dependency by the instance of SUNW.crs_framework on the instance of SUNW.rac_framework in the RAC framework resource group.

    You might have configured a storage resource for the storage that you are using for database files. In this situation, set an offline-restart dependency by the instance of SUNW.crs_framework on the storage resource. Limit the scope of this dependency to only the node where the storage resource is running.

    You might have configured a storage resource for the file system that you are using for binary files. In this situation, set an offline-restart dependency by the instance of SUNW.crs_framework on the storage resource. Limit the scope of this dependency to only the node where the storage resource is running. Set the dependency on the resource that you created in How to Create a Resource for a File-System Mountpoint in the Global Cluster.


    # clresource create -t SUNW.crs_framework \
    -g rac-fmwk-rg \
    -p resource_dependencies=rac-fmwk-rs \
    [-p resource_dependencies_offline_restart=db-storage-rs{local_node}\
    [,bin-storage-rs{local_node}]] \
    crs-fmwk-rs
    
  4. Create a scalable resource group to contain the proxy resource for the Oracle RAC database server.

    Set a strong positive affinity by the scalable resource group for the RAC framework resource group.

    You might have configured a storage resource for the storage that you are using for database files. In this situation, set a strong positive affinity by the scalable resource group for the resource group that contains the storage resource for database files.


    Tip –

    If you require Sun Cluster Support for Oracle RAC to run on all cluster nodes, specify the -S option in the command that follows and omit the options -n, -p maximum_primaries, -p desired_primaries, and -p rg_mode.



    # clresourcegroup create -n nodelist \
    -p maximum_primaries=num-in-list \
    -p desired_primaries=num-in-list \
    -p rg_affinities=++rac-fmwk-rg[,db-storage-rg] \
    [-p rg_description="description" \]
    -p rg_mode=Scalable \
    rac-db-rg
    
  5. Register the SUNW.scalable_rac_server_proxy resource type.


    # clresourcetype register SUNW.scalable_rac_server_proxy
    
  6. Add an instance of the SUNW.scalable_rac_server_proxy resource type to the resource group that you created in Step 4.

    Set a strong dependency by the instance of SUNW.scalable_rac_server_proxy on the instance of SUNW.rac_framework in the RAC framework resource group.

    Set an offline-restart dependency by the instance of SUNW.scalable_rac_server_proxy on the instance of SUNW.crs_framework that you created in Step 3.

    You might have configured a storage resource for the storage that you are using for database files. In this situation, set an offline-restart dependency by the instance of SUNW.scalable_rac_server_proxy on the storage resource. Limit the scope of this dependency to only the node where the storage resource is running.

    Set a different value of the oracle_sid extension property for each node that can master the resource.


    # clresource create -g rac-db-rg \
    -t SUNW.scalable_rac_server_proxy \
    -p resource_dependencies=rac-fmwk-rs \
    -p resource_dependencies_offline_restart=crs-fmwk-rs[,db-storage-rs] \
    -p oracle_home=ora-home \
    -p crs_home=crs-home \
    -p db_name=db-name \
    -p oracle_sid{node1-id}=sid-node1 \
    [ -p oracle_sid{node2-id}=sid-node2 \…]
    rac-srvr-proxy-rs
    
  7. Bring online the resource group that you created in Step 4.


    # clresourcegroup online -emM rac-db-rg
    

Procedure: How to Create Sun Cluster Resources in a Zone Cluster for Interoperation With Oracle 10g

Perform this procedure on only one node of the cluster.


Note –

When a step in the procedure requires running the Sun Cluster commands in a zone cluster, you should run the command from the global cluster and use the -Z option to specify the zone cluster.


  1. Become superuser or assume a role that provides solaris.cluster.admin and solaris.cluster.modify RBAC authorizations.

  2. Register the SUNW.crs_framework resource type.


    # clresourcetype register -Z zcname SUNW.crs_framework
    
  3. Add an instance of the SUNW.crs_framework resource type to the RAC framework resource group.

    For information about this resource group, see Registering and Configuring the RAC Framework Resource Group.

    Set a strong dependency by the instance of SUNW.crs_framework on the instance of SUNW.rac_framework in the RAC framework resource group.

    You might have configured a storage resource for the storage that you are using for database files. In this situation, set an offline-restart dependency by the instance of SUNW.crs_framework on the storage resource. Limit the scope of this dependency to only the node where the storage resource is running.

    You might have configured a storage resource for the file system that you are using for binary files. In this situation, set an offline-restart dependency by the instance of SUNW.crs_framework on the storage resource. Limit the scope of this dependency to only the node where the storage resource is running. Set the dependency on the resource that you created in How to Create a Resource for a File-System Mountpoint in Zone Cluster.


    # clresource create -Z zcname -t SUNW.crs_framework \
    -g rac-fmwk-rg \
    -p resource_dependencies=rac-fmwk-rs \
    [-p resource_dependencies_offline_restart=db-storage-rs{local_node} \
    [,bin-storage-rs{local_node}]] \
    crs-fmwk-rs
    
  4. Create a scalable resource group to contain the proxy resource for the Oracle RAC database server.

    Set a strong positive affinity by the scalable resource group for the RAC framework resource group.

    You might have configured a storage resource for the storage that you are using for database files. In this situation, set a strong positive affinity by the scalable resource group for the resource group that contains the storage resource for database files.


    Tip –

    If you require Sun Cluster Support for Oracle RAC to run on all cluster nodes, specify the -S option in the command that follows and omit the options -n, -p maximum_primaries, -p desired_primaries, and -p rg_mode.



    # clresourcegroup create -Z zcname -n nodelist \
    -p maximum_primaries=num-in-list \
    -p desired_primaries=num-in-list \
    -p rg_affinities=++rac-fmwk-rg[,db-storage-rg] \
    [-p rg_description="description" \]
    -p rg_mode=Scalable \
    rac-db-rg
    
  5. Register the SUNW.scalable_rac_server_proxy resource type.


    # clresourcetype register -Z zcname SUNW.scalable_rac_server_proxy
    
  6. Add an instance of the SUNW.scalable_rac_server_proxy resource type to the resource group that you created in Step 4.

    Set a strong dependency by the instance of SUNW.scalable_rac_server_proxy on the instance of SUNW.rac_framework in the RAC framework resource group.

    Set an offline-restart dependency by the instance of SUNW.scalable_rac_server_proxy on the instance of SUNW.crs_framework that you created in Step 3.

    You might have configured a storage resource for the storage that you are using for database files. In this situation, set an offline-restart dependency by the instance of SUNW.scalable_rac_server_proxy on the storage resource. Limit the scope of this dependency to only the node where the storage resource is running.

    Set a different value of the oracle_sid extension property for each node that can master the resource.


    # clresource create -Z zcname -g rac-db-rg \
    -t SUNW.scalable_rac_server_proxy \
    -p resource_dependencies=rac-fmwk-rs \
    -p resource_dependencies_offline_restart=crs-fmwk-rs \
    [,db-storage-rs,bin-storage-rs] \
    -p oracle_home=ora-home \
    -p crs_home=crs-home \
    -p db_name=db-name \
    -p oracle_sid{node1-id}=sid-node1 \
    [ -p oracle_sid{node2-id}=sid-node2 \…]
    rac-srvr-proxy-rs
    
  7. Bring online the resource group that you created in Step 4.


    # clresourcegroup online -Z zcname -emM rac-db-rg
    

Procedure: How to Create an Oracle CRS Resource for Interoperation With Sun Cluster

Oracle CRS resources are similar to Sun Cluster resources. Oracle CRS resources represent items that Oracle CRS manages in a similar way to how Sun Cluster resources represent items that the Sun Cluster RGM manages.

Depending on your configuration, some Oracle components that are represented as CRS resources might depend on file systems and global devices that Sun Cluster manages. For example, if you are using file systems and global devices for Oracle files, the Oracle RAC database and the Oracle listener might depend on these file systems and global devices.

Create an Oracle CRS resource for each Sun Cluster resource for scalable device groups and scalable file-system mountpoints on which Oracle components depend. The Oracle CRS resources that you create track the status of their associated Sun Cluster resources. These Oracle CRS resources also ensure that the Oracle CRS resources that depend on them are started in an orderly way.

Perform this task on each cluster node where Sun Cluster Support for Oracle RAC is to run.


Note –

Some steps in this procedure require you to use Oracle CRS commands. In these steps, the syntax of the command for Oracle release 10g R2 is provided. If you are using a version of Oracle other than 10g R2, see your Oracle documentation for the correct command syntax.



Note –

To create an Oracle CRS resource in a zone cluster, you should perform the steps in this procedure in that zone cluster.


  1. On the node where you are performing this task (a global-cluster node for a global cluster, or a zone-cluster node for a zone cluster), become superuser.

  2. If the /var/cluster/ucmm/profile directory does not exist, create it.

    Profiles for CRS resources are created in this directory.


    # mkdir -p /var/cluster/ucmm/profile
    
  3. Create a profile for the Oracle CRS resource.


    # crs-home/bin/crs_profile \
    -create sun.node.sc-rs \
    -t application -d "description" \
    -dir /var/cluster/ucmm/profile \
    -a /opt/SUNWscor/dsconfig/bin/scproxy_crs_action \
    -p restricted -h node -f -o st=1800
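
    For example, with the resource name from the earlier naming example (sun.pclus1.scal-dg-rs), node pclus1, and a hypothetical CRS home directory of /u01/crs, the command might look like the following sketch:

    # /u01/crs/bin/crs_profile \
    -create sun.pclus1.scal-dg-rs \
    -t application -d "Proxy for Sun Cluster resource scal-dg-rs" \
    -dir /var/cluster/ucmm/profile \
    -a /opt/SUNWscor/dsconfig/bin/scproxy_crs_action \
    -p restricted -h pclus1 -f -o st=1800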
    
  4. Register the Oracle CRS resource for which you created a profile in Step 3.


    # crs-home/bin/crs_register sun.node.sc-rs \
    -dir /var/cluster/ucmm/profile
    
  5. Ensure that the Sun Cluster resource for which the Oracle CRS resource is a proxy is online.

    1. Obtain the state of the Sun Cluster resource.


      # clresource status sc-rs
      
    2. If the state of the Sun Cluster resource is not online, bring online the resource group that contains the Sun Cluster resource.

      If the state of the Sun Cluster resource is online, omit this step.


      # clresourcegroup online -emM sc-rg
      
  6. Start the Oracle CRS resource that you registered in Step 4.


    # crs-home/bin/crs_start sun.node.sc-rs
    
  7. Add the Oracle CRS resource that you registered in Step 4 to the list of resources that the dependent Oracle CRS resource requires.

    1. If the dependent Oracle CRS resource is the Oracle RAC database instance, obtain the name of the instance.


      # crs-home/bin/srvctl config database -d db-name | grep node
      
    2. Obtain the list of resources that the dependent Oracle CRS resource requires.


      # crs-home/bin/crs_stat -p depend-crs-rs | grep REQUIRED_RESOURCES
      
    3. Append the name of the Oracle CRS resource to the list that you obtained in Step b.


      # crs-home/bin/crs_register depend-crs-rs \
      -update -r "existing-list sun.node.sc-rs"
      

Registering and Configuring Sun Cluster Resources for Interoperation With Oracle 9i by Using Sun Cluster Maintenance Commands

The task in this section is an alternative for the resource-configuration steps in How to Automate the Startup and Shutdown of Oracle 9i RAC Database Instances.

Resources for interoperation with Oracle 9i enable you to administer RAC database instances by using Sun Cluster interfaces. These resources also provide fault monitoring and automatic fault recovery for Oracle RAC. The automatic fault recovery that this data service provides supplements the automatic fault recovery that the Oracle RAC software provides.

The following resources for interoperation with Oracle 9i are required:

Oracle 9i RAC Server Resources


Note –

If you are using Oracle 10g, no Oracle RAC server resources are required. For more information, see Creating Resources for Interoperation With Oracle 10g by Using Sun Cluster Maintenance Commands.


You require one scalable resource group for each Oracle RAC database. Each resource group contains the Oracle RAC server resource that represents all instances of the database in the cluster. Ensure that this scalable resource group is mastered on all nodes where Oracle RAC is to run.
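
For example, a cluster that hosts two RAC databases would have two such scalable resource groups, each with a strong positive affinity for the RAC framework resource group. The group names in the following sketch (sales-db-rg and hr-db-rg) are hypothetical.

    # Sketch only: one scalable resource group per Oracle 9i RAC database.
    clresourcegroup create -S -p rg_affinities=++rac-fmwk-rg sales-db-rg
    clresourcegroup create -S -p rg_affinities=++rac-fmwk-rg hr-db-rg
    # A SUNW.scalable_rac_server resource that represents all instances of
    # the corresponding database is then added to each group, as shown in
    # the procedure that follows.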

Oracle 9i Listener Resources


Note –

If you are using Oracle 10g, no Oracle listener resources are required. For more information, see Creating Resources for Interoperation With Oracle 10g by Using Sun Cluster Maintenance Commands.


If your configuration of Oracle RAC requires Oracle listeners, configure each listener to serve only one RAC database instance. This configuration provides the highest availability and scalability, and the easiest management.


Note –

Not all configurations of Oracle RAC require Oracle listeners. For example, if the RAC database server and the database client are running on the same machine, no Oracle listeners are required.


If your configuration includes Oracle listeners, configure one scalable resource to represent all listeners that serve a specific RAC database. Configure the listener resource as follows:

Logical Hostname Resources for Oracle 9i Listener Resources


Note –

If you are using Oracle 10g, no LogicalHostname resources are required.


To ensure that Oracle listeners can continue to access the database after failure of an instance on a node, each node requires a logical hostname resource. On each node, the scalable Oracle listener listens on an IP address that is represented by the logical hostname resource.

If a cluster node that is running an instance of Oracle RAC fails, an operation that a client application attempted might be required to time out before the operation is attempted again on another instance. If the Transmission Control Protocol/Internet Protocol (TCP/IP) network timeout is high, the client application might require a significant length of time to detect the failure. Typically, client applications require between three and nine minutes to detect such failures.

In such situations, client applications can connect to listener resources that are listening on an address that is represented by the Sun Cluster logical hostname resource. If a node fails, the resource group that contains the logical hostname resource fails over to another surviving node on which Oracle RAC is running. The failover of the logical hostname resource enables new connections to be directed to the other instance of Oracle RAC.

Configure LogicalHostname resources for each listener resource as follows:

Procedure: How to Register and Configure Sun Cluster Resources in a Global Cluster for Interoperation With Oracle 9i

The SUNW.scalable_rac_server resource type represents the Oracle RAC server in a Sun Cluster configuration.

Oracle RAC server instances should be started only after the RAC framework is enabled on a cluster node. You ensure that this requirement is met by creating the following affinities and dependencies:

This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Perform this procedure on only one node of the cluster.

  1. Become superuser or assume a role that provides solaris.cluster.admin and solaris.cluster.modify RBAC authorizations.

  2. Create the logical hostname resources to represent the IP addresses on which the Oracle listeners are to listen.

    Each node where Sun Cluster Support for Oracle RAC can run requires a logical hostname resource. Create each logical hostname resource as follows:

    1. Create a failover resource group to contain the logical hostname resource.

      Set the properties of the resource group as follows:

      • Specify the node for which you are creating the logical hostname resource as the primary node.

      • Specify the remaining nodes where Sun Cluster Support for Oracle RAC can run as potential primary nodes.

      • Choose an order for the potential primary nodes that ensures that the logical hostname resources are distributed equally throughout the cluster.

      • Ensure that the resource group is failed back to the primary node when the database instance on the primary node recovers after a failure.


      # clresourcegroup create -n nodelist -p failback=true \
      [-p rg_description="description" \]
      lh-name-rg
      
      -n nodelist

      Specifies a comma-separated list of names of the nodes that can master this resource group. Ensure that the node for which you are creating the logical hostname resource appears first in the list. Choose an order for the remaining nodes that ensures that the logical hostname resources are distributed equally throughout the cluster.

      -p rg_description="description"

      Specifies an optional brief description of the resource group. This description is displayed when you use Sun Cluster maintenance commands to obtain information about the resource group.

      lh-name-rg

      Specifies your choice of name to assign to the resource group.

    2. Add a logical hostname resource to the resource group that you created in Step a.


      # clreslogicalhostname create -h lh-name -g lh-name-rg lh-name-rs
      
      -h lh-name

      Specifies the logical hostname that this resource is to make available. An entry for this logical hostname must exist in the name service database.

      -g lh-name-rg

      Specifies that you are adding the resource to the resource group that you created in Step a.

      lh-name-rs

      Specifies your choice of name to assign to the logical hostname resource.

  3. Create a scalable resource group to contain the Oracle RAC server resource and Oracle listener resource.


    # clresourcegroup create -n nodelist \
    -p maximum_primaries=num-in-list \
    -p desired_primaries=num-in-list \
    -p rg_affinities=++rac-fmwk-rg \
    [-p rg_description="description" \]
    -p rg_mode=Scalable rac-db-rg
    
    -n nodelist

    Specifies a comma-separated list of cluster nodes on which Sun Cluster Support for Oracle RAC is to be enabled. The Sun Cluster Support for Oracle RAC software packages must be installed on each node in this list.

    -p maximum_primaries=num-in-list

    Specifies the number of nodes on which Sun Cluster Support for Oracle RAC is to be enabled. This number must equal the number of nodes in nodelist.

    -p desired_primaries=num-in-list

    Specifies the number of nodes on which Sun Cluster Support for Oracle RAC is to be enabled. This number must equal the number of nodes in nodelist.

    -p rg_affinities=++rac-fmwk-rg

    Creates a strong positive affinity to the RAC framework resource group. If the RAC framework resource group was created by using the clsetup utility, the RAC framework resource group is named rac-framework-rg.

    -p rg_description="description"

    Specifies an optional brief description of the resource group. This description is displayed when you use Sun Cluster maintenance commands to obtain information about the resource group.

    -p rg_mode=Scalable

    Specifies that the resource group is scalable.

    rac-db-rg

    Specifies the name that you are assigning to the resource group.

  4. Register the SUNW.scalable_rac_listener resource type.


    # clresourcetype register SUNW.scalable_rac_listener
    
  5. Add an instance of the SUNW.scalable_rac_listener resource type to the resource group that you created in Step 3.

    When you create this resource, specify the following information about the resource:

    • The name of the Oracle listener on each node where Oracle RAC is to run. This name must match the corresponding entry in the listener.ora file for the node.

    • The Oracle home directory. The Oracle home directory contains the binary files, log files, and parameter files for the Oracle software.


    # clresource create -g rac-db-rg \
    -t SUNW.scalable_rac_listener \
    -p resource_dependencies_weak=lh-rs-list \
    [-p resource_dependencies=db-bin-rs  \]
    -p listener_name{node}=listener[…] \
    -p oracle_home=ora-home \
    rac-lsnr-rs
    
    -g rac-db-rg

    Specifies the resource group to which you are adding the resource. This resource group must be the resource group that you created in Step 3.

    [-p resource_dependencies=db-bin-rs]

    Specifies that this Oracle listener resource has a strong dependency on the storage resource for binary files. Specify this dependency only if you are using the Sun StorageTek QFS shared file system or a qualified NAS device for Oracle binary files. The storage resource for Oracle binary files is created when you perform the tasks in Registering and Configuring Storage Resources for Oracle Files.

    -p listener_name{node}=listener

    Specifies the name of the Oracle listener instance on node node. This name must match the corresponding entry in the listener.ora file.

    -p resource_dependencies_weak=lh-rs-list

    Specifies a comma-separated list of resources on which this resource is to have a weak dependency. The list must contain all the logical hostname resources that you created in Step 2.

    -p oracle_home=ora-home

    Specifies the path to the Oracle home directory. The Oracle home directory contains the binary files, log files, and parameter files for the Oracle software.

    rac-lsnr-rs

    Specifies the name that you are assigning to the SUNW.scalable_rac_listener resource.

  6. Register the SUNW.scalable_rac_server resource type.


    # clresourcetype register SUNW.scalable_rac_server
    
  7. Add an instance of the SUNW.scalable_rac_server resource type to the resource group that you created in Step 3.

    When you create this resource, specify the following information about the resource:

    • The Oracle home directory. The Oracle home directory contains the binary files, log files, and parameter files for the Oracle software.

    • The Oracle system identifier on each node where Oracle RAC is to run. This identifier is the name of the Oracle database instance on the node.

    • The full path to the alert log file on each node where Oracle RAC is to run.


    # clresource create -g rac-db-rg \
    -t SUNW.scalable_rac_server \
    -p resource_dependencies=rac-fmwk-rs \
    -p resource_dependencies_offline_restart=[db-storage-rs][,db-bin-rs] \
    -p resource_dependencies_weak=rac-lsnr-rs \
    -p oracle_home=ora-home \
    -p connect_string=string \
    -p oracle_sid{node}=ora-sid[…] \
    -p alert_log_file{node}=al-file[…] \
    rac-srvr-rs
    
    -g rac-db-rg

    Specifies the resource group to which you are adding the resource. This resource group must be the resource group that you created in Step 3.

    -p resource_dependencies=rac-fmwk-rs

    Specifies the resources on which this Oracle RAC server resource has a strong dependency.

    You must specify the RAC framework resource. If the RAC framework resource group is created by using the clsetup utility or Sun Cluster Manager, this resource is named rac-framework-rs.

    If you are using a volume manager or the Sun StorageTek QFS shared file system for database files, you must also specify the storage resource for database files.

    If you are using the Sun StorageTek QFS shared file system for Oracle binary files, you must also specify the storage resource for binary files.

    The storage resources for Oracle files are created when you perform the tasks in Registering and Configuring Storage Resources for Oracle Files.

    -p resource_dependencies_weak=rac-lsnr-rs

    Specifies a weak dependency by this Oracle RAC server resource on the Oracle listener resource that you created in Step 5.

    -p oracle_sid{node}=ora-sid

    Specifies the Oracle system identifier on node node. This identifier is the name of the Oracle database instance on the node. You must set a different value for this property on each node where Oracle RAC is to run.

    -p alert_log_file{node}=al-file

    Specifies the full path to the alert log file on node node. You must set a different value for this property on each node where Oracle RAC is to run.

    -p oracle_home=ora-home

    Specifies the path to the Oracle home directory. The Oracle home directory contains the binary files, log files, and parameter files for the Oracle software.

    -p connect_string=string

    Specifies the Oracle database user ID and password that the fault monitor uses to connect to the Oracle database. string is specified as follows:

    userid/password
    
    userid

    Specifies the Oracle database user ID that the fault monitor uses to connect to the Oracle database.

    password

    Specifies the password that is set for the Oracle database user userid.

    The database user ID and password are defined during the setup of Oracle RAC. To use Solaris authentication, type a slash (/) instead of a user ID and password. For an example of changing this property on an existing resource, see the example that follows this property list.

    rac-srvr-rs

    Specifies the name that you are assigning to the SUNW.scalable_rac_server resource.
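
    If the database user ID or password for the fault monitor changes after the resource is created, the connect_string extension property can be updated on the existing resource by using the clresource set command. The following command is only an illustrative sketch: newuserid and newpassword are hypothetical values, and rac-srvr-rs is the resource name that is used in this procedure.


    # clresource set -p connect_string=newuserid/newpassword rac-srvr-rs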

  8. Bring online the resource group that you created in Step 3.


    # clresourcegroup online -emM rac-db-rg
    
    rac-db-rg

    Specifies that the resource group that you created in Step 3 is to be moved to the MANAGED state and brought online.
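
    After the resource group is online, you can confirm the state of the resource group and its resources. The following command assumes the resource group name rac-db-rg that is used in this procedure:


    # clresourcegroup status rac-db-rg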


Example D–1 Registering and Configuring Sun Cluster Resources for Interoperation With Oracle 9i

This example shows the sequence of operations that is required to register and configure Sun Cluster resources for interoperation with Oracle 9i on a two-node cluster.

The sequence of operations is as follows:

  1. To create the logical hostname resource for node phys-schost-1, the following commands are run:


    # clresourcegroup create -n phys-schost-1,phys-schost-2 -p failback=true  \
    -p rg_description="Logical hostname schost-1 RG" \
    schost-1-rg
    # clreslogicalhostname create -h schost-1 -g schost-1-rg schost-1
    
  2. To create the logical hostname resource for node phys-schost-2, the following commands are run:


    # clresourcegroup create -n phys-schost-2,phys-schost-1 -p failback=true \
    -p rg_description="Logical hostname schost-2 RG" \
    schost-2-rg
    # clreslogicalhostname create -h schost-2 -g schost-2-rg schost-2
    
  3. To create a scalable resource group to contain the Oracle RAC server resource and Oracle listener resource, the following command is run:


    # clresourcegroup create -S \
    -p rg_affinities=++rac_framework-rg \
    -p rg_description="RAC 9i server and listener RG" \
    rac-db-rg
    
  4. To register the SUNW.scalable_rac_listener resource type, the following command is run:


    # clresourcetype register SUNW.scalable_rac_listener
    
  5. To add an instance of the SUNW.scalable_rac_listener resource type to the rac-db-rg resource group, the following command is run:


    # clresource create -g rac-db-rg \
    -t SUNW.scalable_rac_listener \
    -p resource_dependencies_weak=schost-1,schost-2 \
    -p listener_name\{phys-schost-1\}=LISTENER1 \
    -p listener_name\{phys-schost-2\}=LISTENER2 \
    -p oracle_home=/home/oracle/product/9.2.0  \
    scalable_rac_listener-rs
    

    A different value of the listener_name extension property is set for each node that can master the resource.

  6. To register the SUNW.scalable_rac_server resource type, the following command is run:


    # clresourcetype register SUNW.scalable_rac_server
    
  7. To add an instance of the SUNW.scalable_rac_server resource type to the rac-db-rg resource group, the following command is run:


    # clresource create -g rac-db-rg \
    -t SUNW.scalable_rac_server \
    -p resource_dependencies=rac_framework-rs,db-storage-rs \
    -p resource_dependencies_weak=scalable_rac_listener-rs \
    -p oracle_home=/home/oracle/product/9.2.0 \
    -p connect_string=scooter/t!g3r \
    -p oracle_sid\{phys-schost-1\}=V920RAC1 \
    -p oracle_sid\{phys-schost-2\}=V920RAC2 \
    -p alert_log_file\{phys-schost-1\}=/home/oracle/9.2.0/rdbms/log/alert_V920RAC1.log \
    -p alert_log_file\{phys-schost-2\}=/home/oracle/9.2.0/rdbms/log/alert_V920RAC2.log \
    scalable_rac_server-rs
    

    A different value of the following extension properties is set for each node that can master the resource:

    • alert_log_file

    • oracle_sid

  8. To bring online the resource group that contains the Oracle RAC server resource and Oracle listener resource, the following command is run:


    # clresourcegroup online -emM rac-db-rg
    

Next Steps

Go to Verifying the Installation and Configuration of Sun Cluster Support for Oracle RAC.

ProcedureHow to Register and Configure Sun Cluster Resources in a Zone Cluster for Interoperation With Oracle 9i

This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Perform the steps in this procedure to register and configure Sun Cluster resources in a zone cluster for interoperation with Oracle 9i.

  1. Become superuser or assume a role that provides solaris.cluster.admin and solaris.cluster.modify RBAC authorizations.

  2. Create the logical hostname resources to represent the IP addresses on which the Oracle listeners are to listen.

    Each node where Sun Cluster Support for Oracle RAC can run requires a logical hostname resource. Create each logical hostname resource as follows:

    1. Create a failover resource group to contain the logical hostname resource.

      Set the properties of the resource group as follows:

      • Specify the node for which you are creating the logical hostname resource as the primary node.

      • Specify the remaining nodes where Sun Cluster Support for Oracle RAC can run as potential primary nodes.

      • Choose an order for the potential primary nodes that ensures that the logical hostname resources are distributed equally throughout the cluster.

      • Ensure that the resource group is failed back to the primary node when the database instance on the primary node recovers after a failure.


      # clresourcegroup create -Z zcname -n nodelist -p failback=true \
      [-p rg_description="description" \]
      lh-name-rg
      
      -n nodelist

      Specifies a comma-separated list of names of the nodes that can master this resource group. Ensure that the node for which you are creating the logical hostname resource appears first in the list. Choose an order for the remaining nodes that ensures that the logical hostname resources are distributed equally throughout the cluster.

      -p rg_description="description"

      Specifies an optional brief description of the resource group. This description is displayed when you use Sun Cluster maintenance commands to obtain information about the resource group.

      lh-name-rg

      Specifies your choice of name to assign to the resource group.

    2. Add a logical hostname resource to the resource group that you created in Step a.


      # clreslogicalhostname create -Z zcname -h lh-name -g lh-name-rg lh-name-rs
      
      -h lh-name

      Specifies the logical hostname that this resource is to make available. An entry for this logical hostname must exist in the name service database.

      -g lh-name-rg

      Specifies that you are adding the resource to the resource group that you created in Step a.

      lh-name-rs

      Specifies your choice of name to assign to the logical hostname resource.
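
    For example, the following commands perform Step a and Step b for the first node of a hypothetical zone cluster that is named zc-rac, with zone-cluster nodes pzc-1 and pzc-2 and the logical hostname schost-1. All of these names are placeholders; substitute the names that apply to your configuration.


    # clresourcegroup create -Z zc-rac -n pzc-1,pzc-2 -p failback=true \
    -p rg_description="Logical hostname schost-1 RG" \
    schost-1-rg
    # clreslogicalhostname create -Z zc-rac -h schost-1 -g schost-1-rg schost-1-rs


    A similar pair of commands is run for each remaining node, with the node order in the nodelist adjusted so that the node for which the logical hostname resource is being created appears first.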

  3. Create a scalable resource group to contain the Oracle RAC server resource and Oracle listener resource.


    # clresourcegroup create -Z zcname -n nodelist \
    -p maximum_primaries=num-in-list \
    -p desired_primaries=num-in-list \
    -p rg_affinities=++rac-fmwk-rg \
    [-p rg_description="description" \]
    -p rg_mode=Scalable rac-db-rg
    
    -n nodelist

    Specifies a comma-separated list of cluster nodes on which Sun Cluster Support for Oracle RAC is to be enabled. The Sun Cluster Support for Oracle RAC software packages must be installed on each node in this list.

    -p maximum_primaries=num-in-list

    Specifies the number of nodes on which Sun Cluster Support for Oracle RAC is to be enabled. This number must equal the number of nodes in nodelist.

    -p desired_primaries=num-in-list

    Specifies the number of nodes on which Sun Cluster Support for Oracle RAC is to be enabled. This number must equal the number of nodes in nodelist.

    -p rg_affinities=++rac-fmwk-rg

    Creates a strong positive affinity to the RAC framework resource group. If the RAC framework resource group was created by using the clsetup utility, the RAC framework resource group is named rac-framework-rg.

    -p rg_description="description"

    Specifies an optional brief description of the resource group. This description is displayed when you use Sun Cluster maintenance commands to obtain information about the resource group.

    -p rg_mode=Scalable

    Specifies that the resource group is scalable.

    rac-db-rg

    Specifies the name that you are assigning to the resource group.
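
    For example, for a two-node zone cluster named zc-rac (a hypothetical name) in which the RAC framework resource group was created by using the clsetup utility and is therefore named rac-framework-rg, the command might be similar to the following:


    # clresourcegroup create -Z zc-rac -n pzc-1,pzc-2 \
    -p maximum_primaries=2 \
    -p desired_primaries=2 \
    -p rg_affinities=++rac-framework-rg \
    -p rg_description="RAC 9i server and listener RG" \
    -p rg_mode=Scalable rac-db-rg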

  4. Register the SUNW.scalable_rac_listener resource type.


    # clresourcetype register -Z zcname SUNW.scalable_rac_listener
    
  5. Add an instance of the SUNW.scalable_rac_listener resource type to the resource group that you created in Step 3.

    When you create this resource, specify the following information about the resource:

    • The name of the Oracle listener on each node where Oracle RAC is to run. This name must match the corresponding entry in the listener.ora file for the node.

    • The Oracle home directory. The Oracle home directory contains the binary files, log files, and parameter files for the Oracle software.


    # clresource create -Z zcname -g rac-db-rg \
    -t SUNW.scalable_rac_listener \
    -p resource_dependencies_weak=lh-rs-list \
    [-p resource_dependencies=db-bin-rs  \]
    -p listener_name{node}=listener[…] \
    -p oracle_home=ora-home \
    rac-lsnr-rs
    
    -g rac-db-rg

    Specifies the resource group to which you are adding the resource. This resource group must be the resource group that you created in Step 3.

    [-p resource_dependencies=db-bin-rs]

    Specifies that this Oracle listener resource has a strong dependency on the storage resource for binary files. Specify this dependency only if you are using the Sun StorageTek QFS shared file system for Oracle binary files. The storage resource for Oracle binary files is created when you perform the tasks in Registering and Configuring Storage Resources for Oracle Files.

    -p listener_name{node}=listener

    Specifies the name of the Oracle listener instance on node node. This name must match the corresponding entry in the listener.ora file.

    -p resource_dependencies_weak=lh-rs-list

    Specifies a comma-separated list of resources on which this resource is to have a weak dependency. The list must contain all the logical hostname resources that you created in Step 2.

    -p oracle_home=ora-home

    Specifies the path to the Oracle home directory. The Oracle home directory contains the binary files, log files, and parameter files for the Oracle software.

    rac-lsnr-rs

    Specifies the name that you are assigning to the SUNW.scalable_rac_listener resource.
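
    For example, the following command continues the hypothetical zc-rac configuration. It assumes the logical hostname resources schost-1-rs and schost-2-rs from Step 2, listener names LISTENER1 and LISTENER2, and an Oracle home directory of /home/oracle/product/9.2.0, all of which are placeholder values.


    # clresource create -Z zc-rac -g rac-db-rg \
    -t SUNW.scalable_rac_listener \
    -p resource_dependencies_weak=schost-1-rs,schost-2-rs \
    -p listener_name\{pzc-1\}=LISTENER1 \
    -p listener_name\{pzc-2\}=LISTENER2 \
    -p oracle_home=/home/oracle/product/9.2.0 \
    rac-lsnr-rs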

  6. Register the SUNW.scalable_rac_server resource type.


    # clresourcetype register -Z zcname SUNW.scalable_rac_server
    
  7. Add an instance of the SUNW.scalable_rac_server resource type to the resource group that you created in Step 3.

    When you create this resource, specify the following information about the resource:

    • The Oracle home directory. The Oracle home directory contains the binary files, log files, and parameter files for the Oracle software.

    • The Oracle system identifier on each node where Oracle RAC is to run. This identifier is the name of the Oracle database instance on the node.

    • The full path to the alert log file on each node where Oracle RAC is to run.


    # clresource create -Z zcname -g rac-db-rg \
    -t SUNW.scalable_rac_server \
    -p resource_dependencies=rac-fmwk-rs \
    -p resource_dependencies_offline_restart=[db-storage-rs][,db-bin-rs] \
    -p resource_dependencies_weak=rac-lsnr-rs \
    -p oracle_home=ora-home \
    -p connect_string=string \
    -p oracle_sid{node}=ora-sid[…] \
    -p alert_log_file{node}=al-file[…] \
    rac-srvr-rs
    
    -g rac-db-rg

    Specifies the resource group to which you are adding the resource. This resource group must be the resource group that you created in Step 3.

    -p resource_dependencies=rac-fmwk-rs

    Specifies that this Oracle RAC server resource has a strong dependency on the RAC framework resource. If the RAC framework resource group is created by using the clsetup utility or Sun Cluster Manager, this resource is named rac-framework-rs.

    -p resource_dependencies_offline_restart=[db-storage-rs][,db-bin-rs]

    Specifies the storage resources on which this Oracle RAC server resource has an offline-restart dependency.

    If you are using a volume manager or the Sun StorageTek QFS shared file system for database files, you must specify the storage resource for database files.

    If you are using the Sun StorageTek QFS shared file system for Oracle binary files, you must also specify the storage resource for binary files.

    The storage resources for Oracle files are created when you perform the tasks in Registering and Configuring Storage Resources for Oracle Files.

    -p resource_dependencies_weak=rac-lsnr-rs

    Specifies a weak dependency by this Oracle RAC server resource on the Oracle listener resource that you created in Step 5.

    -p oracle_sid{node}=ora-sid

    Specifies the Oracle system identifier on node node. This identifier is the name of the Oracle database instance on the node. You must set a different value for this property on each node where Oracle RAC is to run.

    -p alert_log_file{node}=al-file

    Specifies the full path to the alert log file on node node. You must set a different value for this property on each node where Oracle RAC is to run.

    -p oracle_home=ora-home

    Specifies the path to the Oracle home directory. The Oracle home directory contains the binary files, log files, and parameter files for the Oracle software.

    -p connect_string=string

    Specifies the Oracle database user ID and password that the fault monitor uses to connect to the Oracle database. string is specified as follows:

    userid/password
    
    userid

    Specifies the Oracle database user ID that the fault monitor uses to connect to the Oracle database.

    password

    Specifies the password that is set for the Oracle database user userid.

    The database user ID and password are defined during the setup of Oracle RAC. To use Solaris authentication, type a slash (/) instead of a user ID and password.

    rac-srvr-rs

    Specifies the name that you are assigning to the SUNW.scalable_rac_server resource.
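
    For example, the following command continues the hypothetical zc-rac configuration. It assumes that the RAC framework resource is named rac-framework-rs, that the storage resource for database files is named db-storage-rs, and that Solaris authentication is used for the fault monitor. Substitute the values that apply to your configuration.


    # clresource create -Z zc-rac -g rac-db-rg \
    -t SUNW.scalable_rac_server \
    -p resource_dependencies=rac-framework-rs \
    -p resource_dependencies_offline_restart=db-storage-rs \
    -p resource_dependencies_weak=rac-lsnr-rs \
    -p oracle_home=/home/oracle/product/9.2.0 \
    -p connect_string=/ \
    -p oracle_sid\{pzc-1\}=V920RAC1 \
    -p oracle_sid\{pzc-2\}=V920RAC2 \
    -p alert_log_file\{pzc-1\}=/home/oracle/9.2.0/rdbms/log/alert_V920RAC1.log \
    -p alert_log_file\{pzc-2\}=/home/oracle/9.2.0/rdbms/log/alert_V920RAC2.log \
    rac-srvr-rs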

  8. Bring online the resource group that you created in Step 3.


    # clresourcegroup online -Z zcname -emM rac-db-rg
    
    rac-db-rg

    Specifies that the resource group that you created in Step 3 is to be moved to the MANAGED state and brought online.
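
    To confirm the result, you can check the status of the resource group in the zone cluster with a command such as the following, which again assumes the hypothetical zone cluster name zc-rac:


    # clresourcegroup status -Z zc-rac rac-db-rg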