Sun Cluster Data Service for Oracle RAC Guide for Solaris OS

Creating Storage Management Resources by Using Sun Cluster Maintenance Commands

The tasks in this section are alternatives for the resource-configuration steps in How to Register and Configure Storage Resources for Oracle Files by Using clsetup.

The following resources are required to represent storage for Oracle files:

Resources for Scalable Device Groups and Scalable File-System Mountpoints

If you are using Solaris Volume Manager for Sun Cluster or VxVM, configure storage resources as follows:

If you are using Sun StorageTek QFS or qualified NAS devices, configure storage resources as follows:

The resource that represents a Sun StorageTek QFS shared file system can start only if the file system's Sun StorageTek QFS metadata server is running. Similarly, the resource that represents a Sun StorageTek QFS shared file system can stop only if the file system's Sun StorageTek QFS metadata server is stopped. To meet this requirement, configure a resource for each Sun StorageTek QFS metadata server. For more information, see Resources for the Sun StorageTek QFS Metadata Server.

Resources for the Sun StorageTek QFS Metadata Server

If you are using the Sun StorageTek QFS shared file system, create one resource for each Sun StorageTek QFS metadata server. The configuration of resource groups for these resources depends on the version of Oracle that you are using.

Configuration of Sun StorageTek QFS Resource Groups With Oracle 9i and Oracle 10g R2

If you are using Oracle 9i or Oracle 10g R2, the configuration of resource groups depends on the number of file systems in your configuration.

Configuration of Sun StorageTek QFS Resource Groups With Oracle 10g R1

If you are using Oracle 10g, Oracle CRS manages RAC database instances. These database instances must be started only after all shared file systems are mounted.

You might use multiple file systems for database files and related files. For more information, see Sun StorageTek QFS File Systems for Database Files and Related Files. In this situation, ensure that the file system that contains the Oracle CRS voting disk is mounted only after the file systems for other database files have been mounted. This behavior ensures that, when a node is booted, Oracle CRS is started only after all Sun StorageTek QFS file systems are mounted.

If you are using Oracle 10g R1, the configuration of resource groups must ensure that Sun Cluster mounts the file systems in the required order. To meet this requirement, configure resource groups for the metadata servers of the file systems as follows:
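
As a hedged sketch only, with hypothetical resource-group names voting-mds-rg (for the metadata server of the file system that contains the Oracle CRS voting disk) and other-mds-rg (for the metadata servers of the other file systems), a strong positive affinity can express this ordering because Sun Cluster brings the target of the affinity online first:

# clresourcegroup set -p rg_affinities=++other-mds-rg voting-mds-rg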

How to Create a Resource for a Scalable Device Group in the Global Cluster

This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Perform this procedure on only one node of the cluster.

  1. Become superuser or assume a role that provides solaris.cluster.admin and solaris.cluster.modify RBAC authorizations.

  2. Create a scalable resource group to contain the scalable device group resource.

    Set a strong positive affinity by the resource group for the RAC framework resource group.


    Tip –

    If you require Sun Cluster Support for Oracle RAC to run on all cluster nodes, specify the -S option in the command that follows and omit the options -n, -p maximum_primaries, -p desired_primaries, and -p rg_mode.



    # clresourcegroup create -p nodelist=nodelist \
    -p desired_primaries=num-in-list \
    -p maximum_primaries=num-in-list \
    -p rg_affinities=++rac-fmwk-rg \
    [-p rg_description="description" \]
    -p rg_mode=Scalable \
    scal-dg-rg
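
    For example, if Sun Cluster Support for Oracle RAC is to run on all cluster nodes, the preceding Tip reduces this command to the following sketch:


    # clresourcegroup create -S \
    -p rg_affinities=++rac-fmwk-rg \
    [-p rg_description="description" \]
    scal-dg-rg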
    
  3. Register the SUNW.ScalDeviceGroup resource type.


    # clresourcetype register SUNW.ScalDeviceGroup
    
  4. For each scalable device group that you are using for Oracle files, add an instance of the SUNW.ScalDeviceGroup resource type to the resource group that you created in Step 2.

    Set a strong dependency for the instance of SUNW.ScalDeviceGroup on the resource in the RAC framework resource group that represents the volume manager for the device group. Limit the scope of this dependency to only the node where the SUNW.ScalDeviceGroup resource is running.


    # clresource create -t SUNW.ScalDeviceGroup -g scal-dg-rg \
    -p resource_dependencies=fm-vol-mgr-rs{local_node} \
    -p diskgroupname=disk-group scal-dg-rs
    
  5. Bring online and in a managed state the resource group that you created in Step 2.


    # clresourcegroup online -emM scal-dg-rg
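
    To verify the result, you can display the status of the resource group and its resource, for example:


    # clresourcegroup status scal-dg-rg
    # clresource status scal-dg-rs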
    

How to Create a Resource for a Scalable Device Group in a Zone Cluster

This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Perform this procedure on only one node of the cluster.

  1. Become superuser or assume a role that provides solaris.cluster.admin and solaris.cluster.modify RBAC authorizations.

  2. Create a scalable resource group to contain the scalable device group resource.

    Set a strong positive affinity by the resource group for the RAC framework resource group.


    Tip –

    If you require Sun Cluster Support for Oracle RAC to run on all cluster nodes, specify the -S option in the command that follows and omit the options -n, -p maximum_primaries, -p desired_primaries, and -p rg_mode.



    # clresourcegroup create -Z zcname -p nodelist=nodelist \
    -p desired_primaries=num-in-list \
    -p maximum_primaries=num-in-list \
    -p rg_affinities=++rac-fmwk-rg \
    [-p rg_description="description" \]
    -p rg_mode=Scalable \
    scal-dg-rg
    
  3. Register the SUNW.ScalDeviceGroup resource type.


    # clresourcetype register -Z zcname SUNW.ScalDeviceGroup
    
  4. For each scalable device group that you are using for Oracle files, add an instance of the SUNW.ScalDeviceGroup resource type to the resource group that you created in Step 2.

    Set a strong dependency for the instance of SUNW.ScalDeviceGroup on the resource in the RAC framework resource group that represents the volume manager for the device group. Limit the scope of this dependency to only the node where the SUNW.ScalDeviceGroup resource is running.


    # clresource create -Z zcname -t SUNW.ScalDeviceGroup -g scal-dg-rg \
    -p resource_dependencies=fm-vol-mgr-rs{local_node} \
    -p diskgroupname=disk-group scal-dg-rs
    
  5. Bring online and in a managed state the resource group that you created in Step 2.


    # clresourcegroup online -Z zcname -emM scal-dg-rg
    

How to Register and Configure Resources for the Sun StorageTek QFS Metadata Server in the Global Cluster

Perform this task only if you are using the Sun StorageTek QFS shared file system.

This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Perform this procedure on only one node of the cluster.

  1. Become superuser or assume a role that provides solaris.cluster.admin and solaris.cluster.modify RBAC authorizations.

  2. Create a failover resource group to contain the resources for the Sun StorageTek QFS metadata server.

    If you are also using a volume manager, set a strong positive affinity by the resource group for the resource group that contains the volume manager's scalable device-group resource. This resource group is created in How to Create a Resource for a Scalable Device Group in the Global Cluster.


    # clresourcegroup create -n nodelist \
    [-p rg_affinities=++scal-dg-rg \]
    [-p rg_description="description" \]
    qfs-mds-rg
    
  3. Register the SUNW.qfs resource type.


    # clresourcetype register SUNW.qfs
    
  4. For each Sun StorageTek QFS shared file system that you are using, add an instance of the SUNW.qfs resource type to the resource group that you created in Step 2.

    Each instance of SUNW.qfs represents the metadata server of the file system.

    If you are also using a volume manager, set a strong dependency by the instance of SUNW.qfs on the resource for the scalable device group that is to store the file system. This resource is created in How to Create a Resource for a Scalable Device Group in the Global Cluster.


    # clresource create -t SUNW.qfs -g qfs-mds-rg \
    -p qfsfilesystem=path \
    [-p resource_dependencies=scal-dg-rs \]
    qfs-mds-rs
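
    For illustration only, assuming a hypothetical shared file system that is mounted at /db_qfs/OraData and a configuration that also uses a volume manager, the command might look like this:


    # clresource create -t SUNW.qfs -g qfs-mds-rg \
    -p qfsfilesystem=/db_qfs/OraData \
    -p resource_dependencies=scal-dg-rs \
    qfs-mds-rs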
    
  5. Bring online and in a managed state the resource group that you created in Step 2.


    # clresourcegroup online -emM qfs-mds-rg
    

How to Register and Configure Resources for the Sun StorageTek QFS Metadata Server for a Zone Cluster

Perform the steps in this procedure to register and configure resources for the Sun StorageTek QFS metadata server for a zone cluster. You must perform these steps in the global cluster.

  1. Become superuser or assume a role that provides solaris.cluster.admin and solaris.cluster.modify RBAC authorizations.

  2. Create a scalable resource group to contain the SUNW.wait_zc_boot resource in the global cluster.


    # clresourcegroup create -n nodelist \
    -p rg_mode=Scalable \
    -p maximum_primaries=num-in-list \
    -p desired_primaries=num-in-list \
    [-p rg_description="description" \]
    zc-wait-rg
    
  3. Register the SUNW.wait_zc_boot resource type.


    # clresourcetype register SUNW.wait_zc_boot
    
  4. Add an instance of the SUNW.wait_zc_boot resource type to the resource group that you created in Step 2.


    # clresource create -g zc-wait-rg -t SUNW.wait_zc_boot \
    -p ZCName=zcname zc-wait-rs
    
  5. Bring online and in a managed state the resource group that you created in Step 2.


    # clresourcegroup online -emM zc-wait-rg
    
  6. Create a failover resource group to contain the resources for the Sun StorageTek QFS metadata server.

    Set a strong positive affinity by the resource group for the resource group that contains the volume manager's scalable device-group resource. This resource group is created in How to Create a Resource for a Scalable Device Group in the Global Cluster.


    # clresourcegroup create -n nodelist \
    [-p rg_affinities=++scal-dg-rg \]
    [-p rg_description="description" \]
    qfs-mds-rg

  7. Register the SUNW.qfs resource type.


    # clresourcetype register SUNW.qfs
    
  8. Add an instance of the SUNW.qfs resource type to the resource group that you created in Step 6 for each Sun StorageTek QFS shared file system that you are using.

    Each instance of SUNW.qfs represents the metadata server of the file system.

    If you are also using a volume manager, set a strong dependency by the instance of SUNW.qfs on the resource for the scalable device group that is to store the file system. This resource is created in How to Create a Resource for a Scalable Device Group in the Global Cluster.


    # clresource create -t SUNW.qfs -g qfs-mds-rg \
    -p qfsfilesystem=path \
    -p resource_dependencies=zc-wait-rs[,scal-dg-rs] \
    qfs-mds-rs

  9. Bring online and in a managed state the resource group that you created in Step 6.


    # clresourcegroup online -emM qfs-mds-rg
    

How to Create a Resource for a File-System Mountpoint in the Global Cluster

This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS.

Perform this procedure on only one node of the cluster.

  1. Become superuser or assume a role that provides solaris.cluster.admin and solaris.cluster.modify RBAC authorizations.

  2. Create a scalable resource group to contain the resource for a scalable file-system mountpoint.

    If you are also using a volume manager, set a strong positive affinity by the resource group for the resource group that contains the volume manager's scalable device-group resource. This resource group is created in How to Create a Resource for a Scalable Device Group in the Global Cluster.


    Tip –

    If you require Sun Cluster Support for Oracle RAC to run on all cluster nodes, specify the -S option in the command that follows and omit the options -n, -p maximum_primaries, -p desired_primaries, and -p rg_mode.



    # clresourcegroup create -n nodelist \
    -p desired_primaries=num-in-list \
    -p maximum_primaries=num-in-list \
    [-p rg_affinities=++scal-dg-rg \]
    [-p rg_description="description" \]
    -p rg_mode=Scalable scal-mp-rg
    
  3. Register the SUNW.ScalMountPoint resource type.


    # clresourcetype register SUNW.ScalMountPoint
    
  4. For each shared file system that requires a scalable file-system mountpoint resource, add an instance of the SUNW.ScalMountPoint resource type to the resource group that you created in Step 2.

    • For each Sun StorageTek QFS shared file system, type the following command (a worked sketch with sample values appears after this list):

      Set a strong dependency by the instance of SUNW.ScalMountPoint on the resource for the Sun StorageTek QFS metadata server for the file system. The resource for the Sun StorageTek QFS metadata server is created in How to Register and Configure Resources for the Sun StorageTek QFS Metadata Server in the Global Cluster.

      If you are also using a volume manager, set an offline-restart dependency by the instance of SUNW.ScalMountPoint on the resource for the scalable device group that is to store the file system. This resource is created in How to Create a Resource for a Scalable Device Group in the Global Cluster.


      # clresource create -t SUNW.ScalMountPoint -g scal-mp-rg \
      -p resource_dependencies=qfs-mds-rs \
      [-p resource_dependencies_offline_restart=scal-dg-rs \]
      -p mountpointdir=mp-path \
      -p filesystemtype=s-qfs \
      -p targetfilesystem=fs-name qfs-mp-rs
      
    • For each file system on a qualified NAS device, type the following command:

      If you are also using a volume manager, set an offline-restart dependency by the instance of SUNW.ScalMountPoint on the resource for the scalable device group that is to store the file system. This resource is created in How to Create a Resource for a Scalable Device Group in the Global Cluster.


      # clresource create -t SUNW.ScalMountPoint -g scal-mp-rg \
      [-p resource_dependencies_offline_restart=scal-dg-rs \]
      -p mountpointdir=mp-path \
      -p filesystemtype=nas \
      -p targetfilesystem=nas-device:fs-name nas-mp-rs
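
      As a worked sketch for the Sun StorageTek QFS case, assuming a hypothetical file system named Data that is mounted at /db_qfs/OraData and a configuration that also uses a volume manager:


      # clresource create -t SUNW.ScalMountPoint -g scal-mp-rg \
      -p resource_dependencies=qfs-mds-rs \
      -p resource_dependencies_offline_restart=scal-dg-rs \
      -p mountpointdir=/db_qfs/OraData \
      -p filesystemtype=s-qfs \
      -p targetfilesystem=Data qfs-mp-rs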
      
  5. Bring online and in a managed state the resource group that you created in Step 2.


    # clresourcegroup online -emM scal-mp-rg
    

How to Create a Resource for a File-System Mountpoint in a Zone Cluster

Perform the steps in this procedure to create a resource for a file-system mountpoint in a zone cluster. For a RAC configuration with the Sun StorageTek QFS shared file system on Solaris Volume Manager for Sun Cluster or the Sun StorageTek QFS shared file system on hardware RAID, you should create a scalable resource group to contain all the scalable mountpoint resources in the zone cluster.


Note –

The nodelist is the list of global-cluster voting nodes where the zone cluster is created.


  1. Become superuser or assume a role that provides solaris.cluster.admin and solaris.cluster.modify RBAC authorizations.

  2. Create a scalable resource group to contain the resource for a scalable file-system mountpoint in the zone cluster.

    If you are also using a volume manager, set a strong positive affinity by the resource group for the resource group that contains the volume manager's scalable device-group resource. This resource group is created in How to Create a Resource for a Scalable Device Group in the Global Cluster.


    Tip –

    If you require Sun Cluster Support for Oracle RAC to run on all cluster nodes, specify the -S option in the command that follows and omit the options -n, -p maximum_primaries, -p desired_primaries, and -p rg_mode.



    # clresourcegroup create -Z zcname -n nodelist \
    -p desired_primaries=num-in-list \
    -p maximum_primaries=num-in-list \
    [-p rg_affinities=++global:scal-dg-rg \]
    [-p rg_description="description" \]
    -p rg_mode=Scalable scal-mp-rg
    
  3. Register the SUNW.ScalMountPoint resource type.


    # clresourcetype register -Z zcname SUNW.ScalMountPoint
    
  4. For each shared file system that requires a scalable file-system mountpoint resource, add an instance of the SUNW.ScalMountPoint resource type to the resource group that you created in Step 2.

    • For each Sun StorageTek QFS shared file system, do the following:

      Set a strong dependency by the instance of SUNW.ScalMountPoint on the resource for the Sun StorageTek QFS metadata server for the file system. The resource for the Sun StorageTek QFS metadata server is created in How to Register and Configure Resources for the Sun StorageTek QFS Metadata Server in the Global Cluster.

      If you are also using a volume manager, set an offline-restart dependency by the instance of SUNW.ScalMountPoint on the resource for the scalable device group that is to store the file system. This resource is created in How to Create a Resource for a Scalable Device Group in the Global Cluster.


      # clresource create -Z zcname -t SUNW.ScalMountPoint -g scal-mp-rg \
      -p resource_dependencies=global:qfs-mds-rs \
      [-p resource_dependencies_offline_restart=global:scal-dg-rs \]
      -p mountpointdir=mp-path \
      -p filesystemtype=s-qfs \
      -p targetfilesystem=fs-name qfs-mp-rs
      
  5. Bring online and in a managed state the resource group that you created in Step 2.


    # clresourcegroup online -Z zcname -emM scal-mp-rg
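
    To verify the result, you can display the status of the resource group and its resources in the zone cluster, for example:


    # clresourcegroup status -Z zcname scal-mp-rg
    # clresource status -Z zcname qfs-mp-rs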