Sun Cluster Data Services Planning and Administration Guide for Solaris OS

Chapter 2 Administering Data Service Resources

This chapter describes how to use the Sun Cluster maintenance commands to manage resources, resource groups, and resource types within the cluster. To determine if you can use other tools to complete a procedure, see Tools for Data Service Resource Administration.

For overview information about resource types, resource groups, and resources, see Chapter 1, Planning for Sun Cluster Data Services and Sun Cluster Concepts Guide for Solaris OS.

This chapter contains the following sections.

Overview of Tasks for Administering Data Service Resources

The following table summarizes the tasks for installing and configuring Sun Cluster data services. The table also provides cross-references to detailed instructions for performing the tasks.

Table 2–1 Tasks for Administering Data Service Resources

Task 

Instructions 

Register a resource type 

How to Register a Resource Type

Upgrade a resource type 

How to Migrate Existing Resources to a New Version of the Resource Type

How to Install and Register an Upgrade of a Resource Type

Downgrade a resource type 

How to Downgrade a Resource to an Older Version of Its Resource Type

Create failover or scalable resource groups 

How to Create a Failover Resource Group

How to Create a Scalable Resource Group

Add logical hostnames or shared addresses and data service resources to resource groups 

How to Add a Logical Hostname Resource to a Resource Group by Using the clsetup Utility

How to Add a Logical Hostname Resource to a Resource Group Using the Command-Line Interface

How to Add a Shared Address Resource to a Resource Group by Using the clsetup Utility

How to Add a Shared Address Resource to a Resource Group Using the Command-Line Interface

How to Add a Failover Application Resource to a Resource Group

How to Add a Scalable Application Resource to a Resource Group

Enable resources and resource monitors, manage the resource group, and bring the resource group and its associated resources online 

How to Enable a Resource

How to Bring Online Resource Groups

Quiesce a resource group 

How to Quiesce a Resource Group

How to Quiesce a Resource Group Immediately

Suspend and resume automatic recovery actions of a resource group 

How to Suspend the Automatic Recovery Actions of a Resource Group

How to Suspend the Automatic Recovery Actions of a Resource Group Immediately

How to Resume the Automatic Recovery Actions of a Resource Group

Disable and enable resource monitors independent of the resource 

How to Disable a Resource Fault Monitor

How to Enable a Resource Fault Monitor

Remove resource types from the cluster 

How to Remove a Resource Type

Remove resource groups from the cluster 

How to Remove a Resource Group

Remove resources from resource groups 

How to Remove a Resource

Switch the primary for a resource group 

How to Switch the Current Primary of a Resource Group

Disable resources and move their resource group into the UNMANAGED state

How to Disable a Resource and Move Its Resource Group Into the UNMANAGED State

Display resource type, resource group, and resource configuration information 

Displaying Resource Type, Resource Group, and Resource Configuration Information

Change resource type, resource group, and resource properties 

How to Change Resource Type Properties

How to Change Resource Group Properties

How to Change Resource Properties

Clear error flags for failed Resource Group Manager (RGM) processes 

How to Clear the STOP_FAILED Error Flag on Resources

Clear the Start_failed resource state

How to Clear a Start_failed Resource State by Switching Over a Resource Group

How to Clear a Start_failed Resource State by Restarting a Resource Group

How to Clear a Start_failed Resource State by Disabling and Enabling a Resource

Reregister the built-in resource types LogicalHostname and SharedAddress

How to Reregister Preregistered Resource Types After Inadvertent Deletion

Update the network interface ID list for the network resources, and update the node list for the resource group 

Adding a Node to a Resource Group

Remove a node from a resource group 

Removing a Node From a Resource Group

Migrate an application from a global-cluster voting node to a global-cluster non-voting node 

How to Migrate the Application From a Global-Cluster Voting Node to a Global-Cluster Non-Voting Node

Set up HAStoragePlus for resource groups to synchronize the startups between those resource groups and device groups

How to Set Up the HAStoragePlus Resource Type for New Resources

How to Set Up the HAStoragePlus Resource Type for Existing Resources

How to Set Up the HAStoragePlus Resource for Cluster File Systems

How to Set Up the HAStoragePlus Resource Type by Using the clsetup Utility

Set up the HAStoragePlus to make a local Solaris ZFS highly available

How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS Highly Available

Upgrade HAStorage to HAStoragePlus

How to Upgrade From HAStorage to HAStoragePlus When Using Device Groups or CFS

How to Upgrade From HAStorage With CFS to HAStoragePlus With Highly Available Local File System

Modify online the resource for a highly available file system 

Modifying Online the Resource for a Highly Available File System

Change the global file system to local file system in a HAStoragePlus resource

Changing the Global File System to Local File System in a HAStoragePlus Resource

Upgrade the built-in resource types LogicalHostname and SharedAddress

Upgrading a Resource Type

Upgrading a Preregistered Resource Type

Upgrade the HAStoragePlus resource type

Upgrading a Resource Type

Upgrading the HAStoragePlus Resource Type

Distribute online resource groups among cluster nodes 

Distributing Online Resource Groups Among Cluster Nodes

Replicate and upgrade configuration data for resource groups, resource types, and resources 

Replicating and Upgrading Configuration Data for Resource Groups, Resource Types, and Resources

Enable Solaris SMF services to run with Sun Cluster 

Enabling Solaris SMF Services to Run With Sun Cluster

Tune fault monitors for Sun Cluster data services 

Tuning Fault Monitors for Sun Cluster Data Services


Note –

The procedures in this chapter describe how to use the Sun Cluster maintenance commands to complete these tasks. Other tools also enable you to administer your resources. See Tools for Data Service Resource Administration for details about these options.


Configuring and Administering Sun Cluster Data Services

Configuring a Sun Cluster data service involves the following tasks.

Use the procedures in this chapter to update your data service configuration after the initial configuration. For example, to change resource type, resource group, and resource properties, go to Changing Resource Type, Resource Group, and Resource Properties.

Registering a Resource Type

A resource type provides specification of common properties and callback methods that apply to all of the resources of the given type. You must register a resource type before you create a resource of that type. For details about resource types, see Chapter 1, Planning for Sun Cluster Data Services.

An administrator can register a resource type for a zone cluster by specifying a resource type registration (RTR) file that resides inside the zone cluster. In other words, the file must be under the zone root path. The RTR file inside the zone cluster cannot have the Global_zone property set to TRUE. The RTR file inside the zone cluster cannot be of type RTR_LOGICAL_HOSTNAME or RTR_SHARED_ADDRESS.

The administrator can also register a resource type for a zone cluster from the location /usr/cluster/lib/rgm/rtreg. The administrator in the zone cluster cannot modify any RTR files in this directory. This enables registering system resource types for a zone cluster, even when the RTR file has one of the properties that cannot be set directly from the zone cluster. This process provides a secure way of delivering system resource types.

The resource types in the /usr/cluster/lib/rgm/rtreg directory are for the exclusive use of the global cluster.
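
For example, an administrator inside a zone cluster might register a hypothetical resource type whose RTR file resides under the zone root path. The path and type name in the following sketch are placeholders, not a shipped data service; the same clresourcetype syntax that is shown later in this chapter is assumed:

# clresourcetype register -f /opt/XYZapp/etc/XYZ.app XYZ.app
# clresourcetype show XYZ.app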

How to Register a Resource Type


Note –

Perform this procedure from any cluster node.


Before You Begin

Ensure that you have the name for the resource type that you plan to register. The resource type name is an abbreviation for the data service name. For information about resource type names of data services that are supplied with Sun Cluster, see the release notes for your release of Sun Cluster.

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Register the resource type.


    # clresourcetype register resource-type
    
    resource-type

    Specifies the name of the resource type to add. See the release notes for your release of Sun Cluster to determine the predefined name to supply.

  3. Verify that the resource type has been registered.


    # clresourcetype show
    

Example 2–1 Registering a Resource Type

The following example registers the SUNW.krb5 resource type, which represents the HA-Kerberos KDC server application in a Sun Cluster configuration.


# clresourcetype register SUNW.krb5
# clresourcetype show SUNW.krb5

Resource Type:                                  SUNW.krb5
RT_description:                                  HA-Kerberos KDC server for Sun Cluster
RT_version:                                      3.2
API_version:                                     6
RT_basedir:                                      /opt/SUNWsckrb5/bin
Single_instance:                                 False
Proxy:                                           False
Init_nodes:                                      All potential masters
Installed_nodes:                                 <All>
Failover:                                        True
Pkglist:                                         SUNWsckrb5
RT_system:                                       False

Next Steps

After registering resource types, you can create resource groups and add resources to the resource group. For details, see Creating a Resource Group.

See Also

The following man pages:

Upgrading a Resource Type

Upgrading a resource type enables you to use new features that are introduced in the new version of the resource type. A new version of a resource type might differ from a previous version in the following ways.

Upgrading a resource type involves the tasks that are explained in the following sections:

  1. How to Install and Register an Upgrade of a Resource Type

  2. How to Migrate Existing Resources to a New Version of the Resource Type

How to Install and Register an Upgrade of a Resource Type

The instructions that follow explain how to use the clresourcetype(1CL) command to perform this task. However, you are not restricted to using the clresourcetype command for this task. Instead of the clresourcetype command, you can use SunPlex Manager or the Resource Group option of the clsetup(1CL) command to perform this task.

Before You Begin

Consult the documentation for the resource type to determine what you must do before installing the upgrade package on a node. One action from the following list will be required:

If you must reboot the node in noncluster mode, prevent a loss of service by performing a rolling upgrade. In a rolling upgrade, you install the package on each node individually while leaving the remaining nodes running in cluster mode.

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Install the package for the resource type upgrade on all cluster nodes where instances of the resource type are to be brought online.

  3. Register the new version of the resource type.

    To ensure that the correct version of the resource type is registered, you must specify the following information:

    • The resource type name

    • The resource type registration (RTR) file that defines the resource type


    # clresourcetype register -f path-to-new-rtr-file resource-type-name
    

    The format of the resource type name is as follows:

    vendor-id.base-rt-name:rt-version
    

    For an explanation of this format, see Format of Resource Type Names.

  4. Display the newly registered resource type.


    # clresourcetype show resource-type-name
    
  5. If necessary, set the Installed_nodes property to the nodes where the package for the resource type upgrade is installed.

    You must perform this step if the package for the resource type upgrade is not installed on all cluster nodes.

    The Nodelist property of all resource groups that contain instances of the resource type must be a subset of the Installed_nodes property of the resource type.


    # clresourcetype set -n installed-node-list resource-type
    
    -n installed-node-list

    Specifies the names of nodes on which this resource type is installed.
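
The following sketch consolidates Step 3 through Step 5 for a hypothetical resource type upgrade. The RTR file path, the type name XYZ.myrt, and the node names are placeholders; the versioned form of the name (for example, XYZ.myrt:2.0) can also be supplied if you need to pin a specific registered version.

# clresourcetype register -f /opt/XYZmyrt/etc/XYZ.myrt XYZ.myrt
# clresourcetype show XYZ.myrt
# clresourcetype set -n phys-schost-1,phys-schost-2 XYZ.myrt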

How to Migrate Existing Resources to a New Version of the Resource Type

The instructions that follow explain how to use the clresource(1CL) command to perform this task. However, you are not restricted to using the clresource command for this task. Instead of the clresource command, you can use SunPlex Manager or the Resource Group option of the clsetup(1CL) command to perform this task.

Before You Begin

Consult the instructions for upgrading the resource type to determine when you can migrate resources to a new version of the resource type.

The instructions might state that you cannot upgrade your existing version of the resource. If you cannot migrate the resource, consider the following alternatives:

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. For each resource of the resource type that is to be migrated, change the state of the resource or its resource group to the appropriate state.

    • If you can migrate the resource at any time, no action is required.

    • If you can migrate the resource only when the resource is unmonitored, type the following command:


      # clresource unmonitor resource
      
    • If you can migrate the resource only when the resource is offline, type the following command:


      # clresource disable resource
      

      Note –

      If other resources depend on the resource that you are migrating, this step fails. In this situation, consult the error message that is printed to determine the names of the dependent resources. Then repeat this step, specifying a comma-separated list that contains the resource that you are migrating and any dependent resources.


    • If you can migrate the resource only when the resource is disabled, type the following command:


      # clresource disable resource
      

      Note –

      If other resources depend on the resource that you are migrating, this step fails. In this situation, consult the error message that is printed to determine the names of the dependent resources. Then repeat this step, specifying a comma-separated list that contains the resource that you are migrating and any dependent resources.


    • If you can migrate the resource only when the resource group is unmanaged, type the following commands:


      # clresource disable -g resource-group +
      # clresourcegroup offline resource-group
      # clresourcegroup unmanage resource-group
      

      The replaceable items in these commands are as follows:

      resource-group

      Specifies the resource group that is to be unmanaged

  3. For each resource of the resource type that is to be migrated, change the Type_version property to the new version.

    If necessary, set other properties of the same resource to appropriate values in the same command. To set these properties, specify the -p option in the command.

    To determine whether you are required to set other properties, consult the instructions for upgrading the resource type. You might be required to set other properties for the following reasons:

    • An extension property has been introduced in the new version of the resource type.

    • The default value of an existing property has been changed in the new version of the resource type.


    # clresource set -p Type_version=new-version \
    [-p extension-property=new-value] [-p standard-property=new-value] resource
    

    Note –

    If the existing version of the resource type does not support upgrades to the new version, this step fails.


  4. Restore the previous state of the resource or resource group by reversing the command that you typed in Step 2.

    • If you can migrate the resource at any time, no action is required.


      Note –

      After migrating a resource that can be migrated at any time, the resource probe might not display the correct resource type version. In this situation, disable and re-enable the resource's fault monitor to ensure that the resource probe displays the correct resource type version.


    • If you can migrate the resource only when the resource is unmonitored, type the following command:


      # clresource monitor resource
      
    • If you can migrate the resource only when the resource is offline, type the following command:


      # clresource enable resource
      

      Note –

      If, in Step 2, you disabled other resources that depend on the resource that you are migrating, also enable those dependent resources.


    • If you can migrate the resource only when the resource is disabled, type the following command:


      # clresource enable resource
      

      Note –

      If, in Step 2, you disabled other resources that depend on the resource that you are migrating, also enable those dependent resources.


    • If you can migrate the resource only when the resource group is unmanaged, type the following commands:


      # clresource enable -g resource-group +
      # clresourcegroup manage resource-group
      # clresourcegroup online resource-group
      

Example 2–2 Migrating a Resource That Can Be Migrated Only When Offline

This example shows the migration of a resource that can be migrated only when the resource is offline. The new resource type package contains methods that are located in new paths. Because the methods are not overwritten during the installation, the resource does not need to be disabled until after the upgraded resource type is installed.

The characteristics of the resource in this example are as follows:

This example assumes that the upgrade package is already installed on all cluster nodes according to the supplier's directions.


# clresourcetype register -f /opt/XYZmyrt/etc/XYZ.myrt myrt
# clresource disable myresource
# clresource set -p Type_version=2.0 myresource
# clresource enable myresource


Example 2–3 Migrating a Resource That Can Be Migrated Only When Unmonitored

This example shows the migration of a resource that can be migrated only when the resource is unmonitored. The new resource type package contains only the monitor and RTR file. Because the monitor is overwritten during installation, monitoring of the resource must be disabled before the upgrade package is installed.

The characteristics of the resource in this example are as follows:

The following operations are performed in this example.

  1. Before the upgrade package is installed, the following command is run to disable monitoring of the resource:


    # clresource unmonitor myresource
    
  2. The upgrade package is installed on all cluster nodes according to the supplier's directions.

  3. To register the new version of the resource type, the following command is run:


    # clresourcetype register -f /opt/XYZmyrt/etc/XYZ.myrt myrt
    
  4. To change the Type_version property to the new version, the following command is run:


    # clresource set -p Type_version=2.0 myresource
    
  5. To enable monitoring of the resource after its migration, the following command is run:


    # clresource monitor myresource
    

Downgrading a Resource Type

You can downgrade a resource to an older version of its resource type. The conditions for downgrading a resource to an older version of the resource type are more restrictive than the conditions for upgrading to a newer version of the resource type. The resource group that contains the resource must be unmanaged.

How to Downgrade a Resource to an Older Version of Its Resource Type

The instructions that follow explain how to use the clresource(1CL) command to perform this task. However, you are not restricted to using the clresource command for this task. Instead of the clresource command, you can use SunPlex Manager or the Resource Group option of the clsetup(1CL) command to perform this task.

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify and solaris.cluster.admin RBAC authorizations.

  2. Switch offline the resource group that contains the resource that you are downgrading.


    clresourcegroup offline resource-group
    
  3. Disable all resources in the resource group that contains the resource that you are downgrading.


    clresource disable -g resource-group +
    
  4. Unmanage the resource group that contains the resource that you are downgrading.


    clresourcegroup unmanage resource-group
    
  5. If necessary, reregister the old version of the resource type to which you are downgrading.

    Perform this step only if the version to which you are downgrading is no longer registered. If the version to which you are downgrading is still registered, omit this step.


    clresourcetype register resource-type-name
    
  6. For the resource that you are downgrading, set the Type_version property to the old version to which you are downgrading.

    If necessary, edit other properties of the same resource to appropriate values in the same command.


    clresource set -p Type_version=old-version resource-to-downgrade
    
  7. Enable all the resources that you disabled in Step 3.


    # clresource enable -g resource-group +
    
  8. Bring to a managed state the resource group that contains the resource that you downgraded.


    # clresourcegroup manage resource-group
    
  9. Bring online the resource group that contains the resource that you downgraded.


    # clresourcegroup online resource-group
    
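
Because this procedure has no accompanying example, the following sketch consolidates the preceding steps for a hypothetical resource myresource that is being downgraded to version 1.0 in the resource group resource-group-1. All names and the version number are placeholders, and the clresourcetype register step applies only if the old version is no longer registered.

# clresourcegroup offline resource-group-1
# clresource disable -g resource-group-1 +
# clresourcegroup unmanage resource-group-1
# clresourcetype register XYZ.myrt
# clresource set -p Type_version=1.0 myresource
# clresource enable -g resource-group-1 +
# clresourcegroup manage resource-group-1
# clresourcegroup online resource-group-1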

Creating a Resource Group

A resource group contains a set of resources, all of which are brought online or offline together on a given node or set of nodes. You must create an empty resource group before you place resources into it. A resource group can be configured to run in global-cluster non-voting nodes.


Note –

The global-cluster non-voting nodes that are specified in the resource group's node list do not need to exist when the resource group is created. If a node that is specified in the node list is not detected by the RGM, a warning message is displayed, but no error results.


The two resource group types are failover and scalable. A failover resource group can be online on one node only at any time, while a scalable resource group can be online on multiple nodes simultaneously.

The following procedures explain how to use the clresourcegroup(1CL) command to create a resource group.

For conceptual information about resource groups, see Chapter 1, Planning for Sun Cluster Data Services and Sun Cluster Concepts Guide for Solaris OS.

How to Create a Failover Resource Group

A failover resource group contains the following types of resources:

The network address resources and their dependent data service resources move between cluster nodes when data services fail over or are switched over.


Note –

Perform this procedure from any cluster node.


  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Create the failover resource group.


    # clresourcegroup create [-n node-zone-list] resource-group
    
    -n node-zone-list

    Specifies a comma-separated, ordered list of nodes that can master this resource group. The format of each entry in the list is node:zone. In this format, node specifies the node name and zone specifies the name of a global-cluster non-voting node. To specify the global-cluster voting node, or to specify a node without global-cluster non-voting nodes, specify only node.

    This list is optional. If you omit this list, the resource group is created on all nodes in the cluster.


    Note –

    To achieve the highest availability, specify global-cluster non-voting nodes that are located on different global-cluster voting nodes in a failover resource group's node list, rather than specifying multiple non-voting nodes on the same voting node.


    resource-group

    Specifies your choice of the name of the failover resource group to add. This name must begin with an ASCII character.

  3. Verify that the resource group has been created.


    # clresourcegroup show resource-group
    

Example 2–4 Creating a Failover Resource Group

This example shows the creation of the failover resource group resource-group-1. The global-cluster voting nodes phys-schost-1 and phys-schost-2 can master this resource group.


# clresourcegroup create -n phys-schost-1,phys-schost-2 resource-group-1
# clresourcegroup show -v resource-group-1

=== Resource Groups and Resources ===          

Resource Group:                                 resource-group-1
RG_description:                                 <NULL>
RG_mode:                                        Failover
RG_state:                                       Unmanaged
RG_project_name:                                default
RG_affinities:                                  <NULL>
RG_SLM_type:                                    manual
Auto_start_on_new_cluster:                      True
Failback:                                       False
Nodelist:                                       phys-schost-1 phys-schost-2
Maximum_primaries:                              1
Desired_primaries:                              1
RG_dependencies:                                <NULL>
Implicit_network_dependencies:                  True
Global_resources_used:                          <All>
Pingpong_interval:                              3600
Pathprefix:                                     <NULL>
RG_System:                                      False
Suspend_automatic_recovery:                     False

Next Steps

After you create a failover resource group, you can add application resources to this resource group. See Tools for Adding Resources to Resource Groups for the procedure.

See Also

The clresourcegroup(1CL) man page.

How to Create a Scalable Resource Group

A scalable resource group is used with scalable services. The shared address feature is the Sun Cluster networking facility that enables the multiple instances of a scalable service to appear as a single service. You must first create a failover resource group that contains the shared addresses on which the scalable resources depend. Next, create a scalable resource group, and add scalable resources to that group. The node list of a scalable resource group or of the shared address resource group must not contain more than one global-cluster non-voting node on the same node. Each instance of the scalable service must run on a different cluster node.

You can configure a scalable resource group to run in a global-cluster non-voting node as well. Do not configure a scalable resource to run in multiple global-cluster non-voting nodes on the same node.


Note –

Perform this procedure from any cluster node.


  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Create the failover resource group that holds the shared addresses that the scalable resource is to use.

  3. Create the scalable resource group.


    # clresourcegroup create -p Maximum_primaries=m -p Desired_primaries=n \
    -p RG_dependencies=depend-resource-group \
    [-n node-zone-list] resource-group
    
    -p Maximum_primaries=m

    Specifies the maximum number of active primaries for this resource group.

    -p Desired_primaries=n

    Specifies the number of active primaries on which the resource group should attempt to start.

    -p RG_dependencies=depend-resource-group

    Identifies the resource group that contains the shared address resource on which the resource group that is being created depends.

    -n node-zone-list

    Specifies a comma-separated, ordered list of nodes in which this resource group is to be available. The format of each entry in the list is node:zone. In this format, node specifies the node name and zone specifies the name of a global-cluster non-voting node. To specify the global-cluster voting node, or to specify a node without global-cluster non-voting nodes, specify only node.

    This list is optional. If you omit this list, the resource group is created on all nodes in the cluster.

    The node list of the scalable resource group can contain the same list of nodename:zonename pairs as the node list of the shared address resource, or a subset of that list.

    resource-group

    Specifies your choice of the name of the scalable resource group to add. This name must begin with an ASCII character.

  4. Verify that the scalable resource group has been created.


    # clresourcegroup show resource-group
    

Example 2–5 Creating a Scalable Resource Group

This example shows the creation of the scalable resource group resource-group-1. This resource group is hosted on the global-cluster voting nodes phys-schost-1 and phys-schost-2. The scalable resource group depends on the failover resource group resource-group-2, which contains the shared address resources.


# clresourcegroup create \
-p Maximum_primaries=2 \
-p Desired_primaries=2 \
-p RG_dependencies=resource-group-2 \
-n phys-schost-1,phys-schost-2 \
resource-group-1

# clresourcegroup show resource-group-1

=== Resource Groups and Resources ===          

Resource Group:                                  resource-group-1
RG_description:                                  <NULL>
RG_mode:                                         Scalable
RG_state:                                        Unmanaged
RG_project_name:                                 default
RG_affinities:                                   <NULL>
Auto_start_on_new_cluster:                       True
Failback:                                        False
Nodelist:                                        phys-schost-1 phys-schost-2
Maximum_primaries:                               2
Desired_primaries:                               2
RG_dependencies:                                 resource-group-2
Implicit_network_dependencies:                   True
Global_resources_used:                           <All>
Pingpong_interval:                               3600
Pathprefix:                                      <NULL>
RG_System:                                       False
Suspend_automatic_recovery:                      False

Next Steps

After you have created a scalable resource group, you can add scalable application resources to the resource group. See How to Add a Scalable Application Resource to a Resource Group for details.

See Also

The clresourcegroup(1CL) man page.

Tools for Adding Resources to Resource Groups

A resource is an instantiation of a resource type. You must add resources to a resource group before the RGM can manage the resources. This section describes the following three resource types.

Sun Cluster provides the following tools for adding resources to resource groups:

You can use the wizards in Sun Cluster Manager, the clsetup utility, or the Sun Cluster maintenance commands to add logical hostname resources and shared address resources to a resource group.

Sun Cluster Manager and the clsetup utility enable you to add resources to the resource group interactively. Configuring these resources interactively reduces the possibility for configuration errors that might result from command syntax errors or omissions. Sun Cluster Manager and the clsetup utility ensure that all required resources are created and that all required dependencies between resources are set.

Always add logical hostname resources and shared address resources to failover resource groups. Add data service resources for failover data services to failover resource groups. Failover resource groups contain both the logical hostname resources and the application resources for the data service. Scalable resource groups contain only the application resources for scalable services. The shared address resources on which the scalable service depends must reside in a separate failover resource group. You must specify dependencies between the scalable application resources and the shared address resources for the data service to scale across cluster nodes.
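
As a minimal sketch of this layout, the following commands create a failover resource group for a shared address and a scalable resource group whose application resource depends on that shared address. The group, resource, host, and resource type names are hypothetical, and only options that are described in the procedures in this section are used:

# clresourcegroup create -n phys-schost-1,phys-schost-2 sa-rg
# clressharedaddress create -g sa-rg -h schost-sa sa-rs
# clresourcegroup create -p Maximum_primaries=2 -p Desired_primaries=2 \
-p RG_dependencies=sa-rg -n phys-schost-1,phys-schost-2 scal-rg
# clresource create -g scal-rg -t resource-type-1 -p Scalable=True \
-p Network_resources_used=sa-rs scal-app-rs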


Note –

The DEPRECATED flag marks the logical hostname or shared address resource as a deprecated address. These addresses are not suitable for outbound requests because they can migrate to a different cluster node as a result of a failover or switchover.


For more information about resources, see Sun Cluster Concepts Guide for Solaris OS and Chapter 1, Planning for Sun Cluster Data Services.

How to Add a Logical Hostname Resource to a Resource Group by Using the clsetup Utility

The following instructions explain how to add a logical hostname resource to a resource group by using the clsetup utility. Perform this procedure from one node only.

This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands.

Before You Begin

Ensure that the following prerequisites are met:

Ensure that you have the following information:

  1. Become superuser on any cluster node.

  2. Start the clsetup utility.


    # clsetup
    

    The clsetup main menu is displayed.

  3. Type the number that corresponds to the option for data services and press Return.

    The Data Services menu is displayed.

  4. Type the number that corresponds to the option for configuring the Logical Hostname resource and press Return.

    The clsetup utility displays the list of prerequisites for performing this task.

  5. Verify that the prerequisites are met, and press Return to continue.

    The clsetup utility displays a list of the cluster nodes where the logical hostname resource can be brought online.

  6. Select the nodes where the logical hostname resource can be brought online.

    • To accept the default selection of all listed nodes in an arbitrary order, type a and press Return.

    • To select a subset of the listed nodes, type a comma-separated or space-separated list of the numbers that correspond to the nodes. Then press Return.

    • To select all nodes in a particular order, type a comma-separated or space-separated ordered list of the numbers that correspond to the nodes and press Return.

      Ensure that the nodes are listed in the order in which the nodes are to appear in the logical hostname resource group's node list. The first node in the list is the primary node of this resource group.

  7. To confirm your selection of nodes, type d and press Return.

    The clsetup utility displays a screen where you can specify the logical hostname that the resource is to make available.

  8. Type the logical hostname that this resource is to make available and press Return.

    The clsetup utility displays the names of the Sun Cluster objects that the utility will create.

  9. If you require a different name for any Sun Cluster object, change the name as follows.

    1. Type the number that corresponds to the name that you are changing and press Return.

      The clsetup utility displays a screen where you can specify the new name.

    2. At the New Value prompt, type the new name and press Return.

    The clsetup utility returns you to the list of the names of the Sun Cluster objects that the utility will create.

  10. To confirm your selection of Sun Cluster object names, type d and press Return.

    The clsetup utility displays information about the Sun Cluster configuration that the utility will create.

  11. To create the configuration, type c and press Return.

    The clsetup utility displays a progress message to indicate that the utility is running commands to create the configuration. When configuration is complete, the clsetup utility displays the commands that the utility ran to create the configuration.

  12. (Optional) Type q and press Return repeatedly until you quit the clsetup utility.

    If you prefer, you can leave the clsetup utility running while you perform other required tasks before using the utility again. If you choose to quit clsetup, the utility recognizes your existing logical hostname resource group when you restart the utility.

  13. Verify that the logical hostname resource has been created.

    Use the clresource(1CL) utility for this purpose. By default, the clsetup utility assigns the name node_name-rg to the resource group.


    # clresource show node_name-rg
    

How to Add a Logical Hostname Resource to a Resource Group Using the Command-Line Interface


Note –

When you add a logical hostname resource to a resource group, the extension properties of the resource are set to their default values. To specify a nondefault value, you must modify the resource after you add the resource to a resource group. For more information, see How to Modify a Logical Hostname Resource or a Shared Address Resource.



Note –

Perform this procedure from any cluster node.


Before You Begin

Ensure that you have the following information.

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Add the logical hostname resource to the resource group.


    # clreslogicalhostname create -g resource-group -h hostnamelist, … [-N netiflist] resource
    
    -g resource-group

    Specifies the name of the resource group in which this resource resides.

    -h hostnamelist, …

    Specifies a comma-separated list of UNIX hostnames (logical hostnames) by which clients communicate with services in the resource group. When a logical hostname resource is added to a resource group that runs in a global-cluster non-voting node, the corresponding IP addresses are configured in that node. These IP addresses are available only to applications that are running in that global-cluster non-voting node.

    You must specify the fully qualified name with the -h option if you require a fully qualified hostname.

    -N netiflist

    Specifies an optional, comma-separated list that identifies the IPMP groups that are on each node. Each element in netiflist must be in the form of netif@node. netif can be given as an IPMP group name, such as sc_ipmp0. The node can be identified by the node name or node ID, such as sc_ipmp0@1 or sc_ipmp0@phys-schost-1.


    Note –

    Sun Cluster does not support the use of the adapter name for netif.


    resource

    Specifies an optional resource name of your choice. You cannot use the fully qualified name in the resource name.

  3. Verify that the logical hostname resource has been added.


    # clresource show resource
    

Example 2–6 Adding a Logical Hostname Resource to a Resource Group

This example shows the addition of a logical hostname resource (resource-1) to a resource group (resource-group-1).


# clreslogicalhostname create -g resource-group-1 -h schost-1 resource-1
# clresource show resource-1

=== Resources ===                              

Resource:                                        resource-1
Type:                                            SUNW.LogicalHostname:2
Type_version:                                    2
Group:                                           resource-group-1
R_description:                                   
Resource_project_name:                           default
Enabled{phats1}:                                 True
Enabled{phats2}:                                 True
Monitored{phats1}:                               True
Monitored{phats2}:                               True


Example 2–7 Adding Logical Hostname Resources That Identify IPMP Groups

This example shows the addition of the following logical hostname resources to the resource group nfs-fo-rg:


# clreslogicalhostname create -g nfs-fo-rg -h cs23-rs -N sc_ipmp0@1,sc_ipmp0@2 cs23-rs
# clreslogicalhostname create -g nfs-fo-rg -h cs24-rs -N sc_ipmp1@1,sc_ipmp1@2 cs24-rs

Next Steps

After you add logical hostname resources, see How to Bring Online Resource Groups to bring the resources online.

Troubleshooting

Adding a resource causes the Sun Cluster software to validate the resource. If the validation fails, the clreslogicalhostname command prints an error message and exits. To determine why the validation failed, check the syslog on each node for an error message. The message appears on the node that performed the validation, not necessarily the node on which you ran the clreslogicalhostname command.

See Also

The clreslogicalhostname(1CL) man page.

How to Add a Shared Address Resource to a Resource Group by Using the clsetup Utility

The following instructions explain how to add a shared address resource to a resource group by using the clsetup utility. Perform this procedure from any cluster node.

This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands.

Before You Begin

Ensure that the following prerequisites are met:

Ensure that you have the following information:

  1. Become superuser on any cluster node.

  2. Start the clsetup utility.


    # clsetup
    

    The clsetup main menu is displayed.

  3. Type the number that corresponds to the option for data services and press Return.

    The Data Services menu is displayed.

  4. Type the number that corresponds to the option for configuring the shared address resource and press Return.

    The clsetup utility displays the list of prerequisites for performing this task.

  5. Verify that the prerequisites are met, and press Return to continue.

    The clsetup utility displays a list of the cluster nodes where the shared address resource can be brought online.

  6. Select the nodes where the shared address resource can be brought online.

    • To accept the default selection of all listed nodes in an arbitrary order, type a and press Return.

    • To select a subset of the listed nodes, type a comma-separated or space-separated list of the numbers that correspond to the nodes. Then press Return.

    • To select all nodes in a particular order, type a comma-separated or space-separated ordered list of the numbers that correspond to the nodes and press Return.

  7. To confirm your selection of nodes, type d and press Return.

    The clsetup utility displays a screen where you can specify the shared address that the resource is to make available.

  8. Type the shared address that this resource is to make available and press Return.

    The clsetup utility displays the names of the Sun Cluster objects that the utility will create.

  9. If you require a different name for any Sun Cluster object, change the name as follows.

    1. Type the number that corresponds to the name that you are changing and press Return.

      The clsetup utility displays a screen where you can specify the new name.

    2. At the New Value prompt, type the new name and press Return.

    The clsetup utility returns you to the list of the names of the Sun Cluster objects that the utility will create.

  10. To confirm your selection of Sun Cluster object names, type d and press Return.

    The clsetup utility displays information about the Sun Cluster configuration that the utility will create.

  11. To create the configuration, type c and press Return.

    The clsetup utility displays a progress message to indicate that the utility is running commands to create the configuration. When configuration is complete, the clsetup utility displays the commands that the utility ran to create the configuration.

  12. (Optional) Type q and press Return repeatedly until you quit the clsetup utility.

    If you prefer, you can leave the clsetup utility running while you perform other required tasks before using the utility again. If you choose to quit clsetup, the utility recognizes your existing shared address resource group when you restart the utility.

  13. Verify that the shared address resource has been created.

    Use the clresource(1CL) utility for this purpose. By default, the clsetup utility assigns the name node_name-rg to the resource group.


    # clresource show node_name-rg
    

How to Add a Shared Address Resource to a Resource Group Using the Command-Line Interface


Note –

When you add a shared address resource to a resource group, the extension properties of the resource are set to their default values. To specify a nondefault value, you must modify the resource after you add the resource to a resource group. For more information, see How to Modify a Logical Hostname Resource or a Shared Address Resource.



Note –

Perform this procedure from any cluster node.


Before You Begin

Ensure that you have the following information.

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Add the shared address resource to the resource group.


    # clressharedaddress create -g resource-group -h hostnamelist, … \
    [-X auxnodelist] [-N netiflist] resource
    
    -g resource-group

    Specifies the resource group name. In the node list of a shared address resource, do not specify more than one global-cluster non-voting node on the same global-cluster voting node. Specify the same list of nodename:zonename pairs as the node list of the scalable resource group.

    -h hostnamelist, …

    Specifies a comma-separated list of shared address hostnames.

    -X auxnodelist

    Specifies a comma-separated list of node names or IDs that identify the cluster nodes that can host the shared address but never serve as primary if failover occurs. These nodes are mutually exclusive with the nodes that are identified as potential masters in the resource group's node list. If no auxiliary node list is explicitly specified, the list defaults to the list of all cluster node names that are not included in the node list of the resource group that contains the shared address resource.


    Note –

    To ensure that a scalable service runs in all global-cluster non-voting nodes that were created to master the service, the complete list of nodes must be included in the node list of the shared address resource group or the auxnodelist of the shared address resource. If all the nodes are listed in the node list, the auxnodelist can be omitted.


    -N netiflist

    Specifies an optional, comma-separated list that identifies the IPMP groups that are on each node. Each element in netiflist must be in the form of netif@node. netif can be given as an IPMP group name, such as sc_ipmp0. The node can be identified by the node name or node ID, such as sc_ipmp0@1 or sc_ipmp0@phys-schost-1.


    Note –

    Sun Cluster does not support the use of the adapter name for netif.


    resource

    Specifies an optional resource name of your choice.

  3. Verify that the shared address resource has been added and validated.


    # clresource show resource
    

Example 2–8 Adding a Shared Address Resource to a Resource Group

This example shows the addition of a shared address resource (resource-1) to a resource group (resource-group-1).


# clressharedaddress create -g resource-group-1 -h schost-1 resource-1
# clresource show resource-1

=== Resources ===                              

  Resource:                                        resource-1
  Type:                                            SUNW.SharedAddress:2
  Type_version:                                    2
  Group:                                           resource-group-1
  R_description:                                   
  Resource_project_name:                           default
  Enabled{phats1}:                                 False
  Enabled{phats2}:                                 False
  Monitored{phats1}:                               True
  Monitored{phats2}:                               True
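
A variant of this example (with hypothetical auxiliary node names) uses the -X option to name nodes that can host the shared address but never serve as its primary. The auxiliary nodes must not appear in the resource group's node list:

# clressharedaddress create -g resource-group-1 -h schost-1 \
-X phys-schost-3,phys-schost-4 resource-1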

Next Steps

After you add a shared address resource, use the procedure How to Bring Online Resource Groups to enable the resource.

Troubleshooting

Adding a resource causes the Sun Cluster software to validate the resource. If the validation fails, the clressharedaddress command prints an error message and exits. To determine why the validation failed, check the syslog on each node for an error message. The message appears on the node that performed the validation, not necessarily the node on which you ran the clressharedaddress command.

See Also

The clressharedaddress(1CL) man page.

How to Add a Failover Application Resource to a Resource Group

A failover application resource is an application resource that uses logical hostnames that you previously created in a failover resource group.


Note –

Perform this procedure from any cluster node.


Before You Begin

Ensure that you have the following information.


Note –

This procedure also applies to proxy resources.


  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Add a failover application resource to the resource group.


    # clresource create -g resource-group -t resource-type \
    [-p "extension-property[{node-specifier}]"=value, …] [-p standard-property=value, …] resource
    
    -g resource-group

    Specifies the name of a failover resource group. This resource group must already exist.

    -t resource-type

    Specifies the name of the resource type for the resource.

    -p "extension-property[{node-specifier}]"=value, …

    Specifies a comma-separated list of extension properties that you are setting for the resource. The extension properties that you can set depend on the resource type. To determine which extension properties to set, see the documentation for the resource type.

    node-specifier is an optional qualifier to the -p and -x options. This qualifier indicates that the extension property or properties on only the specified node or nodes are to be set when the resource is created. The specified extension properties on other nodes in the cluster are not set. If you do not include node-specifier, the specified extension properties on all nodes in the cluster are set. You can specify a node name or a node identifier for node-specifier. Examples of the syntax of node-specifier include the following:


    -p "myprop{phys-schost-1}"
    

    The braces ({}) indicate that you are setting the specified extension property on only node phys-schost-1. For most shells, the double quotation marks (") are required.

    You can also use the following syntax to set an extension property in two different global-cluster non-voting nodes that are located on two different global-cluster voting nodes:


    -x "myprop{phys-schost-1:zoneA,phys-schost-2:zoneB}"
    

    Note –

    The extension property that you specify with node-specifier must be declared in the RTR file as a per-node property. See Appendix B, Standard Properties for information about the Per_node resource property attribute.


    -p standard-property=value, …

    Specifies a comma-separated list of standard properties that you are setting for the resource. The standard properties that you can set depend on the resource type. To determine which standard properties to set, see the documentation for the resource type and Appendix B, Standard Properties.

    resource

    Specifies your choice of the name of the resource to add.

    The resource is created in the enabled state.

  3. Verify that the failover application resource has been added and validated.


    # clresource show resource
    

Example 2–9 Adding a Failover Application Resource to a Resource Group

This example shows the addition of a resource (resource-1) to a resource group (resource-group-1). The resource depends on logical hostname resources (schost-1, schost-2), which must reside in the same failover resource group that you defined previously.


# clresource create -g resource-group-1 -t resource-type-1 \
-p Network_resources_used=schost-1,schost-2 resource-1
# clresource show resource-1

=== Resources ===

  Resource:                                        resource-1
  Type:                                            resource-type-1
  Type_version:                                    
  Group:                                           resource-group-1
  R_description:                                   
  Resource_project_name:                           default
  Enabled{phats1}:                                 False
  Enabled{phats2}:                                 False
  Monitored{phats1}:                               True
  Monitored{phats2}:                               True
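
The node-specifier syntax that is described in Step 2 can also be used at creation time. The following sketch reuses the hypothetical per-node extension property myprop from that discussion; the property must be declared with the Per_node attribute in the resource type's RTR file, and all names and values are placeholders:

# clresource create -g resource-group-1 -t resource-type-1 \
-p "myprop{phys-schost-1}"=value1 \
-p "myprop{phys-schost-2}"=value2 resource-2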

Next Steps

After you add a failover application resource, use the procedure How to Bring Online Resource Groups to enable the resource.

Troubleshooting

Adding a resource causes the Sun Cluster software to validate the resource. If the validation fails, the clresource command prints an error message and exits. To determine why the validation failed, check the syslog on each node for an error message. The message appears on the node that performed the validation, not necessarily the node on which you ran the clresource command.

See Also

The clresource(1CL) man page.

How to Add a Scalable Application Resource to a Resource Group

A scalable application resource is an application resource that uses shared-address resources. The shared-address resources are in a failover resource group.


Note –

Perform this procedure from any cluster node.


Before You Begin

Ensure that you have the following information.


Note –

This procedure also applies to proxy resources.


  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Add a scalable application resource to the resource group.


    # clresource create -g resource-group -t resource-type \
    -p Network_resources_used=network-resource[,network-resource...] \
    -p Scalable=True \
    [-p "extension-property[{node-specifier}]"=value, …] [-p standard-property=value, …] resource
    
    -g resource-group

    Specifies the name of a scalable service resource group that you previously created.

    -t resource-type

    Specifies the name of the resource type for this resource.

    -p Network_resources_used=network-resource[,network-resource...]

    Specifies the list of network resources (shared addresses) on which this resource depends.

    -p Scalable=True

    Specifies that this resource is scalable.

    -p "extension-property[{node-specifier}]"=value, …

    Specifies a comma-separated list of extension properties that you are setting for the resource. The extension properties that you can set depend on the resource type. To determine which extension properties to set, see the documentation for the resource type.

    node-specifier is an optional qualifier to the -p and -x options. This qualifier indicates that the extension property or properties on only the specified node or nodes are to be set when the resource is created. The specified extension properties on other nodes in the cluster are not set. If you do not include node-specifier, the specified extension properties on all nodes in the cluster are set. You can specify a node name or a node identifier for node-specifier. Examples of the syntax of node-specifier include the following:


    -p "myprop{phys-schost-1}"
    

    The braces ({}) indicate that you are setting the specified extension property on only node phys-schost-1. For most shells, the double quotation marks (") are required.

    You can also use the following syntax to set an extension property in two different global-cluster non-voting nodes that are located on two different global-cluster voting nodes:


    -x "myprop{phys-schost-1:zoneA,phys-schost-2:zoneB}"
    

    Note –

    The extension property that you specify with node-specifier must be declared in the RTR file as a per-node property. See Appendix B, Standard Properties for information about the Per_node resource property attribute.


    -p standard-property=value, …

    Specifies a comma-separated list of standard properties that you are setting for the resource. The standard properties that you can set depend on the resource type. For scalable services, you typically set the Port_list, Load_balancing_weights, and Load_balancing_policy properties. To determine which standard properties to set, see the documentation for the resource type and Appendix B, Standard Properties.

    resource

    Specifies your choice of the name of the resource to add.

    The resource is created in the enabled state.

  3. Verify that the scalable application resource has been added and validated.


    # clresource show resource
    

Example 2–10 Adding a Scalable Application Resource to a Resource Group

This example shows the addition of a scalable application resource (resource-1) to a resource group (resource-group-1). The resource depends on the shared address resources (schost-1 and schost-2), which must reside in one or more failover resource groups that you defined previously.


# clresource create -g resource-group-1 -t resource-type-1 \
-p Network_resources_used=schost-1,schost-2 \
-p Scalable=True resource-1
# clresource show resource-1

=== Resources ===                              

  Resource:                                        resource-1
  Type:                                            resource-type-1
  Type_version:                                    
  Group:                                           resource-group-1
  R_description:                                   
  Resource_project_name:                           default
  Enabled{phats1}:                                 False
  Enabled{phats2}:                                 False
  Monitored{phats1}:                               True
  Monitored{phats2}:                               True

Next Steps

After you add a scalable application resource, follow the procedure How to Bring Online Resource Groups to enable the resource.

Troubleshooting

Adding a resource causes the Sun Cluster software to validate the resource. If the validation fails, the clresource command prints an error message and exits. To determine why the validation failed, check the syslog on each node for an error message. The message appears on the node that performed the validation, not necessarily the node on which you ran the clresource command.

See Also

The clresource(1CL) man page.

Bringing Online Resource Groups

To enable resources to begin providing HA services, you must enable the resources of the resource group, bring the resource group to the managed state, and bring the resource group online.

You can perform these tasks individually or by using a single command.

After you bring online a resource group, it is configured and ready for use. If a resource or node fails, the RGM switches the resource group online on alternate nodes to maintain availability of the resource group.

How to Bring Online Resource Groups

Perform this task from any cluster node.

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.admin RBAC authorization.

  2. Type the command to bring online the resource groups.

    • If you have intentionally disabled a resource or a fault monitor that must remain disabled, type the following command:


      # clresourcegroup online rg-list
      
      rg-list

      Specifies a comma-separated list of the names of the resource groups to bring online. The resource groups must exist. The list may contain one resource group name or more than one resource group name.

      You can omit the rg-list option. If you omit this option, all resource groups are brought online.

    • If you require the resources and their fault monitors to be enabled when the resource groups are brought online, type the following command:


      # clresourcegroup online -emM rg-list
      
      rg-list

      Specifies a comma-separated list of the names of the resource groups to bring online. The resource groups must exist. The list can contain one resource group name or more than one resource group name.

      You can omit the rg-list option. If you omit this option, all resource groups are brought online.


    Note –

    If any resource group that you are bringing online declares a strong affinity for other resource groups, this operation might fail. For more information, see Distributing Online Resource Groups Among Cluster Nodes.


  3. Verify that each resource group that you specified in Step 2 is online.

    The output from this command indicates on which nodes each resource group is online.


    # clresourcegroup status 
    

Example 2–11 Bringing Online a Resource Group

This example shows how to bring online the resource group resource-group-1 and verify its status. All resources in this resource group and their fault monitors are also enabled.


# clresourcegroup online -emM resource-group-1
# clresourcegroup status

Next Steps

If you brought resource groups online without enabling their resources and fault monitors, enable the fault monitors of any resources that you require to be enabled. For more information, see How to Enable a Resource Fault Monitor.

See Also

The clresourcegroup(1CL) man page.

Enabling a Resource

You can enable a resource that you neglected to enable when you brought online a resource group.

How to Enable a Resource


Note –

Perform this procedure from any cluster node.


Before You Begin

Ensure that you have created and have the name of the resource that you intend to enable.

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.admin RBAC authorization.

  2. Enable the resource.


    # clresource enable [-n node-zone-list] resource
    
    -n node-zone-list

    Specifies a comma-separated, ordered list of nodes on which to enable the resource. If you specify a global-cluster non-voting node, the format of each entry in the list is node:zone. In this format, node specifies the node name and zone specifies the name of a global-cluster non-voting node. To specify the global-cluster voting node, or to specify a node without global-cluster non-voting nodes, specify only node.

    This list is optional. If you omit this list, the resource is enabled on all nodes in its resource group's node list.


    Note –

    If you specify more than one node with the -n option, you can specify only one resource.


    resource

    Specifies the name of the resource that you want to enable.

  3. Verify that the resource has been enabled.


    # clresource status
    

    The output from this command indicates the state of the resource that you have enabled.
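
For example, assuming a resource named web-rs (a hypothetical name) that you want to enable on only the nodes phys-schost-1 and phys-schost-2, you might run the following commands:

# clresource enable -n phys-schost-1,phys-schost-2 web-rs
# clresource status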

See Also

The clresource(1CL) man page.

Quiescing Resource Groups

To stop a resource group from continuously switching from one node to another when a START or STOP method fails, bring it to a quiescent state. To bring a resource group to a quiescent state, you issue the clresourcegroup quiesce command.

When you quiesce a resource group, resource methods that are executing are allowed to run until they are completed. If a serious problem occurs, you might need to quiesce a resource group immediately. To do so, you specify the -k command option, which kills the Prenet_start, Start, Monitor_start, Monitor_stop, Stop, and Postnet_stop methods.


Note –

The Init, Fini, Boot, and Update methods are not killed when you specify this command option.


However, if you immediately quiesce a resource group by killing methods, you might leave one of its resources in an error state such as Start_failed or Stop_failed. You must clear these error states yourself.

How to Quiesce a Resource Group

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Quiesce the resource group.


    # clresourcegroup quiesce resource-group
    

How to Quiesce a Resource Group Immediately

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Immediately quiesce the resource group.


    # clresourcegroup quiesce -k resource-group
    

    The Prenet_start, Start, Monitor_start, Monitor_stop, Stop, and Postnet_stop methods that are associated with the resource group are killed immediately. The resource group is brought to a quiescent state.

    The clresourcegroup quiesce -k command blocks until the specified resource group has reached a quiescent state.
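
For example, assuming a resource group named web-rg (a hypothetical name), the first of the following commands quiesces the resource group while allowing running methods to complete, and the second quiesces it immediately by killing those methods:

# clresourcegroup quiesce web-rg
# clresourcegroup quiesce -k web-rg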

Suspending and Resuming the Automatic Recovery Actions of Resource Groups

You can temporarily suspend the automatic recovery actions of a resource group. You might need to suspend the automatic recovery of a resource group to investigate and fix a problem in the cluster. Or, you might need to perform maintenance on resource group services.

To suspend the automatic recovery actions of a resource group, you issue the clresourcegroup suspend command. To resume automatic recovery actions, you issue the clresourcegroup resume command.

When you suspend the automatic recovery actions of a resource group, you also bring the resource group to a quiescent state.

A suspended resource group is not automatically restarted or failed over until you explicitly issue the command that resumes automatic recovery. Whether online or offline, suspended data services remain in their current state. You can still manually switch the resource group to a different state on specified nodes. You can also still enable or disable individual resources in the resource group.

A dependency or affinity is suspended and not enforced when you suspend the automatic recovery actions of a resource group that does one of the following:

When you suspend one of these categories of resource groups, Sun Cluster displays a warning that the dependency or affinity is suspended as well.


Note –

Setting the RG_system property does not affect your ability to suspend or resume the automatic recovery actions of a resource group. However, if you suspend a resource group for which the RG_system property is set to TRUE, a warning message is produced. The RG_system property specifies that a resource group contains critical system services. If set to TRUE, the RG_system property prevents users from inadvertently stopping, deleting, or modifying a resource group or its resources.


Immediately Suspending Automatic Recovery by Killing Methods

When you suspend the automatic recovery actions of a resource group, resource methods that are executing are allowed to run until they are completed. If a serious problem occurs, you might need to suspend the automatic recovery actions of a resource group immediately. To do so, you specify the -k command option, which kills the Prenet_start, Start, Monitor_start, Monitor_stop, Stop, and Postnet_stop methods.


Note –

The Init, Fini, Boot, and Update methods are not killed when you include this command option.


However, if you immediately suspend automatic recovery actions by killing methods, you might leave one of the resource group's resources in an error state such as Start_failed or Stop_failed. You must clear these error states yourself.

How to Suspend the Automatic Recovery Actions of a Resource Group

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Suspend the automatic recovery actions of the resource group.


    # clresourcegroup suspend resource-group
    

    The resource group that you specify is not automatically started, restarted, or failed over until you resume automatic recovery actions. See How to Resume the Automatic Recovery Actions of a Resource Group.

How to Suspend the Automatic Recovery Actions of a Resource Group Immediately

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Immediately suspend the automatic recovery actions of the resource group.


    # clresourcegroup suspend -k resource-group
    

    The Prenet_start, Start, Monitor_start, Monitor_stop, Stop, and Postnet_stop methods that are associated with the resource group are killed immediately. Automatic recovery actions of the resource group are suspended. The resource group is not automatically started, restarted, or failed over until you resume automatic recovery actions. See How to Resume the Automatic Recovery Actions of a Resource Group.

    The clresourcegroup suspend -k command blocks until the specified resource group has reached a quiescent state.

How to Resume the Automatic Recovery Actions of a Resource Group

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Resume the automatic recovery actions of the resource group.


    # clresourcegroup resume resource-group
    

    The resource group that you specify is automatically started, restarted, or failed over.
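
As a brief sketch, assuming a resource group named web-rg (a hypothetical name), you might suspend automatic recovery before performing maintenance, resume it when the maintenance is complete, and then check the result:

# clresourcegroup suspend web-rg
# clresourcegroup resume web-rg
# clresourcegroup status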

Disabling and Enabling Resource Monitors

The procedures in this section explain how to disable or enable resource fault monitors, not the resources themselves. A resource can continue to operate normally while its fault monitor is disabled. However, if the fault monitor is disabled and a data service fault occurs, automatic fault recovery is not initiated.

See the clresource(1CL) man page for additional information.


Note –

Perform these procedures from any cluster node.


How to Disable a Resource Fault Monitor

  1. On any cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Disable the resource fault monitor.


    # clresource unmonitor [-n node-zone-list] resource
    
    -n node-zone-list

    Specifies a comma-separated, ordered list of nodes on which to unmonitor the resource. If you specify a global-cluster non-voting node, the format of each entry in the list is node:zone. In this format, node specifies the node name and zone specifies the name of a global-cluster non-voting node. To specify the global-cluster voting node or to specify a node without global-cluster non-voting nodes, specify only node.

    This list is optional. If you omit this list, the resource is unmonitored on all nodes in its resource group's node list.


    Note –

    If you specify more than one node with the -n option, you can specify only one resource.


    resource

    Specifies the name of the resource or resources.

  3. Run the clresource command on each cluster node and check for monitored fields (RS Monitored) to verify that the resource fault monitor has been disabled.


    # clresource show -v
    

Example 2–12 Disabling a Resource Fault Monitor


# clresource unmonitor resource-1
# clresource show -v
...
RS Monitored: no...
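
If you want to disable the fault monitor on only one node, for example a node named phys-schost-1, you might instead specify the -n option:

# clresource unmonitor -n phys-schost-1 resource-1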

How to Enable a Resource Fault Monitor

  1. On any cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Enable the resource fault monitor.


    # clresource monitor [-n node-zone-list] resource
    
    -n node-zone-list

    Specifies a comma-separated, ordered list of nodes on which to monitor the resource. If you specify a global-cluster non-voting node, the format of each entry in the list is node:zone. In this format, node specifies the node name and zone specifies the name of a global-cluster non-voting node. To specify the global-cluster voting node, or to specify a node without global-cluster non-voting nodes, specify only node.

    This list is optional. If you omit this list, the resource is monitored on all nodes in its resource group's node list.


    Note –

    If you specify more than one node with the -n option, you can specify only one resource.


    resource

    Specifies the name of the resource or resources.

  3. Run the clresource command on each cluster node and check for monitored fields (RS Monitored) to verify that the resource fault monitor has been enabled.


    # clresource show -v
    

Example 2–13 Enabling a Resource Fault Monitor


# clresource monitor resource-1
# clresource show -v
...
RS Monitored: yes...

Removing Resource Types

You do not need to remove resource types that are not in use. However, if you want to remove a resource type, follow this procedure.


Note –

Perform this procedure from any cluster node.


How to Remove a Resource Type

Removing a resource type involves disabling and removing all resources of that type in the cluster before unregistering the resource type.

Before You Begin

To identify all instances of the resource type that you are removing, type the following command:


# clresourcetype show -v
  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Disable each resource of the resource type that you are removing.


    # clresource disable resource
    
    resource

    Specifies the name of the resource to disable.

  3. Remove each resource of the resource type that you are removing.


    # clresource delete resource
    
    resource

    Specifies the name of the resource to remove.

  4. Unregister the resource type.


    # clresourcetype unregister resource-type
    
    resource-type

    Specifies the name of the resource type to unregister.

  5. Verify that the resource type has been removed.


    # clresourcetype show
    

Example 2–14 Removing a Resource Type

This example shows how to disable and remove all of the resources of a resource type (resource-type-1) and then unregister the resource type. In this example, resource-1 is a resource of the resource type resource-type-1.


# clresource disable resource-1
# clresource delete resource-1
# clresourcetype unregister resource-type-1

See Also

The following man pages:

Removing Resource Groups

To remove a resource group, you must first remove all of the resources from the resource group.


Note –

Perform this procedure from any cluster node.


How to Remove a Resource Group

Before You Begin

To identify all resources in the resource group that you are removing, type the following command:


# clresource show -v
  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Run the following command to switch the resource group offline.


    # clresourcegroup offline resource-group
    
    resource-group

    Specifies the name of the resource group to take offline.

  3. Disable all of the resources in the resource group that you are removing.


    # clresource disable resource
    
    resource

    Specifies the name of the resource to disable.

  4. Remove all of the resources from the resource group.

    For each resource, type the following command.


    # clresource delete resource
    
    resource

    Specifies the name of the resource to be removed.

  5. Remove the resource group.


    # clresourcegroup delete resource-group
    
    resource-group

    Specifies the name of the resource group to be removed.

  6. Verify that the resource group has been removed.


    # clresourcegroup show
    

Example 2–15 Removing a Resource Group

This example shows how to remove a resource group (resource-group-1) after you have removed its resource (resource-1).


# clresourcegroup offline resource-group-1
# clresource disable resource-1
# clresource delete resource-1
# clresourcegroup delete resource-group-1

See Also

The following man pages:

Removing Resources

Disable the resource before you remove it from a resource group.


Note –

Perform this procedure from any cluster node.


How to Remove a Resource

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Disable the resource that you are removing.


    # clresource disable resource
    
    resource

    Specifies the name of the resource to disable.

  3. Remove the resource.


    # clresource delete resource
    
    resource

    Specifies the name of the resource to remove.

  4. Verify that the resource has been removed.


    # clresource show
    

Example 2–16 Removing a Resource

This example shows how to disable and remove a resource (resource-1).


# clresource disable resource-1
# clresource delete resource-1

See Also

clresource(1CL)

Switching the Current Primary of a Resource Group

Use the following procedure to switch over a resource group from its current primary to another node that is to become the new primary.

How to Switch the Current Primary of a Resource Group


Note –

Perform this procedure from any cluster node.


Before You Begin

Ensure that the following conditions are met:

To see a list of potential primaries for the resource group, type the following command:


# clresourcegroup show -v
  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Switch the resource group to a new set of primaries.


    # clresourcegroup switch [-n node-zone-list] resource-group
    
    -n node-zone-list

    Specifies a comma-separated, ordered list of nodes that can master this resource group. The resource group is switched offline on all of the other nodes. The format of each entry in the list is node:zone. In this format, node specifies the node name and zone specifies the name of a global-cluster non-voting node. To specify the global-cluster voting node, or to specify a node without global-cluster non-voting nodes, specify only node.

    This list is optional. If you omit this list, the resource group is switched on all nodes in the resource group's node list.

    resource-group

    Specifies the name of the resource group to switch.


    Note –

    If any resource group that you are switching declares a strong affinity for other resource groups, the attempt to switch might fail or be delegated. For more information, see Distributing Online Resource Groups Among Cluster Nodes.


  3. Verify that the resource group has been switched to the new primary.

    The output from this command indicates the state of the resource group that has been switched over.


    # clresourcegroup status 
    

Example 2–17 Switching a Resource Group to a New Primary

This example shows how to switch the resource group resource-group-1 from its current primary phys-schost-1 to the potential primary phys-schost-2.

  1. To verify that the resource group is online on phys-schost-1, the following command is run:


    phys-schost-1# clresourcegroup status 
                
    === Cluster Resource Groups ===
    
        Group Name                   Node Name          Suspended        Status
        ----------                   ---------          ---------         ------
    
    resource-group-1               phys-schost-1             No           Online
                                   phys-schost-2             No           Offline
  2. To perform the switch, the following command is run:


    phys-schost-1# clresourcegroup switch -n phys-schost-2 resource-group-1
    
  3. To verify that the group is switched to be online on phys-schost-2, the following command is run:


    phys-schost-1# clresourcegroup status 
               
    === Cluster Resource Groups ===
    
        Group Name                   Node Name          Suspended        Status
        ----------                   ---------          ---------         ------
    
    resource-group-1               phys-schost-1             No           Offline
                                   phys-schost-2             No           Online

See Also

The clresourcegroup(1CL) man page.

Disabling Resources and Moving Their Resource Group Into the UNMANAGED State

At times, you must bring a resource group into the UNMANAGED state before you perform an administrative procedure on it. Before you move a resource group into the UNMANAGED state, you must disable all of the resources that are part of the resource group and bring the resource group offline.

See the clresourcegroup(1CL) man page for additional information.


Note –

Perform this procedure from any cluster node.


How to Disable a Resource and Move Its Resource Group Into the UNMANAGED State


Note –

When a shared address resource is disabled, the resource might still be able to respond to ping(1M) commands from some hosts. To ensure that a disabled shared address resource cannot respond to ping commands, you must bring the resource's resource group to the UNMANAGED state.


Before You Begin

Ensure that you have the following information.

To determine the resource and resource group names that you need for this procedure, type:


# clresourcegroup show -v
  1. On any cluster member, become superuser or assume a role that provides solaris.cluster.admin RBAC authorization.

  2. Disable all resources in the resource group.


    # clresource disable [-n node-zone-list] -g resource-group +
    
    -n node-zone-list

    Specifies a comma-separated, ordered list of nodes on which to disable the resource. If you specify a global-cluster non-voting node, the format of each entry in the list is node:zone. In this format, node specifies the node name and zone specifies the name of a global-cluster non-voting node. To specify the global-cluster voting node, or to specify a node without global-cluster non-voting nodes, specify only node.

    This list is optional. If you omit this list, the resource is disabled on all nodes in its resource group's node list.


    Note –

    If you specify more than one node with the -n option, you can specify only one resource.


  3. Switch the resource group offline.


    # clresourcegroup offline resource-group
    
    resource-group

    Specifies the name of the resource group to take offline.

  4. Move the resource group into the UNMANAGED state.


    # clresourcegroup unmanage resource-group
    
    resource-group

    Specifies the name of the resource group to move into the UNMANAGED state.

  5. Verify that the resources are disabled and that the resource group is in the UNMANAGED state.


    # clresourcegroup show resource-group
    

Example 2–18 Disabling a Resource and Moving Its Resource Group Into the UNMANAGED State

This example shows how to disable the resource (resource-1) and then move the resource group (resource-group-1) into the UNMANAGED state.


# clresource disable resource-1
# clresourcegroup offline resource-group-1
# clresourcegroup unmanage resource-group-1
# clresourcegroup show resource-group-1

=== Resource Groups and Resources ===

Resource Group:                                 resource-group-1
RG_description:                                 <NULL>
RG_mode:                                        Failover
RG_state:                                       Unmanaged
Failback:                                       False
Nodelist:                                       phys-schost-1 phys-schost-2

  --- Resources for Group resource-group-1 ---

  Resource:                                      resource-1
  Type:                                          SUNW.LogicalHostname:2
  Type_version:                                  2
  Group:                                         resource-group-1
  R_description:                                 
  Resource_project_name:                         default
  Enabled{phys-schost-1}:                        False
  Enabled{phys-schost-2}:                        False
  Monitored{phys-schost-1}:                      True
  Monitored{phys-schost-2}:                      True

See Also

The following man pages:

Displaying Resource Type, Resource Group, and Resource Configuration Information

Before you perform administrative procedures on resources, resource groups, or resource types, view the current configuration settings for these objects.


Note –

You can view configuration settings for resources, resource groups, and resource types from any cluster node.


You can also use the clresourcetype, clresourcegroup, and clresource commands to check status information about specific resource types, resource groups, and resources. For example, the following command displays information about only the resource apache-1.


# clresource show apache-1

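Similarly, assuming a resource group named apache-rg (a hypothetical name) and the SUNW.apache resource type, you might display their configuration with the following commands:

# clresourcegroup show -v apache-rg
# clresourcetype show SUNW.apache
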
For more information, see the following man pages:

Changing Resource Type, Resource Group, and Resource Properties

Sun Cluster defines standard properties for configuring resource types, resource groups, and resources. These standard properties are described in the following sections:

Resources also have extension properties, which are predefined for the data service that represents the resource. For a description of the extension properties of a data service, see the documentation for the data service.

To determine whether you can change a property, see the Tunable entry for the property in the description of the property.

The following procedures describe how to change properties for configuring resource types, resource groups, and resources.

How to Change Resource Type Properties


Note –

Perform this procedure from any cluster node.


Before You Begin

Ensure that you have the following information.

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Run the clresourcetype command to determine the name of the resource type that you need for this procedure.


    # clresourcetype show -v
    
  3. Change the resource type property.

    For resource types, you can change only certain properties. To determine whether you can change a property, see the Tunable entry for the property in Resource Type Properties.


    # clresourcetype set -n installed-node-list \
    [-p property=new-value] resource-type
    
    -n installed-node-list

    Specifies the names of nodes on which this resource type is installed.

    -p property=new-value

    Specifies the name of the standard property to change and the new value of the property.

    You cannot change the Installed_nodes property explicitly. To change this property, specify the -n installed-node-list option of the clresourcetype command.

  4. Verify that the resource type property has been changed.


    # clresourcetype show resource-type
    

Example 2–19 Changing a Resource Type Property

This example shows how to change the Installed_nodes property of the SUNW.apache resource type to specify that this resource type is installed on only the global-cluster voting nodes phys-schost-1 and phys-schost-2.


# clresourcetype set -n phys-schost-1,phys-schost-2 SUNW.apache
# clresourcetype show SUNW.apache

Resource Type:                                     SUNW.apache:4
  RT_description:                                  Apache Web Server on Sun Cluster
  RT_version:                                      4
  API_version:                                     2
  RT_basedir:                                      /opt/SUNWscapc/bin
  Single_instance:                                 False
  Proxy:                                           False
  Init_nodes:                                      All potential masters
  Installed_nodes:                                 All
  Failover:                                        False
  Pkglist:                                         SUNWscapc
  RT_system:                                       False

How to Change Resource Group Properties

This procedure explains how to change resource group properties. For a description of resource group properties, see Resource Group Properties.


Note –

Perform this procedure from any cluster node.


Before You Begin

Ensure that you have the following information.

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Change the resource group property.


    # clresourcegroup set -p property=new-value resource-group
    
    -p property

    Specifies the name of the property to change

    resource-group

    Specifies the name of the resource group

  3. Verify that the resource group property has been changed.


    # clresourcegroup show resource-group
    

Example 2–20 Changing a Resource Group Property

This example shows how to change the Failback property for the resource group (resource-group-1).


# clresourcegroup set -p Failback=True resource-group-1
# clresourcegroup show resource-group-1

How to Change Resource Properties

This procedure explains how to change extension properties and standard properties of a resource.


Note –

Perform this procedure from any cluster node.


Before You Begin

Ensure that you have the following information.

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. View the current resource property settings.


    # clresource show -v resource
    
  3. Change the resource property.


    # clresource set -p standard-property=new-value | -p "extension-property \
    [{node-specifier}]"=new-value resource
    
    -p standard-property=new-value

    Specifies the name of the standard property to change.

    -p "extension-property[{node-specifier}]"=new-value

    Specifies the name of the extension property to change.

    node-specifier is an optional qualifier to the -p and -x options. This qualifier indicates that the extension property or properties on only the specified node or nodes are to be set. The specified extension properties on other nodes in the cluster are not set. If you do not include node-specifier, the specified extension properties on all nodes in the cluster are set. You can specify a node name or a node identifier for node-specifier. Examples of the syntax of node-specifier include the following:


    -p "myprop{phys-schost-1}"
    

    The braces ({}) indicate that you are setting the specified extension property on only node phys-schost-1. For most shells, the double quotation marks (") are required.

    You can also use the following syntax to set an extension property in two different global-cluster non-voting nodes on two different global-cluster voting nodes:


    -x "myprop{phys-schost-1:zoneA,phys-schost-2:zoneB}"
    

    Note –

    The extension property that you specify with node-specifier must be declared in the RTR file as a per-node property. See Appendix B, Standard Properties for information about the Per_node resource property attribute.


    resource

    Specifies the name of the resource.

  4. Verify that the resource property has been changed.


    # clresource show -v resource
    

Example 2–21 Changing a Standard Resource Property

This example shows how to change the system-defined Start_timeout property for the resource (resource-1).


# clresource set -p start_timeout=30 resource-1
# clresource show -v resource-1


Example 2–22 Changing an Extension Resource Property

This example shows how to change an extension property (Log_level) for the resource (resource-1).


# clresource set -p Log_level=3 resource-1
# clresource show -v resource-1

How to Modify a Logical Hostname Resource or a Shared Address Resource

By default, logical hostname resources and shared address resources use name services for name resolution. You might configure a cluster to use a name service that is running on the same cluster. During the failover of a logical hostname resource or a shared address resource, a name service that is running on the cluster might also be failing over. If the logical hostname resource or the shared address resource uses the name service that is failing over, the resource fails to fail over.


Note –

Configuring a cluster to use a name server that is running on the same cluster might impair the availability of other services on the cluster.


To prevent such a failure to fail over, modify the logical hostname resource or the shared address resource to bypass name services. To modify the resource to bypass name services, set the CheckNameService extension property of the resource to false. You can modify the CheckNameService property at any time.


Note –

If your version of the resource type is earlier than 2, you must upgrade the resource type before you attempt to modify the resource. For more information, see Upgrading a Preregistered Resource Type.


  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Change the resource property.


    # clresource set -p CheckNameService=false resource
    
    -p CheckNameService=false

    Sets the CheckNameService extension property of the resource to false.

    resource

    Specifies the name of the logical hostname resource or shared address resource that you are modifying.
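
For example, assuming a logical hostname resource named lh-rs-1 (a hypothetical name), you might modify the resource and then verify the change as follows:

# clresource set -p CheckNameService=false lh-rs-1
# clresource show -v lh-rs-1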

Clearing the STOP_FAILED Error Flag on Resources

When the Failover_mode resource property is set to NONE or SOFT, a failure of the resource's STOP method causes the individual resource to go into the STOP_FAILED state, and the resource group that contains the resource can go into the ERROR_STOP_FAILED state.

In this situation, you cannot perform certain operations on the resource or on its resource group until you clear the STOP_FAILED error flag.

How to Clear the STOP_FAILED Error Flag on Resources


Note –

Perform this procedure from any cluster node.


Before You Begin

Ensure that you have the following information.

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Identify which resources have gone into the STOP_FAILED state and on which nodes.


    # clresource status 
    
  3. Manually stop the resources and their monitors on the nodes on which they are in STOP_FAILED state.

    This step might require that you kill processes or run commands that are specific to resource types or other commands.

  4. Clear the STOP_FAILED error flag on the resources.


    # clresource clear -f STOP_FAILED -n nodelist resource 
    
    -f STOP_FAILED

    Specifies the flag name.

    -n nodelist

    Specifies a comma-separated list of the names of the nodes where the resource is in the STOP_FAILED state. The list may contain one node name or more than one node name.

    resource

    Specifies the name of the resource.

  5. Check the resource group state on the nodes where you cleared the STOP_FAILED flag in Step 4.


    # clresourcegroup status
    

    The resource group state should now be OFFLINE or ONLINE.

    The resource group remains in the ERROR_STOP_FAILED state in the following combination of circumstances:

    • The resource group was being switched offline when the STOP method failure occurred.

    • The resource that failed to stop had a dependency on other resources in the resource group.

  6. If the resource group remains in the ERROR_STOP_FAILED state, correct the error as follows.

    1. Switch the resource group offline on the appropriate nodes.


      # clresourcegroup offline resource-group
      
      resource-group

      Specifies the name of the resource group to switch offline.

    2. Switch the resource group to the ONLINE state.
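
As a sketch, assume a resource named web-rs in a resource group named web-rg that is in the STOP_FAILED state on the node phys-schost-1 (all hypothetical names). The sequence might look like the following; the last two commands are needed only if the resource group remains in the ERROR_STOP_FAILED state:

# clresource clear -f STOP_FAILED -n phys-schost-1 web-rs
# clresourcegroup status
# clresourcegroup offline web-rg
# clresourcegroup online web-rg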

See Also

The following man pages:

Clearing the Start_failed Resource State

The Start_failed resource state indicates that a Start or Prenet_start method failed or timed out on a resource, but its resource group came online anyway. The resource group comes online even though the resource has been placed in a faulted state and might not be providing service. This state can occur if the resource's Failover_mode property is set to None or to another value that prevents the failover of the resource group.

Unlike the Stop_failed resource state, the Start_failed resource state does not prevent you or the Sun Cluster software from performing actions on the resource group. You need only to execute a command that restarts the resource.

Use any one of the following procedures to clear this condition.

How to Clear a Start_failed Resource State by Switching Over a Resource Group


Note –

Perform this procedure from any cluster node.


Before You Begin

Ensure that the following conditions are met:

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Switch the resource group to the new node.


    # clresourcegroup switch [-n node-zone-list] resource-group
    
    -n node-zone-list

    Specifies a comma-separated, ordered list of nodes that can master this resource group. This resource group is switched offline on all of the other nodes. The format of each entry in the list is node:zone. In this format, node specifies the node name and zone specifies the name of a global-cluster non-voting node. To specify the global-cluster voting node, or to specify a node without global-cluster non-voting nodes, specify only node.

    This list is optional. If you omit this list, the resource group is switched on all nodes in the resource group's node list.

    resource-group

    Specifies the name of the resource group to switch.


    Note –

    If any resource group that you are switching declares a strong affinity for other resource groups, the attempt to switch might fail or be delegated. For more information, see Distributing Online Resource Groups Among Cluster Nodes.


  3. Verify that the resource group has been switched to the new node and that the Start_failed resource state is cleared.


    # clresourcegroup status
    

    The output from this command indicates the state of the resource and the resource group that has been switched over.


Example 2–23 Clearing a Start_failed Resource State by Switching Over a Resource Group

This example shows how to clear a Start_failed resource state that has occurred on the rscon resource in the resource-group-1 resource group. The command clears this condition by switching the resource group to the global-cluster voting node phys-schost-2.

  1. To verify that the resource is in the Start_failed resource state on phys-schost-1, the following command is run:


    # clresource status
    
    === Cluster Resources ===
    
    Resource Name             Node Name       Status        Message
    --------------            ----------      -------        -------
     rscon               phys-schost-1       Faulted         Faulted
                         phys-schost-2       Offline          Offline
    
     hastor              phys-schost-1       Online          Online
                         phys-schost-2       Offline         Offline
  2. To perform the switch, the following command is run:


    # clresourcegroup switch -n phys-schost-2 resource-group-1
    
  3. To verify that the resource group is switched to be online on phys-schost-2 and that the Start_failed resource status is cleared, the following command is run:


    # clresource status
    
    
    === Cluster Resources ===
    
    Resource Name             Node Name       Status        Message
    --------------            ----------      -------        -------
     rscon               phys-schost-1       Offline         Offline
                         phys-schost-2       Online          Online
    
     hastor              phys-schost-1       Online          Online
                         phys-schost-2       Offline         Offline

See Also

The clresourcegroup(1CL) man page.

How to Clear a Start_failed Resource State by Restarting a Resource Group


Note –

Perform this procedure from any cluster node.


Before You Begin

Ensure that the following conditions are met:

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Restart the resource group.


    # clresourcegroup restart -n node resource-group
    
    -n node

    Specifies the name of the node on which the resource group is to be restarted. This resource group is switched offline on all of the other nodes.

    resource-group

    Specifies the name of the resource group to restart.

  3. Verify that the resource group has been restarted on the new node and that the Start_failed resource state is cleared.


    # clresourcegroup status
    

    The output from this command indicates the state of the resource and the resource group that has been restarted.


Example 2–24 Clearing a Start_failed Resource State by Restarting a Resource Group

This example shows how to clear a Start_failed resource state that has occurred on the rscon resource in the resource-group-1 resource group. The command clears this condition by restarting the resource group on the global-cluster voting node phys-schost-1.

  1. To verify that the resource is in the Start_failed resource state on phys-schost-1, the following command is run:


    # clresource status
    
    === Cluster Resources ===
    
    Resource Name             Node Name       Status        Message
    --------------            ----------      -------        -------
     rscon               phys-schost-1       Faulted         Faulted
                         phys-schost-2       Offline          Offline
    
     hastor              phys-schost-1       Online          Online
                         phys-schost-2       Offline         Offline
  2. To restart the resource, the following command is run:


    # clresourcegroup restart -n phys-schost-1 resource-group-1
    
  3. To verify that the resource group is restarted on phys-schost-1 and that the Start_failed resource status is cleared, the following command is run:


    # clresource status
    
    === Cluster Resources ===
    
    Resource Name             Node Name       Status        Message
    --------------            ----------      -------        -------
     rscon               phys-schost-1       Offline         Offline
     rscon               phys-schost-2       Online          Online
    
     hastor              phys-schost-1       Online          Online
     hastor              phys-schost-2       Offline         Offline

See Also

The clresourcegroup(1CL) man page.

How to Clear a Start_failed Resource State by Disabling and Enabling a Resource


Note –

Perform this procedure from any cluster node.


Before You Begin

Ensure that you have the name of the resource that you are disabling and enabling.

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Disable and then enable the resource.


    # clresource disable resource
    # clresource enable resource
    
    resource

    Specifies the name of the resource.

  3. Verify that the resource has been disabled and enabled and that the Start_failed resource state is cleared.


    # clresource status
    

    The output from this command indicates the state of the resource that has been disabled and re-enabled.


Example 2–25 Clearing a Start_failed Resource State by Disabling and Enabling a Resource

This example shows how to clear a Start_failed resource state that has occurred on the rscon resource by disabling and enabling the resource.

  1. To verify that the resource is in the Start_failed resource state, the following command is run:


    # clresource status
    
    === Cluster Resources ===
    
    Resource Name             Node Name       Status        Message
    --------------            ----------      -------        -------
     rscon               phys-schost-1       Faulted         Faulted
                         phys-schost-2       Offline          Offline
    
     hastor              phys-schost-1       Online          Online
                         phys-schost-2       Offline         Offline
  2. To disable and re-enable the resource, the following commands are run:


    # clresource disable rscon
    # clresource enable rscon
    
  3. To verify that the resource is re-enabled and that the Start_failed resource status is cleared, the following command is run:


    # clresource status
    
    
    === Cluster Resources ===
    
    Resource Name             Node Name       Status        Message
    --------------            ----------      -------        -------
     rscon               phys-schost-1       Online         Online
                         phys-schost-2       Offline        Offline
    
     hastor              phys-schost-1       Online          Online
                         phys-schost-2       Offline         Offline

See Also

The clresource(1CL) man page.

Upgrading a Preregistered Resource Type

In Sun Cluster 3.1 9/04, the following preregistered resource types are enhanced: SUNW.LogicalHostname and SUNW.SharedAddress.

The purpose of these enhancements is to enable you to modify logical hostname resources and shared address resources to bypass name services for name resolution.

Upgrade these resource types if all conditions in the following list apply:

For general instructions that explain how to upgrade a resource type, see Upgrading a Resource Type. The information that you need to complete the upgrade of the preregistered resource types is provided in the subsections that follow.

Information for Registering the New Resource Type Version

The relationship between the version of each preregistered resource type and the release of Sun Cluster is shown in the following table. The release of Sun Cluster indicates the release in which the version of the resource type was introduced.

Resource Type                Resource Type Version     Sun Cluster Release

SUNW.LogicalHostname         1.0                       3.0
                             2                         3.1 9/04

SUNW.SharedAddress           1.0                       3.0
                             2                         3.1 9/04

To determine the version of the resource type that is registered, use the clresourcetype list command or the clresourcetype show command.


Example 2–26 Registering a New Version of the SUNW.LogicalHostname Resource Type

This example shows the command for registering version 2 of the SUNW.LogicalHostname resource type during an upgrade.


# clresourcetype register SUNW.LogicalHostname:2

Information for Migrating Existing Instances of the Resource Type

The information that you need to migrate an instance of a preregistered resource type is as follows:


Example 2–27 Migrating a Logical Hostname Resource

This example shows the command for migrating the logical hostname resource lhostrs. As a result of the migration, the resource is modified to bypass name services for name resolution.


# clresource set -p CheckNameService=false -p Type_version=2 lhostrs

Reregistering Preregistered Resource Types After Inadvertent Deletion

The resource types SUNW.LogicalHostname and SUNW.SharedAddress are preregistered. All of the logical hostname and shared address resources use these resource types. You never need to register these two resource types, but you might inadvertently delete them. If you have deleted resource types inadvertently, use the following procedure to reregister them.


Note –

If you are upgrading a preregistered resource type, follow the instructions in Upgrading a Preregistered Resource Type to register the new resource type version.



Note –

Perform this procedure from any cluster node.


How to Reregister Preregistered Resource Types After Inadvertent Deletion

  1. Reregister the resource type.


    # clresourcetype register SUNW.resource-type
    
    resource-type

    Specifies the resource type to add (reregister). The resource type can be either SUNW.LogicalHostname or SUNW.SharedAddress.


Example 2–28 Reregistering a Preregistered Resource Type After Inadvertent Deletion

This example shows how to reregister the SUNW.LogicalHostname resource type.


# clresourcetype register SUNW.LogicalHostname

See Also

The clresourcetype(1CL) man page.

Adding or Removing a Node to or From a Resource Group

The procedures in this section enable you to add a node to a resource group and to remove a node from a resource group.

The procedures are slightly different, depending on whether you plan to add or remove the node to or from a failover or scalable resource group.

Failover resource groups contain network resources that both failover and scalable services use. Each IP subnetwork connected to the cluster has its own network resource that is specified and included in a failover resource group. The network resource is either a logical hostname or a shared address resource. Each network resource includes a list of IPMP groups that it uses. For failover resource groups, you must update the complete list of IPMP groups for each network resource that the resource group includes (the netiflist resource property).

The procedure for scalable resource groups involves the following steps:

  1. Repeating the procedure for failover groups that contain the network resources that the scalable resource uses

  2. Changing the scalable group to be mastered on the new set of hosts

For more information, see the clresourcegroup(1CL) man page.


Note –

Run either procedure from any cluster node.


Adding a Node to a Resource Group

The procedure to follow to add a node to a resource group depends on whether the resource group is a scalable resource group or a failover resource group. For detailed instructions, see How to Add a Node to a Scalable Resource Group and How to Add a Node to a Failover Resource Group.

You must supply the following information to complete the procedure.

Also, be sure to verify that the new node is already a cluster member.

How to Add a Node to a Scalable Resource Group

  1. For each network resource that a scalable resource in the resource group uses, make the resource group where the network resource is located run on the new node.

    See Step 1 through Step 5 in the following procedure for details.

  2. Add the new node to the list of nodes that can master the scalable resource group (the nodelist resource group property).

    This step overwrites the previous value of nodelist, and therefore you must include all of the nodes that can master the resource group here.


    # clresourcegroup set [-n node-zone-list] resource-group
    
    -n node-zone-list

    Specifies a comma-separated, ordered list of nodes that can master this resource group. This resource group is switched offline on all of the other nodes. The format of each entry in the list is node:zone. In this format, node specifies the node name and zone specifies the name of a global-cluster non-voting node. To specify the global-cluster voting node or to specify a node without global-cluster non-voting nodes, specify only node.

    This list is optional. If you omit this list, the Nodelist property is set to all nodes in the cluster.

    resource-group

    Specifies the name of the resource group to which the node is being added.

  3. (Optional) Update the scalable resource's Load_balancing_weights property to assign a weight to the node that you are adding to the resource group.

    Otherwise, the weight defaults to 1. See the clresourcegroup(1CL) man page for more information.
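
As a sketch, assume a scalable resource group named scalable-rg that contains a scalable resource named scalable-rs, and assume that the failover resource group that holds its shared address already runs on the new node phys-schost-4 (all hypothetical names). You might then update the node list and, optionally, the load-balancing weights, where each weight is expressed as weight@node:

# clresourcegroup set -n phys-schost-1,phys-schost-2,phys-schost-4 scalable-rg
# clresource set -p Load_balancing_weights=3@1,3@2,1@4 scalable-rs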

How to Add a Node to a Failover Resource Group

  1. Display the current node list and the current list of IPMP groups that are configured for each resource in the resource group.


    # clresourcegroup show -v resource-group | grep -i nodelist
    # clresourcegroup show -v resource-group | grep -i netiflist
    

    Note –

    The output of the command line for nodelist and netiflist identifies the nodes by node name. To identify node IDs, run the command clnode show -v | grep -i node-id.


  2. Update netiflist for the network resources that the node addition affects.

    This step overwrites the previous value of netiflist, and therefore you must include all the IPMP groups here.


    # clresource set  -p netiflist=netiflist network-resource
    
    -p netiflist=netiflist

    Specifies a comma-separated list that identifies the IPMP groups that are on each node. Each element in netiflist must be in the form of netif@node. netif can be given as an IPMP group name, such as sc_ipmp0. The node can be identified by the node name or node ID, such as sc_ipmp0@1 or sc_ipmp0@phys-schost-1.

    network-resource

    Specifies the name of the network resource (logical hostname or shared address) that is being hosted on the netiflist entries.

  3. If the HAStoragePlus AffinityOn extension property equals True, add the node to the appropriate disk set or device group.

    • If you are using Solaris Volume Manager, use the metaset command.


      # metaset -s disk-set-name -a -h node-name
      
      -s disk-set-name

      Specifies the name of the disk set on which the metaset command is to work

      -a

      Adds a drive or host to the specified disk set

      -h node-name

      Specifies the node to be added to the disk set

    • SPARC: If you are using Veritas Volume Manager, use the clsetup utility.

      1. On any active cluster member, start the clsetup utility.


        # clsetup
        

        The Main Menu is displayed.

      2. On the Main Menu, type the number that corresponds to the option for device groups and volumes.

      3. On the Device Groups menu, type the number that corresponds to the option for adding a node to a VxVM device group.

      4. Respond to the prompts to add the node to the VxVM device group.

  4. Update the node list to include all of the nodes that can now master this resource group.

    This step overwrites the previous value of nodelist, and therefore you must include all of the nodes that can master the resource group here.


    # clresourcegroup set [-n node-zone-list] resource-group
    
    -n node-zone-list

    Specifies a comma-separated, ordered list of nodes that can master this resource group. This resource group is switched offline on all the other nodes. The format of each entry in the list is node:zone. In this format, node specifies the node name and zone specifies the name of a global-cluster non-voting node. To specify the global-cluster voting node, or to specify a node without global-cluster non-voting nodes, specify only node.

    This list is optional. If you omit this list, the Nodelist property is set to all nodes in the cluster.

    resource-group

    Specifies the name of the resource group to which the node is being added.

  5. Verify the updated information.


    # clresourcegroup show -v resource-group | grep -i nodelist
    # clresourcegroup show -v resource-group | grep -i netiflist
    

Example 2–29 Adding a Node to a Resource Group

This example shows how to add a global-cluster voting node (phys-schost-2) to a resource group (resource-group-1) that contains a logical hostname resource (schost-2).


# clresourcegroup show -v resource-group-1 | grep -i nodelist
 Nodelist:      phys-schost-1 phys-schost-3
# clresourcegroup show -v resource-group-1 | grep -i netiflist
 Res property name: NetIfList
 Res property class: extension
 Res property description: List of IPMP interfaces on each node
 Res property type: stringarray
 Res property value: sc_ipmp0@1 sc_ipmp0@3

(Only nodes 1 and 3 have been assigned IPMP groups.
You must add an IPMP group for node 2.)

# clresource set  -p netiflist=sc_ipmp0@1,sc_ipmp0@2,sc_ipmp0@3 schost-2
# metaset -s red -a -h phys-schost-2
# clresourcegroup set -n  phys-schost-1,phys-schost-2,phys-schost-3 resource-group-1
# clresourcegroup show -v resource-group-1 | grep -i nodelist
 Nodelist:     phys-schost-1 phys-schost-2
               phys-schost-3
# clresourcegroup show -v resource-group-1 | grep -i netiflist
 Res property value: sc_ipmp0@1 sc_ipmp0@2
                     sc_ipmp0@3

Removing a Node From a Resource Group

The procedure to follow to remove a node from a resource group depends on whether the resource group is a scalable resource group or a failover resource group. For detailed instructions, see the following sections:

To complete the procedure, you must supply the following information.

Additionally, be sure to verify that the resource group is not mastered on the node that you are removing. If the resource group is mastered on the node that you are removing, run the clresourcegroup command to switch the resource group offline from that node. The following clresourcegroup command brings the resource group offline from a given node, provided that new-masters does not contain that node.


# clresourcegroup switch -n new-masters resource-group
-n new-masters

Specifies the node that is now to master the resource group.

resource-group

Specifies the name of the resource group that you are switching. This resource group is mastered on the node that you are removing.

For more information, see the clresourcegroup(1CL) man page.


Caution –

If you plan to remove a node from all the resource groups, and you use a scalable services configuration, first remove the node from the scalable resource groups. Then remove the node from the failover groups.


ProcedureHow to Remove a Node From a Scalable Resource Group

A scalable service is configured as two resource groups, as follows.

Additionally, the RG_dependencies property of the scalable resource group is set to configure the scalable group with a dependency on the failover resource group. For information about this property, see Appendix B, Standard Properties.
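For example, the following hypothetical commands show this configuration: a failover resource group sa-rg holds the shared address, and a scalable resource group scalable-rg declares a dependency on it through the RG_dependencies property. All names are examples only.


# clresourcegroup create -n phys-schost-1,phys-schost-2 sa-rg
# clresourcegroup create -p Maximum_primaries=2 -p Desired_primaries=2 \
-p RG_dependencies=sa-rg -n phys-schost-1,phys-schost-2 scalable-rg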

For details about scalable service configuration, see Sun Cluster Concepts Guide for Solaris OS.

Removing a node from the scalable resource group causes the scalable service to no longer be brought online on that node. To remove a node from the scalable resource group, perform the following steps.

  1. Remove the node from the list of nodes that can master the scalable resource group (the nodelist resource group property).


    # clresourcegroup set [-n node-zone-list] scalable-resource-group
    
    -n node-zone-list

    Specifies a comma-separated, ordered list of nodes that can master this resource group. This resource group is switched offline on all the other nodes. The format of each entry in the list is node:zone. In this format, node specifies the node name and zone specifies the name of a global-cluster non-voting node. To specify the global-cluster voting node, or to specify a node without global-cluster non-voting nodes, specify only node.

    This list is optional. If you omit this list, the Nodelist property is set to all nodes in the cluster.

    scalable-resource-group

    Specifies the name of the resource group from which the node is being removed.

  2. (Optional) Remove the node from the failover resource group that contains the shared address resource.

    For details, see How to Remove a Node From a Failover Resource Group That Contains Shared Address Resources.

  3. (Optional) Update the Load_balancing_weights property of the scalable resource to remove the weight of the node that you are removing from the resource group.
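    For example, if the node that you are removing has node ID 3 and a hypothetical scalable resource scalable-rs previously carried the weights 1@1,1@2,1@3, you might update the property as follows:


    # clresource set -p Load_balancing_weights=1@1,1@2 scalable-rs
    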

See Also

The clresourcegroup(1CL) man page.

ProcedureHow to Remove a Node From a Failover Resource Group

Perform the following steps to remove a node from a failover resource group.


Caution –

If you plan to remove a node from all of the resource groups, and you use a scalable services configuration, first remove the node from the scalable resource groups. Then use this procedure to remove the node from the failover groups.



Note –

If the failover resource group contains shared address resources that scalable services use, see How to Remove a Node From a Failover Resource Group That Contains Shared Address Resources.


  1. Update the node list to include all of the nodes that can now master this resource group.

    This step removes the node and overwrites the previous value of the node list. Be sure to include all of the nodes that can master the resource group here.


    # clresourcegroup set [-n node-zone-list] failover-resource-group
    
    -n node-zone-list

    Specifies a comma-separated, ordered list of nodes that can master this resource group. This resource group is switched offline on all the other nodes. The format of each entry in the list is node:zone. In this format, node specifies the node name and zone specifies the name of a global-cluster non-voting node. To specify the global cluster voting node, or to specify a node without global-cluster non-voting nodes, specify only node.

    This list is optional. If you omit this list, the Nodelist property is set to all nodes in the cluster.

    failover-resource-group

    Specifies the name of the resource group from which the node is being removed.

  2. Display the current list of IPMP groups that are configured for each resource in the resource group.


    # clresourcegroup show -v failover-resource-group | grep -i netiflist
    
  3. Update netiflist for network resources that the removal of the node affects.

    This step overwrites the previous value of netiflist. Be sure to include all of the IPMP groups here.


    # clresource set -p netiflist=netiflist network-resource
    

    Note –

    The output of the preceding command identifies the nodes by node name. Run the command clnode show -v | grep -i “Node ID” to find the node ID.


    -p netiflist=netiflist

    Specifies a comma-separated list that identifies the IPMP groups that are on each node. Each element in netiflist must be in the form of netif@node. netif can be given as an IPMP group name, such as sc_ipmp0. The node can be identified by the node name or node ID, such as sc_ipmp0@1 or sc_ipmp0@phys-schost-1.

    network-resource

    Specifies the name of the network resource that is hosted on the netiflist entries.


    Note –

    Sun Cluster does not support the use of the adapter name for netif.


  4. Verify the updated information.


    # clresourcegroup show -v failover-resource-group | grep -i nodelist
    # clresourcegroup show -v failover-resource-group | grep -i netiflist
    

ProcedureHow to Remove a Node From a Failover Resource Group That Contains Shared Address Resources

In a failover resource group that contains shared address resources that scalable services use, a node can appear in the following locations.

To remove the node from the node list of the failover resource group, follow the procedure How to Remove a Node From a Failover Resource Group.

To modify the auxnodelist of the shared address resource, you must remove and re-create the shared address resource.

If you remove the node from the failover group's node list, you can continue to use the shared address resource on that node to provide scalable services. To continue to use the shared address resource, you must add the node to the auxnodelist of the shared address resource. To add the node to the auxnodelist, perform the following steps.


Note –

You can also use the following procedure to remove the node from the auxnodelist of the shared address resource. To remove the node from the auxnodelist, you must delete and re-create the shared address resource.


  1. Switch the scalable service resource offline.

  2. Remove the shared address resource from the failover resource group.
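    The following minimal sketch shows Step 1 and Step 2, assuming a hypothetical scalable resource scalable-rs and a shared address resource shared-address-rs. The shared address resource must be disabled before it can be deleted.


    # clresource disable scalable-rs
    # clresource disable shared-address-rs
    # clresource delete shared-address-rs
    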

  3. Create the shared address resource.

    Add the node ID or node name of the node that you removed from the failover resource group to the auxnodelist.


    # clressharedaddress create -g failover-resource-group \
     -X new-auxnodelist shared-address 
    
    failover-resource-group

    The name of the failover resource group that used to contain the shared address resource.

    new-auxnodelist

    The new, modified auxnodelist with the desired node added or removed.

    shared-address

    The name of the shared address.

Example – Removing a Node From a Resource Group

This example shows how to remove a node (phys-schost-3) from a resource group (resource-group-1) that contains a logical hostname resource (schost-1).


# clresourcegroup show -v resource-group-1 | grep -i nodelist
 Nodelist:       phys-schost-1 phys-schost-2
                 phys-schost-3
# clresourcegroup set -n phys-schost-1,phys-schost-2 resource-group-1
# clresourcegroup show -v resource-group-1 | grep -i netiflist
 Res property name: NetIfList
 Res property class: extension
 Res property description: List of IPMP interfaces on each node
 Res property type: stringarray
 Res property value: sc_ipmp0@1 sc_ipmp0@2
                     sc_ipmp0@3

(sc_ipmp0@3 is the IPMP group to be removed.)

# clresource set  -p  netiflist=sc_ipmp0@1,sc_ipmp0@2 schost-1
# clresourcegroup show -v resource-group-1 | grep -i nodelist
Nodelist:       phys-schost-1 phys-schost-2
# clresourcegroup show -v resource-group-1 | grep -i netiflist
 Res property value: sc_ipmp0@1 sc_ipmp0@2

Migrating the Application From a Global-Cluster Voting Node to a Global-Cluster Non-Voting Node

You can migrate the application resources from a global-cluster voting node to a global-cluster non-voting node.


Note –

The data services that you want to migrate must be scalable and must be supported in global-cluster non-voting nodes.


ProcedureHow to Migrate the Application From a Global-Cluster Voting Node to a Global-Cluster Non-Voting Node

The procedure assumes a three-node cluster with a global-cluster non-voting node created on each of the three nodes. The configuration directory that is made highly available by using the HAStoragePlus resource must also be accessible from the global-cluster non-voting nodes.

  1. Create the failover resource group with the global-cluster voting node that holds the shared address that the scalable resource group is to use.


    # clresourcegroup create -n node1,node2,node3 sa-resource-group
    
    sa-resource-group

    Specifies your choice of the name of the failover resource group to add. This name must begin with an ASCII character.

  2. Add the shared address resource to the failover resource group.


    # clressharedaddress create -g sa-resource-group -h hostnamelist, … \
    [-X auxnodelist] -N netiflist network-resource
    
    -g sa-resource-group

    Specifies the resource group name. In the node list of a shared address resource, do not specify more than one global-cluster non-voting node on the same global-cluster voting node. Specify the same list of nodename:zonename pairs as the node list of the scalable resource group.

    -h hostnamelist, …

    Specifies a comma-separated list of shared address hostnames.

    -X auxnodelist

    Specifies a comma-separated list of node names, IDs, or zones that identify the cluster nodes that can host the shared address but never serve as primary if failover occurs. These nodes are mutually exclusive with the nodes that are identified as potential masters in the resource group's node list. If no auxiliary node list is explicitly specified, the list defaults to the list of all cluster node names that are not included in the node list of the resource group that contains the shared address resource.


    Note –

    To ensure that a scalable service runs in all global-cluster non-voting nodes that were created to master the service, the complete list of nodes must be included in the node list of the shared address resource group or the auxnodelist of the shared address resource. If all the global-cluster non-voting nodes are listed in the node list, the auxnodelist can be omitted.


    -N netiflist

    Specifies an optional, comma-separated list that identifies the IPMP groups that are on each node. Each element in netiflist must be in the form of netif@node. netif can be given as an IPMP group name, such as sc_ipmp0. The node can be identified by the node name or node ID, such as sc_ipmp0@1 or sc_ipmp0@phys-schost-1.


    Note –

    Sun Cluster does not support the use of the adapter name for netif.


    network-resource

    Specifies an optional resource name of your choice.

  3. Create the scalable resource group.


    # clresourcegroup create -p Maximum_primaries=m -p Desired_primaries=n \
    -n node1,node2,node3 \
    -p RG_dependencies=sa-resource-group resource-group-1
    
    -p Maximum_primaries=m

    Specifies the maximum number of active primaries for this resource group.

    -p Desired_primaries=n

    Specifies the number of active primaries on which the resource group should attempt to start.

    resource-group-1

    Specifies your choice of the name of the scalable resource group to add. This name must begin with an ASCII character.

  4. Create the HAStoragePlus resource hastorageplus-1, and define the filesystem mount points.


    # clresource create -g resource-group-1 -t SUNW.HAStoragePlus \
    -p FilesystemMountPoints=/global/resource-group-1 hastorageplus-1
    

    The resource is created in the enabled state.

  5. Register the resource type for the application.


    # clresourcetype register resource-type
    
    resource-type

    Specifies the name of the resource type to add. See the release notes for your release of Sun Cluster to determine the predefined name to supply.

  6. Add the application resource to resource-group-1, and set the dependency to hastorageplus-1.


    # clresource create -g resource-group-1 -t SUNW.application \
    [-p "extension-property[{node-specifier}]"=value, …] -p Scalable=True \
    -p Resource_dependencies=network-resource -p Port_list=port-number/protocol \
    -p Resource_dependencies=hastorageplus-1 resource
    
  7. Bring the failover resource group online.


    # clresourcegroup online sa-resource-group
    
  8. Bring the scalable resource group online on all the nodes.


    # clresourcegroup online resource-group-1
    
  9. Install and boot zone1 on each of the nodes: node1, node2, and node3.

  10. Bring the application resource group offline on two nodes (node1, node2).


    Note –

    Ensure the shared address is online on node3.



    # clresourcegroup switch -n node3 resource-group-1
    
    resource-group-1

    Specifies the name of the resource group to switch.

  11. Update the nodelist property of the failover resource group, replacing the nodes that you removed from the node list with their corresponding global-cluster non-voting nodes.


    # clresourcegroup set -n node1:zone1,node2:zone1,node3 sa-resource-group
    
  12. Update the nodelist property of the application resource group, replacing the nodes that you removed from the node list with their corresponding global-cluster non-voting nodes.


    # clresourcegroup set -n node1:zone1,node2:zone1,node3 resource-group-1
    
  13. Bring the failover resource group and application resource group online only on the newly added zones.


    Note –

    The failover resource group will be online only on node1:zone1, and the application resource group will be online only on node1:zone1 and node2:zone1.



    # clresourcegroup switch -n node1:zone1 sa-resource-group
    

    # clresourcegroup switch -n node1:zone1,node2:zone1 resource-group-1
    
  14. Update the nodelist property of both resource groups to include the global-cluster non-voting node of node3 by replacing the global-cluster voting node node3 with node3:zone1 in the list.


    # clresourcegroup set -n node1:zone1,node2:zone1,node3:zone1 sa-resource-group
    

    # clresourcegroup set -n node1:zone1,node2:zone1,node3:zone1 resource-group-1
    
  15. Bring both the resource groups online on the global-cluster non-voting nodes.


    # clresourcegroup switch -n node1:zone1 sa-resource-group
    

    # clresourcegroup switch -n node1:zone1,node2:zone1,node3:zone1 resource-group-1
    

Synchronizing the Startups Between Resource Groups and Device Groups

After a cluster boots or services fail over to another node, global devices and local and cluster file systems might require time to become available. However, a data service can run its START method before global devices and local and cluster file systems come online. If the data service depends on global devices or local and cluster file systems that are not yet online, the START method times out. In this situation, you must reset the state of the resource groups that the data service uses and restart the data service manually.

To avoid these additional administrative tasks, use the HAStoragePlus resource type. Add an instance of HAStoragePlus to all resource groups whose data service resources depend on global devices or local and cluster file systems. Instances of these resource types perform the following operations:

To create an HAStoragePlus resource, see How to Set Up the HAStoragePlus Resource Type for New Resources.

Additional Administrative Tasks to Configure HAStoragePlus Resources for a Zone Cluster

When you configure HAStoragePlus resources for a zone cluster, you must perform the following additional tasks before you perform the steps for the global cluster:


Note –

Cluster file systems are not supported for zone clusters.


ProcedureHow to Set Up the HAStoragePlus Resource Type for New Resources

In the following example, the resource group resource-group-1 contains the following data services.


Note –

To create an HAStoragePlus resource with Solaris ZFS (Zettabyte File System) as a highly available local file system, see How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS Highly Available.


To create an HAStoragePlus resource hastorageplus-1 for new resources in resource-group-1, read Synchronizing the Startups Between Resource Groups and Device Groups and then perform the following steps.

To create an HAStoragePlus resource, see Enabling Highly Available Local File Systems.

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify and solaris.cluster.admin RBAC authorizations.

  2. Create the resource group resource-group-1.


    # clresourcegroup create resource-group-1
    
  3. Determine whether the resource type is registered.

    The following command prints a list of registered resource types.


    # clresourcetype show | egrep Type
    
  4. If you need to, register the resource type.


    # clresourcetype register SUNW.HAStoragePlus
    
  5. Create the HAStoragePlus resource hastorageplus-1, and define the filesystem mount points and global device paths.


    # clresource create -g resource-group-1 -t SUNW.HAStoragePlus \
    -p GlobalDevicePaths=/dev/global/dsk/d5s2,dsk/d6 \
    -p FilesystemMountPoints=/global/resource-group-1 hastorageplus-1
    

    GlobalDevicePaths can contain the following values.

    • Global device group names, such as nfs-dg, dsk/d5

    • Paths to global devices, such as /dev/global/dsk/d1s2, /dev/md/nfsdg/dsk/d10

    FilesystemMountPoints can contain the following values.

    • Mount points of local or cluster file systems, such as /local-fs/nfs, /global/nfs


    Note –

    HAStoragePlus has a Zpools extension property that is used to configure ZFS file system storage pools and a ZpoolsSearchDir extension property that is used to specify the location to search for the devices of ZFS file system storage pools. The default value for the ZpoolsSearchDir extension property is /dev/dsk. The ZpoolsSearchDir extension property is similar to the -d option of the zpool(1M) command.


    The resource is created in the enabled state.

  6. Add the resources (Sun Java System Web Server, Oracle, and NFS) to resource-group-1, and set their dependency to hastorageplus-1.

    For example, for Sun Java System Web Server, run the following command.


    # clresource create  -g resource-group-1 -t SUNW.iws \
    -p Confdir_list=/global/iws/schost-1 -p Scalable=False \
    -p Network_resources_used=schost-1 -p Port_list=80/tcp \
    -p Resource_dependencies=hastorageplus-1 resource
    

    The resource is created in the enabled state.

  7. Verify that you have correctly configured the resource dependencies.


    # clresource show -v resource | egrep Resource_dependencies
    
  8. Set resource-group-1 to the MANAGED state, and bring resource-group-1 online.


    # clresourcegroup online -M resource-group-1
    
Affinity Switchovers

The HAStoragePlus resource type contains another extension property, AffinityOn, which is a Boolean that specifies whether HAStoragePlus must perform an affinity switchover for the global devices that are defined in the GlobalDevicePaths and FileSystemMountPoints extension properties. For details, see the SUNW.HAStoragePlus(5) man page.


Note –

The setting of the AffinityOn flag is ignored for scalable services. Affinity switchovers are not possible with scalable resource groups.


ProcedureHow to Set Up the HAStoragePlus Resource Type for Existing Resources

Before You Begin

Read Synchronizing the Startups Between Resource Groups and Device Groups.

  1. Determine whether the resource type is registered.

    The following command prints a list of registered resource types.


    # clresourcetype show | egrep Type
    
  2. If you need to, register the resource type.


    # clresourcetype register SUNW.HAStoragePlus
    
  3. Create the HAStoragePlus resource hastorageplus-1.


    # clresource create -g resource-group \
    -t SUNW.HAStoragePlus -p GlobalDevicePaths= … \
    -p FileSystemMountPoints=... -p AffinityOn=True hastorageplus-1
    

    The resource is created in the enabled state.

  4. Set up the dependency for each of the existing resources, as required.


    # clresource set -p Resource_Dependencies=hastorageplus-1 resource
    
  5. Verify that you have correctly configured the resource dependencies.


    # clresource show -v resource | egrep Resource_dependencies
    

Configuring a HAStoragePlus Resource for Cluster File Systems

When a HAStoragePlus resource is configured for cluster file systems and brought online, it ensures that these file systems are available. The cluster file system is supported on Unix File System (UFS) and Veritas File System (VxFS). Use HAStoragePlus with local file systems if the data service is I/O intensive. See How to Change the Global File System to Local File System in a HAStoragePlus Resource for information on how to change the file system of an HAStoragePlus resource.

Sample Entries in /etc/vfstab for Cluster File Systems

The following examples show entries in the /etc/vfstab file for global devices that are to be used for cluster file systems.


Note –

The entries in the /etc/vfstab file for cluster file systems should contain the global keyword in the mount options.



Example 2–30 Entries in /etc/vfstab for a Global Device With Solaris Volume Manager

This example shows entries in the /etc/vfstab file for a global device that uses Solaris Volume Manager.

/dev/md/kappa-1/dsk/d0   /dev/md/kappa-1/rdsk/d0
/global/local-fs/nfs ufs     5  yes     logging,global


Example 2–31 Entries in /etc/vfstab for a Global Device With VxVM

This example shows entries in the /etc/vfstab file for a global device that uses VxVM.


/dev/vx/dsk/kappa-1/appvol    /dev/vx/rdsk/kappa-1/appvol
/global/local-fs/nfs vxfs     5 yes     log,global

ProcedureHow to Set Up the HAStoragePlus Resource for Cluster File Systems

  1. On any node in the cluster, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Create a failover resource group.


    # clresourcegroup create resource-group-1
    
  3. Register the HAStoragePlus resource type.


    # clresourcetype register SUNW.HAStoragePlus
    
  4. Create the HAStoragePlus resource and define the filesystem mount points.


    # clresource create -g resource-group -t SUNW.HAStoragePlus \
     -p FileSystemMountPoints="mount-point-list" hasp-resource
    

    The resource is created in the enabled state.

  5. Add the data service resources to resource-group-1, and set their dependency to hasp-resource.
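    For example, the following hypothetical command adds a data service resource by using the SUNW.application placeholder resource type. Your data service requires its own resource type and additional properties.


    # clresource create -g resource-group-1 -t SUNW.application \
    -p Resource_dependencies=hasp-resource application-resource
    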

  6. Bring online and in a managed state the resource group that contains the HAStoragePlus resource.


    # clresourcegroup online -M resource-group-1
    

ProcedureHow to Delete a HAStoragePlus Resource Type for Cluster File Systems

    Disable and delete the HAStoragePlus resource configured for cluster file systems.


    # clresource delete -F -g resource-group -t SUNW.HAStoragePlus resource
    

Enabling Highly Available Local File Systems

Using a highly available local file system improves the performance of I/O intensive data services. To make a local file system highly available in a Sun Cluster environment, use the HAStoragePlus resource type.

You can specify global or local file systems. Global file systems are accessible from all nodes in a cluster. Local file systems are accessible from a single cluster node. Local file systems that are managed by a SUNW.HAStoragePlus resource are mounted on a single cluster node. These local file systems require the underlying devices to be Sun Cluster global devices.

These file system mount points are defined in the format paths[,...]. You can specify both the path in a global-cluster non-voting node and the path in a global-cluster voting node, in this format:

Non-GlobalZonePath:GlobalZonePath

The global-cluster voting node path is optional. If you do not specify a global-cluster voting node path, Sun Cluster assumes that the path in the global-cluster non-voting node and in the global-cluster voting node are the same. If you specify the path as Non-GlobalZonePath:GlobalZonePath, you must specify GlobalZonePath in the global-cluster voting node's /etc/vfstab.

The default setting for this property is an empty list.

You can use the SUNW.HAStoragePlus resource type to make a file system available to a global-cluster non-voting node. To enable the SUNW.HAStoragePlus resource type to do this, you must create a mount point in the global-cluster voting node and in the global-cluster non-voting node. The SUNW.HAStoragePlus resource type makes the file system available to the global-cluster non-voting node by mounting the file system in the global cluster. The resource type then performs a loopback mount in the global-cluster non-voting node. Each file system mount point should have an equivalent entry in /etc/vfstab on all cluster nodes and in the global-cluster voting node. The SUNW.HAStoragePlus resource type does not check /etc/vfstab in global-cluster non-voting nodes.
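For example, the following hypothetical command makes a file system that is mounted at /global/web-data in the global-cluster voting node available at /web-data in a global-cluster non-voting node. The resource, resource group, and mount-point names are examples only.


# clresource create -g resource-group -t SUNW.HAStoragePlus \
-p FileSystemMountPoints=/web-data:/global/web-data hasp-resource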


Note –

Local file systems include the Unix File System (UFS), Quick File System (QFS), Veritas File System (VxFS), and Solaris ZFS (Zettabyte File System).


The instructions for each Sun Cluster data service that is I/O intensive explain how to configure the data service to operate with the HAStoragePlus resource type. For more information, see the individual Sun Cluster data service guides.


Note –

Do not use the HAStoragePlus resource type to make a root file system highly available.


Sun Cluster provides the following tools for setting up the HAStoragePlus resource type to make local file systems highly available:

Sun Cluster Manager and the clsetup utility enable you to add resources to the resource group interactively. Configuring these resources interactively reduces the possibility for configuration errors that might result from command syntax errors or omissions. Sun Cluster Manager and the clsetup utility ensure that all required resources are created and that all required dependencies between resources are set.

Configuration Requirements for Highly Available Local File Systems

Any file system on multihost disks must be accessible from any host that is directly connected to those multihost disks. To meet this requirement, configure the highly available local file system as follows:


Note –

The use of a volume manager with the global devices for a highly available local file system is optional.


Format of Device Names for Devices Without a Volume Manager

If you are not using a volume manager, use the appropriate format for the name of the underlying storage device. The format to use depends on the type of storage device as follows:

The replaceable elements in these device names are as follows:

Sample Entries in /etc/vfstab for Highly Available Local File Systems

The following examples show entries in the /etc/vfstab file for global devices that are to be used for highly available local file systems.


Note –

Solaris ZFS (Zettabyte File System) does not use the /etc/vfstab file.



Example 2–32 Entries in /etc/vfstab for a Global Device Without a Volume Manager

This example shows entries in the /etc/vfstab file for a global device on a physical disk without a volume manager.

/dev/global/dsk/d1s0       /dev/global/rdsk/d1s0
/global/local-fs/nfs  ufs     5  no     logging


Example 2–33 Entries in /etc/vfstab for a Global Device With Solaris Volume Manager

This example shows entries in the /etc/vfstab file for a global device that uses Solaris Volume Manager.

/dev/md/kappa-1/dsk/d0   /dev/md/kappa-1/rdsk/d0
/global/local-fs/nfs ufs     5  no     logging


Example 2–34 Entries in /etc/vfstab for a Global Device With VxVM

This example shows entries in the /etc/vfstab file for a global device that uses VxVM.


/dev/vx/dsk/kappa-1/appvol    /dev/vx/rdsk/kappa-1/appvol
/global/local-fs/nfs vxfs     5 no     log

ProcedureHow to Set Up the HAStoragePlus Resource Type by Using the clsetup Utility

The following instructions explain how to set up the HAStoragePlus resource type by using the clsetup utility. Perform this procedure from any global-cluster voting node.

This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands.

Before You Begin

Ensure that the following prerequisites are met:

  1. Become superuser on any cluster voting node.

  2. Start the clsetup utility.


    # clsetup
    

    The clsetup main menu is displayed.

  3. Type the number that corresponds to the option for data services and press Return.

    The Data Services menu is displayed.

  4. Type the number that corresponds to the option for configuring the file system and press Return.

    The clsetup utility displays the list of prerequisites for performing this task.

  5. Verify that the prerequisites are met, and press Return to continue.

    The clsetup utility displays a list of the cluster nodes that can master the highly available HAStoragePlus resource.

  6. Select the nodes that can master the highly available HAStoragePlus resource.

    • To accept the default selection of all listed nodes in an arbitrary order, type a and press Return.

    • To select a subset of the listed nodes, type a comma-separated or space-separated list of the numbers that correspond to the nodes. Then press Return.

      Ensure that the nodes are listed in the order in which the nodes are to appear in the HAStoragePlus resource group's node list. The first node in the list is the primary node of this resource group.

    • To select all nodes in a particular order, type a comma-separated or space-separated ordered list of the numbers that correspond to the nodes and press Return.

  7. To confirm your selection of nodes, type d and press Return.

    The clsetup utility displays a list of the types of shared storage where data is to be stored.

  8. Type the numbers that correspond to the type of shared storage that you are using for storing the data and press Return.

    The clsetup utility displays the file system mount points that are configured in the cluster. If there are no existing mount points, the clsetup utility allows you to define a new mount point.

  9. Specify the default mount directory, the raw device path, the Global Mount option, and the Check File System Periodically option, and press Return.

    The clsetup utility displays the properties of the mount point that the utility will create.

  10. To create the mount point, type d and press Return.

    The clsetup utility displays the available file system mount points.


    Note –

    You can use the c option to define another new mount point.


  11. Select the file system mount points.

    • To accept the default selection of all listed file system mount points in an arbitrary order, type a and press Return.

    • To select a subset of the listed file system mount points, type a comma-separated or space-separated list of the numbers that correspond to the file system mount points and press Return.

  12. To confirm your selection of file system mount points, type d and press Return.

    The clsetup utility displays the global disk sets and device groups that are configured in the cluster.

  13. Select the global device groups.

    • To accept the default selection of all listed device groups in an arbitrary order, type a and press Return.

    • To select a subset of the listed device groups, type a comma-separated or space-separated list of the numbers that correspond to the device groups and press Return.

  14. To confirm your selection of device groups, type d and press Return.

    The clsetup utility displays the names of the Sun Cluster objects that the utility will create.

  15. If you require a different name for any Sun Cluster object, change the name as follows.

    1. Type the number that corresponds to the name that you are changing and press Return.

      The clsetup utility displays a screen where you can specify the new name.

    2. At the New Value prompt, type the new name and press Return.

    The clsetup utility returns you to the list of the names of the Sun Cluster objects that the utility will create.

  16. To confirm your selection of Sun Cluster object names, type d and press Return.

    The clsetup utility displays information about the Sun Cluster configuration that the utility will create.

  17. To create the configuration, type c and press Return.

    The clsetup utility displays a progress message to indicate that the utility is running commands to create the configuration. When configuration is complete, the clsetup utility displays the commands that the utility ran to create the configuration.

  18. (Optional) Type q and press Return repeatedly until you quit the clsetup utility.

    If you prefer, you can leave the clsetup utility running while you perform other required tasks before using the utility again. If you choose to quit clsetup, the utility recognizes your existing resource group when you restart the utility.

  19. Verify that the HAStoragePlus resource has been created.

    Use the clresource(1CL) utility for this purpose. By default, the clsetup utility assigns the name node_name-rg to the resource group.


    # clresource show -g node_name-rg
    

ProcedureHow to Set Up the HAStoragePlus Resource Type to Make File Systems Highly Available Other Than Solaris ZFS

The following procedure explains how to set up the HAStoragePlus resource type to make file systems other than Solaris ZFS highly available.

  1. On any node in the global cluster, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Create a failover resource group.


    # clresourcegroup create resource-group
    
  3. Register the HAStoragePlus resource type.


    # clresourcetype register SUNW.HAStoragePlus
    
  4. Create the HAStoragePlus resource and define the file system mount points.


    # clresource create -g resource-group \
    -t SUNW.HAStoragePlus -p FileSystemMountPoints=mount-point-list hasp-resource
    
  5. Bring online and in a managed state the resource group that contains the HAStoragePlus resource.


    # clresourcegroup online -M resource-group
    

Example 2–35 Setting Up the HAStoragePlus Resource Type to Make a UFS File System Highly Available for the Global Cluster

This example assumes that the file system /web-1 is configured to the HAStoragePlus resource to make the file system highly available for the global cluster.


phys-schost-1# vi /etc/vfstab
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
/dev/md/apachedg/dsk/d0 /dev/md/apachedg/rdsk/d0 /web-1 ufs 2 no logging
# clresourcegroup create hasp-rg 
# clresourcetype register SUNW.HAStoragePlus
# clresource create -g hasp-rg -t SUNW.HAStoragePlus \
 -p FileSystemMountPoints=/web-1 hasp-rs
# clresourcegroup online -M hasp-rg


Example 2–36 Setting Up the HAStoragePlus Resource Type to Make a UFS File System Highly Available for a Zone Cluster

This example assumes that the file system /web-1 is configured to the HAStoragePlus resource to make the file system highly available for a zone cluster sczone.


phys-schost-1# vi /etc/vfstab
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
/dev/md/apachedg/dsk/d0 /dev/md/apachedg/rdsk/d0 /web-1 ufs 2 no logging
# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/web-1
clzc:sczone:fs> set special=/dev/md/apachedg/dsk/d0
clzc:sczone:fs> set raw=/dev/md/apachedg/rdsk/d0
clzc:sczone:fs> set type=ufs
clzc:sczone:fs> end
clzc:sczone> exit
# clresourcegroup create -Z sczone hasp-rg
# clresourcetype register -Z sczone SUNW.HAStoragePlus
# clresource create -Z sczone -g hasp-rg \
-t SUNW.HAStoragePlus -p FileSystemMountPoints=/web-1 hasp-rs
# clresourcegroup online -Z sczone -M hasp-rg

ProcedureHow to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS Highly Available

You perform the following primary tasks to make a local Solaris ZFS (Zettabyte File System) highly available:

This section describes how to complete both tasks.

  1. Create a ZFS storage pool.


Caution –

    Do not add a configured quorum device to a ZFS storage pool. When a configured quorum device is added to a storage pool, the disk is relabeled as an EFI disk, the quorum configuration information is lost, and the disk no longer provides a quorum vote to the cluster. Once a disk is in a storage pool, you can configure that disk as a quorum device. Or, you can unconfigure the disk, add it to the storage pool, then reconfigure the disk as a quorum device.


    Observe the following requirements when you create a ZFS storage pool in a Sun Cluster configuration:

    • Ensure that all of the devices from which you create a ZFS storage pool are accessible from all nodes in the cluster. These nodes must be configured in the node list of the resource group to which the HAStoragePlus resource belongs.

    • Ensure that the Solaris device identifier that you specify to the zpool(1M) command, for example /dev/dsk/c0t0d0, is visible to the cldevice list -v command.


    Note –

    The ZFS storage pool can be created by using a full disk or a disk slice. Creating the pool with a full disk, specified as a Solaris logical device, is preferred because the ZFS file system performs better when it can enable the disk write cache. The ZFS file system labels the disk with an EFI label when a full disk is provided.


    See Creating a ZFS Storage Pool in Solaris ZFS Administration Guide for information about how to create a ZFS storage pool.

  2. In the ZFS storage pool that you just created, create a ZFS file system.

    You can create more than one ZFS file system in the same ZFS storage pool.


    Note –

    HAStoragePlus does not support file systems created on ZFS file system volumes.

    Do not place a ZFS file system in the FilesystemMountPoints extension property.


    See Creating a ZFS File System Hierarchy in Solaris ZFS Administration Guide for information about how to create a ZFS file system in a ZFS storage pool.

  3. On any node in the cluster, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  4. Create a failover resource group.


    # clresourcegroup create resource-group
    
  5. Register the HAStoragePlus resource type.


    # clresourcetype register SUNW.HAStoragePlus
    
  6. Create a HAStoragePlus resource for the local ZFS file system.


    # clresource create -g resource-group -t SUNW.HAStoragePlus \
    -p Zpools=zpool -p ZpoolsSearchDir=/dev/did/dsk \
    resource
    

    The default location to search for devices of ZFS storage pools is /dev/dsk. It can be overridden by using the ZpoolsSearchDir extension property.

    The resource is created in the enabled state.

  7. Bring online and in a managed state the resource group that contains the HAStoragePlus resource.


    # clresourcegroup online -M resource-group
    

Example 2–37 Setting Up the HAStoragePlus Resource Type to Make a Local ZFS Highly Available for the Global Cluster

The following example shows the commands to make a local ZFS file system highly available.


phys-schost-1% su
Password: 
# cldevice list -v

DID Device          Full Device Path
----------          ----------------
d1                  phys-schost-1:/dev/rdsk/c0t0d0
d2                  phys-schost-1:/dev/rdsk/c0t1d0
d3                  phys-schost-1:/dev/rdsk/c1t8d0
d3                  phys-schost-2:/dev/rdsk/c1t8d0
d4                  phys-schost-1:/dev/rdsk/c1t9d0
d4                  phys-schost-2:/dev/rdsk/c1t9d0
d5                  phys-schost-1:/dev/rdsk/c1t10d0
d5                  phys-schost-2:/dev/rdsk/c1t10d0
d6                  phys-schost-1:/dev/rdsk/c1t11d0
d6                  phys-schost-2:/dev/rdsk/c1t11d0
d7                  phys-schost-2:/dev/rdsk/c0t0d0
d8                  phys-schost-2:/dev/rdsk/c0t1d0
You can create a ZFS storage pool by using a disk slice and specifying a Solaris device
identifier:
# zpool create HAzpool c1t8d0s2
Or you can create a ZFS storage pool by using a disk slice and specifying a logical device
identifier:
# zpool create HAzpool /dev/did/dsk/d3s2
# zfs create HAzpool/export
# zfs create HAzpool/export/home
# clresourcegroup create hasp-rg
# clresourcetype register SUNW.HAStoragePlus
# clresource create -g hasp-rg -t SUNW.HAStoragePlus \
                    -p Zpools=HAzpool hasp-rs
# clresourcegroup online -M hasp-rg


Example 2–38 Setting Up the HAStoragePlus Resource Type to Make a Local ZFS Highly Available for a Zone Cluster

The following example shows the steps to make a local ZFS file system highly available in a zone cluster sczone.


phys-schost-1# cldevice list -v
# zpool create HAzpool c1t8d0 
# zfs create HAzpool/export 
# zfs create HAzpool/export/home
# clzonecluster configure sczone
clzc:sczone> add dataset
clzc:sczone:dataset> set name=HAzpool
clzc:sczone:dataset> end
clzc:sczone> exit
# clresourcegroup create -Z sczone hasp-rg
# clresourcetype register -Z sczone SUNW.HAStoragePlus
# clresource create -Z sczone -g hasp-rg \
-t SUNW.HAStoragePlus -p Zpools=HAzpool hasp-rs
# clresourcegroup online -Z sczone -M hasp-rg

ProcedureHow to Delete a HAStoragePlus Resource That Makes a Local Solaris ZFS Highly Available

    Disable and delete the HAStoragePlus resource that makes a local Solaris ZFS (Zettabyte File System) highly available.


    # clresource delete -F -g resource-group -t SUNW.HAStoragePlus resource
    

Upgrading From HAStorage to HAStoragePlus

HAStorage is not supported in the current release of Sun Cluster software. Equivalent functionality is supported by HAStoragePlus. For instructions for upgrading from HAStorage to HAStoragePlus, see the subsections that follow.


Note –

Resource groups that have an HAStorage resource configured will be in the STOP_FAILED state because HAStorage is no longer supported. Clear the STOP_FAILED error flag on the resource, and then follow the instructions to upgrade from HAStorage to HAStoragePlus.
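For example, assuming the HAStorage resource nfs1storage-rs that is used in the procedures that follow, you might clear the error flag as follows:


# clresource clear -f STOP_FAILED nfs1storage-rs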


ProcedureHow to Upgrade From HAStorage to HAStoragePlus When Using Device Groups or CFS

The following example uses a simple HA-NFS resource that is active with HAStorage. The ServicePaths property contains the disk group nfsdg, and the AffinityOn property is True. Furthermore, the HA-NFS resource has Resource_dependencies set to the HAStorage resource.

  1. Bring offline the resource group nfs1-rg.


    # clresourcegroup offline nfs1-rg
    
  2. Remove the dependencies that the application resources have on HAStorage.


    # clresource set -p Resource_Dependencies="" nfsserver-rs
    
  3. Disable the HAStorage resource.


    # clresource disable nfs1storage-rs
    
  4. Remove the HAStorage resource from the application resource group.


    # clresource delete nfs1storage-rs
    
  5. Unregister the HAStorage resource type.


    # clresourcetype unregister SUNW.HAStorage
    
  6. Register the HAStoragePlus resource type.


    # clresourcetype register SUNW.HAStoragePlus
    
  7. Create the HAStoragePlus resource.


    Note –

    Instead of using the ServicePaths property of HAStorage, you must use the FilesystemMountPoints property or GlobalDevicePaths property of HAStoragePlus.


    • To specify the mount point of a file system, type the following command.

      The FilesystemMountPoints extension property must match the sequence that is specified in /etc/vfstab.


      # clresource create -g nfs1-rg -t \
      SUNW.HAStoragePlus -p FilesystemMountPoints=/global/nfsdata -p \
      AffinityOn=True nfs1-hastp-rs
      
    • To specify global device paths, type the following command.


      # clresource create -g nfs1-rg -t \
      SUNW.HAStoragePlus -p GlobalDevicePaths=nfsdg -p AffinityOn=True nfs1-hastp-rs
      

    The resource is created in the enabled state.

  8. Disable the application server resource.


    # clresource disable nfsserver-rs
    
  9. Bring online the nfs1-rg group on a cluster node.


    # clresourcegroup online nfs1-rg
    
  10. Set up the dependencies between the application server and HAStoragePlus.


    # clresource set -p Resource_dependencies=nfs1-hastp-rs nfsserver-rs
    
  11. Bring online the nfs1-rg group on a cluster node.


    # clresourcegroup online -eM nfs1-rg
    

ProcedureHow to Upgrade From HAStorage With CFS to HAStoragePlus With Highly Available Local File System

The following example uses a simple HA-NFS resource that is active with HAStorage. The ServicePaths property contains the disk group nfsdg, and the AffinityOn property is True. Furthermore, the HA-NFS resource has Resource_dependencies set to the HAStorage resource.

  1. Remove the dependencies that the application resource has on the HAStorage resource.


    # clresource set -p Resource_Dependencies="" nfsserver-rs
    
  2. Disable the HAStorage resource.


    # clresource disable nfs1storage-rs
    
  3. Remove the HAStorage resource from the application resource group.


    # clresource delete nfs1storage-rs
    
  4. Unregister the HAStorage resource type.


    # clresourcetype unregister SUNW.HAStorage
    
  5. Modify /etc/vfstab to remove the global flag and change “mount at boot” to “no”.
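    For example, an /etc/vfstab entry like the one in Example 2–30 changes from a cluster file system entry to a local file system entry as follows. The device names are examples only.


    /dev/md/kappa-1/dsk/d0   /dev/md/kappa-1/rdsk/d0
    /global/local-fs/nfs ufs     5  yes     logging,global

    becomes:

    /dev/md/kappa-1/dsk/d0   /dev/md/kappa-1/rdsk/d0
    /global/local-fs/nfs ufs     5  no     logging
    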

  6. Create the HAStoragePlus resource.


    Note –

    Instead of using the ServicePaths property of HAStorage, you must use the FilesystemMountPoints property or GlobalDevicePaths property of HAStoragePlus.


    • To specify the mount point of a file system, type the following command.

      The FilesystemMountPoints extension property must match the sequence that is specified in /etc/vfstab.


      # clresource create -g nfs1-rg -t \
      SUNW.HAStoragePlus -p FilesystemMountPoints=/global/nfsdata -p \
      AffinityOn=True nfs1-hastp-rs
      
    • To specify global device paths, type the following command.


      # clresource create -g nfs1-rg -t \
      SUNW.HAStoragePlus -p GlobalDevicePaths=nfsdg -p AffinityOn=True nfs1-hastp-rs
      

    The resource is created in the enabled state.

  7. Disable the application server resource.


    # clresource disable nfsserver-rs
    
  8. Bring online the nfs1-rg group on a cluster node.


    # clresourcegroup online nfs1-rg
    
  9. Set up the dependencies between the application server and HAStoragePlus.


    # clresource set -p Resource_dependencies=nfs1-hastp-rs nfsserver-rs
    
  10. Bring online the nfs1-rg group on a cluster node.


    # clresourcegroup online -eM nfs1-rg
    

Modifying Online the Resource for a Highly Available File System

You might need a highly available file system to remain available while you are modifying the resource that represents the file system. For example, you might need the file system to remain available because storage is being provisioned dynamically. In this situation, modify the resource that represents the highly available file system while the resource is online.

In the Sun Cluster environment, a highly available file system is represented by an HAStoragePlus resource. Sun Cluster enables you to modify an online HAStoragePlus resource as follows:


Note –

Sun Cluster software does not enable you to rename a file system while the file system is online.



Note –

When you remove the file systems configured in the HAStoragePlus resources for a zone cluster, you must also remove the file system configuration from the zone cluster. For information about removing a file system from a zone cluster, see How to Remove a File System from a Zone Cluster in Sun Cluster System Administration Guide for Solaris OS.


ProcedureHow to Add File Systems Other Than Solaris ZFS to an Online HAStoragePlus Resource

When you add a local or global file system to a HAStoragePlus resource, the HAStoragePlus resource automatically mounts the file system.

  1. On one node of the cluster, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. In the /etc/vfstab file on each node of the cluster, add an entry for the mount point of each file system that you are adding.

    For each entry, set the mount at boot field and the mount options field as follows:

    • For local file systems

      • Set the mount at boot field to no.

      • Remove the global flag.

    • For cluster file systems

      • If the file system is a global file system, set the mount options field to contain the global option.

  3. Retrieve the list of mount points for the file systems that the HAStoragePlus resource already manages.


    # scha_resource_get -O extension -R hasp-resource -G hasp-rg \
    FileSystemMountPoints
    
    -R hasp-resource

    Specifies the HAStoragePlus resource to which you are adding file systems

    -G hasp-rg

    Specifies the resource group that contains the HAStoragePlus resource

  4. Modify the FileSystemMountPoints extension property of the HAStoragePlus resource to contain the following mount points:

    • The mount points of the file systems that the HAStoragePlus resource already manages

    • The mount points of the file systems that you are adding to the HAStoragePlus resource


    # clresource set -p FileSystemMountPoints="mount-point-list" hasp-resource
    
    -p FileSystemMountPoints="mount-point-list"

    Specifies a comma-separated list of mount points of the file systems that the HAStoragePlus resource already manages and the mount points of the file systems that you are adding. The format of each entry in the list is LocalZonePath:GlobalZonePath. In this format, the global path is optional. If the global path is not specified, the global path is the same as the local path.

    hasp-resource

    Specifies the HAStoragePlus resource to which you are adding file systems.

  5. Confirm that you have a match between the mount point list of the HAStoragePlus resource and the list that you specified in Step 4.


    # scha_resource_get -O extension -R hasp-resource -G hasp-rg \
     FileSystemMountPoints
    
    -R hasp-resource

    Specifies the HAStoragePlus resource to which you are adding file systems.

    -G hasp-rg

    Specifies the resource group that contains the HAStoragePlus resource.

  6. Confirm that the HAStoragePlus resource is online and not faulted.

    If the HAStoragePlus resource is online and faulted, validation of the resource succeeded, but an attempt by HAStoragePlus to mount a file system failed.


    # clresource status hasp-resource
    

Example 2–39 Adding a File System to an Online HAStoragePlus Resource

This example shows how to add a file system to an online HAStoragePlus resource.

The example assumes that the /etc/vfstab file on each cluster node already contains an entry for the file system that is to be added.


# scha_resource_get -O extension -R rshasp -G rghasp FileSystemMountPoints
STRINGARRAY
/global/global-fs/fs
# clresource set \
-p FileSystemMountPoints="/global/global-fs/fs,/global/local-fs/fs" rshasp
# scha_resource_get -O extension -R rshasp -G rghasp FileSystemMountPoints
STRINGARRAY
/global/global-fs/fs
/global/local-fs/fs
# clresource status rshasp


=== Cluster Resources ===

Resource Name          Node Name      Status        Message
--------------        ----------      -------       --------
   rshasp               node46       Offline         Offline
                        node47       Online          Online

ProcedureHow to Remove File Systems Other Than Solaris ZFS From an Online HAStoragePlus Resource

When you remove a file system from an HAStoragePlus resource, the HAStoragePlus resource treats a local file system differently from a global file system.


Caution –

Before removing a file system from an online HAStoragePlus resource, ensure that no applications are using the file system. When you remove a file system from an online HAStoragePlus resource, the file system might be forcibly unmounted. If a file system that an application is using is forcibly unmounted, the application might fail or hang.


  1. On one node of the cluster, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Retrieve the list of mount points for the file systems that the HAStoragePlus resource already manages.


    # scha_resource_get -O extension -R hasp-resource -G hasp-rg \
    FileSystemMountPoints
    
    -R hasp-resource

    Specifies the HAStoragePlus resource from which you are removing file systems.

    -G hasp-rg

    Specifies the resource group that contains the HAStoragePlus resource.

  3. Modify the FileSystemMountPoints extension property of the HAStoragePlus resource to contain only the mount points of the file systems that are to remain in the HAStoragePlus resource.


    # clresource set -p FileSystemMountPoints="mount-point-list" hasp-resource
    
    -p FileSystemMountPoints="mount-point-list"

    Specifies a comma-separated list of mount points of the file systems that are to remain in the HAStoragePlus resource. This list must not include the mount points of the file systems that you are removing.

    hasp-resource

    Specifies the HAStoragePlus resource from which you are removing file systems.

  4. Confirm that you have a match between the mount point list of the HAStoragePlus resource and the list that you specified in Step 3.


    # scha_resource_get -O extension -R hasp-resource -G hasp-rg \
    FileSystemMountPoints
    
    -R hasp-resource

    Specifies the HAStoragePlus resource from which you are removing file systems.

    -G hasp-rg

    Specifies the resource group that contains the HAStoragePlus resource.

  5. Confirm that the HAStoragePlus resource is online and not faulted.

    If the HAStoragePlus resource is online and faulted, validation of the resource succeeded, but an attempt by HAStoragePlus to unmount a file system failed.


    # clresource status hasp-resource
    
  6. (Optional) From the /etc/vfstab file on each node of the cluster, remove the entry for the mount point of each file system that you are removing.


Example 2–40 Removing a File System From an Online HAStoragePlus Resource

This example shows how to remove a file system from an online HAStoragePlus resource.


# scha_resource_get -O extension -R rshasp -G rghasp FileSystemMountPoints
STRINGARRAY
/global/global-fs/fs
/global/local-fs/fs
# clresource set -p FileSystemMountPoints="/global/global-fs/fs" rshasp
# scha_resource_get -O extension -R rshasp -G rghasp FileSystemMountPoints
STRINGARRAY
/global/global-fs/fs
# clresource status rshasp


=== Cluster Resources ===

Resource Name          Node Name      Status        Message
--------------        ----------      -------       --------
   rshasp               node46       Offline         Offline
                        node47       Online          Online

How to Add a Solaris ZFS Storage Pool to an Online HAStoragePlus Resource

When you add a Solaris ZFS (Zettabyte File System) storage pool to an online HAStoragePlus resource, the HAStoragePlus resource imports the ZFS storage pool and mounts the file systems in that pool.

  1. On any node in the cluster, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Determine the ZFS storage pools that the HAStoragePlus resource already manages.


    # clresource show -g hasp-resource-group -p Zpools hasp-resource
    
    -g hasp-resource-group

    Specifies the resource group that contains the HAStoragePlus resource.

    hasp-resource

    Specifies the HAStoragePlus resource to which you are adding the ZFS storage pool.

  3. Add the new ZFS storage pool to the existing list of ZFS storage pools that the HAStoragePlus resource already manages.


    # clresource set -p Zpools="zpools-list" hasp-resource
    
    -p Zpools="zpools-list"

    Specifies a comma-separated list of existing ZFS storage pool names that the HAStoragePlus resource already manages and the new ZFS storage pool name that you want to add.

    hasp-resource

    Specifies the HAStoragePlus resource to which you are adding the ZFS storage pool.

  4. Compare the new list of ZFS storage pools that the HAStoragePlus resource manages with the list that you generated in Step 2.


    # clresource show -g hasp-resource-group -p Zpools hasp-resource
    
    -g hasp-resource-group

    Specifies the resource group that contains the HAStoragePlus resource.

    hasp-resource

    Specifies the HAStoragePlus resource to which you added the ZFS storage pool.

  5. Confirm that the HAStoragePlus resource is online and not faulted.

    If the HAStoragePlus resource is online but faulted, validation of the resource succeeded. However, an attempt by the HAStoragePlus resource to import and mount the ZFS file system failed. In this case, you need to repeat the preceding set of steps.


    # clresource status hasp-resource
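
The following is a minimal sketch of this procedure. The resource name rshasp, the resource group rghasp, and the pool names hazpool1 and hazpool2 are hypothetical. The resource already manages hazpool1, and hazpool2 is being added.


# clresource show -g rghasp -p Zpools rshasp
# clresource set -p Zpools="hazpool1,hazpool2" rshasp
# clresource show -g rghasp -p Zpools rshasp
# clresource status rshasp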
    

How to Remove a Solaris ZFS Storage Pool From an Online HAStoragePlus Resource

When you remove a Solaris ZFS (Zettabyte File System) storage pool from an online HAStoragePlus resource, the HAStoragePlus resource unmounts the file systems in that pool and exports the pool.

  1. On any node in the cluster, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Determine the ZFS storage pools that the HAStoragePlus resource already manages.


    # clresource show -g hasp-resource-group -p Zpools hasp-resource
    
    -g hasp-resource-group

    Specifies the resource group that contains the HAStoragePlus resource.

    hasp-resource

    Specifies the HAStoragePlus resource from which you are removing the ZFS storage pool.

  3. Remove the ZFS storage pool from the list of ZFS storage pools that the HAStoragePlus resource currently manages.


    # clresource set -p Zpools="zpools-list" hasp-resource
    
    -p Zpools="zpools-list"

    Specifies a comma-separated list of ZFS storage pool names that the HAStoragePlus resource currently manages, minus the ZFS storage pool name that you want to remove.

    hasp-resource

    Specifies the HAStoragePlus resource from which you are removing the ZFS storage pool.

  4. Compare the new list of ZFS storage pools that the HAStoragePlus resource now manages with the list that you generated in Step 2.


    # clresource show -g hasp-resource-group -p Zpools hasp-resource
    
    -g hasp-resource-group

    Specifies the resource group that contains the HAStoragePlus resource.

    hasp-resource

    Specifies the HAStoragePlus resource from which you removed the ZFS storage pool.

  5. Confirm that the HAStoragePlus resource is online and not faulted.

    If the HAStoragePlus resource is online but faulted, validation of the resource succeeded. However, an attempt by the HAStoragePlus resource to unmount and export the ZFS file system failed. In this case, you need to repeat the preceding set of steps.


    # clresource status hasp-resource
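
The following is a minimal sketch of this procedure. The resource name rshasp, the resource group rghasp, and the pool names hazpool1 and hazpool2 are hypothetical. The resource initially manages both pools, and hazpool2 is being removed.


# clresource show -g rghasp -p Zpools rshasp
# clresource set -p Zpools="hazpool1" rshasp
# clresource show -g rghasp -p Zpools rshasp
# clresource status rshasp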
    

How to Recover From a Fault After Modifying the FileSystemMountPoints Property of a HAStoragePlus Resource

If a fault occurs during a modification of the FileSystemMountPoints extension property, the status of the HAStoragePlus resource is online and faulted. After the fault is corrected, the status of the HAStoragePlus resource is online.

  1. Determine the fault that caused the attempted modification to fail.


    # clresource status hasp-resource
    

    The status message of the faulty HAStoragePlus resource indicates the fault. Possible faults are as follows:

    • The device on which the file system should reside does not exist.

    • An attempt by the fsck command to repair a file system failed.

    • The mount point of a file system that you attempted to add does not exist.

    • A file system that you attempted to add cannot be mounted.

    • A file system that you attempted to remove cannot be unmounted.

  2. Correct the fault that caused the attempted modification to fail.

  3. Repeat the step to modify the FileSystemMountPoints extension property of the HAStoragePlus resource.


    # clresource set -p FileSystemMountPoints="mount-point-list" hasp-resource
    
    -p FileSystemMountPoints="mount-point-list"

    Specifies the comma-separated list of mount points that you specified in the unsuccessful attempt to modify the highly available file system.

    hasp-resource

    Specifies the HAStoragePlus resource that you are modifying.

  4. Confirm that the HAStoragePlus resource is online and not faulted.


    # clresource status
    

Example 2–41 Status of a Faulty HAStoragePlus Resource

This example shows the status of a faulty HAStoragePlus resource. This resource is faulty because an attempt by the fsck command to repair a file system failed.


# clresource status

  === Cluster Resources ===

  Resource Name     Node Name     Status       Status Message
  --------------    ----------    -------      -------------
  rshasp            node46        Offline      Offline
                    node47        Online       Online Faulted - Failed to fsck: /mnt.
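
Continuing this example, after the file system that failed the fsck check is repaired, you would repeat the property modification and recheck the status. The mount-point list shown here is hypothetical and assumes that /mnt was the mount point being added.


# clresource set -p FileSystemMountPoints="/global/global-fs/fs,/mnt" rshasp
# clresource status rshasp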

How to Recover From a Fault After Modifying the Zpools Property of a HAStoragePlus Resource

If a fault occurs during a modification of the Zpools extension property, the status of the HAStoragePlus resource is online and faulted. After the fault is corrected, the status of the HAStoragePlus resource is online.

  1. Determine the fault that caused the attempted modification to fail.


    # clresource status hasp-resource
    

    The status message of the faulty HAStoragePlus resource indicates the fault. Possible faults are as follows:

    • The ZFS pool zpool failed to import.

    • The ZFS pool zpool failed to export.

  2. Correct the fault that caused the attempted modification to fail.

  3. Repeat the step to modify the Zpools extension property of the HAStoragePlus resource.


    # clresource set -p Zpools="zpools-list" hasp-resource
    
    -p Zpools="zpools-list"

    Specifies the comma-separated list of ZFS storage pool names that you specified in the unsuccessful attempt to modify the Zpools extension property.

    hasp-resource

    Specifies the HAStoragePlus resource that you are modifying.

  4. Confirm that the HAStoragePlus resource is online and not faulted.


    # clresource status
    

Example 2–42 Status of a Faulty HAStoragePlus Resource

This example shows the status of a faulty HAStoragePlus resource. This resource is faulty because the ZFS pool zpool failed to import.


# clresource status hasp-resource

  === Cluster Resources ===

  Resource Name     Node Name     Status            Status Message
  --------------    ----------    -------           -------------
  hasp-resource     node46        Online            Faulted - Failed to import:hazpool
                    node47        Offline           Offline
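
Continuing this example, after the cause of the failed import of the pool hazpool is corrected, you would repeat the property modification and recheck the status. The pool list shown here is hypothetical.


# clresource set -p Zpools="hazpool" hasp-resource
# clresource status hasp-resource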

Changing the Global File System to Local File System in a HAStoragePlus Resource

You can change the file system of a HAStoragePlus resource from a global file system to a local file system.

How to Change the Global File System to Local File System in a HAStoragePlus Resource

  1. Bring the failover resource group offline.


    # clresourcegroup offline resource-group
    
  2. Display the HAStoragePlus resource.


    # clresource show -g resource-group -t SUNW.HAStoragePlus
    
  3. Retrieve the list of mount points for each resource.


    # clresource show -p FileSystemMountPoints hastorageplus-resource
    
  4. Unmount the global file system.


    # umount mount-points
    
  5. Modify the /etc/vfstab entry of the mount points on all the nodes configured in the node list of the resource group.

    • Remove the global keyword from the mount options.

    • Modify the mount at boot option from yes to no.

    Repeat these steps for all the cluster file systems of all the HAStoragePlus resources that are configured in the resource group. Example /etc/vfstab entries that illustrate this change follow this procedure.

  6. Bring the resource group online.


    # clresourcegroup online -M resource-group
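
The following /etc/vfstab entries illustrate the change that Step 5 describes. The device names and the mount point /global/appdata are hypothetical and assume a UFS file system on a Solaris Volume Manager device.

Entry before the change:


/dev/md/dg1/dsk/d100  /dev/md/dg1/rdsk/d100  /global/appdata  ufs  2  yes  global,logging

Entry after the change:


/dev/md/dg1/dsk/d100  /dev/md/dg1/rdsk/d100  /global/appdata  ufs  2  no  logging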
    

Upgrading the HAStoragePlus Resource Type

In Sun Cluster 3.1 9/04, the HAStoragePlus resource type is enhanced to enable you to modify highly available file systems online. Upgrade the HAStoragePlus resource type if all conditions in the following list apply:

For general instructions that explain how to upgrade a resource type, see Upgrading a Resource Type. The information that you need to complete the upgrade of the HAStoragePlus resource type is provided in the subsections that follow.

Information for Registering the New Resource Type Version

The relationship between a resource type version and the release of Sun Cluster is shown in the following table. The release of Sun Cluster indicates the release in which the version of the resource type was introduced.

Resource Type Version 

Sun Cluster Release 

1.0 

3.0 5/02 

3.1 9/04 

3.2 

3.2 2/08 

To determine the version of the resource type that is registered, use one command from the following list:
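
For example, the RT_version field in the output of the clresourcetype show command reports the registered version of the resource type:


# clresourcetype show SUNW.HAStoragePlus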

The RTR file for this resource type is /usr/cluster/lib/rgm/rtreg/SUNW.HAStoragePlus.

Information for Migrating Existing Instances of the Resource Type

The information that you need to migrate instances of the HAStoragePlus resource type is as follows:

Distributing Online Resource Groups Among Cluster Nodes

For maximum availability or optimum performance, some combinations of services require a specific distribution of online resource groups among cluster nodes. Distributing online resource groups involves creating affinities between resource groups for the following purposes:

This section provides the following examples of how to use resource group affinities to distribute online resource groups among cluster nodes:

Resource Group Affinities

An affinity between resource groups restricts the nodes on which the resource groups can be brought online simultaneously. In each affinity, a source resource group declares an affinity for a target resource group or several target resource groups. To create an affinity between resource groups, set the RG_affinities resource group property of the source as follows:


-p RG_affinities=affinity-list
affinity-list

Specifies a comma-separated list of affinities between the source resource group and a target resource group or several target resource groups. You may specify a single affinity or more than one affinity in the list.

Specify each affinity in the list as follows:


operator target-rg

Note –

Do not include a space between operator and target-rg.


operator

Specifies the type of affinity that you are creating. For more information, see Table 2–2.

target-rg

Specifies the resource group that is the target of the affinity that you are creating.

Table 2–2 Types of Affinities Between Resource Groups

Operator 

Affinity Type 

Effect 

+

Weak positive

If possible, the source is brought online on a node or on nodes where the target is online or starting. However, the source and the target are allowed to be online on different nodes.  

++

Strong positive

The source is brought online only on a node or on nodes where the target is online or starting. The source and the target are not allowed to be online on different nodes.

-

Weak negative

If possible, the source is brought online on a node or on nodes where the target is not online or starting. However, the source and the target are allowed to be online on the same node.

--

Strong negative

The source is brought online only on a node or on nodes where the target is not online. The source and the target are not allowed to be online on the same node.

+++

Strong positive with failover delegation

Same as strong positive, except that an attempt by the source to fail over is delegated to the target. For more information, see Delegating the Failover or Switchover of a Resource Group.

Weak affinities take precedence over Nodelist preference ordering.

The current state of other resource groups might prevent a strong affinity from being satisfied on any node. In this situation, the resource group that is the source of the affinity remains offline. If other resource groups' states change to enable the strong affinities to be satisfied, the resource group that is the source of the affinity comes back online.


Note –

Use caution when declaring a strong affinity on a source resource group for more than one target resource group. If all declared strong affinities cannot be satisfied, the source resource group remains offline.
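
For example, the following sketch declares strong positive affinities on a hypothetical source resource group rg1 for two target resource groups, rg2 and rg3. As a result, rg1 can come online only on a node where both rg2 and rg3 are online or starting.


# clresourcegroup set -p RG_affinities=++rg2,++rg3 rg1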


Enforcing Collocation of a Resource Group With Another Resource Group

A service that is represented by one resource group might depend so strongly on a service in a second resource group that both services must run on the same node. For example, an application that consists of multiple interdependent service daemons might require that all daemons run on the same node.

In this situation, force the resource group of the dependent service to be collocated with the resource group of the other service. To enforce collocation of a resource group with another resource group, declare on the resource group a strong positive affinity for the other resource group.


# clresourcegroup set|create -p RG_affinities=++target-rg source-rg
source-rg

Specifies the resource group that is the source of the strong positive affinity. This resource group is the resource group on which you are declaring a strong positive affinity for another resource group.

-p RG_affinities=++target-rg

Specifies the resource group that is the target of the strong positive affinity. This resource group is the resource group for which you are declaring a strong positive affinity.

A resource group follows the resource group for which it has a strong positive affinity. If the target resource group is relocated to a different node, the source resource group automatically switches to the same node as the target. However, a resource group that declares a strong positive affinity is prevented from failing over to a node on which the target of the affinity is not already running.


Note –

Only failovers that are initiated by a resource monitor are prevented. If a node on which the source resource group and target resource group are running fails, both resource groups fail over to the same surviving node.


For example, a resource group rg1 declares a strong positive affinity for resource group rg2. If rg2 fails over to another node, rg1 also fails over to that node. This failover occurs even if all the resources in rg1 are operational. However, if a resource in rg1 attempts to fail over rg1 to a node where rg2 is not running, this attempt is blocked.

The source of a strong positive affinity might be offline on all nodes when you bring online the target of the strong positive affinity. In this situation, the source of the strong positive affinity is automatically brought online on the same node as the target.

For example, a resource group rg1 declares a strong positive affinity for resource group rg2. Both resource groups are initially offline on all nodes. If an administrator brings online rg2 on a node, rg1 is automatically brought online on the same node.

You can use the clresourcegroup suspend command to prevent a resource group from being brought online automatically due to strong affinities or cluster reconfiguration.

If you require a resource group that declares a strong positive affinity to be allowed to fail over, you must delegate the failover. For more information, see Delegating the Failover or Switchover of a Resource Group.


Example 2–43 Enforcing Collocation of a Resource Group With Another Resource Group

This example shows the command for modifying resource group rg1 to declare a strong positive affinity for resource group rg2. As a result of this affinity relationship, rg1 is brought online only on nodes where rg2 is running. This example assumes that both resource groups exist.


# clresourcegroup set -p RG_affinities=++rg2 rg1

Specifying a Preferred Collocation of a Resource Group With Another Resource Group

A service that is represented by one resource group might use a service in a second resource group. As a result, these services run most efficiently if they run on the same node. For example, an application that uses a database runs most efficiently if the application and the database run on the same node. However, the services can run on different nodes because the reduction in efficiency is less disruptive than additional failovers of resource groups.

In this situation, specify that both resource groups should be collocated if possible. To specify preferred collocation of a resource group with another resource group, declare on the resource group a weak positive affinity for the other resource group.


# clresourcegroup set|create -p RG_affinities=+target-rg source-rg
source-rg

Specifies the resource group that is the source of the weak positive affinity. This resource group is the resource group on which you are declaring a weak positive affinity for another resource group.

-p RG_affinities=+target-rg

Specifies the resource group that is the target of the weak positive affinity. This resource group is the resource group for which you are declaring a weak positive affinity.

By declaring a weak positive affinity on one resource group for another resource group, you increase the probability of both resource groups running on the same node. The source of a weak positive affinity is first brought online on a node where the target of the weak positive affinity is already running. However, the source of a weak positive affinity does not fail over if a resource monitor causes the target of the affinity to fail over. Similarly, the source of a weak positive affinity does not fail over if the target of the affinity is switched over. In both situations, the source remains online on the node where the source is already running.


Note –

If a node on which the source resource group and target resource group are running fails, both resource groups are restarted on the same surviving node.



Example 2–44 Specifying a Preferred Collocation of a Resource Group With Another Resource Group

This example shows the command for modifying resource group rg1 to declare a weak positive affinity for resource group rg2. As a result of this affinity relationship, rg1 and rg2 are first brought online on the same node. But if a resource in rg2 causes rg2 to fail over, rg1 remains online on the node where the resource groups were first brought online. This example assumes that both resource groups exist.


# clresourcegroup set -p RG_affinities=+rg2 rg1

Distributing a Set of Resource Groups Evenly Among Cluster Nodes

Each resource group in a set of resource groups might impose the same load on the cluster. In this situation, by distributing the resource groups evenly among cluster nodes, you can balance the load on the cluster.

To distribute a set of resource groups evenly among cluster nodes, declare on each resource group a weak negative affinity for the other resource groups in the set.


# clresourcegroup set|create -p RG_affinities=neg-affinity-list source-rg
source-rg

Specifies the resource group that is the source of the weak negative affinity. This resource group is the resource group on which you are declaring a weak negative affinity for other resource groups.

-p RG_affinities=neg-affinity-list

Specifies a comma-separated list of weak negative affinities between the source resource group and the resource groups that are the target of the weak negative affinity. The target resource groups are the resource groups for which you are declaring a weak negative affinity.

By declaring a weak negative affinity on one resource group for other resource groups, you ensure that a resource group is always brought online on the most lightly loaded node in the cluster, that is, the node on which the fewest other resource groups are running. Bringing the resource group online on that node violates the smallest number of weak negative affinities.


Example 2–45 Distributing a Set of Resource Groups Evenly Among Cluster Nodes

This example shows the commands for modifying resource groups rg1, rg2, rg3, and rg4 to ensure that these resource groups are evenly distributed among the available nodes in the cluster. This example assumes that resource groups rg1, rg2, rg3, and rg4 exist.


# clresourcegroup set -p RG_affinities=-rg2,-rg3,-rg4 rg1
# clresourcegroup set -p RG_affinities=-rg1,-rg3,-rg4 rg2
# clresourcegroup set -p RG_affinities=-rg1,-rg2,-rg4 rg3
# clresourcegroup set -p RG_affinities=-rg1,-rg2,-rg3 rg4

Specifying That a Critical Service Has Precedence

A cluster might be configured to run a combination of mission-critical services and noncritical services. For example, a database that supports a critical customer service might run in the same cluster as noncritical research tasks.

To ensure that the noncritical services do not affect the performance of the critical service, specify that the critical service has precedence. By specifying that the critical service has precedence, you prevent noncritical services from running on the same node as the critical service.

When all nodes are operational, the critical service runs on a different node from the noncritical services. However, a failure of the critical service might cause the service to fail over to a node where the noncritical services are running. In this situation, the noncritical services are taken offline immediately to ensure that the computing resources of the node are fully dedicated to the mission-critical service.

To specify that a critical service has precedence, declare on the resource group of each noncritical service a strong negative affinity for the resource group that contains the critical service.


# clresourcegroup set|create -p RG_affinities=--critical-rg noncritical-rg
noncritical-rg

Specifies the resource group that contains a noncritical service. This resource group is the resource group on which you are declaring a strong negative affinity for another resource group.

-p RG_affinities=--critical-rg

Specifies the resource group that contains the critical service. This resource group is the resource group for which you are declaring a strong negative affinity.

A resource group moves away from a resource group for which it has a strong negative affinity.

The source of a strong negative affinity might be offline on all nodes when you take offline the target of the strong negative affinity. In this situation, the source of the strong negative affinity is automatically brought online. In general, the resource group is brought online on the most preferred node, based on the order of the nodes in the node list and the declared affinities.

For example, a resource group rg1 declares a strong negative affinity for resource group rg2. Resource group rg1 is initially offline on all nodes, while resource group rg2 is online on a node. If an administrator takes offline rg2, rg1 is automatically brought online.

You can use the clresourcegroup suspend command to prevent the source of a strong negative affinity from being brought online automatically due to strong affinities or cluster reconfiguration.


Example 2–46 Specifying That a Critical Service Has Precedence

This example shows the commands for modifying the noncritical resource groups ncrg1 and ncrg2 to ensure that the critical resource group mcdbrg has precedence over these resource groups. This example assumes that resource groups mcdbrg, ncrg1, and ncrg2 exist.


# clresourcegroup set -p RG_affinities=--mcdbrg ncrg1 ncrg2

Delegating the Failover or Switchover of a Resource Group

The source resource group of a strong positive affinity cannot fail over or be switched over to a node where the target of the affinity is not running. If you require the source resource group of a strong positive affinity to be allowed to fail over or be switched over, you must delegate the failover to the target resource group. When the target of the affinity fails over, the source of the affinity is forced to fail over with the target.


Note –

You might need to switch over the source resource group of a strong positive affinity that is specified by the ++ operator. In this situation, switch over the target of the affinity and the source of the affinity at the same time.


To delegate failover or switchover of a resource group to another resource group, declare on the resource group a strong positive affinity with failover delegation for the other resource group.


# clresourcegroup set|create source-rg -p RG_affinities=+++target-rg
source-rg

Specifies the resource group that is delegating failover or switchover. This resource group is the resource group on which you are declaring a strong positive affinity with failover delegation for another resource group.

-p RG_affinities=+++target-rg

Specifies the resource group to which source-rg delegates failover or switchover. This resource group is the resource group for which you are declaring a strong positive affinity with failover delegation.

A resource group may declare a strong positive affinity with failover delegation for at most one resource group. However, a given resource group may be the target of strong positive affinities with failover delegation that are declared by any number of other resource groups.

A strong positive affinity with failover delegation is not fully symmetric. The target can come online while the source remains offline. However, if the target is offline, the source cannot come online.

If the target declares a strong positive affinity with failover delegation for a third resource group, failover or switchover is further delegated to the third resource group. The third resource group performs the failover or switchover, forcing the other resource groups to fail over or be switched over also.


Example 2–47 Delegating the Failover or Switchover of a Resource Group

This example shows the command for modifying resource group rg1 to declare a strong positive affinity with failover delegation for resource group rg2. As a result of this affinity relationship, rg1 delegates failover or switchover to rg2. This example assumes that both resource groups exist.


# clresourcegroup set -p RG_affinities=+++rg2 rg1

Combining Affinities Between Resource Groups

You can create more complex behaviors by combining multiple affinities. For example, the state of an application might be recorded by a related replica server. The node selection requirements for this example are as follows:

You can satisfy these requirements by configuring resource groups for the application and the replica server as follows:


Example 2–48 Combining Affinities Between Resource Groups

This example shows the commands for combining affinities between the following resource groups.

In this example, the resource groups declare affinities as follows:

This example assumes that both resource groups exist.


# clresourcegroup set -p RG_affinities=+rep-rg app-rg
# clresourcegroup set -p RG_affinities=--app-rg rep-rg

Zone Cluster Resource Group Affinities

The cluster administrator can specify affinities between a resource group in a zone cluster and another resource group in a zone cluster or a resource group on the global cluster.

You can use the following command to specify the affinity between resource groups in zone clusters.


# clresourcegroup set -p RG_affinities=affinity-typetarget-zc:target-rg source-zc:source-rg

The resource group affinity types in a zone cluster can be one of the following:


Note –

The affinity type +++ is not supported for zone clusters in this release.



Example 2–49 Specifying a Strong Positive Affinity Between Resource Groups in Zone Clusters

This example shows the command for specifying a strong positive affinity between resource groups in zone clusters.

The resource group RG1 in a zone cluster ZC1 declares a strong positive affinity for a resource group RG2 in a zone cluster ZC2.

If you need to specify a strong positive affinity between a resource group RG1 in a zone cluster ZC1 and a resource group RG2 in another zone cluster ZC2, use the following command:


# clresourcegroup set -p RG_affinities=++ZC2:RG2 ZC1:RG1


Example 2–50 Specifying a Strong Negative Affinity Between a Resource Group in a Zone Cluster and a Resource Group in the Global Cluster

This example shows the command for specifying a strong negative affinity between resource groups in zone clusters. If you need to specify a strong negative affinity between a resource group RG1 in a zone cluster ZC1 and a resource group RG2 in the global cluster, use the following command:


# clresourcegroup set -p RG_affinities=--global:RG2 ZC1:RG1

Replicating and Upgrading Configuration Data for Resource Groups, Resource Types, and Resources

If you require identical resource configuration data on two clusters, you can replicate the data to the second cluster rather than configure it again manually. Use scsnapshot to propagate the resource configuration information from one cluster to another cluster. To save effort, ensure that your resource-related configuration is stable and that you do not need to make any major changes to it before you copy the information to the second cluster.

Configuration data for resource groups, resource types, and resources can be retrieved from the Cluster Configuration Repository (CCR) and formatted as a shell script. The script can be used to perform the following tasks:

The scsnapshot tool retrieves only configuration data that is stored in the CCR. Other configuration data, and the dynamic state of resource groups, resource types, and resources, are ignored.

How to Replicate Configuration Data on a Cluster Without Configured Resource Groups, Resource Types, and Resources

This procedure replicates configuration data on a cluster that does not have configured resource groups, resource types, and resources. In this procedure, a copy of the configuration data is taken from one cluster and used to generate the configuration data on another cluster.

  1. Using the system administrator role, log in to any node in the cluster from which you want to copy the configuration data.

    For example, node1.

    The system administrator role gives you the following role-based access control (RBAC) rights:

    • solaris.cluster.resource.read

    • solaris.cluster.resource.modify

  2. Retrieve the configuration data from the cluster.


    node1 % scsnapshot -s scriptfile
    

    The scsnapshot tool generates a script called scriptfile. For more information about using the scsnapshot tool, see the scsnapshot(1M) man page.

  3. Edit the script to adapt it to the specific features of the cluster where you want to replicate the configuration data.

    For example, you might have to change the IP addresses and host names that are listed in the script.

  4. Launch the script from any node in the cluster where you want to replicate the configuration data.

    The script compares the characteristics of the local cluster to the cluster where the script was generated. If the characteristics are not the same, the script writes an error and ends. A message asks whether you want to rerun the script, using the -f option. The -f option forces the script to run, despite any difference in characteristics. If you use the -f option, ensure that you do not create inconsistencies in your cluster.

    The script verifies that the Sun Cluster resource type exists on the local cluster. If the resource type does not exist on the local cluster, the script writes an error and ends. A message asks whether you want to install the missing resource type before running the script again.
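
Putting these steps together, a minimal sketch of the replication workflow follows. The file name scriptfile, the destination directory /var/tmp, and the node name node3 on the second cluster are hypothetical.


node1 % scsnapshot -s scriptfile
node1 % scp scriptfile node3:/var/tmp
node3 % chmod +x /var/tmp/scriptfile
node3 % /var/tmp/scriptfile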

How to Upgrade Configuration Data on a Cluster With Configured Resource Groups, Resource Types, and Resources

This procedure upgrades configuration data on a cluster that already has configured resource groups, resource types, and resources. This procedure can also be used to generate a configuration template for resource groups, resource types, and resources.

In this procedure, the configuration data on cluster1 is upgraded to match the configuration data on cluster2.

  1. Using the system administrator role, log on to any node in cluster1.

    For example, node1.

    The system administrator role gives you the following RBAC rights:

    • solaris.cluster.resource.read

    • solaris.cluster.resource.modify

  2. Retrieve the configuration data from the cluster by using the image file option of the scsnapshot tool:


    node1% scsnapshot -s scriptfile1 -o imagefile1
    

    When run on node1, the scsnapshot tool generates a script that is called scriptfile1. The script stores configuration data for the resource groups, resource types, and resources in an image file that is called imagefile1. For more information about using the scsnapshot tool, see the scsnapshot(1M) man page.

  3. Repeat Step 1 through Step 2 on a node in cluster2:


    node2 % scsnapshot -s scriptfile2 -o imagefile2
    
  4. On node1, generate a script to upgrade the configuration data on cluster1 with configuration data from cluster2:


    node1 % scsnapshot -s scriptfile3 imagefile1 imagefile2
    

    This step uses the image files that you generated in Step 2 and Step 3, and generates a new script that is called scriptfile3.

  5. Edit the script that you generated in Step 4 to adapt it to the specific features of cluster1 and to remove data that is specific to cluster2.

  6. From node1, launch the script to upgrade the configuration data.

    The script compares the characteristics of the local cluster to the cluster where the script was generated. If the characteristics are not the same, the script writes an error and ends. A message asks whether you want to rerun the script, using the -f option. The -f option forces the script to run, despite any difference in characteristics. If you use the -f option, ensure that you do not create inconsistencies in your cluster.

    The script verifies that the Sun Cluster resource type exists on the local cluster. If the resource type does not exist on the local cluster, the script writes an error and ends. A message asks whether you want to install the missing resource type before running the script again.

Enabling Solaris SMF Services to Run With Sun Cluster

The Service Management Facility (SMF) enables you to automatically start and restart SMF services during a node boot or after a service failure. SMF provides a degree of high availability for SMF services on a single host. This feature is similar to the Sun Cluster Resource Group Manager (RGM), which provides high availability and scalability for cluster applications. SMF services and RGM features are complementary.

Sun Cluster includes three new SMF proxy resource types that can be used to enable SMF services to run with Sun Cluster in a failover, multi-master, or scalable configuration. The proxy resource types are as follows:

    • SUNW.Proxy_SMF_failover

    • SUNW.Proxy_SMF_multimaster

    • SUNW.Proxy_SMF_scalable

The SMF proxy resource types enable you to encapsulate a set of interrelated SMF services into a single resource, the SMF proxy resource, which Sun Cluster manages. With this feature, SMF manages the availability of the SMF services on a single node, while Sun Cluster provides cluster-wide high availability and scalability of those services.

You can use the SMF proxy resource types to integrate your own SMF-controlled services into Sun Cluster so that these services have cluster-wide availability without requiring you to rewrite callback methods or service manifests. After you integrate an SMF service into an SMF proxy resource, the service is no longer managed by the default restarter. Instead, the restarter that Sun Cluster delegates manages the SMF service.

SMF proxy resources are identical to other resources, with no restriction on their usage. For example, an SMF proxy resource can be grouped with other resources into a resource group, and it can be created and managed in the same way as other resources. An SMF proxy resource differs from other resources in one way: when you create a resource of any of the SMF proxy resource types, you must specify the Proxied_service_instances extension property, which identifies the SMF services that the resource proxies. The value of this extension property is the path to a file that lists all the proxied SMF services. Each line in the file describes one SMF service and specifies the service FMRI and the path of the corresponding service manifest file.

For example, if the resource has to manage two services, restarter_svc_test_1:default and restarter_svc_test_2:default, the file should include the following two lines:


<svc:/system/cluster/restarter_svc_test_1:default>,</var/svc/manifest/system/cluster/restarter_svc_test_1.xml>

<svc:/system/cluster/restarter_svc_test_2:default>,</var/svc/manifest/system/cluster/restarter_svc_test_2.xml>

The services that are encapsulated under an SMF proxy resource can reside in the global cluster or in a global-cluster non-voting node. However, all the services under the same proxy resource must be in the same zone.


Caution –

Do not use the SMF svcadm command to disable or enable SMF services that are encapsulated in a proxy resource. Do not change the properties of the SMF services (in the SMF repository) that are encapsulated in a proxy resource.


Encapsulating an SMF Service Into a Failover Proxy Resource Configuration

For information about failover configuration, see Creating a Resource Group.


Note –

Perform this procedure from any cluster node.


  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Register the proxy SMF failover resource type.


    # clresourcetype register -f \
    /opt/SUNWscsmf/etc/SUNW.Proxy_SMF_failover SUNW.Proxy_SMF_failover
    
  3. Verify that the proxy resource type has been registered.


    # clresourcetype show 
    
  4. Create the SMF failover resource group.


    # clresourcegroup create [-n node-zone-list] resource-group
    
    -n node-zone-list

    Specifies a comma-separated, ordered list of nodes that can master this resource group. The format of each entry in the list is node:zone. In this format, node specifies the node name and zone specifies the name of a global-cluster non-voting node. To specify the global-cluster voting node, or to specify a node without global-cluster non-voting nodes, specify only node.

    This list is optional. If you omit this list, the resource group is configured on all the global-cluster voting nodes.


    Note –

    To achieve the highest availability, include in the node list of an SMF failover resource group global-cluster non-voting nodes that are on different nodes, rather than different global-cluster non-voting nodes on the same node.


    resource-group

    Specifies your choice of the name of the failover resource group to add. This name must begin with an ASCII character.

  5. Verify that the SMF resource group has been created.


    # clresourcegroup status resource-group
    
  6. Add an SMF failover application resource to the resource group.


    # clresource create -g resource-group -t SUNW.Proxy_SMF_failover \
    [-p "extension-property[{node-specifier}]"=value, …] [-p standard-property=value, …] resource
    

    The resource is created in the enabled state.

  7. Verify that the SMF failover application resource has been added and validated.


    # clresource show resource
    
  8. Bring the failover resource group online.


    # clresourcegroup online -M +
    

    Note –

    If you use the clresource status command to view the state of an SMF proxy resource, the status is displayed as online but not monitored. This is not an error. The SMF proxy resource is enabled and running; this status is displayed because no monitoring support is provided for resources of the SMF proxy resource types.



Example 2–51 Registering an SMF Proxy Failover Resource Type

The following example registers the SUNW.Proxy_SMF_failover resource type.


# clresourcetype register SUNW.Proxy_SMF_failover
# clresourcetype show SUNW.Proxy_SMF_failover

Resource Type:              SUNW.Proxy_SMF_failover
RT_description:             Resource type for proxying failover SMF services 
RT_version:                 3.2
API_version:                6
RT_basedir:                 /opt/SUNWscsmf/bin
Single_instance:            False
Proxy:                      False
Init_nodes:                 All potential masters
Installed_nodes:            <All>
Failover:                   True
Pkglist:                    SUNWscsmf 
RT_system:                  False
Global_zone:                False


Example 2–52 Adding an SMF Proxy Failover Application Resource to a Resource Group

This example shows the addition of a resource of the proxy resource type SUNW.Proxy_SMF_failover to the resource group resource-group-1.


# clresource create -g resource-group-1 -t SUNW.Proxy_SMF_failover \
-x proxied_service_instances=/var/tmp/svslist.txt resource-1
# clresource show resource-1

=== Resources ===

  Resource:                                  resource-1
  Type:                                      SUNW.Proxy_SMF_failover
  Type_version:                              3.2 
  Group:                                     resource-group-1
  R_description:                             
  Resource_project_name:                     default
  Enabled{phats1}:                           True 
  Monitored{phats1}:                         True
 

Encapsulating an SMF Service Into a Multi-Master Proxy Resource Configuration

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Register the SMF proxy multi-master resource type.


    # clresourcetype register -f \
    /opt/SUNWscsmf/etc/SUNW.Proxy_SMF_multimaster SUNW.Proxy_SMF_multimaster
    
  3. Create the SMF multi-master resource group.


    # clresourcegroup create -p Maximum_primaries=m \
    -p Desired_primaries=n \
    [-n node-zone-list] \
    resource-group
    
    -p Maximum_primaries=m

    Specifies the maximum number of active primaries for this resource group.

    -p Desired_primaries=n

    Specifies the number of active primaries on which the resource group should attempt to start.

    -n node-zone-list

    Specifies a comma-separated, ordered list of nodes in which this resource group is to be available. The format of each entry in the list is node:zone. In this format, node specifies the node name and zone specifies the name of a global-cluster non-voting node. To specify the global-cluster voting node, or to specify a node without global-cluster non-voting nodes, specify only node.

    This list is optional. If you omit this list, the resource group is configured on the global-cluster voting nodes.

    resource-group

    Specifies your choice of the name of the scalable resource group to add. This name must begin with an ASCII character.

  4. Verify that the SMF proxy multi-master resource group has been created.


    # clresourcegroup show resource-group
    
  5. Add an SMF proxy multi-master resource to the resource group.


    # clresource create -g resource-group -t SUNW.Proxy_SMF_multimaster\
    [-p "extension-property[{node-specifier}]"=value, …] [-p standard-property=value, …] resource
    
    -g resource-group

    Specifies the name of a scalable service resource group that you previously created.

    -p "extension-property[{node-specifier}]"=value, …

    Specifies a comma-separated list of extension properties that you are setting for the resource. The extension properties that you can set depend on the resource type. To determine which extension properties to set, see the documentation for the resource type.

    node-specifier is an optional qualifier to the -p and -x options. This qualifier indicates that the extension property or properties on only the specified node or nodes are to be set when the resource is created. The specified extension properties on other nodes in the cluster are not set. If you do not include node-specifier, the specified extension properties on all nodes in the cluster are set. You can specify a node name or a node identifier for node-specifier. Examples of the syntax of node-specifier include the following:


    -p "myprop{phys-schost-1}"
    

    The braces ({}) indicate that you want to set the specified extension property on only node phys-schost-1. For most shells, the double quotation marks (") are required.

    You can also use the following syntax to set an extension property in two different global-cluster non-voting nodes on two different nodes:


    -x "myprop{phys-schost-1:zoneA,phys-schost-2:zoneB}"
    
    -p standard-property=value, …

    Specifies a comma-separated list of standard properties that you are setting for the resource. The standard properties that you can set depend on the resource type. For scalable services, you typically set the Port_list, Load_balancing_weights, and Load_balancing_policy properties. To determine which standard properties to set, see the documentation for the resource type and Appendix B, Standard Properties.

    resource

    Specifies your choice of the name of the resource to add.

    The resource is created in the enabled state.

  6. Verify that the SMF proxy multi-master application resource has been added and validated.


    # clresource show resource
    
  7. Bring the multi-master resource group online.


    # clresourcegroup online -M +
    

    Note –

    If you use the clresource status command to view the state of an SMF proxy resource, the status is displayed as online but not monitored. This is not an error. The SMF proxy resource is enabled and running; this status is displayed because no monitoring support is provided for resources of the SMF proxy resource types.



Example 2–53 Registering an SMF Proxy Multi-Master Resource Type

The following example registers the SUNW.Proxy_SMF_multimaster resource type.


# clresourcetype register SUNW.Proxy_SMF_multimaster
# clresourcetype show SUNW.Proxy_SMF_multimaster

Resource Type:            SUNW.Proxy_SMF_multimaster
RT_description:           Resource type for proxying multimastered SMF services 
RT_version:               3.2
API_version:              6
RT_basedir:               /opt/SUNWscsmf/bin
Single_instance:          False
Proxy:                    False
Init_nodes:               All potential masters
Installed_nodes:          <All>
Failover:                 True
Pkglist:                  SUNWscsmf
RT_system:                False
Global_zone:              False


Example 2–54 Creating and Adding an SMF Proxy Multi-Master Application Resource to a Resource Group

This example shows the creation of the resource group resource-group-1 and the addition to it of a resource of the multi-master proxy resource type SUNW.Proxy_SMF_multimaster.


# clresourcegroup create \
-p Maximum_primaries=2 \
-p Desired_primaries=2 \
-n phys-schost-1,phys-schost-2 \
resource-group-1
# clresourcegroup show resource-group-1

=== Resource Groups and Resources ===          

Resource Group:                        resource-group-1
RG_description:                        <NULL>
RG_mode:                               multimastered
RG_state:                              Unmanaged
RG_project_name:                       default
RG_affinities:                         <NULL>
Auto_start_on_new_cluster:             True
Failback:                              False
Nodelist:                              phys-schost-1 phys-schost-2
Maximum_primaries:                      2
Desired_primaries:                      2
Implicit_network_dependencies:         True
Global_resources_used:                 <All>
Pingpong_interval:                      3600
Pathprefix:                            <NULL>
RG_System:                             False
Suspend_automatic_recovery:                      False

# clresource create -g resource-group-1 -t SUNW.Proxy_SMF_multimaster \
-x proxied_service_instances=/var/tmp/svslist.txt resource-1
# clresource show resource-1

=== Resources ===

  Resource:                               resource-1
  Type:                                  SUNW.Proxy_SMF_multimaster
  Type_version:                          3.2 
  Group:                                 resource-group-1
  R_description:                         
  Resource_project_name:                 default
  Enabled{phats1}:                       True 
  Monitored{phats1}:                     True
 

Encapsulating an SMF Service Into a Scalable Proxy Resource Configuration

For information about scalable configuration, see How to Create a Scalable Resource Group.


Note –

Perform this procedure from any cluster node.


  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  2. Register the SMF proxy scalable resource type.


    # clresourcetype register -f \
    /opt/SUNWscsmf/etc/SUNW.Proxy_SMF_scalable SUNW.Proxy_SMF_scalable
    
  3. Create the SMF failover resource group that holds the shared address that the scalable resource group is to use. See How to Create a Failover Resource Group to create the failover resource group.

  4. Create the SMF proxy scalable resource group.


    # clresourcegroup create -p Maximum_primaries=m \
    -p Desired_primaries=n \
    -p RG_dependencies=depend-resource-group \
    [-n node-zone-list] \
    resource-group
    
    -p Maximum_primaries=m

    Specifies the maximum number of active primaries for this resource group.

    -p Desired_primaries=n

    Specifies the number of active primaries on which the resource group should attempt to start.

    -p RG_dependencies=depend-resource-group

    Identifies the resource group that contains the shared address resource on which the resource group that is being created depends.

    -n node-zone-list

    Specifies a comma-separated, ordered list of nodes in which this resource group is to be available. The format of each entry in the list is node:zone. In this format, node specifies the global-cluster voting node and zone specifies the name of a global-cluster non-voting node. To specify the global-cluster voting node, or to specify a node without global-cluster non-voting nodes, specify only node.

    This list is optional. If you omit this list, the resource group is created on all nodes in the cluster.

    The node list of the scalable resource group can contain the same nodename:zonename pairs as the node list of the shared address resource, or a subset of those pairs.

    resource-group

    Specifies your choice of the name of the scalable resource group to add. This name must begin with an ASCII character.

  5. Verify that the scalable resource group has been created.


    # clresourcegroup show resource-group
    
  6. Add an SMF proxy scalable resource to the resource group.


    # clresource create -g resource-group -t SUNW.Proxy_SMF_scalable \
    -p Network_resources_used=network-resource[,network-resource...] \
    -p Scalable=True \
    [-p "extension-property[{node-specifier}]"=value, …] [-p standard-property=value, …] resource
    
    -g resource-group

    Specifies the name of a scalable service resource group that you previously created.

    -p Network_resources_used= network-resource[,network-resource...]

    Specifies the list of network resources (shared addresses) on which this resource depends.

    -p Scalable=True

    Specifies that this resource is scalable.

    -p "extension-property[{node-specifier}]"=value, …

    Specifies that you are setting extension properties for the resource. To determine which extension properties to set, see the documentation for the resource type.

    node-specifier is an optional qualifier to the -p and -x options. This qualifier indicates that the extension property or properties on only the specified node or nodes are to be set when the resource is created. The specified extension properties on other nodes in the cluster are not set. If you do not include node-specifier, the specified extension properties on all nodes in the cluster are set. You can specify a node name or a node identifier for node-specifier. Examples of the syntax of node-specifier include the following:


    -p "myprop{phys-schost-1}"
    

    The braces ({}) indicate that you want to set the specified extension property on only node phys-schost-1. For most shells, the double quotation marks (") are required.

    You can also use the following syntax to set an extension property in two different global-cluster non-voting nodes on two different global-cluster voting nodes:


    -x "myprop{phys-schost-1:zoneA,phys-schost-2:zoneB}"
    
    -p standard-property=value, …

    Specifies a comma-separated list of standard properties that you are setting for the resource. The standard properties that you can set depend on the resource type. For scalable services, you typically set the Port_list, Load_balancing_weights, and Load_balancing_policy properties. To determine which standard properties to set, see the documentation for the resource type and Appendix B, Standard Properties.

    resource

    Specifies your choice of the name of the resource to add.

    The resource is created in the enabled state.

  7. Verify that the SMF proxy scalable application resource has been added and validated.


    # clresource show resource
    
  8. Bring the SMF proxy scalable resource group online.


    # clresourcegroup online -M +
    

    Note –

    If you use the clresource status command to view the state of a resource of the SMF proxy resource type, the status is displayed as online but not monitored. This is not an error. The SMF proxy resource is enabled and running; the status is reported this way because no monitoring support is provided for resources of the SMF proxy resource type.



Example 2–55 Registering an SMF Proxy Scalable Resource Type

The following example registers the SUNW.Proxy_SMF_scalable resource type.


# clresourcetype register SUNW.Proxy_SMF_scalable
# clresourcetype show SUNW.Proxy_SMF_scalable

Resource Type:            SUNW.Proxy_SMF_scalable
RT_description:           Resource type for proxying scalable SMF services
RT_version:               3.2
API_version:              6
RT_basedir:               /opt/SUNWscsmf/bin
Single_instance:          False
Proxy:                    False
Init_nodes:               All potential masters
Installed_nodes:          <All>
Failover:                 True
Pkglist:                  SUNWscsmf
RT_system:                False
Global_zone:              False


Example 2–56 Creating and Adding an SMF Proxy Scalable Application Resource to a Resource Group

This example shows the creation of the scalable resource group resource-group-1 and the addition of an SMF proxy scalable application resource of type SUNW.Proxy_SMF_scalable to that resource group.


# clresourcegroup create \
-p Maximum_primaries=2 \
-p Desired_primaries=2 \
-p RG_dependencies=resource-group-2 \
-n phys-schost-1,phys-schost-2 \
resource-group-1
# clresourcegroup show resource-group-1

=== Resource Groups and Resources ===          

Resource Group:                       resource-group-1
RG_description:                       <NULL>
RG_mode:                              Scalable
RG_state:                             Unmanaged
RG_project_name:                      default
RG_affinities:                        <NULL>
Auto_start_on_new_cluster:            True
Failback:                             False
Nodelist:                             phys-schost-1 phys-schost-2
Maximum_primaries:                    2
Desired_primaries:                    2
RG_dependencies:                      resource-group-2
Implicit_network_dependencies:        True
Global_resources_used:                <All>
Pingpong_interval:                    3600
Pathprefix:                           <NULL>
RG_System:                            False
Suspend_automatic_recovery:           False

# clresource create -g resource-group-1 -t SUNW.Proxy_SMF_scalable \
-x proxied_service_instances=/var/tmp/svslist.txt resource-1
# clresource show resource-1

=== Resources ===

  Resource:                            resource-1
  Type:                                SUNW.Proxy_SMF_scalable
  Type_version:                        3.2 
  Group:                               resource-group-1
  R_description:                       
  Resource_project_name:               default
  Enabled{phats1}:                     True 
  Monitored{phats1}:                   True
 

Tuning Fault Monitors for Sun Cluster Data Services

Each data service that is supplied with the Sun Cluster product has a built-in fault monitor. The fault monitor periodically probes the resource to verify that the data service that the resource represents is operating correctly.

The fault monitor is contained in the resource that represents the application for which the data service was written. You create this resource when you register and configure the data service. For more information, see the documentation for the data service.

System properties and extension properties of this resource control the behavior of the fault monitor. The default values of these properties determine the preset behavior of the fault monitor. The preset behavior should be suitable for most Sun Cluster installations. Therefore, you should tune a fault monitor only if you need to modify this preset behavior.

Tuning a fault monitor involves the following tasks: setting the interval between fault monitor probes, setting the timeout for fault monitor probes, defining the criteria for persistent faults, and specifying the failover behavior of a resource. These tasks are described in the sections that follow.

Perform these tasks when you register and configure the data service. For more information, see the documentation for the data service.


Note –

A resource's fault monitor is started when you bring online the resource group that contains the resource. You do not need to start the fault monitor explicitly.


Setting the Interval Between Fault Monitor Probes

To determine whether a resource is operating correctly, the fault monitor probes this resource periodically. The interval between fault monitor probes affects the availability of the resource and the performance of your system. A shorter interval enables the fault monitor to detect faults sooner, which increases the availability of the resource, but the more frequent probes add processing overhead that can degrade performance. A longer interval reduces this overhead but delays the detection of faults.

The optimum interval between fault monitor probes also depends on the time that is required to respond to a fault in the resource. This time depends on how the complexity of the resource affects the time that is required for operations such as restarting the resource.

To set the interval between fault monitor probes, set the Thorough_probe_interval system property of the resource to the interval in seconds that you require.
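
For example, assuming a resource that is named resource-1, the following command sets the interval between probes to 120 seconds:


# clresource set -p Thorough_probe_interval=120 resource-1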

Setting the Timeout for Fault Monitor Probes

The timeout for fault monitor probes specifies the length of time that a fault monitor waits for a response from a resource to a probe. If the fault monitor does not receive a response within this timeout, the fault monitor treats the resource as faulty. The time that a resource requires to respond to a fault monitor probe depends on the operations that the fault monitor performs to probe the resource. For information about operations that a data service's fault monitor performs to probe a resource, see the documentation for the data service.

The time that is required for a resource to respond also depends on factors that are unrelated to the fault monitor or the application, such as the load on the system and the amount of traffic on the network.

To set the timeout for fault monitor probes, set the Probe_timeout extension property of the resource to the timeout in seconds that you require.
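
For example, assuming a resource that is named resource-1, the following command sets the probe timeout to 90 seconds:


# clresource set -p Probe_timeout=90 resource-1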

Defining the Criteria for Persistent Faults

To minimize the disruption that transient faults in a resource cause, a fault monitor restarts the resource in response to such faults. For persistent faults, action that is more disruptive than a restart is required: a failover resource is failed over to another node, and a scalable resource is taken offline.

A fault monitor treats a fault as persistent if the number of complete failures of a resource exceeds a specified threshold within a specified retry interval. Defining the criteria for persistent faults enables you to set the threshold and the retry interval to accommodate the performance characteristics of your cluster and your availability requirements.

Complete Failures and Partial Failures of a Resource

A fault monitor treats some faults as a complete failure of a resource. A complete failure typically causes a complete loss of service. The following failures are examples of a complete failure:

A complete failure causes the fault monitor to increase the count of complete failures in the retry interval by 1.

A fault monitor treats other faults as a partial failure of a resource. A partial failure is less serious than a complete failure, and typically causes a degradation of service, but not a complete loss of service. An example of a partial failure is an incomplete response from a data service server before a fault monitor probe is timed out.

A partial failure causes the fault monitor to increase the count of complete failures in the retry interval by a fractional amount. Partial failures are accumulated over the retry interval.
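
For example, if a particular data service's fault monitor counted each partial failure as one half of a complete failure (the actual fraction depends on the data service), four partial failures within the retry interval would contribute the same amount to the failure count as two complete failures.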

The following characteristics of partial failures depend on the data service: the faults that the fault monitor treats as a partial failure, and the fractional amount that each partial failure adds to the count of complete failures.

For information about faults that a data service's fault monitor detects, see the documentation for the data service.

Dependencies of the Threshold and the Retry Interval on Other Properties

The maximum length of time that is required for a single restart of a faulty resource is the sum of the values of the Thorough_probe_interval system property and the Probe_timeout extension property.

To ensure that you allow enough time for the threshold to be reached within the retry interval, use the following expression to calculate values for the retry interval and the threshold:

retry_interval >= 2 × threshold × (thorough_probe_interval + probe_timeout)

The factor of 2 accounts for partial probe failures that do not immediately cause the resource to be failed over or taken offline.
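
For example, assume hypothetical values of 60 seconds for thorough_probe_interval, 30 seconds for probe_timeout, and a threshold of 2 complete failures. The retry interval must then satisfy the following calculation:

retry_interval >= 2 × 2 × (60 + 30) = 360 seconds

In this case, you would set the retry interval to at least 360 seconds.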

System Properties for Setting the Threshold and the Retry Interval

To set the threshold and the retry interval, set the following system properties of the resource:

Retry_count

Specifies the threshold, which is the maximum allowed number of complete failures of the resource within the retry interval.

Retry_interval

Specifies the length, in seconds, of the retry interval over which complete failures are counted.
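
For example, assuming a resource that is named resource-1, the following command sets a threshold of 2 complete failures within a retry interval of 360 seconds:


# clresource set -p Retry_count=2 -p Retry_interval=360 resource-1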

Specifying the Failover Behavior of a Resource

The failover behavior of a resource determines how the RGM responds to faults such as the failure of the resource to start or the failure of the resource to stop.

To specify the failover behavior of a resource, set the Failover_mode system property of the resource. For information about the possible values of this property, see the description of the Failover_mode system property in Resource Properties.
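
For example, assuming a resource that is named resource-1, the following command sets the Failover_mode system property to SOFT:


# clresource set -p Failover_mode=SOFT resource-1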