Oracle Solaris Cluster Data Services Planning and Administration Guide

Adding or Removing a Node to or From a Resource Group

The procedures in this section enable you to perform the following tasks:

  • Adding a node to a resource group

  • Removing a node from a resource group

The procedures differ slightly, depending on whether the node is being added to or removed from a failover resource group or a scalable resource group.

Failover resource groups contain network resources that both failover and scalable services use. Each IP subnetwork connected to the cluster has its own network resource that is specified and included in a failover resource group. The network resource is either a logical hostname or a shared address resource. Each network resource includes a list of IPMP groups that it uses. For failover resource groups, you must update the complete list of IPMP groups for each network resource that the resource group includes (the netiflist resource property).

The procedure for scalable resource groups involves the following steps:

  1. Repeating the procedure for failover groups that contain the network resources that the scalable resource uses

  2. Changing the scalable group to be mastered on the new set of hosts

For more information, see the clresourcegroup(1CL) man page.


Note - Run either procedure from any cluster node.


Adding a Node to a Resource Group

The procedure for adding a node to a resource group depends on whether the resource group is a scalable resource group or a failover resource group. For detailed instructions, see the following sections:

  • How to Add a Node to a Scalable Resource Group

  • How to Add a Node to a Failover Resource Group

To complete either procedure, you must supply the name of the resource group and the name of the node that you are adding. For a failover resource group, you also need the names of the IPMP groups that the network resources use on each node.

Also, be sure to verify that the new node is already a cluster member.
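
For example, cluster membership can be confirmed from any node with the following command (a quick check; the exact output format varies by release):

# clnode status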

How to Add a Node to a Scalable Resource Group

  1. For each network resource that a scalable resource in the resource group uses, configure the resource group that contains that network resource to run on the new node.

    See Step 1 through Step 5 in the following procedure for details.

  2. Add the new node to the list of nodes that can master the scalable resource group (the nodelist resource group property).

    This step overwrites the previous value of nodelist, and therefore you must include all of the nodes that can master the resource group here.

    # clresourcegroup set [-n node-zone-list] resource-group
    -n node-zone-list

    Specifies a comma-separated, ordered list of nodes that can master this resource group. This resource group is switched offline on all of the other nodes. The format of each entry in the list is node:zone. In this format, node specifies the node name and zone specifies the name of a global-cluster non-voting node. To specify the global-cluster voting node or to specify a node without global-cluster non-voting nodes, specify only node.

    This list is optional. If you omit this list, the Nodelist property is set to all nodes in the cluster.

    resource-group

    Specifies the name of the resource group to which the node is being added.
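
    For example, the following command (with hypothetical node and resource group names) adds phys-schost-4 as a fourth potential master of a scalable resource group:

    # clresourcegroup set -n phys-schost-1,phys-schost-2,phys-schost-3,phys-schost-4 rg-scalable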

  3. (Optional) Update the scalable resource's Load_balancing_weights property to assign a weight to the node that you are adding to the resource group.

    Otherwise, the weight defaults to 1. See the clresourcegroup(1CL) man page for more information.
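
    For example, assuming hypothetical node and resource names, the following command gives the new node three times the default weight; each entry takes the form weight@node:

    # clresource set -p Load_balancing_weights=1@phys-schost-1,1@phys-schost-2,3@phys-schost-4 scalable-resource-1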

How to Add a Node to a Failover Resource Group

  1. Display the current node list and the current list of IPMP groups that are configured for each resource in the resource group.
    # clresourcegroup show -v resource-group | grep -i nodelist
    # clresourcegroup show -v resource-group | grep -i netiflist

    Note - The output of the preceding commands identifies the nodes by node name. To identify node IDs, run the command clnode show -v | grep -i "Node ID".


  2. Update netiflist for the network resources that the node addition affects.

    This step overwrites the previous value of netiflist, and therefore you must include all the IPMP groups here.

    # clresource set -p netiflist=netiflist network-resource
    -p netiflist=netiflist

    Specifies a comma-separated list that identifies the IPMP groups that are on each node. Each element in netiflist must be in the form of netif@node. netif can be given as an IPMP group name, such as sc_ipmp0. The node can be identified by the node name or node ID, such as sc_ipmp0@1 or sc_ipmp0@phys-schost-1.

    network-resource

    Specifies the name of the network resource (logical hostname or shared address) that is being hosted on the netiflist entries.

  3. If the HAStoragePlus AffinityOn extension property equals True, add the node to the appropriate disk set or device group.
    • If you are using Solaris Volume Manager, use the metaset command.
      # metaset -s disk-set-name -a -h node-name
      -s disk-set-name

      Specifies the name of the disk set on which the metaset command is to work

      -a

      Adds a drive or host to the specified disk set

      -h node-name

      Specifies the node to be added to the disk set

    • SPARC: If you are using Veritas Volume Manager, use the clsetup utility.
      1. On any active cluster member, start the clsetup utility.
        # clsetup

        The Main Menu is displayed.

      2. On the Main Menu, type the number that corresponds to the option for device groups and volumes.
      3. On the Device Groups menu, type the number that corresponds to the option for adding a node to a VxVM device group.
      4. Respond to the prompts to add the node to the VxVM device group.
  4. Update the node list to include all of the nodes that can now master this resource group.

    This step overwrites the previous value of nodelist, and therefore you must include all of the nodes that can master the resource group here.

    # clresourcegroup set [-n node-zone-list] resource-group
    -n node-zone-list

    Specifies a comma-separated, ordered list of nodes that can master this resource group. This resource group is switched offline on all the other nodes. The format of each entry in the list is node:zone. In this format, node specifies the node name and zone specifies the name of a global-cluster non-voting node. To specify the global-cluster voting node, or to specify a node without global-cluster non-voting nodes, specify only node.

    This list is optional. If you omit this list, the Nodelist property is set to all nodes in the cluster.

    resource-group

    Specifies the name of the resource group to which the node is being added.

  5. Verify the updated information.
    # clresourcegroup show -v resource-group | grep -i nodelist
    # clresourcegroup show -v resource-group | grep -i netiflist

Example 2-29 Adding a Node to a Resource Group

This example shows how to add a global-cluster voting node (phys-schost-2) to a resource group (resource-group-1) that contains a logical hostname resource (schost-2).

# clresourcegroup show -v resource-group-1 | grep -i nodelist
  Nodelist:    phys-schost-1 phys-schost-3
# clresourcegroup show -v resource-group-1 | grep -i netiflist
  Res property name: NetIfList
    Res property class: extension
    Res property description: List of IPMP interfaces on each node
    Res property type: stringarray
    Res property value: sc_ipmp0@1 sc_ipmp0@3

(Only nodes 1 and 3 have been assigned IPMP groups. You must add an IPMP group for node 2.)

# clresource set -p netiflist=sc_ipmp0@1,sc_ipmp0@2,sc_ipmp0@3 schost-2
# metaset -s red -a -h phys-schost-2
# clresourcegroup set -n phys-schost-1,phys-schost-2,phys-schost-3 resource-group-1
# clresourcegroup show -v resource-group-1 | grep -i nodelist
 Nodelist:     phys-schost-1 phys-schost-2
               phys-schost-3
# clresourcegroup show -v resource-group-1 | grep -i netiflist
 Res property value: sc_ipmp0@1 sc_ipmp0@2
                     sc_ipmp0@3

Removing a Node From a Resource Group

The procedure for removing a node from a resource group depends on whether the resource group is a scalable resource group or a failover resource group. For detailed instructions, see the following sections:

  • How to Remove a Node From a Scalable Resource Group

  • How to Remove a Node From a Failover Resource Group

  • How to Remove a Node From a Failover Resource Group That Contains Shared Address Resources

To complete either procedure, you must supply the name of the node and the name of the resource group from which you are removing it.

Additionally, be sure to verify that the resource group is not mastered on the node that you are removing. If the resource group is mastered on the node that you are removing, run the clresourcegroup command to switch the resource group offline from that node. The following clresourcegroup command brings the resource group offline from a given node, provided that new-masters does not contain that node.

# clresourcegroup switch -n new-masters resource-group
-n new-masters

Specifies the node or nodes that are now to master the resource group.

resource-group

Specifies the name of the resource group that you are switching. This resource group is mastered on the node that you are removing.
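
For example, assuming hypothetical names, the following command switches resource-group-1 offline from phys-schost-3 by making phys-schost-1 and phys-schost-2 its masters:

# clresourcegroup switch -n phys-schost-1,phys-schost-2 resource-group-1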

For more information, see the clresourcegroup(1CL) man page.



Caution - If you plan to remove a node from all the resource groups, and you use a scalable services configuration, first remove the node from the scalable resource groups. Then remove the node from the failover groups.


How to Remove a Node From a Scalable Resource Group

A scalable service is configured as two resource groups, as follows:

  • A scalable resource group, which contains the scalable application resources

  • A failover resource group, which contains the shared address resources on which the scalable service depends

Additionally, the RG_dependencies property of the scalable resource group is set to configure the scalable group with a dependency on the failover resource group. For information about this property, see the rg_properties(5) man page.
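
For example, assuming hypothetical resource group names, a dependency of this kind would be set as follows:

# clresourcegroup set -p RG_dependencies=rg-failover rg-scalable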

For details about scalable service configuration, see Oracle Solaris Cluster Concepts Guide.

After you remove a node from the scalable resource group, the scalable service can no longer be brought online on that node. To remove a node from the scalable resource group, perform the following steps.

  1. Remove the node from the list of nodes that can master the scalable resource group (the nodelist resource group property).
    # clresourcegroup set [-n node-zone-list] scalable-resource-group
    -n node-zone-list

    Specifies a comma-separated, ordered list of nodes that can master this resource group. This resource group is switched offline on all the other nodes. The format of each entry in the list is node:zone. In this format, node specifies the node name and zone specifies the name of a global-cluster non-voting node. To specify the global-cluster voting node, or to specify a node without global-cluster non-voting nodes, specify only node.

    This list is optional. If you omit this list, the Nodelist property is set to all nodes in the cluster.

    scalable-resource-group

    Specifies the name of the resource group from which the node is being removed.
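
    For example, assuming hypothetical names, the following command removes phys-schost-3 from a scalable resource group by listing only the remaining masters:

    # clresourcegroup set -n phys-schost-1,phys-schost-2 rg-scalable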

  2. (Optional) Remove the node from the failover resource group that contains the shared address resource.

    For details, see How to Remove a Node From a Failover Resource Group That Contains Shared Address Resources.

  3. (Optional) Update the Load_balancing_weights property of the scalable resource to remove the weight of the node that you are removing from the resource group.

See Also

The clresourcegroup(1CL) man page.

How to Remove a Node From a Failover Resource Group

Perform the following steps to remove a node from a failover resource group.



Caution - If you plan to remove a node from all of the resource groups, and you use a scalable services configuration, first remove the node from the scalable resource groups. Then use this procedure to remove the node from the failover groups.



Note - If the failover resource group contains shared address resources that scalable services use, see How to Remove a Node From a Failover Resource Group That Contains Shared Address Resources.


  1. Update the node list to include all of the nodes that can now master this resource group.

    This step removes the node and overwrites the previous value of the node list. Be sure to include all of the nodes that can master the resource group here.

    # clresourcegroup set [-n node-zone-list] failover-resource-group
    -n node-zone-list

    Specifies a comma-separated, ordered list of nodes that can master this resource group. This resource group is switched offline on all the other nodes. The format of each entry in the list is node:zone. In this format, node specifies the node name and zone specifies the name of a global-cluster non-voting node. To specify the global-cluster voting node, or to specify a node without global-cluster non-voting nodes, specify only node.

    This list is optional. If you omit this list, the Nodelist property is set to all nodes in the cluster.

    failover-resource-group

    Specifies the name of the resource group from which the node is being removed.

  2. Display the current list of IPMP groups that are configured for each resource in the resource group.
    # clresourcegroup show -v failover-resource-group | grep -i netiflist
  3. Update netiflist for network resources that the removal of the node affects.

    This step overwrites the previous value of netiflist. Be sure to include all of the IPMP groups here.

    # clresource set -p netiflist=netiflist network-resource

    Note - The output of the preceding command identifies the nodes by node name. To find the node IDs, run the command clnode show -v | grep -i "Node ID".


    -p netiflist=netiflist

    Specifies a comma-separated list that identifies the IPMP groups that are on each node. Each element in netiflist must be in the form of netif@node. netif can be given as an IPMP group name, such as sc_ipmp0. The node can be identified by the node name or node ID, such as sc_ipmp0@1 or sc_ipmp0@phys-schost-1.

    network-resource

    Specifies the name of the network resource that is hosted on the netiflist entries.


    Note - Oracle Solaris Cluster does not support the use of the adapter name for netif.


  4. Verify the updated information.
    # clresourcegroup show -v failover-resource-group | grep -i nodelist
    # clresourcegroup show -v failover-resource-group | grep -i netiflist

How to Remove a Node From a Failover Resource Group That Contains Shared Address Resources

In a failover resource group that contains shared address resources that scalable services use, a node can appear in the following locations:

  • The node list of the failover resource group

  • The auxnodelist of the shared address resource

To remove the node from the node list of the failover resource group, follow the procedure How to Remove a Node From a Failover Resource Group.

To modify the auxnodelist of the shared address resource, you must remove and recreate the shared address resource.

If you remove the node from the failover group's node list, you can continue to use the shared address resource on that node to provide scalable services. To continue to use the shared address resource, you must add the node to the auxnodelist of the shared address resource. To add the node to the auxnodelist, perform the following steps.


Note - You can also use the following procedure to remove the node from the auxnodelist of the shared address resource. To remove the node from the auxnodelist, you must delete and recreate the shared address resource.


  1. Switch the scalable service resource offline.
  2. Remove the shared address resource from the failover resource group.
  3. Create the shared address resource.

    Add the node ID or node name of the node that you removed from the failover resource group to the auxnodelist.

    # clressharedaddress create -g failover-resource-group \
     -X new-auxnodelist shared-address 
    failover-resource-group

    The name of the failover resource group that used to contain the shared address resource.

    new-auxnodelist

    The new, modified auxnodelist with the desired node added or removed.

    shared-address

    The name of the shared address.
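
    A minimal sketch of this sequence, assuming hypothetical resource, group, and node names, where phys-schost-3 was removed from the failover group's node list but should continue to host the shared address for scalable services (here the resources are taken offline by disabling them):

    # clresource disable scalable-resource-1
    # clresource disable shared-address-1
    # clresource delete shared-address-1
    # clressharedaddress create -g rg-failover -X phys-schost-3 shared-address-1
    # clresource enable shared-address-1
    # clresource enable scalable-resource-1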

Example 2-30 Removing a Node From a Resource Group

This example shows how to remove a node (phys-schost-3) from a resource group (resource-group-1) that contains a logical hostname resource (schost-1).

# clresourcegroup show -v resource-group-1 | grep -i nodelist
  Nodelist:    phys-schost-1 phys-schost-2
               phys-schost-3
# clresourcegroup set -n phys-schost-1,phys-schost-2 resource-group-1
# clresourcegroup show -v resource-group-1 | grep -i netiflist
  Res property name: NetIfList
    Res property class: extension
    Res property description: List of IPMP interfaces on each node
    Res property type: stringarray
    Res property value: sc_ipmp0@1 sc_ipmp0@2
                        sc_ipmp0@3

(sc_ipmp0@3 is the IPMP group to be removed.)

# clresource set -p netiflist=sc_ipmp0@1,sc_ipmp0@2 schost-1
# clresourcegroup show -v resource-group-1 | grep -i nodelist
Nodelist:       phys-schost-1 phys-schost-2
# clresourcegroup show -v resource-group-1 | grep -i netiflist
 Res property value: sc_ipmp0@1 sc_ipmp0@2