Distributing Online Resource Groups Among Cluster Nodes

For maximum availability or optimum performance, some combinations of services require a specific distribution of online resource groups among cluster nodes. Distributing online resource groups involves creating affinities between resource groups that enforce the required distribution, both when the cluster is first brought up and when resource groups fail over or are switched over.

This section provides the following examples of how to use resource group affinities to distribute online resource groups among cluster nodes:

- Enforcing Collocation of a Resource Group With Another Resource Group
- Specifying a Preferred Collocation of a Resource Group With Another Resource Group
- Distributing a Set of Resource Groups Evenly Among Cluster Nodes
- Specifying That a Critical Service Has Precedence
- Delegating the Failover or Switchover of a Resource Group
- Combining Affinities Between Resource Groups

Resource Group Affinities

An affinity between resource groups restricts the nodes on which the resource groups can be brought online simultaneously. In each affinity, a source resource group declares an affinity for a target resource group or several target resource groups. To create an affinity between resource groups, set the RG_affinities resource group property of the source as follows:

-p RG_affinities=affinity-list
affinity-list

Specifies a comma-separated list of affinities between the source resource group and a target resource group or several target resource groups. You may specify a single affinity or more than one affinity in the list.

Specify each affinity in the list as follows:

operator target-rg

Note - Do not include a space between operator and target-rg.


operator

Specifies the type of affinity that you are creating. For more information, see Table 2-3.

target-rg

Specifies the resource group that is the target of the affinity that you are creating.

Table 2-3 Types of Affinities Between Resource Groups

+ (weak positive)
If possible, the source is brought online on a node or on nodes where the target is online or starting. However, the source and the target are allowed to be online on different nodes.

++ (strong positive)
The source is brought online only on a node or on nodes where the target is online or starting. The source and the target are not allowed to be online on different nodes.

- (weak negative)
If possible, the source is brought online on a node or on nodes where the target is not online or starting. However, the source and the target are allowed to be online on the same node.

-- (strong negative)
The source is brought online only on a node or on nodes where the target is not online. The source and the target are not allowed to be online on the same node.

+++ (strong positive with failover delegation)
Same as strong positive, except that an attempt by the source to fail over is delegated to the target. For more information, see Delegating the Failover or Switchover of a Resource Group.
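
A resource group can declare more than one of these affinities in a single comma-separated list. For example, the following command is a sketch (rg1, rg2, and rg3 are hypothetical resource groups) that declares on rg1 both a weak positive affinity for rg2 and a strong negative affinity for rg3:

# clresourcegroup set -p RG_affinities=+rg2,--rg3 rg1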

Weak affinities take precedence over Nodelist preference ordering.

The current state of other resource groups might prevent a strong affinity from being satisfied on any node. In this situation, the resource group that is the source of the affinity remains offline. If other resource groups' states change to enable the strong affinities to be satisfied, the resource group that is the source of the affinity comes back online.


Note - Use caution when declaring a strong affinity on a source resource group for more than one target resource group. If all declared strong affinities cannot be satisfied, the source resource group remains offline.
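
If a source resource group unexpectedly remains offline, you can compare its declared affinities with the current state of its targets. The following commands are a sketch (rg1 is a hypothetical resource group name):

# clresourcegroup show -p RG_affinities rg1
# clresourcegroup status

The first command displays the affinities that rg1 declares. The second command shows where each resource group is currently online, which indicates whether the declared strong affinities can be satisfied.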


Enforcing Collocation of a Resource Group With Another Resource Group

A service that is represented by one resource group might depend so strongly on a service in a second resource group that both services must run on the same node. For example, an application that consists of multiple interdependent service daemons might require that all daemons run on the same node.

In this situation, force the resource group of the dependent service to be collocated with the resource group of the other service. To enforce collocation of a resource group with another resource group, declare on the resource group a strong positive affinity for the other resource group.

# clresourcegroup set|create -p RG_affinities=++target-rg source-rg
source-rg

Specifies the resource group that is the source of the strong positive affinity. This resource group is the resource group on which you are declaring a strong positive affinity for another resource group.

-p RG_affinities=++target-rg

Specifies the resource group that is the target of the strong positive affinity. This resource group is the resource group for which you are declaring a strong positive affinity.

A resource group follows the resource group for which it has a strong positive affinity. If the target resource group is relocated to a different node, the source resource group automatically switches to the same node as the target. However, a resource group that declares a strong positive affinity is prevented from failing over to a node on which the target of the affinity is not already running.


Note - Only failovers that are initiated by a resource monitor are prevented. If a node on which the source resource group and target resource group are running fails, both resource groups fail over to the same surviving node.


For example, a resource group rg1 declares a strong positive affinity for resource group rg2. If rg2 fails over to another node, rg1 also fails over to that node. This failover occurs even if all the resources in rg1 are operational. However, if a resource in rg1 attempts to fail over rg1 to a node where rg2 is not running, this attempt is blocked.

The source of a strong positive affinity might be offline on all nodes when you bring the target of the strong positive affinity online. In this situation, the source of the strong positive affinity is automatically brought online on the same node as the target.

For example, a resource group rg1 declares a strong positive affinity for resource group rg2. Both resource groups are initially offline on all nodes. If an administrator brings rg2 online on a node, rg1 is automatically brought online on the same node.

You can use the clresourcegroup suspend command to prevent a resource group from being brought online automatically due to strong affinities or cluster reconfiguration.
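
For example, the following commands suspend and later resume the automatic recovery actions of a resource group named rg1:

# clresourcegroup suspend rg1
# clresourcegroup resume rg1

While rg1 is suspended, it is not brought online automatically, even if a strong affinity or a cluster reconfiguration would otherwise bring it online.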

If you require a resource group that declares a strong positive affinity to be allowed to fail over, you must delegate the failover. For more information, see Delegating the Failover or Switchover of a Resource Group.

Example 2-45 Enforcing Collocation of a Resource Group With Another Resource Group

This example shows the command for modifying resource group rg1 to declare a strong positive affinity for resource group rg2. As a result of this affinity relationship, rg1 is brought online only on nodes where rg2 is running. This example assumes that both resource groups exist.

# clresourcegroup set -p RG_affinities=++rg2 rg1
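
Continuing this example, if rg2 is later switched to another node, rg1 follows it. The following command is a sketch in which pnode2 is a hypothetical node name:

# clresourcegroup switch -n pnode2 rg2

After the switchover completes, both rg2 and rg1 are online on pnode2.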

Specifying a Preferred Collocation of a Resource Group With Another Resource Group

A service that is represented by one resource group might use a service in a second resource group. As a result, these services run most efficiently if they run on the same node. For example, an application that uses a database runs most efficiently if the application and the database run on the same node. However, the services can run on different nodes because the reduction in efficiency is less disruptive than additional failovers of resource groups.

In this situation, specify that both resource groups should be collocated if possible. To specify preferred collocation of a resource group with another resource group, declare on the resource group a weak positive affinity for the other resource group.

# clresourcegroup set|create -p RG_affinities=+target-rg source-rg
source-rg

Specifies the resource group that is the source of the weak positive affinity. This resource group is the resource group on which you are declaring a weak positive affinity for another resource group.

-p RG_affinities=+target-rg

Specifies the resource group that is the target of the weak positive affinity. This resource group is the resource group for which you are declaring a weak positive affinity.

By declaring a weak positive affinity on one resource group for another resource group, you increase the probability of both resource groups running on the same node. The source of a weak positive affinity is first brought online on a node where the target of the weak positive affinity is already running. However, the source of a weak positive affinity does not fail over if a resource monitor causes the target of the affinity to fail over. Similarly, the source of a weak positive affinity does not fail over if the target of the affinity is switched over. In both situations, the source remains online on the node where the source is already running.


Note - If a node on which the source resource group and target resource group are running fails, both resource groups are restarted on the same surviving node.


Example 2-46 Specifying a Preferred Collocation of a Resource Group With Another Resource Group

This example shows the command for modifying resource group rg1 to declare a weak positive affinity for resource group rg2. As a result of this affinity relationship, rg1 and rg2 are first brought online on the same node. But if a resource in rg2 causes rg2 to fail over, rg1 remains online on the node where the resource groups were first brought online. This example assumes that both resource groups exist.

# clresourcegroup set -p RG_affinities=+rg2 rg1
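
By contrast with a strong positive affinity, if rg2 is later switched to another node, rg1 does not follow it. The following command is a sketch in which pnode2 is a hypothetical node name:

# clresourcegroup switch -n pnode2 rg2

rg2 moves to pnode2, but rg1 remains online on the node where it is already running.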

Distributing a Set of Resource Groups Evenly Among Cluster Nodes

Each resource group in a set of resource groups might impose the same load on the cluster. In this situation, by distributing the resource groups evenly among cluster nodes, you can balance the load on the cluster.

To distribute a set of resource groups evenly among cluster nodes, declare on each resource group a weak negative affinity for the other resource groups in the set.

# clresourcegroup set|create -p RG_affinities=neg-affinity-list source-rg
source-rg

Specifies the resource group that is the source of the weak negative affinity. This resource group is the resource group on which you are declaring a weak negative affinity for other resource groups.

-p RG_affinities=neg-affinity-list

Specifies a comma-separated list of weak negative affinities between the source resource group and the resource groups that are the target of the weak negative affinity. The target resource groups are the resource groups for which you are declaring a weak negative affinity.

By declaring a weak negative affinity on one resource group for other resource groups, you ensure that a resource group is always brought online on the most lightly loaded node in the cluster, that is, the node where the fewest other resource groups are running. On that node, the smallest number of weak negative affinities is violated.

Example 2-47 Distributing a Set of Resource Groups Evenly Among Cluster Nodes

This example shows the commands for modifying resource groups rg1, rg2, rg3, and rg4 to ensure that these resource groups are evenly distributed among the available nodes in the cluster. This example assumes that resource groups rg1, rg2, rg3, and rg4 exist.

# clresourcegroup set -p RG_affinities=-rg2,-rg3,-rg4 rg1
# clresourcegroup set -p RG_affinities=-rg1,-rg3,-rg4 rg2
# clresourcegroup set -p RG_affinities=-rg1,-rg2,-rg4 rg3
# clresourcegroup set -p RG_affinities=-rg1,-rg2,-rg3 rg4
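
The preceding commands follow a regular pattern: each resource group declares a weak negative affinity for every other resource group in the set. As a convenience, the following POSIX shell loop is a sketch (assuming resource groups rg1 through rg4 already exist) that generates the same four commands:

for rg in rg1 rg2 rg3 rg4; do
    list=""
    for other in rg1 rg2 rg3 rg4; do
        # Skip the resource group itself; it declares affinities
        # only for the other members of the set.
        [ "$other" = "$rg" ] && continue
        # Append "-other" to the comma-separated affinity list.
        list="${list:+$list,}-$other"
    done
    clresourcegroup set -p RG_affinities="$list" "$rg"
done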

Specifying That a Critical Service Has Precedence

A cluster might be configured to run a combination of mission-critical services and noncritical services. For example, a database that supports a critical customer service might run in the same cluster as noncritical research tasks.

To ensure that the noncritical services do not affect the performance of the critical service, specify that the critical service has precedence. By specifying that the critical service has precedence, you prevent noncritical services from running on the same node as the critical service.

When all nodes are operational, the critical service runs on a different node from the noncritical services. However, a failure of the critical service might cause the service to fail over to a node where the noncritical services are running. In this situation, the noncritical services are taken offline immediately to ensure that the computing resources of the node are fully dedicated to the mission-critical service.

To specify that a critical service has precedence, declare on the resource group of each noncritical service a strong negative affinity for the resource group that contains the critical service.

# clresourcegroup set|create -p RG_affinities=--critical-rg noncritical-rg
noncritical-rg

Specifies the resource group that contains a noncritical service. This resource group is the resource group on which you are declaring a strong negative affinity for another resource group.

-p RG_affinities=--critical-rg

Specifies the resource group that contains the critical service. This resource group is the resource group for which you are declaring a strong negative affinity.

A resource group moves away from a resource group for which it has a strong negative affinity.

The source of a strong negative affinity might be offline on all nodes when you take the target of the strong negative affinity offline. In this situation, the source of the strong negative affinity is automatically brought online. In general, the resource group is brought online on the most preferred node, based on the order of the nodes in the node list and the declared affinities.

For example, a resource group rg1 declares a strong negative affinity for resource group rg2. Resource group rg1 is initially offline on all nodes, while resource group rg2 is online on a node. If an administrator takes rg2 offline, rg1 is automatically brought online.
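
The following command is a sketch of this behavior, continuing the example above:

# clresourcegroup offline rg2

When rg2 goes offline, the strong negative affinity that rg1 declares can be satisfied, so rg1 is brought online automatically on its most preferred node.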

You can use the clresourcegroup suspend command to prevent the source of a strong negative affinity from being brought online automatically due to strong affinities or cluster reconfiguration.

Example 2-48 Specifying That a Critical Service Has Precedence

This example shows the commands for modifying the noncritical resource groups ncrg1 and ncrg2 to ensure that the critical resource group mcdbrg has precedence over these resource groups. This example assumes that resource groups mcdbrg, ncrg1, and ncrg2 exist.

# clresourcegroup set -p RG_affinities=--mcdbrg ncrg1 ncrg2

Delegating the Failover or Switchover of a Resource Group

The source resource group of a strong positive affinity cannot fail over or be switched over to a node where the target of the affinity is not running. If you require the source resource group of a strong positive affinity to be allowed to fail over or be switched over, you must delegate the failover to the target resource group. When the target of the affinity fails over, the source of the affinity is forced to fail over with the target.


Note - You might need to switch over the source resource group of a strong positive affinity that is specified by the ++ operator. In this situation, switch over the target of the affinity and the source of the affinity at the same time.
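
The following command is a sketch of such a simultaneous switchover, in which rg1 declares a strong positive affinity (++) for rg2 and pnode2 is a hypothetical node name:

# clresourcegroup switch -n pnode2 rg2 rg1

Naming both resource groups in a single command switches the target and the source of the affinity together.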


To delegate failover or switchover of a resource group to another resource group, declare on the resource group a strong positive affinity with failover delegation for the other resource group.

# clresourcegroup set|create source-rg -p RG_affinities=+++target-rg
source-rg

Specifies the resource group that is delegating failover or switchover. This resource group is the resource group on which you are declaring a strong positive affinity with failover delegation for another resource group.

-p RG_affinities=+++target-rg

Specifies the resource group to which source-rg delegates failover or switchover. This resource group is the resource group for which you are declaring a strong positive affinity with failover delegation.

A resource group may declare a strong positive affinity with failover delegation for at most one resource group. However, a given resource group may be the target of strong positive affinities with failover delegation that are declared by any number of other resource groups.

A strong positive affinity with failover delegation is not fully symmetric. The target can come online while the source remains offline. However, if the target is offline, the source cannot come online.

If the target declares a strong positive affinity with failover delegation for a third resource group, failover or switchover is further delegated to the third resource group. The third resource group performs the failover or switchover, forcing the other resource groups to fail over or be switched over also.
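
For example, the following commands are a sketch (rg1, rg2, and rg3 are hypothetical resource groups) that builds such a chain of delegation, in which rg1 delegates failover to rg2, and rg2 delegates in turn to rg3:

# clresourcegroup set -p RG_affinities=+++rg2 rg1
# clresourcegroup set -p RG_affinities=+++rg3 rg2

An attempted failover of rg1 is delegated to rg2 and then to rg3; when rg3 fails over, rg1 and rg2 are forced to fail over with it.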

Example 2-49 Delegating the Failover or Switchover of a Resource Group

This example shows the command for modifying resource group rg1 to declare a strong positive affinity with failover delegation for resource group rg2. As a result of this affinity relationship, rg1 delegates failover or switchover to rg2. This example assumes that both resource groups exist.

# clresourcegroup set -p RG_affinities=+++rg2 rg1

Combining Affinities Between Resource Groups

You can create more complex behaviors by combining multiple affinities. For example, the state of an application might be recorded by a related replica server. The node selection requirements for this example are as follows:

- The application and the replica server must run on different nodes.
- If the application fails over from its current node, it should be brought online on the node where the replica server is running, because that node holds the most recent record of the application's state.
- If the application comes online on the node where the replica server is running, the replica server must move to a different node. If no other node is available, the replica server goes offline.

You can satisfy these requirements by configuring resource groups for the application and the replica server as follows:

- The resource group that contains the application declares a weak positive affinity for the resource group that contains the replica server. This affinity causes the application to fail over to the node where the replica server is running.
- The resource group that contains the replica server declares a strong negative affinity for the resource group that contains the application. This affinity causes the replica server to move away from any node where the application is running.

Example 2-50 Combining Affinities Between Resource Groups

This example shows the commands for combining affinities between the following resource groups:

- Resource group app-rg, which represents the application
- Resource group rep-rg, which represents the replica server

In this example, the resource groups declare affinities as follows:

- Resource group app-rg declares a weak positive affinity for resource group rep-rg.
- Resource group rep-rg declares a strong negative affinity for resource group app-rg.

This example assumes that both resource groups exist.

# clresourcegroup set -p RG_affinities=+rep-rg app-rg
# clresourcegroup set -p RG_affinities=--app-rg rep-rg

Zone Cluster Resource Group Affinities

The cluster administrator can specify affinities between a resource group in a zone cluster and another resource group in a zone cluster or a resource group on the global cluster.

You can use the following command to specify an affinity between resource groups in zone clusters:

# clresourcegroup set -p RG_affinities=affinity-type target-zc:target-rg source-zc:source-rg

Note - Do not include a space between affinity-type and target-zc:target-rg.

The resource group affinity type in a zone cluster can be one of the following: + (weak positive), ++ (strong positive), - (weak negative), or -- (strong negative). These operators have the same meanings as in Table 2-3.


Note - The affinity type +++ is not supported for zone clusters in this release.


Example 2-51 Specifying a Strong Positive Affinity Between Resource Groups in Zone Clusters

This example shows the command for specifying a strong positive affinity between resource groups in zone clusters. The resource group RG1 in zone cluster ZC1 declares a strong positive affinity for the resource group RG2 in zone cluster ZC2. To specify this affinity, use the following command:

# clresourcegroup set -p RG_affinities=++ZC2:RG2 ZC1:RG1

Example 2-52 Specifying a Strong Negative Affinity Between a Resource Group in a Zone Cluster and a Resource Group in the Global Cluster

This example shows the command for specifying a strong negative affinity between a resource group in a zone cluster and a resource group in the global cluster. The resource group RG1 in zone cluster ZC1 declares a strong negative affinity for the resource group RG2 in the global cluster. To specify this affinity, use the following command:

# clresourcegroup set -p RG_affinities=--global:RG2 ZC1:RG1