Oracle Solaris Cluster Data Services Planning and Administration Guide (Oracle Solaris Cluster 4.1)
1. Planning for Oracle Solaris Cluster Data Services
2. Administering Data Service Resources
Overview of Tasks for Administering Data Service Resources
Configuring and Administering Oracle Solaris Cluster Data Services
How to Register a Resource Type
How to Install and Register an Upgrade of a Resource Type
How to Migrate Existing Resources to a New Version of the Resource Type
How to Unregister Older Unused Versions of the Resource Type
How to Downgrade a Resource to an Older Version of Its Resource Type
How to Create a Failover Resource Group
How to Create a Scalable Resource Group
Configuring Failover and Scalable Data Services on Shared File Systems
How to Configure a Failover Application Using the ScalMountPoint Resource
How to Configure a Scalable Application Using the ScalMountPoint Resource
Tools for Adding Resources to Resource Groups
How to Add a Logical Hostname Resource to a Resource Group by Using the clsetup Utility
How to Add a Logical Hostname Resource to a Resource Group Using the Command-Line Interface
How to Add a Shared Address Resource to a Resource Group by Using the clsetup Utility
How to Add a Shared Address Resource to a Resource Group Using the Command-Line Interface
How to Add a Failover Application Resource to a Resource Group
How to Add a Scalable Application Resource to a Resource Group
Bringing Resource Groups Online
How to Bring Resource Groups Online
Switching Resource Groups to Preferred Primaries
How to Switch Resource Groups to Preferred Primaries
How to Quiesce a Resource Group
How to Quiesce a Resource Group Immediately
Suspending and Resuming the Automatic Recovery Actions of Resource Groups
Immediately Suspending Automatic Recovery by Killing Methods
How to Suspend the Automatic Recovery Actions of a Resource Group
How to Suspend the Automatic Recovery Actions of a Resource Group Immediately
How to Resume the Automatic Recovery Actions of a Resource Group
Disabling and Enabling Resource Monitors
How to Disable a Resource Fault Monitor
How to Enable a Resource Fault Monitor
How to Remove a Resource Group
Switching the Current Primary of a Resource Group
How to Switch the Current Primary of a Resource Group
Disabling Resources and Moving Their Resource Group Into the UNMANAGED State
How to Disable a Resource and Move Its Resource Group Into the UNMANAGED State
Displaying Resource Type, Resource Group, and Resource Configuration Information
Changing Resource Type, Resource Group, and Resource Properties
How to Change Resource Type Properties
How to Change Resource Group Properties
How to Change Resource Properties
How to Change Resource Dependency Properties
How to Modify a Logical Hostname Resource or a Shared Address Resource
Clearing the STOP_FAILED Error Flag on Resources
How to Clear the STOP_FAILED Error Flag on Resources
Clearing the Start_failed Resource State
How to Clear a Start_failed Resource State by Switching Over a Resource Group
How to Clear a Start_failed Resource State by Restarting a Resource Group
How to Clear a Start_failed Resource State by Disabling and Enabling a Resource
Upgrading a Preregistered Resource Type
Information for Registering the New Resource Type Version
Information for Migrating Existing Instances of the Resource Type
Reregistering Preregistered Resource Types After Inadvertent Deletion
How to Reregister Preregistered Resource Types After Inadvertent Deletion
Adding or Removing a Node to or From a Resource Group
Adding a Node to a Resource Group
How to Add a Node to a Scalable Resource Group
How to Add a Node to a Failover Resource Group
Removing a Node From a Resource Group
How to Remove a Node From a Scalable Resource Group
How to Remove a Node From a Failover Resource Group
How to Remove a Node From a Failover Resource Group That Contains Shared Address Resources
Example - Removing a Node From a Resource Group
Synchronizing the Startups Between Resource Groups and Device Groups
Managed Entity Monitoring by HAStoragePlus
Troubleshooting Monitoring for Managed Entities
Additional Administrative Tasks to Configure HAStoragePlus Resources for a Zone Cluster
How to Set Up the HAStoragePlus Resource Type for New Resources
How to Set Up the HAStoragePlus Resource Type for Existing Resources
Configuring an HAStoragePlus Resource for Cluster File Systems
Sample Entries in /etc/vfstab for Cluster File Systems
How to Set Up the HAStoragePlus Resource for Cluster File Systems
How to Delete an HAStoragePlus Resource Type for Cluster File Systems
Enabling Highly Available Local File Systems
Configuration Requirements for Highly Available Local File Systems
Format of Device Names for Devices Without a Volume Manager
Sample Entries in /etc/vfstab for Highly Available Local File Systems
How to Set Up the HAStoragePlus Resource Type by Using the clsetup Utility
How to Delete an HAStoragePlus Resource That Makes a Local Solaris ZFS Highly Available
Sharing a Highly Available Local File System Across Zone Clusters
Modifying Online the Resource for a Highly Available Local File System
How to Add File Systems Other Than Solaris ZFS to an Online HAStoragePlus Resource
How to Remove File Systems Other Than Solaris ZFS From an Online HAStoragePlus Resource
How to Add a Solaris ZFS Storage Pool to an Online HAStoragePlus Resource
How to Remove a Solaris ZFS Storage Pool From an Online HAStoragePlus Resource
Changing a ZFS Pool Configuration That Is Managed by an HAStoragePlus Resource
How to Change a ZFS Pool Configuration That Is Managed by an Online HAStoragePlus Resource
How to Recover From a Fault After Modifying the Zpools Property of an HAStoragePlus Resource
Changing the Cluster File System to a Local File System in an HAStoragePlus Resource
How to Change the Cluster File System to a Local File System in an HAStoragePlus Resource
Upgrading the HAStoragePlus Resource Type
Information for Registering the New Resource Type Version
Information for Migrating Existing Instances of the Resource Type
Distributing Online Resource Groups Among Cluster Nodes
Enforcing Collocation of a Resource Group With Another Resource Group
Specifying a Preferred Collocation of a Resource Group With Another Resource Group
Distributing a Set of Resource Groups Evenly Among Cluster Nodes
Specifying That a Critical Service Has Precedence
Delegating the Failover or Switchover of a Resource Group
Combining Affinities Between Resource Groups
Zone Cluster Resource Group Affinities
Configuring the Distribution of Resource Group Load Across Nodes
How to Configure Load Limits for a Node
How to Set Priority for a Resource Group
How to Set Load Factors for a Resource Group
Enabling Oracle Solaris SMF Services to Run With Oracle Solaris Cluster
Encapsulating an SMF Service Into a Failover Proxy Resource Configuration
Encapsulating an SMF Service Into a Multi-Master Proxy Resource Configuration
Encapsulating an SMF Service Into a Scalable Proxy Resource Configuration
Tuning Fault Monitors for Oracle Solaris Cluster Data Services
Setting the Interval Between Fault Monitor Probes
Setting the Timeout for Fault Monitor Probes
Defining the Criteria for Persistent Faults
Complete Failures and Partial Failures of a Resource
Dependencies of the Threshold and the Retry Interval on Other Properties
System Properties for Setting the Threshold and the Retry Interval
You can enable the automatic distribution of resource group load across nodes by setting load limits. You assign load factors to resource groups, and the load factors correspond to the defined load limits of the nodes.
The default behavior is to distribute resource group load evenly across all the available nodes. Each resource group is started on a node from its node list. The Resource Group Manager (RGM) chooses a node that best satisfies the configured load distribution policy. As resource groups are assigned to nodes by the RGM, the resource groups' load factors on each node are summed up to provide a total load. The total load is then compared against that node's load limits.
You can configure load limits in a global cluster or a zone cluster.
The factors you set to control load distribution on each node include load limits, resource group priority, and preemption mode. In the global cluster, you can set the Concentrate_load property to choose the preferred load distribution policy: to concentrate resource group load onto as few nodes as possible without exceeding load limits or to spread the load out as evenly as possible across all available nodes. The default behavior is to spread out the resource group load. Each resource group is still limited to running only on nodes in its node list, regardless of load factor and load limit settings.
Note - You can use the command line or the clsetup utility to configure load distribution for resource groups. The following procedure illustrates how to configure load distribution for resource groups using the clsetup utility. For instructions on using the command line to perform these procedures, see Configuring Load Limits in Oracle Solaris Cluster System Administration Guide.
This section contains the following procedures:

How to Configure Load Limits for a Node
How to Set Priority for a Resource Group
How to Set Load Factors for a Resource Group
How to Set Preemption Mode for a Resource Group
How to Concentrate Load Onto Fewer Nodes in the Cluster
How to Configure Load Limits for a Node

Each cluster node can have its own set of load limits. You assign load factors to resource groups, and the load factors correspond to the defined load limits of the nodes. You can set soft load limits (which can be exceeded) or hard load limits (which cannot be exceeded).

1. Start the clsetup utility.

phys-schost# clsetup

The clsetup menu is displayed.

2. Choose the menu item, Other Cluster Tasks.

The Other Cluster Tasks Menu is displayed.

3. Choose the menu item, Manage Resource Group Load Distribution.

The Manage Resource Group Load Distribution Menu is displayed.

4. Choose the menu item, Manage Load Limits.

The Manage load limits Menu is displayed. You can create a load limit, modify a load limit, or delete a load limit.

5. Type the number that corresponds to the option you want and press the Return key.

6. Type the number that corresponds to the node where you want to configure a load limit and press the Return key. If you want to set a load limit on a second node, select the option number for the second node and press the Return key. After you have selected all the nodes where you want to configure load limits, type q and press the Return key.

7. Type a name for the load limit and press the Return key. For example, type mem_load as the name of a load limit.

8. Type yes or no to indicate whether the load limit has a soft limit, and press the Return key. If you typed yes, type the soft limit value and press Enter.

9. Type yes or no to indicate whether the load limit has a hard limit, and press the Return key. If you typed yes, type the hard limit value and press Enter.

10. Press the Return key to create the load limit. The message Command completed successfully is displayed, along with the soft and hard load limits for the nodes you selected. Press the Return key to continue.

11. Return to the previous menu by typing q and pressing the Return key.
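As the Note above mentions, the same configuration can also be done from the command line. The following is a minimal sketch using the clnode command; the node name phys-schost-1 and the limit values are placeholders, and the load_limits property syntax (limitname@softlimit,hardlimit) is assumed from the clnode(1CL) interface.

```shell
# Set a load limit named mem_load on node phys-schost-1:
# soft limit 50 (can be exceeded), hard limit 75 (cannot be exceeded).
# Node name and values are examples only.
clnode set -p load_limits=mem_load@50,75 phys-schost-1

# Display the node configuration to verify the load limits took effect.
clnode show phys-schost-1
```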
How to Set Priority for a Resource Group

You can configure a resource group to have a higher priority so that it is less likely to be displaced from a specific node. If load limits are exceeded, lower-priority resource groups might be forced offline.

1. Start the clsetup utility.

phys-schost# clsetup

The clsetup menu is displayed.

2. Choose the menu item, Other Cluster Tasks.

The Other Cluster Tasks Menu is displayed.

3. Choose the menu item, Manage Resource Group Load Distribution.

The Manage Resource Group Load Distribution Menu is displayed.

4. Choose the menu item, Set the Priority of a Resource Group.

The Set the Priority of a Resource Group Menu is displayed.

5. Type the resource group whose priority you want to set and press the Return key.

The existing Priority value is displayed. The default Priority value is 500.

6. Type the new Priority value and press the Return key.

The Manage Resource Group Load Distribution Menu is displayed.
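The equivalent command-line form is a single property change on the resource group. A sketch, assuming a hypothetical resource group named rg1:

```shell
# Raise the priority of resource group rg1 above the default of 500,
# making it less likely to be displaced from its node under load.
clresourcegroup set -p priority=600 rg1

# Display the resource group configuration to verify the new value.
clresourcegroup show rg1
```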
How to Set Load Factors for a Resource Group

A load factor is a value that you assign to the load on a load limit. Load factors are assigned to a resource group, and those load factors correspond to the defined load limits of the nodes.

1. Start the clsetup utility.

phys-schost# clsetup

The clsetup menu is displayed.

2. Choose the menu item, Other Cluster Tasks.

The Other Cluster Tasks Menu is displayed.

3. Choose the menu item, Manage Resource Group Load Distribution.

The Manage Resource Group Load Distribution Menu is displayed.

4. Choose the menu item, Set the Load Factors of a Resource Group.

The Set the load factors of a Resource Group Menu is displayed.

5. Type the resource group for which you want to set load factors and press the Return key.

6. Type the load factors to set on the resource group you selected. For example, you can set a load factor called mem_load by typing mem_load@50. Press Ctrl-D when you are done.

The Manage Resource Group Load Distribution Menu is displayed.
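From the command line, load factors are set with the load_factors property. A sketch, assuming the mem_load limit from the earlier example and a hypothetical resource group rg1:

```shell
# Declare that resource group rg1 contributes 50 units of load
# toward the mem_load limit on whichever node hosts it.
# Multiple factors can be given as a comma-separated list.
clresourcegroup set -p load_factors=mem_load@50 rg1
```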
How to Set Preemption Mode for a Resource Group

The Preemption_mode property determines whether a resource group can be preempted from a node by a higher-priority resource group because of node overload. The property indicates the cost of moving a resource group from one node to another.

1. Start the clsetup utility.

phys-schost# clsetup

The clsetup menu is displayed.

2. Choose the menu item, Other Cluster Tasks.

The Other Cluster Tasks Menu is displayed.

3. Choose the menu item, Manage Resource Group Load Distribution.

The Manage Resource Group Load Distribution Menu is displayed.

4. Choose the menu item, Set the Preemption Mode of a Resource Group.

The Set the Preemption Mode of a Resource Group Menu is displayed.

5. Type the resource group whose preemption mode you want to set and press the Return key.

If the resource group has a preemption mode set, it is displayed, similar to the following:

The preemption mode property of "rg11" is currently set to the following: preemption mode: Has_Cost

6. Type the new preemption mode value and press the Return key. The three choices are Has_cost, No_cost, and Never.

The Manage Resource Group Load Distribution Menu is displayed.
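The command-line equivalent sets the preemption_mode property directly. A sketch, again assuming a hypothetical resource group rg1:

```shell
# Prevent rg1 from ever being preempted because of node overload.
# The other valid values are Has_cost and No_cost.
clresourcegroup set -p preemption_mode=Never rg1
```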
How to Concentrate Load Onto Fewer Nodes in the Cluster

Setting the Concentrate_load property to FALSE causes the cluster to spread resource group load evenly across all available nodes in the resource groups' node lists. By default, the Concentrate_load property is set to FALSE. If you set this property to TRUE, the cluster attempts to concentrate resource group load on the fewest possible nodes without exceeding any configured hard or soft load limits.

Note - When specifying Concentrate_load=TRUE, if a resource group RG2 declares a ++ or +++ affinity for a resource group RG1, avoid setting any nonzero load factors for RG2. Instead, set larger load factors for RG1 to account for the additional load that would be imposed by RG2 coming online on the same node as RG1. This allows the Concentrate_load feature to work as intended. Alternatively, you can set load factors on RG2 but avoid setting any hard load limits for those load factors; set only soft limits. This allows RG2 to come online even if the soft load limit is exceeded.

You can set the Concentrate_load property only in a global cluster; you cannot set this property in a zone cluster. In a zone cluster, the setting is always FALSE.

1. Start the clsetup utility.

phys-schost# clsetup

The clsetup menu is displayed.

2. Choose the menu item, Other Cluster Tasks.

The Other Cluster Tasks Menu is displayed.

3. Choose the menu item, Set the Concentrate Load Property of the Cluster.

The Set the Concentrate Load Property of the Cluster Menu is displayed. The current value of TRUE or FALSE is displayed.

4. Type the new value, TRUE or FALSE, and press the Return key.

5. Return to the previous menu by typing q and pressing the Return key.

The Other Cluster Tasks Menu is displayed.
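From the command line, Concentrate_load is a property of the cluster object rather than of any resource group. A sketch, assuming the cluster(1CL) set subcommand accepts the property in this form:

```shell
# Ask the RGM to pack resource groups onto as few nodes as possible,
# within the configured hard and soft load limits.
# Global cluster only; this property cannot be set in a zone cluster.
cluster set -p concentrate_load=true

# Display the global cluster configuration to confirm the setting.
cluster show -t global
```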