Oracle Solaris Cluster Data Services Planning and Administration Guide, Oracle Solaris Cluster 4.1
Adding or Removing a Node to or From a Resource Group
The procedures in this section enable you to perform the following tasks.
Configuring a cluster node to be an additional master of a resource group
Removing a node from a resource group
The procedures are slightly different, depending on whether you plan to add or remove the node to or from a failover or scalable resource group.
Failover resource groups contain network resources that both failover and scalable services use. Each IP subnetwork connected to the cluster has its own network resource that is specified and included in a failover resource group. The network resource is either a logical hostname or a shared address resource. Each network resource includes a list of IPMP groups that it uses. For failover resource groups, you must update the complete list of IPMP groups for each network resource that the resource group includes (the netiflist resource property).
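For example, to see which IPMP groups the network resources in a failover resource group currently use, you can run the same kind of query that appears in the procedures later in this section (the resource group name here is hypothetical):
# clresourcegroup show -v resource-group-1 | grep -i netiflist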
The procedure for scalable resource groups involves the following steps:
Repeating the procedure for failover groups that contain the network resources that the scalable resource uses
Changing the scalable group to be mastered on the new set of hosts
For more information, see the clresourcegroup(1CL) man page.
Note - Run either procedure from any cluster node.
The procedure to follow to add a node to a resource group depends on whether the resource group is a scalable resource group or a failover resource group. For detailed instructions, see the following sections:
How to Add a Node to a Scalable Resource Group
How to Add a Node to a Failover Resource Group
You must supply the following information to complete the procedure.
The names and node IDs of all of the cluster nodes
The names of the resource groups to which you are adding the node
The name of the IPMP group that is to host the network resources that are used by the resource group on all of the nodes
Also, be sure to verify that the new node is already a cluster member.
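The following commands, which use the same query patterns that appear elsewhere in this section, are one way to gather this information and to confirm that the new node is a cluster member:
# clnode status
# clnode show -v | grep -i "Node ID"
# clresourcegroup show | grep -i nodelist
# clresourcegroup show -v | grep -i netiflist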
To add a node to a scalable resource group, first add the node to every failover resource group that contains a network resource that the scalable resource uses. See Step 1 through Step 5 of the procedure for failover resource groups that follows for details.
Next, add the new node to the list of nodes that can master the scalable resource group (the nodelist resource group property). This step overwrites the previous value of nodelist, and therefore you must include all of the nodes that can master the resource group here.
# clresourcegroup set [-n nodelist] resource-group
-n nodelist
Specifies a comma-separated, ordered list of nodes that can master this resource group. This resource group is switched offline on all of the other nodes.
This list is optional. If you omit this list, the Nodelist property is set to all nodes in the cluster.
resource-group
Specifies the name of the resource group to which the node is being added.
(Optional) Update the scalable resource's Load_balancing_weights property to assign a weight to the node that you are adding. Otherwise, the weight defaults to 1. See the clresourcegroup(1CL) man page for more information.
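For illustration only, the following command gives the node that is being added three times the weight of the existing nodes; the resource name and node names are hypothetical, and the weight@node syntax is described in the r_properties(5) man page:
# clresource set -p Load_balancing_weights=1@phys-schost-1,1@phys-schost-2,3@phys-schost-3 scalable-resource-1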
To add a node to a failover resource group, first display the current node list and the current list of IPMP groups that are configured for each resource in the resource group:
# clresourcegroup show -v resource-group | grep -i nodelist
# clresourcegroup show -v resource-group | grep -i netiflist
Note - The output of the command line for nodelist and netiflist identifies the nodes by node name. To identify node IDs, run the command clnode show -v | grep -i "Node ID".
Next, update netiflist for the network resources that the node addition affects. This step overwrites the previous value of netiflist, and therefore you must include all the IPMP groups here.
# clresource set -p netiflist=netiflist network-resource
-p netiflist=netiflist
Specifies a comma-separated list that identifies the IPMP groups that are on each node. Each element in netiflist must be in the form of netif@node. netif can be given as an IPMP group name, such as sc_ipmp0. The node can be identified by the node name or node ID, such as sc_ipmp0@1 or sc_ipmp0@phys-schost-1.
network-resource
Specifies the name of the network resource (logical hostname or shared address) that is being hosted on the netiflist entries.
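As an equivalent illustration that identifies the nodes by node name rather than node ID (the names are hypothetical; compare the node-ID form used in Example 2-31 later in this section):
# clresource set -p netiflist=sc_ipmp0@phys-schost-1,sc_ipmp0@phys-schost-2,sc_ipmp0@phys-schost-3 schost-2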
If the resource group uses Solaris Volume Manager disk sets, add the new node as a host of each disk set that the resource group uses:
# metaset -s disk-set-name -a -h node-name
-s disk-set-name
Specifies the name of the disk set on which the metaset command is to work
-a
Adds a drive or host to the specified disk set
-h node-name
Specifies the node to be added to the disk set
Then update the node list to include all of the nodes that can now master this resource group. This step overwrites the previous value of nodelist, and therefore you must include all of the nodes that can master the resource group here.
# clresourcegroup set [-n nodelist] resource-group
-n nodelist
Specifies a comma-separated, ordered list of nodes that can master this resource group. This resource group is switched offline on all the other nodes.
This list is optional. If you omit this list, the Nodelist property is set to all nodes in the cluster.
resource-group
Specifies the name of the resource group to which the node is being added.
Finally, verify the updated information:
# clresourcegroup show -v resource-group | grep -i nodelist
# clresourcegroup show -v resource-group | grep -i netiflist
Example 2-31 Adding a Node to a Resource Group
This example shows how to add a cluster node (phys-schost-2) to a resource group (resource-group-1) that contains a logical hostname resource (schost-2).
# clresourcegroup show -v resource-group-1 | grep -i nodelist
Nodelist: phys-schost-1 phys-schost-3
# clresourcegroup show -v resource-group-1 | grep -i netiflist
Res property name: NetIfList
Res property class: extension
List of IPMP interfaces on each node
Res property type: stringarray
Res property value: sc_ipmp0@1 sc_ipmp0@3
(Only nodes 1 and 3 have been assigned IPMP groups. You must add an IPMP group for node 2.)
# clresource set -p netiflist=sc_ipmp0@1,sc_ipmp0@2,sc_ipmp0@3 schost-2
# metaset -s red -a -h phys-schost-2
# clresourcegroup set -n phys-schost-1,phys-schost-2,phys-schost-3 resource-group-1
# clresourcegroup show -v resource-group-1 | grep -i nodelist
Nodelist: phys-schost-1 phys-schost-2 phys-schost-3
# clresourcegroup show -v resource-group-1 | grep -i netiflist
Res property value: sc_ipmp0@1 sc_ipmp0@2 sc_ipmp0@3
The procedure to follow to remove a node from a resource group depends on whether the resource group is a scalable resource group or a failover resource group. For detailed instructions, see the following sections:
How to Remove a Node From a Scalable Resource Group
How to Remove a Node From a Failover Resource Group
How to Remove a Node From a Failover Resource Group That Contains Shared Address Resources
Note - If the node that you want to remove appears in a per-node resource dependency, you must remove that node from the per-node dependency before you can remove it from the resource group. For more information, see How to Change Resource Dependency Properties.
To complete the procedure, you must supply the following information.
Node names and node IDs of all of the cluster nodes
# clnode show -v | grep -i "Node ID"
The name of the resource group or the names of the resource groups from which you plan to remove the node
# clresourcegroup show | grep "Nodelist"
Names of the IPMP groups that are to host the network resources that are used by the resource groups on all of the nodes
# clresourcegroup show -v | grep "NetIfList.*value"
Additionally, be sure to verify that the resource group is not mastered on the node that you are removing. If the resource group is mastered on the node that you are removing, run the clresourcegroup command to switch the resource group offline from that node. The following clresourcegroup command brings the resource group offline from a given node, provided that new-masters does not contain that node.
# clresourcegroup switch -n new-masters resource-group
-n new-masters
Specifies the node that is now to master the resource group.
resource-group
Specifies the name of the resource group that you are switching. This resource group is mastered on the node that you are removing.
For more information, see the clresourcegroup(1CL) man page.
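For example, if resource-group-1 is currently mastered on phys-schost-3 and that is the node you intend to remove, a command of the following form (the names are hypothetical) switches the group to the remaining nodes:
# clresourcegroup switch -n phys-schost-1,phys-schost-2 resource-group-1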
Caution - If you plan to remove a node from all the resource groups, and you use a scalable services configuration, first remove the node from the scalable resource groups. Then remove the node from the failover groups.
A scalable service is configured as two resource groups, as follows.
One resource group is a scalable group that contains the scalable service resource.
One resource group is a failover group that contains the shared address resources that the scalable service resource uses.
Additionally, the RG_dependencies property of the scalable resource group is set to configure the scalable group with a dependency on the failover resource group. For information about this property, see the rg_properties(5) man page.
For details about scalable service configuration, see Oracle Solaris Cluster Concepts Guide.
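As a sketch only (the group names, host name, and primary counts are hypothetical, not a prescribed configuration), such a pair of resource groups might be created as follows:
# clresourcegroup create failover-rg-1
# clressharedaddress create -g failover-rg-1 shared-address-1
# clresourcegroup create -p Maximum_primaries=3 -p Desired_primaries=3 \
-p RG_dependencies=failover-rg-1 scalable-rg-1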
Removing a node from the scalable resource group causes the scalable service to no longer be brought online on that node. To remove a node from the scalable resource group, perform the following steps.
Update the node list to include all of the nodes that can now master this resource group:
# clresourcegroup set [-n nodelist] scalable-resource-group
-n nodelist
Specifies a comma-separated, ordered list of nodes that can master this resource group. This resource group is switched offline on all the other nodes.
This list is optional. If you omit this list, the Nodelist property is set to all nodes in the cluster.
scalable-resource-group
Specifies the name of the resource group from which the node is being removed.
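For example, if scalable-rg-1 is currently mastered by phys-schost-1, phys-schost-2, and phys-schost-3 and you are removing phys-schost-3 (the names are hypothetical):
# clresourcegroup set -n phys-schost-1,phys-schost-2 scalable-rg-1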
For details, see How to Remove a Node From a Failover Resource Group That Contains Shared Address Resources.
See Also
The clresourcegroup(1CL) man page.
Perform the following steps to remove a node from a failover resource group.
Caution - If you plan to remove a node from all of the resource groups, and you use a scalable services configuration, first remove the node from the scalable resource groups. Then use this procedure to remove the node from the failover groups.
If the failover resource group contains shared address resources that scalable services use, see How to Remove a Node From a Failover Resource Group That Contains Shared Address Resources.
Update the node list to include all of the nodes that can now master this resource group. This step removes the node and overwrites the previous value of the node list. Be sure to include all of the nodes that can master the resource group here.
# clresourcegroup set [-n nodelist] failover-resource-group
-n nodelist
Specifies a comma-separated, ordered list of nodes that can master this resource group. This resource group is switched offline on all the other nodes.
This list is optional. If you omit this list, the Nodelist property is set to all nodes in the cluster.
failover-resource-group
Specifies the name of the resource group from which the node is being removed.
Next, display the IPMP groups that are configured for the network resources in the resource group:
# clresourcegroup show -v failover-resource-group | grep -i netiflist
Then update netiflist for the network resources in the resource group. This step overwrites the previous value of netiflist. Be sure to include all of the IPMP groups here.
# clresource set -p netiflist=netiflist network-resource
Note - The output of the preceding command identifies the nodes by node name. To find the node ID, run the command clnode show -v | grep -i "Node ID".
-p netiflist=netiflist
Specifies a comma-separated list that identifies the IPMP groups that are on each node. Each element in netiflist must be in the form of netif@node. netif can be given as an IPMP group name, such as sc_ipmp0. The node can be identified by the node name or node ID, such as sc_ipmp0@1 or sc_ipmp0@phys-schost-1.
network-resource
Specifies the name of the network resource that is hosted on the netiflist entries.
Note - Oracle Solaris Cluster does not support the use of the adapter name for netif.
Verify the updated information:
# clresourcegroup show -v failover-resource-group | grep -i nodelist
# clresourcegroup show -v failover-resource-group | grep -i netiflist
In a failover resource group that contains shared address resources that scalable services use, a node can appear in the following locations.
The node list of the failover resource group
The auxnodelist of the shared address resource
To remove the node from the node list of the failover resource group, follow the procedure How to Remove a Node From a Failover Resource Group.
To modify the auxnodelist of the shared address resource, you must remove and recreate the shared address resource.
If you remove the node from the failover group's node list, you can continue to use the shared address resource on that node to provide scalable services. To continue to use the shared address resource, you must add the node to the auxnodelist of the shared address resource. To add the node to the auxnodelist, perform the following steps.
Note - You can also use the following procedure to remove the node from the auxnodelist of the shared address resource. To remove the node from the auxnodelist, you must delete and recreate the shared address resource.
Add the node ID or node name of the node that you removed from the failover resource group to the auxnodelist.
# clressharedaddress create -g failover-resource-group \
-X new-auxnodelist shared-address
-g failover-resource-group
The name of the failover resource group that used to contain the shared address resource.
-X new-auxnodelist
The new, modified auxnodelist with the desired node added or removed.
shared-address
The name of the shared address.
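Putting these steps together, a minimal sketch of the delete-and-recreate sequence might look like the following. The names are hypothetical, and the sketch assumes that any scalable resources that depend on the shared address have already been taken offline or had their dependencies adjusted:
# clresource disable shared-address-1
# clressharedaddress delete shared-address-1
# clressharedaddress create -g failover-rg-1 \
-X phys-schost-1,phys-schost-3 shared-address-1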
Example 2-32 Removing a Node From a Resource Group
This example shows how to remove a node (phys-schost-3) from a resource group (resource-group-1) that contains a logical hostname resource (schost-1).
# clresourcegroup show -v resource-group-1 | grep -i nodelist
Nodelist: phys-schost-1 phys-schost-2 phys-schost-3
# clresourcegroup set -n phys-schost-1,phys-schost-2 resource-group-1
# clresourcegroup show -v resource-group-1 | grep -i netiflist
Res property name: NetIfList
Res property class: extension
List of IPMP interfaces on each node
Res property type: stringarray
Res property value: sc_ipmp0@1 sc_ipmp0@2 sc_ipmp0@3
(sc_ipmp0@3 is the IPMP group to be removed.)
# clresource set -p netiflist=sc_ipmp0@1,sc_ipmp0@2 schost-1
# clresourcegroup show -v resource-group-1 | grep -i nodelist
Nodelist: phys-schost-1 phys-schost-2
# clresourcegroup show -v resource-group-1 | grep -i netiflist
Res property value: sc_ipmp0@1 sc_ipmp0@2