The procedures in this section enable you to perform the following tasks.
Configuring a cluster node to be an additional master of a resource group
Removing a node from a resource group
The procedures differ slightly, depending on whether you plan to add the node to, or remove the node from, a failover or a scalable resource group.
Failover resource groups contain network resources that both failover and scalable services use. Each IP subnetwork connected to the cluster has its own network resource that is specified and included in a failover resource group. The network resource is either a logical hostname or a shared address resource. Each network resource includes a list of IPMP groups that it uses. For failover resource groups, you must update the complete list of IPMP groups for each network resource that the resource group includes (the netiflist resource property).
The procedure for scalable resource groups involves the following steps:
Repeating the procedure for failover groups that contain the network resources that the scalable resource uses
Changing the scalable group to be mastered on the new set of hosts
For more information, see the clresourcegroup(1CL) man page.
Run either procedure from any cluster node.
The procedure to follow to add a node to a resource group depends on whether the resource group is a scalable resource group or a failover resource group. For detailed instructions, see the following sections:
You must supply the following information to complete the procedure.
The names and node IDs of all of the cluster nodes and names of zones
The names of the resource groups to which you are adding the node
The name of the IPMP group that is to host the network resources that are used by the resource group on all of the nodes
Also, be sure to verify that the new node is already a cluster member.
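One way to verify membership is with the clnode command. The output format shown here is indicative only; node names are illustrative.

# clnode status

The new node should be reported with a status of Online before you continue.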
For each network resource that a scalable resource in the resource group uses, make the resource group where the network resource is located run on the new node.
See Step 1 through Step 5 in the following procedure for details.
Add the new node to the list of nodes that can master the scalable resource group (the nodelist resource group property).
This step overwrites the previous value of nodelist, and therefore you must include all of the nodes that can master the resource group here.
# clresourcegroup set [-n node-zone-list] resource-group
Specifies a comma-separated, ordered list of nodes that can master this resource group. This resource group is switched offline on all of the other nodes. The format of each entry in the list is node:zone. In this format, node specifies the node name and zone specifies the name of a global-cluster non-voting node. To specify the global-cluster voting node or to specify a node without global-cluster non-voting nodes, specify only node.
This list is optional. If you omit this list, the Nodelist property is set to all nodes in the cluster.
Specifies the name of the resource group to which the node is being added.
(Optional) Update the scalable resource's Load_balancing_weights property to assign a weight to the node that you are adding to the resource group.
Otherwise, the weight defaults to 1. See the clresourcegroup(1CL) man page for more information.
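As an illustrative sketch (the resource name scalable-resource and the weight values are hypothetical), the Load_balancing_weights property takes a comma-separated list of weight@node entries, where node is a node name or node ID:

# clresource set -p Load_balancing_weights=3@1,1@2,1@3 scalable-resource

In this sketch, node ID 1 receives three times the client load of nodes 2 and 3. Adjust the entries to match your own node IDs and the relative capacity of each node.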
Display the current node list and the current list of IPMP groups that are configured for each resource in the resource group.
# clresourcegroup show -v resource-group | grep -i nodelist
# clresourcegroup show -v resource-group | grep -i netiflist
The output of the commands for nodelist and netiflist identifies the nodes by node name. To identify node IDs, run the command clnode show -v | grep -i "Node ID".
Update netiflist for the network resources that the node addition affects.
This step overwrites the previous value of netiflist, and therefore you must include all the IPMP groups here.
# clresource set -p netiflist=netiflist network-resource
Specifies a comma-separated list that identifies the IPMP groups that are on each node. Each element in netiflist must be in the form of netif@node. netif can be given as an IPMP group name, such as sc_ipmp0. The node can be identified by the node name or node ID, such as sc_ipmp0@1 or sc_ipmp0@phys-schost-1.
Specifies the name of the network resource (logical hostname or shared address) that is being hosted on the netiflist entries.
If the HAStoragePlus AffinityOn extension property equals True, add the node to the appropriate disk set or device group.
If you are using Solaris Volume Manager, use the metaset command.
# metaset -s disk-set-name -a -h node-name
Specifies the name of the disk set on which the metaset command is to work
Adds a drive or host to the specified disk set
Specifies the node to be added to the disk set
SPARC: If you are using Veritas Volume Manager, use the clsetup utility.
On any active cluster member, start the clsetup utility.
# clsetup
The Main Menu is displayed.
On the Main Menu, type the number that corresponds to the option for device groups and volumes.
On the Device Groups menu, type the number that corresponds to the option for adding a node to a VxVM device group.
Respond to the prompts to add the node to the VxVM device group.
Update the node list to include all of the nodes that can now master this resource group.
This step overwrites the previous value of nodelist, and therefore you must include all of the nodes that can master the resource group here.
# clresourcegroup set [-n node-zone-list] resource-group
Specifies a comma-separated, ordered list of nodes that can master this resource group. This resource group is switched offline on all the other nodes. The format of each entry in the list is node:zone. In this format, node specifies the node name and zone specifies the name of a global-cluster non-voting node. To specify the global-cluster voting node, or to specify a node without global-cluster non-voting nodes, specify only node.
This list is optional. If you omit this list, the Nodelist property is set to all nodes in the cluster.
Specifies the name of the resource group to which the node is being added.
Verify the updated information.
# clresourcegroup show -v resource-group | grep -i nodelist
# clresourcegroup show -v resource-group | grep -i netiflist
This example shows how to add a global-cluster voting node (phys-schost-2) to a resource group (resource-group-1) that contains a logical hostname resource (schost-2).
# clresourcegroup show -v resource-group-1 | grep -i nodelist
  Nodelist:                               phys-schost-1 phys-schost-3
# clresourcegroup show -v resource-group-1 | grep -i netiflist
  Res property name:                      NetIfList
    Res property class:                   extension
    List of IPMP interfaces on each node
    Res property type:                    stringarray
    Res property value:                   sc_ipmp0@1 sc_ipmp0@3

(Only nodes 1 and 3 have been assigned IPMP groups. You must add an IPMP group for node 2.)

# clresource set -p netiflist=sc_ipmp0@1,sc_ipmp0@2,sc_ipmp0@3 schost-2
# metaset -s red -a -h phys-schost-2
# clresourcegroup set -n phys-schost-1,phys-schost-2,phys-schost-3 resource-group-1
# clresourcegroup show -v resource-group-1 | grep -i nodelist
  Nodelist:                               phys-schost-1 phys-schost-2 phys-schost-3
# clresourcegroup show -v resource-group-1 | grep -i netiflist
  Res property value:                     sc_ipmp0@1 sc_ipmp0@2 sc_ipmp0@3
The procedure to follow to remove a node from a resource group depends on whether the resource group is a scalable resource group or a failover resource group. For detailed instructions, see the following sections:
To complete the procedure, you must supply the following information.
Node names and node IDs of all of the cluster nodes
# clnode show -v | grep -i "Node ID"
The name of the resource group or the names of the resource groups from which you plan to remove the node
# clresourcegroup show | grep "Nodelist"
Names of the IPMP groups that are to host the network resources that are used by the resource groups on all of the nodes
# clresourcegroup show -v | grep "NetIfList.*value"
Additionally, be sure to verify that the resource group is not mastered on the node that you are removing. If the resource group is mastered on the node that you are removing, run the clresourcegroup command to switch the resource group offline from that node. The following clresourcegroup command brings the resource group offline from a given node, provided that new-masters does not contain that node.
# clresourcegroup switch -n new-masters resource-group
Specifies the node or nodes that are now to master the resource group.
Specifies the name of the resource group that you are switching. This resource group is mastered on the node that you are removing.
For more information, see the clresourcegroup(1CL) man page.
If you plan to remove a node from all the resource groups, and you use a scalable services configuration, first remove the node from the scalable resource groups. Then remove the node from the failover groups.
A scalable service is configured as two resource groups, as follows.
One resource group is a scalable group that contains the scalable service resource.
One resource group is a failover group that contains the shared address resources that the scalable service resource uses.
Additionally, the RG_dependencies property of the scalable resource group is set to configure the scalable group with a dependency on the failover resource group. For information about this property, see Appendix B, Standard Properties.
For details about scalable service configuration, see Sun Cluster Concepts Guide for Solaris OS.
Removing a node from the scalable resource group causes the scalable service to no longer be brought online on that node. To remove a node from the scalable resource group, perform the following steps.
Remove the node from the list of nodes that can master the scalable resource group (the nodelist resource group property).
# clresourcegroup set [-n node-zone-list] scalable-resource-group
Specifies a comma-separated, ordered list of nodes that can master this resource group. This resource group is switched offline on all the other nodes. The format of each entry in the list is node:zone. In this format, node specifies the node name and zone specifies the name of a global-cluster non-voting node. To specify the global-cluster voting node, or to specify a node without global-cluster non-voting nodes, specify only node.
This list is optional. If you omit this list, the Nodelist property is set to all nodes in the cluster.
Specifies the name of the resource group from which the node is being removed.
(Optional) Remove the node from the failover resource group that contains the shared address resource.
For details, see How to Remove a Node From a Failover Resource Group That Contains Shared Address Resources.
(Optional) Update the Load_balancing_weights property of the scalable resource to remove the weight of the node that you are removing from the resource group.
For more information, see the clresourcegroup(1CL) man page.
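As a hedged illustration (the resource name scalable-resource, the node IDs, and the weight values are all hypothetical), the weight for the removed node is dropped by setting Load_balancing_weights to a list that names only the remaining nodes:

# clresource set -p Load_balancing_weights=1@1,1@2 scalable-resource

Here nodes 1 and 2 remain in service and the removed node no longer appears in the weight list.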
Perform the following steps to remove a node from a failover resource group.
If you plan to remove a node from all of the resource groups, and you use a scalable services configuration, first remove the node from the scalable resource groups. Then use this procedure to remove the node from the failover groups.
If the failover resource group contains shared address resources that scalable services use, see How to Remove a Node From a Failover Resource Group That Contains Shared Address Resources.
Update the node list to include all of the nodes that can now master this resource group.
This step removes the node and overwrites the previous value of the node list. Be sure to include all of the nodes that can master the resource group here.
# clresourcegroup set [-n node-zone-list] failover-resource-group
Specifies a comma-separated, ordered list of nodes that can master this resource group. This resource group is switched offline on all the other nodes. The format of each entry in the list is node:zone. In this format, node specifies the node name and zone specifies the name of a global-cluster non-voting node. To specify the global-cluster voting node, or to specify a node without global-cluster non-voting nodes, specify only node.
This list is optional. If you omit this list, the Nodelist property is set to all nodes in the cluster.
Specifies the name of the resource group from which the node is being removed.
Display the current list of IPMP groups that are configured for each resource in the resource group.
# clresourcegroup show -v failover-resource-group | grep -i netiflist
Update netiflist for network resources that the removal of the node affects.
This step overwrites the previous value of netiflist. Be sure to include all of the IPMP groups here.
# clresource set -p netiflist=netiflist network-resource
The output of the preceding command identifies the nodes by node name. Run the command clnode show -v | grep -i "Node ID" to find the node IDs.
Specifies a comma-separated list that identifies the IPMP groups that are on each node. Each element in netiflist must be in the form of netif@node. netif can be given as an IPMP group name, such as sc_ipmp0. The node can be identified by the node name or node ID, such as sc_ipmp0@1 or sc_ipmp0@phys-schost-1.
Specifies the name of the network resource that is hosted on the netiflist entries.
Sun Cluster does not support the use of the adapter name for netif.
Verify the updated information.
# clresourcegroup show -v failover-resource-group | grep -i nodelist
# clresourcegroup show -v failover-resource-group | grep -i netiflist
In a failover resource group that contains shared address resources that scalable services use, a node can appear in the following locations.
The node list of the failover resource group
The auxnodelist of the shared address resource
To remove the node from the node list of the failover resource group, follow the procedure How to Remove a Node From a Failover Resource Group.
To modify the auxnodelist of the shared address resource, you must remove and re-create the shared address resource.
If you remove the node from the failover group's node list, you can continue to use the shared address resource on that node to provide scalable services. To continue to use the shared address resource, you must add the node to the auxnodelist of the shared address resource. To add the node to the auxnodelist, perform the following steps.
You can also use the following procedure to remove the node from the auxnodelist of the shared address resource. To remove the node from the auxnodelist, you must delete and re-create the shared address resource.
Switch the scalable service resource offline.
Remove the shared address resource from the failover resource group.
Create the shared address resource.
Add the node ID or node name of the node that you removed from the failover resource group to the auxnodelist.
# clressharedaddress create -g failover-resource-group \
-X new-auxnodelist shared-address
The name of the failover resource group that used to contain the shared address resource.
The new, modified auxnodelist with the desired node added or removed.
The name of the shared address.
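The sequence of steps above can be sketched as follows. The operand names (scalable-resource, shared-address, failover-resource-group, new-auxnodelist) are placeholders, and the sketch assumes that the standard clresource disable, delete, and enable subcommands are used to take the scalable service resource offline and to remove and re-enable the resources:

# clresource disable scalable-resource
# clresource delete shared-address
# clressharedaddress create -g failover-resource-group \
-X new-auxnodelist shared-address
# clresource enable scalable-resource

In new-auxnodelist, include the node ID or node name of the node that you removed from the failover resource group (or omit it, if your goal is to remove the node from the auxnodelist).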
This example shows how to remove a node (phys-schost-3) from a resource group (resource-group-1) that contains a logical hostname resource (schost-1).
# clresourcegroup show -v resource-group-1 | grep -i nodelist
  Nodelist:                               phys-schost-1 phys-schost-2 phys-schost-3
# clresourcegroup set -n phys-schost-1,phys-schost-2 resource-group-1
# clresourcegroup show -v resource-group-1 | grep -i netiflist
  Res property name:                      NetIfList
    Res property class:                   extension
    List of IPMP interfaces on each node
    Res property type:                    stringarray
    Res property value:                   sc_ipmp0@1 sc_ipmp0@2 sc_ipmp0@3

(sc_ipmp0@3 is the IPMP group to be removed.)

# clresource set -p netiflist=sc_ipmp0@1,sc_ipmp0@2 schost-1
# clresourcegroup show -v resource-group-1 | grep -i nodelist
  Nodelist:                               phys-schost-1 phys-schost-2
# clresourcegroup show -v resource-group-1 | grep -i netiflist
  Res property value:                     sc_ipmp0@1 sc_ipmp0@2