This chapter describes how to use the scrgadm(1M) command to manage resources, resource groups, and resource types within the cluster. See "Tools for Data-Service Resource Administration" to determine if you can use other tools to complete a procedure.
This chapter contains the following procedures.
"How to Add a Logical-Hostname Resource to a Resource Group"
"How to Add a Failover Application Resource to a Resource Group"
"How to Add a Scalable Application Resource to a Resource Group"
"How to Disable a Resource and Move Its Resource Group Into the Unmanaged State"
"How to Display Resource Type, Resource Group, and Resource Configuration Information"
"How to Set Up SUNW.HAStorage Resource Type for New Resources"
See Chapter 1, Planning for Sun Cluster Data Services and the Sun Cluster 3.0 U1 Concepts document for overview information about resource types, resource groups, and resources.
Table 11-1 lists the sections that describe the administration tasks for data-service resources.
Table 11-1 Task Map: Data Service Administration

| Task | For Instructions, Go To |
| --- | --- |
| Register a resource type | |
| Create failover or scalable resource groups | "How to Create a Failover Resource Group" |
| Add logical hostnames or shared addresses and data-service resources to resource groups | "How to Add a Logical-Hostname Resource to a Resource Group"; "How to Add a Shared-Address Resource to a Resource Group"; "How to Add a Failover Application Resource to a Resource Group"; "How to Add a Scalable Application Resource to a Resource Group" |
| Enable resources and resource monitors, manage the resource group, and bring the resource group and its associated resources online | |
| Disable and enable resource monitors independent of the resource | "How to Disable a Resource Fault Monitor" |
| Remove resource types from the cluster | |
| Remove resource groups from the cluster | |
| Remove resources from resource groups | |
| Switch the primary for a resource group | |
| Disable resources and move their resource group into the unmanaged state | "How to Disable a Resource and Move Its Resource Group Into the Unmanaged State" |
| Display resource type, resource group, and resource configuration information | "How to Display Resource Type, Resource Group, and Resource Configuration Information" |
| Change resource-type, resource-group, and resource properties | "How to Change Resource-Type Properties" |
| Clear error flags for failed Resource Group Manager (RGM) processes | |
| Re-register the built-in resource types LogicalHostname and SharedAddress | |
| Update the network interface ID list for the network resources, and update the node list for the resource group | |
| Remove a node from a resource group | |
| Set up SUNW.HAStorage for resource groups to synchronize startup between those resource groups and disk device groups | "How to Set Up SUNW.HAStorage Resource Type for New Resources" |
The procedures in this chapter describe how to use the scrgadm(1M) command to complete these tasks. Other tools also enable you to administer your resources. See "Tools for Data-Service Resource Administration" for details about these options.
Configuring a Sun Cluster data service is a single task that is composed of several procedures. The procedures in this chapter enable you to perform the following tasks.
Register a resource type.
Create resource groups.
Add resources into the resource groups.
Bring the resources online.
Use the procedures in this chapter to update your data service configuration after the initial configuration. For example, to change resource type, resource group, and resource properties, go to "Changing Resource Type, Resource Group, and Resource Properties".
A resource type defines the common properties and callback methods that apply to all resources of that type. You must register a resource type before creating a resource of that type. See Chapter 1, Planning for Sun Cluster Data Services for details about resource types.
To complete this procedure, you must supply the name for the resource type you are registering, which is an abbreviation for the data-service name. This name maps to the name shown on your data-service license certificate. See the Sun Cluster 3.0 U1 Release Notes for the mapping between the names and the license certificate names.
See the scrgadm(1M) man page for additional information.
Perform this procedure from any cluster node.
Become superuser on a cluster member.
Register the resource type.
# scrgadm -a -t resource-type

-a
    Adds the specified resource type.

-t resource-type
    Specifies the name of the resource type to add. See the Sun Cluster 3.0 U1 Release Notes to determine the predefined name to supply.
Verify that the resource type has been registered.
# scrgadm -pv -t resource-type
The following example registers the Sun Cluster HA for iPlanet Web Server data service (internal name iws).
# scrgadm -a -t SUNW.iws
# scrgadm -pv -t SUNW.iws
Res Type name:                          SUNW.iws
(SUNW.iws) Res Type description:        None registered
(SUNW.iws) Res Type base directory:     /opt/SUNWschtt/bin
(SUNW.iws) Res Type single instance:    False
(SUNW.iws) Res Type init nodes:         All potential masters
(SUNW.iws) Res Type failover:           False
(SUNW.iws) Res Type version:            1.0
(SUNW.iws) Res Type API version:        2
(SUNW.iws) Res Type installed on nodes: All
(SUNW.iws) Res Type packages:           SUNWschtt
After registering resource types, you can create resource groups and add resources to the resource group. See "Creating a Resource Group" for details.
A resource group contains a set of resources, all of which are brought online or offline together on a given node or set of nodes. You must create an empty resource group before placing resources into it.
The two resource group types are failover and scalable. A failover resource group can be online on one node only at any time, while a scalable resource group can be online on multiple nodes simultaneously.
The following procedure describes how to use the scrgadm(1M) command to register and configure your data service.
See Chapter 1, Planning for Sun Cluster Data Services and the Sun Cluster 3.0 U1 Concepts document for conceptual information on resource groups.
A failover resource group contains network addresses, such as the built-in resource types LogicalHostname and SharedAddress, as well as failover resources, such as the data-service application resources for a failover data service. The network resources, along with their dependent data-service resources, move between cluster nodes when data services fail over or are switched over.
See the scrgadm(1M) man page for additional information.
Perform this procedure from any cluster node.
Become superuser on a cluster member.
Create the failover resource group.
# scrgadm -a -g resource-group [-h nodelist]

-a
    Adds the specified resource group.

-g resource-group
    Specifies your choice of the name of the failover resource group to add. This name must begin with an ASCII character.

-h nodelist
    Specifies an optional, ordered list of nodes that can master this resource group. If you do not specify this list, it defaults to all the nodes in the cluster.
Verify that the resource group has been created.
# scrgadm -pv -g resource-group
This example shows the addition of a failover resource group (resource-group-1) that two nodes (phys-schost-1 and phys-schost-2) can master.
# scrgadm -a -g resource-group-1 -h phys-schost-1,phys-schost-2
# scrgadm -pv -g resource-group-1
Res Group name:                                     resource-group-1
(resource-group-1) Res Group RG_description:        <NULL>
(resource-group-1) Res Group management state:      Unmanaged
(resource-group-1) Res Group Failback:              False
(resource-group-1) Res Group Nodelist:              phys-schost-1 phys-schost-2
(resource-group-1) Res Group Maximum_primaries:     1
(resource-group-1) Res Group Desired_primaries:     1
(resource-group-1) Res Group RG_dependencies:       <NULL>
(resource-group-1) Res Group mode:                  Failover
(resource-group-1) Res Group network dependencies:  True
(resource-group-1) Res Group Global_resources_used: All
(resource-group-1) Res Group Pathprefix:
After creating a failover resource group, you can add application resources to this resource group. See "Adding Resources to Resource Groups" for the procedure.
A scalable resource group is used with scalable services. The shared-address feature is the Sun Cluster networking facility that enables multiple instances of a scalable service to appear as a single service. You must first create a failover resource group that contains the shared addresses on which the scalable resources depend. Next, create a scalable resource group, and add scalable resources to that group.
See the scrgadm(1M) man page for additional information.
Perform this procedure from any cluster node.
Become superuser on a cluster member.
Create the failover resource group that holds the shared addresses that the scalable resource will use.
Create the scalable resource group.
# scrgadm -a -g resource-group \
-y Maximum_primaries=m \
-y Desired_primaries=n \
-y RG_dependencies=depend-resource-group \
[-h nodelist]

-a
    Adds a scalable resource group.

-g resource-group
    Specifies your choice of the name of the scalable resource group to add.

-y Maximum_primaries=m
    Specifies the maximum number of active primaries for this resource group.

-y Desired_primaries=n
    Specifies the number of active primaries on which the resource group should attempt to start.

-y RG_dependencies=depend-resource-group
    Identifies the resource group that contains the shared-address resource on which the resource group being created depends.

-h nodelist
    Specifies an optional list of nodes on which this resource group is to be available. If you do not specify this list, the value defaults to all nodes.
Verify that the scalable resource group has been created.
# scrgadm -pv -g resource-group
This example shows the addition of a scalable resource group (resource-group-1) to be hosted on two nodes (phys-schost-1, phys-schost-2). The scalable resource group depends on the failover resource group (resource-group-2) that contains the shared addresses.
# scrgadm -a -g resource-group-1 \
-y Maximum_primaries=2 \
-y Desired_primaries=2 \
-y RG_dependencies=resource-group-2 \
-h phys-schost-1,phys-schost-2
# scrgadm -pv -g resource-group-1
Res Group name:                                     resource-group-1
(resource-group-1) Res Group RG_description:        <NULL>
(resource-group-1) Res Group management state:      Unmanaged
(resource-group-1) Res Group Failback:              False
(resource-group-1) Res Group Nodelist:              phys-schost-1 phys-schost-2
(resource-group-1) Res Group Maximum_primaries:     2
(resource-group-1) Res Group Desired_primaries:     2
(resource-group-1) Res Group RG_dependencies:       resource-group-2
(resource-group-1) Res Group mode:                  Scalable
(resource-group-1) Res Group network dependencies:  True
(resource-group-1) Res Group Global_resources_used: All
(resource-group-1) Res Group Pathprefix:
After a scalable resource group has been created, you can add scalable application resources to the resource group. See "How to Add a Scalable Application Resource to a Resource Group" for details.
A resource is an instantiation of a resource type. You must add resources to a resource group before the RGM can manage the resources. This section describes the following three resource types.
logical-hostname resources
shared-address resources
data-service (application) resources
Logical-hostname resources and shared-address resources are always added to failover resource groups. Data-service resources for failover data services are added to failover resource groups. Failover resource groups contain both the logical-hostname resources and the application resources for the data service. Scalable resource groups contain only the application resources for scalable services. The shared-address resources on which the scalable service depends must reside in a separate failover resource group. You must specify dependencies between the scalable application resources and the shared-address resources for the data service to scale across cluster nodes.
See the Sun Cluster 3.0 U1 Concepts document and Chapter 1, Planning for Sun Cluster Data Services for more information on resources.
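As a sketch, the separation between the shared-address group and the scalable application group described above looks like the following command sequence. All names here (sa-rg, web-rg, web-res, schost-3, SUNW.iws) are placeholders for illustration only, not values from your cluster, and the sequence must be adapted to your configuration.

```shell
# Failover group that holds the shared address (placeholder names).
scrgadm -a -g sa-rg -h phys-schost-1,phys-schost-2
scrgadm -a -S -g sa-rg -l schost-3

# Scalable group that declares a dependency on the shared-address group.
scrgadm -a -g web-rg -y Maximum_primaries=2 -y Desired_primaries=2 \
    -y RG_dependencies=sa-rg

# Scalable application resource tied to the shared address.
scrgadm -a -j web-res -g web-rg -t SUNW.iws \
    -y Network_resources_used=schost-3 -y Scalable=True
```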
To complete this procedure, you must supply the following information.
the name of the failover resource group into which you are adding the resource
the hostnames you are adding to the resource group
See the scrgadm(1M) man page for additional information.
Perform this procedure from any cluster node.
Become superuser on a cluster member.
Add the logical-hostname resource to the resource group.
# scrgadm -a -L [-j resource] -g resource-group -l hostnamelist, ... [-n netiflist]

-a
    Adds a logical-hostname resource.

-L
    Specifies the logical-hostname resource form of the command.

-j resource
    Specifies an optional resource name of your choice. If you do not specify this option, the name defaults to the first hostname specified with the -l option.

-g resource-group
    Specifies the name of the resource group in which this resource resides.

-l hostnamelist, ...
    Specifies a comma-separated list of UNIX hostnames (logical hostnames) by which clients communicate with services in the resource group.

-n netiflist
    Specifies an optional, comma-separated list that identifies the NAFO groups on each node. All nodes in the node list of the resource group must be represented in netiflist. See the scrgadm(1M) man page for a description of the syntax for specifying netiflist. If you do not specify this option, scrgadm attempts to discover a network adapter on the subnet that the hostnamelist identifies for each node in the node list.
Verify that the logical-hostname resource has been added.
# scrgadm -pv -j resource
The resource addition causes the Sun Cluster software to validate the resource. If the validation succeeds, the resource can be enabled, and the resource group can be moved into the state where the RGM manages it. If the validation fails, the scrgadm command prints an error message and exits; in that case, check the syslog on each node for an error message. The message appears on the node that performed the validation, which is not necessarily the node on which you ran the scrgadm command.
This example shows the addition of logical-hostname resource (resource-1) to a resource group (resource-group-1).
# scrgadm -a -L -j resource-1 -g resource-group-1 -l schost-1
# scrgadm -pv -j resource-1
Res Group name:                                        resource-group-1
(resource-group-1) Res name:                           resource-1
(resource-group-1:resource-1) Res R_description:
(resource-group-1:resource-1) Res resource type:       SUNW.LogicalHostname
(resource-group-1:resource-1) Res resource group name: resource-group-1
(resource-group-1:resource-1) Res enabled:             False
(resource-group-1:resource-1) Res monitor enabled:     True
After adding logical-hostname resources, use the procedure "How to Bring a Resource Group Online" to bring them online.
To complete this procedure, you must supply the following information.
The name of the resource group into which you are adding the resource. This group must be a failover resource group created previously.
The hostnames you are adding to the resource group.
See the scrgadm(1M) man page for additional information.
Perform this procedure from any cluster node.
Become superuser on a cluster member.
Add the shared-address resource to the resource group.
# scrgadm -a -S [-j resource] -g resource-group -l hostnamelist, ... \
[-X auxnodelist] [-n netiflist]

-a
    Adds shared-address resources.

-S
    Specifies the shared-address resource form of the command.

-j resource
    Specifies an optional resource name of your choice. If you do not specify this option, the name defaults to the first hostname specified with the -l option.

-g resource-group
    Specifies the resource-group name.

-l hostnamelist, ...
    Specifies a comma-separated list of shared-address hostnames.

-X auxnodelist
    Specifies a comma-separated list of physical node names or IDs that identify the cluster nodes that can host the shared address but never serve as the primary in the case of failover. These nodes are mutually exclusive with the nodes identified as potential masters in the resource group's node list.

-n netiflist
    Specifies an optional, comma-separated list that identifies the NAFO groups on each node. All the nodes in the node list of the resource group must be represented in netiflist. See the scrgadm(1M) man page for a description of the syntax for specifying netiflist. If you do not specify this option, scrgadm attempts to discover a network adapter on the subnet that the hostnamelist identifies for each node in the node list.
Verify that the shared-address resource has been added and validated.
# scrgadm -pv -j resource
The resource addition causes the Sun Cluster software to validate the resource. If the resource is successfully validated, the resource can be enabled, and the resource group can be moved into the state where the RGM manages it. If the validation fails, the scrgadm command prints an error message and exits; in that case, check the syslog on each node for an error message. The message appears on the node that performed the validation, which is not necessarily the node on which you ran the scrgadm command.
This example shows the addition of a shared-address resource (resource-1) to a resource group (resource-group-1).
# scrgadm -a -S -j resource-1 -g resource-group-1 -l schost-1
# scrgadm -pv -j resource-1
(resource-group-1) Res name:                           resource-1
(resource-group-1:resource-1) Res R_description:
(resource-group-1:resource-1) Res resource type:       SUNW.SharedAddress
(resource-group-1:resource-1) Res resource group name: resource-group-1
(resource-group-1:resource-1) Res enabled:             False
(resource-group-1:resource-1) Res monitor enabled:     True
After adding a shared resource, use the procedure "How to Bring a Resource Group Online" to enable the resource.
A failover application resource is an application resource that uses the logical hostnames that you previously created in a failover resource group.
To complete this procedure, you must supply the following information.
the name of the failover resource group into which you are adding the resource
the name of the resource type for the resource
the logical-hostname resources that the application resource uses, which are the logical hostnames previously included in the same resource group
See the scrgadm(1M) man page for additional information.
Perform this procedure from any cluster node.
Become superuser on a cluster member.
Add a failover application resource to the resource group.
# scrgadm -a -j resource -g resource-group -t resource-type \
[-x Extension_property=value, ...] [-y Standard_property=value, ...]

-a
    Adds a resource.

-j resource
    Specifies your choice of the name of the resource to add.

-g resource-group
    Specifies the name of the failover resource group created previously.

-t resource-type
    Specifies the name of the resource type for the resource.

-x Extension_property=value, ...
    Specifies a comma-separated list of extension properties, which depend on the particular data service. See the chapter for each data service to determine whether the data service requires this option.

-y Standard_property=value, ...
    Specifies a comma-separated list of standard properties, which depend on the particular data service. See the chapter for each data service and Appendix A, Standard Properties to determine whether the data service requires this option.
You can set additional properties. See Appendix A, Standard Properties and the chapter in this book on how to install and configure your failover data service for details.
Verify that the failover application resource has been added and validated.
# scrgadm -pv -j resource
The resource addition action causes the Sun Cluster software to validate the resource. If the validation succeeds, the resource can be enabled, and the resource group can be moved into the state where the RGM manages it. If the validation fails, check the syslog on each node for an error message. The message appears on the node that performed the validation, not necessarily the node on which you ran the scrgadm command.
This example shows the addition of a resource (resource-1) to a resource group (resource-group-1). The resource depends on logical-hostname resources (schost-1, schost-2), which must reside in the same failover resource groups that you defined previously.
# scrgadm -a -j resource-1 -g resource-group-1 -t resource-type-1 \
-y Network_resources_used=schost-1,schost-2
# scrgadm -pv -j resource-1
(resource-group-1) Res name:                           resource-1
(resource-group-1:resource-1) Res R_description:
(resource-group-1:resource-1) Res resource type:       resource-type-1
(resource-group-1:resource-1) Res resource group name: resource-group-1
(resource-group-1:resource-1) Res enabled:             False
(resource-group-1:resource-1) Res monitor enabled:     True
After adding a failover application resource, use the procedure "How to Bring a Resource Group Online" to enable the resource.
A scalable application resource is an application resource that uses shared addresses in a failover resource group.
To complete this procedure, you must supply the following information:
the name of the scalable resource group into which you are adding the resource
the name of the resource type for the resource
the shared-address resources that the scalable service resource uses, which are the shared addresses previously included in a failover resource group
See the scrgadm(1M) man page for additional information.
Perform this procedure from any cluster node.
Become superuser on a cluster member.
Add a scalable application resource to the resource group.
# scrgadm -a -j resource -g resource-group -t resource-type \
-y Network_resources_used=network-resource[,network-resource...] \
-y Scalable=True \
[-x Extension_property=value, ...] [-y Standard_property=value, ...]

-a
    Adds a resource.

-j resource
    Specifies your choice of the name of the resource to add.

-g resource-group
    Specifies the name of a scalable resource group created previously.

-t resource-type
    Specifies the name of the resource type for this resource.

-y Network_resources_used=network-resource[,network-resource...]
    Specifies the list of network resources (shared addresses) on which this resource depends.

-y Scalable=True
    Specifies that this resource is scalable.

-x Extension_property=value, ...
    Specifies a comma-separated list of extension properties, which depend on the particular data service. See the chapter for each data service to determine whether the data service requires this option.

-y Standard_property=value, ...
    Specifies a comma-separated list of standard properties, which depend on the particular data service. See the chapter for each data service and Appendix A, Standard Properties to determine whether the data service requires this option.
You can set additional properties. See Appendix A, Standard Properties and the chapter in this book on how to install and configure your scalable data service for information on other configurable properties. Specifically for scalable services, you typically set the Port_list, Load_balancing_weights, and Load_balancing_policy properties, which Appendix A, Standard Properties describes.
Verify that the scalable application resource has been added and validated.
# scrgadm -pv -j resource
The resource addition action causes the Sun Cluster software to validate the resource. If the validation succeeds, the resource can be enabled and the resource group can be moved into the state where the RGM manages it. If the validation fails, check the syslog on each node for an error message. The message appears on the node that performed the validation, not necessarily the node on which you ran the scrgadm command.
This example shows the addition of a resource (resource-1) to a resource group (resource-group-1). The resource depends on the shared-address resources (schost-1 and schost-2), which must reside in one or more failover resource groups that you defined previously; consequently, resource-group-1 depends on the failover resource group that contains those network addresses.
# scrgadm -a -j resource-1 -g resource-group-1 -t resource-type-1 \
-y Network_resources_used=schost-1,schost-2 \
-y Scalable=True
# scrgadm -pv -j resource-1
(resource-group-1) Res name:                           resource-1
(resource-group-1:resource-1) Res R_description:
(resource-group-1:resource-1) Res resource type:       resource-type-1
(resource-group-1:resource-1) Res resource group name: resource-group-1
(resource-group-1:resource-1) Res enabled:             False
(resource-group-1:resource-1) Res monitor enabled:     True
After you add a scalable application resource, follow the procedure "How to Bring a Resource Group Online" to enable the resource.
To enable resources to begin providing HA services, you must enable the resources in the resource group, enable the resource monitors, make the resource group managed, and bring the resource group online. You can perform these tasks individually or by using the following one-step procedure. See the scswitch(1M) man page for details.
Perform this procedure from any cluster node.
Become superuser on a cluster member.
Enable the resource, and bring the resource group online.
If the resource monitor was previously disabled, it is enabled as well.
# scswitch -Z -g resource-group

-Z
    Brings a resource group online by first enabling its resources and fault monitors.

-g resource-group
    Specifies the name of the resource group to bring online. The group must be an existing resource group.
Verify that the resource is online.
Run the following command on any cluster node, and look for the resource group state field to see if the resource group is online on the nodes specified in the node list.
# scstat -g
This example shows how to bring a resource group (resource-group-1) online and verify its status.
# scswitch -Z -g resource-group-1
# scstat -g
After a resource group has been brought online, the resource group is configured and ready to use. If a resource or node fails, the RGM maintains availability of the resource group by automatically switching the resource group online on alternate nodes.
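When scripting this verification, you can test for the online state by filtering the scstat -g output. The following sketch runs against a captured sample modeled on the example output format shown in this chapter; real output varies by node and release, so treat the parsing as an assumption to check against your own cluster.

```shell
# Captured sample of `scstat -g` output (illustrative only; the real
# format may differ between Sun Cluster releases).
sample='Resource Group Name: resource-group-1
  Status
    Node Name: phys-schost-1
    Status: Online
    Node Name: phys-schost-2
    Status: Offline'

# Count the nodes on which the group reports Online.
online=$(printf '%s\n' "$sample" | grep -c 'Status: Online')

if [ "$online" -ge 1 ]; then
    echo "resource group is online on $online node(s)"
else
    echo "resource group is offline everywhere"
fi
```

For the sample above, the script reports the group online on one node.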
The following procedures disable or enable resource fault monitors, not the resources themselves. A resource can continue normal operation while its fault monitor is disabled. However, if the fault monitor is disabled and a data-service fault occurs, automatic fault recovery is not initiated.
See the scswitch(1M) man page for additional information.
Run this procedure from any cluster node.
Become superuser on a cluster member.
Disable the resource fault monitor.
# scswitch -n -M -j resource

-n
    Disables the resource or the resource monitor.

-M
    Disables the fault monitor for the specified resource.

-j resource
    Specifies the name of the resource.
Verify that the resource fault monitor has been disabled.
Run the following command on each cluster node and look for monitored fields (RS Monitored).
# scrgadm -pv
This example shows how to disable a resource fault monitor.
# scswitch -n -M -j resource-1
# scrgadm -pv
...
RS Monitored: no
...
Become superuser on a cluster member.
Enable the resource fault monitor.
# scswitch -e -M -j resource

-e
    Enables the resource or the resource monitor.

-M
    Enables the fault monitor for the specified resource.

-j resource
    Specifies the name of the resource.
Verify that the resource fault monitor has been enabled.
Run the following command on each cluster node and look for monitored fields (RS Monitored).
# scrgadm -pv
This example shows how to enable a resource fault monitor.
# scswitch -e -M -j resource-1
# scrgadm -pv
...
RS Monitored: yes
...
You do not need to remove resource types that are not in use. However, if you want to remove a resource type, you can use this procedure to do so.
See the scrgadm(1M) and scswitch(1M) man pages for additional information.
Perform this procedure from any cluster node.
Before you remove a resource type, you must disable and remove all the resources of that type in all the resource groups in the cluster. Use the scrgadm -pv command to identify the resources and resource groups in the cluster.
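To find every resource of the type you intend to remove, you can filter the scrgadm -pv listing. The sketch below runs on a captured sample shaped like the verification output shown elsewhere in this chapter; the exact labels and line layout are an assumption, so confirm them against your own scrgadm -pv output before relying on the filter.

```shell
# Captured sample of `scrgadm -pv` lines (illustrative only; real output
# contains many more properties per resource).
sample='(rg-1:resource-1) Res resource type: resource-type-1
(rg-1:resource-2) Res resource type: SUNW.LogicalHostname
(rg-2:resource-3) Res resource type: resource-type-1'

# Print the names of resources whose type matches the one being removed.
# Splitting on "(", ")", and ":" makes field 3 the resource name.
printf '%s\n' "$sample" |
awk -F'[():]' '/Res resource type: resource-type-1$/ { print $3 }'
# prints: resource-1 and resource-3 for this sample
```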
Become superuser on a cluster member.
Disable each resource of the resource type to be removed.
# scswitch -n -j resource

-n
    Disables the resource.

-j resource
    Specifies the name of the resource to disable.
Remove each resource of the resource type to be removed.
# scrgadm -r -j resource

-r
    Removes the specified resource.

-j resource
    Specifies the name of the resource to remove.
Remove the resource type.
# scrgadm -r -t resource-type

-r
    Removes the specified resource type.

-t resource-type
    Specifies the name of the resource type to remove.
Verify that the resource type has been removed.
# scrgadm -p
This example shows how to disable and remove all resources of a resource type (resource-type-1) and then remove the resource type itself. Here, resource-1 is a resource of the resource type resource-type-1.
# scswitch -n -j resource-1
# scrgadm -r -j resource-1
# scrgadm -r -t resource-type-1
To remove a resource group, you must first remove all the resources from the resource group.
See the scrgadm(1M) and scswitch(1M) man pages for additional information.
Perform this procedure from any cluster node.
Become superuser on a cluster member.
Run the following command to take the resource group offline.
# scswitch -F -g resource-group

-F
    Switches a resource group offline.

-g resource-group
    Specifies the name of the resource group to take offline.
Disable all the resources that are part of the resource group.
You can use the scrgadm -pv command to view the resources in the resource group. Disable all the resources in the resource group to be removed.
# scswitch -n -j resource

-n
    Disables the resource.

-j resource
    Specifies the name of the resource to disable.
If any dependent data-service resources exist in a resource group, you cannot disable the resource until you have disabled all the resources that depend on it.
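As an illustration of this ordering, suppose a hypothetical application resource app-res depends on a logical-hostname resource lh-res (both names are placeholders). The dependent resource must be disabled first:

```shell
# Disable the dependent application resource first (placeholder names).
scswitch -n -j app-res
# Only then disable the resource it depends on.
scswitch -n -j lh-res
```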
Remove all resources from the resource group.
Use the following scrgadm commands to perform the following tasks.
Remove the resources.
Remove the resource group.
# scrgadm -r -j resource
# scrgadm -r -g resource-group

-r
    Removes the specified resource or resource group.

-j resource
    Specifies the name of the resource to be removed.

-g resource-group
    Specifies the name of the resource group to be removed.
Verify that the resource group has been removed.
# scrgadm -p
This example shows how to remove a resource group (resource-group-1) after you have removed its resource (resource-1).
# scswitch -F -g resource-group-1
# scrgadm -r -j resource-1
# scrgadm -r -g resource-group-1
Disable the resource before removing it from a resource group.
See the scrgadm(1M) and scswitch(1M) man pages for additional information.
Perform this procedure from any cluster node.
Become superuser on a cluster member.
Disable the resource that you want to remove.
# scswitch -n -j resource

-n
    Disables the resource.

-j resource
    Specifies the name of the resource to disable.
Remove the resource.
# scrgadm -r -j resource

-r
    Removes the specified resource.

-j resource
    Specifies the name of the resource to remove.
Verify that the resource has been removed.
# scrgadm -p
This example shows how to disable and remove a resource (resource-1).
# scswitch -n -j resource-1
# scrgadm -r -j resource-1
Use the following procedure to switch over a resource group from its current primary to another node that will become the new primary.
See the scrgadm(1M) and scswitch(1M) man pages for additional information.
Perform this procedure from any cluster node.
To complete this procedure, you must supply the following information.
The name of the resource group to be switched over.
The names of the nodes on which you want the resource group to be brought online or to remain online. These nodes must be cluster nodes that have been set up to be potential masters of the resource group to be switched. To see a list of potential primaries for the resource group, use the scrgadm -pv command.
Become superuser on a cluster member.
Switch the primary to a potential primary.
# scswitch -z -g resource-group -h nodelist

-z
    Switches the specified resource group online.

-g resource-group
    Specifies the name of the resource group to switch.

-h nodelist
    Specifies the node or nodes on which the resource group is to be brought online or is to remain online. The resource group is switched offline on all other nodes.
Verify that the resource group has been switched to the new primary.
Run the following command and look for the output for the state of the resource group that has been switched over.
# scstat -g
This example shows how to switch a resource group (resource-group-1) from its current primary (phys-schost-1) to the potential primary (phys-schost-2). First verify that the resource group is online on phys-schost-1, perform the switch, and then verify that the group is online on phys-schost-2.
phys-schost-1# scstat -g
...
Resource Group Name: resource-group-1
  Status
    Node Name: phys-schost-1
    Status:    Online
    Node Name: phys-schost-2
    Status:    Offline
...
phys-schost-1# scswitch -z -g resource-group-1 -h phys-schost-2
phys-schost-1# scstat -g
...
Resource Group Name: resource-group-1
  Status
    Node Name: phys-schost-2
    Status:    Online
    Node Name: phys-schost-1
    Status:    Offline
...
At times, you must bring a resource group into the unmanaged state before performing an administrative procedure on it. Before moving a resource group into the unmanaged state, you must disable all the resources that are part of the resource group and bring the resource group offline.
See the scrgadm(1M) and scswitch(1M) man pages for additional information.
Perform this procedure from any cluster node.
To complete this procedure, you must supply the following information.
the name of the resources to be disabled
the name of the resource group to move into the unmanaged state
To determine the resource and resource group names that are needed for this procedure, use the scrgadm -pv command.
Become superuser on a cluster member.
Disable the resource.
Repeat this step for all resources in the resource group.
# scswitch -n -j resource

-n            Disables the resource.
-j resource   Specifies the name of the resource to disable.
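Because the disable step must be repeated for every resource in the group, it is convenient to loop over the resource names. The following sketch shows the pattern; the resource names (webserver-res, dbms-res) are hypothetical, and echo stands in for the real scswitch call so the loop logic is visible. On a real cluster, substitute the names reported by scrgadm -pv and remove the echo.

```shell
# Hypothetical resource names; replace with the output of scrgadm -pv.
resources="webserver-res dbms-res"
for res in $resources; do
  # On a live cluster this line would be: scswitch -n -j "$res"
  echo "scswitch -n -j $res"
done
```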
Run the following command to take the resource group offline.
# scswitch -F -g resource-group

-F                  Switches the resource group offline.
-g resource-group   Specifies the name of the resource group to take offline.
Bring the resource group into the unmanaged state.
# scswitch -u -g resource-group

-u                  Puts the specified resource group in the unmanaged state.
-g resource-group   Specifies the name of the resource group to move into the unmanaged state.
Verify that the resources are disabled and the resource group is in the unmanaged state.
# scrgadm -pv -g resource-group
This example shows how to disable the resource (resource-1) and then move the resource group (resource-group-1) into the unmanaged state.
# scswitch -n -j resource-1
# scswitch -F -g resource-group-1
# scswitch -u -g resource-group-1
# scrgadm -pv -g resource-group-1
Res Group name: resource-group-1
(resource-group-1) Res Group RG_description: <NULL>
(resource-group-1) Res Group management state: Unmanaged
(resource-group-1) Res Group Failback: False
(resource-group-1) Res Group Nodelist: phys-schost-1 phys-schost-2
(resource-group-1) Res Group Maximum_primaries: 2
(resource-group-1) Res Group Desired_primaries: 2
(resource-group-1) Res Group RG_dependencies: <NULL>
(resource-group-1) Res Group mode: Failover
(resource-group-1) Res Group network dependencies: True
(resource-group-1) Res Group Global_resources_used: All
(resource-group-1) Res Group Pathprefix:
(resource-group-1) Res name: resource-1
(resource-group-1:resource-1) Res R_description:
(resource-group-1:resource-1) Res resource type: SUNW.apache
(resource-group-1:resource-1) Res resource group name: resource-group-1
(resource-group-1:resource-1) Res enabled: True
(resource-group-1:resource-1) Res monitor enabled: False
(resource-group-1:resource-1) Res detached: False
Before you perform administrative procedures on resources, resource groups, or resource types, use the following procedure to view the current configuration settings for these objects.
See the scrgadm(1M) and scswitch(1M) man pages for additional information.
Perform this procedure from any cluster node.
The scrgadm command provides the following three levels of configuration status information.
With the -p option, the output shows a very limited set of property values for resource types, resource groups, and resources.
With the -pv option, the output shows more details on other resource type, resource group, and resource properties.
With the -pvv option, the output provides a detailed view, including resource type methods, extension properties, and all resource and resource-group properties.
You can also view specific resource types, resource groups, and resources by using the -t, -g, and -j (resource type, resource group, and resource, respectively) options, followed by the name of the object you want to view. For example, the following command specifies that you want to view specific information on the resource apache-1 only.
# scrgadm -p[v[v]] -j apache-1
See the scrgadm(1M) man page for details.
Resource groups and resources have standard configuration properties that you can change. The following procedures describe how to change these properties.
Resources also have extension properties, some of which the data service developer predefines; these you cannot change. See the individual data service chapters in this document for a list of the extension properties for each data service.
See the scrgadm(1M) man page for information on the standard configuration properties for resource groups and resources.
To complete this procedure, you must supply the following information.
The name of the resource type to change.
The name of the resource-type property to change. For resource types, you can change only one property: the list of nodes on which resources of this type can be instantiated.
Perform this procedure from any cluster node.
Become superuser on a cluster member.
Run the scrgadm command to determine the name of the resource type needed for this procedure.
# scrgadm -pv
Change the resource-type property.
The only property that can be changed for a resource type is Installed_node_list.
# scrgadm -c -t resource-type -h installed-node-list

-c                      Changes the specified resource-type property.
-t resource-type        Specifies the name of the resource type.
-h installed-node-list  Specifies the names of the nodes on which this resource type is installed.
Verify that the resource-type property has been changed.
# scrgadm -pv -t resource-type
This example shows how to change the SUNW.apache property to define that this resource type is installed on two nodes (phys-schost-1 and phys-schost-2).
# scrgadm -c -t SUNW.apache -h phys-schost-1,phys-schost-2
# scrgadm -pv -t SUNW.apache
Res Type name: SUNW.apache
(SUNW.apache) Res Type description: Apache Resource Type
(SUNW.apache) Res Type base directory: /opt/SUNWscapc/bin
(SUNW.apache) Res Type single instance: False
(SUNW.apache) Res Type init nodes: All potential masters
(SUNW.apache) Res Type failover: False
(SUNW.apache) Res Type version: 1.0
(SUNW.apache) Res Type API version: 2
(SUNW.apache) Res Type installed on nodes: phys-schost-1 phys-schost-2
(SUNW.apache) Res Type packages: SUNWscapc
To complete this procedure, you must supply the following information.
the name of the resource group to change
the name of the resource-group property to change and its new value
This procedure describes the steps for changing resource-group properties. See Appendix A, Standard Properties for a complete list of resource-group properties.
Perform this procedure from any cluster node.
Become superuser on a cluster member.
Change the resource-group property.
# scrgadm -c -g resource-group -y property=new-value

-c                  Changes the specified property.
-g resource-group   Specifies the name of the resource group.
-y property         Specifies the name of the property to change.
Verify that the resource-group property has been changed.
# scrgadm -pv -g resource-group
This example shows how to change the Failback property for the resource group (resource-group-1).
# scrgadm -c -g resource-group-1 -y Failback=True
# scrgadm -pv -g resource-group-1
To complete this procedure, you must supply the following information.
the name of the resource with the property to change
the name of the property to change
This procedure describes the steps for changing resource properties. See Appendix A, Standard Properties for a complete list of resource properties.
Perform this procedure from any cluster node.
Become superuser on a cluster member.
Use the scrgadm -pvv command to view the current resource property settings.
# scrgadm -pvv -j resource
Change the resource property.
# scrgadm -c -j resource -y property=new-value | -x extension-property=new-value

-c                     Changes the specified property.
-j resource            Specifies the name of the resource.
-y property            Specifies the name of the standard property to change.
-x extension-property  Specifies the name of the extension property to change. For Sun-supplied data services, see the extension properties documented in the chapters on how to install and configure the individual data services.
Verify that the resource property has been changed.
# scrgadm -pvv -j resource
This example shows how to change the system-defined Start_timeout property for the resource (resource-1).
# scrgadm -c -j resource-1 -y start_timeout=30
# scrgadm -pvv -j resource-1
This example shows how to change an extension property (Log_level) for the resource (resource-1).
# scrgadm -c -j resource-1 -x Log_level=3
# scrgadm -pvv -j resource-1
When the Failover_mode resource property is NONE or SOFT and the STOP method of a resource fails, the individual resource goes into the STOP_FAILED state and the resource group goes into the ERROR_STOP_FAILED state. You cannot bring a resource group in this state online on any node, nor can you edit it (create or delete resources, or change resource-group or resource properties).
To complete this procedure, you must supply the following information.
the name of the node where the resource is STOP_FAILED
the name of the resource and resource group in STOP_FAILED state
See the scswitch(1M) man page for additional information.
Perform this procedure from any cluster node.
Become superuser on a cluster member.
Identify which resources have gone into the STOP_FAILED state and on which nodes.
# scstat -g
Manually stop the resources and their monitors on the nodes on which they are in STOP_FAILED state.
This step might require killing processes or running resource type-specific commands or other commands.
Manually set the state of these resources to OFFLINE on all the nodes on which they were manually stopped.
# scswitch -c -h nodelist -j resource -f STOP_FAILED

-c              Clears the flag.
-h nodelist     Specifies the names of the nodes on which the resource was running.
-j resource     Specifies the name of the resource to take offline.
-f STOP_FAILED  Specifies the name of the flag to clear.
Check the resource-group state on the nodes where the STOP_FAILED flag was cleared in Step 4.
The resource-group state should now be OFFLINE or ONLINE.
# scstat -g
If the scstat -g output shows that the resource group remains in the ERROR_STOP_FAILED state, run the following scswitch command to take the resource group offline on the nodes where it is still in that state.
# scswitch -F -g resource-group

-F                  Takes the resource group offline on all nodes that can master the group.
-g resource-group   Specifies the name of the resource group to take offline.
This situation can occur if the resource group was being switched offline when the STOP method failure occurred and the resource that failed to stop had a dependency on other resources in the resource group. Otherwise, the resource group reverts to the ONLINE or OFFLINE state automatically after you have run the command in Step 4 on all STOP_FAILED resources.
Now you can switch the resource group to the ONLINE state.
Two preregistered resource types are SUNW.LogicalHostname and SUNW.SharedAddress. All logical hostname and shared-address resources use these resource types. You never need to register these two resource types, but you might accidentally delete them. If you have deleted resource types inadvertently, use the following procedure to re-register them.
See the scrgadm(1M) man page for additional information.
Perform this procedure from any cluster node.
Re-register the resource type.
# scrgadm -a -t SUNW.resource-type

-a                     Adds a resource type.
-t SUNW.resource-type  Specifies the resource type to add (re-register): either SUNW.LogicalHostname or SUNW.SharedAddress.
This example shows how to re-register the SUNW.LogicalHostname resource type.
# scrgadm -a -t SUNW.LogicalHostname
This section contains the following two procedures.
how to configure a cluster node to be an additional master of a resource group
how to remove a node from a resource group
The procedures differ slightly, depending on whether you are adding the node to (or removing it from) a failover or a scalable resource group.
Failover resource groups contain network resources that both failover and scalable services use. Each IP subnetwork connected to the cluster has its own network resource specified and included in a failover resource group. The network resource is either a logical hostname or a shared-address resource. Each network resource includes a list of NAFO groups that it uses. For failover resource groups, you must update the complete list of NAFO groups for each network resource included in the resource group (the netiflist resource property).
For scalable resource groups, in addition to changing the scalable group to be mastered on the new set of hosts, you must repeat the procedure for failover groups that contain the network resources that the scalable resource uses.
See the scrgadm(1M) man page for additional information.
Run either of these procedures from any cluster node.
You must supply the following information to complete this procedure.
the names and node IDs of all the cluster nodes
the names of the resource groups to which you are adding the node
the name of the NAFO group that will host the network resources used by the resource group on all the nodes
Also note the following points.
Be sure to verify that the new node is already a cluster member.
For failover resource groups, perform all the steps in the procedure "How to Add a Node to a Resource Group."
For scalable resource groups, you must complete the tasks listed as "For Scalable Resource Groups Only."
For Scalable Resource Groups Only
For each network resource that a scalable resource in the resource group uses, make the resource group where the network resource is located run on the new node (Steps 1 through 4 in the following procedure).
Add the new node to the list of nodes that can master the scalable resource group (the nodelist resource-group property) (Step 3 in the following procedure).
(Optional) Update the Load_balancing_weights property of the scalable resource to assign a weight to the node that you want to add to the resource group. Otherwise, the weight defaults to 1. See the scrgadm(1M) man page for more information.
Procedure - How to Add a Node to a Resource Group
Display the current node list and the current list of NAFO groups configured for each resource in the resource group.
# scrgadm -pvv -g resource-group | grep -i nodelist
# scrgadm -pvv -g resource-group | grep -i netiflist
The output of the command line for nodelist identifies the nodes by node name. The output for netiflist identifies them by node ID.
Update netiflist for the network resources that the node addition affects.
This step overwrites the previous value of netiflist, and therefore you must include all NAFO groups here. Also, you must input nodes to netiflist by node ID. To find the node ID, use scconf -pv | grep "Node ID".
# scrgadm -c -j network-resource -x netiflist=netiflist

-c                      Changes a network resource.
-j network-resource     Specifies the name of the network resource (logical hostname or shared address) that is hosted on the netiflist entries.
-x netiflist=netiflist  Specifies a comma-separated list that identifies the NAFO groups on each node. Each element in netiflist must be in the form of NAFO-group-name@nodeid.
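The netiflist value is easy to get wrong by hand, so it can help to assemble it mechanically. The following sketch builds the comma-separated NAFO-group-name@nodeid list for three nodes that all use the same NAFO group; the group name nafo0 and the node IDs 1 2 3 are assumptions for illustration (on a real cluster, take the IDs from scconf -pv | grep "Node ID").

```shell
# Assumed values for illustration; substitute your own NAFO group
# name and the node IDs reported by scconf -pv | grep "Node ID".
nafo_group=nafo0
node_ids="1 2 3"
netiflist=""
for id in $node_ids; do
  # Append NAFO-group-name@nodeid, comma-separating after the first entry.
  netiflist="${netiflist:+$netiflist,}$nafo_group@$id"
done
echo "$netiflist"   # prints nafo0@1,nafo0@2,nafo0@3
```

The resulting string is what you would pass as -x netiflist=... to scrgadm.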
Update the node list to include all the nodes that can now master this resource group.
This step overwrites the previous value of nodelist, and therefore you must include all the nodes that can master the resource group here.
# scrgadm -c -g resource-group -h nodelist

-c                  Changes a resource group.
-g resource-group   Specifies the name of the resource group to which the node is being added.
-h nodelist         Specifies a comma-separated list of nodes that can master the resource group.
Verify the updated information.
# scrgadm -pvv -g resource-group | grep -i nodelist
# scrgadm -pvv -g resource-group | grep -i netiflist
This example shows how to add a node (phys-schost-2) to a resource group (resource-group-1), which contains a logical-hostname resource (schost-2).
# scrgadm -pvv -g resource-group-1 | grep -i nodelist
(resource-group-1) Res Group Nodelist: phys-schost-1 phys-schost-3
# scrgadm -pvv -g resource-group-1 | grep -i netiflist
(resource-group-1:schost-2) Res property name: NetIfList
(resource-group-1:schost-2:NetIfList) Res property class: extension
(resource-group-1:schost-2:NetIfList) Res property description: List of NAFO interfaces on each node
(resource-group-1:schost-2:NetIfList) Res property type: stringarray
(resource-group-1:schost-2:NetIfList) Res property value: nafo0@1 nafo0@3

(Only nodes 1 and 3 have been assigned NAFO groups. You must add a NAFO group for node 2.)

# scrgadm -c -j schost-2 -x netiflist=nafo0@1,nafo0@2,nafo0@3
# scrgadm -c -g resource-group-1 -h phys-schost-1,phys-schost-2,phys-schost-3
# scrgadm -pvv -g resource-group-1 | grep -i nodelist
(resource-group-1) Res Group Nodelist: phys-schost-1 phys-schost-2 phys-schost-3
# scrgadm -pvv -g resource-group-1 | grep -i netiflist
(resource-group-1:schost-2:NetIfList) Res property value: nafo0@1 nafo0@2 nafo0@3
To complete this procedure, you must supply the following information.
the names and node IDs of all the cluster nodes
the name of the resource group or groups from which you are removing the node
the name of the NAFO group that will host the network resources used by the resource group on all the nodes
Also note the following points.
Be sure to verify that the resource group is not mastered on the node that you will remove. If it is, run the scswitch command to take the resource group offline on that node first.
For failover resource groups, perform all the steps in the procedure "How to Remove a Node from a Resource Group."
For scalable resource groups, you must complete the tasks listed as "For Scalable Resource Groups Only."
For Scalable Resource Groups Only
Remove the node from the list of nodes that can master the scalable resource group (the nodelist resource-group property) (Step 1 in the following procedure).
(Optional) For each network resource that a scalable resource in the resource group uses, update the resource group where the network resource is located to not be mastered on the removed node (Steps 1 through 4 in the following procedure).
(Optional) Update the Load_balancing_weights property of the scalable resource to remove the weight of the node that you want to remove from the resource group. See the scrgadm(1M) man page for more information.
Procedure - How to Remove a Node from a Resource Group
Update the node list to include all the nodes that can now master this resource group.
This step removes the node and overwrites the previous value of nodelist. Be sure to include all the nodes that can master the resource group here.
# scrgadm -c -g resource-group -h nodelist

-c                  Changes a resource group.
-g resource-group   Specifies the name of the resource group from which the node is being removed.
-h nodelist         Specifies a comma-separated list of nodes that can master this resource group.
Display the current list of NAFO groups that are configured for each resource in the resource group.
# scrgadm -pvv -g resource-group | grep -i netiflist
The output of the preceding command lines identifies the nodes by node ID.
Update netiflist for network resources that the removal of the node affects.
This step overwrites the previous value of netiflist. Be sure to include all NAFO groups here. Also, you must input nodes to netiflist by node ID. Run scconf -pv | grep "Node ID" to find the node ID.
# scrgadm -c -j network-resource -x netiflist=netiflist

-c                      Changes a network resource.
-j network-resource     Specifies the name of the network resource (logical hostname or shared address) that is hosted on the netiflist entries.
-x netiflist=netiflist  Specifies a comma-separated list that identifies the NAFO groups on each node. Each element in netiflist must be in the form of NAFO-group-name@nodeid.
Verify the updated information.
# scrgadm -pvv -g resource-group | grep -i nodelist
# scrgadm -pvv -g resource-group | grep -i netiflist
This example shows how to remove a node (phys-schost-3) from a resource group (resource-group-1), which contains a logical-hostname resource (schost-1).
# scrgadm -pvv -g resource-group-1 | grep -i nodelist
(resource-group-1) Res Group Nodelist: phys-schost-1 phys-schost-2 phys-schost-3
# scrgadm -c -g resource-group-1 -h phys-schost-1,phys-schost-2
# scrgadm -pvv -g resource-group-1 | grep -i netiflist
(resource-group-1:schost-1) Res property name: NetIfList
(resource-group-1:schost-1:NetIfList) Res property class: extension
(resource-group-1:schost-1:NetIfList) Res property description: List of NAFO interfaces on each node
(resource-group-1:schost-1:NetIfList) Res property type: stringarray
(resource-group-1:schost-1:NetIfList) Res property value: nafo0@1 nafo0@2 nafo0@3

(nafo0@3 is the NAFO group to be removed.)

# scrgadm -c -j schost-1 -x netiflist=nafo0@1,nafo0@2
# scrgadm -pvv -g resource-group-1 | grep -i nodelist
(resource-group-1) Res Group Nodelist: phys-schost-1 phys-schost-2
# scrgadm -pvv -g resource-group-1 | grep -i netiflist
(resource-group-1:schost-1:NetIfList) Res property value: nafo0@1 nafo0@2
After a cluster boots up or services fail over to another node, global devices and cluster file systems might take a while to become available. However, a data service can run its START method before the global devices and cluster file systems on which it depends come online. In this case, the START method times out, and you must reset the state of the resource groups that the data service uses and restart the data service manually.

The resource type SUNW.HAStorage monitors the global devices and cluster file systems and causes the START method of the other resources in the same resource group to wait until they become available. To avoid these additional administrative tasks, set up SUNW.HAStorage for all the resource groups whose data-service resources depend on global devices or cluster file systems.
In the following example, the resource group resource-group-1 contains three data services.
iWS, which depends on /global/resource-group-1
Oracle, which depends on /dev/global/dsk/d5s2
NFS, which depends on dsk/d6
To create a SUNW.HAStorage resource hastorage-1 for new resources in resource-group-1, perform the following steps.
Become superuser on a cluster member.
Create the resource group resource-group-1.
# scrgadm -a -g resource-group-1
Register the resource type.
# scrgadm -a -t SUNW.HAStorage
Create the SUNW.HAStorage resource hastorage-1, and define the service paths.
# scrgadm -a -j hastorage-1 -g resource-group-1 -t SUNW.HAStorage \
-x ServicePaths=/global/resource-group-1,/dev/global/dsk/d5s2,dsk/d6
ServicePaths can contain the following values.
global device group names, such as nfs-dg
paths to global devices, such as /dev/global/dsk/d5s2 or dsk/d6
cluster file system mount points, such as /global/nfs
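Whatever mix of values you use, ServicePaths takes them as a single comma-separated string. The following sketch joins the three example dependencies from this procedure into that form; it is only string assembly, shown so the expected shape of the -x ServicePaths= argument is unambiguous.

```shell
# The three dependencies from the example, space-separated.
paths="/global/resource-group-1 /dev/global/dsk/d5s2 dsk/d6"
# Join them with commas to form the ServicePaths value.
servicepaths=$(printf '%s' "$paths" | tr ' ' ',')
echo "$servicepaths"   # prints /global/resource-group-1,/dev/global/dsk/d5s2,dsk/d6
```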
Enable the hastorage-1 resource.
# scswitch -e -j hastorage-1
Add the resources (iWS, Oracle, and NFS) to resource-group-1, and set their dependency to hastorage-1.
For example, for iWS, run the following command.
# scrgadm -a -j resource -g resource-group-1 -t SUNW.iws \
-x Confdir_list=/global/iws/schost-1 \
-y Scalable=False -y Network_resources_used=schost-1 \
-y Port_list=80/tcp -y Resource_dependencies=hastorage-1
Set resource-group-1 to the managed state, and bring resource-group-1 online.
# scswitch -Z -g resource-group-1
The SUNW.HAStorage resource type contains another extension property, AffinityOn, which is a Boolean that specifies whether SUNW.HAStorage must perform an affinity switchover for the global devices and cluster file systems defined in ServicePaths. See the SUNW.HAStorage(5) man page for details.
Perform the following steps to create a SUNW.HAStorage resource for existing resources.
Register the resource type.
# scrgadm -a -t SUNW.HAStorage
Create the SUNW.HAStorage resource hastorage-1.
# scrgadm -a -g resource-group -j hastorage-1 -t SUNW.HAStorage \
-x ServicePaths= ... -x AffinityOn=True
Enable the hastorage-1 resource.
# scswitch -e -j hastorage-1
Set up the dependency for each of the existing resources, as required.
# scrgadm -c -j resource -y Resource_Dependencies=hastorage-1
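When several existing resources must all depend on hastorage-1, the same change can be applied in a loop. The resource names below (iws-res, oracle-res, nfs-res) are hypothetical, and echo stands in for the real scrgadm call so the pattern is visible; on a real cluster, use your own resource names and remove the echo.

```shell
# Hypothetical resource names; replace with the resources in your group.
for res in iws-res oracle-res nfs-res; do
  # On a live cluster this line would be:
  #   scrgadm -c -j "$res" -y Resource_dependencies=hastorage-1
  echo "scrgadm -c -j $res -y Resource_dependencies=hastorage-1"
done
```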