Oracle® Solaris Cluster Reference Manual


Updated: July 2014, E39662-01
 
 

clrg(1CL)

Name

clresourcegroup, clrg - manage resource groups for Oracle Solaris Cluster data services

Synopsis

/usr/cluster/bin/clresourcegroup -V
/usr/cluster/bin/clresourcegroup [subcommand] -?
/usr/cluster/bin/clresourcegroup subcommand [options] -v 
     [resourcegroup …]
/usr/cluster/bin/clresourcegroup add-node -n node[,...] 
     [-S] [-Z {zoneclustername | global}] 
     {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup create [-S] [-n node[,...]] 
     [-p name=value] […] [-Z {zoneclustername | 
     global}] {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup create -i {- | clconfigfile} 
     [-S] [-n node [,...]] [-p name=value] […] 
     {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup delete [-F] [-Z 
     {zoneclustername | global}] {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup evacuate -n node[,...]  
     [-T seconds]  [-Z {zoneclustername | global}] {+}
/usr/cluster/bin/clresourcegroup export [-o {- | configfile}] 
     [+ | resourcegroup...]
/usr/cluster/bin/clresourcegroup list [-n node[,...]] 
     [-r resource[,...]] [-s state[,...]] [-t resourcetype[,...]] 
     [-Z {zoneclustername[,...] | global | all}] 
     [+ | resourcegroup...]
/usr/cluster/bin/clresourcegroup manage [-Z {zoneclustername | 
     global}] {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup offline [-n node
     [,...]] [-Z {zoneclustername | global}] 
     {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup online [-e] [-m] [-M] [-n node
     [,...]] [-Z {zoneclustername | global}] 
     {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup quiesce [-k] [-Z 
     {zoneclustername | global}] {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup remaster [-Z {zoneclustername | 
     global}] {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup remove-node -n node
     [,...]  [-Z {zoneclustername | global}]
     {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup restart [-n node[,...]] 
     [-Z {zoneclustername | global}] {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup resume [-Z {zoneclustername | 
     global}] {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup set [-n node[,...]] -p name[+|-]=value […]  
     [-Z {zoneclustername | global}] {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup show [-n node[,...]] 
     [-p name[,...]] [-r resource[,...]] [-t resourcetype[,...]] 
     [-Z {zoneclustername[,...] | global | all}] 
     [+ | resourcegroup...]
/usr/cluster/bin/clresourcegroup status [-n node[,...]] 
     [-r resource[,...]] [-s state[,...]] [-t resourcetype[,...]] 
     [-Z {zoneclustername[,...] | global | all}] 
     [+ | resourcegroup...]
/usr/cluster/bin/clresourcegroup suspend [-k] [-Z 
     {zoneclustername | global}] {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup switch -n node[,...] 
     [-e] [-m] [-M] [-Z {zoneclustername | global}] 
     {+ | resourcegroup...}
/usr/cluster/bin/clresourcegroup unmanage [-Z {zoneclustername | 
     global}] {+ | resourcegroup...}

Description

This command manages Oracle Solaris Cluster data service resource groups.

You can omit subcommand only if the option that you specify is the –? option or the –V option.

Each option has a long and a short form. Both forms of each option are given with the description of the option in OPTIONS.

The clrg command is the short form of the clresourcegroup command.

With the exception of list, show, and status, subcommands require at least one operand. However, many subcommands accept the plus sign operand (+), which applies the subcommand to all applicable objects.

You can use some forms of this command in a zone cluster. For more information about valid uses of this command, see the descriptions of the individual subcommands. For ease of administration, use this command from the global-cluster node.

Resources and Resource Groups

The resource state, resource group state, and resource status are all maintained on a per-node basis. For example, a given resource has a distinct state on each cluster node and a distinct status on each cluster node.


Note -  State names, such as Offline and Start_failed, are not case sensitive. You can use any combination of uppercase and lowercase letters when you specify state names.

The resource state is set by the Resource Group Manager (RGM) on each node, based only on which methods have been invoked on the resource. For example, after the STOP method has run successfully on a resource on a given node, the resource's state is Offline on that node. If the STOP method exits nonzero or times out, the state of the resource is Stop_failed.

Possible resource states include: Online, Offline, Start_failed, Stop_failed, Monitor_failed, Online_not_monitored, Starting, and Stopping.

Possible resource group states are: Unmanaged, Online, Offline, Pending_online, Pending_offline, Error_stop_failed, Online_faulted, and Pending_online_blocked.

In addition to resource state, the RGM also maintains a resource status that can be set by the resource itself by using the API. The field Status Message actually consists of two components: status keyword and status message. Status message is optionally set by the resource and is an arbitrary text string that is printed after the status keyword.

Descriptions of possible values for a resource's status are as follows:

Degraded

The resource is online, but its performance or availability might be compromised in some way.

Faulted

The resource has encountered an error that prevents it from functioning.

Offline

The resource is offline.

Online

The resource is online and providing service.

Unknown

The current status is unknown or is in transition.

Using This Command in a Zone Cluster

You can use the clresourcegroup command with all subcommands except export in a zone cluster.

You can also use the –Z option with all subcommands except export to specify the name of a particular zone cluster to which you want to restrict an operation. You can also attach the zone-cluster name to a resource group (zoneclustername:resourcegroup) to restrict an operation to a particular zone cluster.

You can access all zone cluster information from a global-cluster node, but a particular zone cluster is not aware of other zone clusters. If you do not restrict an operation to a particular zone cluster, the subcommand you use operates in the current cluster only.

You can specify affinities between a resource group in a zone cluster and a resource group in another zone cluster or a resource group on the global cluster. You can use the following command to specify the affinities between resource groups in different zone clusters:

# clresourcegroup set -p RG_affinities={+|++|+++|-|--}target-zc:target-rg source-zc:source-rg

The affinity type can be one of the following:

  • + (weak positive)

  • ++ (strong positive)

  • +++ (strong positive with failover delegation)

  • - (weak negative)

  • -- (strong negative)

For example, if you need to specify a strong positive affinity (++) between resource group RG1 in zone cluster ZC1 and resource group RG2 in zone cluster ZC2, use the following command:

# clresourcegroup set -p RG_affinities=++ZC2:RG2 ZC1:RG1

To specify a strong positive affinity with failover delegation (+++) between resource group RG1 in zone cluster ZC1 and resource group RG2 in zone cluster ZC2, use the following command:

# clresourcegroup set -p RG_affinities=+++ZC2:RG2 ZC1:RG1

To specify a strong negative affinity (--) between resource group RG1 in zone cluster ZC1 and resource group RG2 in the global cluster, use the following command:

# clresourcegroup set -p RG_affinities=--global:RG2 ZC1:RG1


Resource groups can be automatically distributed across cluster nodes or zones. For more information, see the entries for Load_factors, Priority, and Preemption_mode in the rg_properties(5) man page.

Subcommands

The following subcommands are supported:

add-node

Adds a node to the end of the Nodelist property for a resource group.

You can use this subcommand in the global cluster or in a zone cluster.

The order of the nodes and zones in the list specifies the preferred order in which the resource group is brought online on those nodes or zones. To add a node to a different position in the Nodelist property, use the set subcommand.
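As a sketch, with hypothetical names (resource group rg1, node phys-schost-3), the subcommand appends the node and the result can be checked with show:

```shell
# Append phys-schost-3 to the end of rg1's Nodelist (hypothetical names).
/usr/cluster/bin/clresourcegroup add-node -n phys-schost-3 rg1

# Verify the resulting node order.
/usr/cluster/bin/clresourcegroup show -p Nodelist rg1
```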

To add a node for the resource group in a specific zone cluster from the global-cluster node, you can use the –Z option to specify the name of the zone cluster.

Users other than superuser require solaris.cluster.modify role-based access control (RBAC) authorization to use this subcommand. See the rbac(5) man page.

create

Creates a new resource group.

You can use this subcommand in the global cluster or in a zone cluster.

To create a resource group in a specific zone cluster from the global-cluster node, you can use the –Z option to specify the name of the zone cluster.

If you specify a configuration file with the –i option, you can specify the plus sign operand (+). This operand specifies that you want to create all resources in that file that do not exist.

To set the Nodelist property for the new resource group, specify one of the following options:

  • –n node

  • –p Nodelist=node[,…]

  • –i clconfigfile

The order of the nodes in the list specifies the preferred order in which the resource group is brought online on those nodes. If you do not specify a node list at creation, the Nodelist property is set to all nodes that are configured in the cluster. The order is arbitrary.

By default, resource groups are created with the RG_mode property set to Failover. However, by using the –S option or the –p RG_mode=Scalable option, or by setting Maximum_primaries to a value that is greater than 1, you can create a scalable resource group. You can set the RG_mode property of a resource group only when that group is created.
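A sketch of both forms, using hypothetical group and node names:

```shell
# Create a failover resource group (the default, RG_mode=Failover).
/usr/cluster/bin/clresourcegroup create -n phys-schost-1,phys-schost-2 failover-rg

# Create a scalable resource group; -S is shorthand for
# -p RG_mode=Scalable. RG_mode cannot be changed after creation.
/usr/cluster/bin/clresourcegroup create -S -n phys-schost-1,phys-schost-2 scalable-rg
```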

Resource groups are always placed in an unmanaged state when they are created. However, when you issue the manage subcommand, or when you issue the online or switch subcommand with the –M option, the RGM changes their state to a managed state.

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac(5) man page.

delete

Deletes a resource group.

You can use this subcommand in the global cluster or in a zone cluster.

To delete a resource group in a specific zone cluster from the global-cluster node, you can use the –Z option to specify the name of the zone cluster.

You can specify the plus sign operand (+) with this subcommand to delete all resource groups.

You cannot delete resource groups if they contain resources, unless you specify the –F option. If you specify the –F option, all resources within each group, as well as the group, are deleted. All dependencies and affinities are deleted as well.
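For example, assuming a hypothetical resource group rg1 that still contains resources:

```shell
# Fails if rg1 still contains resources.
/usr/cluster/bin/clresourcegroup delete rg1

# Force deletion: removes rg1 together with all of its resources,
# dependencies, and affinities.
/usr/cluster/bin/clresourcegroup delete -F rg1
```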

This subcommand deletes multiple resource groups in an order that reflects resource and resource group dependencies. The order in which you specify resource groups on the command line does not matter.

The following forms of the clresourcegroup delete command are carried out in several steps:

  • When you delete multiple resource groups at the same time

  • When you delete a resource group with the –F option

If either of these forms of the command is interrupted, for example, if a node fails, some resource groups might be left in an invalid configuration.

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac(5) man page.

evacuate

Brings offline all resource groups on the nodes that you specify with the –n option.

You can use this subcommand in the global cluster or in a zone cluster.

When you run the evacuate command from the global-cluster nodes, this subcommand evacuates all resource groups in the global cluster or zone cluster. In a zone cluster, this subcommand only evacuates the resource groups in the specified zone cluster. To evacuate the resource groups in a specific zone cluster from the global-cluster nodes, you can use the –Z option to specify the name of the zone cluster.

Resource groups are brought offline in an order that reflects resource and resource group dependencies.

You can use the –T option with this subcommand to specify the number of seconds to keep resource groups from switching back. If you do not specify a value, 60 seconds is used by default.

Resource groups are prevented from failing over, or automatically being brought online, on the evacuating nodes for 60 seconds or the specified number of seconds after the evacuation completes.

If, however, you use the switch or online subcommand to switch a resource group online, or an evacuated node reboots, the evacuation timer immediately expires and automatic failovers are again allowed.
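A sketch, assuming a hypothetical node phys-schost-1 that is being taken down for maintenance:

```shell
# Evacuate all resource groups from phys-schost-1 and keep them
# from failing back to it for 120 seconds (the default is 60).
/usr/cluster/bin/clresourcegroup evacuate -n phys-schost-1 -T 120 +
```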

Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.

export

Writes the configuration information for a resource group to a file or to the standard output (stdout).

You can use this subcommand only in the global cluster.

The format of this configuration information is described in the clconfiguration(5CL) man page.
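For example, with a hypothetical resource group rg1 and output path:

```shell
# Write rg1's configuration to a file in clconfiguration(5CL) format.
/usr/cluster/bin/clresourcegroup export -o /var/tmp/rg1-config.xml rg1

# Write the configuration of all resource groups to stdout.
/usr/cluster/bin/clresourcegroup export -o - +
```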

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.

list

Displays a list, filtered by qualifier options, of resource groups that you specify.

You can use this subcommand in the global cluster or in a zone cluster.

You can use -r resource to include only those resource groups that contain the resources that you specify. You can use -t resourcetype to include only those resource groups that contain a resource of a type that you specify. You can use -n node to include only those resource groups that are online on one or more of the nodes that you specify.

If you specify -s state, only those groups with the states that you specify are listed.

If you do not specify an operand or if you specify the plus sign operand (+), all resource groups, filtered by any qualifier options that you specify, are listed.

If you specify the verbose option –v, the status (whether the resource group is online or offline) is displayed. A resource group is listed as online even if it is online on only one node in the cluster.
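As a sketch, with a hypothetical node phys-schost-1:

```shell
# List all resource groups.
/usr/cluster/bin/clresourcegroup list +

# List only resource groups that are online on phys-schost-1,
# showing online/offline status (-v).
/usr/cluster/bin/clresourcegroup list -v -n phys-schost-1
```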

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.

manage

Brings a resource group that you specify to a managed state.

You can use this subcommand in the global cluster or in a zone cluster.

If you use this subcommand from the global-cluster node, this subcommand can operate on any resource group. If you use this subcommand in a zone cluster, it successfully operates only on resource groups in the zone cluster. To manage resource groups in a specific zone cluster from the global-cluster node, you can use the –Z option to specify the name of the zone cluster.

Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.

offline

Brings a resource group that you specify to an offline state.

You can use this subcommand in the global cluster or in a zone cluster.

If you use this subcommand from the global-cluster node, this subcommand can operate on any resource group. If you use this subcommand in a zone cluster, it successfully operates only on resource groups in the zone cluster. To bring offline the resource groups in a specific zone cluster from the global-cluster node, you can use the –Z option to specify the name of the zone cluster.

If you specify the –n option, resource groups are taken offline only on the nodes that you specify.

If you do not specify the –n option, resource groups are taken offline on all nodes.

If you take a resource group offline with the offline subcommand, the Offline state of the resource group does not survive node reboots. In other words, if a node dies or joins the cluster, the resource group might come online on some node, even if you previously switched the resource group offline. Even if all of the resources are disabled, the resource group comes online.

Similarly, a resource group that declares any RG_dependencies or strong RG_affinities might be brought online automatically when another resource group is switched over.

To prevent the resource group from coming online automatically, use the suspend subcommand to suspend the automatic recovery actions of the resource group. To resume automatic recovery actions, use the resume subcommand.
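The offline/suspend/resume sequence can be sketched as follows, using a hypothetical resource group rg1:

```shell
# Take rg1 offline on all of its current masters ...
/usr/cluster/bin/clresourcegroup offline rg1

# ... and keep it from being brought online automatically
# (for example, when a node joins the cluster).
/usr/cluster/bin/clresourcegroup suspend rg1

# Later, restore automatic recovery actions for rg1.
/usr/cluster/bin/clresourcegroup resume rg1
```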

Resource groups are brought offline in an order that reflects resource and resource group dependencies.

Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.

online

Brings a resource group that you specify to an online state.

You can use this subcommand in the global cluster or in a zone cluster.

If you use this subcommand from the global-cluster node, this subcommand can operate on any resource group. If you use this subcommand in a zone cluster, it successfully operates only on resource groups in the zone cluster. To bring the resource groups in a specific zone cluster online from the global-cluster node, you can use the –Z option to specify the name of the zone cluster.

Use the –n option to specify the list of nodes on which to bring resource groups online. If you do not specify the –n option, this subcommand brings resource groups online on their most-preferred nodes, without taking the groups offline from any of their current primaries. The total number of online nodes for each resource group is bounded by the Desired_primaries and Maximum_primaries properties. The preference ordering of nodes is determined by the Nodelist, RG_affinities, and Load_factors properties. See the rg_properties(5) man page for more information about these properties.

When multiple resource group operands are provided on the command line and if the –n option is not specified, the resource group operands are assigned primary nodes in an order determined by the Priority property, with the highest-priority resource group receiving its node assignment first. After primary nodes have been assigned, all of the resource group operands are brought online in parallel, except as constrained by resource dependencies or resource group dependencies. The order in which you specify resource groups on the command line does not matter. For more information regarding the Priority property, see the rg_properties(5) man page.

Lower-priority resource groups might not be able to be assigned to their most-preferred node, or might be forced offline by higher-priority resource groups, if load limits are exceeded. For more information, see the loadlimit subcommands in the clnode(1CL) man page.

Unlike the switch subcommand, this subcommand does not attempt to take any nodes that are listed in the Nodelist property to the Offline state.

If you specify the –e option with this subcommand, all resources in the set of resource groups that are brought online are enabled.

You can specify the –m option to enable monitoring for all resources in the set of resource groups that are brought online. However, resources are not actually monitored unless they are first enabled and are associated with a MONITOR_START method.

You can also specify the –M option to indicate that all resource groups that are brought online are to be placed in a managed state. If the –M option is not specified, this subcommand has no effect on unmanaged resource groups.
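The three flags are commonly combined when bringing a newly created group into service; a sketch with a hypothetical resource group rg1:

```shell
# Bring rg1 online on its most-preferred nodes: -M places an
# unmanaged group under RGM management, -e enables all of its
# resources, and -m enables monitoring for them.
/usr/cluster/bin/clresourcegroup online -e -m -M rg1
```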

Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.

quiesce

Brings the specified resource group to a quiescent state.

You can use this subcommand in the global cluster or in a zone cluster.

If you use this subcommand from the global-cluster node, this subcommand can operate on any resource group. If you use this subcommand in a zone cluster, it successfully operates only on resource groups in the zone cluster. To operate on resource groups in a specific zone cluster from the global-cluster node, you can use the –Z option to specify the name of the zone cluster.

This command stops a resource group from continuously switching from one node to another node if a START or STOP method fails. It also prevents the node reboot that would normally take place if a stop method fails and the Failover_mode property of the resource is set to HARD. In that case, the resource moves to a STOP_FAILED state instead.

Use the –k option to kill methods that are running on behalf of resources in the affected resource groups. If you do not specify the –k option, methods are allowed to continue running until they exit or exceed their configured timeout.
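As a sketch, with a hypothetical resource group rg1:

```shell
# Stop rg1 from repeatedly switching between nodes after method
# failures; running methods finish or time out normally.
/usr/cluster/bin/clresourcegroup quiesce rg1

# Same, but kill any methods currently running on behalf of
# rg1's resources.
/usr/cluster/bin/clresourcegroup quiesce -k rg1
```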

Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.

remaster

Switches the resource groups that you specify from their current primary nodes to their most preferred nodes. The total number of online nodes for each resource group is bounded by the Desired_primaries and Maximum_primaries properties. The preference ordering of nodes is determined by the Nodelist, RG_affinities, and Load_factors properties. For more information, see the clnode(1CL) and the rg_properties(5) man pages.

You can use this subcommand in the global cluster or in a zone cluster.

If you use this subcommand from the global-cluster node, this subcommand can operate on any resource group. If you use this subcommand in a zone cluster, it successfully operates only on resource groups in the zone cluster. To operate on the resource groups in a specific zone cluster from the global-cluster node, you can use the –Z option to specify the name of the zone cluster.

Unlike the online subcommand, this subcommand can switch resource groups offline from their current masters to bring them online on more preferred masters.

When multiple resource group operands are provided on the command line, the resource group operands are assigned primary nodes in an order determined by their Priority property, with the highest-priority resource group receiving its node assignment first. The order in which you specify resource groups on the command line does not matter. For more information, see the rg_properties(5) man page.

Lower-priority resource groups might not be able to be assigned to their most-preferred node, or might be forced offline by higher-priority resource groups if load limits are exceeded. For more information, see the loadlimit subcommands of the clnode(1CL) man page.

This subcommand has no effect on unmanaged resource groups.

Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.

remove-node

Removes a node from the Nodelist property of a resource group.

You can use this subcommand in the global cluster or in a zone cluster.

To remove a node from a resource group in a specific zone cluster from the global-cluster node, you can use the –Z option to specify the name of the zone cluster.

After removing the node, remove-node resets the value of the Maximum_primaries or Desired_primaries property to the new number of nodes in the Nodelist property, but only if the current value exceeds that number.
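For example, with a hypothetical resource group rg1 and node phys-schost-3:

```shell
# Remove phys-schost-3 from rg1's Nodelist. Maximum_primaries and
# Desired_primaries are lowered only if they now exceed the number
# of remaining nodes in the list.
/usr/cluster/bin/clresourcegroup remove-node -n phys-schost-3 rg1
```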

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac(5) man page.

restart

Takes a resource group offline and then back online on the same set of primary nodes that currently host the resource group.

You can use this subcommand in the global cluster or in a zone cluster.

If you use this subcommand from the global-cluster node, this subcommand can operate on any resource group. If you use this subcommand in a zone cluster, it successfully operates only on resource groups in the zone cluster. To operate on the resource groups in a specific zone cluster from the global-cluster node, you can use the –Z option to specify the name of the zone cluster.

If you specify the –n option, the resource group is restarted only on current masters that are in the list of nodes that you specify.

Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.

resume

Resumes the automatic recovery actions on the specified resource group, which were previously suspended by the suspend subcommand.

You can use this subcommand in the global cluster or in a zone cluster.

If you use this subcommand from the global-cluster node, this subcommand can operate on any resource group. If you use this subcommand in a zone cluster, it successfully operates only on resource groups in the zone cluster. To operate on the resource groups in a specific zone cluster from the global-cluster node, you can use the –Z option to specify the name of the zone cluster.

A suspended resource group is not automatically restarted or failed over until you explicitly issue the command that resumes automatic recovery. Whether online or offline, suspended data services remain in their current state. You can still manually switch the resource group to a different state on specified nodes. You can also still enable or disable individual resources in the resource group.

Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.

set

Modifies the properties that are associated with the resource groups that you specify.

You can use this subcommand in the global cluster or in a zone cluster.

If you use this subcommand from the global-cluster node, this subcommand can operate on any resource group. If you use this subcommand in a zone cluster, it successfully operates only on resource groups in the zone cluster. To operate on the resource groups in a specific zone cluster from the global-cluster node, you can use the –Z option to specify the name of the zone cluster.

You can modify the Nodelist property either with -p Nodelist=node or, as a convenience, with -n node.
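A sketch of the equivalent forms, plus the list-edit syntax from the synopsis (-p name[+|-]=value), using hypothetical names:

```shell
# Equivalent ways to replace rg1's node list.
/usr/cluster/bin/clresourcegroup set -p Nodelist=phys-schost-1,phys-schost-2 rg1
/usr/cluster/bin/clresourcegroup set -n phys-schost-1,phys-schost-2 rg1

# The += and -= forms add entries to, or remove entries from,
# a list-valued property.
/usr/cluster/bin/clresourcegroup set -p Nodelist+=phys-schost-3 rg1
```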

Users other than superuser require solaris.cluster.modify RBAC authorization to use this subcommand. See the rbac(5) man page.

show

Generates a configuration report, filtered by qualifier options, for resource groups that you specify.

You can use this subcommand in the global cluster or in a zone cluster.

You can use –r resource to include only those resource groups that contain the resources that you specify. You can use –t resourcetype to include only those resource groups that contain a resource of a type that you specify. You can use -n node to include only those resource groups that are online on one or more of the nodes that you specify. You can use the –Z option from a global cluster to include only those resource groups that are online in the specified zone cluster.

You can use the –p option to display a selected set of resource group properties rather than all resource group properties.

If you do not specify an operand or if you specify the plus sign operand (+), all resource groups, filtered by any qualifier options that you specify, are listed.

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.

status

Generates a status report, filtered by qualifier options, for resource groups that you specify.

You can use this subcommand in the global cluster or in a zone cluster.

If you use this command in a zone cluster, this subcommand applies only to the resource groups in the zone cluster.

You can use -r resource to include only those resource groups that contain the resources that you specify. You can use -t resourcetype to include only those resource groups that contain a resource of a type that you specify. You can use -n node to include only those resource groups that are online on one or more of the nodes that you specify. From the global-cluster node, you can use the –Z option to include only those resource groups that are online in the specified zone cluster.

If you specify -s state, only those groups with the states that you specify are listed.


Note -  You can specify either the –n option or the –s option with the status subcommand, but you cannot specify both options at the same time.

If you do not specify an operand or if you specify the plus sign operand (+), all resource groups, filtered by any qualifier options that you specify, are listed.
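A sketch of the unfiltered and state-filtered forms:

```shell
# Report the status of all resource groups.
/usr/cluster/bin/clresourcegroup status +

# Report only resource groups that are currently Offline
# (state names are not case sensitive).
/usr/cluster/bin/clresourcegroup status -s Offline
```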

Users other than superuser require solaris.cluster.read RBAC authorization to use this subcommand. See the rbac(5) man page.

suspend

Suspends the automatic recovery actions on and quiesces the specified resource group.

You can use this subcommand in the global cluster or in a zone cluster.

If you use this subcommand from the global-cluster node, this subcommand can operate on any resource group. If you use this subcommand in a zone cluster, it successfully operates only on resource groups in the zone cluster. To operate on the resource groups in a specific zone cluster from the global-cluster node, you can use the –Z option to specify the name of the zone cluster.

A suspended resource group is not automatically restarted or failed over until you explicitly issue the command that resumes automatic recovery. Whether online or offline, suspended data services remain in their current state. While the resource group is suspended, you can manually switch the resource group or its resources to a different state on specific nodes by using the clresourcegroup(1CL) or clresource(1CL) commands with subcommands such as switch, online, offline, disable, or enable. Rather than operating on a resource directly, for example by killing the application processes or running application-specific commands, use the clresourcegroup(1CL) or clresource(1CL) commands. This practice allows the cluster framework to maintain an accurate picture of the current status of the resources and resource groups, so that availability can be properly restored when the resume subcommand is executed.

You might need to suspend the automatic recovery of a resource group to investigate and fix a problem in the cluster or perform maintenance on resource group services.

You can also specify the –k option to immediately kill methods that are running on behalf of resources in the affected resource groups. By using the –k option, you can speed the quiescing of the resource groups. If you do not specify the –k option, methods are allowed to continue running until they exit or they exceed their configured timeout.

Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.

switch

Changes the node, or set of nodes, that is mastering a resource group that you specify.

You can use this subcommand in the global cluster or in a zone cluster.

If you use this subcommand from the global-cluster node, this subcommand can operate on any resource group. If you use this subcommand in a zone cluster, it successfully operates only on resource groups in the zone cluster.

Use the –n option to specify the list of nodes on which to bring the resource groups online. You can use the –Z option to specify a zone cluster from the global-cluster node to include only the list of resource groups in the specified zone cluster.

If a resource group is not already online, it is brought online on the set of nodes that is specified by the –n option. However, groups that are online are brought offline on nodes that are not specified by the –n option before the groups are brought online on new nodes.

If you specify –e with this subcommand, all resources in the set of resource groups that are brought online are enabled.

You can specify –m to enable monitoring for all resources in the set of resource groups that are brought online. However, resources are not actually monitored unless they are first enabled and are associated with a MONITOR_START method.

You can specify the –M option to indicate that all resource groups that are brought online are to be placed in a managed state. If the –M option is not specified, this subcommand has no effect on unmanaged resource groups.

Resource groups are brought online in an order that reflects resource and resource group dependencies. The order in which you specify groups on the command line does not matter.

Lower-priority resource groups might not be able to be switched to the specified nodes, or might even be forced offline by higher-priority resource groups if load limits are exceeded. For more information, see the loadlimit subcommands in the clnode(1CL) man page.

Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.

unmanage

Brings a resource group that you specify to an unmanaged state.

You can use this subcommand in the global cluster or in a zone cluster.

If you use this subcommand from the global-cluster node, this subcommand can operate on any resource group. If you use this subcommand in a zone cluster, it successfully operates only on resource groups in the same zone cluster. To operate on the resource groups in a specific zone cluster from the global-cluster node, you can use the –Z option to specify the name of the zone cluster.

Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.
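
For example, a typical sequence for bringing a resource group to the unmanaged state is to take it offline, disable its resources, and then unmanage it (rg1 is a hypothetical resource group name):

# clresourcegroup offline rg1
# clresource disable -g rg1 +
# clresourcegroup unmanage rg1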

Options

The following options are supported:


Note -  Both the short and long form of each option is shown in this section.
-?
--help

Displays help information.

You can specify this option with or without a subcommand.

If you specify this option without a subcommand, the list of all available subcommands is displayed.

If you specify this option with a subcommand, the usage for that subcommand is displayed.

If you specify this option with the create or set subcommands, help information is displayed for all resource group properties.

If you specify this option with other options, with subcommands, or with operands, they are all ignored. No other processing occurs.

-e
--enable

Enables all resources within a resource group when the group is brought online.

You can use this option only with the switch and online subcommands.

-F
--force

Deletes a resource group and all of its resources forcefully, even if those resources are enabled or online. This option also removes both resources and resource groups from any dependency property settings or affinity property settings in other resources and in other resource groups.

Use the –F option with the delete subcommand with care. A forced deletion might cause changes to other resource groups that reference the deleted resource group, such as when a dependency or affinity is set. Dependent resources might be left with an invalid or error state after the forced deletion. If this occurs, you might need to reconfigure or restart the affected dependent resources.

-i {- | clconfigfile}
--input={- | clconfigfile}
--input {- | clconfigfile}

Specifies that you want to use the configuration information that is located in the clconfigfile file. See the clconfiguration(5CL) man page.

Specify a dash (-) with this option to provide configuration information through the standard input (stdin).

If you specify other options, they take precedence over the options and information in clconfigfile.

Only those resource groups that you specify are affected by this option.

-k
--kill

Kills RGM resource methods that are running on behalf of resources in the resource group that you specify.

You can use this option with the quiesce and suspend subcommands. If you do not specify the –k option, methods are allowed to continue running until they exit or they exceed their configured timeout.

-m
--monitor

Enables monitoring for all resources within a resource group when the resource group is brought online.

Resources, however, are not actually monitored unless they are first enabled and are associated with a MONITOR_START method.

You can use this option only with the switch and online subcommands.

-M
--manage

Specifies that all resource groups that are brought online by the switch or online subcommand are to be in a managed state.

-n node[,…]
--node=node[,…]
--node node[,…]

Specifies a node or a list of nodes in the target global cluster or zone cluster. If the –Z option is specified, you can specify only zone-cluster hostnames with the –n option, not global-cluster hostnames. If the –Z option is not specified, you can specify only global-cluster hostnames with the –n option.

You can specify the name or identifier of a node for node.

When used with the list, show, and status subcommands, this option limits the output. Only those resource groups that are currently online on one or more nodes in the node list are included.

Specifying this option with the create, add-node, remove-node, and set subcommands is equivalent to setting the Nodelist property. The order of the nodes in the Nodelist property specifies the order in which the group is to be brought online on those nodes. If you do not specify a node list with the create subcommand, the Nodelist property is set to all nodes in the cluster. The order is arbitrary.
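
For example, the following command creates a hypothetical resource group rg1 whose Nodelist property lists phys-schost-1 as the preferred primary, followed by phys-schost-2:

# clresourcegroup create -n phys-schost-1,phys-schost-2 rg1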

When used with the switch and online subcommands, this option specifies the nodes on which to bring the resource group online.

When used with the evacuate and offline subcommands, this option specifies the nodes on which to bring the resource group offline.

When used with the restart subcommand, this option specifies nodes on which to restart the resource group. The resource group is restarted on current masters that are in the specified list.

-o {- | clconfigfile}
--output={- | clconfigfile}
--output {- | clconfigfile}

Writes resource group configuration information to a file or to the standard output (stdout). The format of the configuration information is described in the clconfiguration(5CL) man page.

If you specify a file name with this option, this option creates a new file. Configuration information is then placed in that file. If you specify - with this option, the configuration information is sent to the standard output (stdout). All other standard output for the command is suppressed.

You can use this option only with the export subcommand.
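
For example, the following commands write the configuration of a hypothetical resource group rg1 to a file, and then send the same information to the standard output:

# clresourcegroup export -o /var/tmp/rg1-config.xml rg1
# clresourcegroup export -o - rg1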

-p name
--property=name
--property name

Specifies a list of resource group properties.

You use this option with the show subcommand.

For information about the properties that you can set or modify with the create or set subcommand, see the description of the –p name=value option.

If you do not specify this option, the show subcommand lists most resource group properties. If you do not specify this option and you specify the --verbose option with the show subcommand, the subcommand lists all resource group properties.

Resource group properties that you can specify are described in Resource Group Properties in Oracle Solaris Cluster Data Services Planning and Administration Guide.
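
For example, the following command displays only the Nodelist property of a hypothetical resource group rg1:

# clresourcegroup show -p Nodelist rg1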

-p name=value
-p name+=array-values
-p name-=array-values
--property=name=value
--property=name+=array-values
--property=name-=array-values
--property name=value
--property name+=array-values
--property name-=array-values

Sets or modifies the value of a resource group property.

You can use this option only with the create and set subcommands.

For information about the properties about which you can display information with the show subcommand, see the description of the –p name option.

Multiple instances of –p are allowed.

The operators to use with this option are as follows:

=

Sets the property to the specified value. The create and set subcommands accept this operator.

+=

Adds one or more values to a list of property values. Only the set subcommand accepts this operator. You can specify this operator only for properties that accept lists of string values, for example, Nodelist.

-=

Removes one or more values from a list of property values. Only the set subcommand accepts this operator. You can specify this operator only for properties that accept lists of string values, for example, Nodelist.
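
For example, the following commands add a node to, and then remove that node from, the Nodelist property of a hypothetical resource group rg1:

# clresourcegroup set -p Nodelist+=phys-schost-3 rg1
# clresourcegroup set -p Nodelist-=phys-schost-3 rg1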

-r resource[,…]
--resource=resource[,…]
--resource resource[,…]

Specifies a resource or a list of resources.

You can use this option only with the list, show, and status subcommands. This option limits the output from these commands. Only those resource groups that contain one or more of the resources in the resource list are output.

-s state[,…]
--state=state[,…]
--state state[,…]

Specifies a resource group state or a list of resource group states.

You can use this option only with the status subcommand. This option limits the output so that only those resource groups that are in the specified state on any specified nodes are displayed. You can specify one or more of the following arguments (states) with this option:

Error_stop_failed

Any specified resource group that is in the Error_stop_failed state on any node that you specify is displayed.

Not_online

Any specified resource group that is in any state other than online on any node that you specify is displayed.

Offline

A specified resource group is displayed only if it is in the Offline state on all nodes that you specify.

Online

Any specified resource group that is in the Online state on any node that you specify is displayed.

Online_faulted

Any specified resource group that is in the Online_faulted state on any node that you specify is displayed.

Pending_offline

Any specified resource group that is in the Pending_offline state on any node that you specify is displayed.

Pending_online

Any specified resource group that is in the Pending_online state on any node that you specify is displayed.

Pending_online_blocked

Any specified resource group that is in the Pending_online_blocked state on any node that you specify is displayed.

Unmanaged

Any specified resource group that is in the Unmanaged state on any node that you specify is displayed.
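
For example, the following command reports the status of only those resource groups that are in the Offline or Unmanaged state:

# clresourcegroup status -s Offline,Unmanaged +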

-S
--scalable

Creates a scalable resource group or updates the Maximum_primaries and Desired_primaries properties.

You can use this option only with the create and add-node subcommands.

When used with the create subcommand, this option creates a scalable resource group rather than a failover resource group. This option also sets both the Maximum_primaries and Desired_primaries properties to the number of nodes in the resulting Nodelist property.

You can use this option with the add-node subcommand only if the resource group is already scalable. When used with the add-node subcommand, this option updates both the Maximum_primaries and Desired_primaries properties to the number of nodes in the resulting Nodelist property.

You can also set the RG_mode, Maximum_primaries, and Desired_primaries properties with the –p option.
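
For example, the following commands create a scalable resource group on two nodes and later extend it to a third node, updating the Maximum_primaries and Desired_primaries properties accordingly (the group and node names are hypothetical):

# clresourcegroup create -S -n phys-schost-1,phys-schost-2 scal-rg
# clresourcegroup add-node -S -n phys-schost-3 scal-rg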

-t resourcetype[,…]
--type=resourcetype[,…]
--type resourcetype[,…]

Specifies a resource type or a list of resource types.

You can use this option only with the list, show, and status subcommands. This option limits the output from these commands. Only those resource groups that contain one or more of the resources of a type that is included in the resource type list are output.

You specify resource types as [prefix.]type[:RT-version]. For example, an nfs resource type might be represented as SUNW.nfs:3.2, SUNW.nfs, or nfs. You need to include an RT-version only if there is more than one version of a resource type that is registered in the cluster. If you do not include a prefix, SUNW is assumed.

-T seconds
--time=seconds
--time seconds

Specifies the number of seconds to keep resource groups from switching back onto a node after you have evacuated resource groups from the node.

You can use this option only with the evacuate subcommand. You must specify an integer value between 0 and 65535 for seconds. If you do not specify a value, 60 seconds is used by default.

Resource groups are prevented from failing over, or automatically being brought online, on the evacuating node for the specified number of seconds (60 seconds by default) after the evacuation completes.

If, however, you use the switch or online subcommand to switch a resource group online, or the evacuated node reboots, the evacuation timer immediately expires and automatic failovers are again allowed.

The –T option specifies that resource groups are not to be brought online by the RGM on the evacuated node for the specified number of seconds after the evacuation has completed. You can override the –T timer by switching a resource group onto the evacuated node by using the switch or online subcommand with the –n option. When such a switch completes, the –T timer immediately expires for that node. However, switchover commands such as online or remaster without the –n flag continue to respect the –T timer and avoid switching any resource groups onto the evacuated node.
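
For example, the following command evacuates all resource groups from a node and prevents them from automatically returning to that node for 300 seconds (the node name is hypothetical):

# clresourcegroup evacuate -n phys-schost-2 -T 300 +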

-u

If you use the + operand, this option specifies that the command also operates on resource groups that are suspended.

If you do not specify the –u option when you specify the + operand, the command ignores all suspended resource groups. The –u option is valid when the + operand is specified with the add-node, manage, offline, online, quiesce, remaster, remove-node, restart, set, switch, or unmanage subcommand.

-v
--verbose

Displays verbose information on the standard output (stdout).

-V
--version

Displays the version of the command.

If you specify this option with other options, with subcommands, or with operands, they are all ignored. Only the version of the command is displayed. No other processing occurs.

-Z {zoneclustername | global | all}
--zoneclustername={zoneclustername | global | all}
--zoneclustername {zoneclustername | global | all}

Specifies the cluster or clusters in which the resource group exists and on which you want to operate.

This option is supported by all subcommands except the export subcommand.

If you specify this option, you must also specify one argument from the following list:

zoneclustername

Specifies that the command with which you use this option is to operate on all specified resource groups in only the zone cluster named zoneclustername.

global

Specifies that the command with which you use this option is to operate on all specified resource groups in the global cluster only.

all

If you use this argument in the global cluster, it specifies that the command with which you use it is to operate on all specified resource groups in all clusters, including the global cluster and all zone clusters.

If you use this argument in a zone cluster, it specifies that the command with which you use it is to operate on all specified resource groups in that zone cluster only.
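
For example, the following command, run from the global-cluster node, reports the status of all resource groups in the global cluster and in every zone cluster:

# clresourcegroup status -Z all +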

Operands

The following operands are supported:

resourcegroup

The name of the resource group that you want to manage.

+

All resource groups.

Exit Status

The complete set of exit status codes for all commands in this command set is listed in the Intro(1CL) man page. Returned exit codes are also compatible with the return codes that are described in the scha_calls(3HA) man page.

If the command is successful for all specified operands, it returns zero (CL_NOERR). If an error occurs for an operand, the command processes the next operand in the operand list. The returned exit code always reflects the error that occurred first.

This command returns the following exit status codes:

0 CL_NOERR

No error

The command that you issued completed successfully.

1 CL_ENOMEM

Not enough swap space

A cluster node ran out of swap memory or ran out of other operating system resources.

3 CL_EINVAL

Invalid argument

You typed the command incorrectly, or the syntax of the cluster configuration information that you supplied with the –i option was incorrect.

6 CL_EACCESS

Permission denied

The object that you specified is inaccessible. You might need superuser or RBAC access to issue the command. See the su(1M) and rbac(5) man pages for more information.

35 CL_EIO

I/O error

A physical input/output error has occurred.

36 CL_ENOENT

No such object

The object that you specified cannot be found for one of the following reasons:

  • The object does not exist.

  • A directory in the path to the configuration file that you attempted to create with the –o option does not exist.

  • The configuration file that you attempted to access with the –i option contains errors.

38 CL_EBUSY

Object busy

You attempted to remove a cable from the last cluster interconnect path to an active cluster node. Or, you attempted to remove a node from a cluster configuration from which you have not removed references.

39 CL_EEXIST

Object exists

The device, device group, cluster interconnect component, node, cluster, resource, resource type, resource group, or private string that you specified already exists.

Examples

Example 1 Creating a New Failover Resource Group

The first command in the following example creates the failover resource groups rg1 and rg2. The second command adds the resources that are included in the configuration file cluster-1.xml to these resource groups.

# clresourcegroup create rg1 rg2
# clresource create -g rg1,rg2 -i /net/server/export/cluster-1.xml +

Either of the following two commands creates the failover resource groups rg1 and rg2 in the zone cluster ZC from the global-cluster node.

# clresourcegroup create -Z ZC rg1 rg2
# clresourcegroup create ZC:rg1 ZC:rg2
Example 2 Bringing All Resource Groups Online

The following command brings all resource groups online, with all resources enabled and monitored.

# clresourcegroup online -eM +
Example 3 Adding a Node to the Nodelist Property

The following command adds the node phys-schost-4 to the Nodelist property for all resource groups.

# clresourcegroup set -p Nodelist+=phys-schost-4 +
Example 4 Evacuating All Resource Groups From a Node

The following command evacuates all resource groups from the node phys-schost-3.

# clresourcegroup evacuate -n phys-schost-3 +
Example 5 Bringing a Resource Group Offline on All Nodes

The following command brings the resource group rg1 offline on all nodes.

# clresourcegroup offline rg1
Example 6 Refreshing an Entire Resource Group Manager Configuration

The first command in the following example deletes all resources and resource groups, even if they are enabled and online. The second command unregisters all resource types. The third command creates the resources that are included in the configuration file cluster-1.xml. The third command also registers the resources' resource types and creates all resource groups upon which the resource types depend.

# clresourcegroup delete --force +
# clresourcetype unregister +
# clresource create -i /net/server/export/cluster-1.xml -d +
Example 7 Listing All Resource Groups

The following command lists all resource groups.

# clresourcegroup list
rg1
rg2
Example 8 Listing All Resource Groups With Their Resources

The following command lists all resource groups with their resources. Note that rg3 has no resources.

# clresourcegroup list -v
Resource Group Resource
-------------- --------
rg1            rs-2
rg1            rs-3
rg1            rs-4
rg1            rs-5
rg2            rs-1
rg3            -
Example 9 Listing All Resource Groups That Include Particular Resources

The following command lists all groups that include Oracle Solaris Cluster HA for NFS resources.

# clresourcegroup list -t nfs
rg1
Example 10 Clearing a Start_failed Resource State by Switching Over a Resource Group

The Start_failed resource state indicates that a Start or Prenet_start method failed or timed out on a resource, but its resource group came online anyway. The resource group comes online even though the resource has been placed in a faulted state and might not be providing service. This state can occur if the resource's Failover_mode property is set to None or to another value that prevents the failover of the resource group.

Unlike the Stop_failed resource state, the Start_failed resource state does not prevent you or the Oracle Solaris Cluster software from performing actions on the resource group. You do not need to issue the reset subcommand to clear a Start_failed resource state. You only need to execute a command that restarts the resource.

The following command clears a Start_failed resource state that has occurred on a resource in the resource-grp-2 resource group. The command clears this condition by switching the resource group to the schost-2 node.

# clresourcegroup switch -n schost-2 resource-grp-2
Example 11 Clearing a Start_failed Resource State by Restarting a Resource Group

The following command clears a Start_failed resource state that has occurred on a resource in the resource-grp-2 resource group. The command clears this condition by restarting the resource group on the schost-1 node, which originally hosted the resource group.

# clresourcegroup restart resource-grp-2
Example 12 Setting the load_factors Property

The following command sets load factors for two resource groups.

# clresourcegroup set -p load_factors=factor1@50,factor2@1 rg1 rg2

From a global cluster, the following command sets load factors for two resource groups within a zone cluster.

# clresourcegroup set -Z ZC -p load_factors=factor1@50,factor2@1 rg1 rg2
Example 13 Setting the priority Property for a Resource Group

The following command sets a resource group's priority.

# clresourcegroup set -p priority=600 rg1

The rg1 resource group is given preference over lower-priority resource groups for node assignment. rg1 can preempt other resource groups of lower priority on a node where a hard limit is exceeded. If the priority of rg1 exceeds another resource group's priority by at least 100, rg1 can preempt that resource group on a node where a soft limit is exceeded. The default value of priority is 500.

Attributes

See attributes(5) for descriptions of the following attributes:

ATTRIBUTE TYPE         ATTRIBUTE VALUE
Availability           ha-cluster/system/core
Interface Stability    Evolving

See also

clresource(1CL), clresourcetype(1CL), cluster(1CL), Intro(1CL), su(1M), scha_calls(3HA), attributes(5), rbac(5), rg_properties(5), clconfiguration(5CL)

Notes

The superuser can run all forms of this command.

All users can run this command with the –? (help) or –V (version) option.

To run the clresourcegroup command with other subcommands, users other than superuser require RBAC authorizations. See the following table.

Subcommand       RBAC Authorization
add-node         solaris.cluster.modify
create           solaris.cluster.modify
delete           solaris.cluster.modify
evacuate         solaris.cluster.admin
export           solaris.cluster.read
list             solaris.cluster.read
manage           solaris.cluster.admin
offline          solaris.cluster.admin
online           solaris.cluster.admin
quiesce          solaris.cluster.admin
remaster         solaris.cluster.admin
remove-node      solaris.cluster.modify
restart          solaris.cluster.admin
resume           solaris.cluster.admin
set              solaris.cluster.modify
show             solaris.cluster.read
status           solaris.cluster.read
suspend          solaris.cluster.admin
switch           solaris.cluster.admin
unmanage         solaris.cluster.admin