Oracle Solaris Cluster Reference Manual

rg_properties - resource group properties

Description

The following information describes the resource group properties that are defined by Oracle Solaris Cluster.

Resource Group Properties and Descriptions


Note - Resource group property names, such as Auto_start_on_new_cluster and Desired_primaries, are not case sensitive. You can use any combination of uppercase and lowercase letters when you specify resource group property names.


Auto_start_on_new_cluster (boolean)

This property controls whether the Resource Group Manager (RGM) starts the resource group automatically when a new cluster is forming. The default is TRUE.

If set to TRUE, the RGM attempts to start the resource group automatically to achieve Desired_primaries when all the nodes in the cluster are simultaneously rebooted.

If set to FALSE, the resource group does not start automatically when the cluster is rebooted. The resource group remains offline until the first time that the resource group is manually switched online by using the clresourcegroup(1CL) command or the equivalent graphical user interface command. After that, the resource group resumes normal failover behavior.

Default

TRUE

Tunable

Any time

Desired_primaries (integer)

The desired number of nodes or zones on which the resource group can run simultaneously.

The default is 1. The value of the Desired_primaries property must be less than or equal to the value of the Maximum_primaries property.

Default

1, see above

Tunable

Any time

Failback (boolean)

A Boolean value that indicates whether to recalculate the set of nodes or zones where the resource group is online when the cluster membership changes. A recalculation can cause the RGM to bring the group offline on less preferred nodes or zones and online on more preferred nodes or zones.

Default

FALSE

Tunable

Any time

Global_resources_used (string_array)

Indicates whether cluster file systems are used by any resource in this resource group. Legal values that the administrator can specify are an asterisk (*) to indicate all global resources, and the empty string (“”) to indicate no global resources.

Default

All global resources

Tunable

Any time

Implicit_network_dependencies (boolean)

A Boolean value that indicates, when TRUE, that the RGM should enforce implicit strong dependencies of non-network-address resources on network-address resources within the group. This means that the RGM starts all network-address resources before all other resources and stops network address resources after all other resources within the group. Network-address resources include the logical host name and shared address resource types.

In a scalable resource group, this property has no effect because a scalable resource group does not contain any network-address resources.

Default

TRUE

Tunable

Any time

Load_factors

Determines how much of the load limit a resource group consumes.

You can configure load limits for each node, and a resource group is assigned a set of load factors that correspond to the nodes' defined load limits. As the RGM brings resource groups online, the load factors of the resource groups on each node are added up to provide a total load that is compared against that node's load limits. The load distribution policy for resource groups is also influenced by the setting of the Priority and Preemption_mode properties. See the Preemption_mode and Priority properties for more information.

You can use the clresourcegroup set -p option to set the value of the load_factors property. The load_factors property has a composite value consisting of a comma-separated list of zero or more elements of the form limitname@value, where limitname is an identifier string and value is a nonnegative integer. The default value for each load factor is 0, and the maximum permitted value is 1000. If a limitname is not defined as a load limit on a node in the resource group's node list, it is considered unlimited on that node.
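
The comparison that the RGM performs can be modeled as follows. This is a hypothetical sketch, not Oracle Solaris Cluster code; the function names and example data are illustrative only.

```python
# Hypothetical model of how summed load factors are compared against a
# node's load limits. Not part of the Oracle Solaris Cluster software.

def parse_load_factors(spec):
    """Parse a load_factors value such as 'mem@300,cpu@0' into a dict."""
    factors = {}
    for element in filter(None, spec.split(",")):
        name, value = element.split("@")
        factors[name] = int(value)          # 0..1000 per the property definition
    return factors

def node_can_host(group_factors, online_factors, node_limits):
    """True if adding a group's load factors keeps every defined limit satisfied.

    A limit name that is not defined on the node is treated as unlimited.
    """
    for name, value in group_factors.items():
        if name not in node_limits:         # undefined limit: unlimited here
            continue
        if online_factors.get(name, 0) + value > node_limits[name]:
            return False
    return True

# A group declaring mem@300 fits on a node with a mem limit of 500
# that already carries 100 units of mem load from online groups.
rg = parse_load_factors("mem@300,cpu@0")
print(node_can_host(rg, {"mem": 100}, {"mem": 500}))   # True
```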

If a set of resource groups uses a common load factor, those resource groups are distributed across nodes even if the corresponding load limit is undefined (that is, unlimited) on those nodes. The existence of a nonzero load factor causes the RGM to distribute load. If you want to avoid load-based resource group distribution, remove the load factors or set them to zero.


Note - When load factors or load limits are changed, some resource groups that are currently offline might automatically be brought online. You can execute the clresourcegroup suspend command on a resource group to prevent it from coming online automatically.


You can use this subcommand in the global cluster or in a zone cluster.

See the clresourcegroup(1CL) and clnode(1CL) man pages for more information.

Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.

Maximum_primaries (integer)

The maximum number of nodes or zones where the resource group might be online at the same time.

If the RG_mode property is Failover, the value of this property must be no greater than 1. If the RG_mode property is Scalable, a value greater than 1 is allowed.

Default

1, see above

Tunable

Any time

Nodelist (string_array)

A list of nodes or zones where the group can be brought online in order of preference. These nodes or zones are known as the potential primaries or masters of the resource group.

To specify a non-global zone as an element of Nodelist, use the following syntax:

nodename:zonename

nodename is the name of the node where zonename is located. zonename is the name of the zone that you want to include in Nodelist. For example, to specify the non-global zone zone-1, which is located on the node phys-schost-1, you specify the following text:

phys-schost-1:zone-1
Default

The list of all cluster nodes in arbitrary order

Tunable

Any time

Pathprefix (string)

A directory in the cluster file system in which resources in the group can write essential administrative files. Some resources might require this property. Make Pathprefix unique for each resource group.

Default

The empty string

Tunable

Any time

Pingpong_interval (integer)

A non-negative integer value (in seconds) used by the RGM to determine where to bring the resource group online in the event of a reconfiguration or as the result of an scha_control giveover command or function being executed.

In the event of a reconfiguration, if the resource group fails more than once to come online within the past Pingpong_interval seconds on a particular node or zone (because the resource's Start or Prenet_start method exited nonzero or timed out), that node or zone is considered ineligible to host the resource group and the RGM looks for another master.

If a scha_control(1HA) command or scha_control(3HA) giveover is executed on a given node or zone by a resource, thereby causing its resource group to fail over to another node or zone, the first node or zone (on which scha_control was invoked) cannot be the destination of another scha_control giveover by the same resource until Pingpong_interval seconds have elapsed.
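
The eligibility rule in the reconfiguration case can be sketched as follows. This is a hypothetical illustration of the check described above, not the RGM's actual implementation; the function name and timestamps are invented.

```python
# Hypothetical sketch of the Pingpong_interval eligibility check: a node is
# ineligible to host the group if the group failed to come online there more
# than once within the past interval. Times are in seconds.

def node_eligible(failure_times, now, pingpong_interval=3600):
    recent = [t for t in failure_times if now - t <= pingpong_interval]
    return len(recent) <= 1

print(node_eligible([100, 200], now=500))    # False: two recent start failures
print(node_eligible([100, 200], now=5000))   # True: the failures have aged out
```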

Default

3600 (one hour)

Tunable

Any time

Preemption_mode

Determines the likelihood that a resource group will be preempted from a node by a higher-priority resource group because of node overload.

You can use the clresourcegroup set -p option to set the enum value of the preemption_mode property. The default setting for the preemption_mode property is HAS_COST.

The resource group's preemption_mode property can have one of the following values:

  • HAS_COST – To satisfy load limits, this resource group can be displaced from its current master by a higher-priority resource group. Preempting this resource group has a cost associated with it, so the RGM will try to avoid it, if possible, by choosing a different node to master the higher-priority resource group.

  • NO_COST – To satisfy load limits, this resource group can be displaced from a current master by a higher-priority resource group. The cost of preempting this resource group is zero.

  • NEVER – This resource group cannot be displaced from its current master to satisfy load limits.
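
The interaction of these values can be modeled as below. This is a hypothetical sketch: the zero-versus-nonzero preemption cost and the exclusion of NEVER groups come from the descriptions above, and preferring a lower-priority victim follows the Priority property's statement that higher-priority groups are less likely to be displaced. The exact tie-breaking used by the RGM is not specified here.

```python
# Hypothetical model of victim selection on an overloaded node. NEVER groups
# are never displaced; NO_COST groups are preferred victims over HAS_COST
# groups; among equal costs, the lower-priority group is displaced (assumed).

PREEMPTION_COST = {"NO_COST": 0, "HAS_COST": 1}

def choose_victim(groups):
    """groups: list of (name, preemption_mode, priority) tuples.
    Return the name of the group to displace, or None if none may be."""
    candidates = [g for g in groups if g[1] != "NEVER"]
    if not candidates:
        return None
    return min(candidates, key=lambda g: (PREEMPTION_COST[g[1]], g[2]))[0]

print(choose_victim([("rg1", "HAS_COST", 500),
                     ("rg2", "NO_COST", 800),
                     ("rg3", "NEVER", 100)]))   # rg2
```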

See the clresourcegroup(1CL) and clnode(1CL) man pages for more information.

Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.

Priority

Determines the order in which resource groups are assigned to master nodes. A higher priority indicates a more important service.

You can use the clresourcegroup set -p option to set the unsigned-integer value of the priority property. A resource group with a higher priority value than another group takes precedence and is more likely to be mastered by its preferred node and is less likely to be displaced from that node. The default value for the priority property is 500.

If two resource groups have equal priorities and are related by RG_dependencies or strong RG_affinities, the resource group that does not specify the dependency or affinity will receive its node assignment before the dependent resource group. If two resource groups have equal priority and are unrelated by dependencies or strong affinities, they are assigned their primaries in arbitrary order.

See the clresourcegroup(1CL) and clnode(1CL) man pages for more information.

Users other than superuser require solaris.cluster.admin RBAC authorization to use this subcommand. See the rbac(5) man page.

Resource_list (string_array)

The list of resources that are contained in the group. The administrator does not set this property directly. Rather, the RGM updates this property as the administrator adds or removes resources from the resource group.

Default

No default

Tunable

Never

RG_affinities (string)

Specifies whether the RGM should attempt (1) to locate the resource group on a machine that is a current master of another given resource group (positive affinity) or (2) to locate the resource group on a machine that is not a current master of a given resource group (negative affinity).

You can set RG_affinities to the following strings:

  • +, or weak positive affinity

  • ++, or strong positive affinity

  • +++, or strong positive affinity with failover delegation

  • -, or weak negative affinity

  • --, or strong negative affinity

For example, RG_affinities=+RG2,--RG3 indicates that this resource group has a weak positive affinity for RG2 and a strong negative affinity for RG3.
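
The value format can be parsed as shown in this hypothetical sketch; the parser is illustrative only and is not part of the cluster software.

```python
# Hypothetical parser for an RG_affinities value such as '+RG2,--RG3'.
# Operators are matched longest-first so '+++' is not read as '+' + '++'.

AFFINITY_OPS = ("+++", "++", "--", "+", "-")

def parse_affinities(spec):
    pairs = []
    for element in filter(None, spec.split(",")):
        for op in AFFINITY_OPS:
            if element.startswith(op):
                pairs.append((op, element[len(op):]))
                break
        else:
            raise ValueError("bad affinity element: " + element)
    return pairs

print(parse_affinities("+RG2,--RG3"))   # [('+', 'RG2'), ('--', 'RG3')]
```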

Using RG_affinities is described in Chapter 2, Administering Data Service Resources, in Oracle Solaris Cluster Data Services Planning and Administration Guide.

Default

The empty string

Tunable

Any time

Sometimes a single-machine cluster is configured for prototyping purposes. If resource groups are configured to run on multiple nodes of such a cluster, RG_affinities are interpreted at the node level rather than at the machine level. For example, a strong positive affinity requires that both resource groups run on the same node, not just on the same machine. Note that all nodes of a single-machine cluster are zones on the same machine.

RG_dependencies (string_array)

Optional list of resource groups that indicate a preferred ordering for bringing other groups online or offline on the same node or zone. The graph of all strong RG_affinities (positive and negative) together with RG_dependencies is not allowed to contain cycles.

For example, suppose that resource group RG2 is listed in the RG_dependencies list of resource group RG1. In other words, suppose that RG1 has a resource group dependency on RG2. The following list summarizes the effects of this resource group dependency:

  • When a node or zone joins the cluster, Boot methods on that node or zone are not run on resources in RG1 until all Boot methods on that node or zone have completed on resources in RG2.

  • If RG1 and RG2 are both in the Pending_online state on the same node or zone at the same time, the start methods (Prenet_start or Start) are not run on any resources in RG1 until all the resources in RG2 have completed their start methods.

  • If RG1 and RG2 are both in the Pending_offline state on the same node or zone at the same time, the stop methods (Stop or Postnet_stop) are not run on any resources in RG2 until all the resources in RG1 have completed their stop methods.

  • An attempt to switch the primaries of RG1 or RG2 fails if switching the primaries would leave RG1 online on any node or zone and RG2 offline on all nodes or zones.

  • Setting the Desired_primaries property to a value that is greater than zero on RG1 is not permitted if Desired_primaries is set to zero on RG2.

  • Setting the Auto_start_on_new_cluster property to TRUE on RG1 is not permitted if Auto_start_on_new_cluster is set to FALSE on RG2.

Default

The empty list

Tunable

Any time

RG_description (string)

A brief description of the resource group.

Default

The empty string

Tunable

Any time

RG_is_frozen (boolean)

A Boolean value that indicates whether a global device on which a resource group depends is being switched over. If this property is set to TRUE, the global device is being switched over. If this property is set to FALSE, no global device is being switched over. A resource group depends on global devices as indicated by its Global_resources_used property.

You do not set the RG_is_frozen property directly. The RGM updates the RG_is_frozen property when the status of the global devices changes.

Default

No default

Tunable

Never

RG_mode (enum)

Indicates whether the resource group is a failover or a scalable group. If the value is Failover, the RGM sets the Maximum_primaries property of the group to 1 and restricts the resource group to being mastered by a single node or zone.

If the value of this property is Scalable, the RGM allows the Maximum_primaries property to be set to a value that is greater than 1. As a result, the group can be mastered by multiple nodes or zones simultaneously. The RGM does not allow a resource whose Failover property is TRUE to be added to a resource group whose RG_mode is Scalable.

If Maximum_primaries is 1, the default is Failover. If Maximum_primaries is greater than 1, the default is Scalable.

Default

Depends on the value of Maximum_primaries

Tunable

At creation

RG_project_name (string)

The Solaris project name (see projects(1)) that is associated with the resource group. Use this property to apply Solaris resource management features, such as CPU shares and resource pools, to cluster data services. When the RGM brings resource groups online, it launches the related processes under this project name for resources that do not have the Resource_project_name property set (see r_properties(5)). The specified project name must exist in the projects database (see projects(1) and System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones).


Note - Changes to this property take effect the next time that the resource is started.


Default

The text string “default”

Tunable

Any time

Valid value

Any valid Solaris project name

RG_SLM_CPU_SHARES (integer)

The number of CPU shares associated with the resource group.


Note - You can only set the RG_SLM_CPU_SHARES property if RG_SLM_TYPE is set to automated. For more information, see the RG_SLM_TYPE property.


The maximum value for RG_SLM_CPU_SHARES is 65535. Zero is not an acceptable value for RG_SLM_CPU_SHARES because setting a share value to zero can lead to processes not being scheduled when the CPU is heavily loaded. Changes made to RG_SLM_CPU_SHARES while the resource group is online are taken into account dynamically.

Because RG_SLM_TYPE is set to automated, Oracle Solaris Cluster creates a project(4) named SCSLM_resourcegroup-name, where resourcegroup-name is the name you give to the resource group. Each method of a resource that belongs to the resource group is executed in this project. In the Solaris 10 release, these projects are created in the resource group's zone, whether it is a global zone or a non-global zone.

The project SCSLM_resourcegroup-name has a project.cpu-shares value set to the RG_SLM_CPU_SHARES value. If the RG_SLM_CPU_SHARES property is not set, this project is created with a project.cpu-shares value of 1.

In the Solaris 10 release, when the RG_SLM_PSET_TYPE property is set to strong or weak, the value of RG_SLM_CPU_SHARES property is also used to compute the size of pset created (by convention, 100 shares are equivalent to one CPU). For more information, see the RG_SLM_PSET_TYPE property.

For information about processor sets, see System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones.

Default

1

Tunable

Any time

RG_SLM_PSET_MIN (integer)

The minimum number of processors in the processor set in which the resource group executes. You can only use this property if the following are true:

  • The operating system used is Solaris 10.

  • RG_SLM_TYPE is set to automated.

  • RG_SLM_PSET_TYPE is set to strong or weak. (See the RG_SLM_PSET_TYPE property.)

  • The value of RG_SLM_PSET_MIN is less than or equal to the value of RG_SLM_CPU_SHARES divided by 100.

The maximum value for RG_SLM_PSET_MIN is 655. The value of the RG_SLM_PSET_MIN property is used by Oracle Solaris Cluster to compute the minimum size of processor sets.

Changes made to RG_SLM_CPU_SHARES and RG_SLM_PSET_MIN while the resource group is online are taken into account dynamically. However, if RG_SLM_PSET_TYPE is set to strong, and if there are not enough CPUs available to accommodate the change, the change requested for RG_SLM_PSET_MIN is not applied. In this case, a warning message is displayed. On next switchover, errors due to lack of CPUs can occur if there are not enough CPUs available to respect the values you configured for RG_SLM_PSET_MIN.

For information about processor sets, see System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones.

Default

0

Tunable

Any time

RG_SLM_PSET_TYPE (string)

Enables the creation of a dedicated processor set.

Possible values for RG_SLM_PSET_TYPE are default, strong, and weak.

You can set RG_SLM_PSET_TYPE to strong or weak if all of the following criteria are true:

  • The operating system used is Solaris 10.

  • The resource group is configured to execute only in a non-global zone.

  • RG_SLM_TYPE is set to automated.

For a resource group to execute as strong or weak, the resource group must be configured so there are only non-global zones in its node list.

The non-global zone must not be configured for a pool other than the default pool (pool_default). For information about zone configuration, see zonecfg(1M). This non-global zone must not be dynamically bound to a pool other than the default pool. For more information on pool binding, see poolbind(1M). These two pool conditions are verified only when the methods of the resources in the resource group are started.

The values strong and weak are mutually exclusive for resource groups that have the same zone in their node list. You cannot configure resource groups in the same zone so that some have RG_SLM_PSET_TYPE set to strong and others set to weak.

If RG_SLM_PSET_TYPE is set to strong or weak and the actions listed for RG_SLM_TYPE are set to automated, when the resource group is brought online, Oracle Solaris Cluster does the following:

  • Creates a pool and dynamically binds this pool to the non-global zone in which the resource group starts.

  • Creates a processor set with a size between a minimum and maximum value.

    • The minimum value is the sum of RG_SLM_PSET_MIN values of all the resource groups online in the zone this resource group starts in, or 1 if that sum equals zero.

    • The maximum value is the sum of RG_SLM_CPU_SHARES values of all resource groups online in that zone, divided by 100 and rounded up to the next integer, or 1 if the result of the computation is zero.

  • Associates the processor set to the pool.

  • Sets zone.cpu-shares to the sum of RG_SLM_CPU_SHARES in all of the resource groups running in the zone.
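
The sizing rule for the processor set can be sketched as follows. This is a hypothetical illustration of the bounds described above (by convention, 100 shares correspond to one CPU); the actual sizing is performed internally by Oracle Solaris Cluster.

```python
# Hypothetical sketch of the processor-set size bounds for the resource
# groups online in a zone, from their RG_SLM_PSET_MIN and RG_SLM_CPU_SHARES.

import math

def pset_size_bounds(pset_mins, cpu_shares):
    minimum = sum(pset_mins) or 1                      # 1 if the sum is zero
    maximum = math.ceil(sum(cpu_shares) / 100) or 1    # shares/100, rounded up
    return minimum, maximum

# Two online groups with PSET_MIN values 1 and 0 and CPU_SHARES 150 and 30.
print(pset_size_bounds([1, 0], [150, 30]))   # (1, 2)
```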

If RG_SLM_PSET_TYPE is set to strong or weak, when the resource group is brought offline (more precisely, when the Stop or Postnet_stop method of the resource group's first resource is executed), Oracle Solaris Cluster destroys the processor set if there are no longer any resource groups online in the zone, destroys the pool, and binds the zone to the default pool (pool_default).

If RG_SLM_PSET_TYPE is set to weak, the resource group behaves the same as if RG_SLM_PSET_TYPE was set to strong. However, if there are not enough processors available to create the processor set, the pool is associated with the default processor set.

If RG_SLM_PSET_TYPE is set to strong and there are not enough processors available to create the processor set, an error is returned to the Resource Group Monitor (RGM), and the resource group is not started on that node or zone.

The order of priority for CPU allocation is as follows: the defaultpsetmin minimum size has priority over strong, which has priority over weak. (For information about the defaultpsetmin property, see clnode(1CL).) However, this priority is not maintained when you try to increase the size of the default processor set by using the clnode command and there are not enough processors available.

If you assign a minimum number of CPUs to the default processor set by using the clnode command, the operation is done dynamically. If the number of CPUs that you specify is not available, Oracle Solaris Cluster periodically retries to assign this number of CPUs, and subsequently smaller numbers of CPUs, to the default processor set until the minimum number of CPUs has been assigned. This action might destroy some weak processor sets, but does not destroy strong processor sets.

When a resource group with RG_SLM_PSET_TYPE configured as strong starts, it might destroy the weak processor sets if there are not enough CPUs available on the node for both processor sets. In that case, the processes of the resource groups that were running in the weak processor sets are associated with the default processor set.

To change RG_SLM_PSET_TYPE from weak to strong or from strong to weak, you must first set the property to default.

If you set RG_SLM_PSET_TYPE to default, Oracle Solaris Cluster creates a pool, SCSLM_pool_zone-name, but does not create a processor set. In this case, SCSLM_pool_zone-name is associated with the default processor set. The shares that are assigned to the zone are determined by the sum of the values that are set for RG_SLM_CPU_SHARES for all of the resource groups that are running in the zone.

If there are no longer any online resource groups configured for CPU control in a non-global zone, the CPU share value for the non-global zone takes the value of zone.cpu-shares found in the zone configuration. This parameter has a value of 1 by default. For more information about zone configuration, see zonecfg(1M).

For information about resource pools and processor sets, see System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones.

Default

The text string “default”

Tunable

Any time

RG_SLM_TYPE (string)

Enables you to control system resource usage, and automates some steps to configure the Oracle Solaris OS for system resource management. Possible values for RG_SLM_TYPE are automated and manual.

If RG_SLM_TYPE is set to automated, when the resource group is brought online, Oracle Solaris Cluster does the following:

  • Creates a project named SCSLM_resourcegroup-name. All methods in the resources in this resource group execute in this project. This project is created the first time a method of a resource in this resource group is executed on the node or zone.

  • Sets the value of project.cpu-shares that is associated with the project to the value of RG_SLM_CPU_SHARES. The value of project.cpu-shares is 1 by default.

  • In the Solaris 10 release, sets zone.cpu-shares to the sum of RG_SLM_CPU_SHARES of all the resource groups with RG_SLM_TYPE set to automated for the zone. The zone can be global or non-global. The non-global zone is bound to an Oracle Solaris Cluster generated pool. Optionally, this Oracle Solaris Cluster generated pool is associated with an Oracle Solaris Cluster generated dedicated processor set if RG_SLM_PSET_TYPE is set to weak or strong. For information about dedicated processor sets, see the RG_SLM_PSET_TYPE property.

When RG_SLM_TYPE is set to automated, any action taken results in a message being logged.

If RG_SLM_TYPE is set to manual, the resource group executes in the project specified by the RG_project_name property.

For information about resource pools and processor sets, see System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones.


Note -

  • Do not specify resource group names that exceed 58 characters. If a resource group name contains more than 58 characters, you cannot configure CPU control, that is, you cannot set the RG_SLM_TYPE property to automated.

  • Refrain from including dashes (-) in resource group names. The Oracle Solaris Cluster software replaces all dashes in resource group names with underscores (_) when it creates a project. For example, Oracle Solaris Cluster creates the project named SCSLM_rg_dev for a resource group named rg-dev. If a resource group named rg_dev already exists, a conflict arises when Oracle Solaris Cluster attempts to create the project for the resource group rg-dev.
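
The naming rules in this note can be sketched as follows. This is a hypothetical illustration of the derivation described above, not the cluster software's own code; the function name is invented.

```python
# Hypothetical sketch of the project-name derivation used when
# RG_SLM_TYPE=automated: 'SCSLM_' prefix, dashes replaced by underscores,
# and names longer than 58 characters rejected for CPU control.

def slm_project_name(rg_name):
    if len(rg_name) > 58:
        raise ValueError("resource group name too long for RG_SLM_TYPE=automated")
    return "SCSLM_" + rg_name.replace("-", "_")

# 'rg-dev' maps to the same project as 'rg_dev', which is why both
# names existing at once causes a conflict.
print(slm_project_name("rg-dev"))   # SCSLM_rg_dev
```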


Default

manual

Tunable

Any time

RG_state on each cluster node or zone (enum)

Set by the RGM to Unmanaged, Online, Offline, Pending_online, Pending_offline, Error_stop_failed, Online_faulted, or Pending_online_blocked to describe the state of the resource group on each cluster node or zone.

You cannot configure this property. However, you can indirectly set this property by using clresourcegroup(1CL) or by using the equivalent Oracle Solaris Cluster graphical user interface command. A resource group can exist in an Unmanaged state when that group is not under the control of the RGM.

The following descriptions summarize each state.


Note - States apply to individual nodes or zones only, except the Unmanaged state, which applies across all nodes or zones. For example, a resource group might be Offline on node A, but Pending_online on node B.


Error_stop_failed

One or more resources within the resource group failed to stop successfully and are in the Stop_failed resource state. Other resources in the group might remain online or offline. This resource group is not permitted to start on any node or zone until the Error_stop_failed state is cleared.

You must use an administrative command, such as clresourcegroup clear, to manually kill the Stop_failed resource and reset its state to Offline.

Offline

The resource group has been stopped on the node or zone. In other words, the stop methods (Monitor_stop, Stop, and Postnet_stop, as applicable to each resource) have executed successfully on all enabled resources in the group. This state also applies before a resource group has started for the first time on the node or zone.

Online

The resource group has been started on the node or zone. In other words, the start methods (Prenet_start, Start, and Monitor_start, as applicable to each resource) have executed successfully on all enabled resources in the group.

Online_faulted

The resource group was Pending_online and has finished starting on this node or zone. However, one or more resources ended up in the Start_failed resource state or with Faulted status.

Pending_offline

The resource group is stopping on the node or zone. The stop methods (Monitor_stop, Stop, and Postnet_stop, as applicable to each resource) are being executed on enabled resources in the group.

Pending_online

The resource group is starting on the node or zone. The start methods (Prenet_start, Start, and Monitor_start, as applicable to each resource) are being executed on enabled resources in the group.

Pending_online_blocked

The resource group failed to start fully because one or more resources within that resource group have an unsatisfied strong resource dependency on a resource in a different resource group. Such resources remain Offline. When the resource dependencies are satisfied, the resource group automatically moves back to the Pending_online state.

Unmanaged

The initial state of a newly created resource group, or the state of a previously managed resource group. Either Init methods have not yet been run on resources in the group, or Fini methods have been run on resources in the group.

The group is not managed by the RGM.

Default

No default

Tunable

Never

RG_system (boolean)

If the RG_system property is TRUE for a resource group, particular operations are restricted for the resource group and for the resources that the resource group contains. This restriction is intended to help prevent accidental modification or deletion of critical resource groups and resources. Only the clresource(1CL) and clresourcegroup(1CL) commands are affected by this property. Operations for scha_control(1HA) and scha_control(3HA) are not affected.

Before performing a restricted operation on a resource group (or a resource group's resources), you must first set the RG_system property of the resource group to FALSE. Use care when you modify or delete a resource group that supports cluster services, or when you modify or delete the resources that such a resource group contains.

The following table shows the operations that are restricted for a resource group when RG_system is set to TRUE.

  • Delete a resource group:

    clresourcegroup delete RG1

  • Edit a resource group property (except for RG_system):

    clresourcegroup set -p RG_description=... +

  • Add a resource to a resource group:

    clresource create -g RG1 -t SUNW.nfs R1

    The resource is created in the enabled state and with resource monitoring turned on.

  • Delete a resource from a resource group:

    clresource delete R1

  • Edit a property of a resource that belongs to a resource group:

    clresource set -g RG1 -t SUNW.nfs -p r_description="HA-NFS res" R1

  • Switch a resource group offline:

    clresourcegroup offline RG1

  • Manage a resource group:

    clresourcegroup manage RG1

  • Unmanage a resource group:

    clresourcegroup unmanage RG1

  • Enable a resource:

    clresource enable R1

  • Enable monitoring for a resource:

    clresource monitor R1

  • Disable a resource:

    clresource disable R1

  • Disable monitoring for a resource:

    clresource unmonitor R1

If the RG_system property is TRUE for a resource group, the only property of the resource group that you can edit is the RG_system property itself. In other words, editing the RG_system property is never restricted.

Default

FALSE

Tunable

Any time

Suspend_automatic_recovery (boolean)

A Boolean value that indicates whether the automatic recovery of a resource group is suspended. A suspended resource group is not automatically restarted or failed over until the cluster administrator explicitly issues the command that resumes automatic recovery. Whether online or offline, suspended data services remain in their current state.

While the resource group is suspended, you can manually switch the resource group or its resources to a different state on specific nodes or zones by using the clresourcegroup(1CL) or clresource(1CL) commands with subcommands such as switch, online, offline, disable, or enable. Rather than operating on the resource directly, such as killing the application processes or running application-specific commands, use the clresourcegroup(1CL) or clresource(1CL) commands. This approach allows the cluster framework to maintain an accurate picture of the current status of the resources and resource groups, so that availability can be properly restored when the resume subcommand is executed.

If the Suspend_automatic_recovery property is set to TRUE, automatic recovery of the resource group is suspended. If this property is set to FALSE, automatic recovery of the resource group is resumed and active.

The cluster administrator does not set this property directly. The RGM changes the value of the Suspend_automatic_recovery property when the cluster administrator suspends or resumes automatic recovery of the resource group. The cluster administrator suspends automatic recovery with the clresourcegroup suspend command. The cluster administrator resumes automatic recovery with the clresourcegroup resume command. The resource group can be suspended or resumed regardless of the setting of its RG_system property.

Default

FALSE

Tunable

Never

See Also

projects(1), clnode(1CL), clresource(1CL), clresourcegroup(1CL), scha_control(1HA), poolbind(1M), zonecfg(1M), scha_control(3HA), project(4), property_attributes(5), r_properties(5), rt_properties(5), scha_resourcegroup_get(1HA), and scha_resourcegroup_get(3HA).

Oracle Solaris Cluster Concepts Guide, Chapter 2, Administering Data Service Resources, in Oracle Solaris Cluster Data Services Planning and Administration Guide, System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones