Sun Cluster System Administration Guide for Solaris OS

Chapter 9 Configuring Control of CPU Usage

If you want to control CPU usage, configure the CPU control facility. For more information about configuring the CPU control facility, see the rg_properties(5) man page. This chapter provides information about the following topics:

  * Introduction to CPU Control
  * Choosing a Scenario
  * Fair Share Scheduler
  * Configuring CPU Control

Introduction to CPU Control

Sun Cluster software enables you to control CPU usage. The configuration choices that you can make on the Solaris 9 OS differ from the choices that you can make on the Solaris 10 OS.

The CPU control facility builds on the functionality available in the Solaris OS. For information about zones, projects, resource pools, processor sets, and scheduling classes, see System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

SPARC: On the Solaris 9 OS, you can assign CPU shares to resource groups.

On the Solaris 10 OS, you can do the following:

  * Assign CPU shares to resource groups and to zones.
  * Control the minimum number of CPUs that are available in the default processor set.
  * Dedicate a processor set to a resource group, and control the minimum and maximum number of processors in that set.


Note –

All procedures in this chapter are for use on the Solaris 10 OS unless labeled as specific to the Solaris 9 OS.


Choosing a Scenario

Depending on the configuration choices that you make and the version of the operating system that you use, you can have different levels of CPU control. All aspects of CPU control described in this chapter depend on the resource group property RG_SLM_TYPE being set to automated.

Table 9–1 provides a description of the different configuration scenarios available.

Table 9–1 CPU Control Scenarios

Description: SPARC: Resource group runs on the Solaris 9 OS. Assign CPU shares to a resource group, providing a value for project.cpu-shares.

Instructions: SPARC: How to Control CPU Usage on the Solaris 9 OS

Description: Resource group runs in the global-cluster voting node on the Solaris 10 OS. Assign CPU shares to resource groups and zones, providing values for project.cpu-shares and zone.cpu-shares. You can perform this procedure whether or not global-cluster non-voting nodes are configured.

Instructions: How to Control CPU Usage in the Voting Node on a Global Cluster

Description: Resource group runs in a global-cluster non-voting node that uses the default processor set. Assign CPU shares to resource groups and zones, providing values for project.cpu-shares and zone.cpu-shares. Perform this procedure if you do not need to control the size of the processor set.

Instructions: How to Control CPU Usage in a Global-Cluster Non-Voting Node With the Default Processor Set

Description: Resource group runs in a global-cluster non-voting node with a dedicated processor set. Assign CPU shares to resource groups, providing values for project.cpu-shares, zone.cpu-shares, and the maximum number of processors in the dedicated processor set. Also set the minimum number of processors in the dedicated processor set. Perform this procedure if you want to control both CPU shares and the size of a processor set. You can exercise this control only in a global-cluster non-voting node by using a dedicated processor set.

Instructions: How to Control CPU Usage in a Global-Cluster Non-Voting Node With a Dedicated Processor Set

Fair Share Scheduler

The first step in the procedures to assign CPU shares to resource groups is to set the system scheduler to be the fair share scheduler (FSS). By default, the scheduling class for the Solaris OS is timesharing (TS). You must set the scheduler to FSS for the shares configuration to take effect.

You can create a dedicated processor set regardless of the scheduler class you choose.
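
For example, to check which scheduling class is currently the system default, run dispadmin with the -d option and no class argument. If a default class has been set, the command prints it; the output shown here is illustrative:

    # dispadmin -d
    FSS	(Fair Share)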

Configuring CPU Control

This section includes the following procedures:

Procedure: SPARC: How to Control CPU Usage on the Solaris 9 OS

Perform this procedure to assign CPU shares to a resource group on a cluster running the Solaris 9 OS.

If a resource group is assigned CPU shares, Sun Cluster software performs the following tasks when it starts a resource of the resource group:

  * Creates a project named SCSLM_resource_group_name, if this project does not already exist.
  * Assigns the value of the RG_SLM_CPU_SHARES property to the project.cpu-shares value of this project.
  * Starts the processes of the resource in this project.

For more information about configuring the CPU control facility, see the rg_properties(5) man page.

  1. Set the scheduler for the system to be the fair share scheduler (FSS).


    # dispadmin -d FSS
    

    FSS becomes the default scheduler on next reboot. To make this configuration take effect immediately, use the priocntl command.


    # priocntl -s -c FSS
    

    Using the combination of the priocntl and dispadmin commands ensures that FSS becomes the default scheduler immediately and remains so after reboot. For more information about setting a scheduling class, see the dispadmin(1M) and priocntl(1) man pages.


    Note –

    If the FSS is not the default scheduler, your CPU shares assignment will not take effect.


  2. Configure the CPU control facility.


    # clresourcegroup create -p RG_SLM_TYPE=automated \
     [-p RG_SLM_CPU_SHARES=value] resource_group_name
    
    -p RG_SLM_TYPE=automated

    Enables you to control CPU usage and automates some steps to configure the Solaris OS for system resource management.

    -p RG_SLM_CPU_SHARES=value

    Specifies the number of CPU shares that are assigned to the resource group-specific project, project.cpu-shares.

    resource_group_name

    Specifies the name of the resource group.

    This step creates a resource group. Alternatively, you can use the clresourcegroup set command to modify an existing resource group, as in the sketch that follows.
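
    For example, if the resource group already exists, a hedged equivalent that uses clresourcegroup set might look like the following, where rg-app is a hypothetical resource group name and 20 is a sample share value:


    # clresourcegroup set -p RG_SLM_TYPE=automated \
     -p RG_SLM_CPU_SHARES=20 rg-app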

  3. Activate the configuration change.


    # clresourcegroup online -M resource_group_name
    
    resource_group_name

    Specifies the name of the resource group.


    Note –

    Do not remove or modify the SCSLM_resource_group_name project. You can add more resource controls manually to the project, for example, by configuring the project.max-lwps property. For more information, see the projmod(1M) man page.
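
    As an illustrative sketch, the following command adds an LWP cap to the project for a hypothetical resource group named rg_app:


    # projmod -s -K "project.max-lwps=(privileged,1000,deny)" SCSLM_rg_app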


Procedure: How to Control CPU Usage in the Voting Node on a Global Cluster

Perform this procedure to assign CPU shares to a resource group that will be executed in a global-cluster voting node.

If a resource group is assigned CPU shares, Sun Cluster software performs the following tasks when it starts a resource of the resource group in a global-cluster voting node:

  * Creates a project named SCSLM_resource_group_name, if this project does not already exist.
  * Uses the value of the RG_SLM_CPU_SHARES property to set the project.cpu-shares value of this project and to determine the zone.cpu-shares value of the voting node.
  * Starts the processes of the resource in this project.

For more information about configuring the CPU control facility, see the rg_properties(5) man page.

  1. Set the default scheduler for the system to be the fair share scheduler (FSS).


    # dispadmin -d FSS
    

    FSS becomes the default scheduler on next reboot. To make this configuration take effect immediately, use the priocntl command.


    # priocntl -s -c FSS
    

    Using the combination of the priocntl and dispadmin commands ensures that FSS becomes the default scheduler immediately and remains so after reboot. For more information about setting a scheduling class, see the dispadmin(1M) and priocntl(1) man pages.


    Note –

    If the FSS is not the default scheduler, your CPU shares assignment will not take effect.


  2. On each node that is to use CPU control, configure the number of shares for the global-cluster voting node and the minimum number of CPUs that are available in the default processor set.

    Setting these parameters helps protect processes running in the voting nodes from competing for CPUs with processes running in non-voting nodes. If you do not assign a value to the globalzoneshares and defaultpsetmin properties, these properties take their default values.


    # clnode set [-p globalzoneshares=integer] \
    [-p defaultpsetmin=integer] \
    node
    
    -p defaultpsetmin=integer

    Sets the minimum number of CPUs that are available in the default processor set. The default value is 1.

    -p globalzoneshares=integer

    Sets the number of shares assigned to the voting node. The default value is 1.

    node

    Specifies nodes on which properties are to be set.

    In setting these properties, you are setting properties for the voting node. If you do not set these properties, you cannot benefit from the RG_SLM_PSET_TYPE property in non-voting nodes.
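
    As a hedged example, the following command assigns five shares to the voting node and reserves at least two CPUs for the default processor set, where phys-schost-1 is a hypothetical node name:


    # clnode set -p globalzoneshares=5 \
    -p defaultpsetmin=2 \
    phys-schost-1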

  3. Verify that you correctly set these properties.


    # clnode show node
    

    For the node that you specify, the clnode command prints the properties that are set and their values. If you do not set the CPU control properties with clnode, they take their default values.

  4. Configure the CPU control facility.


    # clresourcegroup create -p RG_SLM_TYPE=automated \
     [-p RG_SLM_CPU_SHARES=value] resource_group_name
    
    -p RG_SLM_TYPE=automated

    Enables you to control CPU usage and automates some steps to configure the Solaris OS for system resource management.

    -p RG_SLM_CPU_SHARES=value

    Specifies the number of CPU shares that are assigned to the resource group-specific project, project.cpu-shares, and determines the number of CPU shares that are assigned to the voting node, zone.cpu-shares.

    resource_group_name

    Specifies the name of the resource group.

    In this procedure, you do not set the RG_SLM_PSET_TYPE property. In the voting node, this property takes the value default.

    This step creates a resource group. Alternatively, you can use the clresourcegroup set command to modify an existing resource group.
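
    As a hedged illustration, the following command creates a resource group named rg-web (a hypothetical name) with 10 CPU shares:


    # clresourcegroup create -p RG_SLM_TYPE=automated \
     -p RG_SLM_CPU_SHARES=10 rg-web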

  5. Activate the configuration change.


    # clresourcegroup online -M resource_group_name
    
    resource_group_name

    Specifies the name of the resource group.


    Note –

    Do not remove or modify the SCSLM_resource_group_name project. You can add more resource controls manually to the project, for example, by configuring the project.max-lwps property. For more information, see the projmod(1M) man page.


Procedure: How to Control CPU Usage in a Global-Cluster Non-Voting Node With the Default Processor Set

Perform this procedure if you want to assign CPU shares for resource groups in a global-cluster non-voting node, but do not need to create a dedicated processor set.

If a resource group is assigned CPU shares, Sun Cluster software performs the following tasks when it starts a resource of that resource group in a non-voting node:

  * Creates a pool named SCSLM_pool_zone_name, if this pool does not already exist, and associates the pool with the default processor set.
  * Binds the non-voting node to the SCSLM_pool_zone_name pool.
  * Creates a project named SCSLM_resource_group_name, if this project does not already exist.
  * Uses the value of the RG_SLM_CPU_SHARES property to set the project.cpu-shares value of this project and to determine the zone.cpu-shares value of the non-voting node.
  * Starts the processes of the resource in this project.

For more information about configuring the CPU control facility, see the rg_properties(5) man page.

  1. Set the default scheduler for the system to be the fair share scheduler (FSS).


    # dispadmin -d FSS
    

    FSS becomes the default scheduler on next reboot. To make this configuration take effect immediately, use the priocntl command:


    # priocntl -s -c FSS
    

    Using the combination of the priocntl and dispadmin commands ensures that FSS becomes the default scheduler immediately and remains so after reboot. For more information about setting a scheduling class, see the dispadmin(1M) and priocntl(1) man pages.


    Note –

    If the FSS is not the default scheduler, your CPU shares assignment will not take effect.


  2. On each node that is to use CPU control, configure the number of shares for the global-cluster voting node and the minimum number of CPUs that are available in the default processor set.

    Setting these parameters helps protect processes running in the voting node from competing for CPUs with processes running in global-cluster non-voting nodes. If you do not assign a value to the globalzoneshares and defaultpsetmin properties, these properties take their default values.


    # clnode set [-p globalzoneshares=integer] \
    [-p defaultpsetmin=integer] \
    node
    
    -p globalzoneshares=integer

    Sets the number of shares assigned to the voting node. The default value is 1.

    -p defaultpsetmin=integer

    Sets the minimum number of CPUs available in the default processor set. The default value is 1.

    node

    Identifies nodes on which properties are to be set.

    In setting these properties, you are setting properties for the voting node.

  3. Verify that you correctly set these properties:


    # clnode show node
    

    For the node that you specify, the clnode command prints the properties that are set and their values. If you do not set the CPU control properties with clnode, they take their default values.

  4. Configure the CPU control facility.


    # clresourcegroup create -p RG_SLM_TYPE=automated \
     [-p RG_SLM_CPU_SHARES=value] resource_group_name
    
    -p RG_SLM_TYPE=automated

    Enables you to control CPU usage and automates some steps to configure the Solaris OS for system resource management.

    -p RG_SLM_CPU_SHARES=value

    Specifies the number of CPU shares assigned to the resource group-specific project (project.cpu-shares) and determines the number of CPU shares assigned to the global-cluster non-voting node (zone.cpu-shares).

    resource_group_name

    Specifies the name of the resource group.

    This step creates a resource group. Alternatively, you can use the clresourcegroup set command to modify an existing resource group.

    You cannot set RG_SLM_TYPE to automated in a non-voting node if a pool other than the default pool is in the zone configuration or if the zone is dynamically bound to a pool other than the default pool. See the zonecfg(1M) and poolbind(1M) man pages for information about zone configuration and pool binding respectively. View your zone configuration as follows:


    # zonecfg -z zone_name info pool
    

    Note –

    A resource, such as an HAStoragePlus or a LogicalHostname resource, that is configured to start in a non-voting node but that has the GLOBAL_ZONE property set to TRUE is started in the voting node. Even if you set the RG_SLM_TYPE property to automated, such a resource does not benefit from the CPU shares configuration and is treated as a resource in a resource group whose RG_SLM_TYPE property is set to manual.


    In this procedure, you do not set the RG_SLM_PSET_TYPE property. Sun Cluster uses the default processor set.

  5. Activate the configuration change.


    # clresourcegroup online -M resource_group_name
    
    resource_group_name

    Specifies the name of the resource group.

    If you set RG_SLM_PSET_TYPE to default, Sun Cluster creates a pool, SCSLM_pool_zone_name, but does not create a processor set. In this case, SCSLM_pool_zone_name is associated with the default processor set.

    If online resource groups are no longer configured for CPU control in a non-voting node, the CPU share value for the non-voting node takes the value of zone.cpu-shares in the zone configuration. This parameter has a value of 1 by default. For more information about zone configuration, see the zonecfg(1M) man page.
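
    As a hedged sketch, you could set zone.cpu-shares explicitly in the configuration of a non-voting node named myzone (a hypothetical name) as follows:


    # zonecfg -z myzone
    zonecfg:myzone> add rctl
    zonecfg:myzone:rctl> set name=zone.cpu-shares
    zonecfg:myzone:rctl> add value (priv=privileged,limit=10,action=none)
    zonecfg:myzone:rctl> end
    zonecfg:myzone> commit
    zonecfg:myzone> exit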


    Note –

    Do not remove or modify the SCSLM_resource_group_name project. You can add more resource controls manually to the project, for example, by configuring the project.max-lwps property. For more information, see the projmod(1M) man page.


Procedure: How to Control CPU Usage in a Global-Cluster Non-Voting Node With a Dedicated Processor Set

Perform this procedure if you want your resource group to execute in a dedicated processor set.

If a resource group is configured to execute in a dedicated processor set, Sun Cluster software performs the following tasks when it starts a resource of the resource group in a global-cluster non-voting node:

  * Creates a pool named SCSLM_pool_zone_name, if this pool does not already exist.
  * Creates a dedicated processor set whose minimum size is determined by the RG_SLM_PSET_MIN property and whose maximum size is determined by the RG_SLM_CPU_SHARES property.
  * Associates the SCSLM_pool_zone_name pool with the processor set and binds the non-voting node to the pool.
  * Creates a project named SCSLM_resource_group_name, if this project does not already exist, and uses the value of the RG_SLM_CPU_SHARES property to set the project.cpu-shares value of this project and to determine the zone.cpu-shares value of the non-voting node.
  * Starts the processes of the resource in this project.

  1. Set the scheduler for the system to be the fair share scheduler (FSS).


    # dispadmin -d FSS
    

    FSS becomes the default scheduler on next reboot. To make this configuration take effect immediately, use the priocntl command.


    # priocntl -s -c FSS
    

    Using the combination of the priocntl and dispadmin commands ensures that FSS becomes the default scheduler immediately and remains so after reboot. For more information about setting a scheduling class, see the dispadmin(1M) and priocntl(1) man pages.


    Note –

    If the FSS is not the default scheduler, your CPU shares assignment will not take effect.


  2. On each node that is to use CPU control, configure the number of shares for the global-cluster voting node and the minimum number of CPUs that are available in the default processor set.

    Setting these parameters helps protect processes running in the voting node from competing for CPUs with processes running in non-voting nodes. If you do not assign a value to the globalzoneshares and defaultpsetmin properties, these properties take their default values.


    # clnode set  [-p globalzoneshares=integer] \
    [-p defaultpsetmin=integer] \
    node
    
    -p defaultpsetmin=integer

    Sets the minimum number of CPUs available in the default processor set. The default is 1.

    -p globalzoneshares=integer

    Sets the number of shares assigned to the voting node. The default is 1.

    node

    Identifies nodes on which properties are to be set.

    In setting these properties, you are setting properties for the voting node.

  3. Verify that you correctly set these properties:


    # clnode show node
    

    For the node that you specify, the clnode command prints the properties that are set and their values. If you do not set the CPU control properties with clnode, they take their default values.

  4. Configure the CPU control facility.


    # clresourcegroup create -p RG_SLM_TYPE=automated \
     [-p RG_SLM_CPU_SHARES=value] \
     -p RG_SLM_PSET_TYPE=value \
     [-p RG_SLM_PSET_MIN=value] resource_group_name
    
    -p RG_SLM_TYPE=automated

    Enables you to control CPU usage and automates some steps to configure the Solaris OS for system resource management.

    -p RG_SLM_CPU_SHARES=value

    Specifies the number of CPU shares that are assigned to the resource group-specific project (project.cpu-shares). This value also determines the number of CPU shares that are assigned to the non-voting node (zone.cpu-shares) and the maximum number of processors in the dedicated processor set.

    -p RG_SLM_PSET_TYPE=value

    Enables the creation of a dedicated processor set. To have a dedicated processor set, you can set this property to strong or weak. The values strong and weak are mutually exclusive. That is, you cannot configure resource groups in the same zone so that some are strong and others weak.

    -p RG_SLM_PSET_MIN=value

    Determines the minimum number of processors in the processor set.

    resource_group_name

    Specifies the name of the resource group.

    This step creates a resource group. Alternatively, you can use the clresourcegroup set command to modify an existing resource group.
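
    As a hedged illustration, the following command creates a resource group named rg-db (a hypothetical name) that runs in a dedicated processor set of at least two processors:


    # clresourcegroup create -p RG_SLM_TYPE=automated \
     -p RG_SLM_CPU_SHARES=30 \
     -p RG_SLM_PSET_TYPE=strong \
     -p RG_SLM_PSET_MIN=2 rg-db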

    You cannot set RG_SLM_TYPE to automated in a non-voting node if a pool other than the default pool is in the zone configuration or if the zone is dynamically bound to a pool other than the default pool. See the zonecfg(1M) and poolbind(1M) man pages for information about zone configuration and pool binding respectively. View your zone configuration as follows:


    # zonecfg -z zone_name info pool
    

    Note –

    A resource, such as an HAStoragePlus or a LogicalHostname resource, that is configured to start in a non-voting node but that has the GLOBAL_ZONE property set to TRUE is started in the voting node. Even if you set the RG_SLM_TYPE property to automated, such a resource does not benefit from the CPU shares and dedicated processor set configuration and is treated as a resource in a resource group whose RG_SLM_TYPE property is set to manual.


  5. Activate the configuration change.


    # clresourcegroup online -M resource_group_name
    
    resource_group_name

    Specifies the name of the resource group.


    Note –

    Do not remove or modify the SCSLM_resource_group_name project. You can add more resource control manually to the project, for example by configuring the project.max-lwps property. For more information, see the projmod(1M) man page.


    Changes made to RG_SLM_CPU_SHARES and RG_SLM_PSET_MIN while the resource group is online are taken into account dynamically. However, if RG_SLM_PSET_TYPE is set to strong and not enough CPUs are available to accommodate the change, the change that is requested for RG_SLM_PSET_MIN is not applied. In this case, a warning message is displayed. On the next switchover, errors caused by insufficient CPUs can occur if not enough CPUs are available to satisfy the values that you configured for RG_SLM_PSET_MIN.
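
    For example, the following hedged command raises the share value of the online resource group rg-db (a hypothetical name) dynamically:


    # clresourcegroup set -p RG_SLM_CPU_SHARES=40 rg-db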

    If an online resource group is no longer configured for CPU control in the non-voting node, the CPU share value for the non-voting node takes the value of zone.cpu-shares. This parameter has a value of 1 by default.