Sun Cluster System Administration Guide for Solaris OS

How to Control CPU Usage in a Global-Cluster Non-Voting Node With a Dedicated Processor Set

Perform this procedure if you want your resource group to execute in a dedicated processor set.

If a resource group is configured to execute in a dedicated processor set, Sun Cluster software creates and manages that processor set when it starts a resource of the resource group in a global-cluster non-voting node. To configure a resource group to execute in a dedicated processor set, perform the following steps:

  1. Set the scheduler for the system to be the fair share scheduler (FSS).

    # dispadmin -d FSS

    FSS becomes the default scheduler on next reboot. To make this configuration take effect immediately, use the priocntl command.

    # priocntl -s -C FSS

    Using the combination of the priocntl and dispadmin commands ensures that FSS becomes the default scheduler immediately and remains so after reboot. For more information about setting a scheduling class, see the dispadmin(1M) and priocntl(1) man pages.

    Note –

    If FSS is not the default scheduler, your CPU shares assignment does not take effect.
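
    You can confirm the active default scheduling class by running dispadmin -d with no arguments. On most Solaris releases the command prints the current default class, for example:

    # dispadmin -d
    FSS	(Fair Share)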

  2. On each node that is to use CPU control, configure the number of shares for the global-cluster voting node and the minimum number of CPUs that are available in the default processor set.

    Setting these parameters helps protect processes running in the voting node from competing for CPUs with processes running in non-voting nodes. If you do not assign a value to the globalzoneshares and defaultpsetmin properties, these properties take their default values.

    # clnode set [-p globalzoneshares=integer] \
    [-p defaultpsetmin=integer] \
    node

    -p defaultpsetmin=integer

    Sets the minimum number of CPUs available in the default processor set. The default is 1.

    -p globalzoneshares=integer

    Sets the number of shares assigned to the voting node. The default is 1.


    node

    Identifies nodes on which properties are to be set.

    In setting these properties, you are setting properties for the voting node.
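
    For example, the following command assigns 10 shares to the voting node and reserves at least 2 CPUs for the default processor set. The node name phys-schost-1 and both values are illustrative only:

    # clnode set -p globalzoneshares=10 \
    -p defaultpsetmin=2 phys-schost-1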

  3. Verify that you correctly set these properties:

    # clnode show node

    For the node you specify, the clnode command prints the properties set and the values that are set for these properties. If you do not set the CPU control properties with clnode, they take the default value.
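
    For example, to display the properties that are set for a node named phys-schost-1 (an illustrative name), you would type:

    # clnode show phys-schost-1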

  4. Configure the CPU control facility.

    # clresourcegroup create -p RG_SLM_TYPE=automated \
     [-p RG_SLM_CPU_SHARES=value] \
    -p RG_SLM_PSET_TYPE=value \
    [-p RG_SLM_PSET_MIN=value] resource_group_name

    -p RG_SLM_TYPE=automated

    Enables you to control CPU usage and automates some steps to configure the Solaris OS for system resource management.

    -p RG_SLM_CPU_SHARES=value

    Specifies the number of CPU shares assigned to the resource group-specific project (project.cpu-shares) and determines the number of CPU shares assigned to the non-voting node (zone.cpu-shares) and the maximum number of processors in a processor set.

    -p RG_SLM_PSET_TYPE=value

    Enables the creation of a dedicated processor set. To have a dedicated processor set, you can set this property to strong or weak. The values strong and weak are mutually exclusive. That is, you cannot configure resource groups in the same zone so that some are strong and others weak.

    -p RG_SLM_PSET_MIN=value

    Determines the minimum number of processors in the processor set.


    resource_group_name

    Specifies the name of the resource group.

    This step creates a resource group. You can alternatively use the clresourcegroup set command to modify an existing resource group.

    You cannot set RG_SLM_TYPE to automated in a non-voting node if a pool other than the default pool is in the zone configuration or if the zone is dynamically bound to a pool other than the default pool. See the zonecfg(1M) and poolbind(1M) man pages for information about zone configuration and pool binding respectively. View your zone configuration as follows:

    # zonecfg -z zone_name info pool

    Note –

    A resource, such as an HAStoragePlus or a LogicalHostname resource, that is configured to start in a non-voting node but with the GLOBAL_ZONE property set to TRUE is started in the voting node. Even if you set the RG_SLM_TYPE property to automated, this resource does not benefit from the CPU shares and dedicated processor set configuration, and is treated as if it were in a resource group with RG_SLM_TYPE set to manual.
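
    For example, the following command creates a resource group with automated CPU control, 30 CPU shares, a strong dedicated processor set, and a minimum of 2 processors. The resource group name rg-dedicated and all values shown are illustrative only:

    # clresourcegroup create -p RG_SLM_TYPE=automated \
    -p RG_SLM_CPU_SHARES=30 \
    -p RG_SLM_PSET_TYPE=strong \
    -p RG_SLM_PSET_MIN=2 rg-dedicated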

  5. Activate the configuration change.

    # clresourcegroup online -M resource_group_name

    resource_group_name

    Specifies the name of the resource group.

    Note –

    Do not remove or modify the SCSLM_resource_group_name project. You can add more resource control manually to the project, for example by configuring the project.max-lwps property. For more information, see the projmod(1M) man page.

    Changes made to RG_SLM_CPU_SHARES and RG_SLM_PSET_MIN while the resource group is online are taken into account dynamically. However, if RG_SLM_PSET_TYPE is set to strong and not enough CPUs are available to accommodate the change, the change that is requested for RG_SLM_PSET_MIN is not applied and a warning message is displayed. On the next switchover, errors caused by insufficient CPUs can occur if not enough CPUs are available to satisfy the values that you configured for RG_SLM_PSET_MIN.
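
    For example, the following command changes the share assignment of an online resource group named rg-dedicated (an illustrative name). The change takes effect dynamically, subject to the CPU availability caveat described above:

    # clresourcegroup set -p RG_SLM_CPU_SHARES=20 rg-dedicated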

    If an online resource group is no longer configured for CPU control in the non-voting node, the CPU share value for the non-voting node takes the value of zone.cpu-shares. This parameter has a value of 1 by default.
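
    If you no longer want automated CPU control for a resource group, you can set its RG_SLM_TYPE property back to manual. The resource group name rg-dedicated is illustrative only:

    # clresourcegroup set -p RG_SLM_TYPE=manual rg-dedicated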