Installing and Administering N1 Grid Console - Container Manager 1.0

Resource Management

In a general sense, a resource represents a process-bindable OS entity. More often, the term refers to the objects constructed by a kernel subsystem that offers some form of partitioning. A resource can also be considered an aspect of the computing system that can be manipulated with the intention of affecting application behavior. Examples of resources include physical memory, CPUs, and network bandwidth.

Container Manager works with resource management utilities in both the Solaris 8 and Solaris 9 Operating Systems. In the Solaris 8 release, resource management is provided by Solaris Resource Manager 1.3. Every service is represented by an lnode, or limit node. The lnode is used to record resource allocation policies and accrued resource usage data. lnodes correspond to UNIX user IDs (UIDs). The UID can represent individual users and applications by default. For more information about lnodes and resource management, see “Limit Node Overview” in Solaris Resource Manager 1.3 System Administration Guide.
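For illustration only, Solaris Resource Manager 1.3 attaches limits to a user's lnode with the limadm command and reports them with liminfo. The user name dbuser and the share value below are hypothetical, and the exact command syntax should be verified against the Solaris Resource Manager 1.3 System Administration Guide.

    # Assign 20 CPU shares to the lnode of the hypothetical user dbuser
    # (syntax as described in the SRM 1.3 documentation).
    limadm set cpu.shares=20 dbuser

    # Display the limits and accrued usage recorded in that lnode.
    liminfo -v dbuser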

In the Solaris 9 release, resource management is provided by Solaris 9 Resource Manager. In this release, the project is similar to the lnode, and provides a network-wide administrative identifier for related work. All the processes running in a container have the same project identifier, also known as the project ID. The Solaris kernel tracks resource usage through the project ID. Historical data can be gathered using extended accounting, which uses the same tracking method. In Container Manager, the project represents the container. For more information about projects and resource management, see “Projects and Tasks” in System Administration Guide: Resource Management and Network Services.
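As a sketch of how this works on a Solaris 9 host, a project can be defined in the /etc/project database and work can then be launched under it with the newtask command. The project name ct_webapp and project ID 4001 are hypothetical.

    # Entry in /etc/project
    # (format: projname:projid:comment:user-list:group-list:attributes).
    ct_webapp:4001:Web application container:::

    # Launch a shell in the project; every process it spawns carries
    # project ID 4001, which the kernel uses to track resource usage.
    newtask -p ct_webapp /bin/sh

    # List processes with their project, on releases where ps supports
    # the project output keyword.
    ps -o user,pid,project,args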

Figure 1–3 Example of Projects on a Host

Illustration showing an example of projects on a host. The surrounding text describes the context.

Information about the processes running in a container is obtained through the Container Manager GUI. The gathering of this data is transparent to you as you create and manage containers with the software.

Different methods can be used to create container boundaries. One method is to partition the system by using resource pools. Another method is to establish limits on the project through resource controls.

Resource Pools

A resource pool, or pool, is a Solaris 9 software configuration mechanism that is used to partition the resources of a host. A resource set is a process-bindable resource. Memory sets and processor sets are examples of resource sets. Only processor sets are currently available in the Solaris 9 release. A pool binds the various resource sets available on a host. For more information about resource pools, see “Resource Pools” in System Administration Guide: Resource Management and Network Services.
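The following sketch shows how a processor set and a pool that uses it might be defined with poolcfg and then activated with pooladm. The names ct_pset and ct_pool and the CPU counts are hypothetical, and the procedure assumes an initial configuration file (/etc/pooladm.conf) has already been created as described in the Solaris 9 resource pools documentation.

    # Create a processor set with between 2 and 4 CPUs (hypothetical sizes).
    poolcfg -c 'create pset ct_pset (uint pset.min = 2; uint pset.max = 4)' /etc/pooladm.conf

    # Create a pool and associate it with the processor set.
    poolcfg -c 'create pool ct_pool' /etc/pooladm.conf
    poolcfg -c 'associate pool ct_pool (pset ct_pset)' /etc/pooladm.conf

    # Commit the configuration so that it becomes active on the host.
    pooladm -c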

A resource pool can hold one or more containers. In the case of one container, the resources that are linked to the pool are dedicated to that container. In the case of multiple containers, the resources that are linked to the pool are shared among those containers. The following illustration shows the relationship between resource pools and containers.
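Continuing the hypothetical ct_webapp and ct_pool examples above, a project can be tied to a pool through the project.pool attribute in /etc/project, and processes already running in the project can be rebound with poolbind. Treat the attribute and option names shown here as assumptions to confirm against the Solaris 9 documentation.

    # /etc/project entry that binds new work in the project to the pool.
    ct_webapp:4001:Web application container:::project.pool=ct_pool

    # Rebind the processes already running in project ID 4001 to the pool.
    poolbind -p ct_pool -i projid 4001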

Figure 1–4 Example of Resource Pools Supporting the Containers on a Host

Illustration showing an example of resource pools on a host. The surrounding text describes the context.

A host can also have only one resource pool, as is always the case when running on the Solaris 8 Operating System. This pool is called pool_default. Because resource pools do not exist in this OS version, the pool_default is created artificially. All of the CPUs on a host running the Solaris 8 release are considered to be in a single pool by convention.

For more information about managing resource pools with Container Manager, see Chapter 4, Managing Resource Pools.

Resource Controls

When more than one container is bound to a single pool, you can set guarantees, or limits, on a single container. These limits are called resource controls. An example of a control is setting a minimum CPU limit, as in the case of using the fair share scheduler (FSS). Another example is setting a physical memory cap, as in the case of using the rcapd daemon. When a minimum CPU guarantee is set, idle CPU cycles in one container can be used by the applications in the other containers. For more information about resource controls, see “Resource Controls” in System Administration Guide: Resource Management and Network Services.
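For illustration, both kinds of control can be expressed as attributes on the hypothetical ct_webapp project: a CPU share for the fair share scheduler and a physical memory cap enforced by rcapd. The share count and cap value are arbitrary, and the attribute names and rcapadm options should be verified against the Solaris 9 documentation for your release.

    # /etc/project attributes: 20 FSS CPU shares and a 512-Mbyte
    # physical memory cap (536870912 bytes), separated by a semicolon.
    ct_webapp:4001:Web application container:::project.cpu-shares=(privileged,20,none);rcap.max-rss=536870912

    # Make FSS the default scheduling class for the host.
    dispadmin -d FSS

    # Enable the resource capping daemon so the rcap.max-rss cap is enforced.
    rcapadm -E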