Installing and Administering N1 Grid Console - Container Manager 1.0

Chapter 1 Introduction to N1 Grid Console - Container Manager 1.0

This chapter provides an introduction to N1 Grid Console - Container Manager 1.0 (Container Manager).

The following topics are discussed:

Container Manager Overview

Container Manager is an add-on software product to the Sun Management Center 3.5 Update 1 release. The software organizes the existing resource management utilities that run on the Solaris 8 and Solaris 9 Operating Systems. Specifically, this release provides tools that simplify the configuration of Solaris Resource Manager 1.3 and Solaris 9 Resource Manager. The ability to manage resources with these utilities is useful when undertaking a server consolidation project. Once your consolidation project is completed, this release also provides tools to do the following tasks:

For more information about the Solaris resource management utilities, see Solaris Resource Manager 1.3 System Administration Guide and System Administration Guide: Resource Management and Network Services.

This release is a resource management application that helps you prevent contention for resources between software applications. The Container Manager software enables you to control the amount of central processing unit (CPU) and physical memory resources allocated for use by each software application. This ability is achieved, in part, through the use of a container.

A container helps organize and manage the system resources that a service requires. The service is delivered by an application, which is a workload to the system. Using containers enables you to customize the workload environment, and control the level of resources delivered so that an application can perform according to need. You specify in each container the amounts of CPU and physical memory resources you want to allocate for each application. Resource contention between applications running on the same host is limited because you specify minimum CPU and maximum memory resource levels for each application. With this container management application, you are able to accomplish the following tasks:

The ability to establish resource levels is similar to reserving resources for use by an application. For example, if an application is running by itself in a container, the application can use any unused CPU resources available, even if that amount exceeds the minimum CPU reservation. When more than one application is running on a host, each in its own container, the resource reservations take effect and contention for CPU resources is limited. Each application will be guaranteed the minimum CPU reservation established for the container in which that application runs.
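The guarantee arithmetic can be sketched with a quick calculation. Under a fair share scheduler, a container's guaranteed CPU fraction is its share count divided by the total shares of all active containers; the container names and share counts below are invented for illustration:

```shell
# Guaranteed CPU fraction = container shares / total shares (invented values).
awk 'BEGIN {
  shares["payroll"] = 2; shares["webserv"] = 1; shares["orders"] = 1
  total = 0
  for (c in shares) total += shares[c]
  for (c in shares)
    printf "%s: guaranteed %d%% of CPU under full load\n", c, 100 * shares[c] / total
}' | sort
```

As described above, idle cycles beyond the guarantee remain available to whichever container can use them; the guarantee matters only under contention.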

When a container's maximum memory limit, or memory cap, is exceeded, the system tries to page out the memory in order to limit the amount used to the capped value. The rcapd daemon is used for enforcing memory caps. For more information about memory capping and how the rcapd daemon works, see “Physical Memory Management Using the Resource Capping Daemon” in Solaris Resource Manager 1.3 System Administration Guide. For more information about how CPU resources are allocated and how the fair share scheduling class works, see “Fair Share Scheduler” in System Administration Guide: Resource Management and Network Services.
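Outside the Container Manager GUI, the same capping machinery can be inspected by hand with the Solaris 9 rcapadm and rcapstat utilities. This is a sketch only (the interval and sample values are arbitrary examples, and these commands run only on Solaris 9; Container Manager normally manages the daemon for you):

```
rcapadm -E           # enable the resource capping daemon, rcapd
rcapadm -i scan=15   # example: rescan for capped workloads every 15 seconds
rcapstat 5 4         # report cap activity: 4 samples at 5-second intervals
```

See the rcapadm(1M) and rcapstat(1) man pages for the authoritative options.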

Establishing resource constraints for applications avoids the overutilization of system resources by one application. This ability to constrain resource consumption prevents another application from being starved for system resources such as CPU and memory.

Resource management tools such as Container Manager are useful in implementing server consolidation in your data center. Server consolidation produces the following benefits:

You must develop your server consolidation plan before you install the Container Manager software. For more information, see Before Using Container Manager.

N1 Grid Container Model

An N1 Grid Container is an abstraction layer that helps organize and manage the collection of physical system resources. The container enables the creation of a blueprint that details the resource requirements for an application. These resource requirements are the focus of the N1 Grid Container model, which centers on the service, or workload. The service is delivered by an application, which is a workload to the system. A workload is a set of associated processes, such as an executing application.

An earlier form of workload-based management was implemented in the Solaris Resource Manager 1.3 release, in which the workload was associated with the lnode. Container Manager software builds upon this earlier effort. The current container model provides a tool to help you organize and manage the ongoing delivery of resources for services. Common examples of services include monthly payroll, customer order look-up, and web service delivery.

When undergoing a server consolidation, you need to be able to describe the environment to which an application will be limited. Establishing this description enables you to move from having one application running per server to having many applications running on a single server. The container both provides this description and serves as its instantiation. A simple container might, for example, describe only system resources such as CPU, physical memory, and bandwidth. A more complex container might also control security, namespace isolation, and application faults.

The following illustration of an N1 Grid Container shows the relationship between services and resources.

Figure 1–1 Example of an N1 Grid Container


The box represents the container. Three kinds of resources are shown along the x, y, and z axes of the box that surrounds the service. In this model, CPU, Memory, and Bandwidth are fundamental resources. The service is bound by the box to represent how this service is contained by the container. For this release, Container Manager controls only CPU and physical memory resources.

Because Container Manager focuses upon the workload, the amount of resources used by an individual host is not monitored. The software instead monitors the amount of resources that is used by the service. In this model, a single instance of a service represents at least one process running on an individual host. The data is retained for possible system health monitoring and accounting purposes.

Figure 1–2 Example of Containers on a Host


More than one container can be active on an individual host at the same time. If multiple containers exist on a single host, you can set the boundaries of the containers so that the host can expand and contract them. In this case, resources that other containers are not currently using are available to a container that can use them. Ultimately, the number of containers that can be active on an individual host is determined by the amount of CPU and memory resources available, and how much of these resources each container reserves. The system must be able to meet the combined resource requirements of all the active containers, which are sized according to the needs of the applications.

For more information about managing containers with Container Manager, see Chapter 3, Managing Containers.

Resource Management

In a general sense, a resource represents a process-bindable OS entity. More often, a resource refers to the objects constructed by a kernel subsystem that offers some form of partitioning. A resource can also be considered an aspect of the computing system that can be manipulated with the intention of affecting application behavior. Examples of resources include physical memory, CPUs, and network bandwidth.

Container Manager works with resource management utilities in both the Solaris 8 and Solaris 9 Operating Systems. In the Solaris 8 release, resource management is provided by Solaris Resource Manager 1.3. Every service is represented by an lnode, or limit node. The lnode is used to record resource allocation policies and accrued resource usage data. lnodes correspond to UNIX user IDs (UIDs). By default, a UID can represent individual users and applications. For more information about lnodes and resource management, see “Limit Node Overview” in Solaris Resource Manager 1.3 System Administration Guide.
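On a Solaris 8 host, lnode attributes are typically administered with the Solaris Resource Manager 1.3 limadm and liminfo commands. The sketch below is illustrative only: the login name is invented, and the exact attribute names and syntax should be checked against the Solaris Resource Manager 1.3 System Administration Guide rather than taken from this example.

```
# Assign CPU shares to the lnode that corresponds to login "payroll" (illustrative).
limadm set cpu.shares=20 payroll

# Display the limits and accrued usage recorded in that lnode.
liminfo payroll
```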

In the Solaris 9 release, resource management is provided by Solaris 9 Resource Manager. In this release, the project is similar to the lnode, and provides a network-wide administrative identifier for related work. All the processes running in a container have the same project identifier, also known as the project ID. The Solaris kernel tracks resource usage through the project ID. Historical data can be gathered using extended accounting, which uses the same tracking method. In Container Manager, the project represents the container. For more information about projects and resource management, see “Projects and Tasks” in System Administration Guide: Resource Management and Network Services.
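On Solaris 9, a project is defined by an entry in the /etc/project database. The following line is a hypothetical example (the project name, project ID, and user are invented); the fields are name:id:comment:user-list:group-list:attributes, as documented in the project(4) man page:

```
payroll:100:monthly payroll service:payrolluser::
```

Processes started under this project carry project ID 100, which the kernel and the extended accounting facility use to track resource usage.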

Figure 1–3 Example of Projects on a Host


Information about the processes running in a container is obtained from the Container Manager GUI. The gathering of this data is transparent to you as you create and manage containers using the software.

Different methods can be used to create container boundaries. One method is to partition the system by using resource pools. Another method is to establish limits on the project through resource controls.

Resource Pools

A resource pool, or pool, is a Solaris 9 software configuration mechanism that is used to partition the resources of a host. A resource set is a process-bindable resource. Memory sets and processor sets are examples of resource sets. Only processor sets are currently available in the Solaris 9 release. A pool binds the various resource sets available on a host. For more information about resource pools, see “Resource Pools” in System Administration Guide: Resource Management and Network Services.
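As a sketch of what a pool definition looks like, the following poolcfg(1M) subcommands create a pool bound to a small processor set. The names and CPU counts are invented for illustration; each line would typically be supplied via poolcfg -c '<subcommand>' against the pool configuration file:

```
create pset payroll_pset (uint pset.min = 1; uint pset.max = 2)
create pool payroll_pool (string pool.scheduler = "FSS")
associate pool payroll_pool (pset payroll_pset)
```

Committing the configuration with pooladm -c applies it to the running system.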

A resource pool can hold one or more containers. In the case of a single container, the resources that are linked to the pool are dedicated to that container. In the case of multiple containers, the resources that are linked to the pool are shared among the containers. The following illustration shows the relationship between resource pools and containers.

Figure 1–4 Example of Resource Pools Supporting the Containers on a Host


A host can also have just one resource pool, as is always the case on the Solaris 8 Operating System. This pool is called pool_default. Because resource pools do not exist in this OS version, the pool_default is created artificially. All of the CPUs on a host running the Solaris 8 release are considered to be in a single pool by convention.

For more information about managing resource pools with Container Manager, see Chapter 4, Managing Resource Pools.

Resource Controls

When more than one container is bound to a single pool, you can set guarantees, or limits, on a single container. These limits are called resource controls. One example of a control is setting a minimum CPU guarantee, as when using the fair share scheduler (FSS). Another example is setting a physical memory cap, as when using the rcapd daemon. When a minimum CPU guarantee is set, the idle CPU cycles in one container can be used by the applications in the other containers. For more information about resource controls, see “Resource Controls” in System Administration Guide: Resource Management and Network Services.
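On Solaris 9, both kinds of control appear as attributes on the project entry. The /etc/project line below is a hypothetical example (the name, ID, user, and values are invented): project.cpu-shares sets the FSS share count, and rcap.max-rss sets the physical memory cap, in bytes, that rcapd enforces:

```
payroll:100::payrolluser::project.cpu-shares=(privileged,20,none);rcap.max-rss=536870912
```

Here 536870912 bytes corresponds to a 512-Mbyte memory cap.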

Resource Utilization Reports and Extended Accounting Data

Reports that provide historical resource utilization data per container, resource pool, or host are available if you have the Performance Reporting Manager add-on product installed with Container Manager. CPU data, memory utilization data, and CPU extended accounting data are stored in the database by the Performance Reporting Manager data collection service. From the GUI, you can request a graph report that details resource usage, or you can export the data to a text file in comma-separated value (CSV) format. The latter can be used in a billing and accounting application, for example.
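Once exported, the CSV file can be processed with ordinary text tools. The sketch below invents a plausible column layout (timestamp, container, CPU percentage, memory in Mbytes; the columns actually exported by Performance Reporting Manager may differ) and computes the average CPU use per container with awk:

```shell
# Invented sample of exported data: timestamp,container,cpu_pct,mem_mb
cat > /tmp/usage.csv <<'EOF'
2004-01-05T00:00,payroll,42,512
2004-01-05T01:00,payroll,38,520
2004-01-05T00:00,webserv,12,256
EOF

# Average CPU percentage per container, sorted by container name.
awk -F, '{ sum[$2] += $3; n[$2]++ }
         END { for (c in sum) printf "%s,%.1f\n", c, sum[c] / n[c] }' /tmp/usage.csv | sort
```

The same exported data can be loaded directly into a spreadsheet or billing application.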

For more information about Performance Reporting Manager software, see Sun Management Center 3.5 Performance Reporting Manager User's Guide. For more information about the available reports and accounting data, see Resource Utilization Reports and Extended Accounting Data.

Before Using Container Manager

Before you install and use the Container Manager software, you should assess your resource consumption needs. As part of the container creation process, you provide a minimum CPU reservation and a physical memory cap for the processes that will run inside the container. The creation process goes most smoothly if you have already evaluated your needs, developed your goals, and put a resource plan in place. A master list of the specifications of all the hardware involved is also useful before you begin.

Server Consolidation

A key component for undergoing a successful server consolidation is to have a master list of all the servers, storage, and applications that are candidates for consolidation. Once your consolidation plan has been finalized, you can begin to implement the plan with this list.

If you plan to perform a server consolidation in your data center, you need to perform several tasks before installing and using the Container Manager software. A partial list of these tasks includes the following:

For more discussion about how to plan and execute a server consolidation, see the Sun BluePrints book Consolidation in the Data Center by David Hornby and Ken Pepple.