Installing and Administering Solaris Container Manager 3.6.1

Chapter 1 Introduction to Solaris Container Manager 3.6.1

This chapter introduces Solaris Container Manager 3.6.1 (Container Manager).

The following topics are discussed:

    Container Manager Overview
    Before Using Container Manager
    Container Manager Examples
    New Features and Changes in Solaris Container Manager 3.6 and 3.6.1
    Container Manager Documentation
    Getting Started

Container Manager Overview

Solaris Container Manager 3.6.1 is an add-on software product to the Sun Management Center 3.6.1 release. This product helps you consolidate servers so that you can control the cost of large networks of servers and software. Container Manager enables you to create and manage containers, projects, resource pools, and zones. You benefit from better utilization of hardware resources and higher server-to-administrator ratios.

The product enables you to do the following tasks:

    Create and manage containers, projects, resource pools, and zones
    Set resource controls, such as CPU shares and physical memory caps, on the workloads that run in containers
    Monitor resource utilization and generate utilization reports and extended accounting data

Containers are ideal for any organization where users need their own virtualized environment, including IP address, disk storage, and applications. For example, a company might set up containers for specific applications, such as a mail server, web server, or database. A company might also set up containers for geographic areas, such as the United States, the Americas, Europe, and Asia-Pacific. Similarly, a company might set up containers for functional departments, such as human resources, research and development, and sales.

Specific industries can use containers or zones for a variety of purposes. A university might give each university student a zone with an instance of the OS, a share of system resources, and a root password. A wireless company might set up containers to monitor services, such as long-distance service, local telephone service, and voice mail. A cable provider or Internet service provider might set up containers for DSL, cable modem, or cable television service. A financial institution might set up separate containers for users who do complex queries on data warehouses and for users who need online transaction processing. An independent software vendor (ISV) might set up containers or zones for separate customers to whom it sells software or services.

Container Manager and Other Resource Management Utilities

This product organizes existing resource management utilities that run on the Solaris 8, Solaris 9, and Solaris 10 releases. Specifically, this product provides tools to simplify the configuration of Solaris Resource Manager 1.3 and Solaris 9 Resource Manager.

For more information about the Solaris resource management utilities, see Solaris Resource Manager 1.3 System Administration Guide and System Administration Guide: Network Services.

Solaris Container Model

A Solaris Container is an abstraction layer that helps to organize and manage the collection of physical system resources. The container enables the creation of a blueprint that details the resource requirements for an application. The Solaris Container model focuses on the resource requirements of the application, that is, on the service or workload. The service is delivered by an application, which is a workload to the system. A workload is a set of associated processes, such as an executing application.

An earlier form of workload-based management was implemented in the Solaris Resource Manager 1.3 release. In that release, the workload was associated with the limit node, lnode. Container Manager software builds on this earlier effort. The current container model provides a tool to help you organize and manage the ongoing delivery of resources for services. Common examples of services could be monthly payroll, customer order lookup, and web service delivery.

In a server consolidation, you need to be able to describe the environment to which an application is limited. Establishing this description enables you to move from having one application running per server to having many applications running on a single server. The container provides this description and is also its instantiation. A simple container could, for example, describe system resources such as CPU, physical memory, and bandwidth. A more complex container could, for example, also control security, namespace isolation, and application faults.

The following illustration of a Solaris Container shows the relationship between services and resources.

Figure 1–1 Example of a Solaris Container


The box represents the container. Three kinds of resources are shown along the x, y, and z axes of the box that surrounds the service. In this model, CPU, memory, and bandwidth are fundamental resources. The service is bound by the box to represent how the service is contained by the container. In this release, Container Manager controls all three fundamental resources: CPU, physical memory, and bandwidth.

Because Container Manager focuses on the workload, the amount of resources that is used by an individual host is not monitored. A host is a system on which the Container Manager agent software has been installed and which is part of the Sun Management Center server context. When installation is complete, the host is automatically discovered and the name is added to the navigation window in the Hosts view. The software monitors the amount of resources that is used by the service. In this model, a single instance of a service represents at least one process that runs on an individual host. The data is retained for possible system health monitoring and accounting purposes.

Figure 1–2 Example of Containers on a Host


More than one container can be active on an individual host at the same time. If multiple containers exist on a single host, you can set the boundaries of the containers so that the host can expand and contract them. In this case, resources that other containers are not currently using are available to a container that can use them. Ultimately, the number of containers that can be active on an individual host is determined by the amount of CPU and memory resources available, and how much of these resources each container reserves. The system must be able to meet the combined resource requirements of all the active containers, which are sized by the needs of the applications.

For more information about managing containers with Container Manager, see Chapter 4, Managing Projects.

Resource Management

Generally, a resource represents a process-bindable OS entity. More often, a resource refers to the objects constructed by a kernel subsystem that offers some form of partitioning. A resource can also be considered an aspect of the computing system that can be manipulated with the intention of affecting application behavior. Examples of resources include physical memory, CPUs, and network bandwidth.

Container Manager works with resource management utilities in the Solaris 8, Solaris 9, and Solaris 10 releases. In the Solaris 8 release, resource management is provided by Solaris Resource Manager 1.3. Every service is represented by an lnode. The lnode is used to record resource allocation policies and accrued resource usage data. lnodes correspond to UNIX user IDs (UIDs). The UID can represent individual users and applications by default. For more information about lnodes and resource management, see Limit Node Overview in Solaris Resource Manager 1.3 System Administration Guide.

In the Solaris 9 and Solaris 10 releases, resource management is provided by the Resource Manager. In these releases, the project is similar to the lnode. A project provides a network-wide administrative identifier for related work. All the processes that run in a container have the same project identifier, also known as the project ID. The Solaris kernel tracks resource usage through the project ID. Historical data can be gathered by using extended accounting, which uses the same tracking method. In Container Manager, the project represents the container.
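
Container Manager creates and manages projects for you through its GUI. Purely for orientation, the following is a minimal command-line sketch of the same concepts on a Solaris 9 or Solaris 10 system; the project name websvc, the user webservd, the start script, and the accounting file path are hypothetical.

    Create a project and start a workload under its project ID:
    # projadd -c "Web service workload" -U webservd websvc
    # newtask -p websvc /usr/local/bin/start-webserver

    Observe per-project resource usage and enable extended accounting for tasks:
    # prstat -J
    # acctadm -e extended -f /var/adm/exacct/task task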

Figure 1–3 Example of Projects on a Host


Information about the processes that run in a container is obtained from the Container Manager GUI. The gathering of data is transparent to you as you create and manage containers by using the software.

Different methods can be used to create container boundaries. One method is to partition the system by using resource pools. Another method is to establish limits on the project through resource controls.

Resource Pools

A resource pool, or pool, is a Solaris 9 and Solaris 10 software configuration mechanism that is used to partition the resources of a host. A resource set is a process-bindable resource. Memory sets and processor sets are examples of resource sets. Only processor sets are currently available in the Solaris 9 and Solaris 10 releases. A pool binds the various resource sets that are available on a host.

A resource pool can hold one or more projects. In the case of one project, the resources that are linked to the pool are dedicated to that project. In the case of multiple projects, the resources that are linked to the pool are shared among the projects.
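
Container Manager creates and binds resource pools through its GUI. As a hedged sketch only, the underlying Solaris 10 pools facility can express the same partitioning roughly as follows; the processor set name db-pset, the pool name db-pool, and the project name websvc are hypothetical.

    Enable the pools facility and save the current configuration to the static file:
    # pooladm -e
    # pooladm -s

    Define a processor set and a pool, link them, and commit:
    # poolcfg -c 'create pset db-pset (uint pset.min = 1; uint pset.max = 2)'
    # poolcfg -c 'create pool db-pool (string pool.scheduler = "FSS")'
    # poolcfg -c 'associate pool db-pool (pset db-pset)'
    # pooladm -c

    Bind a project to the pool:
    # projmod -s -K project.pool=db-pool websvc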

On the Solaris 10 Operating System, the product has a feature called dynamic resource pools. Dynamic resource pools help you obtain better performance by enabling you to adjust each pool's resource allocations in response to system events and load changes. This feature is described in Dynamic Resource Pools.

When running on the Solaris 8 Operating System, a host can have only one resource pool. This pool is called pool_default. Because resource pools do not exist in this OS version, the pool_default is created artificially. All of the CPUs on a host that run the Solaris 8 release are considered to be in a single pool by convention.

For more information about managing resource pools with Container Manager, see Chapter 5, Managing Resource Pools.

Resource Controls

When more than one project is bound to a single pool, you can set guarantees, or limits, on a single project. These limits are called resource controls. One example of a control is a minimum CPU limit, as used with the fair share scheduler (FSS). Another example is a physical memory cap, as enforced by the rcapd daemon. When a minimum CPU guarantee is set, the applications in other projects can use the idle CPU cycles of a project.
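
Container Manager sets these controls when you create or modify a container. As an illustration of the underlying Solaris mechanisms only, the following sketch assigns 20 CPU shares and a 512-Mbyte physical memory cap to a hypothetical project named websvc:

    Set the fair share scheduler shares and the memory cap (in bytes) on the project:
    # projmod -s -K "project.cpu-shares=(privileged,20,none)" websvc
    # projmod -s -K "rcap.max-rss=536870912" websvc

    Enable the resource capping daemon so that the memory cap is enforced:
    # rcapadm -E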

Zones

Zones provide an isolated and secure environment for running applications. Zones give you a way to create virtualized operating system environments within an instance of Solaris. Zones allow one or more processes to run in isolation from other processes on the system. For example, a process that runs in a zone can send signals only to other processes in the same zone, regardless of user ID and other credential information. If an error occurs, it affects only the processes that run within the zone.

Global Zones

Every Solaris 10 system contains a general global environment, called the global zone, which is similar to previous versions of the OS. The global zone has two functions: it is the default zone for the system and the zone used for system-wide administrative control. All processes run in the global zone if no non-global zones, referred to simply as zones, are created by the global administrator.

The global zone is the only zone from which a non-global zone can be configured, installed, managed, or uninstalled. Only the global zone is bootable from the system hardware. Administrative functions that relate to physical devices, routing, or dynamic reconfiguration (DR) are possible only in the global zone. Appropriately privileged processes or users that run in the global zone can access objects associated with other zones.

Unprivileged processes or users in the global zone might be able to perform operations not allowed to privileged processes or users in a non-global zone. For example, users in the global zone can view information about every process in the system. Zones allow the administrator to delegate some administrative functions while maintaining overall system security.

Non-Global Zones

A non-global zone does not need a dedicated CPU, a physical device, or a portion of physical memory. These resources can be shared across a number of zones that run within a single domain or system. Zones can be booted and rebooted without affecting other zones on the system. Each zone can provide a customized set of services. To enforce basic process isolation, a process can “see” or signal only those processes that exist in the same zone. Basic communication between zones is enabled by giving each zone at least one logical network interface. An application running in one zone cannot see the network traffic of another zone even though the respective streams of packets travel through the same physical interface.

Each zone that requires network connectivity is configured with one or more dedicated IP addresses.
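
Container Manager configures zones through its GUI. The following is only a rough sketch of the equivalent Solaris 10 zonecfg and zoneadm steps; the zone name, zone path, IP address, and network interface are hypothetical.

    # zonecfg -z webzone
    zonecfg:webzone> create
    zonecfg:webzone> set zonepath=/zones/webzone
    zonecfg:webzone> set autoboot=true
    zonecfg:webzone> add net
    zonecfg:webzone:net> set address=192.168.0.20
    zonecfg:webzone:net> set physical=bge0
    zonecfg:webzone:net> end
    zonecfg:webzone> verify
    zonecfg:webzone> commit
    zonecfg:webzone> exit
    # zoneadm -z webzone install
    # zoneadm -z webzone boot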

For more information about zones, see System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

Resource Utilization Reports and Extended Accounting Data

If you have the Performance Reporting Manager add-on product installed with Container Manager, you can create reports that provide historical resource utilization data per container, resource pool, zone, project, or host. CPU data, memory utilization data, and CPU extended accounting data are stored in the database by the Performance Reporting Manager data collection service. From the GUI, you can request a graph report that details resource usage, or you can export the data to a text file in comma-separated value (CSV) format. The latter can be used in a billing and accounting application, for example.

For more information about Performance Reporting Manager software, see Sun Management Center 3.6.1 Performance Reporting Manager User’s Guide. For more information about the available reports and accounting data, see About Reports.

Before Using Container Manager

Before installing and using the Container Manager software, assess your resource consumption needs. As part of the container creation process, you provide a minimum CPU reservation and, optionally, a physical memory cap for the processes that will run inside the container. The container creation process is easier if you have already evaluated your needs, developed your goals, and have a resource plan in place. A master list of the specifications of all the hardware involved is also useful before you begin.

Server Consolidation

A key component of a successful server consolidation is a master list of all the servers, storage, and applications that are candidates for consolidation. After you have finalized your consolidation plan, you can begin to implement the plan with this list.

If you intend to perform a server consolidation in your data center, you need to perform several tasks before installing and using the Container Manager software. A partial list of tasks to be performed includes the following:

  1. Choose the applications to consolidate.

  2. Identify the components, such as processes, groups of users, or users that make up the workload for the application.

  3. Determine the performance requirements for each defined workload. This task involves monitoring the real-time activity of the application on the current system, including CPU, memory, network, and storage requirements and usage; a few representative commands are sketched after this list. You also need to determine which file systems, shared file systems, and shared libraries the workloads use, so that you can configure the new system and share resources, such as read-only file systems, libraries, and man pages, efficiently.

  4. Rank the workloads that are to share the system resources according to which applications require the most resources and the time periods during which they need them. You also need to identify competing workloads that are housed on the same systems.

  5. Identify the projects for these workloads. The project serves as the administrative name that is used to group related work in a manner you deem useful. For example, you might have a project for web services and a project for database services.
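
The monitoring mentioned in step 3 can be done with whatever tools you already use. As one example, the standard Solaris observability commands below report per-process, per-project, and system-wide activity; the five-second interval is arbitrary.

    Observe CPU and memory usage by process, user, and project:
    # prstat -a
    # prstat -J

    Observe system-wide CPU, memory, disk, and network activity:
    # vmstat 5
    # iostat -xn 5
    # netstat -i 5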


Note –

Although the Solaris Operating System can have thousands of containers, for practical purposes and best performance, we recommend that you have no more than 200 hosts with approximately 10 zones per host and 10 projects per zone.


For more information about how to plan and execute a server consolidation, you can read the Sun Blueprints book Consolidation in the Data Center by David Hornby and Ken Pepple.

Container Manager Examples

The following examples show how you can use Container Manager.

Multiple Projects With a Zone for Oracle

In this example, you have a default resource pool with a zone. You then set up a container with one resource pool with two zones. One zone, zone_ora1, has the Oracle database application and the second zone, zone_ws01, has a web server application. Each resource pool has two CPUs. You set up eight CPU shares on the container, four shares for zone_ora1 and three shares for zone_ws01. The container uses the fair share scheduler.
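
Container Manager assigns the CPU shares when you create the container. Purely as an illustration of the underlying mechanism, the four shares for zone_ora1 could be expressed on Solaris 10 with the zone.cpu-shares resource control roughly as follows; only the zone name is taken from this example.

    # zonecfg -z zone_ora1
    zonecfg:zone_ora1> add rctl
    zonecfg:zone_ora1:rctl> set name=zone.cpu-shares
    zonecfg:zone_ora1:rctl> add value (priv=privileged,limit=4,action=none)
    zonecfg:zone_ora1:rctl> end
    zonecfg:zone_ora1> commit
    zonecfg:zone_ora1> exit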

Dynamic Resource Pool Example

In this example, you set up one container with two resource pools. Pool1 has one to three CPUs assigned. The load goal for pool1 is greater than 20 percent and less than 80 percent. Pool2 is used by a mail server. Depending on the load that the mail server requires, the other pool is dynamic and can use from one to three CPUs for its applications.

Applications Sharing the Same Container

In this example, you set up one container with two zones. The first zone, zone_ora02, has seven projects: one project for the user ORACLE, one project for any process run by the database administrator group (group.dba), and five default projects: system, user.root, noproject, default, and group.staff. A total of 100 CPU shares are in the first zone. Each of the five default projects is assigned one share. The project for user ORACLE is assigned 75 shares, and the project for group.dba is assigned 20 shares.

The second zone, zone_ws_02, is for a web server.

Oracle 10g Rack on Multiple Systems

In this example, the application Oracle 10g runs on multiple systems. You create a project on system 1 with one pool and one zone for the Oracle 10g application. You then copy the project with its zone and pool onto a second system and associate the project on the second system with the Oracle 10g application.

Multiple Systems With Multiple Resource Pools

In this example, you have two systems with two pools each. You have a project with a web server on system 1 and a project with a web server on system 2. Each project has 10 CPU shares with each web server allocated 5 shares. The other 5 shares are reserved for future use.

New Features and Changes in Solaris Container Manager 3.6 and 3.6.1

Solaris Container Manager has the following new features, which vary by operating system.

Table 1–1 New Features in Solaris Container Manager 3.6

Benefit | Feature | Solaris 10 (SPARC and x86) | Solaris 9 (SPARC and x86) | Solaris 8 (SPARC)
Run processes in isolation and virtual OS environments | Zone management | Yes | |
Set and obtain system performance goals | Dynamic resource pools | Yes | |
Avoid network congestion | Internet Protocol Quality of Service (IPQoS) | Yes | |
More flexible process management | Ability to move processes across containers | Yes | Yes |
Timesharing scheduler support | Support of other scheduler class | Yes | Yes | Yes
Better visualization tools | Graph enhancement | Yes | Yes | Yes
Zone-aware containers with memory allocation | Container enhancement | Yes | Yes | Yes
Utilization report for top 5 resource objects | Graph enhancement | Yes | Yes | Yes

In Solaris Container Manager 3.6.1, the Zone Copy feature has been enhanced. You can create multiple copies of a non-global zone on a single host, or a copy of a non-global zone on multiple hosts. For information, see Copying Non-Global Zones in Chapter 6, Managing Zones.

Zone Management

Container Manager enables you to create, delete, modify, halt, and reboot non-global zones. Container Manager can also discover existing zones, detect zone changes, monitor and archive a zone's CPU, memory, and network utilization, and generate zone up/down alarms.

For more information about zones, see Chapter 6, Managing Zones.

Dynamic Resource Pools

Dynamic resource pools dynamically adjust the resource allocation of each resource pool to meet established system performance goals. Dynamic resource pools simplify and reduce the number of decisions required from a system administrator. Adjustments are automatically made to preserve the system performance goals specified by a system administrator.

You can create, modify, and delete dynamic resource pools for Solaris 10 systems. After you configure dynamic resource pool constraints, such as the minimum and maximum CPUs, utilization objective, locality objective, and CPU share, the Container Manager agent dynamically adjusts the pool size to the conditions of resource availability and consumption.
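
These constraints correspond to objectives that the Solaris 10 poold daemon evaluates. A hedged sketch, reusing the hypothetical db-pset and db-pool names from the earlier resource pool example, might look like the following:

    Start the dynamic pools service so that poold adjusts the pools:
    # svcadm enable svc:/system/pools/dynamic:default

    Set a utilization objective on the processor set, an importance on the pool, and commit:
    # poolcfg -c 'modify pset db-pset (string pset.poold.objectives = "utilization < 80")'
    # poolcfg -c 'modify pool db-pool (int pool.importance = 10)'
    # pooladm -c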

Resource pool configuration is saved both on the agent and in the service database.

Bandwidth Control Using IPQoS

The IP quality-of-service (IPQoS) feature helps you to provide consistent levels of service to network users and to manage network traffic. The feature enables you to rank and control network traffic and to gather network statistics.

This feature controls the inbound and outbound traffic of a Solaris zone. You specify the upper limit of the zone's input/output network bandwidth. Packets are dropped if the limit is exceeded. Because IPQoS incurs a fair amount of CPU overhead, it is an optional feature.

Container Manager monitors and gathers network utilization data and provides a historical network utilization graph.

Flexible Process Management

To increase process management flexibility, Container Manager 3.6 enables you to move processes from container to container. For Solaris 9 systems, you can move processes across containers. For Solaris 10 systems, you can move processes across containers only within the same zone.

Timesharing Scheduler

Container Manager 1.0 supported the fair share scheduler (FSS) only. Container Manager 3.6 allows you to select the scheduler class, fair share or timesharing, when you create or modify a resource pool. The scheduler class determines the process priority, deciding which process runs next.

After you change a resource pool's scheduler class, any new processes for that resource pool change to the resource pool's scheduler class. Container Manager does not change the scheduler class of a running process.
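
For reference, the scheduler class of a pool, and of processes that are already running, can also be changed with the underlying Solaris utilities. The following hedged sketch again uses the hypothetical db-pool name; the project ID 100 is arbitrary.

    Switch the pool from the fair share scheduler to timesharing and commit:
    # poolcfg -c 'modify pool db-pool (string pool.scheduler = "TS")'
    # pooladm -c

    Optionally move processes that are already running, for example all processes in project ID 100:
    # priocntl -s -c TS -i projid 100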

Container Enhancements

Container Manager includes the following enhancements to containers:

Container Manager Documentation

The following table lists the documentation resources that are available for the product. For the documentation for Solaris Container Manager 3.6, go to http://docs.sun.com/app/docs/coll/810.4.

Table 1–2 Documentation Resources

Task | Resource
To install and administer containers | Installing and Administering Solaris Container Manager 3.6 (this book)
To access Help from the product | Online Help for Solaris Container Manager 3.6. To access this help, click the Help link in the Solaris Container Manager GUI.
To install Sun Management Center 3.6 and its add-on products, including Container Manager | Sun Management Center 3.6.1 Installation and Configuration Guide
To find installation issues, run-time issues, late-breaking news (including supported hardware), and documentation issues | Sun Management Center 3.6 Release Notes
To obtain information about Performance Reporting Manager, the optional add-on that works with Container Manager | Sun Management Center 3.6.1 Performance Reporting Manager User’s Guide
If you use the Solaris 8 Operating System, read about Solaris Resource Manager 1.3 | Solaris Resource Manager 1.3 Installation Guide; Solaris Resource Manager 1.3 System Administration Guide; Solaris Resource Manager 1.3 Release Notes
If you use the Solaris 9 or Solaris 10 Operating System, read about Solaris resource management and zones | System Administration Guide: Solaris Containers-Resource Management and Solaris Zones

Getting Started

If you have already installed and set up Solaris Container Manager, the following links help you use the product quickly: