Installing and Administering N1 Grid Console - Container Manager 1.0

About Container Creation

Every container starts with a container definition. A container can be one of three types, depending upon the project type selected during its creation. The project type determines how processes are tracked.

Container Types

When creating a new container definition, you must select the project type. A project is a network-wide administrative identifier (ID) for related work. All processes running in a container have the same project ID, and the container tracks the resources being used through that project ID. The container type is based on which project type is selected when the container definition is created. For more information about projects and resource management, see “Projects and Tasks” in System Administration Guide: Resource Management and Network Services.
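For example, on a Solaris 9 host you can verify which project a process runs under by using standard system utilities. This is a hypothetical spot-check from a terminal session, not part of the Container Manager interface:

    # List every process together with the project under which it runs
    ps -e -o project,pid,user,args

    # Display the project ID and project name for the current user
    id -p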

Every container definition has a project name that is a permanent part of its information. When a container is activated on a host, this project name is added to that host's /etc/project file. This entry remains as long as the container is active on that host.
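For illustration, each line in the /etc/project file consists of six colon-separated fields. The entry below is hypothetical; the project name, ID, and attribute values shown are invented, and the actual entry is generated when the container is activated:

    # projname:projid:comment:user-list:group-list:attributes
    webserver:1000:Web server container:::project.cpu-shares=(privileged,25,none)

Here, webserver is the project name, 1000 is the project ID, and the attributes field can carry resource controls such as the CPU shares discussed in About Making Resource Reservations.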

It is not possible to have two containers with the same project name active on a host at the same time. This is because processes running in a container are tracked with the project ID, so every project name on a host must be unique.

When you create user-based and group-based containers, the user or group name becomes part of the project name. For user-based containers, the project name becomes user.username. For group-based containers, the project name becomes group.groupname. Therefore, when creating user-based or group-based containers, you cannot use a user or group name that duplicates an /etc/project entry for the default containers. For more information, see Default Containers.

You provide a project name of your choosing as part of the creation process for application-based containers. The Container Creation wizard accepts duplicate project names for different application-based container definitions, but two application-based containers that have the same project name cannot be active on the same host at the same time. Reuse project names for application-based containers only if you plan to activate those containers on different hosts. If you try to activate a second container on a host that already has an active container with the identical project name, the activation fails.

The following table provides details about the three project types available and what changes occur based on the selection.

Table 3–2 Project Type Details

User-Based (Solaris 8)

Only type of project supported in the Solaris 8 release. The project name in the /etc/project file becomes user.username, and the project becomes the user's primary default project.

User-Based (Solaris 9)

The project name in the /etc/project file becomes user.username, with a list of UNIX users who can join the project. The valid form is username.

Group-Based (Solaris 9)

The project name in the /etc/project file becomes group.groupname. The valid form is groupname.

Application-Based (Solaris 9)

The project name can be the application name or any other name of your choosing. The name provided is added to the /etc/project file. A match expression can be provided to move matching processes into the project automatically; this expression is case sensitive. The username or group name under which the processes currently run must also be provided.
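To make the naming conventions in the table concrete, the following /etc/project entries sketch one container of each type. The user, group, and project names and IDs here are invented for illustration:

    # A user-based container for user jsmith
    user.jsmith:1001::jsmith::

    # A group-based container for group dba
    group.dba:1002:::dba:

    # An application-based container with an administrator-chosen name
    ora_marketing:1003:Marketing database:oracle::

For the application-based entry, the user name under which the application's processes run (here, oracle) might appear in the user-list field, corresponding to the requirement in the last table row.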

About Making Resource Reservations

Before you begin using containers to manage an application's resources, it is important that you first know the resource trends for the application. The performance of certain applications, such as ORACLE®, degrades significantly if the memory cap is set too low. Every active container must have resource reservations set: a minimum CPU reservation and, optionally, a maximum memory reservation (memory cap). Begin using containers to manage these reservations only after you have established the resource requirements of the applications.


Caution –

Do not set a physical memory cap for a container that is less than what the application typically uses. Doing so adversely affects the application's performance and can cause significant delays due to increased paging and swapping as the application is forced to use more virtual memory.


Finalize your server consolidation plan before you start using containers to manage system resources. An important related task is to trend the resource consumption of the applications you are considering for consolidation. Trend the application's resource utilization for at least a month in your test environment before implementing your plan in your production environment. Once you have established the CPU and memory consumption trends, allow headroom of at least a few percentage points above the typical memory requirement when setting the cap.
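On Solaris 9 systems where physical memory caps are enforced by the resource capping daemon, rcapd(1M), a cap is expressed as an rcap.max-rss attribute, in bytes, on the project entry. The following sketch is hypothetical and is shown only to illustrate the mechanism; the project name and values are invented:

    # Hypothetical entry capping resident set size at 512 Mbytes (536870912 bytes)
    ora_marketing:1003:Marketing database:oracle::rcap.max-rss=536870912

    # Observe cap enforcement and paging activity at 5-second intervals
    rcapstat 5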

When making a reservation for the amount of CPU resources that the container needs, you assign the amount of CPU in integer or decimal units. For example, .25, 1, and 3.75 are all valid amounts. Container Manager uses the fair share scheduler (FSS) to guarantee the minimum CPU reservation you set. The convention used by the software is that 1 CPU equals 100 shares; likewise, .25 CPU equals 25 shares, and so on. CPU shares are not the same as CPU percentages. Rather, the number of shares defines the relative importance of the project when compared to other projects. FSS limits the CPU resource allotment only when there is competition for those resources. For example, if there is only one active container on a system at a given time, the corresponding application can use all of the CPU resources regardless of the number of shares it holds. For more information about the fair share scheduler, see System Administration Guide: Resource Management and Network Services.
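As a hypothetical illustration of the share convention: a container reserving .25 CPU corresponds to 25 shares, and one reserving 1 CPU corresponds to 100 shares, so when both compete on the same host they receive CPU time in a 1:4 ratio. The project names below are invented:

    # Display the CPU shares currently assigned to a project
    prctl -n project.cpu-shares -i project webserver

    # Equivalent attributes as they might appear in /etc/project
    webserver:1000:Web server container:::project.cpu-shares=(privileged,25,none)
    ora_marketing:1003:Marketing database:oracle::project.cpu-shares=(privileged,100,none)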

You can use Container Manager in your test environment as a tool to help trend application resource consumption by doing the following:

  1. Install and set up the Container Manager software along with any required software.

    For information, see Chapter 2, Container Manager Installation and Setup.

  2. Install Performance Reporting Manager on all agent machines you want to monitor.

    For more information, see Chapter 2, Container Manager Installation and Setup and Sun Management Center 3.5 Performance Reporting Manager User's Guide.

  3. Create an active application-based container for the application you want to trend. In the Container Creation wizard, make a minimum CPU reservation only. Do not set a memory cap.

    For more information, see Creating an Application-Based Container and To Create an Active Application-Based Container.

  4. Monitor the resources used for a couple of weeks with daily, weekly, or real-time graphs. Two graphs, one each for the CPU and memory resources used, are available for the container running on an individual host. You can also view the Processes table to monitor the processes running in the container. For a command-line spot-check, see the example that follows this procedure.

    For more information, see To Request a Resource Utilization Report for an Active Container and Viewing Container Processes.

  5. Once you have established the maximum physical memory requirement for the application, modify the container's properties to include a memory cap. Be sure not to set a cap that is less than the maximum memory the application has been using.

    For more information, see To Modify a Container Using a Property Sheet.

  6. Set an alarm so you are notified if the memory used starts to exceed the memory cap you set. Make any adjustments to the memory cap by using the property sheet.

    For more information, see To Set an Alarm Threshold and To Modify a Container Using a Property Sheet.
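In addition to the graphs described in step 4, you can spot-check a container's consumption from a terminal on the Solaris 9 host itself. These are standard resource management utilities, shown here as a hypothetical supplement to the Container Manager graphs; the project name is invented:

    # Summarize CPU and memory usage aggregated by project
    prstat -J

    # List the processes running under a particular project
    ps -e -o project,pid,args | grep ora_marketing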

After you have established resource utilization trends by using Container Manager or other suitable resource management software, you are ready to begin using containers as a tool for implementing the server consolidation plan in your production environment.

For more information about how to plan and execute a server consolidation, you can read the Sun Blueprints book Consolidation in the Data Center by David Hornby and Ken Pepple. For more information about server consolidation on systems running the ORACLE database, you can read the Sun white paper Consolidating Oracle RDBMS Instances Using Solaris Resource Manager Software available at http://wwws.sun.com/software/resourcemgr/wp-oracle/.