Installing and Administering Solaris Container Manager 3.6.1

Chapter 3 About Containers and Starting the Product

This chapter describes containers and projects and how to start the product.

The following topics are discussed:

Container Overview

A project is a container that has been associated with a host. A project helps organize and manage the collection of physical system resources. A project is useful when you implement an overall server consolidation plan. Projects offer the following features:

After the software is installed and set up, several default projects are available for your immediate use. You can also create your own projects by using a wizard that guides you through the process. Every project is associated with a container, and the same container can be reused to create new projects. Projects provide the following advantages:

The GUI is browser based and provides three management views (tabs): one from the host perspective, one from the container perspective, and one for open alarms. You can further organize the host view and the container view by creating groups and selecting which elements the groups should contain.

Additionally, the processes running inside the container and the resources being used can be readily checked with the software. Several graphing options are also available to help assess the level of resource utilization per container or host, including the ability to export the data to a file. These features enable you to monitor and reassess resource consumption for the appropriate adjustments.

With the software's alarm feature, you can be notified by email when a container's resource utilization reaches a threshold that you set. Alarm icons are also visible in the GUI for both hosts and containers.

The resource change job feature enables you to schedule changes to current resource boundaries on one or more containers with one request. A wizard guides you through the steps required for creating or modifying a resource change job.

About Container Properties

The container has the following properties:

The name you assign to a container is permanent and cannot be changed. The project name is likewise permanent. The other identifying information for the container can be changed.

The container is saved by the software and is available for repeated use until it is deleted. A project is a container that has been associated with a host. A project is active when it has been associated with a host and its resource reservations have been set.

Because multiple projects, with the same definition and resource reservations, can be active simultaneously on several different hosts, the container can conveniently manage projects across the data center. After the container has been saved, it can be used at any time to activate a project on any suitable host. Thus, the container can be used as a template for creating a new project.

The container acts as a template for multiple projects. The container stores the common properties of the projects in a central location. The common properties of the projects are the following:

Other properties, such as CPU share and memory limit, are specific to a host that the project is activated on. In Solaris Container Manager 3.6, this set of common properties, which is stored centrally, is called the container. When the container is activated on a specific host, it is instantiated as a Solaris project and is stored in /etc/project.

For example, a company wants to set up a container for its email applications. The common properties of the projects would be:

When the container is activated on a specific host, the company instantiates the project and can now specify a resource pool, CPU shares, and memory limits.

Figure 3–1 Containers and Projects

Containers act as templates to create multiple projects

You can use a container to create multiple projects across zones and hosts. For example, if you use a single container to create three active projects on three different hosts, you have one container and three projects in that container. Changing the underlying information in the container changes all the projects that are based on that container.

The Project Creation wizard gives you the option to create a project that is activated on completion of all the steps. A container is created at the same time and its name is saved in the GUI. You also have the option to create just the container and activate the project at a later time with a wizard that guides you through the process.

For a container, you can perform the following tasks by using the GUI:

For a project, you can perform the following tasks by using the GUI:

Project States

A project does not actually enforce the resource consumption boundaries that you set for an application. Rather, after the minimum CPU reservation and memory cap are provided and the project is activated, the Solaris kernel begins enforcing these boundaries. Before using projects, you need to know more about project states. A project can be in one of the following three states: defined, active, and inactive.

Figure 3–2 Project States

Illustration showing project states. Surrounding text describes the context.

A project can move between these states throughout its lifetime.

Containers and Projects

The container is created during the initial stage when the project itself is still not fully formed. Each project must have a unique name and can be saved indefinitely in the database.

Figure 3–2 shows that the project moves into the active state after the container is associated with a host. An inactive project can move back into the defined state after it has been deactivated and is no longer associated with a host.

Project Activation

The first step in making a project active is to associate its container with a host. The second step is to set the resource boundaries, namely, to assign the minimum CPU reservation and the memory cap for the project. The project must be associated with a host that can support these resource boundaries. An active project can also be referred to as being deployed, in the sense that the project has been pushed out and resides on a host.

When creating an application-based project with the New Project Wizard, a match expression can be provided that identifies the processes associated with the application. All processes that correspond to the match expression are then automatically moved under this container. On project activation, an entry in the /etc/project database is created on the host that the container is associated with. Correspondingly, the matching processes are then moved under the project name for the container. After the processes are moved, all resource utilization data is collected and saved for the project.

Inactive Project

When a project is deactivated, the resource boundaries are no longer enforced. A deactivated project enters into an inactive state and is deleted from the host's /etc/project file. While inactive, the project still exists in the software's database, pending future activation. After the inactive project is reactivated, the container's resource boundaries are again enforced.

All data collected about the project's use of resources while it was active is preserved in the database. You can still request utilization reports for an inactive project for up to 30 days after the project was deactivated.

Container Manager GUI

The Container Manager software does not support the standard command-line commands for Solaris resource management. You should manage containers from the Container Manager graphical user interface (GUI) instead. The GUI is started from the Java Web Console by using a browser. The following browsers are supported:

Procedure: To Start the Container Manager GUI

Steps
  1. If your UNIX user ID is not present in the /var/opt/SUNWsymon/cfg/esusers file, create this entry.

    You must also be assigned to either the esadm or the esdomadm group.

    For instructions about creating an entry and assigning to a group, see Setting Up Users in Sun Management Center 3.6 Installation and Configuration Guide.

  2. Start a browser.

    For a list of supported browsers, see Container Manager GUI.

  3. To reach the Container Manager GUI, type:


    https://sunmc-server_machine_name:6789/containers
    

    The Java Web Console login page appears.

    Figure 3–3 Java Web Console Login Page

    Java Web Console login page with three fields: server name, user name, password.

    If the login page does not appear, you might need to restart Java Web Console. For instructions, see To Restart Java Web Console.


    Tip –

    If you reach the Console page, click the Solaris Container Manager 3.6.1 link beneath the Systems section to access the GUI.


  4. Log in to the Java Web Console by using your UNIX user ID and password.

    The Container Manager GUI appears. The screen has three tabs: Hosts, Containers, and Open Alarms.

    Figure 3–4 Container Manager Main Page

    Container Manager main page with three tabs: Hosts, Containers, Open Alarms.

Procedure: To Restart Java Web Console

If you are unable to access the Java Web Console, use this command to restart it.

Step

    As superuser (su -), restart the Java Web Console by typing:


    # /usr/sbin/smcwebserver restart
    

Container Manager GUI Tabs

The following table provides information about the tabs that appear in the right pane of Container Manager GUI.

Table 3–1 Container Manager GUI Tabs

Host (view)

    Contents – Provides information about the resource pools on the selected host.

    Properties – Provides information about the properties of the selected host, zone, project, or resource pool.

    Utilization – Provides information about a host's, zone's, project's, or pool's daily, weekly, or monthly resource utilization. Real-time utilization data is available for active projects. This tab is visible only if Performance Reporting Manager software is installed.

    Projects – Provides information about the projects that are associated with a host.

    Zones – Provides information about the zones associated with a host.

Containers (view)

    Contents – Provides information about projects.

    Properties – Provides information about the properties of the selected host, container, project, or resource pool.

    Utilization – Provides information about a host's, zone's, project's, or pool's daily, weekly, or monthly resource utilization. Real-time utilization data is available for active projects. This tab is visible only if Performance Reporting Manager software is installed.

    Jobs (Resource Change Jobs) – Provides information about scheduled resource change jobs. You can also create a new resource change job from this tab. Note that default containers cannot have resource change jobs associated with them.

Open Alarms

    Provides information about open alarms, including severity, message, managed object, start time, and acknowledgment.

Resource Pool (drill down)

    Contents – Provides information about the zones on the selected resource pool.

    Properties – Provides information about the properties of the selected resource pool.

    Utilization – Provides information about a pool's daily, weekly, or monthly resource utilization. This tab is visible only if Performance Reporting Manager software is installed.

    Projects – Provides information about the projects that are associated with the selected resource pool.

Zone (drill down)

    Contents – Provides information about the projects on the selected zone.

    Properties – Provides information about the properties of the selected zone.

    Utilization – Provides information about a zone's daily, weekly, or monthly resource utilization. This tab is visible only if Performance Reporting Manager software is installed.

Project (drill down)

    Properties – Provides information about the properties of the selected project.

    Utilization – Provides information about a project's daily, weekly, or monthly resource utilization. This tab is visible only if Performance Reporting Manager software is installed.

    Processes – Provides information about the processes of the selected project.

    Alarm Thresholds – Used to set or remove alarm thresholds.

Hosts View

The Hosts view organizes information from the host perspective. All agent machines that you are managing appear in the navigation window. The resource pools that are available for each host are shown when you click the expansion triangle beside the host name. You can also manage the containers that are associated with the host from this view.

All agent hosts that have the software installed are automatically discovered and added to the Hosts view. This view is accessed from the left tab in the navigation window. All agent hosts that are discovered are initially placed under a default group titled Hosts. You can further organize this view by creating new groups and moving the hosts to relevant groups.


Note –

Only those agent machines that are part of the Sun Management Center server context and that have Solaris Container Manager 3.6 installed are loaded into the Hosts view. For more information about server context, see Sun Management Center Architecture in Sun Management Center 3.6.1 User’s Guide.


The tabs and information that are available in the Hosts view are listed in Table 3–1.

Information about every project instance that is associated with a host is listed in the Project table.

The following figure shows the Hosts view with the project table that is associated with the default pool.

Figure 3–5 Sample: Hosts View Showing the Project Table

Screen capture of the Project Table in the Hosts View. Surrounding text describes the context.

The Project table provides information about each project, detailing one project per row. The Project table provides the following data:

Project Name

Name of the project

Container Name

Name of the container

Status

State of the project: active or inactive

Resource Pool Name

Resource pool to which the project is bound

Zone Name

Name of the zone where the project resides. For Solaris 8 and Solaris 9 hosts, the zone name is always global.

CPU Reservation (CPU shares)

Minimum CPU shares set for the project

CPU Usage (CPUs)

Amount of CPU the project is using

Memory Cap (MB)

Maximum memory limit in megabytes

Memory Usage (MB)

Memory used by the project in megabytes

Shared Memory (MB)

Total amount of shared memory, in megabytes, that the processes running in this project are allowed to use

The Resource Pool table provides information about each resource pool. The Resource Pool table provides the following data:

Resource Pool Name

Name of the resource pool

Current CPU(s)

Number of CPUs currently set for the resource pool

Unreserved CPU Shares

CPU shares that are not assigned to the zones or projects in the resource pool

Scheduler

Scheduler set for the resource pool: time-sharing scheduler or fair share scheduler

CPU Shares

CPU shares set for the resource pool

Minimum CPU Reservation

Minimum number of CPUs set for the resource pool

Maximum CPU Reservation

Maximum number of CPUs set for the resource pool

The Zone table provides information about each zone. The Zone table provides the following data:

Zone Name

Name of the zone

Zone State

State of the zone: configured, incomplete, installed, ready, running, shutting down, or down

Zone Host Name

Unique name for the zone as a virtual host

Zone Path

Absolute path that starts from the root (/) directory

IP Address

IP address for the zone

Project CPU Shares

Number of CPU shares that are allocated to the projects in the zone

Unreserved CPU Shares

Number of CPU shares available for allocation to projects associated with this zone

Reserved CPU Shares

Number of CPU shares that are allocated to this zone in the resource pool

Resource Pool

Resource pool for the zone

Containers View

The Containers view organizes information from the container perspective. All containers and projects appear in the navigation window. Because containers can be used repeatedly to make new projects, you can readily access the containers from this view, as well as perform other management tasks.

After installation and setup are complete, the Containers view automatically adds the Containers group as a default. Containers are managed from the Containers view.

The following figure shows the Containers view.

Figure 3–6 Sample: Containers View Showing the Hosts Associated With the Default Container

Screen capture of the Containers View. Surrounding text describes the context.

The information that is available in the Containers view is listed in Table 3–1.

Organizing Hosts and Containers With Groups

The Hosts view contains the default group Hosts. All hosts that are discovered after installation of the software are placed in this group. Likewise, the Containers view has a default group named Default in which all the default containers of a host are placed. You can create additional groups in each view for organizing the hosts and containers.

You might use groups to organize the tens or hundreds of systems in a data center. For example, you might put hosts that are located together in a group. You might put containers owned by the same customer (internal or external) or department in a group. Likewise, you might put containers that run a similar application in a group.

Procedure: To Create a Container Group or Host Group

Steps
  1. If the Container Manager GUI is not already open, access it as described in To Start the Container Manager GUI.

  2. Select the appropriate view from the navigation window.

    • For a new container group, select the Containers view. The Container table is displayed in the right pane.

    • For a new host group, select the Hosts view. The Hosts and Groups table is displayed in the right pane.

  3. Click the New Group button.

    A dialog box appears.

  4. Provide a name for the group, and click OK.

    The name cannot exceed 32 characters.

    The new group appears in the selected view.

Procedure: To Move a Container or Host to a Different Group

Steps
  1. If the Container Manager GUI is not already open, access it as described in To Start the Container Manager GUI.

  2. Select the appropriate view from the navigation window.

    • To move a container to a different group, select the Containers view. The Containers table is displayed in the right pane.

    • To move a host to a different group, select the Hosts view. The Hosts and Groups table is displayed in the right pane.

  3. To enable the Move button in the table, select the check box for the container or host that is to be moved.

  4. In the right pane, click the Move button.

    A dialog box lists the available groups.

  5. Select the group to which the container or host is to be moved.

  6. Click OK.

    The container or host is moved to the selected group.

Default Containers

After the software is set up, the Containers view is initially loaded with a group titled Default. This group holds the following five default containers on a host that runs the Solaris 9 or Solaris 10 Operating System (OS):

Each of the five default containers has a corresponding entry in the /etc/project file. Specifically, the five entries correspond to default, noproject, user.root, system, and group.staff.
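On a Solaris 9 or Solaris 10 host, these default entries typically look like the following in /etc/project. The project IDs shown are the standard Solaris defaults; verify against your own file:

```
system:0::::
user.root:1::::
noproject:2::::
default:3::::
group.staff:10::::
```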


Note –

On a host that runs the Solaris 8 release, the Users with Group Staff (group.staff) container does not exist. Otherwise, the default containers are the same.


Figure 3–7 Sample: System Containers Group With Containers Showing

Screen capture of the System Containers group with contents showing. Surrounding text describes the context.

Each default container is in the active state, and the boundaries are set at a minimum CPU reservation of 1 CPU share and no memory cap. A default container is always bound to the default resource pool (pool_default) of the host. You can monitor the resource utilization and run reports on each default container if you have Performance Reporting Manager installed.

These default containers cannot be deactivated, edited, or deleted. Each container is labeled Read Only accordingly.

Every UNIX user is assigned to a default project and is correspondingly assigned to a default container. Initially, the default containers hold all processes that are running on the system. As you create projects, processes are moved from the corresponding default container into the project you create.

About Container Creation

Every project starts with a container. A project can be one of three types, depending on the project type that is selected during its creation. The project type determines how processes are tracked.

Project Types

When creating a new container, you must select the project type. A project is a network-wide administrative identifier (ID) for related work. All processes that run in a container have the same project ID, and a container tracks the resources being used with the project ID. The container type is based on which project type is selected when creating the container.

Every container has a project name that is a permanent part of its information. When a container is activated on a host, this project name is added to that host's /etc/project file. This entry remains as long as the container is active on that host.

You cannot have two projects with the same project name active on a host at the same time. This is because processes that run in a container are tracked with the project ID, so every project name on a host must be unique.

When creating user-based and group-based projects, the user or group name becomes part of the project name. For user-based containers, the project name becomes user.username. For group-based containers, the project name becomes group.groupname. Therefore, when creating user-based or group-based projects, you cannot use a user name or group name that duplicates the /etc/project entries for the default containers. For more information, see Default Containers.
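For illustration only, here is how such entries might look in a host's /etc/project file. The user name jsmith, group name eng, project IDs, and share values are invented; the fields are project name, project ID, comment, user list, group list, and attributes:

```
user.jsmith:1001::jsmith::project.cpu-shares=(privileged,25,none)
group.eng:1002:::eng:project.cpu-shares=(privileged,40,none)
```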

You provide a project name of your choosing as part of the creation process for application-based containers. The Project Creation wizard accepts duplicate project names for different application-based projects. But two application-based projects that have the same project name cannot be active on the same host at the same time. Reuse project names when creating application-based projects only if you plan to activate these containers on different hosts. If you try to activate a second project on a host that already has a project with the identical project name, the activation fails.

The following table provides details about the three project types that are available and which changes occur based on the selection.

Table 3–2 Project Type Details

Project Type 

OS Version 

Details 

User-Based 

Solaris 8 

Only type of project supported in the Solaris 8 release. 

The project name in the /etc/project file becomes user.username. The project becomes the user's primary default project.

 

Solaris 9 and Solaris 10 

The project name in the /etc/project file becomes user.username, with a list of UNIX users who can join this project.

Valid form is username.

Group-Based 

Solaris 9 and Solaris 10 

The project name in the /etc/project file becomes group.groupname.

Valid form is groupname.

Application-Based 

Solaris 9 and Solaris 10 

The project name can be the application name or any other chosen name. The name that is provided is added to the /etc/project file.

A match expression can be provided for automatically moving the matching processes to the project name. This expression is case sensitive. 

The corresponding username or groupname under which the processes currently run must be provided.

About Making Resource Reservations (CPU Shares)

Before you begin using projects to manage an application's resources, you must first know the resource trends for the application. Every project must have resource reservations set: a minimum CPU share and, optionally, a maximum memory reservation (memory cap). The performance of certain applications, such as ORACLE®, is significantly degraded if the memory cap is inadequate. You should begin using projects to manage these reservations only after you have established the resource requirements for the applications.


Caution –

Do not set a physical memory cap for a project that is less than what the application typically uses. This practice affects the application's performance adversely and might result in significant delays because of higher paging and swapping as the application is required to use more virtual memory.


You must have your server consolidation plan finalized before you start using projects to manage system resources. An important related task is to identify the trends in the resource consumption of the applications you include in your consolidation plan. Ideally, you identify the trends in resource utilization of the application for at least a month in your test environment before implementing your plan in your production environment. After you have established the CPU and memory consumption trends, you should allow at least a few percentage points above the typical memory requirement.

When making a reservation for the amount of CPU shares that is needed by the project, you assign the amount of CPU as an integer. For example, 25, 1, and 37 are all valid amounts. The term share is used to define a portion of the system's CPU resources that is allocated to a project. If you assign a greater number of CPU shares to a project, relative to other projects, the project receives more CPU resources from the fair share scheduler.

CPU shares are not equivalent to percentages of CPU resources. Shares are used to define the relative importance of workloads in relation to other workloads. For example, if the sales project is twice as important as the marketing project, the sales project should be assigned twice as many shares as the marketing project. The number of shares you assign is irrelevant; 2 shares for the sales project versus 1 share for the marketing project is the same as 18 shares for the sales project versus 9 shares for the marketing project. In both cases, the sales project is entitled to twice the amount of CPU as the marketing project.
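Because shares are purely relative, the entitlement each project receives can be sketched in a few lines of Python. The project names and share counts below are illustrative, not Container Manager defaults:

```python
# Minimal sketch of relative CPU shares under the fair share scheduler.

def cpu_entitlement(shares):
    """Return each project's fraction of the pool's CPU resources,
    given its share count relative to all active projects."""
    total = sum(shares.values())
    return {name: count / total for name, count in shares.items()}

# 2 shares versus 1 share ...
a = cpu_entitlement({"sales": 2, "marketing": 1})
# ... entitles sales to the same two-thirds as 18 shares versus 9 shares.
b = cpu_entitlement({"sales": 18, "marketing": 9})
print(a["sales"], b["sales"])  # both are two-thirds
```

The absolute numbers never matter; only the ratio between projects does.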

CPU shares can be further broken down into two categories:

CPU Shares Assigned During Pool or Project Creation

On hosts running the Solaris 8 OS, only one resource pool, pool_default, is available. The pool_default has a value of 100 CPU shares.

On hosts running the Solaris 9 and Solaris 10 OS, when you create a new resource pool, you establish the value of the CPU shares for the pool. Solaris Container Manager provides a default value, but you can enter any integer. Some system administrators use a formula of 100 CPU shares per CPU available to the resource pool. For example, you might assign 100 CPU shares to a pool that has 1 CPU.

Let's say, for this pool, you have three projects: Project X, Project Y, and Project Z. You assign the most important project, Project X, 50 CPU shares; Project Y, 10 shares; and Project Z, 40 shares.

Figure 3–8 Project CPU Shares

CPU shares of a project

You assign the CPU shares to the project when you create the project by using the New Project wizard. The New Project wizard shows the Unreserved CPU shares for the pool so you can determine the CPU shares available and assign an appropriate amount to the project.

Figure 3–9 CPU Shares

Assign CPU shares to the project

(Solaris 10 only) CPU Shares Assigned During Zone Creation

If your host runs on the Solaris 10 Operating System, you can create zones and assign CPU shares for the zone as a whole and Project CPU shares for the projects in the zone. These are related entities.

You assign the CPU shares and Project CPU shares during zone creation by using the New Zone wizard. In Step 4 of the New Zone wizard, you select a resource pool. The wizard shows the Total CPU Shares for the pool and the Total Available CPU Shares for the pool.

You enter a value for the CPU shares that you want to allocate to this zone from the resource pool. This integer must be less than or equal to the Total Available CPU Shares for the pool.

Figure 3–10 Zone Shares

Assign CPU shares to the zone

If the pool has a Total Available CPU Shares of 100, then you can assign this zone all or some of the 100 shares. In this example, let's say we assign the zone 20 CPU shares from the resource pool.

Project CPU Shares Assigned During Zone Creation

In Step 4 of the New Zone wizard, you can also enter the Project CPU Shares. This field specifies the number of CPU shares that are allocated to projects in the zone and establishes the Project CPU share value for the zone. You can enter any integer; the integer determines the granularity that you want to achieve.

For example, let's say we assign the Project CPU shares for Zone A to be 1000. On a physical level, the 1000 Project CPU shares are the zone's 20 CPU shares, inherited from the resource pool, divided into 1000 parts. Here is a formula that shows the relationship between 1 Project CPU share and CPU shares in this example:

1 Project CPU share = 20 (number of CPU shares allocated to the zone)/1000 (number of Project CPU shares) = 0.02 CPU shares

When you create a project, Project 1, in Zone A, Project 1 gets its shares from the zone and not directly from the resource pool. If Project 1 is assigned 300 shares in Zone A, then it gets 300 Project CPU shares, which is 300 x 0.02 = 6 CPU shares, or 300/1000 x 20/100 = 0.06 (6 percent) of the pool's CPU shares.
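The share arithmetic in this example can be checked with a short Python sketch. The values are taken from the example above; this is an illustration of the calculation, not a Container Manager interface:

```python
# Sketch of the Project CPU share arithmetic, using the values from
# the example above (100 pool shares, 20 zone shares, 1000 Project CPU shares).
pool_total_shares = 100     # Total Available CPU Shares of the pool
zone_cpu_shares = 20        # CPU shares assigned to Zone A from the pool
zone_project_shares = 1000  # Project CPU shares defined for Zone A

# One Project CPU share, expressed in pool CPU shares:
one_project_share = zone_cpu_shares / zone_project_shares
print(one_project_share)    # 0.02 CPU shares

# Project 1 holds 300 Project CPU shares:
project1_cpu_shares = 300 * one_project_share
project1_pool_fraction = project1_cpu_shares / pool_total_shares
print(project1_cpu_shares, project1_pool_fraction)  # 6.0 CPU shares, 0.06 of the pool
```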

Figure 3–11 Zone CPU Shares

CPU shares of a zone

You assign the Project CPU shares to the project when you invoke the New Project wizard. In Step 7 of the New Project wizard, Provide Resource Reservations for the Project, you enter the Project CPU shares in the field labeled CPU Reservations (CPU Shares). This applies only when you create a project in a zone on a Solaris 10 host.

Figure 3–12 Project CPU Shares

Assign project CPU shares to a project


Note –

When you create a project on a Solaris 8 or Solaris 9 host, the field Unreserved CPU Shares is used for entering CPU shares (not Project CPU shares).



Caution –

Do not use the command line (zonecfg command) to change the CPU shares manually. This will interfere with the Solaris Container Manager calculations.


The Global Zone and Its Projects

The global zone is the only zone that is not bound to a single resource pool. It can get CPU resources from any pool. Projects in the global zone can obtain CPU resources from every resource pool on the host because a hidden global zone is present in every resource pool on the host.

For example, the resource pool pool_default has 4 CPUs and has zone_1 and zone_2 deployed on it. pool_default has 10 CPU shares: zone_1 has 5 CPU shares, zone_2 has 4 CPU shares, and the global zone has 1 CPU share.

Another resource pool, pool_1, has 2 CPUs and 10 CPU shares. pool_1 has only one zone, zone_3, deployed. zone_3 has 9 CPU shares, and the global zone has 1 CPU share.

The projects in the global zone get their CPU resources from the 1 CPU share of the pool that they are deployed on.
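The arithmetic behind this example can be sketched in Python. The pool and zone names match the example above; this is an illustration of the share calculation, not a Container Manager API:

```python
# Arithmetic sketch of the global zone's CPU entitlement in each pool,
# using the figures from the example above.
pools = {
    "pool_default": {"cpus": 4, "zone_shares": {"zone_1": 5, "zone_2": 4, "global": 1}},
    "pool_1":       {"cpus": 2, "zone_shares": {"zone_3": 9, "global": 1}},
}

global_zone_cpus = {}
for name, pool in pools.items():
    total = sum(pool["zone_shares"].values())          # 10 shares in each pool
    fraction = pool["zone_shares"]["global"] / total   # global zone's 1 share
    global_zone_cpus[name] = fraction * pool["cpus"]

print(global_zone_cpus)  # roughly 0.4 CPU in pool_default, 0.2 CPU in pool_1
```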

In Solaris Container Manager, projects in the global zone must be deployed to pool_default.

Fair Share Scheduler (FSS)

Container Manager uses the fair share scheduler (FSS), the default scheduler, to enforce the minimum CPU shares that you set. The fair share scheduler calculates the proportion of CPU allocated to a project by dividing the project's shares by the total shares of all active projects. An active project is a project with at least one process that uses the CPU. Shares for idle projects, that is, projects with no active processes, are not used in the calculations.

For example, three projects, sales, marketing, and database, have two, one, and four shares allocated respectively. All projects are active. The CPU resources of the resource pool are distributed this way: the sales project receives 2/7ths, the marketing project receives 1/7th, and the database project receives 4/7ths of the CPU resources. If the sales project is idle, then the marketing project receives 1/5th and the database project receives 4/5ths of the CPU resources.

Note that the fair share scheduler only limits CPU usage if there is competition for the CPU. A project that is the only active project on the system can use 100 percent of the CPU, regardless of the number of shares it holds. CPU cycles are not wasted. If a project does not use all of the CPU that it is entitled to use because it has no work to perform, the remaining CPU resources are distributed among other active processes. If a project does not have any CPU shares defined, it is assigned one share. Processes in projects with zero (0) shares are run at the lowest system priority. These processes only run when projects with nonzero shares are not using CPU resources.
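The fair share calculation described above can be sketched as a short Python function. This is a simplified model, not the scheduler itself: the function name is hypothetical, and the real FSS recalculates allocations continuously as projects become active or idle.

```python
from fractions import Fraction

def fss_allocation(shares, active):
    """Fraction of pool CPU each active project receives under FSS.

    shares: dict of project name -> allocated CPU shares
    active: set of projects with at least one runnable process
    (idle projects' shares are ignored, as in the scheduler)
    """
    # A project with no shares defined is treated as holding one share;
    # an explicit 0 stays 0 (such projects run only at lowest priority).
    effective = {p: shares.get(p, 1) for p in active}
    total = sum(effective.values())
    return {p: Fraction(s, total) for p, s in effective.items()}

shares = {"sales": 2, "marketing": 1, "database": 4}

# All three projects active: allocations are 2/7, 1/7, and 4/7.
print(fss_allocation(shares, {"sales", "marketing", "database"})["sales"])  # 2/7

# With sales idle, only 5 shares compete: marketing gets 1/5, database 4/5.
print(fss_allocation(shares, {"marketing", "database"})["marketing"])  # 1/5
```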

Timesharing Scheduler (TS)

The timesharing scheduler (TS) tries to give every process relatively equal access to the available CPUs, allocating CPU time based on priority. Because the TS does not need to be administered, it is easy to use. However, the TS cannot guarantee performance to a specific application. Use the TS if guaranteed CPU allocation is not required.

For example, if two projects are assigned to an FSS resource pool and each has two shares, the number of processes running in those projects is irrelevant. Each project can access only 50 percent of the available CPU. Thus, if one process is running in the sales project and 99 processes are running in the marketing project, the one process in the sales project can access 50 percent of the CPU. The 99 processes in the marketing project must share the other 50 percent of the available CPU resources.

In a TS resource pool, the CPU is allocated per process. The one process in the sales project has access to only 1 percent of the CPU, while the 99 processes in the marketing project have access to 99 percent of the available CPU resources.
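The contrast between the two schedulers can be sketched numerically. The model below is illustrative only (the function name is hypothetical and both projects are assumed to hold equal FSS shares, as in the example above): FSS divides the pool between projects and then among each project's processes, whereas TS divides it among all processes directly.

```python
from fractions import Fraction

def per_process_share(project_procs, scheduler):
    """Approximate per-process CPU fraction for equally weighted projects.

    FSS: the pool is split evenly across projects (equal shares assumed),
         then each project's slice is shared by its own processes.
    TS:  every runnable process gets roughly an equal slice of the pool.
    """
    n_projects = len(project_procs)
    total_procs = sum(project_procs.values())
    if scheduler == "FSS":
        return {p: Fraction(1, n_projects) / n for p, n in project_procs.items()}
    return {p: Fraction(1, total_procs) for p in project_procs}

procs = {"sales": 1, "marketing": 99}

print(per_process_share(procs, "FSS")["sales"])      # 1/2: the lone sales process gets half the pool
print(per_process_share(procs, "FSS")["marketing"])  # 1/198: 99 processes share the other half
print(per_process_share(procs, "TS")["sales"])       # 1/100: about 1 percent under timesharing
```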

For more information about the fair share scheduler or the timesharing scheduler, see System Administration Guide: Network Services.

Using Container Manager to Trend Application Resource Consumption

You can use Container Manager in your test environment as a tool to help trend application resource consumption by doing the following:

  1. Installing and setting up the Container Manager software along with any required software.

    For information, see Chapter 2, Container Manager Installation and Setup.

  2. Installing Performance Reporting Manager on all agent machines you want to monitor.

    For more information, see Chapter 2, Container Manager Installation and Setup and Sun Management Center 3.6.1 Performance Reporting Manager User’s Guide.

  3. Creating an active application-based container for the application you want to trend. In the New Project wizard, make a minimum CPU reservation only. Do not set a memory cap.

    For more information, see Creating an Application-Based Project and To Create an Application-Based Project.

  4. Monitoring the resources used for a couple of weeks with daily, weekly, or real-time graphs. Two graphs, one each for CPU and memory resources used, are available for the container that is running on an individual host. You can also view the Processes table to monitor processes running in the application.

    For more information, see To Request a Resource Utilization Report for an Active Project and Viewing Project Processes.

  5. Modifying the container's properties to include a memory cap after you have established the maximum physical memory requirement for the application. Do not set a cap that is less than the maximum memory the application has been using.

    For more information, see To Modify a Project Using a Property Sheet.

  6. Setting an alarm so you are notified if the memory used starts to exceed the memory cap set. Make any adjustments to the memory cap using the Properties sheet.

    For more information, see To Set an Alarm Threshold and To Modify a Project Using a Property Sheet.

After you have established resource utilization trends by using Container Manager, you can use containers to consolidate servers in your production environment.

For more information about how to plan and execute a server consolidation, you can read the Sun Blueprints book Consolidation in the Data Center by David Hornby and Ken Pepple. For more information about server consolidation on systems running the ORACLE database, you can read the Sun white paper Consolidating Oracle RDBMS Instances Using Solaris Resource Manager Software.