Sun Cluster Concepts Guide for Solaris OS

Data Service Project Configuration

Data services can be configured to launch under a Solaris project name when brought online by using the RGM. The configuration associates a resource or resource group managed by the RGM with a Solaris project ID. The mapping from your resource or resource group to a project ID gives you the ability to use sophisticated controls that are available in the Solaris Operating System to manage workloads and consumption within your cluster.


Note –

You can perform this configuration only if you are running the current release of Sun Cluster software on the Solaris 9 OS or a later release of the Solaris OS.


Using the Solaris management functionality in a Sun Cluster environment enables you to ensure that your most important applications are given priority when sharing a node with other applications. Applications might share a node if you have consolidated services or because applications have failed over. Use of the management functionality described herein might improve availability of a critical application by preventing lower-priority applications from overconsuming system supplies such as CPU time.


Note –

The Solaris documentation for this feature describes CPU time, processes, tasks, and similar components as “resources”. However, Sun Cluster documentation uses the term “resources” to describe entities that are under the control of the RGM. This section uses the term “resource” to refer to Sun Cluster entities that are under the control of the RGM and uses the term “supplies” to refer to CPU time, processes, and tasks.


This section provides a conceptual description of configuring data services to launch processes in a specified Solaris project(4). This section also describes several failover scenarios and suggestions for planning to use the management functionality provided by the Solaris Operating System.

For detailed conceptual and procedural documentation about the management feature, refer to System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.

When configuring resources and resource groups to use Solaris management functionality in a cluster, use the following high-level process:

  1. Configuring applications as part of the resource.

  2. Configuring resources as part of a resource group.

  3. Enabling resources in the resource group.

  4. Making the resource group managed.

  5. Creating a Solaris project for your resource group.

  6. Configuring standard properties to associate the resource group name with the project you created in step 5.

  7. Bringing the resource group online.

To configure the standard Resource_project_name or RG_project_name property and associate the Solaris project ID with the resource or resource group, use the -y option with the scrgadm(1M) command to set the property value on the resource or resource group. See Appendix A, Standard Properties, in Sun Cluster Data Services Planning and Administration Guide for Solaris OS for property definitions. Refer to r_properties(5) and rg_properties(5) for property descriptions.
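For example, the following commands are a minimal sketch that assumes a resource group named rg-app1, a resource named app1-rs, and an existing project named Prj_1 (all names are illustrative):

# scrgadm -c -g rg-app1 -y RG_project_name=Prj_1
# scrgadm -c -j app1-rs -y Resource_project_name=Prj_1

The first command sets the project name for the entire resource group. The second command sets the project name for an individual resource.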

The specified project name must exist in the projects database (/etc/project) and the root user must be configured as a member of the named project. Refer to Chapter 2, Projects and Tasks (Overview), in System Administration Guide: Solaris Containers-Resource Management and Solaris Zones for conceptual information about the project name database. Refer to project(4) for a description of project file syntax.
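As a sketch, assuming the hypothetical project name Prj_1, you might create the project with the root user as a member as follows:

# projadd -c "project for App-1" -U root -p 100 Prj_1

Adding the project.cpu-shares resource control by editing the /etc/project file would then produce an entry similar to the following:

Prj_1:100:project for App-1:root::project.cpu-shares=(privileged,4,none)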

When the RGM brings resources or resource groups online, it launches the related processes under the project name.


Note –

Users can associate the resource or resource group with a project at any time. However, the new project name is not effective until the resource or resource group is taken offline and brought back online by using the RGM.
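For example, assuming a hypothetical resource group named rg-app1, you might cycle the group with the scswitch(1M) command so that a new project name takes effect:

# scswitch -F -g rg-app1
# scswitch -Z -g rg-app1

The -F option takes the resource group offline; the -Z option brings the resource group back online.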


Launching resources and resource groups under the project name enables you to use Solaris management features, such as extended accounting, resource controls, fair share scheduling (FSS), and resource pools, to manage system supplies across your cluster.

Determining Requirements for Project Configuration

Before you configure data services to use the controls provided by Solaris in a Sun Cluster environment, you must decide how to control and track resources across switchovers or failovers. Identify dependencies within your cluster before configuring a new project. For example, resources and resource groups depend on disk device groups.

Use the nodelist, failback, maximum_primaries and desired_primaries resource group properties that are configured with scrgadm(1M) to identify nodelist priorities for your resource group.

Use the preferenced property and failback property that are configured with scrgadm(1M) and scsetup(1M) to determine disk device group nodelist priorities.
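For example, the resource group properties from the preceding paragraphs could be set with a sketch like the following, which assumes a hypothetical resource group named rg-app1 (values are illustrative):

# scrgadm -c -g rg-app1 -y Nodelist=phys-schost-1,phys-schost-2 -y Failback=TRUE
# scrgadm -c -g rg-app1 -y Maximum_primaries=1 -y Desired_primaries=1

The preferenced and failback settings for a disk device group are typically adjusted through the interactive scsetup(1M) utility.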

If you configure all cluster nodes identically, usage limits are enforced identically on primary and secondary nodes. The configuration parameters of projects do not need to be identical for all applications in the configuration files on all nodes. However, all projects that are associated with an application must, at a minimum, be accessible in the project database on all potential masters of that application. Suppose that Application 1 is mastered by phys-schost-1 but could potentially be switched over or failed over to phys-schost-2 or phys-schost-3. The project that is associated with Application 1 must be accessible on all three nodes (phys-schost-1, phys-schost-2, and phys-schost-3).


Note –

Project database information can be a local /etc/project database file or can be stored in the NIS map or the LDAP directory service.


The Solaris Operating System allows for flexible configuration of usage parameters, and Sun Cluster imposes few restrictions. Configuration choices depend on the needs of the site. Consider the general guidelines in the following sections before configuring your systems.

Setting Per-Process Virtual Memory Limits

Set the process.max-address-space control to limit virtual memory on a per-process basis. Refer to rctladm(1M) for detailed information about setting the process.max-address-space value.

When you use management controls with Sun Cluster software, configure memory limits appropriately to prevent unnecessary failover of applications and a “ping-pong” effect of applications. In general, observe the following guidelines. Do not set memory limits too low, because an application that reaches its memory limit might fail over. Also, do not set memory limits identically on primary and secondary nodes; identical limits can cause a ping-pong effect when an application reaches its limit, fails over to a secondary node with the same limit, and fails again. Setting the memory limit slightly higher on the secondary node helps prevent this scenario.
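For example, a hypothetical /etc/project entry that combines CPU shares with a per-process virtual memory cap of 4 Gbytes (4294967296 bytes; both values are illustrative) might look like the following:

Prj_1:100:project for App-1:root::project.cpu-shares=(privileged,4,none);process.max-address-space=(privileged,4294967296,deny)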

Failover Scenarios

You can configure management parameters so that the allocation in the project configuration (/etc/project) works in normal cluster operation and in switchover or failover situations.

The following sections are example scenarios.

In a Sun Cluster environment, you configure an application as part of a resource. You then configure a resource as part of a resource group (RG). When a failure occurs, the resource group, along with its associated applications, fails over to another node. In the following examples the resources are not shown explicitly. Assume that each resource has only one application.


Note –

Failover occurs in the order in which nodes are specified in the node list that is set in the RGM.


The following examples have these constraints: Application 1 is configured in resource group RG-1, Application 2 in resource group RG-2, and Application 3 in resource group RG-3.

Although the numbers of assigned shares remain the same, the percentage of CPU time allocated to each application changes after failover. This percentage depends on the number of applications that are running on the node and the number of shares that are assigned to each active application.

In these scenarios, assume that the applications are the only active processes that demand CPU time on the nodes and that the projects database files are configured identically on each node of the cluster.

Two-Node Cluster With Two Applications

You can configure two applications on a two-node cluster to ensure that each physical host (phys-schost-1, phys-schost-2) acts as the default master for one application. Each physical host acts as the secondary node for the other physical host. All projects that are associated with Application 1 and Application 2 must be represented in the projects database files on both nodes. When the cluster is running normally, each application is running on its default master, where it is allocated all CPU time by the management facility.

After a failover or switchover occurs, both applications run on a single node where they are allocated shares as specified in the configuration file. For example, these entries in the /etc/project file specify that Application 1 is allocated 4 shares and Application 2 is allocated 1 share.

Prj_1:100:project for App-1:root::project.cpu-shares=(privileged,4,none)
Prj_2:101:project for App-2:root::project.cpu-shares=(privileged,1,none)
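For example, if both applications demand CPU time on the surviving node after a failover, Application 1 receives 4 of the 5 total shares, or 80 percent of the CPU time, and Application 2 receives 1 of the 5 shares, or 20 percent.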

The following diagram illustrates the normal and failover operations of this configuration. The number of shares that are assigned does not change. However, the percentage of CPU time available to each application can change. The percentage depends on the number of shares that are assigned to each process that demands CPU time.

Illustration: The preceding context describes the graphic.

Two-Node Cluster With Three Applications

On a two-node cluster with three applications, you can configure one physical host (phys-schost-1) as the default master of one application. You can configure the second physical host (phys-schost-2) as the default master for the remaining two applications. Assume the following example projects database file on every node. The projects database file does not change when a failover or switchover occurs.

Prj_1:103:project for App-1:root::project.cpu-shares=(privileged,5,none)
Prj_2:104:project for App_2:root::project.cpu-shares=(privileged,3,none) 
Prj_3:105:project for App_3:root::project.cpu-shares=(privileged,2,none)  

When the cluster is running normally, Application 1 is allocated 5 shares on its default master, phys-schost-1. This number is equivalent to 100 percent of CPU time because it is the only application that demands CPU time on that node. Applications 2 and 3 are allocated 3 and 2 shares, respectively, on their default master, phys-schost-2. Application 2 would receive 60 percent of CPU time and Application 3 would receive 40 percent of CPU time during normal operation.

If a failover or switchover occurs and Application 1 is switched over to phys-schost-2, the shares for all three applications remain the same. However, the percentage of CPU time that is allocated to each application changes according to the projects database file.
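For example, if all three applications demand CPU time on phys-schost-2 after the failover, Application 1 receives 5 of the 10 total shares (50 percent of CPU time), Application 2 receives 3 of the 10 shares (30 percent), and Application 3 receives 2 of the 10 shares (20 percent).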

The following diagram illustrates the normal operations and failover operations of this configuration.

Illustration: The preceding context describes the graphic.

Failover of Resource Group Only

In a configuration in which multiple resource groups have the same default master, a resource group (and its associated applications) can fail over or be switched over to a secondary node while the default master remains running in the cluster.


Note –

During failover, the application that fails over is allocated supplies as specified in the configuration file on the secondary node. In this example, the project database files on the primary and secondary nodes have the same configurations.


For example, this sample configuration file specifies that Application 1 is allocated 1 share, Application 2 is allocated 2 shares, and Application 3 is allocated 2 shares.

Prj_1:106:project for App_1:root::project.cpu-shares=(privileged,1,none)
Prj_2:107:project for App_2:root::project.cpu-shares=(privileged,2,none)
Prj_3:108:project for App_3:root::project.cpu-shares=(privileged,2,none)
 

The following diagram illustrates the normal and failover operations of this configuration, where RG-2, containing Application 2, fails over to phys-schost-2. Note that the number of shares assigned does not change. However, the percentage of CPU time available to each application can change, depending on the number of shares that are assigned to each application that demands CPU time.

Illustration: The preceding context describes the graphic.
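For example, assuming that phys-schost-1 is the default master of both RG-1 (Application 1) and RG-2 (Application 2) and that phys-schost-2 is the default master of RG-3 (Application 3), during normal operation Application 1 and Application 2 share phys-schost-1 in a 1-to-2 ratio (about 33 percent and 67 percent of CPU time), while Application 3 receives all of the CPU time on phys-schost-2. After RG-2 fails over, Application 1 receives 100 percent of the CPU time on phys-schost-1, and Application 2 and Application 3 each receive 50 percent on phys-schost-2 because each holds 2 shares.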