The grid engine system does all of the following:
Accepts jobs from the outside world. Jobs are users' requests for computer resources.
Puts jobs in a holding area until the jobs can be run
Sends jobs from the holding area to an execution device
Manages running jobs
Logs the record of job execution when the jobs are finished
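The lifecycle above can be sketched as a toy model. The class and method names here are illustrative only, assumed for this example; they are not grid engine APIs, and a real system dispatches to execution hosts rather than running jobs in place.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    submit_time: int           # when the job entered the holding area

class Cluster:
    """Toy model of the job lifecycle: accept, hold, dispatch, log."""

    def __init__(self):
        self.pending = deque() # holding area: jobs waiting to run
        self.log = []          # accounting records for finished jobs

    def submit(self, job):
        # Accept a job from the outside world and put it in the
        # holding area until it can be run.
        self.pending.append(job)

    def run_next(self, finish_time):
        # Send the oldest pending job to an execution device; when it
        # finishes, log the record of its execution.
        job = self.pending.popleft()
        self.log.append((job.name, job.submit_time, finish_time))
        return job.name

cluster = Cluster()
cluster.submit(Job("render", submit_time=0))
cluster.submit(Job("simulate", submit_time=1))
first = cluster.run_next(finish_time=5)   # oldest job runs first
```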
As an analogy, imagine a large “money-center” bank in one of the world's capital cities. In the bank's lobby are dozens of customers waiting to be served. Each customer has different requirements. One customer wants to withdraw a small amount of money from his account. Arriving just after him is another customer, who has an appointment with one of the bank's investment specialists. She wants advice before she undertakes a complicated venture. Another customer in front of the first two customers wants to apply for a large loan, as do the eight customers in front of her.
Different customers with different needs require different types of service and different levels of service from the bank. Perhaps the bank on this particular day has many employees who can handle the one customer's simple withdrawal of money from his account. But at the same time the bank has only one or two loan officers available to help the many loan applicants. On another day, the situation might be reversed.
The effect is that customers must wait for service unnecessarily. Many of the customers could receive immediate service if only their needs were immediately recognized and then matched to available resources.
If the grid engine system were the bank manager, the service would be organized differently.
On entering the bank lobby, customers would be asked to declare their name, their affiliations, and their service needs.
Each customer's time of arrival would be recorded.
Based on the information that the customers provided in the lobby, the bank would serve the following customers:
Customers whose needs match suitable and immediately available resources
Customers whose requirements have the highest priority
Customers who were waiting in the lobby for the longest time
In a “grid engine system bank,” one bank employee might be able to help several customers at the same time. The grid engine system would try to assign new customers to the least-loaded and most-suitable bank employee.
As bank manager, the grid engine system would allow the bank to define service policies. Typical service policies might be the following:
To provide preferential service to commercial customers because those customers generate more profit
To make sure a certain customer group is served well, because those customers have received bad service so far
To ensure that customers with an appointment get a timely response
To prefer a certain customer on direct demand of a bank executive
Such policies would be implemented, monitored, and adjusted automatically by a grid engine system manager. Customers with preferential access would be served sooner. Such customers would receive more attention from the employees whose assistance they share with other customers. The grid engine manager would recognize when customers are not making progress, and would immediately respond by adjusting service levels in order to comply with the bank's service policies.
In a grid engine system, jobs correspond to bank customers. Jobs wait in a computer holding area instead of a lobby. Queues, which provide services for jobs, correspond to bank employees. As in the case of bank customers, the requirements of each job, such as available memory, execution speed, available software licenses, and similar needs, can be very different. Only certain queues might be able to provide the corresponding service.
To continue the analogy, the grid engine software arbitrates available resources and job requirements in the following way:
A user who submits a job through the grid engine system declares a requirement profile for the job. In addition, the system records the identity of the user, the user's affiliation with projects or user groups, and the time at which the job was submitted.
The moment that a queue is available to run a new job, the grid engine system determines which jobs are suitable for that queue. The system immediately dispatches the job that has either the highest priority or the longest waiting time.
Queues allow concurrent execution of many jobs. The grid engine system tries to start new jobs in the least-loaded and most-suitable queue.
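The arbitration just described can be illustrated with a small sketch. The field names, the memory-only requirement profile, and the selection rule are simplifying assumptions made for this example, not the actual grid engine scheduler logic:

```python
jobs = [  # pending jobs: requirement profile, priority, submit time
    {"name": "a", "needs_mem_gb": 2,  "priority": 5, "submit": 10},
    {"name": "b", "needs_mem_gb": 16, "priority": 9, "submit": 12},
    {"name": "c", "needs_mem_gb": 1,  "priority": 5, "submit": 3},
]

queues = [  # queues that provide services, with their current load
    {"name": "short.q", "mem_gb": 4,  "load": 0.2},
    {"name": "big.q",   "mem_gb": 32, "load": 0.9},
]

def pick_job(queue):
    # Only jobs whose requirements the queue can satisfy are suitable.
    suitable = [j for j in jobs if j["needs_mem_gb"] <= queue["mem_gb"]]
    # Highest priority wins; among equals, the longest-waiting job
    # (earliest submit time) wins.
    return max(suitable, key=lambda j: (j["priority"], -j["submit"]))

def pick_queue(job):
    # Start the new job in the least-loaded queue that suits it.
    fits = [q for q in queues if job["needs_mem_gb"] <= q["mem_gb"]]
    return min(fits, key=lambda q: q["load"])

# When short.q has a free slot, job "b" does not fit, and "c" has
# waited longer than "a" at equal priority, so "c" is dispatched.
picked = pick_job(queues[0])
```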
The administrator of a cluster can define high-level usage policies that are customized according to whatever is appropriate for the site. Four usage policies are available:
Urgency. Using this policy, each job's priority is based on an urgency value. The urgency value is derived from the job's resource requirements, the job's deadline specification, and how long the job waits before it is run.
Functional. Using this policy, an administrator can provide special treatment because of a user's or a job's affiliation with a certain user group, project, and so forth.
Share-based. Under this policy, the level of service depends on an assigned share entitlement, the corresponding shares of other users and user groups, the past usage of resources by all users, and the current presence of users within the system.
Override. This policy requires manual intervention by the cluster administrator, who modifies the automated policy implementation.
Policy management automatically controls the use of shared resources in the cluster to best achieve the goals of the administration. High-priority jobs are dispatched preferentially. Such jobs receive higher CPU entitlements if the jobs compete for resources with other jobs. The grid engine software monitors the progress of all jobs and adjusts their relative priorities correspondingly and with respect to the goals defined in the policies.
The functional, share-based, and override policies are defined through a grid engine system concept that is called tickets. You might compare tickets to shares of a public company's stock. The more stock shares that you own, the more important you are to the company. If shareholder A owns twice as many shares as shareholder B, A also has twice the votes of B. Therefore shareholder A is twice as important to the company. Similarly, the more tickets that a job has, the more important the job is. If job A has twice the tickets of job B, job A is entitled to twice the resource usage of job B.
Jobs can retrieve tickets from the functional, share-based, and override policies. The total number of tickets, as well as the number retrieved from each ticket policy, often changes over time.
The administrator controls the number of tickets that are allocated to each ticket policy in total. Just as ticket allocation does for jobs, this allocation determines the relative importance of the ticket policies among each other. Through the ticket pool that is assigned to particular ticket policies, the administration can run a grid engine system in different ways. For example, the system can run in a share-based mode only. Or the system can run in a combination of modes, for example, 90% share-based and 10% functional.
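The ticket arithmetic reduces to simple proportions, as the following sketch shows. The concrete ticket counts and the 90%/10% split are assumptions taken from the example in the text, not real configuration syntax:

```python
total_tickets = 10000

# The administrator splits the ticket pool among the ticket policies,
# for example 90% share-based and 10% functional.
policy_pool = {
    "share_based": 0.90 * total_tickets,
    "functional":  0.10 * total_tickets,
    "override":    0.00 * total_tickets,
}

# Two competing jobs: A holds twice the tickets of B, so A is
# entitled to twice the resource usage of B.
tickets = {"A": 600, "B": 300}

def entitlement(job):
    # A job's share of the contested resources is its fraction of
    # the tickets held by the competing jobs.
    return tickets[job] / sum(tickets.values())

ratio = entitlement("A") / entitlement("B")   # 2.0
```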
The urgency policy can be used in combination with two other job priority specifications:
The number of tickets assigned by the functional, share-based, and override policies
A POSIX priority value that can be assigned to the job directly
A job can be assigned an urgency value, which is derived from three sources:
The job's resource requirements
The length of time a job must wait before the job runs
The time at which a job must finish running, that is, the job's deadline
The administrator can separately weight the importance of each of these sources in order to arrive at a job's overall urgency value. For more information, see Chapter 5, Managing Policies and the Scheduler, in Sun N1 Grid Engine 6.1 Administration Guide.
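One plausible reading of the weighted combination above is a simple weighted sum, sketched below. The weight names, default values, and the inverse-time deadline term are assumptions made for illustration; they are not the actual scheduler parameters, which are described in the Administration Guide:

```python
def urgency(resource_contrib, wait_time, time_to_deadline,
            w_resource=1.0, w_wait=0.1, w_deadline=3600.0):
    """Combine the three urgency sources into one value.

    resource_contrib -- contribution from the job's resource requirements
    wait_time        -- seconds the job has waited so far
    time_to_deadline -- seconds until the job must finish
    """
    # Deadline pressure grows as the deadline approaches.
    deadline_contrib = w_deadline / max(time_to_deadline, 1.0)
    return (w_resource * resource_contrib
            + w_wait * wait_time
            + deadline_contrib)

# A job that has waited 600 s and must finish within 1200 s:
u = urgency(resource_contrib=50.0, wait_time=600.0,
            time_to_deadline=1200.0)
```

Under this model, a job whose deadline is closer, or that has waited longer, receives a higher urgency value, all else being equal.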
Figure 1–2 shows the correlation among policies.