Whether you have installed previous versions of the grid engine software or this is your first time, you will need to do some planning before you extract and install the software. This section describes the decisions you must make, and, wherever possible, gives you criteria on which you can base your decisions.
You must make several decisions before you can plan the installation:
Decide whether your system of networked computer hosts that run N1 Grid Engine 6.1 software is to be a single cluster or a collection of sub-clusters, called cells. Cells enable you to install separate instances of the grid engine software but share the binary files across those instances.
Select the machines that are to be grid engine system hosts. Determine the host type of each machine: master host, shadow master host, administration host, submit host, execution host, or a combination.
Ensure that all users of the grid engine system have the same user names on all submit and execution hosts.
Hosts running the Windows operating system cannot be master hosts or shadow master hosts.
Decide how to order grid engine software directories. For example, you could organize directories as a complete tree on each workstation, or you could cross-mount directories, or you could set up a partial directory tree on some workstations. You must also decide where to locate each grid engine software installation directory, sge-root.
Decide on the site's queue structure.
Determine whether to define network services as an NIS file or as local to each workstation in /etc/services.
Use the information in this chapter to gather the information necessary to complete the installation worksheet.
Before you install the grid engine software, you must plan how to achieve the results that fit your environment. This section helps you make the decisions that affect the rest of the procedure. Write down your installation plan in a table similar to the following example.
Parameter | Value
---|---
sge-root directory | 
Cell name | 
Administrative user | 
sge_qmaster port number | 6444 is recommended
sge_execd port number | 6445 is recommended
Master host | 
Shadow master hosts | 
Execution hosts | 
Administration hosts | 
Submit hosts | 
Group ID range for jobs | 
Spooling mechanism (Berkeley DB or classic spooling) | 
Berkeley DB server host (the master or another host) | 
Berkeley DB spooling directory on the database server | 
Scheduler tuning profile (normal, high, or max) | 
Installation method (interactive, secure, automated, or upgrade) | 
If you are going to install N1 Grid Engine 6.1 on a Windows system, acquire and install Microsoft Services For UNIX. See Appendix A, Microsoft Services For UNIX for more information.
If you are going to install N1 Grid Engine 6.1 on a Windows system, create the required Certificate Security Protocol (CSP) certificates before installing N1GE. See How to Install a CSP-Secured System for information about CSP certificates.
Check Appendix C, Other N1 Grid Engine Installation Issues for applicability.
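The worksheet values map naturally onto the configuration file used for automated installation. The following is a hypothetical excerpt; the parameter names assume the inst_template.conf file shipped with the distribution (check your release's copy for the exact names), and all host names and values are site-specific examples, not defaults.

```shell
# Hypothetical excerpt of an automated-installation configuration file.
# Parameter names are assumptions based on the shipped inst_template.conf;
# every value below is a site-specific example from the worksheet.
SGE_ROOT="/usr/N1GE6"
CELL_NAME="default"
ADMIN_USER="sgeadmin"
SGE_QMASTER_PORT="6444"        # recommended value from the worksheet
SGE_EXECD_PORT="6445"          # recommended value from the worksheet
GID_RANGE="20000-20100"
SPOOLING_METHOD="berkeleydb"   # or "classic"
DB_SPOOLING_SERVER="none"      # spool locally on the master host
EXEC_HOST_LIST="host1 host2"   # hypothetical execution hosts
```

A file of this kind is passed to the inst_sge script when you choose the automated installation method described later in this chapter.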
The grid engine software directory tree has the following fixed disk space requirements:
40 Mbytes for the installation files without any binaries
Between 10 and 15 Mbytes for each set of binaries
The ideal disk space for grid engine system spool directories is as follows:
10-200 Mbytes for the master host spool directories
10-200 Mbytes for the Berkeley DB spool directories
The spool directories of the master host and of the execution hosts are configurable and need not reside under the default location, sge-root.
You must satisfy several Windows-specific prerequisites before you can install N1 Grid Engine on hosts that are running the Windows operating system. You might need to install additional software, which might require additional disk space. See Appendix A, Microsoft Services For UNIX.
Create a directory into which you will load the contents of the distribution media. This directory is called the root directory, or sge-root. When the grid engine system is running, this directory stores the current cluster configuration and all other data that must be spooled to disk.
Spool areas do not have to reside under sge-root. In fact, you might want to locate them elsewhere for efficiency reasons.
Use a valid path name for the directory that is network accessible on all hosts. For example, if the file system is mounted using automounter, set sge-root to /usr/N1GE6, not to /tmp_mnt/usr/N1GE6. Throughout this document, the sge-root variable is used to refer to the installation directory.
sge-root is the top level of the grid engine software directory tree. Each grid engine system component in a cell needs read access to the sge-root/cell/common directory on startup. When grid engine software is installed as a single cluster, the value of cell is default.
For ease of installation and administration, this directory should be readable on all hosts on which you intend to run the grid engine software installation procedure. For example, you can select a directory available across a network file system, such as NFS. If you choose to select file systems that are local to the hosts, you must copy the installation directory to each host before you start the installation procedure for the particular machine. See File Access Permissions for a description of required permissions.
When determining the directory organization, you must decide the following:
The directory organization, for example, whether you will install a complete software tree on each workstation, directories cross-mounted, or a partial directory tree on some workstations
Where to locate each root directory, sge-root
Because changing the installation directory or the spool directories requires a new installation of the system, use extra care to select a suitable installation directory up front. Note that all important information from a previous installation can be preserved.
By default, the installation procedure installs the grid engine software, manuals, spool areas, and the configuration files in a directory hierarchy under the installation directory as shown in Figure 1–1. If you accept this default behavior, you should install or select a directory with the access permissions that are described in File Access Permissions.
You can select the spool areas to put in other locations during the primary installation. See Configuring Queues in Sun N1 Grid Engine 6.1 Administration Guide for instructions.
You can set up the grid engine system as a single cluster or as a collection of loosely coupled clusters called cells. The $SGE_CELL environment variable indicates the cluster being referenced. When the grid engine system is installed as a single cluster, $SGE_CELL is not set, and the value default is assumed for the cell value.
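The convention above can be sketched in shell terms. The sge-root path is an example from this chapter, and the fallback mirrors the documented behavior when $SGE_CELL is unset:

```shell
# Resolve the cell the way grid engine components do: an unset SGE_CELL
# means the single-cluster cell name "default".
SGE_ROOT=/usr/N1GE6            # example installation path from this chapter
unset SGE_CELL                 # single-cluster installation: variable not set
cell=${SGE_CELL:-default}
common_dir="$SGE_ROOT/$cell/common"
echo "$common_dir"             # -> /usr/N1GE6/default/common
```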
In order for the grid engine system to verify that users submitting jobs have permission to submit them on the desired execution hosts, users' names must be identical on the submit and execution hosts involved. You might therefore have to change user names on some machines, because grid engine system users map directly to system user accounts.
User names on the master host are not relevant for permission checking. These user names do not have to match or even exist.
You can install the grid engine software either as the root user or as an unprivileged user, for example, under your own user account. However, if you install the software as an unprivileged user, only that user can run grid engine system jobs; access is denied to all other accounts. Installing the software as root removes this restriction, but root permission is then required for the complete installation procedure. In addition, if you install as an unprivileged user, you cannot use the qrsh, qtcsh, or qmake commands, nor can you run tightly integrated parallel jobs.
If you install the software logged in as root, you might have a problem configuring root read/write access for all hosts on a shared file system. Therefore, you might have problems putting sge-root onto a network-wide file system.
You can force grid engine software to run all grid engine system components through a non-root administrative user account, for example called sgeadmin. With this setup, this particular user needs only read/write access to the shared sge-root file system.
The installation procedure asks whether files should be created and owned by an administrative user account. If you answer “Yes” and provide a valid user name, files are created by this user. Otherwise, the user name under which you run the installation procedure is used. Create an administrative user, and answer “Yes” to this question.
Make sure in all cases that the account used for file handling on all hosts has read/write access to the sge-root directory. Also, the installation procedure assumes that the host from which you access the grid engine software distribution media can write to the sge-root directory.
The name of the root user on Windows hosts depends on the system language of the Windows operating system. You can even change the name of the root user. The default name for many languages is the name Administrator.
If your Windows host is a member of a Windows domain, only the local Administrator is the root user. Neither the members of the Administrators group, nor the domain Administrator, nor a member of the Domain Admins group are the root user. See Appendix B, User Management For N1GE on Windows Hosts for more information about users on Windows hosts.
Determine whether your site's network services are defined in an NIS database or in an /etc/services file that is local to each workstation. If your site uses NIS, find out the host name of your NIS server so that you can add entries to the NIS services map.
The grid engine system services are sge_execd and sge_qmaster. To add the services to your NIS map, choose reserved, unused port numbers. The following examples show sge_qmaster and sge_execd entries.
```
sge_qmaster     6444/tcp
sge_execd       6445/tcp
```
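As a sanity check before propagating the entries to NIS or to each host's /etc/services file, you can parse them back out of a scratch copy. This sketch assumes the recommended port numbers:

```shell
# Stage the two grid engine service entries in a scratch file and confirm
# they parse as expected before adding them to NIS or /etc/services.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
sge_qmaster     6444/tcp
sge_execd       6445/tcp
EOF
qmaster_port=$(awk '$1 == "sge_qmaster" { print $2 }' "$tmp")
execd_port=$(awk '$1 == "sge_execd" { print $2 }' "$tmp")
echo "sge_qmaster -> $qmaster_port"   # -> sge_qmaster -> 6444/tcp
echo "sge_execd -> $execd_port"       # -> sge_execd -> 6445/tcp
rm -f "$tmp"
```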
The master host controls the grid engine system. This host runs the master daemon sge_qmaster, and the scheduling daemon, sge_schedd.
The master host must comply with the following requirements:
The host must be a stable platform.
The host must not be excessively busy with other processing.
At least 60 – 120 Mbytes of unused main memory must be available to run the grid engine system daemons. For very large clusters that include many hundreds or thousands of hosts and tens of thousands of jobs in the system at any time, 1 GByte or more of unused main memory might be required and two CPUs might be beneficial.
The master host must be installed before any shadow master, execution, administration, or submit hosts.
(Optional) The grid engine software directory, sge-root, should be installed locally, to cut down on network traffic.
Windows hosts cannot act as master hosts.
These hosts back up the functionality of sge_qmaster in case the master host or the master daemon fails. To be a shadow master host, a machine must have the following characteristics:
It must run sge_shadowd.
It must share sge_qmaster status, job information, and queue configuration information that is logged to disk. In particular, the shadow master hosts need read/write root or administration user access to the sge_qmaster spool directory and to the sge-root/cell/common directory.
The sge-root/cell/common/shadow_masters file must contain a line defining the host as a shadow master host.
If no cell name is specified during installation, the value of cell is default.
The shadow master host facility is activated for a host as soon as these conditions are met. You do not need to restart the grid engine system daemons to make a host into a shadow master host.
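For illustration, a shadow_masters file for a cell named default might look like the following. The host names are hypothetical, and the convention of listing the primary master on the first line is an assumption; check the sge_shadowd documentation for your release for the exact format.

```
# sge-root/default/common/shadow_masters
# one host name per line; primary master conventionally first
master1
shadow1
shadow2
```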
Windows hosts cannot act as shadow master hosts.
During the installation of the master host, you must specify the location of a spooling directory. This directory is used to spool jobs from execution hosts that do not have a local spooling directory.
On the master host, spool directories are maintained under qmaster-spool-dir. The location of qmaster-spool-dir is defined during the master host installation process. The default value of qmaster-spool-dir is sge-root/cell/spool/qmaster.
On each execution host, a spool directory called execd-spool-dir is defined during the execution host installation processes. The default value of execd-spool-dir is sge-root/cell/spool/exec-host. You will get better performance from execution hosts with local spooling directories than from execution hosts that have NFS mounted the master host's spooling directory.
If no cell name is specified during installation, the value of cell is default.
You do not need to export these directories to other machines. However, exporting the entire sge-root tree and making it write-accessible for the master host and all execution hosts makes administration easier.
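The default spool locations described above can be sketched as simple path constructions. The sge-root value is an example, and exec-host stands for each execution host's name:

```shell
# Default spool locations (cell is "default" when no cell name was given).
SGE_ROOT=/usr/N1GE6                         # example installation path
SGE_CELL=default
qmaster_spool="$SGE_ROOT/$SGE_CELL/spool/qmaster"
execd_spool="$SGE_ROOT/$SGE_CELL/spool/$(hostname)"   # one per execution host
echo "$qmaster_spool"                       # -> /usr/N1GE6/default/spool/qmaster
```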
During the installation, you are given the option to choose between classic spooling and Berkeley DB spooling. If you choose Berkeley DB spooling, you can then either spool to a local directory or spool to a separate host, known as a Berkeley DB spooling server.
While classic spooling is an option, you should see better performance using a Berkeley DB spooling server. Part of this performance increase is because the master host can make non-blocking writes to the database, but has to make blocking writes to the text file used by classic spooling. Other factors that might influence your decision are file format and data integrity. Writing to the Berkeley DB provides a greater level of data integrity than writing to a text file. However, a text file stores data in a format that you can read and edit. Normally, you do not need to read these files, but the spooling directory contains the messages from the system daemons, which can be useful during debugging.
The master host can store its configuration and state in a Berkeley DB spooling database. The spooling database can reside on the master host or on a separate host. Spooling to a local directory on the master host gives better performance. However, if you want to set up a shadow master host, you must use a separate Berkeley DB spooling server, and you must choose a host with a configured RPC service, because the master host connects to the Berkeley DB through RPC.
This configuration does not provide a high-availability (HA) solution. For example, the scripts of pending jobs are not spooled through the Berkeley DB spooling server and thus are not available to a shadow master.
With the introduction of NFSv4, available with the Solaris 10 Operating System, you can use Berkeley DB spooling on a network file system, which was not possible with previous NFS versions. This capability allows a shadow master installation that spools to Berkeley DB without setting up a separate Berkeley DB spooling server.
Using a shadow master host improves reliability, but using a separate Berkeley DB spooling host creates a potential security hole: the RPC communication that the Berkeley DB uses can easily be compromised. Use this alternative only if your site is secure and the users who can reach the Berkeley DB spooling host over TCP/IP can be trusted.
If you choose to use Berkeley DB spooling without a shadow master, you don't need to set up a separate spooling server. Likewise, if you choose not to use Berkeley DB spooling, you can set up a shadow master host without setting up a separate spooling server.
Once you determine whether you need a separate spooling server, you must also determine the location of the spooling directory. The spooling directory must be local to the spooling server. A default location for the spooling directory is suggested during installation, but this default is not suitable when the file server differs from the master host.
The requirements for the Berkeley DB spooling host are similar to the requirements for the master host:
The host must be a stable platform.
The host must not be excessively busy with other processing.
At least 60 – 120 Mbytes of unused main memory must be available to run the grid engine system daemons. For very large clusters that include many hundreds or thousands of hosts and tens of thousands of jobs in the system at any time, 1 GByte or more of unused main memory may be required and two CPUs may be beneficial.
(Optional) A separate spooling host must be installed before the master host.
(Optional) The directory, sge-root, should be installed locally, to cut down on network traffic.
Execution hosts run the jobs that users submit to the grid engine system. An execution host must first be set up as an administration host. You run an installation script on each execution host.
You need to provide a range of IDs that are assigned dynamically to jobs. The range must be large enough to provide a number for each grid engine system job that runs concurrently on a single host.
A group ID is assigned to each grid engine system job to monitor the job's resource utilization. Each running job is assigned a unique ID from this range. For example, a range of 20000-20100 allows 100 jobs to run concurrently on a single host. You can change the group ID range for your cluster configuration at any time, but the values in the range must be otherwise unused UNIX group IDs on your system.
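The arithmetic from the example can be checked directly. The range value is the one used above:

```shell
# A group ID range of 20000-20100 leaves 100 IDs for concurrently
# running jobs on a host (one ID per running job), per the example above.
GID_RANGE="20000-20100"
low=${GID_RANGE%-*}
high=${GID_RANGE#*-}
max_jobs=$((high - low))
echo "concurrent jobs per host: $max_jobs"   # -> concurrent jobs per host: 100
```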
Operators and managers of the grid engine system use administration hosts to perform administrative tasks such as reconfiguring queues or adding grid engine system users.
The master host installation script automatically makes the master host an administration host. During the master host installation process, you can add other administration hosts. You can also manually add administration hosts on the master host at any time after installation.
Jobs can be submitted and controlled from submit hosts. The master host installation script automatically makes the master host a submit host.
The installation procedure creates a default cluster queue structure, which is suitable for getting acquainted with the system. The default queue can be removed after installation.
No matter what directory is used for the installation of the software, the administrator can change most settings that were created by the installation procedure. This change can be made while the system is running.
Consider the following when determining a queue structure:
Whether you need cluster queues for sequential, interactive, parallel, and other job types
Which queue instances to put on which execution hosts
How many job slots are needed in each queue
For more detailed information on administering cluster queues, see Configuring Queues in Sun N1 Grid Engine 6.1 Administration Guide.
You can choose from three scheduler profiles during the installation process: normal, high, and max. You can use these predefined profiles as a starting point for grid engine tuning.
Using these profiles, you can optimize the scheduler for one or more of the following:
The amount of information about a scheduling run
The load adjustment during a scheduling run
Interval scheduling (the default) or immediate scheduling
You can choose from three scheduler profiles:
normal – This profile uses load adaptation and interval scheduling, and reports all the information that the scheduler gathers during the dispatch cycle. This profile is the starting point for most grids. Use this profile if your highest priority is gathering and reporting information about a scheduling run.
high – This profile is more appropriate for a large cluster, where throughput is more important than gathering and reporting all the information from the scheduler. This profile also uses interval scheduling. Use this profile if you want to get better performance at the cost of getting less information about your scheduling runs.
max – This profile disables all information gathering and reporting, enables immediate scheduling, and disables load adaptation. Immediate scheduling is very useful for sites with high throughput and very short running jobs. The advantage of immediate scheduling decreases as runtime of the jobs increases. This profile can be used in clusters of any size where only throughput is important and everything else is a lower priority.
For more information on how to configure scheduling, see Administering the Scheduler in Sun N1 Grid Engine 6.1 Administration Guide.
Several methods are available for installing the grid engine software:
Interactive
Interactive, with increased security
Automated, using the inst_sge script and a configuration file
Upgrade
To decide which installation method you should use, consider the following factors.
Do you already have the grid engine software installed and running?
If so, you'll probably want to upgrade. The upgrade process is described in Chapter 5, Upgrading From a Previous Release of N1 Grid Engine Software.
If not, the master host installation is only done once, so the master host is typically installed interactively, as described in Chapter 2, Installing the N1 Grid Engine Software Interactively.
Do you need to install just a few execution hosts?
If so, then you will probably want to install them interactively, as described in Chapter 2, Installing the N1 Grid Engine Software Interactively.
Do you need to install a large number of execution hosts?
If so, then you might want to perform automated installation, using the inst_sge script and a configuration file. This process is described in Using the inst_sge Utility and a Configuration Template.
Do you require your grid to use encryption?
If so, you have to perform an interactive installation with increased security. This process is described in Chapter 4, Installing the Increased Security Features.
If you are installing N1 Grid Engine on a Linux system or on a system with IPMP, see Appendix C, Other N1 Grid Engine Installation Issues for important information.