Sun HPC ClusterTools 3.0 Administrator's Guide: With CRE

The hpc_config File

Many aspects of the Sun HPC ClusterTools 3.0 installation process are controlled by a configuration file called hpc_config, which is similar to the lsf_config file used to install LSF 3.2.3.

Instructions for accessing and editing hpc_config are provided in "Accessing hpc_config" and "Editing hpc_config".

Accessing hpc_config

Use a text editor to edit the hpc_config file directly. The file must be located in a directory within a file system that is mounted with read/write/execute access on all the nodes in the cluster. A template for hpc_config is provided on the Sun HPC ClusterTools 3.0 distribution CD-ROM to simplify creation of this file.

Choose a node to function as the installation platform and a directory on that node to serve as the home for hpc_config. Before starting the installation process, copy the template to that directory and edit it so that it satisfies your site-specific installation requirements.


Note -

The directory containing hpc_config must be read/write/execute accessible (777 permissions) by all the nodes in the cluster.


The hpc_config template is located in

/cdrom/hpc_3_0_ct/Product/Install_Utilities/config_dir/hpc_config

To access hpc_config on the distribution CD-ROM, perform the following steps on the node chosen to be the installation platform:

  1. Load the Sun HPC ClusterTools distribution CD-ROM in the CD-ROM drawer.

  2. Mount the CD-ROM path on all the nodes in the cluster (see the example following these steps).

  3. Copy the configuration template onto the node.


    # cd config_dir_install
    # cp /cdrom/hpc_3_0_ct/Product/Install_Utilities/config_dir/hpc_config .

    config_dir_install is a variable representing the directory where the configuration files will reside; all cluster nodes must be able to read from and write to this directory.

  4. Edit the hpc_config file according to the instructions provided in the next section.
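
For example, assuming the installation platform is named node0 (an illustrative host name) and the CD-ROM mounts at the default /cdrom/hpc_3_0_ct, you might share the CD-ROM from the installation platform and mount it on each of the other nodes as follows.

On the installation platform:

    # share -F nfs -o ro /cdrom/hpc_3_0_ct

On each of the other nodes:

    # mkdir -p /cdrom/hpc_3_0_ct
    # mount -F nfs node0:/cdrom/hpc_3_0_ct /cdrom/hpc_3_0_ct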

If You Have Already Installed the Software

If you have already installed the software, you can find a copy of the hpc_config template in the directory /opt/SUNWhpc/bin/Install_Utilities/config_dir.

If you are editing an existing hpc_config file after installing the software using the graphical installation tool, the hpc_config file created by the tool will not contain the comment lines included in the template.

Editing hpc_config

Example A-1 shows the basic hpc_config template, but without most of the comment lines provided in the online template. The template is simplified here to make it easier to read and because each section is discussed in detail following Example A-1. Two examples of edited hpc_config files follow the general description of the template.

The template comprises five sections:

For the purposes of initial installation, ignore the fifth section.

Supported Software Installation

LSF Support

You will be using the software with LSF, so enter yes here.


LSF_SUPPORT="yes"

Since you will be using LSF, complete only Part A of this section.

LSF Parameter Modification

Allowing the Sun HPC installation script to modify LSF parameters optimizes HPC job launches. Your choice for this variable must be yes or no.


MODIFY_LSF_PARAM="choice"

Name of the LSF Cluster

Before installing Sun HPC ClusterTools software, you must have installed LSF 3.2.3. When you installed the LSF software, you selected a name for the LSF cluster. Enter this name in the LSF_CLUSTER_NAME field.


LSF_CLUSTER_NAME="clustername"
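
For example, Part A of a completed file might read as follows (the cluster name sun_cluster is illustrative):

LSF_SUPPORT="yes"
MODIFY_LSF_PARAM="yes"
LSF_CLUSTER_NAME="sun_cluster"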

General Installation Information

All installations must complete this section. If you are installing the software locally on a single-node cluster, you can stop after completing this section.

Type of Installation

Three types of installation are possible for Sun HPC ClusterTools 3.0 software:

  nfs - The software is installed on an NFS server and mounted remotely by the cluster nodes.

  smp-local - The software is installed locally on a single-node (SMP) cluster.

  cluster-local - The software is installed locally on each node of a multinode cluster.

Specify one of the installation types: nfs, smp-local, or cluster-local. There is no default type of installation.


INSTALL_CONFIG="config_choice"
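
For example, to install the software on an NFS server for remote mounting by the cluster nodes, you would specify:

INSTALL_CONFIG="nfs"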

Installation Location

The way the INSTALL_LOC path is used varies, depending on which type of installation you have chosen.

You must enter a full path name. The default location is /opt. The location must be set (or mounted, if this is an NFS installation) with read/write (755) permissions on all the nodes in the cluster.


INSTALL_LOC="/opt"

If you choose an installation directory other than the default /opt, a symbolic link is created from /opt/SUNWhpc to the chosen installation point.
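
For example, if you set INSTALL_LOC to /export/hpc (an illustrative path), the resulting link would be equivalent to the following, assuming the packages land under a SUNWhpc directory beneath the installation point:

    # ln -s /export/hpc/SUNWhpc /opt/SUNWhpc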

CD-ROM Mount Point

Specify a mount point for the CD-ROM. This mount point must be mounted on (that is, NFS-accessible to) all the nodes in the cluster. The default mount point is /cdrom/hpc_3_0_ct. For example:


CD_MOUNT_PT="/cdrom/hpc_3_0_ct"

Information for NFS and Cluster-Local Installations

If you are installing the software either on an NFS server for remote mounting or locally on each node of a multinode cluster, you need to complete this section.

Installation Method Options

Specify either rsh or cluster-tool as the method for propagating the installation to all the nodes in the cluster.

Note that the rsh method requires that all nodes be trusted hosts--at least during the installation process.


INSTALL_METHOD="method"
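
For example, to use rsh (the nodes would need corresponding trusted-host entries, such as in each node's /etc/hosts.equiv file, for the duration of the installation):

INSTALL_METHOD="rsh"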

Hardware Information

There are two ways to enter information in this section: list each node explicitly in the NODES field as a triplet, or let the installation script derive the node list from the LSF configuration file (see the Note below).

In each triplet, specify the host name of a node, followed by the host name of the terminal concentrator and the port ID on the terminal concentrator to which that node is connected. Separate the triplet fields with slashes (/). Use spaces between node triplets.
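
For example, the following NODES entry (using the node and terminal concentrator names from Example A-3) describes three nodes connected to the terminal concentrator rome through ports 5002 through 5004:

NODES="venice/rome/5002 napoli/rome/5003 pisa/rome/5004"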

Every node in your Sun HPC cluster must also be in the corresponding LSF cluster. See the discussion of the lsf.cluster.clustername configuration file in the LSF Batch Administrator's Guide for information on LSF clusters.


Note -

If you will not be using the CCM tools, you can allow the installation script to derive the node list from the LSF configuration file lsf.cluster.clustername. To do this, either set the NODES variable to NULL or leave the line commented out. You must be installing from one of the nodes in the LSF cluster.


SCI Support

This section tells the script whether to install the SCI-related packages. If your cluster includes SCI, replace choice with yes; otherwise, replace it with no.


INSTALL_SCI="choice"

A yes entry causes the three SCI packages and two RSM packages to be installed in the /opt directory. A no entry causes the installation script to skip the SCI and RSM packages.
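
For example, on a cluster that has no SCI interconnect:

INSTALL_SCI="no"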


Note -

The SCI and RSM packages are installed locally on every node, not on an NFS server.


Information for NFS Installations Only

You need to complete this section only if you are installing the software on an NFS server.

NFS Server Host Name

The format for setting the NFS server host name is the same as for setting the host names for the nodes in the cluster. There are two ways to define the host name of the NFS server:

The NFS server can be one of the cluster nodes or it can be external (but connected) to the cluster. If the server will be part of the cluster--that is, will also be an execution host for the Sun HPC ClusterTools software--it must be included in the NODES field described in "Hardware Information". If the NFS server will not be part of the cluster, it must be available from all the hosts listed in NODES, but it should not be included in the NODES field.

Location of the Software on the Server

If you want to install the software on the NFS server in the same directory as the one specified in INSTALL_LOC, leave INSTALL_LOC_SERVER empty (""). If you prefer, you can override INSTALL_LOC by specifying an alternative directory in INSTALL_LOC_SERVER.


INSTALL_LOC_SERVER="directory"

Recall that the directory specified in INSTALL_LOC defines the mount point for INSTALL_LOC_SERVER on each NFS client.
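
For example, to place the software under /export/hpc on the server (an illustrative path) while the clients mount it at the /opt mount point defined by INSTALL_LOC:

INSTALL_LOC_SERVER="/export/hpc"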

Sample hpc_config Files

Example A-2 and Example A-3 illustrate the general descriptions in the preceding sections with edited hpc_config files representing two different types of installations.

Local Install - Example A-2 shows how the file would be edited for a local installation on every node in a cluster. The main characteristics of the installation illustrated by Example A-2 are summarized below:

For the purposes of initial installation, ignore the fifth section.
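
A condensed sketch of how the key fields of such a cluster-local installation might read (all values illustrative, with most comment lines omitted as in Example A-2):

LSF_SUPPORT="yes"
MODIFY_LSF_PARAM="yes"
LSF_CLUSTER_NAME="sun_cluster"
INSTALL_CONFIG="cluster-local"
INSTALL_LOC="/opt"
CD_MOUNT_PT="/cdrom/hpc_3_0_ct"
INSTALL_METHOD="rsh"
NODES="venice/rome/5002 napoli/rome/5003 pisa/rome/5004"
INSTALL_SCI="no"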

NFS Install - Example A-3 shows an hpc_config file for an NFS installation. The main features of this installation example are summarized below:

This example shows the nodes venice, napoli, and pisa all connected to the terminal concentrator rome via ports 5002, 5003, and 5004.

In this case, the NFS server is not one of the nodes in the Sun HPC cluster. All the nodes in the cluster must be able to communicate with it over a network.
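
A condensed sketch of how the corresponding fields of this NFS installation might read (the server directory and cluster name are illustrative):

LSF_SUPPORT="yes"
MODIFY_LSF_PARAM="yes"
LSF_CLUSTER_NAME="sun_cluster"
INSTALL_CONFIG="nfs"
INSTALL_LOC="/opt"
CD_MOUNT_PT="/cdrom/hpc_3_0_ct"
INSTALL_METHOD="rsh"
NODES="venice/rome/5002 napoli/rome/5003 pisa/rome/5004"
INSTALL_SCI="no"
INSTALL_LOC_SERVER="/export/hpc"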

For the purposes of initial installation, ignore the fifth section.