CHAPTER 1

Introduction to Sun HPC ClusterTools 8.1 Software
This chapter provides an overview of the Sun HPC ClusterTools software installation utilities.
This manual explains how to use the Sun HPC ClusterTools software installation utilities to install, activate, deactivate, and remove Sun HPC ClusterTools software on one or more cluster nodes. See TABLE 1-1 for a summary of the Solaris OS interfaces.
TABLE 1-1  Sun HPC ClusterTools Installation Utilities

ctinstall   Installs the Sun HPC ClusterTools software on cluster nodes
ctremove    Removes the software from cluster nodes
ctnfssvr    Installs the software packages on an NFS server
ctact       Activates the software (sets up symbolic links on the nodes)
ctdeact     Deactivates the software (removes the symbolic links)
For Linux-based installations, RPM packages are provided. Chapter 6 explains how to install and configure the RPM packages for Red Hat and SUSE Linux.
You can install Sun HPC ClusterTools software locally on the cluster nodes, or in an NFS client/server configuration.
In a non-NFS cluster configuration, the tools install a complete copy of the Sun HPC ClusterTools software locally on each node in the cluster. Chapter 3 explains how to install the software on the local nodes.
In an NFS-based cluster, the tools install the packages on the NFS server using the CLI ctnfssvr command. The tools also mount the exported directory and create version-specific links. Chapter 4 explains how to set up the software in an NFS configuration.
The NFS server can be, but need not be, one of the NFS client nodes in the cluster. Even when a Sun HPC ClusterTools software NFS server is also a cluster node, the packages are installed on the NFS server with ctnfssvr.
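The division of labor in an NFS configuration can be sketched as follows. This is a conceptual illustration only, since ctnfssvr and the installation tools perform these steps for you; the server name and export path are placeholders, not product defaults.

```shell
# Conceptual sketch only -- ctnfssvr and the installation tools
# perform these steps; do not run this by hand.
# "ctserver" and the export path are placeholders.

# 1. On the NFS server, ctnfssvr installs the ClusterTools packages
#    into a directory that the server exports to the cluster.

# 2. On each NFS client node, the exported directory is mounted and
#    version-specific links are created, conceptually like:
mount -F nfs ctserver:/export/opt/SUNWhpc /opt/SUNWhpc
```
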
You can choose between two methods of initiating operations on the cluster nodes: locally on each node, or centrally from a single host.
Support for centralized command initiation is built into the Sun HPC ClusterTools software installation utilities. Issuing these commands from a central host has the same effect as invoking them locally on each node through one of the Cluster Console tools: cconsole, ctelnet, or crlogin.
The Sun HPC ClusterTools software CLI utilities provide several options that are specific to the centralized command initiation mode. These options are intended to simplify management of parallel installation of the software from a central host.
The initiating system can be one of the cluster nodes or it can be external to the cluster. It must be a Sun system running the Solaris 9 or Solaris 10 Operating System (Solaris OS). Compute nodes must run the Solaris 10 OS.
The Sun HPC ClusterTools software installation utilities are completely self-contained and scale from single-node installations to very large clusters. However, if you customarily use Custom JumpStart or Solaris Web Start Flash methods for installing software on your servers, you can easily integrate the CLI installation tool ctinstall into those contexts.
The following variations on a basic Sun HPC ClusterTools software installation are described briefly below. More detailed descriptions are provided later in the manual.
For this method, you must set up a Custom JumpStart environment before the installation. You then invoke local installation of Sun HPC ClusterTools software on the cluster nodes by integrating the ctinstall command, with the -l and -R switches, into the Custom JumpStart finish script.
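A hedged sketch of such a finish-script fragment follows. The staging directory and the path to the ctinstall utility within the distribution are illustrative assumptions, while /a is the conventional mount point for a JumpStart client's target filesystem.

```shell
#!/bin/sh
# Hypothetical Custom JumpStart finish-script fragment.
# Assumption: the ClusterTools distribution was copied to
# /a/var/tmp/clustertools earlier in the script; the location of
# ctinstall within the distribution is illustrative.

CT_DIST=/a/var/tmp/clustertools

# -l requests a local (per-node) installation; -R redirects the
# installation to the JumpStart client's root, mounted at /a.
${CT_DIST}/ctinstall -l -R /a
```
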
Custom JumpStart installations are initiated from the console of each Custom JumpStart client. The cconsole tool that is included in the Cluster Console software allows you to access multiple consoles through a single common window.
The first step for this type of installation is to perform a local installation of the Sun HPC ClusterTools software on a node that will serve as the flash master. You can use ctinstall -l for this step. Once the flash master is fully installed and activated, you create a flash archive and apply it to the target nodes, usually in a Custom JumpStart environment.
This flash archive-based approach creates clones of the flash master, which includes reinstalling the Solaris operating environment on each clone.
Note - Web Start Flash installations are restricted to cluster environments where all the systems have identical hardware and software configurations.
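A hedged outline of the flash-master workflow, assuming the standard Solaris flarcreate utility; the archive name and output path below are placeholders:

```shell
#!/bin/sh
# Hypothetical sketch of preparing a flash archive from the flash
# master. Run on the flash master after ctinstall -l has completed
# and the software has been activated.

# Create the archive with the standard Solaris flarcreate utility.
# The archive name and output path are placeholders.
flarcreate -n hpc-clustertools-8.1 /export/flash/hpc-ct81.flar

# The archive is then applied to the target nodes, typically from a
# Custom JumpStart profile, which reinstalls the Solaris operating
# environment on each clone.
```
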
The Sun HPC ClusterTools 8.1 installation tools log information about installation-related tasks locally on the nodes where installation tasks are performed. The default location for the log files is /var/sadm/system/logs/hpc. If installation tasks are initiated from a central host, a summary log file is also created on the central host.
Two types of log files are created locally on each cluster node where installation operations take place: a log file for each installation-related task, and a cumulative history file, ct_history.log.

The task-specific log files contain detailed logging information for the most recent run of the associated task. Each time a task is repeated, its log file is overwritten. By contrast, new entries are appended to ct_history.log.

These node-specific installation log files are created regardless of the installation method used, local or centralized.
When installation tasks are initiated from a central host, a summary log file named ct_summary.log is created on the central host. This log file records the final summary report that is generated by the CLI. The ct_summary.log is not overwritten when a new task is performed. As with the ct_history.log file, new entries are appended to the summary log file.
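Assuming the default log location given above, the node-local logs can be inspected as follows. These commands are illustrative; the directory exists only on nodes where installation tasks have run.

```shell
# List the per-task logs and the cumulative history log on a node.
ls /var/sadm/system/logs/hpc

# Review the appended history of installation tasks on this node.
tail /var/sadm/system/logs/hpc/ct_history.log
```
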
FIGURE 1-1 shows an overview of the installation-related tasks you can perform.
Note - The CLI tools require superuser privileges to execute.
FIGURE 1-1 Sun HPC ClusterTools Software Installation Tasks
The various installation-related operations are independent of each other. With the CLI, you simply start the applicable utility: ctinstall, ctact, ctdeact, or ctremove. The operations these tools control are described below.
The installation activity loads the Sun HPC ClusterTools software onto cluster nodes.
With the CLI command ctinstall, you can install individual Sun HPC ClusterTools 8.1 software packages as well as install the entire software suite.
A complete copy of the Sun HPC ClusterTools software is installed locally on each node in the cluster.
The next sections describe the installation and activation choices you can make when you install Sun HPC ClusterTools software.
In an NFS installation, all Sun HPC ClusterTools software packages are installed on a Sun NFS server and remotely mounted on the NFS client nodes in the cluster. The NFS server can be one of the NFS client nodes in the cluster, but need not be.
In non-NFS configurations, a complete copy of the Sun HPC ClusterTools software is installed locally on each node in the cluster.
When you initiate a software installation operation, you can specify that the nodes be activated automatically as soon as the installation process completes. The activation process sets up symbolic links on the nodes.
Sun HPC ClusterTools 8.1 software does not need to be activated; once it is installed, it is ready for use. The node activation step sets up symbolic links that point to the Sun HPC ClusterTools software. If you plan to run the software from its installed location (by default, /opt/SUNWhpc/HPC8.1/bin), you do not need to activate it.
Node deactivation removes the symbolic links that point to the Sun HPC ClusterTools software. Note that you can still run the software from /opt/SUNWhpc/HPC8.1/bin (or the directory in which you installed it) after the software has been deactivated.
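For example, a user on a deactivated (or never-activated) node could still run jobs by addressing the versioned directory directly; mpirun ships with the ClusterTools suite, and the program name below is a placeholder:

```shell
# Put the versioned ClusterTools bin directory on the PATH instead
# of relying on the symbolic links that activation would create.
PATH=/opt/SUNWhpc/HPC8.1/bin:$PATH
export PATH

# Launch a job as usual; "my_mpi_program" is a placeholder.
mpirun -np 4 ./my_mpi_program
```
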
This operation deletes Sun HPC ClusterTools software packages from the cluster nodes on which it is executed. If a node is active at the time you initiate the removal operation, it will be deactivated automatically before the software is removed.
With the CLI command ctremove, you can remove individual Sun HPC ClusterTools software packages as well as remove the entire software suite.
The following are tips for installing Sun HPC ClusterTools 8.1 software on clusters containing hundreds of nodes using the centralized method:
Copyright © 2008 Sun Microsystems, Inc. All Rights Reserved.