CHAPTER 1

Differences Between Sun HPC ClusterTools Versions
This chapter outlines some of the differences between Sun HPC ClusterTools 6 and Sun HPC ClusterTools 7.1 software.
Sun HPC ClusterTools 6 and previous versions used Sun’s own implementation of the MPI-2 standard. However, Sun HPC ClusterTools 7.1 is based on Open MPI 1.2 (OMPI), an open-source implementation of MPI. There are some differences between Sun MPI and Open MPI; you might need to update applications that use Sun MPI functions to run with Open MPI and Sun HPC ClusterTools 7.1.
The Sun HPC ClusterTools 7.1 Software User’s Guide contains information about the Open MPI mpirun command and its options.
For more information about Open MPI, see the Open MPI web site at:
http://www.open-mpi.org/
In Sun HPC ClusterTools 6 and previous versions, the command to run MPI was mprun. In Sun HPC ClusterTools 7.1, this command is called mpirun. The mpirun(1) man page contains information about the usage of mpirun and its options.
For a complete listing of all Sun HPC ClusterTools 7.1 mpirun options and a sample usage message, type the following command:
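In most Open MPI installations, this command is the standard help option:

% mpirun --help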
The following table shows a mapping between Sun HPC ClusterTools 6 options and some of the options in Sun HPC ClusterTools 7.1.
Among the Sun HPC ClusterTools 7.1 options listed in this table are -hostfile [file], --hostfile [file], -machinefile [file], and --machinefile [file].
For a complete listing of mpirun options and instructions on how to use them, refer to the Sun HPC ClusterTools 7.1 Software User’s Guide.
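As an illustrative sketch of the hostfile options listed above (the host names and slot counts shown are hypothetical), a hostfile lists one node per line and is passed to mpirun with the -hostfile option:

% cat myhosts
node1 slots=2
node2 slots=2
% mpirun -np 4 -hostfile myhosts ./a.out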
Both Sun MPI and Open MPI use environment variables to pass options to MPI. However, Open MPI also allows you to pass options in two additional ways:
- On the mpirun command line, using the -mca option
- In a text file named mca-params.conf
The Sun HPC ClusterTools 7.1 Software User’s Guide describes Modular Component Architecture (MCA) parameters in detail and lists the ways in which they can be used with mpirun. For more information about using MCA parameters with Sun Grid Engine software, see MCA Parameters.
For more information about the available options that can be used with -mca or in the mca-params.conf file, see the Open MPI runtime tuning Frequently Asked Questions (FAQ) at:
http://www.open-mpi.org/faq/?category=tuning
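For example, a minimal sketch of the two additional ways described above (the btl setting shown is only illustrative):

% mpirun -np 4 -mca btl tcp,self ./a.out

or, in the per-user parameter file $HOME/.openmpi/mca-params.conf:

btl = tcp,self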
Many of the environment variables that existed in Sun HPC ClusterTools 6 are not supported in Sun HPC ClusterTools 7.1. However, the MCA parameters in Sun HPC ClusterTools 7.1 supply much of the equivalent functionality.
For more information about the MCA parameters and their usage, refer to the Sun HPC ClusterTools 7.1 Software User’s Guide. For additional information from the command line, see the mpirun man page, or type the following command at the system prompt:
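In most Open MPI installations, this command is the ompi_info utility, which lists all MCA parameters and their current values:

% ompi_info --param all all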
Sun HPC ClusterTools 6 uses a centralized database in the tm.rdb process to store the names of selected CPUs on which to run applications. Sun HPC ClusterTools 7.1 does not use a centralized database. Instead, you specify the CPUs on which to run applications in two ways:
- On the mpirun command line
- In a hostfile that you pass to mpirun
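As an illustrative sketch (the host names are hypothetical), the first method names the hosts directly on the mpirun command line:

% mpirun -np 4 -host node1,node2 ./a.out

The second method uses a hostfile, as shown in the earlier hostfile example.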
For more information, see Chapter 4 in the Sun HPC ClusterTools 7.1 Software User’s Guide.
Sun HPC ClusterTools 6 software included both a non-thread-safe (libmpi.so) and a thread-safe (libmpi_mt.so) MPI library. Sun HPC ClusterTools 7.1 supplies only the non-thread-safe version of the library; no thread-safe library is included.
For more information, refer to the Sun HPC ClusterTools 7.1 Software Release Notes.
Some commands and functionality in Sun HPC ClusterTools 6/Sun MPI have no equivalents in Sun HPC ClusterTools 7.1/Open MPI.
The following tools in Sun HPC ClusterTools 6 have no equivalents in Open MPI:
- mpprof
- mpkill
- mpps
- mpinfo
These four tools made use of root daemons that existed in ClusterTools 6. Sun HPC ClusterTools 7.1 does not have root daemons; therefore, the software currently does not have equivalent tools.
In place of mpprof, you might use the Solaris DTrace dynamic tracing utility, the PERUSE interface in Open MPI, or some other tracing tool. For more information about using DTrace with Sun HPC ClusterTools 7.1 software, refer to the Sun HPC ClusterTools 7.1 Software User’s Guide.
For more information on the PERUSE interface, refer to the following paper on the Open MPI Web site:
http://www.open-mpi.org/papers/euro-pvmmpi-2006-peruse/
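For example, a minimal DTrace sketch (the process ID and the traced function are only illustrative) that counts MPI_Send calls made by a running MPI process:

% dtrace -n 'pid$target::MPI_Send:entry { @calls[probefunc] = count(); }' -p 12345

Running DTrace requires appropriate privileges on the Solaris OS.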
In place of mpkill, you might need to kill the processes on each node manually, or kill the processes with a batch system, such as Sun Grid Engine. You can also use Ctrl-C to kill each process, if needed.
In place of mpps and mpinfo, use your batch system to display information about the parallel job.
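For example, with Sun Grid Engine the standard qstat and qdel commands display and terminate a parallel job, and the Solaris pkill command can remove leftover processes on a node (the job ID and executable name shown are hypothetical):

% qstat
% qdel 42
% pkill -f a.out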
Because there are no root daemons, the activation step in Sun HPC ClusterTools 7.1 is optional, as it only creates symbolic links to the installed files. For more information about the differences in installation and activation steps, refer to the Sun HPC ClusterTools 7.1 Software Installation Guide.
In addition, Sun HPC ClusterTools 6 has the ability to do dynamic allocation of processors based on resource constraints, such as CPU load or memory usage. ORTE (the Open MPI Run-Time Environment) has no equivalent functionality. It also cannot determine the number of CPUs on a node.
For more information, refer to the Sun HPC ClusterTools 7.1 Software User’s Guide.
You must recompile applications created with Sun HPC ClusterTools 6 software (or an earlier version) before you can use them with Sun HPC ClusterTools 7.1.
For more information about updating applications, see Chapter 2, Migrating Sun HPC ClusterTools 6 Applications.
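For example, a C application is typically rebuilt with the Open MPI wrapper compiler shipped in Sun HPC ClusterTools 7.1 (the file names are illustrative):

% mpicc -o myapp myapp.c
% mpirun -np 4 ./myapp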
Note - In most cases, you should not experience issues when migrating your applications to Sun HPC ClusterTools 7.1 software. The instructions in Chapter 2 of this manual describe what to do if you do run into a problem.
Sun HPC ClusterTools 7.1/Open MPI uses Open MPI MCA parameters to provide functionality similar to that of the environment variables in Sun HPC ClusterTools 6/Sun MPI. The Sun HPC ClusterTools 7.1 Software User’s Guide contains information about the available MCA parameters for Solaris OS installations, as well as instructions on how to use them.
The following table shows some equivalent functions between the two ClusterTools versions:
For more information about MCA parameters, refer to the following:
http://www.open-mpi.org/faq/?category=tuning
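For example, any MCA parameter can also be set as an environment variable by prefixing its name with OMPI_MCA_, which is the closest analogue to the Sun MPI environment variables (the parameter and value shown are only illustrative):

% setenv OMPI_MCA_mpi_param_check 0

(C shell syntax; in the Bourne shell, use OMPI_MCA_mpi_param_check=0; export OMPI_MCA_mpi_param_check.)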
For more information about compiling applications with Sun HPC ClusterTools 7.1 software, see Chapter 2, Migrating Sun HPC ClusterTools 6 Applications.