CHAPTER 1

Differences Between Sun HPC ClusterTools Versions

This chapter outlines some of the differences between Sun HPC ClusterTools 6 and Sun HPC ClusterTools 7.1 software.

Sun HPC ClusterTools 6 and previous versions used Sun’s own implementation of the MPI-2 standard. However, Sun HPC ClusterTools 7.1 is based on Open MPI 1.2 (OMPI), an open-source implementation of MPI. There are some differences between Sun MPI and Open MPI; you might need to update applications that use Sun MPI functions to run with Open MPI and Sun HPC ClusterTools 7.1.

The Sun HPC ClusterTools 7.1 Software User’s Guide contains information about the Open MPI mpirun command and its options.

For more information about Open MPI, see the Open MPI web site at:

http://www.open-mpi.org


Comparing mprun Options to mpirun Options

In Sun HPC ClusterTools 6 and previous versions, the command to run MPI was mprun. In Sun HPC ClusterTools 7.1, this command is called mpirun. The mpirun(1) man page contains information about the usage of mpirun and its options.

For a complete listing of all Sun HPC ClusterTools 7.1 mpirun options and a sample usage message, type the following command:


% mpirun --help

The following table shows a mapping between Sun HPC ClusterTools 6 options and some of the options in Sun HPC ClusterTools 7.1.



Note - Although the table shows no Sun HPC ClusterTools 7.1 equivalents for many Sun HPC ClusterTools 6 options, Sun HPC ClusterTools 7.1 also provides many additional options that have no equivalents in Sun HPC ClusterTools 6. For more information about the options in Sun HPC ClusterTools 7.1, refer to the Sun HPC ClusterTools 7.1 Software User’s Guide.



TABLE 1-1   Comparison of mprun Options and mpirun Options

Description                                          Sun HPC ClusterTools 6/           Sun HPC ClusterTools 7.1/
                                                     Sun MPI (mprun)                   Open MPI (mpirun)
----------------------------------------------------------------------------------------------------------------
Displays this help/usage text                        -h                                -h
Displays tool version information                    -V                                (none)
Specifies the cluster to use                         -c [cluster]                      (none)
Specifies the partition to use                       -p [partition]                    (none)
Specify the argv[0] explicitly                       -A [aout]                         (none)
Specify uid to execute as                            -U [uid]                          (none)
Specify gid to execute as                            -G [gid]                          (none)
Specify the I/O fd set to multiplex                  -I [iofds]                        (none)
Specify CRE I/O (use with -x)                        -Is                               (none)
Specify an alternate working directory               -C [path]                         -wdir, --wdir
Specify a project name                               -P [project]                      (none)
Chroot to working dir before execution               -r [path]                         (none)
Show job id after exec                               -J                                (none)
Specify the number of processes/threads in job       -np [PxT]                         -np [P] (P != 0)
Specify the number of processes/threads to reserve   -nr [PxT]                         (none)
Specify Resource Requirement String                  -R [rrs]                          (none)
Allow wrapping of hosts                              -W                                (none)
Settle for available hosts                           -S                                (none)
Run this job on same resources as [job name]         -j [job name]                     (none)
Only rank 0 gets stdin                               -i                                (none)
Rank-tag stdout                                      -o                                (none)
Separate stdout/stderr streams                       -D                                (none)
No stdio connections                                 -N                                (none)
Batch stream handling                                -B                                (none)
No stdin connection                                  -n                                (none)
No spawning on SMPs                                  -Ns                               (none)
Enable spawning on SMPs                              -Ys                               (none)
Group procs [n] to an SMP                            -Z [n]                            (none)
Group/tile procs [n] to an SMP                       -Zt [n]                           (none)
Specify rankmap string                               -l "[host] procs?[,...]"          -host, --host, -H
Specify rankmap file                                 -m [file]                         -hostfile [file], --hostfile [file],
                                                                                       -machinefile [file], --machinefile [file]
Use any partition independent nodes                  -u                                (none)
Multiply daemon and mprun timeouts by factor n       -t [n]                            (none)
Dump JID to a file                                   -d [filename]                     (none)
Verbose; gives extra information during job startup  -v                                -d
Run processes under control of resource manager RM   -x [RM]                           [not needed]
Multiple executables                                 -np 8 : -np 2 exe1 : -np 6 exe2   -np 2 exe1 : -np 6 exe2

In this table, (none) indicates that mpirun has no equivalent option.


For a complete listing of mpirun options and instructions on how to use them, refer to the Sun HPC ClusterTools 7.1 Software User’s Guide.
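As an illustration of how the table translates into practice, a typical Sun HPC ClusterTools 6 job launch and its rough Sun HPC ClusterTools 7.1 equivalent might look like the following sketch (the rankmap file name and executable are hypothetical):

```shell
% # Sun HPC ClusterTools 6 (Sun MPI): 4 processes, rankmap file "myhosts"
% mprun -np 4 -m myhosts ./a.out

% # Sun HPC ClusterTools 7.1 (Open MPI): the same job with mpirun
% mpirun -np 4 --hostfile myhosts ./a.out
```

Options with no mpirun equivalent, such as -W or -S, must be dropped or handled by your batch system.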

Passing MPI Options

Both Sun MPI and Open MPI use environment variables to pass options to MPI. However, Open MPI also allows you to pass options in two additional ways: as arguments to the -mca option on the mpirun command line, and as entries in the mca-params.conf file.

The Sun HPC ClusterTools 7.1 Software User’s Guide describes Modular Component Architecture (MCA) parameters in detail and lists the ways in which they can be used with mpirun. For more information about using MCA parameters with Sun Grid Engine software, see MCA Parameters.

For more information about the available options that can be used with -mca or in the mca-params.conf file, see the Open MPI runtime tuning Frequently Asked Questions (FAQ) at:

http://www.open-mpi.org/faq/?category=tuning
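For example, the mpi_preconnect_all parameter (listed in TABLE 1-2) could be set in any of the three ways; the process count, executable, and parameter value below are illustrative:

```shell
% # 1. As an environment variable (Open MPI recognizes OMPI_MCA_<parameter>):
% setenv OMPI_MCA_mpi_preconnect_all 1

% # 2. On the mpirun command line with -mca:
% mpirun -np 4 -mca mpi_preconnect_all 1 ./a.out

% # 3. As a line in an mca-params.conf file (per-user copy in ~/.openmpi):
% cat ~/.openmpi/mca-params.conf
mpi_preconnect_all = 1
```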

MPI Environment Variables

Many of the environment variables that existed in Sun HPC ClusterTools 6 are not supported in Sun HPC ClusterTools 7.1. However, the MCA parameters in Sun HPC ClusterTools 7.1 supply much of the equivalent functionality.

For more information about the MCA parameters and their usage, refer to the Sun HPC ClusterTools 7.1 Software User’s Guide. For additional information from the command line, see the mpirun man page, or type the following command at the system prompt:


% ompi_info -param all all
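The full listing is long; ompi_info can also restrict its output to a single framework or component. For example, to see only the parameters of the TCP byte-transfer-layer component:

```shell
% # Show only the MCA parameters for the TCP BTL component:
% ompi_info --param btl tcp
```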

Specifying Selected CPUs for Applications

Sun HPC ClusterTools 6 uses a centralized database in the tm.rdb process to store the names of selected CPUs on which to run applications. However, Sun HPC ClusterTools 7.1 does not use a centralized database. Instead, you specify the resources on which to run applications at run time, either directly on the mpirun command line (for example, with the --host option) or in a hostfile (with the --hostfile option).

For more information, see Chapter 4 in the Sun HPC ClusterTools 7.1 Software User’s Guide.
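As a sketch of the two run-time forms (the node names and executable are hypothetical):

```shell
% # Name the nodes directly on the mpirun command line:
% mpirun -np 4 --host node1,node2 ./a.out

% # Or describe them in a hostfile, one node per line:
% cat myhosts
node1 slots=2
node2 slots=2
% mpirun -np 4 --hostfile myhosts ./a.out
```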


Thread Safety

Sun HPC ClusterTools 6 software included both a non-thread-safe (libmpi.so) and a thread-safe (libmpi_mt.so) MPI library. Sun HPC ClusterTools 7.1 supplies only a non-thread-safe version of the library; no thread-safe library is included.

For more information, refer to the Sun HPC ClusterTools 7.1 Software Release Notes.


Sun HPC ClusterTools-Specific Commands

Some commands and functionality in Sun HPC ClusterTools 6/Sun MPI have no equivalents in Sun HPC ClusterTools 7.1/Open MPI.

The following tools in Sun HPC ClusterTools 6 have no equivalents in Open MPI: mpprof, mpkill, mpps, and mpinfo.

These four tools made use of root daemons that existed in Sun HPC ClusterTools 6. Sun HPC ClusterTools 7.1 does not have root daemons; therefore, the software currently does not have equivalent tools.

In place of mpprof, you might use the Solaris DTrace dynamic tracing utility, the PERUSE interface in Open MPI, or some other tracing tool. For more information about using DTrace with Sun HPC ClusterTools 7.1 software, refer to the Sun HPC ClusterTools 7.1 Software User’s Guide.

For more information on the PERUSE interface, refer to the following paper on the Open MPI Web site:

http://www.open-mpi.org/papers/euro-pvmmpi-2006-peruse/

In place of mpkill, you might need to kill the processes on each node manually, or kill the processes with a batch system, such as Sun Grid Engine. You can also use Ctrl-C to kill each process, if needed.

In place of mpps and mpinfo, use your batch system to see the parallel job.
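For example, if your site uses Sun Grid Engine, its standard commands cover the roles of mpps, mpinfo, and mpkill; the job ID and process name below are hypothetical:

```shell
% # List running parallel jobs (in place of mpps/mpinfo):
% qstat

% # Kill a job by its Sun Grid Engine job ID (in place of mpkill):
% qdel 1234

% # Without a batch system, kill the processes on each node by name:
% pkill a.out
```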

Because there are no root daemons, the activation step in Sun HPC ClusterTools 7.1 is optional, as it only creates symbolic links to the installed files. For more information about the differences in installation and activation steps, refer to the Sun HPC ClusterTools 7.1 Software Installation Guide.

In addition, Sun HPC ClusterTools 6 has the ability to do dynamic allocation of processors based on resource constraints, such as CPU load or memory usage. ORTE (the Open MPI Run-Time Environment) has no equivalent functionality. It also cannot determine the number of CPUs on a node.

For more information, refer to the Sun HPC ClusterTools 7.1 Software User’s Guide.


Updating Applications Compiled With Previous Versions of Sun HPC ClusterTools Software

You must recompile applications created using Sun HPC ClusterTools 6 software (or an earlier version) before you can use that application with Sun HPC ClusterTools 7.1.

For more information about updating applications, see Chapter 2, Migrating Sun HPC ClusterTools 6 Applications.



Note - In most cases, you should not experience issues when migrating your applications to Sun HPC ClusterTools 7.1 software. The instructions in Chapter 2 of this manual describe what to do if you do run into a problem.



Comparing Sun MPI Environment Variables to Open MPI MCA Parameters

Sun HPC ClusterTools 7.1/Open MPI uses Open MPI MCA parameters to serve a function similar to that of the environment variables in Sun HPC ClusterTools 6/Sun MPI. The Sun HPC ClusterTools 7.1 Software User’s Guide contains information about the MCA parameters available for Solaris OS installations, as well as instructions on how to use them.

The following table shows some equivalent functions between the two ClusterTools versions:


TABLE 1-2   Sun MPI and Open MPI Equivalents

Sun MPI Environment Variable      Open MPI MCA Parameter
--------------------------------------------------------
MPI_FULLCONNINIT                  mpi_preconnect_all
MPI_PRINTENV                      mpi_show_mca_params
MPI_CHECK_ARGS                    mpi_param_check
MPI_SPIN                          mpi_yield_when_idle
MPI_PROC_BIND                     mpi_paffinity_alone
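For example, the argument checking that Sun MPI enabled through the MPI_CHECK_ARGS environment variable is enabled in Open MPI through the mpi_param_check MCA parameter; the process count and executable below are illustrative:

```shell
% # Sun HPC ClusterTools 6 (Sun MPI):
% setenv MPI_CHECK_ARGS 1
% mprun -np 4 ./a.out

% # Sun HPC ClusterTools 7.1 (Open MPI) equivalent:
% mpirun -np 4 -mca mpi_param_check 1 ./a.out
```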



More Information About Parameters and Compiling Applications

For more information about MCA parameters, refer to the following:

http://www.open-mpi.org/faq/?category=tuning

For more information about compiling applications with Sun HPC ClusterTools 7.1 software, see Chapter 2, Migrating Sun HPC ClusterTools 6 Applications.