CHAPTER 2

Migrating Sun HPC ClusterTools 6 Applications

This chapter describes changes you might need to make in your application code to recompile and run programs developed with Sun HPC ClusterTools 6 software (or a previous version of Sun HPC ClusterTools) in Sun HPC ClusterTools 7.1.

This chapter contains the following topics:

- Using the Wrapper Compilers
- C++ Specific Issues
- Fortran-Specific Issues
- Using MPI_Accumulate on User-Defined Types
- Using Non-Default Error Handlers
- Error Codes
- More Information About the Wrapper Compilers and About MCA Parameters
Note - In most circumstances, the differences between Sun HPC ClusterTools 6 and Sun HPC ClusterTools 7.1 are relatively minor. If you do run into difficulty using your compiled application with Sun HPC ClusterTools 7.1 software, then the information in this chapter can help you troubleshoot the problem.


We recommend that you recompile your applications using the wrapper compilers (for example, mpicc for C programs) if you want them to be compatible with Sun HPC ClusterTools 7.1 software. The tmcc compiler is no longer supported in Sun HPC ClusterTools 7.1, and there is no backward compatibility.


Using the Wrapper Compilers

Sun HPC ClusterTools 7.1 supplies wrapper compilers for you to use instead of calling the compilers directly when compiling applications for use with the Sun HPC ClusterTools 7.1 software. The wrapper compilers do not perform the compilation and linking steps themselves; instead, they add the appropriate compiler and linker flags and then invoke the underlying compiler and linker.



Note - Using the wrapper compilers is strongly recommended. If you decide not to use them, the Open MPI Web site at http://www.open-mpi.org describes how to compile without them.


The following wrapper compilers are available:


TABLE 2-1 Wrapper Compilers

Language       Wrapper Compiler
C              mpicc
C++            mpiCC, mpicxx, or mpic++ (mpiCC is for use on case-sensitive file systems only)
Fortran 77     mpif77
Fortran 90     mpif90


For more information about the wrapper compilers, their use, and troubleshooting, see the Open MPI FAQ at:

http://www.open-mpi.org/faq/?category=mpi-apps


C++ Specific Issues

This section describes issues you should address if you have a C++ application that uses Sun MPI and a previous version of Sun HPC ClusterTools. You might need to make changes to your code in order for it to work properly with Open MPI and Sun HPC ClusterTools 7.1.

mpi++.h Header File

Sun MPI supports the mpi++.h header file for C++ programs; however, Open MPI does not provide an mpi++.h file. If your C++ application includes mpi++.h, you must change the included header file from mpi++.h to mpi.h for it to work with Open MPI.
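For example, a C++ source file might change its include line as follows (the file and program are illustrative; only the include directive needs to change):

```c++
// Before (Sun MPI / Sun HPC ClusterTools 6):
// #include <mpi++.h>

// After (Open MPI / Sun HPC ClusterTools 7.1): mpi.h supplies the
// bindings when the file is compiled as C++ (e.g., with mpiCC or mpicxx).
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Finalize();
    return 0;
}
```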

Function and Macro Definitions

The MPI-2 standard requires an MPI implementation to supply variants of the functions/macros NULL_COPY_FN, NULL_DUP_FN, and NULL_DELETE_FN that can be "specified from either C, C++, or Fortran." For C++ in Sun MPI, the definitions of these functions and macros differ slightly from the C and Fortran definitions.

If your applications call any of the Sun MPI functions/macros listed in TABLE 2-2, you must change the function names to their Open MPI equivalents before your applications will work with Sun HPC ClusterTools 7.1.


TABLE 2-2 Function/Macro Equivalents for C++ Programs

Sun MPI Function/Macro                  Open MPI Equivalent
MPI::Comm::COMM_NULL_COPY_FN            MPI_COMM_NULL_COPY_FN
MPI::Comm::COMM_DUP_FN                  MPI_COMM_DUP_FN
MPI::Comm::COMM_NULL_DELETE_FN          MPI_COMM_NULL_DELETE_FN
MPI::Datatype::TYPE_NULL_COPY_FN        MPI_TYPE_NULL_COPY_FN
MPI::Datatype::TYPE_DUP_FN              MPI_TYPE_DUP_FN
MPI::Datatype::TYPE_NULL_DELETE_FN      MPI_TYPE_NULL_DELETE_FN
MPI::Win::WIN_NULL_COPY_FN              MPI_WIN_NULL_COPY_FN
MPI::Win::WIN_DUP_FN                    MPI_WIN_DUP_FN
MPI::Win::WIN_NULL_DELETE_FN            MPI_WIN_NULL_DELETE_FN
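As a sketch of the change, the following C++ fragment creates a communicator attribute keyval. Under Sun MPI this call would have passed MPI::Comm::COMM_NULL_COPY_FN and MPI::Comm::COMM_NULL_DELETE_FN; with Open MPI the C-style names are used instead (the surrounding program, including MPI_Init, is assumed):

```c++
#include <mpi.h>

// Create an attribute keyval whose copy and delete callbacks do nothing,
// using the Open MPI (C-style) predefined functions/macros.
int make_keyval(int *keyval)
{
    // Sun MPI (C++): MPI::Comm::COMM_NULL_COPY_FN / COMM_NULL_DELETE_FN
    // Open MPI:      MPI_COMM_NULL_COPY_FN / MPI_COMM_NULL_DELETE_FN
    return MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN,
                                  MPI_COMM_NULL_DELETE_FN,
                                  keyval,
                                  /* extra_state = */ nullptr);
}
```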



Fortran-Specific Issues

This section lists issues you should address if you have a Fortran application that uses Sun MPI and a previous version of Sun HPC ClusterTools. You might need to make changes to your code in order for it to work properly with Open MPI and Sun HPC ClusterTools 7.1.

f90 Interfaces

When compiling code for which you need the f90 interfaces, either use the mpif90 wrapper or add the following to your link line:

-lmpi_f90

Either the mpif90 wrapper or the -lmpi_f90 flag enables the application to compile and link correctly.

f90 Module Location

The f90 module mpi.mod has moved from /opt/SUNWhpc/include to /opt/SUNWhpc/lib. This move should not affect your program if you use the mpif90 compiler wrapper.

Datatypes

Four Fortran datatypes existed in previous versions of Sun HPC ClusterTools, but do not exist in Open MPI or Sun HPC ClusterTools 7.1. They are not defined in the MPI-2 standard. These datatypes are as follows:


Using MPI_Accumulate on User-Defined Types

Open MPI currently does not support MPI_Accumulate on user-defined types. If you try to use MPI_Accumulate in a program with a user-defined type, the following error message is displayed:


MPI_Accumulate currently does not support reductions with any user-defined types. This will be rectified in a future release.
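A sketch of the restriction follows: MPI_Accumulate with a predefined datatype such as MPI_INT works, while the same call with a user-defined datatype triggers the error message above. The window setup is omitted and all names are illustrative:

```c++
#include <mpi.h>

void accumulate_example(MPI_Win win, int target_rank)
{
    int contribution[4] = {1, 2, 3, 4};

    // Supported: a predefined datatype (MPI_INT) with a predefined op.
    MPI_Accumulate(contribution, 4, MPI_INT,
                   target_rank, /* target_disp = */ 0, 4, MPI_INT,
                   MPI_SUM, win);

    // Not supported in this release: the same call with a user-defined
    // datatype (for example, one built with MPI_Type_contiguous and
    // MPI_Type_commit) produces the error message shown above.
}
```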


Using Non-Default Error Handlers

In Sun MPI, a non-default error handler persisted past the call to MPI_Finalize. Therefore, if you made an MPI call after MPI_Finalize, the non-default error handler was used. In Open MPI, the non-default error handler does not persist, and the default error handler is used instead. As a result, any MPI call made after MPI_Finalize causes the program to abort.
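The behavioral difference can be seen in a fragment like the following (illustrative only). Under Sun MPI, the MPI_ERRORS_RETURN handler would still be in effect after MPI_Finalize; under Open MPI, the post-finalize call runs under the default handler and aborts:

```c++
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    // Install a non-default error handler: errors return codes
    // to the caller instead of aborting the program.
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    MPI_Finalize();

    int rank;
    // Sun MPI:  returns an error code (MPI_ERRORS_RETURN persists).
    // Open MPI: the default error handler is back in effect, so this
    //           erroneous post-finalize call aborts the program.
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    return 0;
}
```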


Error Codes

The following table lists error codes that are unique to Sun HPC ClusterTools 6. Only one of these error codes (MPI_ERR_RESOURCES) has an Open MPI equivalent in Sun HPC ClusterTools 7.1 software.


TABLE 2-3 Error Codes Not Defined in the MPI-2 Standard

Sun HPC ClusterTools 6 Error Code   Open MPI Equivalent    Error Class
MPI_ERR_TIMEDOUT                    None                   MPI_ERR_TIMEDOUT
MPI_ERR_RESOURCES                   MPI_ERR_SYSRESOURCES   MPI_ERR_RESOURCES
MPI_ERR_TRANSPORT                   None                   MPI_ERR_TRANSPORT
MPI_ERR_HANDSHAKE                   None                   MPI_ERR_HANDSHAKE



More Information About the Wrapper Compilers and About MCA Parameters

For more information about the wrapper compilers, their use, and troubleshooting, see the Open MPI FAQ at:

http://www.open-mpi.org/faq/?category=mpi-apps

For more information about MCA parameters, refer to the following:

http://www.open-mpi.org/faq/?category=tuning