CHAPTER 2

Migrating Sun HPC ClusterTools 6 Applications
This chapter describes changes you might need to make in your application code to recompile and run programs developed with Sun HPC ClusterTools 6 software (or a previous version of Sun HPC ClusterTools) in Sun HPC ClusterTools 7.1.
Recompile your applications with the mpicc compiler to make them compatible with Sun HPC ClusterTools 7.1 software. The tmcc compiler is no longer supported in Sun HPC ClusterTools 7.1, and there is no backward compatibility.
Sun HPC ClusterTools 7.1 supplies wrapper compilers for you to use instead of directly calling the compilers when compiling applications for use with the Sun HPC ClusterTools 7.1 software. These wrapper compilers do not actually perform the compilation and linking steps themselves, but they add the appropriate compiler and linker flags and call the compiler and linker.
Note - Using the wrapper compilers is strongly suggested. If you decide not to use them, the Open MPI Web site at http://www.open-mpi.org contains instructions about how to compile without using them.
The following wrapper compilers are available:

- mpicc (for C programs)
- mpiCC, mpicxx, or mpic++ (for C++ programs; mpiCC is for use on case-sensitive file systems only)
- mpif77 (for Fortran 77 programs)
- mpif90 and mpif95 (for Fortran 90/95 programs)
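For example, to compile and link a C program with the mpicc wrapper (the file name myprog.c is hypothetical):

    % mpicc -o myprog myprog.c

To see the complete compiler command line that a wrapper would invoke, without compiling anything, pass the -showme option supported by the Open MPI wrappers:

    % mpicc -showme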
For more information about the wrapper compilers, their use, and troubleshooting, see the Open MPI FAQ at:
http://www.open-mpi.org/faq/?category=mpi-apps
This section describes issues you should address if you have a C++ application using Sun MPI and a previous version of Sun HPC ClusterTools. You might need to make changes to your code in order for it to work properly with Open MPI and Sun HPC ClusterTools 7.1.
Sun MPI supports the mpi++.h header file for C++ programs; however, there is no mpi++.h file in Open MPI. If your application has a C++ test for Sun MPI, you must change the header file from mpi++.h to mpi.h for it to work with Open MPI.
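For example, only the include directive changes; the rest of the source is untouched (a minimal sketch):

    /* Sun MPI (ClusterTools 6 and earlier): */
    /* #include <mpi++.h>                    */

    /* Open MPI (ClusterTools 7.1):          */
    #include <mpi.h>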
The MPI-2 standard requires an MPI implementation to supply variants of the functions/macros NULL_COPY_FN, NULL_DUP_FN, and NULL_DELETE_FN that can be "specified from either C, C++, or Fortran." For C++ in Sun MPI, the definitions of these functions and macros differ slightly from the C and Fortran versions.
If your applications contain calls to any of the Sun MPI functions/macros listed in TABLE 2-2, you must change the function names to their Open MPI equivalents before your applications will work with Sun HPC ClusterTools 7.1. The following table lists the Sun MPI functions and their Open MPI equivalents.
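For reference, the standard MPI-2 C names for these callbacks are accepted by Open MPI. The following minimal C sketch uses them to create and free a communicator attribute keyval (the attribute value is arbitrary):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int keyval;

        MPI_Init(&argc, &argv);

        /* Standard MPI-2 C names for the no-op copy and delete callbacks. */
        MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN,
                               MPI_COMM_NULL_DELETE_FN,
                               &keyval, NULL);

        /* Attach an arbitrary attribute value to MPI_COMM_WORLD. */
        MPI_Comm_set_attr(MPI_COMM_WORLD, keyval, NULL);

        MPI_Comm_free_keyval(&keyval);
        MPI_Finalize();
        return 0;
    }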
This section lists issues you should address if you have a Fortran application using Sun MPI and a previous version of Sun HPC ClusterTools. You might need to make changes to your code in order for it to work properly with Open MPI and Sun HPC ClusterTools 7.1.
When compiling code for which you need the f90 interfaces, either use the mpif90 wrapper or add the following to your link line:

    -lmpif90

Either the mpif90 wrapper or the -lmpif90 flag enables the application to compile correctly.
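For example (file name hypothetical), the wrapper approach is simply:

    % mpif90 -o myprog myprog.f90

If you invoke the Fortran compiler directly instead, append -lmpif90 to your link line; running mpif90 -showme displays the complete set of flags the wrapper would have added.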
The f90 module mpi.mod has moved from /opt/SUNWhpc/include to /opt/SUNWhpc/lib. This should not affect your program if you make use of the compiler wrapper mpif95.
Four Fortran datatypes existed in previous versions of Sun HPC ClusterTools, but do not exist in Open MPI or Sun HPC ClusterTools 7.1. They are not defined in the MPI-2 standard. These datatypes are as follows:
Open MPI currently does not support MPI_Accumulate on user-defined types. If you try to use MPI_Accumulate in a program with a user-defined type, the following error message is displayed:
MPI_Accumulate currently does not support reductions with any user-defined types. This will be rectified in a future release.
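Accumulating with predefined datatypes is unaffected. The following minimal C sketch (window layout and values are arbitrary) uses MPI_INT, which works; substituting a user-defined datatype for MPI_INT triggers the error shown above:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, buf = 0, one = 1;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Expose one int per process as an RMA window. */
        MPI_Win_create(&buf, sizeof(int), sizeof(int), MPI_INFO_NULL,
                       MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);
        /* Predefined type (MPI_INT): supported. A user-defined
           origin/target datatype here is what Open MPI rejects. */
        MPI_Accumulate(&one, 1, MPI_INT, 0, 0, 1, MPI_INT, MPI_SUM, win);
        MPI_Win_fence(0, win);

        if (rank == 0)
            printf("accumulated value: %d\n", buf);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }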
In Sun MPI, a non-default error handler persisted past the call to MPI_Finalize, so it was still used for any MPI call made after MPI_Finalize. In Open MPI, the non-default error handler does not persist; the default error handler is used instead, so any MPI call made after MPI_Finalize causes the program to abort.
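The following minimal C sketch illustrates the behavioral difference (the post-MPI_Finalize call is shown only to illustrate the change; such calls are not portable):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* Install a non-default error handler. */
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

        MPI_Finalize();

        /* Sun MPI: MPI_ERRORS_RETURN still applies, so this call returns
           an error code. Open MPI: the default error handler is back in
           effect, so the program aborts here. */
        MPI_Barrier(MPI_COMM_WORLD);
        return 0;
    }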
The following table lists error codes that are unique to Sun HPC ClusterTools 6. Only one of these error codes has an Open MPI equivalent in Sun HPC ClusterTools 7.1 software.
For more information about MCA parameters, refer to the Open MPI Web site at http://www.open-mpi.org.
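For example, MCA parameters can be set on the mpirun command line or listed with the ompi_info utility (the parameter names and program name below are illustrative):

    % mpirun --mca btl tcp,self -np 4 myprog
    % ompi_info --param btl tcp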