Setting Up for the Tutorial

The Performance Analyzer works with several implementations of the Message Passing Interface (MPI) standard, including the Oracle Message Passing Toolkit, a highly optimized implementation of Open MPI for Oracle Sun x86 and SPARC-based systems.


Note - The Oracle Message Passing Toolkit product was formerly named Sun HPC ClusterTools, and you might see both names in Oracle web pages and documentation. The last version of Sun HPC ClusterTools is version 8.2.1c. The next release of the product, Oracle Message Passing Toolkit 9, is available for Oracle Solaris 11 in the software package repository.


You can see a list of the MPI versions that work with the Performance Analyzer by running the collect command with no arguments.
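
For example (a minimal sketch; this assumes the Oracle Solaris Studio bin directory is already on your PATH, and the exact usage text varies by release):

% collect

With no arguments, collect prints its usage message, which includes the MPI versions accepted by its -M option.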

This tutorial explains how to use the Performance Analyzer on a sample MPI application called ring_c.

You must already have a cluster configured and functioning for this tutorial.

Obtaining MPI Software

Although this tutorial uses MPI software from Oracle, you can also use Open MPI, available at the Open MPI web site.

Information about the Oracle Message Passing Toolkit software is available at http://www.oracle.com/us/products/tools/message-passing-toolkit-070499.html. The site provides links for downloading the software, but use the detailed instructions in the following sections instead.

MPI Software for Oracle Solaris 10 and Linux

You can use Sun HPC ClusterTools 8 and its updates on Oracle Solaris 10 or Linux for this tutorial. See the instructions below for information about downloading Sun HPC ClusterTools 8.2.1c.

To download Sun HPC ClusterTools for Oracle Solaris 10 and Linux:

  1. Log in to My Oracle Support. You must have a support contract for Oracle Solaris or Oracle Solaris Studio in order to download the software.

  2. Type ClusterTools in the search box and click the search button.

  3. Click Patches in the Refine Search area on the left side of the page.

  4. Click HPC-CT-8.2.1C-G-F - HPC ClusterTools 8.2.1c.

  5. Click the Download button, and follow instructions in the dialog box to download a compressed file containing the ClusterTools software.

  6. After the file is downloaded, extract it and navigate to the appropriate platform directory for your system. If necessary, extract additional files in this directory.

  7. Install the software as described in the Sun HPC ClusterTools 8.2.1c Software Installation Guide, which is available as a PDF in the extracted directory. Note that the names of the top-level directories described in the manual might not match the paths used in the extracted directories, but the subdirectories and program names are the same.

  8. Add the /Studio-installation/bin and /ClusterTools-installation/bin directories to your path, as in the following example.
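
    A minimal sketch in Bourne-shell syntax, assuming the typical default installation directories /opt/solarisstudio12.3 and /opt/SUNWhpc/HPC8.2.1c/sun (substitute the actual installation paths on your system):

    % PATH=/opt/solarisstudio12.3/bin:/opt/SUNWhpc/HPC8.2.1c/sun/bin:$PATH
    % export PATH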

MPI Software for Oracle Solaris 11

If you are running Oracle Solaris 11, the Oracle Message Passing Toolkit is made available as part of the Oracle Solaris 11 release under the package name openmpi-15. See the article How to Compile and Run MPI Programs on Oracle Solaris 11 for information about installing and using the Oracle Message Passing Toolkit.

See the manual Adding and Updating Oracle Solaris 11 Software Packages in the Oracle Solaris 11 documentation library for general information about installing software in Oracle Solaris 11.
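
For example, you can install the package from the configured package repository with the pkg command (a minimal sketch; this assumes your system is configured with a publisher that provides the package and that you have the required privileges):

% pkg install openmpi-15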


Note - The sample code used in this tutorial is not provided in the Oracle Solaris 11 package openmpi-15. If you install this version, you must obtain the sample code separately, as described in Sample Code for Oracle Message Passing Toolkit in Oracle Solaris 11. You can also download Open MPI to get the sample code.


Prepare the Sample Source Code

See the section below that applies to your MPI software for instructions on preparing the sample source code.

Sample Code for ClusterTools and Open MPI

If you have either Sun HPC ClusterTools or Open MPI, you must make a writable copy of the examples directory in a location that is accessible from all the cluster nodes.

For example:
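
A minimal sketch, assuming the typical default ClusterTools installation directory /opt/SUNWhpc/HPC8.2.1c/sun and a directory /shared that is mounted on all of the cluster nodes (substitute the paths used at your site):

% cp -r /opt/SUNWhpc/HPC8.2.1c/sun/examples /shared/examples
% cd /shared/examples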

Sample Code for Oracle Message Passing Toolkit in Oracle Solaris 11

The sample code is not included in the Oracle Solaris 11 openmpi-15 package. The source code for ring_c.c and the Makefile for building the program are shown in Appendix B, Sample Code for the Tutorial. You can copy the text of the files and create ring_c.c and the Makefile yourself.

To create the files:

  1. Make a directory called examples in a location that is accessible from all the cluster nodes.

  2. Copy the source code for ring_c.c and the Makefile from Appendix B, Sample Code for the Tutorial, and paste the text into a text editor.

  3. Save the files in your examples directory.
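
A minimal sketch of these steps, assuming /shared is a file system mounted on all of the cluster nodes (a hypothetical path; use any location that fits your site):

% mkdir /shared/examples
% cd /shared/examples

Create ring_c.c and Makefile in this directory with your text editor, pasting in the text from Appendix B.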

Compile and Run the Sample Program

To compile and run the sample program:

  1. Change directory to your new examples directory.

  2. Build the ring_c example.

    % make ring_c
    mpicc -g -o ring_c ring_c.c

    The program is compiled with the -g option, which allows the Performance Analyzer data collector to map MPI events to source code.

  3. Run the ring_c example with mpirun to make sure it works correctly. The ring_c program simply passes a message from process to process in a ring, then terminates; a condensed sketch of this ring logic appears after these steps.

    This example shows how to run the program on a two-node cluster. The node names are specified in a host file, along with the number of slots that are to be used on each node. The tutorial uses 25 processes, and specifies one slot on each host. You should specify a number of processes and slots that is appropriate for your system. See the mpirun(1) man page for more information about specifying hosts and slots. You can also run this command on a standalone host that isn't part of a cluster, but the results might be less educational.

    The host file for this example is called clusterhosts and contains the following content:

    hostA slots=1
    hostB slots=1

    You must be able to use a remote shell (ssh or rsh) to reach each host without being prompted for a password (a sketch of key-based ssh setup appears at the end of this section). By default, mpirun uses ssh.

    % mpirun -np 25 --hostfile clusterhosts ring_c
    Process 0 sending 10 to 1, tag 201 (25 processes in ring)
    Process 0 sent to 1
    Process 0 decremented value: 9
    Process 0 decremented value: 8
    Process 0 decremented value: 7
    Process 0 decremented value: 6
    Process 0 decremented value: 5
    Process 0 decremented value: 4
    Process 0 decremented value: 3
    Process 0 decremented value: 2
    Process 0 decremented value: 1
    Process 0 decremented value: 0
    Process 0 exiting
    Process 1 exiting
    Process 2 exiting
    .
    .
    .
    Process 24 exiting

    Run this command, and if you get similar output, you are ready to collect data on the example application as shown in the next section.
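
The following condensed C sketch shows the ring logic behind the output above. It abbreviates the full ring_c.c source given in Appendix B, Sample Code for the Tutorial, so names and messages differ slightly from the real program:

    /* Condensed sketch of the ring pattern in ring_c.c: rank 0 injects a
       counter, each rank forwards it to (rank + 1) % size, and rank 0
       decrements it once per lap until it reaches zero. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size, next, prev, message, tag = 201;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        next = (rank + 1) % size;          /* neighbor the message goes to */
        prev = (rank + size - 1) % size;   /* neighbor it arrives from */

        if (rank == 0) {
            message = 10;                  /* number of laps around the ring */
            MPI_Send(&message, 1, MPI_INT, next, tag, MPI_COMM_WORLD);
        }

        /* Pass the message around the ring; everyone exits after
           forwarding a zero. */
        while (1) {
            MPI_Recv(&message, 1, MPI_INT, prev, tag, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            if (rank == 0) {
                message--;
                printf("Process 0 decremented value: %d\n", message);
            }
            MPI_Send(&message, 1, MPI_INT, next, tag, MPI_COMM_WORLD);
            if (message == 0) {
                printf("Process %d exiting\n", rank);
                break;
            }
        }

        /* The final zero is still in flight to rank 0; absorb it. */
        if (rank == 0)
            MPI_Recv(&message, 1, MPI_INT, prev, tag, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

        MPI_Finalize();
        return 0;
    }

Because every rank blocks in MPI_Recv until its neighbor sends, the program generates the kind of send and receive events, and the associated wait time, that the MPI Timeline and MPI Chart tabs display later in this tutorial.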

If you have problems with mpirun connecting over ssh, try adding the --mca plm_rsh_agent rsh option to the mpirun command so that it connects using rsh instead:

% mpirun -np 25 --hostfile clusterhosts --mca plm_rsh_agent rsh -- ring_c
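
If ssh itself prompts for a password when you connect to a remote host, set up key-based authentication before running mpirun. A minimal sketch using standard OpenSSH commands, with hostB standing in for each remote node (run ssh-keygen once, accepting the defaults, then copy the public key to every node):

% ssh-keygen -t rsa
% cat ~/.ssh/id_rsa.pub | ssh hostB 'cat >> ~/.ssh/authorized_keys'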