Sun Studio 12: Performance Analyzer

Chapter 3 Collecting Performance Data

The first stage of performance analysis is data collection. This chapter describes what is required for data collection, where the data is stored, how to collect data, and how to manage the data collection. For more information about the data itself, see Chapter 2, Performance Data.

This chapter covers the following topics:

    Compiling and Linking Your Program
    Preparing Your Program for Data Collection and Analysis
    Program Control of Data Collection
    Limitations on Data Collection
    Where the Data Is Stored
    Estimating Storage Requirements
    Collecting Data
    Collecting Data Using the collect Command
    Collecting Data Using the dbx collector Subcommands
    Collecting Data From a Running Process With dbx
    Collecting Tracing Data From a Running Program
    Collecting Data From MPI Programs

Compiling and Linking Your Program

You can collect and analyze data for a program compiled with almost any option, but some choices affect what you can collect or what you can see in the Performance Analyzer. The issues that you should take into account when you compile and link your program are described in the following subsections.

Source Code Information

To see source code in annotated Source and Disassembly analyses, and source lines in the Lines analyses, you must compile the source files of interest with the -g compiler option (-g0 for C++ to enable front-end inlining) to generate debug symbol information. The format of the debug symbol information can be either DWARF2 or stabs, as specified by -xdebugformat=(dwarf|stabs). The default debug format is dwarf.

To prepare compilation objects with debug information that allows dataspace profiles, currently only for SPARC® processors, compile by specifying -xhwcprof -xdebugformat=dwarf and any level of optimization. (Currently, this functionality is not available without optimization.) To see program data objects in Data Objects analyses, also add -g (or -g0 for C++) to obtain full symbolic information.
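
For example, a hypothetical compile line that prepares a C source file for dataspace profiling with full symbolic information (the file name and optimization level are placeholders) might look like the following.

% cc -c -g -xO4 -xhwcprof -xdebugformat=dwarf myfile.c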

Executables and libraries built with DWARF format debugging symbols automatically include a copy of each constituent object file’s debugging symbols. Executables and libraries built with stabs format debugging symbols also include a copy of each constituent object file’s debugging symbols if they are linked with the -xs option, which leaves stabs symbols in the various object files as well as the executable. The inclusion of this information is particularly useful if you need to move or remove the object files. With all of the debugging symbols in the executables and libraries themselves, it is easier to move the experiment and the program-related files to a new location.

Static Linking

When you compile your program, do not disable dynamic linking, which the -dn and -Bstatic compiler options do. If you try to collect data for a program that is entirely statically linked, the Collector prints an error message and does not collect data, because the collector library, among others, is dynamically loaded when you run the Collector.

Do not statically link any of the system libraries. If you do, you might not be able to collect any kind of tracing data. Also, do not link to the Collector library, libcollector.so.

Optimization at Compile Time

If you compile your program with optimization turned on at some level, the compiler can rearrange the order of execution so that it does not strictly follow the sequence of lines in your program. The Performance Analyzer can analyze experiments collected on optimized code, but the data it presents at the disassembly level is often difficult to relate to the original source code lines. In addition, the call sequence can appear to be different from what you expect if the compiler performs tail-call optimizations. See Tail-Call Optimization for more information.

Compiling Java Programs

No special action is required for compiling Java programs with the javac command.

Preparing Your Program for Data Collection and Analysis

You do not need to do anything special to prepare most programs for data collection and analysis. You should read one or more of the subsections below if your program does any of the following:

Also, if you want to control data collection from your program, you should read the relevant subsection.

Using Dynamically Allocated Memory

Many programs rely on dynamically allocated memory, using features such as:

You must take care to ensure that a program does not rely on the initial contents of dynamically allocated memory, unless the memory allocation method is explicitly documented as setting an initial value: for example, compare the descriptions of calloc and malloc in the man page for malloc(3C).
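
The following minimal C sketch illustrates the difference: reading the calloc block is well defined, but reading the malloc block before writing to it is exactly the kind of latent bug described below.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *zeroed = calloc(10, sizeof(int)); /* documented to zero-fill */
    int *raw = malloc(10 * sizeof(int));   /* initial contents undefined */

    if (zeroed == NULL || raw == NULL)
        return 1;

    printf("zeroed[0] = %d\n", zeroed[0]); /* always prints 0 */
    /* Reading raw[0] here would rely on undefined initial contents: the
       program might appear to work when run alone, yet fail under the
       performance tools, which can change what that memory happens to hold. */

    free(zeroed);
    free(raw);
    return 0;
}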

Occasionally, a program that uses dynamically allocated memory might appear to work correctly when run alone, but might fail when run with performance data collection enabled. Symptoms might include unexpected floating point behavior, segmentation faults, or application-specific error messages.

Such behavior might occur if the uninitialized memory is, by chance, set to a benign value when the application is run alone, but is set to a different value when the application is run in conjunction with the performance data collection tools. In such cases, the performance tools are not at fault. Any application that relies on the contents of dynamically allocated memory has a latent bug: an operating system is at liberty to provide any content whatsoever in dynamically allocated memory, unless explicitly documented otherwise. Even if an operating system happens to always set dynamically allocated memory to a certain value today, such latent bugs might cause unexpected behavior with a later revision of the operating system, or if the program is ported to a different operating system in the future.

The following tools may help in finding such latent bugs:

Using System Libraries

The Collector interposes on functions from various system libraries, to collect tracing data and to ensure the integrity of data collection. The following list describes situations in which the Collector interposes on calls to library functions.

Under some circumstances the interposition does not succeed:

The failure of interposition by the Collector can cause loss or invalidation of performance data.

Using Signal Handlers

The Collector uses two signals to collect profiling data: SIGPROF for all experiments and SIGEMT for hardware counter experiments only. The Collector installs a signal handler for each of these signals. The signal handler intercepts and processes its own signal, but passes other signals on to any other signal handlers that are installed. If a program installs its own signal handler for these signals, the Collector reinstalls its signal handler as the primary handler to guarantee the integrity of the performance data.

The collect command can also use user-specified signals for pausing and resuming data collection and for recording samples. These signals are not protected by the Collector, although a warning is written to the experiment if a user handler is installed. It is your responsibility to ensure that there is no conflict between the Collector's use of the specified signals and any use the application makes of the same signals.

The signal handlers installed by the Collector set a flag that ensures that system calls are not interrupted for signal delivery. This flag setting could change the behavior of the program if the program’s signal handler sets the flag to permit interruption of system calls. One important example of a change in behavior occurs for the asynchronous I/O library, libaio.so, which uses SIGPROF for asynchronous cancel operations, and which does interrupt system calls. If the collector library, libcollector.so, is installed, the cancel signal invariably arrives too late to cancel the asynchronous I/O operation.

If you attach dbx to a process without preloading the collector library and enable performance data collection, and the program subsequently installs its own signal handler, the Collector does not reinstall its own signal handler. In this case, the program’s signal handler must ensure that the SIGPROF and SIGEMT signals are passed on so that performance data is not lost. If the program’s signal handler interrupts system calls, both the program behavior and the profiling behavior are different from when the collector library is preloaded.

Using setuid

Restrictions enforced by the dynamic loader make it difficult to use setuid(2) and collect performance data. If your program calls setuid or executes a setuid file, it is likely that the Collector cannot write an experiment file because it lacks the necessary permissions for the new user ID.

You can work around this issue by setting your umask so that any user or group IDs under which the process may run have write permission for the experiments.

Program Control of Data Collection

If you want to control data collection from your program, the Collector shared library, libcollector.so, contains API functions that you can use. The functions are written in C. A Fortran interface is also provided. Both C and Fortran interfaces are defined in header files that are provided with the library.

The API functions are defined as follows.


void collector_sample(char *name);
void collector_pause(void);
void collector_resume(void);
void collector_thread_pause(unsigned int t);
void collector_thread_resume(unsigned int t);
void collector_terminate_expt(void);
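
For example, a program might bracket a region of interest with these calls. This is only a sketch: run_workload is a placeholder for application code, and the header and linking requirements are described in the subsections that follow.

#include "collectorAPI.h" /* declares the collector_* functions; see below */

extern void run_workload(void); /* hypothetical application function */

int main(void)
{
    collector_pause();                 /* do not profile initialization */
    /* ... setup that is not of interest ... */
    collector_resume();                /* profile only the workload */
    run_workload();
    collector_sample("workload-done"); /* record a labeled sample point */
    collector_terminate_expt();        /* close the experiment early */
    return 0;
}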

Similar functionality is provided for Java programs by the CollectorAPI class, which is described in The Java Interface.

The C and C++ Interface

There are two ways to access the C and C++ interface: include collectorAPI.h and link with -lcollectorAPI, or include libcollector.h, which defines macros that call the API functions. The trade-offs between the two header files are discussed below.

The Fortran Interface

The Fortran API libfcollector.h file defines the Fortran interface to the library. The application must be linked with -lcollectorAPI to use this library. (An alternate name for the library, -lfcollector, is provided for backward compatibility.) The Fortran API provides the same features as the C and C++ API, excluding the dynamic function and thread pause and resume calls.

Insert the following statement to use the API functions for Fortran:


include "libfcollector.h"

Note –

Do not link a program in any language with -lcollector. If you do, the Collector can exhibit unpredictable behavior.


The Java Interface

Use the following statement to import the CollectorAPI class and access the Java API. Note, however, that your application must be invoked with a classpath pointing to installation-directory/lib/collector.jar, where installation-directory is the directory in which the Sun Studio software is installed.


import com.sun.forte.st.collector.CollectorAPI;

The Java CollectorAPI methods are defined as follows:


CollectorAPI.sample(String name)
CollectorAPI.pause()
CollectorAPI.resume()
CollectorAPI.threadPause(Thread thread)
CollectorAPI.threadResume(Thread thread)
CollectorAPI.terminate()

The Java API includes the same functions as the C and C++ API, excluding the dynamic function API.

The C include file libcollector.h contains macros that bypass the calls to the real API functions if data is not being collected. In this case the functions are not dynamically loaded. However, using these macros is risky because the macros do not work well under some circumstances. It is safer to use collectorAPI.h because it does not use macros. Rather, it refers directly to the functions.

The Fortran API subroutines call the C API functions if performance data is being collected, otherwise they return. The overhead for the checking is very small and should not significantly affect program performance.

To collect performance data you must run your program using the Collector, as described later in this chapter. Inserting calls to the API functions does not enable data collection.

If you intend to use the API functions in a multithreaded program, you should ensure that they are only called by one thread. With the exception of collector_thread_pause() and collector_thread_resume(), the API functions perform actions that apply to the process and not to individual threads. If each thread calls the API functions, the data that is recorded might not be what you expect. For example, if collector_pause() or collector_terminate_expt() is called by one thread before the other threads have reached the same point in the program, collection is paused or terminated for all threads, and data can be lost from the threads that were executing code before the API call. To control data collection at the level of the individual threads, use the collector_thread_pause() and collector_thread_resume() functions. There are two viable ways of using these functions: by having one master thread make all the calls for all threads, including itself; or by having each thread make calls only for itself. Any other usage can lead to unpredictable results.

The C, C++, Fortran, and Java API Functions

The descriptions of the API functions follow.

Dynamic Functions and Modules

If your C or C++ program dynamically compiles functions into the data space of the program, you must supply information to the Collector if you want to see data for the dynamic function or module in the Performance Analyzer. The information is passed by calls to collector API functions. The definitions of the API functions are as follows.


void collector_func_load(char *name, char *alias,
    char *sourcename, void *vaddr, int size, int lntsize,
    Lineno *lntable);
void collector_func_unload(void *vaddr);

You do not need to use these API functions for Java methods that are compiled by the Java HotSpot virtual machine, for which a different interface is used. The Java interface provides the Collector with the name of each method that is compiled. You can see function data and annotated disassembly listings for Java compiled methods, but not annotated source listings.

The descriptions of the API functions follow.

collector_func_load()

Pass information about dynamically compiled functions to the Collector for recording in the experiment. The parameter list is described in the following table.

Table 3–1 Parameter List for collector_func_load()

name: The name of the dynamically compiled function that is used by the performance tools. The name does not have to be the actual name of the function, and need not follow any of the normal naming conventions of functions, although it should not contain embedded blanks or embedded quote characters.

alias: An arbitrary string used to describe the function. It can be NULL. It is not interpreted in any way, and can contain embedded blanks. It is displayed in the Summary tab of the Analyzer, and can be used to indicate what the function is or why the function was dynamically constructed.

sourcename: The path to the source file from which the function was constructed. It can be NULL. The source file is used for annotated source listings.

vaddr: The address at which the function was loaded.

size: The size of the function in bytes.

lntsize: A count of the number of entries in the line number table. It should be zero if line number information is not provided.

lntable: A table containing lntsize entries, each of which is a pair of integers. The first integer is an offset, and the second is a line number. All instructions between an offset in one entry and the offset given in the next entry are attributed to the line number given in the first entry. Offsets must be in increasing numeric order, but the order of line numbers is arbitrary. If lntable is NULL, no source listings of the function are possible, although disassembly listings are available.

collector_func_unload()

Inform the Collector that the dynamic function at the address vaddr has been unloaded.
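
As an illustration, a just-in-time compiler might register and unregister its generated code as follows. This sketch assumes only what Table 3–1 states, namely that each Lineno entry is a pair of integers (offset, line number); all names, paths, and values are hypothetical.

/* Hypothetical registration of a dynamically generated function. */
static Lineno lines[] = {
    { 0, 42 },  /* instructions at offsets 0..31 map to source line 42 */
    { 32, 43 }, /* instructions from offset 32 onward map to line 43 */
};

void register_generated_code(void *code_addr, int code_size)
{
    collector_func_load("query_17_eval",          /* name shown by the tools */
                        "generated for query 17", /* free-form description */
                        "/home/user/gen/query.c", /* hypothetical source path */
                        code_addr, code_size,
                        2, lines);
}

void unregister_generated_code(void *code_addr)
{
    collector_func_unload(code_addr); /* the code at code_addr is gone */
}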

Limitations on Data Collection

This section describes the limitations on data collection that are imposed by the hardware, the operating system, the way you run your program, or by the Collector itself.

There are no limitations on simultaneous collection of different data types: you can collect any data type with any other data type.

Limitations on Clock-Based Profiling

The minimum value of the profiling interval and the resolution of the clock used for profiling depend on the particular operating environment. The maximum value is set to 1 second. The value of the profiling interval is rounded down to the nearest multiple of the clock resolution. The minimum and maximum value and the clock resolution can be found by typing the collect command with no arguments.

Runtime Distortion and Dilation With Clock-Based Profiling

Clock-based profiling records data when a SIGPROF signal is delivered to the target. Processing that signal and unwinding the call stack dilate the run: the deeper the call stack and the more frequent the signals, the greater the dilation. To a limited extent, clock-based profiling also shows distortion, because the parts of the program that execute with the deepest stacks incur the most dilation.

Where possible, a default value is set not to an exact number of milliseconds, but to slightly more or less than an exact number (for example, 10.007 ms or 0.997 ms) to avoid correlations with the system clock, which can also distort the data. Take the same precaution when you set a custom value on SPARC platforms; doing so is not possible on Linux platforms.

Limitations on Collection of Tracing Data

You cannot collect any kind of tracing data from a program that is already running unless the Collector library, libcollector.so, has been preloaded. See Collecting Tracing Data From a Running Program for more information.

Runtime Distortion and Dilation with Tracing

Tracing data dilates the run in proportion to the number of events that are traced. If tracing is combined with clock-based profiling, the clock profiling data is distorted by the dilation that the tracing events induce.

Limitations on Hardware Counter Overflow Profiling

Hardware counter overflow profiling has several limitations:

Runtime Distortion and Dilation With Hardware Counter Overflow Profiling

Hardware counter overflow profiling records data when a SIGEMT signal is delivered to the target. Processing that signal and unwinding the call stack dilate the run. Unlike clock-based profiling, some hardware counters generate events much more rapidly in some parts of the program than in others, so those parts show more dilation, and any part of the program that generates such events very rapidly might be significantly distorted. Similarly, some events might be generated in one thread disproportionately to the other threads.

Limitations on Data Collection for Descendant Processes

You can collect data on descendant processes subject to some limitations.

If you want to collect data for all descendant processes that are followed by the Collector, you must use the collect command with one of the following options:

See Experiment Control Options for more information about the -F option.

Limitations on Java Profiling

You can collect data on Java programs subject to the following limitations:

Runtime Performance Distortion and Dilation for Applications Written in the Java Programming Language

Java profiling uses the Java Virtual Machine Tools Interface (JVMTI), which can cause some distortion and dilation of the run.

For clock-based profiling and hardware counter overflow profiling, the data collection process makes various calls into the JVM software and handles profiling events in signal handlers. The overhead of these routines and the cost of writing the experiments to disk dilate the runtime of the Java program. Such dilation is typically less than 10%.

For synchronization tracing, data collection uses other JVMTI events, which causes dilation in proportion to the amount of monitor contention in the application.

Where the Data Is Stored

The data collected during one execution of your application is called an experiment. The experiment consists of a set of files that are stored in a directory. The name of the experiment is the name of the directory.

In addition to recording the experiment data, the Collector creates its own archives of the load objects used by the program. These archives contain the addresses, sizes and names of each object file and each function in the load object, as well as the address of the load object and a time stamp for its last modification.

Experiments are stored by default in the current directory. If this directory is on a networked file system, storing the data takes longer than on a local file system, and can distort the performance data. You should always try to record experiments on a local file system if possible. You can specify the storage location when you run the Collector.

Experiments for descendant processes are stored inside the experiment for the founder process.

Experiment Names

The default name for a new experiment is test.1.er. The suffix .er is mandatory: if you give a name that does not have it, an error message is displayed and the name is not accepted.

If you choose a name with the format experiment.n.er, where n is a positive integer, the Collector automatically increments n by one in the names of subsequent experiments. For example, mytest.1.er is followed by mytest.2.er, mytest.3.er, and so on. The Collector also increments n if the experiment already exists, and continues to increment n until it finds an experiment name that is not in use. If the experiment name does not contain n and the experiment exists, the Collector prints an error message.

Experiments can be collected into groups. The group is defined in an experiment group file, which is stored by default in the current directory. The experiment group file is a plain text file with a special header line and an experiment name on each subsequent line. The default name for an experiment group file is test.erg. If the name does not end in .erg, an error is displayed and the name is not accepted. Once you have created an experiment group, any experiments you run with that group name are added to the group.

You can manually create an experiment group file by creating a plain text file whose first line is


#analyzer experiment group

and adding the names of the experiments on subsequent lines. The name of the file must end in .erg.
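
For example, a hypothetical group file named mytests.erg containing two experiments would look like the following.

#analyzer experiment group
mytest.1.er
mytest.2.er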

You can also create an experiment group by using the -g argument to the collect command.

The default experiment name is different for experiments collected from MPI programs, which create one experiment for each MPI process. The default experiment name is test.m.er, where m is the MPI rank of the process. If you specify an experiment group group.erg, the default experiment name is group.m.er. If you specify an experiment name, you override these defaults. See Collecting Data From MPI Programs for more information.

Experiments for descendant processes are named with their lineage as follows. To form the experiment name for a descendant process, an underscore, a code letter, and a number are added to the stem of its creator’s experiment name. The code letter is f for a fork, x for an exec, and c for combination. The number is the index of the fork or exec (whether successful or not). For example, if the experiment name for the founder process is test.1.er, the experiment for the child process created by the third call to fork is test.1.er/_f3.er. If that child process calls exec successfully, the experiment name for the new descendant process is test.1.er/_f3_x1.er.

Moving Experiments

If you want to move an experiment to another computer to analyze it, you should be aware of the dependencies of the analysis on the operating environment in which the experiment was recorded.

The archive files contain all the information necessary to compute metrics at the function level and to display the timeline. However, if you want to see annotated source code or annotated disassembly code, you must have access to versions of the load objects or source files that are identical to the ones used when the experiment was recorded.

The Performance Analyzer searches for the source, object and executable files in the following locations in turn, and stops when it finds a file of the correct basename:

You can change the search order or add other search directories from the Analyzer GUI or by using the setpath and addpath directives.

To ensure that you see the correct annotated source code and annotated disassembly code for your program, you can copy the source code, the object files, and the executable into the experiment before you move or copy the experiment. If you don’t want to copy the object files, you can link your program with -xs to ensure that the information on source lines and file locations is inserted into the executable. You can automatically copy the load objects into the experiment using the -A option of the collect command or the dbx collector archive command.

Estimating Storage Requirements

This section gives some guidelines for estimating the amount of disk space needed to record an experiment. The size of the experiment depends directly on the size of the data packets and the rate at which they are recorded, the number of LWPs used by the program, and the execution time of the program.

The data packets contain event-specific data and data that depends on the program structure (the call stack). The amount of event-specific data depends on the data type and is approximately 50 to 100 bytes. The call stack data consists of return addresses for each call, at 4 bytes per address, or 8 bytes per address for 64-bit executables. Data packets are recorded for each LWP in the experiment. Note that for Java programs, there are two call stacks of interest, the Java call stack and the machine call stack, which results in more data being written to disk.

The rate at which profiling data packets are recorded is controlled by the profiling interval for clock data and by the overflow value for hardware counter data. However, the choice of these parameters also affects the data quality and the distortion of program performance due to the data collection overhead. Smaller values of these parameters give better statistics but also increase the overhead. The default values of the profiling interval and the overflow value have been carefully chosen as a compromise between obtaining good statistics and minimizing the overhead. Smaller values also mean more data.

For a clock-based profiling experiment with a profiling interval of 10 ms and a small call stack, such that the packet size is 100 bytes, data is recorded at a rate of 10 kbytes/sec per LWP. For a hardware counter overflow profiling experiment collecting data for CPU cycles and instructions executed on a 750 MHz processor with an overflow value of 1000000 and a packet size of 100 bytes, data is recorded at a rate of 150 kbytes/sec per LWP. Applications that have call stacks with a depth of hundreds of calls could easily record data at ten times these rates.
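
These rates follow directly from the packet size and the event frequency: one 100-byte packet every 10 ms is 100 packets per second, or 10 kbytes/sec per LWP. A 750 MHz processor counting cycles with an overflow value of 1000000 overflows 750 times per second; assuming the instructions counter fires at a broadly similar rate (a workload-dependent assumption), the two counters together produce roughly 1500 packets per second, or about 150 kbytes/sec per LWP.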

Your estimate of the size of the experiment should also take into account the disk space used by the archive files, which is usually a small fraction of the total disk space requirement (see the previous section). If you are not sure how much space you need, try running your experiment for a short time. From this test you can obtain the size of the archive files, which are independent of the data collection time, and scale the size of the profile files to obtain an estimate of the size for the full-length experiment.

As well as allocating disk space, the Collector allocates buffers in memory to store the profile data before writing it to disk. Currently no way exists to specify the size of these buffers. If the Collector runs out of memory, try to reduce the amount of data collected.

If your estimate of the space required to store the experiment is larger than the space you have available, consider collecting data for part of the run rather than the whole run. You can collect data on part of the run with the collect command, with the dbx collector subcommands, or by inserting calls in your program to the collector API. You can also limit the total amount of profiling and tracing data collected with the collect command or with the dbx collector subcommands.


Note –

The Performance Analyzer cannot read more than 2 GB of performance data.


Collecting Data

You can collect performance data in either the standalone Performance Analyzer or the Analyzer window in the IDE in several ways:

The following data collection capabilities are available only with the Performance Tools Collect dialog and the collect command:

Collecting Data Using the collect Command

To run the Collector from the command line using the collect command, type the following.


% collect collect-options program program-arguments

Here, collect-options are the collect command options, program is the name of the program you want to collect data on, and program-arguments are the program's arguments.

If no collect-options are given, the default is to turn on clock-based profiling with a profiling interval of approximately 10 milliseconds.

To obtain a list of options and a list of the names of any hardware counters that are available for profiling, type the collect command with no arguments.


% collect

For a description of the list of hardware counters, see Hardware Counter Overflow Profiling Data. See also Limitations on Hardware Counter Overflow Profiling.

Data Collection Options

These options control the types of data that are collected. See What Data the Collector Collects for a description of the data types.

If you do not specify data collection options, the default is -p on, which enables clock-based profiling with the default profiling interval of approximately 10 milliseconds. The default is turned off by the -h option but not by any of the other data collection options.

If you explicitly disable clock-based profiling, and do not enable tracing or hardware counter overflow profiling, the collect command prints a warning message, and collects global data only.

-p option

Collect clock-based profiling data. The allowed values of option are:

Collecting clock-based profiling data is the default action of the collect command.

-h counter_definition_1 ...[,counter_definition_n]

Collect hardware counter overflow profiling data. The number of counter definitions is processor-dependent. This option is now available on systems running the Linux operating system if you have installed the perfctr patch, which you can download from http://user.it.uu.se/~mikpe/linux/perfctr/2.6/perfctr-2.6.15.tar.gz.

A counter definition can take one of the following forms, depending on whether the processor supports attributes for hardware counters.

[+]counter_name[/register_number][,interval]

[+]counter_name[~attribute_1=value_1]...[~attribute_n=value_n][/register_number][,interval]

The processor-specific counter_name can be one of the following:

If you specify more than one counter, they must use different registers. If they do not use different registers, the collect command prints an error message and exits. Some counters can count on either register.

To obtain a list of available counters, type collect with no arguments in a terminal window. A description of the counter list is given in the section Hardware Counter Lists.

If the hardware counter counts events that relate to memory access, you can prefix the counter name with a + sign to turn on searching for the true program counter address (PC) of the instruction that caused the counter overflow. This backtracking works on SPARC processors, and only with counters of type load, store, or load-store. If the search is successful, the virtual PC, the physical PC, and the effective address that was referenced are stored in the event data packet.

On some processors, attribute options can be associated with a hardware counter. If a processor supports attribute options, then running the collect command with no arguments lists the counter definitions including the attribute names. You can specify attribute values in decimal or hexadecimal format.

The interval (overflow value) is the number of events counted before the hardware counter overflows and the overflow event is recorded. The interval can be set to one of the following:

The default is the normal threshold, which is predefined for each counter and which appears in the counter list. See also Limitations on Hardware Counter Overflow Profiling.

If you use the -h option without explicitly specifying a -p option, clock-based profiling is turned off. To collect both hardware counter data and clock-based data, you must specify both a -h option and a -p option.
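
For example, the following hypothetical command line collects both kinds of data; the counter name and overflow value are placeholders, and the counters actually available are listed by typing collect with no arguments.

% collect -p on -h cycles,1000003 a.out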

-s option

Collect synchronization wait tracing data. The allowed values of option are:

Synchronization wait tracing data is not recorded for Java monitors.

-H option

Collect heap tracing data. The allowed values of option are:

Heap tracing is turned off by default. Heap tracing is not supported for Java programs; specifying it is treated as an error.

-m option

Collect MPI tracing data. The allowed values of option are:

MPI tracing is turned off by default.

See MPI Tracing Data for more information about the MPI functions whose calls are traced and the metrics that are computed from the tracing data.

-S option

Record sample packets periodically. The allowed values of option are:

By default, periodic sampling at 1 second intervals is enabled.

-c option

Record count data, for SPARC processors only.


Note –

This feature requires you to install the Binary Interface Tool (BIT), which is part of the Add-on Cool Tools for Sun Studio 12, available at http://cooltools.sunsource.net/. BIT is a tool for measuring performance or test suite coverage of SPARC binaries.


The allowed values of option are:

-r option

Collect data for data race detection or deadlock detection for the Thread Analyzer. The allowed values are:

For more information about the collect -r command and the Thread Analyzer, see the Sun Studio 12: Thread Analyzer User’s Guide and the tha(1) man page.

Experiment Control Options

-F option

Control whether or not descendant processes should have their data recorded. The allowed values of option are:

If you specify the -F on option, the Collector follows processes created by calls to the functions fork(2), fork1(2), fork(3F), vfork(2), and exec(2) and its variants. The call to vfork is replaced internally by a call to fork1.

If you specify the -F all option, the Collector follows all descendant processes including those created by calls to system(3C), system(3F), sh(3F), and popen(3C), and similar functions, and their associated descendant processes.

If you specify the -F '=regexp' option, the Collector follows all descendant processes whose name or lineage matches the specified regular expression. See the regexp(5) man page for information about regular expressions.

When you collect data on descendant processes, the Collector opens a new experiment for each descendant process inside the founder experiment. These new experiments are named by adding an underscore, a letter, and a number to the experiment suffix, as follows:

For example, if the experiment name for the initial process is test.1.er, the experiment for the child process created by its third fork is test.1.er/_f3.er. If that child process execs a new image, the corresponding experiment name is test.1.er/_f3_x1.er. If that child creates another process using a popen call, the experiment name is test.1.er/_f3_x1_c1.er.

The Analyzer and the er_print utility automatically read experiments for descendant processes when the founder experiment is read, but the experiments for the descendant processes are not selected for data display.

To select the data for display from the command line, specify the path name explicitly to either er_print or analyzer. The specified path must include the founder experiment name, and descendant experiment name inside the founder directory.

For example, here’s what you specify to see the data for the third fork of the test.1.er experiment:

er_print test.1.er/_f3.er

analyzer test.1.er/_f3.er

Alternatively, you can prepare an experiment group file with the explicit names of the descendant experiments in which you are interested.

To examine descendant processes in the Analyzer, load the founder experiment and select Filter Data from the View menu. A list of experiments is displayed with only the founder experiment checked. Uncheck it and check the descendant experiment of interest.


Note –

If the founder process exits while descendant processes are being followed, collection of data from descendants might continue. The founder experiment directory continues to grow accordingly.


-j option

Enable Java profiling when the target program is a JVM. The allowed values of option are:

The -j option is not needed if you want to collect data on a .class file or a .jar file, provided that the path to the java executable is in either the JDK_HOME environment variable or the JAVA_PATH environment variable. You can then specify the target program on the collect command line as the .class file or the .jar file, with or without the extension.

If you cannot define the path to java in the JDK_HOME or JAVA_PATH environment variables, or if you want to disable the recognition of methods compiled by the Java HotSpot virtual machine, you can use the -j option. If you use this option, the program specified on the collect command line must be a JVM whose version is not earlier than 1.5_03. The collect command verifies that program is a JVM and is an ELF executable; if it is not, the collect command prints an error message.

If you want to collect data using the 64-bit JVM, you must not use the -d64 option to java for a 32-bit JVM. If you do so, no data is collected. Instead, you must specify the path to the 64-bit JVM either in the program argument to the collect command or in one of the environment variables given in this section.
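
For example, a hypothetical invocation that profiles a Java application through an explicitly specified JVM (the path and class name are placeholders) might look like the following.

% collect -j on /usr/java/bin/java MyApp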

-J java_argument

Specify a single argument to be passed to the JVM used for profiling. If you specify the -J option, but do not specify Java profiling, an error is generated, and no experiment is run. The argument is passed as a single argument to the JVM. If multiple arguments are needed, do not use the -J option. Instead, specify the path to the JVM explicitly, use -j on, and add the arguments for the JVM after the path to the JVM on the collect command line.

-l signal

Record a sample packet when the signal named signal is delivered to the process.

You can specify the signal by the full signal name, by the signal name without the initial letters SIG, or by the signal number. Do not use a signal that is used by the program or that would terminate execution. Suggested signals are SIGUSR1 and SIGUSR2. Signals can be delivered to a process by the kill command.

If you use both the -l and the -y options, you must use different signals for each option.

If you use this option and your program has its own signal handler, you should make sure that the signal that you specify with -l is passed on to the Collector’s signal handler, and is not intercepted or ignored.

See the signal(3HEAD) man page for more information about signals.

-t duration

Specify a time range for data collection.

The duration can be specified as a single number, with an optional m or s suffix, to indicate the time in minutes or seconds at which the experiment should be terminated. By default, the duration is in seconds. The duration can also be specified as two such numbers separated by a hyphen, which causes data collection to pause until the first time elapses, and at that time data collection begins. When the second time is reached, data collection terminates. If the second number is a zero, data will be collected after the initial pause until the end of the program's run. Even if the experiment is terminated, the target process is allowed to run to completion.
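
For example, the following hypothetical command pauses data collection for the first 20 seconds of the run and terminates it at 120 seconds.

% collect -t 20-120 a.out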

-x

Leave the target process stopped on exit from the exec system call in order to allow a debugger to attach to it. If you attach dbx to the process, use the dbx commands ignore PROF and ignore EMT to ensure that collection signals are passed on to the collect command.

-y signal[ ,r]

Control recording of data with the signal named signal. Whenever the signal is delivered to the process, it switches between the paused state, in which no data is recorded, and the recording state, in which data is recorded. Sample points are always recorded, regardless of the state of the switch.

The signal can be specified by the full signal name, by the signal name without the initial letters SIG, or by the signal number. Do not use a signal that is used by the program or that would terminate execution. Suggested signals are SIGUSR1 and SIGUSR2. Signals can be delivered to a process by the kill(1) command.

If you use both the -l and the -y options, you must use different signals for each option.

When the -y option is used, the Collector is started in the recording state if the optional r argument is given, otherwise it is started in the paused state. If the -y option is not used, the Collector is started in the recording state.

If you use this option and your program has its own signal handler, make sure that the signal that you specify with -y is passed on to the Collector’s signal handler, and is not intercepted or ignored.

See the signal(3HEAD) man page for more information about signals.
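
For example, with the following hypothetical command the Collector starts in the recording state, and each delivery of SIGUSR1 (for example, by typing kill -USR1 pid in another shell) toggles recording off and on.

% collect -y SIGUSR1,r a.out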

Output Options

-o experiment_name

Use experiment_name as the name of the experiment to be recorded. The experiment_name string must end in the string “.er”; if not, the collect utility prints an error message and exits.

-d directory-name

Place the experiment in directory directory-name. This option only applies to individual experiments and not to experiment groups. If the directory does not exist, the collect utility prints an error message and exits. If a group is specified with the -g option, the group file is also written to directory-name.

-g group-name

Make the experiment part of experiment group group-name. If group-name does not end in .erg, the collect utility prints an error message and exits. If the group exists, the experiment is added to it. If group-name is not an absolute path, the experiment group is placed in the directory directory-name if a directory has been specified with -d, otherwise it is placed in the current directory.

-A option

Control whether or not load objects used by the target process should be archived or copied into the recorded experiment. The allowed values of option are:

If you expect to copy experiments to a different machine from the one on which they were recorded, or to read the experiments from a different machine, specify -A copy. Using this option does not copy any source files or object files into the experiment; you should ensure that those files are accessible on the machine to which you copy the experiment.

-L size

Limit the amount of profiling data recorded to size megabytes. The limit applies to the sum of the amounts of clock-based profiling data, hardware counter overflow profiling data, and synchronization wait tracing data, but not to sample points. The limit is only approximate, and can be exceeded.

When the limit is reached, no more profiling data is recorded but the experiment remains open until the target process terminates. If periodic sampling is enabled, sample points continue to be written.

The default limit on the amount of data recorded is 2000 Mbytes. This limit was chosen because the Performance Analyzer cannot process experiments that contain more than 2 Gbytes of data. To remove the limit, set size to unlimited or none.

-O file

Append all output from collect itself to the named file, but do not redirect the output from the spawned target. If file is set to /dev/null, suppress all output from collect, including any error messages.

Other Options

-C comment

Put the comment into the notes file for the experiment. You can supply up to ten -C options. The contents of the notes file are prepended to the experiment header.

-n

Do not run the target but print the details of the experiment that would be generated if the target were run. This option is a dry run option.

-R

Display the text version of the Performance Analyzer Readme in the terminal window. If the readme is not found, a warning is printed. No further arguments are examined, and no further processing is done.

-V

Print the current version of the collect command. No further arguments are examined, and no further processing is done.

-v

Print the current version of the collect command and detailed information about the experiment being run.

Collecting Data From a Running Process Using the collect Utility

In the Solaris OS only, the -P pid option can be used with the collect utility to attach to the process with the specified PID, and collect data from the process. The other options to the collect command are translated into a script for dbx, which is then invoked to collect the data. Only clock-based profile data (-p option) and hardware counter overflow profile data (-h option) can be collected. Tracing data is not supported.

If you use the -h option without explicitly specifying a -p option, clock-based profiling is turned off. To collect both hardware counter data and clock-based data, you must specify both a -h option and a -p option.

To Collect Data From a Running Process Using the collect Utility

  1. Determine the program’s process ID (PID).

    If you started the program from the command line and put it in the background, its PID will be printed to standard output by the shell. Otherwise you can determine the program’s PID by typing the following.


    % ps -ef | grep program-name
    
  2. Use the collect command to enable data collection on the process, and set any optional parameters.


    % collect -P pid collect-options
    

    The collector options are described in Data Collection Options. For information about clock-based profiling, see -p option. For information about hardware counter overflow profiling, see -h option.

Collecting Data Using the dbx collector Subcommands

This section shows how to run the Collector from dbx, and then explains each of the subcommands that you can use with the collector command within dbx.

To Run the Collector From dbx

  1. Load your program into dbx by typing the following command.


    % dbx program
    
  2. Use the collector command to enable data collection, select the data types, and set any optional parameters.


    (dbx) collector subcommand
    

    To get a listing of available collector subcommands, type:


    (dbx) help collector
    

    You must use one collector command for each subcommand.
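
    For example, a hypothetical sequence that enables clock-based profiling and heap tracing (assuming the on keyword enables each data type, as with the subcommands listed later in this section) is:

    (dbx) collector profile on
    (dbx) collector heaptrace on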

  3. Set up any dbx options you wish to use and run the program.

    If a subcommand is incorrectly given, a warning message is printed and the subcommand is ignored. A complete listing of the collector subcommands follows.

Data Collection Subcommands

The following subcommands control the types of data that are collected by the Collector. They are ignored with a warning if an experiment is active.

profile option

Controls the collection of clock-based profiling data. The allowed values for option are:

hwprofile option

Controls the collection of hardware counter overflow profiling data. If you attempt to enable hardware counter overflow profiling on systems that do not support it, dbx returns a warning message and the command is ignored. The allowed values for option are:

synctrace option

Controls the collection of synchronization wait tracing data. The allowed values for option are:

heaptrace option

Controls the collection of heap tracing data. The allowed values for option are:

By default, the Collector does not collect heap tracing data.

mpitrace option

Controls the collection of MPI tracing data. The allowed values for option are:

By default, the Collector does not collect MPI tracing data.

tha option

Collect data for data race detection or deadlock detection for the Thread Analyzer. The allowed values are:

For more information about the Thread Analyzer, see the Sun Studio 12: Thread Analyzer User’s Guide and the tha(1) man page.

sample option

Controls the sampling mode. The allowed values for option are:

By default, periodic sampling is enabled, with a sampling interval value of 1 second.

dbxsample { on | off }

Controls the recording of samples when dbx stops the target process. The meanings of the keywords are as follows:

By default, samples are recorded when dbx stops the target process.

Experiment Control Subcommands

disable

Disables data collection. If a process is running and collecting data, the experiment is terminated and data collection is disabled. If a process is running and data collection is already disabled, the subcommand is ignored with a warning. If no process is running, data collection is disabled for subsequent runs.

enable

Enables data collection. If a process is running but data collection is disabled, data collection is enabled and a new experiment is started. If a process is running and data collection is already enabled, the subcommand is ignored with a warning. If no process is running, data collection is enabled for subsequent runs.

You can enable and disable data collection as many times as you like during the execution of any process. Each time you enable data collection, a new experiment is created.

pause

Suspends the collection of data, but leaves the experiment open. Sample points are not recorded while the Collector is paused. A sample is generated prior to a pause, and another sample is generated immediately following a resume. This subcommand is ignored if data collection is already paused.

resume

Resumes data collection after a pause has been issued. This subcommand is ignored if data is being collected.

sample record name

Record a sample packet with the label name. The label is displayed in the Event tab of the Performance Analyzer.

Output Subcommands

The following subcommands define storage options for the experiment. They are ignored with a warning if an experiment is active.

archive mode

Set the mode for archiving the experiment. The allowed values for mode are:

If you intend to move the experiment to a different machine, or read it from another machine, you should enable the copying of load objects. If an experiment is active, the command is ignored with a warning. This command does not copy source files or object files into the experiment.

limit value

Limit the amount of profiling data recorded to value megabytes. The limit applies to the sum of the amounts of clock-based profiling data, hardware counter overflow profiling data, and synchronization wait tracing data, but not to sample points. The limit is only approximate, and can be exceeded.

When the limit is reached, no more profiling data is recorded but the experiment remains open and sample points continue to be recorded.

The default limit on the amount of data recorded is 2000 Mbytes. This limit was chosen because the Performance Analyzer cannot process experiments that contain more than 2 Gbytes of data. To remove the limit, set value to unlimited or none.

store option

Governs where the experiment is stored. This command is ignored with a warning if an experiment is active. The allowed values for option are:

Information Subcommands

show

Shows the current setting of every Collector control.

status

Reports on the status of any open experiment.

Collecting Data From a Running Process With dbx

The Collector allows you to collect data from a running process. If the process is already under the control of dbx, you can pause the process and enable data collection using the methods described in previous sections.

If the process is not under the control of dbx, the collect -P pid command can be used to collect data from a running process, as described in Collecting Data From a Running Process Using the collect Utility. You can also attach dbx to it, collect performance data, and then detach from the process, leaving it to continue. If you want to collect performance data for selected descendant processes, you must attach dbx to each process.

To Collect Data From a Running Process That Is Not Under the Control of dbx

  1. Determine the program’s process ID (PID).

    If you started the program from the command line and put it in the background, its PID will be printed to standard output by the shell. Otherwise you can determine the program’s PID by typing the following.


    % ps -ef | grep program-name
    
  2. Attach to the process.

    From dbx, type the following.


    (dbx) attach program-name pid
    

    If dbx is not already running, type the following.


    % dbx program-name pid
    

    Attaching to a running process pauses the process.

    See the manual, Sun Studio 12: Debugging a Program With dbx, for more information about attaching to a process.

  3. Start data collection.

    From dbx, use the collector command to set up the data collection parameters and the cont command to resume execution of the process.

  4. Detach from the process.

    When you have finished collecting data, pause the program and then detach the process from dbx.

    From dbx, type the following.


    (dbx) detach
    

Collecting Tracing Data From a Running Program

If you want to collect any kind of tracing data, you must preload the Collector library, libcollector.so, before you run your program, because the library provides wrappers to the real functions that enable data collection to take place. In addition, the Collector adds wrapper functions to other system library calls to guarantee the integrity of performance data. If you do not preload the Collector library, these wrapper functions cannot be inserted. See Using System Libraries for more information on how the Collector interposes on system library functions.

To preload libcollector.so, you must set both the name of the library and the path to the library using environment variables, as shown in the table below. Use the environment variable LD_PRELOAD to set the name of the library. Use the environment variables LD_LIBRARY_PATH, LD_LIBRARY_PATH_32, or LD_LIBRARY_PATH_64 to set the path to the library. LD_LIBRARY_PATH is used if the _32 and _64 variants are not defined. If you have already defined these environment variables, add new values to them.

Table 3–2 Environment Variable Settings for Preloading the libcollector.so Library

LD_PRELOAD: libcollector.so

LD_LIBRARY_PATH: /opt/SUNWspro/prod/lib/dbxruntime

LD_LIBRARY_PATH_32: /opt/SUNWspro/prod/lib/dbxruntime

LD_LIBRARY_PATH_64 (SPARC V9): /opt/SUNWspro/prod/lib/v9/dbxruntime

LD_LIBRARY_PATH_64 (AMD64): /opt/SUNWspro/prod/lib/amd64/dbxruntime

If your Sun Studio software is not installed in /opt/SUNWspro, ask your system administrator for the correct path. You can set the full path in LD_PRELOAD, but doing this can create complications when using SPARC V9 64-bit architecture.
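
For example, in a C shell you might type the following, assuming the default installation location; on 64-bit platforms, set LD_LIBRARY_PATH_64 with the appropriate path instead.

% setenv LD_PRELOAD libcollector.so
% setenv LD_LIBRARY_PATH /opt/SUNWspro/prod/lib/dbxruntime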


Note –

Remove the LD_PRELOAD and LD_LIBRARY_PATH settings after the run, so they do not remain in effect for other programs that are started from the same shell.


If you want to collect data from an MPI program that is already running, you must attach a separate instance of dbx to each process and enable the Collector for each process. When you attach dbx to the processes in an MPI job, each process is halted and restarted at a different time. The time difference could change the interaction between the MPI processes and affect the performance data you collect. To minimize this problem, one solution is to use the pstop(1) command to halt all the processes. However, once you attach dbx to the processes, you must restart them from dbx, and there is a timing delay in restarting the processes, which can affect the synchronization of the MPI processes. See also Collecting Data From MPI Programs.

Collecting Data From MPI Programs

The Collector can collect performance data from multi-process programs that use the Message Passing Interface (MPI) library. The Open MPI library is included in the Sun HPC ClusterTools 7 software, and the Sun MPI library is included in the ClusterTools 5 and ClusterTools 6 software. The ClusterTools software is available at http://www.sun.com/software/products/clustertools/.

To start the parallel jobs when using ClusterTools 7, use the Open Run-Time Environment (ORTE) command mpirun.

To start parallel jobs when using ClusterTools 5 or ClusterTools 6, use the Sun Cluster Runtime Environment (CRE) command mprun.

See the Sun HPC ClusterTools documentation for more information.

For information about MPI and the MPI standard, see the MPI web site http://www.mcs.anl.gov/mpi/. For more information about Open MPI, see the web site http://www.open-mpi.org/.

Because of the way MPI and the Collector are implemented, each MPI process records a separate experiment. Each experiment must have a unique name. Where and how the experiment is stored depends on the kinds of file systems that are available to your MPI job. Issues about storing experiments are discussed in the next section, Storing MPI Experiments.

To collect data from MPI jobs, you can either run the collect command under MPI or start dbx under MPI and use the dbx collector subcommands. These tasks are discussed in Running the collect Command Under MPI and Collecting Data by Starting dbx Under MPI.

Storing MPI Experiments

Because multiprocessing environments can be complex, you should be aware of some issues about storing MPI experiments when you collect performance data from MPI programs. These issues concern the efficiency of data collection and storage, and the naming of experiments. See Where the Data Is Stored for information on naming experiments, including MPI experiments.

Each MPI process that collects performance data creates its own experiment. When an MPI process creates an experiment, it locks the experiment directory. All other MPI processes must wait until the lock is released before they can use the directory. Thus, if you store the experiments on a file system that is accessible to all MPI processes, the experiments are created sequentially, but if you store the experiments on file systems that are local to each MPI process, the experiments are created concurrently.

Issues with experiment names and storage location can be avoided if you allow the Collector to create the experiment names. See the following section Default MPI Experiment Names.

Default MPI Experiment Names

If you do not specify an experiment name, the default experiment name is used. The Collector uses the MPI rank to construct an experiment name with the standard form experiment.m.er, where m is the MPI rank. The stem, experiment, is the stem of the experiment group name if you specify an experiment group; otherwise, it is test. The experiment names are unique, regardless of whether you use a common file system or a local file system. Thus, if you use a local file system to record the experiments and copy them to a common file system, you do not have to rename the experiments when you copy them, nor reconstruct any experiment group file. In most cases, you should allow the Collector to create the experiment names to ensure unique names across all file systems.

Specifying Non-Default MPI Experiment Names

If you store the experiments on a common file system and specify an experiment name in the standard format, experiment.n.er, each experiment is given a unique name because the value of n is incremented for each experiment. Experiments are numbered according to the order in which MPI processes obtain the lock on the experiment directory, and are not guaranteed to correspond to the MPI rank of the process. If you attach dbx to MPI processes in a running MPI job, experiment numbering is determined by the order of attachment.

If you store each experiment on its own local file system and specify an explicit experiment name, each experiment might receive that same name. For example, suppose you ran an MPI job across a cluster with four single-processor nodes labelled node0, node1, node2 and node3. Each node has a local disk called /scratch, and you store the experiments in directory username on this disk. The experiments created by the MPI job have the following full path names.


node0:/scratch/username/test.1.er
node1:/scratch/username/test.1.er
node2:/scratch/username/test.1.er
node3:/scratch/username/test.1.er

The full name including the node name is unique, but in each experiment directory there is an experiment named test.1.er. If you move the experiments to a common location after the MPI job is completed, you must make sure that the names remain unique. For example, to move these experiments to your home directory, which is assumed to be accessible from all nodes, and rename the experiments, type the following commands.


rsh node0 'er_mv /scratch/username/test.1.er test.0.er'
rsh node1 'er_mv /scratch/username/test.1.er test.1.er'
rsh node2 'er_mv /scratch/username/test.1.er test.2.er'
rsh node3 'er_mv /scratch/username/test.1.er test.3.er'

For large MPI jobs, you might want to move the experiments to a common location using a script. Do not use the UNIX® commands cp or mv; use er_cp or er_mv as shown in the example above, and described in Manipulating Experiments.
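
For example, a C shell script along the following lines could perform the moves for the four-node cluster above. This is a sketch only: it assumes the node names and the /scratch/username directory used above, a common directory /home/username that is accessible from all nodes, and that er_mv is on the remote PATH.

#!/bin/csh
# Move each node's experiment to the common directory,
# renumbering by node so that the names remain unique.
set i = 0
foreach node (node0 node1 node2 node3)
    rsh $node "er_mv /scratch/username/test.1.er /home/username/test.$i.er"
    @ i++
end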

If you do not know which local file systems are available to you, use the df -lk command or ask your system administrator. Always make sure that the experiments are stored in a directory that already exists, is uniquely defined, and is not in use for any other experiment. Also make sure that the file system has enough space for the experiments. See Estimating Storage Requirements for information on how to estimate the space needed.


Note –

If you copy or move experiments between computers or nodes, you cannot view the annotated source code or source lines in the annotated disassembly code unless you have access to the load objects and source files that were used to run the experiment, or to a copy with the same path and timestamp.


Running the collect Command Under MPI

To collect data with the collect command under the control of MPI, use the following syntax, as appropriate for your version of ClusterTools.

On Sun HPC ClusterTools 7:


% mpirun -np n collect [collect-arguments] program-name [program-arguments]

On Sun HPC ClusterTools 6 and earlier:


% mprun -np n collect [collect-arguments] program-name [program-arguments]

In each case, n is the number of processes to be created by MPI. This procedure creates n separate instances of collect, each of which records an experiment. Read the section Where the Data Is Stored for information on where and how to store the experiments.

To ensure that the sets of experiments from different MPI runs are stored separately, you can create an experiment group with the -g option for each MPI run. The experiment group should be stored on a file system that is accessible to all MPI processes. Creating an experiment group also makes it easier to load the set of experiments for a single MPI run into the Performance Analyzer. An alternative to creating a group is to specify a separate directory for each MPI run with the -d option.
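
For example, on ClusterTools 7 you might record a 16-process run into a single experiment group as follows (the group name run1.erg is illustrative; experiment group names end in .erg):

% mpirun -np 16 collect -g run1.erg a.out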

Collecting Data by Starting dbx Under MPI

To start dbx and collect data under the control of MPI, use the following syntax.

On Sun HPC ClusterTools 7:


% mpirun -np n dbx program-name < collection-script

On Sun HPC ClusterTools 6 or earlier:


% mprun -np n dbx program-name < collection-script

In each case, n is the number of processes to be created by MPI and collection-script is a dbx script that contains the commands necessary to set up and start data collection. This procedure creates n separate instances of dbx, each of which records an experiment on one of the MPI processes. If you do not define the experiment name, the experiment is labelled with the MPI rank. Read the section Storing MPI Experiments for information on where and how to store the experiments.

You can name the experiments with the MPI rank by using the collection script and a call to MPI_Comm_rank() in your program. For example, in a C program you would insert the following line.


ier = MPI_Comm_rank(MPI_COMM_WORLD, &me);

In a Fortran program you would insert the following line.


call MPI_Comm_rank(MPI_COMM_WORLD, me, ier)

If this call were inserted at line 17, for example, you could use a script like the following.


stop at 18
run program-arguments
rank=$[me]
collector enable
collector store filename experiment.$rank.er
cont
quit
 

Using collect With ppgsz

You can use collect with ppgsz(1) by running collect on the ppgsz command and specifying the -F on or -F all flag. The founder experiment is on the ppgsz executable itself and is uninteresting. If your path finds the 32-bit version of ppgsz, and the experiment is run on a system that supports 64-bit processes, the first thing ppgsz does is exec its 64-bit version, creating _x1.er. That executable then forks, creating _x1_f1.er.
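
For example, such a run might be started as follows (the page-size option and the target program a.out are illustrative; see ppgsz(1) for the option syntax):

% collect -F on ppgsz -o heap=4M a.out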

The child process attempts to exec the named target in the first directory on your path, then in the second, and so forth, until one of the exec attempts succeeds. If, for example, the third attempt succeeds, the first two descendant experiments are named _x1_f1_x1.er and _x1_f1_x2.er, and both are completely empty. The experiment on the target is the one from the successful exec, the third one in the example, and is named _x1_f1_x3.er, stored under the founder experiment. It can be processed directly by invoking the Analyzer or the er_print utility on test.1.er/_x1_f1_x3.er.
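
For example, assuming the default experiment name test.1.er:

% er_print test.1.er/_x1_f1_x3.er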

If the 64-bit ppgsz is the initial process, or if the 32-bit ppgsz is invoked on a 32-bit kernel, the fork child that execs the real target has its data in _f1.er, and the real target’s experiment is in _f1_x3.er, assuming the same path properties as in the example above.