Chapter 3

Collecting Performance Data

The first stage of performance analysis is data collection. This chapter describes what is required for data collection, where the data is stored, how to collect data, and how to manage the data collection. For more information about the data itself, see Chapter 2.

This chapter covers the following topics:

Compiling and Linking Your Program
Preparing Your Program for Data Collection and Analysis
Limitations on Data Collection
Where the Data Is Stored
Estimating Storage Requirements
Collecting Data
Collecting Data Using the collect Command
Collecting Data Using the dbx collector Subcommands
Collecting Data From a Running Process
Collecting Data From MPI Programs


Compiling and Linking Your Program

You can collect and analyze data for a program compiled with almost any option, but some choices affect what you can collect or what you can see in the Performance Analyzer. The issues that you should take into account when you compile and link your program are described in the following subsections.

Source Code Information

To see source code in annotated Source and Disassembly analyses, and source lines in the Lines analyses, you must compile the source files of interest with the -g compiler option (-g0 for C++ to enable front-end inlining) to generate debug symbol information. The format of the debug symbol information can be either stabs or DWARF2, as specified by -xdebugformat=(stabs|dwarf).

To prepare compilation objects with debug information that allows dataspace hardware counter profiles, currently only for the C compiler and C++ compiler for SPARC® processors, compile by specifying -xhwcprof -xdebugformat=dwarf and any level of optimization. (Currently, this functionality is not available without optimization.) To see program data objects in Data Objects analyses, also add -g (or -g0 for C++) to obtain full symbolic information.

Executables and libraries built with DWARF format debugging symbols automatically include a copy of each constituent object file's debugging symbols. Executables and libraries built with stabs format debugging symbols also include a copy of each constituent object file's debugging symbols if they are linked with the -xs option, which leaves stabs symbols in the various object files as well as the executable. The inclusion of this information is particularly useful if you need to move or remove the object files. With all of the debugging symbols in the executables and libraries themselves, it is easier to move the experiment and the program-related files to a new location.

Static Linking

When you compile your program, you must not disable dynamic linking, which is what the -dn and -Bstatic compiler options do. If you try to collect data for a program that is entirely statically linked, the Collector prints an error message and does not collect data. The error occurs because the collector library, among others, is dynamically loaded when you run the Collector.

Do not statically link any of the system libraries. If you do, you might not be able to collect any kind of tracing data. Also, do not link to the Collector library, libcollector.so.

Optimization

If you compile your program with optimization turned on at some level, the compiler can rearrange the order of execution so that it does not strictly follow the sequence of lines in your program. The Performance Analyzer can analyze experiments collected on optimized code, but the data it presents at the disassembly level is often difficult to relate to the original source code lines. In addition, the call sequence can appear to be different from what you expect if the compiler performs tail-call optimizations. Optimization may cause unwind failures. See Tail-Call Optimization for more information.

Compiling Java Programs

No special action is required for compiling Java programs with the javac command.


Preparing Your Program for Data Collection and Analysis

You do not need to do anything special to prepare most programs for data collection and analysis. You should read one or more of the subsections below if your program does any of the following:

Uses dynamically allocated memory
Uses system libraries
Uses signal handlers
Calls setuid or executes a setuid file

Also, if you want to control data collection from your program, you should read the relevant subsection.

Using Dynamically Allocated Memory

Many programs rely on dynamically-allocated memory, using features such as:

malloc, valloc, and alloca (C/C++)
The new operator (C++)
Allocatable arrays and pointers (Fortran)

You must take care to ensure that a program does not rely on the initial contents of dynamically allocated memory, unless the memory allocation method is explicitly documented as setting an initial value: for example, compare the descriptions of calloc and malloc in the man page for malloc(3C).
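As an illustration, the following C fragment is a minimal sketch of the difference: reading from the malloc block before writing to it is a latent bug, while reading from the calloc block is well defined.


#include <stdlib.h>

void example(void)
{
    int *a = malloc(100 * sizeof(int));   /* initial contents are undefined */
    int *b = calloc(100, sizeof(int));    /* documented to be zero-filled */

    int y = a[0];   /* latent bug: y can have any value */
    int x = b[0];   /* well defined: x is always 0 */

    free(a);
    free(b);
    (void)x; (void)y;
}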

Occasionally, a program that uses dynamically-allocated memory might appear to work correctly when run alone, but might fail when run with performance data collection enabled. Symptoms might include unexpected floating point behavior, segmentation faults, or application-specific error messages.

Such behavior might occur if the uninitialized memory is, by chance, set to a benign value when the application is run alone, but is set to a different value when the application is run in conjunction with the performance data collection tools. In such cases, the performance tools are not at fault. Any application that relies on the contents of dynamically allocated memory has a latent bug: an operating system is at liberty to provide any content whatsoever in dynamically allocated memory, unless explicitly documented otherwise. Even if an operating system happens to always set dynamically allocated memory to a certain value today, such latent bugs might cause unexpected behavior with a later revision of the operating system, or if the program is ported to a different operating system in the future.

Here are some tools that may help in finding such latent bugs:

The Fortran compiler option -xcheck=init_local, which initializes local variables to a value that is likely to expose use before assignment. For more information, see the Fortran User's Guide or the f95(1) man page.

The lint utility, which can detect the use of variables before they are initialized in C code. For more information, see the C User's Guide or the lint(1) man page.

Runtime checking under dbx, which can detect reads from uninitialized memory. For more information, see the Debugging a Program With dbx manual or the dbx(1) man page.

Using System Libraries

The Collector interposes on functions from various system libraries, to collect tracing data and to ensure the integrity of data collection. The following list describes situations in which the Collector interposes on calls to library functions.

Under some circumstances the interposition does not succeed:

The failure of interposition by the Collector can cause loss or invalidation of performance data.

Using Signal Handlers

The Collector uses two signals to collect profiling data: SIGPROF for all experiments and SIGEMT for hardware counter experiments only. The Collector installs a signal handler for each of these signals. The signal handler intercepts and processes its own signal, but passes other signals on to any other signal handlers that are installed. If a program installs its own signal handler for these signals, the Collector re-installs its signal handler as the primary handler to guarantee the integrity of the performance data.

The collect command can also use user-specified signals for pausing and resuming data collection and for recording samples. These signals are not protected by the Collector although a warning is written to the experiment if a user handler is installed. It is your responsibility to ensure that there is no conflict between use of the specified signals by the Collector and any use made by the application of the same signals.

The signal handlers installed by the Collector set a flag that ensures that system calls are not interrupted for signal delivery. This flag setting could change the behavior of the program if the program's signal handler sets the flag to permit interruption of system calls. One important example of a change in behavior occurs for the asynchronous I/O library, libaio.so, which uses SIGPROF for asynchronous cancel operations, and which does interrupt system calls. If the collector library, libcollector.so, is installed, the cancel signal invariably arrives too late to cancel the asynchronous I/O operation.

If you attach dbx to a process without preloading the collector library and enable performance data collection, and the program subsequently installs its own signal handler, the Collector does not re-install its own signal handler. In this case, the program's signal handler must ensure that the SIGPROF and SIGEMT signals are passed on so that performance data is not lost. If the program's signal handler interrupts system calls, both the program behavior and the profiling behavior are different from when the collector library is preloaded.
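For illustration, here is one way a program's handler might chain to a previously installed handler such as the Collector's. This is a minimal C sketch, not code from the Collector itself; the handler and function names are hypothetical.


#include <signal.h>
#include <stddef.h>

static struct sigaction prev_prof;    /* whatever handler was installed before ours */

static void my_prof_handler(int sig, siginfo_t *info, void *ctx)
{
    /* ... application-specific processing ... */

    /* Pass the signal on so that profiling data is not lost. */
    if (prev_prof.sa_flags & SA_SIGINFO) {
        if (prev_prof.sa_sigaction != NULL)
            prev_prof.sa_sigaction(sig, info, ctx);
    } else if (prev_prof.sa_handler != SIG_IGN && prev_prof.sa_handler != SIG_DFL) {
        prev_prof.sa_handler(sig);
    }
}

void install_prof_handler(void)
{
    struct sigaction sa;

    sa.sa_sigaction = my_prof_handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_SIGINFO | SA_RESTART;   /* SA_RESTART: do not interrupt system calls */
    sigaction(SIGPROF, &sa, &prev_prof);     /* remember the previous handler */
}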

Using setuid

Restrictions enforced by the dynamic loader make it difficult to use setuid(2) and collect performance data. If your program calls setuid or executes a setuid file, it is likely that the Collector cannot write an experiment file because it lacks the necessary permissions for the new user ID.

Program Control of Data Collection

If you want to control data collection from your program, the Collector shared library, libcollector.so, contains some API functions that you can use. The functions are written in C. A Fortran interface is also provided. Both C and Fortran interfaces are defined in header files that are provided with the library.

The API functions are defined as follows.


void collector_sample(char *name);
void collector_pause(void);
void collector_resume(void);
void collector_thread_pause(unsigned int t);
void collector_thread_resume(unsigned int t);
void collector_terminate_expt(void);

Similar functionality is provided for Java™ programs by the CollectorAPI class, which is described in The Java Interface.

The C and C++ Interface

There are two ways to access the C and C++ interface:

Include collectorAPI.h and link with -lcollectorAPI. This way requires that you link with an API library, and works under all circumstances. If no experiment is active, the API calls are ignored.

Include libcollector.h, which contains macros that expand into the API calls. This way works when used in the main executable, and when data collection is started at the same time the program starts. This way does not always work when dbx is used to attach to the process, nor when used from within a shared library that is dlopen'd by the process. This second way is provided for backward compatibility.
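For example, a program might bracket a phase of interest with API calls as follows. This is a minimal sketch, assuming the collectorAPI.h route and linking with -lcollectorAPI; the phases shown are invented for the example.


#include <collectorAPI.h>

int main(void)
{
    collector_sample("start");    /* label a sample point */

    collector_pause();            /* exclude initialization from the recorded data */
    /* ... initialization code ... */
    collector_resume();

    /* ... code to be profiled ... */

    collector_sample("end");
    collector_terminate_expt();   /* stop collecting; the program continues */
    return 0;
}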



Caution - Do not link a program in any language with -lcollector. If you do, the Collector can exhibit unpredictable behavior.



The Fortran Interface

The file libfcollector.h defines the Fortran interface to the library. The application must be linked with -lcollectorAPI to use this library. (An alternate name for the library, -lfcollector, is provided for backward compatibility.) The Fortran API provides the same features as the C and C++ API, excluding the dynamic function and thread pause and resume calls.

Insert the following statement to use the API functions for Fortran:


include "libfcollector.h"



Caution - Do not link a program in any language with -lcollector. If you do, the Collector can exhibit unpredictable behavior.



The Java Interface

Use the following statement to import the CollectorAPI class and access the Java™ API. Note, however, that your application must be invoked with a classpath pointing to /installation_directory/lib/collector.jar, where installation_directory is the directory in which the Sun Studio software is installed.


import com.sun.forte.st.collector.CollectorAPI;

The Java™ CollectorAPI methods are defined as follows:


CollectorAPI.sample(String name)
CollectorAPI.pause()
CollectorAPI.resume()
CollectorAPI.threadPause(Thread thread)
CollectorAPI.threadResume(Thread thread)
CollectorAPI.terminate()

The Java API includes the same functions as the C and C++ API, excluding the dynamic function API.

The C include file libcollector.h contains macros that bypass the calls to the real API functions if data is not being collected. In this case the functions are not dynamically loaded. However, using these macros is risky because the macros do not work well under some circumstances. It is safer to use collectorAPI.h because it does not use macros. Rather, it refers directly to the functions.

The Fortran API subroutines call the C API functions if performance data is being collected, otherwise they return. The overhead for the checking is very small and should not significantly affect program performance.

To collect performance data you must run your program using the Collector, as described later in this chapter. Inserting calls to the API functions does not enable data collection.

If you intend to use the API functions in a multithreaded program, you should ensure that they are only called by one thread. With the exception of collector_thread_pause() and collector_thread_resume(), the API functions perform actions that apply to the process and not to individual threads. If each thread calls the API functions, the data that is recorded might not be what you expect. For example, if collector_pause() or collector_terminate_expt() is called by one thread before the other threads have reached the same point in the program, collection is paused or terminated for all threads, and data can be lost from the threads that were executing code before the API call. To control data collection at the level of the individual threads, use the collector_thread_pause() and collector_thread_resume() functions. There are two viable ways of using these functions: by having one master thread make all the calls for all threads, including itself; or by having each thread make calls only for itself. Any other usage can lead to unpredictable results.
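The second usage pattern, in which each thread makes calls only for itself, might look like the following sketch. It assumes that the value returned by pthread_self() can be passed as the unsigned int argument, since the thread functions expect a POSIX thread identifier for C/C++ programs; the worker function is invented for the example.


#include <collectorAPI.h>
#include <pthread.h>

void *worker(void *arg)
{
    /* Exclude this thread's setup phase from the recorded data. */
    collector_thread_pause((unsigned int)pthread_self());
    /* ... thread-local initialization ... */
    collector_thread_resume((unsigned int)pthread_self());

    /* ... measured work ... */
    return arg;
}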

The C, C++, Fortran, and Java API Functions

The descriptions of the API functions follow.

C and C++: collector_sample(char *name)

Fortran: collector_sample(string name)

Java: CollectorAPI.sample(String name)

Record a sample packet and label the sample with the given name or string. The label is displayed by the Performance Analyzer in the Event tab. The Fortran argument string is of type character.

Sample points contain data for the process and not for individual threads. In a multithreaded application, the collector_sample() API function ensures that only one sample is written if another call is made while it is recording a sample. The number of samples recorded can be less than the number of threads making the call.

The Performance Analyzer does not distinguish between samples recorded by different mechanisms. If you want to see only the samples recorded by API calls, you should turn off all other sampling modes when you record performance data.

C and C++: collector_pause(void)

Fortran: collector_pause()

Java: CollectorAPI.pause()

Stop writing event-specific data to the experiment. The experiment remains open, and global data continues to be written. The call is ignored if no experiment is active or if data recording is already stopped. This function stops the writing of all event-specific data even if it is enabled for specific threads by the collector_thread_resume() function.

C and C++: collector_resume(void)

Fortran: collector_resume()

Java: CollectorAPI.resume()

Resume writing event-specific data to the experiment after a call to collector_pause(). The call is ignored if no experiment is active or if data recording is active.

C and C++: collector_thread_pause(unsigned int t)

Java: CollectorAPI.threadPause(Thread t)

Stop writing event-specific data from the thread specified in the argument list to the experiment. The argument t is the POSIX thread identifier for C/C++ programs and a Java thread for Java programs. If the experiment is already terminated, or no experiment is active, or writing of data for that thread is already turned off, the call is ignored. This function stops the writing of data from the specified thread even if the writing of data is globally enabled. By default, recording of data for individual threads is turned on.

C and C++: collector_thread_resume(unsigned int t)

Java: CollectorAPI.threadResume(Thread t)

Resume writing event-specific data from the thread specified in the argument list to the experiment. The argument t is the POSIX thread identifier for C/C++ programs and a Java thread for Java programs. If the experiment is already terminated, or no experiment is active, or writing of data for that thread is already turned on, the call is ignored. Data is written to the experiment only if the writing of data is globally enabled as well as enabled for the thread.

C and C++: collector_terminate_expt(void)

Fortran: collector_terminate_expt()

Java: CollectorAPI.terminate()

Terminate the experiment whose data is being collected. No further data is collected, but the program continues to run normally. The call is ignored if no experiment is active.

Dynamic Functions and Modules

If your C program or C++ program dynamically compiles functions into the data space of the program, you must supply information to the Collector if you want to see data for the dynamic function or module in the Performance Analyzer. The information is passed by calls to collector API functions. The definitions of the API functions are as follows.


void collector_func_load(char *name, char *alias, 
    char *sourcename, void *vaddr, int size, int lntsize, 
    Lineno *lntable);
void collector_func_unload(void *vaddr);

You do not need to use these API functions for Java methods that are compiled by the Java HotSpot™ virtual machine, for which a different interface is used. The Java interface provides the Collector with the name of the method that was compiled. You can see function data and annotated disassembly listings for Java compiled methods, but not annotated source listings.

The descriptions of the API functions follow.

collector_func_load()

Pass information about dynamically compiled functions to the Collector for recording in the experiment. The parameter list is described in the following table.


TABLE 3-1 Parameter List for collector_func_load()

name: The name of the dynamically compiled function that is used by the performance tools. The name does not have to be the actual name of the function. The name need not follow any of the normal naming conventions of functions, although it should not contain embedded blanks or embedded quote characters.

alias: An arbitrary string used to describe the function. It can be NULL. It is not interpreted in any way, and can contain embedded blanks. It is displayed in the Summary tab of the Analyzer. It can be used to indicate what the function is, or why the function was dynamically constructed.

sourcename: The path to the source file from which the function was constructed. It can be NULL. The source file is used for annotated source listings.

vaddr: The address at which the function was loaded.

size: The size of the function in bytes.

lntsize: A count of the number of entries in the line number table. It should be zero if line number information is not provided.

lntable: A table containing lntsize entries, each of which is a pair of integers. The first integer is an offset, and the second entry is a line number. All instructions between an offset in one entry and the offset given in the next entry are attributed to the line number given in the first entry. Offsets must be in increasing numeric order, but the order of line numbers is arbitrary. If lntable is NULL, no source listings of the function are possible, although disassembly listings are available.


collector_func_unload()

Inform the Collector that the dynamic function at the address vaddr has been unloaded.
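As an illustration, a JIT compiler might register a newly generated function as in the following hypothetical sketch. The names, the source path, and the two-entry line number table are invented for the example, and the initializers assume that each Lineno entry is laid out as the offset followed by the line number, as described in the table above.


#include <libcollector.h>

void register_generated_code(void *vaddr, int size)
{
    static Lineno lines[] = {
        { 0,  10 },   /* offsets 0 up to the next entry map to source line 10 */
        { 64, 12 },   /* offsets from 64 to the end map to source line 12 */
    };

    collector_func_load("generated_kernel",            /* name shown by the tools */
                        "built by the JIT at startup", /* alias for the Summary tab */
                        "/path/to/kernel.src",         /* source file for annotation */
                        vaddr, size,
                        2, lines);
}

/* Later, when the code is discarded:
   collector_func_unload(vaddr);  */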


Limitations on Data Collection

This section describes the limitations on data collection that are imposed by the hardware, the operating system, the way you run your program, or by the Collector itself.

There are no limitations on simultaneous collection of different data types: you can collect any data type with any other data type.

Limitations on Clock-Based Profiling

The minimum value of the profiling interval and the resolution of the clock used for profiling depend on the particular operating environment. The maximum value is set to 1 second. The value of the profiling interval is rounded down to the nearest multiple of the clock resolution. The minimum and maximum value and the clock resolution can be found by typing the collect command with no arguments.

The system clock is used for profiling in early versions of the Solaris 8 OS. It has a resolution of 10 milliseconds, unless you choose to enable the high-resolution system clock. If you have root privilege, you can do this by adding the following line to the file /etc/system, and then rebooting.


set hires_tick=1 

In the Solaris 9 OS, the Solaris 10 OS, and later versions of the Solaris 8 OS, it is not necessary to enable the high-resolution system clock for high-resolution profiling.

Runtime Distortion and Dilation with Clock-profiling

Clock-based profiling records data when a SIGPROF signal is delivered to the target. Processing that signal and unwinding the call stack dilates the run: the deeper the call stack and the more frequent the signals, the greater the dilation. To a limited extent, clock-based profiling also shows some distortion, deriving from the greater dilation of those parts of the program that execute with the deepest stacks.

Where possible, a default value is set not to an exact number of milliseconds, but to slightly more or less than an exact number (for example, 10.007 ms or 0.997 ms) to avoid correlations with the system clock, which can also distort the data. Set custom values the same way on SPARC platforms; this adjustment is not possible on Linux platforms.

Limitations on Collection of Tracing Data

You cannot collect any kind of tracing data from a program that is already running unless the Collector library, libcollector.so, has been preloaded. See Collecting Data From a Running Process for more information.

Runtime Distortion and Dilation with Tracing

Tracing data dilates the run in proportion to the number of events that are traced. If you collect tracing data together with clock-based profiling, the clock data is distorted by the dilation induced by the tracing events.

Limitations on Hardware Counter Overflow Profiling

Hardware counter overflow profiling has several limitations:



Note - To view a list of all available counters, run the collect command with no arguments.



Runtime Distortion and Dilation With Hardware Counter Overflow Profiling

Hardware counter overflow profiling records data when a SIGEMT signal is delivered to the target. Processing that signal and unwinding the call stack dilates the run. Unlike clock-based profiling, with some hardware counters different parts of the program might generate events more rapidly than other parts, and therefore show more dilation in those parts of the code. Any part of the program that generates such events very rapidly might be significantly distorted. Similarly, some events might be generated in one thread disproportionately to the other threads.

Limitations on Data Collection for Descendant Processes

You can collect data on descendant processes subject to the following limitations:

Limitations on Java Profiling

You can collect data on Java programs subject to the following limitations:

Using JVM versions earlier than 1.4.2_02 compromises the data as follows:

Runtime Performance Distortion and Dilation for Applications Written in the Java Programming Language

Java profiling uses the Java™ Virtual Machine Profiling Interface (JVMPI) if you are running J2SE 1.4.2, or the Java™ Virtual Machine Tools Interface (JVMTI) if you are running J2SE 5.0, which can cause some distortion and dilation of the run.

For clock-based profiling and hardware counter overflow profiling, the data collection process makes various calls into the JVM software, and handles profiling events in signal handlers. The overhead of these routines, and the cost of writing the experiments to disk will dilate the runtime of the Java program. Such dilation is typically less than 10%.

Although the default garbage collector supports JVMPI, other garbage collectors do not. Any data-collection run that specifies such a garbage collector fails with a fatal error.

For heap profiling, the data collection process uses JVMPI events describing memory allocation and garbage collection, which can cause significant dilation in runtime. Most Java applications generate many of these events, which leads to large experiments, and scalability problems processing the data. Furthermore, if these events are requested, the garbage collector disables some inlined allocations, costing additional CPU time for the longer allocation path.

For synchronization tracing, data collection uses other JVMTI events, which causes dilation in proportion to the amount of monitor contention in the application.


Where the Data Is Stored

The data collected during one execution of your application is called an experiment. The experiment consists of a set of files that are stored in a directory. The name of the experiment is the name of the directory.

In addition to recording the experiment data, the Collector creates its own archives of the load objects used by the program. These archives contain the addresses, sizes and names of each object file and each function in the load object, as well as the address of the load object and a time stamp for its last modification.

Experiments are stored by default in the current directory. If this directory is on a networked file system, storing the data takes longer than on a local file system, and can distort the performance data. You should always try to record experiments on a local file system if possible. You can specify the storage location when you run the Collector.

Experiments for descendant processes are stored inside the experiment for the founder process.

Experiment Names

The default name for a new experiment is test.1.er. The suffix .er is mandatory: if you give a name that does not have it, an error message is displayed and the name is not accepted.

If you choose a name with the format experiment.n.er, where n is a positive integer, the Collector automatically increments n by one in the names of subsequent experiments. For example, mytest.1.er is followed by mytest.2.er, mytest.3.er, and so on. The Collector also increments n if the experiment already exists, and continues to increment n until it finds an experiment name that is not in use. If the experiment name does not contain n and the experiment exists, the Collector prints an error message.

Experiments can be collected into groups. The group is defined in an experiment group file, which is stored by default in the current directory. The experiment group file is a plain text file with a special header line and an experiment name on each subsequent line. The default name for an experiment group file is test.erg. If the name does not end in .erg, an error is displayed and the name is not accepted. Once you have created an experiment group, any experiments you run with that group name are added to the group.

You can create an experiment group file by creating a plain text file whose first line is


#analyzer experiment group

and adding the names of the experiments on subsequent lines. The name of the file must end in .erg.

The default experiment name is different for experiments collected from MPI programs, which create one experiment for each MPI process. The default experiment name is test.m.er, where m is the MPI rank of the process. If you specify an experiment group group.erg, the default experiment name is group.m.er. If you specify an experiment name, it overrides these defaults. See Collecting Data From MPI Programs for more information.

Experiments for descendant processes are named with their lineage as follows. To form the experiment name for a descendant process, an underscore, a code letter and a number are added to the stem of its creator's experiment name. The code letter is f for a fork, x for an exec, and c for combination. The number is the index of the fork or exec (whether successful or not). For example, if the experiment name for the founder process is test.1.er, the experiment for the child process created by the third call to fork is test.1.er/_f3.er. If that child process calls exec successfully, the experiment name for the new descendant process is test.1.er/_f3_x1.er.

Moving Experiments

If you want to move an experiment to another computer to analyze it, you should be aware of the dependencies of the analysis on the operating environment in which the experiment was recorded.

The archive files contain all the information necessary to compute metrics at the function level and to display the timeline. However, if you want to see annotated source code or annotated disassembly code, you must have access to versions of the load objects or source files that are identical to the ones used when the experiment was recorded.

The Performance Analyzer searches for the source, object and executable files in the following locations in turn, and stops when it finds a file of the correct basename:

You can change the search order or add other search directories from the Analyzer GUI or by using the setpath and addpath directives.

To ensure that you see the correct annotated source code and annotated disassembly code for your program, you can copy the source code, the object files and the executable into the experiment before you move or copy the experiment. If you do not want to copy the object files, you can link your program with -xs to ensure that the information on source lines and file locations is inserted into the executable. You can automatically copy the load objects into the experiment using the -A option of the collect command or the dbx collector archive command.


Estimating Storage Requirements

This section gives some guidelines for estimating the amount of disk space needed to record an experiment. The size of the experiment depends directly on the size of the data packets and the rate at which they are recorded, the number of LWPs used by the program, and the execution time of the program.

The data packets contain event-specific data and data that depends on the program structure (the call stack). The event-specific data amounts to approximately 50 to 100 bytes, depending on the data type. The call stack data consists of return addresses for each call, and contains 4 bytes per address (8 bytes on 64-bit SPARC® architecture). Data packets are recorded for each LWP in the experiment. Note that for Java programs, there are two call stacks of interest: the Java call stack and the machine call stack, which therefore result in more data being written to disk.

The rate at which profiling data packets are recorded is controlled by the profiling interval for clock data and by the overflow value for hardware counter data. However, the choice of these parameters also affects the data quality and the distortion of program performance due to the data collection overhead. Smaller values of these parameters give better statistics but also increase the overhead. The default values of the profiling interval and the overflow value have been carefully chosen as a compromise between obtaining good statistics and minimizing the overhead. Smaller values also mean more data.

For a clock-based profiling experiment with a profiling interval of 10ms and a small call stack, such that the packet size is 100 bytes, data is recorded at a rate of 10 kbytes/sec per LWP. For a hardware counter overflow profiling experiment collecting data for CPU cycles and instructions executed on a 750MHz processor with an overflow value of 1000000 and a packet size of 100 bytes, data is recorded at a rate of 150 kbytes/sec per LWP. Applications that have call stacks with a depth of hundreds of calls could easily record data at ten times these rates.
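To see where these estimates come from: a 10 ms profiling interval produces 100 packets per second per LWP, and 100 packets × 100 bytes = 10 kbytes/sec. In the hardware counter case, a 750 MHz cycle counter with an overflow value of 1000000 overflows about 750 times per second; with the two counters recording at roughly comparable rates, 2 × 750 packets × 100 bytes is approximately 150 kbytes/sec.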

Your estimate of the size of the experiment should also take into account the disk space used by the archive files, which is usually a small fraction of the total disk space requirement (see the previous section). If you are not sure how much space you need, try running your experiment for a short time. From this test you can obtain the size of the archive files, which are independent of the data collection time, and scale the size of the profile files to obtain an estimate of the size for the full-length experiment.

As well as allocating disk space, the Collector allocates buffers in memory to store the profile data before writing it to disk. Currently no way exists to specify the size of these buffers. If the Collector runs out of memory, try to reduce the amount of data collected.

If your estimate of the space required to store the experiment is larger than the space you have available, consider collecting data for part of the run rather than the whole run. You can collect data on part of the run with the collect command, with the dbx collector subcommands, or by inserting calls in your program to the collector API. You can also limit the total amount of profiling and tracing data collected with the collect command or with the dbx collector subcommands.



Note - The Performance Analyzer cannot read more than 2 GB of performance data.




Collecting Data

You can collect performance data in either the standalone Performance Analyzer or the Analyzer window in the IDE in several ways:

The following data collection capabilities are available only with the Performance Tools Collect dialog and the collect command:


Collecting Data Using the collect Command

To run the Collector from the command line using the collect command, type the following.


% collect collect-options program program-arguments 

Here, collect-options are the collect command options, program is the name of the program you want to collect data on, and program-arguments are its arguments.

If no command arguments are given, the default is to turn on clock-based profiling with a profiling interval of approximately 10 milliseconds.

To obtain a list of options and a list of the names of any hardware counters that are available for profiling, type the collect command with no arguments.


% collect

For a description of the list of hardware counters, see Hardware Counter Overflow Profiling Data. See also Limitations on Hardware Counter Overflow Profiling.

Data Collection Options

These options control the types of data that are collected. See What Data the Collector Collects for a description of the data types.

If you do not specify data collection options, the default is -p on, which enables clock-based profiling with the default profiling interval of approximately 10 milliseconds. The default is turned off by the -h option but not by any of the other data collection options.

If you explicitly disable clock-based profiling, and neither any kind of tracing nor hardware counter overflow profiling is enabled, the collect command prints a warning message, and collects global data only.

-p option

Collect clock-based profiling data. The allowed values of option are:

Collecting clock-based profiling data is the default action of the collect command.

-h counter_definition_1...[,counter_definition_n]

Collect hardware counter overflow profiling data. The number of counter definitions is processor-dependent. This option is now available on systems running the Linux operating system if you have installed the perfctr patch, which you can download from http://user.it.uu.se/~mikpe/linux/perfctr/2.6/perfctr-2.6.15.tar.gz.

A counter definition can take one of the following forms, depending on whether the processor supports attributes for hardware counters.

[+]counter_name[/register_number][,interval]

[+]counter_name[~attribute_1=value_1]...[~attribute_n=value_n][/register_number][,interval]

The processor-specific counter_name can be one of the following:

If you specify more than one counter, they must use different registers. If they do not use different registers, the collect command prints an error message and exits. Some counters can count on either register.

To obtain a list of available counters, type collect with no arguments in a terminal window. A description of the counter list is given in the section Hardware Counter Lists.

If the hardware counter counts events that relate to memory access, you can prefix the counter name with a + sign to turn on searching for the true PC of the instruction that caused the counter overflow. This backtracking works on SPARC processors, and only with counters of type load, store, or load-store. If the search is successful, the PC and effective address that was referenced are stored in the event data packet.

On some processors, attribute options can be associated with a hardware counter. If a processor supports attribute options, then running the collect command with no arguments lists the counter definitions including the attribute names. You can specify attribute values in decimal or hexadecimal format.

The interval (overflow value) is the number of events counted at which the hardware counter overflows and the overflow event is recorded. The interval can be set to one of the following:

The default is the normal threshold, which is predefined for each counter and which appears in the counter list. See also Limitations on Hardware Counter Overflow Profiling.

If you use the -h option without explicitly specifying a -p option, clock-based profiling is turned off. To collect both hardware counter data and clock-based data, you must specify both a -h option and a -p option.

-s option

Collect synchronization wait tracing data. The allowed values of option are:

Synchronization wait tracing data is not recorded for Java monitors.

-H option

Collect heap tracing data. The allowed values of option are:

Heap tracing is turned off by default. Heap tracing is not supported for Java programs; specifying it is treated as an error.

-m option

Collect MPI tracing data. The allowed values of option are:

MPI tracing is turned off by default.

See MPI Tracing Data for more information about the MPI functions whose calls are traced and the metrics that are computed from the tracing data.

-S option

Record sample packets periodically. The allowed values of option are:

By default, periodic sampling at 1 second intervals is enabled.

Experiment Control Options

-F option

Control whether or not descendant processes should have their data recorded. The allowed values of option are:

If you specify the -F on option, the Collector follows processes created by calls to the functions fork(2), fork1(2), fork(3F), vfork(2), and exec(2) and its variants. The call to vfork is replaced internally by a call to fork1.

If you specify the -F all option, the Collector follows all descendant processes including those created by calls to system(3C), system(3F), sh(3F), and popen(3C), and similar functions, and their associated descendant processes.

If you specify the -F on or -F all argument, the Collector opens a new experiment for each descendant process inside the founder experiment. These new experiments are named by adding an underscore, a letter, and a number to the experiment suffix, as follows:

For example, if the experiment name for the initial process is test.1.er, the experiment for the child process created by its third fork is test.1.er/_f3.er. If that child process execs a new image, the corresponding experiment name is test.1.er/_f3_x1.er. If that child creates another process using a popen call, the experiment name is test.1.er/_f3_x1_c1.er.

The Analyzer and the er_print utility automatically read experiments for descendant processes when the founder experiment is read, but the experiments for the descendant processes are not selected for data display.

To select the data for display from the command line, specify the path name explicitly to either er_print or analyzer. The specified path must include the founder experiment name, and descendant experiment name inside the founder directory.

For example, here's what you specify to see the data for the third fork of the test.1.er experiment:

er_print test.1.er/_f3.er

analyzer test.1.er/_f3.er

Alternatively, you can prepare an experiment group file with the explicit names of the descendant experiments in which you are interested.

To examine descendant processes in the Analyzer, load the founder experiment and select Filter Data from the View menu. A list of experiments is displayed with only the founder experiment checked. Uncheck it and check the descendant experiment of interest.

-j option

Enable Java profiling when the target is a JVM machine. The allowed values of option are:

The -j option is not needed if you want to collect data on a .class file or a .jar file, provided that the path to the java executable is in either the JDK_HOME environment variable or the JAVA_PATH environment variable. You can then specify program as the .class file or the .jar file, with or without the extension.

If you cannot define the path to java in any of these variables, or if you want to disable the recognition of methods compiled by the Java HotSpot virtual machine you can use this option. If you use this option, program must be a Java virtual machine whose version is not earlier than 1.4.2_02. The collect command verifies that program is a JVM machine, and is an ELF executable; if it is not, the collect command prints an error message.

If you want to collect data using the 64-bit JVM machine, you must not use the -d64 option to java for a 32-bit JVM machine. If you do so, no data is collected. Instead you must specify the path to the 64-bit JVM machine either in program or in one of the environment variables given in this section.

-J java_arguments

Specify arguments to be passed to the JVM machine used for profiling. If you specify the -J option, but do not specify Java profiling, an error is generated, and no experiment is run.

-l signal

Record a sample packet when the signal named signal is delivered to the process.

You can specify the signal by the full signal name, by the signal name without the initial letters SIG, or by the signal number. Do not use a signal that is used by the program or that would terminate execution. Suggested signals are SIGUSR1 and SIGUSR2. Signals can be delivered to a process by the kill(1) command.

If you use both the -l and the -y options, you must use different signals for each option.

If you use this option and your program has its own signal handler, you should make sure that the signal that you specify with -l is passed on to the Collector's signal handler, and is not intercepted or ignored.

See the signal(3HEAD) man page for more information about signals.

-x

Leave the target process stopped on exit from the exec system call in order to allow a debugger to attach to it. If you attach dbx to the process, use the dbx commands ignore PROF and ignore EMT to ensure that collection signals are passed on to the collect command.

-y signal[,r]

Control recording of data with the signal named signal. Whenever the signal is delivered to the process, it switches between the paused state, in which no data is recorded, and the recording state, in which data is recorded. Sample points are always recorded, regardless of the state of the switch.

The signal can be specified by the full signal name, by the signal name without the initial letters SIG, or by the signal number. Do not use a signal that is used by the program or that would terminate execution. Suggested signals are SIGUSR1 and SIGUSR2. Signals can be delivered to a process by the kill(1) command.

If you use both the -l and the -y options, you must use different signals for each option.

When the -y option is used, the Collector is started in the recording state if the optional r argument is given, otherwise it is started in the paused state. If the -y option is not used, the Collector is started in the recording state.

If you use this option and your program has its own signal handler, make sure that the signal that you specify with -y is passed on to the Collector's signal handler, and is not intercepted or ignored.

See the signal(3HEAD) man page for more information about signals.

Output Options

-o experiment_name

Use experiment_name as the name of the experiment to be recorded. The experiment_name string must end in the string ".er"; if not, the collect utility prints an error message and exits.

-d directory-name

Place the experiment in directory directory-name. This option only applies to individual experiments and not to experiment groups. If the directory does not exist, the collect utility prints an error message and exits. If a group is specified with the -g option, the group file is also written to directory-name.

-g group-name

Make the experiment part of experiment group group-name. If group-name does not end in .erg, the collect utility prints an error message and exits. If the group exists, the experiment is added to it. If group-name is not an absolute path, the experiment group is placed in the directory directory-name if a directory has been specified with -d, otherwise it is placed in the current directory.

-A option

Control whether or not load objects used by the target process should be archived or copied into the recorded experiment. The allowed values of option are:

If you expect to copy experiments to a different machine from which they were recorded, or to read the experiments from a different machine, specify -A copy. Using this option does not copy any source files or object files into the experiment. You should ensure that those files are accessible on the machine to which you are copying the experiment.

-L size

Limit the amount of profiling data recorded to size megabytes. The limit applies to the sum of the amounts of clock-based profiling data, hardware counter overflow profiling data, and synchronization wait tracing data, but not to sample points. The limit is only approximate, and can be exceeded.

When the limit is reached, no more profiling data is recorded but the experiment remains open until the target process terminates. If periodic sampling is enabled, sample points continue to be written.

The default limit on the amount of data recorded is 2000 Mbytes. This limit was chosen because the Performance Analyzer cannot process experiments that contain more than 2 Gbytes of data. To remove the limit, set size to unlimited or none.

-O file

Append all output from collect itself to the named file, but do not redirect the output from the spawned target. If file is set to /dev/null, suppress all output from collect, including any error messages.

Other Options

-C comment

Put the comment into the notes file for the experiment. You can supply up to ten -C options. The contents of the notes file are prepended to the experiment header.

-n

Do not run the target but print the details of the experiment that would be generated if the target were run. This option is a dry run option.

-R

Display the text version of the Performance Analyzer Readme in the terminal window. If the readme is not found, a warning is printed. No further arguments are examined, and no further processing is done.

-V

Print the current version of the collect command. No further arguments are examined, and no further processing is done.

-v

Print the current version of the collect command and detailed information about the experiment being run.


Collecting Data Using the dbx collector Subcommands



Note - You can collect data using the dbx collector subcommands only on systems running the Solaris OS.



To run the Collector from dbx:

1. Load your program into dbx by typing the following command.


% dbx program 

2. Use the collector command to enable data collection, select the data types, and set any optional parameters.


(dbx) collector subcommand

To get a listing of available collector subcommands, type:


(dbx) help collector 

You must use one collector command for each subcommand.

3. Set up any dbx options you wish to use and run the program.

If a subcommand is incorrectly given, a warning message is printed and the subcommand is ignored. A complete listing of the collector subcommands follows.

Data Collection Subcommands

The following subcommands control the types of data that are collected by the Collector. They are ignored with a warning if an experiment is active.

profile option

Controls the collection of clock-based profiling data. The allowed values for option are:

The default setting is approximately 10 milliseconds.

The Collector collects clock-based profiling data by default, unless the collection of hardware-counter overflow profiling data is turned on using the hwprofile subcommand.

hwprofile option

Controls the collection of hardware counter overflow profiling data. If you attempt to enable hardware counter overflow profiling on systems that do not support it, dbx returns a warning message and the command is ignored. The allowed values for option are:

[+]counter_name[/register_number][,interval]

[+]counter_name[~attribute_1=value_1]...[~attribute_n=value_n][/register_number][,interval]

Selects the hardware counter name, and sets its overflow value to interval; optionally selects additional hardware counter names and sets their overflow values to the specified intervals. The overflow value can be one of the following.

If you specify more than one counter, they must use different registers. If they do not, a warning message is printed and the command is ignored.

If the hardware counter counts events that relate to memory access, you can prefix the counter name with a + sign to turn on searching for the true PC of the instruction that caused the counter overflow. If the search is successful, the PC and the effective address that was referenced are stored in the event data packet.

The Collector does not collect hardware counter overflow profiling data by default. If hardware-counter overflow profiling is enabled and a profile command has not been given, clock-based profiling is turned off.

See also Limitations on Hardware Counter Overflow Profiling.

synctrace option

Controls the collection of synchronization wait tracing data. The allowed values for option are:

By default, the Collector does not collect synchronization wait tracing data.

heaptrace option

Controls the collection of heap tracing data. The allowed values for option are:

By default, the Collector does not collect heap tracing data.

mpitrace option

Controls the collection of MPI tracing data. The allowed values for option are:

By default, the Collector does not collect MPI tracing data.

sample option

Controls the sampling mode. The allowed values for option are:

By default, periodic sampling is enabled, with a sampling interval value of 1 second.

dbxsample { on | off }

Controls the recording of samples when dbx stops the target process. The meanings of the keywords are as follows:

By default, samples are recorded when dbx stops the target process.

Experiment Control Subcommands

disable

Disables data collection. If a process is running and collecting data, it terminates the experiment and disables data collection. If a process is running and data collection is disabled, it is ignored with a warning. If no process is running, it disables data collection for subsequent runs.

enable

Enables data collection. If a process is running but data collection is disabled, it enables data collection and starts a new experiment. If a process is running and data collection is enabled, it is ignored with a warning. If no process is running, it enables data collection for subsequent runs.

You can enable and disable data collection as many times as you like during the execution of any process. Each time you enable data collection, a new experiment is created.

pause

Suspends the collection of data, but leaves the experiment open. Sample points are not recorded while the Collector is paused. A sample is generated prior to a pause, and another sample is generated immediately following a resume. This subcommand is ignored if data collection is already paused.

resume

Resumes data collection after a pause has been issued. This subcommand is ignored if data is being collected.

sample record name

Record a sample packet with the label name. The label is displayed in the Event tab of the Performance Analyzer.

Output Subcommands

The following subcommands define storage options for the experiment. They are ignored with a warning if an experiment is active.

archive mode

Set the mode for archiving the experiment. The allowed values for mode are:

If you intend to move the experiment to a different machine, or read it from another machine, you should enable the copying of load objects. If an experiment is active, the command is ignored with a warning. This command does not copy source files or object files into the experiment.

limit value

Limit the amount of profiling data recorded to value megabytes. The limit applies to the sum of the amounts of clock-based profiling data, hardware counter overflow profiling data, and synchronization wait tracing data, but not to sample points. The limit is only approximate, and can be exceeded.

When the limit is reached, no more profiling data is recorded but the experiment remains open and sample points continue to be recorded.

The default limit on the amount of data recorded is 2000 Mbytes. This limit was chosen because the Performance Analyzer cannot process experiments that contain more than 2 Gbytes of data. To remove the limit, set value to unlimited or none.

store option

Governs where the experiment is stored. This command is ignored with a warning if an experiment is active. The allowed values for option are:

Information Subcommands

show

Shows the current setting of every Collector control.

status

Reports on the status of any open experiment.


Collecting Data From a Running Process

The Collector allows you to collect data from a running process. If the process is already under the control of dbx (either in the command line version or in the IDE), you can pause the process and enable data collection using the methods described in previous sections.



Note - For information on starting the Performance Analyzer from the IDE, see the Performance Analyzer Readme, which is available on the SDN Sun Studio portal at http://developers.sun.com/prodtech/cc/documentation/ss11/docs/mr/READMEs/analyzer.html.



If the process is not under the control of dbx, you can attach dbx to it, collect performance data, and then detach from the process, leaving it to continue. If you want to collect performance data for selected descendant processes, you must attach dbx to each process.

To collect data from a running process that is not under the control of dbx:

1. Determine the program's process ID (PID).

If you started the program from the command line and put it in the background, its PID will be printed to standard output by the shell. Otherwise you can determine the program's PID by typing the following.


% ps -ef | grep program-name 

2. Attach to the process.

If dbx is not already running, type the following.


% dbx program-name pid

See the manual, Debugging a Program With dbx, for more details on attaching to a process. Attaching to a running process pauses the process.

3. Start data collection.

4. Detach from the process.

When you have finished collecting data, pause the program and then detach the process from dbx.

If you want to collect any kind of tracing data, you must preload the Collector library, libcollector.so, before you run your program, because the library provides wrappers to the real functions that enable data collection to take place. In addition, the Collector adds wrapper functions to other system library calls to guarantee the integrity of performance data. If you do not preload the Collector library, these wrapper functions cannot be inserted. See Using System Libraries for more information on how the Collector interposes on system library functions.

To preload libcollector.so, you must set both the name of the library and the path to the library using environment variables. Use the environment variable LD_PRELOAD to set the name of the library. Use the environment variables LD_LIBRARY_PATH, LD_LIBRARY_PATH_32, and/or LD_LIBRARY_PATH_64 to set the path to the library. (LD_LIBRARY_PATH is used if the _32 and _64 variants are not defined.) If you have already defined these environment variables, add new values to them.


TABLE 3-2 Environment Variable Settings for Preloading the Library libcollector.so

LD_PRELOAD          libcollector.so
LD_LIBRARY_PATH     /opt/SUNWspro/prod/lib/dbxruntime
LD_LIBRARY_PATH_32  /opt/SUNWspro/prod/lib/dbxruntime
LD_LIBRARY_PATH_64  /opt/SUNWspro/prod/lib/v9/dbxruntime (SPARC® platforms)
LD_LIBRARY_PATH_64  /opt/SUNWspro/prod/lib/amd64/dbxruntime (x86 platforms)


If your Sun Studio software is not installed in /opt/SUNWspro, ask your system administrator for the correct path. You can set the full path in LD_PRELOAD, but doing this can create complications when using SPARC® V9 64-bit architecture.



Note - Remove the LD_PRELOAD and LD_LIBRARY_PATH settings after the run, so they do not remain in effect for other programs that are started from the same shell.



If you want to collect data from an MPI program that is already running, you must attach a separate instance of dbx to each process and enable the Collector for each process. When you attach dbx to the processes in an MPI job, each process is halted and restarted at a different time. The time difference could change the interaction between the MPI processes and affect the performance data you collect. To minimize this problem, one solution is to use pstop(1) to halt all the processes. However, once you attach dbx to the processes, you must restart them from dbx, and there is a timing delay in restarting the processes, which can affect the synchronization of the MPI processes. See also Collecting Data From MPI Programs.


Collecting Data From MPI Programs

The Collector can collect performance data from multi-process programs that use the Sun Message Passing Interface (MPI) library. The MPI library is included in the Sun HPC ClusterTools™ software. Use the latest version of the ClusterTools software (5.0) if possible, but you can use version 3.1 or a compatible version. To start the parallel jobs, use the Sun Cluster Runtime Environment (CRE) command mprun. See the Sun HPC ClusterTools documentation for more information. For information about MPI and the MPI standard, see the MPI web site http://www.mcs.anl.gov/mpi.

Because of the way MPI and the Collector are implemented, each MPI process records a separate experiment. Each experiment must have a unique name. Where and how the experiment is stored depends on the kinds of file systems that are available to your MPI job. Issues about storing experiments are discussed in the next subsection.

To collect data from MPI jobs, you can either run the collect command under MPI or start dbx under MPI and use the dbx collector subcommands. Each of these options is discussed in a subsequent subsection.

Storing MPI Experiments

Because multiprocessing environments can be complex, you should be aware of some issues about storing MPI experiments when you collect performance data from MPI programs. These issues concern the efficiency of data collection and storage, and the naming of experiments. See Where the Data Is Stored for information on naming experiments, including MPI experiments.

Each MPI process that collects performance data creates its own experiment. When an MPI process creates an experiment, it locks the experiment directory. All other MPI processes must wait until the lock is released before they can use the directory. Thus, if you store the experiments on a file system that is accessible to all MPI processes, the experiments are created sequentially, but if you store the experiments on file systems that are local to each MPI process, the experiments are created concurrently.

If you store the experiments on a common file system and specify an experiment name in the standard format, experiment.n.er, the experiments have unique names. The value of n is determined by the order in which MPI processes obtain a lock on the experiment directory, and cannot be guaranteed to correspond to the MPI rank of the process. If you attach dbx to MPI processes in a running MPI job, n will be determined by the order of attachment.

If you store the experiments on a local file system and specify an experiment name in the standard format, the names are not unique. For example, suppose you ran an MPI job on a machine with four single-processor nodes labelled node0, node1, node2 and node3. Each node has a local disk called /scratch, and you store the experiments in directory username on this disk. The experiments created by the MPI job have the following full path names.


node0:/scratch/username/test.1.er
node1:/scratch/username/test.1.er
node2:/scratch/username/test.1.er
node3:/scratch/username/test.1.er

The full name including the node name is unique, but in each experiment directory there is an experiment named test.1.er. If you move the experiments to a common location after the MPI job is completed, you must make sure that the names remain unique. For example, to move these experiments to your home directory, which is assumed to be accessible from all nodes, and rename the experiments, type the following commands.


rsh node0 'er_mv /scratch/username/test.1.er test.0.er'
rsh node1 'er_mv /scratch/username/test.1.er test.1.er'
rsh node2 'er_mv /scratch/username/test.1.er test.2.er'
rsh node3 'er_mv /scratch/username/test.1.er test.3.er'

For large MPI jobs, you might want to move the experiments to a common location using a script. Do not use the UNIX® commands cp or mv; see Manipulating Experiments for information on how to copy and move experiments.

If you do not specify an experiment name, the Collector constructs a name of the standard form experiment.n.er, but in this case n is the MPI rank of the process. The stem, experiment, is taken from the experiment group name if you specify an experiment group; otherwise it is test. The experiment names are unique, regardless of whether you use a common file system or a local file system. Thus, if you use a local file system to record the experiments and then copy them to a common file system, you do not have to rename the experiments or reconstruct any experiment group file.
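
For example, a four-process MPI job that specifies neither an experiment name nor an experiment group records the following experiments, one per rank.


test.0.er
test.1.er
test.2.er
test.3.er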

If you do not know which local file systems are available to you, use the df -lk command or ask your system administrator. Always make sure that the experiments are stored in a directory that already exists, that is uniquely defined, and that is not in use for any other experiment. Also make sure that the file system has enough space for the experiments. See Estimating Storage Requirements for information on how to estimate the space needed.



Note - If you copy or move experiments between computers or nodes, you cannot view the annotated source code or source lines in the annotated disassembly code unless you have access to the load objects and source files that were used to run the experiment, or a copy with the same path and timestamp.



Running the collect Command Under MPI

To collect data with the collect command under the control of MPI, use the following syntax.


% mprun -np n collect [collect-arguments] program-name [program-arguments]

Here, n is the number of processes to be created by MPI. This procedure creates n separate instances of collect, each of which records an experiment. Read the section Where the Data Is Stored for information on where and how to store the experiments.

To ensure that the sets of experiments from different MPI runs are stored separately, you can create an experiment group with the -g option for each MPI run. The experiment group should be stored on a file system that is accessible to all MPI processes. Creating an experiment group also makes it easier to load the set of experiments for a single MPI run into the Performance Analyzer. An alternative to creating a group is to specify a separate directory for each MPI run with the -d option.
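
For example, to record a 16-process run into a single experiment group (the process count and the group name run1.erg here are hypothetical), you might type the following.


% mprun -np 16 collect -g run1.erg program-name program-arguments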

Collecting Data by Starting dbx Under MPI

To start dbx and collect data under the control of MPI, use the following syntax.


% mprun -np n dbx program-name < collection-script 

Here, n is the number of processes to be created by MPI and collection-script is a dbx script that contains the commands necessary to set up and start data collection. This procedure creates n separate instances of dbx, each of which records an experiment on one of the MPI processes. If you do not define the experiment name, the experiment is labelled with the MPI rank. Read the section Storing MPI Experiments for information on where and how to store the experiments.

You can name the experiments with the MPI rank by using the collection script and a call to MPI_Comm_rank() in your program. For example, in a C program you would insert the following line.


ier = MPI_Comm_rank(MPI_COMM_WORLD,&me); 

In a Fortran program you would insert the following line.


call MPI_Comm_rank(MPI_COMM_WORLD, me, ier)

If this call were inserted at line 17, for example, you could use a script like the following. The script sets a breakpoint at line 18, runs the program until it stops there (after me has been set), copies the rank from the process into the dbx variable rank, enables the Collector, names the experiment after the rank, and then lets the program run to completion.


stop at 18
run program-arguments
rank=$[me]
collector enable
collector store filename experiment.$rank.er
cont
quit


Using collect With ppgsz

You can use collect with ppgsz(1) by running collect on the ppgsz command and specifying the -F on or -F all flag. The founder experiment is on the ppgsz executable itself and is uninteresting. If your path finds the 32-bit version of ppgsz, and the experiment is run on a system that supports 64-bit processes, the first thing ppgsz does is exec its 64-bit version, creating _x1.er. That executable forks, creating _x1_f1.er.
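
For example, to profile a target run with a 4-Mbyte preferred heap page size (the page-size option value here is hypothetical), you might type the following.


% collect -F on ppgsz -o heap=4M program-name program-arguments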

The child process attempts to exec the named target in the first directory on your path, then in the second, and so forth, until one of the execs succeeds. If, for example, the third attempt succeeds, the first two descendant experiments are named _x1_f1_x1.er and _x1_f1_x2.er, and both are completely empty. The experiment on the target is the one from the successful exec, the third one in the example; it is named _x1_f1_x3.er and is stored under the founder experiment. It can be processed directly by invoking the Analyzer or the er_print utility on test.1.er/_x1_f1_x3.er.

If the 64-bit ppgsz is the initial process, or if the 32-bit ppgsz is invoked on a 32-bit kernel, the fork child that execs the real target has its data in _f1.er, and the real target's experiment is in _f1_x3.er, assuming the same path properties as in the example above.


1 (Footnote) The default threads library on the Solaris 8 OS, /usr/lib/libthread.so (known as T1), has several problems when used with profiling. It may discard profiling interrupts when no thread is scheduled onto an LWP; in such cases, the Total LWP Time reported may seriously underestimate the true LWP time. Under some circumstances, it may also get a segmentation violation while accessing an internal library mutex, causing the application to crash. The workaround is to use the alternate threads library (/usr/lib/lwp/libthread.so, known as T2) by prepending /usr/lib/lwp to your LD_LIBRARY_PATH setting. On the Solaris 9 OS, the default library is T2, and it is incorporated into the libc library.