Oracle Solaris Studio 12.2: Performance Analyzer

Interpreting Performance Metrics

The data recorded for each event includes a high-resolution timestamp, a thread ID, an LWP ID, and a processor ID. These values can be used to filter the metrics in the Performance Analyzer by time, thread, LWP, or CPU. See the getcpuid(2) man page for information on processor IDs. On systems where getcpuid(2) is not available, the processor ID is -1, which maps to Unknown.

In addition to the common data, each event generates specific raw data, which is described in the following sections. Each section also contains a discussion of the accuracy of the metrics derived from the raw data and the effect of data collection on the metrics.

Clock-Based Profiling

The event-specific data for clock-based profiling consists of an array of profiling interval counts. On the Solaris OS, an interval counter is maintained for each of the ten microstates that the kernel keeps for each LWP. At the end of each profiling interval, the counter for the current microstate is incremented by 1, and another profiling signal is scheduled. The array is recorded and reset only when the Solaris LWP enters user mode. Resetting the array consists of setting the array element for the User-CPU state to 1, and the array elements for all the other states to 0. The array data is recorded on entry to user mode, before the array is reset. Thus, the array contains an accumulation of counts for every microstate entered since the previous entry into user mode. On the Linux OS, microstates do not exist; the only interval counter is User CPU Time.
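
Clock-based profiling is enabled with the collect command's -p option. The following invocations are a sketch; the keyword on selects the default profiling interval of approximately 10 milliseconds, and hi selects a high-resolution interval of approximately 1 millisecond, which produces more profile packets at the cost of more overhead. The target name a.out is illustrative.

    % collect -p on a.out
    % collect -p hi a.out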

The call stack is recorded at the same time as the data. If the Solaris LWP is not in user mode at the end of the profiling interval, the call stack cannot change until the LWP or thread enters user mode again. Thus the call stack always accurately records the position of the program counter at the end of each profiling interval.

The metrics to which each of the microstates contributes on the Solaris OS are shown in Table 6-1.

Table 6-1 How Kernel Microstates Contribute to Metrics

Kernel Microstate    Description                                  Metric Name
LMS_USER             Running in user mode                         User CPU Time
LMS_SYSTEM           Running in system call or page fault         System CPU Time
LMS_TRAP             Running in any other trap                    System CPU Time
LMS_TFAULT           Asleep in user text page fault               Text Page Fault Time
LMS_DFAULT           Asleep in user data page fault               Data Page Fault Time
LMS_KFAULT           Asleep in kernel page fault                  Other Wait Time
LMS_USER_LOCK        Asleep waiting for user-mode lock            User Lock Time
LMS_SLEEP            Asleep for any other reason                  Other Wait Time
LMS_STOPPED          Stopped (/proc, job control, or lwp_stop)    Other Wait Time
LMS_WAIT_CPU         Waiting for CPU                              Wait CPU Time
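
These timing metrics can be requested by name in er_print. The sketch below assumes an experiment named test.1.er; the metric keywords user and system are assumptions matched to the metric names above, and the metric_list command prints the exact keywords available for an experiment. The prefix e. requests exclusive metrics.

    % er_print -metric_list test.1.er
    % er_print -metrics e.user:e.system -functions test.1.er
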
Accuracy of Timing Metrics

Timing data is collected on a statistical basis, and is therefore subject to all the errors of any statistical sampling method. For very short runs, in which only a small number of profile packets is recorded, the call stacks might not represent the parts of the program that consume the most resources. Run your program long enough, or enough times, to accumulate hundreds of profile packets for any function or source line of interest. For example, with the default profiling interval of approximately 10 milliseconds, a function must accumulate about one second of CPU time to generate 100 profile packets.

In addition to statistical sampling errors, specific errors arise from the way the data is collected and attributed, and from the way the program progresses through the system. These errors can appear as inaccuracies or distortions in the timing metrics.

In addition to the inaccuracies just described, timing metrics are distorted by the process of collecting data. The time spent recording profile packets never appears in the metrics for the program, because the recording is initiated by the profiling signal. (This is an instance of correlation between the data collection and the program's behavior.) The user CPU time spent in the recording process is distributed over whatever microstates are recorded. The result is an underaccounting of the User CPU Time metric and an overaccounting of other metrics. The amount of time spent recording data is typically less than a few percent of the CPU time for the default profiling interval.

Comparisons of Timing Metrics

If you compare timing metrics obtained from the profiling done in a clock-based experiment with times obtained by other means, you should be aware of the following issues.

For a single-threaded application, the total Solaris LWP or Linux thread time recorded for a process is usually accurate to a few tenths of a percent, compared with the values returned by gethrtime(3C) for the same process. The CPU time can vary by several percentage points from the values returned by gethrvtime(3C) for the same process. Under heavy load, the variation might be even more pronounced. However, the CPU time differences do not represent a systematic distortion, and the relative times reported for different functions, source lines, and so on, are not substantially distorted.

For multithreaded applications using unbound threads on the Solaris OS, differences in values returned by gethrvtime() could be meaningless because gethrvtime() returns values for an LWP, and a thread can change from one LWP to another.

The LWP times that are reported in the Performance Analyzer can differ substantially from the times that are reported by vmstat, because vmstat reports times that are summed over CPUs. If the target process has more LWPs than the system on which it is running has CPUs, the Performance Analyzer shows more wait time than vmstat reports.

The microstate timings that appear in the Statistics tab of the Performance Analyzer and in the er_print statistics display are based on the process file system, /proc, usage reports, in which the times spent in the microstates are recorded to high accuracy. See the proc(4) man page for more information. You can compare these timings with the metrics for the <Total> function, which represents the program as a whole, to gain an indication of the accuracy of the aggregated timing metrics. However, the values displayed in the Statistics tab can include other contributions that are not included in the timing metric values for <Total>. These contributions come from the periods of time in which data collection is paused.
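
One way to make this comparison from the command line is to print both displays for the same experiment (a sketch; the experiment name test.1.er is illustrative). The <Total> function appears in the functions list.

    % er_print -statistics test.1.er
    % er_print -functions test.1.er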

User CPU time and hardware counter cycle time differ because the hardware counters are turned off when the CPU mode has been switched to system mode. For more information, see Traps.

Synchronization Wait Tracing

The Collector collects synchronization delay events by tracing calls to the functions in the threads library, libthread.so, or to the real time extensions library, librt.so. The event-specific data consists of high-resolution timestamps for the request and the grant (beginning and end of the call that is traced), and the address of the synchronization object (the mutex lock being requested, for example). The thread and LWP IDs are the IDs at the time the data is recorded. The wait time is the difference between the request time and the grant time. Only events for which the wait time exceeds the specified threshold are recorded. The synchronization wait tracing data is recorded in the experiment at the time of the grant.

The LWP on which the waiting thread is scheduled cannot perform any other work until the event that caused the delay is completed. The time spent waiting appears both as Synchronization Wait Time and as User Lock Time. User Lock Time can be larger than Synchronization Wait Time because the synchronization delay threshold screens out delays of short duration.

The wait time is distorted by the overhead for data collection. The overhead is proportional to the number of events collected. You can minimize the fraction of the wait time spent in overhead by increasing the threshold for recording events.
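
Synchronization wait tracing is enabled with the collect -s option. In the sketch below, calibrate asks the Collector to set the threshold based on the measured cost of the traced calls, and a numeric value sets the threshold directly, in microseconds; the target name is illustrative.

    % collect -s calibrate a.out
    % collect -s 1000 a.out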

Hardware Counter Overflow Profiling

Hardware counter overflow profiling data includes a counter ID and the overflow value. The value can be larger than the value at which the counter is set to overflow, because the processor executes some instructions between the overflow and the recording of the event. The value is especially likely to be larger for cycle and instruction counters, which are incremented much more frequently than counters such as floating-point operations or cache misses. The delay in recording the event also means that the program counter address recorded with the call stack does not correspond exactly to the overflow event. See Attribution of Hardware Counter Overflows for more information. See also the discussion of Traps. Traps and trap handlers can cause significant differences between the reported User CPU time and the time reported by the cycle counter.

Experiments recorded on machines that dynamically change their operating clock frequency show inaccuracies in the conversion of cycle-based counts to time.

The amount of data collected depends on the overflow value. Choosing a value that is too small causes events to be recorded so frequently that the program's execution time increases significantly and the cost of collection distorts the data.
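
Hardware counter overflow profiling is enabled with the collect -h option. In the sketch below, cycles names the cycle counter, and the keywords on and hi select predefined overflow values, with hi producing more frequent overflows and therefore more data; the counter name and target are illustrative, and running collect with no arguments lists the counters available on the machine.

    % collect -h cycles,on a.out
    % collect -h cycles,hi a.out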

Heap Tracing

The Collector records tracing data for calls to the memory allocation and deallocation functions malloc, realloc, memalign, and free by interposing on these functions. If your program bypasses these functions to allocate memory, tracing data is not recorded. Tracing data is not recorded for Java memory management, which uses a different mechanism.
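
Heap tracing is enabled with the collect -H option (a minimal sketch; the target name is illustrative):

    % collect -H on a.out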

The functions that are traced could be loaded from any of a number of libraries. The data that you see in the Performance Analyzer might depend on the library from which a given function is loaded.

If a program makes a large number of calls to the traced functions in a short space of time, the time taken to execute the program can be significantly lengthened. The extra time is used in recording the tracing data.

Dataspace Profiling

Dataspace profiling is an extension to hardware counter profiling that applies to memory references. Hardware counter profiling can attribute metrics to user functions, source lines, and instructions, but not to the data objects being referenced. By default, the Collector captures only the user instruction addresses. When dataspace profiling is enabled, the Collector also captures data addresses. Backtracking is the technique used to get the performance information that supports dataspace profiling. When backtracking is enabled, the Collector looks back through the load and store instructions that preceded a hardware counter event to find a candidate instruction that could have caused the event.

To allow dataspace profiling, the target must be a C program, compiled for the SPARC architecture with the -xhwcprof, -xdebugformat=dwarf, and -g flags. Furthermore, the data collected must be hardware counter profiles for memory-related counters, and a + sign must be prepended to the counter name. The Performance Analyzer includes two tabs related to dataspace profiling, the DataObject tab and the DataLayout tab, as well as various tabs for memory objects.
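
For example, a compilation that satisfies these requirements might look like the following; the source file name is illustrative:

    % cc -xhwcprof -xdebugformat=dwarf -g myprog.c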

Dataspace profiling can also be done with clock profiling, by prepending a plus sign (+) to the profiling interval.

Running collect with no arguments lists hardware counters, and specifies whether they are load, store, or load-store related. See Hardware Counter Overflow Profiling Data.
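
The following invocations sketch both forms of dataspace profiling. The counter name ecstall is illustrative; choose a memory-related counter from the list that collect prints. Note the + prefix on the counter name and, for the clock-based form, on the profiling interval keyword.

    % collect -h +ecstall,on a.out
    % collect -p +on a.out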

MPI Tracing

MPI tracing is based on a modified VampirTrace data collector. For more information, see the VampirTrace User Manual on the Technische Universität Dresden web site.
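
MPI tracing is enabled with the collect -M option. The following is a sketch only; the option value CT refers to the Oracle Message Passing Toolkit, and the mpirun syntax, including the -- separator before the target, varies with the MPI installation:

    % collect -M CT mpirun -np 4 -- a.out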