Oracle® Solaris Studio 12.4: Performance Analyzer

Updated: January 2015

Clock Profiling Data

When you are doing clock profiling, the data collected depends on the information provided by the operating system.

Clock Profiling Under Oracle Solaris

In clock profiling under Oracle Solaris, the state of each thread is stored at regular time intervals. This time interval is called the profiling interval. The data collected is converted into times spent in each state, with a resolution of the profiling interval.

The default profiling interval is approximately 10 milliseconds (10 ms). You can specify a high-resolution profiling interval of approximately 1 ms and a low-resolution profiling interval of approximately 100 ms. If the operating system permits, you can also specify a custom interval. Run the collect -h command with no other arguments to print the range and resolution allowable on the system.
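
For example, assuming the -p option syntax reported by collect -h on your system, and using a hypothetical program name, profiling runs at different resolutions might be requested as follows:

% collect -h                    # print the allowable profiling intervals for this system
% collect -p hi ./myprog        # high-resolution profiling, approximately 1 ms
% collect -p lo ./myprog        # low-resolution profiling, approximately 100 ms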

The following table shows the performance metrics that Performance Analyzer and er_print can display when an experiment contains clock profiling data. Note that the metrics from all threads are added together.

Table 2-1  Timing Metrics from Clock Profiling on Oracle Solaris

Metric                   Definition
Total thread time        Sum of time that threads spent in all states.
Total CPU time           Thread time spent running on the CPU in either user, kernel, or trap mode.
User CPU time            Thread time spent running on the CPU in user mode.
System CPU time          Thread time spent running on the CPU in kernel mode.
Trap CPU time            Thread time spent running on the CPU in trap mode.
User lock time           Thread time spent waiting for a synchronization lock.
Data page fault time     Thread time spent waiting for a data page.
Text page fault time     Thread time spent waiting for a text page.
Kernel page fault time   Thread time spent waiting for a kernel page.
Stopped time             Thread time spent stopped.
Wait CPU time            Thread time spent waiting for the CPU.
Sleep time               Thread time spent sleeping.
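
As a quick sketch of how these metrics can be examined, the following er_print invocations (test.1.er is a hypothetical experiment name; check the er_print man page for the exact command set on your release) print the function list annotated with the recorded timing metrics and list the available metric keywords:

% er_print -functions test.1.er      # show functions with the recorded timing metrics
% er_print -metric_list test.1.er    # list the metric keywords available in the experiment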

Timing metrics tell you where your program spent time in several categories and can be used to improve the performance of your program.

  • High user CPU time tells you where the program did most of the work. You can use it to find the parts of the program where you might gain the most from redesigning the algorithm.

  • High system CPU time tells you that your program is spending a lot of time in calls to system routines.

  • High wait CPU time tells you that more threads are ready to run than there are CPUs available, or that other processes are using the CPUs.

  • High user lock time tells you that threads are unable to obtain the lock that they request.

  • High text page fault time means that the code ordered by the linker is organized in memory so that many calls or branches cause a new page to be loaded.

  • High data page fault time indicates that access to the data is causing new pages to be loaded. Reorganizing the data structure or the algorithm in your program can fix this problem.

Clock Profiling Under Linux

On Linux platforms, the clock data can only be shown as Total CPU time. Linux CPU time is the sum of user CPU time and system CPU time.

Clock Profiling for OpenMP Programs

If clock profiling is performed on an OpenMP program, additional metrics are provided: Master Thread Time, OpenMP Work, and OpenMP Wait.

  • On Oracle Solaris, Master Thread Time is the total time spent in the master thread and corresponds to wall-clock time. The metric is not available on Linux.

  • On Oracle Solaris, OpenMP Work accumulates when work is being done either serially or in parallel. OpenMP Wait accumulates when the OpenMP runtime is waiting for synchronization, whether the wait is consuming CPU time or sleeping; it also accumulates when work is being done in parallel but the thread is not scheduled on a CPU.

  • On the Linux operating system, OpenMP Work and OpenMP Wait are accumulated only when the process is active in either user or system mode. Unless you have specified that OpenMP should do a busy wait, OpenMP Wait on Linux is not useful.

Data for OpenMP programs can be displayed in any of three view modes. In User mode, slave threads are shown as if they were really cloned from the master thread and have call stacks matching those from the master thread; frames in the call stack coming from the OpenMP runtime code (libmtsk.so) are suppressed. In Expert user mode, the master and slave threads are shown differently, the explicit functions generated by the compiler are visible, and the frames from the OpenMP runtime code (libmtsk.so) are suppressed. In Machine mode, the actual native stacks are shown.
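
In er_print, a sketch of selecting among these presentations uses the viewmode command (the experiment name here is hypothetical):

% er_print -viewmode user -functions omptest.1.er       # User mode: slave threads shown as clones of the master
% er_print -viewmode expert -functions omptest.1.er     # Expert mode: compiler-generated functions visible
% er_print -viewmode machine -functions omptest.1.er    # Machine mode: actual native stacks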

Clock Profiling for the Oracle Solaris Kernel

The er_kernel utility can collect clock-based profile data on the Oracle Solaris kernel. You can profile the kernel by running the er_kernel utility directly from the command line or by choosing Profile Kernel from the File menu in Performance Analyzer.

The er_kernel utility captures kernel profile data and records the data as a Performance Analyzer experiment in the same format as an experiment created on user programs by the collect utility. The experiment can be processed by the er_print utility or Performance Analyzer. A kernel experiment can show function data, caller-callee data, instruction-level data, and a timeline, but not source-line data (because most Oracle Solaris modules do not contain line-number tables).

The er_kernel utility can also record a user-level experiment on any processes running at the time for which the user has permissions. Such experiments are similar to those created by collect, but they contain data only for User CPU Time and System CPU Time, and do not support Java or OpenMP profiling.
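
A minimal sketch of a command-line kernel profiling run follows; the workload script name is hypothetical, and the exact options supported by er_kernel on your release are described in Chapter 9 and the er_kernel man page:

% er_kernel -p on ./load.sh     # clock-profile the kernel while the load.sh workload runs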

See Chapter 9, Kernel Profiling for more information.

Clock Profiling for MPI Programs

Clock profiling data can be collected on an MPI experiment that is run with Oracle Message Passing Toolkit, formerly known as Sun HPC ClusterTools. The Oracle Message Passing Toolkit must be at least version 8.1.

The Oracle Message Passing Toolkit is available as part of the Oracle Solaris 11 release. If it is installed on your system, you can find it in /usr/openmpi. If it is not already installed on your Oracle Solaris 11 system, you can search for the package with the command pkg search openmpi if a package repository is configured for the system. See Adding and Updating Software in Oracle Solaris 11 for more information about installing software in Oracle Solaris 11.
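
For example, assuming a package repository is configured, locating and installing the toolkit might look like the following sketch; the exact package name to install is the one reported by the search:

% pkg search openmpi            # search the configured repository for the Open MPI packages
% pkg install <package-name>    # install the package reported above (requires appropriate privileges)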

When you collect clock profiling data on an MPI experiment, you can view two additional metrics:

  • MPI Work, which accumulates when the process is inside the MPI runtime doing work, such as processing requests or messages

  • MPI Wait, which accumulates when the process is inside the MPI runtime but waiting for an event, buffer, or message

On Oracle Solaris, MPI Work accumulates when work is being done either serially or in parallel. MPI Wait accumulates when the MPI runtime is waiting for synchronization, whether the wait is consuming CPU time or sleeping; it also accumulates when work is being done in parallel but the thread is not scheduled on a CPU.

On Linux, MPI Work and MPI Wait are accumulated only when the process is active in either user or system mode. Unless you have specified that MPI should do a busy wait, MPI Wait on Linux is not useful.
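
A sketch of collecting clock profiling data on an MPI job run under the Oracle Message Passing Toolkit follows; the program name and process count are illustrative, and the collect man page describes the MPI-related options supported by your installation:

% collect -p on mpirun -np 4 -- ./mpi_app    # the -- separator before the target program is required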


Note -  If you are using Linux with Oracle Message Passing Toolkit 8.2 or 8.2.1, you might need a workaround. The workaround is not needed for version 8.1 or 8.2.1c, or for any version if you are using an Oracle Solaris Studio compiler.

The Oracle Message Passing Toolkit version number is indicated by the installation path, such as /opt/SUNWhpc/HPC8.2.1, or you can type mpirun -V to see output like the following, in which the version appears in the string (ct8.2.1 in this example):

mpirun (Open MPI) 1.3.4r22104-ct8.2.1-b09d-r70

If your application is compiled with a GNU or Intel compiler and you are using Oracle Message Passing Toolkit 8.2 or 8.2.1 for MPI, to obtain MPI state data you must pass the --enable-new-dtags option to the linker (via -Wl,--enable-new-dtags) in the Oracle Message Passing Toolkit link command. This option causes the executable to define RUNPATH in addition to RPATH, allowing the MPI State libraries to be enabled with the LD_LIBRARY_PATH environment variable.
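
For a build with a GNU compiler, a sketch of such a link step is shown below (the file names are illustrative); -Wl passes --enable-new-dtags through the compiler driver to the linker so that RUNPATH is recorded, and the MPI State libraries can then be selected at run time through the LD_LIBRARY_PATH environment variable:

% mpicc -o mpi_app mpi_app.o -Wl,--enable-new-dtags    # record RUNPATH in addition to RPATH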