Oracle® Developer Studio 12.5: Performance Analyzer

Updated: June 2016

Configuration Settings

You can control the presentation of data and other configuration settings using the Settings dialog box. To open this dialog box, click the Settings button in the toolbar or choose Tools → Settings.

The OK button applies the changes you made for the current session, and closes the dialog box. The Apply button applies the changes for the current session, but keeps the dialog box open so you can make more changes. The Close button closes the dialog box without saving or applying changes.

The Export button opens the Export Settings dialog box, which you can use to select which settings to export and where to save them. Exported settings apply to the current session and can be reapplied when you open the experiment in future Performance Analyzer sessions. You can also apply a saved configuration to other experiments. See Performance Analyzer Configuration File for more information.

Views Settings

The Views settings panel lists the applicable data views for the current experiment.

Standard Views

Click the check boxes to select or deselect standard data views for display in Performance Analyzer.

Index Objects Views

Click the check boxes to select or deselect Index Object views for display. The predefined Index Object views include Threads, CPUs, Samples, Seconds, and Processes.

To add a view for a custom index object, click the Add Custom View button to open the Add Index Object dialog box. The index object name you specify must not already be defined, and it cannot match any existing command or any Memory Object type. The name is not case-sensitive, must be composed entirely of alphanumeric characters or the '_' character, and must begin with an alphabetic character. The formula must follow the syntax described in Expression Grammar.

Index objects can also be created using the er_print command. See Commands That Control Index Object Lists for more information.
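For example, a custom index object might be defined in an er_print script (run with er_print -script file experiment). The object name ThreadGroup and its formula are hypothetical sketches; the tokens used in the formula must follow the Expression Grammar mentioned above:

    # Hypothetical index object that groups threads four at a time.
    indxobj_define ThreadGroup (THRID/4)
    # Display data aggregated by the new index object.
    indxobj ThreadGroup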

Memory Object Views

Click the check boxes to select or deselect predefined Memory Object views for display. These views are available when the experiment contains hardware counter profiling data.

Memory objects represent components in the memory subsystem, such as cache lines, pages, and memory banks. Memory objects are predefined for virtual and physical pages, for sizes of 8KB, 64KB, 512KB, and 4MB.

To add a view for a custom memory object, click the Add Custom Memory Object View button to open the Add Memory Object dialog box. The memory object name you specify must not already be defined, and it cannot match any existing command or any Index Object type. The name is not case-sensitive, must be composed entirely of alphanumeric characters or the '_' character, and must begin with an alphabetic character. The formula must follow the syntax described in Expression Grammar.
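As a sketch of the command-line equivalent: the predefined virtual-page objects are derived by shifting the virtual address, so a hypothetical 16 KB page object for er_print might look like the following (the name and shift amount are illustrative, not predefined):

    # Hypothetical memory object: 16 KB virtual pages (VADDR shifted right 14 bits).
    mobj_define Vpage_16K (VADDR>>14)
    # Display data aggregated by the new memory object.
    memobj Vpage_16K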

Machine Model Files

You can load a file that defines Memory Objects for a specific SPARC system architecture. Select the system architecture of interest from the Machine Model drop-down menu. Click Apply or OK, and a new list of objects is displayed in the Memory Object Views column. You can select from these views to display associated data. Search for "Machine Model" in the help for more information.

By default, Performance Analyzer loads a machine model file that is appropriate for the machine on which an experiment was recorded. Machine model files can define both Memory Objects and Index Objects.
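A machine model can also be loaded from er_print with its machinemodel command; the model name below is only an illustration, and the available names depend on your installation:

    # Load a machine model by name (the name t5 is illustrative).
    machinemodel t5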

Metrics Settings

The Metrics settings enable you to choose the metrics that are displayed in most Performance Analyzer views, including Functions, Callers-Callees, Source, and Disassembly. Some metrics can be displayed as time or percentage, while others are displayed as a value. The list includes all metrics that are available in any of the loaded experiments.

For each metric, check boxes are provided for Time and Percentage, or Value. Select the check boxes for the types of metrics that you want Performance Analyzer to display. Click Apply to update views with the new metrics. The Overview also allows you to select metrics and is synchronized with the settings you make here. You can sort the metrics by clicking the column headers.


Note -  You can only choose to display exclusive and inclusive metrics. Attributed metrics are always displayed in the Call Tree view if either the exclusive metric or the inclusive metric is displayed.
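The same metric selection can be scripted for er_print. A minimal sketch, assuming a clock-profiling experiment whose Total CPU metric is named totalcpu:

    # Show exclusive Total CPU as a percentage and inclusive Total CPU as time.
    metrics e.%totalcpu:i.totalcpu
    # Sort the Functions view by exclusive Total CPU, then print it.
    sort e.totalcpu
    functions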

Timeline Settings

The Timeline settings enable you to specify the information displayed in the Timeline View.

Data Types

Select the kinds of data to display. The selection applies to all experiments and all display types. If a data type is not present in an experiment, it is not shown as a selectable data type.

CPU Utilization Samples

Select to display a CPU Utilization Samples bar for each process. The Samples bar shows a graph summarizing the microstate information for each periodic sample.

Clock Profiling

Select to display a timeline bar of clock profiling data captured for each LWP, thread, CPU, or experiment. The bar for each item shows a colored call stack of the function that was executing at each sampled event.

HW Counter Profiling (keyword)

Select to display a timeline bar of hardware counter profiling data.

I/O Tracing

Select to display a timeline bar of I/O tracing data.

Heap Tracing

Select to display a timeline bar of heap tracing data.

Synchronization Tracing

Select to display a timeline bar of synchronization tracing call stacks.

Event States

Select to add a graph to each timeline bar that shows the microstate for each event.

Event Density

Select to add a graph to each timeline bar that shows when events occur.

Group Data By

Specify how to organize the timeline bars for each process: by LWP, by thread, by CPU, or for the overall process. You can also set the grouping using the Group Data list in the Timeline toolbar.

Stack Alignment

Specify whether the call stacks displayed in the timeline event markers are aligned on the leaf function or the root function. Select leaf if you want the last function called to be shown at the bottom of the stack. This setting does not affect the data presented in the Selection Details panel, which always displays the leaf function at the top.

Call Stack Magnification

Specify how many pixels are used to display each function in a call stack. The default value is three. This setting, along with the timeline vertical zoom that controls the available space per row, determines whether deep call stacks are truncated or fully displayed.

Source/Disassembly Settings

The Source/Disassembly settings enable you to select the information presented in the Source view, Disassembly view, and Source/Disassembly view.

Compiler Commentary

Select the classes of compiler commentary that are displayed in the Source view and the Disassembly view.

Hot Line Highlighting Threshold

The threshold for highlighting high-metric lines in the Source view and the Disassembly view, expressed as a percentage of the largest metric value attributed to any line in the displayed source or disassembly. The threshold is applied to each metric independently.
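In er_print, the corresponding thresholds for annotated source and disassembly are set with the sthresh and dthresh commands; the values below are illustrative:

    # Flag source lines at or above 90% of the maximum metric value.
    sthresh 90
    # Use a lower threshold for the annotated disassembly listing.
    dthresh 75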

Source Code

Display the source code in the Disassembly view. If you display source code in the Disassembly view, the compiler commentary is also displayed for the classes that are enabled.

Metrics for Source Lines

Display metrics for the source code in the Disassembly view.

Hexadecimal Instructions

Display instructions in hexadecimal in the Disassembly view.

Only Show Data of Current Function

Display metrics only for the instructions of the current function selected in another view. If you select this option, metrics are hidden for all other instructions.

Show Compiler Command-line Flags

Display the compiler command and options used to compile the target program. Scroll to the last line of the Source view to see the command line.

Show Function Beginning Line

Toggle the display of function beginning lines on or off in the Source view.

Call Tree Settings

The Call Tree setting, Expand branches when percent of metric exceeds this threshold, sets the trigger for expanding branches in the Call Tree view. If a branch of the call tree uses the specified percentage of the metric or less, it is not expanded automatically when you select an expanding action such as Expand Branch or Expand All Branches. Branches that exceed the threshold are expanded.

Formats Settings

The Formats settings enable you to specify miscellaneous data view formatting.

Function Name Style

Specify whether C++ function names and Java method names are displayed in long, short, or mangled form.

Append SO name to Function name

Select the check box to append the name of the shared object in which a function or method is located to the function or method name.
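Both of these format choices map to the er_print name command; a sketch showing long-form names with the shared-object name appended:

    # Display long function names and append the shared-object name.
    name long:soname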

View Mode

Set the initial value for the view mode toolbar setting, which is enabled only for Java experiments and OpenMP experiments. The view modes User, Expert, and Machine set the default mode for viewing experiments. You can switch the current view using the view mode list in the toolbar.

    For Java experiments:

  • User mode shows metrics for interpreted methods and any native methods called. The special function <no Java call stack recorded> indicates that the Java Virtual Machine (JVM) software did not report a Java call stack, even though a Java program was running.

  • Expert mode shows metrics for interpreted methods and any native methods called, and additionally lists methods that were dynamically compiled by the JVM.

  • Machine mode shows multiple JVM compilations as completely independent functions, although the functions will have the same name. In this mode, all functions from the JVM software are shown as such.

See Java Profiling View Modes for more detailed descriptions of the view modes for Java experiments.

    For OpenMP experiments:

  • User mode shows reconstructed call stacks similar to those obtained when the program is compiled without OpenMP. Special functions with names of the form <OMP-*> are shown when the OpenMP runtime is performing certain operations.

  • Expert mode shows compiler-generated functions representing parallelized loops, tasks, and so on, which are aggregated with user functions in User mode. Special functions with names of the form <OMP-*> are shown when the OpenMP runtime is performing certain operations.

  • Machine mode shows machine call stacks for all threads without any special <OMP-*> functions.

See Overview of OpenMP Software Execution for more detailed descriptions of the view modes for OpenMP experiments.

For all other experiments, all three modes show the same data.
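If you script er_print, the initial view mode can be set the same way; a minimal sketch:

    # Set the view mode before printing Java or OpenMP data.
    viewmode expert
    functions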

Comparison Style

Specify how you want to display data when comparing experiments. For example, a comparison experiment metric might display as x0.994 to indicate its value relative to the base experiment.

Absolute Values

Shows the metrics values for all loaded experiments.

Deltas

Shows the +/- difference between metrics for the baseline experiment and the other loaded experiments.

Ratios

Shows the metrics for the other loaded experiments as ratios relative to the baseline experiment.
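In er_print, the same comparison styles are selected with the compare command. A sketch, assuming a baseline and a second experiment were loaded together (for example, er_print -script cmp.scr base.er peak.er, where the file names are illustrative):

    # Show each metric as a ratio relative to the baseline experiment.
    compare ratio
    functions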

Search Path Settings

The Search Path setting specifies the path used for finding the loaded experiment's associated source and object files so that annotated source data can be displayed in the Source and Disassembly views. The search path is also used to locate the .jar files for the Java Runtime Environment on your system. The special directory name $expts refers to the set of current experiments, in the order in which they were loaded. Only the founder experiment is examined when searching $expts; descendant experiments are not examined.

By default, the search path is set to $expts and . (the current directory). You can add other paths to search by typing or browsing to the path and clicking Append. To edit a path in the list, select it, edit it in the Paths field, and click Update. To change the search order, select a path in the list and click the Move Up or Move Down buttons.
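The search path can be set the same way in er_print with the setpath and addpath commands; the extra source directory below is illustrative:

    # Reset the search path to the defaults, then append a source directory.
    setpath $expts:.
    addpath /export/home/me/myproject/src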

See How the Tools Find Source Code for more information about how the search path is used.

Pathmaps Settings

The Pathmaps settings enable you to map the leading part of a file path from one location to another to help Performance Analyzer locate source files. A path map is useful for an experiment that has been moved from the original location where it was recorded. When the source can be found, Performance Analyzer can display annotated source data in the Source and Disassembly views.

From path

Type the beginning of the path to the source that was used in the experiment. You can find this path by viewing the Selection Details panel when the experiment is open in Performance Analyzer.

To path

Type or browse to the beginning of the path to the source from the current location where you are running Performance Analyzer.

For example, if the experiment contains paths specified as /a/b/c/d/sourcefile and sourcefile is now located in /x, you can use the Pathmaps setting to map /a/b/c/d/ to /x/. Multiple path maps can be specified, and each is tried in order to find a file.
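The same mapping can be expressed with the er_print pathmap command, reusing the example above:

    # Map the recorded prefix to the new location of the sources.
    pathmap /a/b/c/d /x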

See How the Tools Find Source Code for more information about how the path maps are used.

Performance Analyzer Configuration File

Performance Analyzer saves your configuration settings automatically when you exit the tool. When you open the same experiment again, it is configured as it was when you previously closed the experiment.

You can save some settings in a configuration file whose name ends in config.xml and apply that file to any experiment when you open it from the Open Experiment dialog box. You can save configurations in a private location for your own use, or in a shared location for use by other users.

When you open an experiment, Performance Analyzer searches default locations for available configuration files and enables you to choose which configuration to apply to the experiment you are opening.

You can also export settings into a .er.rc file that can be read by er_print by choosing Tools → Export Settings as .er.rc. This enables you to have the same metrics enabled in er_print and Performance Analyzer.
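An exported .er.rc file is a list of er_print commands that is read at startup. A minimal hand-written sketch, assuming a clock-profiling experiment whose Total CPU metric is named totalcpu:

    # Sample .er.rc: select metrics, sort order, and initial view mode.
    metrics e.%totalcpu:i.%totalcpu:name
    sort e.totalcpu
    viewmode user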