2 Diagnostic Tools

The Java Development Kit (JDK) provides diagnostic tools and troubleshooting tools specific to various operating systems. Custom diagnostic tools can also be developed using the APIs provided by the JDK.

Diagnostic Tools Overview

Most of the command-line utilities described in this section are either included in the JDK or are native operating system tools and utilities.

Although the JDK command-line utilities are included in the JDK download, they can also be used to diagnose issues and monitor applications that are deployed with the Java Runtime Environment (JRE).

In general, the diagnostic tools and options use various mechanisms to get the information they report. The mechanisms are specific to the virtual machine (VM) implementation, operating systems, and release. Frequently, only a subset of the tools is applicable to a given issue at a particular time. Command-line options that are prefixed with -XX are specific to Java HotSpot VM. See Java HotSpot VM Command-Line Options.

Note:

The -XX options are not part of the Java API and can vary from one release to the next.

The tools and options are divided into several categories, depending on the type of problem that you are troubleshooting. Certain tools and options might fall into more than one category.

Note:

Some command-line utilities described in this section, such as jstack, jinfo, and jmap, are experimental. It is suggested to use the latest diagnostic utility, jcmd, instead of these earlier utilities.

JDK Mission Control

JDK Mission Control (JMC) is a production-time profiling and diagnostics tool. It includes tools to monitor and manage your Java application with very small performance overhead.

JMC's very small performance overhead is a result of its tight integration with the HotSpot VM. JMC functionality is always available on-demand, and its small performance overhead is only in effect while the tools are running. This approach also eliminates the problem of the observer effect, which occurs when monitoring tools alter the execution characteristics of the system. JMC enables you to troubleshoot issues and identify root causes and bottlenecks. These properties make the JMC tool ideal for applications running in production.

JMC consists of the following client applications and plug-ins:
  • JVM Browser shows running Java applications and their JVMs.
  • JMX Console is a mechanism for monitoring and managing JVMs. It connects to a running JVM, collects and displays its characteristics in real time, and enables you to change some of its runtime properties through Managed Beans (MBeans). You can also create rules that trigger on certain events (for example, send an e-mail if the CPU usage by the application reaches 90 percent).

  • Flight Recorder (JFR) is a tool for collecting diagnostic and profiling data about a running Java application. It is integrated into the JVM and causes very small performance overhead, so it can be used in production environments. JFR continuously saves large amounts of data about the running applications. This profiling information includes thread samples, lock profiles, and garbage collection details. JFR presents diagnostic information in logically grouped tables and charts. It enables you to select the range of time and level of detail necessary to focus on the problem. Data collected by JFR can be essential when contacting Oracle support to help diagnose issues with your Java application.

  • jcmd Utility or Diagnostic Commands is used to send diagnostic command requests to the JVM. These requests are useful for managing recordings from Flight Recorder, troubleshooting, and diagnosing JVM and Java applications.
  • Plug-ins help in heap dump analysis and DTrace recording. See Plug-in Details. Java SE plug-ins connect to a JVM using the Java Management Extensions (JMX) agent. For more information about JMX, see the Java Platform, Standard Edition Java Management Extensions Guide.

Troubleshoot with JDK Mission Control

JMC provides the following features or functionalities that can help you in troubleshooting:

  • Java Management console (JMX) connects to a running JVM, and collects and displays key characteristics in real time.
  • Triggers run user-provided custom actions and rules when specified JVM conditions are met.
  • Experimental plug-ins for the JMC tool provide additional troubleshooting capabilities.
  • Flight Recording in JMC is available to analyze events. The preconfigured tabs enable you to easily drill down in various areas of common interest, such as code, memory and garbage collection, threads, and I/O. The Automated Analysis Results page of flight recordings helps you to diagnose issues more quickly. The provided rules and heuristics help you find functional and performance problems in your application and provide tuning tips. Some rules that operate with relatively unknown concepts, like safe points, provide explanations and links to further information. Some rules are parametrized and can be configured to make more sense in your particular environment. Individual rules can be enabled or disabled as you see fit.
    • Flight Recorder in the JMC application presents diagnostic information in logically grouped tables, charts, and dials. It enables you to select the range of time and level of detail necessary to focus on the problem.
  • The JMC plug-ins connect to a JVM using the Java Management Extensions (JMX) agent. JMX is a standard API for the management and monitoring of resources such as applications, devices, services, and the Java Virtual Machine.

Flight Recorder

Flight Recorder (JFR) is a profiling and event collection framework built into the JDK.

Flight Recorder allows Java administrators and developers to gather detailed low-level information about how a JVM and Java applications are behaving. You can use JMC, with a plug-in, to visualize the data collected by JFR. Flight Recorder and JMC together create a complete toolchain to continuously collect low-level and detailed runtime information enabling after-the-fact incident analysis.

The advantages of using JFR are:

  • It records data about JVM events. You can record events at a particular instant in time.
  • Recording events with JFR enables you to preserve the execution states to analyze issues. You can access the data anytime to better understand problems and resolve them.
  • JFR can record a large amount of data on production systems while keeping the overhead of the recording process low.
  • It is most suited for recording latencies. It records situations where the application is not executing as expected and provides details on the bottlenecks.
  • It provides insight into how programs interact with the execution environment as a whole, ranging from the hardware and operating system, through the JVM and JDK, to the Java application environment.

Flight recordings can be started when the application is started or while the application is running. The data is recorded as time-stamped data points called events. Events are categorized as follows:

  • Duration events: occur over a particular duration, with a specific start time and stop time.
  • Instant events: occur instantly and are logged immediately, for example, a thread gets blocked.
  • Sample events: occur at regular intervals to check the overall health of the system, for example, printing heap diagnostics every minute.
  • Custom events: user-defined events created using JMC or the Flight Recorder APIs (a minimal sketch follows this list).
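
The following is a minimal sketch of the last category, a custom event defined with the jdk.jfr API; the event name, labels, and fields (com.example.OrderProcessed, orderId, itemCount) are hypothetical and used only for illustration:

    import jdk.jfr.Description;
    import jdk.jfr.Event;
    import jdk.jfr.Label;
    import jdk.jfr.Name;

    public class CustomEventExample {

        // Hypothetical event type; the name, labels, and fields are for illustration only.
        @Name("com.example.OrderProcessed")
        @Label("Order Processed")
        @Description("Time spent processing one order")
        static class OrderProcessedEvent extends Event {
            @Label("Order Id")
            long orderId;

            @Label("Item Count")
            int itemCount;
        }

        public static void main(String[] args) {
            OrderProcessedEvent event = new OrderProcessedEvent();
            event.begin();                 // start timing the unit of work
            // ... process the order ...
            event.orderId = 42;
            event.itemCount = 3;
            event.commit();                // recorded only if the event is enabled in an active recording
        }
    }

If such an event is committed while a recording is running, it appears under its @Name in the recording and can be browsed in JMC like any built-in event.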

In addition, there are predefined events that are enabled in a recording template. Some templates only save very basic events and have virtually no impact on performance. Other templates may come with slight performance overhead and may also trigger garbage collections to gather additional data. The following templates are provided with Flight Recorder in the <JDK_ROOT>/lib/jfr directory:

  • default.jfc: Collects a predefined set of data with low overhead.
  • profile.jfc: Provides more data than the default.jfc template, but with overhead and impact on performance.

Flight Recorder produces the following types of recordings:

  • Time fixed recordings: A time fixed recording, also known as a profiling recording, runs for a set amount of time and then stops. Usually, a time fixed recording has more events enabled and may have a slightly bigger performance effect. Events that are turned on can be modified according to your requirements. Time fixed recordings are automatically dumped and opened.

    Typical use cases for a time fixed recording are as follows:

    • Profile which methods are run the most and where most objects are created.

    • Look for classes that use more and more heap, which indicates a memory leak.

    • Look for bottlenecks due to synchronization, among many other use cases.

  • Continuous recordings: A continuous recording is a recording that is always on and saves, for example, the last six hours of data. During this recording, JFR collects events and writes data to the global buffer. When the global buffer fills up, the oldest data is discarded. The data currently in the buffer is written to the specified file whenever you request a dump, or if the dump is triggered by a rule.

    A continuous recording with the default template has low overhead and gathers a lot of useful data. However, this template doesn't gather heap statistics or allocation profiling.

Produce a Flight Recording

The following sections describe different ways to produce a flight recording.

Start a Flight Recording

Follow these steps to start a flight recording using JMC.

  1. Find your JVM in the JVM Browser.
  2. Right-click the JVM and select Start Flight Recording...

    The Start Flight Recording window opens.

  3. Click Browse to find a suitable location and file name to save the recording.
  4. Select either Time fixed recording (profiling recording), or Continuous recording. For continuous recordings, you can specify the maximum size or maximum age of events you want to save.
  5. Select the flight recording template in the Event settings drop-down list. Templates define the events that you want to record. To create your own templates, click Template Manager. However, for most use cases, select either the Continuous template (for very low overhead recordings) or the Profiling template (for more data and slightly more overhead).
  6. Click Finish to start the recording or click Next to modify the event options defined in the selected template.
  7. Modify the event options for the flight recording. The default settings provide a good balance between data and performance. You can change these settings based on your requirement.

    For example:

    • The Threshold value is the minimum duration an event must last to be recorded. By default, synchronization events above 10 ms are collected. This means that if a thread waits for a lock for more than 10 ms, an event is saved. You can lower this value to get more detailed data for short contentions.
    • The Thread Dump setting gives you an option to perform periodic thread dumps. These are normal textual thread dumps.
  8. Click Finish to start the recording or click Next to modify the event details defined in the selected template.
  9. Modify the event details for the selected flight recording template. Event details define whether the event should be included in the recording. For some events, you can also define whether a stack trace should be attached to the event, specify the duration threshold (for duration events) and a request period (for requestable events).
  10. Click Back if you want to modify any of the settings set in the previous steps or click Finish to start the recording.
    The new flight recording appears in the Progress View.

    Note:

    Expand the node in the JVM Browser to view the recordings that are running. Right-click any of the recordings to dump, dump whole, dump last part, edit, stop, or close the recording. Stopping a profiling recording will still produce a recording file and closing a profiling recording will discard the recording.

Note:

You can set up JMC to automatically start a flight recording if a condition is met using the Triggers tab in the JMX console. For more information, see Use Triggers for Automatic Flight Recordings.
Use Triggers for Automatic Flight Recordings

The Triggers tab allows you to define and activate rules that trigger events when a certain condition is met. For example, you can set up JDK Mission Control to automatically start a flight recording if a condition is met. This is useful for tracking specific JVM runtime issues.

This is done from the JMX console.
  1. To start the JMX console, find your application in the JVM Browser, right-click it, and select Start JMX Console.
  2. Click the Triggers tab at the bottom of the screen.
  3. Click Add. You can choose any MBean in the application, including your own application-specific ones.
    The Add New Rule dialog opens.
  4. Select an attribute for which the rule should trigger and click Next. For example, select java.lang > OperatingSystem > ProcessCpuLoad.
  5. Set the condition on which the rule should trigger and click Next. For example, set a value for the Maximum trigger value, Sustained period, and Limit period.

    Note:

    You can either select the Trigger when condition is met or Trigger when recovering from condition check box.
  6. Select what action you would like your rule to perform when triggered and click Next. For example, choose Start Time Limited Flight Recording and browse the file destination and recording time. Select the Open automatically checkbox, if you wish to open the flight recording automatically when it is triggered.
  7. Select constraints for your rule and click Next. For example, select the particular dates, days of the week, or time of day when the rule should be active.
  8. Enter a name for your rule and click Finish.
    The rule is added to the My Rules list.
When you select your rule from the Trigger Rules list, the Rule Details pane displays its components in the following tabs. You can edit the conditions, attributes, and constraints if you wish:
  • Condition
  • Action
  • Constraint
Use Startup Flags at the Command Line to Produce a Flight Recording

Use startup flags to start recording when the application is started. If the application is already running, use the jcmd utility to start recording.

Use the following methods to generate a flight recording (a programmatic alternative using the Flight Recorder API is sketched after this list):
  • Generate a profiling recording when an application is started.

    You can configure a time fixed recording at the start of the application using the -XX:StartFlightRecording option. The following example shows how to run the MyApp application and start a 60-second recording 20 seconds after starting the JVM, which will be saved to a file named myrecording.jfr:

    java -XX:StartFlightRecording=delay=20s,duration=60s,name=myrecording,filename=myrecording.jfr,settings=profile MyApp

    The settings parameter takes the name of a template. Include the path if the template is not in the java-home/lib/jfr directory, which is the location of the default templates. The standard templates are: profile, which gathers more data and is primarily for profiling recordings, and default, which is a low overhead setting made primarily for continuous recordings.

    For a complete description of Flight Recorder flags for the java command, see Advanced Runtime Options for Java in the Java Platform, Standard Edition Tools Reference.

  • Generate a continuous recording when an application is started.

    You can start a continuous recording from the command line using the -XX:StartFlightRecording option. The -XX:FlightRecorderOptions option provides additional settings for managing the recording. These flags start a continuous recording that can later be dumped if needed. The following example shows how to run the MyApp application with a continuous recording that saves 6 hours of data to disk. The temporary data will be saved to the /tmp folder.

    java -XX:StartFlightRecording=disk=true,maxage=6h,settings=default -XX:FlightRecorderOptions=repository=/tmp MyApp

    Note:

    When you actually dump the recording, you specify a new location for the dumped file, so the files in the repository are only temporary.
  • Generate a recording using diagnostic commands.

    For a running application, you can generate recordings by using Java command-line diagnostic commands. The simplest way to execute a diagnostic command is to use the jcmd tool located in the java-home/bin directory. For more details, see The jcmd Utility.

    The following example shows how to start a recording for the MyApp application with the process ID 5361. Thirty minutes of data are recorded and written to /usr/recordings/myapp-recording1.jfr.

    jcmd 5361 JFR.start duration=30m filename=/usr/recordings/myapp-recording1.jfr
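
Besides startup flags and jcmd, a recording can also be produced programmatically with the jdk.jfr API. The following is a minimal sketch, assuming the bundled profile template; the output file name is hypothetical:

    import java.nio.file.Path;
    import java.time.Duration;
    import jdk.jfr.Configuration;
    import jdk.jfr.Recording;

    public class ProgrammaticRecording {
        public static void main(String[] args) throws Exception {
            // Load the bundled "profile" template (the same settings as settings=profile above).
            Configuration config = Configuration.getConfiguration("profile");
            try (Recording recording = new Recording(config)) {
                recording.setMaxAge(Duration.ofHours(6));        // keep at most 6 hours of data
                recording.start();
                // ... run the workload to be profiled ...
                recording.stop();
                recording.dump(Path.of("myapp-recording.jfr"));  // hypothetical file name
            }
        }
    }

Running such a class on a JFR-enabled JDK produces a .jfr file that can be opened in JMC like any other recording.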

Analyze a Flight Recording

The following sections describe different ways to analyze a flight recording:

Analyze a Flight Recording Using JMC

Once the flight recording file opens in JMC, you can look at a number of different areas, such as code, memory, threads, locks, and I/O, and analyze various aspects of the runtime behavior of your application.

The recording file is automatically opened in JMC when a timed recording finishes or when a dump of a running recording is created. You can also open any recording file by double-clicking it or by opening it through the File menu. The flight recording opens in the Automated Analysis Results page. This page helps you to diagnose issues more quickly. For example, if you are tuning garbage collection, or tracking down memory allocation issues, then you can use the memory view to get a detailed view of individual garbage collection events, allocation sites, garbage collection pauses, and so on. You can visualize the latency profile of your application by looking at the I/O and Threads views, and even drill down into a view representing individual events in the recording.

View Automated Analysis Results Page

The Flight Recorder extracts and analyzes the data from the recordings and then displays color-coded report logs on the Automated Analysis Results page.

By default, results with yellow and red scores are displayed to draw your attention to potential problems. If you want to view all results in the report, click the Show OK Results button (a tick mark) on the top-right side of the page. Similarly, to view the results as a table, click the Table button.

The results are mainly divided into problems related to the Java application, JVM internals, and the environment.

Clicking a heading in the report, for example, Java Application, displays the corresponding page.

Note:

You can select the corresponding entry in the Outline view to navigate between the pages of the automated analysis.
Analyze the Java Application

The Java Application dashboard displays the overall health of the Java application.

Concentrate on the parameters with yellow and red scores. The dashboard provides exact references to the problematic situations. Navigate to the specific page to analyze the data and fix the issue.

Threads

The Threads page provides a snapshot of all the threads that belong to the Java application. It reveals information about an application’s thread activity that can help you diagnose problems and optimize application and JVM performance.

Threads are represented in a table, and each row has an associated graph. Graphs can help you to identify problematic execution patterns. The state of each thread is presented as a stack trace, which provides contextual information so that you can instantly view the problem area. For example, you can easily locate the occurrence of a deadlock.

Lock Instances

The Lock Instances page provides further details on threads, specifying the lock information, that is, whether a thread is trying to take a lock or is waiting for a notification on a lock. If a thread has taken any lock, the details are shown in the stack trace.

Memory

One way to detect problems with application performance is to see how it uses memory during runtime.

In the Memory page, the graph represents the heap memory usage of the Java application. Each cycle consists of a Java heap growth phase that represents the period of heap memory allocations, followed by a short drop that represents garbage collection, and then the cycle starts over. The important inference from the graph is that most allocated objects are short-lived, because the garbage collector pushes the heap back down to its starting point in each cycle.

Select the Garbage Collection check box to see the garbage collection pause time in the graph. It indicates that the garbage collector stopped the application during the pause time to do its work. Long pause times lead to poor application performance, which needs to be addressed.

Method Profiling

The Method Profiling page enables you to see how often a specific method is run and how long it takes to run. Bottlenecks are identified by finding the methods that take a lot of time to execute.

As profiling generates a lot of data, it is not turned on by default. Start a new recording and select Profiling - on server in the Event settings drop-down menu. Do a time fixed recording for a short duration. JFR dumps the recording to the file name specified. Open the Method Profiling page in JMC to see the top allocations. Top packages and classes are displayed. Verify the details in the stack trace. Inspect the code to verify whether the memory allocation is concentrated on a particular object. JFR points to the particular line number where the problem occurs.

JVM Internals

The JVM Internals page provides detailed information about the JVM and its behavior.

One of the most important parameters to observe is Garbage Collections. Garbage collection is a process of deleting unused objects so that the space can be used for allocation of new objects. The Garbage Collections page helps you to better understand the system behavior and garbage collection performance during runtime.

The graphs show heap usage compared to pause times and how they vary during the specified period. The page also lists all the garbage collection events that occurred during the recording. Observe the longest pause times against the heap usage. Long pause times indicate that garbage collections are taking longer during application processing, which implies that the collections are freeing less space on the heap. This situation can indicate a memory leak.

For effective memory management, see the Compilations page, which provides details on code compilation along with duration. In large applications, you may have many compiled methods, and memory can be exhausted, resulting in performance issues.

Environment

The Environment page provides information about the environment in which the recording was made. It helps you understand the CPU usage, memory, and operating system being used.

See the Processes page to see the concurrent processes that are running and their competing CPU usage. Application performance will be affected if many processes use the CPU and other system resources.

Check the Event Browser page to see the statistics of all the event types. It helps you to focus on the bottlenecks and take appropriate action to improve application performance.

You can create custom pages using the Event Browser page. Select the required event type from the Event Type Tree and click the Create a new page using the selected event type button in the top right corner of the page. The custom page is listed as a new event page below the Event Browser page.

Analyze a Flight Recording Using the jfr tool or JFR APIs

To access the information in a recording from Flight Recorder, use the jfr tool to print event information, or use the Flight Recorder API to programmatically process the data.

Flight Recorder provides the following methods for reviewing the information that was recorded:

  • jfr tool - Use this command-line tool to print event data from a recording. The tool is located in the java-home/bin directory. For details about this tool, see The jfr Command in the Java Platform, Standard Edition Tools Reference.
  • Flight Recorder API - Use the jdk.jfr.consumer API to extract and format the information in a recording. For more information, see the Flight Recorder API Programmer’s Guide. A minimal sketch follows this list.
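
The following is a minimal sketch of the second approach, reading a recording file with the jdk.jfr.consumer API; the choice of the jdk.GCPhasePause event type and the command-line argument handling are only an example:

    import java.nio.file.Path;
    import jdk.jfr.consumer.RecordedEvent;
    import jdk.jfr.consumer.RecordingFile;

    public class PrintGCPauses {
        public static void main(String[] args) throws Exception {
            // args[0] is the path to a .jfr file, for example one produced by jcmd JFR.dump.
            try (RecordingFile recording = new RecordingFile(Path.of(args[0]))) {
                while (recording.hasMoreEvents()) {
                    RecordedEvent event = recording.readEvent();
                    if ("jdk.GCPhasePause".equals(event.getEventType().getName())) {
                        System.out.println(event.getStartTime() + "  "
                                + event.getDuration().toMillis() + " ms  "
                                + event.getValue("name"));
                    }
                }
            }
        }
    }

The jfr tool can produce similar output without writing code, for example with jfr print --events jdk.GCPhasePause recording.jfr.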

The events in a recording can be used to investigate the following areas:

  • General information
    • Number of events recorded at each time stamp

    • Maximum heap usage

    • CPU usage over time, application's CPU usage, and total CPU usage

      Watch for CPU usage spiking near 100 percent, CPU usage that is unexpectedly low, or overly long garbage collection pauses.

    • GC pause time

    • JVM information and system properties set

  • Memory
    • Memory usage over time

      Typically, temporary objects are allocated all the time. When a condition is met, a garbage collection (GC) is triggered and all of the objects that are no longer used are removed. Therefore, the heap usage increases steadily until a GC is triggered, then it drops suddenly. Watch for a heap size that increases steadily over time, which could indicate a memory leak.

    • Information about garbage collections, including the time spent doing them

    • Memory allocations made

      The more temporary objects the application allocates, the more garbage collection the application must perform. Reviewing memory allocations helps you find where most allocations occur and reduce the GC pressure in your application.

    • Classes that have the most live set

      Watch how each object type increases in size during a flight recording. A specific object type that increases a lot in size indicates a memory leak; however, a small variance is normal. In particular, investigate the top growers among non-standard Java classes.

  • Code
    • Packages and classes that used the most execution time

      Watch where methods are being called from to identify bottlenecks in your application.

    • Exceptions thrown

    • Methods compiled over time as the application was running

    • Number of loaded classes, actual loaded classes and unloaded classes over time

  • Threads
    • CPU usage and the number of threads over time

    • Threads that do most of the code execution

    • Objects that are the most waited for due to synchronization

  • I/O
    • Information about file reads, file writes, socket reads, and socket writes

  • System
    • Information about the CPU, memory and OS of the machine running the application

    • Environment variables and any other processes running at the same time as the JVM

  • Events
    • All of the events in the recording

The jcmd Utility

The jcmd utility is used to send diagnostic command requests to the JVM. These requests are useful for controlling flight recordings, troubleshooting, and diagnosing JVM and Java applications.

jcmd must be used on the same machine where the JVM is running, and must have the same effective user and group identifiers as those used to launch the JVM.

A special command jcmd <process id/main class> PerfCounter.print prints all performance counters in the process.

The command jcmd <process id/main class> <command> [options] sends the command to the JVM.

The following example shows diagnostic command requests to the JVM using the jcmd utility.

> jcmd
5485 jdk.jcmd/sun.tools.jcmd.JCmd
2125 MyProgram
 
> jcmd MyProgram (or "jcmd 2125")
2125:
The following commands are available:
Compiler.CodeHeap_Analytics
Compiler.codecache
Compiler.codelist
Compiler.directives_add
Compiler.directives_clear
Compiler.directives_print
Compiler.directives_remove
Compiler.queue
GC.class_histogram
GC.class_stats
GC.finalizer_info
GC.heap_dump
GC.heap_info
GC.run
GC.run_finalization
JFR.check
JFR.configure
JFR.dump
JFR.start
JFR.stop
JVMTI.agent_load
JVMTI.data_dump
ManagementAgent.start
ManagementAgent.start_local
ManagementAgent.status
ManagementAgent.stop
Thread.print
VM.class_hierarchy
VM.classloader_stats
VM.classloaders
VM.command_line
VM.dynlibs
VM.events
VM.flags
VM.info
VM.log
VM.metaspace
VM.native_memory
VM.print_touched_methods
VM.set_flag
VM.stringtable
VM.symboltable
VM.system_properties
VM.systemdictionary
VM.uptime
VM.version
help

For more information about a specific command use 'help <command>'.

> jcmd MyProgram help Thread.print
2125:
Thread.print
Print all threads with stacktraces.
 
Impact: Medium: Depends on the number of threads.
 
Permission: java.lang.management.ManagementPermission(monitor)
 
Syntax : Thread.print [options]
 
Options: (options must be specified using the <key> or <key>=<value> syntax)
        -l : [optional] print java.util.concurrent locks (BOOLEAN, false)
        -e : [optional] print extended thread information (BOOLEAN, false)
 
> jcmd MyProgram Thread.print
2125:
2019-11-16 16:06:09
Full thread dump Java HotSpot(TM) 64-Bit Server VM (11.0.5+10-LTS mixed mode):
...

The following sections describe some useful commands and troubleshooting techniques with the jcmd utility:

Useful Commands for the jcmd Utility

The available diagnostic commands may differ between versions of the HotSpot VM; therefore, using jcmd <process id/main class> help is the best way to see all available options.

The following are some of the most useful commands in the jcmd tool. Remember you can always use jcmd <process id/main class> help <command> to get any additional options to these commands:

  • Print full HotSpot and JDK version ID.
    jcmd <process id/main class> VM.version
  • Print all the system properties set for a VM.

    There can be several hundred lines of information displayed.

    jcmd <process id/main class> VM.system_properties

  • Print all the flags used for a VM.

    Even if you have provided no flags, some of the default values will be printed, for example initial and maximum heap size.

    jcmd <process id/main class> VM.flags

  • Print the uptime in seconds.

    jcmd <process id/main class> VM.uptime

  • Create a class histogram.

    The results can be rather verbose, so you can redirect the output to a file. Both internal and application-specific classes are included in the list. Classes that take the most memory are listed at the top, in descending order.

    jcmd <process id/main class> GC.class_histogram

  • Create a heap dump.

    jcmd <process id/main class> GC.heap_dump filename=Myheapdump

    This is the same as using jmap -dump:file=<file> <pid>, but jcmd is the recommended tool to use.

  • Create a heap histogram.

    jcmd <process id/main class> GC.class_histogram filename=Myheaphistogram

    This is the same as using jmap -histo <pid>, but jcmd is the recommended tool to use.

  • Print all threads with stack traces.

    jcmd <process id/main class> Thread.print

Troubleshoot with the jcmd Utility

Use the jcmd utility to troubleshoot.

The jcmd utility provides the following troubleshooting options:

  • Start a recording.

    For example, to start a 2-minute recording (after a 20-second delay) on the running Java process with the identifier 7060 and save it to C:\TEMP\myrecording.jfr, use the following:

    jcmd 7060 JFR.start name=MyRecording settings=profile delay=20s duration=2m filename=C:\TEMP\myrecording.jfr

  • Check a recording.

    The JFR.check diagnostic command checks a running recording. For example:

    jcmd 7060 JFR.check

  • Stop a recording.

    The JFR.stop diagnostic command stops a running recording and has the option to discard the recording data. For example:

    jcmd 7060 JFR.stop

  • Dump a recording.

    The JFR.dump diagnostic command stops a running recording and has the option to dump recordings to a file. For example:

    jcmd 7060 JFR.dump name=MyRecording filename=C:\TEMP\myrecording.jfr

  • Create a heap dump.

    The preferred way to create a heap dump is

    jcmd <pid> GC.heap_dump filename=Myheapdump

  • Create a heap histogram.

    The preferred way to create a heap histogram is

    jcmd <pid> GC.class_histogram filename=Myheaphistogram

Native Memory Tracking

Native Memory Tracking (NMT) is a Java HotSpot VM feature that tracks internal memory usage for the Java HotSpot VM.

Since NMT doesn't track memory allocations by non-JVM code, you may have to use tools supported by the operating system to detect memory leaks in native code.

The following sections describe how to monitor VM internal memory allocations and diagnose VM memory leaks.

Use NMT to Detect a Memory Leak

Procedure to use Native Memory Tracking to detect memory leaks.

Follow these steps to detect a memory leak:

  1. Start the JVM with summary or detail tracking using the command line option: -XX:NativeMemoryTracking=summary or -XX:NativeMemoryTracking=detail.
  2. Establish an early baseline. Use the NMT baseline feature to get a baseline to compare against during development and maintenance by running: jcmd <pid> VM.native_memory baseline.
  3. Monitor memory changes using: jcmd <pid> VM.native_memory detail.diff.
  4. If the application leaks a small amount of memory, then it may take a while for the leak to show up.

How to Monitor VM Internal Memory

Native Memory Tracking can be set up to monitor memory and ensure that an application does not start to use increasing amounts of memory during development or maintenance.

See Table 2-1 for details about NMT memory categories.

The following sections describe how to get summary or detail data for NMT and describes how to interpret the sample output.

  • Interpret sample output: From the following sample output, you will see reserved and committed memory. Note that only committed memory is actually used. For example, if you run with -Xms100m -Xmx1000m, then the JVM will reserve 1000 MB for the Java heap. Because the initial heap size is only 100 MB, only 100 MB will be committed to begin with. For a 64-bit machine where address space is almost unlimited, there is no problem if a JVM reserves a lot of memory. The problem arises if more and more memory gets committed, which may lead to swapping or native out of memory (OOM) situations.

    An arena is a chunk of memory allocated using malloc. Memory is freed from these chunks in bulk, when exiting a scope or leaving an area of code. These chunks can be reused in other subsystems to hold temporary memory, for example, per-thread allocations. An arena malloc policy ensures no memory leakage, so arenas are tracked as a whole rather than as individual objects. Some initial memory cannot be tracked.

    Enabling NMT will result in a 5-10 percent JVM performance drop, and memory usage for NMT adds 2 machine words to all malloc memory as a malloc header. NMT memory usage is also tracked by NMT.

    >jcmd 17320 VM.native_memory
    Native Memory Tracking:
    
    Total: reserved=5699702KB, committed=351098KB
    -                 Java Heap (reserved=4153344KB, committed=260096KB)
                                (mmap: reserved=4153344KB, committed=260096KB)
    
    -                     Class (reserved=1069839KB, committed=22543KB)
                                (classes #3554)
                                (  instance classes #3294, array classes #260)
                                (malloc=783KB #7965)
                                (mmap: reserved=1069056KB, committed=21760KB)
                                (  Metadata:   )
                                (    reserved=20480KB, committed=18944KB)
                                (    used=18267KB)
                                (    free=677KB)
                                (    waste=0KB =0.00%)
                                (  Class space:)
                                (    reserved=1048576KB, committed=2816KB)
                                (    used=2454KB)
                                (    free=362KB)
                                (    waste=0KB =0.00%)
    
    -                    Thread (reserved=24685KB, committed=1205KB)
                                (thread #24)
                                (stack: reserved=24576KB, committed=1096KB)
                                (malloc=78KB #132)
                                (arena=30KB #46)
    
    -                      Code (reserved=248022KB, committed=7890KB)
                                (malloc=278KB #1887)
                                (mmap: reserved=247744KB, committed=7612KB)
    
    -                        GC (reserved=197237KB, committed=52789KB)
                                (malloc=9717KB #2877)
                                (mmap: reserved=187520KB, committed=43072KB)
    
    -                  Compiler (reserved=148KB, committed=148KB)
                                (malloc=19KB #95)
                                (arena=129KB #5)
    
    -                  Internal (reserved=735KB, committed=735KB)
                                (malloc=663KB #1914)
                                (mmap: reserved=72KB, committed=72KB)
    
    -                     Other (reserved=48KB, committed=48KB)
                                (malloc=48KB #4)
    
    -                    Symbol (reserved=4835KB, committed=4835KB)
                                (malloc=2749KB #17135)
                                (arena=2086KB #1)
    
    -    Native Memory Tracking (reserved=539KB, committed=539KB)
                                (malloc=8KB #109)
                                (tracking overhead=530KB)
    
    -               Arena Chunk (reserved=187KB, committed=187KB)
                                (malloc=187KB)
    
    -                   Logging (reserved=4KB, committed=4KB)
                                (malloc=4KB #179)
    
    -                 Arguments (reserved=18KB, committed=18KB)
                                (malloc=18KB #467)
    
    -                    Module (reserved=62KB, committed=62KB)
                                (malloc=62KB #1060)
  • Get detail data: To get a more detailed view of native memory usage, start the JVM with command line option: -XX:NativeMemoryTracking=detail. This will track exactly what methods allocate the most memory. Enabling NMT will result in 5-10 percent JVM performance drop and memory usage for NMT adds 2 words to all malloc memory as malloc header. NMT memory usage is also tracked by NMT.

    The following example shows a sample output for virtual memory for track level set to detail. One way to get this sample output is to run: jcmd <pid> VM.native_memory detail.

    Virtual memory map:
    
    [0x0000000702800000 - 0x0000000800000000] reserved 4153344KB for Java Heap from
        [0x00007ffdca6b217d]
        [0x00007ffdca6b19a3]
        [0x00007ffdca6b0d63]
        [0x00007ffdca68e7ae]
    
            [0x0000000702800000 - 0x0000000712600000] committed 260096KB from
                [0x00007ffdca254ecc]
                [0x00007ffdca254d52]
                [0x00007ffdca25a5c6]
                [0x00007ffdca2a66bf]
    
    [0x0000000800000000 - 0x0000000840000000] reserved 1048576KB for Class from
        [0x00007ffdca6b154a]
        [0x00007ffdca6b0fb4]
        [0x00007ffdca51d2f9]
        [0x00007ffdca51e4d2]
    
            [0x0000000800000000 - 0x00000008003a0000] committed 3712KB from
                [0x00007ffdca6b10c4]
                [0x00007ffdca6b1250]
                [0x00007ffdca6b0087]
                [0x00007ffdca6af852]
    
    [0x000000bae6d00000 - 0x000000bae6e00000] reserved 1024KB for Thread Stack from
        [0x00007ffdca679569]
        [0x00007ffdca5751c2]
        [0x00007ffe13ed1ffa]
        [0x00007ffe17d17974]
    
            [0x000000bae6d00000 - 0x000000bae6d04000] committed 16KB from
                [0x00007ffdca67354e]
                [0x00007ffdca679571]
                [0x00007ffdca5751c2]
                [0x00007ffe13ed1ffa]
    
            [0x000000bae6df2000 - 0x000000bae6e00000] committed 56KB
    
    [0x000000bae6f00000 - 0x000000bae7000000] reserved 1024KB for Thread Stack from
        [0x00007ffdca679569]
        [0x00007ffdca5751c2]
        [0x00007ffe13ed1ffa]
        [0x00007ffe17d17974]
    
            [0x000000bae6f00000 - 0x000000bae6f04000] committed 16KB from
                [0x00007ffdca67354e]
                [0x00007ffdca679571]
                [0x00007ffdca5751c2]
                [0x00007ffe13ed1ffa]
    
            [0x000000bae6ff3000 - 0x000000bae7000000] committed 52KB
    
       ...
    [0x000001d4d3480000 - 0x000001d4d3482000] reserved and committed 8KB for Internal from
        [0x00007ffdca5df383]
        [0x00007ffdca6737a9]
        [0x00007ffdca322e1d]
        [0x00007ffdca3251e1]
    
            [0x000001d4d3480000 - 0x000001d4d3482000] committed 8KB from
                [0x00007ffdca5df39b]
                [0x00007ffdca6737a9]
                [0x00007ffdca322e1d]
                [0x00007ffdca3251e1]
    
    [0x000001d4d4d50000 - 0x000001d4d4d60000] reserved and committed 64KB for Internal from
        [0x00007ffdca5a0719]
        [0x00007ffdca59f627]
        [0x00007ffdca59f03e]
        [0x00007ffdca2b3632]
    
    [0x000001d4d4d60000 - 0x000001d4d4d70000] reserved 64KB for Code from
        [0x00007ffdca6b159d]
        [0x00007ffdca6b0f40]
        [0x00007ffdca29a2b2]
        [0x00007ffdca1358e9]
    
            [0x000001d4d4d60000 - 0x000001d4d4d65000] committed 20KB from
                [0x00007ffdca6b10c4]
                [0x00007ffdca6b1250]
                [0x00007ffdca6b1720]
                [0x00007ffdca29a2ee]
    
    [0x000001d4d51d0000 - 0x000001d4d52c0000] reserved 960KB for Code from
        [0x00007ffdca6b159d]
        [0x00007ffdca6b0f40]
        [0x00007ffdca29a2b2]
        [0x00007ffdca1358e9]
    
            [0x000001d4d51d0000 - 0x000001d4d51d5000] committed 20KB from
                [0x00007ffdca6b10c4]
                [0x00007ffdca6b1250]
                [0x00007ffdca6b1720]
                [0x00007ffdca29a2ee]
    
            [0x000001d4d51d5000 - 0x000001d4d51e2000] committed 52KB from
                [0x00007ffdca6b10c4]
                [0x00007ffdca6b1250]
                [0x00007ffdca299df8]
                [0x00007ffdca135acf]
       ...
    
    [0x000001d4d57f0000 - 0x000001d4d5880000] reserved and committed 576KB for GC from
        [0x00007ffdca24258b]
        [0x00007ffdca2654bb]
        [0x00007ffdca232bcd]
        [0x00007ffdca68d437]
    
            [0x000001d4d57f0000 - 0x000001d4d5880000] committed 576KB from
                [0x00007ffdca2425d1]
                [0x00007ffdca2654bb]
                [0x00007ffdca232bcd]
                [0x00007ffdca68d437]
       ...
    [0x000001d4f8930000 - 0x000001d4f9130000] reserved and committed 8192KB for Class from
        [0x00007ffdca6b159d]
        [0x00007ffdca6b0fb4]
        [0x00007ffdca6afee3]
        [0x00007ffdca6af7b5]
    
            [0x000001d4f8930000 - 0x000001d4f9130000] committed 8192KB from
                [0x00007ffdca6b10c4]
                [0x00007ffdca6b1250]
                [0x00007ffdca6b0087]
                [0x00007ffdca6af852]
    
    [0x000001d4fa0d0000 - 0x000001d4fa2d0000] reserved and committed 2048KB for Class from
        [0x00007ffdca6b159d]
        [0x00007ffdca6b0fb4]
        [0x00007ffdca6afee3]
        [0x00007ffdca6af7b5]
    
            [0x000001d4fa0d0000 - 0x000001d4fa2d0000] committed 2048KB from
                [0x00007ffdca6b10c4]
                [0x00007ffdca6b1250]
                [0x00007ffdca6b0087]
                [0x00007ffdca6af852]
    
       ...
    
  • Get diff from NMT baseline: For both summary and detail level tracking, you can set a baseline after the application is up and running. Do this by running jcmd <pid> VM.native_memory baseline after the application warms up. Then, you can run jcmd <pid> VM.native_memory summary.diff or jcmd <pid> VM.native_memory detail.diff.

    The following example shows sample output for the summary difference in native memory usage since the baseline was set and is a great way to find memory leaks.

    >jcmd 17320 VM.native_memory summary.diff
    17320:
    
    Total: reserved=5712754KB +8236KB, committed=370550KB +12940KB
    
    -                 Java Heap (reserved=4153344KB, committed=260096KB)
                                (mmap: reserved=4153344KB, committed=260096KB)
    
    -                     Class (reserved=1078291KB +6357KB, committed=32915KB +7381KB)
                                (classes #4868 +958)
                                (  instance classes #4528 +901, array classes #340 +57)
                                (malloc=1043KB +213KB #12345 +3198)
                                (mmap: reserved=1077248KB +6144KB, committed=31872KB +7168KB)
                                (  Metadata:   )
                                (    reserved=28672KB +6144KB, committed=27904KB +6400KB)
                                (    used=27206KB +6181KB)
                                (    free=698KB +219KB)
                                (    waste=0KB =0.00%)
                                (  Class space:)
                                (    reserved=1048576KB, committed=3968KB +768KB)
                                (    used=3395KB +643KB)
                                (    free=573KB +125KB)
                                (    waste=0KB =0.00%)
    
    -                    Thread (reserved=26745KB +2KB, committed=1421KB +6KB)
                                (thread #26)
                                (stack: reserved=26624KB, committed=1300KB +4KB)
                                (malloc=85KB #142)
                                (arena=35KB +2 #50)
    
    -                      Code (reserved=248533KB +323KB, committed=14725KB +3999KB)
                                (malloc=789KB +323KB #4505 +1596)
                                (mmap: reserved=247744KB, committed=13936KB +3676KB)
    
    -                        GC (reserved=197345KB +70KB, committed=52897KB +70KB)
                                (malloc=9825KB +70KB #4868 +1395)
                                (mmap: reserved=187520KB, committed=43072KB)
    
    -                  Compiler (reserved=153KB +4KB, committed=153KB +4KB)
                                (malloc=27KB +6KB #312 +154)
                                (arena=126KB -2 #5)
    
    -                  Internal (reserved=785KB +27KB, committed=785KB +27KB)
                                (malloc=713KB +27KB #2213 +214)
                                (mmap: reserved=72KB, committed=72KB)
    
    -                     Other (reserved=49KB, committed=49KB)
                                (malloc=49KB #4)
    
    -                    Symbol (reserved=6268KB +1082KB, committed=6268KB +1082KB)
                                (malloc=3926KB +1018KB #34608 +16640)
                                (arena=2342KB +64 #1)
    
    -    Native Memory Tracking (reserved=963KB +364KB, committed=963KB +364KB)
                                (malloc=9KB +1KB #123 +8)
                                (tracking overhead=953KB +363KB)
    
    -               Arena Chunk (reserved=187KB, committed=187KB)
                                (malloc=187KB)
    
    -                   Logging (reserved=4KB, committed=4KB)
                                (malloc=4KB #179)
    
    -                 Arguments (reserved=18KB, committed=18KB)
                                (malloc=18KB #467)
    
    -                    Module (reserved=71KB +7KB, committed=71KB +7KB)
                                (malloc=71KB +7KB #1119 +53)

    The following example is a sample output that shows the detail difference in native memory usage since the baseline and is a great way to find memory leaks.

    
       ...
    [0x00007ffdca51ce00]
    [0x00007ffdca127ca3]
    [0x00007ffdca51d08b]
    [0x00007ffdca195288]
                                 (malloc=81KB type=Class +18KB #869 +194)
    
    [0x00007ffdca169f01]
    [0x00007ffdca16480a]
    [0x00007ffdca164349]
    [0x00007ffdca16444d]
                                 (malloc=3KB type=Compiler +1KB #27 +8)
    
    [0x00007ffdca5c160a]
    [0x000001d4ddd73b66]
                                 (malloc=2KB type=GC +2KB #1 +1)
    
    [0x00007ffdca5c160a]
    [0x00007ffdca22d16b]
    [0x00007ffdca254a62]
    [0x00007ffdca264b9e]
                                 (malloc=6KB type=GC +6KB #3 +3)
       ...
    [0x00007ffdca2b860a]
    [0x00007ffdca166d7c]
    [0x00007ffdca3237bf]
    [0x00007ffdca313331]
                                 (malloc=16KB type=Class +1KB #61 +6)
    
    [0x00007ffdca67170c]
    [0x00007ffdca6712f3]
    [0x00007ffdca369ed1]
    [0x000001d4ddd6f0b7]
                                 (malloc=3KB type=Internal +1KB #9 +3)
    
    [0x00007ffdca60a90c]
    [0x00007ffdca60ca3f]
    [0x00007ffdca60cd29]
    [0x00007ffdca2d78f3]
                                 (malloc=16KB type=Symbol +6KB #1030 +399)
    
    [0x00007ffdca60a90c]
    [0x00007ffdca60ca3f]
    [0x00007ffdca60cc2e]
    [0x00007ffdca19a631]
                                 (malloc=116KB type=Symbol +23KB #7411 +1442)
       ...
    [0x00007ffdca29860f]
    [0x00007ffdca204dc4]
    [0x00007ffdca65070a]
    [0x00007ffdca64fd17]
                                 (malloc=11KB type=Class +3KB #357 +82)
    
    [0x00007ffdca29860f]
    [0x00007ffdca204dc4]
    [0x00007ffdca65070a]
    [0x00007ffdca64aefb]
                                 (malloc=105KB type=Class +23KB #3371 +749)
    
    [0x00007ffdca50c00f]
    [0x00007ffdca50be9d]
    [0x00007ffdca552fc9]
    [0x00007ffdca203aa0]
                                 (malloc=1KB type=Native Memory Tracking +1KB #20 +20)
    
    [0x00007ffdca53dd17]
    [0x00007ffdca53f52a]
    [0x00007ffdca350c54]
    [0x000001d4ddd6f0b7]
       ...

NMT Memory Categories

List of memory categories used by Native Memory Tracking (NMT).

Table 2-1 describes the native memory categories used by NMT. These categories may change between releases.

Table 2-1 Native Memory Tracking Memory Categories

Category                  Description
Java Heap                 The heap where your objects live
Class                     Class metadata
Thread                    Memory used by threads, including thread data structures, resource area, handle area, and so on
Code                      Generated code
GC                        Data used by the GC, such as the card table
Compiler                  Memory used by the compiler when generating code
Internal                  Memory that does not fit the previous categories, such as the memory used by the command-line parser, JVMTI, properties, and so on
Other                     Memory not covered by another category
Symbol                    Memory for symbols
Native Memory Tracking    Memory used by NMT itself
Arena Chunk               Memory used by chunks in the arena chunk pool
Logging                   Memory used by logging
Arguments                 Memory for arguments
Module                    Memory used by modules

JConsole

Another useful tool included in the JDK download is the JConsole monitoring tool. This tool is compliant with JMX. The tool uses the built-in JMX instrumentation in the JVM to provide information about the performance and resource consumption of running applications.

Although the tool is included in the JDK download, it can also be used to monitor and manage applications deployed with the JRE.

The JConsole tool can attach to any Java application in order to display useful information such as thread usage, memory consumption, and details about class loading, runtime compilation, and the operating system.

This output helps with the high-level diagnosis of problems such as memory leaks, excessive class loading, and running threads. It can also be useful for tuning and heap sizing.

In addition to monitoring, JConsole can be used to dynamically change several parameters in the running system. For example, the setting of the -verbose:gc option can be changed so that the garbage collection trace output can be dynamically enabled or disabled for a running application.

The following sections describe troubleshooting techniques with the JConsole tool.

Troubleshoot with the JConsole Tool

Use the JConsole tool to monitor data.

The following list provides an idea of the data that can be monitored using the JConsole tool. Each heading corresponds to a tab pane in the tool.

  • Overview

    This pane displays graphs that show the heap memory usage, number of threads, number of classes, and CPU usage over time. This overview enables you to visualize the activity of several resources at once.

  • Memory

    • For a selected memory area (heap, non-heap, various memory pools):

      • Graph showing memory usage over time

      • Current memory size

      • Amount of committed memory

      • Maximum memory size

    • Garbage collector information, including the number of collections performed, and the total time spent performing garbage collection

    • Graph showing the percentage of heap and non-heap memory currently used

    In addition, on this pane you can request garbage collection to be performed.

  • Threads

    • Graph showing thread usage over time.

    • Live threads: Current number of live threads.

    • Peak: Highest number of live threads since the JVM started.

    • For a selected thread, the name, state, and stack trace, as well as, for a blocked thread, the synchronizer that the thread is waiting to acquire, and the thread that owns the lock.

    • The Deadlock Detection button sends a request to the target application to perform deadlock detection and displays each deadlock cycle in a separate tab.

  • Classes

    • Graph showing the number of loaded classes over time

    • Number of classes currently loaded into memory

    • Total number of classes loaded into memory since the JVM started, including those subsequently unloaded

    • Total number of classes unloaded from memory since the JVM started

  • VM Summary

    • General information, such as the JConsole connection data, uptime for the JVM, CPU time consumed by the JVM, compiler name, total compile time, and so on.

    • Thread and class summary information

    • Memory and garbage collection information, including number of objects pending finalization, and so on

    • Information about the operating system, including physical characteristics, the amount of virtual memory for the running process, and swap space

    • Information about the JVM itself, such as the arguments and class path

  • MBeans

    This pane displays a tree structure that shows all platform and application MBeans that are registered in the connected JMX agent. When you select an MBean in the tree, its attributes, operations, notifications, and other information are displayed.

    • You can invoke operations, if any. For example, the operation dumpHeap for the HotSpotDiagnostic MBean, which is in the com.sun.management domain, performs a heap dump. The input parameter for this operation is the path name of the heap dump file on the machine where the target VM is running.

    • You can set the value of writable attributes. For example, you can set, unset, or change the value of certain VM flags by invoking the setVMOption operation of the HotSpotDiagnostic MBean. The flags are indicated by the list of values of the DiagnosticOptions attribute. A minimal in-process sketch of the dumpHeap and setVMOption operations follows this list.

    • You can subscribe to notifications, if any, by using the Subscribe and Unsubscribe buttons.
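
As a complement to invoking these operations from the JConsole MBeans tab, the same HotSpotDiagnostic operations are available in-process through the platform MXBean API. The following is a minimal sketch; the heap dump file name and the chosen VM flag are hypothetical examples:

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class HotSpotDiagnosticExample {
        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean bean =
                    ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);

            // dumpHeap: write a heap dump of live objects only (second argument true).
            bean.dumpHeap("myheapdump.hprof", true);            // hypothetical file name

            // setVMOption: change a writable (manageable) VM flag at runtime.
            bean.setVMOption("HeapDumpOnOutOfMemoryError", "true");

            // DiagnosticOptions: list the writable flags and their current values.
            bean.getDiagnosticOptions().forEach(option ->
                    System.out.println(option.getName() + " = " + option.getValue()));
        }
    }

The same MBean can be reached remotely from JConsole or any other JMX client under the com.sun.management domain.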

Monitor Local and Remote Applications with JConsole

JConsole can monitor both local applications and remote applications. If you start the tool with an argument specifying a JMX agent to connect to, then the tool will automatically start monitoring the specified application.

To monitor a local application, execute the command jconsole pid, where pid is the process ID of the application.

To monitor a remote application, execute the command jconsole hostname:portnumber, where hostname is the name of the host running the application, and portnumber is the port number you specified when you enabled the JMX agent.

If you execute the jconsole command without arguments, the tool will start by displaying the New Connection window, where you specify the local or remote process to be monitored. You can connect to a different host at any time by using the Connection menu.

With the latest JDK releases, no option is necessary when you start the application to be monitored.

As an example of the output of the monitoring tool, Figure 2-1 shows a chart of the heap memory usage.

Figure 2-1 Sample Output from JConsole


The jdb Utility

The jdb utility is included in the JDK as an example command-line debugger. The jdb utility uses the Java Debug Interface (JDI) to launch or connect to the target JVM.

The JDI is a high-level Java API that provides information useful for debuggers and similar systems that need access to the running state of a (usually remote) virtual machine. JDI is a component of the Java Platform Debugger Architecture (JPDA). See Java Platform Debugger Architecture.

The following section provides troubleshooting techniques for the jdb utility.

Troubleshoot with the jdb Utility

The jdb utility is used to monitor the debugger connectors used for remote debugging.

In JDI, a connector is the way that the debugger connects to the target JVM. The JDK traditionally ships with connectors that launch and establish a debugging session with a target JVM, as well as connectors that are used for remote debugging (using TCP/IP or shared memory transports).

These connectors are generally used with enterprise debuggers, such as the NetBeans integrated development environment (IDE) or commercial IDEs.

The command jdb -listconnectors prints a list of the available connectors. The command jdb -help prints the command usage help.
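
For example, a typical remote debugging session (the transport, port number, and application name shown here are illustrative) starts the target JVM with the JDWP agent listening on a socket and then attaches jdb to that port:

java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000 MyApplication
jdb -attach 8000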

See jdb Utility in the Java Platform, Standard Edition Tools Reference.

The jinfo Utility

The jinfo command-line utility gets configuration information from a running Java process or crash dump, and prints the system properties or the command-line flags that were used to start the JVM.

JDK Mission Control, Flight Recorder, and the jcmd utility can be used for diagnosing problems with the JVM and Java applications. Use the latest utility, jcmd, instead of the earlier jinfo utility for enhanced diagnostics and reduced performance overhead.

With the -flag option, the jinfo utility can dynamically set, unset, or change the value of certain JVM flags for the specified Java process. See Java HotSpot VM Command-Line Options.
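
For example, the following commands (using the process ID from the example below; the flag names are illustrative) print the current value of a flag and then enable a manageable boolean flag:

jinfo -flag MaxHeapSize 29620
jinfo -flag +HeapDumpOnOutOfMemoryError 29620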

The output for the jinfo utility for a Java process with PID number 29620 is shown in the following example.


c:\Program Files\Java\jdk-11\bin>jinfo 29620
Java System Properties:
sun.desktop=windows
awt.toolkit=sun.awt.windows.WToolkit
java.specification.version=11
sun.cpu.isalist=amd64
sun.jnu.encoding=Cp1252
java.class.path=C\:\\sampleApps\\DynamicTreeDemo\\dist\\DynamicTreeDemo.jar
sun.awt.enableExtraMouseButtons=true
java.vm.vendor=Oracle Corporation
sun.arch.data.model=64
user.variant=
java.vendor.url=http\://java.oracle.com/
user.timezone=
java.vm.specification.version=11
os.name=Windows 10
sun.java.launcher=SUN_STANDARD
user.country=US
sun.boot.library.path=c\:\\Program Files\\Java\\jdk-11.0.5\\bin
sun.java.command=C\:\\sampleApps\\DynamicTreeDemo\\dist\\DynamicTreeDemo.jar
jdk.debug=release
sun.cpu.endian=little
user.home=C\:\\Users\\user1
user.language=en
sun.stderr.encoding=cp437
java.specification.vendor=Oracle Corporation
java.version.date=2019-10-15
java.home=c\:\\Program Files\\Java\\jdk-11.0.5
file.separator=\\
java.vm.compressedOopsMode=Zero based
line.separator=\r\n
sun.stdout.encoding=cp437
java.specification.name=Java Platform API Specification
java.vm.specification.vendor=Oracle Corporation
java.awt.graphicsenv=sun.awt.Win32GraphicsEnvironment
user.script=
sun.management.compiler=HotSpot 64-Bit Tiered Compilers
java.runtime.version=11.0.5+10-LTS
user.name=user1
path.separator=;
os.version=10.0
java.runtime.name=Java(TM) SE Runtime Environment
file.encoding=Cp1252
java.vm.name=Java HotSpot(TM) 64-Bit Server VM
java.vendor.version=18.9
java.vendor.url.bug=http\://bugreport.java.com/bugreport/
java.io.tmpdir=C\:\\Users\\user1\\AppData\\Local\\Temp\\
java.version=11.0.5
user.dir=c\:\\Users\\user1
os.arch=amd64
java.vm.specification.name=Java Virtual Machine Specification
java.awt.printerjob=sun.awt.windows.WPrinterJob
sun.os.patch.level=
java.library.path=c\:\\Program Files\\Java\\jdk-11.0.5\\bin;...
java.vendor=Oracle Corporation
java.vm.info=mixed mode
java.vm.version=11.0.5+10-LTS
sun.io.unicode.encoding=UnicodeLittle
java.class.version=55.0
VM Flags:
   ...

The following topic describes a troubleshooting technique with the jinfo utility.

Troubleshooting with the jinfo Utility

The output from jinfo provides the settings for java.class.path and sun.boot.class.path.

If you start the target JVM with the -classpath and -Xbootclasspath arguments, then the output from jinfo provides the settings for java.class.path and sun.boot.class.path. This information might be needed when investigating class loader issues.

In addition to getting information from a process, the jhsdb jinfo tool can use a core file as input. On the Oracle Solaris operating system, for example, the gcore utility can be used to get a core file of the process in the preceding example. The core file will be named core.29620 and will be generated in the working directory of the process. The path to the Java executable file and the core file must be specified as arguments to the jhsdb jinfo utility, as shown in the following example.

$ jhsdb jinfo --exe java-home/bin/java --core core.29620

Sometimes, the binary name will not be java. This happens when the VM is created using the JNI invocation API. The jhsdb jinfo tool requires the binary from which the core file was generated.

The jmap Utility

The jmap command-line utility prints memory-related statistics for a running VM or core file. For a core file, use jhsdb jmap.

JDK Mission Control, Flight Recorder, and the jcmd utility can be used for diagnosing problems with the JVM and Java applications. It is suggested to use the latest utility, jcmd, instead of the previous jmap utility for enhanced diagnostics and reduced performance overhead.

If jmap is used with a process or core file without any command-line options, then it prints the list of shared objects loaded (the output is similar to that of the pmap utility on the Oracle Solaris operating system). For more specific information, you can use the options -heap, -histo, or -clstats. These options are described in the subsections that follow.

In addition, the JDK 7 release introduced the -dump:format=b,file=filename option, which causes jmap to dump the Java heap in binary format to a specified file.

If the jmap pid command does not respond because of a hung process, then use the jhsdb jmap utility to run the Serviceability Agent.

The following sections describe troubleshooting techniques with examples that print memory-related statistics for a running VM or a core file.

Heap Configuration and Usage

Use the jhsdb jmap --heap command to get the Java heap information.

The --heap option is used to get the following Java heap information:

  • Information specific to the garbage collection (GC) algorithm, including the name of the GC algorithm (for example, parallel GC) and algorithm-specific details (such as the number of threads for parallel GC).

  • Heap configuration that might have been specified as command-line options or selected by the VM based on the machine configuration.

  • Heap usage summary: For each generation (area of the heap), the tool prints the total heap capacity, in-use memory, and available free memory. If a generation is organized as a collection of spaces (for example, the new generation), then a space-specific memory size summary is included.

The following example shows output from the jhsdb jmap --heap command.

$ jhsdb jmap --heap --pid 29620
Attaching to process ID 29620, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 11.0.5+10-LTS

using thread-local object allocation.
Garbage-First (G1) GC with 4 thread(s)

Heap Configuration:
   MinHeapFreeRatio         = 40
   MaxHeapFreeRatio         = 70
   MaxHeapSize              = 4253024256 (4056.0MB)
   NewSize                  = 1363144 (1.2999954223632812MB)
   MaxNewSize               = 2551185408 (2433.0MB)
   OldSize                  = 5452592 (5.1999969482421875MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 21807104 (20.796875MB)
   CompressedClassSpaceSize = 1073741824 (1024.0MB)
   MaxMetaspaceSize         = 17592186044415 MB
   G1HeapRegionSize         = 1048576 (1.0MB)

Heap Usage:
G1 Heap:
   regions  = 4056
   capacity = 4253024256 (4056.0MB)
   used     = 10485760 (10.0MB)
   free     = 4242538496 (4046.0MB)
   0.2465483234714004% used
G1 Young Generation:
Eden Space:
   regions  = 11
   capacity = 15728640 (15.0MB)
   used     = 11534336 (11.0MB)
   free     = 4194304 (4.0MB)
   73.33333333333333% used
Survivor Space:
   regions  = 0
   capacity = 0 (0.0MB)
   used     = 0 (0.0MB)
   free     = 0 (0.0MB)
   0.0% used
G1 Old Generation:
   regions  = 0
   capacity = 250609664 (239.0MB)
   used     = 0 (0.0MB)
   free     = 250609664 (239.0MB)
   0.0% used

Heap Histogram

The jmap command with the -histo option or the jhsdb jmap --histo command can be used to get a class-specific histogram of the heap.

The jmap -histo command can print the heap histogram for a running process. Use jhsdb jmap --histo to print the heap histogram for a core file.

When the jmap -histo command is executed on a running process, the tool prints the number of objects, memory size in bytes, and fully qualified class name for each class. Internal classes in the Java HotSpot VM are enclosed within angle brackets. The histogram is useful to understand how the heap is used. To get the size of an object, you must divide the total size by the count of that object type.

The following example shows output from the jmap -histo command when it is executed on a process with PID number 29620.

$ jmap -histo 29620
 num     #instances         #bytes  class name (module)
-------------------------------------------------------
   1:         37127        2944304  [B (java.base@11)
   2:          5773        1860840  [I (java.base@11)
   3:         15844         887264  jdk.internal.org.objectweb.asm.Item (java.base@11)
   4:         24061         577464  java.lang.String (java.base@11)
   5:         13334         575120  [Ljava.lang.Object; (java.base@11)
   6:           562         373280  [Ljdk.internal.org.objectweb.asm.Item; (java.base@11)
   7:          2575         313392  java.lang.Class (java.base@11)
   8:          8233         250792  [Ljava.lang.Class; (java.base@11)
   9:          6043         241720  java.lang.invoke.MethodType (java.base@11)
  10:          6716         214912  java.lang.invoke.MethodType$ConcurrentWeakInternSet$WeakEntry (java.base@11)
  11:          6324         202368  java.util.HashMap$Node (java.base@11)
  12:          5352         171264  java.lang.invoke.LambdaForm$Name (java.base@11)
  13:           612         155160  [C (java.base@11)
  14:           594         133056  jdk.internal.org.objectweb.asm.MethodWriter (java.base@11)
  15:          1538         110864  [Ljava.lang.invoke.LambdaForm$Name; (java.base@11)
  16:          4521         108504  java.lang.StringBuilder (java.base@11)
  17:          2252         108096  java.lang.invoke.MemberName (java.base@11)
  18:           644         103208  [Ljava.util.HashMap$Node; (java.base@11)
  19:          1375          77000  java.lang.invoke.LambdaFormEditor$Transform (java.base@11)
  20:          2215          70880  java.util.concurrent.ConcurrentHashMap$Node (java.base@11)
... more lines removed here to reduce output...
1425:             1             16  sun.util.resources.LocaleData$LocaleDataStrategy (java.base@11)
1426:             1             16  sun.util.resources.provider.NonBaseLocaleDataMetaInfo (jdk.localedata@11)
Total        184008       11075800
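
To illustrate the per-object calculation described above: the java.lang.String entry shows 24061 instances occupying 577464 bytes, which works out to 24 bytes per String object (the character data itself is counted separately, under the [B byte-array entries).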

When the jhsdb jmap --histo command is executed on a core file, the tool prints the serial number, number of instances, bytes, and class name for each class. Internal classes in the Java HotSpot VM are prefixed with an asterisk (*).

The following example shows the output of the jhsdb jmap --histo command when it is executed on a core file.

$ jhsdb jmap --exe /usr/java/jdk_11/bin/java --core core.16395 --histo
Debugger attached successfully.
Server compiler detected.
JVM version is 11.0.5+10-LTS
Iterating over heap. This may take a while...
Object Histogram:

num     #instances     #bytes   Class description
--------------------------------------------------------------------------
1:           11102     564520   byte[]
2:           10065     241560   java.lang.String
3:            1421     163392   java.lang.Class
4:           26403    2997816   * ConstMethodKlass
5:           26403    2118728   * MethodKlass
6:           39750    1613184   * SymbolKlass
7:            2011    1268896   * ConstantPoolKlass
8:            2011    1097040   * InstanceKlassKlass
9:            1906     882048   * ConstantPoolCacheKlass
10:           1614     125752   java.lang.Object[]
11:           1160      64960   jdk.internal.org.objectweb.asm.Item
12:           1834      58688   java.util.HashMap$Node
13:            359      40880   java.util.HashMap$Node[]
14:           1189      38048   java.util.concurrent.ConcurrentHashMap$Node
15:             46      37280   jdk.internal.org.objectweb.asm.Item[]
16:             29      35600   char[]
17:            968      32320   int[]
18:            650      26000   java.lang.invoke.MethodType
19:            475      22800   java.lang.invoke.MemberName

Class Loader Statistics

Use the jmap command with the -clstats option to print class loader statistics for the Java heap.

The jmap command connects to a running process using the process ID and prints detailed information about classes loaded in the Metaspace:

  • Index - Unique index for the class
  • Super - Index number of the super class
  • InstBytes - Number of bytes per instance
  • KlassBytes - Number of bytes for the class
  • annotations - Size of annotations
  • CpAll - Combined size of the constants, tags, cache, and operands per class
  • MethodCount - Number of methods per class
  • Bytecodes - Number of bytes used for byte codes
  • MethodAll - Combined size of the bytes per method, CONSTMETHOD, stack map, and method data
  • ROAll - Size of class metadata that could be put in read-only memory
  • RWAll - Size of class metadata that must be put in read/write memory
  • Total - Sum of ROAll + RWAll
  • ClassName - Name of the loaded class

The following example shows a subset of the output from the jmap -clstats command when it is executed on a process with PID number 10952.

c:\Program Files\Java\jdk-11.0.5\bin>jmap -clstats 10952
Index Super InstBytes KlassBytes annotations   CpAll MethodCount Bytecodes MethodAll   ROAll   RWAll    Total ClassName
    1    -1    304816        512           0       0           0         0         0      24     624      648 [B
    2    51    285264        784           0   23344         147      5815     48848   28960   46640    75600 java.lang.Class
    3    -1    256368        512           0       0           0         0         0      24     624      648 [I
    4    51    166344        680         136   17032         123      5433     48256   23920   44160    68080 java.lang.String
    5    -1    146360        512           0       0           0         0         0      24     624      648 [Ljava.lang.Object;
    6    51    123680        600           0    1384           7       149      1888    1200    3024     4224 java.util.HashMap$Node
    7    51     52928        608           0    1360           9       213      2472    1632    3184     4816 java.util.concurrent.ConcurrentHashMap$Node
    8    -1     51888        512           0       0           0         0         0      24     624      648 [C
    9    -1     49904        512           0       0           0         0         0      32     624      656 [Ljava.util.HashMap$Node;
   10    51     30400        624           0    1512           8       240      2224    1472    3256     4728 java.util.Hashtable$Entry
   11    51     25488        592           0   11520          89      4365     47936   16696   45072    61768 java.lang.invoke.MemberName
   12  1604     19296       1024           0    7904          51      4071     27568   14664   23024    37688 java.util.HashMap
   13    -1     18304        512           0       0           0         0         0      32     624      656 [Ljava.util.concurrent.ConcurrentHashMap$Node;
   14    51     17504        544         120    5464          37      1783     16648    7416   16072    23488 java.lang.invoke.LambdaForm$Name
   15    -1     16680        512           0       0           0         0         0      80     624      704 [Ljava.lang.Class;
...lines removed to reduce output...
 2320  1955         0        560           0    1912           7       170      1520    1312    3016     4328 sun.util.logging.internal.LoggingProviderImpl
 2321    51         0        528           0     232           1         0       144     128     936     1064 sun.util.logging.internal.LoggingProviderImpl$LogManagerAccess
              2055400    1621472       10680 5092080       27820   1288076   7335944 5407792 9513160 14920952 Total
                13.8%      10.9%        0.1%   34.1%           -      8.6%     49.2%   36.2%   63.8%   100.0%
Index Super InstBytes KlassBytes annotations   CpAll MethodCount Bytecodes MethodAll   ROAll   RWAll    Total ClassName

The jps Utility

The jps utility lists every instrumented Java HotSpot VM for the current user on the target system.

The utility is very useful in environments where the VM is embedded, that is, where it is started using the JNI Invocation API rather than the java launcher. In these environments, it is not always easy to recognize the Java processes in the process list.

The following example shows the use of the jps utility.

$ jps
16217 MyApplication
16342 jps

The jps utility lists the virtual machines for which the user has access rights. This is determined by access-control mechanisms specific to the operating system. On the Oracle Solaris operating system, for example, if a non-root user executes the jps utility, then the output is a list of the virtual machines that were started with that user's UID.

In addition to listing the PID, the utility provides options to output the arguments passed to the application's main method, the complete list of VM arguments, and the full package name of the application's main class. The jps utility can also list processes on a remote system if the remote system is running the jstatd daemon.

The jstack Utility

Use the jcmd or jhsdb jstack utility instead of the jstack utility to diagnose problems with the JVM and Java applications.

JDK Mission Control, Flight Recorder, and the jcmd utility can be used to diagnose problems with the JVM and Java applications. It is suggested to use the latest utility, jcmd, instead of the previous jstack utility for enhanced diagnostics and reduced performance overhead.

The following sections describe troubleshooting techniques with the jstack and jhsdb jstack utilities.

Troubleshoot with the jstack Utility

The jstack command-line utility attaches to the specified process, and prints the stack traces of all threads that are attached to the virtual machine, including Java threads and VM internal threads, and optionally native stack frames. The utility also performs deadlock detection. For core files, use jhsdb jstack.

A stack trace of all threads can be useful in diagnosing a number of issues, such as deadlocks or hangs.

The -l option instructs the utility to look for ownable synchronizers in the heap and print information about java.util.concurrent.locks. Without this option, the thread dump includes information only on monitors.

The output from the jstack pid option is the same as that obtained by pressing Ctrl+\ at the application console (standard input) or by sending the process a quit signal. See Control+Break Handler for an example of the output.

Thread dumps can also be obtained programmatically using the Thread.getAllStackTraces method, or in the debugger using the debugger option to print all thread stacks (the where command in the case of the jdb sample debugger).
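
The following is a minimal sketch of such a programmatic thread dump using Thread.getAllStackTraces; the output format is simplified compared to the VM's own thread dump, and the class name is illustrative.

import java.util.Map;

public class ProgrammaticThreadDump {
    public static void main(String[] args) {
        // Thread.getAllStackTraces returns a snapshot of all live threads
        // and their current stack traces.
        Map<Thread, StackTraceElement[]> dump = Thread.getAllStackTraces();
        for (Map.Entry<Thread, StackTraceElement[]> entry : dump.entrySet()) {
            Thread thread = entry.getKey();
            System.out.printf("\"%s\" daemon=%b state=%s%n",
                    thread.getName(), thread.isDaemon(), thread.getState());
            for (StackTraceElement frame : entry.getValue()) {
                System.out.println("\tat " + frame);
            }
            System.out.println();
        }
    }
}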

Stack Trace from a Core Dump

Use the jhsdb jstack command to obtain stack traces from a core dump.

To get stack traces from a core dump, execute the jhsdb jstack command on a core file, as shown in the following example.

$ jhsdb jstack --exe java-home/bin/java --core core-file

Mixed Stack

The jhsdb jstack utility can also be used to print a mixed stack; that is, it can print native stack frames in addition to the Java stack. Native frames are the C/C++ frames associated with VM code and JNI/native code.

To print a mixed stack, use the --mixed option, as shown in the following example.

>jhsdb jstack --mixed --pid 21177
Attaching to process ID 21177, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 11.0.5+10-LTS
Deadlock Detection:

No deadlocks found.

----------------- 0 -----------------
0x00007ffe17e8f7e4      ntdll!ZwWaitForSingleObject + 0x14
----------------- 1 -----------------
----------------- 2 -----------------
"DestroyJavaVM" #19 prio=5 tid=0x000001a5607af000 nid=0x5ad8 waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE
   JavaThread state: _thread_blocked
0x00007ffe17e8f7e4      ntdll!ZwWaitForSingleObject + 0x14
----------------- 3 -----------------
0x00007ffe17e8f7e4      ntdll!ZwWaitForSingleObject + 0x14
----------------- 4 -----------------
0x00007ffe17e8f7e4      ntdll!ZwWaitForSingleObject + 0x14
----------------- 5 -----------------
0x00007ffe17e8f7e4      ntdll!ZwWaitForSingleObject + 0x14
----------------- 6 -----------------
0x00007ffe17e8f7e4      ntdll!ZwWaitForSingleObject + 0x14
----------------- 7 -----------------
0x00007ffe17e8f7e4      ntdll!ZwWaitForSingleObject + 0x14
----------------- 8 -----------------
0x00007ffe17e8f7e4      ntdll!ZwWaitForSingleObject + 0x14
----------------- 9 -----------------
"Reference Handler" #2 daemon prio=10 tid=0x000001a57f747800 nid=0x2ecc waiting on condition [0x00000060f3afe000]
   java.lang.Thread.State: RUNNABLE
   JavaThread state: _thread_blocked
0x00007ffe17e8f7e4      ntdll!ZwWaitForSingleObject + 0x14
----------------- 10 -----------------
"Finalizer" #3 daemon prio=8 tid=0x000001a50400c000 nid=0x3e70 in Object.wait() [0x00000060f3bfe000]
   java.lang.Thread.State: WAITING (on object monitor)
   JavaThread state: _thread_blocked
0x00007ffe17e8f7e4      ntdll!ZwWaitForSingleObject + 0x14
----------------- 11 -----------------
"Signal Dispatcher" #4 daemon prio=9 tid=0x000001a504062800 nid=0x550 runnable [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE
   JavaThread state: _thread_blocked
0x00007ffe17e8f7e4      ntdll!ZwWaitForSingleObject + 0x14
----------------- 12 -----------------
"Attach Listener" #5 daemon prio=5 tid=0x000001a504063800 nid=0x488c runnable [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE
   JavaThread state: _thread_blocked
0x00007ffe17e8f7e4      ntdll!ZwWaitForSingleObject + 0x14
0x000001a504064340              ????????
----------------- 13 -----------------
"C2 CompilerThread0" #6 daemon prio=9 tid=0x000001a504066000 nid=0x5968 waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE
   JavaThread state: _thread_blocked
0x00007ffe17e8f7e4      ntdll!ZwWaitForSingleObject + 0x14
0x0400030091000000              ????????
----------------- 14 -----------------
"C1 CompilerThread0" #8 daemon prio=9 tid=0x000001a50406d800 nid=0x67c waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE
   JavaThread state: _thread_blocked
0x00007ffe17e8f7e4      ntdll!ZwWaitForSingleObject + 0x14
----------------- 15 -----------------
"Sweeper thread" #9 daemon prio=9 tid=0x000001a50406e800 nid=0x4690 runnable [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE
   JavaThread state: _thread_blocked
0x00007ffe17e8f7e4      ntdll!ZwWaitForSingleObject + 0x14
0x010a012700000004              ????????
----------------- 16 -----------------
"Service Thread" #10 daemon prio=9 tid=0x000001a5041fd800 nid=0x3060 runnable [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE
   JavaThread state: _thread_blocked
0x00007ffe17e8f7e4      ntdll!ZwWaitForSingleObject + 0x14
----------------- 17 -----------------
0x00007ffe17e8f7e4      ntdll!ZwWaitForSingleObject + 0x14
----------------- 18 -----------------
"Common-Cleaner" #11 daemon prio=8 tid=0x000001a504205800 nid=0x5db4 in Object.wait() [0x00000060f43ff000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
   JavaThread state: _thread_blocked
0x00007ffe17e8f7e4      ntdll!ZwWaitForSingleObject + 0x14
----------------- 19 -----------------
"Java2D Disposer" #12 daemon prio=10 tid=0x000001a50c8ef800 nid=0x58e8 in Object.wait() [0x00000060f44fe000]
   java.lang.Thread.State: WAITING (on object monitor)
   JavaThread state: _thread_blocked
0x00007ffe17e8f7e4      ntdll!ZwWaitForSingleObject + 0x14
----------------- 20 -----------------
"AWT-Shutdown" #13 prio=5 tid=0x000001a50c8d0800 nid=0x3a34 in Object.wait() [0x00000060f45ff000]
   java.lang.Thread.State: WAITING (on object monitor)
   JavaThread state: _thread_blocked
0x00007ffe17e8f7e4      ntdll!ZwWaitForSingleObject + 0x14
----------------- 21 -----------------
"AWT-Windows" #14 daemon prio=6 tid=0x000001a50c8d4000 nid=0x5c8 runnable [0x00000060f46fe000]
   java.lang.Thread.State: RUNNABLE
   JavaThread state: _thread_in_native
----------------- 22 -----------------
"AWT-EventQueue-0" #17 prio=6 tid=0x000001a50dfe9000 nid=0x5a00 waiting on condition [0x00000060f49ff000]
   java.lang.Thread.State: WAITING (parking)
   JavaThread state: _thread_blocked
0x00007ffe17e8f7e4      ntdll!ZwWaitForSingleObject + 0x14
----------------- 23 -----------------
----------------- 24 -----------------

Frames that are prefixed with an asterisk (*) are Java frames, whereas frames that are not prefixed with an asterisk are native C/C++ frames.

The output of the utility can be piped through c++filt to demangle C++ mangled symbol names. Because the Java HotSpot VM is developed in the C++ language, the jhsdb jstack utility prints C++ mangled symbol names for the Java HotSpot internal functions.
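
For example, the mixed-stack output shown earlier could be demangled with a pipeline such as the following (using the same process ID; the exact demangling depends on the local c++filt version):

jhsdb jstack --mixed --pid 21177 | c++filt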

The c++filt utility is delivered with the native C++ compiler suite: SUNWspro on the Oracle Solaris operating system and GNU on Linux.

The jstat Utility

The jstat utility uses the built-in instrumentation in the Java HotSpot VM to provide information about performance and resource consumption of running applications.

The tool can be used when diagnosing performance issues, and in particular issues related to heap sizing and garbage collection. The jstat utility does not require the VM to be started with any special options. The built-in instrumentation in the Java HotSpot VM is enabled by default. This utility is included in the JDK download for all operating system platforms supported by Oracle.

Note:

The instrumentation is not accessible on a FAT32 file system.

See jstat in the Java Platform, Standard Edition Tools Reference.

The jstat utility uses the virtual machine identifier (VMID) to identify the target process. The documentation describes the syntax of the VMID, but its only required component is the local virtual machine identifier (LVMID). The LVMID is typically (but not always) the operating system's PID for the target JVM process.

The jstat utility provides data similar to the data provided by the vmstat and iostat utilities on Oracle Solaris and Linux operating systems.

For a graphical representation of the data, you can use the visualgc tool. See The visualgc Tool.

The following example illustrates the use of the -gcutil option, where the jstat utility attaches to LVMID number 2834 and takes 7 samples at 250-millisecond intervals.

$ jstat -gcutil 2834 250 7
  S0     S1     E      O      M     YGC     YGCT    FGC    FGCT     GCT   
  0.00  99.74  13.49   7.86  95.82      3    0.124     0    0.000    0.124
  0.00  99.74  13.49   7.86  95.82      3    0.124     0    0.000    0.124
  0.00  99.74  13.49   7.86  95.82      3    0.124     0    0.000    0.124
  0.00  99.74  13.49   7.86  95.82      3    0.124     0    0.000    0.124
  0.00  99.74  13.49   7.86  95.82      3    0.124     0    0.000    0.124
  0.00  99.74  13.49   7.86  95.82      3    0.124     0    0.000    0.124
  0.00  99.74  13.49   7.86  95.82      3    0.124     0    0.000    0.124

The output of this example shows that three young generation collections had occurred before the first sample was taken (the YGC count is 3, with a cumulative young generation collection time, YGCT, of 0.124 seconds) and that no further collections occurred during the sampling period: the survivor, eden, old space, and metaspace utilization values are unchanged across all seven samples.

The following example illustrates the use of the -gcnew option where the jstat utility attaches to LVMID number 2834, takes samples at 250-millisecond intervals, and displays the output. In addition, it uses the -h3 option to display the column headers after every 3 lines of data.

$ jstat -gcnew -h3 2834 250
S0C    S1C    S0U    S1U   TT MTT  DSS      EC       EU     YGC     YGCT  
 192.0  192.0    0.0    0.0 15  15   96.0   1984.0    942.0    218    1.999
 192.0  192.0    0.0    0.0 15  15   96.0   1984.0   1024.8    218    1.999
 192.0  192.0    0.0    0.0 15  15   96.0   1984.0   1068.1    218    1.999
 S0C    S1C    S0U    S1U   TT MTT  DSS      EC       EU     YGC     YGCT  
 192.0  192.0    0.0    0.0 15  15   96.0   1984.0   1109.0    218    1.999
 192.0  192.0    0.0  103.2  1  15   96.0   1984.0      0.0    219    2.019
 192.0  192.0    0.0  103.2  1  15   96.0   1984.0     71.6    219    2.019
 S0C    S1C    S0U    S1U   TT MTT  DSS      EC       EU     YGC     YGCT  
 192.0  192.0    0.0  103.2  1  15   96.0   1984.0     73.7    219    2.019
 192.0  192.0    0.0  103.2  1  15   96.0   1984.0     78.0    219    2.019
 192.0  192.0    0.0  103.2  1  15   96.0   1984.0    116.1    219    2.019

In addition to showing the repeating header string, this example shows that between the fourth and fifth samples, a young generation collection occurred, whose duration was 0.02 seconds. The collection found enough live data that the survivor space 1 utilization (S1U) would have exceeded the desired survivor size (DSS). As a result, objects were promoted to the old generation (not visible in this output), and the tenuring threshold (TT) was lowered from 15 to 1.

The following example illustrates the use of the -gcoldcapacity option, where the jstat utility attaches to LVMID number 21891 and takes 3 samples at 250-millisecond intervals. The -t option is used to generate a time stamp for each sample in the first column.

$ jstat -gcoldcapacity -t 21891 250 3
Timestamp    OGCMN     OGCMX       OGC        OC   YGC   FGC     FGCT     GCT
    150.1   1408.0   60544.0   11696.0   11696.0   194    80    2.874   3.799
    150.4   1408.0   60544.0   13820.0   13820.0   194    81    2.938   3.863
    150.7   1408.0   60544.0   13820.0   13820.0   194    81    2.938   3.863

The Timestamp column reports the elapsed time in seconds since the start of the target JVM. In addition, the -gcoldcapacity output shows the old generation capacity (OGC) and the old space capacity (OC) increasing as the heap expands to meet the allocation or promotion demands. The OGC has grown from 11696 KB to 13820 KB after the 81st full garbage collection (FGC). The maximum capacity of the generation (and space) is 60544 KB (OGCMX), so it still has room to expand.

The visualgc Tool

The visualgc tool provides a graphical view of the garbage collection (GC) system.

The visualgc tool is related to the jstat tool (see The jstat Utility). As with jstat, it uses the built-in instrumentation of the Java HotSpot VM.

The visualgc tool is not included in the JDK release, but is available as a separate download from the jvmstat technology page.

Figure 2-2 shows how the GC and heap are visualized.

Figure 2-2 Sample Output from visualgc


Control+Break Handler

Pressing the Control key and the backslash (\) key (on operating systems such as Oracle Solaris or Linux) or the Control and Break keys (on Windows) at the application console triggers the Control+Break handler.

On Oracle Solaris or Linux operating systems, the combination of pressing the Control key and the backslash (\) key at the application console (standard input) causes the Java HotSpot VM to print a thread dump to the application's standard output. On Windows, the equivalent key sequence is the Control and Break keys. The general term for these key combinations is the Control+Break handler.

On Oracle Solaris and Linux operating systems, a thread dump is printed if the Java process receives a quit signal. Therefore, the kill -QUIT pid command causes the process with the ID pid to print a thread dump to standard output.

The following sections describe the data traced by the Control+Break handler:

Thread Dump

The thread dump consists of the thread stack, including the thread state, for all Java threads in the virtual machine.

The thread dump does not terminate the application: the application continues after the thread information is printed.

The following example illustrates a thread dump.

Full thread dump Java HotSpot(TM) Client VM (1.6.0-rc-b100 mixed mode):

"DestroyJavaVM" prio=10 tid=0x00030400 nid=0x2 waiting on condition [0x00000000..0xfe77fbf0]
   java.lang.Thread.State: RUNNABLE

"Thread2" prio=10 tid=0x000d7c00 nid=0xb waiting for monitor entry [0xf36ff000..0xf36ff8c0]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at Deadlock$DeadlockMakerThread.run(Deadlock.java:32)
        - waiting to lock <0xf819a938> (a java.lang.String)
        - locked <0xf819a970> (a java.lang.String)

"Thread1" prio=10 tid=0x000d6c00 nid=0xa waiting for monitor entry [0xf37ff000..0xf37ffbc0]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at Deadlock$DeadlockMakerThread.run(Deadlock.java:32)
        - waiting to lock <0xf819a970> (a java.lang.String)
        - locked <0xf819a938> (a java.lang.String)

"Low Memory Detector" daemon prio=10 tid=0x000c7800 nid=0x8 runnable [0x00000000..0x00000000]
   java.lang.Thread.State: RUNNABLE

"CompilerThread0" daemon prio=10 tid=0x000c5400 nid=0x7 waiting on condition [0x00000000..0x00000000]
   java.lang.Thread.State: RUNNABLE

"Signal Dispatcher" daemon prio=10 tid=0x000c4400 nid=0x6 waiting on condition [0x00000000..0x00000000]
   java.lang.Thread.State: RUNNABLE

"Finalizer" daemon prio=10 tid=0x000b2800 nid=0x5 in Object.wait() [0xf3f7f000..0xf3f7f9c0]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0xf4000b40> (a java.lang.ref.ReferenceQueue$Lock)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:116)
        - locked <0xf4000b40> (a java.lang.ref.ReferenceQueue$Lock)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:132)
        at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)

"Reference Handler" daemon prio=10 tid=0x000ae000 nid=0x4 in Object.wait() [0xfe57f000..0xfe57f940]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0xf4000a40> (a java.lang.ref.Reference$Lock)
        at java.lang.Object.wait(Object.java:485)
        at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
        - locked <0xf4000a40> (a java.lang.ref.Reference$Lock)

"VM Thread" prio=10 tid=0x000ab000 nid=0x3 runnable 

"VM Periodic Task Thread" prio=10 tid=0x000c8c00 nid=0x9 waiting on condition 

The output consists of a number of thread entries separated by an empty line. The Java Threads (threads that are capable of executing Java language code) are printed first, and these are followed by information about VM internal threads. Each thread entry consists of a header line followed by the thread stack trace.

The header line contains the following information about the thread:

  • Thread name.

  • Indication of whether the thread is a daemon thread.

  • Thread priority (prio).

  • Thread ID (tid), which is the address of a thread structure in memory.

  • ID of the native thread (nid).

  • Thread state, which indicates what the thread was doing at the time of the thread dump. See Table 2-2 for more details.

  • Address range, which gives an estimate of the valid stack region for the thread.

Thread States for a Thread Dump

List of possible thread states for a thread dump.

Table 2-2 lists the possible thread states for a thread dump using the Control+Break Handler.

Table 2-2 Thread States for a Thread Dump

Thread State Description

NEW

The thread has not yet started.

RUNNABLE

The thread is executing in the JVM.

BLOCKED

The thread is blocked, waiting for a monitor lock.

WAITING

The thread is waiting indefinitely for another thread to perform a particular action.

TIMED_WAITING

The thread is waiting for another thread to perform an action for up to a specified waiting time.

TERMINATED

The thread has exited.

Detect Deadlocks

The Control+Break handler can be used to detect deadlocks in threads.

In addition to the thread stacks, the Control+Break handler executes a deadlock detection algorithm. If any deadlocks are detected, then the Control+Break handler, as shown in the following example, prints additional information after the thread dump about each deadlocked thread.

Found one Java-level deadlock:
=============================
"Thread2":
  waiting to lock monitor 0x000af330 (object 0xf819a938, a java.lang.String),
  which is held by "Thread1"
"Thread1":
  waiting to lock monitor 0x000af398 (object 0xf819a970, a java.lang.String),
  which is held by "Thread2"

Java stack information for the threads listed above:
===================================================
"Thread2":
        at Deadlock$DeadlockMakerThread.run(Deadlock.java:32)
        - waiting to lock <0xf819a938> (a java.lang.String)
        - locked <0xf819a970> (a java.lang.String)
"Thread1":
        at Deadlock$DeadlockMakerThread.run(Deadlock.java:32)
        - waiting to lock <0xf819a970> (a java.lang.String)
        - locked <0xf819a938> (a java.lang.String)

Found 1 deadlock.
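
For reference, code of the following shape produces the deadlock reported above. This is a hypothetical reconstruction of the Deadlock class named in the stack traces, not the exact source used to generate the output: two threads acquire the same two String locks in opposite order.

public class Deadlock {
    static class DeadlockMakerThread extends Thread {
        private final Object first;
        private final Object second;

        DeadlockMakerThread(String name, Object first, Object second) {
            super(name);
            this.first = first;
            this.second = second;
        }

        @Override
        public void run() {
            synchronized (first) {
                try {
                    // Give the other thread time to acquire its first lock.
                    Thread.sleep(100);
                } catch (InterruptedException ignored) {
                }
                synchronized (second) { // deadlocks here: the other thread holds this lock
                    System.out.println(getName() + " acquired both locks");
                }
            }
        }
    }

    public static void main(String[] args) {
        Object lockA = new String("lockA"); // String locks, as in the example output
        Object lockB = new String("lockB");
        new DeadlockMakerThread("Thread1", lockA, lockB).start();
        new DeadlockMakerThread("Thread2", lockB, lockA).start();
    }
}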

If the JVM flag -XX:+PrintConcurrentLocks is set, then the Control+Break handler will also print the list of concurrent locks owned by each thread.

Heap Summary

The Control+Break handler can be used to print a heap summary.

The following example shows the different generations (areas of the heap), with the size, the amount used, and the address range. The address range is especially useful if you are also examining the process with tools such as pmap.

Heap
 def new generation   total 1152K, used 435K [0x22960000, 0x22a90000, 0x22e40000)
  eden space 1088K,  40% used [0x22960000, 0x229ccd40, 0x22a70000)
  from space 64K,   0% used [0x22a70000, 0x22a70000, 0x22a80000)
  to   space 64K,   0% used [0x22a80000, 0x22a80000, 0x22a90000)
 tenured generation   total 13728K, used 6971K [0x22e40000, 0x23ba8000, 0x26960000)
   the space 13728K,  50% used [0x22e40000, 0x2350ecb0, 0x2350ee00, 0x23ba8000)
 compacting perm gen  total 12288K, used 1417K [0x26960000, 0x27560000, 0x2a960000)
   the space 12288K,  11% used [0x26960000, 0x26ac24f8, 0x26ac2600, 0x27560000)
    ro space 8192K,  62% used [0x2a960000, 0x2ae5ba98, 0x2ae5bc00, 0x2b160000)
    rw space 12288K,  52% used [0x2b160000, 0x2b79e410, 0x2b79e600, 0x2bd60000)

If the JVM flag -XX:+PrintClassHistogram is set, then the Control+Break handler will produce a heap histogram.

Native Operating System Tools

List of native tools available on Windows, Linux, and Oracle Solaris operating systems that are useful for troubleshooting or monitoring purposes.

A brief description is provided for each tool. For further details, see the operating system documentation (or man pages for the Oracle Solaris and Linux operating systems).

The format of log files and output from command-line utilities depends on the release. For example, if you develop a script that relies on the format of the fatal error log, then the same script may not work if the format of the log file changes in a future release.

You can also search for Windows-specific debug support on the MSDN developer network.

The following sections describe troubleshooting techniques and improvements to a few native operating system tools.

DTrace Tool

The Oracle Solaris 10 operating system includes the DTrace tool, which allows dynamic tracing of the operating system kernel and user-level programs.

This tool supports scripting at system-call entry and exit, at user-mode function entry and exit, and at many other probe points. The scripts are written in the D programming language, which is a C-like language with safe pointer semantics. These scripts can help you to troubleshoot problems or solve performance issues.

The dtrace command is a generic front end to the DTrace tool. This command provides a simple interface to invoke the D language, to retrieve buffered trace data, and to access a set of basic routines to format and print traced data.

You can write your own customized DTrace scripts, using the D language, or download and use one or more of the many scripts that are already available on various sites.

The probes are delivered and instrumented by kernel modules called providers. The types of tracing offered by the probe providers include user instruction tracing, function boundary tracing, kernel lock instrumentation, profile interrupt, system call tracing, and many more. If you write your own scripts, you use the D language to enable the probes; this language also allows conditional tracing and output formatting.

You can use the dtrace -l command to explore the set of providers and probes that are available on your Oracle Solaris operating system.

The DTraceToolkit is a collection of useful documented scripts developed by the OpenSolaris DTrace community. See DTraceToolkit.

See Solaris Dynamic Tracing Guide.

Probe Providers in Java HotSpot VM

The Java HotSpot VM contains two built-in probe providers: hotspot and hotspot_jni.

These providers deliver probes that can be used to monitor the internal state and activities of the VM, as well as the Java application that is running.

The JVM probe providers can be categorized as follows:

  • VM lifecycle: VM initialization begin and end, and VM shutdown

  • Thread lifecycle: thread start and stop, thread name, thread ID, and so on

  • Class-loading: Java class loading and unloading

  • Garbage collection: Start and stop of garbage collection, systemwide or by memory pool

  • Method compilation: Method compilation begin and end, and method loading and unloading

  • Monitor probes: Wait events, notification events, contended monitor entry and exit

  • Application tracking: Method entry and return, allocation of a Java object

In order to call from native code to Java code, the native code must make a call through the JNI interface. The hotspot_jni provider manages DTrace probes at the entry point and return point for each of the methods that the JNI interface provides for invoking Java code and examining the state of the VM.

At probe points, you can print the stack trace of the current thread using the ustack built-in function. This function prints Java method names in addition to C/C++ native function names. The following example is a simple D script that prints a full stack trace whenever a thread calls the read system call.

#!/usr/sbin/dtrace -s

syscall::read:entry
/pid == $1 && tid == 1/
{
    ustack(50, 0x2000);
}

The script in the previous example is stored in a file named read.d and is run by specifying the PID of the traced Java process, as shown in the following example.

read.d pid

If your Java application generated a lot of I/O or had some unexpected latency, then the DTrace tool and its ustack() action can help you to diagnose the problem.

Improvements to the pmap Utility

Improvements to the pmap utility in Oracle Solaris 10 operating system.

The pmap utility was improved in the Oracle Solaris 10 operating system to print stack segments with the text [stack]. This text helps you to locate the stack easily.

The following example shows output from the improved pmap utility, including the labeled stack segments.

19846:    /net/myserver/export1/user/j2sdk6/bin/java -Djava.endorsed.d
00010000      72K r-x--  /export/disk09/jdk/6/rc/b63/binaries/solsparc/bin/java
00030000      16K rwx--  /export/disk09/jdk/6/rc/b63/binaries/solsparc/bin/java
00034000   32544K rwx--    [ heap ]
D1378000      32K rwx-R    [ stack tid=44 ]
D1478000      32K rwx-R    [ stack tid=43 ]
D1578000      32K rwx-R    [ stack tid=42 ]
D1678000      32K rwx-R    [ stack tid=41 ]
D1778000      32K rwx-R    [ stack tid=40 ]
D1878000      32K rwx-R    [ stack tid=39 ]
D1974000      48K rwx-R    [ stack tid=38 ]
D1A78000      32K rwx-R    [ stack tid=37 ]
D1B78000      32K rwx-R    [ stack tid=36 ]
[.. more lines removed here to reduce output ..]
FF370000       8K r-x--  /usr/lib/libsched.so.1
FF380000       8K r-x--  /platform/sun4u-us3/lib/libc_psr.so.1
FF390000      16K r-x--  /lib/libthread.so.1
FF3A4000       8K rwx--  /lib/libthread.so.1
FF3B0000       8K r-x--  /lib/libdl.so.1
FF3C0000     168K r-x--  /lib/ld.so.1
FF3F8000       8K rwx--  /lib/ld.so.1
FF3FA000       8K rwx--  /lib/ld.so.1
FFB80000      24K -----    [ anon ]
FFBF0000      64K rwx--    [ stack ]
 total    167224K

Improvements to the pstack Utility

Improvements to the pstack utility in Oracle Solaris 10 operating system.

Before the Oracle Solaris 10 operating system, the pstack utility did not support Java. It printed hexadecimal addresses for both interpreted and compiled Java methods.

Starting with the Oracle Solaris 10 operating system, the pstack command-line tool prints mixed-mode stack traces (Java and C/C++ frames) from a core file or a live process. The utility prints Java method names for interpreted, compiled, and inlined Java methods.

Custom Diagnostic Tools

The JDK has extensive APIs to develop custom tools to observe, monitor, profile, debug, and diagnose issues in applications that are deployed in the JRE.

The development of new tools is beyond the scope of this document. Instead, this section provides a brief overview of the APIs available.

All the packages mentioned in this section are described in the Java SE API specification.

See the example and demonstration code that is included in the JDK download.

The following sections describe packages, interface classes, and the Java debugger that can be used as custom diagnostic tools for troubleshooting.

Java Platform Debugger Architecture

The Java Platform Debugger Architecture (JPDA) is the architecture designed for use by debuggers and debugger-like tools.

The Java Platform Debugger Architecture consists of two programming interfaces and a wire protocol:

  • The Java Virtual Machine Tool Interface (JVM TI) is the interface to the virtual machine. See JVM Tool Interface.

  • The Java Debug Interface (JDI) defines information and requests at the user code level. It is a pure Java programming language interface for debugging Java programming language applications. In JPDA, the JDI is a remote view in the debugger process of a virtual machine in the process being debugged. It is implemented by the front end, whereas a debugger-like application (for example, IDE, debugger, tracer, or monitoring tool) is the client. See the module jdk.jdi.

  • The Java Debug Wire Protocol (JDWP) defines the format of information and requests transferred between the process being debugged and the debugger front end, which implements the JDI.

The jdb utility is included in the JDK as an example command-line debugger. The jdb utility uses the JDI to launch or connect to the target VM. See The jdb Utility.

In addition to traditional debugger-type tools, the JDI can also be used to develop tools that help in postmortem diagnostics and scenarios where the tool needs to attach to a process in a noncooperative manner (for example, a hung process).
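
As a small illustration of the JDI (a sketch only; the class name is illustrative), the following program uses the jdk.jdi module to list the available connectors, much like the jdb -listconnectors command described earlier.

import com.sun.jdi.Bootstrap;
import com.sun.jdi.connect.Connector;

public class ListConnectors {
    public static void main(String[] args) {
        // The VirtualMachineManager is the JDI entry point; allConnectors()
        // returns the launching, attaching, and listening connectors.
        for (Connector connector : Bootstrap.virtualMachineManager().allConnectors()) {
            System.out.println(connector.name() + " (" + connector.transport().name() + ")");
            System.out.println("    " + connector.description());
        }
    }
}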

Postmortem Diagnostic Tools

List of tools and options available for post-mortem diagnostics of problems between the application and the Java HotSpot VM.

Table 2-3 summarizes the options and tools that are designed for postmortem diagnostics. If an application crashes, then these options and tools can be used to get additional information, either at the time of the crash or later using information from the crash dump.

Table 2-3 Postmortem Diagnostics Tools

Tool or Option Description and Usage

Fatal Error Log

When an irrecoverable (fatal) error occurs, an error log is created. This file contains information obtained at the time of the fatal error. In many cases, it is the first item to examine when a crash occurs. See Fatal Error Log.

-XX:+HeapDumpOnOutOfMemoryError option

This command-line option specifies the generation of a heap dump when the VM detects a java.lang.OutOfMemoryError. See The -XX:HeapDumpOnOutOfMemoryError Option.

-XX:OnError option

This command-line option specifies a sequence of user-supplied scripts or commands to be executed when a fatal error occurs. For example, on Windows, this option can execute a command to force a crash dump. This option is very useful on systems where a postmortem debugger is not configured. See The -XX:OnError Option.

-XX:+ShowMessageBoxOnError option

This command-line option suspends a process when a fatal error occurs. Depending on the user response, the option can launch the native debugger (for example, dbx, gdb, msdev) to attach to the VM. See The -XX:ShowMessageBoxOnError Option.

Other -XX options

Several other -XX command-line options can be useful in troubleshooting. See Other -XX Options.

jhsdb jinfo utility

This utility can get configuration information from a core file obtained from a crash or from a core file obtained using the gcore utility. See The jinfo Utility.

jhsdb jmap utility

This utility can get memory map information, including a heap histogram, from a core file obtained from a crash or from a core file obtained using the gcore utility. See The jmap Utility.

jstack utility

This utility can get Java and native stack information from a Java process. On the Oracle Solaris and Linux operating systems, the utility can also get the information from a core file or a remote debug server. See The jstack Utility.

Native tools

Each operating system has native tools and utilities that can be used for postmortem diagnosis. See Native Operating System Tools.

Hung Processes Tools

List of tools and options for diagnosing problems between the application and the Java HotSpot VM in a hung process.

Table 2-4 summarizes the options and tools that can help in scenarios involving a hung or deadlocked process. These tools do not require any special options to start the application.

Java Mission Control, Java Flight Recorder, and the jcmd utility can be used to diagnose problems with JVM and Java applications. It is suggested to use the latest utility, jcmd, instead of the previous jstack, jinfo, and jmap utilities for enhanced diagnostics and reduced performance overhead.

Table 2-4 Hung Process Tools

Tool or Option Description and Usage

Ctrl+Break handler

(Control+\ or kill -QUIT pid on the Oracle Solaris and Linux operating systems, and Control+Break on Windows)

This key combination performs a thread dump and deadlock detection. The Ctrl+Break handler can optionally print a list of concurrent locks and their owners, as well as a heap histogram. See Control+Break Handler.

jcmd utility

This utility is used to send diagnostic command requests to the JVM, where these requests are useful for controlling Java Flight Recordings (JFRs). The recordings are used to troubleshoot and diagnose JVM and Java applications with flight recording events. See The jcmd Utility.

jdb utility

Debugger support includes attaching connectors, which allow jdb and other Java language debuggers to attach to a process. This can help show what each thread is doing at the time of a hang or deadlock. See The jdb Utility.

jinfo utility

This utility can get configuration information from a Java process. See The jinfo Utility.

jmap utility

This utility can get memory map information, including a heap histogram, from a Java process. The jhsdb jmap utility can be used if the process is hung. See The jmap Utility.

jstack utility

This utility can obtain Java and native stack information from a Java process. See The jstack Utility.

Native tools

Each operating system has native tools and utilities that can be useful in hang or deadlock situations. See Native Operating System Tools.

Monitoring Tools

Tools and options for monitoring running applications and detecting problems are available in the JDK and in the operating system.

The tools listed in Table 2-5 are designed for monitoring applications that are running.

Java Mission Control, Java Flight Recorder, and the jcmd utility can be used to diagnose problems with JVM and Java applications. It is suggested to use the latest utility, jcmd, instead of the previous jstack, jinfo, and jmap utilities for enhanced diagnostics and reduced performance overhead.

Table 2-5 Monitoring Tools

Tool or Option Description and Usage

Java Mission Control

Java Mission Control (JMC) is a new JDK profiling and diagnostic tool platform for HotSpot JVM. It is a tool suite for basic monitoring, managing, and production time profiling and diagnostics with high performance. Java Mission Control minimizes the performance overhead that's usually an issue with profiling tools.

jcmd utility

This utility is used to send diagnostic command requests to the JVM, where these requests are useful for controlling Java Flight Recordings. The JFRs are used to troubleshoot and diagnose JVM and Java applications with flight recording events. See The jcmd Utility.

JConsole utility

This utility is a monitoring tool that is based on Java Management Extensions (JMX). The tool uses the built-in JMX instrumentation in the Java Virtual Machine to provide information about the performance and resource consumption of running applications. See JConsole.

jmap utility

This utility can get memory map information, including a heap histogram, from a Java process or a core file. See The jmap Utility.

jps utility

This utility lists the instrumented Java HotSpot VMs on the target system. The utility is very useful in environments where the VM is embedded, that is, it is started using the JNI Invocation API rather than the java launcher. See The jps Utility.

jstack utility

This utility can get Java and native stack information from a Java process. The utility can also get the information from a core file. See The jstack Utility.

jstat utility

This utility uses the built-in instrumentation in Java to provide information about performance and resource consumption of running applications. The tool can be used when diagnosing performance issues, especially those related to heap sizing and garbage collection. See The jstat Utility.

jstatd daemon

This tool is a Remote Method Invocation (RMI) server application that monitors the creation and termination of instrumented Java Virtual Machines and provides an interface to allow remote monitoring tools to attach to VMs running on the local host. See The jstatd Daemon.

visualgc utility

This utility provides a graphical view of the garbage collection system. As with jstat, it uses the built-in instrumentation of Java HotSpot VM. See The visualgc Tool.

Native tools

Each operating system has native tools and utilities that can be useful for monitoring purposes. For example, the dynamic tracing (DTrace) capability introduced in Oracle Solaris 10 operating system performs advanced monitoring. See Native Operating System Tools.

Other Tools, Options, Variables, and Properties

List of general troubleshooting tools, options, variables, and properties that can help to diagnose issues.

In addition to the tools that are designed for specific types of problems, the tools, options, variables, and properties listed in Table 2-6 can help in diagnosing other issues.

JDK Mission Control, Flight Recorder, and the jcmd utility can be used for diagnosing problems with JVM and Java applications. It is suggested to use the latest utility, jcmd, instead of the previous jstack, jinfo, and jmap utilities for enhanced diagnostics and reduced performance overhead.

Table 2-6 General Troubleshooting Tools and Options

Tool or Option Description and Usage

JDK Mission Control

JDK Mission Control (JMC) is a new JDK profiling and diagnostic tool platform for HotSpot JVM. It is a tool suite for basic monitoring, managing, and production time profiling and diagnostics with high performance. JMC minimizes the performance overhead that's usually an issue with profiling tools. See JDK Mission Control.

jcmd utility

This utility is used to send diagnostic command requests to the JVM, where these requests are useful for controlling Java Flight Recordings (JFRs). The JFRs are used to troubleshoot and diagnose JVM and Java applications with flight recording events.

jinfo utility

This utility can dynamically set, unset, and change the values of certain JVM flags for a specified Java process. On Oracle Solaris and Linux operating systems, it can also print configuration information.

jrunscript utility

This utility is a command-line script shell, which supports both interactive and batch-mode script execution.

Oracle Solaris Studio dbx debugger

This is an interactive, command-line debugging tool, which allows you to have complete control of the dynamic execution of a program, including stopping the program and inspecting its state. For details, see the latest dbx documentation located at Oracle Solaris Studio Program Debugging.

Oracle Solaris Studio Performance Analyzer

This tool can help you assess the performance of your code, identify potential performance problems, and locate the part of the code where the problems occur. The Performance Analyzer can be used from the command line or from a graphical user interface. For details, see the Oracle Solaris Studio Performance Analyzer.

Sun's Dataspace Profiling: DProfile

This tool provides insight into the flow of data within Sun computing systems, helping you identify bottlenecks in both software and hardware. DProfile is supported in the Sun Studio 11 compiler suite through the Performance Analyzer GUI. See DTrace or Dynamic Tracing diagnostic tool.

-Xcheck:jni option

This option is useful in diagnosing problems with applications that use the Java Native Interface (JNI) or that employ third-party libraries (some JDBC drivers, for example). See The -Xcheck:jni Option.

-verbose:class option

This option enables logging of class loading and unloading. See The -verbose:class Option.

-verbose:gc option

This option enables logging of garbage collection information. See The -verbose:gc Option.

-verbose:jni option

This option enables logging of JNI. See The -verbose:jni Option.

JAVA_TOOL_OPTIONS environment variable

This environment variable allows you to specify the initialization of tools, specifically the launching of native or Java programming language agents using the -agentlib or -javaagent options. See Environment Variables and System Properties.

java.security.debug system property

This system property controls whether the security checks in the JRE print trace messages during execution. See The java.security.debug System Property. A combined command-line sketch of this property and several of the options above follows this table.
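For illustration only, the following commands show how some of the options and variables listed in Table 2-6 are typically supplied. The application and agent names (MyApp, agent.jar) are placeholders, and the export syntax assumes a POSIX shell:

    java -verbose:gc -Xcheck:jni -jar MyApp.jar
        (logs garbage collection and performs additional JNI checks for a single run)

    java -Djava.security.debug=access MyApp
        (traces security checks while the application runs)

    export JAVA_TOOL_OPTIONS="-javaagent:/path/to/agent.jar"
        (asks every JVM subsequently started from this shell to load the specified agent)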

The java.lang.management Package

The java.lang.management package provides the management interface for the monitoring and management of the JVM and the operating system.

Specifically, it covers interfaces for the following systems:

  • Class loading

  • Compilation

  • Garbage collection

  • Memory manager

  • Runtime

  • Threads

In addition to the java.lang.management package, the JDK release includes platform extensions in the com.sun.management package. The platform extensions include a management interface to get detailed statistics from garbage collectors that perform collections in cycles. These extensions also include a management interface to get additional memory statistics from the operating system.
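The following minimal sketch shows how a standalone program can read several of these interfaces through the standard java.lang.management.ManagementFactory entry point. The class name JvmStats is only illustrative:

    import java.lang.management.ClassLoadingMXBean;
    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.ThreadMXBean;

    public class JvmStats {
        public static void main(String[] args) {
            // Class loading counters
            ClassLoadingMXBean cl = ManagementFactory.getClassLoadingMXBean();
            System.out.println("Loaded classes: " + cl.getLoadedClassCount());

            // Heap usage from the memory interface
            MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
            System.out.println("Heap used: " + mem.getHeapMemoryUsage().getUsed() + " bytes");

            // Live thread count
            ThreadMXBean threads = ManagementFactory.getThreadMXBean();
            System.out.println("Live threads: " + threads.getThreadCount());

            // Per-collector statistics from the garbage collection interface
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.println(gc.getName() + ": " + gc.getCollectionCount()
                        + " collections, " + gc.getCollectionTime() + " ms");
            }
        }
    }

The same MXBeans can also be read over a JMX connection, which is how tools such as JConsole obtain this information from a running JVM.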

The java.lang.instrument Package

The java.lang.instrument package provides services that allow the Java programming language agents to instrument programs running on the JVM.

Instrumentation is used by tools such as profilers, tools for tracing method calls, and many others. The package facilitates both load-time and dynamic instrumentation. It also includes methods to get information about the loaded classes and information about the amount of storage consumed by a given object.
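The following minimal sketch, with illustrative names, shows a load-time agent that uses the java.lang.instrument.Instrumentation interface. It assumes the agent is packaged in a JAR file whose manifest contains a Premain-Class entry naming SizeAgent, and that the target application is started with -javaagent:sizeagent.jar:

    import java.lang.instrument.Instrumentation;

    public class SizeAgent {
        private static volatile Instrumentation inst;

        // Invoked by the JVM before the application's main method runs
        public static void premain(String agentArgs, Instrumentation instrumentation) {
            inst = instrumentation;
            System.out.println("Classes loaded so far: "
                    + instrumentation.getAllLoadedClasses().length);
        }

        // Example of the storage query mentioned above: an implementation-specific
        // approximation of the storage consumed by a single object
        public static long sizeOf(Object obj) {
            return inst.getObjectSize(obj);
        }
    }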

The java.lang.Thread Class

The java.lang.Thread class has a static method called getAllStackTraces, which returns a map of stack traces for all live threads.

The Thread class also has a method called getState, which returns the thread state; states are defined by the java.lang.Thread.State enumeration. These methods can be useful when you add diagnostic or monitoring capabilities to an application.
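The following minimal sketch, with an illustrative class name, uses these two methods to print the name, state, and stack trace of every live thread:

    import java.util.Map;

    public class ThreadReport {
        public static void main(String[] args) {
            // Snapshot of the stack traces of all live threads
            Map<Thread, StackTraceElement[]> traces = Thread.getAllStackTraces();
            for (Map.Entry<Thread, StackTraceElement[]> entry : traces.entrySet()) {
                Thread thread = entry.getKey();
                // getState returns a value of the java.lang.Thread.State enumeration
                System.out.println(thread.getName() + " (" + thread.getState() + ")");
                for (StackTraceElement frame : entry.getValue()) {
                    System.out.println("    at " + frame);
                }
            }
        }
    }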

JVM Tool Interface

The JVM Tool Interface (JVM TI) is a native (C/C++) programming interface that can be used by a wide range of development and monitoring tools.

JVM TI provides an interface for the full breadth of tools that need access to the VM state, including but not limited to profiling, debugging, monitoring, thread analysis, and coverage analysis tools.

Some examples of agents that rely on JVM TI are the following:

  • Java Debug Wire Protocol (JDWP)

  • The java.lang.instrument package

The specification for JVM TI can be found in the JVM Tool Interface documentation.

The jrunscript Utility

The jrunscript utility is a command-line script shell.

It supports script execution in both interactive mode and batch mode. By default, the shell uses JavaScript, but you can specify any other scripting language for which you supply the path to the script engine JAR file or .class files.

Thanks to the communication between the Java language and the scripting language, the jrunscript utility supports an exploratory programming style.
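For illustration only, the following commands show typical interactive and batch-mode invocations. The script file and engine JAR names are placeholders, and the default script engine depends on the JDK release, so a suitable engine may need to be supplied on the class path:

    jrunscript
        (starts an interactive shell in the default scripting language)

    jrunscript -e "print(java.lang.Runtime.getRuntime().availableProcessors())"
        (evaluates a one-line script that calls into the Java API)

    jrunscript -l js -f script.js
        (runs a script file in batch mode)

    jrunscript -cp myengine.jar -l mylanguage -f script.txt
        (uses another script engine supplied on the class path)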

The jstatd Daemon

The jstatd daemon is an RMI server application that monitors the creation and termination of each instrumented Java HotSpot VM, and provides an interface to allow remote monitoring tools to attach to JVMs running on the local host.

For example, this daemon allows the jps utility to list processes on a remote system.
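For illustration only, the following sketch shows one way to start the daemon and attach to it remotely. The policy file name, host name, port, and process ID are placeholders; the policy file must grant the daemon the permissions it needs, and its contents are not shown here:

    On the remote host, start the daemon with a security policy and an RMI registry port:

        jstatd -J-Djava.security.policy=jstatd.policy -p 1099

    On the monitoring host, list the remote JVMs and then sample their garbage collection statistics:

        jps -l remotehost:1099
        jstat -gcutil <pid>@remotehost:1099 1000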

Note:

The instrumentation is not accessible on the FAT32 file system.

Troubleshooting Tools Based on the Operating System

List of native Windows tools that can be used for troubleshooting problems.

Table 2-7 lists the troubleshooting tools available on the Windows operating system.

Table 2-7 Native Troubleshooting Tools on Windows

Tool Description

dumpchk

Command-line utility to verify that a memory dump file was created correctly. This tool is included in the Debugging Tools for Windows download available from the Microsoft website. See Collect Crash Dumps on Windows.

msdev debugger

Command-line utility that can be used to launch Visual C++ and the Win32 debugger

userdump

The User Mode Process Dumper is included in the OEM Support Tools download available from the Microsoft website. See Collect Crash Dumps on Windows.

windbg

Windows debugger can be used to debug Windows applications or crash dumps. This tool is included in the Debugging Tools for Windows download available from the Microsoft website. See Collect Crash Dumps on Windows.

/Md and /Mdd compiler options

Compiler options that automatically include extra support for tracking memory allocations

Table 2-8 describes some of the troubleshooting tools available on the Linux operating system.

Table 2-8 Native Troubleshooting Tools on Linux

Tool Description

c++filt

Demangle C++ mangled symbol names. This utility is delivered with the native C++ compiler suite: gcc on Linux.

gdb

GNU debugger

libnjamd

Memory allocation tracking

lsstack

Print thread stack (similar to pstack in the Oracle Solaris operating system)

Not all distributions provide this tool by default; therefore, you might have to download it from Open Source downloads.

ltrace

Library call tracer (equivalent to truss -u in the Oracle Solaris operating system)

Not all distributions provide this tool by default; therefore, you might have to download it from Open Source downloads.

mtrace and muntrace

GNU malloc tracer

proc tools such as pmap and pstack

Some, but not all, of the proc tools on the Oracle Solaris operating system have equivalent tools on Linux. Core file support is not as good on Linux as on the Oracle Solaris operating system; for example, pstack does not work for core dumps.

strace

System call tracer (equivalent to truss -t in the Oracle Solaris operating system)

top

Display most CPU-intensive processes.

vmstat

Report information about processes, memory, paging, block I/O, traps, and CPU activity.

Table 2-9 lists troubleshooting tools available on Oracle Solaris operating system.

Table 2-9 Native Troubleshooting Tools on Oracle Solaris Operating System

Tool Description

coreadm

Specify name and location of core files produced by the JVM.

cpustat

Monitor system behavior using CPU performance counters.

cputrack

Monitor process and LWP behavior using CPU performance counters.

c++filt

Demangle C++ mangled symbol names. This utility is delivered with the native C++ compiler suite: SUNWspro on the Oracle Solaris operating system.

dtrace

Introduced in the Oracle Solaris 10 operating system, DTrace is a dynamic tracing compiler and tracing utility. It can perform dynamic tracing of kernel functions, system calls, and user functions. This tool allows arbitrary, safe scripting to be executed at entry, exit, and other probe points. Scripts are written in the D programming language, which is C-like but with safe pointer semantics. See also DTrace Tool.

gcore

Force a core dump of a process. The process continues after the core dump is written.

intrstat

Report statistics on the CPU consumed by interrupt threads.

iostat

Report I/O statistics.

libumem

Introduced in the Oracle Solaris 9 operating system update 3, this library provides fast, scalable object-caching memory allocation and extensive debugging support. The tool can be used to find and fix memory management bugs. See Find Leaks with the libumem Tool.

mdb

Modular debugger for kernel and user applications and crash dumps

netstat

Display the contents of various network-related data structures.

pargs

Print process arguments, environment variables, or the auxiliary vector. Long output is not truncated as it would be by other commands, such as ps.

pfiles

Print information on process file descriptors. Starting with the Oracle Solaris 10 operating system, the tool also prints the file name.

pldd

Print shared objects loaded by a process.

pmap

Print memory layout of a process or core file, including heap, data, and text sections. Starting with Oracle Solaris 10, stack segments are clearly identified with the text [stack] along with the thread ID. See Improvements to the pmap Utility.

prstat

Report statistics for active Oracle Solaris operating system processes (similar to top).

prun

Set the process to running mode (reverse of pstop).

ps

List all processes.

psig

List the signal handlers of a process.

pstack

Print stack of threads of a given process or core file. Starting with the Oracle Solaris 10 operating system, Java method names can be printed for Java frames. See Improvements to the pstack Utility.

pstop

Stop the process (suspend).

ptree

Print the process tree that contains the given PID.

sar

System activity reporter

sdtprocess

Display most CPU-intensive processes (similar to top).

sdtperfmeter

Display graphs that show the system performance (for example, CPU, disks, and network).

top

Display most CPU-intensive processes. This tool is available as freeware for the Oracle Solaris operating system, but is not installed by default.

trapstat

Display runtime trap statistics (SPARC only).

truss

Trace entry and exit events for system calls, user-mode functions, and signals; optionally stop the process at one of these events. This tool also prints the arguments of system calls and user functions.

vmstat

Report system virtual memory statistics.

watchmalloc

Track memory allocations.