2 Diagnostic Tools

The Java Development Kit (JDK) provides diagnostic tools and troubleshooting tools specific to various operating systems. Custom diagnostic tools can also be developed using the APIs provided by the JDK.

This chapter contains the following sections:

Diagnostic Tools Overview

Most of the command-line utilities described in this section are either included in the JDK or are native operating system tools and utilities.

JDK command-line utilities can be used to diagnose issues and monitor applications that are deployed in the Java runtime environment.

In general, the diagnostic tools and options use various mechanisms to get the information they report. The mechanisms are specific to the virtual machine (VM) implementation, operating systems, and release. Frequently, only a subset of the tools is applicable to a given issue at a particular time. Command-line options that are prefixed with -XX are specific to Java HotSpot VM. See Java HotSpot VM Command-Line Options.

Note:

The -XX options are not part of the Java API and can vary from one release to the next.

The tools and options are divided into several categories, depending on the type of problem that you are troubleshooting. Certain tools and options might fall into more than one category.

Note:

Some command-line utilities described in this section are experimental, for example, the jstack, jinfo, and jmap utilities. It is recommended to use the latest diagnostic utility, jcmd, instead of the earlier jstack, jinfo, and jmap utilities.

JDK Mission Control

JDK Mission Control (JMC) is a production-time profiling and diagnostics tool. It includes tools to monitor and manage your Java application with a very small performance overhead, and is suitable for monitoring applications running in production.

JMC is not part of the regular JDK installation. For more information on JMC downloads and documentation, see JDK Mission Control Page.

JMC consists of:
  • JVM Browser shows running Java applications and their JVMs.
  • JMX Console is a mechanism for monitoring and managing JVMs. It connects to a running JVM, collects and displays its characteristics in real time, and enables you to change some of its runtime properties through Managed Beans (MBeans). You can also create rules that trigger on certain events (for example, send an e-mail if the application's CPU usage reaches 90 percent).

  • Flight Recorder (JFR) is a tool for collecting diagnostic and profiling data about a running Java application. It is integrated into the JVM and causes very small performance overhead, so it can be used in production environments. JFR continuously saves large amounts of data about the running applications. This profiling information includes thread samples, lock profiles, and garbage collection details. JFR presents diagnostic information in logically grouped tables and charts. It enables you to select the range of time and level of detail necessary to focus on the problem. Data collected by JFR can be essential when contacting Oracle support to help diagnose issues with your Java application.

  • Plug-ins help in heap dump analysis and DTrace recording. See Plug-in Details. JMC plug-ins connect to a JVM using the Java Management Extensions (JMX) agent. For more information about JMX, see the Java Platform, Standard Edition Java Management Extensions Guide.

Troubleshoot with JDK Mission Control

JMC provides the following features or functionalities that can help you in troubleshooting:

  • Java Management console (JMX) connects to a running JVM, and collects and displays key characteristics in real time.
  • Triggers user-provided custom actions and rules for the JVM.
  • Experimental plug-ins from the JMC tool provide troubleshooting activities.
  • Flight Recording in JMC is available to analyze events. The preconfigured tabs enable you to easily drill down into various areas of common interest, such as code, memory and garbage collection, threads, and I/O. The Automated Analysis Results page of flight recordings helps you to diagnose issues more quickly. The provided rules and heuristics help you find functional and performance problems in your application and provide tuning tips. Some rules that operate with relatively unknown concepts, like safepoints, provide explanations and links to further information. Some rules are parameterized and can be configured to make more sense in your particular environment. Individual rules can be enabled or disabled as you see fit.
    • Flight Recorder in the JMC application presents diagnostic information in logically grouped tables, charts, and dials. It enables you to select the range of time and level of detail necessary to focus on the problem.
  • The JMC plug-ins connect to a JVM using the Java Management Extensions (JMX) agent. JMX is a standard API for the management and monitoring of resources such as applications, devices, services, and the Java Virtual Machine.

Flight Recorder

Flight Recorder (JFR) is a profiling and event collection framework built into the JDK.

Flight Recorder allows Java administrators and developers to gather detailed low-level information about how a JVM and Java applications are behaving. You can use JMC to visualize the data collected by JFR. Flight Recorder and JMC together create a complete toolchain to continuously collect low-level and detailed runtime information enabling after-the-fact incident analysis.

The advantages of using JFR are:

  • It records data about JVM events as they occur, with a timestamp.
  • Recording events with JFR enables you to preserve the execution states to analyze issues. You can access the data anytime to better understand problems and resolve them.
  • JFR can record a large amount of data on production systems while keeping the overhead of the recording process low.
  • It is best suited for recording latencies. It records situations where the application is not executing as expected and provides details on the bottlenecks.
  • It provides insight into how programs interact with the execution environment as a whole, ranging from the hardware, operating system, JVM, and JDK to the Java application environment.

Flight recordings can be started when the application is started or while the application is running. The data is recorded as time-stamped data points called events. Events are categorized as follows:

  • Duration events: occur over a period of time, with a specific start time and stop time.
  • Instant events: occur instantly and are logged immediately, for example, a thread becoming blocked.
  • Sample events: occur at regular intervals to check the overall health of the system, for example, printing heap diagnostics every minute.
  • Custom events: user-defined events created using JMC or the JFR APIs.
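As a sketch of the last category, a custom event can be declared with the jdk.jfr API by extending jdk.jfr.Event. The event name, label, and field below are hypothetical, not part of the JDK:

```java
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

public class CustomEventDemo {
    // Hypothetical custom event with one field; the name and label
    // are illustrative only.
    @Name("com.example.OrderLookup")
    @Label("Order Lookup")
    static class OrderLookupEvent extends Event {
        @Label("Order Id")
        long orderId;
    }

    public static void main(String[] args) {
        OrderLookupEvent event = new OrderLookupEvent();
        event.begin();            // mark the start of the measured work
        event.orderId = 42L;      // the work being measured would go here
        event.end();              // mark the end of the measured work
        event.commit();           // written only if a recording is active
    }
}
```

Committing an event when no recording is active is a cheap no-op, so such instrumentation can stay in production code.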

In addition, there are predefined events that are enabled in a recording template. Some templates only save very basic events and have virtually no impact on performance. Other templates may come with slight performance overhead and may also trigger garbage collections to gather additional data. The following templates are provided with Flight Recorder in the <JDK_ROOT>/lib/jfr directory:

  • default.jfc: Collects a predefined set of data with low overhead.
  • profile.jfc: Provides more data than the default.jfc template, but with overhead and impact on performance.

Flight Recorder produces the following types of recordings:

  • Time fixed recordings: A time fixed recording, also known as a profiling recording, runs for a set amount of time and then stops. Usually, a time fixed recording has more events enabled and may have a slightly bigger performance effect. The events that are turned on can be modified according to your requirements. Time fixed recordings are automatically dumped and opened.

    Typical use cases for a time fixed recording are as follows:

    • Profile which methods are run the most and where most objects are created.

    • Look for classes that use more and more heap, which indicates a memory leak.

    • Look for bottlenecks due to synchronization and many more such use cases.

  • Continuous recordings: A continuous recording is a recording that is always on and saves, for example, the last six hours of data. During this recording, JFR collects events and writes data to the global buffer. When the global buffer fills up, the oldest data is discarded. The data currently in the buffer is written to the specified file whenever you request a dump, or if the dump is triggered by a rule.

    A continuous recording with the default template has low overhead and gathers a lot of useful data. However, this template doesn't gather heap statistics or allocation profiling.
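Continuous recordings can also be managed programmatically with the jdk.jfr API. The following sketch mirrors the behavior described above; the six-hour window and dump file name are illustrative:

```java
import java.nio.file.Path;
import java.time.Duration;
import jdk.jfr.Configuration;
import jdk.jfr.Recording;

public class ContinuousRecordingDemo {
    public static void main(String[] args) throws Exception {
        // Load the bundled low-overhead "default" template.
        Configuration config = Configuration.getConfiguration("default");
        try (Recording recording = new Recording(config)) {
            recording.setToDisk(true);                 // spill buffers to the disk repository
            recording.setMaxAge(Duration.ofHours(6));  // keep roughly the last six hours
            recording.start();
            // ... the application runs; the oldest data is discarded as the window fills ...
            recording.dump(Path.of("continuous-dump.jfr")); // snapshot the buffer on demand
            recording.stop();
        }
    }
}
```

The dump call corresponds to requesting a dump from JMC or jcmd: it writes whatever is currently buffered without stopping the recording.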

Produce a Flight Recording

The following sections describe different ways to produce a flight recording.

Use JMC to Start a Flight Recording

Prerequisites:

Start the JMC application.

To start a flight recording, find your JVM/server in the JVM Browser.

  1. Right-click the JVM/server you want to record and select Start Flight Recording....
    The Start Flight Recording window opens.
  2. Click Browse to find a suitable location and file name for saving the recording.
  3. Select either Time fixed recording (profiling recording) or Continuous recording. For continuous recordings, you can specify the maximum size or maximum age of events you want to save.
  4. Select the flight recording template in the Event settings drop-down list. Templates define the events that you want to record. To create your own templates, click Template Manager. However, for most use cases, select either the Continuous template (for very low overhead recordings) or the Profiling template (for more data and slightly more overhead).
  5. Click Finish to start the recording or click Next to modify the event options defined in the selected template.
  6. Modify the event options for the flight recording. The default settings give a good balance between data and performance. In some cases, you might want to add extra events depending on your use case.

    For example:

    The Threshold value is the minimum duration an event must last to be recorded. For example, by default, synchronization events above 10 ms are collected. This means that if a thread waits for a lock for more than 10 ms, an event is saved. You can lower this value to get more detailed data for short contentions.

    The Thread Dump setting gives you an option to do periodic thread dumps. These are normal textual thread dumps.

  7. Click Finish to start the recording or click Next to modify the event details defined in the selected template.
  8. Modify the event details for the selected flight recording template. Event details define whether the event should be included in the recording. For some events, you can also define whether a stack trace should be attached to the event, specify the duration threshold (for duration events) and a request period (for sample events).
  9. Click Back if you want to modify any of the settings set in the previous steps or click Finish to start the recording.
    The new flight recording appears in the Progress View.

    Note:

    Expand the node in the JVM Browser to view the recordings that are running. Right-click any of the recordings to dump, dump whole, dump last part, edit, stop, or close the recording. Stopping a profiling recording still produces a recording file, whereas closing a profiling recording discards the recording.
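The per-event options described in the steps above (stack traces, duration thresholds) also exist as annotations in the jdk.jfr API. This sketch uses a hypothetical event to show the defaults that a template or the wizard can later override:

```java
import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;
import jdk.jfr.StackTrace;
import jdk.jfr.Threshold;

public class EventSettingsDemo {
    // Hypothetical duration event: by default it is recorded only when it
    // lasts longer than 10 ms, with a stack trace attached.
    @Name("com.example.SlowQuery")
    @Label("Slow Query")
    @Threshold("10 ms")
    @StackTrace(true)
    static class SlowQueryEvent extends Event {
        @Label("Statement")
        String statement;
    }

    public static void main(String[] args) {
        SlowQueryEvent event = new SlowQueryEvent();
        event.begin();
        event.statement = "SELECT 1";  // the timed work would go here
        event.end();
        event.commit();                // dropped unless the threshold was exceeded
    }
}
```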

You can set up JMC to automatically start a flight recording if a condition is met using the Triggers tab in the JMX console. For more information, see Use Triggers for Automatic Flight Recordings.

Use Triggers for Automatic Flight Recordings

The Triggers tab allows you to define and activate rules that trigger events when a certain condition is met. For example, you can set up JDK Mission Control to automatically start a flight recording if a condition is met. This is useful for tracking specific JVM runtime issues.

This is done from the JMX console.
  1. To start the JMX console, find your application in the JVM Browser, right-click it, and select Start JMX Console.
  2. Click the Triggers tab at the bottom of the screen.
  3. Click Add. You can choose any MBean in the application, including your own application-specific ones.
    The Add New Rule dialog opens.
  4. Select an attribute for which the rule should trigger and click Next. For example, select java.lang > OperatingSystem > ProcessCpuLoad.
  5. Set the condition on which the rule should trigger and click Next. For example, set a value for the Maximum trigger value, Sustained period, and Limit period.

    Note:

    You can select either the Trigger when condition is met or the Trigger when recovering from condition check box.
  6. Select what action you would like your rule to perform when triggered and click Next. For example, choose Start Time Limited Flight Recording and browse the file destination and recording time. Select the Open automatically checkbox, if you wish to open the flight recording automatically when it is triggered.
  7. Select constraints for your rule and click Next. For example, select the particular dates, days of the week, or time of day when the rule should be active.
  8. Enter a name for your rule and click Finish.
    The rule is added to the My Rules list.
When you select your rule from the Trigger Rules list, the Rule Details pane displays its components in the following tabs. You can edit the condition, action, and constraints if you wish:
  • Condition
  • Action
  • Constraint
Use Startup Flags at the Command Line to Produce a Flight Recording

Use startup flags to start recording when the application is started. If the application is already running, use the jcmd utility to start recording.

Use the following methods to generate a flight recording:
  • Generate a profiling recording when an application is started.

    You can configure a time fixed recording at the start of the application using the -XX:StartFlightRecording option. The following example shows how to run the MyApp application and start a 60-second recording 20 seconds after starting the JVM, which will be saved to a file named myrecording.jfr:

    java -XX:StartFlightRecording.delay=20s,duration=60s,name=myrecording,filename=myrecording.jfr,settings=profile MyApp

    The settings parameter takes the name of a template. Include the path if the template is not in the java-home/lib/jfr directory, which is the location of the default templates. The standard templates are: profile, which gathers more data and is primarily for profiling recordings, and default, which is a low overhead setting made primarily for continuous recordings.

    For a complete description of Flight Recorder flags for the java command, see Advanced Runtime Options for Java in the Java Development Kit Tool Specifications.

  • Generate a continuous recording when an application is started.

    You can start a continuous recording from the command line using the -XX:StartFlightRecording option. The -XX:FlightRecorderOptions option provides additional settings for managing the recording. These flags start a continuous recording that can later be dumped if needed. The following example shows how to run the MyApp application with a continuous recording that saves 6 hours of data to disk. The temporary data is saved to the /tmp folder.

    java -XX:StartFlightRecording.disk=true,maxage=6h,settings=default -XX:FlightRecorderOptions=repository=/tmp MyApp

    Note:

    When you actually dump the recording, you specify a new location for the dumped file, so the files in the repository are only temporary.
  • Generate a recording using diagnostic commands.

    For a running application, you can generate recordings by using Java command-line diagnostic commands. The simplest way to execute a diagnostic command is to use the jcmd tool located in the java-home/bin directory. For more details, see The jcmd Utility.

    The following example shows how to start a recording for the MyApp application with the process ID 5361. 30 minutes of data is recorded and written to /usr/recordings/myapp-recording1.jfr.

    jcmd 5361 JFR.start duration=30m filename=/usr/recordings/myapp-recording1.jfr

Analyze a Flight Recording

The following sections describe different ways to analyze a flight recording:

Analyze a Flight Recording Using JMC

Once the flight recording file opens in JMC, you can look at a number of different areas, such as code, memory, threads, locks, and I/O, and analyze various aspects of the runtime behavior of your application.

The recording file is automatically opened in JMC when a timed recording finishes or when a dump of a running recording is created. You can also open any recording file by double-clicking it or by opening it through the File menu. The flight recording opens in the Automated Analysis Results page, which helps you to diagnose issues more quickly. For example, if you're tuning the garbage collection or tracking down memory allocation issues, you can use the memory view to get a detailed view of individual garbage collection events, allocation sites, garbage collection pauses, and so on. You can visualize the latency profile of your application by looking at the I/O and Threads views, and even drill down into a view representing individual events in the recording.

View Automated Analysis Results Page

The Flight Recorder extracts and analyzes the data from the recordings and then displays color-coded report logs on the Automated Analysis Results page.

By default, results with yellow and red scores are displayed to draw your attention to potential problems. If you want to view all results in the report, click the Show OK Results button (a tick mark) on the top-right side of the page. Similarly, to view the results as a table, click the Table button.

The results are mainly grouped into problems related to the following areas: Java Application, JVM Internals, and Environment.

Clicking a heading in the report, for example, Java Application, displays the corresponding page.

Note:

You can select a respective entry in the Outline view to navigate between the pages of the automated analysis.
Analyze the Java Application

The Java Application dashboard displays the overall health of the Java application.

Concentrate on the parameters having yellow and red scores. The dashboard provides exact references to the problematic situations. Navigate to the specific page to analyze the data and fix the issue.

Threads

The Threads page provides a snapshot of all the threads that belong to the Java application. It reveals information about an application’s thread activity that can help you diagnose problems and optimize application and JVM performance.

Threads are represented in a table, and each row has an associated graph. Graphs can help you to identify problematic execution patterns. The state of each thread is presented as a stack trace, which provides contextual information so that you can instantly view the problem area. For example, you can easily locate the occurrence of a deadlock.

Lock Instances

The Lock Instances page provides further details on threads, specifying the lock information, that is, whether the thread is trying to take a lock or is waiting for a notification on a lock. If a thread has taken any lock, the details are shown in the stack trace.

Memory

One way to detect problems with application performance is to see how it uses memory during runtime.

In the Memory page, the graph represents the heap memory usage of the Java application. Each cycle consists of a Java heap growth phase that represents the period of heap memory allocations, followed by a short drop that represents garbage collection, and then the cycle starts over. The important inference from the graph is that the memory allocations are short-lived, as the garbage collector pushes the heap back down to its starting position in each cycle.

Select the Garbage Collection check box to see the garbage collection pause time in the graph. It indicates that the garbage collector stopped the application during the pause time to do its work. Long pause times lead to poor application performance, which needs to be addressed.

Method Profiling

The Method Profiling page enables you to see how often a specific method is run and how long it takes to run. Bottlenecks are determined by identifying the methods that take a lot of time to execute.

As profiling generates a lot of data, it is not turned on by default. Start a new recording and select Profiling - on server in the Event settings drop-down menu. Do a time fixed recording for a short duration. JFR dumps the recording to the specified file name. Open the Method Profiling page in JMC to see the top allocations; the top packages and classes are displayed. Verify the details in the stack trace. Inspect the code to verify whether the memory allocation is concentrated on a particular object. JFR points to the particular line number where the problem occurs.

JVM Internals

The JVM Internals page provides detailed information about the JVM and its behavior.

One of the most important parameters to observe is Garbage Collections. Garbage collection is a process of deleting unused objects so that the space can be used for allocation of new objects. The Garbage Collections page helps you to better understand the system behavior and garbage collection performance during runtime.

The graph shows the heap usage compared to the pause times and how it varies during the specified period. The page also lists all the garbage collection events that occurred during the recording. Observe the longest pause times against the heap. Long pause times indicate that garbage collections are taking longer during application processing, which implies that garbage collections are freeing less space on the heap. This situation can indicate a memory leak.

For effective memory management, see the Compilations page, which provides details on code compilation along with duration. In large applications, you may have many compiled methods, and memory can be exhausted, resulting in performance issues.

Environment

The Environment page provides information about the environment in which the recording was made. It helps to understand the CPU usage, memory, and operating system that is being used.

See the Processes page to understand concurrent processes running and the competing CPU usage of these processes. The application performance will be affected if many processes use CPU and other system resources.

Check the Event Browser page to see the statistics of all the event types. It helps you to focus on the bottlenecks and take appropriate action to improve application performance.

You can create custom pages using the Event Browser page. Select the required event type from the Event Type Tree and click the Create a new page using the selected event type button in the top right corner of the page. The custom page is listed as a new event page below the Event Browser page.

Analyze a Flight Recording Using the jfr tool or JFR APIs

To access the information in a recording from Flight Recorder, use the jfr tool to print event information, or use the Flight Recorder API to programmatically process the data.

Flight Recorder provides the following methods for reviewing the information that was recorded:

  • jfr tool - Use this command-line tool to print event data from a recording. The tool is located in the java-home/bin directory. For details about this tool, see The jfr Command in the Java Development Kit Tool Specifications
  • Flight Recorder API - Use the jdk.jfr.consumer API to extract and format the information in a recording. For more information, see Flight Recorder API Programmer’s Guide.
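As a minimal sketch of the consumer API, the following program prints the start time and type name of every event in a recording file passed as the first command-line argument:

```java
import java.nio.file.Path;
import jdk.jfr.consumer.RecordedEvent;
import jdk.jfr.consumer.RecordingFile;

public class PrintEvents {
    public static void main(String[] args) throws Exception {
        // Path to a .jfr recording file, for example one dumped by jcmd.
        Path file = Path.of(args[0]);
        try (RecordingFile recording = new RecordingFile(file)) {
            while (recording.hasMoreEvents()) {
                RecordedEvent event = recording.readEvent();
                System.out.println(event.getStartTime() + " "
                        + event.getEventType().getName());
            }
        }
    }
}
```

RecordedEvent also exposes typed accessors such as getDuration() and getValue(String), which can be used to filter for, say, long garbage collection pauses.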

The events in a recording can be used to investigate the following areas:

  • General information
    • Number of events recorded at each time stamp

    • Maximum heap usage

    • CPU usage over time, application's CPU usage, and total CPU usage

      Watch for CPU usage spiking near 100 percent, CPU usage that is too low, or garbage collection pauses that are too long.

    • GC pause time

    • JVM information and system properties set

  • Memory
    • Memory usage over time

      Typically, temporary objects are allocated all the time. When a condition is met, a Garbage Collection (GC) is triggered and all of the objects no longer used are removed. Therefore, the heap usage increases steadily until a GC is triggered, then it drops suddenly. Watch for a steadily increasing heap size over time that could indicate a memory leak.

    • Information about garbage collections, including the time spent doing them

    • Memory allocations made

      The more temporary objects the application allocates, the more garbage collection the application must perform. Reviewing memory allocations helps you find where the most allocations are made and reduce the GC pressure in your application.

    • Classes that have the most live set

      Watch how each object type increases in size during a flight recording. A specific object type that increases a lot in size may indicate a memory leak; however, a small variance is normal. In particular, investigate the top growers of non-standard Java classes.

  • Code
    • Packages and classes that used the most execution time

      Watch where methods are being called from to identify bottlenecks in your application.

    • Exceptions thrown

    • Methods compiled over time as the application was running

    • Number of loaded classes, actual loaded classes and unloaded classes over time

  • Threads
    • CPU usage and the number of threads over time

    • Threads that do most of the code execution

    • Objects that are the most waited for due to synchronization

  • I/O
    • Information about file reads, file writes, socket reads, and socket writes

  • System
    • Information about the CPU, memory and OS of the machine running the application

    • Environment variables and any other processes running at the same time as the JVM

  • Events
    • All of the events in the recording

The jcmd Utility

The jcmd utility is used to send diagnostic command requests to the JVM. These requests are useful for managing recordings from Flight Recorder, troubleshooting, and diagnosing JVM and Java applications.

jcmd must be used on the same machine where the JVM is running, and it must have the same effective user and group identifiers that were used to launch the JVM.

A special command, jcmd <process id/main class> PerfCounter.print, prints all performance counters in the process.

The command jcmd <process id/main class> <command> [options] sends the command to the JVM.

The following example shows diagnostic command requests to the JVM using the jcmd utility.

> jcmd
5485 jdk.jcmd/sun.tools.jcmd.JCmd
2125 MyProgram
 
> jcmd MyProgram (or "jcmd 2125")
2125:
The following commands are available:
Compiler.CodeHeap_Analytics
Compiler.codecache
Compiler.codelist
Compiler.directives_add
Compiler.directives_clear
Compiler.directives_print
Compiler.directives_remove
Compiler.queue
GC.class_histogram
GC.class_stats
GC.finalizer_info
GC.heap_dump
GC.heap_info
GC.run
GC.run_finalization
JFR.check
JFR.configure
JFR.dump
JFR.start
JFR.stop
JVMTI.agent_load
JVMTI.data_dump
ManagementAgent.start
ManagementAgent.start_local
ManagementAgent.status
ManagementAgent.stop
Thread.print
VM.class_hierarchy
VM.classloader_stats
VM.classloaders
VM.command_line
VM.dynlibs
VM.events
VM.flags
VM.info
VM.log
VM.metaspace
VM.native_memory
VM.print_touched_methods
VM.set_flag
VM.stringtable
VM.symboltable
VM.system_properties
VM.systemdictionary
VM.uptime
VM.version
help

For more information about a specific command use 'help <command>'.

> jcmd MyProgram help Thread.print
2125:
Thread.print
Print all threads with stacktraces.
 
Impact: Medium: Depends on the number of threads.
 
Permission: java.lang.management.ManagementPermission(monitor)
 
Syntax : Thread.print [options]
 
Options: (options must be specified using the <key> or <key>=<value> syntax)
        -l : [optional] print java.util.concurrent locks (BOOLEAN, false)
        -e : [optional] print extended thread information (BOOLEAN, false)
 
> jcmd MyProgram Thread.print
2125:
2020-01-21 17:05:10
Full thread dump Java HotSpot(TM) 64-Bit Server VM (14-ea+29-1384 mixed mode):
...

The following sections describe some useful commands and troubleshooting techniques with the jcmd utility:

Useful Commands for the jcmd Utility

The available diagnostic commands depend on the JVM being used. Use jcmd <process id/main class> help to see all available options.

The following are some of the most useful commands of the jcmd tool:

  • Print full HotSpot and JDK version ID.
    jcmd <process id/main class> VM.version
  • Print all the system properties set for a VM.

    There can be several hundred lines of information displayed.

    jcmd <process id/main class> VM.system_properties

  • Print all the flags used for a VM.

    Even if you have provided no flags, some default values will be printed, for example, the initial and maximum heap size.

    jcmd <process id/main class> VM.flags

  • Print the uptime in seconds.

    jcmd <process id/main class> VM.uptime

  • Create a class histogram.

    The results can be rather verbose, so you can redirect the output to a file. Both internal and application-specific classes are included in the list. Classes taking the most memory are listed at the top, in descending order.

    jcmd <process id/main class> GC.class_histogram

  • Create a heap dump.

    jcmd <process id/main class> GC.heap_dump filename=Myheapdump

    This is the same as using jmap -dump:file=<file> <pid>, but jcmd is the recommended tool to use.

  • Create a heap histogram.

    jcmd <process id/main class> GC.class_histogram filename=Myheaphistogram

    This is the same as using jmap -histo <pid>, but jcmd is the recommended tool to use.

  • Print all threads with stack traces.

    jcmd <process id/main class> Thread.print
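The same diagnostic commands can also be invoked in-process over JMX through the com.sun.management:type=DiagnosticCommand MBean, which is what jcmd uses under the covers. A sketch, assuming a HotSpot JVM (the MBean is HotSpot-specific):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class DiagnosticCommandDemo {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("com.sun.management:type=DiagnosticCommand");
        // The "threadPrint" operation corresponds to the Thread.print command.
        String result = (String) server.invoke(
                name,
                "threadPrint",
                new Object[] { new String[0] },              // command arguments
                new String[] { String[].class.getName() });  // operation signature
        System.out.println(result);
    }
}
```

Because this goes through JMX, it also works against a remote JVM via a JMX connector, which jcmd itself cannot do.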

Troubleshoot with the jcmd Utility

Use the jcmd utility to send diagnostic command requests to a running Java Virtual Machine (JVM) or Java application.

The jcmd utility provides the following troubleshooting options:

  • Start recording with Flight Recorder.

    For example, to start a 2-minute recording on the running Java process with the identifier 7060 and save it to C:\TEMP\myrecording.jfr, use the following:

    jcmd 7060 JFR.start name=MyRecording settings=profile delay=20s duration=2m filename=C:\TEMP\myrecording.jfr

  • Check a recording.

    The JFR.check diagnostic command checks a running recording. For example:

    jcmd 7060 JFR.check

  • Stop a recording.

    The JFR.stop diagnostic command stops a running recording and has the option to discard the recording data. For example:

    jcmd 7060 JFR.stop

  • Dump a recording.

    The JFR.dump diagnostic command stops a running recording and has the option to dump recordings to a file. For example:

    jcmd 7060 JFR.dump name=MyRecording filename=C:\TEMP\myrecording.jfr

  • Create a heap dump.

    The preferred way to create a heap dump is

    jcmd <pid> GC.heap_dump filename=Myheapdump

  • Create a heap histogram.

    The preferred way to create a heap histogram is

    jcmd <pid> GC.class_histogram filename=Myheaphistogram

Native Memory Tracking

Native Memory Tracking (NMT) is a Java HotSpot VM feature that tracks internal memory usage of the Java HotSpot VM.

Since NMT doesn't track memory allocations by non-JVM code, you may have to use tools supported by the operating system to detect memory leaks in native code.

The following sections describe how to monitor VM internal memory allocations and diagnose VM memory leaks.

How to Monitor VM Internal Memory

Native Memory Tracking can be set up to monitor memory and ensure that an application does not start to use increasing amounts of memory during development or maintenance.

See Table 2-1 for details about NMT memory categories.

The following sections describe how to get summary or detail data for NMT and how to interpret the sample output.

  • Interpret sample output: From the following sample output, you will see reserved and committed memory. Note that only committed memory is actually used. For example, if you run with -Xms100m -Xmx1000m, then the JVM will reserve 1000 MB for the Java heap. Because the initial heap size is only 100 MB, only 100 MB will be committed to begin with. For a 64-bit machine where address space is almost unlimited, there is no problem if a JVM reserves a lot of memory. The problem arises if more and more memory gets committed, which may lead to swapping or native out of memory (OOM) situations.

    An arena is a chunk of memory allocated using malloc. Memory is freed from these chunks in bulk, when exiting a scope or leaving an area of code. These chunks can be reused in other subsystems to hold temporary memory, for example, per-thread allocations. The arena malloc policy ensures that no memory leaks, so arenas are tracked as a whole rather than as individual objects. Some initial memory cannot be tracked.

    Enabling NMT results in a 5-10 percent JVM performance drop, and NMT adds a header of 2 machine words to every block of malloc'd memory. NMT memory usage is also tracked by NMT itself.

    >jcmd 17320 VM.native_memory
    Native Memory Tracking:
    
    Total: reserved=5699702KB, committed=351098KB
    -                 Java Heap (reserved=4153344KB, committed=260096KB)
                                (mmap: reserved=4153344KB, committed=260096KB)
    
    -                     Class (reserved=1069839KB, committed=22543KB)
                                (classes #3554)
                                (  instance classes #3294, array classes #260)
                                (malloc=783KB #7965)
                                (mmap: reserved=1069056KB, committed=21760KB)
                                (  Metadata:   )
                                (    reserved=20480KB, committed=18944KB)
                                (    used=18267KB)
                                (    free=677KB)
                                (    waste=0KB =0.00%)
                                (  Class space:)
                                (    reserved=1048576KB, committed=2816KB)
                                (    used=2454KB)
                                (    free=362KB)
                                (    waste=0KB =0.00%)
    
    -                    Thread (reserved=24685KB, committed=1205KB)
                                (thread #24)
                                (stack: reserved=24576KB, committed=1096KB)
                                (malloc=78KB #132)
                                (arena=30KB #46)
    
    -                      Code (reserved=248022KB, committed=7890KB)
                                (malloc=278KB #1887)
                                (mmap: reserved=247744KB, committed=7612KB)
    
    -                        GC (reserved=197237KB, committed=52789KB)
                                (malloc=9717KB #2877)
                                (mmap: reserved=187520KB, committed=43072KB)
    
    -                  Compiler (reserved=148KB, committed=148KB)
                                (malloc=19KB #95)
                                (arena=129KB #5)
    
    -                  Internal (reserved=735KB, committed=735KB)
                                (malloc=663KB #1914)
                                (mmap: reserved=72KB, committed=72KB)
    
    -                     Other (reserved=48KB, committed=48KB)
                                (malloc=48KB #4)
    
    -                    Symbol (reserved=4835KB, committed=4835KB)
                                (malloc=2749KB #17135)
                                (arena=2086KB #1)
    
    -    Native Memory Tracking (reserved=539KB, committed=539KB)
                                (malloc=8KB #109)
                                (tracking overhead=530KB)
    
    -               Arena Chunk (reserved=187KB, committed=187KB)
                                (malloc=187KB)
    
    -                   Logging (reserved=4KB, committed=4KB)
                                (malloc=4KB #179)
    
    -                 Arguments (reserved=18KB, committed=18KB)
                                (malloc=18KB #467)
    
    -                    Module (reserved=62KB, committed=62KB)
                                (malloc=62KB #1060)
  • Get detail data: To get a more detailed view of native memory usage, start the JVM with the command-line option -XX:NativeMemoryTracking=detail. This tracks exactly which call sites allocate the most memory.

    The following example shows sample output for virtual memory for tracking level set to detail, which is shown in addition to the summary output above. One way to get this sample output is to run: jcmd <pid> VM.native_memory detail.

    Virtual memory map:
    
    [0x00000000a1000000 - 0x0000000800000000] reserved 30916608KB for Java Heap from
        [0x00007f5b91a2472b] ReservedHeapSpace::try_reserve_heap(unsigned long, unsigned long, bool, char*)+0x20b
        [0x00007f5b91a24de9] ReservedHeapSpace::initialize_compressed_heap(unsigned long, unsigned long, bool)+0x5a9
        [0x00007f5b91a254c6] ReservedHeapSpace::ReservedHeapSpace(unsigned long, unsigned long, bool, char const*)+0x176
        [0x00007f5b919da835] Universe::reserve_heap(unsigned long, unsigned long)+0x65
    
                   [0x00000000a1000000 - 0x0000000117000000] committed 1933312KB from
                [0x00007f5b9132c9be] G1PageBasedVirtualSpace::commit(unsigned long, unsigned long)+0x18e
                [0x00007f5b913414d1] G1RegionsLargerThanCommitSizeMapper::commit_regions(unsigned int, unsigned long, WorkGang*)+0x1a1
                [0x00007f5b913d5c78] HeapRegionManager::commit_regions(unsigned int, unsigned long, WorkGang*)+0x58
                [0x00007f5b913d6c45] HeapRegionManager::expand(unsigned int, unsigned int, WorkGang*)+0x35
    
                   [0x00000007fe000000 - 0x00000007fef00000] committed 15360KB from
                [0x00007f5b9132c9be] G1PageBasedVirtualSpace::commit(unsigned long, unsigned long)+0x18e
                [0x00007f5b913414d1] G1RegionsLargerThanCommitSizeMapper::commit_regions(unsigned int, unsigned long, WorkGang*)+0x1a1
                [0x00007f5b913d5c78] HeapRegionManager::commit_regions(unsigned int, unsigned long, WorkGang*)+0x58
                [0x00007f5b913d7355] HeapRegionManager::expand_exact(unsigned int, unsigned int, WorkGang*)+0xd5
    
  • Get diff from NMT baseline: For both summary and detail level tracking, you can set a baseline after the application is up and running. Do this by running jcmd <pid> VM.native_memory baseline after the application warms up. Then, you can run jcmd <pid> VM.native_memory summary.diff or jcmd <pid> VM.native_memory detail.diff.

    The following example shows sample output for the summary difference in native memory usage since the baseline was set, showing the changes in memory usage by category:

    Native Memory Tracking:
    
    Total: reserved=33485260KB +28KB, committed=497784KB +96KB
    
    -                 Java Heap (reserved=30916608KB, committed=393216KB)
                                (mmap: reserved=30916608KB, committed=393216KB)
     
    -                     Class (reserved=1048702KB, committed=254KB)
                                (classes #507)
                                (  instance classes #421, array classes #86)
                                (malloc=126KB #635)
                                (mmap: reserved=1048576KB, committed=128KB)
                                (  Metadata:   )
                                (    reserved=8192KB, committed=192KB)
                                (    used=118KB)
                                (    free=74KB)
                                (    waste=0KB =0.00%)
                                (  Class space:)
                                (    reserved=1048576KB, committed=128KB)
                                (    used=5KB)
                                (    free=123KB)
                                (    waste=0KB =0.00%)
     
    -                    Thread (reserved=35984KB, committed=1432KB +68KB)
                                (thread #0)
                                (stack: reserved=35896KB, committed=1344KB +68KB)
                                (malloc=49KB #212)
                                (arena=39KB #68)
     
    -                      Code (reserved=247729KB, committed=7593KB)
                                (malloc=45KB #438)
                                (mmap: reserved=247684KB, committed=7548KB)
     
    -                        GC (reserved=1209971KB, committed=77267KB)
                                (malloc=29183KB #872)
                                (mmap: reserved=1180788KB, committed=48084KB)
     
    -                  Compiler (reserved=168KB, committed=168KB)
                                (malloc=3KB #34)
                                (arena=165KB #5)
    

    The following example is a sample output that shows the detail difference in native memory usage since the baseline, and is a great way to find specific memory leaks:

    [0x00007f5b9175ea8b] MemBaseline::aggregate_virtual_memory_allocation_sites()+0x11b
    [0x00007f5b9175ed68] MemBaseline::baseline_allocation_sites()+0x188
    [0x00007f5b9175efff] MemBaseline::baseline(bool)+0x1cf
    [0x00007f5b917d19a4] NMTDCmd::execute(DCmdSource, Thread*)+0x2b4
                                 (malloc=1KB type=Native Memory Tracking +1KB #18 +18)
    
    [0x00007f5b917635b0] MallocAllocationSiteWalker::do_malloc_site(MallocSite const*)+0x40
    [0x00007f5b91740bc8] MallocSiteTable::walk_malloc_site(MallocSiteWalker*)+0x78
    [0x00007f5b9175ec32] MemBaseline::baseline_allocation_sites()+0x52
    [0x00007f5b9175efff] MemBaseline::baseline(bool)+0x1cf
                                 (malloc=11KB type=Native Memory Tracking +10KB #156 +136)
    
    [0x00007f5b91a2472b] ReservedHeapSpace::try_reserve_heap(unsigned long, unsigned long, bool, char*)+0x20b
    [0x00007f5b91a24de9] ReservedHeapSpace::initialize_compressed_heap(unsigned long, unsigned long, bool)+0x5a9
    [0x00007f5b91a254c6] ReservedHeapSpace::ReservedHeapSpace(unsigned long, unsigned long, bool, char const*)+0x176
    [0x00007f5b919da835] Universe::reserve_heap(unsigned long, unsigned long)+0x65
                                 (mmap: reserved=30916608KB, committed=475136KB +81920KB Type=Java Heap)
    
    [0x00007f5b91804557] thread_native_entry(Thread*)+0xe7
                                 (mmap: reserved=34868KB, committed=1224KB +68KB Type=Thread Stack)
    
    [0x00007f5b91a23c63] ReservedSpace::ReservedSpace(unsigned long, unsigned long)+0x213
    [0x00007f5b912df57c] G1CollectedHeap::create_aux_memory_mapper(char const*, unsigned long, unsigned long)+0x3c
    [0x00007f5b912e4f13] G1CollectedHeap::initialize()+0x333
    [0x00007f5b919da5dd] universe_init()+0xbd
                                 (mmap: reserved=483072KB, committed=7424KB +1280KB Type=GC)
    
    [0x00007f5b91a23c63] ReservedSpace::ReservedSpace(unsigned long, unsigned long)+0x213
    [0x00007f5b912df57c] G1CollectedHeap::create_aux_memory_mapper(char const*, unsigned long, unsigned long)+0x3c
    [0x00007f5b912e4e6a] G1CollectedHeap::initialize()+0x28a
    [0x00007f5b919da5dd] universe_init()+0xbd
                                 (mmap: reserved=60384KB, committed=928KB +160KB Type=GC)

Use NMT to Detect a Memory Leak

Procedure to use Native Memory Tracking to detect memory leaks.

Follow these steps to detect a memory leak:

  1. Start the JVM with summary or detail tracking using the command line option: -XX:NativeMemoryTracking=summary or -XX:NativeMemoryTracking=detail.
  2. Establish an early baseline. Use NMT baseline feature to get a baseline to compare during development and maintenance by running: jcmd <pid> VM.native_memory baseline.
  3. Monitor memory changes using: jcmd <pid> VM.native_memory detail.diff.
  4. If the application leaks only a small amount of memory, then it may take a while for the leak to show up.
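
The same NMT text that jcmd prints can also be fetched from inside the process through the DiagnosticCommand MBean, where each diagnostic command is exposed as a camel-cased operation (VM.native_memory becomes vmNativeMemory). The following is a hedged sketch; the class name NmtSnapshot is illustrative, and if the JVM was not started with -XX:NativeMemoryTracking, the returned text simply reports that tracking is not enabled:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class NmtSnapshot {
    // Fetch the same text that `jcmd <pid> VM.native_memory summary` prints,
    // but from inside the process, via the DiagnosticCommand MBean.
    public static String nativeMemorySummary() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName dcmd = new ObjectName("com.sun.management:type=DiagnosticCommand");
        // Every diagnostic command operation takes a String[] of arguments.
        return (String) server.invoke(dcmd, "vmNativeMemory",
                new Object[] { new String[] { "summary" } },
                new String[] { String[].class.getName() });
    }

    public static void main(String[] args) throws Exception {
        // Prints the NMT summary when tracking is enabled, or a short
        // "not enabled" notice otherwise.
        System.out.println(nativeMemorySummary());
    }
}
```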

NMT Memory Categories

List of native memory tracking memory categories used by NMT.

Table 2-1 describes native memory categories used by NMT. These categories may change with a release.

Table 2-1 Native Memory Tracking Memory Categories

Category Description

Java Heap

The heap where your objects live

Class

Class meta data

Thread

Memory used by threads, including thread data structure, resource area, handle area, and so on

Code

Generated code

GC

Data used by the GC, such as the card table

Compiler

Memory used by the compiler when generating code

Internal

Memory that does not fit the previous categories, such as the memory used by the command line parser, JVMTI, properties, and so on

Other

Memory not covered by another category

Symbol

Memory for symbols

Native Memory Tracking

Memory used by NMT

Arena Chunk

Memory used by chunks in the arena chunk pool

Logging

Memory used by logging

Arguments

Memory for arguments

Module

Memory used by modules

JConsole

Another useful tool included in the JDK download is the JConsole monitoring tool. This tool is compliant with JMX. The tool uses the built-in JMX instrumentation in the JVM to provide information about the performance and resource consumption of running applications.

The JConsole tool can attach to any Java application in order to display useful information such as thread usage, memory consumption, and details about class loading, runtime compilation, and the operating system.

This output helps with the high-level diagnosis of problems such as memory leaks, excessive class loading, and running threads. It can also be useful for tuning and heap sizing.

In addition to monitoring, JConsole can be used to dynamically change several parameters in the running system. For example, the setting of the -verbose:gc option can be changed so that the garbage collection trace output can be dynamically enabled or disabled for a running application.

The following sections describe troubleshooting techniques with the JConsole tool.

Troubleshoot with the JConsole Tool

Use the JConsole tool to monitor data.

The following list provides an idea of the data that can be monitored using the JConsole tool. Each heading corresponds to a tab pane in the tool.

  • Overview

    This pane displays graphs that show the heap memory usage, number of threads, number of classes, and CPU usage over time. This overview allows you to visualize the activity of several resources at once.

  • Memory

    • For a selected memory area (heap, non-heap, various memory pools):

      • Graph showing memory usage over time

      • Current memory size

      • Amount of committed memory

      • Maximum memory size

    • Garbage collector information, including the number of collections performed, and the total time spent performing garbage collection

    • Graph showing the percentage of heap and non-heap memory currently used

    In addition, on this pane you can request garbage collection to be performed.

  • Threads

    • Graph showing thread usage over time.

    • Live threads: Current number of live threads.

    • Peak: Highest number of live threads since the JVM started.

    • For a selected thread, the name, state, and stack trace, as well as, for a blocked thread, the synchronizer that the thread is waiting to acquire, and the thread that owns the lock.

    • The Deadlock Detection button sends a request to the target application to perform deadlock detection and displays each deadlock cycle in a separate tab.

  • Classes

    • Graph showing the number of loaded classes over time

    • Number of classes currently loaded into memory

    • Total number of classes loaded into memory since the JVM started, including those subsequently unloaded

    • Total number of classes unloaded from memory since the JVM started

  • VM Summary

    • General information, such as the JConsole connection data, uptime for the JVM, CPU time consumed by the JVM, compiler name, total compile time, and so on.

    • Thread and class summary information

    • Memory and garbage collection information, including number of objects pending finalization, and so on

    • Information about the operating system, including physical characteristics, the amount of virtual memory for the running process, and swap space

    • Information about the JVM itself, such as the arguments and class path

  • MBeans

    This pane displays a tree structure that shows all platform and application MBeans that are registered in the connected JMX agent. When you select an MBean in the tree, its attributes, operations, notifications, and other information are displayed.

    • You can invoke operations, if any. For example, the operation dumpHeap for the HotSpotDiagnostic MBean, which is in the com.sun.management domain, performs a heap dump. The input parameter for this operation is the path name of the heap dump file on the machine where the target VM is running.

    • You can set the value of writable attributes. For example, you can set, unset, or change the value of certain VM flags by invoking the setVMOption operation of the HotSpotDiagnostic MBean. The flags are indicated by the list of values of the DiagnosticOptions attribute.

    • You can subscribe to notifications, if any, by using the Subscribe and Unsubscribe buttons.
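
The dumpHeap and setVMOption operations described above can also be invoked programmatically against the same MBean. The following is a minimal sketch; the class name HotSpotDiagnostics and the temporary file name are illustrative:

```java
import java.io.File;
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HotSpotDiagnostics {
    // Programmatic equivalent of invoking dumpHeap on the HotSpotDiagnostic
    // MBean from the JConsole MBeans tab.
    public static File dumpHeap() throws Exception {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // The target file must not already exist; recent JDKs also require
        // the .hprof extension.
        File out = new File(System.getProperty("java.io.tmpdir"),
                "diag-" + System.nanoTime() + ".hprof");
        bean.dumpHeap(out.getPath(), true);   // true = dump only live (reachable) objects
        return out;
    }

    public static void main(String[] args) throws Exception {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // Writable VM flags can be changed at run time, just as with setVMOption
        // in the MBeans tab.
        bean.setVMOption("HeapDumpOnOutOfMemoryError", "true");
        System.out.println("HeapDumpOnOutOfMemoryError = "
                + bean.getVMOption("HeapDumpOnOutOfMemoryError").getValue());
        File dump = dumpHeap();
        System.out.println("Heap dump written: " + dump.length() + " bytes");
        dump.delete();
    }
}
```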

Monitor Local and Remote Applications with JConsole

JConsole can monitor both local applications and remote applications. If you start the tool with an argument specifying a JMX agent to connect to, then the tool will automatically start monitoring the specified application.

To monitor a local application, execute the command jconsole pid, where pid is the process ID of the application.

To monitor a remote application, execute the command jconsole hostname:portnumber, where hostname is the name of the host running the application, and portnumber is the port number you specified when you enabled the JMX agent.

If you execute the jconsole command without arguments, the tool will start by displaying the New Connection window, where you specify the local or remote process to be monitored. You can connect to a different host at any time by using the Connection menu.

With the latest JDK releases, no option is necessary when you start the application to be monitored.

As an example of the output of the monitoring tool, Figure 2-1 shows a chart of the heap memory usage.

Figure 2-1 Sample Output from JConsole

Description of Figure 2-1 follows
Description of "Figure 2-1 Sample Output from JConsole"

The jdb Utility

The jdb utility is included in the JDK as an example command-line debugger. The jdb utility uses the Java Debug Interface (JDI) to launch or connect to the target JVM.

The JDI is a high-level Java API that provides information useful for debuggers and similar systems that need access to the running state of a (usually remote) virtual machine. JDI is a component of the Java Platform Debugger Architecture (JPDA). See Java Platform Debugger Architecture.

The following section provides troubleshooting techniques for the jdb utility.

Troubleshoot with the jdb Utility

The jdb utility is used to monitor the debugger connectors used for remote debugging.

In JDI, a connector is the way that the debugger connects to the target JVM. The JDK traditionally ships with connectors that launch and establish a debugging session with a target JVM, as well as connectors that are used for remote debugging (using TCP/IP or shared memory transports).

These connectors are generally used with enterprise debuggers, such as the NetBeans integrated development environment (IDE) or commercial IDEs.

The command jdb -listconnectors prints a list of the available connectors. The command jdb -help prints the command usage help.

See The jdb Command in the Java Development Kit Tool Specifications.

The jinfo Utility

The jinfo command-line utility gets configuration information from a running Java process or crash dump, and prints the system properties or the command-line flags that were used to start the JVM.

JDK Mission Control, Flight Recorder, and the jcmd utility can be used for diagnosing problems with JVM and Java applications. Use the latest utility, jcmd, instead of the previous jinfo utility for enhanced diagnostics and reduced performance overhead.

With the -flag option, the jinfo utility can dynamically set, unset, or change the value of certain JVM flags for the specified Java process. See Java HotSpot VM Command-Line Options.

The output for the jinfo utility for a Java process with PID number 19256 is shown in the following example.

c:\Program Files\Java\jdk-13\bin>jinfo 19256
Java System Properties:
java.specification.version=13
sun.cpu.isalist=amd64
sun.jnu.encoding=Cp1252
sun.awt.enableExtraMouseButtons=true
java.class.path=C\:\\sampleApps\\DynamicTreeDemo\\dist\\DynamicTreeDemo.jar
java.vm.vendor=Oracle Corporation
sun.arch.data.model=64
user.variant=
java.vendor.url=https\://java.oracle.com/
os.name=Windows 10
java.vm.specification.version=13
sun.java.launcher=SUN_STANDARD
user.country=US
sun.boot.library.path=C\:\\Program Files\\Java\\jdk-13\\bin
sun.java.command=C\:\\sampleApps\\DynamicTreeDemo\\dist\\DynamicTreeDemo.jar
jdk.debug=release
sun.cpu.endian=little
user.home=C\:\\Users\\user1
user.language=en
java.specification.vendor=Oracle Corporation
java.version.date=2019-09-17
java.home=C\:\\Program Files\\Java\\jdk-13
file.separator=\\
java.vm.compressedOopsMode=Zero based
line.separator=\r\n
java.specification.name=Java Platform API Specification
java.vm.specification.vendor=Oracle Corporation
user.script=
sun.management.compiler=HotSpot 64-Bit Tiered Compilers
java.runtime.version=13-ea+29
user.name=user1
path.separator=;
os.version=10.0
java.runtime.name=Java(TM) SE Runtime Environment
file.encoding=Cp1252
java.vm.name=Java HotSpot(TM) 64-Bit Server VM
java.vendor.url.bug=https\://bugreport.java.com/bugreport/
java.io.tmpdir=C\:\\Users\\user1\\AppData\\Local\\Temp\\
java.version=13-ea
user.dir=C\:\\Users\\user1
os.arch=amd64
java.vm.specification.name=Java Virtual Machine Specification
sun.os.patch.level=
java.library.path=C\:\\Program Files\\Java\\jdk-13\\bin;....
java.vm.info=mixed mode, sharing
java.vendor=Oracle Corporation
java.vm.version=13-ea+29
sun.io.unicode.encoding=UnicodeLittle
java.class.version=57.0

VM Flags:

The following topic describes the troubleshooting technique with the jinfo utility.

Troubleshooting with the jinfo Utility

The output from jinfo provides the settings for java.class.path and sun.boot.class.path.

If you start the target JVM with the -classpath and -Xbootclasspath arguments, then the output from jinfo provides the settings for java.class.path and sun.boot.class.path. This information might be needed when investigating class loader issues.

In addition to getting information from a process, the jhsdb jinfo tool can use a core file as input. On the Linux operating system, for example, the gcore utility can be used to get a core file of the process in the preceding example. The core file will be named core.19256 and will be generated in the working directory of the process. The path to the Java executable file and the core file must be specified as arguments to the jhsdb jinfo utility, as shown in the following example.

$ jhsdb jinfo --exe java-home/bin/java --core core.19256

Sometimes, the binary name will not be java. This happens when the VM is created using the JNI invocation API. The jhsdb jinfo tool requires the binary from which the core file was generated.

The jmap Utility

The jmap command-line utility prints memory-related statistics for a running VM or core file. For a core file, use jhsdb jmap.

JDK Mission Control, Flight Recorder, and the jcmd utility can be used for diagnosing problems with the JVM and Java applications. It is suggested to use the latest utility, jcmd, instead of the earlier jmap utility for enhanced diagnostics and reduced performance overhead.

If jmap is used with a process or core file without any command-line options, then it prints the list of shared objects loaded. For more specific information, you can use the options -heap, -histo, or -clstats. These options are described in the subsections that follow.

In addition, the JDK 7 release introduced the -dump:format=b,file=filename option, which causes jmap to dump the Java heap in binary format to a specified file.

If the jmap pid command does not respond because of a hung process, then use the jhsdb jmap utility to run the Serviceability Agent.

The following sections describe troubleshooting techniques with examples that print memory-related statistics for a running VM or a core file.

Heap Configuration and Usage

Use the jhsdb jmap --heap command to get the Java heap information.

The --heap option is used to get the following Java heap information:

  • Information specific to the garbage collection (GC) algorithm, including the name of the GC algorithm (for example, parallel GC) and algorithm-specific details (such as the number of threads for parallel GC).

  • Heap configuration that might have been specified as command-line options or selected by the VM based on the machine configuration.

  • Heap usage summary: For each generation (area of the heap), the tool prints the total heap capacity, in-use memory, and available free memory. If a generation is organized as a collection of spaces (for example, the new generation), then a space-specific memory size summary is included.

The following example shows output from the jhsdb jmap --heap command.

c:\Program Files\Java\jdk-13\bin>jhsdb jmap --heap --pid 19256
Attaching to process ID 19256, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 13-ea+29

using thread-local object allocation.
Garbage-First (G1) GC with 4 thread(s)

Heap Configuration:
   MinHeapFreeRatio         = 40
   MaxHeapFreeRatio         = 70
   MaxHeapSize              = 4253024256 (4056.0MB)
   NewSize                  = 1363144 (1.2999954223632812MB)
   MaxNewSize               = 2551185408 (2433.0MB)
   OldSize                  = 5452592 (5.1999969482421875MB)
   NewRatio                 = 2
   SurvivorRatio            = 8
   MetaspaceSize            = 21807104 (20.796875MB)
   CompressedClassSpaceSize = 1073741824 (1024.0MB)
   MaxMetaspaceSize         = 17592186044415 MB
   G1HeapRegionSize         = 1048576 (1.0MB)

Heap Usage:
G1 Heap:
   regions  = 4056
   capacity = 4253024256 (4056.0MB)
   used     = 7340032 (7.0MB)
   free     = 4245684224 (4049.0MB)
   0.17258382642998027% used
G1 Young Generation:
Eden Space:
   regions  = 7
   capacity = 15728640 (15.0MB)
   used     = 7340032 (7.0MB)
   free     = 8388608 (8.0MB)
   46.666666666666664% used
Survivor Space:
   regions  = 0
   capacity = 0 (0.0MB)
   used     = 0 (0.0MB)
   free     = 0 (0.0MB)
   0.0% used
G1 Old Generation:
   regions  = 0
   capacity = 250609664 (239.0MB)
   used     = 0 (0.0MB)
   free     = 250609664 (239.0MB)
   0.0% used
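
The used, committed, and maximum figures behind such a heap usage summary can also be read in-process from the platform MemoryMXBean. The following is a minimal sketch; the class name HeapUsageSummary is illustrative:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapUsageSummary {
    // Read the heap figures that the jhsdb jmap --heap usage summary is
    // built from, via the platform MemoryMXBean.
    public static MemoryUsage heap() {
        return ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
    }

    public static void main(String[] args) {
        MemoryUsage u = heap();
        System.out.printf("committed = %d (%.1fMB)%n",
                u.getCommitted(), u.getCommitted() / 1048576.0);
        System.out.printf("used      = %d (%.1fMB)%n",
                u.getUsed(), u.getUsed() / 1048576.0);
        long max = u.getMax();   // -1 when no maximum is defined
        if (max > 0) {
            System.out.printf("max       = %d (%.1fMB)%n", max, max / 1048576.0);
            System.out.printf("%.2f%% used%n", 100.0 * u.getUsed() / max);
        }
    }
}
```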

Heap Histogram

The jmap command with the -histo option or the jhsdb jmap --histo command can be used to get a class-specific histogram of the heap.

The jmap -histo command can print the heap histogram for a running process. Use jhsdb jmap --histo to print the heap histogram for a core file.

When the jmap -histo command is executed on a running process, the tool prints the number of objects, memory size in bytes, and fully qualified class name for each class. Internal classes in the Java HotSpot VM are enclosed within angle brackets. The histogram is useful to understand how the heap is used. To get the size of an object, you must divide the total size by the count of that object type. For example, in the output below, the 14806 instances of java.lang.String account for 355344 bytes in total, or 24 bytes per instance.

The following example shows output from the jmap -histo command when it is executed on a process with PID number 19256.

c:\Program Files\Java\jdk-13\bin>jmap -histo 19256
No dump file specified
 num     #instances         #bytes  class name (module)
-------------------------------------------------------
   1:         20913        1658720  [B (java.base@13-ea)
   2:          3647        1516888  [I (java.base@13-ea)
   3:         12321         492840  java.security.AccessControlContext (java.base@13-ea)
   4:         14806         355344  java.lang.String (java.base@13-ea)
   5:          2441         298464  java.lang.Class (java.base@13-ea)
   6:          5169         289464  jdk.internal.org.objectweb.asm.SymbolTable$Entry (java.base@13-ea)
   7:          5896         284216  [Ljava.lang.Object; (java.base@13-ea)
   8:          6887         220384  java.util.HashMap$Node (java.base@13-ea)
   9:           237         194640  [Ljdk.internal.org.objectweb.asm.SymbolTable$Entry; (java.base@13-ea)
  10:          5119         163808  java.util.ArrayList$Itr (java.base@13-ea)
  11:          1922         153760  java.awt.event.MouseEvent (java.desktop@13-ea)
  12:           672         139776  sun.java2d.SunGraphics2D (java.desktop@13-ea)
  13:          4101         131232  java.lang.ref.WeakReference (java.base@13-ea)
  14:           655         101848  [Ljava.util.HashMap$Node; (java.base@13-ea)
  15:          3915          93960  sun.awt.EventQueueItem (java.desktop@13-ea)
  16:           367          89008  [C (java.base@13-ea)
  17:          3708          88992  java.awt.Point (java.desktop@13-ea)
  18:          2158          86320  java.lang.invoke.MethodType (java.base@13-ea)
  19:          3026          81832  [Ljava.lang.Class; (java.base@13-ea)
  20:           348          77952  jdk.internal.org.objectweb.asm.MethodWriter (java.base@13-ea)
  21:          1016          73152  java.awt.geom.AffineTransform (java.desktop@13-ea)
  22:          1017          65088  java.awt.event.InvocationEvent (java.desktop@13-ea)
  23:          2013          64416  java.awt.Rectangle (java.desktop@13-ea)
  24:          1341          64368  java.lang.invoke.MemberName (java.base@13-ea)
  25:          1849          59168  java.util.concurrent.ConcurrentHashMap$Node (java.base@13-ea)
... more lines removed here to reduce output...
1414:             1             16  sun.util.resources.LocaleData$LocaleDataStrategy (java.base@13-ea)
1415:             1             16  sun.util.resources.provider.NonBaseLocaleDataMetaInfo (jdk.localedata@13-ea)
Total        145508        8388608

When the jhsdb jmap --histo command is executed on a core file, the tool prints the serial number, number of instances, bytes, and class name for each class. Internal classes in the Java HotSpot VM are prefixed with an asterisk (*).

The following example shows output of the jhsdb jmap --histo command when it is executed on a core file.

$ jhsdb jmap --exe /usr/java/jdk_12/bin/java --core core.16395 --histo
Attaching to core core.16395 from executable /usr/java/jdk_12/bin/java please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 12-ea+30
Iterating over heap. This may take a while...
Object Histogram:

num     #instances     #bytes   Class description
--------------------------------------------------------------------------
1:           11102     564520   byte[]
2:           10065     241560   java.lang.String
3:            1421     163392   java.lang.Class
4:           26403    2997816   * ConstMethodKlass
5:           26403    2118728   * MethodKlass
6:           39750    1613184   * SymbolKlass
7:            2011    1268896   * ConstantPoolKlass
8:            2011    1097040   * InstanceKlassKlass
9:            1906     882048   * ConstantPoolCacheKlass
10:           1614     125752   java.lang.Object[]
11:           1160      64960   jdk.internal.org.objectweb.asm.Item
12:           1834      58688   java.util.HashMap$Node
13:            359      40880   java.util.HashMap$Node[]
14:           1189      38048   java.util.concurrent.ConcurrentHashMap$Node
15:             46      37280   jdk.internal.org.objectweb.asm.Item[]
16:             29      35600   char[]
17:            968      32320   int[]
18:            650      26000   java.lang.invoke.MethodType
19:            475      22800   java.lang.invoke.MemberName

Class Loader Statistics

Use the jmap command with the -clstats option to print class loader statistics for the Java heap.

The jmap command connects to a running process using the process ID and prints detailed information about classes loaded in the Metaspace:

  • Index - Unique index for the class
  • Super - Index number of the super class
  • InstBytes - Number of bytes per instance
  • KlassBytes - Number of bytes for the class
  • annotations - Size of annotations
  • CpAll - Combined size of the constants, tags, cache, and operands per class
  • MethodCount - Number of methods per class
  • Bytecodes - Number of bytes used for byte codes
  • MethodAll - Combined size of the bytes per method, CONSTMETHOD, stack map, and method data
  • ROAll - Size of class metadata that could be put in read-only memory
  • RWAll - Size of class metadata that must be put in read/write memory
  • Total - Sum of ROAll + RWAll
  • ClassName - Name of the loaded class

The following example shows a subset of the output from the jmap -clstats command when it is executed on a process with PID 11848.

c:\Program Files\Java\jdk-13\bin>jmap -clstats 11848
Index Super InstBytes KlassBytes annotations   CpAll MethodCount Bytecodes MethodAll   ROAll    RWAll    Total ClassName
    1    -1    313192        512           0       0           0         0         0      24      624      648 [B
    2    51    287648        784           0   23344         147      5815     52456   28960    50248    79208 java.lang.Class
    3    -1    259936        512           0       0           0         0         0      24      624      648 [I
    4    51    171000        680         136   16304         120      4831     48024   22408    44680    67088 java.lang.String
    5    -1    147200        512           0       0           0         0         0      24      624      648 [Ljava.lang.Object;
    6    51    123680        600           0    1384           7       149      1888    1200     3024     4224 java.util.HashMap$Node
    7    51     53440        608           0    1360           9       213      2472    1632     3184     4816 java.util.concurrent.ConcurrentHashMap$Node
    8    -1     51832        512           0       0           0         0         0      24      624      648 [C
    9    -1     49904        512           0       0           0         0         0      32      624      656 [Ljava.util.HashMap$Node;
   10    51     31200        624           0    1512           8       240      2224    1472     3256     4728 java.util.Hashtable$Entry
   11    51     25536        592           0   11520          89      4365     48344   16696    45480    62176 java.lang.invoke.MemberName
   12  1614     19296       1024           0    7904          51      4071     30304   14664    25760    40424 java.util.HashMap
   13    -1     18368        512           0       0           0         0         0      32      624      656 [Ljava.util.concurrent.ConcurrentHashMap$Node;
   14    51     17504        544         120    5464          37      1783     14968    7416    14392    21808 java.lang.invoke.LambdaForm$Name
   15    -1     16680        512           0       0           0         0         0      80      624      704 [Ljava.lang.Class;
...lines removed to reduce output...
 2342  1972         0        560           0    1912           7       170      1520    1312     3016     4328 sun.util.logging.internal.LoggingProviderImpl
 2343    51         0        528           0     232           1         0       144     128      936     1064 sun.util.logging.internal.LoggingProviderImpl$LogManagerAccess
              2081120    1635072       10680 5108776       27932   1288637   7813992 5420704 10014136 15434840 Total
                13.5%      10.6%        0.1%   33.1%           -      8.3%     50.6%   35.1%    64.9%   100.0%
Index Super InstBytes KlassBytes annotations   CpAll MethodCount Bytecodes MethodAll   ROAll    RWAll    Total ClassName

The jps Utility

The jps utility lists every instrumented Java HotSpot VM for the current user on the target system.

The utility is very useful in environments where the VM is embedded, that is, where it is started using the JNI Invocation API rather than the java launcher. In these environments, it is not always easy to recognize the Java processes in the process list.

The following example shows the use of the jps utility.

$ jps
16217 MyApplication
16342 jps

The jps utility lists the virtual machines for which the user has access rights. This is determined by access-control mechanisms specific to the operating system.

In addition to listing the PID, the utility provides options to output the arguments passed to the application's main method, the complete list of VM arguments, and the full package name of the application's main class. The jps utility can also list processes on a remote system if the remote system is running the jstatd daemon.

The jrunscript Utility

The jrunscript utility is a command-line script shell.

It supports script execution in both interactive mode and in batch mode. By default, the shell uses JavaScript, but you can specify any other scripting language for which you supply the path to the script engine's JAR file or .class files.

Because scripts can call into Java classes and Java code can invoke scripts, the jrunscript utility supports an exploratory programming style.

The jstack Utility

Use the jcmd or jhsdb jstack utility instead of the jstack utility to diagnose problems with the JVM and Java applications.

JDK Mission Control, Flight Recorder, and the jcmd utility can all be used to diagnose problems with the JVM and Java applications. The latest utility, jcmd, is recommended over the earlier jstack utility because it provides enhanced diagnostics with reduced performance overhead.

The following sections describe troubleshooting techniques with the jstack and jhsdb jstack utilities.

Troubleshoot with the jstack Utility

The jstack command-line utility attaches to the specified process, and prints the stack traces of all threads that are attached to the virtual machine, including Java threads and VM internal threads, and optionally native stack frames. The utility also performs deadlock detection. For core files, use jhsdb jstack.

A stack trace of all threads can be useful in diagnosing a number of issues, such as deadlocks or hangs.

The -l option instructs the utility to look for ownable synchronizers in the heap and print information about java.util.concurrent.locks. Without this option, the thread dump includes information only on monitors.

The output from the jstack pid option is the same as that obtained by pressing Ctrl+\ at the application console (standard input) or by sending the process a quit signal. See Control+Break Handler for an example of the output.

Thread dumps can also be obtained programmatically using the Thread.getAllStackTraces method, or in the debugger using the debugger option to print all thread stacks (the where command in the case of the jdb sample debugger).
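As a sketch of the programmatic route mentioned above, the ThreadMXBean API (an alternative to Thread.getAllStackTraces) can produce a thread dump that also reports locked monitors and ownable synchronizers, roughly analogous to running jstack with the -l option. The class name ProgrammaticDump is illustrative, not part of any JDK API.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ProgrammaticDump {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // true, true => also collect locked monitors and ownable synchronizers,
        // similar in spirit to the -l option of jstack
        for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
            // ThreadInfo.toString() renders one thread entry with its stack trace
            System.out.print(info);
        }
    }
}
```

Note that ThreadInfo.toString() truncates deep stacks; for a complete trace, iterate over ThreadInfo.getStackTrace() instead.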

Stack Trace from a Core Dump

Use the jhsdb jstack command to obtain stack traces from a core dump.

To get stack traces from a core dump, execute the jhsdb jstack command on a core file, as shown in the following example.

$ jhsdb jstack --exe java-home/bin/java --core core-file

Mixed Stack

The jhsdb jstack utility can also be used to print a mixed stack; that is, it can print native stack frames in addition to the Java stack. Native frames are the C/C++ frames associated with VM code and JNI/native code.

To print a mixed stack, use the --mixed option, as shown in the following example.

>jhsdb jstack --mixed --pid 21177
Attaching to process ID 21177, please wait...Debugger attached successfully.
Server compiler detected.
JVM version is 14-ea+29-1384
Deadlock Detection:

No deadlocks found.

----------------- 0 -----------------
----------------- 1 -----------------
"DestroyJavaVM" #18 prio=5 tid=0x000001df4706f000 nid=0x744 waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE
   JavaThread state: _thread_blocked
0x00007ffa4529f9f4      ntdll!ZwWaitForAlertByThreadId + 0x14
0x000001df2533dc50              ????????
----------------- 2 -----------------
0x00007ffa4529c144      ntdll!NtWaitForSingleObject + 0x14
----------------- 3 -----------------
0x00007ffa4529f9f4      ntdll!ZwWaitForAlertByThreadId + 0x14
----------------- 4 -----------------
0x00007ffa4529c144      ntdll!NtWaitForSingleObject + 0x14
----------------- 5 -----------------
0x00007ffa4529f9f4      ntdll!ZwWaitForAlertByThreadId + 0x14
----------------- 6 -----------------
0x00007ffa4529f9f4      ntdll!ZwWaitForAlertByThreadId + 0x14
----------------- 7 -----------------
0x00007ffa4529f9f4      ntdll!ZwWaitForAlertByThreadId + 0x14
----------------- 8 -----------------
"Reference Handler" #2 daemon prio=10 tid=0x000001df47020000 nid=0x4728 waiting on condition [0x000000a733aff000]
   java.lang.Thread.State: RUNNABLE
   JavaThread state: _thread_blocked
0x00007ffa4529f9f4      ntdll!ZwWaitForAlertByThreadId + 0x14
0x000001df2533e280              ????????
----------------- 9 -----------------
"Finalizer" #3 daemon prio=8 tid=0x000001df4702b000 nid=0x5278 in Object.wait() [0x000000a733bfe000]
   java.lang.Thread.State: WAITING (on object monitor)
   JavaThread state: _thread_blocked
0x00007ffa4529c144      ntdll!NtWaitForSingleObject + 0x14
----------------- 10 -----------------
"Signal Dispatcher" #4 daemon prio=9 tid=0x000001df47053800 nid=0xac0 runnable [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE
   JavaThread state: _thread_blocked
0x00007ffa4529c144      ntdll!NtWaitForSingleObject + 0x14
----------------- 11 -----------------
"Attach Listener" #5 daemon prio=5 tid=0x000001df47058800 nid=0x3980 runnable [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE
   JavaThread state: _thread_blocked
0x00007ffa4529c144      ntdll!NtWaitForSingleObject + 0x14
0x000001df47059390              ????????
----------------- 12 -----------------
"Service Thread" #6 daemon prio=9 tid=0x000001df4705b800 nid=0x3350 runnable [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE
   JavaThread state: _thread_blocked
0x00007ffa4529f9f4      ntdll!ZwWaitForAlertByThreadId + 0x14
----------------- 13 -----------------
"C2 CompilerThread0" #7 daemon prio=9 tid=0x000001df47068800 nid=0x51e8 waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE
   JavaThread state: _thread_blocked
0x00007ffa4529f9f4      ntdll!ZwWaitForAlertByThreadId + 0x14
0x000001df2533d590              ????????
----------------- 14 -----------------
"C1 CompilerThread0" #9 daemon prio=9 tid=0x000001df4705d800 nid=0xc20 waiting on condition [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE
   JavaThread state: _thread_blocked
0x00007ffa4529f9f4      ntdll!ZwWaitForAlertByThreadId + 0x14
0x000001df2533d590              ????????
----------------- 15 -----------------
"Sweeper thread" #10 daemon prio=9 tid=0x000001df4706c000 nid=0x1a64 runnable [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE
   JavaThread state: _thread_blocked
0x00007ffa4529f9f4      ntdll!ZwWaitForAlertByThreadId + 0x14
----------------- 16 -----------------
"Notification Thread" #11 daemon prio=9 tid=0x000001df47070000 nid=0xddc runnable [0x0000000000000000]
   java.lang.Thread.State: RUNNABLE
   JavaThread state: _thread_blocked
0x00007ffa4529f9f4      ntdll!ZwWaitForAlertByThreadId + 0x14
----------------- 17 -----------------
0x00007ffa4529f9f4      ntdll!ZwWaitForAlertByThreadId + 0x14
0x00000f3e40772a94              ????????
----------------- 18 -----------------
"Common-Cleaner" #12 daemon prio=8 tid=0x000001df4706b000 nid=0x2054 in Object.wait() [0x000000a7344fe000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
   JavaThread state: _thread_blocked
0x00007ffa4529c144      ntdll!NtWaitForSingleObject + 0x14
----------------- 19 -----------------
"Java2D Disposer" #13 daemon prio=10 tid=0x000001df4706c800 nid=0x4770 in Object.wait() [0x000000a7345ff000]
   java.lang.Thread.State: WAITING (on object monitor)
   JavaThread state: _thread_blocked
0x00007ffa4529c144      ntdll!NtWaitForSingleObject + 0x14
----------------- 20 -----------------
"AWT-Shutdown" #14 prio=5 tid=0x000001df4706d800 nid=0x4ed4 in Object.wait() [0x000000a7346fe000]
   java.lang.Thread.State: WAITING (on object monitor)
   JavaThread state: _thread_blocked
0x00007ffa4529c144      ntdll!NtWaitForSingleObject + 0x14
----------------- 21 -----------------
"AWT-Windows" #15 daemon prio=6 tid=0x000001df4706e800 nid=0x15e8 runnable [0x000000a7347ff000]
   java.lang.Thread.State: RUNNABLE
   JavaThread state: _thread_in_native
----------------- 22 -----------------
"AWT-EventQueue-0" #17 prio=6 tid=0x000001df4706a000 nid=0x2f54 waiting on condition [0x000000a7348fe000]
   java.lang.Thread.State: WAITING (parking)
   JavaThread state: _thread_blocked
0x00007ffa4529c144      ntdll!NtWaitForSingleObject + 0x14
----------------- 23 -----------------
----------------- 24 -----------------
----------------- 25 -----------------

Frames that are prefixed with an asterisk (*) are Java frames, whereas frames that are not prefixed with an asterisk are native C/C++ frames.

The output of the utility can be piped through c++filt to demangle C++ mangled symbol names. Because the Java HotSpot VM is developed in the C++ language, the jhsdb jstack utility prints C++ mangled symbol names for the Java HotSpot internal functions.

The c++filt utility is delivered with the GNU compiler suite (gcc) on Linux.

The jstat Utility

The jstat utility uses the built-in instrumentation in the Java HotSpot VM to provide information about performance and resource consumption of running applications.

The tool can be used when diagnosing performance issues, and in particular issues related to heap sizing and garbage collection. The jstat utility does not require the VM to be started with any special options. The built-in instrumentation in the Java HotSpot VM is enabled by default. This utility is included in the JDK download for all operating system platforms supported by Oracle.

Note:

The instrumentation is not accessible on a FAT32 file system.

See The jstat Command in the Java Development Kit Tool Specifications.

The jstat utility uses the virtual machine identifier (VMID) to identify the target process. The documentation describes the syntax of the VMID, but its only required component is the local virtual machine identifier (LVMID). The LVMID is typically (but not always) the operating system's PID for the target JVM process.

The jstat utility provides data similar to the data provided by the vmstat and iostat utilities on Linux operating systems.

For a graphical representation of the data, you can use the visualgc tool. See The visualgc Tool.

The following example illustrates the use of the -gcutil option, where the jstat utility attaches to LVMID number 2834 and takes 7 samples at 250-millisecond intervals.

$ jstat -gcutil 2834 250 7
  S0     S1     E      O      M     YGC     YGCT    FGC    FGCT     GCT   
  0.00  99.74  13.49   7.86  95.82      3    0.124     0    0.000    0.124
  0.00  99.74  13.49   7.86  95.82      3    0.124     0    0.000    0.124
  0.00  99.74  13.49   7.86  95.82      3    0.124     0    0.000    0.124
  0.00  99.74  13.49   7.86  95.82      3    0.124     0    0.000    0.124
  0.00  99.74  13.49   7.86  95.82      3    0.124     0    0.000    0.124
  0.00  99.74  13.49   7.86  95.82      3    0.124     0    0.000    0.124
  0.00  99.74  13.49   7.86  95.82      3    0.124     0    0.000    0.124

The output of this example shows that the counters remain unchanged across all seven samples: three young generation collections (YGC) had occurred before sampling began, taking a total of 0.124 seconds (YGCT), and no further collection occurred during the sampling period. When a collection does occur between samples, the YGC count increments and objects are promoted from the eden space (E) to the old space (O), increasing the old space utilization.

The following example illustrates the use of the -gcnew option where the jstat utility attaches to LVMID number 2834, takes samples at 250-millisecond intervals, and displays the output. In addition, it uses the -h3 option to display the column headers after every 3 lines of data.

$ jstat -gcnew -h3 2834 250
S0C    S1C    S0U    S1U   TT MTT  DSS      EC       EU     YGC     YGCT  
 192.0  192.0    0.0    0.0 15  15   96.0   1984.0    942.0    218    1.999
 192.0  192.0    0.0    0.0 15  15   96.0   1984.0   1024.8    218    1.999
 192.0  192.0    0.0    0.0 15  15   96.0   1984.0   1068.1    218    1.999
 S0C    S1C    S0U    S1U   TT MTT  DSS      EC       EU     YGC     YGCT  
 192.0  192.0    0.0    0.0 15  15   96.0   1984.0   1109.0    218    1.999
 192.0  192.0    0.0  103.2  1  15   96.0   1984.0      0.0    219    2.019
 192.0  192.0    0.0  103.2  1  15   96.0   1984.0     71.6    219    2.019
 S0C    S1C    S0U    S1U   TT MTT  DSS      EC       EU     YGC     YGCT  
 192.0  192.0    0.0  103.2  1  15   96.0   1984.0     73.7    219    2.019
 192.0  192.0    0.0  103.2  1  15   96.0   1984.0     78.0    219    2.019
 192.0  192.0    0.0  103.2  1  15   96.0   1984.0    116.1    219    2.019

In addition to showing the repeating header string, this example shows that between the fourth and fifth samples, a young generation collection occurred, whose duration was 0.02 seconds. The collection found enough live data that the survivor space 1 utilization (S1U) would have exceeded the desired survivor size (DSS). As a result, objects were promoted to the old generation (not visible in this output), and the tenuring threshold (TT) was lowered from 15 to 1.

The following example illustrates the use of the -gcoldcapacity option, where the jstat utility attaches to LVMID number 21891 and takes 3 samples at 250-millisecond intervals. The -t option is used to generate a time stamp for each sample in the first column.

$ jstat -gcoldcapacity -t 21891 250 3
Timestamp    OGCMN     OGCMX       OGC        OC   YGC   FGC     FGCT     GCT
    150.1   1408.0   60544.0   11696.0   11696.0   194    80    2.874   3.799
    150.4   1408.0   60544.0   13820.0   13820.0   194    81    2.938   3.863
    150.7   1408.0   60544.0   13820.0   13820.0   194    81    2.938   3.863

The Timestamp column reports the elapsed time in seconds since the start of the target JVM. In addition, the -gcoldcapacity output shows the old generation capacity (OGC) and the old space capacity (OC) increasing as the heap expands to meet the allocation or promotion demands. The OGC has grown from 11696 KB to 13820 KB after the 81st full GC (FGC). The maximum capacity of the generation (and space) is 60544 KB (OGCMX), so it still has room to expand.

The visualgc Tool

The visualgc tool provides a graphical view of the garbage collection (GC) system.

The visualgc tool is related to the jstat tool. See The jstat Utility. As with jstat, visualgc uses the built-in instrumentation of the Java HotSpot VM.

The visualgc tool is not included in the JDK release, but is available as a separate download from the jvmstat technology page.

Figure 2-2 shows how the GC and heap are visualized.

Figure 2-2 Sample Output from visualgc


Control+Break Handler

On Linux operating systems, the combination of pressing the Control key and the backslash (\) key at the application console (standard input) causes the Java HotSpot VM to print a thread dump to the application's standard output. On Windows, the equivalent key sequence is the Control and Break keys. The general term for these key combinations is the Control+Break handler.

On Linux operating systems, a thread dump is printed if the Java process receives a quit signal. Therefore, the kill -QUIT pid command causes the process with the ID pid to print a thread dump to standard output.

The following sections describe the data traced by the Control+Break handler:

Thread Dump

The thread dump consists of the thread stack, including the thread state, for all Java threads in the virtual machine.

The thread dump does not terminate the application: it continues after the thread information is printed.

The following example illustrates a thread dump.

Full thread dump Java HotSpot(TM) Client VM (1.6.0-rc-b100 mixed mode):

"DestroyJavaVM" prio=10 tid=0x00030400 nid=0x2 waiting on condition [0x00000000..0xfe77fbf0]
   java.lang.Thread.State: RUNNABLE

"Thread2" prio=10 tid=0x000d7c00 nid=0xb waiting for monitor entry [0xf36ff000..0xf36ff8c0]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at Deadlock$DeadlockMakerThread.run(Deadlock.java:32)
        - waiting to lock <0xf819a938> (a java.lang.String)
        - locked <0xf819a970> (a java.lang.String)

"Thread1" prio=10 tid=0x000d6c00 nid=0xa waiting for monitor entry [0xf37ff000..0xf37ffbc0]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at Deadlock$DeadlockMakerThread.run(Deadlock.java:32)
        - waiting to lock <0xf819a970> (a java.lang.String)
        - locked <0xf819a938> (a java.lang.String)

"Low Memory Detector" daemon prio=10 tid=0x000c7800 nid=0x8 runnable [0x00000000..0x00000000]
   java.lang.Thread.State: RUNNABLE

"CompilerThread0" daemon prio=10 tid=0x000c5400 nid=0x7 waiting on condition [0x00000000..0x00000000]
   java.lang.Thread.State: RUNNABLE

"Signal Dispatcher" daemon prio=10 tid=0x000c4400 nid=0x6 waiting on condition [0x00000000..0x00000000]
   java.lang.Thread.State: RUNNABLE

"Finalizer" daemon prio=10 tid=0x000b2800 nid=0x5 in Object.wait() [0xf3f7f000..0xf3f7f9c0]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0xf4000b40> (a java.lang.ref.ReferenceQueue$Lock)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:116)
        - locked <0xf4000b40> (a java.lang.ref.ReferenceQueue$Lock)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:132)
        at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)

"Reference Handler" daemon prio=10 tid=0x000ae000 nid=0x4 in Object.wait() [0xfe57f000..0xfe57f940]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0xf4000a40> (a java.lang.ref.Reference$Lock)
        at java.lang.Object.wait(Object.java:485)
        at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
        - locked <0xf4000a40> (a java.lang.ref.Reference$Lock)

"VM Thread" prio=10 tid=0x000ab000 nid=0x3 runnable 

"VM Periodic Task Thread" prio=10 tid=0x000c8c00 nid=0x9 waiting on condition 

The output consists of a number of thread entries separated by an empty line. The Java Threads (threads that are capable of executing Java language code) are printed first, and these are followed by information about VM internal threads. Each thread entry consists of a header line followed by the thread stack trace.

The header line contains the following information about the thread:

  • Thread name.

  • Indication if the thread is a daemon thread.

  • Thread priority (prio).

  • Thread ID (tid), which is the address of a thread structure in memory.

  • ID of the native thread (nid).

  • Thread state, which indicates what the thread was doing at the time of the thread dump. See Table 2-2 for more details.

  • Address range, which gives an estimate of the valid stack region for the thread.

Thread States for a Thread Dump

List of possible thread states for a thread dump.

Table 2-2 lists the possible thread states for a thread dump using the Control+Break Handler.

Table 2-2 Thread States for a Thread Dump

Thread State Description

NEW

The thread has not yet started.

RUNNABLE

The thread is executing in the JVM.

BLOCKED

The thread is blocked, waiting for a monitor lock.

WAITING

The thread is waiting indefinitely for another thread to perform a particular action.

TIMED_WAITING

The thread is waiting for another thread to perform an action for up to a specified waiting time.

TERMINATED

The thread has exited.
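The states in Table 2-2 correspond to the values of the Thread.State enumeration, which can also be queried programmatically with Thread.getState. The following sketch (the class name ThreadStateDemo is illustrative) drives a thread through three of these states.

```java
public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        final Object lock = new Object();
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                try { lock.wait(); } catch (InterruptedException e) { /* exit */ }
            }
        }, "waiter");

        System.out.println("before start: " + waiter.getState());   // NEW
        waiter.start();
        // Poll until the thread has actually reached lock.wait()
        while (waiter.getState() != Thread.State.WAITING) {
            Thread.sleep(10);
        }
        System.out.println("in wait():    " + waiter.getState());   // WAITING
        synchronized (lock) { lock.notify(); }
        waiter.join();
        System.out.println("after join:   " + waiter.getState());   // TERMINATED
    }
}
```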

Detect Deadlocks

The Control+Break handler can be used to detect deadlocks in threads.

In addition to the thread stacks, the Control+Break handler executes a deadlock detection algorithm. If any deadlocks are detected, then the Control+Break handler, as shown in the following example, prints additional information after the thread dump about each deadlocked thread.

Found one Java-level deadlock:
=============================
"Thread2":
  waiting to lock monitor 0x000af330 (object 0xf819a938, a java.lang.String),
  which is held by "Thread1"
"Thread1":
  waiting to lock monitor 0x000af398 (object 0xf819a970, a java.lang.String),
  which is held by "Thread2"

Java stack information for the threads listed above:
===================================================
"Thread2":
        at Deadlock$DeadlockMakerThread.run(Deadlock.java:32)
        - waiting to lock <0xf819a938> (a java.lang.String)
        - locked <0xf819a970> (a java.lang.String)
"Thread1":
        at Deadlock$DeadlockMakerThread.run(Deadlock.java:32)
        - waiting to lock <0xf819a970> (a java.lang.String)
        - locked <0xf819a938> (a java.lang.String)

Found 1 deadlock.

If the JVM flag -XX:+PrintConcurrentLocks is set, then the Control+Break handler will also print the list of concurrent locks owned by each thread.
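The same deadlock detection is available programmatically through ThreadMXBean.findDeadlockedThreads, which is what a monitoring tool might call instead of triggering the Control+Break handler. The following sketch (class and method names are illustrative) deliberately provokes a two-thread monitor deadlock like the Thread1/Thread2 example above and then detects it.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.CountDownLatch;

public class DeadlockDetector {
    static final Object lockA = new Object();
    static final Object lockB = new Object();

    public static long[] provokeAndDetect() throws InterruptedException {
        CountDownLatch bothHoldFirstLock = new CountDownLatch(2);
        // Each thread takes its first lock, then waits until both hold one,
        // then tries the other thread's lock: a guaranteed lock-ordering deadlock.
        Thread t1 = new Thread(() -> grab(lockA, lockB, bothHoldFirstLock), "Thread1");
        Thread t2 = new Thread(() -> grab(lockB, lockA, bothHoldFirstLock), "Thread2");
        t1.setDaemon(true);   // daemon threads let the JVM exit despite the deadlock
        t2.setDaemon(true);
        t1.start();
        t2.start();
        bothHoldFirstLock.await();

        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long[] ids = null;
        while (ids == null) {            // the deadlock forms shortly after the latch opens
            Thread.sleep(50);
            ids = mx.findDeadlockedThreads();
        }
        return ids;                      // thread IDs of the deadlocked threads
    }

    static void grab(Object first, Object second, CountDownLatch latch) {
        synchronized (first) {
            latch.countDown();
            try { latch.await(); } catch (InterruptedException e) { return; }
            synchronized (second) { }    // never reached: blocks forever
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Deadlocked threads: " + provokeAndDetect().length);
    }
}
```

findDeadlockedThreads also reports threads deadlocked on java.util.concurrent ownable synchronizers; the older findMonitorDeadlockedThreads covers object monitors only.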

Heap Summary

The Control+Break handler can be used to print a heap summary.

The following example shows the different generations (areas of the heap), with the size, the amount used, and the address range. The address range is especially useful if you are also examining the process with tools such as pmap.

Heap
 def new generation   total 1152K, used 435K [0x22960000, 0x22a90000, 0x22e40000)
  eden space 1088K,  40% used [0x22960000, 0x229ccd40, 0x22a70000)
  from space 64K,   0% used [0x22a70000, 0x22a70000, 0x22a80000)
  to   space 64K,   0% used [0x22a80000, 0x22a80000, 0x22a90000)
 tenured generation   total 13728K, used 6971K [0x22e40000, 0x23ba8000, 0x26960000)
   the space 13728K,  50% used [0x22e40000, 0x2350ecb0, 0x2350ee00, 0x23ba8000)
 compacting perm gen  total 12288K, used 1417K [0x26960000, 0x27560000, 0x2a960000)
   the space 12288K,  11% used [0x26960000, 0x26ac24f8, 0x26ac2600, 0x27560000)
    ro space 8192K,  62% used [0x2a960000, 0x2ae5ba98, 0x2ae5bc00, 0x2b160000)
    rw space 12288K,  52% used [0x2b160000, 0x2b79e410, 0x2b79e600, 0x2bd60000)

If the JVM flag -XX:+PrintClassHistogram is set, then the Control+Break handler will produce a heap histogram.
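A per-pool view similar to the heap summary above can also be obtained from inside the application through the MemoryPoolMXBean interface. This is a minimal sketch (the class name HeapPools is illustrative); pool names and sizes vary with the JVM and collector in use.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class HeapPools {
    public static void main(String[] args) {
        // One MemoryPoolMXBean per heap or non-heap pool (eden, survivor, old, metaspace, ...)
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage u = pool.getUsage();
            System.out.printf("%-35s type=%-8s used=%dK committed=%dK%n",
                    pool.getName(), pool.getType(),
                    u.getUsed() / 1024, u.getCommitted() / 1024);
        }
    }
}
```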

Native Operating System Tools

Windows and Linux operating systems provide native tools that are useful for troubleshooting or monitoring purposes.

A brief description is provided for each tool. For further details, see the operating system documentation or man pages for the Linux operating system.

The format of log files and output from command-line utilities depends on the release. For example, if you develop a script that relies on the format of the fatal error log, then the same script may not work if the format of the log file changes in a future release.

You can also search for Windows-specific debug support on the MSDN developer network.

The following sections describe troubleshooting techniques and improvements to a few native operating system tools.

Troubleshooting Tools Based on the Operating System

Lists of native Windows and Linux tools that can be used for troubleshooting problems.

Table 2-3 lists the troubleshooting tools available on the Windows operating system.

Table 2-3 Native Troubleshooting Tools on Windows

Tool Description

dumpchk

Command-line utility to verify that a memory dump file was created correctly. This tool is included in the Debugging Tools for Windows download available from the Microsoft website. See Collect Crash Dumps on Windows.

msdev debugger

Command-line utility that can be used to launch Visual C++ and the Win32 debugger

userdump

The User Mode Process Dumper is included in the OEM Support Tools download available from the Microsoft website. See Collect Crash Dumps on Windows.

windbg

Windows debugger can be used to debug Windows applications or crash dumps. This tool is included in the Debugging Tools for Windows download available from the Microsoft website. See Collect Crash Dumps on Windows.

/MD and /MDd compiler options

Compiler options that automatically include extra support for tracking memory allocations

Table 2-4 describes some troubleshooting tools available on the Linux operating system.

Table 2-4 Native Troubleshooting Tools on Linux

Tool Description

c++filt

Demangle C++ mangled symbol names. This utility is delivered with the native C++ compiler suite: gcc on Linux.

gdb

GNU debugger

libnjamd

Memory allocation tracking

lsstack

Print thread stack

Not all distributions provide this tool by default; therefore, you might have to download it from SourceForge.

ltrace

Library call tracer

Not all distributions provide this tool by default; therefore, you might have to download it from SourceForge.

mtrace and muntrace

GNU malloc tracer

/proc filesystem

Virtual filesystem that contains information about processes and other system information

strace

System call tracer

top

Display most CPU-intensive processes.

vmstat

Report information about processes, memory, paging, block I/O, traps, and CPU activity.

Probe Providers in Java HotSpot VM

The Java HotSpot VM contains two built-in probe providers: hotspot and hotspot_jni.

These providers deliver probes that can be used to monitor the internal state and activities of the VM, as well as the Java application that is running.

The JVM probe providers can be categorized as follows:

  • VM lifecycle: VM initialization begin and end, and VM shutdown

  • Thread lifecycle: thread start and stop, thread name, thread ID, and so on

  • Class-loading: Java class loading and unloading

  • Garbage collection: Start and stop of garbage collection, systemwide or by memory pool

  • Method compilation: Method compilation begin and end, and method loading and unloading

  • Monitor probes: Wait events, notification events, contended monitor entry and exit

  • Application tracking: Method entry and return, allocation of a Java object

To call Java code from native code, the native code must make the call through the JNI interface. The hotspot_jni provider manages DTrace probes at the entry point and return point for each of the methods that the JNI interface provides for invoking Java code and examining the state of the VM.

At probe points, you can print the stack trace of the current thread using the ustack built-in function. This function prints Java method names in addition to C/C++ native function names. The following example is a simple D script that prints a full stack trace whenever a thread calls the read system call.

#!/usr/sbin/dtrace -s
syscall::read:entry 
/pid == $1 && tid == 1/ {    
   ustack(50, 0x2000);
}

The script in the previous example is stored in a file named read.d and is run by specifying the PID of the Java process that is traced as shown in the following example.

read.d pid

If your Java application generated a lot of I/O or had some unexpected latency, then the DTrace tool and its ustack() action can help you to diagnose the problem.

Custom Diagnostic Tools

The JDK has extensive APIs to develop custom tools to observe, monitor, profile, debug, and diagnose issues in applications that are deployed in the Java runtime environment.

The development of new tools is beyond the scope of this document. Instead, this section provides a brief overview of the APIs available.

All the packages mentioned in this section are described in the Java SE API specification.

See the example and demonstration code that is included in the JDK download.

The following sections describe packages, interface classes, and the Java debugger that can be used as custom diagnostic tools for troubleshooting.

The java.lang.management Package

The java.lang.management package provides the management interface for the monitoring and management of the JVM and the operating system.

Specifically, it covers interfaces for the following systems:

  • Class loading

  • Compilation

  • Garbage collection

  • Memory manager

  • Runtime

  • Threads

In addition to the java.lang.management package, the JDK release includes platform extensions in the com.sun.management package. The platform extensions include a management interface to get detailed statistics from garbage collectors that perform collections in cycles. These extensions also include a management interface to get additional memory statistics from the operating system.
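As a brief illustration, the following sketch queries a few of these MXBeans for the current JVM. The class name is illustrative; the reported values vary by platform, release, and workload.

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

public class VmStats {
    public static void main(String[] args) {
        // Heap usage as reported by the memory system.
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        long usedHeap = memory.getHeapMemoryUsage().getUsed();

        // Number of live threads in this JVM.
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        int liveThreads = threads.getThreadCount();

        // Number of classes currently loaded.
        ClassLoadingMXBean classes = ManagementFactory.getClassLoadingMXBean();
        int loadedClasses = classes.getLoadedClassCount();

        System.out.println("Used heap (bytes): " + usedHeap);
        System.out.println("Live threads:      " + liveThreads);
        System.out.println("Loaded classes:    " + loadedClasses);
    }
}
```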

The java.lang.instrument Package

The java.lang.instrument package provides services that allow the Java programming language agents to instrument programs running on the JVM.

Instrumentation is used by tools such as profilers, tools for tracing method calls, and many others. The package facilitates both load-time and dynamic instrumentation. It also includes methods to get information about the loaded classes and information about the amount of storage consumed by a given object.

The java.lang.Thread Class

The java.lang.Thread class has a static method called getAllStackTraces, which returns a map of stack traces for all live threads.

The Thread class also has a method called getState, which returns the thread state; states are defined by the java.lang.Thread.State enumeration. These methods can be useful when you add diagnostic or monitoring capabilities to an application.
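The two methods combine naturally into a simple in-process thread dump, as in the following sketch (the class name is illustrative):

```java
import java.util.Map;

public class ThreadSnapshot {
    public static void main(String[] args) {
        // One stack trace per live thread.
        Map<Thread, StackTraceElement[]> traces = Thread.getAllStackTraces();
        for (Map.Entry<Thread, StackTraceElement[]> entry : traces.entrySet()) {
            Thread t = entry.getKey();
            // getState() returns a java.lang.Thread.State value,
            // for example RUNNABLE, WAITING, or TIMED_WAITING.
            System.out.println(t.getName() + " [" + t.getState() + "]");
            for (StackTraceElement frame : entry.getValue()) {
                System.out.println("    at " + frame);
            }
        }
    }
}
```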

JVM Tool Interface

The JVM Tool Interface (JVM TI) is a native (C/C++) programming interface that can be used by a wide range of development and monitoring tools.

JVM TI provides an interface for the full breadth of tools that need access to the VM state, including but not limited to profiling, debugging, monitoring, thread analysis, and coverage analysis tools.

Some examples of agents that rely on JVM TI are the following:

  • Java Debug Wire Protocol (JDWP)

  • The java.lang.instrument package

The specification for JVM TI can be found in the JVM Tool Interface documentation.

Java Platform Debugger Architecture

The Java Platform Debugger Architecture (JPDA) is the architecture designed for use by debuggers and debugger-like tools.

The Java Platform Debugger Architecture consists of two programming interfaces and a wire protocol:

  • The Java Virtual Machine Tool Interface (JVM TI) is the interface to the virtual machine. See JVM Tool Interface.

  • The Java Debug Interface (JDI) defines information and requests at the user code level. It is a pure Java programming language interface for debugging Java programming language applications. In JPDA, the JDI is a remote view, in the debugger process, of a virtual machine in the process being debugged. It is implemented by the front end, whereas a debugger-like application (for example, an IDE, debugger, tracer, or monitoring tool) is the client. See the module jdk.jdi.

  • The Java Debug Wire Protocol (JDWP) defines the format of information and requests transferred between the process being debugged and the debugger front end, which implements the JDI.

The jdb utility is included in the JDK as an example command-line debugger. The jdb utility uses the JDI to launch or connect to the target VM. See The jdb Utility.

In addition to traditional debugger-type tools, the JDI can also be used to develop tools that help in postmortem diagnostics and scenarios where the tool needs to attach to a process in a noncooperative manner (for example, a hung process).
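As a small taste of the JDI, the following sketch lists the connectors available on the current platform. It assumes the jdk.jdi module is resolvable, which it is in a standard JDK; the class name is illustrative.

```java
import com.sun.jdi.Bootstrap;
import com.sun.jdi.VirtualMachineManager;
import com.sun.jdi.connect.Connector;

public class ListConnectors {
    public static void main(String[] args) {
        // The VirtualMachineManager is the JDI entry point (module jdk.jdi).
        VirtualMachineManager vmm = Bootstrap.virtualMachineManager();
        System.out.println("JDI interface version: "
                + vmm.majorInterfaceVersion() + "." + vmm.minorInterfaceVersion());

        // Connectors describe the ways a debugger can reach a target VM:
        // launching connectors start a new VM, attaching connectors join a
        // running one, and listening connectors wait for a VM to call back.
        for (Connector c : vmm.allConnectors()) {
            System.out.println(c.name() + " - " + c.description());
        }
    }
}
```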

Postmortem Diagnostic Tools

This section lists the tools and options available for postmortem diagnostics of problems between the application and the Java HotSpot VM.

Table 2-5 summarizes the options and tools that are designed for postmortem diagnostics. If an application crashes, then these options and tools can be used to get additional information, either at the time of the crash or later using information from the crash dump.

Table 2-5 Postmortem Diagnostics Tools

Tool or Option Description and Usage

Fatal Error Log

When an irrecoverable (fatal) error occurs, an error log is created. This file contains information obtained at the time of the fatal error. In many cases, it is the first item to examine when a crash occurs. See Fatal Error Log.

-XX:+HeapDumpOnOutOfMemoryError option

This command-line option specifies the generation of a heap dump when the VM detects a java.lang.OutOfMemoryError. See The -XX:HeapDumpOnOutOfMemoryError Option.

-XX:OnError option

This command-line option specifies a sequence of user-supplied scripts or commands to be executed when a fatal error occurs. For example, on Windows, this option can execute a command to force a crash dump. This option is very useful on systems where a postmortem debugger is not configured. See The -XX:OnError Option.

-XX:+ShowMessageBoxOnError option

This command-line option suspends a process when a fatal error occurs. Depending on the user response, the option can launch the native debugger (for example, dbx, gdb, msdev) to attach to the VM. See The -XX:ShowMessageBoxOnError Option.

Other -XX options

Several other -XX command-line options can be useful in troubleshooting. See Other -XX Options.

jhsdb jinfo utility

This utility can get configuration information from a core file, whether obtained from a crash or by using the gcore utility. See The jinfo Utility.

jhsdb jmap utility

This utility can get memory map information, including a heap histogram, from a core file, whether obtained from a crash or by using the gcore utility. See The jmap Utility.

jstack utility

This utility can get Java and native stack information from a Java process. On the Linux operating system, the utility can also get the information from a core file or a remote debug server. See The jstack Utility.

Native tools

Each operating system has native tools and utilities that can be used for postmortem diagnosis. See Native Operating System Tools.

Hung Processes Tools

Tools and options for diagnosing problems between the application and the Java HotSpot VM in a hung process are available in the JDK and in the operating system.

Table 2-6 summarizes the options and tools that can help in scenarios involving a hung or deadlocked process. These tools do not require any special options to start the application.

JDK Mission Control, Flight Recorder, and the jcmd utility can be used to diagnose problems with the JVM and Java applications. Use the latest utility, jcmd, instead of the earlier jstack, jinfo, and jmap utilities for enhanced diagnostics and reduced performance overhead.

Table 2-6 Hung Process Tools

Tool or Option Description and Usage

Ctrl+Break handler

(Control+\ or kill -QUIT pid on Linux operating systems, and Control+Break on Windows)

This key combination performs a thread dump and deadlock detection. The Ctrl+Break handler can optionally print a list of concurrent locks and their owners, as well as a heap histogram. See Control+Break Handler.

jcmd utility

This utility is used to send diagnostic command requests to the JVM, where these requests are useful for controlling recordings from Flight Recorder. The recordings are used to troubleshoot and diagnose flight recording events. See The jcmd Utility.

jdb utility

Debugger support includes attaching connectors, which allow jdb and other Java language debuggers to attach to a process. This can help show what each thread is doing at the time of a hang or deadlock. See The jdb Utility.

jinfo utility

This utility can get configuration information from a Java process. See The jinfo Utility.

jmap utility

This utility can get memory map information, including a heap histogram, from a Java process. The jhsdb jmap utility can be used if the process is hung. See The jmap Utility.

jstack utility

This utility can obtain Java and native stack information from a Java process. See The jstack Utility.

Native tools

Each operating system has native tools and utilities that can be useful in hang or deadlock situations. See Native Operating System Tools.

Monitoring Tools

Tools and options for monitoring running applications and detecting problems are available in the JDK and in the operating system.

The tools listed in Table 2-7 are designed for monitoring applications that are running.

JDK Mission Control, Flight Recorder, and the jcmd utility can be used to diagnose problems with the JVM and Java applications. Use the latest utility, jcmd, instead of the earlier jstack, jinfo, and jmap utilities for enhanced diagnostics and reduced performance overhead.

Table 2-7 Monitoring Tools

Tool or Option Description and Usage

JDK Mission Control

JDK Mission Control (JMC) is a JDK profiling and diagnostics tool platform for the HotSpot JVM. It is a tool suite for basic monitoring, managing, and production-time profiling and diagnostics. JMC minimizes the performance overhead that is usually an issue with profiling tools.

jcmd utility

This utility is used to send diagnostic command requests to the JVM, where these requests are useful for controlling recordings from Flight Recorder. The recordings are used to troubleshoot and diagnose JVM and Java applications with flight recording events. See The jcmd Utility.

JConsole utility

This utility is a monitoring tool that is based on Java Management Extensions (JMX). The tool uses the built-in JMX instrumentation in the Java Virtual Machine to provide information about the performance and resource consumption of running applications. See JConsole.

jmap utility

This utility can get memory map information, including a heap histogram, from a Java process or a core file. See The jmap Utility.

jps utility

This utility lists the instrumented Java HotSpot VMs on the target system. The utility is very useful in environments where the VM is embedded, that is, it is started using the JNI Invocation API rather than the java launcher. See The jps Utility.

jstack utility

This utility can get Java and native stack information from a Java process or a core file. See The jstack Utility.

jstat utility

This utility uses the built-in instrumentation in Java to provide information about performance and resource consumption of running applications. The tool can be used when diagnosing performance issues, especially those related to heap sizing and garbage collection. See The jstat Utility.

jstatd daemon

This tool is a Remote Method Invocation (RMI) server application that monitors the creation and termination of instrumented Java Virtual Machines and provides an interface to allow remote monitoring tools to attach to VMs running on the local host. See The jstatd Daemon.

visualgc utility

This utility provides a graphical view of the garbage collection system. As with jstat, it uses the built-in instrumentation of Java HotSpot VM. See The visualgc Tool.

Native tools

Each operating system has native tools and utilities that can be useful for monitoring purposes. See Native Operating System Tools.

Other Tools, Options, Variables, and Properties

General troubleshooting tools, options, variables, and properties that can help to diagnose issues are available in the JDK and in the operating system.

In addition to the tools that are designed for specific types of problems, the tools, options, variables, and properties listed in Table 2-8 can help in diagnosing other issues.

JDK Mission Control, Flight Recorder, and the jcmd utility can be used to diagnose problems with the JVM and Java applications. Use the latest utility, jcmd, instead of the earlier jstack, jinfo, and jmap utilities for enhanced diagnostics and reduced performance overhead.

Table 2-8 General Troubleshooting Tools and Options

Tool or Option Description and Usage

JDK Mission Control

JDK Mission Control (JMC) is a JDK profiling and diagnostics tool platform for the HotSpot JVM. It is a tool suite for basic monitoring, managing, and production-time profiling and diagnostics. JMC minimizes the performance overhead that is usually an issue with profiling tools.

jcmd utility

This utility is used to send diagnostic command requests to the JVM, where these requests are useful for controlling recordings from Flight Recorder. The recordings are used to troubleshoot and diagnose JVM and Java applications with flight recording events.

jinfo utility

This utility can dynamically set, unset, and change the values of certain JVM flags for a specified Java process. On Linux operating systems, it can also print configuration information.

jrunscript utility

This utility is a command-line script shell, which supports both interactive and batch-mode script execution.

-Xcheck:jni option

This option is useful in diagnosing problems with applications that use the Java Native Interface (JNI) or that employ third-party libraries (some JDBC drivers, for example). See The -Xcheck:jni Option.

-verbose:class option

This option enables logging of class loading and unloading. See The -verbose:class Option.

-verbose:gc option

This option enables logging of garbage collection information. See The -verbose:gc Option.

-verbose:jni option

This option enables logging of JNI. See The -verbose:jni Option.

JAVA_TOOL_OPTIONS environment variable

This environment variable allows you to specify the initialization of tools, specifically the launching of native or Java programming language agents using the -agentlib or -javaagent options. See Environment Variables and System Properties.

java.security.debug system property

This system property controls whether the security checks in the Java runtime environment print trace messages during execution. See The java.security.debug System Property.

The jstatd Daemon

The jstatd daemon is an RMI server application that monitors the creation and termination of each instrumented Java HotSpot VM, and provides an interface to allow remote monitoring tools to attach to JVMs running on the local host.

For example, this daemon allows the jps utility to list processes on a remote system.

Note:

The instrumentation is not accessible on the FAT32 file system.