Pre-General Availability: 2017-05-23

2 Diagnostic Tools

This chapter introduces various diagnostic and monitoring tools that can be used with the Java Development Kit (JDK). It then describes in detail the diagnostic tools in JDK 9 and troubleshooting tools specific to various operating systems. Finally, it explains how to develop custom diagnostic tools using the application programming interfaces (APIs) provided by the JDK.

This chapter contains the following sections:

Diagnostic Tools Overview

Most of the command-line utilities described in this section are either included in the JDK or native operating system tools and utilities.

Although the JDK command-line utilities are included in the JDK download, keep in mind that they can also be used to diagnose issues in, and monitor, applications that are deployed with the Java Runtime Environment (JRE).

In general, the diagnostic tools and options use various mechanisms to obtain the information they report. The mechanisms are specific to the Virtual Machine (VM) implementation, operating systems, and release. Frequently, only a subset of the tools is applicable to a given issue at a particular time. Command-line options that are prefixed with -XX are specific to Java HotSpot VM. See Java HotSpot VM Command-Line Options.

Note:

The -XX options are not part of the Java API and can vary from one release to the next.

The tools and options are divided into several categories, depending on the type of problem that you are troubleshooting. Certain tools and options might fall into more than one category.

Note:

Some command-line utilities described in this section are experimental; the jstack, jinfo, and jmap utilities are examples. Use the latest diagnostic utility, jcmd, instead of the earlier jstack, jinfo, and jmap utilities.

Java Mission Control

Java Mission Control (JMC) is a JDK profiling and diagnostics tool platform for the HotSpot JVM.

It is a tool suite for basic monitoring, managing, and production-time profiling and diagnostics, with high performance. Java Mission Control minimizes the performance overhead that is usually an issue with profiling tools. It is a commercial feature built into the JVM and available at runtime.

The Java Flight Recorder (JFR) is a commercial feature. You can use it for free on developer desktops/laptops, and for evaluation purposes in test, development, and production environments. However, to enable JFR on a production server, you require a commercial license. Using JMC UI for other purposes on the JDK does not require a commercial license.

Java Mission Control (JMC) consists of the JMX Console, the Java Flight Recorder (JFR), and several other plug-ins downloadable from the tool. The JMX Console is a tool for monitoring and managing Java applications, and JFR is a profiling tool. Java Mission Control is also available as a set of plug-ins for the Eclipse IDE.

The following topic describes how to troubleshoot with Java Mission Control.

Troubleshoot with Java Mission Control

Troubleshooting activities that you can perform with Java Mission Control.

Java Mission Control allows you to perform the following troubleshooting activities:

  • The Java Management Console (JMX Console) connects to a running JVM, and collects and displays its key characteristics in real time.
  • The JMX Console can also trigger user-provided custom actions and rules for the JVM.
  • Experimental plug-ins, such as WLS, DTrace, and JOverflow, downloadable from the JMC tool, provide additional troubleshooting capabilities.
    • The DTrace plug-in uses an extended D script language to produce self-describing events, and provides visualization similar to Flight Recorder.
    • JOverflow is a plug-in for analyzing heap waste (empty or sparse collections). JDK 8 or later is recommended for optimal use of the JOverflow plug-in.
  • The Java Flight Recordings (JFR) in Java Mission Control are available for analyzing events. The preconfigured tabs allow easy drill-down into areas of common interest, such as code, memory and GC, threads, and I/O. The General Events tab and Operative Events tab together allow drilling down further and rapidly homing in on a set of events with certain properties. The Events tabs usually have check boxes to show only the events in the operative set.
    • JFR when used as a plug-in for JMC client presents diagnostic information in logically grouped tables, charts, and dials. It enables you to select the range of time and level of detail necessary to focus on the problem. See Java Flight Recorder.
  • The Java Mission Control plug-ins connect to the JVM using the Java Management Extensions (JMX) agent. JMX is a standard API for managing and monitoring resources such as applications, devices, services, and the Java Virtual Machine.

    To learn more about JMC, see the JMC documentation.
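As an illustration of what the JMX agent exposes, the platform MBeans that the JMX Console displays can also be read programmatically through the standard java.lang.management API. This is a minimal sketch of that API, not JMC functionality itself:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.lang.management.ThreadMXBean;

// Reads a few of the platform MBeans that the JMX Console also displays.
public class PlatformMBeanDemo {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        System.out.println("Heap used (bytes): " + heap.getUsed());
        System.out.println("Live threads: " + threads.getThreadCount());
    }
}
```

The same MBeans can be reached remotely over a JMX connection, which is what the JMC plug-ins do.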

What are Java Flight Recordings

The Java Flight Recorder (JFR) is a commercial feature. You can use it for free on developer desktops/laptops, and for evaluation purposes in test, development, and production environments.

However, to enable JFR on a production server, you need a commercial license. Using JMC UI for other purposes on the JDK does not require a commercial license.

To know more about JFR commercial features and availability, see the product documentation.

To know more about JFR commercial license, see the license agreement.

The Java Flight Recorder records detailed information about the Java runtime and the Java application running in it. The recording process adds little overhead. The data is recorded as time-stamped data points called events. Typical events include threads waiting for locks, garbage collections, and periodic CPU usage data.
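As a point of comparison, some of the data that JFR records as events, such as garbage collection activity, is also visible through the standard java.lang.management API. This hedged sketch simply prints the cumulative GC counters:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

// Prints cumulative GC counts and times -- the same kind of data that
// JFR captures, per collection, as time-stamped GC events.
public class GcCounters {
    public static void main(String[] args) {
        System.gc(); // suggest a collection so the counters are likely non-zero
        List<GarbageCollectorMXBean> gcs =
                ManagementFactory.getGarbageCollectorMXBeans();
        for (GarbageCollectorMXBean gc : gcs) {
            System.out.println(gc.getName()
                    + ": count=" + gc.getCollectionCount()
                    + ", timeMs=" + gc.getCollectionTime());
        }
    }
}
```

Unlike these cumulative counters, JFR records each collection as a separate event with its own time stamp and duration.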

When creating a flight recording, you select which events should be saved; this selection is called a recording template. Some templates save only very basic events and have virtually no impact on performance. Other templates may come with slight performance overhead and may also trigger GCs to gather additional information. In general, it is rare to see more than a few percent of overhead.

Flight Recordings can be used to debug a wide range of issues from performance problems to memory leaks or heavy lock contention.

The following topic describes the types of flight recordings that you can produce.

Types of Recordings

The two types of flight recordings are continuous recordings and profiling recordings.


  • Continuous Recordings: A continuous recording is a recording that is always on and saves, for example, the last six hours of data. If your application runs into any issues, you can dump the data from, for example, the last hour and see what happened at the time of the problem.

    The default setting for a continuous recording is to use a recording profile with extremely low overhead. This profile does not gather heap statistics or allocation profiling, but still gathers a lot of useful data.

    A continuous recording is good to always have running, and is very helpful when debugging issues that happen rarely. The recording can be dumped manually using either jcmd or JMC. You can also set a trigger in JMC to dump the flight recording when a specific condition is met.

  • Profiling Recordings: A profiling recording is a recording that is turned on, runs for a set amount of time, and then stops. Usually, a profiling recording has more events enabled and may have a slightly bigger performance impact. The events that are turned on can be modified depending on your use of profiling recording.

    Typical use cases for profiling recordings are as follows:

    • Profile what methods are run the most and where most objects are created.

    • Look for classes that use more and more heap indicating a memory leak.

    • Look for bottlenecks due to synchronization, and many other such use cases.

    A profiling recording provides a lot of information even if you are not troubleshooting a specific issue. It gives you a very good view of the application and can help you find bottlenecks or areas in need of improvement.

Note:

A typical overhead is only around 2 percent, so you can definitely run a profiling recording on your production environment (which is one of the main use cases for JFR), unless you are extremely sensitive to performance or latencies.

How to Produce a Flight Recording

The following sections describe three ways to produce a flight recording.

Use Java Mission Control to Produce a Flight Recording

Use Java Mission Control (JMC) to easily manage flight recordings.

Prerequisites:

To begin with, find your server in the JVM Browser in the leftmost frame, as shown in Figure 2-1.

Figure 2-1 Java Mission Control - Find Server


By default, any locally running JVMs are listed. Remote JVMs (running as the same effective user as the user running JMC) must be set up to use a remote JMX agent. Then, click the New JVM Connection button and enter the network details.

Prior to the JDK 8u40 release, the JVM must also have been started with the flags: -XX:+UnlockCommercialFeatures -XX:+FlightRecorder.

Since the JDK 8u40 release, the Java Flight Recorder can be enabled during runtime.

The following are three ways to work with flight recordings in Java Mission Control:

  1. Inspect running recordings: Expand the node in the JVM browser to see running recordings. Figure 2-2 shows both a running continuous recording (with the infinity sign) and a timed profiling recording.

    Figure 2-2 Java Mission Control - Running Recordings


    Right-click any of the recordings to dump, edit, or stop it. Stopping a profiling recording still produces a recording file, whereas closing a profiling recording discards the recording.

  2. Dump continuous recordings: Right-click a continuous recording in the JVM Browser and select to dump it to a file. In the dialog box that appears, choose whether to dump all available data or only the last part of the recording, as shown in Figure 2-3.

    Figure 2-3 Java Mission Control - Dump Continuous Recordings

  3. Start a new recording: To start a new recording, right-click the JVM you want to record on and select Start Flight Recording. A window is displayed, as shown in Figure 2-4.

    Figure 2-4 Java Mission Control - Start Flight Recordings


    Select either Time fixed recording (profiling recording), or Continuous recording as shown in Figure 2-4. For continuous recordings, you also specify the maximum size or age of events you want to save.

    You can also select Event settings. There is an option to create your own templates, but for 99 percent of all use cases you want to select either the Continuous template (for very low overhead recordings) or the Profiling template (for more data and slightly more overhead). Note: The typical overhead for a profiling recording is about 2 percent.

    When done, click Next. The next screen, as shown in Figure 2-5, gives you a chance to modify the template for different use cases.

    Figure 2-5 Java Mission Control - Event Options for Profiling


    The default settings give a good balance between data and performance. In some cases, you may want to add extra events. For example, if you are investigating a memory leak or want to see the objects that take up the most Java heap, enable Heap Statistics. This will trigger two Old Collections at the start and end of the recording, so this will give some extra latency. You can also select to show all exceptions being thrown, even the ones that are caught. For some applications, this will generate a lot of events.

    The Threshold value is the minimum duration an event must have to be recorded. For example, by default, synchronization events longer than 10 ms are gathered; that is, if a thread waits for a lock for more than 10 ms, an event is saved. You can lower this value to get more detailed data for short contentions.

    The Thread Dump setting gives you an option to do periodic thread dumps. These will be normal textual thread dumps, like the ones you would get using the diagnostic command Thread.print, or by using the jstack tool. The thread dumps complement the events.
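For comparison, a similar textual thread dump can be produced programmatically with the standard ThreadMXBean API. This is an illustrative sketch, not what JFR does internally:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Produces a textual thread dump similar in spirit to the periodic thread
// dumps JFR can record, or to the Thread.print diagnostic command.
public class ThreadDumpDemo {
    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        // true, true => include locked-monitor and synchronizer information
        for (ThreadInfo info : bean.dumpAllThreads(true, true)) {
            System.out.println('"' + info.getThreadName() + '"'
                    + " state=" + info.getThreadState());
            for (StackTraceElement frame : info.getStackTrace()) {
                System.out.println("    at " + frame);
            }
        }
    }
}
```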

Use Startup Flags at the Command Line to Produce a Flight Recording

Use startup flags at the command line to produce a profiling recording or a continuous recording, or control recordings by using diagnostic commands.

For a complete description of JFR flags, see Advanced Runtime Options in the Java Platform, Standard Edition Tools Reference.

The following are three ways to use startup flags at the command line to produce a flight recording.

  1. Start a profiling recording: You can configure a time-fixed recording at the start of the application by using the -XX:StartFlightRecording option. Because JFR is a commercial feature, you must also specify the -XX:+UnlockCommercialFeatures option. The following example runs the MyApp application and starts a 60-second recording 20 seconds after JVM startup, which is saved to a file named myrecording.jfr:

    java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:StartFlightRecording=delay=20s,duration=60s,name=myrecording,filename=C:\TEMP\myrecording.jfr,settings=profile MyApp

    The settings parameter takes either the path to, or the name of, a template. Default templates are located in the jre/lib/jfr folder. The two standard templates are default, a low-overhead setting made primarily for continuous recordings, and profile, which gathers more data and is primarily for profiling recordings.

  2. Start a continuous recording: You can also start a continuous recording from the command line by using -XX:FlightRecorderOptions. These flags start a continuous recording that can later be dumped if needed. In the following example, the temporary data is saved to disk in the /tmp folder, and 6 hours of data is kept.

    java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:FlightRecorderOptions=defaultrecording=true,disk=true,repository=/tmp,maxage=6h,settings=default MyApp

    Note:

    When you actually dump the recording, you specify a new location for the dumped file, so the files in the repository are only temporary.

    To know more about configuring and managing Java Flight Recordings, see Java Flight Recorder Runtime Guide.

  3. Use diagnostic commands:

    You can also control recordings by using Java command-line diagnostic commands. The simplest way to execute a diagnostic command is to use the jcmd tool located in the Java installation directory. For more details, see The jcmd Utility.

Use Triggers for Automatic Recordings

You can set up Java Mission Control to automatically start or dump a flight recording if a condition is met. This is done from the JMX console. To start the JMX console, find your application in the JVM Browser, right-click it, and select Start JMX Browser.

Select the Triggers tab at the bottom of the screen, as shown in Figure 2-6.

Figure 2-6 Java Mission Control - Automatic Recordings


You can choose to create a trigger on any MBean in the application. There are several default triggers set up for common conditions, such as high CPU usage, deadlocked threads, or a live set that is too large. Select Add to choose any MBean in the application, including your own application-specific ones. When you select your trigger, you can also specify the conditions that must be met. For more information, click the question mark in the top right corner to see the built-in help.
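For example, an application can expose its own MBean for a trigger to watch. The class and object name below (WorkQueue, com.example:type=WorkQueue) are hypothetical, chosen only for illustration:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// A minimal application-specific MBean; once registered, its attributes
// show up in the JMX Console, where a trigger can watch them.
public class WorkQueue implements WorkQueueMBean {
    private volatile int depth;

    public int getDepth() { return depth; }           // attribute "Depth"
    public void setDepth(int depth) { this.depth = depth; }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        WorkQueue queue = new WorkQueue();
        ObjectName name = new ObjectName("com.example:type=WorkQueue");
        server.registerMBean(queue, name);
        queue.setDepth(42);
        // A JMC trigger could now fire when Depth exceeds a threshold.
        System.out.println("Depth via JMX: " + server.getAttribute(name, "Depth"));
    }
}

// Standard MBean naming convention: the interface is <ClassName>MBean.
interface WorkQueueMBean {
    int getDepth();
    void setDepth(int depth);
}
```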

Select the check boxes next to the triggers to have several triggers running.

Once you have selected your condition, click the Action tab. Then, select what to do when the condition is met. Finally, choose to either dump a continuous recording or to start a time-limited flight recording as shown in Figure 2-7.

Figure 2-7 Java Mission Control - Use Triggers



Inspect a Flight Recording

This section describes how to get a sample flight recording to inspect, and the various tabs in Java Mission Control that you can use to analyze it.

The following sections are described:

How to Get a Sample JFR to Inspect

After you create a flight recording, you can open it in Mission Control.

An easy way to look at a flight recording is as follows:

  • Open Mission Control and select the JVM Browser tab.
  • Select The JVM Running Mission Control option to create a short recording.

    Open a flight recording to see several main tabs, such as General, Memory, Code, Threads, I/O, System, and Events. You may also see other main tabs if any plug-ins are installed. Each of these main tabs has subtabs. Click the question mark to view the built-in help section for the main tabs and subtabs.

Range Navigator

Inspect the flight recordings using the range navigator.

Each tab has a range navigator at the top.

Figure 2-8 Inspect Flight Recordings - Range Navigator



The vertical bars in Figure 2-8 represent the events in the recording. The higher the bar, the more events there are at that time. You can drag the edges of the selected time range to zoom in or out in the recording. Double-click the range navigator to zoom out and view the entire recording. Select the Synchronize Selection check box for all the subtabs to use the same zoom level.

See Using the Range Navigator in the built-in help for more information. The events are named as per the tab name.

General Tab

Inspect flight recordings in the General tab.

The General Tab contains a few subtabs that describe the general application. The first subtab is Overview, which shows some basic information such as the maximum heap usage, total CPU usage, and GC pause time, as shown in Figure 2-9.

Figure 2-9 Inspect Flight Recordings - General Tab


Also, look at the CPU usage over time, both the Application Usage and the Machine Total. This tab is a good place to start when something goes wrong in the application. For example, watch for CPU usage that spikes near 100 percent, CPU usage that is too low, or garbage collection pauses that are too long.

Note: A profiling recording started with Heap Statistics triggers two old collections, at the start and the end of the recording, which may be longer than the rest.

The JVM Information subtab shows JVM information, including the start parameters. The System Properties subtab shows all system properties that are set, and the Recording subtab shows information about the specific recording, such as which events are turned on. Click the question marks for built-in detailed information about all tabs and subtabs.

Memory Tab

Inspect the flight recordings in the Memory tab.

The Memory tab contains information about garbage collections, allocation patterns, and object statistics. This tab is especially helpful for debugging memory leaks and for tuning the GC.

The Overview tab shows some general information about the memory usage and some statistics over garbage collections. Note: The graph scale in the Overview tab goes up to the available physical memory in the machine; therefore, in some cases the Java heap may take up only a small section at the bottom.

The Memory tab has the following three subtabs:

  • Garbage Collection tab: The Garbage Collection tab shows memory usage over time and information about all garbage collections.

    Figure 2-10 Inspect Flight Recordings - Garbage Collections


    As shown in Figure 2-10, the spiky pattern of the heap usage is perfectly normal. In most applications, temporary objects are allocated all the time. Once a condition is met, a Garbage Collection (GC) is triggered and all the objects no longer used are removed. Therefore, the heap usage increases steadily until a GC is triggered, then it drops suddenly.

    Most GC algorithms in Java also perform smaller, partial collections. An old collection goes through the entire Java heap, while the other collections might look at only part of the heap. The heap usage after an old collection is the memory the application is actually using, which is called the live set.

    A flight recording generated with Heap Statistics enabled starts and ends with an old GC. Select that old GC in the list of GCs, and then choose the General tab to see the GC Reason: Heap Inspection Initiated GC. These GCs usually take slightly longer than other GCs.

    A good way to look for memory leaks is to compare the Heap After GC value in the first and last old GC. If this value increases over time, there may be a memory leak.

    The GC Times tab has information about the time spent doing GCs and time when the application is completely paused due to GCs. The GC Configuration tab has GC configuration information. For more details about these tabs, click the question mark in the top right corner to see the built-in help.

  • Allocations tab: Figure 2-11 shows a selection of all memory allocations made. Small objects in Java are allocated in a TLAB (Thread Local Allocation Buffer), a small memory area where new objects are allocated. Once a TLAB is full, the thread gets a new one. Logging every memory allocation would add too much overhead; therefore, only the allocations that trigger a new TLAB are logged. Larger objects are allocated outside TLABs, and these allocations are also logged.

    Figure 2-11 Inspect Flight Recordings - Allocations Tab


    To estimate the memory allocated for each class, select the Allocation in new TLAB tab and then the Allocations tab. These are the object allocations that happened to trigger new TLABs. The exact amount of memory allocated as, for example, char arrays is not known, but the total size of the TLABs that these allocations triggered is a good estimate.

    Figure 2-11 is an example in which char arrays allocate the most memory. Click one of the classes to see the stack traces of these allocations. The example recording shows that 44 percent of all allocation pressure comes from char arrays and 27 percent comes from Arrays.copyOfRange, which is called from StringBuilder.toString. StringBuilder.toString is in turn usually called by Throwable.printStackTrace and StackTraceElement.toString. Expand further to see how these methods are called.

    Note: The more temporary objects the application allocates, the more the application must garbage collect. The Allocations tab helps you find where the most allocations happen so that you can reduce the GC pressure in your application. Look at the Allocation outside TLAB tab to see large memory allocations, which usually represent less allocation pressure than those in the Allocation in new TLAB tab.

  • Object Statistics tab: The Object Statistics tab shows the classes that have the largest live set. See the description of the Garbage Collection subtab for an explanation of the live set. Figure 2-12 shows heap statistics for a flight recording; enable Heap Statistics for a flight recording to gather this data. The Top Growers tab at the bottom shows how much each object type increased in size during the flight recording. A specific object type that increases a lot in size may indicate a memory leak; however, a small variance is normal. In particular, investigate the top growers among non-standard Java classes.

    Figure 2-12 Inspect Flight Recordings - Object Statistics Tab

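The live-set check described for the Garbage Collection subtab (comparing heap usage after old collections) can be roughly approximated in code with the MemoryMXBean. This is an approximation only, because System.gc() is merely a request to the JVM:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

// Rough live-set estimate: heap usage measured right after a requested GC.
// JFR's "Heap After GC" values from real old collections are more reliable.
public class LiveSetEstimate {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        byte[][] garbage = new byte[1000][];
        for (int i = 0; i < garbage.length; i++) {
            garbage[i] = new byte[10_000]; // create some temporary objects
        }
        garbage = null; // make them unreachable
        long before = memory.getHeapMemoryUsage().getUsed();
        System.gc(); // only a request; the JVM may ignore it
        long after = memory.getHeapMemoryUsage().getUsed();
        System.out.println("Heap used before GC: " + before);
        System.out.println("Heap used after GC (approx. live set): " + after);
    }
}
```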

Code Tab

Inspect flight recordings in the Code tab.

The Code tab contains information about where the application spends most of its time. The Overview subtab shows the packages and classes in which the most execution time was spent. This data comes from sampling: JFR takes samples of running threads at intervals. Only threads running actual code are sampled; threads that are sleeping, waiting for locks, or waiting for I/O are not shown.

To see more details about the application time for running the actual code, look at the Hot Methods subtab.

Figure 2-13 Inspect Flight Recordings - Code Tab


Figure 2-13 shows the methods that are sampled the most. Expand the samples to see where they are called from. For example, if HashMap.getEntry is sampled a lot, expand this node until you find the method that calls it the most. This is the best tab for finding bottlenecks in the application.

The Call Tree subtab shows the same events, but starts from the bottom; for example, from Thread.run.

The Exceptions subtab shows any exceptions thrown. By default, only errors are logged, but you can change this setting to include all exceptions when starting a new recording.

The Compilations subtab shows the methods that were compiled over time as the application was running.

The Class Loading subtab shows the number of loaded classes, and the actual loaded and unloaded classes, over time. This subtab shows information only if class loading events were enabled at the start of the recording.
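The counters behind the Class Loading subtab are also exposed through the platform ClassLoadingMXBean, as this small sketch shows:

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;

// Prints the same class-loading counters that the Class Loading subtab plots.
public class ClassLoadingStats {
    public static void main(String[] args) {
        ClassLoadingMXBean cl = ManagementFactory.getClassLoadingMXBean();
        System.out.println("Currently loaded: " + cl.getLoadedClassCount());
        System.out.println("Total loaded:     " + cl.getTotalLoadedClassCount());
        System.out.println("Unloaded:         " + cl.getUnloadedClassCount());
    }
}
```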

For more details about these tabs, click the question mark in the top right corner to see the built-in help.

Threads Tab

Inspect flight recordings in the Threads tab.

The Threads tab contains information about threads, lock contention and other latencies.

The Overview subtab shows CPU usage and the number of threads over time.

The Hot Threads subtab shows the threads that execute the most code. This information is based on the same sampling data as the Hot Methods subtab in the Code tab.

The Contention tab is useful for finding bottlenecks due to lock contention.

Figure 2-14 Inspect Flight Recordings - Contention Tab


Figure 2-14 shows the objects that threads wait for the most due to synchronization. Select a class to see the stack traces of the wait time for each object. These pauses are generally caused by synchronized methods where another thread holds the lock.

Note:

By default, only synchronization events longer than 10 ms will be recorded, but you can lower this threshold when starting a recording.
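To see what kind of workload produces such contention events, consider this illustrative sketch: one thread holds a lock while another blocks on it for longer than the default 10 ms threshold, so the wait would be recorded as a synchronization event:

```java
// A tiny workload that produces the synchronization pauses the Contention
// tab visualizes: one thread holds a lock while another blocks on it.
public class ContentionDemo {
    private static final Object LOCK = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread holder = new Thread(() -> {
            synchronized (LOCK) {
                try {
                    Thread.sleep(50); // hold the lock for about 50 ms
                } catch (InterruptedException ignored) { }
            }
        });
        holder.start();
        Thread.sleep(10); // give the holder time to acquire the lock first

        long start = System.nanoTime();
        synchronized (LOCK) { // blocks until the holder releases the lock
            long waitedMs = (System.nanoTime() - start) / 1_000_000;
            // With the default 10 ms threshold, this wait would be recorded.
            System.out.println("Blocked for ~" + waitedMs + " ms");
        }
        holder.join();
    }
}
```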

The Latencies subtab shows other sources of latencies; for example, calling sleep or wait, reading from sockets, or waiting for file I/O.

The Thread Dumps subtab shows the periodic thread dumps that can be triggered in the recording.

The Lock Instances subtab shows the exact instances of objects that are waited upon the most due to synchronization.

For more details about these tabs, click the question mark in the top right corner to see the built-in help.

I/O Tab

The I/O tab shows information on file reads, file writes, socket reads, and socket writes.

Depending on the application, this tab is especially helpful when an I/O operation takes a long time.

Note:

By default, only events longer than 10 ms are shown. The thresholds can be modified when creating a new recording.

System Tab

The System tab gives detailed information about the CPU, Memory and OS of the machine running the application.

It also shows environment variables and any other processes running at the same time as the JVM.

Events Tab

The Events tab shows all the events in the recording.

This is an advanced tab that can be used in many different ways. For more details about these tabs, click the question mark in the top right corner to see the built-in help.

The jcmd Utility

The jcmd utility is used to send diagnostic command requests to the JVM. These requests are useful for controlling Java Flight Recordings and for troubleshooting and diagnosing the JVM and Java applications.

jcmd must be used on the same machine where the JVM is running, and have the same effective user and group identifiers that were used to launch the JVM.

See jcmd in the Java Platform, Standard Edition Tools Reference.

A special command, jcmd <process id/main class> PerfCounter.print, prints all performance counters in the process.

The command jcmd <process id/main class> <command> [options] sends the actual command to the JVM.

The following example shows diagnostic command requests to the JVM using the jcmd utility.

> jcmd
5485 sun.tools.jcmd.JCmd
2125 MyProgram
 
> jcmd MyProgram help (or "jcmd 2125 help")
2125:
The following commands are available:
JFR.configure
JFR.stop
JFR.start
JFR.dump
JFR.check
VM.log
VM.native_memory
VM.check_commercial_features
VM.unlock_commercial_features
ManagementAgent.status
ManagementAgent.stop
ManagementAgent.start_local
ManagementAgent.start
Compiler.directives_clear
Compiler.directives_remove
Compiler.directives_add
Compiler.directives_print
VM.print_touched_methods
Compiler.codecache
Compiler.codelist
Compiler.queue
VM.classloader_stats
Thread.print
JVMTI.data_dump
JVMTI.agent_load
VM.stringtable
VM.symboltable
VM.class_hierarchy
GC.class_stats
GC.class_histogram
GC.heap_dump
GC.finalizer_info
GC.heap_info
GC.run_finalization
GC.run
VM.info
VM.uptime
VM.dynlibs
VM.set_flag
VM.flags
VM.system_properties
VM.command_line
VM.version
help
For more information about a specific command use 'help <command>'. 

> jcmd MyProgram help Thread.print
2125:
Thread.print
Print all threads with stacktraces.
 
Impact: Medium: Depends on the number of threads.
 
Permission: java.lang.management.ManagementPermission(monitor)
 
Syntax : Thread.print [options]
 
Options: (options must be specified using the <key> or <key>=<value> syntax)
        -l : [optional] print java.util.concurrent locks (BOOLEAN, false)
 
> jcmd MyProgram Thread.print
2125:
2014-07-04 15:58:56
Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.0-b69 mixed mode):
...

The following sections describe some useful commands and troubleshooting techniques with jcmd utility:

Useful Commands for jcmd Utility

The available diagnostic commands may differ between versions of the HotSpot VM; therefore, running jcmd <process id/main class> help is the best way to see all available commands.

The following are some of the most useful commands in the jcmd tool. Remember that you can always use jcmd <process id/main class> help <command> to get any additional options for these commands:

  • Print full HotSpot and JDK version ID
    jcmd <process id/main class> VM.version
  • Print all the system properties set for a VM

    There can be several hundred lines of information displayed.

    jcmd <process id/main class> VM.system_properties

  • Print all the flags used for a VM

    Even if you have provided no flags, some of the default values will be printed, for example, the initial and maximum heap size.

    jcmd <process id/main class> VM.flags

  • Print the uptime in seconds

    jcmd <process id/main class> VM.uptime

  • Create a class histogram

    The results can be rather verbose, so you can redirect the output to a file. Both internal and application-specific classes are included in the list. Classes are listed in descending order of memory use, with the classes taking the most memory at the top.

    jcmd <process id/main class> GC.class_histogram

  • Create a heap dump

    jcmd <process id/main class> GC.heap_dump filename=Myheapdump

    This is the same as using jmap -dump:file=<file> <pid>, but jcmd is the recommended tool to use.

  • Create a heap histogram

    jcmd <process id/main class> GC.class_histogram filename=Myheaphistogram

    This is the same as using jmap -histo <pid>, but jcmd is the recommended tool to use.

  • Print all threads with stack traces

    jcmd <process id/main class> Thread.print
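The commands above can be collected in one pass with a small wrapper script. This is a minimal POSIX shell sketch, not part of the JDK; the diag-*.txt file names are hypothetical, and the JCMD variable (defaulting to jcmd on the PATH) exists only so the sketch can be dry-run.

```shell
# Sketch: capture the useful jcmd diagnostics listed above for one JVM.
# Assumptions: POSIX shell; jcmd on the PATH (override with JCMD for a dry run).
collect_diagnostics() {
  target="$1"   # pid or main class, as reported by `jcmd -l`
  for cmd in VM.version VM.system_properties VM.flags VM.uptime \
             GC.class_histogram Thread.print; do
    "${JCMD:-jcmd}" "$target" "$cmd" > "diag-$cmd.txt"   # one file per command
  done
}
```

For example, collect_diagnostics 2125 leaves files such as diag-VM.flags.txt and diag-Thread.print.txt in the current directory.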

Troubleshoot with jcmd Utility

Use the jcmd utility to troubleshoot.

The jcmd utility provides the following troubleshooting options:

  • Start a recording

    For example, to start a 2-minute recording (after a 20-second delay) on the running Java process with the identifier 7060 and save it to C:\TEMP\myrecording.jfr, use the following:

    jcmd 7060 JFR.start name=MyRecording settings=profile delay=20s duration=2m filename=C:\TEMP\myrecording.jfr

  • Check a recording

    The JFR.check diagnostic command checks a running recording. For example:

    jcmd 7060 JFR.check

  • Stop a recording

    The JFR.stop diagnostic command stops a running recording and has the option to discard the recording data. For example:

    jcmd 7060 JFR.stop

  • Dump a recording

    The JFR.dump diagnostic command stops a running recording and has the option to dump recordings to a file. For example:

    jcmd 7060 JFR.dump name=MyRecording filename=C:\TEMP\myrecording.jfr

  • Create a heap dump

    The preferred way to create a heap dump is

    jcmd <pid> GC.heap_dump filename=Myheapdump

  • Create a heap histogram

    The preferred way to create a heap histogram is

    jcmd <pid> GC.class_histogram filename=Myheaphistogram
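Taken together, the recording options above form one workflow: start, check, dump, stop. The following is a sketch of that sequence as a shell function; the recording name and file name come from the examples above, and the JCMD variable (defaulting to jcmd on the PATH) exists only so the function can be exercised without a live JVM.

```shell
# Sketch: a complete JFR session driven by jcmd.
# Assumptions: POSIX shell; jcmd on the PATH (override with JCMD for a dry run).
jfr_session() {
  pid="$1"; file="$2"
  "${JCMD:-jcmd}" "$pid" JFR.start name=MyRecording settings=profile duration=2m filename="$file"
  "${JCMD:-jcmd}" "$pid" JFR.check                                   # confirm it is running
  "${JCMD:-jcmd}" "$pid" JFR.dump name=MyRecording filename="$file"  # snapshot to disk
  "${JCMD:-jcmd}" "$pid" JFR.stop name=MyRecording                   # stop the recording
}
```

For example, jfr_session 7060 myrecording.jfr drives the same commands shown individually above.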

Native Memory Tracking

Native Memory Tracking (NMT) is a Java HotSpot VM feature that tracks the internal memory usage of the Java HotSpot VM.

For details about NMT scope, how to enable NMT, and other additional usage details, see Native Memory Tracking in the Java Platform, Standard Edition Java Virtual Machine Guide.

Since NMT doesn't track memory allocations by non-JVM code, you may have to use tools supported by the operating system to detect memory leaks in native code.

The following sections describe how to monitor VM internal memory allocations and diagnose VM memory leaks.

Use NMT to Detect a Memory Leak

Procedure to use Native Memory Tracking to detect memory leaks.

Follow these steps to detect a memory leak:

  1. Start the JVM with summary or detail tracking, using the command-line option -XX:NativeMemoryTracking=summary or -XX:NativeMemoryTracking=detail.
  2. Establish an early baseline. Use the NMT baseline feature to get a baseline to compare against during development and maintenance by running: jcmd <pid> VM.native_memory baseline.
  3. Monitor memory changes by running: jcmd <pid> VM.native_memory summary.diff or jcmd <pid> VM.native_memory detail.diff.
  4. If the application leaks only a small amount of memory, it may take a while to show up.
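The steps above can be sketched as a single script. MyApp is a hypothetical main class, and the JAVA and JCMD variables (defaulting to the binaries on the PATH) exist only so the sketch can be dry-run:

```shell
# Sketch of the NMT leak-detection steps (assumptions: POSIX shell, JDK on PATH).
nmt_leak_check() {
  "${JAVA:-java}" -XX:NativeMemoryTracking=detail "$1" &    # step 1: enable NMT
  pid=$!
  "${JCMD:-jcmd}" "$pid" VM.native_memory baseline          # step 2: early baseline
  # ... exercise the application, then rerun the diff as needed:
  "${JCMD:-jcmd}" "$pid" VM.native_memory detail.diff       # step 3: watch for growth
}
```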

How to Monitor VM Internal Memory

Native Memory Tracking can be set up to monitor memory and ensure that an application does not start to use increasing amounts of memory during development or maintenance.

Used with the jcmd utility, Native Memory Tracking can be set up to monitor memory and ensure that an application does not start to use increasing amounts of memory during development or maintenance. See Table 2-1 for details about NMT memory categories.

The following sections describe how to get summary or detail data for NMT and how to interpret the sample output.

  • Interpret sample output: From the sample output below, you will see reserved and committed memory. Note that only committed memory is actually used. For example, if you run with -Xms100m -Xmx1000m, the JVM will reserve 1000 MB for the Java heap. Because the initial heap size is only 100 MB, only 100 MB will be committed to begin with. On a 64-bit machine, where address space is almost unlimited, there is no problem if a JVM reserves a lot of memory. The problem arises if more and more memory gets committed, which can lead to swapping or native out-of-memory (OOM) situations.

    An arena is a chunk of memory allocated using malloc. Memory is freed from these chunks in bulk, when exiting a scope or leaving an area of code. The chunks can be reused in other subsystems to hold temporary memory, for example, per-thread allocations. The arena malloc policy ensures that there is no memory leakage, so arenas are tracked as a whole rather than as individual objects. Some amount of initial memory cannot be tracked.

    Enabling NMT results in a 5-10 percent JVM performance drop, and NMT adds a malloc header of 2 machine words to all malloc'd memory. NMT's own memory usage is also tracked by NMT.

    Total:  reserved=664192KB,  committed=253120KB                                           <--- total memory tracked by Native Memory Tracking
     
    -                 Java Heap (reserved=516096KB, committed=204800KB)                      <--- Java Heap
                                (mmap: reserved=516096KB, committed=204800KB)
     
    -                     Class (reserved=6568KB, committed=4140KB)                          <--- class metadata
                                (classes #665)                                               <--- number of loaded classes
                                (malloc=424KB, #1000)                                        <--- malloc'd memory, #number of malloc
                                (mmap: reserved=6144KB, committed=3716KB)
     
    -                    Thread (reserved=6868KB, committed=6868KB)
                                (thread #15)                                                 <--- number of threads
                                (stack: reserved=6780KB, committed=6780KB)                   <--- memory used by thread stacks
                                (malloc=27KB, #66)
                                (arena=61KB, #30)                                            <--- resource and handle areas
     
    -                      Code (reserved=102414KB, committed=6314KB)
                                (malloc=2574KB, #74316)
                                (mmap: reserved=99840KB, committed=3740KB)
     
    -                        GC (reserved=26154KB, committed=24938KB)
                                (malloc=486KB, #110)
                                (mmap: reserved=25668KB, committed=24452KB)
     
    -                  Compiler (reserved=106KB, committed=106KB)
                                (malloc=7KB, #90)
                                (arena=99KB, #3)
     
    -                  Internal (reserved=586KB, committed=554KB)
                                (malloc=554KB, #1677)
                                (mmap: reserved=32KB, committed=0KB)
     
    -                    Symbol (reserved=906KB, committed=906KB)
                                (malloc=514KB, #2736)
                                (arena=392KB, #1)
     
    -           Memory Tracking (reserved=3184KB, committed=3184KB)
                                (malloc=3184KB, #300)
     
    -        Pooled Free Chunks (reserved=1276KB, committed=1276KB)
                                (malloc=1276KB)
     
    -                   Unknown (reserved=33KB, committed=33KB)
                                (arena=33KB, #1)
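A summary report like the one above is easy to post-process. The following is a minimal sketch, assuming the report has been saved to a file (the nmt-summary.txt name is hypothetical), that totals the committed sizes of the top-level category lines:

```shell
# Sketch: sum the committed= figures of the per-category lines (those
# beginning with "-") in a saved `jcmd <pid> VM.native_memory summary` report.
committed_total() {
  awk '/^-/ && match($0, /committed=[0-9]+KB/) {
         s += substr($0, RSTART + 10, RLENGTH - 12)   # digits between "=" and "KB"
       }
       END { print s "KB" }' "$1"
}
```

Usage: jcmd <pid> VM.native_memory summary > nmt-summary.txt && committed_total nmt-summary.txt.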
    
  • Get detail data: To get a more detailed view of native memory usage, start the JVM with the command-line option -XX:NativeMemoryTracking=detail. This tracks exactly which methods allocate the most memory. As noted earlier, enabling NMT costs 5-10 percent of JVM performance and adds a malloc header of 2 machine words to all malloc'd memory; NMT's own memory usage is also tracked by NMT.

    The following example shows sample output of the virtual memory map when the tracking level is set to detail. One way to get this sample output is to run: jcmd <pid> VM.native_memory detail.

    Virtual memory map:
     
    [0x8f1c1000 - 0x8f467000] reserved 2712KB for Thread Stack
                    from [Thread::record_stack_base_and_size()+0xca]
            [0x8f1c1000 - 0x8f467000] committed 2712KB from [Thread::record_stack_base_and_size()+0xca]
     
    [0x8f585000 - 0x8f729000] reserved 1680KB for Thread Stack
                    from [Thread::record_stack_base_and_size()+0xca]
            [0x8f585000 - 0x8f729000] committed 1680KB from [Thread::record_stack_base_and_size()+0xca]
     
    [0x8f930000 - 0x90100000] reserved 8000KB for GC
                    from [ReservedSpace::initialize(unsigned int, unsigned int, bool, char*, unsigned int, bool)+0x555]
            [0x8f930000 - 0x90100000] committed 8000KB from [PSVirtualSpace::expand_by(unsigned int)+0x95]
     
    [0x902dd000 - 0x9127d000] reserved 16000KB for GC
                    from [ReservedSpace::initialize(unsigned int, unsigned int, bool, char*, unsigned int, bool)+0x555]
            [0x902dd000 - 0x9127d000] committed 16000KB from [os::pd_commit_memory(char*, unsigned int, unsigned int, bool)+0x36]
     
    [0x9127d000 - 0x91400000] reserved 1548KB for Thread Stack
                    from [Thread::record_stack_base_and_size()+0xca]
            [0x9127d000 - 0x91400000] committed 1548KB from [Thread::record_stack_base_and_size()+0xca]
     
    [0x91400000 - 0xb0c00000] reserved 516096KB for Java Heap                                                                            <--- reserved memory range
                    from [ReservedSpace::initialize(unsigned int, unsigned int, bool, char*, unsigned int, bool)+0x190]                  <--- callsite that reserves the memory
            [0x91400000 - 0x93400000] committed 32768KB from [VirtualSpace::initialize(ReservedSpace, unsigned int)+0x3e8]               <--- committed memory range and its callsite
            [0xa6400000 - 0xb0c00000] committed 172032KB from [PSVirtualSpace::expand_by(unsigned int)+0x95]                             <--- committed memory range and its callsite
     
    [0xb0c61000 - 0xb0ce2000] reserved 516KB for Thread Stack
                    from [Thread::record_stack_base_and_size()+0xca]
            [0xb0c61000 - 0xb0ce2000] committed 516KB from [Thread::record_stack_base_and_size()+0xca]
     
    [0xb0ce2000 - 0xb0e83000] reserved 1668KB for GC
                    from [ReservedSpace::initialize(unsigned int, unsigned int, bool, char*, unsigned int, bool)+0x555]
            [0xb0ce2000 - 0xb0cf0000] committed 56KB from [PSVirtualSpace::expand_by(unsigned int)+0x95]
            [0xb0d88000 - 0xb0d96000] committed 56KB from [CardTableModRefBS::resize_covered_region(MemRegion)+0xebf]
            [0xb0e2e000 - 0xb0e83000] committed 340KB from [CardTableModRefBS::resize_covered_region(MemRegion)+0xebf]
     
    [0xb0e83000 - 0xb7003000] reserved 99840KB for Code
                    from [ReservedSpace::initialize(unsigned int, unsigned int, bool, char*, unsigned int, bool)+0x555]
            [0xb0e83000 - 0xb0e92000] committed 60KB from [VirtualSpace::initialize(ReservedSpace, unsigned int)+0x3e8]
            [0xb1003000 - 0xb139b000] committed 3680KB from [VirtualSpace::initialize(ReservedSpace, unsigned int)+0x37a]
     
    [0xb7003000 - 0xb7603000] reserved 6144KB for Class
                    from [ReservedSpace::initialize(unsigned int, unsigned int, bool, char*, unsigned int, bool)+0x555]
            [0xb7003000 - 0xb73a4000] committed 3716KB from [VirtualSpace::initialize(ReservedSpace, unsigned int)+0x37a]
     
    [0xb7603000 - 0xb760b000] reserved 32KB for Internal
                    from [PerfMemory::create_memory_region(unsigned int)+0x8ba]
     
    [0xb770b000 - 0xb775c000] reserved 324KB for Thread Stack
                    from [Thread::record_stack_base_and_size()+0xca]
            [0xb770b000 - 0xb775c000] committed 324KB from [Thread::record_stack_base_and_size()+0xca]
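Detail maps like the one above can also be post-processed. As a minimal sketch (the nmt-detail.txt file name is hypothetical), the reserved thread-stack ranges can be totalled like this:

```shell
# Sketch: sum the "reserved ...KB for Thread Stack" lines of a saved
# `jcmd <pid> VM.native_memory detail` virtual memory map.
thread_stack_total() {
  awk '/reserved [0-9]+KB for Thread Stack/ {
         sub(/KB/, "", $5); s += $5        # $5 is the size field, e.g. "2712KB"
       }
       END { print s "KB" }' "$1"
}
```

For the map above, the five thread-stack ranges (2712 + 1680 + 1548 + 516 + 324) total 6780KB, which agrees with the stack: reserved=6780KB figure in the summary output earlier in this section.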
    
  • Get diff from NMT baseline: For both summary and detail level tracking, you can set a baseline after the application is up and running. Do this by running jcmd <pid> VM.native_memory baseline after some warm-up of the application. Then, you can run jcmd <pid> VM.native_memory summary.diff or jcmd <pid> VM.native_memory detail.diff.

    The following example shows sample output of the summary difference in native memory usage since the baseline was set. Comparing a report against a baseline in this way is a good way to find memory leaks.

    Total:  reserved=664624KB  -20610KB, committed=254344KB -20610KB                         <--- total memory changes vs. earlier baseline. '+'=increase '-'=decrease
     
    -                 Java Heap (reserved=516096KB, committed=204800KB)
                                (mmap: reserved=516096KB, committed=204800KB)
     
    -                     Class (reserved=6578KB +3KB, committed=4530KB +3KB)
                                (classes #668 +3)                                            <--- 3 more classes loaded
                                (malloc=434KB +3KB, #930 -7)                                 <--- malloc'd memory increased by 3KB, but number of malloc count decreased by 7
                                (mmap: reserved=6144KB, committed=4096KB)
     
    -                    Thread (reserved=60KB -1129KB, committed=60KB -1129KB)
                                (thread #16 +1)                                              <--- one more thread
                                (stack: reserved=7104KB +324KB, committed=7104KB +324KB)
                                (malloc=29KB +2KB, #70 +4)
                                (arena=31KB -1131KB, #32 +2)                                 <--- 2 more arenas (one more resource area and one more handle area)
     
    -                      Code (reserved=102328KB +133KB, committed=6640KB +133KB)
                                (malloc=2488KB +133KB, #72694 +4287)
                                (mmap: reserved=99840KB, committed=4152KB)
     
    -                        GC (reserved=26154KB, committed=24938KB)
                                (malloc=486KB, #110)
                                (mmap: reserved=25668KB, committed=24452KB)
     
    -                  Compiler (reserved=106KB, committed=106KB)
                                (malloc=7KB, #93)
                                (arena=99KB, #3)
     
    -                  Internal (reserved=590KB +35KB, committed=558KB +35KB)
                                (malloc=558KB +35KB, #1699 +20)
                                (mmap: reserved=32KB, committed=0KB)
     
    -                    Symbol (reserved=911KB +5KB, committed=911KB +5KB)
                                (malloc=519KB +5KB, #2921 +180)
                                (arena=392KB, #1)
     
    -           Memory Tracking (reserved=2073KB -887KB, committed=2073KB -887KB)
                                (malloc=2073KB -887KB, #84 -210)
     
    -        Pooled Free Chunks (reserved=2624KB -15876KB, committed=2624KB -15876KB)
                                (malloc=2624KB -15876KB)
    

    The following example shows sample output of the detail difference in native memory usage since the baseline was set.

    Details:
     
    [0x01195652] ChunkPool::allocate(unsigned int)+0xe2
                                (malloc=482KB -481KB, #8 -8)
     
    [0x01195652] ChunkPool::allocate(unsigned int)+0xe2
                                (malloc=2786KB -19742KB, #134 -618)
     
    [0x013bd432] CodeBlob::set_oop_maps(OopMapSet*)+0xa2
                                (malloc=591KB +6KB, #681 +37)
     
    [0x013c12b1] CodeBuffer::block_comment(int, char const*)+0x21                <--- [callsite address] method name + offset
                                (malloc=562KB +33KB, #35940 +2125)               <--- malloc'd amount, increased by 33KB #malloc count, increased by 2125
     
    [0x0145f172] ConstantPool::ConstantPool(Array<unsigned char>*)+0x62
                                (malloc=69KB +2KB, #610 +15)
     
    ...
     
    [0x01aa3ee2] Thread::allocate(unsigned int, bool, unsigned short)+0x122
                                (malloc=21KB +2KB, #13 +1)
     
    [0x01aa73ca] Thread::record_stack_base_and_size()+0xca
                                (mmap: reserved=7104KB +324KB, committed=7104KB +324KB)
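A summary.diff report such as the one earlier in this example can be scanned mechanically for growth. The following minimal sketch (the saved file name is hypothetical) prints only the categories whose committed size has increased since the baseline:

```shell
# Sketch: flag category lines of a saved summary.diff report whose
# committed size grew since the baseline (e.g. "committed=4530KB +3KB").
report_growth() {
  grep -E '^-.*committed=[0-9]+KB \+[0-9]+KB' "$1"
}
```

A category that keeps appearing in this output across successive diffs is a good memory-leak suspect.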
    

JConsole

Another useful tool included in the JDK download is the JConsole monitoring tool. This tool is JMX-compliant and uses the built-in JMX instrumentation in the JVM to provide information about the performance and resource consumption of running applications.

Although the tool is included in the JDK download, it can also be used to monitor and manage applications deployed with the JRE.

The JConsole tool can attach to any Java application in order to display useful information such as thread usage, memory consumption, and details about class loading, runtime compilation, and the operating system.

This output helps with high-level diagnosis of problems such as memory leaks, excessive class loading, and running threads. It can also be useful for tuning and heap sizing.

In addition to monitoring, JConsole can be used to dynamically change several parameters in the running system. For example, the setting of the -verbose:gc option can be changed so that garbage collection trace output can be dynamically enabled or disabled for a running application.

The following sections describe troubleshooting techniques with the JConsole tool.

Troubleshoot with JConsole Tool

Use the JConsole tool to monitor data.

The following list provides an idea of the data that can be monitored using the JConsole tool. Each heading corresponds to a tab pane in the tool.

  • Overview

    This pane displays graphs showing heap memory usage, number of threads, number of classes, and CPU usage over time. This overview allows you to visualize the activity of several resources at once.

  • Memory

    • For a selected memory area (heap, non-heap, various memory pools):

      • Graph showing memory usage over time

      • Current memory size

      • Amount of committed memory

      • Maximum memory size

    • Garbage collector information, including the number of collections performed, and the total time spent performing garbage collection

    • Graph showing percentage of heap and non-heap memory currently used

    In addition, on this pane you can request garbage collection to be performed.

  • Threads

    • Graph showing thread usage over time.

    • Live threads: Current number of live threads.

    • Peak: Highest number of live threads since the JVM started.

    • For a selected thread, the name, state, and stack trace, as well as, for a blocked thread, the synchronizer that the thread is waiting to acquire, and the thread owning the lock.

    • The Deadlock Detection button sends a request to the target application to perform deadlock detection and displays each deadlock cycle in a separate tab.

  • Classes

    • Graph showing the number of loaded classes over time

    • Number of classes currently loaded into memory

    • Total number of classes loaded into memory since the JVM started, including those subsequently unloaded

    • Total number of classes unloaded from memory since the JVM started

  • VM Summary

    • General information, such as the JConsole connection data, uptime for the JVM, CPU time consumed by the JVM, compiler name, total compile time, and so on.

    • Thread and class summary information

    • Memory and garbage collection information, including number of objects pending finalization, and so on

    • Information about the operating system, including physical characteristics, the amount of virtual memory for the running process, and swap space

    • Information about the JVM itself, such as arguments, and class path

  • MBeans

    This pane displays a tree structure showing all platform and application MBeans that are registered in the connected JMX agent. When you select an MBean in the tree, its attributes, operations, notifications, and other information are displayed.

    • You can invoke operations, if any. For example, the operation dumpHeap for the HotSpotDiagnostic MBean, which is in the com.sun.management domain, performs a heap dump. The input parameter for this operation is the pathname of the heap dump file on the machine where the target VM is running.

    • You can set the value of writable attributes. For example, you can set, unset, or change the value of certain VM flags by invoking the setVMOption operation of the HotSpotDiagnostic MBean. The flags are indicated by the list of values of the DiagnosticOptions attribute.

    • You can subscribe to notifications, if any, by using the Subscribe and Unsubscribe buttons.

Monitor Local and Remote Applications with JConsole

JConsole can monitor both local applications and remote applications. If you start the tool with an argument specifying a JMX agent to connect to, then the tool will automatically start monitoring the specified application.

To monitor a local application, execute the command jconsole pid, where pid is the process ID of the application.

To monitor a remote application, execute the command jconsole hostname:portnumber, where hostname is the name of the host running the application, and portnumber is the port number you specified when you enabled the JMX agent.

If you execute the jconsole command without arguments, the tool will start by displaying the New Connection window, where you specify the local or remote process to be monitored. You can connect to a different host at any time by using the Connection menu.
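The two invocation forms can be sketched as a small wrapper. This is illustrative only; the monitor function name is hypothetical, and JCONSOLE (defaulting to jconsole on the PATH) exists so the sketch can be dry-run:

```shell
# Sketch: choose the local or remote jconsole invocation described above.
monitor() {
  if [ $# -eq 1 ]; then
    "${JCONSOLE:-jconsole}" "$1"        # local:  jconsole <pid>
  else
    "${JCONSOLE:-jconsole}" "$1:$2"     # remote: jconsole <hostname>:<portnumber>
  fi
}
```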

With the latest JDK releases, no option is necessary when starting the application to be monitored.

As an example of the output of the monitoring tool, Figure 2-15 shows a chart of heap memory usage.

Figure 2-15 Sample Output from JConsole


The jdb Utility

The jdb utility is included in the JDK as an example command-line debugger. The jdb utility uses the Java Debug Interface (JDI) to launch or connect to the target JVM.

The source code for jdb is included in $JAVA_HOME/demo/jpda/examples.jar.

The JDI is a high-level Java API that provides information useful for debuggers and similar systems that need access to the running state of a (usually remote) virtual machine. JDI is a component of the Java Platform Debugger Architecture (JPDA). See Java Platform Debugger Architecture.

The following sections provide troubleshooting techniques for the jdb utility.

Troubleshoot with jdb Utility

The jdb utility can use the debugger connectors provided by the JDK, including the Serviceability Agent connectors used to attach to core files and hung processes.

In JDI, a connector is the means by which the debugger connects to the target JVM. The JDK traditionally ships with connectors that launch and establish a debugging session with a target JVM, as well as connectors that are used for remote debugging (using TCP/IP or shared memory transports).

The JDK also ships with several Serviceability Agent (SA) connectors that allow a Java language debugger to attach to a crash dump or hung process. This can be useful in determining what the application was doing at the time of the crash or hang.

These connectors are SACoreAttachingConnector, SADebugServerAttachingConnector, and SAPIDAttachingConnector.

These connectors are generally used with enterprise debuggers, such as the NetBeans integrated development environment (IDE) or commercial IDEs. The following sections demonstrate how these connectors can be used with the jdb command-line debugger.

The command jdb -listconnectors prints a list of the available connectors. The command jdb -help prints the command usage help.

See jdb Utility in the Java Platform, Standard Edition Tools Reference.

Attach a Process

The following example uses the SA PID Attaching Connector to attach to a process. The target process is not started with any special options; that is, the -agentlib:jdwp option is not required. When this connector attaches to a process it does so in read-only mode: the debugger can examine threads and the running application, but it cannot change anything. The process is frozen while the debugger is attached.

The command in the following example instructs jdb to use a connector named sun.jvm.hotspot.jdi.SAPIDAttachingConnector. This is a connector name rather than a class name. The connector takes one argument named pid, whose value is the process ID of the target process (9302).

$ jdb -connect sun.jvm.hotspot.jdi.SAPIDAttachingConnector:pid=9302

Initializing jdb ...
> threads
Group system:
  (java.lang.ref.Reference$ReferenceHandler)0xa Reference Handler unknown
  (java.lang.ref.Finalizer$FinalizerThread)0x9  Finalizer         unknown
  (java.lang.Thread)0x8                         Signal Dispatcher running
  (java.lang.Thread)0x7                         Java2D Disposer   unknown
  (java.lang.Thread)0x2                         TimerQueue        unknown
Group main:
  (java.lang.Thread)0x6                         AWT-XAWT          running
  (java.lang.Thread)0x5                         AWT-Shutdown      unknown
  (java.awt.EventDispatchThread)0x4             AWT-EventQueue-0  unknown
  (java.lang.Thread)0x3                         DestroyJavaVM     running
  (sun.awt.image.ImageFetcher)0x1               Image Animator 0  sleeping
  (java.lang.Thread)0x0                         Intro             running
> thread 0x7
Java2D Disposer[1] where
  [1] java.lang.Object.wait (native method)
  [2] java.lang.ref.ReferenceQueue.remove (ReferenceQueue.java:116)
  [3] java.lang.ref.ReferenceQueue.remove (ReferenceQueue.java:132)
  [4] sun.java2d.Disposer.run (Disposer.java:125)
  [5] java.lang.Thread.run (Thread.java:619)
Java2D Disposer[1] up 1
Java2D Disposer[2] where
  [2] java.lang.ref.ReferenceQueue.remove (ReferenceQueue.java:116)
  [3] java.lang.ref.ReferenceQueue.remove (ReferenceQueue.java:132)
  [4] sun.java2d.Disposer.run (Disposer.java:125)
  [5] java.lang.Thread.run (Thread.java:619)

In this example, the threads command is used to get a list of all threads. Then a specific thread is selected with the thread 0x7 command, and the where command is used to get a thread dump. Next, the up 1 command is used to move up one frame in the stack, and the where command is used again to get a thread dump.

Attach to a Core File on the Same Machine

The SA Core Attaching Connector is used to attach the debugger to a core file.

The core file might have been created after a crash; see Troubleshoot System Crashes. The core file can also be obtained by using the gcore command on the Oracle Solaris operating system or the gcore command in gdb on Linux. Because the core file is a snapshot of the process at the time the core file was created, the connector attaches in read-only mode: the debugger can examine threads and the state of the application at that point in time.

The command in the following example instructs jdb to use a connector named sun.jvm.hotspot.jdi.SACoreAttachingConnector. The connector takes two arguments: javaExecutable and core. The javaExecutable argument indicates the name of the Java binary. The core argument is the core file name (the core from the process with PID 20441 as shown in the following example).

$ jdb -connect sun.jvm.hotspot.jdi.SACoreAttachingConnector:javaExecutable=$JAVA_HOME/bin/java,core=core.20441

Attach to a Core File or a Hung Process from a Different Machine

On the machine where the debugger is installed, you can use the SA Debug Server Attaching Connector to connect to the debug server.

To debug a core file that has been transported from another machine, the operating system versions and libraries must match. In this case you can first run a proxy server called the SA Debug Server. Then, on the machine where the debugger is installed, you can use the SA Debug Server Attaching Connector to connect to the debug server.

For example, consider two machines: machine1 and machine2. A core file is available on machine1, and the debugger is available on machine2. The SA Debug Server is started on machine1 as shown in the following example.

$ jsadebugd $JAVA_HOME/bin/java core.20441

The jsadebugd command takes two arguments. The first argument is the name of the executable. Usually, this is java, but it can be another name (in embedded VMs, for example). The second argument is the name of the core file. In this example, the core file was obtained for a process with PID 20441 using the gcore utility.

On machine2, the debugger connects to the remote SA Debug Server using the SA Debug Server Attaching Connector, as shown in the following example.

$ jdb -connect sun.jvm.hotspot.jdi.SADebugServerAttachingConnector:debugServerName=machine1

The command in the example instructs jdb to use a connector named sun.jvm.hotspot.jdi.SADebugServerAttachingConnector. The connector has one argument, debugServerName, which is the host name or IP address of the machine where the SA Debug Server is running.

Note:

The SA Debug Server can also be used to remotely debug a hung process. In that case, it takes a single argument, which is the PID of the process. In addition, if it is required to run multiple debug servers on the same machine, each one must be provided with a unique ID. With the SA Debug Server Attaching Connector, this ID is provided as an additional connector argument.

The jinfo Utility

The jinfo command-line utility gets configuration information from a running Java process or crash dump and prints the system properties or the command-line flags that were used to start the JVM.

Java Mission Control, Java Flight Recorder, and the jcmd utility can be used to diagnose problems with the JVM and Java applications. It is suggested that you use the latest diagnostic utility, jcmd, instead of the earlier jinfo utility for enhanced diagnostics and reduced performance overhead.

The utility can also use the jsadebugd daemon to query a process or core file on a remote machine.

Note:

The output takes longer to print in this case.

With the -flag option, the utility can dynamically set, unset, or change the value of certain JVM flags for the specified Java process. See Java HotSpot VM Command-Line Options.

See jinfo in the Java Platform, Standard Edition Tools Reference.
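Because jcmd is the suggested replacement, the common jinfo queries map directly onto jcmd diagnostic commands. The following is a minimal sketch of the correspondence (JCMD, defaulting to jcmd on the PATH, exists so the sketch can be dry-run):

```shell
# Sketch: jcmd equivalents of common jinfo queries.
jinfo_equiv() {
  "${JCMD:-jcmd}" "$1" VM.system_properties   # like jinfo -sysprops <pid>
  "${JCMD:-jcmd}" "$1" VM.flags               # like jinfo -flags <pid>
}
```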

The following example shows the output of the jinfo utility for a Java process with PID 29620.

$ jinfo 29620
Attaching to process ID 29620, please wait...
Debugger attached successfully.
Client compiler detected.
JVM version is 1.6.0-rc-b100
Java System Properties:

java.runtime.name = Java(TM) SE Runtime Environment
sun.boot.library.path = /usr/jdk/instances/jdk1.6.0/jre/lib/sparc
java.vm.version = 1.6.0-rc-b100
java.vm.vendor = Sun Microsystems Inc.
java.vendor.url = http://java.sun.com/
path.separator = :
java.vm.name = Java HotSpot(TM) Client VM
file.encoding.pkg = sun.io
sun.java.launcher = SUN_STANDARD
sun.os.patch.level = unknown
java.vm.specification.name = Java Virtual Machine Specification
user.dir = /home/js159705
java.runtime.version = 1.6.0-rc-b100
java.awt.graphicsenv = sun.awt.X11GraphicsEnvironment
java.endorsed.dirs = /usr/jdk/instances/jdk1.6.0/jre/lib/endorsed
os.arch = sparc
java.io.tmpdir = /var/tmp/
line.separator =

java.vm.specification.vendor = Sun Microsystems Inc.
os.name = SunOS
sun.jnu.encoding = ISO646-US
java.library.path = /usr/jdk/instances/jdk1.6.0/jre/lib/sparc/client:/usr/jdk/instances/jdk1.6.0/jre/lib/sparc:
/usr/jdk/instances/jdk1.6.0/jre/../lib/sparc:/net/gtee.sfbay/usr/sge/sge6/lib/sol-sparc64:
/usr/jdk/packages/lib/sparc:/lib:/usr/lib
java.specification.name = Java Platform API Specification
java.class.version = 50.0
sun.management.compiler = HotSpot Client Compiler
os.version = 5.10
user.home = /home/js159705
user.timezone = US/Pacific
java.awt.printerjob = sun.print.PSPrinterJob
file.encoding = ISO646-US
java.specification.version = 1.6
java.class.path = /usr/jdk/jdk1.6.0/demo/jfc/Java2D/Java2Demo.jar
user.name = js159705
java.vm.specification.version = 1.0
java.home = /usr/jdk/instances/jdk1.6.0/jre
sun.arch.data.model = 32
user.language = en
java.specification.vendor = Sun Microsystems Inc.
java.vm.info = mixed mode, sharing
java.version = 1.6.0-rc
java.ext.dirs = /usr/jdk/instances/jdk1.6.0/jre/lib/ext:/usr/jdk/packages/lib/ext
sun.boot.class.path = /usr/jdk/instances/jdk1.6.0/jre/lib/resources.jar:
/usr/jdk/instances/jdk1.6.0/jre/lib/rt.jar:/usr/jdk/instances/jdk1.6.0/jre/lib/sunrsasign.jar:
/usr/jdk/instances/jdk1.6.0/jre/lib/jsse.jar:
/usr/jdk/instances/jdk1.6.0/jre/lib/jce.jar:/usr/jdk/instances/jdk1.6.0/jre/lib/charsets.jar:
/usr/jdk/instances/jdk1.6.0/jre/classes
java.vendor = Sun Microsystems Inc.
file.separator = /
java.vendor.url.bug = http://java.sun.com/cgi-bin/bugreport.cgi
sun.io.unicode.encoding = UnicodeBig
sun.cpu.endian = big
sun.cpu.isalist =

VM Flags:

The following topic describes a troubleshooting technique that uses the jinfo utility.

Troubleshooting with jinfo Utility

The output from jinfo provides the settings for java.class.path and sun.boot.class.path.

If you start the target JVM with the -classpath and -Xbootclasspath arguments, the output from jinfo provides the settings for java.class.path and sun.boot.class.path. This information might be needed when investigating class loader issues.

In addition to obtaining information from a process, the jinfo tool can use a core file as input. On Oracle Solaris operating system, for example, the gcore utility can be used to get a core file of the process in the preceding example. The core file will be named core.29620 and will be generated in the working directory of the process. The path to the Java executable and the core file must be specified as arguments to the jinfo utility, as shown in the following example.

$ jinfo $JAVA_HOME/bin/java core.29620

Sometimes the binary name will not be java. This occurs when the VM is created using the JNI invocation API. The jinfo tool requires the binary from which the core file was generated.

The jmap Utility

The jmap command-line utility prints memory-related statistics for a running VM or core file.

The utility can also use the jsadebugd daemon to query a process or core file on a remote machine.

Note:

The output takes longer to print in this case.

Java Mission Control, Java Flight Recorder, and the jcmd utility can be used to diagnose problems with the JVM and Java applications. It is recommended to use the latest utility, jcmd, instead of the earlier jmap utility for enhanced diagnostics and reduced performance overhead.

If jmap is used with a process or core file without any command-line options, then it prints the list of shared objects loaded (the output is similar to the pmap utility on Oracle Solaris operating system). For more specific information, you can use the options -heap, -histo, or -permstat. These options are described in the subsections that follow.

In addition, the JDK 7 release introduced the -dump:format=b,file=filename option, which causes jmap to dump the Java heap in binary format to a specified file.

If the jmap pid command does not respond because of a hung process, then the -F option can be used (on Oracle Solaris and Linux operating systems only) to force the use of the Serviceability Agent.

See jmap in the Java Platform, Standard Edition Tools Reference.

The following sections describe jmap command usage and troubleshooting techniques with examples that print memory-related statistics for a running VM or a core file.

Heap Configuration and Usage

Use the jmap -heap command to obtain the Java heap information.

The -heap option is used to obtain the following Java heap information:

  • Information specific to the garbage collection (GC) algorithm, including the name of the GC algorithm (for example, parallel GC) and algorithm-specific details (such as number of threads for parallel GC).

  • Heap configuration that might have been specified as command-line options or selected by the VM based on the machine configuration.

  • Heap usage summary: For each generation (area of the heap), the tool prints the total heap capacity, in-use memory, and available free memory. If a generation is organized as a collection of spaces (for example, the new generation), then a space-specific memory size summary is included.

The following example shows output from the jmap -heap command.

$ jmap -heap 29620
Attaching to process ID 29620, please wait...
Debugger attached successfully.
Client compiler detected.
JVM version is 1.6.0-rc-b100

using thread-local object allocation.
Mark Sweep Compact GC

Heap Configuration:
   MinHeapFreeRatio = 40
   MaxHeapFreeRatio = 70
   MaxHeapSize      = 67108864 (64.0MB)
   NewSize          = 2228224 (2.125MB)
   MaxNewSize       = 4294901760 (4095.9375MB)
   OldSize          = 4194304 (4.0MB)
   NewRatio         = 8
   SurvivorRatio    = 8
   PermSize         = 12582912 (12.0MB)
   MaxPermSize      = 67108864 (64.0MB)

Heap Usage:
New Generation (Eden + 1 Survivor Space):
   capacity = 2031616 (1.9375MB)
   used     = 70984 (0.06769561767578125MB)
   free     = 1960632 (1.8698043823242188MB)
   3.4939673639112905% used
Eden Space:
   capacity = 1835008 (1.75MB)
   used     = 36152 (0.03447723388671875MB)
   free     = 1798856 (1.7155227661132812MB)
   1.9701276506696428% used
From Space:
   capacity = 196608 (0.1875MB)
   used     = 34832 (0.0332183837890625MB)
   free     = 161776 (0.1542816162109375MB)
   17.716471354166668% used
To Space:
   capacity = 196608 (0.1875MB)
   used     = 0 (0.0MB)
   free     = 196608 (0.1875MB)
   0.0% used
tenured generation:
   capacity = 15966208 (15.2265625MB)
   used     = 9577760 (9.134063720703125MB)
   free     = 6388448 (6.092498779296875MB)
   59.98769400974859% used
Perm Generation:
   capacity = 12582912 (12.0MB)
   used     = 1469408 (1.401336669921875MB)
   free     = 11113504 (10.598663330078125MB)
   11.677805582682291% used
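The percentage figures in this output are derived directly from the capacity and used values. The following sketch (the class name HeapPercent is hypothetical) reproduces the New Generation figure using the byte values from the sample output above:

```java
public class HeapPercent {
    public static void main(String[] args) {
        // Values from the "New Generation" section of the jmap -heap output above
        long capacity = 2031616;
        long used = 70984;
        double pctUsed = 100.0 * used / capacity;
        // Print the utilization to two decimal places, matching ~3.49% used
        System.out.printf("%.2f%% used%n", pctUsed);
    }
}
```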

Heap Histogram

The jmap command with the -histo option can be used to obtain a class-specific histogram of the heap.

Depending on the parameter specified, the jmap -histo command can print out the heap histogram for a running process or a core file.

When the command is executed on a running process, the tool prints the number of objects, memory size in bytes, and fully qualified class name for each class. Internal classes in the Java HotSpot VM are enclosed in angle brackets. The histogram is useful in understanding how the heap is used. To get the size of an object, you must divide the total size by the count of that object type.

The following example shows output from the jmap -histo command when it is executed on a process with PID number 29620.

$ jmap -histo 29620
num   #instances    #bytes  class name
--------------------------------------
  1:      1414     6013016  [I
  2:       793      482888  [B
  3:      2502      334928  <constMethodKlass>
  4:       280      274976  <instanceKlassKlass>
  5:       324      227152  [D
  6:      2502      200896  <methodKlass>
  7:      2094      187496  [C
  8:       280      172248  <constantPoolKlass>
  9:      3767      139000  [Ljava.lang.Object;
 10:       260      122416  <constantPoolCacheKlass>
 11:      3304      112864  <symbolKlass>
 12:       160       72960  java2d.Tools$3
 13:       192       61440  <objArrayKlassKlass>
 14:       219       55640  [F
 15:      2114       50736  java.lang.String
 16:      2079       49896  java.util.HashMap$Entry
 17:       528       48344  [S
 18:      1940       46560  java.util.Hashtable$Entry
 19:       481       46176  java.lang.Class
 20:        92       43424  javax.swing.plaf.metal.MetalScrollButton
... more lines removed here to reduce output...
1118:         1           8  java.util.Hashtable$EmptyIterator
1119:         1           8  sun.java2d.pipe.SolidTextRenderer
Total    61297    10152040
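As noted above, the average instance size is the total size divided by the instance count. Taking the java.lang.String row from the histogram output as an example (the class name AvgObjectSize is hypothetical):

```java
public class AvgObjectSize {
    public static void main(String[] args) {
        // From the jmap -histo output above: 2114 java.lang.String
        // instances occupying 50736 bytes in total
        long instances = 2114;
        long bytes = 50736;
        // Integer division gives the average shallow size per instance
        System.out.println(bytes / instances + " bytes per instance");
    }
}
```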

When the jmap -histo command is executed on a core file, the tool prints the size, count, and class name for each class. Internal classes in the Java HotSpot VM are prefixed with an asterisk (*).

The following example shows output of the jmap -histo command when it is executed on a core file.

$ jmap -histo /net/koori.sfbay/onestop/jdk/6.0/promoted/all/b100/binaries/solaris-sparcv9/bin/java core
Attaching to core core from executable /net/koori.sfbay/onestop/jdk/6.0/
promoted/all/b100/binaries/solaris-sparcv9/bin/java, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 1.6.0-rc-b100
Iterating over heap. This may take a while...
Heap traversal took 8.902 seconds.

Object Histogram:

Size    Count    Class description
-------------------------------------------------------
4151816    2941    int[]
2997816    26403    * ConstMethodKlass
2118728    26403    * MethodKlass
1613184    39750    * SymbolKlass
1268896    2011    * ConstantPoolKlass
1097040    2011    * InstanceKlassKlass
882048    1906    * ConstantPoolCacheKlass
758424    7572    char[]
733776    2518    byte[]
252240    3260    short[]
214944    2239    java.lang.Class
177448    3341    * System ObjArray
176832    7368    java.lang.String
137792    3756    java.lang.Object[]
121744    74    long[]
72960    160    java2d.Tools$3
63680    199    * ObjArrayKlassKlass
53264    158    float[]
... more lines removed here to reduce output...

Permanent Generation Statistics

The permanent generation is the area of the heap that holds all the reflective data of the virtual machine itself, such as class and method objects.

This area is also called "method area" in The Java Virtual Machine Specification.

Configuring the size of the permanent generation can be important for applications that dynamically generate and load a very large number of classes (for example, JavaServer Pages and web containers). If an application loads too many classes, then it might terminate with the following error:

Exception in thread thread_name java.lang.OutOfMemoryError: PermGen space

See Understand the OutOfMemoryError Exception.
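When investigating class-loading pressure of this kind, class counts can also be monitored from within the application through the standard java.lang.management API. The following minimal sketch (the class name ClassLoadWatch is hypothetical) prints the class-loading statistics of the running JVM:

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;

public class ClassLoadWatch {
    public static void main(String[] args) {
        ClassLoadingMXBean clBean = ManagementFactory.getClassLoadingMXBean();
        // Classes loaded since JVM start, currently loaded, and since unloaded
        System.out.println("loaded total: " + clBean.getTotalLoadedClassCount());
        System.out.println("currently loaded: " + clBean.getLoadedClassCount());
        System.out.println("unloaded: " + clBean.getUnloadedClassCount());
    }
}
```

A steadily growing loaded-class count that is never matched by unloads is one symptom of the class-generation pattern described above.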

To get further information about the permanent generation, you can use the -permstat option of the jmap command to print statistics for the objects in the permanent generation.

The following example shows the output from the jmap -permstat command executed on a process with PID number 29620.

$ jmap -permstat 29620
Attaching to process ID 29620, please wait...
Debugger attached successfully.
Client compiler detected.
JVM version is 1.6.0-rc-b100
12674 intern Strings occupying 1082616 bytes.
finding class loader instances ..Unknown oop at 0xd0400900
Oop's klass is 0xd0bf8408
Unknown oop at 0xd0401100
Oop's klass is null
done.
computing per loader stat ..done.
please wait.. computing liveness.........................................done.
class_loader    classes bytes   parent_loader   alive?  type

<bootstrap>     1846 5321080  null        live   <internal>
0xd0bf3828  0      0      null         live    sun/misc/Launcher$ExtClassLoader@0xd8c98c78
0xd0d2f370  1    904      null         dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0c99280  1   1440      null         dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0b71d90  0      0   0xd0b5b9c0    live java/util/ResourceBundle$RBClassLoader@0xd8d042e8
0xd0d2f4c0  1    904      null         dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0b5bf98  1    920   0xd0b5bf38      dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0c99248  1    904      null         dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0d2f488  1    904      null         dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0b5bf38  6   11832  0xd0b5b9c0      dead    sun/reflect/misc/MethodUtil@0xd8e8e560
0xd0d2f338  1    904      null         dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0d2f418  1    904      null         dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0d2f3a8  1    904     null          dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0b5b9c0  317 1397448 0xd0bf3828     live    sun/misc/Launcher$AppClassLoader@0xd8cb83d8
0xd0d2f300  1    904      null         dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0d2f3e0  1    904      null         dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0ec3968  1   1440      null         dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0e0a248  1    904      null         dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0c99210  1    904      null         dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0d2f450  1    904      null         dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0d2f4f8  1    904      null         dead    sun/reflect/DelegatingClassLoader@0xd8c22f50
0xd0e0a280  1    904      null         dead    sun/reflect/DelegatingClassLoader@0xd8c22f50

total = 22      2186    6746816   N/A   alive=4, dead=18       N/A    

For each class loader object, the following details are printed:

  • The address of the class loader object at the snapshot when the utility was run

  • The number of classes loaded

  • The approximate number of bytes consumed by metadata for all classes loaded by this class loader

  • The address of the parent class loader (if any)

  • A live or dead indication of whether the loader object will be garbage collected in the future

  • The class name of this class loader

The jps Utility

The jps utility lists every instrumented Java HotSpot VM for the current user on the target system.

The utility is very useful in environments where the VM is embedded, that is, where it is started by using the JNI Invocation API rather than the java launcher. In these environments, it is not always easy to recognize the Java processes in the process list.

To know more about the jps utility, see jps in the Java Platform, Standard Edition Tools Reference.

The following example demonstrates the usage of the jps utility.

$ jps
16217 MyApplication
16342 jps

The utility lists the virtual machines for which the user has access rights. This is determined by access-control mechanisms specific to the operating system. On Oracle Solaris operating system, for example, if a non-root user executes the jps utility, then the output is a list of the virtual machines that were started with that user's uid.

In addition to listing the PID, the utility provides options to output the arguments passed to the application's main method, the complete list of VM arguments, and the full package name of the application's main class. The jps utility can also list processes on a remote system if the remote system is running the jstatd daemon.
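A jps-like process listing can also be obtained programmatically with the ProcessHandle API introduced in Java 9. The following sketch (the class name JpsLike is hypothetical) enumerates the processes visible to the current user and confirms that its own PID appears among them:

```java
public class JpsLike {
    public static void main(String[] args) {
        long self = ProcessHandle.current().pid();
        // Enumerate every process visible to this user, jps-style,
        // and check that the current JVM appears in the listing
        boolean foundSelf = ProcessHandle.allProcesses()
                .anyMatch(p -> p.pid() == self);
        System.out.println("own pid " + self + " listed: " + foundSelf);
    }
}
```

Unlike jps, ProcessHandle sees all processes, not only instrumented HotSpot VMs, so additional filtering (for example, on the command path) would be needed in practice.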

If you are running several Java Web Start applications on a system, they tend to look the same, as shown in the following example.

$ jps
1271 jps
     1269 Main
     1190 Main

In this case, use jps -m to distinguish them, as shown in the following example.

$ jps -m
1271 jps -m
     1269 Main http://bugster.central.sun.com/bugster.jnlp
     1190 Main http://webbugs.sfbay/IncidentManager/incident.jnlp

The jstack Utility

Use the jcmd utility instead of the jstack utility for diagnosing problems with the JVM and Java applications.

Java Mission Control, Java Flight Recorder, and the jcmd utility can be used to diagnose problems with the JVM and Java applications. It is recommended to use the latest utility, jcmd, instead of the earlier jstack utility for enhanced diagnostics and reduced performance overhead.

The following sections describe troubleshooting techniques that use the jstack utility.

Troubleshoot with jstack Utility

The jstack command-line utility attaches to the specified process or core file and prints the stack traces of all threads that are attached to the virtual machine, including Java threads and VM internal threads, and optionally native stack frames. The utility also performs deadlock detection.

The utility can also use the jsadebugd daemon to query a process or core file on a remote machine.

Note:

The output takes longer to print in this case.

A stack trace of all threads can be useful in diagnosing a number of issues, such as deadlocks or hangs.

The -l option instructs the utility to look for ownable synchronizers in the heap and print information about java.util.concurrent.locks. Without this option, the thread dump includes information only on monitors.

The output from the jstack pid option is the same as that obtained by pressing Ctrl+\ at the application console (standard input) or by sending the process a QUIT signal. See Control+Break Handler for an output example.

Thread dumps can also be obtained programmatically using the Thread.getAllStackTraces method, or in the debugger using the debugger option to print all thread stacks (the where command in the case of the jdb sample debugger).
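As a sketch of the programmatic approach, the following minimal program (the class name ProgrammaticDump is hypothetical) uses Thread.getAllStackTraces to print a header line and stack for every live thread, loosely mirroring the jstack output format:

```java
import java.util.Map;

public class ProgrammaticDump {
    public static void main(String[] args) {
        // One map entry per live thread: the thread and its current stack
        for (Map.Entry<Thread, StackTraceElement[]> e
                : Thread.getAllStackTraces().entrySet()) {
            Thread t = e.getKey();
            System.out.println("\"" + t.getName() + "\" daemon=" + t.isDaemon()
                    + " state=" + t.getState());
            for (StackTraceElement frame : e.getValue()) {
                System.out.println("\tat " + frame);
            }
        }
    }
}
```

Note that, unlike jstack, this approach captures only Java threads, not VM internal threads or native frames.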

See jstack in the Java Platform, Standard Edition Tools Reference.

Stack Trace from a Core Dump

Use the jstack command to obtain stack traces from a core dump.

To obtain stack traces from a core dump, execute the jstack command on a core file, as shown in the following example.

$ jstack $JAVA_HOME/bin/java core

Mixed Stack

The jstack utility can also be used to print a mixed stack; that is, it can print native stack frames in addition to the Java stack. Native frames are the C/C++ frames associated with VM code and JNI/native code.

To print a mixed stack, use the -m option, as shown in the following example.

$ jstack -m 21177
Attaching to process ID 21177, please wait...
Debugger attached successfully.
Client compiler detected.
JVM version is 1.6.0-rc-b100
Deadlock Detection:

Found one Java-level deadlock:
=============================

"Thread1":
  waiting to lock Monitor@0x0005c750 (Object@0xd4405938, a java/lang/String),
  which is held by "Thread2"
"Thread2":
  waiting to lock Monitor@0x0005c6e8 (Object@0xd4405900, a java/lang/String),
  which is held by "Thread1"

Found a total of 1 deadlock.

----------------- t@1 -----------------
0xff2c0fbc    __lwp_wait + 0x4
0xff2bc9bc    _thrp_join + 0x34
0xff2bcb28    thr_join + 0x10
0x00018a04    ContinueInNewThread + 0x30
0x00012480    main + 0xeb0
0x000111a0    _start + 0x108
----------------- t@2 -----------------
0xff2c1070    ___lwp_cond_wait + 0x4
0xfec03638    bool Monitor::wait(bool,long) + 0x420
0xfec9e2c8    bool Threads::destroy_vm() + 0xa4
0xfe93ad5c    jni_DestroyJavaVM + 0x1bc
0x00013ac0    JavaMain + 0x1600
0xff2bfd9c    _lwp_start
----------------- t@3 -----------------
0xff2c1070    ___lwp_cond_wait + 0x4
0xff2ac104    _lwp_cond_timedwait + 0x1c
0xfec034f4    bool Monitor::wait(bool,long) + 0x2dc
0xfece60bc    void VMThread::loop() + 0x1b8
0xfe8b66a4    void VMThread::run() + 0x98
0xfec139f4    java_start + 0x118
0xff2bfd9c    _lwp_start
----------------- t@4 -----------------
0xff2c1070    ___lwp_cond_wait + 0x4
0xfec195e8    void os::PlatformEvent::park() + 0xf0
0xfec88464    void ObjectMonitor::wait(long long,bool,Thread*) + 0x548
0xfe8cb974    void ObjectSynchronizer::wait(Handle,long long,Thread*) + 0x148
0xfe8cb508    JVM_MonitorWait + 0x29c
0xfc40e548    * java.lang.Object.wait(long) bci:0 (Interpreted frame)
0xfc40e4f4    * java.lang.Object.wait(long) bci:0 (Interpreted frame)
0xfc405a10    * java.lang.Object.wait() bci:2 line:485 (Interpreted frame)
... more lines removed here to reduce output...
----------------- t@12 -----------------
0xff2bfe3c    __lwp_park + 0x10
0xfe9925e4    AttachOperation*AttachListener::dequeue() + 0x148
0xfe99115c    void attach_listener_thread_entry(JavaThread*,Thread*) + 0x1fc
0xfec99ad8    void JavaThread::thread_main_inner() + 0x48
0xfec139f4    java_start + 0x118
0xff2bfd9c    _lwp_start
----------------- t@13 -----------------
0xff2c1500    _door_return + 0xc
----------------- t@14 -----------------
0xff2c1500    _door_return + 0xc

Frames that are prefixed with an asterisk (*) are Java frames, whereas frames that are not prefixed with an asterisk are native C/C++ frames.

The output of the utility can be piped through c++filt to demangle C++ mangled symbol names. Because the Java HotSpot VM is developed in the C++ language, the jstack utility prints C++ mangled symbol names for the Java HotSpot internal functions.

The c++filt utility is delivered with the native C++ compiler suite: SUNWspro on Oracle Solaris operating system and gnu on Linux.

The jstat Utility

The jstat utility uses the built-in instrumentation in the Java HotSpot VM to provide information about performance and resource consumption of running applications.

The tool can be used when diagnosing performance issues, and in particular issues related to heap sizing and garbage collection. The jstat utility does not require the VM to be started with any special options. The built-in instrumentation in the Java HotSpot VM is enabled by default. This utility is included in the JDK download for all operating system platforms supported by Oracle.

Note:

The instrumentation is not accessible on a FAT32 file system.

See jstat in the Java Platform, Standard Edition Tools Reference.

The jstat utility uses the virtual machine identifier (VMID) to identify the target process. The documentation describes the syntax of the VMID, but its only required component is the local virtual machine identifier (LVMID). The LVMID is typically (but not always) the operating system's PID for the target JVM process.

The jstat tool provides data similar to the data provided by the tools vmstat and iostat on Oracle Solaris and Linux operating systems.

For a graphical representation of the data, you can use the visualgc tool. See The visualgc Tool.

The following example illustrates the use of the -gcutil option, where the jstat utility attaches to LVMID number 2834 and takes seven samples at 250-millisecond intervals.

$ jstat -gcutil 2834 250 7
  S0     S1     E      O      M     YGC     YGCT    FGC    FGCT     GCT   
  0.00  99.74  13.49   7.86  95.82      3    0.124     0    0.000    0.124
  0.00  99.74  13.49   7.86  95.82      3    0.124     0    0.000    0.124
  0.00  99.74  13.49   7.86  95.82      3    0.124     0    0.000    0.124
  0.00  99.74  13.49   7.86  95.82      3    0.124     0    0.000    0.124
  0.00  99.74  13.49   7.86  95.82      3    0.124     0    0.000    0.124
  0.00  99.74  13.49   7.86  95.82      3    0.124     0    0.000    0.124
  0.00  99.74  13.49   7.86  95.82      3    0.124     0    0.000    0.124

The output of this example shows that survivor space 1 (S1) is almost fully utilized at 99.74%, and that three young generation collections (YGC) had occurred before sampling began, taking a total of 0.124 seconds (YGCT). No further collections occurred during the seven samples, so the utilization figures remain unchanged.

The following example illustrates the use of the -gcnew option where the jstat utility attaches to LVMID number 2834, takes samples at 250-millisecond intervals, and displays the output. In addition, it uses the -h3 option to display the column headers after every three lines of data.

$ jstat -gcnew -h3 2834 250
S0C    S1C    S0U    S1U   TT MTT  DSS      EC       EU     YGC     YGCT  
 192.0  192.0    0.0    0.0 15  15   96.0   1984.0    942.0    218    1.999
 192.0  192.0    0.0    0.0 15  15   96.0   1984.0   1024.8    218    1.999
 192.0  192.0    0.0    0.0 15  15   96.0   1984.0   1068.1    218    1.999
 S0C    S1C    S0U    S1U   TT MTT  DSS      EC       EU     YGC     YGCT  
 192.0  192.0    0.0    0.0 15  15   96.0   1984.0   1109.0    218    1.999
 192.0  192.0    0.0  103.2  1  15   96.0   1984.0      0.0    219    2.019
 192.0  192.0    0.0  103.2  1  15   96.0   1984.0     71.6    219    2.019
 S0C    S1C    S0U    S1U   TT MTT  DSS      EC       EU     YGC     YGCT  
 192.0  192.0    0.0  103.2  1  15   96.0   1984.0     73.7    219    2.019
 192.0  192.0    0.0  103.2  1  15   96.0   1984.0     78.0    219    2.019
 192.0  192.0    0.0  103.2  1  15   96.0   1984.0    116.1    219    2.019

In addition to showing the repeating header string, this example shows that between the fourth and fifth samples, a young generation collection occurred, whose duration was 0.02 seconds. The collection found enough live data that the survivor space 1 utilization (S1U) would have exceeded the desired survivor size (DSS). As a result, objects were promoted to the old generation (not visible in this output), and the tenuring threshold (TT) was lowered from 15 to 1.

The following example illustrates the use of the -gcoldcapacity option where the jstat utility attaches to LVMID number 21891 and takes three samples at 250-millisecond intervals. The -t option is used to generate a time stamp for each sample in the first column.

$ jstat -gcoldcapacity -t 21891 250 3
Timestamp    OGCMN     OGCMX       OGC        OC   YGC   FGC     FGCT     GCT
    150.1   1408.0   60544.0   11696.0   11696.0   194    80    2.874   3.799
    150.4   1408.0   60544.0   13820.0   13820.0   194    81    2.938   3.863
    150.7   1408.0   60544.0   13820.0   13820.0   194    81    2.938   3.863

The Timestamp column reports the elapsed time in seconds since the start of the target JVM. In addition, the -gcoldcapacity output shows the old generation capacity (OGC) and the old space capacity (OC) increasing as the heap expands to meet allocation or promotion demands. The OGC has grown from 11696 KB to 13820 KB by the time of the 81st full garbage collection (FGC). The maximum capacity of the generation (and space) is 60544 KB (OGCMX), so it still has room to expand.

The visualgc Tool

The visualgc tool provides a graphical view of the garbage collection (GC) system.

The visualgc tool is related to the jstat tool (see The jstat Utility). As with jstat, it uses the built-in instrumentation of the Java HotSpot VM.

The visualgc tool is not included in the JDK release but is available as a separate download from the jvmstat technology page.

Figure 2-16 demonstrates how the GC and heap are visualized.

Figure 2-16 Sample Output from visualgc


Control+Break Handler

This section describes the result of pressing the Control key and the backslash (\) key together at the application console on operating systems such as Oracle Solaris or Linux, or the Control and Break keys on Windows.

On Oracle Solaris or Linux operating systems, the combination of pressing the Control key and the backslash (\) key at the application console (standard input) causes the Java HotSpot VM to print a thread dump to the application's standard output. On Windows, the equivalent key sequence is the Control and Break keys. The general term for these key combinations is the Control+Break handler.

On Oracle Solaris and Linux operating systems, a thread dump is printed if the Java process receives a QUIT signal. Therefore, the kill -QUIT pid command causes the process with the ID pid to print a thread dump to standard output.

The following sections describe the data traced by the Control+Break handler:

Thread Dump

The thread dump consists of the thread stack, including thread state, for all Java threads in the virtual machine.

The thread dump does not terminate the application: it continues after the thread information is printed.

The following example illustrates a thread dump.

Full thread dump Java HotSpot(TM) Client VM (1.6.0-rc-b100 mixed mode):

"DestroyJavaVM" prio=10 tid=0x00030400 nid=0x2 waiting on condition [0x00000000..0xfe77fbf0]
   java.lang.Thread.State: RUNNABLE

"Thread2" prio=10 tid=0x000d7c00 nid=0xb waiting for monitor entry [0xf36ff000..0xf36ff8c0]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at Deadlock$DeadlockMakerThread.run(Deadlock.java:32)
        - waiting to lock <0xf819a938> (a java.lang.String)
        - locked <0xf819a970> (a java.lang.String)

"Thread1" prio=10 tid=0x000d6c00 nid=0xa waiting for monitor entry [0xf37ff000..0xf37ffbc0]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at Deadlock$DeadlockMakerThread.run(Deadlock.java:32)
        - waiting to lock <0xf819a970> (a java.lang.String)
        - locked <0xf819a938> (a java.lang.String)

"Low Memory Detector" daemon prio=10 tid=0x000c7800 nid=0x8 runnable [0x00000000..0x00000000]
   java.lang.Thread.State: RUNNABLE

"CompilerThread0" daemon prio=10 tid=0x000c5400 nid=0x7 waiting on condition [0x00000000..0x00000000]
   java.lang.Thread.State: RUNNABLE

"Signal Dispatcher" daemon prio=10 tid=0x000c4400 nid=0x6 waiting on condition [0x00000000..0x00000000]
   java.lang.Thread.State: RUNNABLE

"Finalizer" daemon prio=10 tid=0x000b2800 nid=0x5 in Object.wait() [0xf3f7f000..0xf3f7f9c0]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0xf4000b40> (a java.lang.ref.ReferenceQueue$Lock)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:116)
        - locked <0xf4000b40> (a java.lang.ref.ReferenceQueue$Lock)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:132)
        at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)

"Reference Handler" daemon prio=10 tid=0x000ae000 nid=0x4 in Object.wait() [0xfe57f000..0xfe57f940]
   java.lang.Thread.State: WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0xf4000a40> (a java.lang.ref.Reference$Lock)
        at java.lang.Object.wait(Object.java:485)
        at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
        - locked <0xf4000a40> (a java.lang.ref.Reference$Lock)

"VM Thread" prio=10 tid=0x000ab000 nid=0x3 runnable 

"VM Periodic Task Thread" prio=10 tid=0x000c8c00 nid=0x9 waiting on condition 

The output consists of a number of thread entries separated by an empty line. The Java Threads (threads that are capable of executing Java language code) are printed first, and these are followed by information about VM internal threads. Each thread entry consists of a header line followed by the thread stack trace.

The header line contains the following information about the thread:

  • Thread name

  • Indication if the thread is a daemon thread

  • Thread priority (prio)

  • Thread ID (tid), which is the address of a thread structure in memory

  • ID of the native thread (nid)

  • Thread state, which indicates what the thread was doing at the time of the thread dump. See Table 2-6 for more details.

  • Address range, which gives an estimate of the valid stack region for the thread

Detect Deadlocks

The Control+Break handler can be used to detect deadlocks in threads.

In addition to the thread stacks, the Control+Break handler executes a deadlock detection algorithm. If any deadlocks are detected, then the handler prints additional information about each deadlocked thread after the thread dump, as shown in the following example.

Found one Java-level deadlock:
=============================
"Thread2":
  waiting to lock monitor 0x000af330 (object 0xf819a938, a java.lang.String),
  which is held by "Thread1"
"Thread1":
  waiting to lock monitor 0x000af398 (object 0xf819a970, a java.lang.String),
  which is held by "Thread2"

Java stack information for the threads listed above:
===================================================
"Thread2":
        at Deadlock$DeadlockMakerThread.run(Deadlock.java:32)
        - waiting to lock <0xf819a938> (a java.lang.String)
        - locked <0xf819a970> (a java.lang.String)
"Thread1":
        at Deadlock$DeadlockMakerThread.run(Deadlock.java:32)
        - waiting to lock <0xf819a970> (a java.lang.String)
        - locked <0xf819a938> (a java.lang.String)

Found 1 deadlock.

If the JVM flag -XX:+PrintConcurrentLocks is set, then the Control+Break handler will also print the list of concurrent locks owned by each thread.
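The same monitor deadlock detection is also available programmatically through ThreadMXBean in the java.lang.management API. The following contrived sketch (the class name DeadlockCheck and the two-lock scenario are illustrative, not from this guide) provokes a deadlock between two daemon threads and then detects it:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.CountDownLatch;

public class DeadlockCheck {
    public static void main(String[] args) throws InterruptedException {
        final Object lockA = new Object();
        final Object lockB = new Object();
        final CountDownLatch bothHeld = new CountDownLatch(2);

        // Thread1 locks A then wants B; Thread2 locks B then wants A
        Thread t1 = spin("Thread1", lockA, lockB, bothHeld);
        Thread t2 = spin("Thread2", lockB, lockA, bothHeld);
        t1.start();
        t2.start();
        bothHeld.await(); // both threads now hold their first monitor

        // Poll the deadlock detector until the cycle is established
        ThreadMXBean mbean = ManagementFactory.getThreadMXBean();
        long[] ids = null;
        for (int i = 0; i < 100 && ids == null; i++) {
            Thread.sleep(50);
            ids = mbean.findDeadlockedThreads();
        }
        System.out.println(ids == null
                ? "no deadlock" : "deadlocked threads: " + ids.length);
    }

    private static Thread spin(String name, Object first, Object second,
                               CountDownLatch latch) {
        Thread t = new Thread(() -> {
            synchronized (first) {
                latch.countDown();
                try { latch.await(); } catch (InterruptedException ignored) { }
                synchronized (second) { } // blocks forever once deadlocked
            }
        }, name);
        t.setDaemon(true); // let the JVM exit despite the hung threads
        return t;
    }
}
```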

Heap Summary

The Control+Break handler can be used to print a heap summary.

The following example shows the different generations (areas of the heap), with the size, the amount used, and the address range. The address range is especially useful if you are also examining the process with tools such as pmap.

Heap
 def new generation   total 1152K, used 435K [0x22960000, 0x22a90000, 0x22e40000
)
  eden space 1088K,  40% used [0x22960000, 0x229ccd40, 0x22a70000)
  from space 64K,   0% used [0x22a70000, 0x22a70000, 0x22a80000)
  to   space 64K,   0% used [0x22a80000, 0x22a80000, 0x22a90000)
 tenured generation   total 13728K, used 6971K [0x22e40000, 0x23ba8000, 0x269600
00)
   the space 13728K,  50% used [0x22e40000, 0x2350ecb0, 0x2350ee00, 0x23ba8000)
 compacting perm gen  total 12288K, used 1417K [0x26960000, 0x27560000, 0x2a9600
00)
   the space 12288K,  11% used [0x26960000, 0x26ac24f8, 0x26ac2600, 0x27560000)
    ro space 8192K,  62% used [0x2a960000, 0x2ae5ba98, 0x2ae5bc00, 0x2b160000)
    rw space 12288K,  52% used [0x2b160000, 0x2b79e410, 0x2b79e600, 0x2bd60000)

If the JVM flag -XX:+PrintClassHistogram is set, then the Control+Break handler will produce a heap histogram.

Native Operating System Tools

List of native tools available on Windows, Linux, and Oracle Solaris operating systems that are useful for troubleshooting or monitoring purposes.

A brief description is provided for each tool. For further details, refer to the operating system documentation (or man pages for Oracle Solaris and Linux operating systems).

The format of log files and output from command-line utilities depends on the release. For example, if you develop a script that relies on the format of the fatal error log, then the same script may not work if the format of the log file changes in a future release.

You can also search for Windows-specific debug support on MSDN developer network.

The following sections describe troubleshooting techniques and improvements to a few native operating system tools.

DTrace Tool

Oracle Solaris 10 operating system includes the DTrace tool, which allows dynamic tracing of the operating system kernel and user-level programs.

This tool supports scripting at system-call entry and exit, at user-mode function entry and exit, and at many other probe points. The scripts are written in the D programming language, which is a C-like language with safe pointer semantics. These scripts can help you in troubleshooting problems or solving performance issues.

The dtrace command is a generic front end to the DTrace tool. This command provides a simple interface to invoke the D language, to retrieve buffered trace data, and to access a set of basic routines to format and print traced data.

You can write your own customized DTrace scripts, using the D language, or download and use one or more of the many scripts that are already available on various sites.

The probes are delivered and instrumented by kernel modules called providers. The types of tracing offered by the probe providers include user instruction tracing, function boundary tracing, kernel lock instrumentation, profile interrupt, system call tracing, and many more. If you write your own scripts, you use the D language to enable the probes; this language also allows conditional tracing and output formatting.

You can use the dtrace -l command to explore the set of providers and probes that are available on your Oracle Solaris operating system.

The DTraceToolkit is a collection of useful documented scripts developed by the OpenSolaris DTrace community. See DTraceToolkit.

See Solaris Dynamic Tracing Guide.

Probe Providers in Java HotSpot VM

The Java HotSpot VM contains two built-in probe providers: hotspot and hotspot_jni.

These providers deliver probes that can be used to monitor the internal state and activities of the VM, as well as the Java application that is running.

The JVM probe providers can be categorized as follows:

  • VM lifecycle: VM initialization begin and end, and VM shutdown

  • Thread lifecycle: thread start and stop, thread name, thread ID, and so on

  • Class-loading: Java class loading and unloading

  • Garbage collection: start and stop of garbage collection, systemwide or by memory pool

  • Method compilation: method compilation begin and end, and method loading and unloading

  • Monitor probes: wait events, notification events, contended monitor entry and exit

  • Application tracking: method entry and return, allocation of a Java object

To call Java code from native code, the native code must make a call through the JNI interface. The hotspot_jni provider manages DTrace probes at the entry point and return point for each of the methods that the JNI interface provides for invoking Java code and examining the state of the VM.

At probe points, you can print the stack trace of the current thread using the ustack built-in function. This function prints Java method names in addition to C/C++ native function names. The following example is a simple D script that prints a full stack trace whenever a thread calls the read system call.

#!/usr/sbin/dtrace -s

syscall::read:entry
/pid == $1 && tid == 1/
{
    ustack(50, 0x2000);
}

The script in this example is stored in a file named read.d and is run by specifying the PID of the traced Java process, as shown in the following example.

read.d pid

If your Java application generates a lot of I/O or shows unexpected latency, then the DTrace tool and its ustack() action can help you diagnose the problem.

Improvements to pmap Tool

Improvements to the pmap utility in Oracle Solaris 10 operating system.

The pmap utility was improved in Oracle Solaris 10 operating system to print stack segments with the text [stack]. This text helps you to locate the stack easily.

The following example shows the stack trace with improved pmap tool.

19846:    /net/myserver/export1/user/j2sdk6/bin/java -Djava.endorsed.d
00010000      72K r-x--  /export/disk09/jdk/6/rc/b63/binaries/solsparc/bin/java
00030000      16K rwx--  /export/disk09/jdk/6/rc/b63/binaries/solsparc/bin/java
00034000   32544K rwx--    [ heap ]
D1378000      32K rwx-R    [ stack tid=44 ]
D1478000      32K rwx-R    [ stack tid=43 ]
D1578000      32K rwx-R    [ stack tid=42 ]
D1678000      32K rwx-R    [ stack tid=41 ]
D1778000      32K rwx-R    [ stack tid=40 ]
D1878000      32K rwx-R    [ stack tid=39 ]
D1974000      48K rwx-R    [ stack tid=38 ]
D1A78000      32K rwx-R    [ stack tid=37 ]
D1B78000      32K rwx-R    [ stack tid=36 ]
[.. more lines removed here to reduce output ..]
FF370000       8K r-x--  /usr/lib/libsched.so.1
FF380000       8K r-x--  /platform/sun4u-us3/lib/libc_psr.so.1
FF390000      16K r-x--  /lib/libthread.so.1
FF3A4000       8K rwx--  /lib/libthread.so.1
FF3B0000       8K r-x--  /lib/libdl.so.1
FF3C0000     168K r-x--  /lib/ld.so.1
FF3F8000       8K rwx--  /lib/ld.so.1
FF3FA000       8K rwx--  /lib/ld.so.1
FFB80000      24K -----    [ anon ]
FFBF0000      64K rwx--    [ stack ]
 total    167224K

Improvements to pstack Tool

Improvements to the pstack utility in Oracle Solaris 10 operating system.

Prior to Oracle Solaris 10 operating system, the pstack utility did not support Java. It printed hexadecimal addresses for both interpreted and compiled Java methods.

Starting with Oracle Solaris 10 operating system, the pstack command-line tool prints mixed mode stack traces (Java and C/C++ frames) from a core file or a live process. The tool prints Java method names for interpreted, compiled, and inlined Java methods.

Custom Diagnostic Tools

The JDK has extensive APIs for developing custom tools to observe, monitor, profile, debug, and diagnose issues in applications that are deployed in the JRE.

The development of new tools is beyond the scope of this document. Instead, this section provides a brief overview of the available APIs.

All the packages mentioned in this section are described in the Java SE API specification.

Refer also to example and demonstration code that is included in the JDK download.

The following sections describe packages, interface classes, and the Java debugger that can be used as custom diagnostic tools for troubleshooting.

Java Platform Debugger Architecture

The Java Platform Debugger Architecture (JPDA) is the architecture designed for use by debuggers and debugger-like tools.

JPDA consists of two programming interfaces and a wire protocol:

  • The Java Virtual Machine Tool Interface (JVM TI) is the interface to the virtual machine, see JVM Tool Interface.

  • The Java Debug Interface (JDI) defines information and requests at the user code level. It is a pure Java programming language interface for debugging Java programming language applications. In JPDA, the JDI is a remote view, in the debugger process, of a virtual machine in the process being debugged. It is implemented by the front end, whereas a debugger-like application (for example, IDE, debugger, tracer, or monitoring tool) is the client.

  • The Java Debug Wire Protocol (JDWP) defines the format of information and requests transferred between the process being debugged and the debugger front end, which implements the JDI.

The jdb utility is included in the JDK as an example command-line debugger. The jdb utility uses the JDI to launch or connect to the target VM. See The jdb Utility.

In addition to traditional debugger-type tools, the JDI can also be used to develop tools that help in postmortem diagnostics and scenarios where the tool needs to attach to a process in a noncooperative manner (for example, a hung process).

NMT Memory Categories

List of memory categories used by Native Memory Tracking (NMT).

Table 2-1 describes the native memory categories used by NMT. These categories may change between releases.

Table 2-1 Native Memory Tracking Memory Categories

Category Description

Java Heap

The heap where your objects live

Class

Class metadata

Code

Generated code

GC

Data used by the GC, such as the card table

Compiler

Memory used by the compiler when generating code

Symbol

Symbols

Memory Tracking

Memory used by NMT itself

Pooled Free Chunks

Memory used by chunks in the arena chunk pool

Shared space for classes

Memory mapped to class data sharing archive

Thread

Memory used by threads, including thread data structure, resource area and handle area and so on.

Thread stack

Thread stack. It is marked as committed memory, but it might not be completely committed by the OS

Internal

Memory that does not fit the previous categories, such as the memory used by the command line parser, JVMTI, properties and so on.

Unknown

When the memory category cannot be determined:

Arena: when an arena is used as a stack or value object

Virtual Memory: when type information has not yet arrived

Postmortem Diagnostics Tools

List of tools and options available for post-mortem diagnostics of problems between the application and the Java HotSpot VM.

Table 2-2 summarizes the options and tools that are designed for postmortem diagnostics. If an application crashes, these options and tools can be used to obtain additional information, either at the time of the crash or later using information from the crash dump.

Table 2-2 Postmortem Diagnostics Tools

Tool or Option Description and Usage

Fatal Error Log

When an irrecoverable (fatal) error occurs, an error log is created. This file contains much information obtained at the time of the fatal error. In many cases it is the first item to examine when a crash occurs. See Fatal Error Log.

-XX:+HeapDumpOnOutOfMemoryError option

This command-line option specifies the generation of a heap dump when the VM detects a Java heap out-of-memory error. See The -XX:HeapDumpOnOutOfMemoryError Option.

-XX:OnError option

This command-line option specifies a sequence of user-supplied scripts or commands to be executed when a fatal error occurs. For example, on Windows, this option can execute a command to force a crash dump. This option is very useful on systems where a postmortem debugger is not configured. See The -XX:OnError Option.

-XX:+ShowMessageBoxOnError option

This command-line option suspends a process when a fatal error occurs. Depending on the user response, the option can launch the native debugger (for example, dbx, gdb, msdev) to attach to the VM. See The -XX:ShowMessageBoxOnError Option.

Other -XX options

Several other -XX command-line options can be useful in troubleshooting. See Other -XX Options.

jdb utility

Debugger support includes an AttachingConnector, which allows jdb and other Java language debuggers to attach to a core file. This can be useful when trying to understand what each thread was doing at the time of a crash. See The jdb Utility.

jinfo utility

(postmortem use on Oracle Solaris and Linux operating systems only)

This utility can obtain configuration information from a core file obtained from a crash or from a core file obtained using the gcore utility. See The jinfo Utility.

jmap utility

(postmortem use on Oracle Solaris and Linux operating systems only)

This utility can obtain memory map information, including a heap histogram, from a core file obtained from a crash or from a core file obtained using the gcore utility. See The jmap Utility.

jsadebugd daemon

(Oracle Solaris and Linux operating systems only)

The Serviceability Agent Debug Daemon (jsadebugd) attaches to a Java process or to a core file and acts as a debug server. See The jsadebugd Daemon.

jstack utility

This utility can obtain Java and native stack information from a Java process. On Oracle Solaris and Linux operating systems the utility can also get the information from a core file or a remote debug server. See The jstack Utility.

Native tools

Each operating system has native tools and utilities that can be used for postmortem diagnosis. See Native Operating System Tools.

Hung Processes Tools

List of tools and options for diagnosing problems between the application and the Java HotSpot VM in a hung process.

Table 2-3 summarizes the options and tools that can help in scenarios involving a hung or deadlocked process. These tools do not require any special options to start the application.

Java Mission Control, Java Flight Recorder, and the jcmd utility can be used to diagnose problems with the JVM and Java applications. For enhanced diagnostics and reduced performance overhead, it is recommended to use the latest diagnostic utility, jcmd, instead of the earlier jstack, jinfo, and jmap utilities.

Table 2-3 Hung Processes Tools

Tool or Option Description and Usage

Ctrl-Break handler

(Control+\ or kill -QUIT pid on Oracle Solaris and Linux operating systems, and Control+Break on Windows)

This key combination performs a thread dump as well as deadlock detection. The Ctrl-Break handler can optionally print a list of concurrent locks and their owners, as well as a heap histogram. See Control+Break Handler.

jcmd utility

The jcmd utility is used to send diagnostic command requests to the JVM. These requests are useful, among other things, for controlling Java Flight Recordings (JFR), which in turn are used to troubleshoot and diagnose applications with flight recording events. See The jcmd Utility.

jdb utility

Debugger support includes attaching connectors, which allow jdb and other Java language debuggers to attach to a process. This can help show what each thread is doing at the time of a hang or deadlock. See The jdb Utility.

jinfo utility

This utility can obtain configuration information from a Java process. See The jinfo Utility.

jmap utility

This utility can obtain memory map information, including a heap histogram, from a Java process. On Oracle Solaris and Linux operating systems, the -F option can be used if the process is hung. See The jmap Utility.

jsadebugd daemon

(Oracle Solaris and Linux operating systems only)

The Serviceability Agent Debug Daemon (jsadebugd) attaches to a Java process or to a core file and acts as a debug server. See The jsadebugd Daemon.

jstack utility

This utility can obtain Java and native stack information from a Java process. See The jstack Utility.

Native tools

Each operating system has native tools and utilities that can be useful in hang or deadlock situations. See Native Operating System Tools.

Monitoring Tools

List of tools and options for monitoring running applications and detecting problems.

The tools listed in the Table 2-4 are designed for monitoring applications that are running.

Java Mission Control, Java Flight Recorder, and the jcmd utility can be used to diagnose problems with the JVM and Java applications. For enhanced diagnostics and reduced performance overhead, it is recommended to use the latest diagnostic utility, jcmd, instead of the earlier jstack, jinfo, and jmap utilities.

Table 2-4 Monitoring Tools

Tool or Option Description and Usage

Java Mission Control

Java Mission Control (JMC) is a new JDK profiling and diagnostic tools platform for HotSpot JVM. It is a tool suite for basic monitoring, managing, and production-time profiling and diagnostics with high performance. Java Mission Control minimizes the performance overhead that's usually an issue with profiling tools.

jcmd utility

The jcmd utility is used to send diagnostic command requests to the JVM. These requests are useful, among other things, for controlling Java Flight Recordings (JFR), which in turn are used to troubleshoot and diagnose the JVM and Java applications with flight recording events. See The jcmd Utility.

JConsole utility

This utility is a monitoring tool that is based on Java Management Extensions (JMX). The tool uses the built-in JMX instrumentation in the Java Virtual Machine to provide information about performance and resource consumption of running applications. See JConsole.

jmap utility

This utility can obtain memory map information, including a heap histogram, from a Java process, a core file, or a remote debug server. See The jmap Utility.

jps utility

This utility lists the instrumented Java HotSpot VMs on the target system. The utility is very useful in environments where the VM is embedded, that is, it is started using the JNI Invocation API rather than the java launcher. See The jps Utility.

jstack utility

This utility can obtain Java and native stack information from a Java process. On Oracle Solaris and Linux operating systems, the utility can also get the information from a core file or a remote debug server. See The jstack Utility.

jstat utility

This utility uses the built-in instrumentation in Java to provide information about performance and resource consumption of running applications. The tool can be used when diagnosing performance issues, especially those related to heap sizing and garbage collection. See The jstat Utility.

jstatd daemon

This tool is a Remote Method Invocation (RMI) server application that monitors the creation and termination of instrumented Java Virtual Machines and provides an interface to allow remote monitoring tools to attach to VMs running on the local host. See The jstatd Daemon.

visualgc utility

This utility provides a graphical view of the garbage collection system. As with jstat, it uses the built-in instrumentation of Java HotSpot VM. See The visualgc Tool.

Native tools

Each operating system has native tools and utilities that can be useful for monitoring purposes. For example, the dynamic tracing (DTrace) capability introduced in Oracle Solaris 10 operating system performs advanced monitoring. See Native Operating System Tools.

Other Tools, Options, Variables and Properties

List of general troubleshooting tools, options, variables, and properties that can help diagnose issues.

In addition to the tools that are designed for specific types of problems, the tools, options, variables, and properties listed in Table 2-5 can help in diagnosing other issues.

Java Mission Control, Java Flight Recorder, and the jcmd utility can be used to diagnose problems with the JVM and Java applications. For enhanced diagnostics and reduced performance overhead, it is recommended to use the latest diagnostic utility, jcmd, instead of the earlier jstack, jinfo, and jmap utilities.

Table 2-5 General Troubleshooting Tools and Options

Tool or Option Description and Usage

Java Mission Control

Java Mission Control (JMC) is a new JDK profiling and diagnostic tools platform for HotSpot JVM. It is a tool suite for basic monitoring, managing, and production-time profiling and diagnostics with high performance. Java Mission Control minimizes the performance overhead that's usually an issue with profiling tools. See Java Mission Control.

jcmd utility

The jcmd utility is used to send diagnostic command requests to the JVM. These requests are useful, among other things, for controlling Java Flight Recordings (JFR), which in turn are used to troubleshoot and diagnose the JVM and Java applications with flight recording events. See The jcmd Utility.

jinfo utility

This utility can dynamically set, unset, and change the values of certain JVM flags for a specified Java process. On Oracle Solaris and Linux operating systems, it can also print configuration information. See The jinfo Utility.

jrunscript utility

This utility is a command-line script shell, which supports both interactive and batch-mode script execution. See The jrunscript Utility.

Oracle Solaris Studio dbx debugger

This is an interactive, command-line debugging tool, which allows you to have complete control of the dynamic execution of a program, including stopping the program and inspecting its state. For details, see the latest dbx documentation located at Oracle Solaris Studio Program Debugging.

Oracle Solaris Studio Performance Analyzer

This tool can help you assess the performance of your code, identify potential performance problems, and locate the part of the code where the problems occur. The Performance Analyzer can be used from the command line or from a graphical user interface. For details, see the Oracle Solaris Studio Performance Analyzer.

Sun's Dataspace Profiling: DProfile

This tool provides insight into the flow of data within Sun computing systems, helping you identify bottlenecks in both software and hardware. DProfile is supported in the Sun Studio 11 compiler suite through the Performance Analyzer GUI. See DTrace or Dynamic Tracing diagnostic tool.

-Xcheck:jni option

This option is useful in diagnosing problems with applications that use the Java Native Interface (JNI) or that employ third-party libraries (some JDBC drivers, for example). See The -Xcheck:jni Option.

-verbose:class option

This option enables logging of class loading and unloading. See The -verbose:class Option.

-verbose:gc option

This option enables logging of garbage collection information. See The -verbose:gc Option.

-verbose:jni option

This option enables logging of JNI. See The -verbose:jni Option.

JAVA_TOOL_OPTIONS environment variable

This environment variable allows you to specify the initialization of tools, specifically the launching of native or Java programming language agents using the -agentlib or -javaagent options. See Environment Variables and System Properties.

java.security.debug system property

This system property controls whether the security system of the JRE prints trace messages during execution. See The java.security.debug System Property.

The java.lang.management Package

The java.lang.management package provides the management interface for monitoring and management of the JVM and the operating system.

Specifically, it covers interfaces for the following systems:

  • Class loading

  • Compilation

  • Garbage collection

  • Memory manager

  • Runtime

  • Threads

The JDK includes example code that demonstrates the usage of the java.lang.management package. These examples can be found in the $JAVA_HOME/demo/management directory. Some of these examples are as follows:

  • MemoryMonitor demonstrates the use of the java.lang.management API to observe the memory usage of all memory pools consumed by the application.

  • FullThreadDump demonstrates the use of the java.lang.management API to get a full thread dump and detect deadlocks programmatically.

  • VerboseGC demonstrates the use of the java.lang.management API to print the garbage collection statistics and memory usage of an application.
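In the spirit of the MemoryMonitor demo, the following is a minimal sketch (not the demo's actual source; the class name PoolUsage is illustrative) that walks the memory pools through the java.lang.management API and prints their usage:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class PoolUsage {
    public static void main(String[] args) {
        // One MemoryPoolMXBean exists per pool (eden, survivor, old gen, metaspace, ...)
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage usage = pool.getUsage();
            if (usage == null) {
                continue; // an invalid or unsupported pool reports no usage
            }
            System.out.printf("%-30s used=%d committed=%d max=%d%n",
                    pool.getName(), usage.getUsed(), usage.getCommitted(), usage.getMax());
        }
    }
}
```

The same ManagementFactory entry point exposes the other beans listed above, such as ThreadMXBean (used by FullThreadDump for programmatic deadlock detection) and GarbageCollectorMXBean (used by VerboseGC).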

In addition to the java.lang.management package, the JDK release includes platform extensions in the com.sun.management package. The platform extensions include a management interface to obtain detailed statistics from garbage collectors that perform collections in cycles. These extensions also include a management interface to obtain additional memory statistics from the operating system.

The java.lang.instrument Package

The java.lang.instrument package provides services that allow Java programming language agents to instrument programs running on the JVM.

Instrumentation is used by tools such as profilers, tools for tracing method calls, and many others. The package facilitates both load-time and dynamic instrumentation. It also includes methods to obtain information about the loaded classes and information about the amount of storage consumed by a given object.

The java.lang.Thread Class

The java.lang.Thread class has a static method called getAllStackTraces, which returns a map of stack traces for all live threads.

The Thread class also has a method called getState, which returns the thread state; states are defined by the java.lang.Thread.State enumeration. These methods can be useful when you add diagnostic or monitoring capabilities to an application.
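The two methods can be combined into a small self-service thread dump; the following is an assumed minimal usage (the class name StackSnapshot is illustrative):

```java
import java.util.Map;

public class StackSnapshot {
    public static void main(String[] args) {
        // getAllStackTraces returns a snapshot of the stacks of all live threads
        Map<Thread, StackTraceElement[]> traces = Thread.getAllStackTraces();
        for (Map.Entry<Thread, StackTraceElement[]> entry : traces.entrySet()) {
            Thread thread = entry.getKey();
            // getState returns a java.lang.Thread.State value (RUNNABLE, WAITING, ...)
            System.out.println("\"" + thread.getName() + "\" state=" + thread.getState());
            for (StackTraceElement frame : entry.getValue()) {
                System.out.println("    at " + frame);
            }
        }
    }
}
```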

JVM Tool Interface

The JVM Tool Interface (JVM TI) is a native (C/C++) programming interface that can be used by a wide range of development and monitoring tools.

JVM TI provides an interface for the full breadth of tools that need access to VM state, including but not limited to profiling, debugging, monitoring, thread analysis, and coverage analysis tools.

Some examples of agents that rely on JVM TI are the following:

  • Java Debug Wire Protocol (JDWP)

  • The java.lang.instrument package

The specification for JVM TI can be found in the JVM Tool Interface documentation.

The JDK includes example code that demonstrates the usage of JVM TI. These examples can be found in the $JAVA_HOME/demo/jvmti directory. Some of these examples are as follows:

  • mtrace is an agent library that tracks method call and return counts. It uses bytecode instrumentation to instrument all classes loaded into the virtual machine and prints a sorted list of the frequently used methods.

  • heapTracker is an agent library that tracks object allocation. It uses bytecode instrumentation to instrument constructor methods.

  • heapViewer is an agent library that prints heap statistics when the Control+Break handler is invoked. See Control+Break Handler. For each loaded class it prints an instance count of that class and the space used.

The jrunscript Utility

The jrunscript utility is a command-line script shell.

It supports script execution in both interactive mode and batch mode. By default, the shell uses JavaScript, but you can specify any other scripting language for which you supply the path to the script engine's JAR file or .class files.

Because the Java language and the scripting language can communicate with each other, the jrunscript utility supports an exploratory programming style.

See jrunscript in the Java Platform, Standard Edition Tools Reference.

The jsadebugd Daemon

The Java Serviceability Agent Debug Daemon (jsadebugd) attaches to a Java process or to a core file and acts as a debug server.

This utility is currently available only on Oracle Solaris and Linux operating systems. Remote clients such as jstack, jmap, and jinfo can attach to the server using Java Remote Method Invocation (RMI).

See jsadebugd in the Java Platform, Standard Edition Tools Reference.

The jstatd Daemon

The jstatd daemon is an RMI server application that monitors the creation and termination of each instrumented Java HotSpot VM and provides an interface to allow remote monitoring tools to attach to JVMs running on the local host.

For example, this daemon allows the jps utility to list processes on a remote system.

Note:

The instrumentation is not accessible on the FAT32 file system.

See jstatd in the Java Platform, Standard Edition Tools Reference.

Thread States for a Thread Dump

List of possible thread states for a thread dump.

Table 2-6 lists the possible thread states for a thread dump using Control+Break Handler.

Table 2-6 Thread States for a Thread Dump

Thread State Description

NEW

The thread has not yet started.

RUNNABLE

The thread is executing in the JVM.

BLOCKED

The thread is blocked waiting for a monitor lock.

WAITING

The thread is waiting indefinitely for another thread to perform a particular action.

TIMED_WAITING

The thread is waiting for another thread to perform an action for up to a specified waiting time.

TERMINATED

The thread has exited.

Troubleshooting Tools Based on Operating System

List of native tools on Windows, Linux, and Oracle Solaris operating systems that can be used for troubleshooting problems.

Table 2-7 lists troubleshooting tools available on Windows operating system.

Table 2-7 Native Troubleshooting Tools on Windows

Tool Description

dumpchk

Command-line utility to verify that a memory dump file has been created correctly. This tool is included in the Debugging Tools for Windows download available from the Microsoft website, see Collect Crash Dumps on Windows.

msdev debugger

Command-line utility that can be used to launch Visual C++ and the Win32 debugger

userdump

The User Mode Process Dumper is included in the OEM Support Tools download available from the Microsoft website, see Collect Crash Dumps on Windows.

windbg

Windows debugger can be used to debug Windows applications or crash dumps. This tool is included in the Debugging Tools for Windows download available from the Microsoft website, see Collect Crash Dumps on Windows.

/MD and /MDd compiler options

Compiler options that automatically include extra support for tracking memory allocations

Table 2-8 describes some troubleshooting tools available on the Linux operating system.

Table 2-8 Native Troubleshooting Tools on Linux

Tool Description

c++filt

Demangle C++ mangled symbol names. This utility is delivered with the native C++ compiler suite: gcc on Linux.

gdb

GNU debugger

libnjamd

Memory allocation tracking

lsstack

Print thread stack (similar to pstack in Oracle Solaris operating system)

Not all distributions provide this tool by default; therefore, you might have to download it from Open Source downloads.

ltrace

Library call tracer (equivalent to truss -u in Oracle Solaris operating system)

Not all distributions provide this tool by default; therefore, you might have to download it from Open Source downloads.

mtrace and muntrace

GNU malloc tracer

proc tools such as pmap and pstack

Some, but not all, of the proc tools on Oracle Solaris operating system have equivalent tools on Linux. Core file support is not as good for Linux as for Oracle Solaris operating system; for example, pstack does not work for core dumps

strace

System call tracer (equivalent to truss -t in Oracle Solaris operating system)

top

Display most CPU-intensive processes.

vmstat

Report information about processes, memory, paging, block I/O, traps, and CPU activity.

Table 2-9 lists troubleshooting tools available on Oracle Solaris operating system.

Table 2-9 Native Troubleshooting Tools on Oracle Solaris Operating System

Tool Description

coreadm

Specify name and location of core files produced by the JVM.

cpustat

Monitor system behavior using CPU performance counters.

cputrack

Monitor process and LWP behavior using CPU performance counters.

c++filt

Demangle C++ mangled symbol names. This utility is delivered with the native C++ compiler suite: SUNWspro on Oracle Solaris operating system.

dtrace

Introduced in Oracle Solaris 10 operating system, DTrace is a dynamic tracing compiler and tracing utility. It can perform dynamic tracing of kernel functions, system calls, and user functions. This tool allows arbitrary, safe scripting to be executed at entry, exit, and other probe points. Scripts are written in the D programming language, a C-like language with safe pointer semantics. See also DTrace Tool.

gcore

Force a core dump of a process. The process continues after the core dump is written.

intrstat

Report statistics on CPU consumed by interrupt threads.

iostat

Report I/O statistics.

libumem

Introduced in the Oracle Solaris 9 operating system update 3, this library provides fast, scalable object-caching memory allocation and extensive debugging support. The tool can be used to find and fix memory management bugs; see Find Leaks with libumem Tool.

mdb

Modular debugger for kernel and user applications and crash dumps.

netstat

Display the contents of various network-related data structures.

pargs

Print process arguments, environment variables, or the auxiliary vector. Long output is not truncated as it would be by other commands, such as ps.

pfiles

Print information on process file descriptors. Starting with Oracle Solaris 10 operating system, the tool prints the file name also.

pldd

Print shared objects loaded by a process.

pmap

Print memory layout of a process or core file, including heap, data, and text sections. Starting with Oracle Solaris 10, stack segments are clearly identified with the text [stack] along with the thread ID. See also Improvements to pmap Tool.

prstat

Report statistics for active Oracle Solaris operating system processes. (Similar to top)

prun

Set the process to running mode (reverse of pstop).

ps

List all processes.

psig

List the signal handlers of a process.

pstack

Print stack of threads of a given process or core file. Starting with Oracle Solaris 10 operating system, Java method names can be printed for Java frames. See also Improvements to pstack Tool.

pstop

Stop the process (suspend).

ptree

Print process tree containing the given PID.

sar

System activity reporter.

sdtprocess

Display most CPU-intensive processes. (Similar to top)

sdtperfmeter

Display graphs showing system performance (for example, CPU, disks, and network).

top

Display most CPU-intensive processes. This tool is available as freeware for Oracle Solaris operating system but is not installed by default.

trapstat

Display runtime trap statistics. (SPARC only)

truss

Trace entry and exit events for system calls, user-mode functions, and signals; optionally stop the process at one of these events. This tool also prints the arguments of system calls and user functions.

vmstat

Report system virtual memory statistics.

watchmalloc

Track memory allocations.
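To illustrate how the proc tools above fit together, the following sketch takes a point-in-time snapshot of a process. It is written with an Oracle Solaris host in mind, so each tool is probed for before use (pstack and pmap also exist on some Linux distributions); the PID defaults to the current shell as a hypothetical stand-in for the Java process under study.

```shell
# Stand-in PID: the current shell. Substitute the PID of the Java
# process under investigation.
PID=$$

# Walk through the snapshot tools, skipping any that are not
# installed on this system.
for tool in pargs pfiles pldd pstack pmap ptree; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "== $tool $PID =="
        "$tool" "$PID" 2>&1 | head -n 20
    else
        echo "== $tool: not available on this system =="
    fi
done

# To preserve the full process state for offline analysis with mdb,
# write a core file without stopping the process:
#   gcore $PID        # produces core.<PID> in the current directory
```

Because none of these tools stop the target for long (gcore in particular lets the process continue after the dump is written), the sequence is safe to run against a production process, subject to the usual privilege requirements.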