Diagnostics Guide


Tuning Locks

The interaction between Java threads affects the performance of your application. There are two ways to tune how threads interact:

  1. By modifying the structure of your program code, for example to minimize the amount of contention between threads.
  2. By using options in the Oracle JRockit JVM that affect how contention is handled when your application is running.
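The first approach, minimizing contention in your program code, can be as simple as shrinking a critical section. The sketch below is illustrative only (the class and method names are not from the JRockit documentation); it shows how moving expensive work outside a synchronized block reduces the time other threads spend blocked on the same lock:

```java
// Sketch: reducing lock contention by narrowing the synchronized region.
// Class and method names are illustrative, not part of the JRockit documentation.
public class NarrowLocking {
    private final Object lock = new Object();
    private long total;

    // Contended version: the expensive computation runs while holding the lock,
    // so every other thread blocks for the full duration of the call.
    long addSlow(int n) {
        synchronized (lock) {
            long v = expensive(n);
            total += v;
            return total;
        }
    }

    // Better: compute outside the lock, hold it only for the shared update.
    long addFast(int n) {
        long v = expensive(n);       // runs without blocking other threads
        synchronized (lock) {
            total += v;
            return total;
        }
    }

    private long expensive(int n) {  // stand-in for real per-thread work
        long r = 0;
        for (int i = 0; i < n; i++) r += i;
        return r;
    }
}
```

Shorter critical sections mean a thin lock is less likely to see contention and be inflated to a fat lock in the first place.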

The Oracle JRockit JVM Diagnostics Guide does not document how to optimize thread management when coding Java, but this section contains information about JRockit JVM options for tuning how locks, and contention for locks, are handled.

For more information on how the JRockit JVM handles threads and locks, see Understanding Threads and Locks.


Lock Profiling

You can enable the JRockit Runtime Analyzer to collect and analyze information about the locks and the contention that occurred while the runtime analyzer was recording. To do this, add the following option when you start your application:

-Djrockit.lockprofiling=true

When lock profiling has been enabled, you can view information about Java locks on the Lock Profiling tab in the JRockit Mission Control Client.

Note: Lock profiling adds considerable processing overhead (on the order of 20%) while your Java application runs.

There are two Ctrl-Break handlers tied to the lock profile counters. Both require lock profiling to be enabled with the -Djrockit.lockprofiling option. These handlers are used with jrcmd.

The handler lockprofile_print prints the current values of the lock profile counters. The handler lockprofile_reset resets the current values of the lock profile counters.

For more information about Ctrl-Break handlers and using jrcmd, see Running Diagnostic Commands.


Disabling Spinning Against Fat Locks

Spinning against a fat lock is generally beneficial. In some instances, however, it can be costly in terms of performance, for example when you have locks that cause long waiting periods and high contention. You can turn off spinning against fat locks, and eliminate that potential performance degradation, with the following option:


The option disables the fat lock spin code in the JRockit JVM, letting threads that are trying to acquire a fat lock go to sleep directly.
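The kind of workload this option targets can be sketched as follows. The class below is illustrative (its names are not from the JRockit documentation): the lock is held for long stretches, so a waiter that spins burns CPU for the whole hold time and then sleeps anyway, gaining nothing over sleeping directly:

```java
// Sketch of a workload where spinning on a contended lock is wasteful:
// each worker holds the lock for a long critical section, so waiting
// threads cannot acquire it quickly no matter how long they spin.
// Names here are illustrative, not part of the JRockit documentation.
import java.util.concurrent.CountDownLatch;

public class LongHeldLock {
    private final Object lock = new Object();
    private int jobsDone;

    int runWorkers(int workers, int holdMillis) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(workers);
        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                synchronized (lock) {            // long critical section:
                    sleepQuietly(holdMillis);    // waiters gain nothing by spinning
                    jobsDone++;
                }
                done.countDown();
            }).start();
        }
        done.await();
        synchronized (lock) {                    // read result under the lock
            return jobsDone;
        }
    }

    private static void sleepQuietly(int ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

For locks with short hold times the opposite is true: a brief spin often acquires the lock without the cost of a sleep/wakeup cycle, which is why spinning is left on by default.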


Adaptive Spinning Against Fat Locks

You can let the JVM decide whether threads should spin against a fat lock or go directly into the sleeping state when they fail to take it. To enable adaptive lock spinning, set the option:


By default, adaptive spinning against fat locks is disabled. Note that whether threads that fail to take a particular fat lock will spin or sleep can change during runtime.

You can specify the criteria that must be fulfilled for threads to start spinning against a fat lock. The following options let you tune adaptive spinning:


This sets the maximum difference in CPU-specific ticks where spinning is beneficial.


Number of spins that must fail before threads switch from spinning to sleeping.


Number of sleeps that must get the lock early before threads go back to spinning.


Number of loops before the JRockit JVM tries to read from the lock again in the innermost lock spin code.


Lock Deflation

If the amount of contention on a lock that has turned fat has been small, the lock converts back to a thin lock. This process is called lock deflation. By default, lock deflation is enabled. If you do not want fat locks to deflate, run your application with the following option:


With lock deflation disabled, a fat lock stays a fat lock even after there are no threads contending for it or waiting to take it.
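The access pattern that deflation targets can be sketched in plain Java. Note that inflation and deflation are internal to the JVM and cannot be observed from application code; the class below (whose names are illustrative, not from the JRockit documentation) only shows the shape of the workload:

```java
// Access pattern that lock deflation targets: a burst of contention can
// inflate the lock to a fat lock, after which a single thread performs
// many uncontended lock/unlock pairs. With deflation enabled, the JVM can
// convert the lock back to a cheap thin lock for the uncontended phase.
// (Inflation/deflation is internal to the JVM and not observable here.)
public class DeflationPattern {
    private final Object lock = new Object();
    private int count;

    int run() throws InterruptedException {
        // Phase 1: brief contention from a second thread.
        Thread t = new Thread(() -> {
            for (int i = 0; i < 1000; i++) {
                synchronized (lock) { count++; }
            }
        });
        t.start();
        for (int i = 0; i < 1000; i++) {
            synchronized (lock) { count++; }
        }
        t.join();

        // Phase 2: many uncontended lock/unlock pairs by this thread only --
        // the phase where a deflated (thin) lock is cheaper to take.
        for (int i = 0; i < 10000; i++) {
            synchronized (lock) { count++; }
        }
        return count;
    }
}
```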

You can also tune when lock deflation is triggered. With the following option, specify the number of uncontended fat lock unlocks that must occur before deflation:



Lazy Unlocking

So-called “lazy” unlocking is intended for applications with many non-shared locks. Be aware that it can introduce performance penalties in applications that have many short-lived but shared locks.

When lazy unlocking is enabled, a lock is not released when a critical section is exited. Instead, the lock remains in the possession of the thread that acquired it; the next thread that tries to acquire the lock must ensure that the lock is, or can be, released, by determining whether the initial thread still uses it. A lock that turns out to be shared converts to a normal lock and does not stay in lazy mode.
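The "many non-shared locks" case typically arises when internally synchronized classes are used by a single thread. The sketch below (class and method names are illustrative, not from the JRockit documentation) shows such a pattern: every Vector call acquires and releases a lock that no other thread ever touches, which is exactly the repeated uncontested unlocking that lazy unlocking avoids:

```java
import java.util.Vector;

// Pattern that lazy unlocking targets: a synchronized object confined to
// one thread. Each Vector method below acquires and releases the same
// never-contended monitor; with lazy unlocking, the JVM can leave the lock
// "held" between calls instead of releasing it on every exit.
// Names are illustrative, not part of the JRockit documentation.
public class ThreadConfinedLocking {
    static int sumConfined(int n) {
        Vector<Integer> v = new Vector<>();   // internally synchronized
        for (int i = 0; i < n; i++) {
            v.add(i);                         // lock/unlock per call, never contended
        }
        int sum = 0;
        for (int x : v) sum += x;             // safe: only this thread uses v
        return sum;
    }
}
```

If a second thread later touches such an object, the lock is shared and, as described above, converts back to a normal lock.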

Lazy unlocking is enabled by default in the Java 6 version of the Oracle JRockit JVM R27.6 on all platforms except IA64, and for all garbage collection strategies except the deterministic garbage collector. In older releases, you can enable lazy unlocking with the command-line option -XXlazyUnlocking.
