4 Compilation Optimization

This chapter describes various compiler options available in Oracle JRockit and HotSpot VMs to optimize compilation.

Note:

Some of the tools described in this document require a commercial license for use in production. To learn more about commercial features and how to enable them, see http://www.oracle.com/technetwork/java/javaseproducts/.

4.1 Compiler Considerations

Unlike Oracle JRockit, HotSpot features a Java byte code interpreter in addition to two different JIT compilers: client (also known as C1) and server (also known as C2). The HotSpot VM defaults to interpreting Java byte code and JIT compiles only the methods that runtime profiling determines to be "hot" - the methods that have been executed a threshold number of times. Originally, users had to choose at startup which of the two JIT compilers, client or server, would be used. The client compiler compiles methods quickly, but emits machine code that is less optimized than the server compiler's. By comparison, the server JIT compiler often takes more time (and memory) to compile the same methods, but generates better optimized machine code than the code produced by the client compiler. The result is that the client compiler allows most applications to start up faster (because of less compilation overhead), while the server compiler ultimately provides better run-time performance once the application has reached steady state (has warmed up). Used independently, each of these two compilers serves a different use case:

  • client: a quick startup and a smaller memory footprint are more important than steady-state performance

  • server: steady-state performance is more important than a quick startup

If they had to choose a single JIT compiler, most Oracle JRockit users should choose the server compiler. Because Oracle JRockit was designed as a server-side JVM, most environments that use Oracle JRockit are server deployments such as WLS or Coherence. The one notable exception would be cases where Oracle JRockit was used to run a smaller client application. For example, the client compiler would probably be a better fit for a command-line administration tool like WLST.

The Oracle JRockit JVM compiles a Java method and generates machine code for it the very first time it is invoked. The compiled code of frequently invoked methods is then optimized in the background by an optimizer thread. This is completely different from the HotSpot JVM, where methods are first interpreted and later compiled by either the client compiler (fewer optimizations) or the server compiler (more optimizations).

The client compiler can be selected using the -client JVM option, and the server compiler can be selected using the -server JVM option. The server compiler is selected by default on server-class machines.
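For example, assuming an application packaged as a hypothetical MyApp.jar, the compiler can be selected explicitly on the command line:

    java -client -jar MyApp.jar
    java -server -jar MyApp.jar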

Tiered compilation, introduced in Java SE 7, brings client startup speeds to the server VM. A server VM uses the interpreter to collect profiling information about methods that is fed into the compiler. In the tiered scheme, in addition to the interpreter, the client compiler generates compiled versions of methods that collect profiling information about themselves. Since the compiled code is substantially faster than the interpreter, the program executes with greater performance during this profiling phase. In many cases, a startup that is even faster than with the client VM can be achieved because the final code produced by the server compiler may be already available during the early stages of application initialization. The tiered scheme can also achieve better peak performance than a regular server VM because the faster profiling phase allows a longer period of profiling, which may yield better optimization. Use the -XX:+TieredCompilation flag with the java command to enable tiered compilation.

In Java SE 8, tiered compilation is the default mode for the server VM. Both 32-bit and 64-bit modes are supported. The -XX:-TieredCompilation flag can be used to disable tiered compilation.
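For example (again using the hypothetical MyApp.jar), tiered compilation can be enabled or disabled explicitly:

    java -XX:+TieredCompilation -jar MyApp.jar
    java -XX:-TieredCompilation -jar MyApp.jar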

Important HotSpot JIT Compiler Options

The following table lists some important Oracle JRockit and HotSpot compiler options.

Table 4-1 JIT Compiler Options

Each entry lists the Oracle JRockit option, its closest HotSpot equivalent, and an explanatory note.

Oracle JRockit: -XnoOpt, -XXoptFile:<file>

HotSpot: -Xint, -XX:CompileCommand, -XX:CompileCommandFile, -XX:CompileOnly, -XX:CompileThreshold

Note: Because JIT compilation in HotSpot can be considered analogous to optimization in Oracle JRockit (that is, both techniques are applied only to methods that profiling determines to be "hot"), the closest HotSpot equivalent to Oracle JRockit's -XnoOpt is -Xint, which disables JIT compilation entirely so that only the byte code interpreter is used to execute all methods. This may have a substantial performance impact, but it can be useful in the same situations where -XnoOpt was used with Oracle JRockit: troubleshooting or working around possible compiler issues.

Like Oracle JRockit, HotSpot also offers ways to exclude methods from compilation or to turn off specific optimizations for them.

If you used the -XnoOpt or -XXoptFile options with the Oracle JRockit VM to turn off optimization of certain methods because those methods caused problems when optimized, do not translate these options directly into HotSpot options that exclude those methods from compilation or turn off specific optimizations for them.

The same compilation or optimization issues observed with the Oracle JRockit JVM for specific methods are very unlikely to be present with the HotSpot JVM. So, to begin with, it is best to remove these options when migrating to the HotSpot JVM.

Equivalent HotSpot JVM options:

  • -XX:CompileCommand=command,method[,option]

    Specifies a command to perform on a method. For example, to exclude the indexOf() method of the String class from being compiled, use the following:

    -XX:CompileCommand=exclude,java/lang/String.indexOf

  • -XX:CompileCommandFile=<filename>

    Sets the file from which JIT compiler commands are read. By default, the .hotspot_compiler file is used to store commands performed by the JIT compiler.

  • -XX:CompileOnly=<methods>

    Sets the list of methods (separated by commas) to which compilation should be restricted.

  • -XX:CompileThreshold=<invocations>

    Sets the number of interpreted method invocations before compilation. By default, in the server JVM, the JIT compiler performs 10,000 interpreted method invocations to gather information for efficient compilation. For the client JVM, the default setting is 1,500 invocations.

The CompileCommand, CompileCommandFile, CompileOnly, and CompileThreshold options can be used to disable or delay the compilation of specified methods.

Oracle JRockit: -XX:OptThreads

HotSpot: -XX:CICompilerCount=<threads>

Note: There are no dedicated optimization threads in the HotSpot JVM. The number of compiler threads, which perform both compilation and optimization, is set with -XX:CICompilerCount=<threads>. By default, the number of threads is 2 for the server JVM and 1 for the client JVM; it scales with the number of cores when tiered compilation is used.

Oracle JRockit: -XX:+ReserveCodeMemory, -XX:MaxCodeMemory=<size>

HotSpot: -XX:ReservedCodeCacheSize=<size>

Note: Sets the maximum code cache size (in bytes) for JIT-compiled code. This option is equivalent to -Xmaxjitcodesize.

Oracle JRockit: None

HotSpot: -XX:+TieredCompilation

Note: Enables the use of tiered compilation. On JDK 8, this option is enabled by default. Only the Java HotSpot Server VM supports this option.
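As a sketch of how several of the HotSpot options from Table 4-1 might be combined, consider the following command line (the application name, compiler thread count, code cache size, and excluded method are illustrative values, not recommendations):

    java -XX:CICompilerCount=4 \
         -XX:ReservedCodeCacheSize=256m \
         -XX:CompileCommand=exclude,java/lang/String.indexOf \
         -jar MyApp.jar

For troubleshooting, the same application can be run fully interpreted with java -Xint -jar MyApp.jar, which is the closest analogue to running Oracle JRockit with -XnoOpt.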