Describes the fatal error log, its location, and contents.
The fatal error log is created when a fatal error occurs. It contains information and state obtained at the time of the fatal error.
The format of this file can change slightly in update releases.
This appendix contains the following sections:
To specify where the log file will be created, use the product flag -XX:ErrorFile=file, where file represents the full path for the log file location. The substring %% in the file variable is converted to %, and the substring %p is converted to the PID of the process.

In the following example, the error log file will be written to the directory /var/log/java and will be named java_error<pid>.log:

-XX:ErrorFile=/var/log/java/java_error%p.log

If the -XX:ErrorFile=file flag is not specified, then the default log file name is hs_err_pid<pid>.log, where pid is the PID of the process.
In addition, if the
-XX:ErrorFile=file flag is not specified, the system attempts to create the file in the working directory of the process. In the event that the file cannot be created in the working directory (insufficient space, permission problem, or other issue), the file is created in the temporary directory for the operating system. On the Oracle Solaris and Linux operating systems, the temporary directory is
/tmp. On Windows, the temporary directory is specified by the value of the
TMP environment variable. If that environment variable is not defined, then the value of the
TEMP environment variable is used.
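When a crash has already happened and the log's location is in doubt, the same lookup order can be replayed by hand. The following Python sketch (illustrative only; the helper name and directory argument are not part of the JDK) checks a working directory first and then falls back to the operating system's temporary directory:

```python
import glob
import os
import tempfile

def find_fatal_error_logs(workdir="."):
    """Search for default-named fatal error logs (hs_err_pid<pid>.log),
    first in the given working directory and then in the OS temporary
    directory, mirroring the order in which the VM tries to create them."""
    logs = glob.glob(os.path.join(workdir, "hs_err_pid*.log"))
    if not logs:
        # tempfile.gettempdir() honors TMP/TEMP on Windows and falls
        # back to /tmp on Linux, matching the behavior described above.
        logs = glob.glob(os.path.join(tempfile.gettempdir(), "hs_err_pid*.log"))
    return sorted(logs)
```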
Description of the fatal error log file and the sections that contain information obtained at the time of the fatal error.
The error log contains information obtained at the time of the fatal error, including the following information, where possible:
The operating exception or signal that provoked the fatal error
Version and configuration information
Details about the thread that provoked the fatal error and the thread's stack trace
List of running threads and their states
Summary information about the heap
List of native libraries loaded
Details about the operating system and CPU
In some cases only a subset of this information is output to the error log. This can happen when a fatal error is of such severity that the error handler is unable to recover and report all the details.
The error log is a text file consisting of the following sections:
A header that provides a brief description of the crash. See Header Format.
A section with thread information. See Thread Section Format.
A section with process information. See Process Section Format.
A section with system information. See System Section Format.
The format of the fatal error log described here is based on Java SE 6. The format might be different with other releases.
The header section at the beginning of every fatal error log file contains a brief description of the problem.
The header is also printed to standard output and may show up in the application's output log.
The header includes a link to the HotSpot Virtual Machine Error Reporting Page, where the user can submit a bug report.
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007f0f159f857d, pid=18240, tid=18245
#
# JRE version: Java(TM) SE Runtime Environment (9.0+167) (build 9-ea+167)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (9-ea+167, mixed mode, tiered, compressed oops, g1 gc, linux-amd64)
# Problematic frame:
# C  [libMyApp.so+0x57d]  Java_MyApp_readData+0x11
#
# Core dump will be written. Default location: /cores/core.18240
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
The example shows that the VM crashed on an unexpected signal.
The next line describes the signal type, program counter (pc) that caused the signal, process ID, and thread ID, as shown in the following example.
#  SIGSEGV (0xb) at pc=0x00007f0f159f857d, pid=18240, tid=18245
   |       |        |                      |          +-- thread id
   |       |        |                      +-- process id
   |       |        +-- program counter (instruction pointer)
   |       +-- signal number
   +-- signal name
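When triaging many logs, the fields labeled above can be pulled out mechanically. This Python sketch (the regular expression assumes the field layout shown here; other log versions may differ slightly) extracts the signal name, program counter, process ID, and thread ID:

```python
import re

# Sketch: pull signal name, pc, pid, and tid out of the crash header line.
HEADER_RE = re.compile(
    r"#\s+(?P<signal>\w+)\s+\(0x[0-9a-f]+\)\s+at\s+pc=(?P<pc>0x[0-9a-f]+),"
    r"\s+pid=(?P<pid>\d+),\s+tid=(?P<tid>\d+)"
)

def parse_crash_header(line):
    """Return a dict of header fields, or None if the line does not match."""
    m = HEADER_RE.search(line)
    return m.groupdict() if m else None
```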
The next line contains the VM version (client VM or server VM), an indication of whether the application was run in mixed or interpreted mode, and an indication of whether class file sharing was enabled, as shown in the following line.
# Java VM: Java HotSpot(TM) 64-Bit Server VM (9-ea+167, mixed mode, tiered, compressed oops, g1 gc, linux-amd64)
The next information is the function frame that caused the crash, as shown in the following example.
# Problematic frame:
# C  [libMyApp.so+0x57d]  Java_MyApp_readData+0x11
  |  +-- Same as pc, but represented as library name and offset.
  |      For position-independent libraries (JVM and most shared
  |      libraries), it is possible to inspect the instructions
  |      that caused the crash without a debugger or core file
  |      by using a disassembler to dump instructions near the offset.
  +-- Frame type
In this example, the "C" frame type indicates a native C frame. Table A-1 shows the possible frame types.
Table A-1 Frame Types

Frame Type   Description
C            Native C frame
j            Interpreted Java frame
v            VM-generated stub frame
J            Other frame types, including compiled Java frames
Internal errors cause the VM error handler to generate a similar error dump; however, the header format is different. Examples of internal errors are guarantee() failure, assertion failure, ShouldNotReachHere(), and so forth. The following example shows the header format for an internal error.
#
# An unexpected error has been detected by HotSpot Virtual Machine:
#
# Internal Error (4F533F4C494E55583F491418160E43505000F5), pid=10226, tid=16384
#
# Java VM: Java HotSpot(TM) Client VM (1.6.0-rc-b63 mixed mode)
In the above header, there is no signal name or signal number. Instead the second line now contains
Internal Error and a long hexadecimal string. This hexadecimal string encodes the source module and line number where the error was detected. In general this "error string" is useful only to engineers working on the HotSpot Virtual Machine.
The error string encodes a line number and therefore it changes with each code change and release. A crash with a given error string in one release (for example, 1.6.0) might not correspond to the same crash in an update release (for example, 1.6.0_01), even if the strings match.
Do not assume that a workaround or solution that worked in one situation associated with a given error string will work in another situation associated with that same error string. Note the following facts:
Errors with the same root cause might have different error strings.
Errors with the same error string might have completely different root causes.
Therefore, the error string should not be used as the sole criterion when troubleshooting bugs.
Information about the thread that crashed.
If multiple threads crash at the same time, then only one thread is printed.
The first part of the thread section shows the thread that caused the fatal error, as shown in the following example.
Current thread (0x00007f102c013000): JavaThread "main" [_thread_in_native, id=18245, stack(0x00007f10345c0000,0x00007f10346c0000)]
                |                    |          |       |                  |         +-- stack
                |                    |          |       |                  +-- ID
                |                    |          |       +-- state
                |                    |          +-- name
                |                    +-- type
                +-- pointer
The thread pointer is the pointer to the Java VM internal thread structure. It is generally of no interest unless you are debugging a live Java VM or core file.
The following list shows possible thread types.
Table A-2 shows the important thread states.
Table A-2 Thread States

Thread State            Description
_thread_uninitialized   Thread is not created. This occurs only in the case of memory corruption.
_thread_new             Thread was created, but it has not yet started.
_thread_in_native       Thread is running native code. The error is probably a bug in the native code.
_thread_in_vm           Thread is running VM code.
_thread_in_Java         Thread is running either interpreted or compiled Java code.
_thread_blocked         Thread is blocked.
If any of the previous states is followed by the string _trans, then the thread is in transition to a different state.
The thread ID in the output is the native thread identifier.
If a Java thread is a daemon thread, then the string daemon is printed before the thread state.
The next information in the error log describes the unexpected signal that caused the VM to terminate. On a Windows system the output appears as shown in the following example.
siginfo: ExceptionCode=0xc0000005, reading address 0xd8ffecf1
In the above example, the exception code is 0xc0000005 (ACCESS_VIOLATION), and the exception occurred when the thread attempted to read address 0xd8ffecf1.
On Oracle Solaris and Linux operating systems the signal number (
si_signo) and signal code (
si_code) are used to identify the exception, as follows:
siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0x0000000000000000
The next information in the error log shows the register context at the time of the fatal error. The exact format of this output is processor-dependent. The following example shows output for the Intel(R) Xeon(R) processor.
Registers:
RAX=0x0000000000000000, RBX=0x00007f0f17aff3b0, RCX=0x0000000000000001, RDX=0x00007f1033880358
RSP=0x00007f10346be930, RBP=0x00007f10346be930, RSI=0x00007f10346be9a0, RDI=0x00007f102c013218
R8 =0x00007f0f17aff3b0, R9 =0x0000000000000008, R10=0x00007f1011bb1de9, R11=0x0000000101cfc5e0
R12=0x0000000000000000, R13=0x00007f0f17aff3b0, R14=0x00007f10346be9a8, R15=0x00007f102c013000
RIP=0x00007f0f159f857d, EFLAGS=0x0000000000010283, CSGSFS=0x0000000000000033, ERR=0x0000000000000004
The register values might be useful when combined with instructions, as described below.
After the register values, the following example shows the error log that contains the top of stack followed by 32 bytes of instructions (opcodes) near the program counter (PC) when the system crashed. These opcodes can be decoded with a disassembler to produce the instructions around the location of the crash. Note: IA32 and AMD64 instructions are variable in length, and so it is not always possible to reliably decode instructions before the crash PC.
Top of Stack: (sp=0x00007f10346be930)
0x00007f10346be930:   00007f10346be990 00007f1011bb1e15
0x00007f10346be940:   00007f1011bb1b33 00007f10346be948
0x00007f10346be950:   00007f0f17aff3b0 00007f10346be9a8
0x00007f10346be960:   00007f0f17aff5a0 0000000000000000

Instructions: (pc=0x00007f0f159f857d)
0x00007f0f159f855d:   3d e6 08 20 00 ff e0 0f 1f 40 00 5d c3 90 90 55
0x00007f0f159f856d:   48 89 e5 48 89 7d f8 48 89 75 f0 b8 00 00 00 00
0x00007f0f159f857d:   8b 00 5d c3 90 90 90 90 90 90 90 90 90 90 90 90
0x00007f0f159f858d:   90 90 90 55 48 89 e5 53 48 83 ec 08 48 8b 05 88
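Decoding the opcodes requires a disassembler, but collecting the bytes at the crash PC is simple. The following Python sketch (a helper written for illustration; it assumes the "address: byte byte ..." layout shown above) gathers the dumped bytes starting at a given program counter so they can be fed to a disassembler:

```python
def bytes_at_pc(instructions_text, pc):
    """Collect the opcode bytes from an 'Instructions:' hex dump and
    return those starting at the crash pc. Feeding the result to a
    disassembler recovers the faulting instruction."""
    dump = {}
    for line in instructions_text.splitlines():
        line = line.strip()
        # Only lines of the form '0x<address>:   <hex bytes>' carry data.
        if ":" not in line or not line.startswith("0x"):
            continue
        addr_s, _, rest = line.partition(":")
        addr = int(addr_s, 16)
        for i, tok in enumerate(rest.split()):
            dump[addr + i] = int(tok, 16)
    return bytes(dump[a] for a in sorted(dump) if a >= pc)
```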
Where possible, the next output in the error log is the thread stack, as shown in the following example. This includes the addresses of the base and the top of the stack, the current stack pointer, and the amount of unused stack available to the thread. This is followed, where possible, by the stack frames, and up to 100 frames are printed. For C/C++ frames, the library name may also be printed. Note: In some fatal error conditions, the stack may be corrupt, and this detail may not be available.
Stack: [0x00007f10345c0000,0x00007f10346c0000], sp=0x00007f10346be930, free space=1018k
Native frames: (J=compiled Java code, A=aot compiled Java code, j=interpreted, Vv=VM code, C=native code)
C  [libMyApp.so+0x57d]  Java_MyApp_readData+0x11
j  MyApp.readData()I+0
j  MyApp.main([Ljava/lang/String;)V+15
v  ~StubRoutines::call_stub
V  [libjvm.so+0x839eea]  JavaCalls::call_helper(JavaValue*, methodHandle const&, JavaCallArguments*, Thread*)+0x47a
V  [libjvm.so+0x896fcf]  jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) [clone .isra.90]+0x21f
V  [libjvm.so+0x8a7f1e]  jni_CallStaticVoidMethod+0x14e
C  [libjli.so+0x4142]  JavaMain+0x812
C  [libpthread.so.0+0x7e9a]  start_thread+0xda

Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j  MyApp.readData()I+0
j  MyApp.main([Ljava/lang/String;)V+15
v  ~StubRoutines::call_stub
The log contains two thread stacks.
The first thread stack is
Native frames, which prints the native thread showing all function calls. However, this thread stack does not take into account the Java methods that are inlined by the runtime compiler; if methods are inlined, then they appear to be part of the parent's stack frame.
The information in the thread stack for native frames provides important information about the cause of the crash. By analyzing the libraries in the list from the top down, you can generally determine which library might have caused the problem and report it to the appropriate organization responsible for that library.
The second thread stack is
Java frames, which prints the Java frames including the inlined methods, skipping the native frames. Depending on the crash, it might not be possible to print the native thread stack, but it might be possible to print the Java frames.
If the error occurred in the VM thread or in a compiler thread, then further details are printed. For example, in the case of the VM thread, the VM operation is printed if the VM thread is executing a VM operation at the time of the fatal error. In the following output example, the compiler thread caused the fatal error; the task is a compiler task, and the HotSpot Client VM is compiling the method shown.
Current CompileTask: HotSpot Client Compiler:754 b nsk.jvmti.scenarios.hotswap.HS101.hs101t004Thread.ackermann(IJ)J (42 bytes)
For the HotSpot Server VM, the output for the compiler task is slightly different but will also include the full class name and method.
The process section is printed after the thread section.
It contains information about the whole process, including the thread list and memory usage of the process.
The thread list includes the threads that the VM is aware of, as shown in the following example.
=>0x0805ac88 JavaThread "main" [_thread_in_native, id=21139, stack(0x00007f10345c0000,0x00007f10346c0000)]
| |          |          |       |                  |       +-- stack
| |          |          |       |                  +-- ID
| |          |          |       +-- state (JavaThread only)
| |          |          +-- name
| |          +-- type
| +-- pointer
+-- "=>" current thread
This includes all Java threads and some VM internal threads, but does not include any native threads created by the user application that have not attached to the VM, as shown in the following example.
Java Threads: ( => current thread )
  0x00007f102c469800 JavaThread "C2 CompilerThread0" daemon [_thread_blocked, id=18302, stack(0x00007f0f16f31000,0x00007f0f17032000)]
  0x00007f102c468000 JavaThread "Signal Dispatcher" daemon [_thread_blocked, id=18301, stack(0x00007f0f17032000,0x00007f0f17133000)]
  0x00007f102c450800 JavaThread "Finalizer" daemon [_thread_blocked, id=18298, stack(0x00007f0f173fc000,0x00007f0f174fd000)]
  0x00007f102c448800 JavaThread "Reference Handler" daemon [_thread_blocked, id=18297, stack(0x00007f0f174fd000,0x00007f0f175fe000)]
=>0x00007f102c013000 JavaThread "main" [_thread_in_native, id=18245, stack(0x00007f10345c0000,0x00007f10346c0000)]

Other Threads:
  0x00007f102c43f000 VMThread "VM Thread" [stack: 0x00007f0f175ff000,0x00007f0f176ff000] [id=18296]
  0x00007f102c54b000 WatcherThread [stack: 0x00007f0f15bfb000,0x00007f0f15cfb000] [id=18338]
The thread type and thread state are described in Thread Section Format.
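The JavaThread lines in the thread list follow a regular enough layout to be parsed mechanically. This Python sketch (illustrative; the pattern matches the format in the example above and ignores other thread types) extracts the name, daemon flag, state, and native ID of each Java thread:

```python
import re

# Sketch: match JavaThread entries of the "Java Threads" list.
THREAD_RE = re.compile(
    r'JavaThread "(?P<name>[^"]+)"(?P<daemon> daemon)?'
    r' \[(?P<state>\w+), id=(?P<id>\d+)'
)

def parse_java_threads(log_text):
    """Return one dict per JavaThread line found in log_text."""
    threads = []
    for m in THREAD_RE.finditer(log_text):
        threads.append({
            "name": m.group("name"),
            "daemon": m.group("daemon") is not None,
            "state": m.group("state"),
            "id": int(m.group("id")),
        })
    return threads
```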
The next information is the VM state, which indicates the overall state of the virtual machine. Table A-3 describes the general states.
Table A-3 VM States
General VM State    Description
not at safepoint    Normal execution.
at safepoint        All threads are blocked in the VM waiting for a special VM operation to complete.
synchronizing       A special VM operation is required, and the VM is waiting for all threads in the VM to block.
The VM state output is a single line in the error log, as follows:
VM state:not at safepoint (normal execution)
Mutexes and Monitors
The next information in the error log is a list of mutexes and monitors that are currently owned by a thread, as shown in the following example. These mutexes are VM internal locks rather than monitors associated with Java objects. The following is an example to show how the output might look when a crash happens when VM locks are held. For each lock, the log contains the name of the lock, its owner, and the addresses of a VM internal mutex structure and its OS lock. In general, this information is useful only to those who are very familiar with the HotSpot VM. The owner thread can be cross-referenced to the thread list.
VM Mutex/Monitor currently owned by a thread:  ([mutex/lock_event])
[0x007357b0/0x0000031c] Threads_lock - owner thread: 0x00996318
[0x00735978/0x000002e0] Heap_lock - owner thread: 0x00736218
The next information is a summary of the heap, as shown in the following example. The output depends on the garbage collection (GC) configuration. In this example, the serial collector is used, class data sharing is disabled, and the tenured generation is empty. This probably indicates that the fatal error occurred early or during startup, and a GC has not yet promoted any objects into the tenured generation.
Heap
 def new generation   total 576K, used 161K [0x46570000, 0x46610000, 0x46a50000)
  eden space 512K,  31% used [0x46570000, 0x46598768, 0x465f0000)
  from space 64K,    0% used [0x465f0000, 0x465f0000, 0x46600000)
  to   space 64K,    0% used [0x46600000, 0x46600000, 0x46610000)
 tenured generation   total 1408K, used 0K [0x46a50000, 0x46bb0000, 0x4a570000)
   the space 1408K,   0% used [0x46a50000, 0x46a50000, 0x46a50200, 0x46bb0000)
 compacting perm gen  total 8192K, used 1319K [0x4a570000, 0x4ad70000, 0x4e570000)
   the space 8192K,  16% used [0x4a570000, 0x4a6b9d48, 0x4a6b9e00, 0x4ad70000)
No shared spaces configured.
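The percentages in each space line can be re-derived from the bracketed addresses, which is a useful sanity check when reading a log. A small Python sketch (the [bottom, top, end) interpretation follows the layout in the example above):

```python
def used_percent(bottom, top, end):
    """In each space line of the heap summary, the bracketed addresses
    are [bottom, top, end): bottom..top is in use and bottom..end is the
    whole space. The printed percentage is (top-bottom)/(end-bottom)."""
    return 100 * (top - bottom) // (end - bottom)

# The eden line from the example above:
#   eden space 512K,  31% used [0x46570000, 0x46598768, 0x465f0000)
eden = used_percent(0x46570000, 0x46598768, 0x465f0000)
```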
The next information in the log is a list of virtual memory regions at the time of the crash. This list can be long if the application is large. The memory map can be very useful when debugging some crashes, because it can tell you which libraries are actually being used, their location in memory, as well as the location of the heap, stack, and guard pages.
On the Oracle Solaris and Linux operating systems, the process memory map (in the format of /proc/pid/maps) is printed. On the Windows system, the base and end addresses of each library are printed. The following example shows the output generated on Linux/x86.
Note: Most of the lines were omitted from the example for the sake of brevity.
Dynamic libraries:
00400000-00401000 r-xp 00000000 00:47 1374716350  /export/java_re/jdk/9/ea/167/binaries/linux-x64/bin/java
00601000-00602000 rw-p 00001000 00:47 1374716350  /export/java_re/jdk/9/ea/167/binaries/linux-x64/bin/java
016c6000-016e7000 rw-p 00000000 00:00 0           [heap]
82000000-102000000 rw-p 00000000 00:00 0
102000000-800000000 ---p 00000000 00:00 0
40014000-40015000 r--p 00000000 00:00 0
Lines omitted.
7f0f159f8000-7f0f159f9000 r-xp 00000000 08:11 116808980  /export/users/dh198349/tests/hs-err/libMyApp.so
7f0f159f9000-7f0f15bf8000 ---p 00001000 08:11 116808980  /export/users/dh198349/tests/hs-err/libMyApp.so
7f0f15bf8000-7f0f15bf9000 r--p 00000000 08:11 116808980  /export/users/dh198349/tests/hs-err/libMyApp.so
7f0f15bf9000-7f0f15bfa000 rw-p 00001000 08:11 116808980  /export/users/dh198349/tests/hs-err/libMyApp.so
Lines omitted.
7f0f15dfc000-7f0f15e00000 ---p 00000000 00:00 0
7f0f15e00000-7f0f15efd000 rw-p 00000000 00:00 0
7f0f15efd000-7f0f15f13000 r-xp 00000000 00:47 1374714565  /export/java_re/jdk/9/ea/167/binaries/linux-x64/lib/libnet.so
7f0f15f13000-7f0f16113000 ---p 00016000 00:47 1374714565  /export/java_re/jdk/9/ea/167/binaries/linux-x64/lib/libnet.so
7f0f16113000-7f0f16114000 rw-p 00016000 00:47 1374714565  /export/java_re/jdk/9/ea/167/binaries/linux-x64/lib/libnet.so
7f0f16114000-7f0f16124000 r-xp 00000000 00:47 1374714619  /export/java_re/jdk/9/ea/167/binaries/linux-x64/lib/libnio.so
Lines omitted.
7f0f17032000-7f0f17036000 ---p 00000000 00:00 0
7f0f17036000-7f0f17133000 rw-p 00000000 00:00 0
7f0f17133000-7f0f173fc000 r--p 00000000 08:02 2102853  /usr/lib/locale/locale-archive
7f0f173fc000-7f0f17400000 ---p 00000000 00:00 0
Lines omitted.
The following is a format of memory map in the error log.
40049000-4035c000 r-xp 00000000 03:05 824473 /jdk1.5/jre/lib/i386/client/libjvm.so
|<------------->| ^    ^        ^     ^      ^
        |         |    |        |     |      +-- File name
        |         |    |        |     +-- inode number
        |         |    |        +-- Major ID and minor ID of the device
        |         |    |            where the file is located (i.e. /dev/hda5)
        |         |    +-- File offset
        |         +-- Permission
        |               r: read
        |               w: write
        |               x: execute
        |               p: private
        |               s: share
        +-- Memory region
In the memory map output, each library has two virtual memory regions: one for code and one for data. The permission for the code segment is marked r-xp (readable, executable, private), and the permission for the data segment is rw-p (readable, writable, private).
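Each line of the memory map can be split into the fields labeled in the format diagram. The following Python sketch (an illustrative helper, not a JDK tool) parses one /proc/pid/maps-style line:

```python
def parse_maps_line(line):
    """Split one /proc/<pid>/maps-style line into its fields.
    Pathless (anonymous) regions leave 'path' empty."""
    parts = line.split(None, 5)
    start, end = (int(x, 16) for x in parts[0].split("-"))
    return {
        "start": start,
        "end": end,
        "perms": parts[1],                 # e.g. r-xp
        "offset": int(parts[2], 16),       # file offset
        "device": parts[3],                # major:minor
        "inode": int(parts[4]),
        "path": parts[5].strip() if len(parts) > 5 else "",
    }
```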
The Java heap is already included in the heap summary earlier in the output, but it can be useful to verify that the actual memory regions reserved for the heap match the values in the heap summary and that the attributes are set to rwxp.
Thread stacks usually show up in the memory map as two back-to-back regions, one with permission
---p (guard page) and one with permission
rwxp (actual stack space). In addition, it is useful to know the guard page size or stack size. For example, in this memory map, the stack is located from 4127b000 to 412fb000.
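The stack size in that example follows from simple address arithmetic:

```python
# Worked example: the stack region above runs from 0x4127b000 to 0x412fb000,
# so its size is just the difference of the two addresses.
stack_bytes = 0x412fb000 - 0x4127b000   # 0x80000 bytes
stack_kb = stack_bytes // 1024          # 512 KB of stack (including guard pages)
```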
On a Windows system, the memory map output is the load and end address of each loaded module, as shown in the following example.
Dynamic libraries:
0x00400000 - 0x0040c000  c:\jdk6\bin\java.exe
0x77f50000 - 0x77ff7000  C:\WINDOWS\System32\ntdll.dll
0x77e60000 - 0x77f46000  C:\WINDOWS\system32\kernel32.dll
0x77dd0000 - 0x77e5d000  C:\WINDOWS\system32\ADVAPI32.dll
0x78000000 - 0x78087000  C:\WINDOWS\system32\RPCRT4.dll
0x77c10000 - 0x77c63000  C:\WINDOWS\system32\MSVCRT.dll
0x08000000 - 0x08183000  c:\jdk6\jre\bin\client\jvm.dll
0x77d40000 - 0x77dcc000  C:\WINDOWS\system32\USER32.dll
0x7e090000 - 0x7e0d1000  C:\WINDOWS\system32\GDI32.dll
0x76b40000 - 0x76b6c000  C:\WINDOWS\System32\WINMM.dll
0x6d2f0000 - 0x6d2f8000  c:\jdk6\jre\bin\hpi.dll
0x76bf0000 - 0x76bfb000  C:\WINDOWS\System32\PSAPI.DLL
0x6d680000 - 0x6d68c000  c:\jdk6\jre\bin\verify.dll
0x6d370000 - 0x6d38d000  c:\jdk6\jre\bin\java.dll
0x6d6a0000 - 0x6d6af000  c:\jdk6\jre\bin\zip.dll
0x10000000 - 0x10032000  C:\bugs\crash2\App.dll
VM Arguments and Environment Variables
The next information in the error log is a list of VM arguments, followed by a list of environment variables, as shown in the following example.
VM Arguments:
jvm_args:
java_command: MyApp
java_class_path (initial): .
Launcher Type: SUN_STANDARD

Logging:
Log output configuration:
 #0: stdout all=warning uptime,level,tags
 #1: stderr all=off uptime,level,tags

Environment Variables:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
SHELL=/bin/bash
DISPLAY=localhost:10.0
ARCH=i386
The list of environment variables is not the full list but rather a subset of the environment variables that are applicable to the Java VM.
On the Oracle Solaris and Linux operating systems, the next information in the error log is the list of signal handlers, as shown in the following example.
Signal Handlers:
SIGSEGV: [libjvm.so+0xd48840], sa_mask=11111111011111111101111111111110, sa_flags=SA_RESTART|SA_SIGINFO
SIGBUS: [libjvm.so+0xd48840], sa_mask=11111111011111111101111111111110, sa_flags=SA_RESTART|SA_SIGINFO
SIGFPE: [libjvm.so+0xd48840], sa_mask=11111111011111111101111111111110, sa_flags=SA_RESTART|SA_SIGINFO
SIGPIPE: [libjvm.so+0xb60080], sa_mask=11111111011111111101111111111110, sa_flags=SA_RESTART|SA_SIGINFO
SIGXFSZ: [libjvm.so+0xb60080], sa_mask=11111111011111111101111111111110, sa_flags=SA_RESTART|SA_SIGINFO
SIGILL: [libjvm.so+0xd48840], sa_mask=11111111011111111101111111111110, sa_flags=SA_RESTART|SA_SIGINFO
SIGUSR2: [libjvm.so+0xb5ff40], sa_mask=00000000000000000000000000000000, sa_flags=SA_RESTART|SA_SIGINFO
SIGHUP: [libjvm.so+0xb60150], sa_mask=11111111011111111101111111111110, sa_flags=SA_RESTART|SA_SIGINFO
SIGINT: [libjvm.so+0xb60150], sa_mask=11111111011111111101111111111110, sa_flags=SA_RESTART|SA_SIGINFO
SIGTERM: [libjvm.so+0xb60150], sa_mask=11111111011111111101111111111110, sa_flags=SA_RESTART|SA_SIGINFO
SIGQUIT: [libjvm.so+0xb60150], sa_mask=11111111011111111101111111111110, sa_flags=SA_RESTART|SA_SIGINFO
The final section in the error log is the system information. The output is operating-system-specific but in general includes the operating system version, CPU information, and summary information about the memory configuration.
The following example shows output on a Linux operating system.
--------------- S Y S T E M ---------------

OS:DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=12.04
DISTRIB_CODENAME=precise
DISTRIB_DESCRIPTION="Ubuntu 12.04 LTS"

uname:Linux 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_64
libc:glibc 2.15 NPTL 2.15
rlimit: STACK 8192k, CORE infinity, NPROC 1160369, NOFILE 4096, AS infinity
load average:0.46 0.33 0.27

/proc/meminfo:
MemTotal:       148545440 kB
MemFree:          1020964 kB
Buffers:         29600728 kB
Cached:          86607768 kB
SwapCached:         16112 kB
Active:          52272944 kB
Inactive:        64862992 kB
Active(anon):      314080 kB
Inactive(anon):    616296 kB
Active(file):    51958864 kB
Inactive(file):  64246696 kB
Unevictable:           16 kB
Mlocked:               16 kB
SwapTotal:        1051644 kB
SwapFree:          976092 kB
Dirty:                 40 kB
Writeback:              0 kB
AnonPages:         912404 kB
Mapped:             95804 kB
Shmem:               2936 kB
Slab:            28625980 kB
SReclaimable:    28337400 kB
SUnreclaim:        288580 kB
KernelStack:         6040 kB
PageTables:         42524 kB
NFS_Unstable:           0 kB
Bounce:                 0 kB
WritebackTmp:           0 kB
CommitLimit:     75324364 kB
Committed_AS:     6172612 kB
VmallocTotal:   34359738367 kB
VmallocUsed:       681668 kB
VmallocChunk:   34282379392 kB
HardwareCorrupted:      0 kB
AnonHugePages:          0 kB
HugePages_Total:        0
HugePages_Free:         0
HugePages_Rsvd:         0
HugePages_Surp:         0
Hugepagesize:        2048 kB
DirectMap4k:       171520 kB
DirectMap2M:      8208384 kB
DirectMap1G:    142606336 kB

CPU:total 24 (initial active 24) (6 cores per cpu, 2 threads per core) family 6 model 44 stepping 2, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, aes, clmul, ht, tsc, tscinvbit, tscinv

CPU Model and flags from /proc/cpuinfo:
model name : Intel(R) Xeon(R) CPU X5675 @ 3.07GHz
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt aes lahf_lm ida arat epb dts tpr_shadow vnmi flexpriority ept vpid
Memory: 4k page, physical 148545440k(1020964k free), swap 1051644k(976092k free)

vm_info: Java HotSpot(TM) 64-Bit Server VM (9-ea+167) for linux-amd64 JRE (9-ea+167), built on Apr 27 2017 00:28:45 by "javare" with gcc 4.9.2
On the Oracle Solaris and Linux operating systems, the operating system information is in the file
/etc/*release. This file describes the kind of system the application is running on, and in some cases, the information string might include the patch level. Some system upgrades are not reflected in the
/etc/*release file. This is especially true on the Linux system, where the user can rebuild any part of the system.
On the Oracle Solaris operating system, the
uname system call is used to get the name for the kernel. The thread library (T1 or T2) is also printed.
On the Linux system, the
uname system call is also used to get the kernel name. The
libc version and the thread library type are also printed, as shown in the following example.
uname:Linux 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_64
libc:glibc 2.15 NPTL 2.15
     |<- glibc version ->|<- pthread type ->|
On Linux, there are three possible thread types, namely linuxthreads (fixed stack), linuxthreads (floating stack), and NPTL. They are normally installed in /lib, /lib/i686, and /lib/tls, respectively.
It is useful to know the thread type. For example, if the crash appears to be related to pthread, then you might be able to work around the issue by selecting a different pthread library. A different pthread library (and libc) can be selected by setting the LD_LIBRARY_PATH or LD_ASSUME_KERNEL environment variables. The glibc version usually does not include the patch level. The command rpm -q glibc might provide more detailed version information.
Note: The default stack size of the VM is usually smaller than the system limit, as shown in the following example.
rlimit: STACK 8192k, CORE infinity, NPROC 1160369, NOFILE 4096, AS infinity
        |            |              |              |            +-- virtual memory (ulimit -v)
        |            |              |              +-- max open files (ulimit -n)
        |            |              +-- max user processes (ulimit -u)
        |            +-- core dump size (ulimit -c)
        +-- stack size (ulimit -s)

load average:0.04 0.05 0.02
The next information specifies the CPU architecture and capabilities identified by the VM at startup, as shown in the following example.
CPU:total 24 (initial active 24) (6 cores per cpu, 2 threads per core) family 6 model 44 stepping 2, cmov, cx8, fxsr, mmx
    |                                                                  |                             |<-- CPU features -->|
    |                                                                  +-- processor family (IA32 only):
    |                                                                        3 - i386
    |                                                                        4 - i486
    |                                                                        5 - Pentium
    |                                                                        6 - PentiumPro, PII, PIII
    |                                                                       15 - Pentium 4
    +-- Total number of CPUs
Table A-4 shows the possible CPU features on a SPARC system.
Table A-4 SPARC Features

CPU Feature   Description
v8            Supports v8 instructions.
v9            Supports v9 instructions.
vis1          Supports visualization instructions.
vis2          Supports visualization instructions.
no-muldiv     No hardware integer multiply and divide.
no-fsmuld     No multiply-add and multiply-subtract instructions.
Table A-5 shows the possible CPU features on an Intel/IA32 system.
Table A-5 Intel/IA32 Features

CPU Feature   Description
cmov          Supports cmov instruction.
cx8           Supports cmpxchg8b instruction.
fxsr          Supports fxsave and fxrstor.
sse           Supports SSE extensions.
sse2          Supports SSE2 extensions.
ht            Supports Hyper-Threading Technology.
Table A-6 shows the possible CPU features on an AMD64/EM64T system.
Table A-6 AMD64/EM64T Features

CPU Feature   Description
amd64         AMD Opteron, Athlon64, and so forth.
em64t         Intel EM64T processor.
3dnow         Supports 3DNow extension.
ht            Supports Hyper-Threading Technology.
The next information in the error log is memory information, as shown in the following example.
Memory: 4k page, physical 513604k(11228k free), swap 530104k(497504k free)
        |                 |       |                  |       +-- unused swap space
        |                 |       |                  +-- total amount of swap space
        |                 |       +-- unused physical memory
        |                 +-- total amount of physical memory
        +-- page size
Some systems require swap space to be at least twice the size of physical memory, whereas other systems do not have any requirements. As a general rule, if both physical memory and swap space are almost full, then there is good reason to suspect that the crash was due to insufficient memory.
On the Linux system, the kernel may convert most unused physical memory to file cache. When there is a need for more memory, the Linux kernel gives the cached memory back to the application. This is handled transparently by the kernel, but it means that the amount of unused physical memory reported by the fatal error handler could be close to zero when there is still sufficient physical memory available.
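As a rough screen for the insufficient-memory case described above, the Memory line can be checked mechanically. This Python sketch (the 5% threshold and the helper name are arbitrary illustrations, and the pattern assumes the layout shown earlier) flags logs where both physical memory and swap are nearly exhausted:

```python
import re

# Sketch: match "physical NNNk(NNNk free), swap NNNk(NNNk free)".
MEM_RE = re.compile(r"physical (\d+)k\((\d+)k free\), swap (\d+)k\((\d+)k free\)")

def looks_memory_starved(memory_line, threshold=0.05):
    """Return True when both physical memory and swap are below the
    given free-space fraction, the situation the text above flags as
    a likely out-of-memory crash."""
    phys_total, phys_free, swap_total, swap_free = map(
        int, MEM_RE.search(memory_line).groups())
    return (phys_free / phys_total < threshold
            and (swap_total == 0 or swap_free / swap_total < threshold))
```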
The final information in the SYSTEM section of the error log is
vm_info, which is a version string embedded in
libjvm.so/jvm.dll. Every Java VM has its own unique
vm_info string. If you are in doubt about whether the fatal error log was generated by a particular Java VM, check the version string.