
What's New in Oracle® Solaris 11.4


Updated: August 2018

Performance and Observability

This section describes the platform and performance enhancements that are new in this release. These features help optimize Oracle Solaris for SPARC and x86 based systems, increasing performance and providing better diagnostics for your systems.

DTrace SCSI Provider

The Oracle Solaris 11.4 release introduces a new DTrace SCSI provider that is designed to trace SCSI commands and task management functions that are issued by an Oracle Solaris system. The SCSI provider has the following benefits:

  • Enables you to trace SCSI commands on an Oracle Solaris system without knowing the internal structure

  • Includes probes and structures that follow the SCSI T10 standards as much as possible

  • Provides a counterpart to the DTrace I/O provider that traces I/O traffic at a different layer

  • Delivers a scsitrace script that consumes the new probes

The following example illustrates a one-line trace that identifies SCSI target resets:

# dtrace -n 'scsi:::tmf-request
    /(args[1] == SCSI_TMF_TARGET_RESET) &&
     (args[0]->addr_path != "NULL")/ {
        printf("Target Reset sent to %s", args[0]->addr_path);}'

For more information, see the scsi Provider in Oracle Solaris 11.4 DTrace (Dynamic Tracing) Guide.

DTrace fileops Provider

The fileops provider exposes a full set of standard UNIX file operation probes that are intended more for an Oracle Solaris administrator than for a developer. For example, the provider can display read or write latency information for all file systems, including pseudo file systems.

The fileops probes pertain to the file operations: open, close, read, write, and so on. These probes are neither specific to any file system type nor dependent on I/O to external storage devices. For example, the fileops:::read probe fires on any read from a file, regardless of whether the data comes from disk or is cached in memory.

You can use the read probe to observe read latencies on different file system types by aggregating on the file system name. For example, the following completed sketch quantizes a latency value per file system type (the latency argument index used here is an assumption; see the fileops provider chapter for the exact probe arguments):

    fileops:::read
    {
            /* key: file system type; quantized value: latency (argument assumed) */
            @[args[0]->fi_fs] = quantize(args[1]);
    }

The resulting output provides a graph of read counts and latencies across all file system types on the system.

For more information, see the fileops Provider in Oracle Solaris 11.4 DTrace (Dynamic Tracing) Guide.

DTrace MIB Provider for TCP, UDP, and IP

The Oracle Solaris 11.4 release extends the existing DTrace MIB provider for observing events in the networking stack with protocol information so that TCP, UDP, and IP connections can be identified.
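
The mib probes fire as the corresponding MIB counters are incremented. As a minimal sketch, the following one-liner counts TCP active-open events by probe name (tcpActiveOpens is a standard TCP MIB counter probe; the protocol arguments added in this release are described in the mib provider chapter):

    # dtrace -n 'mib:::tcpActiveOpens { @[probename] = count(); }'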

For more information, see the mib Provider in Oracle Solaris 11.4 DTrace (Dynamic Tracing) Guide.

DTrace pcap() Action

A new action, pcap(), is added to DTrace. The pcap() action does one of the following:

  • Display packet data as tracemem() does, but coalesced into a contiguous buffer.

  • If freopen() has specified a capture file, the pcap() action captures packet data to that file via the libpcap function pcap_dump(). DTrace does the following with the packet data:

    1. Collects packet data in probe context.

    2. Coalesces the packet data into a contiguous buffer if the data is not already in a contiguous buffer.

    3. Dumps the data to the specified capture file via the libpcap pcap_dump() function.

The following pcap() action dumps memory to stdout as tracemem() does:

pcap(mblk, protocol);

The following calls dump packet data to the capture file with a suffix that specifies the current pid:

freopen("/tmp/cap.%d", pid);
pcap(mblk, protocol);

This allows you to collate packet traces by process or service, for example. Because freopen() is classified as a destructive action, the above script must specify the –w ("destructive") dtrace option. The pcap() action itself is not destructive.
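
Because pcap_dump() writes standard libpcap-format files, a capture produced this way can be read with any pcap-aware tool. For example, assuming a capture file named as in the freopen() template above (the pid 1234 is illustrative):

    # tcpdump -r /tmp/cap.1234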

DTrace print() Action

DTrace has a new print() action to display arbitrary types, as shown in the following example:

# dtrace -q -n 'fop_close:entry {print(*args[0]);exit(0)}'

vnode_t {
 v_lock = {
   _opaque = [ NULL ]
 }
 v_flag = 0x0
 v_count = 0x1
 v_data = 0xffffc10054425378
 v_vfsp = specfs`spec_vfs
 v_stream = 0xffffc100623354e8
 v_type = VCHR
 v_rdev = 0xee00000026
 v_vfsmountedhere = NULL
 v_op = 0xffffc10029d98040
 v_pages = NULL
 v_filocks = NULL
 v_shrlocks = NULL
 v_nbllock = {
   _opaque = [ NULL ]
 }
 v_cv = {
   _opaque = 0x0
 }
 v_pad = 0xbadd
 v_count_dnlc = 0x0
 v_locality = NULL
 v_femhead = NULL
 v_path = "/devices/pseudo/udp@0:udp"
 v_rdcnt = 0x0
 v_wrcnt = 0x0
 v_mmap_read = 0x0
 v_mmap_write = 0x0
 v_mpssdata = NULL
 v_fopdata = NULL
 v_vsd_lock = {
   _opaque = [ NULL ]
 }
 v_vsd = NULL
 v_xattrdir = NULL
 v_fw = 0xbaddcafebaddcafe
}


For more information, see print Action in Oracle Solaris 11.4 DTrace (Dynamic Tracing) Guide.

Kstats v2 Framework

The kernel statistics (kstats) v2 framework provides better performance and a number of optimizations compared to the previous kstat framework. Notable new components include:

  • A kernel API that provides the functionality to create and manipulate v2 kstats. Kstats are identified by a unique URI and include metadata for both the kstat and the name-value pairs that the kstat contains. This API allows a kstat to describe the values that it reports.

  • The libkstat2 library, which provides access to v2 kstats created in the kernel. Kstats are looked up by their unique URI and are presented as hashmaps. Developers can subscribe to events at a particular kstat URI and are notified when kstats are added or removed below that point in the URI tree.

  • The /usr/bin/kstat2 utility, which provides command-line access to kstats. This new utility examines the available kstats on the system and reports the statistics that match the criteria specified on the command line. Each matching statistic is printed with its URI and its current value. A number of output formats are supported, including human-readable, parsable, and JavaScript Object Notation (JSON) formats.
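
As a sketch, a kstat2 query might look like the following (the URI shown is illustrative; see the kstat2(8) man page for the actual URI syntax and output options):

    # kstat2 'kstat:/system/cpu/0/sys'

The utility prints the matching kstat's URI followed by its name-value pairs in the selected output format.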

For information about the kernel API, see the kstat2_create(9F), kstat2_create_with_template(9F), and kstat2_create_histogram(9F) man pages. For information about the libkstat2 library, see the libkstat2(3LIB) and kstat2(3KSTAT2) man pages. For information about the kstat2 utility, see the kstat2(8) man page.

FMA Core File Diagnostics

Oracle Solaris 11.4 includes the core file diagnostics feature, which provides a summary of basic telemetry from userland core files, raises FMA alerts to notify the user, and provides a diagnostic core retention policy and SMF case association.

The diagnostic core files contain only the necessary content, which keeps them small. The core files are deleted after the text summary files are generated, thereby reducing disk space usage. Together with other new features such as stack diagnosis, FMA can match the stacks in a summary file against the Oracle database of known issues. The retention policy enables the user to set the diagnostic core policy through the coreadm command. The coreadm command can also delete cores immediately or keep a certain number of cores for a certain time. The case association feature is for the sw-diag-response diagnosis engine: all core diagnosis alerts that lead to a software service failure can be viewed together, along with each event's set of stack and environment data.

The user now has more control over diagnostic cores. When a software service does not run properly and is taken out of service, the administrator can easily and quickly view all the events that led to the service failure and be better informed about the processes that failed and where each process failed in its code execution.

For more information, see the coreadm(8) man page.

pfiles Enhancements

In Oracle Solaris 11.4, the pfiles command accepts a core file name as an argument and can display information about the file descriptors that were open in a process when it dumped core. This functionality helps in debugging the core dump to find the root cause.
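
For example, assuming a core file named core in the current directory, the following displays the file descriptors that were open when the process dumped core:

    # pfiles ./core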

Unlike previous Oracle Solaris releases, in Oracle Solaris 11.4 the pfiles command no longer stops a running target process while retrieving data on open files in that process.

For more information, see the proc(1) man page.

Monitor I/O Latency via fsstat

The fsstat command has a new –l option that reports latency information for read, write, and readdir operations. The latency information is independent of physical I/O operations and therefore represents file system performance as seen by applications. This feature enables you to observe file system latency for file system types or for individual file systems, which is useful when troubleshooting file system performance problems.
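
For example, the following sketch reports per-operation latency for the zfs and tmpfs file system types at 5-second intervals (the operands follow the usual fsstat syntax; see the fsstat(8) man page for the exact output columns):

    # fsstat -l zfs tmpfs 5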

For more information, see the fsstat(8) man page.

SCSI I/O Response Time Distribution Statistics

Oracle Solaris 11.4 now provides SCSI I/O response time (I/O latency) distribution information for better observability. The response time distribution can be used to identify outliers. The distribution is stored in a histogram with three different x-scale options: linear, log2-based, and log10-based, and can be displayed using the iostat command. The new –L option, used in conjunction with the –x and –Y options, shows the histogram. This distribution information can be used to investigate performance issues.
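
For example, the following sketch displays extended device statistics together with the response time distribution histograms at 5-second intervals (the option combination follows the description above; see the iostat(8) man page for the exact semantics):

    # iostat -xL 5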

For more information, see the sd(4D) and iostat(8) man pages.