Oracle Solaris Dynamic Tracing Guide     Oracle Solaris 11 Information Library


cpc Provider

The cpc provider makes available probes associated with CPU performance counter events. A probe fires when a specified number of events of a given type have occurred in a chosen processor mode. When a probe fires, aspects of system state can be sampled and inferences drawn about system behavior. Accurate inferences require a sufficiently high sampling rate, a sufficiently long sampling time, or both.

Probes

Probes made available by the cpc provider have the form cpc:::<event name>-<mode>-<optional mask>-<count>. The components of the probe name are defined in the following table.

Table 11-6 Probe Name Components

event name
    The platform-specific or generic event name. A full list of events can be obtained with the -h option to cpustat(1M).

mode
    The privilege mode in which to count events. Valid modes are "user" for user-mode events, "kernel" for kernel-mode events, and "all" for both user-mode and kernel-mode events.

optional mask
    On some platforms, a mask (commonly referred to as a unit mask or an event mask) can be specified to further refine a platform-specific event specification. This field is optional, may only be specified for platform-specific events, and is given as a hexadecimal value.

count
    The number of events that must occur on a CPU for the probe to fire on that CPU.
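Putting these components together, the following sketch shows two hypothetical enablings that illustrate the naming scheme. The event names are borrowed from examples later in this section; the events actually available on a given system should be verified with the -h option to cpustat(1M).

/*
 * A generic event, counted in both user and kernel mode,
 * firing once per 10000 events:
 */
cpc:::PAPI_l2_dcm-all-10000
{
        @[execname] = count();
}

/*
 * A platform-specific event refined with a unit mask (0x7),
 * counted in kernel mode only:
 */
cpc:::BU_fill_req_missed_L2-kernel-0x7-10000
{
        @[func(arg0)] = count();
}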

The following introductory example fires a probe on a CPU for every 10000 user-mode Level 1 instruction cache misses on a SPARC platform. When the probe fires, the script records the name of the executable that was on the CPU at that time (see the Examples section for further examples):

#!/usr/sbin/dtrace -s

#pragma D option quiet

cpc:::IC_miss-user-10000
{
        @[execname] = count();
}

END
{
        trunc(@, 10);
}
# ./user-l1miss.d 
^C

  dirname                                                           8
  firefox                                                           8
  sed                                                              11
  intrd                                                            12
  run-mozilla.sh                                                   13
  java                                                             64
  sshd                                                            135
  gconfd-2                                                        569
  thunderbird-bin                                                1666
  firefox-bin                                                    2060

Note - When working with the cpc provider, remember that the state available when a probe fires is valid for the performance counter event that caused the probe to fire, not for all events counted toward that firing. In the output above, the firefox-bin application caused the cpc:::IC_miss-user-10000 probe to fire 2060 times. Because this probe fires once for every 10000 Level 1 instruction cache misses on a CPU, the firefox-bin application could have contributed anywhere from 2060 to 20,600,000 of those misses.


Arguments

The arguments to cpc probes are listed in the table below.

Table 11-7 Probe Arguments

arg0
    The program counter (PC) in the kernel at the time the probe fired, or 0 if the current process was not executing in the kernel at that time.

arg1
    The PC in the user-level process at the time the probe fired, or 0 if the current process was executing in the kernel at that time.

As the descriptions imply, if arg0 is non-zero then arg1 is zero; if arg0 is zero then arg1 is non-zero.
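Because exactly one of the two arguments is non-zero, a single all-mode enabling can attribute each sample to kernel or user code by predicating on them. The following sketch, which assumes the generic PAPI_l2_dcm event is available on the platform, keeps a separate aggregation for each case:

cpc:::PAPI_l2_dcm-all-10000
/arg0 != 0/
{
        /* The CPU was executing in the kernel; arg0 is a kernel PC. */
        @kernel[func(arg0)] = count();
}

cpc:::PAPI_l2_dcm-all-10000
/arg1 != 0/
{
        /* The CPU was executing user code; arg1 is a user-level PC. */
        @user[ufunc(arg1)] = count();
}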

Probe Availability

CPU performance counters are a finite resource, and the number of probes that can be enabled depends upon hardware capabilities. Processors that cannot determine which counter has overflowed when multiple counters are programmed (for example, AMD and UltraSPARC) permit only a single enabling at any one time. On such platforms, an attempt to enable more than one probe fails, as does an attempt to enable a probe when a disparate enabling already exists. Processors that can detect which counter has overflowed (for example, Niagara 2 and Intel P4) permit as many enabled probes as the hardware allows: at most, the number of counters available on the processor. On such configurations, multiple probes can be enabled at any one time.

Probes are enabled by consumers on a first-come, first-served basis. When hardware resources are fully utilized, subsequent enablings fail until resources become available.

Probe Creation

Like the profile provider, the cpc provider creates probes dynamically on an as-needed basis. Thus, a desired cpc probe might not appear in a listing of all probes (for example, with dtrace -l -P cpc), but the probe is created when it is explicitly enabled.

Specifying a small event overflow count for frequently occurring events (for example, cycle count or instructions executed) would quickly render the system unusable, because a processor would continuously service performance counter overflow interrupts. To prevent this situation, the smallest overflow count that can be specified for any probe is set, by default, to 5000. This minimum can be altered by adjusting the dcpc-min-overflow variable in the /kernel/drv/dcpc.conf configuration file and then unloading and reloading the dcpc driver.
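The exact contents of dcpc.conf vary between systems, so the following is only a hypothetical sketch of raising the minimum overflow count to 25000. The property name dcpc-min-overflow comes from the text above; any other lines in the shipped configuration file should be left intact:

# Excerpt from /kernel/drv/dcpc.conf (sketch):
dcpc-min-overflow=25000;

Then unload and reload the dcpc driver for the new minimum to take effect.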


Note - Take care when specifying high-frequency events such as instructions executed or cycle count. For example, measuring busy cycles on a fully utilized 3GHz processor with a count of 50000 would generate approximately 60,000 interrupts/sec. This rate of interrupt delivery could degrade system performance to some degree.


Co-existence With Existing Tools

The cpc provider has priority over per-LWP libcpc usage (that is, cputrack) for access to the counters. As with cpustat, enabling cpc probes invalidates all existing per-LWP counter contexts. As long as enabled probes remain active, the counters remain unavailable to cputrack-type consumers.

Only one of cpustat and DTrace may use the counter hardware at any one time. Ownership of the counters is given on a first-come, first-served basis.

Examples

Some simple examples of cpc provider usage follow.

user-insts.d

This simple script displays instructions executed by applications on an AMD platform:

cpc:::FR_retired_x86_instr_w_excp_intr-user-10000
{
        @[execname] = count();
}

# ./user-insts.d
dtrace: script './user-insts.d' matched 1 probe
^C
[chop]
  init                                                            138
  dtrace                                                          175
  nis_cachemgr                                                    179
  automountd                                                      183
  intrd                                                           235
  run-mozilla.sh                                                  306
  thunderbird                                                     316
  Xorg                                                            453
  thunderbird-bin                                                2370
  sshd                                                           8114

kern-cycles.d

The following example profiles the kernel by cycle usage on an AMD platform:

cpc:::BU_cpu_clk_unhalted-kernel-10000
{
        @[func(arg0)] = count();
}
 
# ./kern-cycles.d                                
dtrace: script './kern-cycles.d' matched 1 probe
^C
[chop]
  genunix`vpm_sync_pages                                       478948
  genunix`vpm_unmap_pages                                      496626
  genunix`vpm_map_pages                                        640785
  unix`mutex_delay_default                                     916703
  unix`hat_kpm_page2va                                         988880
  tmpfs`rdtmp                                                  991252
  unix`hat_page_setattr                                       1077717
  unix`page_try_reclaim_lock                                  1213379
  genunix`free_vpmap                                          1914810
  genunix`get_vpmap                                           2417896
  unix`page_lookup_create                                     3992197
  unix`mutex_enter                                            5595647
  unix`do_copy_fault_nta                                     27803554

brendan-l2miss.d

This example looks at user-mode L2 cache misses and the functions that generated them on an AMD platform. The predicate ensures that function names are sampled only when the probe was fired by the brendan executable:

cpc:::BU_fill_req_missed_L2-all-0x7-10000
/execname == "brendan"/
{
    @[ufunc(arg1)] = count();
}

# ./brendan-l2miss.d
dtrace: script './brendan-l2miss.d' matched 1 probe
CPU     ID                    FUNCTION:NAME
^C

  brendan`func_gamma                                               930
  brendan`func_beta                                               1578
  brendan`func_alpha                                              2945

brendan-generic-l2miss.d

Here we use the same example as above, but specify the much simpler generic event PAPI_l2_dcm, rather than the platform-specific event, to indicate interest in L2 data cache misses:

cpc:::PAPI_l2_dcm-all-10000
/execname == "brendan"/
{
        @[ufunc(arg1)] = count();
}

# ./brendan-generic-l2miss.d
dtrace: script './brendan-generic-l2miss.d' matched 1 probe
^C

  brendan`func_gamma                                              1681
  brendan`func_beta                                               2521
  brendan`func_alpha                                              5068

Stability

The cpc provider uses DTrace's stability mechanism to describe its stabilities, as shown in the following table. For more information about the stability mechanism, see Chapter 18, Stability.

Element      Name Stability    Data Stability    Dependency Class
Provider     Evolving          Evolving          Common
Module       Private           Private           Unknown
Function     Private           Private           Unknown
Name         Evolving          Evolving          CPU
Arguments    Evolving          Evolving          Common