
Oracle® Solaris 11.4 DTrace (Dynamic Tracing) Guide


Updated: September 2020
 
 

cpc Provider

The cpc provider makes available probes associated with CPU performance counter events. A probe fires when a specified number of events of a given type have occurred in a chosen processor mode. When a probe fires, you can sample aspects of system state and draw inferences about system behavior. Accurate inferences require high sampling rates or long sampling times.

cpc Probes

Probes made available by the cpc provider have the following format:

cpc:::event name-mode[-attributes]-count

The format of attributes is:

attr1_val1[-attr2_val2...]|value

The components of a cpc probe name have the following meanings:

event name

The platform-specific or generic event name. A full list of events can be obtained by using the -h option of the cpustat command.

mode

The privilege mode in which to count events. Valid modes are "user" for user-mode events, "kernel" for kernel-mode events, and "all" for both user-mode and kernel-mode events.

attributes

This optional component accepts one or more event attributes. On some platforms, event attributes can be specified to further refine a platform-specific event specification. Attributes can be specified only for platform-specific events.

The attributes are specified as name-value pairs or as a single value, in the following format:

attr1_val1[-attr2_val2...]|value
  • The attributes (attr1, attr2, and so on) are the string names of the platform-specific attributes.

  • The values (val1, val2, and so on) are the hex values for the corresponding attributes.


Note - 
  • If only a value without attribute name is specified, it is interpreted as a mask value. The mask value is commonly referred to as a unit mask or event mask.

  • Available attribute names can be obtained by using the -h option of the cpustat command.

  • The nouser and sys attributes are not accepted; specify these privilege modes in the mode component instead.


count

The number of events that must occur on a CPU for the probe to fire on that CPU.

A sample usage of the cpc provider is as follows:

cpc:::BU_fill_req_missed_L2-all-umask_0x7-cmask_0x0-10000

In this example, the parameters are set to the following values:

  • Event name is set to BU_fill_req_missed_L2

  • Mode is set to all

  • Attributes are set to umask = 0x7 and cmask = 0x0

  • Count is set to 10000

The following introductory example fires a probe on a CPU for every 10000 user-mode level 1 instruction cache misses on a SPARC platform. When the probe fires, the script records the name of the executable that was on the processor at that time.

#!/usr/sbin/dtrace -s

#pragma D option quiet

cpc:::IC_miss-user-10000
{
        @[execname] = count();
}

END
{
        trunc(@, 10);
}
# ./user-l1miss.d 
^C

  dirname                                                           8
  firefox                                                           8
  sed                                                              11
  intrd                                                            12
  run-mozilla.sh                                                   13
  java                                                             64
  sshd                                                            135
  gconfd-2                                                        569
  thunderbird-bin                                                1666
  firefox-bin                                                    2060

Note -  When working with the cpc provider, note that the state available when a probe fires is valid for the performance counter event that caused the probe to fire and not for all events counted with that probe. The preceding output shows that the firefox-bin application caused the cpc:::IC_miss-user-10000 probe to fire 2060 times. As this probe fires once for every 10000 level 1 instruction cache misses on a CPU, the firefox-bin application could have contributed anywhere from 2060 to 20600000 of these misses.

For more examples, see Using the cpc Provider.

cpc Probe Arguments

The arguments to cpc probes are the following:

arg0

The program counter (PC) in the kernel at the time that the probe fired, or 0 if the current process was not executing in the kernel at the time that the probe fired

arg1

The PC in the user-level process at the time that the probe fired, or 0 if the current process was executing in the kernel at the time that the probe fired

As the descriptions imply, if arg0 is non-zero then arg1 is zero; if arg0 is zero then arg1 is non-zero.
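
As an illustration, the following sketch splits samples between kernel and user code, attributing each firing with func() or ufunc() depending on which argument is non-zero. It assumes the generic PAPI_tot_ins event (instructions completed) is available on your platform; substitute an event name reported by cpustat -h if it is not.

#!/usr/sbin/dtrace -s

#pragma D option quiet

/* Fires once per 100000 completed instructions in either mode. */
cpc:::PAPI_tot_ins-all-100000
/arg0 != 0/
{
        /* arg0 is non-zero: the CPU was executing in the kernel. */
        @kernel[func(arg0)] = count();
}

cpc:::PAPI_tot_ins-all-100000
/arg1 != 0/
{
        /* arg1 is non-zero: the CPU was executing user code. */
        @user[ufunc(arg1)] = count();
}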

Probe Availability and CPU Counters

CPU performance counters are a finite resource, and the number of probes that can be enabled depends upon hardware capabilities. Processors that cannot determine which counter has overflowed when multiple counters are programmed, such as AMD and UltraSPARC, are allowed only a single enabling at any one time. On such platforms, consumers attempting to enable more than one probe will fail, as will consumers attempting to enable a probe when a disparate enabling already exists. Processors that can detect which counter has overflowed, such as Niagara2 and Intel P4, are allowed as many enabled probes as the hardware supports: at most, the number of counters available on the processor. On such configurations, multiple probes can be enabled at any one time.

Probes are enabled by consumers on a first-come, first-served basis. When hardware resources are fully utilized, subsequent enablings fail until resources become available.

cpc Probe Creation

Like the profile provider, the cpc provider creates probes dynamically on an as-needed basis. Thus, the desired cpc probe might not appear in a listing of all probes, but the probe is created when it is explicitly enabled. Use dtrace -l -P cpc to list all cpc probes.

Specifying a small event overflow count for frequently occurring events, such as cycle count and instructions executed, renders the system unusable, as the processor would continuously service performance counter overflow interrupts. To prevent this situation, the smallest overflow count that can be specified for any probe is, by default, 5000. This value can be altered by adjusting the dcpc-min-overflow variable in the /kernel/drv/dcpc.conf configuration file and then unloading and reloading the dcpc driver.
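
For example, to lower the minimum overflow count to 1000, the dcpc-min-overflow entry in /kernel/drv/dcpc.conf might look like the following sketch; the name=value; form follows standard driver.conf syntax, and the driver must be unloaded and reloaded for the change to take effect.

# /kernel/drv/dcpc.conf
# Minimum overflow count accepted for any cpc probe (default 5000).
dcpc-min-overflow=1000;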


Note -  Use caution when specifying high-frequency events such as instructions executed or cycle count. For example, measuring busy cycles on a fully utilized 3GHz processor with a count of 50000 would generate approximately 65000 interrupts/sec. This rate of interrupt delivery could degrade system performance to some degree.

cpc Probe and Existing Tools

The cpc provider has priority over per-LWP libcpc usage, that is, cputrack, for access to the counters. As with cpustat, enabling cpc probes invalidates all existing per-LWP counter contexts. As long as enabled probes remain active, the counters remain unavailable to cputrack-type consumers.

Only one of cpustat and DTrace may use the counter hardware at any one time. Ownership of the counters is given on a first-come, first-served basis.

Using the cpc Provider

Examples of cpc provider usage follow.

Example 19  Showing Application Instructions on an AMD Platform

The following script displays the instructions executed by applications on an AMD platform.

cpc:::FR_retired_x86_instr_w_excp_intr-user-10000
{
        @[execname] = count();
}

# ./user-insts.d
dtrace: script './user-insts.d' matched 1 probe
^C
[chop]
  init                                                            138
  dtrace                                                          175
  nis_cachemgr                                                    179
  automountd                                                      183
  intrd                                                           235
  run-mozilla.sh                                                  306
  thunderbird                                                     316
  Xorg                                                            453
  thunderbird-bin                                                2370
  sshd                                                           8114
Example 20  Showing Kernel Cycle Usage on an AMD Platform

The following example shows a kernel profiled by cycle usage on an AMD platform.

cpc:::BU_cpu_clk_unhalted-kernel-10000
{
        @[func(arg0)] = count();
}
 
# ./kern-cycles.d                                
dtrace: script './kern-cycles.d' matched 1 probe
^C
[chop]
  genunix`vpm_sync_pages                                       478948
  genunix`vpm_unmap_pages                                      496626
  genunix`vpm_map_pages                                        640785
  unix`mutex_delay_default                                     916703
  unix`hat_kpm_page2va                                         988880
  tmpfs`rdtmp                                                  991252
  unix`hat_page_setattr                                       1077717
  unix`page_try_reclaim_lock                                  1213379
  genunix`free_vpmap                                          1914810
  genunix`get_vpmap                                           2417896
  unix`page_lookup_create                                     3992197
  unix`mutex_enter                                            5595647
  unix`do_copy_fault_nta                                     27803554
Example 21  Describing User-Mode Cache Misses on an AMD Platform

This example describes user-mode L2 cache misses on an AMD platform and the functions that generated them. The predicate ensures that function names are sampled only when the probe is fired by the brendan executable.

cpc:::BU_fill_req_missed_L2-all-0x7-10000
/execname == "brendan"/
{
	@[ufunc(arg1)] = count();
}

# ./brendan-l2miss.d
dtrace: script './brendan-l2miss.d' matched 1 probe
CPU     ID                    FUNCTION:NAME
^C

  brendan`func_gamma                                               930
  brendan`func_beta                                               1578
  brendan`func_alpha                                              2945

The following probe name format, which names the attribute explicitly, produces the same result:

cpc:::BU_fill_req_missed_L2-all-umask_0x7-10000
/execname == "brendan"/
{
        @[ufunc(arg1)] = count();
}
Example 22  Describing a Generic Event on an AMD Platform

This example uses the generic event PAPI_l2_dcm, rather than a platform-specific event, to indicate interest in L2 data cache misses.

cpc:::PAPI_l2_dcm-all-10000
/execname == "brendan"/
{
        @[ufunc(arg1)] = count();
}

# ./brendan-generic-l2miss.d
dtrace: script './brendan-generic-l2miss.d' matched 1 probe
^C

  brendan`func_gamma                                              1681
  brendan`func_beta                                               2521
  brendan`func_alpha                                              5068
Example 23  Probing Offcore Events on an Intel Platform

The following example probes an offcore event on an Intel platform:

cpc:::off_core_response_0-all-msr_offcore_0x3001-10000
{
        @[execname] = count();
}

# ./off_core_event.d
dtrace: script './off_core_event.d' matched 1 probe
^C

fmd                                                         3
fsflush                                                    36
sched                                                     175
Example 24  Showing the Use of Multiple Attributes

Specify multiple attributes by separating them with a minus sign (-).

The following example sets two attributes to probe an L2 miss event on an AMD platform.

cpc:::BU_fill_req_missed_L2-all-umask_0x7-cmask_0x0-10000
{
        @[execname] = count();
}

# ./l2miss.d
dtrace: script './l2miss.d' matched 1 probe


automountd                                                   1
dtrace                                                       1
fmd                                                          1
in.routed                                                    1
netcfgd                                                      1
nscd                                                         1
sendmail                                                     1
utmpd                                                        1
kcfd                                                         2
syslogd                                                      2
uname                                                        2
file                                                         3
ls                                                           3
sshd                                                         4
zfs                                                          9
bash                                                        10
ksh93                                                       10
ssh                                                         22
fsflush                                                     34
sched                                                       68
beadm                                                      146

cpc Stability

The cpc provider uses the stability mechanism of DTrace to describe its stabilities as shown in the following table. For more information about the stability mechanism, see DTrace Stability Mechanisms.

Table 20  Stability Mechanism for the cpc Provider

Element      Name Stability    Data Stability    Dependency Class
Provider     Evolving          Evolving          Common
Module       Private           Private          Unknown
Function     Private           Private          Unknown
Name         Evolving          Evolving          CPU
Arguments    Evolving          Evolving          Common