Using the fbt Provider
You can use fbt to easily explore the kernel's implementation. The following example script records the first ioctl from any xclock process and then follows the subsequent code path through the kernel.
/*
 * To make the output more readable, indent every function entry
 * and unindent every function return. This is done by setting the
 * "flowindent" option.
 */
#pragma D option flowindent

syscall::ioctl:entry
/execname == "xclock" && guard++ == 0/
{
	self->traceme = 1;
	printf("fd: %d", arg0);
}

fbt:::
/self->traceme/
{}

syscall::ioctl:return
/self->traceme/
{
	self->traceme = 0;
	exit(0);
}
Running this script results in output similar to the following example:
# dtrace -s ./xioctl.d
dtrace: script './xioctl.d' matched 26254 probes
CPU FUNCTION
  0  => ioctl                                  fd: 3
  0    -> ioctl
  0      -> getf
  0        -> set_active_fd
  0        <- set_active_fd
  0      <- getf
  0      -> fop_ioctl
  0        -> sock_ioctl
  0          -> strioctl
  0            -> job_control_type
  0            <- job_control_type
  0            -> strcopyout
  0              -> copyout
  0              <- copyout
  0            <- strcopyout
  0          <- strioctl
  0        <- sock_ioctl
  0      <- fop_ioctl
  0      -> releasef
  0        -> clear_active_fd
  0        <- clear_active_fd
  0        -> cv_broadcast
  0        <- cv_broadcast
  0      <- releasef
  0    <- ioctl
  0  <= ioctl
The output shows that an xclock process called ioctl on a file descriptor that appears to be associated with a socket.
You can also use fbt when trying to understand kernel drivers. For example, the ssd driver has many code paths by which EIO may be returned. You can use fbt to easily determine the precise code path that resulted in an error condition, as shown in the following example:
fbt:ssd::return
/arg1 == EIO/
{
	printf("%s+%x returned EIO.", probefunc, arg0);
}
For more information about any one return of EIO, you might want to speculatively trace all fbt probes, and then commit or discard based on the return value of a specific function. For more information about speculative tracing, see Speculative Tracing in DTrace.
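As a sketch of that approach, the following script speculatively traces every fbt probe fired during a call into the ssd driver and commits the speculation only when the function returns EIO, discarding it otherwise. Note that ssd_start is an illustrative function name chosen for this sketch, not necessarily a real entry point in the driver:

```d
fbt:ssd:ssd_start:entry
{
	/* Begin a new speculation for this call. */
	self->spec = speculation();
	speculate(self->spec);
	printf("entered %s", probefunc);
}

fbt:::
/self->spec/
{
	/* Record every subsequent fbt probe into the speculative buffer. */
	speculate(self->spec);
}

fbt:ssd:ssd_start:return
/self->spec && arg1 == EIO/
{
	/* The call failed with EIO: keep the speculative data. */
	commit(self->spec);
	self->spec = 0;
}

fbt:ssd:ssd_start:return
/self->spec/
{
	/* Any other return value: throw the speculative data away. */
	discard(self->spec);
	self->spec = 0;
}
```

Because the clauses run in program order, the commit clause fires first on an EIO return and clears self->spec, so the discard clause only runs for other return values.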
Alternatively, you can use fbt to understand the functions called within a specified module. The following example lists all of the functions called in UFS:
# dtrace -n fbt:ufs::entry'{@a[probefunc] = count()}'
dtrace: description 'fbt:ufs::entry' matched 353 probes
^C
ufs_ioctl 1
ufs_statvfs 1
ufs_readlink 1
ufs_trans_touch 1
wrip 1
ufs_dirlook 1
bmap_write 1
ufs_fsync 1
ufs_iget 1
ufs_trans_push_inode 1
ufs_putpages 1
ufs_putpage 1
ufs_syncip 1
ufs_write 1
ufs_trans_write_resv 1
ufs_log_amt 1
ufs_getpage_miss 1
ufs_trans_syncip 1
getinoquota 1
ufs_inode_cache_constructor 1
ufs_alloc_inode 1
ufs_iget_alloced 1
ufs_iget_internal 2
ufs_reset_vnode 2
ufs_notclean 2
ufs_iupdat 2
blkatoff 3
ufs_close 5
ufs_open 5
ufs_access 6
ufs_map 8
ufs_seek 11
ufs_addmap 15
rdip 15
ufs_read 15
ufs_rwunlock 16
ufs_rwlock 16
ufs_delmap 18
ufs_getattr 19
ufs_getpage_ra 24
bmap_read 25
findextent 25
ufs_lockfs_begin 27
ufs_lookup 46
ufs_iaccess 51
ufs_imark 92
ufs_lockfs_begin_getpage 102
bmap_has_holes 102
ufs_getpage 102
ufs_itimes_nolock 107
ufs_lockfs_end 125
dirmangled 498
dirbadname 498
If you know the purpose or arguments of a kernel function, you can use fbt to understand how or why the function is being called. For example, putnext takes a pointer to a queue structure as its first member. The q_qinfo member of the queue structure is a pointer to a qinit structure. The qi_minfo member of the qinit structure has a pointer to a module_info structure, which contains the module name in its mi_idname member. For more information, see the putnext(9F), queue(9S), qinit(9S), and module_info(9S) man pages.
The following example puts this information together by using the fbt probe in putnext to track putnext calls by module name:
fbt::putnext:entry
{
	@calls[stringof(args[0]->q_qinfo->qi_minfo->mi_idname)] = count();
}
Running the preceding script results in output similar to the following example:
# dtrace -s ./putnext.d
^C
iprb 1
rpcmod 1
pfmod 1
timod 2
vpnmod 2
pts 40
conskbd 42
kb8042 42
tl 58
arp 108
tcp 126
ptm 249
ip 313
ptem 340
vuid2ps2 361
ttcompat 412
ldterm 413
udp 569
strwhead 624
mouse8042 726
This example shows how to determine which processes call the zio_wait function and how long they spend in it. Note that this example works even if zio_wait is called recursively. The script outputs a distribution graph of the time each process spends in zio_wait while the DTrace script executes:
fbt::zio_wait:entry
{
	self->in[++self->count] = timestamp;
}

fbt::zio_wait:return
/self->count/
{
	this->count = self->count--;
	@waiters[execname] = quantize(timestamp - self->in[this->count]);
	self->in[this->count] = 0;
}
Use the dtrace -s zio_wait.d command to produce output similar to the following:
# dtrace -s zio_wait.d
^C
dtrace
value ------------- Distribution ------------- count
1024 | 0
2048 |@@@@@@@@ 1
4096 |@@@@@@@@@@@@@@@@ 2
8192 |@@@@@@@@@@@@@@@@ 2
16384 | 0
zpool-rpool
value ------------- Distribution ------------- count
1024 | 0
2048 |@@@ 1
4096 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 10
8192 |@@@ 1
16384 | 0
sched
value ------------- Distribution ------------- count
1024 | 0
2048 |@@@ 1
4096 |@@@@@@@@@@@ 4
8192 |@@@ 1
16384 | 0
32768 | 0
65536 | 0
131072 |@@@ 1
262144 |@@@ 1
524288 | 0
1048576 | 0
2097152 | 0
4194304 | 0
8388608 | 0
16777216 |@@@ 1
33554432 | 0
67108864 |@@@@@@@@@@@ 4
134217728 | 0
268435456 |@@@ 1
536870912 | 0