Cache: ARC Accesses
The ARC (Adaptive Replacement Cache) is an in-DRAM cache for filesystem and
volume data. This statistic shows accesses to the ARC, allowing its usage and
performance to be observed.
You can check ARC accesses when investigating performance issues, to understand
how well the current workload is caching in the ARC.
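As a rough summary of how well a workload is caching, the hit and miss counts can be reduced to a hit ratio. A minimal sketch (a hypothetical helper, not part of the appliance):

```python
def arc_hit_ratio(hits, misses):
    """Fraction of ARC accesses served from the DRAM cache."""
    total = hits + misses
    return hits / total if total else 0.0

# Example: 368 hits and 23 misses over an interval.
print(arc_hit_ratio(368, 23))  # roughly 0.94
```

A ratio near 1.0 means most reads are returning from DRAM; a falling ratio during a performance issue suggests the working set no longer fits in the ARC.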
The available breakdowns of Cache: ARC accesses are:

Table 14  Breakdowns of ARC Accesses

  Breakdown          Description
  hit/miss           The result of the ARC lookup. The hit/miss states are
                     described in Table 15.
  filename           The filename that was requested from the ARC. Using this
                     breakdown allows hierarchy mode to be used, so that
                     filesystem directories can be navigated.
  L2ARC eligibility  The eligibility for L2ARC caching, as measured at the
                     time of ARC access. A high rate of ARC misses that are
                     L2ARC eligible suggests that the workload would benefit
                     from second-level cache devices.
  project            The project that is accessing the ARC.
  share              The share that is accessing the ARC.
  LUN                The LUN that is accessing the ARC.
As described in Execution Performance Impact, breakdowns such as filename are
the most expensive to leave enabled.
The hit/miss states are:
Table 15  Hit/Miss Breakdowns

  State            Description
  data hits        A data block was in the ARC DRAM cache and returned.
  data misses      A data block was not in the ARC DRAM cache. It will be
                   read from the L2ARC cache devices (if available and the
                   data is cached on them) or from the pool disks.
  metadata hits    A metadata block was in the ARC DRAM cache and returned.
                   Metadata includes the on-disk filesystem framework that
                   refers to the data blocks. Other examples are listed
                   below.
  metadata misses  A metadata block was not in the ARC DRAM cache. It will
                   be read from the L2ARC cache devices (if available and
                   the data is cached on them) or from the pool disks.
  prefetched data/metadata hits/misses
                   ARC accesses triggered by the prefetch mechanism, not
                   directly by an application request. More details on
                   prefetch follow.
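The hit/miss accounting above can be illustrated with a minimal sketch, assuming a simple dict-backed cache (the real ARC uses adaptive replacement lists and tracks data and metadata separately):

```python
class MiniCache:
    """Toy DRAM cache that counts hits and misses, ARC-style."""

    def __init__(self):
        self.store = {}
        self.hits = 0
        self.misses = 0

    def lookup(self, key, read_from_backend):
        if key in self.store:
            self.hits += 1              # counted as a hit
            return self.store[key]
        self.misses += 1                # counted as a miss
        value = read_from_backend(key)  # fall back to L2ARC/pool disks
        self.store[key] = value         # populate the cache
        return value
```

A second lookup of the same block returns from the cache, incrementing the hit count instead of the miss count.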
Examples of metadata: filesystem block pointers, directory information, and
data deduplication tables.
Prefetch is a mechanism to improve the performance of streaming read workloads. It
examines I/O activity to identify sequential reads, and can issue extra reads ahead
of time so that the data can be in cache before the application requests it.
Prefetch works by performing its own accesses to the ARC, ahead of the
application's reads; bear this in mind when trying to understand prefetch ARC
activity. For example:
Table 16  Prefetch Types

  Type                    Description
  prefetched data misses  Prefetch identified a sequential workload and
                          requested that the data be cached in the ARC ahead
                          of time by performing ARC accesses for that data.
                          The data was not already in the cache, so this is a
                          "miss" and the data is read from disk. This is
                          normal, and is how prefetch populates the ARC from
                          disk.
  prefetched data hits    Prefetch identified a sequential workload and
                          requested that the data be cached in the ARC ahead
                          of time by performing ARC accesses for that data.
                          As it turned out, the data was already in the ARC,
                          so these accesses returned as "hits" (and the
                          prefetch ARC access wasn't actually needed). This
                          happens when cached data is repeatedly read in a
                          sequential manner.
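The prefetch behavior in Table 16 can be sketched as a sequential-read detector that issues its own cache accesses ahead of the application (a hypothetical simplification; the real ZFS prefetcher tracks multiple streams per file):

```python
class Prefetcher:
    """Toy read-ahead: on a sequential read, access the next blocks."""

    def __init__(self, cache):
        self.cache = cache        # set of cached block numbers
        self.last_block = None
        self.prefetch_hits = 0    # "prefetched data hits"
        self.prefetch_misses = 0  # "prefetched data misses"

    def on_read(self, block, depth=2):
        sequential = (self.last_block is not None and
                      block == self.last_block + 1)
        self.last_block = block
        if not sequential:
            return
        # Issue cache accesses for the next blocks before the app asks.
        for ahead in range(block + 1, block + 1 + depth):
            if ahead in self.cache:
                self.prefetch_hits += 1    # already cached: not needed
            else:
                self.prefetch_misses += 1  # read from disk, populate cache
                self.cache.add(ahead)
```

Reading blocks 0, 1, 2 in order triggers read-ahead on the second and third reads; the first read-ahead misses (and populates the cache), and later read-aheads begin to hit the blocks it already cached.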
After data has been prefetched, the application may then request it with its own
ARC accesses. Note that the sizes may be different: prefetch may occur with a 128
Kbyte I/O size, while the application may be reading with an 8 Kbyte I/O size. For
example, the following doesn't appear directly related:
- Data hits: 368
- Prefetch data misses: 23
However, they may be related: if prefetch was requesting with a 128 Kbyte I/O
size, 23 x 128 = 2944 Kbytes; and if the application was reading with an 8
Kbyte I/O size, 368 x 8 = 2944 Kbytes.
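The arithmetic above can be checked directly (the I/O sizes are the ones assumed in the example, not fixed values):

```python
prefetch_io_kb = 128   # assumed prefetch I/O size (Kbytes)
app_io_kb = 8          # assumed application I/O size (Kbytes)

prefetch_data_misses = 23
data_hits = 368

print(prefetch_data_misses * prefetch_io_kb)  # 2944 Kbytes read ahead
print(data_hits * app_io_kb)                  # 2944 Kbytes read by the app
```

Matching byte totals like this are a hint that the application reads were being satisfied from data that prefetch had already pulled into the ARC.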
To investigate ARC misses, first check that the ARC has grown to use the
available DRAM, using the Cache: ARC size statistic.