Chapter 3 Statistics and Datasets
Determining the impact of a dynamic statistic
Capacity: Capacity Percent Used
Capacity: System Pool Bytes Used
Capacity: System Pool Percent Used
Data Movement: NDMP Bytes Statistics
Data Movement: NDMP Operations Statistics
Data Movement: Replication Bytes
Data Movement: Replication Operations
Data Movement: Shadow Migration Bytes
Data Movement: Shadow Migration Ops
Data Movement: Shadow Migration Requests
Protocol: Fibre Channel Operations
Protocol: HTTP/WebDAV Requests
Data Movement: NDMP Bytes Transferred to/from Disk
Data Movement: NDMP Bytes Transferred to/from Tape
Data Movement: NDMP File System Operations
Data Movement: Replication Latencies
Disk: ZFS Logical I/O Operations
Memory: Kernel Memory Lost to Fragmentation
This statistic shows the back-end throughput to the disks. This is measured after the appliance has processed logical I/O into physical I/O based on share settings, and after software RAID has been applied as configured in Chapter 5, Storage Configuration, in the Oracle ZFS Storage Appliance Administration Guide.
For example, an 8 Kbyte write over NFSv3 may become a 128 Kbyte write after the record size is applied from the share settings, which may then become a 256 Kbyte write to the disks after mirroring is applied, plus additional bytes for filesystem metadata. On the same mirrored configuration, an 8 Kbyte NFSv3 read may become a 128 Kbyte disk read after the record size is applied; however, this read is not doubled by mirroring, since the data only needs to be read from one half of the mirror. It can help to monitor throughput at all layers at the same time to examine this behavior (a sketch of the inflation arithmetic follows the list below), for example by viewing:
Network: device bytes - data rate on the network (logical)
Disk: ZFS logical I/O bytes - data rate to the share (logical)
Disk: I/O bytes - data rate to the disks (physical)
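As a rough illustration of the inflation described above, the following Python sketch models the arithmetic. The function names, the fixed mirror factor, and the metadata allowance are assumptions for illustration only, not appliance behavior or an appliance API:

    def physical_write_bytes(logical_bytes, recordsize=128 * 1024,
                             mirror_copies=2, metadata_overhead=0.01):
        """Estimate physical bytes written for one small logical write.

        A sub-recordsize write is rounded up to a full record, then
        multiplied by the number of mirror copies, plus a rough allowance
        for filesystem metadata.
        """
        records = -(-logical_bytes // recordsize)  # ceiling division
        return int(records * recordsize * mirror_copies * (1 + metadata_overhead))

    def physical_read_bytes(logical_bytes, recordsize=128 * 1024):
        """Estimate physical bytes read: whole records, from one mirror half only."""
        records = -(-logical_bytes // recordsize)
        return records * recordsize

    # An 8 Kbyte NFSv3 write: about 256 Kbytes reach the disks on a mirrored pool.
    print(physical_write_bytes(8 * 1024))  # ~262144 bytes plus metadata
    # An 8 Kbyte NFSv3 read: 128 Kbytes, not doubled by mirroring.
    print(physical_read_bytes(8 * 1024))   # 131072 bytes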
Check this statistic to understand the nature of back-end disk I/O after an issue has already been determined based on disk utilization or latency. It is difficult to identify an issue from disk I/O throughput alone: a single disk may be performing well at 50 Mbytes/sec (sequential I/O), yet poorly at 5 Mbytes/sec (random I/O).
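The gap comes from per-I/O service time: throughput is roughly IOPS multiplied by I/O size, and random I/O pays seek and rotational latency on every operation. The Python sketch below uses assumed, typical latency figures for a 7,200 rpm disk, not measured appliance data:

    def random_throughput_mb_s(io_size_kb, avg_service_time_ms):
        """Throughput when every I/O pays a full seek plus rotational latency."""
        iops = 1000.0 / avg_service_time_ms
        return iops * io_size_kb / 1024.0

    # ~8 ms per random I/O gives ~125 IOPS; at 8 Kbyte I/Os that is ~1 Mbyte/sec,
    # and even 64 Kbyte random I/Os reach only ~8 Mbytes/sec, while sequential
    # transfers avoid the per-I/O seek and can sustain 50+ Mbytes/sec.
    print(random_throughput_mb_s(8, 8.0))   # ~0.98
    print(random_throughput_mb_s(64, 8.0))  # ~7.81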
The disk breakdown and the hierarchy view can be used to determine whether the JBODs are balanced with disk I/O throughput. Note that cache and log devices will usually have a different throughput profile from the pool disks, and can often stand out as the highest-throughput disks when examining by-disk throughput.
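As a minimal sketch of what such a balance check amounts to, assuming hypothetical per-disk throughput samples (on the appliance itself, the by-disk breakdown and hierarchy view present this graphically):

    from statistics import mean, pstdev

    # Hypothetical per-disk throughput samples in bytes/sec.
    per_disk_bytes_sec = {
        "HDD 0": 48e6, "HDD 1": 51e6, "HDD 2": 49e6, "HDD 3": 47e6,
        # A log or cache device often stands out far above the pool disks:
        "SSD log": 180e6,
    }

    # Exclude log/cache devices before judging pool balance.
    pool = {k: v for k, v in per_disk_bytes_sec.items() if k.startswith("HDD")}
    avg = mean(pool.values())
    cv = pstdev(pool.values()) / avg
    print(f"pool-disk mean: {avg / 1e6:.0f} Mbytes/sec, variation: {cv:.2f}")
    # A small coefficient of variation suggests balanced JBODs; a large one
    # points at hot disks worth drilling into with the hierarchy view.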
For the best measure of disk utilization, see Disk: Disks. To examine operations/sec instead of bytes/sec, see Disk: I/O operations.