The zpool list command provides a number of ways to request information regarding pool status. The information available generally falls into three categories: basic usage information, I/O statistics, and health status. All three types of storage pool information are covered in this section.
You can use the zpool list command to display basic information about pools.
With no arguments, the command displays all the fields for all pools on the system. For example:
# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
tank                   80.0G   22.3G   47.7G    28%  ONLINE     -
dozer                   1.2T    384G    816G    32%  ONLINE     -
This output displays the following information:
NAME: The name of the pool.
SIZE: The total size of the pool, equal to the sum of the sizes of all top-level virtual devices.
USED: The amount of space allocated by all datasets and internal metadata. Note that this amount differs from the amount of space reported at the file system level.
For more information about determining available file system space, see ZFS Space Accounting.
AVAIL: The amount of unallocated space in the pool.
CAP: The amount of space used, expressed as a percentage of the total space.
HEALTH: The current health status of the pool.
For more information about pool health, see Determining the Health Status of ZFS Storage Pools.
ALTROOT: The alternate root of the pool, if any.
For more information about alternate root pools, see Using ZFS Alternate Root Pools.
You can also gather statistics for a specific pool by specifying the pool name. For example:
# zpool list tank
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
tank                   80.0G   22.3G   47.7G    28%  ONLINE     -
Specific statistics can be requested by using the -o option. This option allows for custom reports or a quick way to list pertinent information. For example, to list only the name and size of each pool, you use the following syntax:
# zpool list -o name,size
NAME                    SIZE
tank                   80.0G
dozer                   1.2T
The column names correspond to the properties that are listed in Listing Information About All Storage Pools.
The default output for the zpool list command is designed for readability, and is not easy to use as part of a shell script. To aid programmatic uses of the command, the -H option can be used to suppress the column headings and separate fields by tabs, rather than by spaces. For example, to request a simple list of all pool names on the system:
# zpool list -Ho name
tank
dozer
Here is another example:
# zpool list -H -o name,size
tank   80.0G
dozer  1.2T
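Because the -H output contains no headers and separates fields with tabs, it is straightforward to consume in a shell script. The following minimal sketch (the loop body is purely illustrative) reads the name and size of each pool on the system:

# zpool list -H -o name,size | while read name size; do
>   echo "pool $name is $size"
> done
pool tank is 80.0G
pool dozer is 1.2T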
ZFS automatically logs successful zfs and zpool commands that modify pool state information. This information can be displayed by using the zpool history command.
For example, the following syntax displays the command output for the root pool.
# zpool history
History for 'rpool':
2009-05-07.13:51:00 zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/tmp/root/etc/zfs/zpool.cache rpool c1t0d0s0
2009-05-07.13:51:01 zfs set canmount=noauto rpool
2009-05-07.13:51:02 zfs set mountpoint=/rpool rpool
2009-05-07.13:51:02 zfs create -o mountpoint=legacy rpool/ROOT
2009-05-07.13:51:03 zfs create -b 8192 -V 2048m rpool/swap
2009-05-07.13:51:04 zfs create -b 131072 -V 1024m rpool/dump
2009-05-07.13:51:09 zfs create -o canmount=noauto rpool/ROOT/snv_114
2009-05-07.13:51:10 zpool set bootfs=rpool/ROOT/snv_114 rpool
2009-05-07.13:51:10 zfs set mountpoint=/ rpool/ROOT/snv_114
2009-05-07.13:51:11 zfs set canmount=on rpool
2009-05-07.13:51:12 zfs create -o mountpoint=/export rpool/export
2009-05-07.13:51:12 zfs create rpool/export/home
You can use similar output on your system to identify the exact set of ZFS commands that were executed, which can be helpful when troubleshooting an error scenario.
The features of the history log are as follows:
The log cannot be disabled.
The log is saved persistently on disk, which means the log is saved across system reboots.
The log is implemented as a ring buffer. The minimum size is 128 Kbytes and the maximum size is 32 Mbytes.
For smaller pools, the maximum size is capped at 1 percent of the pool size, where the size is determined at pool creation time. For example, the log for a 1-Gbyte pool is capped at approximately 10 Mbytes.
The log requires no administration, which means that tuning the size of the log or changing its location is unnecessary.
To identify the command history of a specific storage pool, use syntax similar to the following:
# zpool history mypool
History for 'mypool':
2009-06-02.10:56:54 zpool create mypool mirror c0t4d0 c0t5d0
2009-06-02.10:57:31 zpool add mypool spare c0t6d0
2009-06-02.10:57:54 zpool offline mypool c0t5d0
2009-06-02.10:58:02 zpool online mypool c0t5d0
Use the -l option to display a long format that includes the user name, the hostname, and the zone in which the operation was performed. For example:
# zpool history -l mypool
History for 'mypool':
2009-06-02.10:56:54 zpool create mypool mirror c0t4d0 c0t5d0 [user root on neo:global]
2009-06-02.10:57:31 zpool add mypool spare c0t6d0 [user root on neo:global]
2009-06-02.10:57:54 zpool offline mypool c0t5d0 [user root on neo:global]
2009-06-02.10:58:02 zpool online mypool c0t5d0 [user root on neo:global]
Use the -i option to display internal event information that can be used for diagnostic purposes. For example:
# zpool history -i mypool
History for 'mypool':
2009-06-02.10:56:54 zpool create mypool mirror c0t4d0 c0t5d0
2009-06-02.10:57:31 zpool add mypool spare c0t6d0
2009-06-02.10:57:54 zpool offline mypool c0t5d0
2009-06-02.10:58:02 zpool online mypool c0t5d0
2009-06-02.11:02:20 [internal create txg:23] dataset = 24
2009-06-02.11:02:20 [internal property set txg:24] mountpoint=/data dataset = 24
2009-06-02.11:02:20 zfs create -o mountpoint=/data mypool/data
2009-06-02.11:02:34 [internal create txg:26] dataset = 30
2009-06-02.11:02:34 zfs create mypool/data/datab
To request I/O statistics for a pool or specific virtual devices, use the zpool iostat command. Similar to the iostat command, this command can display a static snapshot of all I/O activity so far, as well as updated statistics for every specified interval. The following statistics are reported:
USED CAPACITY: The amount of data currently stored in the pool or device. This figure differs from the amount of space available to actual file systems by a small amount, due to internal implementation details.
For more information about the difference between pool space and dataset space, see ZFS Space Accounting.
AVAILABLE CAPACITY: The amount of space available in the pool or device. As with the used statistic, this amount differs from the amount of space available to datasets by a small margin.
READ OPERATIONS: The number of read I/O operations sent to the pool or device, including metadata requests.
WRITE OPERATIONS: The number of write I/O operations sent to the pool or device.
READ BANDWIDTH: The bandwidth of all read operations (including metadata), expressed as units per second.
WRITE BANDWIDTH: The bandwidth of all write operations, expressed as units per second.
With no options, the zpool iostat command displays the accumulated statistics since boot for all pools on the system. For example:
# zpool iostat
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         100G  20.0G   1.2M   102K   1.2M  3.45K
dozer       12.3G  67.7G   132K  15.2K  32.1K  1.20K
Because these statistics are cumulative since boot, bandwidth might appear low if the pool is relatively idle. You can request a more accurate view of current bandwidth usage by specifying an interval. For example:
# zpool iostat tank 2
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         100G  20.0G   1.2M   102K   1.2M  3.45K
tank         100G  20.0G    134      0  1.34K      0
tank         100G  20.0G     94    342  1.06K   4.1M
In this example, the command displays usage statistics only for the pool tank every two seconds until you type Ctrl-C. Alternately, you can specify an additional count parameter, which causes the command to terminate after the specified number of iterations. For example, zpool iostat 2 3 would print a summary every two seconds for three iterations, for a total of six seconds. If there is a single pool, then the statistics are displayed on consecutive lines. If more than one pool exists, then an additional dashed line delineates each iteration to provide visual separation.
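For example, the following hypothetical run displays a summary for the pool tank every two seconds for three iterations and then exits. The figures shown simply reuse those from the previous example:

# zpool iostat tank 2 3
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         100G  20.0G   1.2M   102K   1.2M  3.45K
tank         100G  20.0G    134      0  1.34K      0
tank         100G  20.0G     94    342  1.06K   4.1M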
In addition to pool-wide I/O statistics, the zpool iostat command can display statistics for virtual devices. This command can be used to identify abnormally slow devices, or simply to observe the distribution of I/O generated by ZFS. To request the complete virtual device layout as well as all I/O statistics, use the zpool iostat -v command. For example:
# zpool iostat -v
               capacity     operations    bandwidth
tank         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
mirror      20.4G  59.6G      0     22      0  6.00K
  c1t0d0        -      -      1    295  11.2K   148K
  c1t1d0        -      -      1    299  11.2K   148K
----------  -----  -----  -----  -----  -----  -----
total       24.5K   149M      0     22      0  6.00K
Note two important things when viewing I/O statistics on a virtual device basis:
First, space usage is only available for top-level virtual devices. The way in which space is allocated among mirror and RAID-Z virtual devices is particular to the implementation and not easily expressed as a single number.
Second, the numbers might not add up exactly as you would expect them to. In particular, operations across RAID-Z and mirrored devices will not be exactly equal. This difference is particularly noticeable immediately after a pool is created, as a significant amount of I/O is done directly to the disks as part of pool creation that is not accounted for at the mirror level. Over time, these numbers should gradually equalize, although broken, unresponsive, or offlined devices can affect this symmetry as well.
You can use the same set of options (interval and count) when examining virtual device statistics.
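For example, the following command displays statistics for each virtual device in the pool tank every five seconds for two iterations (the pool name and the timing values are illustrative):

# zpool iostat -v tank 5 2

The output has the same per-device layout as the zpool iostat -v example above, with one block of statistics printed for each interval.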
ZFS provides an integrated method of examining pool and device health. The health of a pool is determined from the state of all its devices. This state information is displayed by using the zpool status command. In addition, potential pool and device failures are reported by fmd and are displayed on the system console and logged in the /var/adm/messages file. This section describes how to determine pool and device health. This chapter does not document how to repair or recover from unhealthy pools. For more information on troubleshooting and data recovery, see Chapter 11, ZFS Troubleshooting and Pool Recovery.
Each device can fall into one of the following states:
ONLINE: The device or virtual device is in normal working order. Although some transient errors might still occur, the device is otherwise in working order.
DEGRADED: The virtual device has experienced a failure but can still function. This state is most common when a mirror or RAID-Z device has lost one or more constituent devices. The fault tolerance of the pool might be compromised, because a subsequent fault in another device might be unrecoverable.
FAULTED: The device or virtual device is completely inaccessible. This status typically indicates total failure of the device, such that ZFS is incapable of sending data to it or receiving data from it. If a top-level virtual device is in this state, then the pool is completely inaccessible.
OFFLINE: The device has been explicitly taken offline by the administrator.
UNAVAIL: The device or virtual device cannot be opened. In some cases, pools with UNAVAIL devices appear in DEGRADED mode. If a top-level virtual device is UNAVAIL, then nothing in the pool can be accessed.
REMOVED: The device was physically removed while the system was running. Device removal detection is hardware-dependent and might not be supported on all platforms.
The health of a pool is determined from the health of all its top-level virtual devices. If all virtual devices are ONLINE, then the pool is also ONLINE. If any one of the virtual devices is DEGRADED or UNAVAIL, then the pool is also DEGRADED. If a top-level virtual device is FAULTED or OFFLINE, then the pool is also FAULTED. A pool in the faulted state is completely inaccessible. No data can be recovered until the necessary devices are attached or repaired. A pool in the degraded state continues to run, but you might not achieve the same level of data redundancy or data throughput as if the pool were online.
The simplest way to request a quick overview of pool health status is to use the zpool status command:
# zpool status -x
all pools are healthy
Specific pools can be examined by specifying a pool name to the command. Any pool that is not in the ONLINE state should be investigated for potential problems, as described in the next section.
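For example, querying a healthy pool named tank might produce output similar to the following:

# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0

errors: No known data errors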
You can request a more detailed health summary by using the -v option. For example:
# zpool status -v tank
  pool: tank
 state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        DEGRADED     0     0     0
          mirror    DEGRADED     0     0     0
            c1t0d0  FAULTED      0     0     0  cannot open
            c1t1d0  ONLINE       0     0     0

errors: No known data errors
This output displays a complete description of why the pool is in its current state, including a readable description of the problem and a link to a knowledge article for more information. Each knowledge article provides up-to-date information on the best way to recover from your current problem. Using the detailed configuration information, you should be able to determine which device is damaged and how to repair the pool.
In the above example, the faulted device should be replaced. After the device is replaced, use the zpool online command to bring the device back online. For example:
# zpool online tank c1t0d0
Bringing device c1t0d0 online
# zpool status -x
all pools are healthy
If the autoreplace property is on, you might not have to online the replaced device.
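For example, you might verify and enable this property as follows (the pool name tank is illustrative):

# zpool get autoreplace tank
NAME  PROPERTY     VALUE  SOURCE
tank  autoreplace  off    default
# zpool set autoreplace=on tank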
If a pool has an offlined device, the command output identifies the problem pool. For example:
# zpool status -x
  pool: tank
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        DEGRADED     0     0     0
          mirror    DEGRADED     0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  OFFLINE      0     0     0

errors: No known data errors
The READ and WRITE columns provide a count of I/O errors that occurred on the device, while the CKSUM column provides a count of uncorrectable checksum errors that occurred on the device. Both error counts likely indicate potential device failure, and some corrective action is needed. If non-zero errors are reported for a top-level virtual device, portions of your data might have become inaccessible.
The errors: field identifies any known data errors.
In the example output above, the offlined device is not causing data errors.
For more information about diagnosing and repairing faulted pools and data, see Chapter 11, ZFS Troubleshooting and Pool Recovery.