The zpool list command provides several ways to request information about pool status. The available information generally falls into three categories: basic usage information, I/O statistics, and health status. This section discusses all three types of storage pool information.
The zpool list command displays basic information about pools. You can use the command in the following ways:
Without options: zpool list [pool]
If you do not specify a pool, then information for all pools is displayed.
With options: zpool list options [arguments]
The zpool list [pool] command displays the following pool information:
NAME – The name of the pool.
SIZE – The total size of the pool, equal to the sum of the sizes of all top-level virtual devices.
ALLOC – The amount of physical space allocated to all datasets and internal metadata. Note that this amount differs from the amount of disk space reported at the file system level.
FREE – The amount of unallocated space in the pool.
CAP – The amount of disk space used, expressed as a percentage of the total disk space.
HEALTH – The current health status of the pool.
For more information about pool health, see Determining the Health Status of ZFS Storage Pools.
ALTROOT – The alternate root of the pool, if one exists.
For more information about alternate root pools, see Using a ZFS Pool With an Alternate Root Location.
The following example shows sample zpool list command output:
# zpool list
NAME       SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
syspool1  80.0G  22.3G  47.7G  28%  ONLINE  -
syspool2   1.2T   384G   816G  32%  ONLINE  -
To obtain statistics for a specific pool, specify the pool name with the command.
You can select the specific pool information to be displayed by specifying options and arguments with the zpool list command.
The -o option enables you to filter which columns are displayed. The following example shows how to list only the name and size of each pool:
# zpool list -o name,size
NAME       SIZE
syspool1  80.0G
syspool2   1.2T
You can use the zpool list command as part of a shell script by issuing the combined -Ho options. The -H option suppresses display of column headings and instead displays tab-separated pool information. For example:
# zpool list -Ho name,size
syspool1   80.0G
syspool2   1.2T
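Because -H emits one tab-separated record per pool with no header, the output is easy to consume programmatically. The following minimal sketch, a hypothetical Python helper that is not part of the ZFS toolset, parses such output into a name-to-size mapping; the sample string mirrors the example output above, and in practice you would capture the text with subprocess.

```python
# Parse the tab-separated output of `zpool list -Ho name,size`.
# The sample string mimics the example output shown in the text.
sample = "syspool1\t80.0G\nsyspool2\t1.2T\n"

def parse_zpool_list(text):
    """Return a dict mapping each pool name to its reported size string."""
    pools = {}
    for line in text.splitlines():
        if not line.strip():
            continue  # skip blank lines
        name, size = line.split("\t")
        pools[name] = size
    return pools

print(parse_zpool_list(sample))
# {'syspool1': '80.0G', 'syspool2': '1.2T'}
```

The tab separator is the reason to prefer -H in scripts: pool names cannot contain tabs, so the split is unambiguous, unlike splitting the human-readable columnar output on whitespace.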
The -T option enables you to gather time-stamped statistics about the pools. Use the following syntax:
# zpool list -T d interval [count]
d – Specifies to use the standard date format when displaying the date.
interval – Specifies the interval, in seconds, between information displays.
count – Specifies the number of times to report the information. If you do not specify count, the information is continuously refreshed at the specified interval until you press Control-C.
The following example displays pool information twice, with a 3-second gap between the reports. The output uses the standard format to display the date.
# zpool list -T d 3 2
Tue Nov  2 10:36:11 MDT 2010
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
pool   33.8G  83.5K  33.7G   0%  1.00x  ONLINE  -
rpool  33.8G  12.2G  21.5G  36%  1.00x  ONLINE  -
Tue Nov  2 10:36:14 MDT 2010
pool   33.8G  83.5K  33.7G   0%  1.00x  ONLINE  -
rpool  33.8G  12.2G  21.5G  36%  1.00x  ONLINE  -
Use the zpool status -l option to display information about the physical location of pool devices. Reviewing the physical location information is helpful when you need to physically remove or replace a disk.
In addition, you can use the fmadm add-alias command to include a disk alias name that helps you identify the physical location of disks in your environment. For example:
# fmadm add-alias SUN-Storage-J4400.1002QCQ015 Lab10Rack5disk
# zpool status -l system1
  pool: system1
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Fri Aug  3 16:00:35 2012
config:

        NAME                                         STATE     READ WRITE CKSUM
        system1                                      ONLINE       0     0     0
          mirror-0                                   ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_02/disk  ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_20/disk  ONLINE       0     0     0
          mirror-1                                   ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_22/disk  ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_14/disk  ONLINE       0     0     0
          mirror-2                                   ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_10/disk  ONLINE       0     0     0
            /dev/chassis/Lab10Rack5.../DISK_16/disk  ONLINE       0     0     0
        .
        .
        .
        spares
          /dev/chassis/Lab10Rack5.../DISK_17/disk    AVAIL
          /dev/chassis/Lab10Rack5.../DISK_12/disk    AVAIL

errors: No known data errors
Use the zpool history command to display the log of zfs and zpool command use. The log records when these commands were successfully used to modify pool state information or to troubleshoot an error condition.
Note the following information about the history log:
You cannot disable the log. The log is saved persistently on disk and is preserved across system reboots.
The log is implemented as a ring buffer. The minimum size is 128 KB. The maximum size is 32 MB.
For smaller pools, the maximum size is capped at 1 percent of the pool size, where the size is determined at pool creation time.
Because the log requires no administration, you do not need to tune the log size or its location.
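The sizing rules above can be expressed as a small calculation. The following is an illustrative sketch based only on the figures quoted in this section (the 128 KB minimum, the 32 MB maximum, and the 1 percent cap for smaller pools); it is not code from the ZFS implementation.

```python
# Approximate the ZFS history log size from the documented limits:
# 1% of the pool size at creation time, clamped between 128 KB and 32 MB.
KIB, MIB, GIB = 1024, 1024 ** 2, 1024 ** 3

def history_log_size(pool_size_bytes):
    """Return the approximate history log size in bytes for a pool."""
    one_percent = pool_size_bytes // 100
    return max(128 * KIB, min(32 * MIB, one_percent))

# A 1 GB pool is held to roughly 1% of its size (about 10 MB);
# a large pool hits the 32 MB ceiling.
print(history_log_size(1 * GIB) // MIB)   # 10
print(history_log_size(80 * GIB) // MIB)  # 32
```

For a very small pool, the 128 KB floor dominates, so even a 10 MB pool keeps a usable amount of history.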
The following example shows the zfs and zpool command history on the pool system1.
# zpool history system1
2012-01-25.16:35:32 zpool create -f system1 mirror c3t1d0 c3t2d0 spare c3t3d0
2012-02-17.13:04:10 zfs create system1/test
2012-02-17.13:05:01 zfs snapshot -r system1/test@snap1
Use the -l option to display a long format that includes the user name, the host name, and the zone in which the operation was performed. For example:
# zpool history -l system1
History for 'system1':
2012-01-25.16:35:32 zpool create -f system1 mirror c3t1d0 c3t2d0 spare c3t3d0 [user root on host1:global]
2012-02-17.13:04:10 zfs create system1/test [user root on host1:global]
2012-02-17.13:05:01 zfs snapshot -r system1/test@snap1 [user root on host1:global]
Use the -i option to display internal event information that can be used for diagnostic purposes. For example:
# zpool history -i system1
History for 'system1':
2012-01-25.16:35:32 zpool create -f system1 mirror c3t1d0 c3t2d0 spare c3t3d0
2012-01-25.16:35:32 [internal pool create txg:5] pool spa 33; zfs spa 33; zpl 5; uts host1 5.11 11.1 sun4v
2012-02-17.13:04:10 zfs create system1/test
2012-02-17.13:04:10 [internal property set txg:66094] $share2=2 dataset = 34
2012-02-17.13:04:31 [internal snapshot txg:66095] dataset = 56
2012-02-17.13:05:01 zfs snapshot -r system1/test@snap1
2012-02-17.13:08:00 [internal user hold txg:66102] <.send-4736-1> temp = 1
...
To request I/O statistics for a pool or specific virtual devices, use the zpool iostat command. Similar to the iostat command, this command can display a static snapshot of all I/O activity, as well as updated statistics for every specified interval. The following statistics are reported:
alloc – The amount of data currently stored in the pool or device. This amount differs from the amount of disk space available to actual file systems by a small margin due to internal implementation details.
free – The amount of disk space available in the pool or device. Like the alloc statistic, this amount differs from the amount of disk space available to datasets by a small margin.
read operations – The number of read I/O operations sent to the pool or device, including metadata requests.
write operations – The number of write I/O operations sent to the pool or device.
read bandwidth – The bandwidth of all read operations (including metadata), expressed as units per second.
write bandwidth – The bandwidth of all write operations, expressed as units per second.
When issued with no options, the zpool iostat command displays the accumulated statistics since boot for all pools on the system. For example:
# zpool iostat
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       6.05G  61.9G      0      0    786    107
system1     31.3G  36.7G      4      1   296K  86.1K
----------  -----  -----  -----  -----  -----  -----
Because these statistics are cumulative since boot, bandwidth might appear low if the pool is relatively idle. You can request a more accurate view of current bandwidth usage by specifying an interval. For example:
# zpool iostat system1 2
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
system1     18.5G  49.5G      0    187      0  23.3M
system1     18.5G  49.5G      0    464      0  57.7M
system1     18.5G  49.5G      0    457      0  56.6M
system1     18.8G  49.2G      0    435      0  51.3M
In this example, the command displays usage statistics for the pool system1 every two seconds until you press Control-C. Alternately, you can specify an additional count argument, which causes the command to terminate after the specified number of iterations.
For example, zpool iostat 2 3 would print a summary every two seconds for three iterations, for a total of six seconds.
The zpool iostat -v command can display I/O statistics for virtual devices. Use this command to identify abnormally slow devices or to observe the distribution of I/O generated by ZFS. See the following three examples; the last two examples display multigroup configurations.
# zpool iostat -v tank
                             capacity     operations    bandwidth
pool                      alloc   free   read  write   read  write
------------------------  -----  -----  -----  -----  -----  -----
tank                      2.69G  1.81T      0     29    252  14.2M
  c0t5000C5001032271Bd0   1.34G   927G      0     14    130  7.09M
  c0t5000C50010349387d0   1.34G   927G      0     14    122  7.09M
------------------------  -----  -----  -----  -----  -----  -----
# zpool iostat -v tank
                             capacity     operations    bandwidth
pool                      alloc   free   read  write   read  write
------------------------  -----  -----  -----  -----  -----  -----
tank                       810M  1.81T      0    390    536  32.1M
  mirror-0                 405M   928G      0    194    232  16.1M
    c0t5000C5001032271Bd0     -      -      0     37  1.07K  16.2M
    c0t5000C50010349387d0     -      -      0     38    858  16.1M
  mirror-1                 405M   928G      0    195    304  16.1M
    c0t5000C5001033963Fd0     -      -      0     37  1.14K  16.2M
    c0t5000C5001033024Fd0     -      -      0     38    858  16.2M
------------------------  -----  -----  -----  -----  -----  -----
# zpool iostat -v tank
                             capacity     operations    bandwidth
pool                      alloc   free   read  write   read  write
------------------------  -----  -----  -----  -----  -----  -----
tank                       258M  5.44T      0    321    876  31.5M
  raidz1-0                 128M  2.72T      0    160     29  15.9M
    c0t5000C5001032271Bd0     -      -      0     33  1.40K  8.07M
    c0t5000C50010349387d0     -      -      0     30  1.37K  8.07M
    c0t5000C5001033963Fd0     -      -      0     30  1.37K  8.07M
  raidz1-1                 130M  2.72T      0    160    847  15.5M
    c0t5000C5001033024Fd0     -      -      1     34  2.20K  8.10M
    c0t5000C500103C9817d0     -      -      0     34  1.37K  7.87M
    c0t5000C50010324F67d0     -      -      0     34  1.37K  8.10M
------------------------  -----  -----  -----  -----  -----  -----
The zpool iostat -v command provides specific information for each level of the pool configuration:
Pool level shows the sum of the group level data.
Group level shows the compiled data of the mirror or raidz configuration.
Leaf level shows information for each physical disk.
Note two important points when viewing I/O statistics for virtual devices:
Statistics on disk space use are available only for top-level virtual devices. The way in which disk space is allocated among mirror and RAID-Z virtual devices is particular to the implementation and not easily expressed as a single number.
The numbers might not add up exactly as you would expect. In particular, operations across RAID-Z and mirrored devices will not be exactly equal. This difference is particularly noticeable immediately after a pool is created because a significant amount of I/O is done directly to the disks as part of pool creation, which is not accounted for at the mirror level. Over time, these numbers gradually equalize. However, broken, unresponsive, or offline devices can affect this symmetry as well.
You can use interval and count when examining virtual device statistics.
You can also display physical location information about the pool's virtual devices. The following example shows sample output that has been truncated:
# zpool iostat -lv
                                                capacity     operations    bandwidth
pool                                         alloc   free   read  write   read  write
-------------------------------------------  -----  -----  -----  -----  -----  -----
export                                       2.39T  2.14T     13     27  42.7K   300K
  mirror                                      490G   438G      2      5  8.53K  60.3K
    /dev/chassis/lab10rack15/SCSI_Device__2/disk  -      -      1      0  4.47K  60.3K
    /dev/chassis/lab10rack15/SCSI_Device__3/disk  -      -      1      0  4.45K  60.3K
  mirror                                      490G   438G      2      5  8.62K  59.9K
    /dev/chassis/lab10rack15/SCSI_Device__4/disk  -      -      1      0  4.52K  59.9K
    /dev/chassis/lab10rack15/SCSI_Device__5/disk  -      -      1      0  4.48K  59.9K
You can display pool and device health by using the zpool status command. In addition, the fmd command reports potential pool and device failures on the system console and in the /var/adm/messages file.
This section describes only how to determine pool and device health. For data recovery from unhealthy pools, see Oracle Solaris ZFS Troubleshooting and Pool Recovery.
A pool's health status is described by one of four states:
DEGRADED – A pool with one or more failed devices whose data is still available due to a redundant configuration.
ONLINE – A pool that has all devices operating normally.
SUSPENDED – A pool that is waiting for device connectivity to be restored. A SUSPENDED pool remains in the wait state until the device issue is resolved.
UNAVAIL – A pool with corrupted metadata, or one or more unavailable devices, and insufficient replicas to continue functioning.
Each pool device can fall into one of the following states:
DEGRADED – The virtual device has experienced a failure but can still function. This state is most common when a mirror or RAID-Z device has lost one or more constituent devices. The fault tolerance of the pool might be compromised because a subsequent fault in another device might be unrecoverable.
OFFLINE – The device has been explicitly taken offline by the administrator.
ONLINE – The device or virtual device is in normal working order, although some transient errors might still occur.
REMOVED – The device was physically removed while the system was running. Device removal detection is hardware-dependent and might not be supported on all platforms.
UNAVAIL – The device or virtual device cannot be opened. In some cases, pools with UNAVAIL devices appear in DEGRADED mode. If a top-level virtual device is UNAVAIL, then nothing in the pool can be accessed.
The health of a pool is determined from the health of all its top-level virtual devices. If all virtual devices are ONLINE, then the pool is also ONLINE. If any one of the virtual devices is DEGRADED or UNAVAIL, then the pool is also DEGRADED. If a top-level virtual device is UNAVAIL or OFFLINE, then the pool is also UNAVAIL or SUSPENDED. A pool in the UNAVAIL or SUSPENDED state is completely inaccessible, and no data can be recovered until the necessary devices are attached or repaired. A pool in the DEGRADED state continues to run, but you might not achieve the same level of data redundancy or data throughput as if the pool were online.
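The aggregation rules above can be sketched as a small function. The following is an illustrative Python model of the documented behavior, not ZFS source code; it uses the state names listed in this section and collapses the UNAVAIL-or-SUSPENDED case into a single UNAVAIL result for simplicity.

```python
def pool_health(top_level_states):
    """Derive the overall pool state from the states of its top-level
    virtual devices, following the rules described in the text."""
    if "UNAVAIL" in top_level_states or "OFFLINE" in top_level_states:
        # A failed or offline top-level device makes the whole pool
        # inaccessible (UNAVAIL or SUSPENDED, depending on the cause).
        return "UNAVAIL"
    if "DEGRADED" in top_level_states:
        return "DEGRADED"
    return "ONLINE"

print(pool_health(["ONLINE", "ONLINE"]))     # ONLINE
print(pool_health(["DEGRADED", "ONLINE"]))   # DEGRADED
print(pool_health(["UNAVAIL", "DEGRADED"]))  # UNAVAIL
```

Note the precedence: an inaccessible top-level device outweighs a merely degraded one, which is why the UNAVAIL check comes first.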
The zpool status command also displays the state of resilver and scrub operations as follows:
Resilver or scrub operations are in progress.
Resilver or scrub operations have been completed.
Resilver and scrub completion messages persist across system reboots.
Resilver or scrub operations have been canceled.
You can review pool health status by using one of the following zpool status command options:
zpool status -x [pool] – Displays only the status of pools that have errors or are otherwise unavailable.
zpool status -v [pool] – Generates verbose output providing detailed information about the pools and their devices.
You should investigate any pool that is not in the ONLINE state for potential problems.
The following example shows how to generate a verbose status report about the pool system1.
# zpool status -v system1
  pool: system1
 state: DEGRADED
status: One or more devices are unavailable in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or 'fmadm repaired', or replace the device
        with 'zpool replace'.
  scan: scrub repaired 0 in 0h0m with 0 errors on Wed Jun 20 15:38:08 2012
config:

        NAME                       STATE     READ WRITE CKSUM
        system1                    DEGRADED     0     0     0
          mirror-0                 DEGRADED     0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  UNAVAIL      0     0     0
          mirror-1                 ONLINE       0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335DC60Fd0  ONLINE       0     0     0

device details:

        c0t5000C500335F907Fd0    UNAVAIL    cannot open
        status: ZFS detected errors on this device.
                The device was missing.
           see: URL to My Oracle Support knowledge article for recovery

errors: No known data errors
The READ and WRITE columns provide a count of I/O errors that occurred on the device, while the CKSUM column provides a count of uncorrectable checksum errors that occurred on the device. Both error counts indicate a potential device failure for which some corrective action is needed. If non-zero errors are reported for a top-level virtual device, portions of your data might have become inaccessible.
The output identifies problems as well as possible causes for the pool's current state. The output also includes a link to a knowledge article for up-to-date information about the best way to recover from the problem. From the output, you can determine which device is damaged and how to repair the pool.
For more information about diagnosing and repairing UNAVAIL pools and data, see Oracle Solaris ZFS Troubleshooting and Pool Recovery.
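As a hypothetical illustration of acting on the READ, WRITE, and CKSUM counters described above, the following sketch flags devices whose counters are nonzero. The row format loosely mirrors the config section of zpool status output; the device names come from the example above, but the error counts here are fabricated for illustration.

```python
def flag_error_devices(config_rows):
    """Given (name, state, read, write, cksum) tuples taken from a
    zpool status config section, return the names of devices that
    report any read, write, or checksum errors."""
    return [name for name, state, r, w, c in config_rows if r or w or c]

# Fabricated sample data: one device reports checksum errors.
rows = [
    ("system1",               "DEGRADED", 0, 0, 0),
    ("c0t5000C500335F95E3d0", "ONLINE",   0, 0, 0),
    ("c0t5000C500335F907Fd0", "UNAVAIL",  0, 0, 2),
]
print(flag_error_devices(rows))  # ['c0t5000C500335F907Fd0']
```

A real monitoring script would parse the counters out of zpool status output (or zpool status -x) rather than hard-coding them, but the filtering logic is the same.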
You can use the zpool status interval and count options to gather statistics over a period of time. In addition, you can display a time stamp by using the -T option. For example:
# zpool status -T d 3 2
Wed Jun 20 16:10:09 MDT 2012
  pool: pond
 state: ONLINE
  scan: resilvered 9.50K in 0h0m with 0 errors on Wed Jun 20 16:07:34 2012
config:

        NAME                       STATE     READ WRITE CKSUM
        pond                       ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335DC60Fd0  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 0h11m with 0 errors on Wed Jun 20 15:08:23 2012
config:

        NAME                         STATE     READ WRITE CKSUM
        rpool                        ONLINE       0     0     0
          mirror-0                   ONLINE       0     0     0
            c0t5000C500335BA8C3d0s0  ONLINE       0     0     0
            c0t5000C500335FC3E7d0s0  ONLINE       0     0     0

errors: No known data errors

Wed Jun 20 16:10:12 MDT 2012
  pool: pond
 state: ONLINE
  scan: resilvered 9.50K in 0h0m with 0 errors on Wed Jun 20 16:07:34 2012
config:

        NAME                       STATE     READ WRITE CKSUM
        pond                       ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335DC60Fd0  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 0h11m with 0 errors on Wed Jun 20 15:08:23 2012
config:

        NAME                         STATE     READ WRITE CKSUM
        rpool                        ONLINE       0     0     0
          mirror-0                   ONLINE       0     0     0
            c0t5000C500335BA8C3d0s0  ONLINE       0     0     0
            c0t5000C500335FC3E7d0s0  ONLINE       0     0     0

errors: No known data errors