This chapter describes how to create and administer ZFS storage pools.
The following sections provide detailed information about the storage pool components: disks, slices, and files.
The most basic element of a storage pool is a piece of physical storage. Physical storage can be any block device of at least 128 Mbytes in size. Typically, this device is a hard drive that is visible to the system in the /dev/dsk directory.
A storage device can be a whole disk (c1t0d0) or an individual slice (c0t0d0s7). The recommended mode of operation is to use an entire disk, in which case the disk does not need to be specially formatted. ZFS formats the disk using an EFI label to contain a single, large slice. When used in this way, the partition table that is displayed by the format command appears similar to the following:
Current partition table (original):
Total disk sectors available: 17672849 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0       usr    wm                256       8.43GB          17672849
  1 unassigned   wm                  0          0                   0
  2 unassigned   wm                  0          0                   0
  3 unassigned   wm                  0          0                   0
  4 unassigned   wm                  0          0                   0
  5 unassigned   wm                  0          0                   0
  6 unassigned   wm                  0          0                   0
  8   reserved   wm           17672850       8.00MB          17689233
To use whole disks, the disks must be named by using the /dev/dsk/cXtXdX naming convention. Some third-party drivers use a different naming convention or place disks in a location other than the /dev/dsk directory. To use these disks, you must manually label the disk and provide a slice to ZFS.
ZFS applies an EFI label when you create a storage pool with whole disks. For more information about EFI labels, see EFI Disk Label in System Administration Guide: Devices and File Systems.
A disk that is intended for a ZFS root pool must be created with an SMI label, not an EFI label. You can relabel a disk with an SMI label by using the format -e command.
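For example, the following is a sketch of relabeling a disk with an SMI label by using the format -e command. The disk name c1t0d0 is hypothetical, and the exact menu prompts might differ slightly between Solaris releases:

# format -e
(select c1t0d0 from the disk menu)
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
Ready to label disk, continue? y
format> quit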
Disks can be specified by using either the full path, such as /dev/dsk/c1t0d0, or a shorthand name that consists of the device name within the /dev/dsk directory, such as c1t0d0. For example, the following are valid disk names:
c1t0d0
/dev/dsk/c1t0d0
c0t0d6s2
/dev/foo/disk
Using whole physical disks is the simplest way to create ZFS storage pools. ZFS configurations become progressively more complex, from management, reliability, and performance perspectives, when you build pools from disk slices, LUNs in hardware RAID arrays, or volumes presented by software-based volume managers. The following considerations might help you determine how to configure ZFS with other hardware or software storage solutions:
If you construct ZFS configurations on top of LUNs from hardware RAID arrays, you need to understand the relationship between ZFS redundancy features and the redundancy features offered by the array. Certain configurations might provide adequate redundancy and performance, but other configurations might not.
You can construct logical devices for ZFS using volumes presented by software-based volume managers, such as Solaris Volume Manager (SVM) or Veritas Volume Manager (VxVM). However, these configurations are not recommended. While ZFS functions properly on such devices, the result might be less-than-optimal performance.
For additional information about storage pool recommendations, see the ZFS best practices site:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Disks are identified both by their path and by their device ID, if available. This method allows devices to be reconfigured on a system without having to update any ZFS state. If a disk is switched between controller 1 and controller 2, ZFS uses the device ID to detect that the disk has moved and should now be accessed using controller 2. The device ID is unique to the drive's firmware. While unlikely, some firmware updates have been known to change device IDs. If this situation happens, ZFS can still access the device by path and update the stored device ID automatically. If you inadvertently change both the path and the ID of the device, then export and re-import the pool in order to use it.
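For example, a minimal sketch of exporting and re-importing a pool, assuming a pool named tank:

# zpool export tank
# zpool import tank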
Disks can be labeled with a traditional Solaris VTOC (SMI) label when you create a storage pool with a disk slice.
For a bootable ZFS root pool, the disks in the pool must contain slices and must be labeled with an SMI label. The simplest configuration would be to put the entire disk capacity in slice 0 and use that slice for the root pool.
On a SPARC based system, a 72-Gbyte disk has 68 Gbytes of usable space located in slice 0 as shown in the following format output.
# format
.
.
.
Specify disk (enter its number): 4
selecting c1t1d0
partition> p
Current partition table (original):
Total disk cylinders available: 14087 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
  1 unassigned    wm       0                0         (0/0/0)             0
  2     backup    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6 unassigned    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
On an x86 based system, a 72-Gbyte disk has 68 Gbytes of usable space located in slice 0, as shown in the following format output. A small amount of boot information is contained in slice 8. Slice 8 requires no administration and cannot be changed.
# format
.
.
.
selecting c1t0d0
partition> p
Current partition table (original):
Total disk cylinders available: 49779 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       1 - 49778       68.36GB    (49778/0/0) 143360640
  1 unassigned    wu       0                0         (0/0/0)             0
  2     backup    wm       0 - 49778       68.36GB    (49779/0/0) 143363520
  3 unassigned    wu       0                0         (0/0/0)             0
  4 unassigned    wu       0                0         (0/0/0)             0
  5 unassigned    wu       0                0         (0/0/0)             0
  6 unassigned    wu       0                0         (0/0/0)             0
  7 unassigned    wu       0                0         (0/0/0)             0
  8       boot    wu       0 -     0        1.41MB    (1/0/0)          2880
  9 unassigned    wu       0                0         (0/0/0)             0
ZFS also allows you to use UFS files as virtual devices in your storage pool. This feature is aimed primarily at testing and enabling simple experimentation, not for production use. The reason is that any use of files relies on the underlying file system for consistency. If you create a ZFS pool backed by files on a UFS file system, then you are implicitly relying on UFS to guarantee correctness and synchronous semantics.
However, files can be quite useful when you are first trying out ZFS or experimenting with more complicated layouts when not enough physical devices are present. All files must be specified as complete paths and must be at least 64 Mbytes in size.
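For example, a minimal sketch of creating a pool backed by files, where the directory /test and the pool name testpool are hypothetical. The mkfile command creates the backing files, which must be at least 64 Mbytes each:

# mkfile 100m /test/file1 /test/file2
# zpool create testpool /test/file1 /test/file2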
ZFS provides data redundancy, as well as self-healing properties, in a mirrored and a RAID-Z configuration.
A mirrored storage pool configuration requires at least two disks, preferably on separate controllers. Many disks can be used in a mirrored configuration. In addition, you can create more than one mirror in each pool. Conceptually, a simple mirrored configuration would look similar to the following:
mirror c1t0d0 c2t0d0
Conceptually, a more complex mirrored configuration would look similar to the following:
mirror c1t0d0 c2t0d0 c3t0d0 mirror c4t0d0 c5t0d0 c6t0d0
For information about creating a mirrored storage pool, see Creating a Mirrored Storage Pool.
In addition to a mirrored storage pool configuration, ZFS provides a RAID-Z configuration with single-, double-, or triple-parity fault tolerance. Single-parity RAID-Z (raidz or raidz1) is similar to RAID-5. Double-parity RAID-Z (raidz2) is similar to RAID-6.
For more information about RAIDZ-3 (raidz3), see the following blog:
http://blogs.sun.com/ahl/entry/triple_parity_raid_z
All traditional RAID-5-like algorithms (RAID-4, RAID-6, RDP, and EVEN-ODD, for example) suffer from a problem known as the “RAID-5 write hole.” If only part of a RAID-5 stripe is written, and power is lost before all blocks have made it to disk, the parity will remain out of sync with the data, and therefore useless, forever (unless a subsequent full-stripe write overwrites it). In RAID-Z, ZFS uses variable-width RAID stripes so that all writes are full-stripe writes. This design is only possible because ZFS integrates file system and device management in such a way that the file system's metadata has enough information about the underlying data redundancy model to handle variable-width RAID stripes. RAID-Z is the world's first software-only solution to the RAID-5 write hole.
A RAID-Z configuration with N disks of size X with P parity disks can hold approximately (N-P)*X bytes and can withstand P devices failing before data integrity is compromised. You need at least two disks for a single-parity RAID-Z configuration and at least three disks for a double-parity RAID-Z configuration. For example, if you have three disks in a single-parity RAID-Z configuration, parity data occupies space equal to one of the three disks. No special hardware is required to create a RAID-Z configuration.
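As a worked example, six 1-Tbyte disks in a double-parity (raidz2) configuration give N=6 and P=2, so the pool can hold approximately (6-2) x 1 Tbyte = 4 Tbytes of data and can survive the failure of any two disks.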
Conceptually, a RAID-Z configuration with three disks would look similar to the following:
raidz c1t0d0 c2t0d0 c3t0d0
A more complex conceptual RAID-Z configuration would look similar to the following:
raidz c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c6t0d0 c7t0d0 raidz c8t0d0 c9t0d0 c10t0d0 c11t0d0 c12t0d0 c13t0d0 c14t0d0
If you are creating a RAID-Z configuration with many disks, as in this example, a 14-disk configuration is better split into two 7-disk groupings. RAID-Z configurations with single-digit disk groupings should perform better.
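For example, a pool split into two 7-disk RAID-Z groupings, matching the conceptual layout above, could be created with a command similar to the following sketch (hypothetical pool and device names):

# zpool create tank raidz c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c6t0d0 c7t0d0 raidz c8t0d0 c9t0d0 c10t0d0 c11t0d0 c12t0d0 c13t0d0 c14t0d0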
For information about creating a RAID-Z storage pool, see Creating RAID-Z Storage Pools.
For more information about choosing between a mirrored configuration or a RAID-Z configuration based on performance and space considerations, see the following blog:
http://blogs.sun.com/roller/page/roch?entry=when_to_and_not_to
For additional information on RAID-Z storage pool recommendations, see the ZFS best practices site:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
The ZFS hybrid storage pool, available in the Sun Storage 7000 product series, is a special storage pool that combines DRAM, SSDs, and HDDs to improve performance and increase capacity while reducing power consumption. You can select the ZFS redundancy configuration of the storage pool and easily manage other configuration options with this product's management interface.
For more information about this product, see the Sun Storage Unified Storage System Administration Guide.
ZFS provides for self-healing data in a mirrored or RAID-Z configuration.
When a bad data block is detected, not only does ZFS fetch the correct data from another redundant copy, but it also repairs the bad data by replacing it with the good copy.
ZFS dynamically stripes data across all top-level virtual devices. The decision about where to place data is made at write time, so no fixed-width stripes are created at allocation time.
When new virtual devices are added to a pool, ZFS gradually allocates data to the new device in order to maintain performance and space allocation policies. Each virtual device can also be a mirror or a RAID-Z device that contains other disk devices or files. This configuration allows for flexibility in controlling the fault characteristics of your pool. For example, you could create the following configurations from four disks (example commands follow the list):
Four disks using dynamic striping
One four-way RAID-Z configuration
Two two-way mirrors using dynamic striping
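As a sketch, the three configurations above could be created with commands similar to the following. The pool name tank and the device names are hypothetical, and each command assumes the pool does not already exist:

# zpool create tank c1t0d0 c1t1d0 c1t2d0 c1t3d0
# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
# zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0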
While ZFS supports combining different types of virtual devices within the same pool, this practice is not recommended. For example, you can create a pool with a two-way mirror and a three-way RAID-Z configuration. However, your fault tolerance is only as good as your worst virtual device, RAID-Z in this case. The recommended practice is to use top-level virtual devices of the same type with the same redundancy level in each device.
The following sections describe different scenarios for creating and destroying ZFS storage pools.
By design, creating and destroying pools is fast and easy. However, be cautious when doing these operations. Although checks are performed to prevent using devices known to be in use in a new pool, ZFS cannot always know when a device is already in use. Destroying a pool is even easier. Use zpool destroy with caution. This is a simple command with significant consequences.
To create a storage pool, use the zpool create command. This command takes a pool name and any number of virtual devices as arguments. The pool name must satisfy the naming conventions outlined in ZFS Component Naming Requirements.
The following command creates a new pool named tank that consists of the disks c1t0d0 and c1t1d0:
# zpool create tank c1t0d0 c1t1d0
These whole disks are found in the /dev/dsk directory and are labeled appropriately by ZFS to contain a single, large slice. Data is dynamically striped across both disks.
To create a mirrored pool, use the mirror keyword, followed by any number of storage devices that will comprise the mirror. Multiple mirrors can be specified by repeating the mirror keyword on the command line. The following command creates a pool with two two-way mirrors:
# zpool create tank mirror c1d0 c2d0 mirror c3d0 c4d0
The second mirror keyword indicates that a new top-level virtual device is being specified. Data is dynamically striped across both mirrors, with data being redundant between each disk appropriately.
For more information about recommended mirrored configurations, see the following site:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Currently, the following operations are supported on a ZFS mirrored configuration:
Adding another set of disks for an additional top-level vdev to an existing mirrored configuration. For more information, see Adding Devices to a Storage Pool.
Attaching additional disks to an existing mirrored configuration. Or, attaching additional disks to a non-replicated configuration to create a mirrored configuration. For more information, see Attaching and Detaching Devices in a Storage Pool.
Replacing a disk or disks in an existing mirrored configuration as long as the replacement disks are greater than or equal to the device to be replaced. For more information, see Replacing Devices in a Storage Pool.
Detaching a disk in a mirrored configuration as long as the remaining devices provide adequate redundancy for the configuration. For more information, see Attaching and Detaching Devices in a Storage Pool.
Currently, the following operations are not supported on a mirrored configuration:
You cannot outright remove a top-level device from a mirrored storage pool. An RFE is filed for this feature.
In current Solaris releases, you can install and boot from a ZFS root file system. Review the following root pool configuration information:
Disks used for the root pool must have a VTOC (SMI) label and the pool must be created with disk slices
A root pool must be created as a mirrored configuration or a single-disk configuration. You cannot add additional disks to create multiple mirrored top-level virtual devices by using the zpool add command, but you can expand a mirrored virtual device by using the zpool attach command.
A RAID-Z or a striped configuration is not supported
A root pool cannot have a separate log device
If you attempt to use an unsupported configuration for a root pool, you will see messages similar to the following:
ERROR: ZFS pool <pool-name> does not support boot environments
# zpool add -f rpool log c0t6d0s0
cannot add to 'rpool': root pool can not have multiple vdevs or separate logs
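By contrast, a minimal supported root pool configuration is a single mirrored top-level virtual device built from labeled slices. A sketch, assuming both disks already have SMI labels with the available space in slice 0 and that the pool name rpool and device names are hypothetical:

# zpool create rpool mirror c0t0d0s0 c0t1d0s0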
For more information about installing and booting a ZFS root file system, see Chapter 5, Installing and Booting a ZFS Root File System.
Creating a single-parity RAID-Z pool is identical to creating a mirrored pool, except that the raidz or raidz1 keyword is used instead of mirror. The following example shows how to create a pool with a single RAID-Z device that consists of five disks:
# zpool create tank raidz c1t0d0 c2t0d0 c3t0d0 c4t0d0 /dev/dsk/c5t0d0
This example illustrates that disks can be specified by using their full paths, if desired. The /dev/dsk/c5t0d0 device is identical to the c5t0d0 device.
A similar configuration could be created with disk slices. For example:
# zpool create tank raidz c1t0d0s0 c2t0d0s0 c3t0d0s0 c4t0d0s0 c5t0d0s0
However, the disks must be preformatted to have an appropriately sized slice zero.
You can create a double-parity or triple-parity RAID-Z configuration by using the raidz2 or raidz3 keyword when the pool is created. For example:
# zpool create tank raidz2 c1t0d0 c2t0d0 c3t0d0
# zpool status -v tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0

errors: No known data errors
# zpool create tank raidz3 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0
# zpool status -v tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz3-0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0
            c4t0d0  ONLINE       0     0     0
            c5t0d0  ONLINE       0     0     0

errors: No known data errors
Currently, the following operations are supported on a ZFS RAID-Z configuration:
Add another set of disks for an additional top-level vdev to an existing RAID-Z configuration. For more information, see Adding Devices to a Storage Pool.
Replace a disk or disks in an existing RAID-Z configuration as long as the replacement disks are greater than or equal to the device to be replaced. For more information, see Replacing Devices in a Storage Pool.
Currently, the following operations are not supported on a RAID-Z configuration:
Attach an additional disk to an existing RAID-Z configuration.
Detach a disk from a RAID-Z configuration.
You cannot outright remove a device from a RAID-Z configuration. An RFE is filed for this feature.
For more information about a RAID-Z configuration, see RAID-Z Storage Pool Configuration.
By default, the ZIL is allocated from blocks within the main pool. However, better performance might be possible by using separate intent log devices, such as NVRAM or a dedicated disk. For more information about ZFS log devices, see Setting Up Separate ZFS Logging Devices.
You can set up a ZFS logging device when the storage pool is created or after the pool is created.
For example, create a mirrored storage pool with mirrored log devices.
# zpool create datap mirror c1t1d0 c1t2d0 mirror c1t3d0 c1t4d0 log mirror c1t5d0 c1t8d0
# zpool status datap
  pool: datap
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        datap       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c1t8d0  ONLINE       0     0     0

errors: No known data errors
For information about recovering from a log device failure, see Example 11–2.
You can create a storage pool with cache devices to cache storage pool data. For example:
# zpool create tank mirror c2t0d0 c2t1d0 c2t3d0 cache c2t5d0 c2t8d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
        cache
          c2t5d0    ONLINE       0     0     0
          c2t8d0    ONLINE       0     0     0

errors: No known data errors
Review the following points when considering whether to create a ZFS storage pool with cache devices:
Using cache devices provides the greatest performance improvement for random-read workloads of mostly static content.
Capacity and reads can be monitored by using the zpool iostat command, as shown in the example after this list.
Single or multiple cache devices can be added when the pool is created or added and removed after the pool is created. For more information, see Example 4–4.
Cache devices cannot be mirrored or be part of a RAID-Z configuration.
If a read error is encountered on a cache device, that read I/O is reissued to the original storage pool device, which might be part of a mirrored or RAID-Z configuration. The content of the cache devices is considered volatile, as is the case with other system caches.
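As noted above, capacity and read activity for cache devices can be monitored with the zpool iostat command. A sketch, assuming a pool named tank; the -v option reports statistics for the individual devices, including cache devices, and the trailing 5 repeats the report every five seconds:

# zpool iostat -v tank 5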
Each storage pool comprises one or more virtual devices. A virtual device is an internal representation of the storage pool that describes the layout of physical storage and its fault characteristics. As such, a virtual device represents the disk devices or files that are used to create the storage pool. A pool can have any number of virtual devices at the top of the configuration, known as top-level virtual devices (top-level vdevs).
If a top-level virtual device contains two or more physical devices, the configuration provides data redundancy as a mirror or RAID-Z virtual device. These virtual devices consist of disks, disk slices, or files. A spare is a special vdev that keeps track of available hot spares for a pool.
The following example shows how to create a pool that consists of two top-level virtual devices, each a mirror of two disks.
# zpool create tank mirror c1d0 c2d0 mirror c3d0 c4d0
The following example shows how to create a pool that consists of one top-level virtual device of four disks.
# zpool create mypool raidz2 c1d0 c2d0 c3d0 c4d0
You can add another top-level virtual device to this pool by using the zpool add command. For example:
# zpool add mypool raidz2 c2d1 c3d1 c4d1 c5d1
Disks, disk slices, or files that are used in non-redundant pools function as top-level virtual devices themselves. Storage pools typically contain multiple top-level virtual devices. ZFS dynamically stripes data among all of the top-level virtual devices in a pool.
Virtual devices and the physical devices that are contained in a ZFS storage pool are displayed with the zpool status command. For example:
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors
Pool creation errors can occur for many reasons. Some of these reasons are obvious, such as when a specified device doesn't exist, while other reasons are more subtle.
Before formatting a device, ZFS first determines if the disk is in use by ZFS or some other part of the operating system. If the disk is in use, you might see errors such as the following:
# zpool create tank c1t0d0 c1t1d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1t0d0s0 is currently mounted on /. Please see umount(1M).
/dev/dsk/c1t0d0s1 is currently mounted on swap. Please see swap(1M).
/dev/dsk/c1t1d0s0 is part of active ZFS pool zeepool. Please see zpool(1M).
Some of these errors can be overridden by using the -f option, but most errors cannot. The following uses cannot be overridden by using the -f option, and you must manually correct them:
The disk or one of its slices contains a file system that is currently mounted. To correct this error, use the umount command.
The disk contains a file system that is listed in the /etc/vfstab file, but the file system is not currently mounted. To correct this error, remove or comment out the line in the /etc/vfstab file.
The disk is in use as the dedicated dump device for the system. To correct this error, use the dumpadm command.
The disk or file is part of an active ZFS storage pool. To correct this error, use the zpool destroy command to destroy the other pool, if it is no longer needed. Or, use the zpool detach command to detach the disk from the other pool. You can only detach a disk from a mirrored storage pool.
The following in-use checks serve as helpful warnings and can be overridden by using the -f option to create the pool:
The disk contains a known file system, though it is not mounted and doesn't appear to be in use.
The disk is part of an SVM volume.
The disk is in use as an alternate boot environment for Solaris Live Upgrade.
The disk is part of a storage pool that has been exported or manually removed from a system. In the latter case, the pool is reported as potentially active, as the disk might or might not be a network-attached drive in use by another system. Be cautious when overriding a potentially active pool.
The following example demonstrates how the -f option is used:
# zpool create tank c1t0d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1t0d0s0 contains a ufs filesystem.
# zpool create -f tank c1t0d0
Ideally, correct the errors rather than use the -f option.
Creating pools with virtual devices of different replication levels is not recommended. The zpool command tries to prevent you from accidentally creating a pool with mismatched levels of redundancy. If you try to create a pool with such a configuration, you see errors similar to the following:
# zpool create tank c1t0d0 mirror c2t0d0 c3t0d0
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: both disk and mirror vdevs are present

# zpool create tank mirror c1t0d0 c2t0d0 mirror c3t0d0 c4t0d0 c5t0d0
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: 2-way mirror and 3-way mirror vdevs are present
You can override these errors with the -f option, but this practice is not recommended. The command also warns you about creating a mirrored or RAID-Z pool using devices of different sizes. While this configuration is allowed, mismatched device sizes result in unused space on the larger device, and the -f option is required to override the warning.
Because creating a pool can fail unexpectedly in different ways, and because formatting disks is such a potentially harmful action, the zpool create command has an additional option, -n, which simulates creating the pool without actually writing to the device. This option performs the device in-use checking and replication level validation, and reports any errors in the process. If no errors are found, you see output similar to the following:
# zpool create -n tank mirror c1t0d0 c1t1d0
would create 'tank' with the following layout:

        tank
          mirror
            c1t0d0
            c1t1d0
Some errors cannot be detected without actually creating the pool. The most common example is specifying the same device twice in the same configuration. This error cannot be reliably detected without writing the data itself, so the create -n command can report success and yet fail to create the pool when run for real.
When a pool is created, the default mount point for the root dataset is /pool-name. This directory must either not exist or be empty. If the directory does not exist, it is automatically created. If the directory is empty, the root dataset is mounted on top of the existing directory. To create a pool with a different default mount point, use the -m option of the zpool create command:
# zpool create home c1t0d0
default mountpoint '/home' exists and is not empty
use '-m' option to provide a different default
# zpool create -m /export/zfs home c1t0d0
This command creates a new pool home and the home dataset with a mount point of /export/zfs.
For more information about mount points, see Managing ZFS Mount Points.
Pools are destroyed by using the zpool destroy command. This command destroys the pool even if it contains mounted datasets.
# zpool destroy tank
Be very careful when you destroy a pool. Make sure you are destroying the right pool and you always have copies of your data. If you accidentally destroy the wrong pool, you can attempt to recover the pool. For more information, see Recovering Destroyed ZFS Storage Pools.
The act of destroying a pool requires that data be written to disk to indicate that the pool is no longer valid. This state information prevents the devices from showing up as a potential pool when you perform an import. If one or more devices are unavailable, the pool can still be destroyed. However, the necessary state information won't be written to these damaged devices.
These devices, when suitably repaired, are reported as potentially active when you create a new pool, and appear as valid devices when you search for pools to import. If a pool has enough faulted devices such that the pool itself is faulted (meaning that a top-level virtual device is faulted), then the command prints a warning and cannot complete without the -f option. This option is necessary because the pool cannot be opened, so whether data is stored there or not is unknown. For example:
# zpool destroy tank
cannot destroy 'tank': pool is faulted
use '-f' to force destruction anyway
# zpool destroy -f tank
For more information about pool and device health, see Determining the Health Status of ZFS Storage Pools.
For more information about importing pools, see Importing ZFS Storage Pools.
Most of the basic information regarding devices is covered in Components of a ZFS Storage Pool. Once a pool has been created, you can perform several tasks to manage the physical devices within the pool.
You can dynamically add space to a pool by adding a new top-level virtual device. This space is immediately available to all datasets within the pool. To add a new virtual device to a pool, use the zpool add command. For example:
# zpool add zeepool mirror c2t1d0 c2t2d0
The format for specifying the virtual devices is the same as for the zpool create command, and the same rules apply. Devices are checked to determine if they are in use, and the command cannot change the level of redundancy without the -f option. The command also supports the -n option so that you can perform a dry run. For example:
# zpool add -n zeepool mirror c3t1d0 c3t2d0
would update 'zeepool' to the following configuration:

        zeepool
          mirror
            c1t0d0
            c1t1d0
          mirror
            c2t1d0
            c2t2d0
          mirror
            c3t1d0
            c3t2d0
This command syntax would add mirrored devices c3t1d0 and c3t2d0 to zeepool's existing configuration.
For more information about how virtual device validation is done, see Detecting In-Use Devices.
In the following example, another mirror is added to an existing mirrored ZFS configuration on a Sun Fire x4500 system.
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0

errors: No known data errors
# zpool add tank mirror c0t3d0 c1t3d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors
Additional disks can be added similarly to a RAID-Z configuration. The following example shows how to convert a storage pool with one RAID-Z device that contains three disks to a storage pool with two RAID-Z devices that contain three disks each.
# zpool status rzpool
  pool: rzpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rzpool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0

errors: No known data errors
# zpool add rzpool raidz c2t2d0 c2t3d0 c2t4d0
# zpool status rzpool
  pool: rzpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rzpool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
          raidz1-1  ONLINE       0     0     0
            c2t2d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
            c2t4d0  ONLINE       0     0     0

errors: No known data errors
The following example shows how to add a mirrored log device to a mirrored storage pool. For more information about using log devices in your storage pool, see Setting Up Separate ZFS Logging Devices.
# zpool status newpool
  pool: newpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        newpool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0

errors: No known data errors
# zpool add newpool log mirror c0t6d0 c0t7d0
# zpool status newpool
  pool: newpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        newpool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0
        logs
          mirror-1  ONLINE       0     0     0
            c0t6d0  ONLINE       0     0     0
            c0t7d0  ONLINE       0     0     0

errors: No known data errors
You can attach a log device to an existing log device to create a mirrored log device. This operation is identical to attaching a device in an unmirrored storage pool.
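A minimal sketch, assuming a pool with a single log device c0t10d0 and c0t11d0 as the new, hypothetical device:

# zpool attach pool c0t10d0 c0t11d0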
Log devices can be removed by using the zpool remove command. The mirrored log device in the previous example can be removed by specifying the mirror-1 argument. For example:
# zpool remove newpool mirror-1
# zpool status newpool
  pool: newpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        newpool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0

errors: No known data errors
If your pool configuration only contains one log device, you would remove the log device by specifying the device name. For example:
# zpool status pool
  pool: pool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c0t8d0  ONLINE       0     0     0
            c0t9d0  ONLINE       0     0     0
        logs
          c0t10d0   ONLINE       0     0     0

errors: No known data errors
# zpool remove pool c0t10d0
You can add and remove cache devices to your ZFS storage pool.
Use the zpool add command to add cache devices. For example:
# zpool add tank cache c2t5d0 c2t8d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
        cache
          c2t5d0    ONLINE       0     0     0
          c2t8d0    ONLINE       0     0     0

errors: No known data errors
Cache devices cannot be mirrored or be part of a RAID-Z configuration.
Use the zpool remove command to remove cache devices. For example:
# zpool remove tank c2t5d0 c2t8d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0

errors: No known data errors
Currently, the zpool remove command only supports removing hot spares, log devices, and cache devices. Devices that are part of the main mirrored pool configuration can be removed by using the zpool detach command. Non-redundant and RAID-Z devices cannot be removed from a pool.
For more information about using cache devices in a ZFS storage pool, see Creating a ZFS Storage Pool with Cache Devices.
In addition to the zpool add command, you can use the zpool attach command to add a new device to an existing mirrored or non-mirrored device.
If you are adding and detaching a disk in a ZFS root pool to replace a disk, see How to Replace a Disk in the ZFS Root Pool.
In this example, zeepool is an existing two-way mirror that is transformed to a three-way mirror by attaching c2t1d0, the new device, to the existing device, c1t1d0.
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zeepool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0

errors: No known data errors
# zpool attach zeepool c1t1d0 c2t1d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Fri Jan 8 12:59:20 2010
config:

        NAME        STATE     READ WRITE CKSUM
        zeepool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0  592K resilvered

errors: No known data errors
If the existing device is part of a two-way mirror, attaching the new device creates a three-way mirror, and so on. In either case, the new device begins to resilver immediately.
In addition, you can convert a non-redundant storage pool into a redundant storage pool by using the zpool attach command. For example:
# zpool create tank c0t1d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          c0t1d0    ONLINE       0     0     0

errors: No known data errors
# zpool attach tank c0t1d0 c1t1d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Fri Jan 8 14:28:23 2010
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0  73.5K resilvered

errors: No known data errors
You can use the zpool detach command to detach a device from a mirrored storage pool. For example:
# zpool detach zeepool c2t1d0
However, this operation is refused if there are no other valid replicas of the data. For example:
# zpool detach newpool c1t2d0
cannot detach c1t2d0: only applicable to mirror and replacing vdevs
ZFS allows individual devices to be taken offline or brought online. When hardware is unreliable or not functioning properly, ZFS continues to read or write data to the device, assuming the condition is only temporary. If the condition is not temporary, it is possible to instruct ZFS to ignore the device by bringing it offline. ZFS does not send any requests to an offlined device.
Devices do not need to be taken offline in order to replace them.
You can use the offline command when you need to temporarily disconnect storage. For example, if you need to physically disconnect an array from one set of Fibre Channel switches and connect the array to a different set, you could take the LUNs offline from the array that was used in ZFS storage pools. After the array was reconnected and operational on the new set of switches, you could then bring the same LUNs online. Data that had been added to the storage pools while the LUNs were offline would resilver to the LUNs after they were brought back online.
This scenario is possible assuming that the systems in question see the storage once it is attached to the new switches, possibly through different controllers than before, and your pools are set up as RAID-Z or mirrored configurations.
You can take a device offline by using the zpool offline command. The device can be specified by path or by short name, if the device is a disk. For example:
# zpool offline tank c1t0d0
bringing device c1t0d0 offline
Keep the following points in mind when taking a device offline:
You cannot take a pool offline to the point where it becomes faulted. For example, you cannot take offline two devices out of a raidz1 configuration, nor can you take offline a top-level virtual device.
# zpool offline tank c1t0d0
cannot offline c1t0d0: no valid replicas
By default, the offline state is persistent. The device remains offline when the system is rebooted.
To temporarily take a device offline, use the zpool offline -t option. For example:
# zpool offline -t tank c1t0d0
bringing device 'c1t0d0' offline
When the system is rebooted, this device is automatically returned to the ONLINE state.
When a device is taken offline, it is not detached from the storage pool. If you attempt to use the offlined device in another pool, even after the original pool is destroyed, you will see a message similar to the following:
device is part of exported or potentially active ZFS pool. Please see zpool(1M)
If you want to use the offlined device in another storage pool after destroying the original storage pool, first bring the device back online, then destroy the original storage pool.
If you want to keep the original storage pool, another option is to replace the existing device in the original storage pool with another comparable device. For information about replacing devices, see Replacing Devices in a Storage Pool.
Offlined devices show up in the OFFLINE state when you query pool status. For information about querying pool status, see Querying ZFS Storage Pool Status.
For more information on device health, see Determining the Health Status of ZFS Storage Pools.
Once a device is taken offline, it can be restored by using the zpool online command:
# zpool online tank c1t0d0
bringing device c1t0d0 online
When a device is brought online, any data that has been written to the pool is resynchronized to the newly available device. Note that you cannot use device onlining to replace a disk. If you offline a device, replace the drive, and try to bring it online, it remains in the faulted state.
If you attempt to online a faulted device, a message similar to the following is displayed:
# zpool online tank c1t0d0
warning: device 'c1t0d0' onlined, but remains in faulted state
use 'zpool replace' to replace devices that are no longer present
You might also see the faulted disk message from fmd.
SUNW-MSG-ID: ZFS-8000-D3, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Fri Aug 28 14:08:39 MDT 2009
PLATFORM: SUNW,Sun-Fire-T200, CSN: -, HOSTNAME: neo2
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: 9da778a7-a828-c88a-d679-c9a7873f4808
DESC: A ZFS device failed. Refer to http://sun.com/msg/ZFS-8000-D3 for more information.
AUTO-RESPONSE: No automated response will occur.
IMPACT: Fault tolerance of the pool may be compromised.
REC-ACTION: Run 'zpool status -x' and replace the bad device.
For more information on replacing a faulted device, see Resolving a Missing Device.
You can use the zpool online -e command to expand a LUN. By default, a LUN that is added to a pool is not expanded to its full size unless the autoexpand pool property is enabled. You can expand the LUN automatically by using the zpool online -e command, even if the LUN is already online or is currently offline. For example:
# zpool online -e tank c1t13d0
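Alternatively, a sketch of enabling the autoexpand property so that LUNs are expanded automatically, assuming the pool is named tank:

# zpool set autoexpand=on tank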
If a device is taken offline due to a failure that causes errors to be listed in the zpool status output, you can clear the error counts with the zpool clear command.
If specified with no arguments, this command clears all device errors within the pool. For example:
# zpool clear tank
If one or more devices are specified, this command only clears the errors associated with the specified devices. For example:
# zpool clear tank c1t0d0
For more information on clearing zpool errors, see Clearing Transient Errors.
You can replace a device in a storage pool by using the zpool replace command.
If you are physically replacing a device with another device in the same location in a redundant pool, then you only need to identify the replaced device. ZFS recognizes that it is a different disk in the same location. For example, to replace a failed disk (c1t1d0) by removing the disk and replacing it in the same location, use the syntax similar to the following:
# zpool replace tank c1t1d0
If you are replacing a device in a storage pool with a disk in a different physical location, you will need to specify both devices. For example:
# zpool replace tank c1t1d0 c1t2d0
If you are replacing a disk in the ZFS root pool, see How to Replace a Disk in the ZFS Root Pool.
The basic steps for replacing a disk are as follows (a combined example appears after the list):
Offline the disk, if necessary, with the zpool offline command.
Remove the disk to be replaced.
Insert the replacement disk.
Run the zpool replace command. For example:
# zpool replace tank c1t1d0
Put the disk back online with the zpool online command.
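Combined, the steps might look similar to the following sketch, assuming a pool named tank and a failed disk c1t1d0 that is replaced in the same physical location:

# zpool offline tank c1t1d0
(physically remove the failed disk and insert the replacement in the same slot)
# zpool replace tank c1t1d0
# zpool online tank c1t1d0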
On some systems, such as the Sun Fire x4500, you must unconfigure a disk before you take it offline. If you are just replacing a disk in the same slot position on this system, then you can just run the zpool replace command as identified above.
For an example of replacing a disk on this system, see Example 11–1.
Keep the following considerations in mind when replacing devices in a ZFS storage pool:
If you set the pool property autoreplace to on, then any new device, found in the same physical location as a device that previously belonged to the pool, is automatically formatted and replaced without using the zpool replace command. This feature might not be available on all hardware types.
The replacement device must be greater than or equal to the minimum size of all the devices in a mirrored or RAID-Z configuration.
When a replacement device that is greater in size than the device it is replacing is added to a pool, it is not automatically expanded to its full size. The autoexpand pool property value determines whether a replacement LUN is expanded to its full size when the disk is added to the pool. By default, the autoexpand property is disabled. You can enable this property to expand LUN size before or after the larger LUN is added to the pool.
In the following example, two 16-Gbyte disks in a mirrored pool are replaced with two 72-Gbyte disks. The autoexpand property is enabled after the disk replacements to expand the LUNs to their full sizes.
# zpool create pool mirror c1t16d0 c1t17d0
# zpool status
  pool: pool
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        pool         ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c1t16d0  ONLINE       0     0     0
            c1t17d0  ONLINE       0     0     0

# zpool list pool
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
pool  16.8G  76.5K  16.7G   0%  ONLINE  -
# zpool replace pool c1t16d0 c1t1d0
# zpool replace pool c1t17d0 c1t2d0
# zpool list pool
NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
pool  16.8G  88.5K  16.7G   0%  ONLINE  -
# zpool set autoexpand=on pool
# zpool list pool
NAME   SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
pool  68.2G  117K  68.2G   0%  ONLINE  -
Replacing many disks in a large pool is time consuming because the data must be resilvered onto the new disks. You might also consider running the zpool scrub command between disk replacements to ensure that the replacement devices are operational and the data is written correctly, as shown in the sketch after this list.
If a failed disk has been replaced automatically with a hot spare, then you might need to detach the spare after the failed disk is replaced. For information about detaching a hot spare, see Activating and Deactivating Hot Spares in Your Storage Pool.
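For example, a sketch of verifying a replacement between disk swaps, assuming a pool named tank; the scrub progress and any errors appear in the zpool status output:

# zpool scrub tank
# zpool status tank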
For more information about replacing devices, see Resolving a Missing Device and Replacing or Repairing a Damaged Device.
The hot spares feature enables you to identify disks that could be used to replace a failed or faulted device in one or more storage pools. Designating a device as a hot spare means that the device is not an active device in a pool, but if an active device in the pool fails, the hot spare automatically replaces the failed device.
Devices can be designated as hot spares in the following ways:
When the pool is created with the zpool create command
After the pool is created with the zpool add command
Hot spare devices can be shared between multiple pools, but spares cannot be shared between multiple pools on different systems
Designate devices as hot spares when the pool is created. For example:
# zpool create trinity mirror c1t1d0 c2t1d0 spare c1t2d0 c2t2d0
# zpool status trinity
  pool: trinity
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        trinity     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
        spares
          c1t2d0    AVAIL
          c2t2d0    AVAIL

errors: No known data errors
Designate hot spares by adding them to a pool after the pool is created. For example:
# zpool add neo spare c5t3d0 c6t3d0
# zpool status neo
  pool: neo
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        neo         ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c3t3d0  ONLINE       0     0     0
            c4t3d0  ONLINE       0     0     0
        spares
          c5t3d0    AVAIL
          c6t3d0    AVAIL

errors: No known data errors
Multiple pools can share devices that are designated as hot spares. For example:
# zpool create zeepool mirror c1t1d0 c2t1d0 spare c1t2d0 c2t2d0
# zpool create tank raidz c3t1d0 c4t1d0 spare c1t2d0 c2t2d0
Hot spares can be removed from a storage pool by using the zpool remove command. For example:
# zpool remove zeepool c2t3d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zeepool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
        spares
          c1t3d0    AVAIL

errors: No known data errors
A hot spare cannot be removed if it is currently used by the storage pool.
Keep the following points in mind when using ZFS hot spares:
Currently, the zpool remove command can only be used to remove hot spares, cache devices, and log devices.
Add a disk as a spare that is equal to or larger than the size of the largest disk in the pool. Adding a smaller disk as a spare to a pool is allowed. However, when the smaller spare disk is activated, either automatically or with the zpool replace command, the operation fails with an error similar to the following:
cannot replace disk3 with disk4: device is too small
You can share a hot spare between pools. However, you cannot export a pool with an in-use shared spare unless you use the zpool export -f (force) option. This behavior prevents the potential data corruption scenario of exporting a pool with an in-use shared spare and another pool attempts to use the shared spare from the exported pool. If you export a pool with an in-use shared spare by using the -f option, be aware that this operation might lead to data corruption if another pool attempts to activate the in-use shared spare.
Hot spares are activated in the following ways:
Manual replacement – Replace a failed device in a storage pool with a hot spare by using the zpool replace command.
Automatic replacement – When a fault is received, an FMA agent examines the pool to see if it has any available hot spares. If so, it replaces the faulted device with an available spare.
If a hot spare that is currently in use fails, the agent detaches the spare and thereby cancels the replacement. The agent then attempts to replace the device with another hot spare, if one is available. This feature is currently limited by the fact that the ZFS diagnosis engine only emits faults when a device disappears from the system.
If you physically replace a failed device with an active spare, you can reactivate the original device by using the zpool detach command to detach the spare. If you set the autoreplace pool property to on, the spare is automatically detached back to the spare pool when the new device is inserted and the online operation completes.
Manually replace a device with a hot spare by using the zpool replace command. See Example 4–7.
A faulted device is automatically replaced if a hot spare is available. For example:
# zpool status -x
  pool: zeepool
 state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: resilver completed after 0h0m with 0 errors on Mon Jan 11 10:20:35 2010
config:

        NAME          STATE     READ WRITE CKSUM
        zeepool       DEGRADED     0     0     0
          mirror-0    DEGRADED     0     0     0
            c1t2d0    ONLINE       0     0     0
            spare-1   DEGRADED     0     0     0
              c2t1d0  UNAVAIL      0     0     0  cannot open
              c2t3d0  ONLINE       0     0     0  88.5K resilvered
        spares
          c2t3d0      INUSE     currently in use

errors: No known data errors
Currently, three ways to deactivate hot spares are available:
Removing the hot spare from the storage pool
Detaching a hot spare after a failed disk is physically replaced. See Example 4–8.
Temporarily or permanently swapping in the hot spare. See Example 4–9.
In this example, the zpool replace command is used to replace disk c2t1d0 with the spare disk c2t3d0.
# zpool replace zeepool c2t1d0 c2t3d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Wed Oct 21 12:47:29 2009
config:

        NAME        STATE     READ WRITE CKSUM
        zeepool     ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            spare   ONLINE       0     0     0
              c2t1d0  ONLINE     0     0     0
              c2t3d0  ONLINE     0     0     0  76.5K resilvered
        spares
          c2t3d0    INUSE     currently in use

errors: No known data errors
After the faulted device is replaced, use the zpool detach command to return the hot spare back to the spare set. For example:
# zpool detach zeepool c2t3d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: resilver completed with 0 errors on Fri Aug 28 14:21:02 2009
config:

        NAME        STATE     READ WRITE CKSUM
        zeepool     ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
        spares
          c2t3d0    AVAIL

errors: No known data errors
If you want to replace a failed disk with a hot spare that is currently replacing it, then detach the original (failed) disk. If the failed disk is eventually replaced, then you can add it back to the storage pool as a spare. For example:
# zpool status zeepool
  pool: zeepool
 state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: resilver in progress for 0h0m, 70.47% done, 0h0m to go
config:

        NAME          STATE     READ WRITE CKSUM
        zeepool       DEGRADED     0     0     0
          mirror-0    DEGRADED     0     0     0
            c1t2d0    ONLINE       0     0     0
            spare-1   DEGRADED     0     0     0
              c2t1d0  UNAVAIL      0     0     0  cannot open
              c2t3d0  ONLINE       0     0     0  51.5M resilvered
        spares
          c2t3d0      INUSE     currently in use

errors: No known data errors
# zpool detach zeepool c2t1d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Tue Oct 20 11:58:43 2009
config:

        NAME        STATE     READ WRITE CKSUM
        zeepool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0  70.5M resilvered

errors: No known data errors
(Original failed disk c2t1d0 is physically replaced)
# zpool add zeepool spare c2t1d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Tue Oct 20 11:58:43 2009
config:

        NAME        STATE     READ WRITE CKSUM
        zeepool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0  70.5M resilvered
        spares
          c2t1d0    AVAIL

errors: No known data errors
You can use the zpool get command to display pool property information. For example:
# zpool get all export
NAME    PROPERTY       VALUE                 SOURCE
export  size           33.8G                 -
export  capacity       0%                    -
export  altroot        -                     default
export  health         ONLINE                -
export  guid           2064230982813446135   default
export  version        22                    default
export  bootfs         -                     default
export  delegation     on                    default
export  autoreplace    off                   default
export  cachefile      -                     default
export  failmode       wait                  default
export  listsnapshots  off                   default
export  autoexpand     off                   default
export  dedupditto     0                     default
export  dedupratio     3.00x                 -
export  free           33.6G                 -
export  allocated      105M                  -
Storage pool properties can be set with the zpool set command. For example:
# zpool set autoreplace=on mpool
# zpool get autoreplace mpool
NAME   PROPERTY     VALUE  SOURCE
mpool  autoreplace  on     default
Property Name | Type | Default Value | Description
---|---|---|---
allocated | String | N/A | Read-only value that identifies the amount of storage space within the pool that has been physically allocated.
altroot | String | off | Identifies an alternate root directory. If set, this directory is prepended to any mount points within the pool. This property can be used when examining an unknown pool, if the mount points cannot be trusted, or in an alternate boot environment, where the typical paths are not valid.
autoreplace | Boolean | off | Controls automatic device replacement. If set to off, device replacement must be initiated by using the zpool replace command. If set to on, any new device found in the same physical location as a device that previously belonged to the pool is automatically formatted and replaced. The default behavior is off. This property can also be referred to by its shortened column name, replace.
bootfs | String | N/A | Identifies the default bootable dataset for the root pool. This property is expected to be set mainly by the installation and upgrade programs.
cachefile | String | N/A | Controls where pool configuration information is cached. All pools in the cache are automatically imported when the system boots. However, installation and clustering environments might need to cache this information in a different location so that pools are not automatically imported. You can set this property to cache pool configuration information in a different location that can be imported later by using the zpool import -c command. For most ZFS configurations, this property would not be used.
capacity | Number | N/A | Read-only value that identifies the percentage of pool space used. This property can also be referred to by its shortened column name, cap.
dedupditto | String | N/A | Sets a threshold. If the reference count for a deduplicated block rises above the threshold, another ditto copy of the block is stored automatically. The default value is 0.
dedupratio | String | N/A | Read-only deduplication ratio achieved for a pool, expressed as a multiplier.
delegation | Boolean | on | Controls whether a non-privileged user can be granted access permissions that are defined for the dataset. For more information, see Chapter 9, ZFS Delegated Administration.
failmode | String | wait | Controls the system behavior in the event of catastrophic pool failure. This condition is typically a result of a loss of connectivity to the underlying storage device(s) or a failure of all devices within the pool. The behavior is determined by one of the following values: wait blocks all I/O requests to the pool until device connectivity is restored and the errors are cleared by using the zpool clear command; in this state, I/O operations to the pool are blocked, but read operations might succeed, and the pool remains in the wait state until the device issue is resolved (this is the default behavior). continue returns EIO to any new write I/O requests but allows reads to any of the remaining healthy devices; any write requests that have yet to be committed to disk are blocked, and after the device is reconnected or replaced, the errors must be cleared with the zpool clear command. panic prints a message to the console and generates a system crash dump.
free | String | N/A | Read-only value that identifies the number of blocks within the pool that are not allocated.
guid | String | N/A | Read-only property that identifies the unique identifier for the pool.
health | String | N/A | Read-only property that identifies the current health of the pool, as either ONLINE, DEGRADED, FAULTED, OFFLINE, REMOVED, or UNAVAIL.
listsnapshots | String | off | Controls whether snapshot information that is associated with this pool is displayed with the zfs list command. If this property is disabled, snapshot information can be displayed with the zfs list -t snapshot command. The default value is off.
size | Number | N/A | Read-only property that identifies the total size of the storage pool.
used | Number | N/A | Read-only property that identifies the amount of storage space used within the pool.
version | Number | N/A | Identifies the current on-disk version of the pool. The preferred method of updating pools is with the zpool upgrade command, although this property can be used when a specific version is needed for backwards compatibility. This property can be set to any number between 1 and the current version reported by the zpool upgrade -v command. The value current is an alias for the latest supported version.
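For example, if a pool must remain importable on a system that runs older software, one option is to create it at a specific older version and then confirm the result with the zpool get command. The following is only a sketch: the pool name tank2, the devices, and version 19 are hypothetical placeholders, not values from the examples above.
# zpool create -o version=19 tank2 mirror c1t4d0 c1t5d0 # zpool get version tank2   # pool name, devices, and version are hypothetical |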
The zpool list command provides a number of ways to request information regarding pool status. The information available generally falls into three categories: basic usage information, I/O statistics, and health status. All three types of storage pool information are covered in this section.
You can use the zpool list command to display basic information about pools.
With no arguments, the command displays all the fields for all pools on the system. For example:
# zpool list NAME SIZE USED AVAIL CAP HEALTH ALTROOT tank 80.0G 22.3G 47.7G 28% ONLINE - dozer 1.2T 384G 816G 32% ONLINE - |
This output displays the following information:
NAME: The name of the pool.
SIZE: The total size of the pool, equal to the sum of the sizes of all top-level virtual devices.
USED: The amount of space allocated by all datasets and internal metadata. Note that this amount differs from the amount of space reported at the file system level.
For more information about determining available file system space, see ZFS Space Accounting.
AVAIL: The amount of unallocated space in the pool.
CAP (CAPACITY): The amount of space used, expressed as a percentage of the total space.
HEALTH: The current health status of the pool.
For more information about pool health, see Determining the Health Status of ZFS Storage Pools.
ALTROOT: The alternate root of the pool, if any.
For more information about alternate root pools, see Using ZFS Alternate Root Pools.
You can also gather statistics for a specific pool by specifying the pool name. For example:
# zpool list tank NAME SIZE USED AVAIL CAP HEALTH ALTROOT tank 80.0G 22.3G 47.7G 28% ONLINE - |
Specific statistics can be requested by using the -o option. This option allows for custom reports or a quick way to list pertinent information. For example, to list only the name and size of each pool, you use the following syntax:
# zpool list -o name,size NAME SIZE tank 80.0G dozer 1.2T |
The column names correspond to the properties that are listed in Listing Information About All Storage Pools.
The default output for the zpool list command is designed for readability, and is not easy to use as part of a shell script. To aid programmatic uses of the command, the -H option can be used to suppress the column headings and separate fields by tabs, rather than by spaces. For example, to request a simple list of all pool names on the system:
# zpool list -Ho name tank dozer |
Here is another example:
# zpool list -H -o name,size tank 80.0G dozer 1.2T |
ZFS automatically logs successful zfs and zpool commands that modify pool state information. This information can be displayed by using the zpool history command.
For example, the following syntax displays the command output for the root pool.
# zpool history History for 'rpool': 2009-05-07.13:51:00 zpool create -f -o failmode=continue -R /a -m legacy -o cachefile= /tmp/root/etc/zfs/zpool.cache rpool c1t0d0s0 2009-05-07.13:51:01 zfs set canmount=noauto rpool 2009-05-07.13:51:02 zfs set mountpoint=/rpool rpool 2009-05-07.13:51:02 zfs create -o mountpoint=legacy rpool/ROOT 2009-05-07.13:51:03 zfs create -b 8192 -V 2048m rpool/swap 2009-05-07.13:51:04 zfs create -b 131072 -V 1024m rpool/dump 2009-05-07.13:51:09 zfs create -o canmount=noauto rpool/ROOT/snv_114 2009-05-07.13:51:10 zpool set bootfs=rpool/ROOT/snv_114 rpool 2009-05-07.13:51:10 zfs set mountpoint=/ rpool/ROOT/snv_114 2009-05-07.13:51:11 zfs set canmount=on rpool 2009-05-07.13:51:12 zfs create -o mountpoint=/export rpool/export 2009-05-07.13:51:12 zfs create rpool/export/home |
You can use similar output on your system to identify the exact set of ZFS commands that was executed to troubleshoot an error scenario.
The features of the history log are as follows:
The log cannot be disabled.
The log is saved persistently on disk, which means the log is saved across system reboots.
The log is implemented as a ring buffer. The minimum size is 128 Kbytes. The maximum size is 32 Mbytes.
For smaller pools, the maximum size is capped at 1 percent of the pool size, where this size is determined at pool creation time.
The log requires no administration, which means that tuning the size of the log or changing its location is unnecessary.
To identify the command history of a specific storage pool, use syntax similar to the following:
# zpool history mypool History for 'mypool': 2009-06-02.10:56:54 zpool create mypool mirror c0t4d0 c0t5d0 2009-06-02.10:57:31 zpool add mypool spare c0t6d0 2009-06-02.10:57:54 zpool offline mypool c0t5d0 2009-06-02.10:58:02 zpool online mypool c0t5d0 |
Use the -l option to display a long format that includes the user name, the hostname, and the zone in which the operation was performed. For example:
# zpool history -l mypool History for 'mypool': 2009-06-02.10:56:54 zpool create mypool mirror c0t4d0 c0t5d0 [user root on neo:global] 2009-06-02.10:57:31 zpool add mypool spare c0t6d0 [user root on neo:global] 2009-06-02.10:57:54 zpool offline mypool c0t5d0 [user root on neo:global] 2009-06-02.10:58:02 zpool online mypool c0t5d0 [user root on neo:global] |
Use the -i option to display internal event information that can be used for diagnostic purposes. For example:
# zpool history -i mypool History for 'mypool': 2009-06-02.10:56:54 zpool create mypool mirror c0t4d0 c0t5d0 2009-06-02.10:57:31 zpool add mypool spare c0t6d0 2009-06-02.10:57:54 zpool offline mypool c0t5d0 2009-06-02.10:58:02 zpool online mypool c0t5d0 2009-06-02.11:02:20 [internal create txg:23] dataset = 24 2009-06-02.11:02:20 [internal property set txg:24] mountpoint=/data dataset = 24 2009-06-02.11:02:20 zfs create -o mountpoint=/data mypool/data 2009-06-02.11:02:34 [internal create txg:26] dataset = 30 2009-06-02.11:02:34 zfs create mypool/data/datab |
To request I/O statistics for a pool or specific virtual devices, use the zpool iostat command. Similar to the iostat command, this command can display a static snapshot of all I/O activity so far, as well as updated statistics for every specified interval. The following statistics are reported:
USED CAPACITY: The amount of data currently stored in the pool or device. This figure differs from the amount of space available to actual file systems by a small amount due to internal implementation details.
For more information about the difference between pool space and dataset space, see ZFS Space Accounting.
AVAILABLE CAPACITY: The amount of space available in the pool or device. As with the used statistic, this amount differs from the amount of space available to datasets by a small margin.
READ OPERATIONS: The number of read I/O operations sent to the pool or device, including metadata requests.
WRITE OPERATIONS: The number of write I/O operations sent to the pool or device.
READ BANDWIDTH: The bandwidth of all read operations (including metadata), expressed as units per second.
WRITE BANDWIDTH: The bandwidth of all write operations, expressed as units per second.
With no options, the zpool iostat command displays the accumulated statistics since boot for all pools on the system. For example:
# zpool iostat capacity operations bandwidth pool used avail read write read write ---------- ----- ----- ----- ----- ----- ----- tank 100G 20.0G 1.2M 102K 1.2M 3.45K dozer 12.3G 67.7G 132K 15.2K 32.1K 1.20K |
Because these statistics are cumulative since boot, bandwidth might appear low if the pool is relatively idle. You can request a more accurate view of current bandwidth usage by specifying an interval. For example:
# zpool iostat tank 2 capacity operations bandwidth pool used avail read write read write ---------- ----- ----- ----- ----- ----- ----- tank 100G 20.0G 1.2M 102K 1.2M 3.45K tank 100G 20.0G 134 0 1.34K 0 tank 100G 20.0G 94 342 1.06K 4.1M |
In this example, the command displays usage statistics only for the pool tank every two seconds until you type Ctrl-C. Alternately, you can specify an additional count parameter, which causes the command to terminate after the specified number of iterations. For example, zpool iostat 2 3 would print a summary every two seconds for three iterations, for a total of six seconds. If there is a single pool, then the statistics are displayed on consecutive lines. If more than one pool exists, then an additional dashed line delineates each iteration to provide visual separation.
In addition to pool-wide I/O statistics, the zpool iostat command can display statistics for virtual devices. This command can be used to identify abnormally slow devices, or simply to observe the distribution of I/O generated by ZFS. To request the complete virtual device layout as well as all I/O statistics, use the zpool iostat -v command. For example:
# zpool iostat -v capacity operations bandwidth tank used avail read write read write ---------- ----- ----- ----- ----- ----- ----- mirror 20.4G 59.6G 0 22 0 6.00K c1t0d0 - - 1 295 11.2K 148K c1t1d0 - - 1 299 11.2K 148K ---------- ----- ----- ----- ----- ----- ----- total 24.5K 149M 0 22 0 6.00K |
Note two important things when viewing I/O statistics on a virtual device basis:
First, space usage is only available for top-level virtual devices. The way in which space is allocated among mirror and RAID-Z virtual devices is particular to the implementation and not easily expressed as a single number.
Second, the numbers might not add up exactly as you would expect them to. In particular, operations across RAID-Z and mirrored devices will not be exactly equal. This difference is particularly noticeable immediately after a pool is created, as a significant amount of I/O is done directly to the disks as part of pool creation that is not accounted for at the mirror level. Over time, these numbers should gradually equalize, although broken, unresponsive, or offlined devices can affect this symmetry as well.
You can use the same set of options (interval and count) when examining virtual device statistics.
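For example, to sample per-device statistics at five-second intervals, three times, rather than viewing only the totals accumulated since boot, you might run a command similar to the following. The pool name tank is used here only as a placeholder.
# zpool iostat -v tank 5 3   # tank is a placeholder pool name |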
ZFS provides an integrated method of examining pool and device health. The health of a pool is determined from the state of all its devices. This state information is displayed by using the zpool status command. In addition, potential pool and device failures are reported by fmd and are displayed on the system console and logged in the /var/adm/messages file. This section describes how to determine pool and device health. This chapter does not document how to repair or recover from unhealthy pools. For more information on troubleshooting and data recovery, see Chapter 11, ZFS Troubleshooting and Pool Recovery.
Each device can fall into one of the following states:
ONLINE: The device or virtual device is in normal working order. Although some transient errors might still occur, the device is otherwise in working order.
DEGRADED: The virtual device has experienced a failure but is still able to function. This state is most common when a mirror or RAID-Z device has lost one or more constituent devices. The fault tolerance of the pool might be compromised, because a subsequent fault in another device might be unrecoverable.
FAULTED: The device or virtual device is completely inaccessible. This status typically indicates total failure of the device, such that ZFS is incapable of sending data to it or receiving data from it. If a top-level virtual device is in this state, then the pool is completely inaccessible.
OFFLINE: The device has been explicitly taken offline by the administrator.
UNAVAIL: The device or virtual device cannot be opened. In some cases, pools with UNAVAIL devices appear in DEGRADED mode. If a top-level virtual device is UNAVAIL, then nothing in the pool can be accessed.
REMOVED: The device was physically removed while the system was running. Device removal detection is hardware-dependent and might not be supported on all platforms.
The health of a pool is determined from the health of all its top-level virtual devices. If all virtual devices are ONLINE, then the pool is also ONLINE. If any one of the virtual devices is DEGRADED or UNAVAIL, then the pool is also DEGRADED. If a top-level virtual device is FAULTED or OFFLINE, then the pool is also FAULTED. A pool in the faulted state is completely inaccessible. No data can be recovered until the necessary devices are attached or repaired. A pool in the degraded state continues to run, but you might not achieve the same level of data redundancy or data throughput as if the pool were online.
The simplest way to request a quick overview of pool health status is to use the zpool status command:
# zpool status -x all pools are healthy |
Specific pools can be examined by specifying a pool name to the command. Any pool that is not in the ONLINE state should be investigated for potential problems, as described in the next section.
You can request a more detailed health summary by using the -v option. For example:
# zpool status -v tank pool: tank state: DEGRADED status: One or more devices could not be opened. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Attach the missing device and online it using 'zpool online'. see: http://www.sun.com/msg/ZFS-8000-2Q scrub: none requested config: NAME STATE READ WRITE CKSUM tank DEGRADED 0 0 0 mirror DEGRADED 0 0 0 c1t0d0 FAULTED 0 0 0 cannot open c1t1d0 ONLINE 0 0 0 errors: No known data errors |
This output displays a complete description of why the pool is in its current state, including a readable description of the problem and a link to a knowledge article for more information. Each knowledge article provides up-to-date information on the best way to recover from your current problem. Using the detailed configuration information, you should be able to determine which device is damaged and how to repair the pool.
In the above example, the faulted device should be replaced. After the device is replaced, use the zpool online command to bring the device back online. For example:
# zpool online tank c1t0d0 Bringing device c1t0d0 online # zpool status -x all pools are healthy |
If the autoreplace property is on, you might not have to online the replaced device.
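If you want replaced devices to be brought into service automatically in this way, one option is to enable the property on the pool ahead of time; the pool name tank below simply continues the previous example.
# zpool set autoreplace=on tank # zpool get autoreplace tank   # tank is the example pool name |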
If a pool has an offlined device, the command output identifies the problem pool. For example:
# zpool status -x pool: tank state: DEGRADED status: One or more devices has been taken offline by the administrator. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Online the device using 'zpool online' or replace the device with 'zpool replace'. scrub: none requested config: NAME STATE READ WRITE CKSUM tank DEGRADED 0 0 0 mirror DEGRADED 0 0 0 c1t0d0 ONLINE 0 0 0 c1t1d0 OFFLINE 0 0 0 errors: No known data errors |
The READ and WRITE columns provide a count of I/O errors seen on the device, while the CKSUM column provides a count of uncorrectable checksum errors that occurred on the device. Both of these error counts likely indicate potential device failure, and some corrective action is needed. If non-zero errors are reported for a top-level virtual device, portions of your data might have become inaccessible.
The errors: field identifies any known data errors.
In the example output above, the offlined device is not causing data errors.
For more information about diagnosing and repairing faulted pools and data, see Chapter 11, ZFS Troubleshooting and Pool Recovery.
Occasionally, you might need to move a storage pool between machines. To do so, the storage devices must be disconnected from the original machine and reconnected to the destination machine. This task can be accomplished by physically recabling the devices, or by using multiported devices such as the devices on a SAN. ZFS enables you to export the pool from one machine and import it on the destination machine, even if the machines are of different endianness. For information about replicating or migrating file systems between different storage pools, which might reside on different machines, see Sending and Receiving ZFS Data.
Storage pools should be explicitly exported to indicate that they are ready to be migrated. This operation flushes any unwritten data to disk, writes data to the disk indicating that the export was done, and removes all knowledge of the pool from the system.
If you do not explicitly export the pool, but instead remove the disks manually, you can still import the resulting pool on another system. However, you might lose the last few seconds of data transactions, and the pool will appear faulted on the original machine because the devices are no longer present. By default, the destination machine refuses to import a pool that has not been explicitly exported. This condition is necessary to prevent accidentally importing an active pool that consists of network attached storage that is still in use on another system.
To export a pool, use the zpool export command. For example:
# zpool export tank |
The command attempts to unmount any mounted file systems within the pool before continuing. If any of the file systems fail to unmount, you can forcefully unmount them by using the -f option. For example:
# zpool export tank cannot unmount '/export/home/eschrock': Device busy # zpool export -f tank |
After this command is executed, the pool tank is no longer visible on the system.
If devices are unavailable at the time of export, the disks cannot be marked as cleanly exported. If one of these devices is later attached to a system without any of the working devices, it appears as “potentially active.”
If ZFS volumes are in use in the pool, the pool cannot be exported, even with the -f option. To export a pool with a ZFS volume, first make sure that all consumers of the volume are no longer active.
For more information about ZFS volumes, see ZFS Volumes.
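As a rough sketch of that check, assuming a hypothetical pool named tank that contains a volume in use as a swap device, you might list the volumes in the pool, remove the swap device backed by the volume, and then export the pool. The volume name swapvol and the swap -d step are illustrative and apply only if a volume is actually being used as swap.
# zfs list -t volume -r tank # swap -d /dev/zvol/dsk/tank/swapvol # zpool export tank   # pool and volume names are hypothetical |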
Once the pool has been removed from the system (either through export or by forcefully removing the devices), attach the devices to the target system. Although ZFS can handle some situations in which only a portion of the devices is available, all devices within the pool must be moved between the systems. The devices do not necessarily have to be attached under the same device name. ZFS detects any moved or renamed devices, and adjusts the configuration appropriately. To discover available pools, run the zpool import command with no options. For example:
# zpool import pool: tank id: 3778921145927357706 state: ONLINE action: The pool can be imported using its name or numeric identifier. config: tank ONLINE mirror ONLINE c1t0d0 ONLINE c1t1d0 ONLINE |
In this example, the pool tank is available to be imported on the target system. Each pool is identified by a name as well as a unique numeric identifier. If multiple pools available to import have the same name, you can use the numeric identifier to distinguish between them.
Similar to the zpool status command, the zpool import command refers to a knowledge article available on the web with the most up-to-date information regarding repair procedures for a problem that is preventing a pool from being imported. In this case, the user can force the pool to be imported. However, importing a pool that is currently in use by another system over a storage network can result in data corruption and panics as both systems attempt to write to the same storage. If some devices in the pool are not available but enough redundancy is available to have a usable pool, the pool appears in the DEGRADED state. For example:
# zpool import pool: tank id: 3778921145927357706 state: DEGRADED status: One or more devices are missing from the system. action: The pool can be imported despite missing or damaged devices. The fault tolerance of the pool may be compromised if imported. see: http://www.sun.com/msg/ZFS-8000-2Q config: tank DEGRADED mirror DEGRADED c1t0d0 UNAVAIL cannot open c1t1d0 ONLINE |
In this example, the first disk is damaged or missing, though you can still import the pool because the mirrored data is still accessible. If too many faulted or missing devices are present, the pool cannot be imported. For example:
# zpool import pool: dozer id: 12090808386336829175 state: FAULTED action: The pool cannot be imported. Attach the missing devices and try again. see: http://www.sun.com/msg/ZFS-8000-6X config: raidz FAULTED c1t0d0 ONLINE c1t1d0 FAULTED c1t2d0 ONLINE c1t3d0 FAULTED |
In this example, two disks are missing from a RAID-Z virtual device, which means that sufficient redundant data is not available to reconstruct the pool. In some cases, not enough devices are present to determine the complete configuration. In this case, ZFS doesn't know what other devices were part of the pool, though ZFS does report as much information as possible about the situation. For example:
# zpool import pool: dozer id: 12090808386336829175 state: FAULTED status: One or more devices are missing from the system. action: The pool cannot be imported. Attach the missing devices and try again. see: http://www.sun.com/msg/ZFS-8000-6X config: dozer FAULTED missing device raidz ONLINE c1t0d0 ONLINE c1t1d0 ONLINE c1t2d0 ONLINE c1t3d0 ONLINE Additional devices are known to be part of this pool, though their exact configuration cannot be determined. |
By default, the zpool import command only searches devices within the /dev/dsk directory. If devices exist in another directory, or you are using pools backed by files, you must use the -d option to search different directories. For example:
# zpool create dozer mirror /file/a /file/b # zpool export dozer # zpool import -d /file pool: dozer id: 10952414725867935582 state: ONLINE action: The pool can be imported using its name or numeric identifier. config: dozer ONLINE mirror ONLINE /file/a ONLINE /file/b ONLINE # zpool import -d /file dozer |
If devices exist in multiple directories, you can specify multiple -d options.
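For example, to search a directory of file-backed devices as well as the default /dev/dsk directory in a single scan, you might use a command similar to the following; the /file directory continues the previous example.
# zpool import -d /file -d /dev/dsk   # searches both directories for importable pools |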
Once a pool has been identified for import, you can import it by specifying the name of the pool or its numeric identifier as an argument to the zpool import command. For example:
# zpool import tank |
If multiple available pools have the same name, you must specify which pool to import by using the numeric identifier. For example:
# zpool import pool: dozer id: 2704475622193776801 state: ONLINE action: The pool can be imported using its name or numeric identifier. config: dozer ONLINE c1t9d0 ONLINE pool: dozer id: 6223921996155991199 state: ONLINE action: The pool can be imported using its name or numeric identifier. config: dozer ONLINE c1t8d0 ONLINE # zpool import dozer cannot import 'dozer': more than one matching pool import by numeric ID instead # zpool import 6223921996155991199 |
If the pool name conflicts with an existing pool name, you can import the pool under a different name. For example:
# zpool import dozer zeepool |
This command imports the exported pool dozer using the new name zeepool.
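To confirm that the pool is now known by its new name, you can list it explicitly:
# zpool list zeepool |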
If the pool was not cleanly exported, ZFS requires the -f flag to prevent users from accidentally importing a pool that is still in use on another system. For example:
# zpool import dozer cannot import 'dozer': pool may be in use on another system use '-f' to import anyway # zpool import -f dozer |
Pools can also be imported under an alternate root by using the -R option. For more information on alternate root pools, see Using ZFS Alternate Root Pools.
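For example, the following sketch imports a hypothetical pool named tank with its mount points relocated under /mnt:
# zpool import -R /mnt tank   # /mnt and tank are placeholders |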
You can use the zpool import -D command to recover a storage pool that has been destroyed. For example:
# zpool destroy tank # zpool import -D pool: tank id: 3778921145927357706 state: ONLINE (DESTROYED) action: The pool can be imported using its name or numeric identifier. The pool was destroyed, but can be imported using the '-Df' flags. config: tank ONLINE mirror ONLINE c1t0d0 ONLINE c1t1d0 ONLINE |
In the above zpool import output, you can identify this pool as the destroyed pool because of the following state information:
state: ONLINE (DESTROYED) |
To recover the destroyed pool, issue the zpool import -D command again with the pool to be recovered. For example:
# zpool import -D tank # zpool status tank pool: tank state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM tank ONLINE 0 0 0 mirror ONLINE 0 0 0 c1t0d0 ONLINE 0 0 0 c1t1d0 ONLINE 0 0 0 errors: No known data errors |
If one of the devices in the destroyed pool is faulted or unavailable, you might be able to recover the destroyed pool anyway by including the -f option. In this scenario, import the degraded pool and then attempt to fix the device failure. For example:
# zpool destroy dozer # zpool import -D pool: dozer id: state: DEGRADED (DESTROYED) status: One or more devices could not be opened. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Attach the missing device and online it using 'zpool online'. see: http://www.sun.com/msg/ZFS-8000-2Q scrub: resilver completed after 0h0m with 0 errors on Fri Aug 28 09:33:56 2009 config: NAME STATE READ WRITE CKSUM dozer DEGRADED 0 0 0 raidz2 DEGRADED 0 0 0 c2t8d0 ONLINE 0 0 0 c2t9d0 ONLINE 0 0 0 c2t10d0 ONLINE 0 0 0 c2t11d0 UNAVAIL 0 35 1 cannot open c2t12d0 ONLINE 0 0 0 errors: No known data errors # zpool import -Df dozer # zpool status -x pool: dozer state: DEGRADED status: One or more devices could not be opened. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Attach the missing device and online it using 'zpool online'. see: http://www.sun.com/msg/ZFS-8000-2Q scrub: resilver completed after 0h0m with 0 errors on Fri Aug 28 09:33:56 2009 config: NAME STATE READ WRITE CKSUM dozer DEGRADED 0 0 0 raidz2 DEGRADED 0 0 0 c2t8d0 ONLINE 0 0 0 c2t9d0 ONLINE 0 0 0 c2t10d0 ONLINE 0 0 0 c2t11d0 UNAVAIL 0 37 0 cannot open c2t12d0 ONLINE 0 0 0 errors: No known data errors # zpool online dozer c2t11d0 Bringing device c2t11d0 online # zpool status -x all pools are healthy |
If you have ZFS storage pools from a previous Solaris release, such as the Solaris 10 6/06 release, you can upgrade your pools with the zpool upgrade command to take advantage of the pool features in the current release. In addition, the zpool status command has been modified to notify you when your pools are running older versions. For example:
# zpool status pool: test state: ONLINE status: The pool is formatted using an older on-disk format. The pool can still be used, but some features are unavailable. action: Upgrade the pool using 'zpool upgrade'. Once this is done, the pool will no longer be accessible on older software versions. scrub: none requested config: NAME STATE READ WRITE CKSUM test ONLINE 0 0 0 c1t27d0 ONLINE 0 0 0 errors: No known data errors |
You can use the following syntax to identify additional information about a particular version and supported releases.
# zpool upgrade -v This system is currently running ZFS pool version 22. The following versions are supported: VER DESCRIPTION --- -------------------------------------------------------- 1 Initial ZFS version 2 Ditto blocks (replicated metadata) 3 Hot spares and double parity RAID-Z 4 zpool history 5 Compression using the gzip algorithm 6 bootfs pool property 7 Separate intent log devices 8 Delegated administration 9 refquota and refreservation properties 10 Cache devices 11 Improved scrub performance 12 Snapshot properties 13 snapused property 14 passthrough-x aclinherit 15 user/group space accounting 16 stmf property support 17 Triple-parity RAID-Z 18 Snapshot user holds 19 Log device removal 20 Compression using zle (zero-length encoding) 21 Deduplication 22 Received properties For more information on a particular version, including supported releases, see: http://www.opensolaris.org/os/community/zfs/version/N Where 'N' is the version number. |
Then, you can run the zpool upgrade command to upgrade all of your pools. For example:
# zpool upgrade -a |
If you upgrade your pool to a later ZFS version, the pool will not be accessible on a system that runs an older ZFS version.
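If you need to remain compatible with a specific older release, you might instead upgrade an individual pool only as far as a particular version by using the -V option; the pool name datapool and version 19 below are illustrative.
# zpool upgrade -V 19 datapool   # hypothetical pool name and version |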