Oracle Solaris ZFS Administration Guide (Oracle Solaris 11 Express 11/10)
The following sections describe different scenarios for creating and destroying ZFS storage pools:
Creating and destroying pools is fast and easy. However, be cautious when performing these operations. Although checks are performed to prevent using devices known to be in use in a new pool, ZFS cannot always know when a device is already in use. Destroying a pool is easier than creating one. Use zpool destroy with caution. This simple command has significant consequences.
To create a storage pool, use the zpool create command. This command takes a pool name and any number of virtual devices as arguments. The pool name must satisfy the naming requirements in ZFS Component Naming Requirements.
The following command creates a new pool named tank that consists of the disks c1t0d0 and c1t1d0:
# zpool create tank c1t0d0 c1t1d0
Device names representing the whole disks are found in the /dev/dsk directory and are labeled appropriately by ZFS to contain a single, large slice. Data is dynamically striped across both disks.
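After the pool is created, you can confirm its configuration and capacity; a minimal sketch (output omitted):
# zpool list tank
# zpool status tank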
To create a mirrored pool, use the mirror keyword, followed by any number of storage devices that will comprise the mirror. Multiple mirrors can be specified by repeating the mirror keyword on the command line. The following command creates a pool with two, two-way mirrors:
# zpool create tank mirror c1d0 c2d0 mirror c3d0 c4d0
For more information about recommended mirrored configurations, see the ZFS Best Practices Guide.
Currently, the following operations are supported in a ZFS mirrored configuration; example commands follow the list:
Adding another set of disks for an additional top-level virtual device (vdev) to an existing mirrored configuration. For more information, see Adding Devices to a Storage Pool.
Attaching additional disks to an existing mirrored configuration, or attaching additional disks to a non-replicated configuration to create a mirrored configuration. For more information, see Attaching and Detaching Devices in a Storage Pool.
Replacing a disk or disks in an existing mirrored configuration as long as the replacement disks are greater than or equal to the size of the device to be replaced. For more information, see Replacing Devices in a Storage Pool.
Detaching a disk in a mirrored configuration as long as the remaining devices provide adequate redundancy for the configuration. For more information, see Attaching and Detaching Devices in a Storage Pool.
Splitting a mirrored configuration by detaching one of the disks to create a new, identical pool. For more information, see Creating a New Pool By Splitting a Mirrored ZFS Storage Pool.
You cannot outright remove a device that is not a log or a cache device from a mirrored storage pool. An RFE is filed for this feature.
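For example, the following commands sketch three of these operations on the pool tank from the previous example, assuming a hypothetical additional disk c5d0 and a hypothetical new pool name tank2. The first command attaches c5d0 as a third side of the c1d0 mirror, the second detaches it again, and the third splits the mirrored pool into a new, identical pool:
# zpool attach tank c1d0 c5d0
# zpool detach tank c5d0
# zpool split tank tank2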
You can install and boot from a ZFS root file system. Review the following root pool configuration information:
Disks used for the root pool must have a VTOC (SMI) label, and the pool must be created with disk slices.
The root pool must be created as a mirrored configuration or as a single-disk configuration. You cannot add additional disks to create multiple mirrored top-level virtual devices by using the zpool add command, but you can expand a mirrored virtual device by using the zpool attach command.
A RAID-Z or a striped configuration is not supported.
The root pool cannot have a separate log device.
If you attempt to use an unsupported configuration for a root pool, you see messages similar to the following:
ERROR: ZFS pool <pool-name> does not support boot environments
# zpool add -f rpool log c0t6d0s0
cannot add to 'rpool': root pool can not have multiple vdevs or separate logs
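By contrast, expanding a single-disk root pool into a mirrored configuration is supported. A minimal sketch, assuming a root pool rpool on c0t0d0s0 and a hypothetical second slice c0t1d0s0 that has an SMI label:
# zpool attach rpool c0t0d0s0 c0t1d0s0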
For more information about installing and booting a ZFS root file system, see Chapter 5, Managing ZFS Root Pool Components.
Creating a single-parity RAID-Z pool is identical to creating a mirrored pool, except that the raidz or raidz1 keyword is used instead of mirror. The following example shows how to create a pool with a single RAID-Z device that consists of five disks:
# zpool create tank raidz c1t0d0 c2t0d0 c3t0d0 c4t0d0 /dev/dsk/c5t0d0
This example illustrates that disks can be specified by using their shorthand device names or their full device names. Both /dev/dsk/c5t0d0 and c5t0d0 refer to the same disk.
You can create a double-parity RAID-Z configuration by using the raidz2 keyword when the pool is created. For example:
# zpool create tank raidz2 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0
# zpool status -v tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0
            c4t0d0  ONLINE       0     0     0
            c5t0d0  ONLINE       0     0     0

errors: No known data errors
Similarly, you can create a triple-parity RAID-Z configuration by using the raidz3 keyword. For example:
# zpool create tank raidz3 c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c6t0d0 c7t0d0
# zpool status -v tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz3-0  ONLINE       0     0     0
            c0t0d0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c3t0d0  ONLINE       0     0     0
            c4t0d0  ONLINE       0     0     0
            c5t0d0  ONLINE       0     0     0
            c6t0d0  ONLINE       0     0     0
            c7t0d0  ONLINE       0     0     0

errors: No known data errors
Currently, the following operations are supported in a ZFS RAID-Z configuration:
Adding another set of disks for an additional top-level virtual device to an existing RAID-Z configuration. For more information, see Adding Devices to a Storage Pool.
Replacing a disk or disks in an existing RAID-Z configuration as long as the replacement disks are greater than or equal to the size of the device to be replaced. For more information, see Replacing Devices in a Storage Pool.
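For example, a disk in the RAID-Z pool tank could be replaced as follows, assuming a hypothetical replacement disk c8t0d0 of equal or greater size:
# zpool replace tank c2t0d0 c8t0d0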
Currently, the following operations are not supported in a RAID-Z configuration:
Attaching an additional disk to an existing RAID-Z configuration.
Detaching a disk from a RAID-Z configuration, except when you are detaching a disk that has been replaced by a spare disk or when you need to detach a spare disk.
You cannot outright remove a device that is not a log or a cache device from a RAID-Z configuration. An RFE is filed for this feature.
For more information about a RAID-Z configuration, see RAID-Z Storage Pool Configuration.
By default, the ZIL is allocated from blocks within the main pool. However, better performance might be possible by using separate intent log devices, such as NVRAM or a dedicated disk. For more information about ZFS log devices, see Setting Up Separate ZFS Log Devices.
You can set up a ZFS log device when the storage pool is created or after the pool is created.
The following example shows how to create a mirrored storage pool with mirrored log devices:
# zpool create datap mirror c1t1d0 c1t2d0 mirror c1t3d0 c1t4d0 log mirror c1t5d0 c1t8d0
# zpool status datap
  pool: datap
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        datap       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c1t8d0  ONLINE       0     0     0

errors: No known data errors
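A mirrored log device can also be added after a pool is created by using the zpool add command; a minimal sketch, assuming two hypothetical additional devices:
# zpool add datap log mirror c1t9d0 c1t10d0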
For information about recovering from a log device failure, see Example 11-2.
You can create a pool with cache devices to cache storage pool data. For example:
# zpool create tank mirror c2t0d0 c2t1d0 c2t3d0 cache c2t5d0 c2t8d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
        cache
          c2t5d0    ONLINE       0     0     0
          c2t8d0    ONLINE       0     0     0

errors: No known data errors
Consider the following points when determining whether to create a ZFS storage pool with cache devices:
Using cache devices provides the greatest performance improvement for random-read workloads of mostly static content.
Capacity and reads can be monitored by using the zpool iostat command, as shown in the example after this list.
Single or multiple cache devices can be added when the pool is created. They can also be added and removed after the pool is created. For more information, see Example 4-4.
Cache devices cannot be mirrored or be part of a RAID-Z configuration.
If a read error is encountered on a cache device, that read I/O is reissued to the original storage pool device, which might be part of a mirrored or a RAID-Z configuration. The content of the cache devices is considered volatile, similar to other system caches.
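For example, the following command reports per-device statistics, including the cache devices, for the pool tank at five-second intervals:
# zpool iostat -v tank 5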
Each storage pool contains one or more virtual devices. A virtual device is an internal representation of the storage pool that describes the layout of physical storage and the storage pool's fault characteristics. As such, a virtual device represents the disk devices or files that are used to create the storage pool. A pool can have any number of virtual devices at the top of the configuration, known as top-level vdevs.
If the top-level virtual device contains two or more physical devices, the configuration provides data redundancy as mirror or RAID-Z virtual devices. These virtual devices consist of disks, disk slices, or files. A spare is a special virtual device that tracks available hot spares for a pool.
The following example shows how to create a pool that consists of two top-level virtual devices, each a mirror of two disks:
# zpool create tank mirror c1d0 c2d0 mirror c3d0 c4d0
The following example shows how to create a pool that consists of one top-level virtual device of four disks:
# zpool create mypool raidz2 c1d0 c2d0 c3d0 c4d0
You can add another top-level virtual device to this pool by using the zpool add command. For example:
# zpool add mypool raidz2 c2d1 c3d1 c4d1 c5d1
Disks, disk slices, or files that are used in nonredundant pools function as top-level virtual devices. Storage pools typically contain multiple top-level virtual devices. ZFS dynamically stripes data among all of the top-level virtual devices in a pool.
Virtual devices and the physical devices that are contained in a ZFS storage pool are displayed with the zpool status command. For example:
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors
Pool creation errors can occur for many reasons. Some reasons are obvious, such as when a specified device doesn't exist, while other reasons are more subtle.
Before formatting a device, ZFS first determines if the disk is in use by ZFS or some other part of the operating system. If the disk is in use, you might see errors such as the following:
# zpool create tank c1t0d0 c1t1d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1t0d0s0 is currently mounted on /. Please see umount(1M).
/dev/dsk/c1t0d0s1 is currently mounted on swap. Please see swap(1M).
/dev/dsk/c1t1d0s0 is part of active ZFS pool zeepool. Please see zpool(1M).
The following in-use conditions cannot be overridden by using the -f option, and they must be corrected manually:
The disk or one of its slices contains a file system that is currently mounted. To correct this error, use the umount command.
The disk contains a file system that is listed in the /etc/vfstab file, but the file system is not currently mounted. To correct this error, remove or comment out the line in the /etc/vfstab file.
The disk is in use as the dedicated dump device for the system. To correct this error, use the dumpadm command.
The disk or file is part of an active ZFS storage pool. To correct this error, use the zpool destroy command to destroy the other pool, if it is no longer needed. Or, use the zpool detach command to detach the disk from the other pool. You can only detach a disk from a mirrored storage pool.
The following in-use checks serve as helpful warnings and can be overridden by using the -f option to create the pool:
The disk contains a known file system, though it is not mounted and doesn't appear to be in use.
The disk is part of a Solaris Volume Manager volume.
The disk is part of a storage pool that has been exported or manually removed from a system. In the latter case, the pool is reported as potentially active, as the disk might or might not be a network-attached drive in use by another system. Be cautious when overriding a potentially active pool.
The following example demonstrates how the -f option is used:
# zpool create tank c1t0d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c1t0d0s0 contains a ufs filesystem.
# zpool create -f tank c1t0d0
Ideally, correct the errors rather than use the -f option to override them.
Creating pools with virtual devices of different replication levels is not recommended. The zpool command tries to prevent you from accidentally creating a pool with mismatched levels of redundancy. If you try to create a pool with such a configuration, you see errors similar to the following:
# zpool create tank c1t0d0 mirror c2t0d0 c3t0d0
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: both disk and mirror vdevs are present
# zpool create tank mirror c1t0d0 c2t0d0 mirror c3t0d0 c4t0d0 c5t0d0
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: 2-way mirror and 3-way mirror vdevs are present
You can override these errors with the -f option, but you should avoid this practice. The command also warns you about creating a mirrored or RAID-Z pool using devices of different sizes. Although this configuration is allowed, mismatched levels of redundancy result in unused disk space on the larger device. The -f option is required to override the warning.
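Rather than overriding these errors, restate the configuration so that every top-level virtual device has the same replication level. For example, the second failing command above could be corrected by omitting the extra disk:
# zpool create tank mirror c1t0d0 c2t0d0 mirror c3t0d0 c4t0d0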
Attempts to create a pool can fail unexpectedly in different ways, and formatting disks is a potentially harmful action. For these reasons, the zpool create command has an additional option, -n, which simulates creating the pool without actually writing to the device. This dry run option performs the device in-use checking and replication-level validation, and reports any errors in the process. If no errors are found, you see output similar to the following:
# zpool create -n tank mirror c1t0d0 c1t1d0
would create 'tank' with the following layout:

        tank
          mirror
            c1t0d0
            c1t1d0
Some errors cannot be detected without actually creating the pool. The most common example is specifying the same device twice in the same configuration. This error cannot be reliably detected without actually writing the data, so the zpool create -n command can report success and yet fail to create the pool when the command is run without this option.
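For example, the following dry run repeats the hypothetical disk c1t0d0 in one mirror. The -n option can report success for this configuration even though the actual creation would fail:
# zpool create -n tank mirror c1t0d0 c1t0d0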
When a pool is created, the default mount point for the top-level dataset is /pool-name. This directory must either not exist or be empty. If the directory does not exist, it is automatically created. If the directory is empty, the root dataset is mounted on top of the existing directory. To create a pool with a different default mount point, use the -m option of the zpool create command. For example:
# zpool create home c1t0d0
default mountpoint '/home' exists and is not empty
use '-m' option to provide a different default
# zpool create -m /export/zfs home c1t0d0
This command creates the new pool home and the home dataset with a mount point of /export/zfs.
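You can verify the resulting mount point by using the zfs command; a minimal sketch (output omitted):
# zfs get mountpoint home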
For more information about mount points, see Managing ZFS Mount Points.
Pools are destroyed by using the zpool destroy command. This command destroys the pool even if it contains mounted datasets. For example:
# zpool destroy tank
Caution - Be very careful when you destroy a pool. Ensure that you are destroying the right pool and you always have copies of your data. If you accidentally destroy the wrong pool, you can attempt to recover the pool. For more information, see Recovering Destroyed ZFS Storage Pools.
The act of destroying a pool requires data to be written to disk to indicate that the pool is no longer valid. This state information prevents the devices from showing up as a potential pool when you perform an import. If one or more devices are unavailable, the pool can still be destroyed. However, the necessary state information won't be written to these unavailable devices.
These devices, when suitably repaired, are reported as potentially active when you create a new pool. They appear as valid devices when you search for pools to import. If a pool has enough faulted devices such that the pool itself is faulted (meaning that a top-level virtual device is faulted), then the command prints a warning and cannot complete without the -f option. This option is necessary because the pool cannot be opened, so whether data is stored there is unknown. For example:
# zpool destroy tank
cannot destroy 'tank': pool is faulted
use '-f' to force destruction anyway
# zpool destroy -f tank
For more information about pool and device health, see Determining the Health Status of ZFS Storage Pools.
For more information about importing pools, see Importing ZFS Storage Pools.