Most of the basic information regarding devices is covered in Components of a ZFS Storage Pool. After a pool has been created, you can perform several tasks to manage the physical devices within the pool.
You can dynamically add disk space to a pool by adding a new top-level virtual device. This disk space is immediately available to all datasets in the pool. To add a new virtual device to a pool, use the zpool add command. For example:
# zpool add zeepool mirror c2t1d0 c2t2d0
The format for specifying the virtual devices is the same as for the zpool create command. Devices are checked to determine if they are in use, and the command cannot change the level of redundancy without the -f option. The command also supports the -n option so that you can perform a dry run. For example:
# zpool add -n zeepool mirror c3t1d0 c3t2d0
would update 'zeepool' to the following configuration:
      zeepool
        mirror
            c1t0d0
            c1t1d0
        mirror
            c2t1d0
            c2t2d0
        mirror
            c3t1d0
            c3t2d0
This command syntax would add mirrored devices c3t1d0 and c3t2d0 to the zeepool pool's existing configuration.
For more information about how virtual device validation is done, see Detecting In-Use Devices.
Example 4-1 Adding Disks to a Mirrored ZFS Configuration
In the following example, another mirror is added to an existing mirrored ZFS configuration on Oracle's Sun Fire x4500 system.
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0

errors: No known data errors
# zpool add tank mirror c0t3d0 c1t3d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors
Example 4-2 Adding Disks to a RAID-Z Configuration
Additional disks can be added similarly to a RAID-Z configuration. The following example shows how to convert a storage pool with one RAID-Z device that contains three disks to a storage pool with two RAID-Z devices that contain three disks each.
# zpool status rzpool
  pool: rzpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rzpool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0

errors: No known data errors
# zpool add rzpool raidz c2t2d0 c2t3d0 c2t4d0
# zpool status rzpool
  pool: rzpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rzpool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
          raidz1-1  ONLINE       0     0     0
            c2t2d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
            c2t4d0  ONLINE       0     0     0

errors: No known data errors
Example 4-3 Adding and Removing a Mirrored Log Device
The following example shows how to add a mirrored log device to a mirrored storage pool. For more information about using log devices in your storage pool, see Setting Up Separate ZFS Log Devices.
# zpool status newpool
  pool: newpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        newpool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0

errors: No known data errors
# zpool add newpool log mirror c0t6d0 c0t7d0
# zpool status newpool
  pool: newpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        newpool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0
        logs
          mirror-1  ONLINE       0     0     0
            c0t6d0  ONLINE       0     0     0
            c0t7d0  ONLINE       0     0     0

errors: No known data errors
You can attach a log device to an existing log device to create a mirrored log device. This operation is identical to attaching a device in an unmirrored storage pool.
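For example, if newpool had only the single log device c0t6d0, attaching a second device would produce the same mirrored log shown above. The following is a minimal sketch rather than output captured from a system:

# zpool attach newpool c0t6d0 c0t7d0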
Log devices can be removed by using the zpool remove command. The mirrored log device in the previous example can be removed by specifying the mirror-1 argument. For example:
# zpool remove newpool mirror-1
# zpool status newpool
  pool: newpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        newpool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0

errors: No known data errors
If your pool configuration only contains one log device, you would remove the log device by specifying the device name. For example:
# zpool status pool
  pool: pool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c0t8d0  ONLINE       0     0     0
            c0t9d0  ONLINE       0     0     0
        logs
          c0t10d0   ONLINE       0     0     0

errors: No known data errors
# zpool remove pool c0t10d0
Example 4-4 Adding and Removing Cache Devices
You can add cache devices to your ZFS storage pool and remove them if they are no longer required.
Use the zpool add command to add cache devices. For example:
# zpool add tank cache c2t5d0 c2t8d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
        cache
          c2t5d0    ONLINE       0     0     0
          c2t8d0    ONLINE       0     0     0

errors: No known data errors
Cache devices cannot be mirrored or be part of a RAID-Z configuration.
Use the zpool remove command to remove cache devices. For example:
# zpool remove tank c2t5d0 c2t8d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0

errors: No known data errors
Currently, the zpool remove command only supports removing hot spares, log devices, and cache devices. Devices that are part of the main mirrored pool configuration can be removed by using the zpool detach command. Nonredundant and RAID-Z devices cannot be removed from a pool.
For more information about using cache devices in a ZFS storage pool, see Creating a ZFS Storage Pool With Cache Devices.
In addition to the zpool add command, you can use the zpool attach command to add a new device to an existing mirrored or nonmirrored device.
If you are attaching a disk to create a mirrored root pool, see How to Configure a Mirrored Root Pool.
If you are replacing a disk in a ZFS root pool, see How to Replace a Disk in the ZFS Root Pool.
Example 4-5 Converting a Two-Way Mirrored Storage Pool to a Three-Way Mirrored Storage Pool
In this example, zeepool is an existing two-way mirror that is converted to a three-way mirror by attaching c2t1d0, the new device, to the existing device, c1t1d0.
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zeepool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0

errors: No known data errors
# zpool attach zeepool c1t1d0 c2t1d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Fri Jan 8 12:59:20 2010
config:

        NAME        STATE     READ WRITE CKSUM
        zeepool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0  592K resilvered

errors: No known data errors
If the existing device is part of a three-way mirror, attaching the new device creates a four-way mirror, and so on. Whatever the case, the new device begins to resilver immediately.
Example 4-6 Converting a Nonredundant ZFS Storage Pool to a Mirrored ZFS Storage Pool
In addition, you can convert a nonredundant storage pool to a redundant storage pool by using the zpool attach command. For example:
# zpool create tank c0t1d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          c0t1d0    ONLINE       0     0     0

errors: No known data errors
# zpool attach tank c0t1d0 c1t1d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Fri Jan 8 14:28:23 2010
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0  73.5K resilvered

errors: No known data errors
You can use the zpool detach command to detach a device from a mirrored storage pool. For example:
# zpool detach zeepool c2t1d0
However, this operation fails if no other valid replicas of the data exist. For example:
# zpool detach newpool c1t2d0
cannot detach c1t2d0: only applicable to mirror and replacing vdevs
A mirrored ZFS storage pool can be quickly cloned as a backup pool by using the zpool split command.
Currently, this feature cannot be used to split a mirrored root pool.
You can use the zpool split command to detach disks from a mirrored ZFS storage pool to create a new pool with one of the detached disks. The new pool will have identical contents to the original mirrored ZFS storage pool.
By default, a zpool split operation on a mirrored pool detaches the last disk for the newly created pool. After the split operation, import the new pool. For example:
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0

errors: No known data errors
# zpool split tank tank2
# zpool import tank2
# zpool status tank tank2
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          c1t0d0    ONLINE       0     0     0

errors: No known data errors

  pool: tank2
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank2       ONLINE       0     0     0
          c1t2d0    ONLINE       0     0     0

errors: No known data errors
You can identify which disk should be used for the newly created pool by specifying it with the zpool split command. For example:
# zpool split tank tank2 c1t0d0
Before the actual split operation occurs, data in memory is flushed to the mirrored disks. After the data is flushed, the disk is detached from the pool and given a new pool GUID. A new pool GUID is generated so that the pool can be imported on the same system on which it was split.
If the pool to be split has non-default dataset mount points and the new pool is created on the same system, then you will need to use the zpool split -R option to identify an alternate root directory for the new pool so that any existing mount points do not conflict. For example:
# zpool split -R /tank2 tank tank2
If you do not use the zpool split -R option and mount points conflict when you attempt to import the new pool, import the new pool with the -R option. If the new pool is created on a different system, then specifying an alternate root directory should not be necessary unless mount point conflicts occur.
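For example, a hypothetical import of the new tank2 pool with an alternate root directory might look like the following:

# zpool import -R /tank2 tank2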
Review the following considerations before using the zpool split feature:
This feature is not available for a RAID-Z configuration or a nonredundant pool of multiple disks.
Data and application operations should be quiesced before attempting a zpool split operation.
It is important to have disks that honor, rather than ignore, the flush write cache command.
A pool cannot be split if resilvering is in progress.
Splitting a mirrored pool is optimal when the pool is composed of two to three disks, where the last disk in the original pool is used for the newly created pool. Then, you can use the zpool attach command to recreate your original mirrored storage pool or convert your newly created pool into a mirrored storage pool (see the sketch after this list). No way currently exists to create a new mirrored pool from an existing mirrored pool by using this feature.
If the existing pool is a three-way mirror, then the new pool will contain one disk after the split operation. If the existing pool is a two-way mirror of two disks, then the outcome is two nonredundant pools of one disk each. You will need to attach two additional disks to convert the nonredundant pools to mirrored pools.
A good way to keep your data redundant during a split operation is to split a mirrored storage pool that is composed of three disks so that the original pool still consists of two mirrored disks after the split operation.
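For example, building on the tank and tank2 pools from the earlier split example, and assuming that the split-off pool is no longer needed, you could re-create the original two-way mirror with the following sketch (destroying tank2 discards its copy of the data):

# zpool destroy tank2
# zpool attach tank c1t0d0 c1t2d0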
Example 4-7 Splitting a Mirrored ZFS Pool
In the following example, a mirrored storage pool called trinity, with three disks, c1t0d0, c1t2d0, and c1t3d0, is split. The two resulting pools are the mirrored pool trinity, with disks c1t0d0 and c1t2d0, and the new pool, neo, with disk c1t3d0. Each pool has identical content.
# zpool status trinity
  pool: trinity
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        trinity     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors
# zpool split trinity neo
# zpool import neo
# zpool status trinity neo
  pool: neo
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        neo         ONLINE       0     0     0
          c1t3d0    ONLINE       0     0     0

errors: No known data errors

  pool: trinity
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        trinity     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0

errors: No known data errors
ZFS allows individual devices to be taken offline or brought online. When hardware is unreliable or not functioning properly, ZFS continues to read data from or write data to the device, assuming the condition is only temporary. If the condition is not temporary, you can instruct ZFS to ignore the device by taking it offline. ZFS does not send any requests to an offline device.
Note - Devices do not need to be taken offline in order to replace them.
You can take a device offline by using the zpool offline command. The device can be specified by path or by short name, if the device is a disk. For example:
# zpool offline tank c1t0d0
bringing device c1t0d0 offline
Consider the following points when taking a device offline:
You cannot take a pool offline to the point where it becomes faulted. For example, you cannot take offline two devices in a raidz1 configuration, nor can you take offline a top-level virtual device.
# zpool offline tank c1t0d0
cannot offline c1t0d0: no valid replicas
By default, the OFFLINE state is persistent. The device remains offline when the system is rebooted.
To temporarily take a device offline, use the zpool offline -t option. For example:
# zpool offline -t tank c1t0d0
bringing device 'c1t0d0' offline
When the system is rebooted, this device is automatically returned to the ONLINE state.
When a device is taken offline, it is not detached from the storage pool. If you attempt to use the offline device in another pool, even after the original pool is destroyed, you see a message similar to the following:
device is part of exported or potentially active ZFS pool. Please see zpool(1M)
If you want to use the offline device in another storage pool after destroying the original storage pool, first bring the device online, then destroy the original storage pool.
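For example, using the tank pool and c1t0d0 device from the previous examples, the sequence might look like the following sketch:

# zpool online tank c1t0d0
# zpool destroy tank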
Another way to use a device from another storage pool, while keeping the original storage pool, is to replace the existing device in the original storage pool with another comparable device. For information about replacing devices, see Replacing Devices in a Storage Pool.
Offline devices are in the OFFLINE state when you query pool status. For information about querying pool status, see Querying ZFS Storage Pool Status.
For more information on device health, see Determining the Health Status of ZFS Storage Pools.
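As a rough sketch of how an offline device appears in the status output (pool and device names are reused from the earlier examples, and the exact output varies), the pool reports a DEGRADED state while a mirror member is OFFLINE:

# zpool status tank
  ...
        NAME        STATE     READ WRITE CKSUM
        tank        DEGRADED     0     0     0
          mirror-0  DEGRADED     0     0     0
            c1t0d0  OFFLINE      0     0     0
            c1t1d0  ONLINE       0     0     0
  ...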
After a device is taken offline, it can be brought online again by using the zpool online command. For example:
# zpool online tank c1t0d0
bringing device c1t0d0 online
When a device is brought online, any data that has been written to the pool is resynchronized with the newly available device. Note that you cannot bring a device online to replace a disk. If you take a device offline, replace the device, and try to bring it online, it remains in the faulted state.
If you attempt to bring online a faulted device, a message similar to the following is displayed:
# zpool online tank c1t0d0
warning: device 'c1t0d0' onlined, but remains in faulted state
use 'zpool replace' to replace devices that are no longer present
You might also see the faulted disk message displayed on the console or written to the /var/adm/messages file. For example:
SUNW-MSG-ID: ZFS-8000-D3, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Wed Jun 30 14:53:39 MDT 2010
PLATFORM: SUNW,Sun-Fire-880, CSN: -, HOSTNAME: neo
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: 504a1188-b270-4ab0-af4e-8a77680576b8
DESC: A ZFS device failed.  Refer to http://sun.com/msg/ZFS-8000-D3 for more information.
AUTO-RESPONSE: No automated response will occur.
IMPACT: Fault tolerance of the pool may be compromised.
REC-ACTION: Run 'zpool status -x' and replace the bad device.
For more information about replacing a faulted device, see Resolving a Missing Device.
You can use the zpool online -e command to expand a LUN. By default, a LUN that is added to a pool is not expanded to its full size unless the autoexpand pool property is enabled. You can expand the LUN automatically by using the zpool online -e command even if the LUN is already online or if the LUN is currently offline. For example:
# zpool online -e tank c1t13d0
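Alternatively, you can enable the autoexpand pool property so that the LUN is expanded automatically. The following is a brief sketch using the same pool name:

# zpool set autoexpand=on tank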
If a device is taken offline due to a failure that causes errors to be listed in the zpool status output, you can clear the error counts with the zpool clear command.
If specified with no arguments, this command clears all device errors within the pool. For example:
# zpool clear tank
If one or more devices are specified, this command clears only the errors associated with the specified devices. For example:
# zpool clear tank c1t0d0
For more information about clearing zpool errors, see Clearing Transient Errors.
You can replace a device in a storage pool by using the zpool replace command.
If you are physically replacing a device with another device in the same location in a redundant pool, then you might only need to identify the replaced device. ZFS recognizes that the device is a different disk in the same location on some hardware. For example, to replace a failed disk (c1t1d0) by removing the disk and replacing it in the same location, use the following syntax:
# zpool replace tank c1t1d0
If you are replacing a device in a storage pool with a disk in a different physical location, you will need to specify both devices. For example:
# zpool replace tank c1t1d0 c1t2d0
If you are replacing a disk in the ZFS root pool, see How to Replace a Disk in the ZFS Root Pool.
The following are the basic steps for replacing a disk:
Offline the disk, if necessary, with the zpool offline command.
Remove the disk to be replaced.
Insert the replacement disk.
Run the zpool replace command. For example:
# zpool replace tank c1t1d0
Bring the disk online with the zpool online command.
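Taken together, and assuming the tank pool and the c1t1d0 disk from the example above, the full sequence might look like the following sketch. The physical swap happens between the offline and replace steps:

# zpool offline tank c1t1d0
(Remove the failed disk and insert the replacement disk in the same slot)
# zpool replace tank c1t1d0
# zpool online tank c1t1d0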
On some systems, such as the Sun Fire x4500, you must unconfigure a disk before you can take it offline. If you are replacing a disk in the same slot position on this system, then you can just run the zpool replace command as described in the first example in this section.
For an example of replacing a disk on a Sun Fire X4500 system, see Example 11-1.
Consider the following when replacing devices in a ZFS storage pool:
If you set the autoreplace pool property to on (see the example after this list), then any new device found in the same physical location as a device that previously belonged to the pool is automatically formatted and replaced. You are not required to use the zpool replace command when this property is enabled. This feature might not be available on all hardware types.
The size of the replacement device must be equal to or larger than the smallest disk in a mirrored or RAID-Z configuration.
When a replacement device that is greater in size than the device it is replacing is added to a pool, it is not automatically expanded to its full size. The autoexpand pool property value determines whether a replacement LUN is expanded to its full size when the disk is added to the pool. By default, the autoexpand property is disabled. You can enable this property to expand the LUN size before or after the larger LUN is added to the pool.
In the following example, two 16-GB disks in a mirrored pool are replaced with two 72-GB disks. The autoexpand property is enabled after the disk replacements to expand the full LUN sizes.
# zpool create pool mirror c1t16d0 c1t17d0
# zpool status
  pool: pool
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        pool         ONLINE       0     0     0
          mirror     ONLINE       0     0     0
            c1t16d0  ONLINE       0     0     0
            c1t17d0  ONLINE       0     0     0

# zpool list pool
NAME   SIZE  ALLOC  FREE    CAP  HEALTH  ALTROOT
pool  16.8G  76.5K  16.7G    0%  ONLINE  -
# zpool replace pool c1t16d0 c1t1d0
# zpool replace pool c1t17d0 c1t2d0
# zpool list pool
NAME   SIZE  ALLOC  FREE    CAP  HEALTH  ALTROOT
pool  16.8G  88.5K  16.7G    0%  ONLINE  -
# zpool set autoexpand=on pool
# zpool list pool
NAME   SIZE  ALLOC  FREE    CAP  HEALTH  ALTROOT
pool  68.2G   117K  68.2G    0%  ONLINE  -
Replacing many disks in a large pool is time-consuming due to resilvering the data onto the new disks. In addition, you might consider running the zpool scrub command between disk replacements to ensure that the replacement devices are operational and that the data is written correctly.
If a failed disk has been replaced automatically with a hot spare, then you might need to detach the spare after the failed disk is replaced. You can use the zpool detach command to detach a spare in a mirrored or RAIDZ pool. For information about detaching a hot spare, see Activating and Deactivating Hot Spares in Your Storage Pool.
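As noted in the considerations above, enabling the autoreplace property and scrubbing between disk replacements might look like the following sketch, where tank is a placeholder pool name:

# zpool set autoreplace=on tank
# zpool get autoreplace tank
# zpool scrub tank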
For more information about replacing devices, see Resolving a Missing Device and Replacing or Repairing a Damaged Device.
The hot spares feature enables you to identify disks that could be used to replace a failed or faulted device in one or more storage pools. Designating a device as a hot spare means that the device is not an active device in the pool, but if an active device in the pool fails, the hot spare automatically replaces the failed device.
Devices can be designated as hot spares in the following ways:
When the pool is created with the zpool create command.
After the pool is created with the zpool add command.
The following example shows how to designate devices as hot spares when the pool is created:
# zpool create trinity mirror c1t1d0 c2t1d0 spare c1t2d0 c2t2d0
# zpool status trinity
  pool: trinity
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        trinity     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
        spares
          c1t2d0    AVAIL
          c2t2d0    AVAIL

errors: No known data errors
The following example shows how to designate hot spares by adding them to a pool after the pool is created:
# zpool add neo spare c5t3d0 c6t3d0
# zpool status neo
  pool: neo
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        neo         ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c3t3d0  ONLINE       0     0     0
            c4t3d0  ONLINE       0     0     0
        spares
          c5t3d0    AVAIL
          c6t3d0    AVAIL

errors: No known data errors
Hot spares can be removed from a storage pool by using the zpool remove command. For example:
# zpool remove zeepool c2t3d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zeepool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
        spares
          c1t3d0    AVAIL

errors: No known data errors
A hot spare cannot be removed if it is currently used by a storage pool.
Consider the following when using ZFS hot spares:
Currently, the zpool remove command can only be used to remove hot spares, cache devices, and log devices.
To add a disk as a hot spare, the hot spare must be equal to or larger than the size of the largest disk in the pool. Adding a smaller disk as a spare to a pool is allowed. However, when the smaller spare disk is activated, either automatically or with the zpool replace command, the operation fails with an error similar to the following:
cannot replace disk3 with disk4: device is too small
Hot spares are activated in the following ways:
Manual replacement – You replace a failed device in a storage pool with a hot spare by using the zpool replace command.
Automatic replacement – When a fault is detected, an FMA agent examines the pool to determine if it has any available hot spares. If so, it replaces the faulted device with an available spare.
If a hot spare that is currently in use fails, the FMA agent detaches the spare and thereby cancels the replacement. The agent then attempts to replace the device with another hot spare, if one is available. This feature is currently limited by the fact that the ZFS diagnostic engine only generates faults when a device disappears from the system.
If you physically replace a failed device with an active spare, you can reactivate the original device by using the zpool detach command to detach the spare. If you set the autoreplace pool property to on, the spare is automatically detached and returned to the spare pool when the new device is inserted and the online operation completes.
You can manually replace a device with a hot spare by using the zpool replace command. See Example 4-8.
A faulted device is automatically replaced if a hot spare is available. For example:
# zpool status -x
  pool: zeepool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: resilver completed after 0h0m with 0 errors on Mon Jan 11 10:20:35 2010
config:

        NAME          STATE     READ WRITE CKSUM
        zeepool       DEGRADED     0     0     0
          mirror-0    DEGRADED     0     0     0
            c1t2d0    ONLINE       0     0     0
            spare-1   DEGRADED     0     0     0
              c2t1d0  UNAVAIL      0     0     0  cannot open
              c2t3d0  ONLINE       0     0     0  88.5K resilvered
        spares
          c2t3d0      INUSE     currently in use

errors: No known data errors
Currently, you can deactivate a hot spare in the following ways:
By removing the hot spare from the storage pool.
By detaching a hot spare after a failed disk is physically replaced. See Example 4-9.
By temporarily or permanently swapping in the hot spare. See Example 4-10.
Example 4-8 Manually Replacing a Disk With a Hot Spare
In this example, the zpool replace command is used to replace disk c2t1d0 with the hot spare c2t3d0.
# zpool replace zeepool c2t1d0 c2t3d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Wed Jan 20 10:00:50 2010
config:

        NAME          STATE     READ WRITE CKSUM
        zeepool       ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t2d0    ONLINE       0     0     0
            spare-1   ONLINE       0     0     0
              c2t1d0  ONLINE       0     0     0
              c2t3d0  ONLINE       0     0     0  90K resilvered
        spares
          c2t3d0      INUSE     currently in use

errors: No known data errors
Then, detach the disk c2t1d0.
# zpool detach zeepool c2t1d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Wed Jan 20 10:00:50 2010
config:

        NAME        STATE     READ WRITE CKSUM
        zeepool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0  90K resilvered

errors: No known data errors
Example 4-9 Detaching a Hot Spare After the Failed Disk Is Replaced
In this example, the failed disk (c2t1d0) is physically replaced and ZFS is notified by using the zpool replace command.
# zpool replace zeepool c2t1d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Wed Jan 20 10:08:44 2010
config:

        NAME          STATE     READ WRITE CKSUM
        zeepool       ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t2d0    ONLINE       0     0     0
            spare-1   ONLINE       0     0     0
              c2t3d0  ONLINE       0     0     0  90K resilvered
              c2t1d0  ONLINE       0     0     0
        spares
          c2t3d0      INUSE     currently in use

errors: No known data errors
Then, you can use the zpool detach command to return the hot spare to the spare pool. For example:
# zpool detach zeepool c2t3d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: resilver completed with 0 errors on Wed Jan 20 10:08:44 2010
config:

        NAME        STATE     READ WRITE CKSUM
        zeepool     ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
        spares
          c2t3d0    AVAIL

errors: No known data errors
Example 4-10 Detaching a Failed Disk and Using the Hot Spare
If you want to replace a failed disk by temporarily or permanently swapping in the hot spare that is currently replacing it, then detach the original (failed) disk. If the failed disk is eventually replaced, then you can add it back to the storage pool as a spare. For example:
# zpool status zeepool
  pool: zeepool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: resilver in progress for 0h0m, 70.47% done, 0h0m to go
config:

        NAME          STATE     READ WRITE CKSUM
        zeepool       DEGRADED     0     0     0
          mirror-0    DEGRADED     0     0     0
            c1t2d0    ONLINE       0     0     0
            spare-1   DEGRADED     0     0     0
              c2t1d0  UNAVAIL      0     0     0  cannot open
              c2t3d0  ONLINE       0     0     0  70.5M resilvered
        spares
          c2t3d0      INUSE     currently in use

errors: No known data errors
# zpool detach zeepool c2t1d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Wed Jan 20 13:46:46 2010
config:

        NAME        STATE     READ WRITE CKSUM
        zeepool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0  70.5M resilvered

errors: No known data errors

(Original failed disk c2t1d0 is physically replaced)

# zpool add zeepool spare c2t1d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Wed Jan 20 13:48:46 2010
config:

        NAME        STATE     READ WRITE CKSUM
        zeepool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0  70.5M resilvered
        spares
          c2t1d0    AVAIL

errors: No known data errors