Managing Devices in ZFS Storage Pools

Most of the basic information regarding devices is covered in Components of a ZFS Storage Pool. After a pool has been created, you can perform several tasks to manage the physical devices within the pool.

Adding Devices to a Storage Pool

You can dynamically add disk space to a pool by adding a new top-level virtual device. This disk space is immediately available to all datasets in the pool. To add a new virtual device to a pool, use the zpool add command. For example:

# zpool add zeepool mirror c2t1d0 c2t2d0

The format for specifying the virtual devices is the same as for the zpool create command. Devices are checked to determine if they are in use, and the command cannot change the level of redundancy without the -f option. The command also supports the -n option so that you can perform a dry run. For example:

# zpool add -n zeepool mirror c3t1d0 c3t2d0
would update 'zeepool' to the following configuration:
      zeepool
        mirror
            c1t0d0
            c1t1d0
        mirror
            c2t1d0
            c2t2d0
        mirror
            c3t1d0
            c3t2d0

This command syntax would add mirrored devices c3t1d0 and c3t2d0 to the zeepool pool's existing configuration.

For more information about how virtual device validation is done, see Detecting In-Use Devices.

Example 3-1 Adding Disks to a Mirrored ZFS Configuration

In the following example, another mirror is added to an existing mirrored ZFS configuration.

# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0

errors: No known data errors
# zpool add tank mirror c0t3d0 c1t3d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors

Example 3-2 Adding Disks to a RAID-Z Configuration

Disks can be added to a RAID-Z configuration in a similar way. The following example shows how to convert a storage pool with one RAID-Z device that contains three disks to a storage pool with two RAID-Z devices that contain three disks each.

# zpool status rzpool
  pool: rzpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rzpool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0

errors: No known data errors
# zpool add rzpool raidz c2t2d0 c2t3d0 c2t4d0
# zpool status rzpool
  pool: rzpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rzpool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
          raidz1-1  ONLINE       0     0     0
            c2t2d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
            c2t4d0  ONLINE       0     0     0

errors: No known data errors

Example 3-3 Adding and Removing a Mirrored Log Device

The following example shows how to add a mirrored log device to a mirrored storage pool.

# zpool status newpool
  pool: newpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        newpool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0

errors: No known data errors
# zpool add newpool log mirror c0t6d0 c0t7d0
# zpool status newpool
  pool: newpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        newpool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0
        logs
          mirror-1  ONLINE       0     0     0
            c0t6d0  ONLINE       0     0     0
            c0t7d0  ONLINE       0     0     0

errors: No known data errors

You can attach a log device to an existing log device to create a mirrored log device. This operation is identical to attaching a device in an unmirrored storage pool.
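For example, the following sketch attaches a second log device, assuming newpool has a single log device c0t6d0 and that c0t7d0 is an unused disk:

# zpool attach newpool c0t6d0 c0t7d0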

You can remove log devices by using the zpool remove command. The mirrored log device in the previous example can be removed by specifying the mirror-1 argument. For example:

# zpool remove newpool mirror-1
# zpool status newpool
  pool: newpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        newpool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0

errors: No known data errors

If your pool configuration contains only one log device, you remove the log device by specifying the device name. For example:

# zpool status pool
  pool: pool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c0t8d0  ONLINE       0     0     0
            c0t9d0  ONLINE       0     0     0
        logs
          c0t10d0   ONLINE       0     0     0

errors: No known data errors
# zpool remove pool c0t10d0

Example 3-4 Adding and Removing Cache Devices

You can add cache devices to your ZFS storage pool and remove them if they are no longer required.

Use the zpool add command to add cache devices. For example:

# zpool add tank cache c2t5d0 c2t8d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
        cache
          c2t5d0    ONLINE       0     0     0
          c2t8d0    ONLINE       0     0     0

errors: No known data errors

Cache devices cannot be mirrored or be part of a RAID-Z configuration.

Use the zpool remove command to remove cache devices. For example:

# zpool remove tank c2t5d0 c2t8d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0

errors: No known data errors

Currently, the zpool remove command only supports removing hot spares, log devices, and cache devices. Devices that are part of the main mirrored pool configuration can be removed by using the zpool detach command. Nonredundant and RAID-Z devices cannot be removed from a pool.
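For example, the following sketch detaches one side of the mirror shown above, assuming the remaining devices provide sufficient replicas:

# zpool detach tank c2t1d0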

For more information about using cache devices in a ZFS storage pool, see Creating a ZFS Storage Pool With Cache Devices.

Attaching and Detaching Devices in a Storage Pool

In addition to the zpool add command, you can use the zpool attach command to add a new device to an existing mirrored or nonmirrored device.

If you are attaching a disk to create a mirrored root pool, see How to Configure a Mirrored Root Pool (SPARC or x86/VTOC).

If you are replacing a disk in a ZFS root pool, see How to Replace a Disk in a ZFS Root Pool (SPARC or x86/VTOC).

Example 3-5 Converting a Two-Way Mirrored Storage Pool to a Three-Way Mirrored Storage Pool

In this example, zeepool is an existing two-way mirror that is converted to a three-way mirror by attaching c2t1d0, the new device, to the existing device, c1t1d0.

# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zeepool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0

errors: No known data errors
# zpool attach zeepool c1t1d0 c2t1d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Fri Jan  8 12:59:20 2010
config:

        NAME        STATE     READ WRITE CKSUM
        zeepool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0  592K resilvered

errors: No known data errors

If the existing device is part of a three-way mirror, attaching the new device creates a four-way mirror, and so on. Whatever the case, the new device begins to resilver immediately.
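For example, the following command (a sketch; c3t1d0 is a hypothetical new device) would convert the three-way mirror above into a four-way mirror:

# zpool attach zeepool c2t1d0 c3t1d0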

Example 3-6 Converting a Nonredundant ZFS Storage Pool to a Mirrored ZFS Storage Pool

In addition, you can convert a nonredundant storage pool to a redundant storage pool by using the zpool attach command. For example:

# zpool create tank c0t1d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          c0t1d0    ONLINE       0     0     0

errors: No known data errors
# zpool attach tank c0t1d0 c1t1d0
# zpool status tank
  pool: tank
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Fri Jan  8 14:28:23 2010
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0  73.5K resilvered

errors: No known data errors

You can use the zpool detach command to detach a device from a mirrored storage pool. For example:

# zpool detach zeepool c2t1d0

However, this operation fails if no other valid replicas of the data exist. For example:

# zpool detach newpool c1t2d0
cannot detach c1t2d0: only applicable to mirror and replacing vdevs

Creating a New Pool By Splitting a Mirrored ZFS Storage Pool

A mirrored ZFS storage pool can be quickly cloned as a backup pool by using the zpool split command. You can use this feature to split a mirrored root pool, but the pool that is split off is not bootable until you perform some additional steps.

You can use the zpool split command to detach one or more disks from a mirrored ZFS storage pool to create a new pool with the detached disk or disks. The new pool will have identical contents to the original mirrored ZFS storage pool.

By default, a zpool split operation on a mirrored pool detaches the last disk of the mirror and uses it for the newly created pool. After the split operation, you then import the new pool. For example:

# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0

errors: No known data errors
# zpool split tank tank2
# zpool import tank2
# zpool status tank tank2
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          c1t0d0    ONLINE       0     0     0

errors: No known data errors

  pool: tank2
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank2       ONLINE       0     0     0
          c1t2d0    ONLINE       0     0     0

errors: No known data errors

You can identify which disk should be used for the newly created pool by specifying it with the zpool split command. For example:

# zpool split tank tank2 c1t0d0

Before the actual split operation occurs, data in memory is flushed to the mirrored disks. After the data is flushed, the disk is detached from the pool and given a new pool GUID. A new pool GUID is generated so that the pool can be imported on the same system on which it was split.
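If your zpool version supports the read-only guid pool property, you can confirm that the two pools have different GUIDs. For example (a sketch, continuing the split shown above):

# zpool get guid tank tank2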

If the pool to be split has non-default file system mount points, and the new pool is created on the same system, then you must use the zpool split -R option to identify an alternate root directory for the new pool so that any existing mount points do not conflict. For example:

# zpool split -R /tank2 tank tank2

If you do not use the zpool split -R option and mount points conflict when you attempt to import the new pool, import the new pool with the -R option. If the new pool is created on a different system, then specifying an alternate root directory is not necessary unless mount point conflicts occur.
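For example (a sketch, assuming the new pool tank2 was split without the -R option and its mount points conflict):

# zpool import -R /tank2 tank2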

Review the following considerations before using the zpool split feature:

Example 3-7 Splitting a Mirrored ZFS Pool

In the following example, a mirrored storage pool called mothership, with three disks, is split. The two resulting pools are the mirrored pool mothership, with two disks, and the new pool, luna, with one disk. Each pool has identical content.

The pool luna could be imported on another system for backup purposes. After the backup is complete, the pool luna could be destroyed and the disk reattached to mothership, as shown in the sketch after the following output. Then, the process could be repeated.

# zpool status mothership
  pool: mothership
 state: ONLINE
  scan: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        mothership                 ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  ONLINE       0     0     0

errors: No known data errors
# zpool split mothership luna
# zpool import luna 
# zpool status mothership luna
  pool: luna
 state: ONLINE
  scan: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        luna                     ONLINE       0     0     0
          c0t5000C500335F907Fd0  ONLINE       0     0     0

errors: No known data errors

  pool: mothership
 state: ONLINE
  scan: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        mothership                 ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0

errors: No known data errors
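The reattach step described above might look like the following sketch. The pool luna must be destroyed (or its disk detached) before the disk can be returned to mothership, and the zpool attach command might require the -f option if the disk still carries the luna pool label:

# zpool destroy luna
# zpool attach mothership c0t5000C500335BD117d0 c0t5000C500335F907Fd0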

Onlining and Offlining Devices in a Storage Pool

ZFS allows individual devices to be taken offline or brought online. When hardware is unreliable or not functioning properly, ZFS continues to read data from or write data to the device, assuming the condition is only temporary. If the condition is not temporary, you can instruct ZFS to ignore the device by taking it offline. ZFS does not send any requests to an offline device.


Note - Devices do not need to be taken offline in order to replace them.


Taking a Device Offline

You can take a device offline by using the zpool offline command. The device can be specified by path or by short name, if the device is a disk. For example:

# zpool offline tank c0t5000C500335F95E3d0

Consider the following points when taking a device offline:

Offline devices are in the OFFLINE state when you query pool status. For information about querying pool status, see Querying ZFS Storage Pool Status.
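For example, after the zpool offline command above, status output similar to the following sketch might be displayed. The exact status and action text can vary by release, and the second mirror device is illustrative:

# zpool status tank
  pool: tank
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in
        a degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        tank                       DEGRADED     0     0     0
          mirror-0                 DEGRADED     0     0     0
            c0t5000C500335F95E3d0  OFFLINE      0     0     0
            c0t5000C500335F907Fd0  ONLINE       0     0     0

errors: No known data errors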

For more information on device health, see Determining the Health Status of ZFS Storage Pools.

Bringing a Device Online

After a device is taken offline, it can be brought online again by using the zpool online command. For example:

# zpool online tank c0t5000C500335F95E3d0

When a device is brought online, any data that has been written to the pool is resynchronized with the newly available device. Note that you cannot bring a device online to replace a disk. If you take a device offline, replace the device, and try to bring it online, it remains in the UNAVAIL state.

If you attempt to bring an UNAVAIL device online, a warning message is displayed. You might also see the faulted disk message displayed on the console or written to the /var/adm/messages file. For example:

SUNW-MSG-ID: ZFS-8000-LR, TYPE: Fault, VER: 1, SEVERITY: Major
EVENT-TIME: Wed Jun 20 11:35:26 MDT 2012
PLATFORM: ORCL,SPARC-T3-4, CSN: 1120BDRCCD, HOSTNAME: tardis
SOURCE: zfs-diagnosis, REV: 1.0
EVENT-ID: fb6699c8-6bfb-eefa-88bb-81479182e3b7
DESC: ZFS device 'id1,sd@n5000c500335dc60f/a' in pool 'pond' failed to open.
AUTO-RESPONSE: An attempt will be made to activate a hot spare if available.
IMPACT: Fault tolerance of the pool may be compromised.
REC-ACTION: Use 'fmadm faulty' to provide a more detailed view of this event. 
Run 'zpool status -lx' for more information. Please refer to the associated 
reference document at http://support.oracle.com/msg/ZFS-8000-LR for the latest 
service procedures and policies regarding this diagnosis.

For more information about replacing a faulted device, see Resolving a Missing Device.

You can use the zpool online -e command to expand a LUN. By default, a LUN that is added to a pool is not expanded to its full size unless the autoexpand pool property is enabled. You can expand the LUN automatically by using the zpool online -e command even if the LUN is already online or if the LUN is currently offline. For example:

# zpool online -e tank c0t5000C500335F95E3d0
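Alternatively, you can enable the autoexpand pool property so that LUNs are expanded automatically. For example (a sketch):

# zpool set autoexpand=on tank
# zpool get autoexpand tank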

Clearing Storage Pool Device Errors

If a device is taken offline due to a failure that causes errors to be listed in the zpool status output, you can clear the error counts with the zpool clear command.

If specified with no arguments, this command clears all device errors within the pool. For example:

# zpool clear tank

If one or more devices are specified, this command only clears errors associated with the specified devices. For example:

# zpool clear tank c0t5000C500335F95E3d0

For more information about clearing zpool errors, see Clearing Transient Errors.

Replacing Devices in a Storage Pool

You can replace a device in a storage pool by using the zpool replace command.

If you are physically replacing a device with another device in the same location in a redundant pool, then you might only need to identify the replaced device. On some hardware, ZFS recognizes that the device is a different disk in the same location. For example, to replace a failed disk (c1t1d0) by removing the disk and replacing it in the same location, use the following syntax:

# zpool replace tank c1t1d0

If you are replacing a device in a storage pool with a disk in a different physical location, you must specify both devices. For example:

# zpool replace tank c1t1d0 c1t2d0

If you are replacing a disk in the ZFS root pool, see How to Replace a Disk in a ZFS Root Pool (SPARC or x86/VTOC).

The following are the basic steps for replacing a disk:

  1. Offline the disk, if necessary, with the zpool offline command.

  2. Remove the disk to be replaced.

  3. Insert the replacement disk.

  4. Review the format output to determine if the replacement disk is visible.

    In addition, check to see whether the device ID has changed. If the replacement disk has a WWN, then the device ID for the failed disk is changed.

  5. Let ZFS know the disk is replaced. For example:

    # zpool replace tank c1t1d0

    If the replacement disk has a different device ID, as identified in step 4, include the new device ID. For example:

    # zpool replace tank c0t5000C500335FC3E7d0 c0t5000C500335BA8C3d0
  6. Bring the disk online with the zpool online command, if necessary.

  7. Notify FMA that the device is replaced.

    From fmadm faulty output, identify the zfs://pool=name/vdev=guid string in the Affects: section and provide that string as an argument to the fmadm repaired command.

    # fmadm faulty
    # fmadm repaired zfs://pool=name/vdev=guid

On some systems with SATA disks, you must unconfigure a disk before you can take it offline. If you are replacing a disk in the same slot position on this system, then you can just run the zpool replace command as described in the first example in this section.
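On such a system, unconfiguring and reconfiguring the disk might look like the following sketch, where the SATA attachment point sata1/3 and the device name are illustrative:

# cfgadm -c unconfigure sata1/3::dsk/c1t3d0
(Physically replace the disk)
# cfgadm -c configure sata1/3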

For an example of replacing a SATA disk, see Example 10-1.

Consider the following when replacing devices in a ZFS storage pool:

For more information about replacing devices, see Resolving a Missing Device and Replacing or Repairing a Damaged Device.

Designating Hot Spares in Your Storage Pool

The hot spares feature enables you to identify disks that could be used to replace a failed or faulted device in a storage pool. Designating a device as a hot spare means that the device is not an active device in the pool, but if an active device in the pool fails, the hot spare automatically replaces the failed device.

Devices can be designated as hot spares in the following ways:

  - When the pool is created, by using the zpool create command

  - After the pool is created, by using the zpool add command

The following example shows how to designate devices as hot spares when the pool is created:

# zpool create zeepool mirror c0t5000C500335F95E3d0 c0t5000C500335F907Fd0 \
mirror c0t5000C500335BD117d0 c0t5000C500335DC60Fd0 spare c0t5000C500335E106Bd0 c0t5000C500335FC3E7d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
  scan: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        zeepool                    ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335DC60Fd0  ONLINE       0     0     0
        spares
          c0t5000C500335E106Bd0    AVAIL   
          c0t5000C500335FC3E7d0    AVAIL   

errors: No known data errors

The following example shows how to designate hot spares by adding them to a pool after the pool is created:

# zpool add zeepool spare c0t5000C500335E106Bd0 c0t5000C500335FC3E7d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
  scan: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        zeepool                    ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335DC60Fd0  ONLINE       0     0     0
        spares
          c0t5000C500335E106Bd0    AVAIL   
          c0t5000C500335FC3E7d0    AVAIL   

errors: No known data errors

Hot spares can be removed from a storage pool by using the zpool remove command. For example:

# zpool remove zeepool c0t5000C500335FC3E7d0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
  scan: none requested
config:

        NAME                       STATE     READ WRITE CKSUM
        zeepool                    ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335DC60Fd0  ONLINE       0     0     0
        spares
          c0t5000C500335E106Bd0    AVAIL   

errors: No known data errors

A hot spare cannot be removed if it is currently used by a storage pool.

Consider the following when using ZFS hot spares:

Activating and Deactivating Hot Spares in Your Storage Pool

Hot spares are activated in the following ways:

  - Manual replacement - You replace a failed device with a hot spare by using the zpool replace command.

  - Automatic replacement - When a fault is detected, an available hot spare automatically replaces the failed device.

An UNAVAIL device is automatically replaced if a hot spare is available. For example:

# zpool status -x
  pool: zeepool
 state: DEGRADED
status: One or more devices are unavailable in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or 'fmadm repaired', or replace the device
        with 'zpool replace'.
        Run 'zpool status -v' to see device specific details.
  scan: resilvered 3.15G in 0h0m with 0 errors on Thu Jun 21 16:46:19 2012
config:

        NAME                         STATE     READ WRITE CKSUM
        zeepool                      DEGRADED     0     0     0
          mirror-0                   ONLINE       0     0     0
            c0t5000C500335F95E3d0    ONLINE       0     0     0
            c0t5000C500335F907Fd0    ONLINE       0     0     0
          mirror-1                   DEGRADED     0     0     0
            c0t5000C500335BD117d0    ONLINE       0     0     0
            spare-1                  DEGRADED   449     0     0
              c0t5000C500335DC60Fd0  UNAVAIL      0     0     0
              c0t5000C500335E106Bd0  ONLINE       0     0     0
        spares
          c0t5000C500335E106Bd0      INUSE   

errors: No known data errors

Currently, you can deactivate a hot spare in the following ways:

  - By removing the hot spare from the storage pool with the zpool remove command

  - By detaching a hot spare after the failed disk is physically replaced, as shown in Example 3-8

  - By detaching the failed disk and permanently swapping in the hot spare, as shown in Example 3-9

Example 3-8 Detaching a Hot Spare After the Failed Disk Is Replaced

In this example, the failed disk (c0t5000C500335DC60Fd0) is physically replaced and ZFS is notified by using the zpool replace command.

# zpool replace zeepool c0t5000C500335DC60Fd0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
  scan: resilvered 3.15G in 0h0m with 0 errors on Thu Jun 21 16:53:43 2012
config:

        NAME                       STATE     READ WRITE CKSUM
        zeepool                    ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335DC60Fd0  ONLINE       0     0     0
        spares
          c0t5000C500335E106Bd0    AVAIL   

If necessary, you can use the zpool detach command to return the hot spare to the spare pool. For example:

# zpool detach zeepool c0t5000C500335E106Bd0

Example 3-9 Detaching a Failed Disk and Using the Hot Spare

If you want to replace a failed disk by temporarily or permanently swapping in the hot spare that is currently replacing it, then detach the original (failed) disk. If the failed disk is eventually replaced, then you can add it back to the storage pool as a spare. For example:

# zpool status zeepool
  pool: zeepool
 state: DEGRADED
status: One or more devices are unavailable in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or 'fmadm repaired', or replace the device
        with 'zpool replace'.
        Run 'zpool status -v' to see device specific details.
  scan: scrub in progress since Thu Jun 21 17:01:49 2012
    1.07G scanned out of 6.29G at 220M/s, 0h0m to go
    0 repaired, 17.05% done
config:

        NAME                       STATE     READ WRITE CKSUM
        zeepool                    DEGRADED     0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  ONLINE       0     0     0
          mirror-1                 DEGRADED     0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335DC60Fd0  UNAVAIL      0     0     0
        spares
          c0t5000C500335E106Bd0    AVAIL   

errors: No known data errors
# zpool detach zeepool c0t5000C500335DC60Fd0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
  scan: resilvered 3.15G in 0h0m with 0 errors on Thu Jun 21 17:02:35 2012
config:

        NAME                       STATE     READ WRITE CKSUM
        zeepool                    ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335E106Bd0  ONLINE       0     0     0

errors: No known data errors
(Original failed disk c0t5000C500335DC60Fd0 is physically replaced)
# zpool add zeepool spare c0t5000C500335DC60Fd0
# zpool status zeepool
  pool: zeepool
 state: ONLINE
  scan: resilvered 3.15G in 0h0m with 0 errors on Thu Jun 21 17:02:35 2012
config:

        NAME                       STATE     READ WRITE CKSUM
        zeepool                    ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            c0t5000C500335F95E3d0  ONLINE       0     0     0
            c0t5000C500335F907Fd0  ONLINE       0     0     0
          mirror-1                 ONLINE       0     0     0
            c0t5000C500335BD117d0  ONLINE       0     0     0
            c0t5000C500335E106Bd0  ONLINE       0     0     0
        spares
          c0t5000C500335DC60Fd0    AVAIL   

errors: No known data errors

After a disk is replaced and the spare is detached, let FMA know that the disk is repaired.

# fmadm faulty
# fmadm repaired zfs://pool=name/vdev=guid