Occasionally, you might need to move a storage pool between systems. You would disconnect the storage devices from the original system and reconnect them to the destination system either by physically recabling the devices or by using multiported devices such as the devices on a SAN.
ZFS enables you to export the pool from one system and import it on the destination system even if the systems are of different architectural endianness. For information about replicating or migrating file systems between different storage pools, which might reside on different systems, see Saving, Sending, and Receiving ZFS Data.
To migrate a pool, you must first export it. This operation flushes any unwritten data to disk, writes data to the disk indicating that the export is complete, and removes all information about the pool from the system.
If you do not explicitly export the pool but instead remove the disks manually, you can still import the resulting pool on another system. However, you might lose the last few seconds of data transactions. Also, because the devices are no longer present, the pool will appear as UNAVAIL on the original system. By default, the destination system cannot import a pool that has not been explicitly exported. This condition is necessary to prevent you from accidentally importing an active pool that consists of network-attached storage that is still in use on another system.
To export a pool, use the following command:
# zpool export [option] pool
The command first unmounts any mounted file systems within the pool. If any of the file systems fail to unmount, you can forcefully unmount them by using the –f option. However, if ZFS volumes in the pool are in use, the operation fails even with the –f option. To export a pool with a ZFS volume, first ensure that all consumers of the volume are no longer active.
For more information, see ZFS Volumes.
After this command is executed, the pool is no longer visible on the system.
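A minimal export sketch, using the pool name system1 that appears elsewhere in this document (the pool name and the need for –f are assumptions for illustration):

```shell
# Forcefully unmount any busy file systems in the pool and export it.
zpool export -f system1

# The exported pool no longer appears in the list of active pools.
zpool list
```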
If devices are unavailable at the time of export, the devices cannot be identified as cleanly exported. If one of these devices is later attached to a system without any of the other working devices, it appears as potentially active.
After the pool has been removed from the system, you can attach the devices to the target system. You do not need to attach them under the same device name. ZFS detects any moved or renamed devices and adjusts the configuration appropriately. Note that ZFS can handle some situations in which only some of the devices are available. However, a successful pool migration depends on the overall health of all the devices.
Use the following general command syntax for all pool import operations:
# zpool import [options] [pool|ID-number]
To discover available pools that can be imported, run the zpool import command without specifying pools. In the output, the pools are identified by names and unique number identifiers. If pools available for import share the same name, use the numeric identifier to import the correct pool.
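A sketch of disambiguating by identifier (the ID shown is illustrative, in the style of the discovery output):

```shell
# Discover importable pools; each is listed with a name and a unique ID.
zpool import

# When two exported pools share the same name, import by the numeric
# identifier instead of the ambiguous name.
zpool import 4715259469716913940
```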
In the following example, one of the devices is missing but you can still import the pool because the mirrored data remains accessible.
# zpool import
  pool: system1
    id: 4715259469716913940
 state: DEGRADED
status: One or more devices are unavailable.
action: The pool can be imported despite missing or damaged devices.
        The fault tolerance of the pool may be compromised if imported.
config:

        system1                    DEGRADED
          mirror-0                 DEGRADED
            c0t5000C500335E106Bd0  ONLINE
            c0t5000C500335FC3E7d0  UNAVAIL  cannot open

device details:

        c0t5000C500335FC3E7d0      UNAVAIL  cannot open
        status: ZFS detected errors on this device.
                The device was missing.
In the following example, because two disks are missing from a RAID-Z virtual device, not enough redundant data exists to reconstruct the pool. With insufficient available devices, ZFS cannot import the pool.
# zpool import
  pool: mothership
    id: 3702878663042245922
 state: UNAVAIL
status: One or more devices are unavailable.
action: The pool cannot be imported due to unavailable devices or data.
config:

        mothership    UNAVAIL  insufficient replicas
          raidz1-0    UNAVAIL  insufficient replicas
            c8t0d0    UNAVAIL  cannot open
            c8t1d0    UNAVAIL  cannot open
            c8t2d0    ONLINE
            c8t3d0    ONLINE

device details:

        c8t0d0        UNAVAIL  cannot open
        status: ZFS detected errors on this device.
                The device was missing.

        c8t1d0        UNAVAIL  cannot open
        status: ZFS detected errors on this device.
                The device was missing.
To import a specific pool, specify the pool name or its numeric identifier with the zpool import command. Additionally, you can rename a pool while importing it. For example:
# zpool import system1 mpool
This command imports the exported pool system1 and renames it mpool. The new pool name is persistent.
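To confirm that the rename persisted, you can list the pool under its new name; a minimal sketch using the names from the example above:

```shell
# Import the exported pool "system1" under the new name "mpool".
zpool import system1 mpool

# The pool now appears only under its new, persistent name.
zpool list mpool
```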
Caution - During an import operation, warnings occur if the pool might be in use on another system.
cannot import 'pool': pool may be in use on another system
use '-f' to import anyway

Do not attempt to import a pool that is active on one system to another system. ZFS is not a native cluster, distributed, or parallel file system and cannot provide concurrent access from multiple, different systems.
You can also import pools under an alternate root by using the –R option. For more information, see Using a ZFS Pool With an Alternate Root Location.
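A minimal alternate-root sketch, assuming /mnt as the alternate root and the pool name system1 from earlier examples:

```shell
# Import the pool with all of its mount points relative to /mnt
# rather than the paths recorded in the pool.
zpool import -R /mnt system1

# File systems in the pool mount under the alternate root.
zfs list -r system1
```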
By default, a pool with a missing log device cannot be imported. You can use the zpool import –m command to force the import of a pool with a missing log device.
In the following example, the output indicates a missing mirrored log when you first import the pool dozer.
# zpool import dozer
The devices below are missing, use '-m' to import the pool anyway:
            mirror-1 [log]
              c3t3d0
              c3t4d0
cannot import 'dozer': one or more devices is currently unavailable
To proceed with importing the pool with the missing mirrored log, use the –m option.
# zpool import -m dozer
# zpool status dozer
  pool: dozer
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: URL to My Oracle Support knowledge article
  scan: scrub repaired 0 in 0h0m with 0 errors on Fri Oct 15 16:51:39 2010
config:

        NAME                      STATE     READ WRITE CKSUM
        dozer                     DEGRADED     0     0     0
          mirror-0                ONLINE       0     0     0
            c3t1d0                ONLINE       0     0     0
            c3t2d0                ONLINE       0     0     0
        logs
          mirror-1                UNAVAIL      0     0     0  insufficient replicas
            13514061426445294202  UNAVAIL      0     0     0  was c3t3d0
            16839344638582008929  UNAVAIL      0     0     0  was c3t4d0
The imported pool remains in a DEGRADED state. Based on the output recommendation, attach the missing log devices. Then, run the zpool clear command to clear the pool errors.
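A sketch of the recovery steps, using the pool and device names from the example above (this assumes the original log devices have been physically reattached):

```shell
# Bring the previously missing log devices back online.
zpool online dozer c3t3d0 c3t4d0

# Clear the pool's error counters once the devices are healthy again.
zpool clear dozer
zpool status dozer
```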
If a pool is so damaged that it cannot be accessed, importing in read-only mode might enable you to recover the pool's data. For example:
# zpool import -o readonly=on system1
# zpool scrub system1
cannot scrub system1: pool is read-only
When a pool is imported in read-only mode, the following conditions apply:
All file systems and volumes are mounted in read-only mode.
Pool transaction processing is disabled. Any pending synchronous writes in the intent log are not made until the pool is imported in read-write mode.
Attempts to set a pool property during the read-only import are ignored.
You can set the pool back to read-write mode by exporting and importing the pool. For example:
# zpool export system1
# zpool import system1
# zpool scrub system1
By default, the zpool import command searches devices only within the /dev/dsk directory. If devices exist in another directory, or you are using pools backed by files, you must use the –d option to search alternate directories. For example:
# zpool create mpool mirror /file/a /file/b
# zpool export mpool
# zpool import -d /file
  pool: mpool
    id: 7318163511366751416
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        mpool        ONLINE
          mirror-0   ONLINE
            /file/a  ONLINE
            /file/b  ONLINE
# zpool import -d /file mpool
If devices exist in multiple directories, you can specify multiple –d options.
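A sketch combining search directories (the directory layout is a hypothetical example, following the file-backed pool above):

```shell
# Search both a file-backed directory and the default device directory
# for the pool's devices, then import it.
zpool import -d /file -d /dev/dsk mpool
```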
The following command imports the pool mpool by identifying one of the pool's specific devices, /dev/etc/c2t3d0:
# zpool import -d /dev/etc/c2t3d0 mpool
# zpool status mpool
  pool: mpool
 state: ONLINE
  scan: resilvered 952K in 0h0m with 0 errors on Fri Jun 29 16:22:06 2012
config:

        NAME        STATE     READ WRITE CKSUM
        mpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c2t3d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
You can use the zpool import –D command to recover a storage pool that has been destroyed.
In the following example, the pool system1 is indicated as destroyed.
# zpool import -D
  pool: system1
    id: 5154272182900538157
 state: ONLINE (DESTROYED)
action: The pool can be imported using its name or numeric identifier.
config:

        system1     ONLINE
          mirror-0  ONLINE
            c1t0d0  ONLINE
            c1t1d0  ONLINE
To recover the destroyed pool, import it with the –D option.
# zpool import -D system1
# zpool status system1
  pool: system1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        system1     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0

errors: No known data errors
If one of the devices in the destroyed pool is unavailable, you might still recover the destroyed pool by including the –f option. In this scenario, you would import the degraded pool and then attempt to fix the device failure. For example:
# zpool import -D
  pool: dozer
    id: 4107023015970708695
 state: DEGRADED (DESTROYED)
status: One or more devices are unavailable.
action: The pool can be imported despite missing or damaged devices.
        The fault tolerance of the pool may be compromised if imported.
config:

        dozer         DEGRADED
          raidz2-0    DEGRADED
            c8t0d0    ONLINE
            c8t1d0    ONLINE
            c8t2d0    ONLINE
            c8t3d0    UNAVAIL  cannot open
            c8t4d0    ONLINE

device details:

        c8t3d0        UNAVAIL  cannot open
        status: ZFS detected errors on this device.
                The device was missing.

# zpool import -Df dozer
# zpool status -x
  pool: dozer
 state: DEGRADED
status: One or more devices are unavailable in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or 'fmadm repaired', or replace the device
        with 'zpool replace'.
        Run 'zpool status -v' to see device specific details.
  scan: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        dozer                    DEGRADED     0     0     0
          raidz2-0               DEGRADED     0     0     0
            c8t0d0               ONLINE       0     0     0
            c8t1d0               ONLINE       0     0     0
            c8t2d0               ONLINE       0     0     0
            4881130428504041127  UNAVAIL      0     0     0
            c8t4d0               ONLINE       0     0     0

errors: No known data errors
# zpool online dozer c8t3d0
# zpool status -x
all pools are healthy