The following section summarizes new features in the ZFS file system.
ZFS and Flash installation support – In the Solaris 10 10/09 release, you can set up a JumpStart profile to identify a flash archive of a ZFS root pool. For more information, see the Solaris ZFS Administration Guide.
Setting ZFS user and group quotas – In previous Solaris releases, you could apply quotas and reservations to ZFS file systems to manage and reserve disk space. In this Solaris release, you can set a quota on the amount of space consumed by files that are owned by a particular user or group. You might consider setting user and group quotas in an environment with a large number of users or groups. You can set user or group quotas by setting the userquota and groupquota properties as follows:
# zfs set userquota@user1=5G tank/data
# zfs set groupquota@staff=10G tank/staff/admins
You can display a user's or group's current quota setting as follows:
# zfs get userquota@user1 tank/data
NAME       PROPERTY         VALUE  SOURCE
tank/data  userquota@user1  5G     local
# zfs get groupquota@staff tank/staff/admins
NAME               PROPERTY          VALUE  SOURCE
tank/staff/admins  groupquota@staff  10G    local
Using ZFS ACL pass-through inheritance for execute permission – In previous Solaris releases, you could apply ACL inheritance so that all files are created with 0664 or 0666 permissions. If you want to optionally include the execute bit from the file creation mode in the inherited ACL, you can use pass-through inheritance for execute permission in this release.
If aclinherit=passthrough-x is enabled on a ZFS dataset, you can include execute permission for an output file that is generated from cc or gcc tools. If the inherited ACL does not include execute permission, then the executable output from the compiler won't be executable until you use the chmod command to change the file's permissions.
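As an illustrative sketch (the dataset name is hypothetical), you can enable this mode on a dataset and confirm the setting as follows:

```shell
# zfs set aclinherit=passthrough-x tank/data
# zfs get aclinherit tank/data
```

Files subsequently created under the dataset inherit execute permission only when the creating application requests it in the file creation mode.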
Using cache devices in your ZFS storage pool – In the Solaris 10 10/09 release, you can create a pool and specify cache devices, which are used to cache storage pool data. Cache devices provide an additional layer of caching between main memory and disk. Using cache devices provides the greatest performance improvement for random-read workloads of mostly static content.
One or more cache devices can be specified when the pool is created. For example:
# zpool create pool mirror c0t2d0 c0t4d0 cache c0t0d0
# zpool status pool
  pool: pool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
        cache
          c0t0d0    ONLINE       0     0     0

errors: No known data errors
For information about determining whether using cache devices is appropriate for your environment, see the Solaris ZFS Administration Guide.
ZFS property enhancements – The Solaris 10 10/09 release provides the following ZFS property enhancements:
You can set ZFS file system properties at pool creation time. In the following example, compression is enabled on the ZFS file system that is created when the pool is created.
# zpool create -O compression=on pool mirror c0t1d0 c0t2d0
You can set two cache properties on a ZFS file system that allow you to control what is cached in the primary cache (ARC) or the secondary cache (L2ARC). The cache properties are set as follows:
primarycache – Controls what is cached in the ARC.
secondarycache – Controls what is cached in the L2ARC.
You can set these properties on an existing file system or when the file system is created. For example:
# zfs set primarycache=metadata tank/datab
# zfs create -o primarycache=metadata tank/newdatab
Some database environments might benefit from not caching user data. You will have to determine whether setting cache properties is appropriate for your environment.
For more information, see the Solaris ZFS Administration Guide.
You can use the space usage properties to identify space usage for clones, file systems, and volumes, but not snapshots. The properties are as follows:
usedbychildren – Identifies the amount of space that is used by children of this dataset, which would be freed if all the dataset's children were destroyed. The property abbreviation is usedchild.
usedbydataset – Identifies the amount of space that is used by this dataset itself, which would be freed if the dataset was destroyed, after first destroying any snapshots and removing any refreservation. The property abbreviation is usedds.
usedbyrefreservation – Identifies the amount of space that is used by a refreservation set on this dataset, which would be freed if the refreservation was removed. The property abbreviation is usedrefreserv.
usedbysnapshots – Identifies the amount of space that is consumed by snapshots of this dataset. In particular, it is the amount of space that would be freed if all of this dataset's snapshots were destroyed. Note that this is not simply the sum of the snapshots' used properties, because space can be shared by multiple snapshots. The property abbreviation is usedsnap.
These new properties break down the value of the used property into the various elements that consume space. In particular, the value of the used property breaks down as follows:
used property = usedbychildren + usedbydataset + usedbyrefreservation + usedbysnapshots
You can view these properties by using the zfs list -o space command. For example:
# zfs list -o space
NAME               AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
pool               33.2G    72K         0     21K              0        51K
rpool              27.0G  6.27G     20.5K     97K              0      6.27G
rpool/ROOT         27.0G  4.73G         0     21K              0      4.73G
rpool/ROOT/zfsBE   27.0G  4.73G     97.5M   4.63G              0          0
rpool/dump         27.0G  1.00G       16K   1.00G              0          0
rpool/export       27.0G    60K       16K     23K              0        21K
rpool/export/home  27.0G    21K         0     21K              0          0
rpool/swap         27.5G   553M         0   41.5M           512M          0
In this release, snapshots are omitted from zfs list output by default. The listsnaps pool property controls whether snapshot information is displayed by the zfs list command. The default value is off, which means snapshot information is not displayed. Regardless of this setting, you can display snapshot information with the zfs list -t snapshot command.
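For example (the pool name is illustrative), you can restore the previous behavior on a per-pool basis, or list snapshots explicitly:

```shell
# zpool set listsnaps=on pool
# zfs list -r pool
# zfs list -t snapshot
```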
ZFS log device recovery – In the Solaris 10 10/09 release, ZFS identifies intent log failures in the zpool status command. FMA reports these errors as well. Both ZFS and FMA describe how to recover from an intent log failure.
For example, if the system shuts down abruptly before synchronous write operations are committed to a pool with a separate log device, you will see intent-log related error messages in the zpool status output. For information about resolving log device failures, see the Solaris ZFS Administration Guide.
Using ZFS ACL Sets – The Solaris 10 10/09 release provides the ability to apply NFSv4-style ACLs in sets, rather than applying different ACL permissions individually. The following ACL sets are provided:
full_set = all permissions
modify_set = all permissions except write_acl and write_owner
read_set = read_data, read_attributes, read_xattr, and read_acl
write_set = write_data, append_data, write_attributes, and write_xattr
These ACL sets are pre-defined and cannot be modified.
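For example, assuming a hypothetical file and user, a whole ACL set can be granted with a single chmod entry and then reviewed:

```shell
# chmod A+user:otto:read_set:allow file.1
# ls -v file.1
```

This grants the user read_data, read_attributes, read_xattr, and read_acl permissions in one step.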
Zone migration in a ZFS environment – In the Solaris 10 5/09 release, support for migrating zones in a ZFS environment with Live Upgrade is extended. For more information, see the Solaris ZFS Administration Guide.
ZFS installation and boot support – In the Solaris 10 10/08 release, you can install and boot a ZFS root file system. The initial installation option or the JumpStart feature is available to install a ZFS root file system. Or, you can use the Solaris Live Upgrade feature to migrate a UFS root file system to a ZFS root file system. ZFS support for swap and dump devices is also provided.
Rolling back a ZFS dataset without unmounting – In the Solaris 10 10/08 release, you can roll back a dataset without unmounting it first. This feature means that the zfs rollback -f option is no longer needed to force an unmount operation. The -f option is no longer supported, and is ignored if specified.
Enhancements to the zfs send command – The Solaris 10 10/08 release includes the following enhancements to the zfs send command:
You can send all incremental streams from one snapshot to a cumulative snapshot. For example:
# zfs list
NAME           USED  AVAIL  REFER  MOUNTPOINT
pool           428K  16.5G    20K  /pool
pool/fs         71K  16.5G    21K  /pool/fs
pool/fs@snapA   16K      -  18.5K  -
pool/fs@snapB   17K      -    20K  -
pool/fs@snapC   17K      -  20.5K  -
pool/fs@snapD     0      -    21K  -
# zfs send -I pool/fs@snapA pool/fs@snapD > /snaps/fs@combo
This syntax sends all incremental snapshots from fs@snapA through fs@snapD to the fs@combo file.
You can send an incremental stream from the origin snapshot to create a clone. The original snapshot must already exist on the receiving side to accept the incremental stream. For example:
# zfs send -I pool/fs@snap1 pool/clone@snapA > /snaps/fsclonesnap-I
  .
  .
# zfs receive -F pool/clone < /snaps/fsclonesnap-I
You can send a replication stream of all descendent file systems, up to the named snapshots. When received, all properties, snapshots, descendent file systems, and clones are preserved. For example:
# zfs send -R pool/fs@snap > snaps/fs-R
You can send an incremental replication stream. For example:
# zfs send -R -[iI] @snapA pool/fs@snapD
For extended examples, see the Solaris ZFS Administration Guide.
ZFS quotas and reservations for file system data only – In the Solaris 10 10/08 release, dataset quotas and reservations are provided that do not include descendents, such as snapshots and clones, in the space consumption accounting. The existing ZFS quota and reservation features remain as in previous Solaris releases.
The refquota property limits the amount of space a dataset can consume. This property enforces a hard limit on the amount of space that can be used. This hard limit does not include space used by descendents, such as snapshots and clones.
The refreservation property sets the minimum amount of space that is guaranteed to a dataset, not including its descendents.
For example, you can set a 10-Gbyte refquota for studentA that sets a 10-Gbyte hard limit of referenced space. For additional flexibility, you can set a 20-Gbyte quota that allows you to manage studentA's snapshots.
# zfs set refquota=10g tank/studentA
# zfs set quota=20g tank/studentA
ZFS storage pool properties – New ZFS storage pool property information is provided in the Solaris 10 10/08 release.
Display all pool attributes – You can use the zpool get all pool command to display all pool property information. For example:
# zpool get all users
NAME   PROPERTY     VALUE                 SOURCE
users  size         16.8G                 -
users  used         194K                  -
users  available    16.7G                 -
users  capacity     0%                    -
users  altroot      -                     default
users  health       ONLINE                -
users  guid         14526624140147884971  -
users  version      10                    default
users  bootfs       -                     default
users  delegation   on                    default
users  autoreplace  off                   default
users  cachefile    -                     default
users  failmode     wait                  default
The cachefile property – This release provides the cachefile property, which controls where pool configuration information is cached. All pools in the cache are automatically imported when the system boots. However, installation and clustering environments might need to cache this information in a different location so that pools are not automatically imported.
You can set this property to cache pool configuration in a different location that can be imported later by using the zpool import -c command. For most ZFS configurations, this property would not be used.
The cachefile property is not persistent and is not stored on disk. This property replaces the temporary property that was used to indicate that pool information should not be cached in previous Solaris releases.
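A sketch of using an alternate cache file follows (the file path and pool name are illustrative); the pool's configuration is recorded in the alternate file at creation time, and the same file is named when the pool is imported later:

```shell
# zpool create -o cachefile=/etc/zfs/altpool.cache users mirror c0t1d0 c1t1d0
# zpool export users
# zpool import -c /etc/zfs/altpool.cache users
```

Setting cachefile=none prevents the pool configuration from being cached at all, so the pool is not automatically imported at boot.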
The failmode property – This release provides the failmode property for determining the behavior of a catastrophic pool failure due to a loss of device connectivity or the failure of all devices in the pool. The failmode property can be set to these values: wait, continue, or panic. The default value is wait, which means you must reconnect the device or replace a failed device and clear the error with the zpool clear command.
The failmode property is set like other settable ZFS properties, which can be set either before or after the pool is created. For example:
# zpool set failmode=continue tank
# zpool get failmode tank
NAME  PROPERTY  VALUE     SOURCE
tank  failmode  continue  local
# zpool create -o failmode=continue users mirror c0t1d0 c1t1d0
ZFS and file system mirror mounts – In the Solaris 10 10/08 release, NFSv4 mount enhancements are provided to make ZFS file systems more accessible to NFS clients.
When file systems are created on the NFS server, the NFS client can automatically discover these newly created file systems within their existing mount of a parent file system.
For example, if the server neo already shares the tank file system and client zee has it mounted, /tank/baz is automatically visible on the client after it is created on the server.
zee# mount neo:/tank /mnt
zee# ls /mnt
baa    bar

neo# zfs create tank/baz

zee% ls /mnt
baa    bar    baz
zee% ls /mnt/baz
file1  file2
ZFS command history enhancements (zpool history) – In the Solaris 10 10/08 release, the zpool history command provides the following new features:
ZFS file system event information is displayed. For example:
# zpool history users
History for 'users':
2008-07-10.09:43:05 zpool create users mirror c1t1d0 c1t2d0
2008-07-10.09:43:48 zfs create users/home
2008-07-10.09:43:56 zfs create users/home/markm
2008-07-10.09:44:02 zfs create users/home/marks
2008-07-10.09:44:19 zfs snapshot -r users/home@yesterday
A -l option for displaying a long format that includes the user name, the host name, and the zone in which the operation was performed. For example:
# zpool history -l users
History for 'users':
2008-07-10.09:43:05 zpool create users mirror c1t1d0 c1t2d0 [user root on corona:global]
2008-07-10.09:43:13 zfs create users/marks [user root on corona:global]
2008-07-10.09:43:44 zfs destroy users/marks [user root on corona:global]
2008-07-10.09:43:48 zfs create users/home [user root on corona:global]
2008-07-10.09:43:56 zfs create users/home/markm [user root on corona:global]
2008-07-10.09:44:02 zfs create users/home/marks [user root on corona:global]
2008-07-11.10:44:19 zfs snapshot -r users/home@yesterday [user root on corona:global]
A -i option for displaying internal event information that can be used for diagnostic purposes. For example:
# zpool history -i users
History for 'users':
2008-07-10.09:43:05 zpool create users mirror c1t1d0 c1t2d0
2008-07-10.09:43:13 [internal create txg:6] dataset = 21
2008-07-10.09:43:13 zfs create users/marks
2008-07-10.09:43:48 [internal create txg:12] dataset = 27
2008-07-10.09:43:48 zfs create users/home
2008-07-10.09:43:55 [internal create txg:14] dataset = 33
2008-07-10.09:43:56 zfs create users/home/markm
2008-07-10.09:44:02 [internal create txg:16] dataset = 39
2008-07-10.09:44:02 zfs create users/home/marks
2008-07-10.09:44:19 [internal snapshot txg:21] dataset = 42
2008-07-10.09:44:19 [internal snapshot txg:21] dataset = 44
2008-07-10.09:44:19 [internal snapshot txg:21] dataset = 46
2008-07-10.09:44:19 zfs snapshot -r users/home@yesterday
Upgrading ZFS file systems (zfs upgrade) – In the Solaris 10 10/08 release, you can use the zfs upgrade command to upgrade your existing ZFS file systems with new file system enhancements. ZFS storage pools have a similar upgrade feature to provide pool enhancements to existing storage pools.
For example:
# zfs upgrade
This system is currently running ZFS filesystem version 2.

The following filesystems are out of date, and can be upgraded.
After being upgraded, these filesystems (and any 'zfs send'
streams generated from subsequent snapshots) will no longer be
accessible by older software versions.

VER  FILESYSTEM
---  ------------
 1   datab
 1   datab/users
 1   datab/users/area51
File systems that are upgraded and any streams that are created from those upgraded file systems by the zfs send command are not accessible on systems that are running older software releases.
ZFS delegated administration – In the Solaris 10 10/08 release, you can delegate granular permissions to perform ZFS administration tasks to non-privileged users.
You can use the zfs allow and zfs unallow commands to grant and remove permissions.
You can modify the ability to use delegated administration with the pool's delegation property. For example:
# zpool get delegation users
NAME   PROPERTY    VALUE  SOURCE
users  delegation  on     default
# zpool set delegation=off users
# zpool get delegation users
NAME   PROPERTY    VALUE  SOURCE
users  delegation  off    local
By default, the delegation property is enabled.
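For example, a sketch of delegating and then revoking dataset permissions for a hypothetical user:

```shell
# zfs allow cindys create,destroy,mount,snapshot tank/home
# zfs allow tank/home
# zfs unallow cindys tank/home
```

Running zfs allow with only a dataset name displays the permissions currently delegated on that dataset.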
Setting up separate ZFS logging devices – The ZFS intent log (ZIL) is provided to satisfy POSIX requirements for synchronous transactions. For example, databases often require their transactions to be on stable storage devices when returning from a system call. NFS and other applications can also use fsync() to ensure data stability. By default, the ZIL is allocated from blocks within the main storage pool. However, better performance in the Solaris 10 10/08 release might be possible by using separate intent log devices in your ZFS storage pool, such as NVRAM or a dedicated disk.
Log devices for the ZIL are not related to database log files.
You can set up a ZFS logging device when the storage pool is created or after the pool is created. For examples of setting up log devices, see Solaris ZFS Administration Guide.
Creating intermediate ZFS datasets – In the Solaris 10 10/08 release, you can use the -p option with the zfs create, zfs clone, and zfs rename commands to quickly create any intermediate datasets that do not already exist.
For example, create ZFS datasets (users/area51) in the datab storage pool.
# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
datab  106K  16.5G    18K  /datab
# zfs create -p -o compression=on datab/users/area51
If the intermediate dataset exists during the create operation, the operation completes successfully.
Properties specified apply to the target dataset, not to the intermediate datasets. For example:
# zfs get mountpoint,compression datab/users/area51
NAME                PROPERTY     VALUE                SOURCE
datab/users/area51  mountpoint   /datab/users/area51  default
datab/users/area51  compression  on                   local
The intermediate dataset is created with the default mount point. Any additional properties are disabled for the intermediate dataset. For example:
# zfs get mountpoint,compression datab/users
NAME         PROPERTY     VALUE         SOURCE
datab/users  mountpoint   /datab/users  default
datab/users  compression  off           default
For more information, see zfs(1M).
ZFS hot-plugging enhancements – In the Solaris 10 10/08 release, ZFS more effectively responds to removed devices and provides a mechanism to automatically identify devices that are inserted:
You can replace an existing device with an equivalent device without having to use the zpool replace command.
The autoreplace property controls automatic device replacement. If set to off, device replacement must be initiated by the administrator by using the zpool replace command. If set to on, any new device that is found in the same physical location as a device that previously belonged to the pool, is automatically formatted and replaced. The default behavior is off.
A device is placed in the REMOVED storage pool state if the device or hot spare was physically removed while the system was running. A hot spare device is substituted for the removed device, if available.
If a device is removed and then inserted, the device is placed online. If a hot spare was activated when the device is re-inserted, the hot spare is removed when the online operation completes.
Automatic detection when devices are removed or inserted is hardware-dependent and might not be supported on all platforms. For example, USB devices are automatically configured upon insertion. However, you might have to use the cfgadm -c configure command to configure a SATA drive.
Hot spares are checked periodically to make sure that they are online and available.
For more information, see zpool(1M).
Recursively renaming ZFS snapshots (zfs rename -r) – In the Solaris 10 10/08 release, you can recursively rename all descendent ZFS snapshots by using the zfs rename -r command.
For example, snapshot a set of ZFS file systems.
# zfs snapshot -r users/home@today
# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
users                   216K  16.5G    20K  /users
users/home               76K  16.5G    22K  /users/home
users/home@today           0      -    22K  -
users/home/markm         18K  16.5G    18K  /users/home/markm
users/home/markm@today     0      -    18K  -
users/home/marks         18K  16.5G    18K  /users/home/marks
users/home/marks@today     0      -    18K  -
users/home/neil          18K  16.5G    18K  /users/home/neil
users/home/neil@today      0      -    18K  -
Then, rename the snapshots the following day.
# zfs rename -r users/home@today @yesterday
# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
users                       216K  16.5G    20K  /users
users/home                   76K  16.5G    22K  /users/home
users/home@yesterday           0      -    22K  -
users/home/markm             18K  16.5G    18K  /users/home/markm
users/home/markm@yesterday     0      -    18K  -
users/home/marks             18K  16.5G    18K  /users/home/marks
users/home/marks@yesterday     0      -    18K  -
users/home/neil              18K  16.5G    18K  /users/home/neil
users/home/neil@yesterday      0      -    18K  -
Snapshots are the only datasets that can be renamed recursively.
GZIP compression now available for ZFS – In the Solaris 10 10/08 release, you can set gzip compression on ZFS file systems in addition to lzjb compression. You can specify compression as gzip, which uses the default gzip compression level, or as gzip-N, where N equals 1 through 9. For example:
# zfs create -o compression=gzip users/home/snapshots
# zfs get compression users/home/snapshots
NAME                  PROPERTY     VALUE  SOURCE
users/home/snapshots  compression  gzip   local
# zfs create -o compression=gzip-9 users/home/oldfiles
# zfs get compression users/home/oldfiles
NAME                 PROPERTY     VALUE   SOURCE
users/home/oldfiles  compression  gzip-9  local
Storing multiple copies of ZFS user data – A ZFS file system automatically stores metadata multiple times across different disks, if possible, as a reliability feature. This feature is known as ditto blocks. In the Solaris 10 10/08 release, you can specify that multiple copies of user data are also stored per file system by using the zfs set copies command. For example:
# zfs set copies=2 users/home
# zfs get copies users/home
NAME        PROPERTY  VALUE  SOURCE
users/home  copies    2      local
Available values are 1, 2, or 3. The default value is 1. These copies are in addition to any pool-level redundancy, such as in a mirrored or RAID-Z configuration.
For more information about using this property, see the Solaris ZFS Administration Guide.
ZFS command history (zpool history) – In the Solaris 10 8/07 release, ZFS automatically logs successful zfs and zpool commands that modify pool state information. This feature enables you or Sun support personnel to identify the exact ZFS commands that were executed to troubleshoot an error scenario.
Improved storage pool status information (zpool status) – In the Solaris 10 8/07 release, you can use the zpool status -v command to display a list of files with persistent errors. Previously, you had to use the find -inum command to identify the file names from the list of displayed inodes.
ZFS and Solaris iSCSI improvements – In the Solaris 10 8/07 release, you can create a ZFS volume as a Solaris iSCSI target device by setting the shareiscsi property on the ZFS volume. This method is a convenient way to quickly set up a Solaris iSCSI target. For example:
# zfs create -V 2g tank/volumes/v2
# zfs set shareiscsi=on tank/volumes/v2
# iscsitadm list target
Target: tank/volumes/v2
    iSCSI Name: iqn.1986-03.com.sun:02:984fe301-c412-ccc1-cc80-cf9a72aa062a
    Connections: 0
After the iSCSI target is created, you would set up the iSCSI initiator. For information about setting up a Solaris iSCSI initiator, see Chapter 14, Configuring Solaris iSCSI Targets and Initiators (Tasks), in System Administration Guide: Devices and File Systems.
For more information about managing a ZFS volume as an iSCSI target, see the Solaris ZFS Administration Guide.
ZFS property improvements
ZFS xattr property – In the Solaris 10 8/07 release, you can use the xattr property to disable or enable extended attributes for a specific ZFS file system. The default value is on.
ZFS canmount property – In the Solaris 10 8/07 release, you use the canmount property to specify whether a dataset can be mounted by using the zfs mount command.
ZFS user properties – In the Solaris 10 8/07 release, ZFS supports user properties, in addition to the standard native properties that can either export internal statistics or control ZFS file system behavior. User properties have no effect on ZFS behavior, but you can use them to annotate datasets with information that is meaningful in your environment.
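The following sketch illustrates these property improvements; the dataset names and the dept:costcenter user property are hypothetical (user property names must contain a colon to distinguish them from native properties):

```shell
# zfs set xattr=off tank/data
# zfs set canmount=off tank/home
# zfs set dept:costcenter=1234 tank/accounting
# zfs get dept:costcenter tank/accounting
```

Setting canmount=off on a parent dataset such as tank/home lets it serve purely as a container for property inheritance without being mountable itself.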
Setting properties when creating ZFS file systems – In the Solaris 10 8/07 release, you can set properties when you create a file system, in addition to setting properties after the file system is created.
The following examples illustrate equivalent syntax:
# zfs create tank/home
# zfs set mountpoint=/export/zfs tank/home
# zfs set sharenfs=on tank/home
# zfs set compression=on tank/home
Or, set the properties when the file system is created.
# zfs create -o mountpoint=/export/zfs -o sharenfs=on -o compression=on tank/home
Display all ZFS file system information – In the Solaris 10 8/07 release, you can use various forms of the zfs get command to display information about all datasets if you do not specify a dataset. In previous releases, information for all datasets could not be retrieved with the zfs get command.
For example:
# zfs get -s local all
tank/home          atime  off  local
tank/home/bonwick  atime  off  local
tank/home/marks    quota  50G  local
New zfs receive -F option – In the Solaris 10 8/07 release, you can use the new -F option to the zfs receive command to force a rollback of the file system to the most recent snapshot before doing the receive operation. Using this option might be necessary when the file system is modified between the time a rollback occurs and the receive operation is initiated.
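For example, a minimal sketch (the dataset and file names are illustrative), where the receiving file system is rolled back to its most recent snapshot before the incremental stream is applied:

```shell
# zfs send -i tank/data@snap1 tank/data@snap2 > /snaps/data-incr
# zfs receive -F tank/data2 < /snaps/data-incr
```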
Recursive ZFS snapshots – In the Solaris 10 11/06 release, recursive snapshots are available. When you use the zfs snapshot command to create a file system snapshot, you can use the -r option to recursively create snapshots for all descendant file systems. In addition, using the -r option recursively destroys all descendant snapshots when a snapshot is destroyed.
Double Parity RAID-Z (raidz2) – In the Solaris 10 11/06 release, a replicated RAID-Z configuration can now have either single parity or double parity, which means that one or two device failures, respectively, can be sustained without any data loss. You can specify the raidz2 keyword for a double-parity RAID-Z configuration. Or, you can specify the raidz or raidz1 keyword for a single-parity RAID-Z configuration.
Hot spares for ZFS storage pool devices – Starting in the Solaris 10 11/06 release, the ZFS hot spares feature enables you to identify disks that could be used to replace a failed or faulted device in one or more storage pools. Designating a device as a hot spare means that if an active device in the pool fails, the hot spare automatically replaces the failed device. Or, you can manually replace a device in a storage pool with a hot spare.
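For example (device names are illustrative), spares can be designated at pool creation time, added later, and used for a manual replacement:

```shell
# zpool create tank mirror c1t1d0 c2t1d0 spare c1t2d0
# zpool add tank spare c2t2d0
# zpool replace tank c1t1d0 c1t2d0
```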
Replacing a ZFS file system with a ZFS clone (zfs promote) – In the Solaris 10 11/06 release, the zfs promote command enables you to replace an existing ZFS file system with a clone of that file system. This feature is helpful when you want to run tests on an alternative version of a file system and then, make that alternative version of the file system the active file system.
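A sketch of this workflow, assuming hypothetical dataset names: clone a snapshot, test the clone, and then promote it so that it becomes the origin file system:

```shell
# zfs clone tank/test@today tank/testbeta
# zfs promote tank/testbeta
```

After the promotion, the original file system can be renamed or destroyed, because the clone now owns the shared snapshot history.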
ZFS and zones improvements – In the Solaris 10 11/06 release, the ZFS and zones interaction is improved. On a Solaris system with zones installed, you can use the zoneadm clone feature to copy the data from an existing source ZFS zonepath to a target ZFS zonepath on your system. You cannot use the ZFS clone feature to clone the non-global zone. You must use the zoneadm clone command. For more information, see System Administration Guide: Solaris Containers-Resource Management and Solaris Zones.
Upgrading ZFS storage pools (zpool upgrade) – Starting in the Solaris 10 6/06 release, you can upgrade your storage pools to a newer version to take advantage of the latest features by using the zpool upgrade command. In addition, the zpool status command has been modified to notify you when your pools are running older versions.
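For example, the -v option displays the pool versions and features supported by the current software, and the -a option upgrades all pools on the system:

```shell
# zpool upgrade -v
# zpool upgrade -a
```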
Clearing device errors – Starting in the Solaris 10 6/06 release, you can use the zpool clear command to clear error counts that are associated with a device or the pool. Previously, error counts were cleared when a device in a pool was brought online with the zpool online command.
Recovering destroyed pools – In the Solaris 10 6/06 release, the zpool import -D command enables you to recover pools that were previously destroyed with the zpool destroy command.
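For example (the pool name is illustrative), running zpool import -D with no argument lists destroyed pools that are still recoverable, and naming a pool re-imports it:

```shell
# zpool destroy users
# zpool import -D
# zpool import -D users
```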
ZFS backup and restore commands renamed – In the Solaris 10 6/06 release, the zfs backup and zfs restore commands are renamed to zfs send and zfs receive to more accurately describe their function. Their function is to save and restore ZFS data stream representations.
Compact NFSv4 ACL format – Starting in the Solaris 10 6/06 release, three NFSv4 ACL formats are available: verbose, positional, and compact. The new compact and positional ACL formats are available to set and display ACLs. You can use the chmod command to set all three ACL formats. Use the ls -V command to display compact and positional ACL formats. Use the ls -v command to display verbose ACL formats.
Temporarily take a device offline – Starting in the Solaris 10 6/06 release, you can use the zpool offline -t command to take a device offline temporarily. When the system is rebooted, the device is automatically returned to the ONLINE state.
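For example (the pool and device names are illustrative):

```shell
# zpool offline -t tank c1t0d0
# zpool status tank
```

Without the -t option, the offline state persists across reboots.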
ZFS is integrated with Fault Manager – Starting in the Solaris 10 6/06 release, a ZFS diagnostic engine that is capable of diagnosing and reporting pool failures and device failures is included. Checksum, I/O, and device errors associated with pool or device failures are also reported. Diagnostic error information is written to the console and the /var/adm/messages file. In addition, detailed information about recovering from a reported error can be displayed by using the zpool status command.
For more information about these improvements and changes, see the Solaris ZFS Administration Guide.
See the following What's New sections for related ZFS feature information: