This section describes new ZFS features in the Solaris Express Developer Edition 9/07 release.
ZFS command history enhancements (zpool history) – The zpool history command has been enhanced with new options that provide ZFS file system event logging and a long format that includes the user name, the hostname, and the zone in which the operation was performed.
For example, the zpool history -i option provides zpool command events and zfs command events.
# zpool history -i users
History for 'users':
2007-04-26.12:44:02 zpool create users mirror c0t8d0 c0t9d0 c0t10d0
2007-04-26.12:46:13 zfs create users/home
2007-04-26.12:46:18 zfs create users/home/markm
2007-04-26.12:46:23 zfs create users/home/marks
2007-04-26.12:46:30 zfs create users/home/neil
2007-04-26.12:47:15 zfs snapshot -r users/home@yesterday
2007-04-26.12:54:50 zfs snapshot -r users/home@today
2007-04-26.13:29:13 zfs create users/snapshots
The zpool history -l option provides a long format. For example:
# zpool history -l tank
History for 'tank':
2007-07-19.10:55:13 zpool create tank mirror c0t1d0 c0t11d0 [user root on neo:global]
2007-07-19.10:55:19 zfs create tank/cindys [user root on neo:global]
2007-07-19.10:55:49 zfs allow cindys create,destroy,mount,snapshot tank/cindys [user root on neo:global]
2007-07-19.10:56:24 zfs create tank/cindys/data [user cindys on neo:global]
For more information, see zpool(1M).
Upgrading ZFS File Systems (zfs upgrade) – Starting with this release, the zfs upgrade command is included to provide future file system enhancements to existing ZFS file systems. ZFS storage pools have a similar upgrade feature to provide pool enhancements to existing storage pools.
For example:
# zfs upgrade
This system is currently running ZFS filesystem version 2.

The following filesystems are out of date, and can be upgraded.
After being upgraded, these filesystems (and any 'zfs send' streams
generated from subsequent snapshots) will no longer be accessible by
older software versions.

VER  FILESYSTEM
---  ------------
 1   datab
 1   datab/users
 1   datab/users/area51
However, no new ZFS file system upgrade features are provided in this release.
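When upgrade features become available in a later release, the same command applies them. As a minimal sketch, assuming a newer file system version is available: zfs upgrade -v lists the supported file system versions, zfs upgrade with a file system name upgrades that file system, and zfs upgrade -a upgrades all file systems on the system.

# zfs upgrade -v
# zfs upgrade datab/users/area51
# zfs upgrade -a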
ZFS delegated administration – Starting with this release, you can delegate fine-grained permissions to perform ZFS administration tasks to non-privileged users. You can use the zfs allow and zfs unallow commands to grant and remove permissions.
The following example shows how to set permissions so that user cindys can create, destroy, mount, and snapshot file systems in tank/cindys. The permissions granted on tank/cindys are also displayed.
# zfs allow cindys create,destroy,mount,snapshot tank/cindys
# zfs allow tank/cindys
-------------------------------------------------------------
Local+Descendent permissions on (tank/cindys)
        user cindys create,destroy,mount,snapshot
-------------------------------------------------------------
Because the permissions on the tank/cindys mount point are set to 755 by default, user cindys cannot mount file systems under tank/cindys. Set an ACL similar to the following to provide mount point access:
# chmod A+user:cindys:add_subdirectory:allow /tank/cindys
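You can remove previously granted permissions with the zfs unallow command. For example, a sketch that revokes only the destroy permission granted above and then displays the remaining permissions:

# zfs unallow cindys destroy tank/cindys
# zfs allow tank/cindys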
You can modify the ability to use ZFS delegated administration with the pool's delegation property. For example:
# zpool get delegation users
NAME   PROPERTY    VALUE   SOURCE
users  delegation  on      default
# zpool set delegation=off users
# zpool get delegation users
NAME   PROPERTY    VALUE   SOURCE
users  delegation  off     local
By default, the delegation property is enabled.
For more information, see Chapter 9, ZFS Delegated Administration, in Solaris ZFS Administration Guide.
Setting up separate ZFS logging devices – The ZFS intent log (ZIL) is provided to satisfy POSIX requirements for synchronous transactions. For example, databases often require their transactions to be on stable storage devices when returning from a system call. NFS and other applications can also use fsync() to ensure data stability. By default, the ZIL is allocated from blocks within the main storage pool. However, better performance might be possible by using separate intent log devices in your ZFS storage pool, such as with NVRAM or a dedicated disk.
Log devices for the ZFS intent log are not related to database log files.
You can set up separate ZFS logging devices in the following ways:
When the ZFS storage pool is created or after the pool is created.
You can attach a log device to an existing log device to create a mirrored log device. This operation is identical to attaching a device in an unmirrored storage pool.
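For example, a sketch using hypothetical pool and device names. The first command creates a mirrored pool with a separate log device; the second attaches another device to the log device to create a mirrored log. On an existing pool, zpool add datap log c0t5d0 would add the log device after the fact.

# zpool create datap mirror c0t1d0 c0t2d0 log c0t5d0
# zpool attach datap c0t5d0 c0t6d0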
For examples of setting up log devices, see Creating a ZFS Storage Pool with Log Devices in Solaris ZFS Administration Guide and Adding Devices to a Storage Pool in Solaris ZFS Administration Guide.
For information about whether using separate ZFS logging devices is appropriate for your environment, see Setting Up Separate ZFS Logging Devices in Solaris ZFS Administration Guide.
Creating intermediate ZFS datasets – You can use the -p option with the zfs create, zfs clone, and zfs rename commands to quickly create intermediate datasets, if they don't already exist.
For example, create ZFS datasets (users/area51) in the datab storage pool.
# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
datab  106K  16.5G    18K  /datab
# zfs create -p -o compression=on datab/users/area51
If the intermediate dataset exists during the create operation, the operation completes successfully.
Properties specified apply to the target dataset, not to the intermediate datasets. For example:
# zfs get mountpoint,compression datab/users/area51
NAME                PROPERTY     VALUE                SOURCE
datab/users/area51  mountpoint   /datab/users/area51  default
datab/users/area51  compression  on                   local
The intermediate dataset is created with the default mount point. Any additional properties are disabled for the intermediate dataset. For example:
# zfs get mountpoint,compression datab/users
NAME         PROPERTY     VALUE         SOURCE
datab/users  mountpoint   /datab/users  default
datab/users  compression  off           default
For more information, see zfs(1M).
ZFS hotplugging enhancements – Starting with this release, ZFS responds more effectively to devices that are removed and provides a mechanism to automatically identify devices that are inserted, with the following enhancements:
You can replace an existing device with an equivalent device without having to use the zpool replace command.
The autoreplace property controls automatic device replacement. If the property is set to off, device replacement must be initiated by the administrator by using the zpool replace command. If the property is set to on, any new device found in the same physical location as a device that previously belonged to the pool is automatically formatted and replaced. The default value for the autoreplace property is off. A sketch of setting this property follows this list.
The storage pool state REMOVED is reported when a device or hot spare is physically removed while the system is running. If available, a hot-spare device is substituted for the removed device.
If a device is removed and then reinserted, the device is placed online. If a hot spare was activated when the device was reinserted, the spare is removed when the online operation completes.
Automatic detection when devices are removed or inserted is hardware-dependent and might not be supported on all platforms.
Hot spares are checked periodically to make sure they are online and available.
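As a sketch of setting the autoreplace property mentioned above (tank is a hypothetical pool name), the property can be inspected and enabled like any other pool property:

# zpool get autoreplace tank
NAME  PROPERTY     VALUE  SOURCE
tank  autoreplace  off    default
# zpool set autoreplace=on tank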
For more information, see zpool(1M).
For more information about these ZFS file system enhancements, see the Solaris ZFS Administration Guide.