The following section summarizes new features in the ZFS file system.
ZFS and Flash installation support – In the Solaris 10 10/09 release, you can set up a JumpStart profile to identify a flash archive of a ZFS root pool. For more information, see the Solaris ZFS Administration Guide.
Setting ZFS user and group quotas – In previous Solaris releases, you could apply quotas and reservations to ZFS file systems to manage and reserve space. In this Solaris release, you can set a quota on the amount of space consumed by files that are owned by a particular user or group. You might consider setting user and group quotas in an environment with a large number of users or groups. You can set user or group quotas by using the userquota and groupquota properties as follows:
# zfs set userquota@user1=5G tank/data
# zfs set groupquota@staff=10G tank/staff/admins
You can display a user's or group's current quota setting as follows:
# zfs get userquota@user1 tank/data
NAME       PROPERTY         VALUE  SOURCE
tank/data  userquota@user1  5G     local
# zfs get groupquota@staff tank/staff/admins
NAME               PROPERTY          VALUE  SOURCE
tank/staff/admins  groupquota@staff  10G    local
Using ZFS ACL pass-through inheritance for execute permission – In previous Solaris releases, you could apply ACL inheritance so that all files were created with 0664 or 0666 permissions. If you want to optionally include the execute bit from the file creation mode in the inherited ACL, you can use pass-through inheritance for execute permission in this release.
If aclinherit=passthrough-x is enabled on a ZFS dataset, you can include execute permission for an output file that is generated from the cc or gcc compiler. If the inherited ACL does not include execute permission, the executable output from the compiler is not executable until you use the chmod command to change the file's permissions.
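The mode is enabled per dataset with the zfs set command. A minimal sketch follows; the dataset name tank/data is illustrative, and the comments describe the behavior summarized above:

```shell
# Enable pass-through inheritance of the execute bit on a dataset
# (tank/data is an illustrative dataset name)
zfs set aclinherit=passthrough-x tank/data

# With passthrough-x set, output such as "cc -o prog prog.c" inherits
# execute permission from the file creation mode. Without it, the
# compiled file must be made executable manually:
chmod +x prog
```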
Using cache devices in your ZFS storage pool – In the Solaris 10 10/09 release, you can create a pool and specify cache devices, which are used to cache storage pool data. Cache devices provide an additional layer of caching between main memory and disk. Using cache devices provides the greatest performance improvement for random-read workloads of mostly static content.
One or more cache devices can be specified when the pool is created. For example:
# zpool create pool mirror c0t2d0 c0t4d0 cache c0t0d0
# zpool status pool
  pool: pool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
        cache
          c0t0d0    ONLINE       0     0     0

errors: No known data errors
For information about determining whether using cache devices is appropriate for your environment, see the Solaris ZFS Administration Guide.
ZFS property enhancements – The Solaris 10 10/09 release provides the following ZFS property enhancements:
You can set ZFS file system properties at pool creation time. In the following example, compression is enabled on the ZFS file system that is created when the pool is created.
# zpool create -O compression=on pool mirror c0t1d0 c0t2d0
You can set two cache properties on a ZFS file system that allow you to control what is cached in the primary cache (ARC) or the secondary cache (L2ARC). The cache properties are set as follows:
primarycache – Controls what is cached in the ARC.
secondarycache – Controls what is cached in the L2ARC.
You can set these properties on an existing file system or when the file system is created. For example:
# zfs set primarycache=metadata tank/datab
# zfs create -o primarycache=metadata tank/newdatab
Some database environments might benefit from not caching user data. You will have to determine whether setting cache properties is appropriate for your environment.
For more information, see the Solaris ZFS Administration Guide.
You can use the space usage properties to identify space usage for clones, file systems, and volumes, but not snapshots. The properties are as follows:
usedbychildren – Identifies the amount of space that is used by children of this dataset, which would be freed if all the dataset's children were destroyed. The property abbreviation is usedchild.
usedbydataset – Identifies the amount of space that is used by this dataset itself, which would be freed if the dataset was destroyed, after first destroying any snapshots and removing any refreservation. The property abbreviation is usedds.
usedbyrefreservation – Identifies the amount of space that is used by a refreservation set on this dataset, which would be freed if the refreservation was removed. The property abbreviation is usedrefreserv.
usedbysnapshots – Identifies the amount of space that is consumed by snapshots of this dataset. In particular, it is the amount of space that would be freed if all of this dataset's snapshots were destroyed. Note that this is not simply the sum of the snapshots' used properties, because space can be shared by multiple snapshots. The property abbreviation is usedsnap.
These new properties break down the value of the used property into the various elements that consume space. In particular, the value of the used property breaks down as follows:
used = usedbychildren + usedbydataset + usedbyrefreservation + usedbysnapshots
You can view these properties by using the zfs list -o space command. For example:
# zfs list -o space
NAME               AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
pool               33.2G    72K         0     21K              0        51K
rpool              27.0G  6.27G     20.5K     97K              0      6.27G
rpool/ROOT         27.0G  4.73G         0     21K              0      4.73G
rpool/ROOT/zfsBE   27.0G  4.73G     97.5M   4.63G              0          0
rpool/dump         27.0G  1.00G       16K   1.00G              0          0
rpool/export       27.0G    60K       16K     23K              0        21K
rpool/export/home  27.0G    21K         0     21K              0          0
rpool/swap         27.5G   553M         0   41.5M           512M          0
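As a quick sanity check of the breakdown, the components for the pool dataset in the listing can be summed with plain shell arithmetic: 21K (usedbydataset) plus 51K (usedbychildren) accounts for the 72K used value.

```shell
# Component values for the "pool" dataset, in kilobytes,
# taken from the zfs list -o space output above
usedbychildren=51
usedbydataset=21
usedbyrefreservation=0
usedbysnapshots=0

# The used value must equal the sum of the four components
used=$(( usedbychildren + usedbydataset + usedbyrefreservation + usedbysnapshots ))
echo "used = ${used}K"   # prints: used = 72K
```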
In this release, snapshot information is omitted from the default zfs list output. The listsnaps pool property controls whether snapshot information is displayed by the zfs list command. The default value is off, which means that snapshot information is not displayed by default. You can still display snapshot information by using the zfs list -t snapshot command.
ZFS log device recovery – In the Solaris 10 10/09 release, ZFS identifies intent log failures in the zpool status command. FMA reports these errors as well. Both ZFS and FMA describe how to recover from an intent log failure.
For example, if the system shuts down abruptly before synchronous write operations are committed to a pool with a separate log device, you will see intent-log related error messages in the zpool status output. For information about resolving log device failures, see the Solaris ZFS Administration Guide.
Using ZFS ACL Sets – The Solaris 10 10/09 release provides the ability to apply NFSv4-style ACLs in sets, rather than applying different ACL permissions individually. The following ACL sets are provided:
full_set = all permissions
modify_set = all permissions except write_acl and write_owner
read_set = read_data, read_attributes, read_xattr, and read_acl
write_set = write_data, append_data, write_attributes, and write_xattr
These ACL sets are pre-defined and cannot be modified.
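For example, an ACL set can be granted with the Solaris chmod ACL syntax. The user name and file name below are illustrative:

```shell
# Grant the read_set permissions (read_data, read_attributes,
# read_xattr, and read_acl) to user otto on file.1
chmod A+user:otto:read_set:allow file.1

# Display the resulting ACL entries
ls -v file.1
```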
For more information about these improvements and changes, see the Solaris ZFS Administration Guide.
See the following What's New sections for related ZFS feature information: