Solaris ZFS Administration Guide

Chapter 6 Managing ZFS File Systems

This chapter provides detailed information about managing ZFS file systems. Concepts such as hierarchical file system layout, property inheritance, and automatic mount point management and share interactions are included in this chapter.

A ZFS file system is built on top of a storage pool. File systems can be dynamically created and destroyed without requiring you to allocate or format any underlying space. Because file systems are so lightweight and because they are the central point of administration in ZFS, you are likely to create many of them.

ZFS file systems are administered by using the zfs command. The zfs command provides a set of subcommands that perform specific operations on file systems. This chapter describes these subcommands in detail. Snapshots, volumes, and clones are also managed by using this command, but these features are only covered briefly in this chapter. For detailed information about snapshots and clones, see Chapter 7, Working With ZFS Snapshots and Clones. For detailed information about ZFS volumes, see ZFS Volumes.


Note –

The term dataset is used in this chapter as a generic term to refer to a file system, snapshot, clone, or volume.


The following sections are provided in this chapter:

Creating and Destroying ZFS File Systems

ZFS file systems can be created and destroyed by using the zfs create and zfs destroy commands.

Creating a ZFS File System

ZFS file systems are created by using the zfs create command. The create subcommand takes a single argument: the name of the file system to create. The file system name is specified as a path name starting from the name of the pool:

pool-name/[filesystem-name/]filesystem-name

The pool name and initial file system names in the path identify the location in the hierarchy where the new file system will be created. The last name in the path identifies the name of the file system to be created. The file system name must satisfy the naming conventions defined in ZFS Component Naming Requirements.

In the following example, a file system named bonwick is created in the tank/home file system.


# zfs create tank/home/bonwick

ZFS automatically mounts the newly created file system if it is created successfully. By default, file systems are mounted as /dataset, using the path provided for the file system name in the create subcommand. In this example, the newly created bonwick file system is at /tank/home/bonwick. For more information about automanaged mount points, see Managing ZFS Mount Points.

For more information about the zfs create command, see zfs(1M).

You can set file system properties when the file system is created.

In the following example, a mount point of /export/zfs is specified and is created for the tank/home file system.


# zfs create -o mountpoint=/export/zfs tank/home

For more information about file system properties, see Introducing ZFS Properties.

Destroying a ZFS File System

To destroy a ZFS file system, use the zfs destroy command. The destroyed file system is automatically unmounted and unshared. For more information about automatically managed mounts or automatically managed shares, see Automatic Mount Points.

In the following example, the tabriz file system is destroyed.


# zfs destroy tank/home/tabriz

Caution –

No confirmation prompt appears with the destroy subcommand. Use it with extreme caution.


If the file system to be destroyed is busy and so cannot be unmounted, the zfs destroy command fails. To destroy an active file system, use the -f option. Use this option with caution as it can unmount, unshare, and destroy active file systems, causing unexpected application behavior.


# zfs destroy tank/home/ahrens
cannot unmount 'tank/home/ahrens': Device busy

# zfs destroy -f tank/home/ahrens

The zfs destroy command also fails if a file system has children. To recursively destroy a file system and all its descendents, use the -r option. Note that a recursive destroy also destroys snapshots so use this option with caution.


# zfs destroy tank/ws
cannot destroy 'tank/ws': filesystem has children
use '-r' to destroy the following datasets:
tank/ws/billm
tank/ws/bonwick
tank/ws/maybee

# zfs destroy -r tank/ws

If the file system to be destroyed has indirect dependents, even the recursive destroy command described above fails. To force the destruction of all dependents, including cloned file systems outside the target hierarchy, the -R option must be used. Use extreme caution with this option.


# zfs destroy -r tank/home/schrock
cannot destroy 'tank/home/schrock': filesystem has dependent clones
use '-R' to destroy the following datasets:
tank/clones/schrock-clone

# zfs destroy -R tank/home/schrock

Caution –

No confirmation prompt appears with the -f, -r, or -R options so use these options carefully.


For more information about snapshots and clones, see Chapter 7, Working With ZFS Snapshots and Clones.

Renaming a ZFS File System

File systems can be renamed by using the zfs rename command. With the rename subcommand, you can change the name of a file system, relocate it to a new location within the ZFS hierarchy, or do both at the same time.

The following example uses the rename subcommand to do a simple rename of a file system:


# zfs rename tank/home/kustarz tank/home/kustarz_old

This example renames the kustarz file system to kustarz_old.

The following example shows how to use zfs rename to relocate a file system.


# zfs rename tank/home/maybee tank/ws/maybee

In this example, the maybee file system is relocated from tank/home to tank/ws. When you relocate a file system through rename, the new location must be within the same pool and it must have enough space to hold this new file system. If the new location does not have enough space, possibly because it has reached its quota, the rename will fail.

For more information about quotas, see Setting ZFS Quotas and Reservations.

The rename operation attempts an unmount/remount sequence for the file system and any descendent file systems. The rename fails if the operation is unable to unmount an active file system. If this problem occurs, you will need to force unmount the file system.
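For example, if a rename fails because a file system cannot be unmounted, you might force the unmount and then retry the rename. The dataset names below follow the earlier example and are illustrative only:


# zfs unmount -f tank/home/maybee
# zfs rename tank/home/maybee tank/ws/maybee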

For information about renaming snapshots, see Renaming ZFS Snapshots.

Introducing ZFS Properties

Properties are the main mechanism that you use to control the behavior of file systems, volumes, snapshots, and clones. Unless stated otherwise, the properties defined in this section apply to all the dataset types.

Properties are divided into two types, native properties and user defined properties. Native properties either export internal statistics or control ZFS file system behavior. In addition, native properties are either settable or read-only. User properties have no effect on ZFS file system behavior, but you can use them to annotate datasets in a way that is meaningful in your environment. For more information on user properties, see ZFS User Properties.

Most settable properties are also inheritable. An inheritable property is a property that, when set on a parent, is propagated down to all of its descendents.

All inheritable properties have an associated source. The source indicates how a property was obtained. The source of a property can have the following values:

local

A local source indicates that the property was explicitly set on the dataset by using the zfs set command as described in Setting ZFS Properties.

inherited from dataset-name

A value of inherited from dataset-name means that the property was inherited from the named ancestor.

default

A value of default means that the property setting was not inherited or set locally. This source occurs when neither the dataset nor any of its ancestors has the property set locally.

The following table identifies both read-only and settable native ZFS file system properties. Read-only native properties are identified as such. All other native properties listed in this table are settable. For information about user properties, see ZFS User Properties.

Table 6–1 ZFS Native Property Descriptions

Property Name 

Type 

Default Value 

Description 

aclinherit

String 

secure

Controls how ACL entries are inherited when files and directories are created. The values are discard, noallow, secure, and passthrough. For a description of these values, see ACL Property Modes.

aclmode

String 

groupmask

Controls how an ACL entry is modified during a chmod operation. The values are discard, groupmask, and passthrough. For a description of these values, see ACL Property Modes.

atime

Boolean 

on

Controls whether the access time for files is updated when they are read. Turning this property off avoids producing write traffic when reading files and can result in significant performance gains, though it might confuse mailers and other similar utilities.

available

Number 

N/A 

Read-only property that identifies the amount of space available to the dataset and all its children, assuming no other activity in the pool. Because space is shared within a pool, available space can be limited by various factors including physical pool size, quotas, reservations, or other datasets within the pool.

This property can also be referenced by its shortened column name, avail.

For more information about space accounting, see ZFS Space Accounting.

canmount

Boolean 

on

Controls whether the given file system can be mounted with the zfs mount command. This property can be set on any file system and the property itself is not inheritable. However, when this property is set to off, a mountpoint can be inherited to descendent file systems, but the file system itself is never mounted.

When the noauto option is set, a dataset can only be mounted and unmounted explicitly. The dataset is not mounted automatically when the dataset is created or imported, nor is it mounted by the zfs mount -a command or unmounted by the zfs unmount -a command.

For more information, see The canmount Property.

casesensitivity

String 

sensitive

This property indicates whether the file name matching algorithm used by the file system should be casesensitive, caseinsensitive, or allow a combination of both styles of matching (mixed). The default value for this property is sensitive. Traditionally, UNIX and POSIX file systems have case-sensitive file names.

The mixed value for this property indicates the file system can support requests for both case-sensitive and case-insensitive matching behavior. Currently, case-insensitive matching behavior on a file system that supports mixed behavior is limited to the Solaris CIFS server product. For more information about using the mixed value, see The casesensitivity Property.

Regardless of the casesensitivity property setting, the file system preserves the case of the name specified to create a file. This property cannot be changed after the file system is created.

checksum

String 

on

Controls the checksum used to verify data integrity. The default value is on, which automatically selects an appropriate algorithm, currently fletcher4. The values are on, off, fletcher2, fletcher4, and sha256. A value of off disables integrity checking on user data. A value of off is not recommended.

compression

String 

off

Enables or disables compression for this dataset. The values are on, off, lzjb, gzip, and gzip-N. Currently, setting this property to on has the same effect as setting it to lzjb. The default value is off. Enabling compression on a file system with existing data only compresses new data. Existing data remains uncompressed.

This property can also be referred to by its shortened column name, compress.

compressratio

Number 

N/A 

Read-only property that identifies the compression ratio achieved for this dataset, expressed as a multiplier. Compression can be turned on by running zfs set compression=on dataset.

Calculated from the logical size of all files and the amount of referenced physical data. Includes explicit savings through the use of the compression property.

copies

Number 

1

Sets the number of copies of user data per file system. Available values are 1, 2 or 3. These copies are in addition to any pool-level redundancy. Space used by multiple copies of user data is charged to the corresponding file and dataset and counts against quotas and reservations. In addition, the used property is updated when multiple copies are enabled. Consider setting this property when the file system is created because changing this property on an existing file system only affects newly written data.

creation

String 

N/A 

Read-only property that identifies the date and time that this dataset was created.

dedup

String 

off

Controls the ability to remove duplicate data in a ZFS file system. Possible values are on, off, verify, and sha256[,verify]. The default value is off. The default checksum for deduplication is sha256. The default checksum value might change in future releases.

For more information, see The dedup Property.

devices

Boolean 

on

Controls the ability to open device files in the file system.

exec

Boolean 

on

Controls whether programs within this file system are allowed to be executed. Also, when set to off, mmap(2) calls with PROT_EXEC are disallowed.

logbias

String 

latency

Controls how ZFS handles synchronous requests for this dataset. If logbias is set to latency, ZFS uses the pool's separate log devices, if any, to handle the requests at low latency. If logbias is set to throughput, ZFS does not use the pool's separate log devices. Instead, ZFS optimizes synchronous operations for global pool throughput and efficient use of resources. The default value is latency.

mlslabel

String 

none

Provides a sensitivity label that determines if a dataset can be mounted in a Trusted Extensions zone. If the labeled dataset matches the labeled zone, the dataset can be mounted and accessed from the labeled zone. The default value is none. This property can only be modified when Trusted Extensions is enabled and only with the appropriate privilege.

mounted

Boolean 

N/A 

Read-only property that indicates whether this file system, clone, or snapshot is currently mounted. This property does not apply to volumes. Value can be either yes or no.

mountpoint

String 

N/A 

Controls the mount point used for this file system. When the mountpoint property is changed for a file system, the file system and any children that inherit the mount point are unmounted. If the new value is legacy, then they remain unmounted. Otherwise, they are automatically remounted in the new location if the property was previously legacy or none, or if they were mounted before the property was changed. In addition, any shared file systems are unshared and shared in the new location.

For more information about using this property, see Managing ZFS Mount Points.

primarycache

String 

all

Controls what is cached in the ARC. Possible values are all, none, and metadata. If set to all, both user data and metadata are cached. If set to none, neither user data nor metadata is cached. If set to metadata, only metadata is cached. The default is all.

nbmand

Boolean 

off

Controls whether the file system should be mounted with nbmand (Non-blocking mandatory) locks. This property is for CIFS clients only. Changes to this property only take effect when the file system is unmounted and remounted.

normalization

String 

none

This property indicates whether a file system should perform a unicode normalization of file names whenever two file names are compared, and which normalization algorithm should be used. File names are always stored unmodified; names are normalized as part of any comparison process. If this property is set to a legal value other than none, and the utf8only property was left unspecified, the utf8only property is automatically set to on. The default value of the normalization property is none. This property cannot be changed after the file system is created.

origin

String 

N/A 

Read-only property for cloned file systems or volumes that identifies the snapshot from which the clone was created. The origin cannot be destroyed (even with the -r or -f options) as long as a clone exists.

Non-cloned file systems have an origin of none.

quota

Number (or none)

none

Limits the amount of space a dataset and its descendents can consume. This property enforces a hard limit on the amount of space used, including all space consumed by descendents, including file systems and snapshots. Setting a quota on a descendent of a dataset that already has a quota does not override the ancestor's quota, but rather imposes an additional limit. Quotas cannot be set on volumes, as the volsize property acts as an implicit quota.

For information about setting quotas, see Setting Quotas on ZFS File Systems.

readonly

Boolean 

off

Controls whether this dataset can be modified. When set to on, no modifications can be made to the dataset.

This property can also be referred to by its shortened column name, rdonly.

recordsize

Number 

128K

Specifies a suggested block size for files in the file system.

This property can also be referred to by its shortened column name, recsize. For a detailed description, see The recordsize Property.

referenced

Number 

N/A 

Read-only property that identifies the amount of data accessible by this dataset, which might or might not be shared with other datasets in the pool.

When a snapshot or clone is created, it initially references the same amount of space as the file system or snapshot it was created from, because its contents are identical. 

This property can also be referred to by its shortened column name, refer.

refquota

Number (or none) 

none

Sets the amount of space that a dataset can consume. This property enforces a hard limit on the amount of space used. This hard limit does not include space used by descendents, such as snapshots and clones.

refreservation

Number (or none) 

none

Sets the minimum amount of space that is guaranteed to a dataset, not including descendents, such as snapshots and clones. When the amount of space that is used is below this value, the dataset is treated as if it were taking up the amount of space specified by refreservation. The refreservation reservation is accounted for in the parent datasets' space used, and counts against the parent datasets' quotas and reservations.

If refreservation is set, a snapshot is only allowed if enough free pool space is available outside of this reservation to accommodate the current number of referenced bytes in the dataset.

This property can also be referred to by its shortened column name, refreserv.

reservation

Number (or none) 

none

The minimum amount of space guaranteed to a dataset and its descendents. When the amount of space used is below this value, the dataset is treated as if it were using the amount of space specified by its reservation. Reservations are accounted for in the parent datasets' space used, and count against the parent datasets' quotas and reservations.

This property can also be referred to by its shortened column name, reserv.

For more information, see Setting Reservations on ZFS File Systems.

secondarycache

String 

all

Controls what is cached in the L2ARC. Possible values are all, none, and metadata. If set to all, both user data and metadata are cached. If set to none, neither user data nor metadata is cached. If set to metadata, only metadata is cached. The default is all.

setuid

Boolean 

on

Controls whether the setuid bit is honored in the file system.

sharenfs

String 

off

Controls whether the file system is available over NFS, and what options are used. If set to on, the zfs share command is invoked with no options. Otherwise, the zfs share command is invoked with options equivalent to the contents of this property. If set to off, the file system is managed by using the legacy share and unshare commands and the dfstab file.

For more information on sharing ZFS file systems, see Sharing and Unsharing ZFS File Systems.

sharesmb

String 

off

Controls whether the file system is shared by using the Solaris CIFS service, and what options are to be used. A file system with the sharesmb property set to off is managed through traditional tools, such as the sharemgr command. Otherwise, the file system is automatically shared and unshared by using the zfs share and zfs unshare commands.

If the property is set to on, the sharemgr command is invoked with no options. Otherwise, the sharemgr command is invoked with options that are equivalent to the contents of this property.

snapdir

String 

hidden

Controls whether the .zfs directory is hidden or visible in the root of the file system. For more information on using snapshots, see Overview of ZFS Snapshots.

type

String 

N/A 

Read-only property that identifies the dataset type as filesystem (file system or clone), volume, or snapshot.

used

Number 

N/A 

Read-only property that identifies the amount of space consumed by the dataset and all its descendents.

For a detailed description, see The used Property.

usedbychildren

Number 

N/A

Read-only property that identifies the amount of space that is used by children of this dataset, which would be freed if all the dataset's children were destroyed. The property abbreviation is usedchild.

usedbydataset

Number 

N/A

Read-only property that identifies the amount of space that is used by this dataset itself, which would be freed if the dataset was destroyed, after first destroying any snapshots and removing any refreservation. The property abbreviation is usedds.

usedbyrefreservation

Number 

N/A

Read-only property that identifies the amount of space that is used by a refreservation set on this dataset, which would be freed if the refreservation was removed. The property abbreviation is usedrefreserv.

usedbysnapshots

Number 

N/A

Read-only property that identifies the amount of space that is consumed by snapshots of this dataset. In particular, it is the amount of space that would be freed if all of this dataset's snapshots were destroyed. Note that this is not simply the sum of the snapshots' used properties, because space can be shared by multiple snapshots. The property abbreviation is usedsnap.

utf8only

Boolean 

off

This property indicates whether a file system should reject file names that include characters that are not present in the UTF-8 character code set. If this property is explicitly set to off, the normalization property must either not be explicitly set or be set to none. The default value for the utf8only property is off. This property cannot be changed after the file system is created.

volsize

Number 

N/A 

For volumes, specifies the logical size of the volume.

For a detailed description, see The volsize Property.

volblocksize

Number 

8 Kbytes

For volumes, specifies the block size of the volume. The block size cannot be changed once the volume has been written, so set the block size at volume creation time. The default block size for volumes is 8 Kbytes. Any power of 2 from 512 bytes to 128 Kbytes is valid.

This property can also be referred to by its shortened column name, volblock.

vscan

Boolean 

off

Controls whether regular files should be scanned for viruses when a file is opened and closed. In addition to enabling this property, a virus scanning service must also be enabled for virus scanning to occur if you have third-party virus scanning software. The default value is off.

zoned

Boolean 

N/A 

Indicates whether this dataset has been added to a non-global zone. If this property is set, then the mount point is not honored in the global zone, and ZFS cannot mount such a file system when requested. When a zone is first installed, this property is set for any added file systems.

For more information about using ZFS with zones installed, see Using ZFS on a Solaris System With Zones Installed.

xattr

Boolean 

on

Indicates whether extended attributes are enabled or disabled for this file system. The default value is on.

ZFS Read-Only Native Properties

Read-only native properties are properties that can be retrieved but cannot be set. Read-only native properties are not inherited. Some native properties are specific to a particular type of dataset. In such cases, the particular dataset type is mentioned in the description in Table 6–1.

The read-only native properties are listed here and are described in Table 6–1.

For more information on space accounting, including the used, referenced, and available properties, see ZFS Space Accounting.

The used Property

The amount of space consumed by this dataset and all its descendents. This value is checked against the dataset's quota and reservation. The space used does not include the dataset's reservation, but does consider the reservation of any descendent datasets. The amount of space that a dataset consumes from its parent, as well as the amount of space that is freed if the dataset is recursively destroyed, is the greater of its space used and its reservation.

When snapshots are created, their space is initially shared between the snapshot and the file system, and possibly with previous snapshots. As the file system changes, space that was previously shared becomes unique to the snapshot, and counted in the snapshot's space used. The space that is used by a snapshot accounts for its unique data. Additionally, deleting snapshots can increase the amount of space unique to (and used by) other snapshots. For more information about snapshots and space issues, see Out of Space Behavior.

The amount of space used, available, or referenced does not take into account pending changes. Pending changes are generally accounted for within a few seconds. Committing a change to a disk using fsync(3c) or O_SYNC does not necessarily guarantee that the space usage information will be updated immediately.

The usedbychildren, usedbydataset, usedbyrefreservation, and usedbysnapshots property information can be displayed with the zfs list -o space command. These properties break down the used property into space that is consumed by descendents. For more information, see Table 6–1.
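For example, the following command displays the space breakdown for a pool; the dataset name and sizes shown here are illustrative:


# zfs list -o space rpool
NAME   AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool  60.1G  6.92G         0     97K              0      6.92G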

Settable ZFS Native Properties

Settable native properties are properties whose values can be both retrieved and set. Settable native properties are set by using the zfs set command, as described in Setting ZFS Properties or by using the zfs create command as described in Creating a ZFS File System. With the exceptions of quotas and reservations, settable native properties are inherited. For more information about quotas and reservations, see Setting ZFS Quotas and Reservations.

Some settable native properties are specific to a particular type of dataset. In such cases, the particular dataset type is mentioned in the description in Table 6–1. If not specifically mentioned, a property applies to all dataset types: file systems, volumes, clones, and snapshots.

The settable properties are listed here and are described in Table 6–1.

The canmount Property

If this property is set to off, the file system cannot be mounted by using the zfs mount or zfs mount -a commands. Setting this property is similar to setting the mountpoint property to none, except that the dataset still has a normal mountpoint property that can be inherited. For example, you can set this property to off and establish inheritable properties for descendent file systems, but the file system itself is never mounted and is not accessible to users. In this case, the parent file system with this property set to off is serving as a container so that you can set attributes on the container, but the container itself is never accessible.

In the following example, userpool is created and the canmount property is set to off. Mount points for descendent user file systems are set to one common mount point, /export/home. Properties that are set on the parent file system are inherited by descendent file systems, but the parent file system itself is never mounted.


# zpool create userpool mirror c0t5d0 c1t6d0
# zfs set canmount=off userpool
# zfs set mountpoint=/export/home userpool
# zfs set compression=on userpool
# zfs create userpool/user1
# zfs create userpool/user2
# zfs mount
userpool/user1                  /export/home/user1
userpool/user2                  /export/home/user2

Setting the canmount property to noauto means that the dataset can only be mounted explicitly, not automatically. This setting is used by the Solaris upgrade software so that only those datasets belonging to the active boot environment (BE) are mounted at boot time.
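For example, you might set this value on a dataset that should be mounted only by an administrative procedure. The dataset name here is hypothetical:


# zfs set canmount=noauto rpool/export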

The casesensitivity Property

This property indicates whether the file name matching algorithm used by the file system should be casesensitive, caseinsensitive, or allow a combination of both styles of matching (mixed).

When a case-insensitive matching request is made of a mixed sensitivity file system, the behavior is generally the same as would be expected of a purely case-insensitive file system. The difference is that a mixed sensitivity file system might contain directories with multiple names that are unique from a case-sensitive perspective, but not unique from the case-insensitive perspective.

For example, a directory might contain files foo, Foo, and FOO. If a request is made to case-insensitively match any of the possible forms of foo, (for example foo, FOO, FoO, fOo, and so on) one of the three existing files is chosen as the match by the matching algorithm. Exactly which file the algorithm chooses as a match is not guaranteed, but what is guaranteed is that the same file is chosen as a match for any of the forms of foo. The file chosen as a case-insensitive match for foo, FOO, foO, Foo, and so on, is always the same, so long as the directory remains unchanged.
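Because the casesensitivity property cannot be changed after the file system is created, set it with the zfs create command. For example, the following command creates a file system (with a hypothetical name) that supports mixed matching behavior:


# zfs create -o casesensitivity=mixed tank/cifsdata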

The utf8only, normalization, and casesensitivity properties also provide new permissions that can be assigned to non-privileged users by using ZFS delegated administration. For more information, see Delegating ZFS Permissions.

The dedup Property

This property controls whether duplicate data is removed from the file system. If a file system has the dedup property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared between files.

When dedup is enabled, the dedup checksum algorithm overrides the checksum property. Setting the value to verify is equivalent to specifying sha256,verify. If the property is set to verify and two blocks have the same signature, ZFS does a byte-for-byte comparison with the existing block to ensure that the contents are identical.

This property can be enabled per file system as follows:


# zfs set dedup=on tank/home

You can use the zfs get command to determine if the dedup property is set.
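For example, the following command reports whether the property is set on the tank/home file system from the previous example. The output shown is illustrative:


# zfs get dedup tank/home
NAME       PROPERTY  VALUE  SOURCE
tank/home  dedup     on     local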

Although deduplication is set as a file system property, the scope is pool-wide. For example, you can identify the deduplication ratio as follows:


# zpool list tank
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank    136G  55.2G  80.8G    40%  2.30x  ONLINE  -

The DEDUP column indicates how much deduplication has occurred. If the dedup property is not enabled on any dataset or if the dedup property was just enabled on the dataset, the DEDUP ratio is 1.00x.

You can use the zpool get command to determine the value of the dedupratio property.


# zpool get all export
NAME     PROPERTY       VALUE       SOURCE
export   size           33.8G       -
export   capacity       0%          -
export   altroot        -           default
export   health         ONLINE      -
export   guid           2064230982813446135  default
export   version        22          default
export   bootfs         -           default
export   delegation     on          default
export   autoreplace    off         default
export   cachefile      -           default
export   failmode       wait        default
export   listsnapshots  off         default
export   autoexpand     off         default
export   dedupditto     0           default
export   dedupratio     3.00x       -
export   free           33.6G       -
export   allocated      105M        -

This pool property illustrates how much deduplication we have been able to achieve.

The recordsize Property

Specifies a suggested block size for files in the file system.

This property is designed solely for use with database workloads that access files in fixed-size records. ZFS automatically adjusts block sizes according to internal algorithms optimized for typical access patterns. For databases that create very large files but access the files in small random chunks, these algorithms may be suboptimal. Specifying a recordsize greater than or equal to the record size of the database can result in significant performance gains. Use of this property for general purpose file systems is strongly discouraged, and may adversely affect performance. The size specified must be a power of two greater than or equal to 512 and less than or equal to 128 Kbytes. Changing the file system's recordsize only affects files created afterward. Existing files are unaffected.

This property can also be referred to by its shortened column name, recsize.
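For example, for a database that accesses its files in fixed 8-Kbyte records, you might set the property as follows. The file system name is hypothetical:


# zfs set recordsize=8K tank/db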

The sharesmb Property

This property enables sharing of ZFS file systems with the Solaris CIFS service, and identifies options to be used.

Because an SMB share requires a resource name, a unique resource name is constructed from the dataset name. The constructed name is a copy of the dataset name except that characters in the dataset name that would be illegal in the resource name are replaced with underscore (_) characters. A pseudo property name is also supported that allows you to replace the dataset name with a specific name. The specific name is then used to replace the prefix dataset in the case of inheritance.

For example, if the dataset, data/home/john, is set to name=john, then data/home/john has a resource name of john. If a child dataset of data/home/john/backups exists, it has a resource name of john_backups. When the sharesmb property is changed for a dataset, the dataset and any children inheriting the property are re-shared with the new options, only if the property was previously set to off, or if they were shared before the property was changed. If the new property is set to off, the file systems are unshared.
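For example, the following command, using the hypothetical dataset from this discussion, shares the dataset under a specific resource name:


# zfs set sharesmb=name=john data/home/john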

For examples of using the sharesmb property, see Sharing ZFS Files in a Solaris CIFS Environment.

The volsize Property

The logical size of the volume. By default, creating a volume establishes a reservation for the same amount. Any changes to volsize are reflected in an equivalent change to the reservation. These checks are used to prevent unexpected behavior for users. A volume that contains less space than it claims is available can result in undefined behavior or data corruption, depending on how the volume is used. These effects can also occur when the volume size is changed while it is in use, particularly when you shrink the size. Extreme care should be used when adjusting the volume size.

Though not recommended, you can create a sparse volume by specifying the -s flag to zfs create -V, or by changing the reservation once the volume has been created. A sparse volume is defined as a volume where the reservation is not equal to the volume size. For a sparse volume, changes to volsize are not reflected in the reservation.
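For example, the following command uses the -s flag to create a sparse 5-Gbyte volume; the volume name is hypothetical:


# zfs create -s -V 5gb tank/vol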

For more information about using volumes, see ZFS Volumes.

ZFS User Properties

In addition to the standard native properties, ZFS supports arbitrary user properties. User properties have no effect on ZFS behavior, but you can use them to annotate datasets with information that is meaningful in your environment.

User property names must conform to the following conventions: they must contain a colon (:) character to distinguish them from native properties; they can contain lowercase letters, numbers, and the punctuation characters ':', '+', '.', and '_'; and they are limited to a maximum of 256 characters.

The expected convention is that the property name is divided into the following two components but this namespace is not enforced by ZFS:


module:property

When making programmatic use of user properties, use a reversed DNS domain name for the module component of property names to reduce the chance that two independently-developed packages will use the same property name for different purposes. Property names that begin with "com.sun." are reserved for use by Sun Microsystems.
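For example, a package developed at example.com might tag datasets as follows; the property name and value here are illustrative:


# zfs set com.example:backup-policy=daily tank/home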

The values of user properties are arbitrary strings, are always inherited, and are never validated. The maximum length of a user property value is 1024 characters.

For example:


# zfs set dept:users=finance userpool/user1
# zfs set dept:users=general userpool/user2
# zfs set dept:users=itops userpool/user3

All of the commands that operate on properties, such as zfs list, zfs get, zfs set, and so on, can be used to manipulate both native properties and user properties.

For example:


# zfs get -r dept:users userpool
NAME            PROPERTY    VALUE           SOURCE
userpool        dept:users  all             local
userpool/user1  dept:users  finance         local
userpool/user2  dept:users  general         local
userpool/user3  dept:users  itops           local

To clear a user property, use the zfs inherit command. For example:


# zfs inherit -r dept:users userpool

If the property is not defined in any parent dataset, it is removed entirely.

Querying ZFS File System Information

The zfs list command provides an extensible mechanism for viewing and querying dataset information. Both basic and complex queries are explained in this section.

Listing Basic ZFS Information

You can list basic dataset information by using the zfs list command with no options. This command displays the names of all datasets on the system including their used, available, referenced, and mountpoint properties. For more information about these properties, see Introducing ZFS Properties.

For example:


# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
pool                   476K  16.5G    21K  /pool
pool/clone              18K  16.5G    18K  /pool/clone
pool/home              296K  16.5G    19K  /pool/home
pool/home/marks        277K  16.5G   277K  /pool/home/marks
pool/home/marks@snap      0      -   277K  -
pool/test               18K  16.5G    18K  /test

You can also use this command to display specific datasets by providing the dataset name on the command line. Additionally, use the -r option to recursively display all descendents of that dataset. For example:


# zfs list -r pool/home/marks
NAME                   USED  AVAIL  REFER  MOUNTPOINT
pool/home/marks        277K  16.5G   277K  /pool/home/marks
pool/home/marks@snap      0      -   277K  -

You can use the zfs list command with the mount point of a file system. For example:


# zfs list /pool/home/marks
NAME              USED  AVAIL  REFER  MOUNTPOINT
pool/home/marks   277K  16.5G   277K  /pool/home/marks

The following example shows how to display tank/home/chua and all of its descendent datasets.


# zfs list -r tank/home/chua
NAME                          USED  AVAIL  REFER  MOUNTPOINT
tank/home/chua               26.0K  4.81G  10.0K  /tank/home/chua
tank/home/chua/projects        16K  4.81G   9.0K  /tank/home/chua/projects
tank/home/chua/projects/fs1     8K  4.81G     8K  /tank/home/chua/projects/fs1
tank/home/chua/projects/fs2     8K  4.81G     8K  /tank/home/chua/projects/fs2

For additional information about the zfs list command, see zfs(1M).

Creating Complex ZFS Queries

The zfs list output can be customized by using the -o, -t, and -H options.

You can customize property value output by using the -o option and a comma-separated list of desired properties. Supply any dataset property as a valid argument. For a list of all supported dataset properties, see Introducing ZFS Properties. In addition to the properties defined, the -o option list can also contain the literal name to indicate that the output should include the name of the dataset.

The following example uses zfs list to display the dataset name, along with the sharenfs and mountpoint properties.


# zfs list -o name,sharenfs,mountpoint
NAME                   SHARENFS         MOUNTPOINT
tank                   off              /tank
tank/home              on               /tank/home
tank/home/ahrens       on               /tank/home/ahrens
tank/home/bonwick      on               /tank/home/bonwick
tank/home/chua         on               /tank/home/chua
tank/home/eschrock     on               legacy
tank/home/moore        on               /tank/home/moore
tank/home/tabriz       ro               /tank/home/tabriz

You can use the -t option to specify the types of datasets to display. The valid types are described in the following table.

Table 6–2 Types of ZFS Datasets

Type 

Description 

filesystem

File systems and clones 

volume

Volumes 

snapshot

Snapshots 

The -t option takes a comma-separated list of the types of datasets to be displayed. The following example uses the -t and -o options simultaneously to show the name and used property for all file systems:


# zfs list -t filesystem -o name,used
NAME              USED
pool              476K
pool/clone         18K
pool/home         296K
pool/home/marks   277K
pool/test          18K

You can use the -H option to omit the zfs list header from the generated output. With the -H option, all white space is output as tabs. This option can be useful when you need parseable output, for example, when scripting. The following example shows the output generated from using the zfs list command with the -H option:


# zfs list -H -o name
pool
pool/clone
pool/home
pool/home/marks
pool/home/marks@snap
pool/test

Managing ZFS Properties

Dataset properties are managed through the zfs command's set, inherit, and get subcommands.

Setting ZFS Properties

You can use the zfs set command to modify any settable dataset property. Or, you can use the zfs create command to set properties when the dataset is created. For a list of settable dataset properties, see Settable ZFS Native Properties. The zfs set command takes a property/value sequence in the format of property=value and a dataset name.

The following example sets the atime property to off for tank/home. Only one property can be set or modified during each zfs set invocation.


# zfs set atime=off tank/home

In addition, any file system property can be set when the file system is created. For example:


# zfs create -o atime=off tank/home

You can specify numeric properties by using the following easy to understand suffixes (in order of magnitude): BKMGTPEZ. Any of these suffixes can be followed by an optional b, indicating bytes, with the exception of the B suffix, which already indicates bytes. The following four invocations of zfs set are equivalent numeric expressions indicating that the quota property be set to the value of 50 Gbytes on the tank/home/marks file system:


# zfs set quota=50G tank/home/marks
# zfs set quota=50g tank/home/marks
# zfs set quota=50GB tank/home/marks
# zfs set quota=50gb tank/home/marks

Values of non-numeric properties are case-sensitive and must be lowercase, with the exception of mountpoint and sharenfs. The values of these properties can have mixed upper and lower case letters.

For more information about the zfs set command, see zfs(1M).

Inheriting ZFS Properties

All settable properties, with the exception of quotas and reservations, inherit their value from their parent, unless a quota or reservation is explicitly set on the child. If no ancestor has an explicit value set for an inherited property, the default value for the property is used. You can use the zfs inherit command to clear a property setting, thus causing the setting to be inherited from the parent.

The following example uses the zfs set command to turn on compression for the tank/home/bonwick file system. Then, zfs inherit is used to unset the compression property, thus causing the property to inherit the default setting of off. Because neither home nor tank has the compression property set locally, the default value is used. If both had compression on, the value set in the most immediate ancestor would be used (home in this example).


# zfs set compression=on tank/home/bonwick
# zfs get -r compression tank
NAME             PROPERTY      VALUE                    SOURCE
tank             compression   off                      default
tank/home        compression   off                      default
tank/home/bonwick compression   on                      local
# zfs inherit compression tank/home/bonwick
# zfs get -r compression tank
NAME             PROPERTY      VALUE                    SOURCE
tank             compression   off                      default
tank/home        compression   off                      default
tank/home/bonwick compression  off                      default

The inherit subcommand is applied recursively when the -r option is specified. In the following example, the command causes the value for the compression property to be inherited by tank/home and any descendents it might have.


# zfs inherit -r compression tank/home

Note –

Be aware that the use of the -r option clears the current property setting for all descendent datasets.


For more information about the zfs command, see zfs(1M).

Querying ZFS Properties

The simplest way to query property values is by using the zfs list command. For more information, see Listing Basic ZFS Information. However, for complicated queries and for scripting, use the zfs get command to provide more detailed information in a customized format.

You can use the zfs get command to retrieve any dataset property. The following example shows how to retrieve a single property on a dataset:


# zfs get checksum tank/ws
NAME             PROPERTY       VALUE                      SOURCE
tank/ws          checksum       on                         default

The fourth column, SOURCE, indicates where this property value has been set. The following table defines the meaning of the possible source values.

Table 6–3 Possible SOURCE Values (zfs get)

Source Value 

Description 

default

This property was never explicitly set for this dataset or any of its ancestors. The default value for this property is being used. 

inherited from dataset-name

This property value is being inherited from the parent as specified by dataset-name.

local

This property value was explicitly set for this dataset by using zfs set.

temporary

This property value was set by using the zfs mount -o option and is only valid for the lifetime of the mount. For more information about temporary mount point properties, see Using Temporary Mount Properties.

- (none) 

This property is a read-only property. Its value is generated by ZFS. 

You can use the special keyword all to retrieve all dataset properties. The following examples use the all keyword to retrieve all existing dataset properties:


# zfs get all tank
NAME  PROPERTY              VALUE                  SOURCE
tank  type                  filesystem             -
tank  creation              Wed Nov 18  9:43 2009  -
tank  used                  72K                    -
tank  available             66.9G                  -
tank  referenced            21K                    -
tank  compressratio         1.00x                  -
tank  mounted               yes                    -
tank  quota                 none                   default
tank  reservation           none                   default
tank  recordsize            128K                   default
tank  mountpoint            /tank                  default
tank  sharenfs              off                    default
tank  checksum              on                     default
tank  compression           off                    default
tank  atime                 on                     default
tank  devices               on                     default
tank  exec                  on                     default
tank  setuid                on                     default
tank  readonly              off                    default
tank  zoned                 off                    default
tank  snapdir               hidden                 default
tank  aclmode               groupmask              default
tank  aclinherit            restricted             default
tank  canmount              on                     default
tank  shareiscsi            off                    default
tank  xattr                 on                     default
tank  copies                1                      default
tank  version               4                      -
tank  utf8only              off                    -
tank  normalization         none                   -
tank  casesensitivity       sensitive              -
tank  vscan                 off                    default
tank  nbmand                off                    default
tank  sharesmb              off                    default
tank  refquota              none                   default
tank  refreservation        none                   default
tank  primarycache          all                    default
tank  secondarycache        all                    default
tank  usedbysnapshots       0                      -
tank  usedbydataset         21K                    -
tank  usedbychildren        51K                    -
tank  usedbyrefreservation  0                      -
tank  logbias               latency                default
tank  dedup                 off                    default
tank  mlslabel              none                   default

The -s option to zfs get enables you to specify, by source type, the properties to display. This option takes a comma-separated list indicating the desired source types. Only properties with the specified source type are displayed. The valid source types are local, default, inherited, temporary, and none. The following example shows all properties that have been locally set on pool.


# zfs get -s local all pool
NAME             PROPERTY      VALUE                      SOURCE
pool             compression   on                         local

Any of the above options can be combined with the -r option to recursively display the specified properties on all children of the specified dataset. In the following example, all temporary properties on all datasets within tank are recursively displayed:


# zfs get -r -s temporary all tank
NAME             PROPERTY       VALUE                      SOURCE
tank/home          atime          off                      temporary
tank/home/bonwick  atime          off                      temporary
tank/home/marks    atime          off                      temporary

A recent feature enables you to make queries with the zfs get command without specifying a target file system, which means it operates on all pools or file systems. For example:


# zfs get -s local all
tank/home               atime          off                    local
tank/home/bonwick       atime          off                    local
tank/home/marks         quota          50G                    local

For more information about the zfs get command, see zfs(1M).

Querying ZFS Properties for Scripting

The zfs get command supports the -H and -o options, which are designed for scripting. The -H option indicates that any header information should be omitted and that all white space be replaced with a tab. Uniform white space allows for easily parseable data. You can use the -o option to customize the output by supplying a comma-separated list of the fields name, property, value, and source.

The following example shows how to retrieve a single value by using the -H and -o options of zfs get.


# zfs get -H -o value compression tank/home
on

The -p option reports numeric values as their exact values. For example, 1 Mbyte would be reported as 1000000. This option can be used as follows:


# zfs get -H -o value -p used tank/home
182983742

You can use the -r option along with any of the above options to recursively retrieve the requested values for all descendents. The following example uses the -r, -o, and -H options to retrieve the dataset name and the value of the used property for export/home and its descendents, while omitting any header output:


# zfs get -H -o name,value -r used export/home
export/home     5.57G
export/home/marks       1.43G
export/home/maybee      2.15G

Mounting and Sharing ZFS File Systems

This section describes how mount points and shared file systems are managed in ZFS.

Managing ZFS Mount Points

By default, all ZFS file systems are mounted by ZFS at boot by using the Service Management Facility's (SMF) svc://system/filesystem/local service. File systems are mounted under /path, where path is the name of the file system.

You can override the default mount point by setting the mountpoint property to a specific path by using the zfs set command. ZFS automatically creates this mount point, if needed, and automatically mounts this file system when the zfs mount -a command is invoked, without requiring you to edit the /etc/vfstab file.

The mountpoint property is inherited. For example, if pool/home has mountpoint set to /export/stuff, then pool/home/user inherits /export/stuff/user for its mountpoint property.
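The following commands illustrate this inheritance with the hypothetical pool/home datasets; the output is abbreviated:


# zfs set mountpoint=/export/stuff pool/home
# zfs get -r mountpoint pool/home
NAME            PROPERTY    VALUE               SOURCE
pool/home       mountpoint  /export/stuff       local
pool/home/user  mountpoint  /export/stuff/user  inherited from pool/home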

The mountpoint property can be set to none to prevent the file system from being mounted. In addition, the canmount property is available for determining whether a file system can be mounted. For more information about the canmount property, see The canmount Property.

If desired, file systems can also be explicitly managed through legacy mount interfaces by setting the mountpoint property to legacy by using zfs set. Doing so prevents ZFS from automatically mounting and managing this file system. Legacy tools including the mount and umount commands, and the /etc/vfstab file must be used instead. For more information about legacy mounts, see Legacy Mount Points.

When changing mount point management strategies, the automatic and legacy mount point behaviors described in the following sections apply.

Automatic Mount Points

You can also set the default mount point for the root dataset at creation time by using zpool create's -m option. For more information about creating pools, see Creating a ZFS Storage Pool.

Any dataset whose mountpoint property is not legacy is managed by ZFS. In the following example, a dataset is created whose mount point is automatically managed by ZFS.


# zfs create pool/filesystem
# zfs get mountpoint pool/filesystem
NAME             PROPERTY      VALUE                      SOURCE
pool/filesystem  mountpoint    /pool/filesystem           default
# zfs get mounted pool/filesystem
NAME             PROPERTY      VALUE                      SOURCE
pool/filesystem  mounted       yes                        -

You can also explicitly set the mountpoint property as shown in the following example:


# zfs set mountpoint=/mnt pool/filesystem
# zfs get mountpoint pool/filesystem
NAME             PROPERTY      VALUE                      SOURCE
pool/filesystem  mountpoint    /mnt                       local
# zfs get mounted pool/filesystem
NAME             PROPERTY      VALUE                      SOURCE
pool/filesystem  mounted       yes                        -

When the mountpoint property is changed, the file system is automatically unmounted from the old mount point and remounted to the new mount point. Mount point directories are created as needed. If ZFS is unable to unmount a file system due to it being active, an error is reported and a forced manual unmount is necessary.

Legacy Mount Points

You can manage ZFS file systems with legacy tools by setting the mountpoint property to legacy. Legacy file systems must be managed through the mount and umount commands and the /etc/vfstab file. ZFS does not automatically mount legacy file systems on boot, and the ZFS mount and umount commands do not operate on datasets of this type. The following examples show how to set up and manage a ZFS dataset in legacy mode:


# zfs set mountpoint=legacy tank/home/eschrock
# mount -F zfs tank/home/eschrock /mnt

To automatically mount a legacy file system on boot, you must add an entry to the /etc/vfstab file. The following example shows what the entry in the /etc/vfstab file might look like:


#device         device        mount           FS      fsck    mount   mount
#to mount       to fsck       point           type    pass    at boot options
#

tank/home/eschrock  -             /mnt            zfs     -       yes     -

Note that the device to fsck and fsck pass entries are set to - because the fsck command is not applicable to ZFS file systems. For more information regarding data integrity and the lack of need for fsck in ZFS, see Transactional Semantics.

Mounting ZFS File Systems

ZFS automatically mounts file systems when file systems are created or when the system boots. Use of the zfs mount command is necessary only when changing mount options or explicitly mounting or unmounting file systems.

The zfs mount command with no arguments shows all currently mounted file systems that are managed by ZFS. Legacy managed mount points are not displayed. For example:


# zfs mount
tank                            /tank
tank/home                       /tank/home
tank/home/bonwick               /tank/home/bonwick
tank/ws                         /tank/ws

You can use the -a option to mount all ZFS managed file systems. Legacy managed file systems are not mounted. For example:


# zfs mount -a

By default, ZFS does not allow mounting on top of a nonempty directory. To force a mount on top of a nonempty directory, you must use the -O option. For example:


# zfs mount tank/home/lalt
cannot mount '/export/home/lalt': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
# zfs mount -O tank/home/lalt

Legacy mount points must be managed through legacy tools. An attempt to use ZFS tools results in an error. For example:


# zfs mount pool/home/billm
cannot mount 'pool/home/billm': legacy mountpoint
use mount(1M) to mount this filesystem
# mount -F zfs pool/home/billm /mnt

When a file system is mounted, it uses a set of mount options based on the property values associated with the dataset. The correlation between properties and mount options is as follows:

Property        Mount Options

atime           atime/noatime
devices         devices/nodevices
exec            exec/noexec
nbmand          nbmand/nonbmand
readonly        ro/rw
setuid          setuid/nosetuid
xattr           xattr/noxattr

The mount option nosuid is an alias for nodevices,nosetuid.
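
For example, a minimal sketch that passes nosuid when mounting the legacy-managed tank/home/eschrock file system from the earlier example; the effect is the same as specifying nodevices,nosetuid:


# mount -F zfs -o nosuid tank/home/eschrock /mnt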

You can use the NFSv4 mirror mount features to help you better manage NFS-mounted ZFS home directories. For a description of mirror mounts, see ZFS and File System Mirror Mounts.

Using Temporary Mount Properties

If any of the above mount options are set explicitly by using the -o option with the zfs mount command, the associated property value is temporarily overridden. These property values are reported as temporary by the zfs get command and revert to their original settings when the file system is unmounted. If a property value is changed while the dataset is mounted, the change takes effect immediately, overriding any temporary setting.

In the following example, the read-only mount option is temporarily set on the tank/home/perrin file system:


# zfs mount -o ro tank/home/perrin

In this example, the file system is assumed to be unmounted.

To temporarily change a property on a file system that is currently mounted, you must use the special remount option. In the following example, the atime property is temporarily changed to off for a file system that is currently mounted:


# zfs mount -o remount,noatime tank/home/perrin
# zfs get atime tank/home/perrin
NAME             PROPERTY      VALUE                      SOURCE
tank/home/perrin atime         off                        temporary

For more information about the zfs mount command, see zfs(1M).

Unmounting ZFS File Systems

You can unmount file systems by using the zfs unmount subcommand. The unmount subcommand can take either the mount point or the file system name as its argument.

In the following example, a file system is unmounted by file system name:


# zfs unmount tank/home/tabriz

In the following example, the file system is unmounted by mount point:


# zfs unmount /export/home/tabriz

The unmount command fails if the file system is active or busy. To forcibly unmount a file system, you can use the -f option. Be cautious when forcibly unmounting a file system whose contents are actively being used; unpredictable application behavior can result.


# zfs unmount tank/home/eschrock
cannot unmount '/export/home/eschrock': Device busy
# zfs unmount -f tank/home/eschrock

To provide for backwards compatibility, the legacy umount command can be used to unmount ZFS file systems. For example:


# umount /export/home/bob

For more information about the zfs umount command, see zfs(1M).

Sharing and Unsharing ZFS File Systems

Similar to mount points, ZFS can automatically share file systems by using the sharenfs property. Using this method, you do not have to modify the /etc/dfs/dfstab file when a new file system is added. The sharenfs property is a comma-separated list of options to pass to the share command. The special value on is an alias for the default share options, which are read/write permissions for anyone. The special value off indicates that the file system is not managed by ZFS and can be shared through traditional means, such as the /etc/dfs/dfstab file. All file systems whose sharenfs property is not off are shared during boot.
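
For example, a minimal sketch, assuming a hypothetical tank/home/docs file system, that passes the standard share_nfs options ro and anon=0 through the sharenfs property:


# zfs set sharenfs=ro,anon=0 tank/home/docs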

Controlling Share Semantics

By default, all file systems are unshared. To share a new file system, use zfs set syntax similar to the following:


# zfs set sharenfs=on tank/home/eschrock

The property is inherited, and file systems are automatically shared on creation if their inherited property is not off. For example:


# zfs set sharenfs=on tank/home
# zfs create tank/home/bricker
# zfs create tank/home/tabriz
# zfs set sharenfs=ro tank/home/tabriz

Both tank/home/bricker and tank/home/tabriz are initially shared writable because they inherit the sharenfs property from tank/home. Once the property is set to ro (readonly), tank/home/tabriz is shared read-only regardless of the sharenfs property that is set for tank/home.

Unsharing ZFS File Systems

While most file systems are automatically shared and unshared during boot, creation, and destruction, file systems sometimes need to be explicitly unshared. To do so, use the zfs unshare command. For example:


# zfs unshare tank/home/tabriz

This command unshares the tank/home/tabriz file system. To unshare all ZFS file systems on the system, you need to use the -a option.


# zfs unshare -a

Sharing ZFS File Systems

Most of the time the automatic behavior of ZFS, sharing on boot and creation, is sufficient for normal operation. If, for some reason, you unshare a file system, you can share it again by using the zfs share command. For example:


# zfs share tank/home/tabriz

You can also share all ZFS file systems on the system by using the -a option.


# zfs share -a

Legacy Share Behavior

If the sharenfs property is off, then ZFS does not attempt to share or unshare the file system at any time. This setting enables you to administer through traditional means such as the /etc/dfs/dfstab file.

Unlike the traditional mount command, the traditional share and unshare commands can still function on ZFS file systems. As a result, you can manually share a file system with options that are different from the settings of the sharenfs property. This administrative model is discouraged. Choose to either manage NFS shares completely through ZFS or completely through the /etc/dfs/dfstab file. The ZFS administrative model is designed to be simpler and less work than the traditional model. However, in some cases, you might still want to control file system sharing behavior through the familiar model.
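
For example, a minimal sketch of the traditional commands, reusing the /export/home/bob mount point from the earlier unmount example; any options given here are independent of the sharenfs property:


# share -F nfs -o ro /export/home/bob
# unshare /export/home/bob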

Sharing ZFS Files in a Solaris CIFS Environment

The sharesmb property is provided to share ZFS files by using the Solaris CIFS software product. When this property is set on a ZFS file system, these shares are visible to CIFS client systems. For more information about using the CIFS software product, see the System Administration Guide: Windows Interoperability.

For a detailed description of the sharesmb property, see The sharesmb Property.


Example 6–1 Sharing ZFS File Systems (sharesmb)

In this example, a ZFS file system sandbox/fs1 is created and shared with the sharesmb property. If necessary, enable the SMB services.


# svcadm enable -r smb/server
svcadm: svc:/milestone/network depends on svc:/network/physical, which has multiple instances.
# svcs | grep smb
online         10:47:15 svc:/network/smb/server:default

# zpool create sandbox mirror c0t2d0 c0t4d0
# zfs create sandbox/fs1
# zfs set sharesmb=on sandbox/fs1

The sharesmb property is set for sandbox/fs1 and its descendents.

Verify that the file system was shared. For example:


# sharemgr show -vp
default nfs=()
zfs nfs=()
    zfs/sandbox/fs1 smb=()
          sandbox_fs1=/sandbox/fs1

A default SMB resource name, sandbox_fs1, is assigned automatically.

In this example, another file system is created, sandbox/fs2, and shared with a resource name, myshare.


# zfs create sandbox/fs2
# zfs set sharesmb=name=myshare sandbox/fs2
# sharemgr show -vp
default nfs=()
zfs nfs=()
    zfs/sandbox/fs1 smb=()
          sandbox_fs1=/sandbox/fs1
    zfs/sandbox/fs2 smb=()
          myshare=/sandbox/fs2

The sandbox/fs2/fs2_sub1 file system is created and is automatically shared. The inherited resource name is myshare_fs2_sub1.


# zfs create sandbox/fs2/fs2_sub1
# sharemgr show -vp
default nfs=()
zfs nfs=()
    zfs/sandbox/fs1 smb=()
          sandbox_fs1=/sandbox/fs1
    zfs/sandbox/fs2 smb=()
          myshare=/sandbox/fs2
          myshare_fs2_sub1=/sandbox/fs2/fs2_sub1

Disable SMB sharing for sandbox/fs2 and its descendents.


# zfs set sharesmb=off sandbox/fs2
# sharemgr show -vp
default nfs=()
zfs nfs=()
    zfs/sandbox/fs1 smb=()
          sandbox_fs1=/sandbox/fs1

In this example, the sharesmb property is set on the pool's top-level file system. The descendent file systems are automatically shared.


# zpool create sandbox mirror c0t2d0 c0t4d0
# zfs set sharesmb=on sandbox
# zfs create sandbox/fs1
# zfs create sandbox/fs2

The top-level file system has a resource name of sandbox, but the descendents have their dataset name appended to the resource name.


# sharemgr show -vp
default nfs=()
zfs nfs=()
    zfs/sandbox smb=()
          sandbox=/sandbox
          sandbox_fs1=/sandbox/fs1       smb=()
          sandbox_fs2=/sandbox/fs2       smb=()

Setting ZFS Quotas and Reservations

You can use the quota property to set a limit on the amount of space a file system can use. In addition, you can use the reservation property to guarantee that some amount of space is available to a file system. Both properties apply to the dataset they are set on and all descendents of that dataset.

That is, if a quota is set on the tank/home dataset, the total amount of space used by tank/home and all of its descendents cannot exceed the quota. Similarly, if tank/home is given a reservation, tank/home and all of its descendents draw from that reservation. The amount of space used by a dataset and all of its descendents is reported by the used property.
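
For example, you can review this aggregate accounting at any time; output is omitted here because the values depend on your configuration:


# zfs get used,quota,reservation tank/home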

The refquota and refreservation properties are available to manage file system space without accounting for space consumed by descendents, such as snapshots and clones.

In this Solaris release, you can set a user or group quota on the amount of space consumed by files that are owned by a particular user or group. The user and group quota properties cannot be set on a volume, on a file system before file system version 4, or on a pool before pool version 15.
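
For example, a quick check, assuming a hypothetical students pool and students/compsci file system, of whether the file system and pool versions support user and group quotas:


# zfs get version students/compsci
# zpool get version students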

Consider which of these quota and reservation features might best manage your file systems.

For more information about setting quotas and reservations, see Setting Quotas on ZFS File Systems and Setting Reservations on ZFS File Systems.

Setting Quotas on ZFS File Systems

ZFS quotas can be set and displayed by using the zfs set and zfs get commands. In the following example, a quota of 10 Gbytes is set on tank/home/bonwick.


# zfs set quota=10G tank/home/bonwick
# zfs get quota tank/home/bonwick
NAME              PROPERTY      VALUE                      SOURCE
tank/home/bonwick quota         10.0G                      local

ZFS quotas also impact the output of the zfs list and df commands. For example:


# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
tank/home             16.5K  33.5G  8.50K  /export/home
tank/home/bonwick     15.0K  10.0G  8.50K  /export/home/bonwick
tank/home/bonwick/ws  6.50K  10.0G  8.50K  /export/home/bonwick/ws
# df -h /export/home/bonwick
Filesystem             size   used  avail capacity  Mounted on
tank/home/bonwick       10G     8K    10G     1%    /export/home/bonwick

Note that although tank/home has 33.5 Gbytes of space available, tank/home/bonwick and tank/home/bonwick/ws only have 10 Gbytes of space available, due to the quota on tank/home/bonwick.

You cannot set a quota to an amount less than is currently being used by a dataset. For example:


# zfs set quota=10K tank/home/bonwick
cannot set quota for 'tank/home/bonwick': size is less than current used or 
reserved space

You can set a refquota on a dataset that limits the amount of space that the dataset can consume. This hard limit does not include space that is consumed by descendents. For example:


# zfs set refquota=10g students/studentA
# zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
profs               106K  33.2G    18K  /profs
students           57.7M  33.2G    19K  /students
students/studentA  57.5M  9.94G  57.5M  /students/studentA
# zfs snapshot students/studentA@today
# zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
profs                     106K  33.2G    18K  /profs
students                 57.7M  33.2G    19K  /students
students/studentA        57.5M  9.94G  57.5M  /students/studentA
students/studentA@today      0      -  57.5M  -

For additional convenience, you can set another quota on a dataset to help manage the space that is consumed by snapshots. For example:


# zfs set quota=20g students/studentA
# zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
profs                     106K  33.2G    18K  /profs
students                 57.7M  33.2G    19K  /students
students/studentA        57.5M  9.94G  57.5M  /students/studentA
students/studentA@today      0      -  57.5M  -

In this scenario, studentA might reach the refquota (10 Gbytes) hard limit, but can remove files to recover, even if snapshots exist.

In the above example, the smaller of the two quotas (10 Gbytes versus 20 Gbytes) is displayed in the zfs list output. To see the value of both quotas, use the zfs get command. For example:


# zfs get refquota,quota students/studentA
NAME               PROPERTY  VALUE              SOURCE
students/studentA  refquota  10G                local
students/studentA  quota     20G                local

Setting User or Group Quotas on a ZFS File System

You can set a user quota or a group quota by setting the userquota or groupquota property with the zfs set command, as follows:


# zfs create students/compsci
# zfs set userquota@student1=10G students/compsci
# zfs create students/labstaff
# zfs set groupquota@staff=20GB students/labstaff

Display the current user quota or group quota as follows:


# zfs get userquota@student1 students/compsci
NAME              PROPERTY            VALUE               SOURCE
students/compsci  userquota@student1  10G                 local
# zfs get groupquota@staff students/labstaff
NAME               PROPERTY          VALUE             SOURCE
students/labstaff  groupquota@staff  20G               local

You can display general user and group space usage by querying the following properties:


# zfs userspace students/compsci
TYPE        NAME      USED  QUOTA  
POSIX User  root      227M   none  
POSIX User  student1  455M    10G  
# zfs groupspace students/labstaff
TYPE         NAME   USED  QUOTA  
POSIX Group  root   217M   none  
POSIX Group  staff  217M    20G  

If you want to identify individual user or group space usage, query the following properties:


# zfs get userused@student1 students/compsci
NAME              PROPERTY           VALUE              SOURCE
students/compsci  userused@student1  455M               local
# zfs get groupused@staff students/labstaff
NAME               PROPERTY         VALUE            SOURCE
students/labstaff  groupused@staff  217M             local

The user and group quota properties are not displayed by the zfs get all dataset command, which otherwise lists all file system properties.

You can remove a user or group quota as follows:


# zfs set userquota@user1=none students/compsci
# zfs set groupquota@staff=none students/labstaff

Keep the following points in mind when using ZFS user and group quotas:

Enforcement of user or group quotas might be delayed by several seconds. This delay means that users might exceed their quota before the system notices that they are over quota and refuses additional writes with the EDQUOT error message.

You can use the legacy quota command to review user quotas in an NFS environment, for example, where a ZFS file system is mounted. Without any options, the quota command only displays output if the user's quota is exceeded. For example:


# zfs set userquota@student1=10m students/compsci   
# zfs userspace students/compsci
TYPE        NAME      USED  QUOTA  
POSIX User  root      227M   none  
POSIX User  student1  455M    10M  
# quota student1
Block limit reached on /students/compsci

If you reset the quota and the quota limit is no longer exceeded, you will need to use the quota -v command to review the user's quota. For example:


# zfs set userquota@student1=10GB students/compsci 
# zfs userspace students/compsci
TYPE        NAME      USED  QUOTA  
POSIX User  root      227M   none  
POSIX User  student1  455M    10G  
# quota student1
# quota -v student1
Disk quotas for student1 (uid 201):
Filesystem     usage  quota  limit    timeleft  files  quota  limit    timeleft
/students/compsci
              466029 10485760 10485760     

Setting Reservations on ZFS File Systems

A ZFS reservation is an allocation of space from the pool that is guaranteed to be available to a dataset. As such, you cannot reserve space for a dataset if that space is not currently available in the pool. The total amount of all outstanding unconsumed reservations cannot exceed the amount of unused space in the pool. ZFS reservations can be set and displayed by using the zfs set and zfs get commands. For example:


# zfs set reservation=5G tank/home/moore
# zfs get reservation tank/home/moore
NAME             PROPERTY     VALUE   SOURCE
tank/home/moore  reservation  5G      local

ZFS reservations can affect the output of the zfs list command. For example:


# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
tank/home             5.00G  33.5G  8.50K  /export/home
tank/home/moore       15.0K  33.5G  8.50K  /export/home/moore

Note that tank/home is using 5 Gbytes of space, although the total amount of space referred to by tank/home and its descendents is much less than 5 Gbytes. The used space reflects the space reserved for tank/home/moore. Reservations are considered in the used space of the parent dataset and do count against its quota, reservation, or both. For example:


# zfs set quota=5G pool/filesystem
# zfs set reservation=10G pool/filesystem/user1
cannot set reservation for 'pool/filesystem/user1': size is greater than 
available space

A dataset can use more space than its reservation, as long as space is available in the pool that is unreserved and the dataset's current usage is below its quota. A dataset cannot consume space that has been reserved for another dataset.

Reservations are not cumulative. That is, a second invocation of zfs set to set a reservation does not add its reservation to the existing reservation. Rather, the second reservation replaces the first reservation.


# zfs set reservation=10G tank/home/moore
# zfs set reservation=5G tank/home/moore
# zfs get reservation tank/home/moore
NAME             PROPERTY      VALUE                      SOURCE
tank/home/moore  reservation   5.00G                      local

You can set a refreservation to guarantee space for a dataset that does not include space consumed by snapshots and clones. The refreservation reservation is accounted for in the parent dataset's used space, and counts against the parent dataset's quotas and reservations. For example:


# zfs set refreservation=10g profs/prof1
# zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
profs                    10.0G  23.2G    19K  /profs
profs/prof1                10G  33.2G    18K  /profs/prof1

You can also set a reservation on the same dataset to guarantee dataset space and snapshot space. For example:


# zfs set reservation=20g profs/prof1
# zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
profs                    20.0G  13.2G    19K  /profs
profs/prof1                10G  33.2G    18K  /profs/prof1

Regular reservations are accounted for in the parent's used space.

In the above example, the smaller of the two reservations (10 Gbytes versus 20 Gbytes) is displayed in the zfs list output. To see the value of both reservations, use the zfs get command. For example:


# zfs get reservation,refreserv profs/prof1
NAME         PROPERTY        VALUE        SOURCE
profs/prof1  reservation     20G          local
profs/prof1  refreservation  10G          local

If refreservation is set, a snapshot is only allowed if enough free pool space exists outside of this reservation to accommodate the current number of referenced bytes in the dataset.