Managing ZFS File Systems in Oracle® Solaris 11.2

Updated: December 2014

Settable ZFS Native Properties

Settable native properties are properties whose values can be both retrieved and set. Settable native properties are set by using the zfs set command, as described in Setting ZFS Properties, or by using the zfs create command, as described in Creating a ZFS File System. With the exception of quotas and reservations, settable native properties are inherited. For more information about quotas and reservations, see Setting ZFS Quotas and Reservations.

Some settable native properties are specific to a particular type of dataset. In such cases, the dataset type is mentioned in the description in Table 5–1. If not specifically mentioned, a property applies to all dataset types: file systems, volumes, clones, and snapshots.

The settable properties are listed here and described in Table 5–1.

  • aclinherit

    For a detailed description, see ACL Properties.

  • aclmode

    For a detailed description, see ACL Properties.

  • atime

  • canmount

  • casesensitivity

  • checksum

  • compression

  • copies

  • dedup

  • devices

  • encryption

  • exec

  • keysource

  • logbias

  • mlslabel

  • mountpoint

  • nbmand

  • normalization

  • primarycache

  • quota

  • readonly

  • recordsize

    For a detailed description, see The recordsize Property.

  • refquota

  • refreservation

  • reservation

  • rstchown

  • secondarycache

  • setuid

  • share.nfs

  • share.smb

  • snapdir

  • utf8only

  • version

  • volblocksize

  • volsize

    For a detailed description, see The volsize Property.

  • vscan

  • xattr

  • zoned

The canmount Property

If the canmount property is set to off, the file system cannot be mounted by using the zfs mount or zfs mount –a commands. Setting this property to off is similar to setting the mountpoint property to none, except that the file system still has a normal mountpoint property that can be inherited. For example, you can set this property to off and establish inheritable properties for descendent file systems, while the parent file system itself is never mounted and is not accessible to users. In this case, the parent file system serves as a container so that you can set properties on the container, but the container itself is never accessible.

In the following example, userpool is created, and its canmount property is set to off. Mount points for descendent user file systems are set to one common mount point, /export/home. Properties that are set on the parent file system are inherited by descendent file systems, but the parent file system itself is never mounted.

# zpool create userpool mirror c0t5d0 c1t6d0
# zfs set canmount=off userpool
# zfs set mountpoint=/export/home userpool
# zfs set compression=on userpool
# zfs create userpool/user1
# zfs create userpool/user2
# zfs mount
userpool/user1                  /export/home/user1
userpool/user2                  /export/home/user2

Setting the canmount property to noauto means that the file system can only be mounted explicitly, not automatically.
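
For example, the following sketch reuses the userpool/user1 file system from the previous example. After the property is set, zfs mount -a skips the file system, but an explicit zfs mount still succeeds:

# zfs set canmount=noauto userpool/user1
# zfs unmount userpool/user1
# zfs mount -a
# zfs mount userpool/user1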

The casesensitivity Property

This property indicates whether the file name matching algorithm used by the file system should be case-sensitive (sensitive), case-insensitive (insensitive), or allow a combination of both styles of matching (mixed).

When a case-insensitive matching request is made of a mixed-sensitivity file system, the behavior is generally the same as would be expected of a purely case-insensitive file system. The difference is that a mixed-sensitivity file system might contain directories with multiple names that are unique from a case-sensitive perspective, but not unique from the case-insensitive perspective.

For example, a directory might contain files foo, Foo, and FOO. If a request is made to case-insensitively match any of the possible forms of foo (for example, foo, FOO, FoO, fOo, and so on), one of the three existing files is chosen as the match by the matching algorithm. Exactly which file the algorithm chooses as a match is not guaranteed, but what is guaranteed is that the same file is chosen as a match for any of the forms of foo. The file chosen as a case-insensitive match for foo, FOO, foO, Foo, and so on, is always the same, so long as the directory remains unchanged.
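
As a brief sketch (the tank/data file system name is illustrative), the property can be set when a file system is created; it cannot be changed on an existing file system:

# zfs create -o casesensitivity=mixed tank/data
# zfs get casesensitivity tank/data
NAME       PROPERTY         VALUE  SOURCE
tank/data  casesensitivity  mixed  -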

The utf8only, normalization, and casesensitivity properties also provide new permissions that can be assigned to non-privileged users by using ZFS delegated administration. For more information, see Delegating ZFS Permissions.

The copies Property

As a reliability feature, ZFS file system metadata is automatically stored multiple times across different disks, if possible. This feature is known as ditto blocks.

In this release, you can also store multiple copies of user data per file system by using the zfs set copies command. For example:

# zfs set copies=2 users/home
# zfs get copies users/home
NAME        PROPERTY  VALUE       SOURCE
users/home  copies    2           local

Available values are 1, 2, or 3. The default value is 1. These copies are in addition to any pool-level redundancy, such as in a mirrored or RAID-Z configuration.

The benefits of storing multiple copies of ZFS user data are as follows:

  • Improves data retention by enabling recovery from unrecoverable block read faults, such as media faults (commonly known as bit rot) for all ZFS configurations.

  • Provides data protection, even when only a single disk is available.

  • Enables you to select data protection policies on a per-file system basis, beyond the capabilities of the storage pool.


Note - Depending on the allocation of the ditto blocks in the storage pool, multiple copies might be placed on a single disk. A subsequent full disk failure might cause all ditto blocks to be unavailable.

You might consider using ditto blocks when you accidentally create a non-redundant pool or when you need to set data retention policies.

The dedup Property

The dedup property controls whether duplicate data is removed from a file system. If a file system has the dedup property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared between files.

Do not enable the dedup property on file systems that reside on production systems until you review the following considerations:

  1. Determine if your data would benefit from deduplication space savings. You can run the zdb –S command to simulate the potential space savings of enabling dedup on your pool. This command must be run on a quiet pool. If your data is not dedup-able, there is no point in enabling dedup. For example:

    # zdb -S tank
    Simulated DDT histogram:

    bucket              allocated                       referenced
    ______   ______________________________   ______________________________
    refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
    ------   ------   -----   -----   -----   ------   -----   -----   -----
         1    2.27M    239G    188G    194G    2.27M    239G    188G    194G
         2     327K   34.3G   27.8G   28.1G     698K   73.3G   59.2G   59.9G
         4    30.1K   2.91G   2.10G   2.11G     152K   14.9G   10.6G   10.6G
         8    7.73K    691M    529M    529M    74.5K   6.25G   4.79G   4.80G
        16      673   43.7M   25.8M   25.9M    13.1K    822M    492M    494M
        32      197   12.3M   7.02M   7.03M    7.66K    480M    269M    270M
        64       47   1.27M    626K    626K    3.86K    103M   51.2M   51.2M
       128       22    908K    250K    251K    3.71K    150M   40.3M   40.3M
       256        7    302K     48K   53.7K    2.27K   88.6M   17.3M   19.5M
       512        4    131K   7.50K   7.75K    2.74K    102M   5.62M   5.79M
        2K        1      2K      2K      2K    3.23K   6.47M   6.47M   6.47M
        8K        1    128K      5K      5K    13.9K   1.74G   69.5M   69.5M
     Total    2.63M    277G    218G    225G    3.22M    337G    263G    270G

    dedup = 1.20, compress = 1.28, copies = 1.03, dedup * compress / copies = 1.50

    If the estimated dedup ratio is greater than 2, then you might see dedup space savings.

    In the above example, the dedup ratio is less than 2, so enabling dedup is not recommended.

  2. Make sure your system has enough memory to support dedup.

    • Each in-core dedup table entry is approximately 320 bytes.

    • Multiply the number of allocated blocks times 320. For example:

      in-core DDT size = 2.63M x 320 = 841.60M
  3. Dedup performance is best when the deduplication table fits into memory. If the dedup table has to be written to disk, then performance will decrease. For example, removing a large file system with dedup enabled will severely decrease system performance if the system does not meet the memory requirements described above.

When dedup is enabled, the dedup checksum algorithm overrides the checksum property. Setting the property value to verify is equivalent to specifying sha256,verify. If the property is set to verify and two blocks have the same signature, ZFS does a byte-by-byte comparison with the existing block to ensure that the contents are identical.

This property can be enabled per file system. For example:

# zfs set dedup=on tank/home
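
If you want ZFS to verify duplicate blocks byte for byte before sharing them, an illustrative variant (using the same tank/home file system) is to set the property to verify instead:

# zfs set dedup=verify tank/home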

You can use the zfs get command to determine if the dedup property is set.
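
For example, a sketch of the expected output after the preceding commands (the values shown are illustrative):

# zfs get dedup tank/home
NAME       PROPERTY  VALUE   SOURCE
tank/home  dedup     verify  local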

Although deduplication is set as a file system property, the scope is pool-wide. For example, you can identify the deduplication ratio in the zpool list output:

# zpool list tank
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank    136G  55.2G  80.8G    40%  2.30x  ONLINE  -

The DEDUP column indicates how much deduplication has occurred. If the dedup property is not enabled on any file system or if the dedup property was just enabled on the file system, the DEDUP ratio is 1.00x.

You can use the zpool get command to determine the value of the dedupratio property. For example:

# zpool get dedupratio export
NAME    PROPERTY    VALUE  SOURCE
export  dedupratio  3.00x  -

This pool property indicates how much deduplication the pool has achieved.

The encryption Property

You can use the encryption property to encrypt ZFS file systems. For more information, see Encrypting ZFS File Systems.
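
As a brief sketch: encryption can only be enabled when a file system is created, and by default you are prompted for a passphrase (the tank/home/secure name is illustrative):

# zfs create -o encryption=on tank/home/secure
Enter passphrase for 'tank/home/secure': xxxxxxxx
Enter again: xxxxxxxx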

The recordsize Property

The recordsize property specifies a suggested block size for files in the file system.

This property is designed solely for use with database workloads that access files in fixed-size records. ZFS automatically adjusts block sizes according to internal algorithms optimized for typical access patterns. For databases that create very large files but access the files in small random chunks, these algorithms might be suboptimal. Specifying a recordsize value greater than or equal to the record size of the database can result in significant performance gains. Use of this property for general purpose file systems is strongly discouraged and might adversely affect performance. The size specified must be a power of 2 greater than or equal to 512 bytes and less than or equal to 1 MB. Changing the file system's recordsize value only affects files created afterward. Existing files are unaffected.

The property abbreviation is recsize.
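
For example, a hypothetical file system for a database that reads and writes in 8-KB records might be configured as follows (the tank/db name and 8K value are illustrative):

# zfs create -o recordsize=8k tank/db
# zfs get recordsize tank/db
NAME     PROPERTY    VALUE  SOURCE
tank/db  recordsize  8K     local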

The share.smb Property

This property enables sharing of ZFS file systems with the Oracle Solaris SMB service, and identifies options to be used.

When the property is changed from off to on, any shares that inherit the property are re-shared with their current options. When the property is set to off, the shares that inherit the property are unshared. For examples of using the share.smb property, see Sharing and Unsharing ZFS File Systems.
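
As a minimal illustration (assuming a tank/home file system and a configured SMB service), the share is published by setting the property:

# zfs set share.smb=on tank/home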

The volsize Property

The volsize property specifies the logical size of the volume. By default, creating a volume establishes a reservation for the same amount. Any changes to volsize are reflected in an equivalent change to the reservation. These checks are used to prevent unexpected behavior for users. A volume that claims more space than it can actually provide can result in undefined behavior or data corruption, depending on how the volume is used. These effects can also occur when the volume size is changed while the volume is in use, particularly when you shrink the size. Use extreme care when adjusting the volume size.
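
For example, a sketch of creating a 2-GB volume and later growing it to 3 GB (the tank/vol name is illustrative; the reservation is adjusted to match the new size):

# zfs create -V 2g tank/vol
# zfs set volsize=3g tank/vol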

For more information about using volumes, see ZFS Volumes.