
Oracle Solaris ZFS and Traditional File System Differences

ZFS File System Granularity

Historically, file systems have been constrained to one device and thus to the size of that device. Creating and re-creating traditional file systems because of size constraints is time-consuming and sometimes difficult. Traditional volume management products help manage this process.

Because ZFS file systems are not constrained to specific devices, they can be created easily and quickly, similar to the way directories are created. ZFS file systems grow automatically within the disk space allocated to the storage pool in which they reside.

Instead of creating one file system, such as /export/home, to manage many user subdirectories, you can create one file system per user. You can easily set up and manage many file systems by applying properties that can be inherited by the descendent file systems contained within the hierarchy.
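For illustration, a per-user hierarchy might be built with a few zfs create commands; the pool name tank and the user file system names below are hypothetical:

# zfs create tank/home
# zfs set compression=on tank/home
# zfs create tank/home/alice
# zfs create tank/home/bob

The descendent file systems tank/home/alice and tank/home/bob inherit the compression property from tank/home, so per-user settings do not have to be repeated.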

For an example that shows how to create a file system hierarchy, see Creating a ZFS File System Hierarchy.

ZFS Disk Space Accounting

ZFS is based on the concept of pooled storage. Unlike typical file systems, which are mapped to physical storage, all ZFS file systems in a pool share the available storage in the pool. So, the available disk space reported by utilities such as df might change even when the file system is inactive, as other file systems in the pool consume or release disk space.

Note that the maximum file system size can be limited by using quotas. For information about quotas, see Setting Quotas on ZFS File Systems. A specified amount of disk space can be guaranteed to a file system by using reservations. For information about reservations, see Setting Reservations on ZFS File Systems. This model is very similar to the NFS model, where multiple directories are mounted from the same file system (consider /home).
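As a brief sketch, assuming hypothetical datasets tank/home/alice and tank/home/bob, a quota and a reservation might be applied and then verified as follows:

# zfs set quota=10G tank/home/alice
# zfs set reservation=5G tank/home/bob
# zfs get quota,reservation tank/home/alice tank/home/bob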

All metadata in ZFS is allocated dynamically. Most other file systems preallocate much of their metadata. As a result, at file system creation time, an immediate space cost for this metadata is required. This behavior also means that the total number of files supported by the file system is predetermined. Because ZFS allocates its metadata as it needs it, no initial space cost is required, and the number of files is limited only by the available disk space. The output from the df -g command must be interpreted differently for ZFS than for other file systems. The total number of files reported is only an estimate based on the amount of storage that is available in the pool.

ZFS is a transactional file system. Most file system modifications are bundled into transaction groups and committed to disk asynchronously. Until these modifications are committed to disk, they are called pending changes. The amount of disk space used, available, and referenced by a file or file system does not consider pending changes. Pending changes are generally accounted for within a few seconds. Even committing a change to disk by using fsync(3c) or O_SYNC does not necessarily guarantee that the disk space usage information is updated immediately.
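For example, assuming a hypothetical dataset tank/home/alice, the space accounting properties can be checked with zfs get; immediately after a burst of writes, the reported values might lag the actual usage by a few seconds until the pending changes are committed:

# zfs get used,available,referenced tank/home/alice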

On a UFS file system, the du command reports the size of the data blocks within the file. On a ZFS file system, du reports the actual size of the file as stored on disk. This size includes metadata and reflects any compression that is applied. This reporting helps answer the question, "How much more space will I get if I remove this file?" So, even when compression is off, you will still see different results between ZFS and UFS.
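As an illustration, on a ZFS file system with compression enabled, comparing the logical size reported by ls -l with the on-disk size reported by du shows the difference; the file name below is hypothetical:

# ls -l /tank/home/alice/data.log
# du -h /tank/home/alice/data.log

The du output can be noticeably smaller than the ls -l size because it reflects the compressed, on-disk allocation.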

When you compare the space consumption that is reported by the df command with the zfs list command, consider that df is reporting the pool size and not just file system sizes. In addition, df doesn't understand descendent file systems or whether snapshots exist. If any ZFS properties, such as compression and quotas, are set on file systems, reconciling the space consumption that is reported by df might be difficult.
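One way to reconcile the two views is to look at per-file-system usage and pool-level usage side by side; the pool name tank is a placeholder:

# zfs list -r -o name,used,avail,refer tank
# df -h /tank

The zfs list output breaks usage down by descendent file system and accounts for snapshots, quotas, and reservations, while df does not.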

Consider the following scenarios that might also impact reported space consumption:

Out of Space Behavior

File system snapshots are inexpensive and easy to create in ZFS. Snapshots are common in most ZFS environments. For information about ZFS snapshots, see Chapter 6, Working With Oracle Solaris ZFS Snapshots and Clones.

The presence of snapshots can cause some unexpected behavior when you attempt to free disk space. Typically, given appropriate permissions, you can remove a file from a full file system, and this action results in more disk space becoming available in the file system. However, if the file to be removed exists in a snapshot of the file system, then no disk space is gained from the file deletion. The blocks used by the file continue to be referenced from the snapshot.

As a result, the file deletion can consume more disk space because a new version of the directory needs to be created to reflect the new state of the namespace. This behavior means that you can receive an unexpected ENOSPC or EDQUOT error when attempting to remove a file.
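A minimal sketch of this behavior, using a hypothetical dataset tank/home/alice:

# zfs snapshot tank/home/alice@before
# rm /tank/home/alice/large.file
# zfs list -t snapshot -r tank/home/alice

After the rm command, the space used by large.file is not returned to the file system. Instead, the USED value of the tank/home/alice@before snapshot grows, because the snapshot still references those blocks.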

Mounting ZFS File Systems

ZFS reduces complexity and eases administration. For example, with traditional file systems, you must edit the /etc/vfstab file every time you add a new file system. ZFS has eliminated this requirement by automatically mounting and unmounting file systems according to the properties of the file system. You do not need to manage ZFS entries in the /etc/vfstab file.
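For example, mounting behavior is controlled entirely through the mountpoint property; the dataset name below is hypothetical:

# zfs set mountpoint=/export/home/alice tank/home/alice
# zfs get mountpoint,mounted tank/home/alice

Setting the property causes ZFS to unmount the file system from its old location and remount it at the new one, with no /etc/vfstab editing.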

For more information about mounting and sharing ZFS file systems, see Mounting ZFS File Systems.

Traditional Volume Management

As described in ZFS Pooled Storage, ZFS eliminates the need for a separate volume manager. ZFS operates on raw devices, so it is possible to create a storage pool composed of logical volumes, either software or hardware. However, this configuration is not recommended because ZFS works best when it uses raw physical devices. Using logical volumes might sacrifice performance, reliability, or both, and should be avoided.
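For instance, a pool built directly on whole physical disks avoids the extra volume-management layer; the device names below are placeholders:

# zpool create tank mirror c1t0d0 c2t0d0
# zpool status tank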

Solaris ACL Model Based on NFSv4

Previous versions of the Solaris OS supported an ACL implementation that was primarily based on the POSIX ACL draft specification. The POSIX-draft based ACLs are used to protect UFS files. A new Solaris ACL model that is based on the NFSv4 specification is used to protect ZFS files.
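For example, the NFSv4-based ACL on a ZFS file can be displayed with the -v option to ls, and individual ACL entries can be added with the chmod A+ syntax; the file and user names below are hypothetical:

# ls -v /tank/home/alice/file.1
# chmod A+user:bob:read_data:allow /tank/home/alice/file.1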

The main differences of the new Solaris ACL model are as follows:

For more information about using ACLs with ZFS files, see Chapter 7, Using ACLs and Attributes to Protect Oracle Solaris ZFS Files.