ZFS is the default root file system in the Oracle Solaris release. ZFS is a disk-based file system with the following features:
- Uses a pooled storage model where whole disks can be added to the pool so that all file systems use storage space from the pool.
- A ZFS file system is not tied to a specific disk slice or volume, so tasks that were previously required, such as repartitioning a disk or unmounting a file system to add disk space, are unnecessary.
- All file system operations are copy-on-write transactions, so the on-disk state is always valid. Every block is checksummed to prevent silent data corruption. In a replicated RAID-Z or mirrored configuration, ZFS detects corrupted data and uses another copy to repair it.
- A disk scrubbing feature reads all data to detect latent errors while the errors are still correctable. A scrub traverses the entire storage pool to read every data block, validates the data against its 256-bit checksum, and repairs the data, if necessary.
- ZFS is a 128-bit file system, which means support for 64-bit file offsets, unlimited links, directory entries, and so on.
- ZFS provides snapshots, which are read-only point-in-time copies of a file system, and clones, which are writable copies of snapshots.
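As a sketch, snapshots and clones are created with the zfs snapshot and zfs clone subcommands. The dataset and snapshot names below (tank/fs1, monday, tank/fs1-clone) are illustrative:

```shell
# zfs snapshot tank/fs1@monday
# zfs clone tank/fs1@monday tank/fs1-clone
# zfs list -t snapshot
```

The snapshot consumes no additional space when created; space accrues only as the active file system diverges from it.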
A ZFS storage pool and ZFS file system are created in two steps:
# zpool create tank mirror c1t0d0 c1t1d0
# zfs create tank/fs1
A ZFS file system is mounted automatically when created and when the system is rebooted by an SMF service. There is no need to edit the /etc/vfstab file manually. If you need to mount a ZFS file system manually, use syntax similar to the following:
# zfs mount tank/fs1
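The mount point itself is controlled by the mountpoint property rather than by /etc/vfstab. A hedged example, assuming the tank/fs1 file system created above and a hypothetical /export/fs1 path:

```shell
# zfs set mountpoint=/export/fs1 tank/fs1
# zfs get mountpoint,mounted tank/fs1
```

Changing the mountpoint property remounts the file system at the new path automatically.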
For more information about managing ZFS file systems, see Managing ZFS File Systems in Oracle Solaris 11.2.
See attributes(5) for a description of the following attributes:
ZFS does not have an fsck-like repair feature because the data is always consistent on disk. ZFS provides a pool scrubbing operation that can find and repair bad data. In addition, because hardware can fail, ZFS pool recovery features are also available.
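A pool scrub is started and monitored with the zpool subcommands below; the pool name tank follows the earlier example:

```shell
# zpool scrub tank
# zpool status -v tank
```

The zpool status output reports scrub progress and any checksum errors that were found and repaired.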
Use the zpool list and zfs list commands to identify ZFS space consumption. A limitation of using the du(1) command to determine ZFS file system sizes is that it also reports ZFS metadata space consumption. The df(1M) command does not account for space that is consumed by ZFS snapshots, clones, or quotas.
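For example, pool-level and dataset-level space can be compared as follows, again using the tank pool from the earlier example:

```shell
# zpool list tank
# zfs list -r -o name,used,available,referenced,mountpoint tank
```

The zpool list output reports raw pool capacity, while zfs list reports usable space per dataset after redundancy, reservations, and quotas are taken into account.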
A ZFS storage pool that is not used for booting should be created by using whole disks. When a ZFS storage pool is created by using whole disks, an EFI label is applied to the pool's disks. Due to a long-standing boot limitation, a ZFS root pool must be created with disks that contain a valid SMI (VTOC) label and a disk slice, usually slice 0.
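For illustration only (the device names are hypothetical, and root pools are normally created by the installer), a data pool is given whole disks while a root pool is created on an SMI-labeled slice 0:

```shell
# zpool create datapool c2t0d0 c2t1d0
# zpool create rpool c1t0d0s0
```

In the first command, ZFS applies an EFI label to each whole disk; in the second, the s0 suffix names slice 0 of a disk that already carries a valid SMI (VTOC) label.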