This file system enhancement is new in the Solaris Express 12/05 release.
This Solaris Express release includes ZFS, a new 128-bit file system. ZFS provides simple administration, transactional semantics, end-to-end data integrity, and immense scalability. ZFS is not an incremental improvement to existing technology. Rather, ZFS is a fundamentally new approach to data management.
ZFS uses a pooled-storage model that completely eliminates the concept of volumes. Thus, ZFS eliminates the associated problems of partition management, provisioning, and growing file systems. Thousands of file systems can all draw from a common storage pool. Each file system consumes only as much space as it actually needs. The combined I/O bandwidth of all devices in the pool is available to all file systems at all times.
All operations are “copy-on-write” transactions, so the on-disk state is always valid. Every block has a checksum, so silent data corruption is detected rather than passed to applications. In addition, data is self-healing in replicated configurations: if one copy is damaged, ZFS detects the damage and uses another copy to repair it.
For system administrators, the greatest improvement of ZFS over traditional file systems is the ease of administration.
Setting up a mirrored storage pool and file system takes a single command. For example:
# zpool create home mirror c0t1d0 c1t2d0
The above command creates a mirrored storage pool named home and a single file system named home. The file system is mounted at /home.
With ZFS, you can use whole disks instead of partitions to create the storage pool.
You can then create any number of file systems beneath /home. For example:
# zfs create home/user1
For more information, see the zpool(1M) and zfs(1M) man pages.
In addition, ZFS provides the following administration features:
Backup and restore capabilities
Device management support
Persistent snapshots and cloning features
Quotas that can be set for file systems
RBAC-based access control
Storage pool space reservations for file systems
Support for Solaris systems that have zones installed
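For example, the quota and reservation features listed above are managed with the zfs set command. A brief sketch (the dataset name is hypothetical):

```shell
# zfs set quota=10G home/user1
# zfs set reservation=5G home/user1
# zfs get quota,reservation home/user1
```

The quota caps the space home/user1 and its descendants can consume, while the reservation guarantees the file system a minimum amount of pool space.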
For more information, see the Solaris ZFS Administration Guide.
The following section describes recent improvements and changes to the ZFS command interface in the Solaris Express release.
Clearing device errors – You can use the zpool clear command to clear error counts associated with a device or the pool. Previously, error counts were cleared when a device in a pool was brought online with the zpool online command.
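For example (pool and device names are hypothetical):

```shell
# zpool clear home
# zpool clear home c0t1d0
```

The first form clears error counts for all devices in the pool; the second clears them for a single device.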
Compact NFSv4 ACL format – Three NFSv4 ACL formats are available: verbose, compact, and positional. The new compact and positional formats can be used to set and display ACLs. You can use the chmod command to set ACLs in any of the three formats. Use the ls -V command to display the compact and positional formats and the ls -v command to display the verbose format.
Double Parity RAID-Z (raidz2) – A replicated RAID-Z configuration can now have either single or double parity, which means that one or two device failures, respectively, can be sustained without any data loss. You can specify the raidz2 keyword for a double-parity RAID-Z configuration. Or, you can specify the raidz or raidz1 keyword for a single-parity RAID-Z configuration.
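For example, a double-parity pool might be created as follows; substitute the raidz keyword for single parity (pool and device names are hypothetical):

```shell
# zpool create tank raidz2 c1t0d0 c2t0d0 c3t0d0 c4t0d0
```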
Hot spares for ZFS storage pool devices – The ZFS hot spares feature enables you to identify disks that could be used to replace a failed or faulted device in one or more storage pools. Designating a device as a hot spare means that if an active device in the pool fails, the hot spare automatically replaces the failed device. Or, you can manually replace a device in a storage pool with a hot spare.
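For example, a pool with a designated hot spare might be set up as follows (pool and device names are hypothetical):

```shell
# zpool create tank mirror c1t0d0 c2t0d0 spare c3t0d0
# zpool add tank spare c4t0d0
```

The first command designates c3t0d0 as a spare at pool creation time; the second adds another spare to the existing pool.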
Replacing a ZFS File System With a ZFS Clone (zfs promote) – The zfs promote command enables you to replace an existing ZFS file system with a clone of that file system. This feature is helpful when you want to run tests on an alternative version of a file system and then make that alternative version the active file system.
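The typical sequence is to clone a snapshot, test the clone, and then promote it. A sketch (dataset and snapshot names are hypothetical):

```shell
# zfs snapshot tank/test@today
# zfs clone tank/test@today tank/testalt
# zfs promote tank/testalt
```

After the promotion, tank/testalt becomes the parent of the snapshot, and the original tank/test file system can then be renamed or destroyed.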
Recovering destroyed pools – The zpool import -D command enables you to recover pools that were previously destroyed with the zpool destroy command.
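For example (the pool name is hypothetical):

```shell
# zpool import -D
# zpool import -D home
```

Run without a pool argument, the command lists destroyed pools that are still recoverable; with an argument, it recovers the named pool.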
Temporarily take a device offline – You can use the zpool offline -t command to take a device offline temporarily. When the system is rebooted, the device is automatically returned to the ONLINE state.
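For example (pool and device names are hypothetical):

```shell
# zpool offline -t home c0t1d0
# zpool online home c0t1d0
```

The second command returns the device to service without waiting for a reboot.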
Upgrading ZFS Storage Pools (zpool upgrade) – You can upgrade your storage pools to a newer version to take advantage of the latest features by using the zpool upgrade command. In addition, the zpool status command has been modified to notify you when your pools are running older versions.
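For example:

```shell
# zpool upgrade -v
# zpool upgrade -a
```

The -v option lists the supported pool versions and their features; the -a option upgrades all pools on the system.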
ZFS backup and restore commands are renamed – The zfs backup and zfs restore commands are renamed to zfs send and zfs receive to more accurately describe their function: saving and restoring ZFS data stream representations.
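For example, a snapshot can be saved and restored in one pipeline (dataset and snapshot names are hypothetical):

```shell
# zfs snapshot tank/home@monday
# zfs send tank/home@monday | zfs receive tank/restored
```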
ZFS and zones improvements – On a Solaris system with zones installed, you can use the zoneadm clone feature to copy the data from an existing source ZFS zonepath to a target ZFS zonepath on your system. You cannot use the ZFS clone feature to clone the non-global zone. You must use the zoneadm clone command. For more information, see System Administration Guide: Virtualization Using the Solaris Operating System.
ZFS is integrated with Fault Manager – A ZFS diagnostic engine is included that is capable of diagnosing and reporting pool failures and device failures. Checksum, I/O, and device errors associated with pool or device failures are also reported. Diagnostic error information is written to the console and the /var/adm/messages file. In addition, detailed information about recovering from a reported error can be displayed by using the zpool status command.
For more information about these improvements and changes, see the Solaris ZFS Administration Guide.
The Solaris Express 1/06 release includes the ZFS web-based management tool, which enables you to perform much of the administration that you can do with the ZFS command line interface. You can perform the following administrative tasks with the ZFS Administration console:
Create a new storage pool.
Add capacity to an existing pool.
Move (export) a storage pool to another system.
Import a previously exported storage pool to make it available on another system.
View information about storage pools.
Create a file system.
Create a volume.
Take a snapshot of a file system or a volume.
Roll back a file system to a previous snapshot.
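The snapshot and rollback tasks above can also be performed from the command line. A sketch (dataset and snapshot names are hypothetical):

```shell
# zfs snapshot home/user1@before
# zfs rollback home/user1@before
```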
You can access the ZFS Administration console through a secure web browser at the following URL:
https://system-name:6789
If you type the appropriate URL and are unable to reach the ZFS Administration console, the server might not be started. To start the server, run the following command:
# /usr/sbin/smcwebserver start
If you want the server to run automatically when the system boots, run the following command:
# /usr/sbin/smcwebserver enable
The Solaris Zones partitioning technology supports ZFS components, such as adding ZFS file systems and storage pools into a zone.
For example, the file system resource type in the zonecfg command has been enhanced as follows:
zonecfg:myzone> add fs
zonecfg:myzone:fs> set type=zfs
zonecfg:myzone:fs> set dir=/export/share
zonecfg:myzone:fs> set special=tank/home
zonecfg:myzone:fs> end
For more information, see the zonecfg(1M) man page and the Solaris ZFS Administration Guide.
In this release, the following Solaris installation tool support is provided:
Custom Solaris Jumpstart – You cannot include ZFS file systems in a Jumpstart profile. However, you can run the following scripts from a ZFS storage pool to set up an install server or an install client:
setup_install_server
add_to_install_server
add_install_client
Solaris Live Upgrade – Preserves your original boot environment and carries over your ZFS storage pools into the new environment. Currently, ZFS cannot be used as a bootable root file system. Therefore, your existing ZFS file systems are not copied into the boot environment (BE).
Solaris Initial Install – ZFS file systems are not recognized during an initial installation. However, if you do not specify any of the disk devices that contain ZFS storage pools to be used for the installation, you should be able to recover your storage pools by using the zpool import command after the installation. For more information, see the zpool(1M) man page.
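For example, after the installation completes (the pool name is hypothetical):

```shell
# zpool import
# zpool import tank
```

The first command lists pools that are available for import; the second imports the named pool.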
As with most reinstallation scenarios, you should back up your ZFS files before proceeding with the initial installation option.
Solaris Upgrade – Your ZFS file systems and storage pools are preserved.
ZFS implements a new ACL model. Previous versions of the Solaris OS only supported an ACL model that was primarily based on the POSIX ACL draft specification. The POSIX-draft based ACLs are used to protect UFS files. A new model that is based on the NFSv4 specification is used to protect ZFS files.
The main features of the new ACL model are as follows:
Is based on the NFSv4 specification; the new ACLs are similar to NT-style ACLs.
Provides a more granular set of access privileges.
Uses the chmod and ls commands rather than the setfacl and getfacl commands to set and display ACLs.
Provides richer inheritance semantics for designating how access privileges are applied from a directory to its subdirectories, and so on.
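For example, an ACL entry might be added and displayed as follows (the user and file names are hypothetical):

```shell
# chmod A+user:lp:read_data/execute:allow file.1
# ls -v file.1
```

The A+ prefix prepends a new ACL entry to the file's ACL; ls -v displays the resulting ACL in verbose format.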
The recently revised chmod(1) man page adds many new examples that demonstrate usage with ZFS. The acl(5) man page has an overview of the new ACL model. In addition, the Solaris ZFS Administration Guide provides extensive examples of using ACLs to protect ZFS files.