Oracle Solaris Administration: ZFS File Systems


Components of a ZFS Storage Pool

The following sections provide detailed information about these storage pool components:

    Using Disks in a ZFS Storage Pool
    Using Slices in a ZFS Storage Pool
    Using Files in a ZFS Storage Pool

Using Disks in a ZFS Storage Pool

The most basic element of a storage pool is physical storage. Physical storage can be any block device of at least 128 MB in size. Typically, this device is a hard drive that is visible to the system in the /dev/dsk directory.

A storage device can be a whole disk (c1t0d0) or an individual slice (c0t0d0s7). The recommended mode of operation is to use an entire disk, in which case the disk does not require special formatting. ZFS formats the disk using an EFI label to contain a single, large slice. When used in this way, the partition table that is displayed by the format command appears similar to the following:

Current partition table (original):
Total disk sectors available: 286722878 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm                34      136.72GB          286722911    
  1 unassigned    wm                 0           0               0    
  2 unassigned    wm                 0           0               0    
  3 unassigned    wm                 0           0               0    
  4 unassigned    wm                 0           0               0    
  5 unassigned    wm                 0           0               0    
  6 unassigned    wm                 0           0               0    
  8   reserved    wm         286722912        8.00MB          286739295    

Review the following considerations when using whole disks in your ZFS storage pools:

Disks can be specified by using either the full path, such as /dev/dsk/c1t0d0, or a shorthand name that consists of the device name within the /dev/dsk directory, such as c1t0d0. For example, the following are valid disk names:

    c1t0d0
    /dev/dsk/c1t0d0
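
For example, a minimal sketch of creating a pool on a whole disk, assuming a spare disk c1t0d0 and a hypothetical pool name tank, looks like this:

# zpool create tank c1t0d0

The full path form, zpool create tank /dev/dsk/c1t0d0, is equivalent. When given the whole disk, ZFS writes the EFI label shown in the preceding partition table.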

Using Slices in a ZFS Storage Pool

Disks can be labeled with a traditional Solaris VTOC (SMI) label when you create a storage pool with a disk slice.

For a bootable ZFS root pool, the disks in the pool must contain slices and the disks must be labeled with an SMI label. The simplest configuration would be to put the entire disk capacity in slice 0 and use that slice for the root pool.

On a SPARC based system, a 72-GB disk has 68 GB of usable space located in slice 0, as shown in the following format output:

# format
.
.
.
Specify disk (enter its number): 4
selecting c1t1d0
partition> p
Current partition table (original):
Total disk cylinders available: 14087 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
  1 unassigned    wm       0                0         (0/0/0)             0
  2     backup    wm       0 - 14086       68.35GB    (14087/0/0) 143349312
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6 unassigned    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0

On an x86 based system, a 72-GB disk has 68 GB of usable disk space located in slice 0, as shown in the following format output. A small amount of boot information is contained in slice 8. Slice 8 requires no administration and cannot be changed.

# format
.
.
.
selecting c1t0d0
partition> p
Current partition table (original):
Total disk cylinders available: 49779 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       1 - 49778       68.36GB    (49778/0/0) 143360640
  1 unassigned    wu       0                0         (0/0/0)             0
  2     backup    wm       0 - 49778       68.36GB    (49779/0/0) 143363520
  3 unassigned    wu       0                0         (0/0/0)             0
  4 unassigned    wu       0                0         (0/0/0)             0
  5 unassigned    wu       0                0         (0/0/0)             0
  6 unassigned    wu       0                0         (0/0/0)             0
  7 unassigned    wu       0                0         (0/0/0)             0
  8       boot    wu       0 -     0        1.41MB    (1/0/0)          2880
  9 unassigned    wu       0                0         (0/0/0)             0

An fdisk partition also exists on Solaris x86 systems. An fdisk partition is represented by a /dev/dsk/cN[tN]dNpN device name and acts as a container for the disk's available slices. Do not use a cN[tN]dNpN device for a ZFS storage pool component because this configuration is neither tested nor supported.
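
As an illustration of using a slice rather than a whole disk (the pool name tank and the slice c1t1d0s0 are hypothetical), the slice is passed to zpool create by its sN suffix:

# zpool create tank c1t1d0s0

A pool created this way relies on the VTOC (SMI) label described above. For a bootable root pool, the installer normally creates the pool on slice 0 for you.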

Using Files in a ZFS Storage Pool

ZFS also allows you to use files as virtual devices in your storage pool. This feature is aimed primarily at testing and simple experimentation, not at production use.

However, files can be quite useful when you are first trying out ZFS or experimenting with more complicated configurations when not enough physical devices are present. All files must be specified as complete paths and must be at least 64 MB in size.
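
As a quick way to experiment, a minimal sketch (the directory /export/zfs and the pool name testpool are placeholders) uses mkfile to create backing files at the 64 MB minimum and then builds a pool on them:

# mkfile 64m /export/zfs/file1 /export/zfs/file2
# zpool create testpool /export/zfs/file1 /export/zfs/file2

Destroying such a test pool with zpool destroy testpool does not remove the backing files.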

Considerations for ZFS Storage Pools

Review the following considerations when creating and managing ZFS storage pools.