
Managing Your ZFS Root Pool

The following sections provide information about installing and updating a ZFS root pool and configuring a mirrored root pool.

Installing a ZFS Root Pool

The Oracle Solaris 11 Live CD installation method installs a default ZFS root pool on a single disk. With the Oracle Solaris 11 automated installation (AI) method, you can create an AI manifest to identify the disk or mirrored disks for the ZFS root pool.

The AI installer provides the flexibility of installing a ZFS root pool on the default boot disk or on a target disk that you identify. You can specify the logical device, such as c1t0d0, or the physical device path. In addition, you can use the MPxIO identifier or the device ID for the device to be installed.
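
For example, before writing the AI manifest, you can list the available disks and their physical device paths by running the format utility and exiting without selecting a disk (press Ctrl-C at the prompt). The disk name and path shown here are only an illustration:

# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /pci@780/pci@0/pci@9/scsi@0/sd@0,0
Specify disk (enter its number): ^C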

After the installation, review your ZFS storage pool and file system information, which can vary by installation type and customizations. For example:

# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c8t0d0  ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0
# zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rpool                    11.8G  55.1G  4.58M  /rpool
rpool/ROOT               3.57G  55.1G    31K  legacy
rpool/ROOT/solaris       3.57G  55.1G  3.40G  /
rpool/ROOT/solaris/var    165M  55.1G   163M  /var
rpool/VARSHARE           42.5K  55.1G  42.5K  /var/share
rpool/dump               6.19G  55.3G  6.00G  -
rpool/export               63K  55.1G    32K  /export
rpool/export/home          31K  55.1G    31K  /export/home
rpool/swap               2.06G  55.2G  2.00G  -

Review your ZFS BE information. For example:

# beadm list
BE      Active Mountpoint Space Policy Created          
--      ------ ---------- ----- ------ -------          
solaris NR     /          3.75G static 2012-07-20 12:10 

In the preceding output, the Active field indicates whether the BE is active now (N), active on reboot (R), or both (NR).
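
For example, on a system with a second BE (named solaris-1 here purely for illustration), the current BE shows N and a newly activated BE shows R until the system is rebooted. The output below is illustrative only; sizes and dates will vary:

# beadm create solaris-1
# beadm activate solaris-1
# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
solaris   N      /          3.75G  static 2012-07-20 12:10
solaris-1 R      -          46.95M static 2012-07-20 12:30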

How to Update Your ZFS Boot Environment

The default ZFS boot environment (BE) is named solaris. You can identify your BEs by using the beadm list command. For example:

# beadm list
BE      Active Mountpoint Space Policy Created          
--      ------ ---------- ----- ------ -------          
solaris NR     /          3.82G static 2012-07-19 13:44

In the above output, NR means the BE is active now and will be the active BE on reboot.

You can use the pkg update command to update your ZFS boot environment. When you update your ZFS BE in this way, a new BE is created and activated automatically, unless the updates to the existing BE are very minimal.
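
To preview an update without applying it, and to check whether a new BE would be created, you can do a dry run first. This is a sketch only; the reported package counts depend on your system:

# pkg update -nv
...
       Create boot environment: Yes
...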

  1. Update your ZFS BE.
    # pkg update
                                           
    
    DOWNLOAD                                  PKGS       FILES    XFER (MB)
    Completed                              707/707 10529/10529  194.9/194.9 
    .
    .
    .

    A new BE, solaris-1, is created automatically and activated.

    You can also create and activate a backup BE outside of the update process.

    # beadm create solaris-1
    # beadm activate solaris-1
  2. Reboot the system to complete the BE activation. Then, confirm your BE status.
    # init 6
    .
    .
    .
    # beadm list
    BE        Active Mountpoint Space  Policy Created          
    --        ------ ---------- -----  ------ -------          
    solaris   -      -          46.95M static 2012-07-20 10:25 
    solaris-1 NR     /          3.82G  static 2012-07-19 14:45 
  3. If an error occurs when booting the new BE, activate and boot back to the previous BE.
    # beadm activate solaris
    # init 6

How to Mount an Alternate BE

You might need to copy or access a file from another BE for recovery purposes.

  1. Become an administrator.
  2. Mount the alternate BE.
    # beadm mount solaris-1 /mnt
  3. Access the BE.
    # ls /mnt
    bin        export     media      pkg        rpool      tmp
    boot       home       mine       platform   sbin       usr
    dev        import     mnt        proc       scde       var
    devices    java       net        project    shared     
    doe        kernel     nfs4       re         src        
    etc        lib        opt        root       system     
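
    For example, while the alternate BE is mounted, you can copy a file out of it for comparison or recovery. The file name shown here is only an illustration:

    # cp /mnt/etc/system /var/tmp/system.solaris-1
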
  4. Unmount the alternate BE when you're finished with it.
    # beadm umount solaris-1

How to Configure a Mirrored Root Pool (SPARC or x86/VTOC)

If you do not configure a mirrored root pool during an automatic installation, you can easily configure a mirrored root pool after the installation.

For information about replacing a disk in a root pool, see How to Replace a Disk in a ZFS Root Pool (SPARC or x86/VTOC).

  1. Display your current root pool status.
    # zpool status rpool
      pool: rpool
     state: ONLINE
     scrub: none requested
    config:
    
            NAME        STATE     READ WRITE CKSUM
            rpool       ONLINE       0     0     0
              c2t0d0s0  ONLINE       0     0     0
    
    errors: No known data errors
  2. Prepare a second disk for attachment to the root pool, if necessary.
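
    For example, if the existing root pool disk has an SMI (VTOC) label, one common way to prepare the new disk is to copy the label of the current root pool disk to it. This is a sketch only; the device names match the example in the next step, and you should confirm the label with the format utility (use format -e to apply an SMI label first if the disk carries an EFI label):

    # prtvtoc /dev/rdsk/c2t0d0s2 | fmthard -s - /dev/rdsk/c2t1d0s2
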
  3. Attach a second disk to configure a mirrored root pool.
    # zpool attach rpool c2t0d0s0 c2t1d0s0
    Make sure to wait until resilver is done before rebooting.

    The correct disk labeling and the boot blocks are applied automatically.

  4. View the root pool status to confirm that resilvering is complete.
    # zpool status rpool
      pool: rpool
     state: DEGRADED
    status: One or more devices is currently being resilvered.  The pool will
            continue to function in a degraded state.
    action: Wait for the resilver to complete.
            Run 'zpool status -v' to see device specific details.
      scan: resilver in progress since Fri Jul 20 13:39:53 2012
        938M scanned out of 11.7G at 46.9M/s, 0h3m to go
        938M resilvered, 7.86% done
    config:
    
            NAME          STATE     READ WRITE CKSUM
            rpool         DEGRADED     0     0     0
              mirror-0    DEGRADED     0     0     0
                c2t0d0s0  ONLINE       0     0     0
                c2t1d0s0  DEGRADED     0     0     0  (resilvering)

    In the above output, the resilvering process is not complete. Resilvering is complete when you see messages similar to the following:

    resilvered 11.6G in 0h5m with 0 errors on Fri Jul 20 13:57:25 2012
  5. If you attached a larger disk, set the pool's autoexpand property to expand the pool's size.

    Determine the existing rpool pool size:

    # zpool list rpool
    NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
    rpool  29.8G   152K  29.7G   0%  1.00x  ONLINE  -
    # zpool set autoexpand=on rpool

    Review the expanded rpool pool size:

    # zpool list rpool
    NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
    rpool  279G   146K  279G   0%  1.00x  ONLINE  -
  6. Verify that you can boot successfully from the new disk.
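
    For example, on a SPARC based system, you can boot explicitly from the second disk by specifying its device path at the ok prompt. The path shown here is only an illustration; use the path of your own second disk:

    ok boot /pci@1f,700000/scsi@2/disk@1,0

    On an x86 based system, select the second disk from the BIOS or firmware boot menu.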

How to Configure a Mirrored Root Pool (x86/EFI (GPT))

In most cases, the Oracle Solaris 11.1 release installs an EFI (GPT) label on an x86 based system.

If you do not configure a mirrored root pool during an automatic installation, you can easily configure a mirrored root pool after the installation.

For information about replacing a disk in a root pool, see How to Replace a Disk in a ZFS Root Pool (SPARC or x86/VTOC).

  1. Display your current root pool status.
    # zpool status rpool
     pool:  rpool
     state: ONLINE
      scan: none requested
    config:
    
            NAME      STATE     READ WRITE CKSUM
            rpool     ONLINE       0     0     0
              c2t0d0  ONLINE       0     0     0
    
    errors: No known data errors
  2. Attach a second disk to configure a mirrored root pool.
    # zpool attach rpool c2t0d0 c2t1d0
    Make sure to wait until resilver is done before rebooting.

    The correct disk labeling and the boot blocks are applied automatically.

    If you have customized partitions on your root pool disk, then you might need syntax similar to the following:

    # zpool attach rpool c2t0d0s0 c2t1d0
  3. View the root pool status to confirm that resilvering is complete.
    # zpool status rpool
      pool: rpool
     state: DEGRADED
    status: One or more devices is currently being resilvered.  The pool will
            continue to function in a degraded state.
    action: Wait for the resilver to complete.
            Run 'zpool status -v' to see device specific details.
      scan: resilver in progress since Fri Jul 20 13:52:05 2012
        809M scanned out of 11.6G at 44.9M/s, 0h4m to go
        776M resilvered, 6.82% done
    config:
    
            NAME        STATE     READ WRITE CKSUM
            rpool       DEGRADED     0     0     0
              mirror-0  DEGRADED     0     0     0
                c8t0d0  ONLINE       0     0     0
                c8t1d0  DEGRADED     0     0     0  (resilvering)
    
    errors: No known data errors

    In the above output, the resilvering process is not complete. Resilvering is complete when you see messages similar to the following:

    resilvered 11.6G in 0h5m with 0 errors on Fri Jul 20 13:57:25 2012
  4. If you attached a larger disk, set the pool's autoexpand property to expand the pool's size.

    Determine the existing rpool pool size:

    # zpool list rpool
    NAME   SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
    rpool  29.8G   152K  29.7G   0%  1.00x  ONLINE  -
    # zpool set autoexpand=on rpool

    Review the expanded rpool pool size:

    # zpool list rpool
    NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
    rpool  279G   146K  279G   0%  1.00x  ONLINE  -
  5. Verify that you can boot successfully from the new disk.

How to Replace a Disk in a ZFS Root Pool (SPARC or x86/VTOC)

You might need to replace a disk in the root pool, for example, because the disk is failing or because you want to migrate to a larger disk.

In a mirrored root pool configuration, you might be able to attempt a disk replacement without having to boot from alternate media. You can replace a failed disk by using the zpool replace command, or, if you have an additional disk, you can use the zpool attach command. See the steps below for an example of attaching an additional disk and detaching a root pool disk.

Systems with SATA disks require that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:

# zpool offline rpool c1t0d0s0
# cfgadm -c unconfigure c1::dsk/c1t0d0
<Physically remove failed disk c1t0d0>
<Physically insert replacement disk c1t0d0>
# cfgadm -c configure c1::dsk/c1t0d0
<Confirm that the new disk has an SMI label and a slice 0>
# zpool online rpool c1t0d0s0
# zpool replace rpool c1t0d0s0
# zpool status rpool
<Let disk resilver before installing the boot blocks>
# bootadm install-bootloader

On some hardware, you do not have to online or reconfigure the replacement disk after it is inserted.

  1. Physically connect the replacement disk.
  2. Prepare a second disk for attachment to the root pool, if necessary.
  3. Attach the new disk to the root pool.

    For example:

    # zpool attach rpool c2t0d0s0 c2t1d0s0
    Make sure to wait until resilver is done before rebooting.

    The correct disk labeling and the boot blocks are applied automatically.

  4. Confirm the root pool status.

    For example:

    # zpool status rpool
      pool: rpool
     state: ONLINE
     scan: resilvered 11.7G in 0h5m with 0 errors on Fri Jul 20 13:45:37 2012
    config:
    
            NAME          STATE     READ WRITE CKSUM
            rpool         ONLINE       0     0     0
              mirror-0    ONLINE       0     0     0
                c2t0d0s0  ONLINE       0     0     0
                c2t1d0s0  ONLINE       0     0     0
    
    errors: No known data errors
  5. Verify that you can boot from the new disk after resilvering is complete.

    For example, on a SPARC based system:

    ok boot /pci@1f,700000/scsi@2/disk@1,0

    Identify the boot device pathnames of the current and new disks so that you can test booting from the replacement disk and, if the replacement disk fails, manually boot from the existing disk. In this example, the current root pool disk (c2t0d0s0) is:

    /pci@1f,700000/scsi@2/disk@0,0

    In this example, the replacement boot disk (c2t1d0s0) is:

    boot /pci@1f,700000/scsi@2/disk@1,0
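
    If you are not sure which physical device path corresponds to a disk name, you can usually read it from the /dev/dsk symbolic link. This is a sketch only; the link target varies by system, and the leaf node in /devices (sd@1,0:a here) can differ slightly from the disk@1,0 form that you type at the ok prompt:

    # ls -l /dev/dsk/c2t1d0s0
    lrwxrwxrwx ... /dev/dsk/c2t1d0s0 -> ../../devices/pci@1f,700000/scsi@2/sd@1,0:a
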
  6. If the system boots from the new disk, detach the old disk.

    For example:

    # zpool detach rpool c2t0d0s0
  7. If you attached a larger disk, set the pool's autoexpand property to expand the pool's size.
    # zpool set autoexpand=on rpool

    Or, expand the device:

    # zpool online -e c2t1d0s0
  8. Set up the system to boot automatically from the new disk.
    • SPARC: Set up the system to boot automatically from the new disk, either by using the eeprom command or the setenv command from the boot PROM.

    • x86: Reconfigure the system BIOS.
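
    For example, on a SPARC based system, a minimal sketch of both methods, using the replacement disk path from step 5 (your device path will differ):

    # eeprom boot-device=/pci@1f,700000/scsi@2/disk@1,0

    Or, from the boot PROM:

    ok setenv boot-device /pci@1f,700000/scsi@2/disk@1,0
    ok printenv boot-device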

How to Replace a Disk in a ZFS Root Pool (SPARC or x86/EFI (GPT))

In most cases, the Oracle Solaris 11.1 release installs an EFI (GPT) label on an x86 based system.

You might need to replace a disk in the root pool, for example, because the disk is failing or because you want to migrate to a larger disk.

In a mirrored root pool configuration, you might be able to attempt a disk replacement without having to boot from alternate media. You can replace a failed disk by using the zpool replace command, or, if you have an additional disk, you can use the zpool attach command. See the steps below for an example of attaching an additional disk and detaching a root pool disk.

Systems with SATA disks require that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:

# zpool offline rpool c1t0d0
# cfgadm -c unconfigure c1::dsk/c1t0d0
<Physically remove failed disk c1t0d0>
<Physically insert replacement disk c1t0d0>
# cfgadm -c configure c1::dsk/c1t0d0
# zpool online rpool c1t0d0
# zpool replace rpool c1t0d0
# zpool status rpool
<Let disk resilver before installing the boot blocks>
x86# bootadm install-bootloader

On some hardware, you do not have to online or reconfigure the replacement disk after it is inserted.

  1. Physically connect the replacement disk.
  2. Attach the new disk to the root pool.

    For example:

    # zpool attach rpool c2t0d0 c2t1d0
    Make sure to wait until resilver is done before rebooting.

    The correct disk labeling and the boot blocks are applied automatically.

  3. Confirm the root pool status.

    For example:

    # zpool status rpool
      pool: rpool
     state: ONLINE
      scan: resilvered 11.6G in 0h5m with 0 errors on Fri Jul 20 12:06:07 2012
    config:
    
            NAME        STATE     READ WRITE CKSUM
            rpool       ONLINE       0     0     0
              mirror-0  ONLINE       0     0     0
                c2t0d0  ONLINE       0     0     0
                c2t1d0  ONLINE       0     0     0
    
    errors: No known data errors
  4. Verify that you can boot from the new disk after resilvering is complete.
  5. If the system boots from the new disk, detach the old disk.

    For example:

    # zpool detach rpool c2t0d0
  6. If you attached a larger disk, set the pool's autoexpand property to expand the pool's size.
    # zpool set autoexpand=on rpool

    Or, expand the device:

    # zpool online -e c2t1d0
  7. Set up the system to boot automatically from the new disk.

    Reconfigure the system BIOS.

How to Create a BE in Another Root Pool (SPARC or x86/VTOC)

If you want to re-create your existing BE in another root pool, follow the steps below. You can modify the steps based on whether you want two root pools with similar BEs that have independent swap and dump devices or whether you just want a BE in another root pool that shares the swap and dump devices.

After you activate and boot from the new BE in the second root pool, it will have no information about the previous BE in the first root pool. If you want to boot back to the original BE, you will need to boot the system manually from the original root pool's boot disk.

  1. Create a second root pool with an SMI (VTOC)-labeled disk. For example:
    # zpool create rpool2 c4t2d0s0
  2. Create the new BE in the second root pool. For example:
    # beadm create -p rpool2 solaris2
  3. Set the bootfs property on the second root pool. For example:
    # zpool set bootfs=rpool2/ROOT/solaris2 rpool2
  4. Activate the new BE. For example:
    # beadm activate solaris2
  5. Boot from the new BE. You must boot specifically from the second root pool's boot device.
    ok boot disk2

    Your system should be running under the new BE.

  6. Re-create the swap volume. For example:
    # zfs create -V 4g rpool2/swap
  7. Update the /etc/vfstab entry for the new swap device. For example:
    /dev/zvol/dsk/rpool2/swap       -               -               swap -     no      -
  8. Re-create the dump volume. For example:
    # zfs create -V 4g rpool2/dump
  9. Reset the dump device. For example:
    # dumpadm -d /dev/zvol/dsk/rpool2/dump
  10. Reset your default boot device to boot from the second root pool's boot disk.
    • SPARC – Set up the system to boot automatically from the new disk, either by using the eeprom command or the setenv command from the boot PROM.

    • x86 – Reconfigure the system BIOS.

  11. Reboot to clear the original root pool's swap and dump devices.
    # init 6
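
    After the reboot, you can confirm that the system is using the swap and dump devices in the second root pool. A minimal check, assuming the volume names created in the previous steps:

    # swap -l
    # dumpadm

    The swap -l output should list /dev/zvol/dsk/rpool2/swap, and the dumpadm output should report /dev/zvol/dsk/rpool2/dump as the dump device.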

How to Create a BE in Another Root Pool (SPARC or x86/EFI (GPT))

In most cases, the Oracle Solaris 11.1 release installs an EFI (GPT) label on an x86 based system.

If you want to re-create your existing BE in another root pool, follow the steps below. You can modify the steps based on whether you want two root pools with similar BEs that have independent swap and dump devices or whether you just want a BE in another root pool that shares the swap and dump devices.

After you activate and boot from the new BE in the second root pool, it will have no information about the previous BE in the first root pool. If you want to boot back to the original BE, you will need to boot the system manually from the original root pool's boot disk.

  1. Create the alternate root pool.
    # zpool create -B rpool2 c2t2d0

    Or, create a mirrored alternate root pool. For example:

    # zpool create -B rpool2 mirror c2t2d0 c2t3d0
  2. Create the new BE in the second root pool. For example:
    # beadm create -p rpool2 solaris2
  3. Apply the boot information to the second root pool. For example:
    # bootadm install-bootloader -P rpool2
  4. Set the bootfs property on the second root pool. For example:
    # zpool set bootfs=rpool2/ROOT/solaris2 rpool2
  5. Activate the new BE. For example:
    # beadm activate solaris2
  6. Boot from the new BE.
    • SPARC – Set up the system to boot automatically from the new disk, either by using the eeprom command or the setenv command from the boot PROM.

    • x86 – Reconfigure the system BIOS.

    Your system should be running under the new BE.

  7. Re-create the swap volume. For example:
    # zfs create -V 4g rpool2/swap
  8. Update the /etc/vfstab entry for the new swap device. For example:
    /dev/zvol/dsk/rpool2/swap       -               -               swap -     no      -
  9. Re-create the dump volume. For example:
    # zfs create -V 4g rpool2/dump
  10. Reset the dump device. For example:
    # dumpadm -d /dev/zvol/dsk/rpool2/dump
  11. Reboot to clear the original root pool's swap and dump devices.
    # init 6