Recovering the ZFS Root Pool or Root Pool Snapshots

The following sections describe how to perform the following tasks:

  • How to Replace a Disk in the ZFS Root Pool
  • How to Create Root Pool Snapshots
  • How to Re-create a ZFS Root Pool and Restore Root Pool Snapshots
  • How to Roll Back Root Pool Snapshots From a Failsafe Boot

How to Replace a Disk in the ZFS Root Pool

You might need to replace a disk in the root pool for the following reasons:

  • The root pool is too small and you want to replace it with a larger disk.
  • The root pool disk is failing. In a non-redundant pool, if the disk is failing such that the system cannot boot, you must boot from alternate media, such as a CD or the network, before you replace the root pool disk.

In a mirrored root pool configuration, you can attempt a disk replacement without booting from alternate media. You can replace a failed disk by using the zpool replace command. Or, if you have an additional disk, you can use the zpool attach command. See the procedure in this section for an example of attaching an additional disk and detaching a root pool disk.
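
For example, in a mirrored root pool, you might replace a failed disk with a new disk in a different slot by using syntax similar to the following. The device names here are only placeholders; substitute the failed disk and the new disk on your system:

# zpool replace rpool c1t0d0s0 c4t0d0s0
# zpool status rpool
<Let the new disk resilver, then install the boot blocks on it, as in the example below>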

Some hardware requires that you take a disk offline and unconfigure it before attempting the zpool replace operation to replace a failed disk. For example:

# zpool offline rpool c1t0d0s0
# cfgadm -c unconfigure c1::dsk/c1t0d0
<Physically remove failed disk c1t0d0>
<Physically insert replacement disk c1t0d0>
# cfgadm -c configure c1::dsk/c1t0d0
# zpool replace rpool c1t0d0s0
# zpool online rpool c1t0d0s0
# zpool status rpool
<Let disk resilver before installing the boot blocks>
SPARC# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0
x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0

With some hardware, you do not have to bring the replacement disk online or reconfigure it after it is inserted.

You must identify the boot device path names of the current disk and the new disk so that you can test booting from the replacement disk and can manually boot from the existing disk if the replacement disk fails to boot. In the example in the following procedure, the path name for the current root pool disk (c1t10d0s0) is:

/pci@8,700000/pci@3/scsi@5/sd@a,0

The path name for the replacement boot disk (c1t9d0s0) is:

/pci@8,700000/pci@3/scsi@5/sd@9,0
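
If you are unsure of a disk's physical device path, you can usually read it from the /dev/dsk entry, which is a symbolic link into the /devices tree. The following is only a sketch with abbreviated output; the link target on your system will differ. The boot device path is the portion after /devices, without the trailing slice suffix (:a):

# ls -l /dev/dsk/c1t9d0s0
lrwxrwxrwx   1 root     root     ... /dev/dsk/c1t9d0s0 -> ../../devices/pci@8,700000/pci@3/scsi@5/sd@9,0:a
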
  1. Physically connect the replacement (or new) disk.
  2. Prepare the replacement disk for the root pool, if necessary.
  3. Attach the new disk to the root pool.

    For example:

    # zpool attach rpool c1t10d0s0 c1t9d0s0
  4. Confirm the root pool status.

    For example:

    # zpool status rpool
      pool: rpool
     state: ONLINE
    status: One or more devices is currently being resilvered.  The pool will
            continue to function, possibly in a degraded state.
    action: Wait for the resilver to complete.
     scrub: resilver in progress, 25.47% done, 0h4m to go
    config:
    
            NAME           STATE     READ WRITE CKSUM
            rpool          ONLINE       0     0     0
              mirror-0     ONLINE       0     0     0
                c1t10d0s0  ONLINE       0     0     0
                c1t9d0s0   ONLINE       0     0     0
    
    errors: No known data errors
  5. Verify that you can boot from the new disk.

    For example, on a SPARC based system, you would use syntax similar to the following:

    ok boot /pci@8,700000/pci@3/scsi@5/sd@9,0
  6. If the system boots from the new disk, detach the old disk.

    For example:

    # zpool detach rpool c1t10d0s0
  7. Set up the system to boot automatically from the new disk by resetting the default boot device.
    • SPARC - Use the eeprom command or the setenv command from the SPARC boot PROM, as shown in the example after this procedure.

    • x86 - Reconfigure the system BIOS.
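
For example, on a SPARC based system, you might make the replacement disk the default boot device by setting the boot-device parameter to the device path identified earlier in this procedure (adjust the path for your system):

# eeprom boot-device=/pci@8,700000/pci@3/scsi@5/sd@9,0

Or, from the ok prompt:

ok setenv boot-device /pci@8,700000/pci@3/scsi@5/sd@9,0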

How to Create Root Pool Snapshots

You can create root pool snapshots for recovery purposes. The best way to create root pool snapshots is to perform a recursive snapshot of the root pool.

The following procedure creates a recursive root pool snapshot and stores the snapshot as a file and as snapshots in a pool on a remote system. If a root pool fails, the remote dataset can be mounted by using NFS, and the snapshot file can be received into the re-created pool. You can also store root pool snapshots as the actual snapshots in a pool on a remote system. Sending and receiving the snapshots from a remote system is a bit more complicated because you must configure ssh or use rsh while the system to be repaired is booted from the Oracle Solaris OS miniroot.

Validating remotely stored snapshots, whether they are stored as files or as snapshots, is an important step in root pool recovery. With either method, snapshots should be re-created on a routine basis, such as when the pool configuration changes or when the Solaris OS is upgraded.
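
For example, after a snapshot file has been stored on the remote system (step 4 of the following procedure), one way to spot-check it is a dry-run receive on the remote system. This is only a sketch; the scratch dataset name tank/check is arbitrary, and with the -n option nothing is actually received:

remote# zfs receive -nv tank/check < /rpool/snaps/rpool.snap1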

In the following procedure, the system is booted from the s10zfsBE boot environment.

  1. Create a pool and file system on a remote system to store the snapshots.

    For example:

    remote# zfs create rpool/snaps
  2. Share the file system with the local system.

    For example:

    remote# zfs set sharenfs='rw=local-system,root=local-system' rpool/snaps
    remote# share
    -@rpool/snaps   /rpool/snaps   sec=sys,rw=local-system,root=local-system   "" 
  3. Create a recursive snapshot of the root pool.
    local# zfs snapshot -r rpool@snap1
    local# zfs list -r rpool
    NAME                        USED  AVAIL  REFER  MOUNTPOINT
    rpool                      7.84G  59.1G   109K  /rpool
    rpool@snap1                  21K      -   106K  -
    rpool/ROOT                 4.78G  59.1G    31K  legacy
    rpool/ROOT@snap1               0      -    31K  -
    rpool/ROOT/s10zfsBE        4.78G  59.1G  4.76G  /
    rpool/ROOT/s10zfsBE@snap1  15.6M      -  4.75G  -
    rpool/dump                 1.00G  59.1G  1.00G  -
    rpool/dump@snap1             16K      -  1.00G  -
    rpool/export                 99K  59.1G    32K  /export
    rpool/export@snap1           18K      -    32K  -
    rpool/export/home            49K  59.1G    31K  /export/home
    rpool/export/home@snap1      18K      -    31K  -
    rpool/swap                 2.06G  61.2G    16K  -
    rpool/swap@snap1               0      -    16K  -
  4. Send the root pool snapshots to the remote system.

    For example, to send the root pool snapshots to a remote pool as a file, use syntax similar to the following:

    local# zfs send -Rv rpool@snap1 > /net/remote-system/rpool/snaps/rpool.snap1
    sending from @ to rpool@snap1
    sending from @ to rpool/ROOT@snap1
    sending from @ to rpool/ROOT/s10zfsBE@snap1
    sending from @ to rpool/dump@snap1
    sending from @ to rpool/export@snap1
    sending from @ to rpool/export/home@snap1
    sending from @ to rpool/swap@snap1

    To send the root pool snapshots to a remote pool as snapshots, use syntax similar to the following:

    local# zfs send -Rv rpool@snap1 | ssh remote-system zfs receive -Fd -o canmount=off tank/snaps
    sending from @ to rpool@snap1
    sending from @ to rpool/ROOT@snap1
    sending from @ to rpool/ROOT/s10zfsBE@snap1
    sending from @ to rpool/dump@snap1
    sending from @ to rpool/export@snap1
    sending from @ to rpool/export/home@snap1
    sending from @ to rpool/swap@snap1
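
If you send the snapshots to a remote pool as snapshots, you might confirm that they arrived by listing the received datasets and snapshots on the remote system. For example, using the pool and dataset names from this procedure:

remote# zfs list -r tank/snaps
remote# zfs list -r -t snapshot tank/snaps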

How to Re-create a ZFS Root Pool and Restore Root Pool Snapshots

In this procedure, assume the following conditions:

  • The ZFS root pool cannot be recovered.
  • The ZFS root pool snapshots are stored on a remote system and are shared over NFS.
  • All the steps are performed on the local system.

  1. Boot from an installation DVD or the network.
    • SPARC - Select one of the following boot methods:

      ok boot net -s
      ok boot cdrom -s

      If you don't use the -s option, you'll need to exit the installation program.

    • x86 – Select the option for booting from the DVD or the network. Then, exit the installation program.

  2. Mount the remote snapshot file system if you have sent the root pool snapshots as a file to the remote system.

    For example:

    # mount -F nfs remote-system:/rpool/snaps /mnt

    If your network services are not configured, you might need to specify the remote-system's IP address.

  3. If the root pool disk is replaced and does not contain a disk label that is usable by ZFS, you must relabel the disk.

    For more information about relabeling the disk, go to the following site:

    http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

  4. Re-create the root pool.

    For example:

    # zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache rpool c1t1d0s0
  5. Restore the root pool snapshots.

    This step might take some time. For example:

    # cat /mnt/rpool.snap1 | zfs receive -Fdu rpool

    Using the -u option means that the restored file systems are not mounted when the zfs receive operation completes.

    To restore the actual root pool snapshots that are stored in a pool on a remote system, use syntax similar to the following:

    # rsh remote-system zfs send -R tank/snaps/rpool@snap1 | zfs receive -F rpool
  6. Verify that the root pool datasets are restored.

    For example:

    # zfs list
  7. Set the bootfs property on the root pool BE.

    For example:

    # zpool set bootfs=rpool/ROOT/s10zfsBE rpool
  8. Install the boot blocks on the new disk.
    • SPARC:

      # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
    • x86:

      # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
  9. Reboot the system.
    # init 6
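
After the system reboots, you might confirm that the pool's bootfs property points at the restored boot environment and that the root pool datasets look as expected. For example, using the names from this procedure:

# zpool get bootfs rpool
# zfs list -r rpool/ROOT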

How to Roll Back Root Pool Snapshots From a Failsafe Boot

This procedure assumes that existing root pool snapshots are available. In the example, they are available on the local system.

# zfs snapshot -r rpool@snap1
# zfs list -r rpool
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      7.84G  59.1G   109K  /rpool
rpool@snap1                  21K      -   106K  -
rpool/ROOT                 4.78G  59.1G    31K  legacy
rpool/ROOT@snap1               0      -    31K  -
rpool/ROOT/s10zfsBE        4.78G  59.1G  4.76G  /
rpool/ROOT/s10zfsBE@snap1  15.6M      -  4.75G  -
rpool/dump                 1.00G  59.1G  1.00G  -
rpool/dump@snap1             16K      -  1.00G  -
rpool/export                 99K  59.1G    32K  /export
rpool/export@snap1           18K      -    32K  -
rpool/export/home            49K  59.1G    31K  /export/home
rpool/export/home@snap1      18K      -    31K  -
rpool/swap                 2.06G  61.2G    16K  -
rpool/swap@snap1               0      -    16K  -
  1. Shut down the system and boot failsafe mode.
    ok boot -F failsafe
    ROOT/s10zfsBE was found on rpool.
    Do you wish to have it mounted read-write on /a? [y,n,?] y
    mounting rpool on /a
    
    Starting shell.
  2. Roll back each root pool snapshot.
    # zfs rollback rpool@snap1
    # zfs rollback rpool/ROOT@snap1
    # zfs rollback rpool/ROOT/s10zfsBE@snap1
  3. Reboot to multiuser mode.
    # init 6
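
After the system reboots, you can verify that the rollback took effect. The snapshots themselves are retained after a rollback, so you might list the root pool datasets and their snapshots and then spot-check the restored contents. For example:

# zfs list -r rpool
# zfs list -r -t snapshot rpool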