
Recovering the ZFS Root Pool or Root Pool Snapshots

The following sections describe how to replace a disk in the root pool, how to create root pool snapshots, and how to recreate a root pool and restore root pool snapshots.

How to Replace a Disk in the ZFS Root Pool

You might need to replace a disk in the root pool, for example, because the disk is failing or because you want to replace it with a larger disk.

In a mirrored root pool configuration, you might be able to attempt a disk replacement without having to boot from alternate media. You can replace a failed disk by using the zpool replace command or, if you have an additional disk, by using the zpool attach command; a brief zpool replace sketch follows this paragraph. See the steps below for an example of attaching an additional disk and then detaching the original root pool disk.
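
The following is a minimal sketch of the zpool replace approach in a mirrored root pool, assuming the failed disk is c1t0d0s0 and the replacement disk is c1t5d0s0; both device names are placeholders, not part of the procedure below:

# zpool replace rpool c1t0d0s0 c1t5d0s0
# zpool status rpool
<Let the disk resilver before installing the boot blocks>
SPARC# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t5d0s0
x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t5d0s0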

Some hardware requires that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:

# zpool offline rpool c1t0d0s0
# cfgadm -c unconfigure c1::dsk/c1t0d0
<Physically remove failed disk c1t0d0>
<Physically insert replacement disk c1t0d0>
# cfgadm -c configure c1::dsk/c1t0d0
<Confirm that the new disk has an SMI label and a slice 0>
# zpool replace rpool c1t0d0s0
# zpool online rpool c1t0d0s0
# zpool status rpool
<Let disk resilver before installing the boot blocks>
SPARC# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0
x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0

On some hardware, you do not have to online or reconfigure the replacement disk after it is inserted.

Identify the boot device pathnames of the current and replacement disks so that you can test booting from the replacement disk and, if necessary, boot manually from the existing disk if the replacement disk fails. In the example below, the current root pool disk (c1t10d0s0) is:

/pci@8,700000/pci@3/scsi@5/sd@a,0

In the example below, the replacement boot disk (c1t9d0s0) is:

/pci@8,700000/pci@3/scsi@5/sd@9,0
  1. Physically connect the replacement disk.
  2. Confirm that the replacement (new) disk has an SMI label and a slice 0.

    For information about relabeling a disk that is intended for the root pool, see the following site. A brief labeling sketch also follows this procedure.

    http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

  3. Attach the new disk to the root pool.

    For example:

    # zpool attach rpool c1t10d0s0 c1t9d0s0
    Make sure to wait until resilver is done before rebooting.
  4. Confirm the root pool status.

    For example:

    # zpool status rpool
     state: ONLINE
    status: One or more devices is currently being resilvered.  The pool will
            continue to function, possibly in a degraded state.
    action: Wait for the resilver to complete.
     scan: resilver in progress since Fri Jan 14 13:35:45 2011
        814M scanned out of 15.3G at 16.3M/s, 0h15m to go
        813M resilvered, 5.18% done
    config:
    
            NAME          STATE     READ WRITE CKSUM
            rpool         ONLINE       0     0     0
              mirror-0    ONLINE       0     0     0
                c1t3d0s0  ONLINE       0     0     0
                c1t2d0s0  ONLINE       0     0     0  (resilvering)
    
    errors: No known data errors
  5. Verify that you can boot from the new disk after resilvering is complete.

    For example, on a SPARC based system:

    ok boot /pci@8,700000/pci@3/scsi@5/sd@9,0
  6. If the system boots from the new disk, detach the old disk.

    For example:

    # zpool detach rpool c1t10d0s0
  7. Set up the system to boot automatically from the new disk, either by using the eeprom command or the setenv command from the SPARC boot PROM, or by reconfiguring the PC BIOS, as shown in the sketch after this procedure.
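
The following sketch shows how step 7 might look on a SPARC based system, using the replacement disk's device path identified earlier in this section; on an x86 based system, select the new disk in the BIOS boot order instead:

SPARC# eeprom boot-device=/pci@8,700000/pci@3/scsi@5/sd@9,0
<Or, from the boot PROM>
ok setenv boot-device /pci@8,700000/pci@3/scsi@5/sd@9,0
ok printenv boot-device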
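
The following sketch, referenced in step 2, is one way to confirm that the replacement disk has an SMI label with a slice 0 and, if both disks are the same size, to copy the existing root disk's slice layout to it. The device names match the example above; if the replacement disk has an EFI label, relabel it with format -e first:

# prtvtoc /dev/rdsk/c1t9d0s0
<Confirm that slice 0 exists and is large enough for the root pool>
# prtvtoc /dev/rdsk/c1t10d0s2 | fmthard -s - /dev/rdsk/c1t9d0s2
<Copy the existing root disk's VTOC to the replacement disk>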

How to Create Root Pool Snapshots

Create root pool snapshots for recovery purposes. The best approach is to create a recursive snapshot of the root pool.

The procedure below creates a recursive root pool snapshot and stores it as a file in a pool on a remote system. If the root pool fails, the remote dataset can be mounted over NFS and the snapshot file received into the recreated pool. Alternatively, you can store root pool snapshots as actual snapshots in a pool on a remote system; a brief sketch of this alternative follows. Sending and receiving snapshots directly to and from a remote system is a bit more complicated because you must configure ssh or use rsh while the system to be repaired is booted from the Solaris OS miniroot.
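
The following is a minimal sketch of the ssh-based alternative; the snapshot name rpool@0311 matches the procedure below, but the remote dataset rpool/backup is an assumption, not part of this guide's configuration:

local# zfs snapshot -r rpool@0311
local# zfs send -Rv rpool@0311 | ssh remote-system zfs receive -Fdu rpool/backup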

For information about remotely storing and recovering root pool snapshots and the most up-to-date information about root pool recovery, go to this site:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

Validating remotely stored snapshots, whether stored as files or as actual snapshots, is an important step in root pool recovery; a validation sketch follows the next procedure. With either method, recreate the snapshots on a routine basis, such as when the pool configuration changes or when the Solaris OS is upgraded.

In the following example, the system is booted from the zfsBE boot environment.

  1. Create space on a remote system to store the snapshots.

    For example:

    remote# zfs create rpool/snaps
  2. Share the space to the local system.

    For example:

    remote# zfs set sharenfs='rw=local-system,root=local-system' rpool/snaps
    # share
    -@rpool/snaps   /rpool/snaps   sec=sys,rw=local-system,root=local-system   "" 
  3. Create a recursive snapshot of the root pool.

    In this example, the system has two BEs, osolBE and osol2BE. The active BE is osolBE.

    local# zpool set listsnapshots=on rpool
    local# zfs snapshot -r rpool@0311
    local# zfs list -r rpool
    NAME                          USED  AVAIL  REFER  MOUNTPOINT
    rpool                         20.1G   114G    67K  /rpool
    rpool@0311                        0      -    67K  -
    rpool/ROOT                    4.00G   114G    21K  legacy
    rpool/ROOT@0311                   0      -    21K  -
    rpool/ROOT/opensolaris        5.11M   114G  3.96G  /
    rpool/ROOT/opensolaris@0311       0      -  3.96G  -
    rpool/ROOT/osolBE             4.00G   114G  3.96G  /
    rpool/ROOT/osolBE@install     30.9M      -  3.89G  -
    rpool/ROOT/osolBE@osolBE      2.97M      -  3.96G  -
    rpool/ROOT/osolBE@0311            0      -  3.96G  -
    rpool/dump                    7.94G   114G  7.94G  -
    rpool/dump@0311                   0      -  7.94G  -
    rpool/export                  69.5K   114G    23K  /export
    rpool/export@0311                 0      -    23K  -
    rpool/export/home             46.5K   114G    23K  /export/home
    rpool/export/home@0311            0      -    23K  -
    rpool/export/home/admin       23.5K   114G  23.5K  /export/home/admin
    rpool/export/home/admin@0311      0      -  23.5K  -
    rpool/swap                    8.20G   122G  14.7M  -
    rpool/swap@0311                   0      -  14.7M  -
  4. Send the root pool snapshots to the remote system.

    For example:

    local# zfs send -Rv rpool@0311 > /net/remote-system/rpool/snaps/rpool.0311
    sending from @ to rpool@0311
    sending from @ to rpool/dump@0311
    sending from @ to rpool/ROOT@0311
    sending from @ to rpool/ROOT/osolBE@install
    sending from @install to rpool/ROOT/osolBE@osolBE
    sending from @osolBE to rpool/ROOT/osolBE@0311
    sending from @ to rpool/ROOT/opensolaris@0311
    sending from @ to rpool/swap@0311
    sending from @ to rpool/export@0311
    sending from @ to rpool/export/home@0311
    sending from @ to rpool/export/home/admin@0311
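
As noted earlier, validate the stored snapshot file after you create it. The following is a minimal sketch: the checksum command is standard, and the optional dry-run receive assumes that a scratch pool named testpool is available (nothing is written because of the -n option):

local# digest -a sha256 /net/remote-system/rpool/snaps/rpool.0311
<Record the checksum and compare it again before you rely on the file for recovery>
local# zfs receive -nvFdu testpool < /net/remote-system/rpool/snaps/rpool.0311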

How to Recreate a ZFS Root Pool and Restore Root Pool Snapshots

In this scenario, assume the following conditions:

The root pool cannot be recovered in place and must be recreated.

The root pool snapshots are stored as a file on a remote system that shares the snapshot dataset over NFS, as described in the previous procedure.

All of the steps below are performed on the local system.

  1. Boot from CD/DVD or the network.

    On a SPARC based system, select one of the following boot methods:

    ok boot net -s
    ok boot cdrom -s

    If you don't use the -s option, you must exit the installation program.

    On an x86 based system, select the option for booting from the DVD or the network. Then, exit the installation program.

  2. Mount the remote snapshot dataset.

    For example:

    # mount -F nfs remote-system:/rpool/snaps /mnt

    If your network services are not configured, you might need to specify the remote-system's IP address.

  3. If the root pool disk was replaced and does not contain a disk label that is usable by ZFS, you must relabel the disk.

    For more information about relabeling the disk, go to the following site:

    http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

  4. Recreate the root pool.

    For example:

    # zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache rpool c1t0d0s0
  5. Restore the root pool snapshots.

    This step might take some time. For example:

    # cat /mnt/rpool.0311 | zfs receive -Fdu rpool

    Using the -u option means that the restored archive is not mounted when the zfs receive operation completes.

  6. (Optional) If you want to modify something in the BE, you must explicitly mount the BE components as follows:
    1. Mount the BE components. For example:
      # zfs mount rpool/ROOT/osolBE
    2. Mount everything in the pool that is not part of a BE. For example:
      # zfs mount -a rpool

    Other BEs are not mounted because they have canmount=noauto, which suppresses mounting during the zfs mount -a operation.

  7. Verify that the root pool datasets are restored.

    For example:

    # zfs list
    NAME                          USED  AVAIL  REFER  MOUNTPOINT
    rpool                         20.1G   114G    67K  /rpool
    rpool@0311                        0      -    67K  -
    rpool/ROOT                    4.00G   114G    21K  legacy
    rpool/ROOT@0311                   0      -    21K  -
    rpool/ROOT/opensolaris        5.11M   114G  3.96G  /
    rpool/ROOT/opensolaris@0311       0      -  3.96G  -
    rpool/ROOT/osolBE             4.00G   114G  3.96G  /
    rpool/ROOT/osolBE@install     30.9M      -  3.89G  -
    rpool/ROOT/osolBE@osolBE      2.97M      -  3.96G  -
    rpool/ROOT/osolBE@0311            0      -  3.96G  -
    rpool/dump                    7.94G   114G  7.94G  -
    rpool/dump@0311                   0      -  7.94G  -
    rpool/export                  69.5K   114G    23K  /export
    rpool/export@0311                 0      -    23K  -
    rpool/export/home             46.5K   114G    23K  /export/home
    rpool/export/home@0311            0      -    23K  -
    rpool/export/home/admin       23.5K   114G  23.5K  /export/home/admin
    rpool/export/home/admin@0311      0      -  23.5K  -
    rpool/swap                    8.20G   122G  14.7M  -
    rpool/swap@0311                   0      -  14.7M  -
  8. Set the bootfs property on the root pool BE.

    For example:

    # zpool set bootfs=rpool/ROOT/osolBE rpool
  9. Install the boot blocks on the new disk.

    On a SPARC based system:

    # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0

    On an x86 based system:

    # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
  10. Reboot the system.
    # init 6
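
After the system reboots, confirm that the recovered pool is set to boot the intended BE. This check is a minimal sketch and is not part of the original procedure:

# zpool get bootfs rpool
<Confirm that the value is rpool/ROOT/osolBE>
# beadm list
<Confirm that the osolBE boot environment is listed and active>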