Booting From a ZFS Root File System

Both SPARC based and x86 based systems use the new style of booting with a boot archive, which is a file system image that contains the files required for booting. When a system is booted from a ZFS root file system, the path names of both the boot archive and the kernel file are resolved in the root file system that is selected for booting.
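
As an aside that is not in the original text, on a running Solaris 10 system you can inspect or rebuild the boot archive with the bootadm command. A minimal sketch:

# bootadm list-archive
# bootadm update-archive

The list-archive subcommand lists the files that the boot archive contains, and update-archive rebuilds the archive if it is out of date.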

When a system is booted for installation, a RAM disk is used for the root file system during the entire installation process.

Booting from a ZFS file system differs from booting from a UFS file system because with ZFS, the boot device specifier identifies a storage pool, not a single root file system. A storage pool can contain multiple bootable datasets or ZFS root file systems. When booting from ZFS, you must specify a boot device and a root file system within the pool that was identified by the boot device.

By default, the dataset selected for booting is identified by the pool's bootfs property. This default selection can be overridden by specifying an alternate bootable dataset with the boot -Z command.
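
For example, you can display or change the dataset that a pool boots by default with the zpool command. This is a hedged sketch; the pool name rpool and the BE name zfs2BE are illustrative:

# zpool get bootfs rpool
# zpool set bootfs=rpool/ROOT/zfs2BE rpool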

Booting From an Alternate Disk in a Mirrored ZFS Root Pool

You can create a mirrored ZFS root pool when the system is installed, or you can attach a disk to create a mirrored ZFS root pool after installation. For more information, see Installing a ZFS Root File System (Oracle Solaris Initial Installation) and How to Create a Mirrored ZFS Root Pool (Postinstallation).
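
As a brief sketch of the postinstallation method (the device names c1t0d0s0 and c1t1d0s0 are hypothetical, and a root pool disk must use an SMI label with a slice specified), attaching a second disk converts the root pool to a mirror:

# zpool attach rpool c1t0d0s0 c1t1d0s0
# zpool status rpool

Use zpool status to confirm that the resilver of the newly attached disk has completed before you attempt to boot from it.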

Review the known issues regarding mirrored ZFS root pools. In particular, if you attach a disk to create the mirror after installation, boot blocks are not written to the new disk automatically, and the system cannot boot from that disk until they are applied, as sketched below.
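
For example (a hedged sketch; the device name c0t1d0s0 is hypothetical), boot blocks can be applied to a disk that was attached after installation by using installboot on SPARC or installgrub on x86:

sparc# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0

If the mirrored root pool is created during the initial installation, this step is not necessary.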

SPARC: Booting From a ZFS Root File System

On a SPARC based system with multiple ZFS BEs, you can boot from any BE by using the luactivate command.

During the Oracle Solaris OS installation and Live Upgrade process, the pool's bootfs property is automatically set to designate the default ZFS root file system.

Multiple bootable datasets can exist within a pool. By default, the bootable dataset entry in the /pool-name/boot/menu.lst file is identified by the pool's bootfs property. However, a menu.lst entry can contain a bootfs command, which specifies an alternate dataset in the pool. In this way, the menu.lst file can contain entries for multiple root file systems within the pool.

When a system is installed with a ZFS root file system or migrated to a ZFS root file system, an entry similar to the following is added to the menu.lst file:

title zfsBE
bootfs rpool/ROOT/zfsBE
title zfs2BE
bootfs rpool/ROOT/zfs2BE

When a new BE is created, the menu.lst file is updated automatically.
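
For example (an illustrative sketch; the BE name zfs2BE is hypothetical), creating a BE with lucreate clones the current root dataset and adds a matching entry to the menu.lst file:

# lucreate -n zfs2BE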

On a SPARC based system, two ZFS boot options are available: the boot -L command, which displays a list of the bootable datasets in the pool and lets you select one, and the boot -Z dataset command, which boots the specified bootable dataset directly.

Example 4-11 SPARC: Booting From a Specific ZFS Boot Environment

If you have multiple ZFS BEs in a ZFS storage pool on your system's boot device, you can use the luactivate command to specify a default BE.

For example, the following lustatus output shows that two ZFS BEs are available:

# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      no     no        yes    -         
zfs2BE                     yes      yes    yes       no     -

If you have multiple ZFS BEs on your SPARC based system, you can use the boot -L command to boot from a BE that is different from the default BE. However, booting a BE from a boot -L session neither makes it the new default BE nor updates the bootfs property. To make the BE that was booted from a boot -L session the default BE, you must activate it with the luactivate command.

For example:

ok boot -L
Rebooting with command: boot -L
Boot device: /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/disk@0  File and args: -L

1 zfsBE
2 zfs2BE
Select environment to boot: [ 1 - 2 ]: 1
To boot the selected entry, invoke:
boot [<root-device>] -Z rpool/ROOT/zfsBE

Program terminated
ok boot -Z rpool/ROOT/zfsBE
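
Because neither boot -L nor boot -Z changes the default BE, you can then make the selected BE the default by activating it and rebooting, as in this sketch:

# luactivate zfsBE
# init 6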

Example 4-12 SPARC: Booting a ZFS File System in Failsafe Mode

On a SPARC based system, you can boot from the failsafe archive located in /platform/`uname -i`/failsafe as follows:

ok boot -F failsafe

To boot a failsafe archive from a particular ZFS bootable dataset, use syntax similar to the following:

ok boot -Z rpool/ROOT/zfsBE -F failsafe

x86: Booting From a ZFS Root File System

The following entries are added to the /pool-name/boot/grub/menu.lst file during the Oracle Solaris OS installation or Live Upgrade process to boot ZFS automatically:

title Solaris 10 1/13  X86
findroot (rootfs0,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive
title Solaris failsafe
findroot (rootfs0,0,a)
kernel /boot/multiboot kernel/unix -s -B console=ttya
module /boot/x86.miniroot-safe

If the device identified by GRUB as the boot device contains a ZFS storage pool, the menu.lst file is used to create the GRUB menu.
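
As an aside that is not in the original text, the bootadm list-menu command on an x86 based system reports the location and entries of the active GRUB menu, which is useful for confirming which menu.lst file is in effect:

# bootadm list-menu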

On an x86 based system with multiple ZFS BEs, you can select a BE from the GRUB menu. If the root file system that corresponds to a menu entry is a ZFS dataset, the following option is added to that entry's kernel line:

-B $ZFS-BOOTFS

Example 4-13 x86: Booting a ZFS File System

When a system boots from a ZFS file system, the root device is specified by the -B $ZFS-BOOTFS boot parameter. For example:

title Solaris 10 1/13  X86
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive
title Solaris failsafe
findroot (pool_rpool,0,a)
kernel /boot/multiboot kernel/unix -s -B console=ttya
module /boot/x86.miniroot-safe

Example 4-14 x86: Booting a ZFS File System in Failsafe Mode

The x86 failsafe archive is /boot/x86.miniroot-safe and can be booted by selecting the Solaris failsafe entry from the GRUB menu. For example:

title Solaris failsafe
findroot (pool_rpool,0,a)
kernel /boot/multiboot kernel/unix -s -B console=ttya
module /boot/x86.miniroot-safe

Resolving ZFS Mount-Point Problems That Prevent Successful Booting (Solaris 10 10/08)

The best way to change the active boot environment (BE) is to use the luactivate command. If booting the active BE fails due to a bad patch or a configuration error, the only way to boot from a different BE is to select it at boot time. You can select an alternate BE by booting it explicitly from the PROM on a SPARC based system or from the GRUB menu on an x86 based system.

Due to a bug in Live Upgrade in the Solaris 10 10/08 release, the inactive BE might fail to boot because a ZFS dataset or a zone's ZFS dataset in the BE has an invalid mount point. The same bug also prevents the BE from mounting if it has a separate /var dataset.

If a zone's ZFS dataset has an invalid mount point, the mount point can be corrected by performing the following steps.

How to Resolve ZFS Mount-Point Problems

  1. Boot the system from a failsafe archive.
  2. Import the pool.

    For example:

    # zpool import rpool
  3. Look for incorrect temporary mount points.

    For example:

    # zfs list -r -o name,mountpoint rpool/ROOT/s10up

    NAME                               MOUNTPOINT
    rpool/ROOT/s10up                   /.alt.tmp.b-VP.mnt/
    rpool/ROOT/s10up/zones             /.alt.tmp.b-VP.mnt//zones
    rpool/ROOT/s10up/zones/zonerootA   /.alt.tmp.b-VP.mnt/zones/zonerootA

    The mount point for the root BE (rpool/ROOT/s10up) should be /.

    If the boot is failing because of /var mounting problems, look for a similar incorrect temporary mount point for the /var dataset. A sketch for resetting a separate /var dataset follows this procedure.

  4. Reset the mount points for the ZFS BE and its datasets.

    For example:

    # zfs inherit -r mountpoint rpool/ROOT/s10up
    # zfs set mountpoint=/ rpool/ROOT/s10up
  5. Reboot the system.

    When the option to boot a specific BE is presented, either at the OpenBoot PROM prompt or in the GRUB menu, select the boot environment whose mount points were just corrected.
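
If the boot failure involved a separate /var dataset, a similar reset applies to that dataset before you reboot. In this hedged sketch, the dataset name rpool/ROOT/s10up/var is hypothetical:

# zfs inherit mountpoint rpool/ROOT/s10up/var
# zfs set mountpoint=/var rpool/ROOT/s10up/var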

Booting for Recovery Purposes in a ZFS Root Environment

Use the following procedure if you need to boot the system so that you can recover from a lost root password or similar problem.

You must boot failsafe mode or boot from alternate media, depending on the severity of the error. In general, you can boot failsafe mode to recover a lost or unknown root password.

If you need to recover a root pool or root pool snapshot, see Recovering the ZFS Root Pool or Root Pool Snapshots.

How to Boot ZFS Failsafe Mode

  1. Boot failsafe mode.
    • On a SPARC based system, type the following at the ok prompt:

      ok boot -F failsafe
    • On an x86 based system, select failsafe mode from the GRUB menu.

  2. Mount the ZFS BE on /a when prompted.
    .
    .
    .
    ROOT/zfsBE was found on rpool.
    Do you wish to have it mounted read-write on /a? [y,n,?] y
    mounting rpool on /a
    Starting shell.
  3. Change to the /a/etc directory.
    # cd /a/etc
  4. If necessary, set the TERM type.
    # TERM=vt100
    # export TERM
  5. Correct the passwd or shadow file.
    # vi shadow
  6. Reboot the system.
    # init 6

How to Boot ZFS From Alternate Media

If a problem prevents the system from booting successfully or some other severe problem occurs, you must boot from a network install server or from an Oracle Solaris installation DVD, import the root pool, mount the ZFS BE, and attempt to resolve the issue.

  1. Boot from an installation DVD or from the network.
    • SPARC - Select one of the following boot methods:

      ok boot cdrom -s 
      ok boot net -s

      If you don't use the -s option, you must exit the installation program.

    • x86 - Select the network boot option or boot from a local DVD.

  2. Import the root pool, and specify an alternate mount point. For example:
    # zpool import -R /a rpool
  3. Mount the ZFS BE. For example:
    # zfs mount rpool/ROOT/zfsBE
  4. Access the ZFS BE contents from the /a directory.
    # cd /a
  5. Reboot the system.
    # init 6