Oracle Solaris ZFS Administration Guide

Booting From a ZFS Root File System

Both SPARC based and x86 based systems use the new style of booting with a boot archive, which is a file system image that contains the files required for booting. When a system is booted from a ZFS root file system, the path names of both the boot archive and the kernel file are resolved in the root file system that is selected for booting.

When a system is booted for installation, a RAM disk is used for the root file system during the entire installation process.

Booting from a ZFS file system differs from booting from a UFS file system because with ZFS, the boot device specifier identifies a storage pool, not a single root file system. A storage pool can contain multiple bootable datasets or ZFS root file systems. When booting from ZFS, you must specify a boot device and a root file system within the pool that was identified by the boot device.

By default, the dataset selected for booting is identified by the pool's bootfs property. This default selection can be overridden by specifying an alternate bootable dataset in the boot -Z command.
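
For example, you can display the pool's current default bootable dataset with the zpool get command and change it with zpool set. This is a minimal sketch; the pool name rpool and the dataset name rpool/ROOT/zfs2BE are only illustrations:


# zpool get bootfs rpool
# zpool set bootfs=rpool/ROOT/zfs2BE rpool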

Booting From an Alternate Disk in a Mirrored ZFS Root Pool

You can create a mirrored ZFS root pool when the system is installed, or you can attach a disk to an existing root pool to create a mirrored ZFS root pool after installation.

Review the following known issue regarding mirrored ZFS root pools:

  • A disk that is attached to an existing root pool after installation is not automatically made bootable. You must install the boot blocks on the newly attached disk with the installboot command (SPARC) or the installgrub command (x86), as sketched below, before you can boot from that disk.
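
The following is a minimal sketch of attaching a second disk to an existing root pool and installing the boot blocks on it. The device names c1t0d0s0 and c1t1d0s0 are hypothetical; substitute the devices on your system, and wait for resilvering to complete before booting from the new disk.


# zpool attach rpool c1t0d0s0 c1t1d0s0

On a SPARC based system:


# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0

On an x86 based system:


# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0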

SPARC: Booting From a ZFS Root File System

On a SPARC based system with multiple ZFS BEs, you can boot from any BE by using the luactivate command.

During the Solaris OS installation and Oracle Solaris Live Upgrade process, the ZFS root file system is automatically designated with the bootfs property.

Multiple bootable datasets can exist within a pool. By default, the bootable dataset entry in the /pool-name/boot/menu.lst file is identified by the pool's bootfs property. However, a menu.lst entry can contain a bootfs command, which specifies an alternate dataset in the pool. In this way, the menu.lst file can contain entries for multiple root file systems within the pool.

When a system is installed with a ZFS root file system or migrated to a ZFS root file system, an entry similar to the following is added to the menu.lst file:


title zfsBE
bootfs rpool/ROOT/zfsBE
title zfs2BE
bootfs rpool/ROOT/zfs2BE

When a new BE is created, the menu.lst file is updated automatically.
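
For example, the following sketch creates a new BE with Oracle Solaris Live Upgrade; the BE name zfs2BE is only an illustration. After lucreate completes, an entry for the new BE is added to the menu.lst file:


# lucreate -n zfs2BE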

On a SPARC based system, two new boot options are available:

  • The boot -L command displays a list of the bootable datasets within the ZFS pool on the boot device. You can select one of the listed datasets, and instructions for booting it are displayed.

  • The boot -Z dataset command boots the ZFS root file system for the specified bootable dataset.


Example 5–8 SPARC: Booting From a Specific ZFS Boot Environment

If you have multiple ZFS BEs in a ZFS storage pool on your system's boot device, you can use the luactivate command to specify a default BE.

For example, the following ZFS BEs are available as described by the lustatus output:


# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      no     no        yes    -         
zfs2BE                     yes      yes    yes       no     -

If you have multiple ZFS BEs on your SPARC based system, you can use the boot -L command to boot from a BE that is different from the default BE. However, booting a BE from a boot -L session does not make it the default BE, nor does it update the bootfs property. To make the BE that you booted from a boot -L session the default BE, you must activate it with the luactivate command, as sketched after the following example.

For example:


ok boot -L
Rebooting with command: boot -L
Boot device: /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/disk@0  File and args: -L

1 zfsBE
2 zfs2BE
Select environment to boot: [ 1 - 2 ]: 1
To boot the selected entry, invoke:
boot [<root-device>] -Z rpool/ROOT/zfsBE

Program terminated
ok boot -Z rpool/ROOT/zfsBE
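
To make the BE that you just booted the default BE, a minimal sketch is to activate it with luactivate and reboot, assuming the BE name zfsBE from the listing above:


# luactivate zfsBE
# init 6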


Example 5–9 SPARC: Booting a ZFS File System in Failsafe Mode

On a SPARC based system, you can boot from the failsafe archive located in /platform/`uname -i`/failsafe as follows:


ok boot -F failsafe

To boot a failsafe archive from a particular ZFS bootable dataset, use syntax similar to the following:


ok boot -Z rpool/ROOT/zfsBE -F failsafe

x86: Booting From a ZFS Root File System

The following entries are added to the /pool-name/boot/grub/menu.lst file during the Solaris OS installation process or Oracle Solaris Live Upgrade operation to boot ZFS automatically:


title Solaris 10 9/10  X86
findroot (rootfs0,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive
title Solaris failsafe
findroot (rootfs0,0,a)
kernel /boot/multiboot kernel/unix -s -B console=ttya
module /boot/x86.miniroot-safe

If the device identified by GRUB as the boot device contains a ZFS storage pool, the menu.lst file is used to create the GRUB menu.
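
For example, you can display the location of the active GRUB menu and its current entries with the bootadm command; the output depends on your system configuration:


# bootadm list-menu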

On an x86 based system with multiple ZFS BEs, you can select a BE from the GRUB menu. If the root file system corresponding to a menu entry is a ZFS dataset, the following option is added to the kernel line of that entry:


-B $ZFS-BOOTFS

Example 5–10 x86: Booting a ZFS File System

When a system boots from a ZFS file system, the root device is specified by the -B $ZFS-BOOTFS parameter on the kernel line in the GRUB menu entry. This parameter value, like all parameters specified by the -B option, is passed by GRUB to the kernel. For example:



title Solaris 10 9/10  X86
findroot (rootfs0,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive
title Solaris failsafe
findroot (rootfs0,0,a)
kernel /boot/multiboot kernel/unix -s -B console=ttya
module /boot/x86.miniroot-safe

Example 5–11 x86: Booting a ZFS File System in Failsafe Mode

The x86 failsafe archive is /boot/x86.miniroot-safe and can be booted by selecting the Solaris failsafe entry from the GRUB menu. For example:


title Solaris failsafe
findroot (rootfs0,0,a)
kernel /boot/multiboot kernel/unix -s -B console=ttya
module /boot/x86.miniroot-safe

Resolving ZFS Mount-Point Problems That Prevent Successful Booting (Solaris 10 10/08)

The best way to change the active boot environment is to use the luactivate command. If booting the active environment fails due to a bad patch or a configuration error, the only way to boot from a different environment is to select that environment at boot time. You can select an alternate BE from the GRUB menu on an x86 based system or by booting it explicitly from the PROM on a SPARC based system.
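
For example, the following sketch selects an alternate BE explicitly from the PROM on a SPARC based system; the dataset name rpool/ROOT/zfs2BE is only an illustration:


ok boot -L
ok boot -Z rpool/ROOT/zfs2BE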

Due to a bug in Oracle Solaris Live Upgrade in the Solaris 10 10/08 release, the inactive boot environment might fail to boot because a ZFS dataset or a zone's ZFS dataset in the boot environment has an invalid mount point. The same bug also prevents the BE from mounting if it has a separate /var dataset.

If a zone dataset has an invalid mount point, the mount point can be corrected by performing the following steps.

How to Resolve ZFS Mount-Point Problems

  1. Boot the system from a failsafe archive.

  2. Import the pool.

    For example:


    # zpool import rpool
    
  3. Look for incorrect temporary mount points.

    For example:


    # zfs list -r -o name,mountpoint rpool/ROOT/s10u6

    NAME                               MOUNTPOINT
    rpool/ROOT/s10u6                   /.alt.tmp.b-VP.mnt/
    rpool/ROOT/s10u6/zones             /.alt.tmp.b-VP.mnt//zones
    rpool/ROOT/s10u6/zones/zonerootA   /.alt.tmp.b-VP.mnt/zones/zonerootA

    The mount point for the root BE (rpool/ROOT/s10u6) should be /.

    If the boot is failing because of /var mounting problems, look for a similar incorrect temporary mount point for the /var dataset; see the sketch after step 5 for resetting it.

  4. Reset the mount points for the ZFS BE and its datasets.

    For example:


    # zfs inherit -r mountpoint rpool/ROOT/s10u6
    # zfs set mountpoint=/ rpool/ROOT/s10u6
    
  5. Reboot the system.

    When the option to boot a specific boot environment is presented, either in the GRUB menu or at the OpenBoot PROM prompt, select the boot environment whose mount points were just corrected.
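
If the BE also has a separate /var dataset with an incorrect temporary mount point (see step 3), reset its mount point as well before rebooting. The following is a minimal sketch that assumes the hypothetical dataset name rpool/ROOT/s10u6/var:


# zfs list -o name,mountpoint rpool/ROOT/s10u6/var
# zfs set mountpoint=/var rpool/ROOT/s10u6/var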

Booting For Recovery Purposes in a ZFS Root Environment

Use the following procedure if you need to boot the system so that you can recover from a lost root password or similar problem.

You will need to boot failsafe mode or boot from alternate media, depending on the severity of the error. In general, you can boot failsafe mode to recover a lost or unknown root password.

If you need to recover a root pool or root pool snapshot, see Recovering the ZFS Root Pool or Root Pool Snapshots.

How to Boot ZFS Failsafe Mode

  1. Boot failsafe mode.

    On a SPARC system:


    ok boot -F failsafe
    

    On an x86 system, select failsafe mode from the GRUB menu.

  2. Mount the ZFS BE on /a when prompted:


    .
    .
    .
    ROOT/zfsBE was found on rpool.
    Do you wish to have it mounted read-write on /a? [y,n,?] y
    mounting rpool on /a
    Starting shell.
  3. Change to the /a/etc directory.


    # cd /a/etc
    
  4. If necessary, set the TERM type.


    # TERM=vt100
    # export TERM
  5. Correct the passwd or shadow file.


    # vi shadow
    
  6. Reboot the system.


    # init 6
    

How to Boot ZFS From Alternate Media

If a problem prevents the system from booting successfully or some other severe problem occurs, you will need to boot from a network install server or from a Solaris installation CD, import the root pool, mount the ZFS BE, and attempt to resolve the issue.

  1. Boot from an installation CD or from the network.

    • SPARC:


      ok boot cdrom -s 
      ok boot net -s
      

      If you don't use the -s option, you will need to exit the installation program.

    • x86: Select the network boot or boot from local CD option.

  2. Import the root pool and specify an alternate mount point. For example:


    # zpool import -R /a rpool
    
  3. Mount the ZFS BE. For example:


    # zfs mount rpool/ROOT/zfsBE
    
  4. Access the ZFS BE contents from the /a directory.


    # cd /a
    
  5. Reboot the system.


    # init 6