Oracle Solaris ZFS Administration Guide

Booting From a ZFS Root File System

Both SPARC based and x86 based systems use the new style of booting with a boot archive, which is a file system image that contains the files required for booting. When a system is booted from a ZFS root file system, the path names of both the boot archive and the kernel file are resolved in the root file system that is selected for booting.

When a system is booted for installation, a RAM disk is used for the root file system during the entire installation process.

Booting from a ZFS file system differs from booting from a UFS file system because with ZFS, the boot device specifier identifies a storage pool, not a single root file system. A storage pool can contain multiple bootable datasets or ZFS root file systems. When booting from ZFS, you must specify a boot device and a root file system within the pool that was identified by the boot device.

By default, the dataset selected for booting is identified by the pool's bootfs property. This default selection can be overridden by specifying an alternate bootable dataset in the boot -Z command.
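
For example, a sketch of checking and changing the default boot dataset, assuming a root pool named rpool that contains BEs named zfsBE and zfs2BE:

# zpool get bootfs rpool
NAME   PROPERTY  VALUE             SOURCE
rpool  bootfs    rpool/ROOT/zfsBE  local
# zpool set bootfs=rpool/ROOT/zfs2BE rpool

On systems managed with Oracle Solaris Live Upgrade, the luactivate command updates this property when a BE is activated, so setting bootfs directly is rarely necessary.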

Booting From an Alternate Disk in a Mirrored ZFS Root Pool

You can create a mirrored ZFS root pool when the system is installed, or you can attach a disk to create a mirrored ZFS root pool after installation. For more information, see Installing a ZFS Root File System (Initial Installation) and How to Create a Mirrored Root Pool (Post Installation).

Review the known issues regarding mirrored ZFS root pools before you attempt to boot from an alternate disk. In particular, boot blocks are not applied automatically to a disk that is attached to the root pool after installation, so that disk cannot be booted from until the boot blocks are installed.
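
For example, a minimal sketch of attaching a second disk and making it bootable, assuming a SPARC based system with an existing root pool named rpool on c0t0d0s0 and a new disk slice c0t1d0s0 (on an x86 based system, use installgrub instead of installboot):

# zpool attach rpool c0t0d0s0 c0t1d0s0
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0

Wait for resilvering of the new disk to complete, which you can confirm with zpool status rpool, before attempting to boot from it.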

SPARC: Booting From a ZFS Root File System

On a SPARC based system with multiple ZFS BEs, you can boot from any BE by using the luactivate command.

During the Solaris OS installation and Oracle Solaris Live Upgrade process, the ZFS root file system is automatically designated with the bootfs property.

Multiple bootable datasets can exist within a pool. By default, the bootable dataset entry in the /pool-name/boot/menu.lst file is identified by the pool's bootfs property. However, a menu.lst entry can contain a bootfs command, which specifies an alternate dataset in the pool. In this way, the menu.lst file can contain entries for multiple root file systems within the pool.

When a system is installed with a ZFS root file system or migrated to a ZFS root file system, an entry similar to the following is added to the menu.lst file:

title zfsBE
bootfs rpool/ROOT/zfsBE
title zfs2BE
bootfs rpool/ROOT/zfs2BE

When a new BE is created, the menu.lst file is updated automatically.
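
For example, a sketch of creating a new BE named zfs2BE from the currently running ZFS BE, which also adds a matching menu.lst entry, assuming the system already boots from a ZFS root pool:

# lucreate -n zfs2BE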

On a SPARC based system, two new boot options are available: the boot -L option, which displays a list of the bootable datasets within the ZFS pool and lets you select one to boot, and the boot -Z dataset option, which boots the specified ZFS dataset.

Example 5-8 SPARC: Booting From a Specific ZFS Boot Environment

If you have multiple ZFS BEs in a ZFS storage pool on your system's boot device, you can use the luactivate command to specify a default BE.

For example, the following ZFS BEs are available as described by the lustatus output:

# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      no     no        yes    -         
zfs2BE                     yes      yes    yes       no     -

If you have multiple ZFS BEs on your SPARC based system, you can use the boot -L command to boot from a BE that is different from the default BE. However, a BE that is booted from a boot -L session is not reset as the default BE nor is the bootfs property updated. If you want to make the BE booted from a boot -L session the default BE, then you must activate it with the luactivate command.

For example:

ok boot -L
Rebooting with command: boot -L
Boot device: /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/disk@0  File and args: -L

1 zfsBE
2 zfs2BE
Select environment to boot: [ 1 - 2 ]: 1
To boot the selected entry, invoke:
boot [<root-device>] -Z rpool/ROOT/zfsBE

Program terminated
ok boot -Z rpool/ROOT/zfsBE
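
If you want the dataset booted in this way to become the default BE, activate it from the running system and then reboot, as in the following sketch (assuming the BE names shown above):

# luactivate zfsBE
# init 6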

Example 5-9 SPARC: Booting a ZFS File System in Failsafe Mode

On a SPARC based system, you can boot from the failsafe archive located in /platform/`uname -i`/failsafe as follows:

ok boot -F failsafe

To boot a failsafe archive from a particular ZFS bootable dataset, use syntax similar to the following:

ok boot -Z rpool/ROOT/zfsBE -F failsafe

x86: Booting From a ZFS Root File System

The following entries are added to the /pool-name/boot/grub/menu.lst file during the Solaris OS installation process or Oracle Solaris Live Upgrade operation to boot ZFS automatically:

title Solaris 10 9/10  X86
findroot (rootfs0,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive
title Solaris failsafe
findroot (rootfs0,0,a)
kernel /boot/multiboot kernel/unix -s -B console=ttya
module /boot/x86.miniroot-safe

If the device identified by GRUB as the boot device contains a ZFS storage pool, the menu.lst file is used to create the GRUB menu.

On an x86 based system with multiple ZFS BEs, you can select a BE from the GRUB menu. If the root file system corresponding to this menu entry is a ZFS dataset, the following option is added:

-B $ZFS-BOOTFS

Example 5-10 x86: Booting a ZFS File System

When a system boots from a ZFS file system, the root device is specified by the boot -B $ZFS-BOOTFS parameter on either the kernel or module line in the GRUB menu entry. This parameter value, similar to all parameters specified by the -B option, is passed by GRUB to the kernel. For example:

title Solaris 10 9/10  X86
findroot (rootfs0,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive
title Solaris failsafe
findroot (rootfs0,0,a)
kernel /boot/multiboot kernel/unix -s -B console=ttya
module /boot/x86.miniroot-safe
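
After the system boots, you can confirm which dataset was actually used as the root file system by checking what is mounted at / (a sketch; the dataset name is hypothetical):

# df -k /

The Filesystem column of the output reports the booted dataset, for example rpool/ROOT/zfsBE.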

Example 5-11 x86: Booting a ZFS File System in Failsafe Mode

The x86 failsafe archive is /boot/x86.miniroot-safe and can be booted by selecting the Solaris failsafe entry from the GRUB menu. For example:

title Solaris failsafe
findroot (rootfs0,0,a)
kernel /boot/multiboot kernel/unix -s -B console=ttya
module /boot/x86.miniroot-safe

Resolving ZFS Mount-Point Problems That Prevent Successful Booting (Solaris 10 10/08)

The best way to change the active boot environment is to use the luactivate command. If booting the active environment fails due to a bad patch or a configuration error, the only way to boot from a different environment is to select that environment at boot time. You can select an alternate BE from the GRUB menu on an x86 based system or by booting it explicitly from the PROM on a SPARC based system.

Due to a bug in Oracle Solaris Live Upgrade in the Solaris 10 10/08 release, the inactive boot environment might fail to boot because a ZFS dataset or a zone's ZFS dataset in the boot environment has an invalid mount point. The same bug also prevents the BE from mounting if it has a separate /var dataset.

These invalid mount points, whether on a BE dataset, a zone dataset, or a separate /var dataset, can be corrected by performing the following steps.

How to Resolve ZFS Mount-Point Problems

  1. Boot the system from a failsafe archive.
  2. Import the pool.

    For example:

    # zpool import rpool
  3. Look for incorrect temporary mount points.

    For example:

    # zfs list -r -o name,mountpoint rpool/ROOT/s10u6
    NAME                               MOUNTPOINT
    rpool/ROOT/s10u6                   /.alt.tmp.b-VP.mnt/
    rpool/ROOT/s10u6/zones             /.alt.tmp.b-VP.mnt//zones
    rpool/ROOT/s10u6/zones/zonerootA   /.alt.tmp.b-VP.mnt/zones/zonerootA

    The mount point for the root BE (rpool/ROOT/s10u6) should be /.

    If the boot is failing because of /var mounting problems, look for a similar incorrect temporary mount point for the /var dataset.

  4. Reset the mount points for the ZFS BE and its datasets. (A verification sketch follows this procedure.)

    For example:

    # zfs inherit -r mountpoint rpool/ROOT/s10u6
    # zfs set mountpoint=/ rpool/ROOT/s10u6
  5. Reboot the system.

    When the option to boot a specific boot environment is presented, either in the GRUB menu or at the OpenBoot PROM prompt, select the boot environment whose mount points were just corrected.
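
Before performing step 5, you can verify that step 4 corrected the mount points by repeating the listing from step 3. A sketch, reusing the dataset names above; the temporary /.alt.tmp.b-VP.mnt prefix should no longer appear:

# zfs list -r -o name,mountpoint rpool/ROOT/s10u6
NAME                               MOUNTPOINT
rpool/ROOT/s10u6                   /
rpool/ROOT/s10u6/zones             /zones
rpool/ROOT/s10u6/zones/zonerootA   /zones/zonerootA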

Booting For Recovery Purposes in a ZFS Root Environment

Use the following procedure if you need to boot the system so that you can recover from a lost root password or similar problem.

You will need to boot failsafe mode or boot from alternate media, depending on the severity of the error. In general, you can boot failsafe mode to recover a lost or unknown root password.

If you need to recover a root pool or root pool snapshot, see Recovering the ZFS Root Pool or Root Pool Snapshots.

How to Boot ZFS Failsafe Mode

  1. Boot failsafe mode.

    On a SPARC system:

    ok boot -F failsafe

    On an x86 system, select failsafe mode from the GRUB prompt.

  2. Mount the ZFS BE on /a when prompted:
    .
    .
    .
    ROOT/zfsBE was found on rpool.
    Do you wish to have it mounted read-write on /a? [y,n,?] y
    mounting rpool on /a
    Starting shell.
  3. Change to the /a/etc directory.
    # cd /a/etc
  4. If necessary, set the TERM type.
    # TERM=vt100
    # export TERM
  5. Correct the passwd or shadow file.
    # vi shadow
  6. Reboot the system.
    # init 6
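
As a sketch of step 5 for a lost root password (the root entry shown is hypothetical): clearing the second, encrypted-password field of the root entry in the shadow file lets root log in without a password after the reboot, at which point a new password should be set with the passwd command.

# grep '^root:' shadow
root:<encrypted-password>:6445::::::
# vi shadow
# grep '^root:' shadow
root::6445::::::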

How to Boot ZFS From Alternate Media

If a problem prevents the system from booting successfully or some other severe problem occurs, you will need to boot from a network install server or from a Solaris installation CD, import the root pool, mount the ZFS BE, and attempt to resolve the issue.

  1. Boot from an installation CD or from the network.
    • SPARC:

      ok boot cdrom -s 
      ok boot net -s

      If you don't use the -s option, you will need to exit the installation program.

    • x86: Select the network boot or boot from local CD option.

  2. Import the root pool and specify an alternate mount point. For example:
    # zpool import -R /a rpool
  3. Mount the ZFS BE. For example:
    # zfs mount rpool/ROOT/zfsBE
  4. Access the ZFS BE contents from the /a directory.
    # cd /a
  5. Reboot the system.
    # init 6