Oracle Solaris ZFS Administration Guide (Oracle Solaris 10 1/13 Information Library)
Both SPARC based and x86 based systems use the new style of booting with a boot archive, which is a file system image that contains the files required for booting. When a system is booted from a ZFS root file system, the path names of both the boot archive and the kernel file are resolved in the root file system that is selected for booting.
When a system is booted for installation, a RAM disk is used for the root file system during the entire installation process.
Booting from a ZFS file system differs from booting from a UFS file system because with ZFS, the boot device specifier identifies a storage pool, not a single root file system. A storage pool can contain multiple bootable datasets or ZFS root file systems. When booting from ZFS, you must specify a boot device and a root file system within the pool that was identified by the boot device.
By default, the dataset selected for booting is identified by the pool's bootfs property. This default selection can be overridden by specifying an alternate bootable dataset with the boot -Z command.
You can create a mirrored ZFS root pool when the system is installed, or you can attach a disk to create a mirrored ZFS root pool after installation.
Review the following known issues regarding mirrored ZFS root pools:
If you replace a root pool disk by using the zpool replace command, you must install the boot information on the newly replaced disk by using the installboot or installgrub command. If you create a mirrored ZFS root pool with the initial installation method or if you use the zpool attach command to attach a disk to the root pool, then this step is unnecessary. The installboot and installgrub command syntax follows:
sparc# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
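When a mirrored pool has several disks, the appropriate command must be run once per slice. The following sketch only prints the commands it would run, so it is safe to try anywhere; the slice names passed in the demo call are examples, and on a real system you would substitute the slices reported by zpool status:

```shell
#!/bin/sh
# Hypothetical dry-run helper (not part of the guide's procedure):
# print the boot-block installation command for each slice of a
# mirrored ZFS root pool. Commands are echoed, never executed.
print_bootblock_cmds() {
    platform=$(uname -p 2>/dev/null)
    for slice in "$@"; do
        if [ "$platform" = sparc ]; then
            # SPARC systems use installboot with the ZFS boot block.
            echo 'installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk' \
                 "/dev/rdsk/$slice"
        else
            # x86 systems use installgrub with the GRUB stage files.
            echo 'installgrub /boot/grub/stage1 /boot/grub/stage2' \
                 "/dev/rdsk/$slice"
        fi
    done
}

# Example slices only; replace with the slices from 'zpool status'.
print_bootblock_cmds c0t0d0s0 c0t1d0s0
```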
You can boot from different devices in a mirrored ZFS root pool. Depending on the hardware configuration, you might need to update the PROM or the BIOS to specify a different boot device.
For example, you can boot from either disk (c1t0d0s0 or c1t1d0s0) in the following pool:
# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
SPARC: Specify the alternate disk at the ok prompt. For example:
ok boot /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/disk@0
After the system is rebooted, confirm the active boot device. For example:
SPARC# prtconf -vp | grep bootpath
    bootpath:  '/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/disk@0,0:a'
x86: Select an alternate disk in the mirrored ZFS root pool from the appropriate BIOS menu.
Then, use syntax similar to the following to confirm that you are booted from the alternate disk:
x86# prtconf -v | sed -n '/bootpath/,/value/p'
        name='bootpath' type=string items=1
            value='/pci@0,0/pci8086,25f8@4/pci108e,286@0/disk@0,0:a'
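Extracting the boot path can also be scripted, for example to compare it against an expected device after a reboot. The following sketch filters canned prtconf-style text so it runs anywhere; on a live system you would pipe the output of prtconf -vp into the same sed filter:

```shell
#!/bin/sh
# Sketch: pull the bootpath value out of 'prtconf -vp' style output.
# The sample string below stands in for live prtconf output.
sample="bootpath:  '/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/disk@0,0:a'"

# Capture the single-quoted value after the 'bootpath:' label.
bootpath=$(printf '%s\n' "$sample" \
    | sed -n "s/.*bootpath:[^']*'\([^']*\)'.*/\1/p")

echo "$bootpath"
```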
On a SPARC based system with multiple ZFS BEs, you can boot from any BE by using the luactivate command.
During the Oracle Solaris OS installation and Live Upgrade process, the default ZFS root file system is automatically designated with the bootfs property.
Multiple bootable datasets can exist within a pool. By default, the bootable dataset entry in the /pool-name/boot/menu.lst file is identified by the pool's bootfs property. However, a menu.lst entry can contain a bootfs command, which specifies an alternate dataset in the pool. In this way, the menu.lst file can contain entries for multiple root file systems within the pool.
When a system is installed with a ZFS root file system or migrated to a ZFS root file system, an entry similar to the following is added to the menu.lst file:
title zfsBE
bootfs rpool/ROOT/zfsBE

title zfs2BE
bootfs rpool/ROOT/zfs2BE
When a new BE is created, the menu.lst file is updated automatically.
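The title/bootfs pairing described above can be listed with a short awk filter. In the following sketch, a here-document stands in for the /pool-name/boot/menu.lst file so the example runs anywhere:

```shell
#!/bin/sh
# Sketch: list each BE title and its bootable dataset from a
# menu.lst-style file. The here-document mimics the entries that the
# installer and Live Upgrade add to /pool-name/boot/menu.lst.
entries=$(awk '/^title/  { title = $2 }
               /^bootfs/ { print title " -> " $2 }' <<'EOF'
title zfsBE
bootfs rpool/ROOT/zfsBE
title zfs2BE
bootfs rpool/ROOT/zfs2BE
EOF
)

echo "$entries"
```

On a real system, the same awk program run against the live menu.lst would show every bootable dataset entry in the pool.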
On a SPARC based system, two ZFS boot options are available:
After the BE is activated, you can use the boot -L command to display a list of bootable datasets within a ZFS pool. Then, you can select one of the bootable datasets in the list. Detailed instructions for booting that dataset are displayed. You can boot the selected dataset by following the instructions.
You can use the boot -Z dataset command to boot a specific ZFS dataset.
Example 4-11 SPARC: Booting From a Specific ZFS Boot Environment
For example, the following lustatus output shows that two ZFS BEs are available:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      no     no        yes    -
zfs2BE                     yes      yes    yes       no     -
If you have multiple ZFS BEs on your SPARC based system, you can use the boot -L command to boot from a BE that is different from the default BE. However, a BE that is booted from a boot -L session is not reset as the default BE nor is the bootfs property updated. If you want to make the BE booted from a boot -L session the default BE, then you must activate it with the luactivate command.
ok boot -L
Rebooting with command: boot -L
Boot device: /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/disk@0  File and args: -L
1 zfsBE
2 zfs2BE
Select environment to boot: [ 1 - 2 ]: 1
To boot the selected entry, invoke:
boot [<root-device>] -Z rpool/ROOT/zfsBE
Program terminated
ok boot -Z rpool/ROOT/zfsBE
Example 4-12 SPARC: Booting a ZFS File System in Failsafe Mode
On a SPARC based system, you can boot from the failsafe archive located in /platform/`uname -i`/failsafe as follows:
ok boot -F failsafe
To boot a failsafe archive from a particular ZFS bootable dataset, use syntax similar to the following:
ok boot -Z rpool/ROOT/zfsBE -F failsafe
The following entries are added to the /pool-name/boot/grub/menu.lst file during the Oracle Solaris OS installation or Live Upgrade process to boot ZFS automatically:
title Solaris 10 1/13 X86
findroot (rootfs0,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive

title Solaris failsafe
findroot (rootfs0,0,a)
kernel /boot/multiboot kernel/unix -s -B console=ttya
module /boot/x86.miniroot-safe
If the device identified by GRUB as the boot device contains a ZFS storage pool, the menu.lst file is used to create the GRUB menu.
On an x86 based system with multiple ZFS BEs, you can select a BE from the GRUB menu. If the root file system corresponding to a menu entry is a ZFS dataset, the -B $ZFS-BOOTFS option is added to that entry's kernel line.
Example 4-13 x86: Booting a ZFS File System
When a system boots from a ZFS file system, the root device is specified by the -B $ZFS-BOOTFS boot parameter. For example:
title Solaris 10 1/13 X86
findroot (pool_rpool,0,a)
kernel /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive

title Solaris failsafe
findroot (pool_rpool,0,a)
kernel /boot/multiboot kernel/unix -s -B console=ttya
module /boot/x86.miniroot-safe
Example 4-14 x86: Booting a ZFS File System in Failsafe Mode
The x86 failsafe archive is /boot/x86.miniroot-safe and can be booted by selecting the Solaris failsafe entry from the GRUB menu. For example:
title Solaris failsafe
findroot (pool_rpool,0,a)
kernel /boot/multiboot kernel/unix -s -B console=ttya
module /boot/x86.miniroot-safe
The best way to change the active boot environment (BE) is to use the luactivate command. If booting the active BE fails due to a bad patch or a configuration error, the only way to boot from a different BE is to select it at boot time. You can select an alternate BE by booting it explicitly from the PROM on a SPARC based system or from the GRUB menu on an x86 based system.
Due to a bug in Live Upgrade in the Solaris 10 10/08 release, the inactive BE might fail to boot because a ZFS dataset or a zone's ZFS dataset in the BE has an invalid mount point. The same bug also prevents the BE from mounting if it has a separate /var dataset.
If a zone's ZFS dataset has an invalid mount point, the mount point can be corrected by performing the following steps.
# zpool import rpool
# zfs list -r -o name,mountpoint rpool/ROOT/s10up
NAME                               MOUNTPOINT
rpool/ROOT/s10up                   /.alt.tmp.b-VP.mnt/
rpool/ROOT/s10up/zones             /.alt.tmp.b-VP.mnt//zones
rpool/ROOT/s10up/zones/zonerootA   /.alt.tmp.b-VP.mnt/zones/zonerootA
The mount point for the root BE (rpool/ROOT/s10up) should be /.
If the boot is failing because of /var mounting problems, look for a similar incorrect temporary mount point for the /var dataset.
# zfs inherit -r mountpoint rpool/ROOT/s10up
# zfs set mountpoint=/ rpool/ROOT/s10up
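The stale temporary mount points can be detected mechanically before running the inherit and set commands above. The following sketch scans canned zfs list output, standing in for the live command, and reports any dataset still mounted under the temporary /.alt.tmp prefix left behind by the Live Upgrade bug:

```shell
#!/bin/sh
# Sketch: find datasets whose mount point still carries the temporary
# /.alt.tmp prefix. The canned text below mimics the output of
# 'zfs list -r -o name,mountpoint rpool/ROOT/s10up'.
zfs_list="rpool/ROOT/s10up                   /.alt.tmp.b-VP.mnt/
rpool/ROOT/s10up/zones             /.alt.tmp.b-VP.mnt//zones
rpool/ROOT/s10up/zones/zonerootA   /.alt.tmp.b-VP.mnt/zones/zonerootA"

# Column 1 is the dataset name, column 2 its mount point; report any
# dataset whose mount point begins with /.alt.tmp.
stale=$(printf '%s\n' "$zfs_list" \
    | awk '$2 ~ /^\/\.alt\.tmp/ { print $1 }')

echo "$stale"
```

Any dataset reported here would be fixed by the zfs inherit and zfs set commands shown above.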
When the option to boot a specific BE is presented, either at the OpenBoot PROM prompt or in the GRUB menu, select the boot environment whose mount points were just corrected.
Use the following procedure if you need to boot the system so that you can recover from a lost root password or similar problem.
You must boot failsafe mode or boot from alternate media, depending on the severity of the error. In general, you can boot failsafe mode to recover a lost or unknown root password.
If you need to recover a root pool or root pool snapshot, see Recovering the ZFS Root Pool or Root Pool Snapshots.
On a SPARC based system, type the following at the ok prompt:
ok boot -F failsafe
On an x86 system, select failsafe mode from the GRUB menu.
. . .
ROOT/zfsBE was found on rpool.
Do you wish to have it mounted read-write on /a? [y,n,?] y
mounting rpool on /a
Starting shell.
# cd /a/etc
# TERM=vt100
# export TERM
# vi shadow
# init 6
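The vi step above clears or replaces root's password hash in the second field of /a/etc/shadow. As a non-interactive illustration of the same edit, the following sketch operates on a throwaway temporary file with a made-up hash, never on a real shadow file:

```shell
#!/bin/sh
# Sketch: empty the password field of the root entry in a
# shadow-format file, forcing a new password at next login. A temp
# file with fabricated contents stands in for /a/etc/shadow.
shadow=$(mktemp)
cat > "$shadow" <<'EOF'
root:$1$abcdefgh$fabricatedhashvalue:6445::::::
daemon:NP:6445::::::
EOF

# Replace everything between the first and second colon of the root
# line with an empty string, leaving all other accounts untouched.
fixed=$(sed 's/^root:[^:]*:/root::/' "$shadow")

printf '%s\n' "$fixed" | grep '^root'
rm -f "$shadow"
```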
If a problem prevents the system from booting successfully or some other severe problem occurs, you must boot from a network install server or from an Oracle Solaris installation DVD, import the root pool, mount the ZFS BE, and attempt to resolve the issue.
SPARC – Select one of the following boot methods:
ok boot cdrom -s
ok boot net -s
If you don't use the -s option, you must exit the installation program.
x86 – Select the network boot option or boot from local DVD.
# zpool import -R /a rpool
# zfs mount rpool/ROOT/zfsBE
# cd /a
# init 6