This chapter provides an overview of booting a system. The Oracle Solaris boot design, boot processes, and various methods of booting a system in the Oracle Solaris OS are described.
This chapter contains the following information:
For instructions on booting an Oracle Solaris system, see Chapter 12, Booting an Oracle Solaris System (Tasks).
For instructions on booting a Solaris system that does not implement GRUB, see Chapter 16, x86: Booting a System That Does Not Implement GRUB (Tasks).
For what's new in shutting down and booting a system, see What's New in Shutting Down and Booting a System.
For overview information and instructions on administering boot loaders and modifying boot behavior, see Chapter 11, Modifying Oracle Solaris Boot Behavior (Tasks).
For information about managing boot services through the Service Management Facility (SMF), see SMF and Booting.
The information in this section applies to both the SPARC and x86 platforms.
The fundamental Oracle Solaris boot design includes the following characteristics:
Use of a boot archive
The boot archive is a ramdisk image that contains all of the files that are required for booting a system. When you install the Solaris OS, two boot archives are created, one primary archive and one failsafe archive. For more information, see Implementation of the Boot Archives on SPARC.
The bootadm command has also been modified for use on the SPARC platform. This command functions the same way that it does on the x86 platform. The bootadm command handles the details of archive update and verification automatically. During an installation or system upgrade, the bootadm command creates the initial boot archive. During the process of a normal system shutdown, the shutdown process checks the boot archive contents against the root file system. If there are any inconsistencies, the system rebuilds the boot archive to ensure that on reboot, the boot archive and root (/) file system are synchronized. You can also use the bootadm command to manually update the boot archives. See Using the bootadm Command to Manage the Boot Archives.
Some options of the bootadm command cannot be used on SPARC based systems.
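As a sketch of one such maintenance session (run as superuser; the # below is the superuser prompt, and output varies by system), the archives can be checked and rebuilt manually with the bootadm command:

```shell
# Verify that the boot archive is up to date, rebuilding it if necessary.
# bootadm update-archive

# List the files that are currently included in the boot archive.
# bootadm list-archive

# x86 only: list the GRUB menu entries (not supported on SPARC based systems).
# bootadm list-menu
```

If update-archive reports that the archive is out of date and cannot be rebuilt, the system can fall back to the failsafe archive at the next boot.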
Use of a ramdisk image as the root file system during installation and failsafe operations
This process is now the same on the SPARC and x86 platforms. The ramdisk image is derived from the boot archive and is then transferred to the system from the boot device.
On the SPARC platform, the OpenBoot PROM continues to be used to access the boot device and to transfer the boot archive to the system's memory. In contrast, on the x86 platform, the system is initially controlled by the BIOS. The BIOS is used to initiate a transfer of the boot archive from a network device or to run a boot loader. In the Oracle Solaris OS, the x86 boot loader that is used to transfer the boot archive from disk is GRUB. See x86: Boot Processes.
In the case of a software installation, the ramdisk image is the root file system that is used for the entire installation process. Using the ramdisk image for this purpose eliminates the need to boot the system from removable media. The ramdisk file system type can be either a High Sierra File System (HSFS) or UFS.
The boot processes on the SPARC platform have been redesigned and improved to increase commonality with the x86 boot experience. The new SPARC boot design enables the addition of new features, for example new file system types, without necessitating any changes to multiple portions of the boot chain. Changes also include the implementation of boot phase independence.
Highlights of these improvements include:
Commonality in boot processes on the SPARC and x86 platforms
Commonality in the network boot experience
Boot architecture flexibility that enables booting a system from different file system types more easily
The following four boot phases are now independent of each other:
OpenBoot PROM (OBP) phase
The OBP phase of the boot process on the SPARC platform is unchanged.
For disk devices, the firmware driver usually uses the OBP label package's load method, which parses the VTOC label at the beginning of the disk to locate the specified partition. Sectors 1-15 of the partition are then read into the system's memory. This area is commonly called the boot block and usually contains a file system reader.
During this phase the boot archive is read and executed. Note that this is the only phase of the boot process that requires knowledge of the boot file system format. In some instances, the boot archive might also be the installation miniroot. Protocols that are used for the transfer of the boot loader and the boot archive include local disk access, NFS, and HTTP.
The ramdisk is a boot archive that consists of kernel modules and any other components that are required to boot an instance of the Oracle Solaris OS, or an installation miniroot.
The SPARC boot archive is identical to an x86 boot archive. The boot archive file system format is private. Therefore, knowledge of the file system type that is used during a system boot, for example an HSFS or a UFS file system, is not required by the booter or the kernel. The ramdisk extracts the kernel image from the boot archive and then executes it. To minimize the size of the ramdisk, in particular, the installation miniroot that resides in the system's memory, the contents of the miniroot are compressed. This compression is performed on a per-file level and is implemented within the individual file system. The /usr/sbin/fiocompress utility is then used to compress the file and mark the file as compressed.
This utility has a private interface to the file compression file system, dcfs.
The kernel phase is the final stage of the boot process. During this phase, the Oracle Solaris OS is initialized and a minimal root file system is mounted on the ramdisk that was constructed from the boot archive. If the boot archive is the installation miniroot, the OS continues executing the installation process. Otherwise, the ramdisk contains a set of kernel files and drivers that is sufficient to mount the root file system on the specified root device.
The kernel then extracts the remainder of the primary modules from the boot archive, initializes itself, mounts the real root file system, then discards the boot archive.
The ramdisk-based miniroot is packed and unpacked by the root_archive command. Note that only SPARC based systems that support the new boot architecture have the ability to pack and unpack a compressed version of the miniroot.
The Oracle Solaris 10 version of the root_archive tool is not compatible with versions of the tool that are included in other Oracle Solaris releases. Therefore, ramdisk manipulation should only be performed on a system that is running the same release as the archives.
For more information about packing and unpacking the miniroot, see the root_archive(1M) man page.
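One possible session looks like the following (the archive and scratch directory paths here are hypothetical; see root_archive(1M) for the exact subcommands that your release supports). The # is the superuser prompt:

```shell
# Unpack the miniroot archive into a scratch directory for modification.
# /boot/solaris/bin/root_archive unpack /export/boot/miniroot /export/scratch

# ... modify files under /export/scratch as needed ...

# Repack the modified directory into a new miniroot archive.
# /boot/solaris/bin/root_archive pack /export/boot/miniroot.new /export/scratch
```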
To install or upgrade the Oracle Solaris OS, you boot the system either from CD or DVD media or from the network. In both instances, the miniroot's root file system is the ramdisk. This process enables you to eject the Solaris boot CD or DVD without having to reboot the system. Note that the boot archive contains the entire miniroot. The construction of the installation DVD has been modified to use an HSFS boot block. The miniroot is then packed into a single UFS file that is loaded as the ramdisk. Note that the miniroot is used for all OS installation types.
For Oracle Solaris 10 9/10, the minimum memory requirement to install a SPARC based system is 384 Mbytes of memory, which enables a text-based installation only. For x86 based systems, the minimum memory requirement is 768 Mbytes of memory. To run the installation GUI program, a minimum of 768 Mbytes of memory is required.
The network boot server setup process has been modified. The boot server now serves a bootstrap program, as well as the ramdisk, which is downloaded and booted as a single miniroot for all installations, whether booting from CD or DVD, or performing a network installation by using NFS or HTTP. The administration of a network boot server for a network boot over either NFS or the wanboot program (HTTP) remains the same. However, the internal implementation of the network boot process has been modified as follows:
The boot server transfers a bootstrap in the form of a boot archive to the target system.
The target system unpacks the boot archive in a ramdisk.
The boot archive is then mounted as the initial read-only root device.
For more information about booting a SPARC based system, see Booting a SPARC Based System (Task Map).
On SPARC based systems, when you boot the system from the ok prompt, the default boot device is automatically selected. An alternate boot device can be specified by changing the NVRAM variable for the boot-device. You can also specify an alternate boot device or an alternate kernel (boot file) from the command line at boot time. See SPARC: How to Boot a Kernel Other Than the Default Kernel.
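For example, the boot-device variable can be changed at the ok prompt with setenv, or from the running OS with the eeprom command (the device alias disk1 below is hypothetical; use an alias that exists on your system). The # is the superuser prompt:

```shell
# Display the current default boot device from the running OS.
# eeprom boot-device

# Set an alternate default boot device (hypothetical device alias).
# eeprom boot-device=disk1
```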
The boot archives, previously only available on the x86 platform, are now an integral part of the SPARC boot architecture.
The bootadm command has been modified for use on the SPARC platform. This command functions the same as it does on the x86 platform. The bootadm command handles the details of archive update and verification. On the x86 platform the bootadm command updates the GRUB menu during an installation or system upgrade. You can also use the bootadm command to manually manage the boot archives.
The boot archive service is managed by the Service Management Facility (SMF). The service instance for the boot archive is svc:/system/boot-archive:default. To enable, disable, or refresh this service use the svcadm command. For information about managing services by using SMF, see Chapter 18, Managing Services (Overview).
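For example, the service can be inspected and administered as superuser (the # below is the superuser prompt):

```shell
# Check the current state of the boot archive service.
# svcs boot-archive

# Refresh the service instance.
# svcadm refresh svc:/system/boot-archive:default

# Clear the service if it has entered the maintenance state.
# svcadm clear svc:/system/boot-archive:default
```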
On supported Solaris releases, for both SPARC and x86 based systems, there are two kinds of boot archives:
Primary boot archive
Failsafe boot archive
The files that are included in the SPARC boot archives are located in the /platform directory.
The contents of the /platform directory are divided into two groups of files:
Files that are required for a sun4u boot archive
Files that are required for a sun4v boot archive
For information about managing the boot archives, see Managing the Oracle Solaris Boot Archives (Task Map).
The open source GRand Unified Bootloader (GRUB) is the default boot loader on x86 based systems. GRUB is responsible for loading a boot archive into the system's memory. A boot archive is a collection of critical files that is needed during system startup before the root file system is mounted. The boot archive is the interface that is used to boot the Oracle Solaris OS. You can find more information about GRUB at http://www.gnu.org/software/grub/grub.html. See also the grub(5) man page.
After an x86 based system is powered on, the Basic Input/Output System (BIOS) initializes the CPU, the memory, and the platform hardware. When the initialization phase has completed, the BIOS loads the boot loader from the configured boot device and then transfers control of the system to the boot loader. The boot loader is the first software program that runs after you turn on a system. This program starts the boot process.
GRUB implements a menu interface that includes boot options that are predefined in a configuration file called the menu.lst file. GRUB also has a command-line interface that is accessible from the GUI menu interface that can be used to perform various boot functions, including modifying default boot behavior. In the Solaris OS, the GRUB implementation is compliant with the Multiboot Specification, which is described in detail at http://www.gnu.org/software/grub/grub.html.
Because the Oracle Solaris kernel is fully compliant with the Multiboot Specification, you can boot x86 based systems by using GRUB. With GRUB, you can boot various operating systems that are installed on a single x86 based system. For example, you can individually boot Oracle Solaris, Linux, or Windows by selecting the boot entry in the GRUB menu at boot time, or by configuring the menu.lst file to boot a specific OS by default.
Because GRUB understands file systems and kernel executable formats, you can load an operating system without recording the physical position of the kernel on the disk. With GRUB-based booting, the kernel is loaded by specifying its file name, as well as the drive and the partition where the kernel resides. For more information, see Naming Conventions That Are Used for Configuring GRUB.
For step-by-step instructions on booting a system with GRUB, see Booting an x86 Based System by Using GRUB (Task Map).
See also the following man pages:
The findroot command, which functions similarly to the root command previously used by GRUB, has enhanced capabilities for discovering a targeted disk, regardless of the boot device. The findroot command also supports booting from an Oracle Solaris ZFS root file system.
The most common format for the menu.lst entry for this command is as follows:
findroot (rootfs0,0,a)
kernel$ /platform/i86pc/kernel/$ISADIR/unix
module$ /platform/i86pc/$ISADIR/boot_archive
In some Oracle Solaris releases, the entry is as follows:
title Solaris 10 10/08 s10x_u6wos_03 X86
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive

title Solaris failsafe
findroot (pool_rpool,0,a)
kernel /boot/multiboot kernel/unix -s -B console=ttyb
module /boot/x86.miniroot-safe
For more information, see x86: Implementation of the findroot Command.
For GRUB reference information, see Chapter 15, x86: GRUB Based Booting (Reference).
Support for booting from an Oracle Solaris ZFS root file system has been added to Oracle Solaris. The installation software also includes support for system upgrades and patching of systems with ZFS roots. Booting, system operations, and installation procedures have been modified to support this change. Changes to booting include the implementation of a new boot architecture on the SPARC platform. The new SPARC boot design includes feature enhancements that increase commonality with the Solaris x86 boot architecture.
Before using this feature, check the Oracle Solaris 10 9/10 Release Notes to find out about any known issues.
For more information about Oracle Solaris ZFS, including a complete list of terms, see ZFS Terminology in Oracle Solaris ZFS Administration Guide.
Before performing a new installation of Oracle Solaris or using Oracle Solaris Live Upgrade to migrate a UFS root file system to an Oracle Solaris ZFS root file system, make sure the following requirements are met:
Solaris release information:
The ability to install and boot from an Oracle Solaris ZFS root file system is available starting with the Solaris 10 10/08 release. To perform an Oracle Solaris Live Upgrade operation to migrate to a ZFS root file system, you must have installed or upgraded to at least the Solaris 10 10/08 release.
Oracle Solaris ZFS storage pool space requirements:
Because swap and dump devices are not shared in a ZFS root environment, the minimum amount of available pool space that is required for a bootable ZFS root file system is larger than for a bootable UFS root file system.
Swap volume size is calculated at half the size of physical memory, but no more than 2 Gbytes, and no less than 512 Mbytes. Dump volume size is calculated by the kernel, based on dumpadm information and the size of physical memory. You can adjust the size of your swap and dump volumes to sizes of your choosing either in an Oracle Solaris JumpStart profile or during an initial installation, as long as the new sizes support system operation. For more information, see ZFS Support for Swap and Dump Devices in Oracle Solaris ZFS Administration Guide.
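As a sketch, assuming a root pool named rpool with the conventional swap and dump volumes (dataset names vary by installation), the volumes can be inspected and resized with the zfs command as superuser (the # is the superuser prompt):

```shell
# Display the current sizes of the swap and dump volumes.
# zfs get volsize rpool/swap rpool/dump

# Resize the swap volume (the swap device must not be in use).
# zfs set volsize=2G rpool/swap
```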
Booting from an Oracle Solaris ZFS root file system works differently than booting from a UFS file system. Because ZFS applies several new concepts for installation and booting, some basic administrative practices for booting a system have changed. The most significant difference between booting from a ZFS root file system and booting from a UFS root file system is that with ZFS a device identifier does not uniquely identify a root file system, and thus a boot environment (BE). With ZFS, a device identifier uniquely identifies a storage pool. A storage pool can contain multiple bootable datasets (root file systems). Therefore, in addition to specifying a boot device, a root file system within the pool that is identified by the boot device must also be specified.
On an x86 based system, if the boot device identified by GRUB contains a ZFS storage pool, the menu.lst file that is used to create the GRUB menu is located in the dataset at the root of that pool's dataset hierarchy. This dataset has the same name as the pool. There is one such dataset in each pool.
A default bootable dataset is the bootable dataset for the pool that is mounted at boot time and is defined by the root pool's bootfs property. When a device in a root pool is booted, the dataset that is specified by this property is then mounted as the root file system.
The new bootfs pool property is a mechanism that is used by the system to specify the default bootable dataset for a given pool. When a device in a root pool is booted, the dataset that is mounted by default as the root file system is the one that is identified by the bootfs pool property.
On a SPARC based system, the default bootfs pool property can be overridden by using the new -Z dataset option of the boot command.
On an x86 based system, the default bootfs pool property can be overridden by selecting an alternate boot environment in the GRUB menu at boot time.
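For example, assuming a root pool named rpool and a hypothetical BE dataset rpool/ROOT/newBE, the property can be read and set with the zpool command as superuser (the # is the superuser prompt):

```shell
# Display the default bootable dataset for the root pool.
# zpool get bootfs rpool

# Designate a different dataset as the default bootable dataset.
# zpool set bootfs=rpool/ROOT/newBE rpool
```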
On the SPARC platform, the following two boot options are new:
The -L option, which is used to print a list of all the available BEs on a system.
ok boot -L
The -L option is run from the ok prompt. This option only presents the list of available BEs on the system. To boot the system, use the -Z boot option.
The -Z option of the boot command enables you to specify a bootable dataset other than the default dataset that is specified by the bootfs pool property.
ok boot -Z dataset
The list of BEs that is displayed when you use the -L option on a device that has a ZFS boot loader reflects the menu.lst entries that are available on that particular system. Along with the list of available BEs, instructions for selecting a BE and using the -Z option to boot the system are also provided. The dataset that is specified by the bootfs value for the menu item is used for all subsequent files that are read by the booter, for example, the boot archive and various configuration files that are located in the /etc directory. This dataset is then mounted as the root file system.
For step-by-step instructions, see Booting From a Specified ZFS Root File System on a SPARC Based System.
On the x86 platform, a new GRUB keyword, $ZFS-BOOTFS, has been introduced. When booting an x86 based system, if the root file system that corresponds with the GRUB menu entry is a ZFS dataset, the GRUB menu entry contains the -B option with the $ZFS-BOOTFS token, by default. If you install a release that supports a ZFS boot loader, the GRUB menu.lst file is updated with this information automatically. The default bootable dataset is identified by the bootfs property.
On x86 based systems that are running a release that supports a ZFS boot loader, this information is included in the GRUB menu.lst file.
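A representative menu.lst entry follows (the title and the pool name pool_rpool vary by system; compare the findroot entries shown earlier in this chapter):

```
title Oracle Solaris 10 9/10
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive
```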
For step-by-step instructions on booting a system from ZFS, see x86: Booting From a Specified ZFS Root File System on an x86 Based System.