This chapter describes how to install and boot a ZFS file system. Migrating a UFS root file system to a ZFS file system by using Solaris Live Upgrade is also covered.
The following sections are provided in this chapter:
Solaris Installation and Solaris Live Upgrade Requirements for ZFS Support
Migrating a UFS Root File System to a ZFS Root File System (Solaris Live Upgrade)
For up-to-date troubleshooting information, go to the following site:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
Starting in the SXCE, build 90 release, you can install and boot from a ZFS root file system in the following ways:
You can perform an initial installation where ZFS is selected as the root file system.
You can use the Solaris Live Upgrade feature to migrate a UFS root file system to a ZFS root file system. In addition, you can use Solaris Live Upgrade to perform the following tasks:
Create a new boot environment within an existing ZFS root pool
Create a new boot environment in a new ZFS root pool
You can use a Solaris JumpStart profile to automatically install a system with a ZFS root file system.
Systems that already have ZFS root file systems can be bfu'd to SXCE, build 90, but bfu does not convert the legacy mounts (of /, /var, and so on) to ZFS mounts. Backwards bfu to releases that don't support ZFS boot is prohibited.
We recommend that you reinstall your systems at some future time to achieve the standard ZFS boot configuration provided in this release, which uses ZFS mounts, not legacy mounts. However, the system continues to boot with legacy mounts, at least for now.
After a SPARC based or an x86 based system is installed with a ZFS root file system or migrated to a ZFS root file system, the system boots automatically from the ZFS root file system. For more information about boot changes, see Booting From a ZFS Root File System.
The following ZFS installation features are provided in this Solaris release:
Using the Solaris interactive text installer, you can install a UFS or a ZFS root file system. The default file system is still UFS for this Solaris release. You can access the interactive text installer option in the following ways:
On a SPARC based system, use the following syntax from the Solaris installation DVD:
ok boot cdrom - text
On a SPARC based system, use the following syntax when booting from the network:
ok boot net - text
On an x86 based system, select the text-mode install option when presented.
Custom JumpStart features enable you to set up a profile to create a ZFS storage pool and designate a bootable ZFS file system.
Using the Solaris Live Upgrade feature, you can migrate a UFS root file system to a ZFS root file system. The lucreate and luactivate commands have been enhanced to support ZFS pools and file systems.
You can set up a mirrored ZFS root pool by selecting two disks during installation. Or, you can attach additional disks after installation to create a mirrored ZFS root pool.
Swap and dump devices are automatically created on ZFS volumes in the ZFS root pool.
The following installation features are not provided in this release:
The GUI installation feature for installing a ZFS root file system is not currently available.
The Solaris Flash installation feature for installing a ZFS root file system is not currently available.
You cannot use the standard upgrade program to upgrade your UFS root file system to a ZFS root file system.
Make sure the following requirements are met before attempting to install a system with a ZFS root file system or attempting to migrate a UFS root file system to a ZFS root file system.
You can install and boot a ZFS root file system or migrate to a ZFS root file system in the following ways:
Install a ZFS root file system – Available starting in the SXCE, build 90 release.
Migrate from a UFS root file system to a ZFS root file system with Solaris Live Upgrade – You must have installed the SXCE, build 90 release or you must have upgraded to the SXCE, build 90 release. For a list of required Solaris Live Upgrade patches, see Required Solaris Live Upgrade Patch Information.
Review the following sections that describe ZFS root pool space and configuration requirements.
The required minimum amount of available pool space for a ZFS root file system is larger than for a UFS root file system because swap and dump devices must be separate devices in a ZFS root environment. By default, swap and dump devices are the same device in a UFS root file system.
When a system is installed or upgraded with a ZFS root file system, the size of the swap area and the dump device are dependent upon the amount of physical memory. The minimum amount of available pool space for a bootable ZFS root file system depends upon the amount of physical memory, the disk space available, and the number of boot environments (BEs) to be created.
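For example, on a system with 4 Gbytes of physical memory, the default swap and dump volumes would each be roughly 2 Gbytes (half of physical memory, within the ranges described below), and a BE migrated from UFS needs approximately 6 Gbytes, so such a system needs a root pool of at least 10 Gbytes before any additional BEs or patches are considered.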
Review the following ZFS storage pool space requirements:
1 Gbyte of memory is recommended to install a ZFS root file system and for better overall ZFS performance.
At least 16 Gbytes of disk space is recommended. The space is consumed as follows:
Swap area and dump device – The default sizes of the swap and dump volumes that are created by the Solaris installation programs are as follows:
Solaris initial installation – The default swap volume size is calculated as half the size of physical memory, generally in the 512 Mbytes to 2 Gbytes range, in the new ZFS BE. You can adjust the swap size during an initial installation.
The default dump volume size is calculated by the kernel based on dumpadm information and the size of physical memory. You can adjust the dump size during an initial installation.
Solaris Live Upgrade – When a UFS root file system is migrated to a ZFS root file system, the default swap volume size for the ZFS boot environment (BE) is calculated as the size of the swap device of the UFS BE. The default swap volume size calculation simply adds the sizes of all the swap devices in the UFS BE, and creates a ZFS volume of that size in the ZFS BE. If no swap devices are defined in the UFS BE, then the default swap volume size is set to 512 Mbytes.
The default dump volume size is set to half the size of physical memory, between 512 Mbytes and 2 Gbytes, in the ZFS BE.
You can adjust the sizes of your swap and dump volumes to sizes of your choosing as long as the new sizes support system operation. For more information, see Adjusting the Sizes of Your ZFS Swap and Dump Devices. A quick verification example follows this list.
Boot environment (BE) – In addition to either new swap and dump space requirements or adjusted swap and dump device sizes, a ZFS BE that is migrated from a UFS BE needs approximately 6 Gbytes. Each ZFS BE that is cloned from another ZFS BE doesn't need additional disk space, but consider that the BE size will increase when patches are applied. All ZFS BEs in the same root pool use the same swap and dump devices.
Solaris OS Components – All subdirectories of the root file system that are part of the OS image, with the exception of /var, must be in the same dataset as the root file system. In addition, all Solaris OS components must reside in the root pool with the exception of the swap and dump devices.
For example, a system with 12 Gbytes of disk space might be too small for a bootable ZFS environment because 2 Gbytes of disk space is needed for each swap and dump device and approximately 6 Gbytes of disk space is needed for the ZFS BE that is migrated from a UFS BE.
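After an installation, you can confirm the swap and dump volume sizes that the installer created. The following check assumes the default pool name rpool; the sizes shown are illustrative:

# zfs get volsize rpool/swap rpool/dump
NAME        PROPERTY  VALUE  SOURCE
rpool/swap  volsize   2G     -
rpool/dump  volsize   1G     -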
Review the following ZFS storage pool configuration requirements:
The disk or disks that are intended for the root pool must have an SMI label. This requirement should be met if the pool is created with disk slices.
The pool must exist either on a disk slice or on disk slices that are mirrored. If you attempt to use an unsupported pool configuration during a Live Upgrade migration, you will see a message similar to the following:
ERROR: ZFS pool name does not support boot environments
For a detailed description of supported ZFS root pool configurations, see Creating a ZFS Root Pool.
On an x86 based system, the disk must contain a Solaris fdisk partition. A Solaris fdisk partition is created automatically when the x86 based system is installed. For more information about Solaris fdisk partitions, see Guidelines for Creating an fdisk Partition in System Administration Guide: Devices and File Systems.
Disks that are designated for booting in a ZFS root pool must be limited to 1 TB in size on both SPARC based and x86 based systems.
Compression can be enabled on the root pool, but only after the root pool is installed. No way exists to enable compression on a root pool during installation. The gzip compression algorithm is not supported on root pools. For an example of enabling compression after installation, see the sketch following this list.
Do not rename the root pool after it is created by an initial installation or after Solaris Live Upgrade migration to a ZFS root file system. Renaming the root pool might cause an unbootable system.
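Because compression can be enabled only after the root pool is installed, a post-installation step similar to the following sketch could be used. It assumes the default root pool name rpool and relies on the default (non-gzip) compression algorithm:

# zfs set compression=on rpool

Setting the property on the pool's top-level dataset lets descendant datasets inherit it. Only newly written blocks are compressed; existing data is not rewritten.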
In this Solaris release, you can perform an initial installation by using the Solaris interactive text installer to create a ZFS storage pool that contains a bootable ZFS root file system. If you have an existing ZFS storage pool that you want to use for your ZFS root file system, then you must use Solaris Live Upgrade to migrate your existing UFS root file system to a ZFS root file system in an existing ZFS storage pool. For more information, see Migrating a UFS Root File System to a ZFS Root File System (Solaris Live Upgrade).
If you already have ZFS storage pools on the system, they are acknowledged by the following message, but remain untouched, unless you select the disks in the existing pools to create the new storage pool.
There are existing ZFS pools available on this system. However, they can only be upgraded using the Live Upgrade tools. The following screens will only allow you to install a ZFS root system, not upgrade one.
Existing pools will be destroyed if any of their disks are selected for the new pool.
Before you begin the initial installation to create a ZFS storage pool, see Solaris Installation and Solaris Live Upgrade Requirements for ZFS Support.
The Solaris interactive text installation process is basically the same as in previous Solaris releases, except that you are prompted to create a UFS or a ZFS root file system. UFS is still the default file system in this release. If you select a ZFS root file system, you are prompted to create a ZFS storage pool. Installing a ZFS root file system involves the following steps:
Select the Solaris interactive installation method because a Solaris Flash installation is not available to create a bootable ZFS root file system.
You can perform a standard upgrade to upgrade an existing bootable ZFS file system that is running the SXCE, build 90 release, but you cannot use this option to create a new bootable ZFS file system. Starting in the Solaris 10 10/08 release, you can migrate a UFS root file system to a ZFS root file system as long as the SXCE, build 90 release is already installed. For more information about migrating to a ZFS root file system, see Migrating a UFS Root File System to a ZFS Root File System (Solaris Live Upgrade).
If you want to create a ZFS root file system, select the ZFS option. For example:
Choose Filesystem Type

  Select the filesystem to use for your Solaris installation

            [ ] UFS
            [X] ZFS
After you select the software to be installed, you are prompted to select the disks to create your ZFS storage pool. This screen is similar to the one in previous Solaris releases:
Select Disks

On this screen you must select the disks for installing Solaris software.
Start by looking at the Suggested Minimum field; this value is the
approximate space needed to install the software you've selected. For ZFS,
multiple disks will be configured as mirrors, so the disk you choose, or the
slice within the disk must exceed the Suggested Minimum value.

NOTE: ** denotes current boot disk

Disk Device                                        Available Space
=============================================================================
[X]  ** c1t1d0                                     69994 MB
[ ]     c1t2d0                                     69994 MB  (F4 to edit)

                                  Maximum Root Size:  69994 MB
                                  Suggested Minimum:   7466 MB
You can select the disk or disks to be used for your ZFS root pool. If you select two disks, a mirrored two-disk configuration is set up for your root pool. Either a two-disk or three-disk mirrored pool is optimal. If you have eight disks and you select all eight disks, those eight disks are used for the root pool as one big mirror. This configuration is not optimal. Another option is to create a mirrored root pool after the initial installation is complete. A RAID-Z pool configuration for the root pool is not supported. For more information about configuring ZFS storage pools, see Replication Features of a ZFS Storage Pool.
If you want to select two disks to create a mirrored root pool, then use the cursor control keys to select the second disk. For example, both c1t1d0 and c1t2d0 are selected for the root pool disks. Both disks must have an SMI label and a slice 0. If the disks are not labeled with an SMI label or do not contain slices, you must exit the installation program, use the format utility to relabel and repartition the disks, and then restart the installation program.
Select Disks

On this screen you must select the disks for installing Solaris software.
Start by looking at the Suggested Minimum field; this value is the
approximate space needed to install the software you've selected. For ZFS,
multiple disks will be configured as mirrors, so the disk you choose, or the
slice within the disk must exceed the Suggested Minimum value.

NOTE: ** denotes current boot disk

Disk Device                                        Available Space
=============================================================================
[X]  ** c1t1d0                                     69994 MB
[X]     c1t2d0                                     69994 MB  (F4 to edit)

                                  Maximum Root Size:  69994 MB
                                  Suggested Minimum:   7466 MB
If the Available Space column identifies 0 MB, this generally indicates that the disk has an EFI label.
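In that case, you can relabel the disk with an SMI label before restarting the installation. The following is a sketch of the interactive format session; the exact prompts can vary:

# format -e
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0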
After you have selected a disk or disks for your ZFS storage pool, a screen that looks similar to the following is displayed:
Configure ZFS Settings

Specify the name of the pool to be created from the disk(s) you have chosen.
Also specify the name of the dataset to be created within the pool that is
to be used as the root directory for the filesystem.

              ZFS Pool Name: rpool
      ZFS Root Dataset Name: snv_109
      ZFS Pool Size (in MB): 69995
  Size of Swap Area (in MB): 2048
  Size of Dump Area (in MB): 1024
        (Pool size must be between 10076 MB and 69995 MB)

                         [X] Keep / and /var combined
                         [ ] Put /var on a separate dataset
From this screen, you can change the name of the ZFS pool, dataset name, pool size, and swap and dump device sizes by moving the cursor control keys through the entries and replacing the default text value with new text. Or, you can accept the default values. In addition, you can modify the way the /var file system is created and mounted.
In this example, the root dataset name is changed to zfsnv109BE.
              ZFS Pool Name: rpool
      ZFS Root Dataset Name: zfsnv109BE
      ZFS Pool Size (in MB): 34731
        (Pool size must be between 6413 MB and 34731 MB)
You can change the installation profile at this final installation screen. For example:
Profile

The information shown below is your profile for installing Solaris software.
It reflects the choices you've made on previous screens.

============================================================================

      Installation Option: Initial
              Boot Device: c1t1d0
    Root File System Type: ZFS
          Client Services: None
                  Regions: North America
            System Locale: C ( C )
                 Software: Solaris 11, Entire Distribution
                Pool Name: rpool
    Boot Environment Name: zfsnv109BE
                Pool Size: 69995 MB
          Devices in Pool: c1t1d0
After the installation is complete, review the resulting ZFS storage pool and file system information. For example:
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t1d0s0  ONLINE       0     0     0

errors: No known data errors
# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
rpool                  10.4G  56.5G    64K  /rpool
rpool/ROOT             7.43G  56.5G    18K  legacy
rpool/ROOT/zfsnv109BE  7.43G  56.5G  7.43G  /
rpool/dump             1.00G  56.5G  1.00G  -
rpool/export             41K  56.5G    21K  /export
rpool/export/home        20K  56.5G    20K  /export/home
rpool/swap                2G  58.5G  5.34M  -
The sample zfs list output identifies the root pool components, such as the rpool/ROOT dataset, which is not accessible by default.
If you initially created your ZFS storage pool with one disk, you can convert it to a mirrored ZFS configuration after the installation completes by using the zpool attach command to attach an available disk. For example:
# zpool attach rpool c1t1d0s0 c1t2d0s0
# zpool status
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h0m, 5.03% done, 0h13m to go
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
            c1t2d0s0  ONLINE       0     0     0

errors: No known data errors
It will take some time to resilver the data to the new disk, but the pool is still available.
Until CR 6668666 is fixed, you will need to install the boot information on the additionally attached disks by using the installboot or installgrub commands if you want to enable booting on the other disks in the mirror. If you create a mirrored ZFS root pool with the initial installation method, then this step is unnecessary. For more information about installing boot information, see Booting From an Alternate Disk in a Mirrored ZFS Root Pool.
For more information about adding or attaching disks, see Managing Devices in ZFS Storage Pools.
If you want to create another ZFS boot environment (BE) in the same storage pool, you can use the lucreate command. In the following example, a new BE named zfsnv1092BE is created. The current BE, zfsnv109BE, which is displayed in the zfs list output, is not acknowledged in the lustatus output until the new BE is created.
# lustatus
ERROR: No boot environments are configured on this system
ERROR: cannot determine list of all boot environment names
If you create a new ZFS BE in the same pool, use syntax similar to the following:
# lucreate -n zfsnv1092BE
Analyzing system configuration.
Comparing source boot environment <zfsnv109BE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <zfsnv1092BE>.
Source boot environment is <zfsnv109BE>.
Creating boot environment <zfsnv1092BE>.
Cloning file systems from boot environment <zfsnv109BE> to create boot environment <zfsnv1092BE>.
Creating snapshot for <rpool/ROOT/zfsnv109BE> on <rpool/ROOT/zfsnv109BE@zfsnv1092BE>.
Creating clone for <rpool/ROOT/zfsnv109BE@zfsnv1092BE> on <rpool/ROOT/zfsnv1092BE>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/zfsnv1092BE>.
Population of boot environment <zfsnv1092BE> successful.
Creation of boot environment <zfsnv1092BE> successful.
Creating a ZFS BE within the same pool uses ZFS clone and snapshot features so the BE is created instantly. For more details about using Solaris Live Upgrade for a ZFS root migration, see Migrating a UFS Root File System to a ZFS Root File System (Solaris Live Upgrade).
Next, verify the new boot environments. For example:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
zfsnv109BE                 yes      yes    yes       no     -
zfsnv1092BE                yes      no     no        yes    -
# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                   10.4G  56.5G    64K  /rpool
rpool/ROOT              7.42G  56.5G    18K  legacy
rpool/ROOT/zfsnv1092BE    97K  56.5G  7.42G  /tmp/.alt.luupdall.3244
rpool/ROOT/zfsnv109BE   7.42G  56.5G  7.42G  /
rpool/dump              1.00G  56.5G  1.00G  -
rpool/export              41K  56.5G    21K  /export
rpool/export/home         20K  56.5G    20K  /export/home
rpool/swap                 2G  58.5G  5.34M  -
If you want to boot from an alternate BE, use the luactivate command. After you activate the BE on a SPARC based system, use the boot -L command to identify the available BEs when the boot device contains a ZFS storage pool. On an x86 based system, identify the BE to be booted from the GRUB menu.
For example, on a SPARC based system, use the boot -L command to display a list of available BEs. To boot from the new BE, zfsnv1092BE, select option 2. Then, type the displayed boot -Z command.
ok boot -L
Rebooting with command: boot -L
Boot device: /pci@1f,0/pci@1/scsi@4,1/disk@2,0:a  File and args: -L
1 zfsnv109BE
2 zfsnv1092BE
Select environment to boot: [ 1 - 2 ]: 2

To boot the selected entry, invoke:
boot [<root-device>] -Z rpool/ROOT/zfsnv1092BE

Program terminated
ok boot -Z rpool/ROOT/zfsnv1092BE
For more information about booting a ZFS file system, see Booting From a ZFS Root File System.
You can create a JumpStart profile to install a ZFS root file system or a UFS root file system.
A ZFS specific profile must contain the new pool keyword. The pool keyword installs a new root pool and a new boot environment is created by default. You can provide the name of the boot environment and can create a separate /var dataset with the bootenv installbe keywords and bename and dataset options.
For general information about using JumpStart features, see Solaris Express Installation Guide: Custom JumpStart and Advanced Installations.
This section provides examples of ZFS specific JumpStart profiles.
The following profile performs an initial installation, specified with install_type initial_install, in a new pool, identified with pool newpool, whose size is automatically set with the auto keyword to the size of the specified disks. The swap area and dump device are also automatically sized with the auto keyword, and the disks c0t0d0s0 and c0t1d0s0 are mirrored (the mirror keyword). Boot environment characteristics are set with the bootenv keyword to install a new BE with the keyword installbe, and a bename named sxce-xx is created.
install_type initial_install
pool newpool auto auto auto mirror c0t0d0s0 c0t1d0s0
bootenv installbe bename sxce-xx
The following profile performs an initial installation with keyword install_type initial_install of the SUNWCall metacluster in a new pool called newpool that is 80 Gbytes in size. This pool is created with a 2-Gbyte swap volume and a 2-Gbyte dump volume, in a mirrored configuration of any two available devices that are large enough to create an 80-Gbyte pool. If two such devices aren't available, the installation fails. Boot environment characteristics are set with the bootenv keyword to install a new BE with the keyword installbe, and a bename named sxce-xx is created.
install_type initial_install
cluster SUNWCall
pool newpool 80g 2g 2g mirror any any
bootenv installbe bename sxce-xx
JumpStart installation syntax supports the ability to preserve or create a UFS file system on a disk that also includes a ZFS root pool. This configuration is not recommended for production systems, but could be used for transition or migration needs on a small system, such as a laptop.
The following keywords are permitted in a ZFS specific profile:
auto

Specifies the size of the slices for the pool, swap volume, or dump volume automatically. The size of the disk is checked to verify that the minimum size can be accommodated. If the minimum size can be accommodated, the largest possible pool size is allocated, given the constraints, such as the size of the disks, preserved slices, and so on.
For example, if you specify c0t0d0s0, the slice is created as large as possible if you specify either the all or auto keywords. Or, you can specify a particular size for the slice or swap or dump volume.
The auto keyword works similarly to the all keyword when used with a ZFS root pool because pools don't have the concept of unused space.
bootenv

This keyword identifies the boot environment characteristics.
The bootenv keyword already exists, but new options are defined. Use the following bootenv keyword syntax to create a bootable ZFS root environment:
bootenv installbe bename BE-name [dataset mount-point]
installbe

Creates a new BE that is identified by the bename option and BE-name entry and installs it.
bename BE-name

Identifies the BE-name to install.

If bename is not used with the pool keyword, then a default BE is created.
dataset mount-point

Use the optional dataset keyword to identify a /var dataset that is separate from the root dataset. The mount-point value is currently limited to /var. For example, a bootenv syntax line for a separate /var dataset would be similar to the following:
bootenv installbe bename zfsroot dataset /var
pool

Defines the new root pool to be created. The following keyword syntax must be provided:

poolname poolsize swapsize dumpsize vdevlist
poolname

Identifies the name of the pool to be created. The pool is created with the specified pool size and with the specified physical devices (vdevs). The poolname option should not be the name of an existing pool; otherwise, the existing pool is overwritten.
poolsize

Specifies the size of the pool to be created. The value can be auto or existing. The auto value allocates the largest possible pool size, given the constraints, such as the size of the disks, preserved slices, and so on. The existing value preserves the boundaries of existing slices by that name and overwrites them. The size is assumed to be in Mbytes, unless specified by g (Gbytes).
swapsize

Specifies the size of the swap volume to be created. The value can be auto, which means the default swap size is used, or size, to specify a size. The size is assumed to be in Mbytes, unless specified by g (Gbytes).
dumpsize

Specifies the size of the dump volume to be created. The value can be auto, which means the default dump size is used, or size, to specify a size. The size is assumed to be in Mbytes, unless specified by g (Gbytes).
vdevlist

Specifies one or more devices that are used to create the pool. The format of the vdevlist is the same as the format of the zpool create command. At this time, only mirrored configurations are supported when multiple devices are specified. Devices in the vdevlist must be slices for the root pool. The any string means that the installation software selects a suitable device.
You can mirror as many disks as you like, but the size of the pool that is created is determined by the smallest of the specified disks. For more information about creating mirrored storage pools, see Mirrored Storage Pool Configuration.
Consider the following issues before starting a JumpStart installation of a bootable ZFS root file system.
You cannot use an existing ZFS storage pool for a JumpStart installation to create a bootable ZFS root file system. You must create a new ZFS storage pool with syntax similar to the following:
pool rpool 20G 4G 4G c0t0d0s0
You must create your pool with disk slices rather than whole disks as described in Solaris Installation and Solaris Live Upgrade Requirements for ZFS Support. For example, the following syntax, which uses whole disks, is not acceptable:
install_type initial_install
cluster SUNWCall
pool rpool all auto auto mirror c0t0d0 c0t1d0
bootenv installbe bename newBE
The following syntax, which uses disk slices, is acceptable:
install_type initial_install
cluster SUNWCall
pool rpool all auto auto mirror c0t0d0s0 c0t1d0s0
bootenv installbe bename newBE
Existing Solaris Live Upgrade features remain available, and features that are related to UFS components work as they did in previous Solaris releases.
The following features are available:
When you migrate your UFS root file system to a ZFS root file system, you must designate an existing ZFS storage pool with the -p option.
If the UFS root file system has components on different slices, they are migrated to the ZFS root pool.
Solaris Live Upgrade can use the ZFS snapshot and clone features when you are creating a new ZFS BE in the same pool, so BE creation is much faster than in previous Solaris releases.
For detailed information about Solaris installation and Solaris Live Upgrade features, see the Solaris Express Installation Guide: Solaris Live Upgrade and Upgrade Planning.
The basic process for migrating a UFS root file system to a ZFS root file system is as follows:
Install the required Solaris Live Upgrade patches, if needed. For a list of patches, see Required Solaris Live Upgrade Patch Information.
Install the SXCE, build 90 release, or use the standard upgrade program to upgrade from a previous SXCE release to the SXCE, build 90 release, on any supported SPARC based or x86 based system.
When you are running the SXCE, build 90 release, create a ZFS storage pool for your ZFS root file system.
Use Solaris Live Upgrade to migrate your UFS root file system to a ZFS root file system.
Activate your ZFS BE with the luactivate command.
For information about ZFS and Solaris Live Upgrade requirements, see Solaris Installation and Solaris Live Upgrade Requirements for ZFS Support.
Starting with the SXCE, build 90 release, the install images are compressed with the 7zip utility. If you want to install the appropriate patches rather than upgrading or reinstalling to build 90, you must apply the following patches for Solaris Live Upgrade to succeed with the SXCE, build 90 release:
137321-01 or later (Solaris 10 SPARC)
137322-01 or later (Solaris 10 x86)
137477-01 or later (Solaris 9 SPARC)
137478-01 or later (Solaris 9 x86)
This chapter doesn't cover Solaris 10 issues, but if you are attempting to use Solaris Live Upgrade from a Solaris 10 release to the Nevada, build 90 release, note that the Solaris 10 5/08 release has included the 7zip utility since build 5. The patches listed above are only necessary if you are running a release older than the Solaris 10 5/08 release.
If you want to use Solaris Live Upgrade from a Solaris 10 system with zones installed, you must also apply the following additional cpio patches:
127922-03 (Solaris 10 SPARC)
127923-03 or later (Solaris 10 x86)
If you want to use Solaris Live Upgrade from Nevada builds before build 79, you must install the SUNWp7zip package from the latest Nevada build.
Review the following list of issues before you use Solaris Live Upgrade to migrate your UFS root file system to a ZFS root file system:
The Solaris installation GUI's standard-upgrade option is not available for migrating from a UFS to a ZFS root file system. To migrate from a UFS file system, you must use Solaris Live Upgrade.
You must create the ZFS storage pool that will be used for booting before the Solaris Live Upgrade operation. In addition, due to current boot limitations, the ZFS root pool must be created with slices instead of whole disks. For example:
# zpool create rpool mirror c1t0d0s0 c1t1d0s0
Before you create the new pool, make sure that the disks to be used in the pool have an SMI (VTOC) label instead of an EFI label. If the disk is relabeled with an SMI label, make sure that the labeling process did not change the partitioning scheme. In most cases, all of the disk's capacity should be in the slices that are intended for the root pool.
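One way to check the label before creating the pool is with the prtvtoc command; the device name below is an example:

# prtvtoc /dev/rdsk/c1t0d0s0

A disk with an SMI (VTOC) label prints a conventional slice-based partition map. An EFI-labeled disk reports a different partition layout and must be relabeled before it can be used in the root pool.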
You cannot use Solaris Live Upgrade to create a UFS BE from a ZFS BE. If you migrate your UFS BE to a ZFS BE and you retain your UFS BE, you can boot from either your UFS BE or your ZFS BE.
Do not rename your ZFS BEs with the zfs rename command because the Solaris Live Upgrade feature is unaware of the name change. Subsequent commands, such as ludelete, will fail. In fact, do not rename your ZFS pools or file systems if you have existing BEs that you want to continue to use.
When creating an alternate BE that is a clone of the primary BE, you cannot use the -f, -x, -y, -Y, and -z options to include or exclude files from the primary BE. However, you can still use the inclusion and exclusion option set in the following cases:
UFS -> UFS
UFS -> ZFS
ZFS -> ZFS (different pool)
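For example, a UFS to ZFS migration could exclude a directory from the new BE. This is a sketch; the BE, pool, and path names are hypothetical:

# lucreate -n zfsBE -p rpool -x /export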
Although you can use Solaris Live Upgrade to upgrade your UFS root file system to a ZFS root file system, you cannot use Solaris Live Upgrade to upgrade non-root or shared file systems.
If you are attempting to use Solaris Live Upgrade from a Solaris 10 release to the Nevada, build 90 release, you might need to do steps similar to the following:
# lucreate -n newBE -m /:cXdYsZ:ufs
# luupgrade -n newBE -u -s </path/to/snv_90>
# luactivate newBE
# init 6
You cannot use the lu command to create or migrate a ZFS root file system.
The following example shows how to create a BE of a ZFS root file system from a UFS root file system. The current BE, ufsnv109BE, which contains a UFS root file system on c1t1d0s0, is identified by the -c option. The new BE, zfsnv109BE, is identified by the -n option. A ZFS storage pool must exist before the lucreate operation. The ZFS storage pool must be created with slices rather than whole disks to be upgradeable and bootable.
# zpool create mpool mirror c1t0d0s0 c1t2d0s0
# lucreate -c ufsnv109BE -n zfsnv109BE -p mpool
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <ufsnv109BE>.
Creating initial configuration for primary boot environment <ufsnv109BE>.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <ufsnv109BE> PBE Boot Device </dev/dsk/c1t1d0s0>.
Comparing source boot environment <ufsnv109BE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfsnv109BE>.
Source boot environment is <ufsnv109BE>.
Creating boot environment <zfsnv109BE>.
Creating file systems on boot environment <zfsnv109BE>.
Creating <zfs> file system for </> in zone <global> on <mpool/ROOT/zfsnv109BE>.
Populating file systems on boot environment <zfsnv109BE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfsnv109BE>.
Creating compare database for file system </mpool/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfsnv109BE>.
Making boot environment <zfsnv109BE> bootable.
Creating boot_archive for /.alt.tmp.b-0ob.mnt
updating /.alt.tmp.b-0ob.mnt/platform/sun4u/boot_archive
Population of boot environment <zfsnv109BE> successful.
Creation of boot environment <zfsnv109BE> successful.
After the lucreate operation completes, use the lustatus command to view the BE status. For example:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsnv109BE                 yes      yes    yes       no     -
zfsnv109BE                 yes      no     no        yes    -
Then, review the list of ZFS components. For example:
# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
mpool                  9.95G  41.2G    21K  /mpool
mpool/ROOT             7.45G  41.2G    19K  /mpool/ROOT
mpool/ROOT/zfsnv109BE  7.45G  41.2G  7.45G  /tmp/.alt.luupdall.5232
mpool/dump                2G  43.2G    16K  -
mpool/swap              517M  41.7G    16K  -
Next, use the luactivate command to activate the new ZFS BE. For example:
# luactivate zfsnv109BE

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Change the boot device back to the original boot environment by typing:

     setenv boot-device /pci@1f,700000/scsi@2/disk@1,0:a

3. Boot to the original boot environment by typing:

     boot

**********************************************************************

Modifying boot archive service
Activation of boot environment <zfsnv109BE> successful.
Next, reboot the system to the ZFS BE.
# init 6
Confirm that the ZFS BE is active.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsnv109BE                 yes      no     no        yes    -
zfsnv109BE                 yes      yes    yes       no     -
If you switch back to the UFS BE, you will need to re-import any ZFS storage pools that were created while the ZFS BE was booted because they are not automatically available in the UFS BE.
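For example, assuming a hypothetical pool named datapool that was created while the ZFS BE was active:

# zpool import datapool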
If the UFS BE is no longer required, you can remove it with the ludelete command.
Creating a ZFS BE from a ZFS BE in the same pool is very quick because this operation uses ZFS snapshot and clone features. If the current BE resides in the same ZFS pool, mpool in this example, the -p option is omitted.
If you have multiple ZFS BEs on a SPARC based system, you can use the boot -L command to identify the available BEs and select a BE from which to boot by using the boot -Z command. On an x86 based system, you can select a BE from the GRUB menu. For more information, see Example 5–6.
# lucreate -n zfsnv1092BE
Analyzing system configuration.
Comparing source boot environment <zfsnv109BE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <zfsnv1092BE>.
Source boot environment is <zfsnv109BE>.
Creating boot environment <zfsnv1092BE>.
Cloning file systems from boot environment <zfsnv109BE> to create boot environment <zfsnv1092BE>.
Creating snapshot for <mpool/ROOT/zfsnv109BE> on <mpool/ROOT/zfsnv109BE@zfsnv1092BE>.
Creating clone for <mpool/ROOT/zfsnv109BE@zfsnv1092BE> on <mpool/ROOT/zfsnv1092BE>.
Setting canmount=noauto for </> in zone <global> on <mpool/ROOT/zfsnv1092BE>.
Population of boot environment <zfsnv1092BE> successful.
Creation of boot environment <zfsnv1092BE> successful.
You can upgrade your ZFS BE to a later build by using the luupgrade command. The following example shows how to upgrade a ZFS BE from build 109 to build 110.
The basic process is:
Create an alternate BE with the lucreate command.
Activate and boot from the alternate BE.
Upgrade your primary ZFS BE with the luupgrade command.
# luupgrade -n zfsnv109BE -u -s /net/install/export/nv/combined.nvs_wos/110

50687 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </net/install/export/nv/combined.nvs_wos/110/Solaris_11/Tools/Boot>
Validating the contents of the media </net/install/export/nv/combined.nvs_wos/110>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <11>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <zfsnv109BE>.
Determining packages to install or upgrade for BE <zfsnv109BE>.
Performing the operating system upgrade of the BE <zfsnv109BE>.
CAUTION: Interrupting this process may leave the boot environment unstable or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Adding operating system patches to the BE <zfsnv109BE>.
The operating system patch installation is complete.
INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot
environment <zfsnv109BE> contains a log of the upgrade operation.
INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot
environment <zfsnv109BE> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment <zfsnv109BE>. Before you activate boot
environment <zfsnv109BE>, determine if any additional system maintenance
is required or if additional media of the software distribution must be
installed.
The Solaris upgrade of the boot environment <zfsnv109BE> is complete.
During an initial installation or a Solaris Live Upgrade from a UFS file system, a swap area is created on a ZFS volume in the ZFS root pool. For example:
# swap -l
swapfile                       dev    swaplo   blocks     free
/dev/zvol/dsk/mpool/swap       253,3      16  8257520  8257520
During an initial installation or a Solaris Live Upgrade from a UFS file system, a dump device is created on a ZFS volume in the ZFS root pool. In general, a dump device requires no administration because it is set up automatically at installation time. For example:
# dumpadm
      Dump content: kernel pages
       Dump device: /dev/zvol/dsk/mpool/dump (dedicated)
Savecore directory: /var/crash/t2000
  Savecore enabled: yes
If you disable and remove the dump device, then you must enable it with the dumpadm command after it is re-created. In most cases, you only have to adjust the size of the dump device by using the zfs command.
For information about the swap and dump volume sizes that are created by the installation programs, see Solaris Installation and Solaris Live Upgrade Requirements for ZFS Support.
Both the swap volume size and the dump volume size can be adjusted during and after installation. For more information, see Adjusting the Sizes of Your ZFS Swap and Dump Devices.
Consider the following issues when working with ZFS swap and dump devices:
Separate ZFS volumes must be used for the swap area and dump devices.
Currently, using a swap file on a ZFS file system is not supported.
If you need to change your swap area or dump device after the system is installed or upgraded, use the swap and dumpadm commands as in previous Solaris releases. For more information, see Chapter 21, Configuring Additional Swap Space (Tasks), in System Administration Guide: Devices and File Systems and Chapter 17, Managing System Crash Information (Tasks), in System Administration Guide: Advanced Administration.
See the following sections for more information:
Because of the differences in the way a ZFS root installation sizes swap and dump devices, you might need to adjust the size of swap and dump devices before, during, or after installation.
You can adjust the size of your swap and dump volumes during an initial installation. For more information, see Example 5–1.
You can create and size your swap and dump volumes before you perform a Solaris Live Upgrade operation. For example:
Create your storage pool.
# zpool create rpool mirror c0t0d0s0 c0t1d0s0
Create your dump device.
# zfs create -V 2G rpool/dump
Enable the dump device.
# dumpadm -d /dev/zvol/dsk/rpool/dump
      Dump content: kernel pages
       Dump device: /dev/zvol/dsk/rpool/dump (dedicated)
Savecore directory: /var/crash/t2000
  Savecore enabled: yes
   Save compressed: on
Select one of the following to create your swap area:
On a SPARC based system, create your swap area. Set the block size to 8 Kbytes.
# zfs create -V 2G -b 8k rpool/swap
On an x86 based system, create your swap area. Set the block size to 4 Kbytes.
# zfs create -V 2G -b 4k rpool/swap
You must enable the swap area when a new swap device is added or changed.
Add an entry for the swap volume to the /etc/vfstab file.
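For example, the enable command and a matching /etc/vfstab entry might look like the following sketch, using the swap volume created above:

# swap -a /dev/zvol/dsk/rpool/swap

/dev/zvol/dsk/rpool/swap   -   -   swap   -   no   -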
Solaris Live Upgrade does not resize existing swap and dump volumes.
You can reset the volsize property of the dump device after a system is installed. For example:
# zfs set volsize=2G rpool/dump
# zfs get volsize rpool/dump
NAME        PROPERTY  VALUE  SOURCE
rpool/dump  volsize   2G     -
You can resize the swap volume, but until CR 6765386 is integrated, it is best to remove the swap device first and then re-create it. For example:
# swap -d /dev/zvol/dsk/rpool/swap
# zfs set volsize=2G rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
For information on removing a swap device on an active system, see this site:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
You can adjust the size of the swap and dump volumes in a JumpStart profile by using profile syntax similar to the following:
install_type initial_install
cluster SUNWCXall
pool rpool 16g 2g 2g c0t0d0s0
In this profile, the two 2g entries set the sizes of the swap area and the dump device to 2 Gbytes each.
If you need more swap space on a system that is already installed, just add another swap volume. For example:
# zfs create -V 2G rpool/swap2
Then, activate the new swap volume. For example:
# swap -a /dev/zvol/dsk/rpool/swap2
# swap -l
swapfile                       dev    swaplo   blocks     free
/dev/zvol/dsk/rpool/swap       256,1      16  1058800  1058800
/dev/zvol/dsk/rpool/swap2      256,3      16  4194288  4194288
Add an entry for the second swap volume to the /etc/vfstab file.
Review the following items if you have problems either capturing a system crash dump or resizing the dump device.
If a crash dump was not created automatically, you can use the savecore command to save the crash dump.
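For example, running savecore with no arguments writes the dump from the configured dump device to the configured savecore directory (a sketch):

# savecore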
A dump device is created automatically when you initially install a ZFS root file system or migrate to a ZFS root file system. In most cases, you will only need to adjust the size of the dump device if the default dump device size is too small. For example, on a large-memory system, the dump device size is increased to 40 Gbytes as follows:
# zfs set volsize=40G rpool/dump
Resizing a large dump device can be a time-consuming process.
If you need to enable a dump device after you create it manually, use syntax similar to the following:
# dumpadm -d /dev/zvol/dsk/rpool/dump
      Dump content: kernel pages
       Dump device: /dev/zvol/dsk/rpool/dump (dedicated)
Savecore directory: /var/crash/t2000
  Savecore enabled: yes
   Save compressed: on
A system with 128 Gbytes or greater memory will need a larger dump device than the dump device that is created by default. If the dump device is too small to capture an existing crash dump, a message similar to the following is displayed:
# dumpadm -d /dev/zvol/dsk/rpool/dump
dumpadm: dump device /dev/zvol/dsk/rpool/dump is too small to hold a system dump
dump size 36255432704 bytes, device size 34359738368 bytes
For information on sizing the swap and dump devices, see Planning for Swap Space in System Administration Guide: Devices and File Systems.
You cannot currently add a dump device to a pool with multiple top-level devices. You will see a message similar to the following:
# dumpadm -d /dev/zvol/dsk/datapool/dump
dump is not supported on device '/dev/zvol/dsk/datapool/dump':
'datapool' has multiple top level vdevs
Add the dump device to the root pool, which cannot have multiple top-level devices.
Both SPARC based and x86 based systems use the new style of booting with a boot archive, which is a file system image that contains the files required for booting. When booting from a ZFS root file system, the path names of both the boot archive and the kernel file are resolved in the root file system that is selected for booting.
When the system is booted for installation, a RAM disk is used for the root file system during the entire installation process.
Booting from a ZFS file system differs from booting from a UFS file system because with ZFS, a device specifier identifies a storage pool, not a single root file system. A storage pool can contain multiple bootable datasets or ZFS root file systems. When booting from ZFS, you must specify a boot device and a root file system within the pool that was identified by the boot device.
By default, the dataset selected for booting is the one identified by the pool's bootfs property. This default selection can be overridden by specifying an alternate bootable dataset that is included in the boot -Z command.
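For example, you can display and change the pool's default bootable dataset with the zpool command; the dataset names below follow the earlier examples in this chapter:

# zpool get bootfs rpool
NAME   PROPERTY  VALUE                  SOURCE
rpool  bootfs    rpool/ROOT/zfsnv109BE  local
# zpool set bootfs=rpool/ROOT/zfsnv1092BE rpool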
You can create a mirrored ZFS root pool when the system is installed, or you can attach a disk to create a mirrored ZFS root pool after installation. Review the following known issues regarding mirrored ZFS root pools:
CR 6668666 – You must install the boot information on the additionally attached disks by using the installboot or installgrub commands if you want to enable booting on the other disks in the mirror. If you create a mirrored ZFS root pool with the initial installation method, then this step is unnecessary. For example, if c0t1d0s0 was the second disk added to the mirror, then the installboot or installgrub command would be as follows:
sparc# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0

x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
You can boot from different devices in a mirrored ZFS root pool. Depending on the hardware configuration, you might need to update the PROM or the BIOS to specify a different boot device.
For example, you can boot from either disk (c1t0d0s0 or c1t1d0s0) in this pool.
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
On a SPARC based system, enter the alternate disk at the ok prompt.
ok boot /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/disk@1
After the system is rebooted, confirm the active boot device. For example:
SPARC# prtconf -vp | grep bootpath
bootpath: '/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/disk@1,0:a'
On an x86 based system, use syntax similar to the following:
x86# prtconf -v | sed -n '/bootpath/,/value/p'
    name='bootpath' type=string items=1
        value='/pci@0,0/pci8086,25f8@4/pci108e,286@0/disk@0,0:a'
On an x86 based system, select an alternate disk in the mirrored ZFS root pool from the appropriate BIOS menu.
On a SPARC based system with multiple ZFS BEs, you can boot from any BE by using the luactivate command.
During the installation and Solaris Live Upgrade process, the ZFS root file system is automatically designated with the bootfs property.
Multiple bootable datasets can exist within a pool. By default, the bootable dataset entry in the /pool-name/boot/menu.lst file is identified by the pool's bootfs property. However, a menu.lst entry can contain a bootfs command, which specifies an alternate dataset in the pool. In this way, the menu.lst file can contain entries for multiple root file systems within the pool.
When a system is installed with a ZFS root file system or migrated to a ZFS root file system, an entry similar to the following is added to the menu.lst file:
title zfsnv109BE
bootfs mpool/ROOT/zfsnv109BE
When a new BE is created, the menu.lst file is updated automatically. However, until CR 6696226 is fixed, you must update the menu.lst file manually after you activate the BE with the luactivate command.
On a SPARC based system, two new boot options are available:
After the BE is activated, you can use the boot -L command to display a list of bootable datasets within a ZFS pool. Then, you can select one of the bootable datasets in the list. Detailed instructions for booting that dataset are displayed. You can boot the selected dataset by following the instructions.
Use the boot -Z dataset command to boot a specific ZFS dataset.
If you have multiple ZFS BEs in a ZFS storage pool on your system's boot device, you can use the luactivate command to specify a default BE.
For example, the following ZFS BEs are available as described by the lustatus output:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
zfsnv109BE                 yes      yes    yes       no     -
zfsnv1092BE                yes      no     no        yes    -
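To make zfsnv1092BE the default BE, activate it and reboot, following the pattern shown earlier in this chapter:

# luactivate zfsnv1092BE
# init 6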
If you have multiple ZFS BEs on your SPARC based system, you can use the boot -L command. For example:
ok boot -L
Rebooting with command: boot -L
Boot device: /pci@1f,0/pci@1/scsi@4,1/disk@2,0:a  File and args: -L
1 zfsnv109BE
2 zfsnv1092BE
Select environment to boot: [ 1 - 2 ]: 2

To boot the selected entry, invoke:
boot [<root-device>] -Z rpool/ROOT/zfsnv1092BE

Program terminated
ok boot -Z rpool/ROOT/zfsnv1092BE
On a SPARC based system, you can boot from the failsafe archive located in /platform/`uname -i`/failsafe as follows:
ok boot -F failsafe
If you want to boot a failsafe archive from a particular ZFS bootable dataset, use syntax similar to the following:
ok boot -Z rpool/ROOT/zfsnv109BE -F failsafe
The following entries are added to the /pool-name/boot/grub/menu.lst file during the installation process or Solaris Live Upgrade operation to boot ZFS automatically:
title Solaris Express Community Edition zfsnv109BE X86
bootfs rpool/ROOT/zfsnv109BE
findroot (pool_rpool,0,a)
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
If the device identified by GRUB as the boot device contains a ZFS storage pool, the menu.lst file is used to create the GRUB menu.
On an x86 based system with multiple ZFS BEs, you can select a BE from the GRUB menu. If the root file system corresponding to this menu entry is a ZFS dataset, the following option is added.
-B $ZFS-BOOTFS
When booting from a ZFS file system, the root device is specified by the -B $ZFS-BOOTFS parameter on either the kernel or module line in the GRUB menu entry. This value, like all parameters specified by the -B option, is passed by GRUB to the kernel. For example:
title Solaris Express Community Edition zfsnv1095BE X86
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/zfsnv109BE
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
The x86 failsafe archive is /boot/x86.miniroot-safe and can be booted by selecting the Solaris failsafe entry from the GRUB menu. For example:
title zfsnv109BE failsafe
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/zfsnv109BE
kernel /boot/platform/i86pc/kernel/unix -s -B console=ttyb
module /boot/x86.miniroot-safe
In the Solaris Express Community Edition, build 100, the fast reboot feature provides the ability to reboot within seconds on x86 based systems. With the fast reboot feature, you can reboot to a new kernel without experiencing the long delays that can be imposed by the BIOS and boot loader. The ability to fast reboot a system drastically reduces down time and improves efficiency.
You must still use the init 6 command when transitioning between BEs with the luactivate command. For other system operations where the reboot command is appropriate, you can use the reboot -f command. For example:
# reboot -f
Use the following procedure if you need to boot the system so that you can recover from a lost root password or similar problem.
You will need to boot in failsafe mode or boot from alternate media, depending on the severity of the error. In general, you can boot in failsafe mode to recover a lost or unknown root password. The OpenSolaris release does not support failsafe mode.
If you need to recover a root pool or root pool snapshot, see Recovering the ZFS Root Pool or Root Pool Snapshots.
Boot failsafe mode.
On a SPARC system:
ok boot -F failsafe
On an x86 system, select failsafe mode from the GRUB prompt.
Mount the ZFS BE on /a when prompted:
.
.
.
ROOT/zfsBE was found on rpool.
Do you wish to have it mounted read-write on /a? [y,n,?] y
mounting rpool on /a
Starting shell.
Change to the /a/etc directory.
# cd /a/etc
If necessary, set the TERM type.
# TERM=vt100
# export TERM
Correct the passwd or shadow file.
# vi shadow
Reboot the system.
# init 6
If a problem prevents the system from booting successfully or some other severe problem occurs, you will need to boot from a network install server or from a Solaris installation CD, import the root pool, mount the ZFS BE, and attempt to resolve the issue.
You can use this procedure on a system that is running the OpenSolaris release to recover a lost root password or similar problem.
Boot from an installation CD or from the network.
On a SPARC system:
ok boot cdrom -s
ok boot net -s
If you don't use the -s option, you will need to exit the installation program.
On an x86 system, select the network boot or boot from local CD option.
Import the root pool and specify an alternate mount point. For example:
# zpool import -R /a rpool
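If you are not sure of the pool name, running the zpool import command with no arguments first lists the pools that are available for import:

# zpool import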
Mount the ZFS BE. For example:
# zfs mount rpool/ROOT/zfsBE
Access the ZFS BE contents from the /a directory.
# cd /a
Reboot the system.
# init 6
The following sections describe how to perform the following tasks:
Replacing a disk in the ZFS root pool
Creating root pool snapshots
Recreating a ZFS root pool and restoring root pool snapshots
Rolling back root pool snapshots from a failsafe boot
You might need to replace a disk in the root pool for the following reasons:
The root pool disk is too small and you want to replace it with a larger disk.
The root pool disk is failing. In a non-redundant pool, if the disk is failing so that the system won't boot, you'll need to boot from an alternate media, such as a CD or the network, before you replace the root pool disk.
In a mirrored root pool configuration, you might be able to attempt a disk replacement without having to boot from alternate media. You can replace a failed disk by using the zpool replace command or, if you have an additional disk, by using the zpool attach command. See the steps below for an example of attaching an additional disk and then detaching the original root pool disk.
Some hardware requires that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:
# zpool offline rpool c1t0d0s0
# cfgadm -c unconfigure c1::dsk/c1t0d0
<Physically remove failed disk c1t0d0>
<Physically insert replacement disk c1t0d0>
# cfgadm -c configure c1::dsk/c1t0d0
# zpool replace rpool c1t0d0s0
# zpool online rpool c1t0d0s0
# zpool status rpool
<Let disk resilver before installing the boot blocks>
SPARC# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0
x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
On some hardware, you do not have to online or reconfigure the replacement disk after it is inserted.
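On such hardware, after the new disk is physically inserted in the same location, the replacement might be as simple as the following sketch, which reuses the example device above:

# zpool replace rpool c1t0d0s0
# zpool status rpool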
Identify the boot device pathnames of the current disk and the new disk so that you can test booting from the replacement disk and, if the replacement disk fails, manually boot from the existing disk. In the example below, the current root pool disk (c1t10d0s0) is:
/pci@8,700000/pci@3/scsi@5/sd@a,0
In the example below, the replacement boot disk (c1t9d0s0) is:
/pci@8,700000/pci@3/scsi@5/sd@9,0
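One way to map a device name to its physical pathname (a sketch; the output is abbreviated) is to list the /dev/dsk entry, which is a symbolic link to the physical device path:

# ls -l /dev/dsk/c1t9d0s0
lrwxrwxrwx  1 root root ... /dev/dsk/c1t9d0s0 ->
../../devices/pci@8,700000/pci@3/scsi@5/sd@9,0:a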
Physically connect the replacement disk.
Confirm that the replacement (new) disk has an SMI label and a slice 0.
For information about relabeling a disk that is intended for the root pool, see the following site:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
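One quick check (a sketch, using the replacement disk from this example) is to print the disk's VTOC; a disk with a valid SMI label reports its slice table, including slice 0:

# prtvtoc /dev/rdsk/c1t9d0s0

If the disk has an EFI label instead, you can put an SMI label on it by using the format -e command.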
Attach the new disk to the root pool.
For example:
# zpool attach rpool c1t10d0s0 c1t9d0s0
Confirm the root pool status.
For example:
# zpool status rpool
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 25.47% done, 0h4m to go
config:

        NAME           STATE     READ WRITE CKSUM
        rpool          ONLINE       0     0     0
          mirror       ONLINE       0     0     0
            c1t10d0s0  ONLINE       0     0     0
            c1t9d0s0   ONLINE       0     0     0

errors: No known data errors
After the resilvering is complete, apply the boot blocks to the new disk.
For example:
On a SPARC based system:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t9d0s0
On an x86 based system:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t9d0s0
Verify that you can boot from the new disk.
For example, on a SPARC based system:
ok boot /pci@8,700000/pci@3/scsi@5/sd@9,0
If the system boots from the new disk, detach the old disk.
For example:
# zpool detach rpool c1t10d0s0
Set up the system to boot automatically from the new disk, either by using the eeprom command, by using the setenv command from the SPARC boot PROM, or by reconfiguring the PC BIOS.
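For example, on a SPARC based system, either of the following commands would make the replacement disk from this example the default boot device:

# eeprom boot-device=/pci@8,700000/pci@3/scsi@5/sd@9,0

ok setenv boot-device /pci@8,700000/pci@3/scsi@5/sd@9,0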
Create root pool snapshots for recovery purposes. The best way to create root pool snapshots is to do a recursive snapshot of the root pool.
The procedure below creates a recursive root pool snapshot and stores the snapshot as a file in a pool on a remote system. In the case of a root pool failure, the remote dataset can be mounted by using NFS and the snapshot file received into the recreated pool. You can also store root pool snapshots as the actual snapshots in a pool on a remote system. Sending and receiving the snapshots from a remote system is a bit more complicated because you must configure ssh or use rsh while the system to be repaired is booted from the Solaris OS miniroot.
For information about remotely storing and recovering root pool snapshots and the most up-to-date information about root pool recovery, go to this site:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
In either method, validating the remotely stored snapshots is an important step in root pool recovery. In addition, the snapshots should be recreated on a routine basis, such as when the pool configuration changes or when the Solaris OS is upgraded.
In the following example, the system is booted from the zfs1009BE boot environment.
Create space on a remote system to store the snapshots.
For example:
remote# zfs create rpool/snaps
Share the space to the local system.
For example:
remote# zfs set sharenfs='rw=local-system,root=local-system' rpool/snaps
# share
-@rpool/snaps   /rpool/snaps   sec=sys,rw=local-system,root=local-system   ""
Create a recursive snapshot of the root pool.
In this example, the system has two BEs, zfsnv109BE and zfsnv1092BE. The active BE is zfsnv109BE.
local# zpool set listsnapshots=on mpool
local# zfs snapshot -r mpool@0311
local# zfs list
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
mpool                                9.98G  41.2G  22.5K  /mpool
mpool@0311                               0      -  22.5K  -
mpool/ROOT                           7.48G  41.2G    19K  /mpool/ROOT
mpool/ROOT@0311                          0      -    19K  -
mpool/ROOT/zfsnv1092BE                 85K  41.2G  7.48G  /tmp/.alt.luupdall.2934
mpool/ROOT/zfsnv1092BE@0311              0      -  7.48G  -
mpool/ROOT/zfsnv109BE                7.48G  41.2G  7.45G  /
mpool/ROOT/zfsnv109BE@zfsnv1092BE    28.7M      -  7.48G  -
mpool/ROOT/zfsnv109BE@0311             58K      -  7.45G  -
mpool/dump                           2.00G  41.2G  2.00G  -
mpool/swap                            517M  41.7G    16K  -
mpool/swap@0311                          0      -    16K  -
Send the root pool snapshots to the remote system.
For example:
local# zfs send -Rv mpool@0311 > /net/remote-system/rpool/snaps/mpool.0311
sending from @ to mpool@0311
sending from @ to mpool/swap@0311
sending from @ to mpool/dump@0311
sending from @ to mpool/ROOT@0311
sending from @ to mpool/ROOT/zfsnv109BE@zfsnv1092BE
sending from @zfsnv1092BE to mpool/ROOT/zfsnv109BE@0311
sending from @ to mpool/ROOT/zfsnv1092BE@0311
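As noted earlier, it is worthwhile to validate the stored stream. One possible check (a sketch, not a required step) is a dry-run receive on the remote system; the -n option reads the stream and prints the dataset names that would be received without creating anything. This assumes a pool named mpool exists on the receiving side; substitute a scratch pool name if it does not:

remote# zfs receive -Fdnv mpool < /rpool/snaps/mpool.0311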
In this scenario, assume the following conditions:
The ZFS root pool cannot be recovered.
The ZFS root pool snapshots are stored on a remote system and are shared over NFS.
All steps below are performed on the local system.
Boot from CD/DVD or the network.
On a SPARC based system, select one of the following boot methods:
ok boot net -s
ok boot cdrom -s
If you don't use the -s option, you'll need to exit the installation program.
On an x86 based system, select the option for booting from the DVD or the network. Then, exit the installation program.
Mount the remote snapshot dataset.
For example:
# mount -F nfs remote-system:/rpool/snaps /mnt
If your network services are not configured, you might need to specify the remote-system's IP address.
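For example (the IP address shown is hypothetical):

# mount -F nfs 192.168.1.10:/rpool/snaps /mnt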
If the root pool disk is replaced and does not contain a disk label that is usable by ZFS, you will have to relabel the disk.
For more information about relabeling the disk, go to the following site:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
Recreate the root pool.
For example:
# zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache mpool c1t0d0s0
Restore the root pool snapshots.
This step might take some time. For example:
# cat /mnt/mpool.0311 | zfs receive -Fdu mpool
Using the -u option means that the restored file systems are not mounted when the zfs receive operation completes.
(Optional) If you want to modify something in the BE, you will need to explicitly mount the datasets, as follows:
Mount the BE components. For example:
# zfs mount mpool/ROOT/zfsnv109BE
Mount everything in the pool that is not part of a BE. For example:
# zfs mount -a mpool
Other BEs are not mounted because they have canmount=noauto set, which suppresses mounting when the zfs mount -a operation is performed.
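You can confirm how the canmount property is set across the BEs with a command like the following (reusing the example pool name):

# zfs get -r canmount mpool/ROOT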
Verify that the root pool datasets are restored.
For example:
# zfs list
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
mpool                                9.98G  41.2G  22.5K  /mpool
mpool@0311                               0      -  22.5K  -
mpool/ROOT                           7.48G  41.2G    19K  /mpool/ROOT
mpool/ROOT@0311                          0      -    19K  -
mpool/ROOT/zfsnv1092BE                 85K  41.2G  7.48G  /tmp/.alt.luupdall.2934
mpool/ROOT/zfsnv1092BE@0311              0      -  7.48G  -
mpool/ROOT/zfsnv109BE                7.48G  41.2G  7.45G  /
mpool/ROOT/zfsnv109BE@zfsnv1092BE    28.7M      -  7.48G  -
mpool/ROOT/zfsnv109BE@0311             58K      -  7.45G  -
mpool/dump                           2.00G  41.2G  2.00G  -
mpool/swap                            517M  41.7G    16K  -
mpool/swap@0311                          0      -    16K  -
Set the bootfs property on the root pool BE.
For example:
# zpool set bootfs=mpool/ROOT/zfsnv109BE mpool
Install the boot blocks on the new disk.
On a SPARC based system:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t5d0s0
On an x86 based system:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t5d0s0
Reboot the system.
# init 6
This procedure assumes that existing root pool snapshots are available. In this example, they are available on the local system. For example:
# zpool set listsnapshots=on rpool
# zfs snapshot -r rpool/ROOT@0311
# zfs list
NAME                          USED  AVAIL  REFER  MOUNTPOINT
rpool                        5.67G  1.04G  21.5K  /rpool
rpool/ROOT                   4.66G  1.04G    18K  /rpool/ROOT
rpool/ROOT@0311                  0      -    18K  -
rpool/ROOT/zfsnv109BE        4.66G  1.04G  4.66G  /
rpool/ROOT/zfsnv109BE@0311       0      -  4.66G  -
rpool/dump                    515M  1.04G   515M  -
rpool/swap                    513M  1.54G    16K  -
Shut down the system and boot failsafe mode.
ok boot -F failsafe
Multiple OS instances were found. To check and mount one of them
read-write under /a, select it from the following list. To not mount
any, select 'q'.

  1  /dev/dsk/c1t1d0s0          Solaris Express Community Edition snv_109 SPARC
  2  rpool:7641827061132033134  ROOT/zfsnv1092BE

Please select a device to be mounted (q for none) [?,??,q]: 2
mounting rpool on /a
Roll back the individual root pool snapshots.
# zfs rollback -rf rpool/ROOT@0311
Reboot back to multiuser mode.
# init 6