This chapter describes how to install and boot an Oracle Solaris ZFS file system. Migrating a UFS root file system to a ZFS file system by using Oracle Solaris Live Upgrade is also covered.
The following sections are provided in this chapter:
Installing and Booting an Oracle Solaris ZFS Root File System (Overview)
Oracle Solaris Installation and Oracle Solaris Live Upgrade Requirements for ZFS Support
Installing a ZFS Root File System (Oracle Solaris Flash Archive Installation)
Installing a ZFS Root File System (Oracle Solaris JumpStart Installation)
Migrating a UFS Root File System to a ZFS Root File System (Oracle Solaris Live Upgrade)
For a list of known issues in this release, see Oracle Solaris 10 9/10 Release Notes.
For up-to-date troubleshooting information, go to the following site:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
Starting in the Solaris 10 10/08 release, you can install and boot from a ZFS root file system in the following ways:
You can perform an initial installation during which ZFS is selected as the root file system.
You can use Oracle Solaris Live Upgrade to migrate a UFS root file system to a ZFS root file system. In addition, you can use Oracle Solaris Live Upgrade to perform the following tasks:
Create a new boot environment within an existing ZFS root pool.
Create a new boot environment in a new ZFS root pool.
You can use an Oracle Solaris JumpStart profile to automatically install a system with a ZFS root file system.
Starting in the Solaris 10 10/09 release, you can use a JumpStart profile to automatically install a system with a ZFS Flash archive.
After a SPARC based or an x86 based system is installed with or migrated to a ZFS root file system, the system boots automatically from the ZFS root file system. For more information about boot changes, see Booting From a ZFS Root File System.
The following ZFS installation features are provided in this Solaris release:
Using the Solaris interactive text installer, you can install a UFS or a ZFS root file system. The default file system is still UFS for this Solaris release. You can access the interactive text installer option in the following ways:
SPARC: Use the following syntax from the Solaris Installation DVD:
ok boot cdrom - text
SPARC: Use the following syntax when booting from the network:
ok boot net - text
x86: Select the text-mode installation option.
A Custom JumpStart profile provides the following features:
You can set up a profile to create a ZFS storage pool and designate a bootable ZFS file system.
You can set up a profile to identify a flash archive of a ZFS root pool.
Using Oracle Solaris Live Upgrade, you can migrate a UFS root file system to a ZFS root file system. The lucreate and luactivate commands have been enhanced to support ZFS pools and file systems.
You can set up a mirrored ZFS root pool by selecting two disks during installation. Or, you can attach additional disks after installation to create a mirrored ZFS root pool.
Swap and dump devices are automatically created on ZFS volumes in the ZFS root pool.
The following installation features are not provided in this release:
The GUI installation feature for installing a ZFS root file system is not currently available.
The Oracle Solaris Flash installation feature for installing a ZFS root file system is not available by selecting the flash installation option from the initial installation option. However, you can create a JumpStart profile to identify a flash archive of a ZFS root pool. For more information, see Installing a ZFS Root File System (Oracle Solaris Flash Archive Installation).
You cannot use the standard upgrade program to upgrade your UFS root file system to a ZFS root file system.
Ensure that the following requirements are met before attempting to install a system with a ZFS root file system or attempting to migrate a UFS root file system to a ZFS root file system.
You can install and boot a ZFS root file system or migrate to a ZFS root file system in the following ways:
Install a ZFS root file system – Available starting in the Solaris 10 10/08 release.
Migrate from a UFS root file system to a ZFS root file system with Oracle Solaris Live Upgrade – You must have installed at least the Solaris 10 10/08 release or you must have upgraded to at least the Solaris 10 10/08 release.
The following sections describe ZFS root pool space and configuration requirements.
The required minimum amount of available pool space for a ZFS root file system is larger than for a UFS root file system because swap and dump devices must be separate devices in a ZFS root environment. By default, swap and dump devices are the same device in a UFS root file system.
When a system is installed or upgraded with a ZFS root file system, the size of the swap area and the dump device are dependent upon the amount of physical memory. The minimum amount of available pool space for a bootable ZFS root file system depends upon the amount of physical memory, the disk space available, and the number of boot environments (BEs) to be created.
Review the following memory and disk space requirements for ZFS storage pools:
768 MB is the minimum amount of memory required to install a ZFS root file system.
1 GB of memory is recommended for better overall ZFS performance.
At least 16 GB of disk space is recommended. The disk space is consumed as follows:
Swap area and dump device – The default sizes of the swap and dump volumes that are created by the Solaris installation programs are as follows:
Solaris initial installation – In the new ZFS boot environment, the default swap volume size is calculated as half the size of physical memory, generally in the 512 MB to 2 GB range. You can adjust the swap size during an initial installation.
The default dump volume size is calculated by the kernel based on dumpadm information and the size of physical memory. You can adjust the dump size during an initial installation.
Oracle Solaris Live Upgrade – When a UFS root file system is migrated to a ZFS root file system, the default swap volume size for the ZFS BE is calculated as the size of the swap device of the UFS BE. The default swap volume size calculation adds the sizes of all the swap devices in the UFS BE, and creates a ZFS volume of that size in the ZFS BE. If no swap devices are defined in the UFS BE, then the default swap volume size is set to 512 MB.
In the ZFS BE, the default dump volume size is set to half the size of physical memory, between 512 MB and 2 GB.
You can adjust the sizes of your swap and dump volumes to sizes of your choosing as long as the new sizes support system operations. For more information, see Adjusting the Sizes of Your ZFS Swap Device and Dump Device.
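For example, the following sketch shows how the current volume sizes could be checked and then adjusted after installation. The 2g value is only an illustration, and the swap volume must be removed from use with the swap -d command before its size can be changed. This sketch assumes the default rpool/swap and rpool/dump volume names that are created by the installation programs:

# zfs get volsize rpool/swap rpool/dump
# zfs set volsize=2g rpool/dump
# swap -d /dev/zvol/dsk/rpool/swap
# zfs set volsize=2g rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap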
Boot environment (BE) – In addition to either new swap and dump space requirements or adjusted swap and dump device sizes, a ZFS BE that is migrated from a UFS BE requires approximately 6 GB. Each ZFS BE that is cloned from another ZFS BE doesn't require additional disk space, but consider that the BE size will increase when patches are applied. All ZFS BEs in the same root pool use the same swap and dump devices.
Solaris OS Components – All subdirectories of the root file system that are part of the OS image, with the exception of /var, must be in the same dataset as the root file system. In addition, all Solaris OS components must reside in the root pool, with the exception of the swap and dump devices.
Another restriction is that the /var directory or dataset must be a single dataset. For example, you cannot create a descendent /var dataset, such as /var/tmp, if you want to also use Oracle Solaris Live Upgrade to migrate or patch a ZFS BE or create a ZFS flash archive of this pool.
For example, a system with 12 GB of disk space might be too small for a bootable ZFS environment because 2 GB of disk space is required for each swap and dump device and approximately 6 GB of disk space is required for the ZFS BE that is migrated from the UFS BE.
Review the following ZFS storage pool configuration requirements:
The pool that is intended to be the root pool must have an SMI label. This requirement is met if the pool is created with disk slices.
The pool must exist either on a disk slice or on disk slices that are mirrored. If you attempt to use an unsupported pool configuration during an Oracle Solaris Live Upgrade migration, you see a message similar to the following:
ERROR: ZFS pool name does not support boot environments
For a detailed description of supported ZFS root pool configurations, see Creating a ZFS Root Pool.
x86: The disk must contain a Solaris fdisk partition. A Solaris fdisk partition is created automatically when the x86 based system is installed. For more information about Solaris fdisk partitions, see Guidelines for Creating an fdisk Partition in System Administration Guide: Devices and File Systems.
Disks that are designated for booting in a ZFS root pool must be limited to 1 TB in size on both SPARC based and x86 based systems.
Compression can be enabled on the root pool but only after the root pool is installed. No way exists to enable compression on a root pool during installation. The gzip compression algorithm is not supported on root pools.
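For example, after the installation completes, compression could be enabled with a command similar to the following minimal sketch, which assumes the default rpool/ROOT dataset hierarchy. Newly written data is compressed, but data that was written before the property was set is not rewritten:

# zfs set compression=on rpool/ROOT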
Do not rename the root pool after it is created by an initial installation or after Solaris Live Upgrade migration to a ZFS root file system. Renaming the root pool might cause an unbootable system.
In this Solaris release, you can perform an initial installation by using the Solaris interactive text installer to create a ZFS storage pool that contains a bootable ZFS root file system. If you have an existing ZFS storage pool that you want to use for your ZFS root file system, then you must use Oracle Solaris Live Upgrade to migrate your existing UFS root file system to a ZFS root file system in an existing ZFS storage pool. For more information, see Migrating a UFS Root File System to a ZFS Root File System (Oracle Solaris Live Upgrade).
If you will be configuring zones after the initial installation of a ZFS root file system and you plan on patching or upgrading the system, see Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (Solaris 10 10/08) or Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (at Least Solaris 10 5/09).
If you already have ZFS storage pools on the system, they are acknowledged by the following message. However, these pools remain untouched, unless you select the disks in the existing pools to create the new storage pool.
There are existing ZFS pools available on this system.  However, they can only be upgraded
using the Live Upgrade tools.  The following screens will only allow you to install a ZFS
root system, not upgrade one.
Existing pools will be destroyed if any of their disks are selected for the new pool.
Before you begin the initial installation to create a ZFS storage pool, see Oracle Solaris Installation and Oracle Solaris Live Upgrade Requirements for ZFS Support.
The Solaris interactive text installation process is basically the same as in previous Solaris releases, except that you are prompted to create a UFS or a ZFS root file system. UFS is still the default file system in this release. If you select a ZFS root file system, you are prompted to create a ZFS storage pool. The steps for installing a ZFS root file system follow:
Select the Solaris interactive installation method because a Solaris Flash installation is not available to create a bootable ZFS root file system. However, you can create a ZFS flash archive to be used during a JumpStart installation. For more information, see Installing a ZFS Root File System (Oracle Solaris Flash Archive Installation).
Starting in the Solaris 10 10/08 release, you can migrate a UFS root file system to a ZFS root file system as long as at least the Solaris 10 10/08 release is already installed. For more information about migrating to a ZFS root file system, see Migrating a UFS Root File System to a ZFS Root File System (Oracle Solaris Live Upgrade).
To create a ZFS root file system, select the ZFS option. For example:
Choose Filesystem Type

  Select the filesystem to use for your Solaris installation

            [ ] UFS
            [X] ZFS
After you select the software to be installed, you are prompted to select the disks to create your ZFS storage pool. This screen is similar to the one in previous Solaris releases.
Select Disks

On this screen you must select the disks for installing Solaris software.
Start by looking at the Suggested Minimum field; this value is the
approximate space needed to install the software you've selected. For ZFS,
multiple disks will be configured as mirrors, so the disk you choose, or the
slice within the disk must exceed the Suggested Minimum value.
NOTE: ** denotes current boot disk

Disk Device                                              Available Space
=============================================================================
[X]    c1t0d0                                            69994 MB  (F4 to edit)
[ ]    c1t1d0                                            69994 MB
[-]    c1t2d0                                                0 MB
[-]    c1t3d0                                                0 MB

                                  Maximum Root Size:  69994 MB
                                  Suggested Minimum:   8279 MB
You can select the disk or disks to be used for your ZFS root pool. If you select two disks, a mirrored two-disk configuration is set up for your root pool. Either a two-disk or a three-disk mirrored pool is optimal. If you have eight disks and you select all of them, those eight disks are used for the root pool as one big mirror. This configuration is not optimal. Another option is to create a mirrored root pool after the initial installation is complete. A RAID-Z pool configuration for the root pool is not supported. For more information about configuring ZFS storage pools, see Replication Features of a ZFS Storage Pool.
To select two disks to create a mirrored root pool, use the cursor control keys to select the second disk. In the following example, both c1t0d0 and c1t1d0 are selected as the root pool disks. Both disks must have an SMI label and a slice 0. If the disks are not labeled with an SMI label or they don't contain slices, then you must exit the installation program, use the format utility to relabel and repartition the disks, and then restart the installation program.
Select Disks

On this screen you must select the disks for installing Solaris software.
Start by looking at the Suggested Minimum field; this value is the
approximate space needed to install the software you've selected. For ZFS,
multiple disks will be configured as mirrors, so the disk you choose, or the
slice within the disk must exceed the Suggested Minimum value.
NOTE: ** denotes current boot disk

Disk Device                                              Available Space
=============================================================================
[X]    c1t0d0                                            69994 MB
[X]    c1t1d0                                            69994 MB  (F4 to edit)
[-]    c1t2d0                                                0 MB
[-]    c1t3d0                                                0 MB

                                  Maximum Root Size:  69994 MB
                                  Suggested Minimum:   8279 MB
If the Available Space column identifies 0 MB, the disk most likely has an EFI label. If you want to use a disk with an EFI label, you will need to exit the installation program, relabel the disk with an SMI label by using the format -e command, then restart the installation program.
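The following sketch approximates the format -e relabeling dialog. The exact menu text varies by disk and release, so treat this as an illustration rather than an exact transcript:

# format -e c1t2d0
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
Ready to label disk, continue? y
format> quit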
If you do not create a mirrored root pool during installation, you can easily create one after the installation. For information, see How to Create a Mirrored Root Pool (Post Installation).
After you have selected a disk or disks for your ZFS storage pool, a screen similar to the following is displayed:
Configure ZFS Settings

  Specify the name of the pool to be created from the disk(s) you have chosen.
  Also specify the name of the dataset to be created within the pool that is
  to be used as the root directory for the filesystem.

              ZFS Pool Name: rpool
      ZFS Root Dataset Name: s10s_u9wos_08
      ZFS Pool Size (in MB): 69995
  Size of Swap Area (in MB): 2048
  Size of Dump Area (in MB): 1536
        (Pool size must be between 6231 MB and 69995 MB)

                     [X] Keep / and /var combined
                     [ ] Put /var on a separate dataset
From this screen, you can change the name of the ZFS pool, the dataset name, the pool size, and the swap and dump device sizes by moving the cursor control keys through the entries and replacing the default value with new values. Or, you can accept the default values. In addition, you can modify how the /var file system is created and mounted.
In this example, the root dataset name is changed to zfsBE.
              ZFS Pool Name: rpool
      ZFS Root Dataset Name: zfsBE
      ZFS Pool Size (in MB): 69995
  Size of Swap Area (in MB): 2048
  Size of Dump Area (in MB): 1536
        (Pool size must be between 6231 MB and 69995 MB)

                     [X] Keep / and /var combined
                     [ ] Put /var on a separate dataset
You can change the installation profile at this final installation screen. For example:
Profile

  The information shown below is your profile for installing Solaris software.
  It reflects the choices you've made on previous screens.

  ============================================================================

            Installation Option: Initial
                    Boot Device: c1t0d0
          Root File System Type: ZFS
                Client Services: None

                        Regions: North America
                  System Locale: C ( C )

                       Software: Solaris 10, Entire Distribution
                      Pool Name: rpool
          Boot Environment Name: zfsBE
                      Pool Size: 69995 MB
                Devices in Pool: c1t0d0
                                 c1t1d0
After the installation is completed, review the resulting ZFS storage pool and file system information. For example:
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0

errors: No known data errors
# zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
rpool              8.03G  58.9G    96K  /rpool
rpool/ROOT         4.47G  58.9G    21K  legacy
rpool/ROOT/zfsBE   4.47G  58.9G  4.47G  /
rpool/dump         1.50G  58.9G  1.50G  -
rpool/export         44K  58.9G    23K  /export
rpool/export/home    21K  58.9G    21K  /export/home
rpool/swap         2.06G  61.0G    16K  -
The sample zfs list output identifies the root pool components, such as the rpool/ROOT directory, which is not accessible by default.
To create another ZFS boot environment (BE) in the same storage pool, you can use the lucreate command. In the following example, a new BE named zfs2BE is created. The current BE is named zfsBE, as shown in the zfs list output. However, the current BE is not acknowledged in the lustatus output until the new BE is created.
# lustatus
ERROR: No boot environments are configured on this system
ERROR: cannot determine list of all boot environment names
If you create a new ZFS BE in the same pool, use syntax similar to the following:
# lucreate -n zfs2BE
INFORMATION: The current boot environment is not named - assigning name <zfsBE>.
Current boot environment is named <zfsBE>.
Creating initial configuration for primary boot environment <zfsBE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <zfsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <zfsBE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <zfs2BE>.
Source boot environment is <zfsBE>.
Creating boot environment <zfs2BE>.
Cloning file systems from boot environment <zfsBE> to create boot environment <zfs2BE>.
Creating snapshot for <rpool/ROOT/zfsBE> on <rpool/ROOT/zfsBE@zfs2BE>.
Creating clone for <rpool/ROOT/zfsBE@zfs2BE> on <rpool/ROOT/zfs2BE>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/zfs2BE>.
Population of boot environment <zfs2BE> successful.
Creation of boot environment <zfs2BE> successful.
Creating a ZFS BE within the same pool uses ZFS clone and snapshot features to instantly create the BE. For more details about using Oracle Solaris Live Upgrade for a ZFS root migration, see Migrating a UFS Root File System to a ZFS Root File System (Oracle Solaris Live Upgrade).
Next, verify the new boot environments. For example:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      yes    yes       no     -
zfs2BE                     yes      no     no        yes    -
# zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rpool                    8.03G  58.9G    97K  /rpool
rpool/ROOT               4.47G  58.9G    21K  legacy
rpool/ROOT/zfs2BE         116K  58.9G  4.47G  /
rpool/ROOT/zfsBE         4.47G  58.9G  4.47G  /
rpool/ROOT/zfsBE@zfs2BE  75.5K      -  4.47G  -
rpool/dump               1.50G  58.9G  1.50G  -
rpool/export               44K  58.9G    23K  /export
rpool/export/home          21K  58.9G    21K  /export/home
rpool/swap               2.06G  61.0G    16K  -
To boot from an alternate BE, use the luactivate command. After you activate the BE on a SPARC based system, use the boot -L command to identify the available BEs when the boot device contains a ZFS storage pool. When booting from an x86 based system, identify the BE to be booted from the GRUB menu.
For example, on a SPARC based system, use the boot -L command to display a list of available BEs. To boot from the new BE, zfs2BE, select option 2. Then, type the displayed boot -Z command.
ok boot -L
Executing last command: boot -L
Boot device: /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/disk@0  File and args: -L
1 zfsBE
2 zfs2BE
Select environment to boot: [ 1 - 2 ]: 2

To boot the selected entry, invoke:
boot [<root-device>] -Z rpool/ROOT/zfs2BE

ok boot -Z rpool/ROOT/zfs2BE
For more information about booting a ZFS file system, see Booting From a ZFS Root File System.
If you did not create a mirrored ZFS root pool during installation, you can easily create one after the installation.
For information about replacing a disk in root pool, see How to Replace a Disk in the ZFS Root Pool.
Display your current root pool status.
# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t0d0s0  ONLINE       0     0     0

errors: No known data errors
Attach a second disk to configure a mirrored root pool.
# zpool attach rpool c1t0d0s0 c1t1d0s0
Please be sure to invoke installboot(1M) to make 'c1t1d0s0' bootable.
Make sure to wait until resilver is done before rebooting.
View the root pool status to confirm that resilvering is complete.
# zpool status rpool
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h1m, 24.26% done, 0h3m to go
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0  3.18G resilvered

errors: No known data errors
In the above output, the resilvering process is not complete. Resilvering is complete when you see messages similar to the following:
scrub: resilver completed after 0h10m with 0 errors on Thu Mar 11 11:27:22 2010
Apply boot blocks to the second disk after resilvering is complete.
sparc# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
Verify that you can boot successfully from the second disk.
Set up the system to boot automatically from the new disk, either by using the eeprom command or the setenv command from the SPARC boot PROM, or by reconfiguring the PC BIOS.
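For example, on a SPARC based system, either of the following commands might be used. The disk1 alias is a hypothetical device alias for the second disk, so substitute the alias or full device path that is appropriate for your system:

# eeprom boot-device="disk1 disk0"
ok setenv boot-device disk1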
Starting in the Solaris 10 10/09 release, you can create a flash archive on a system that is running a UFS root file system or a ZFS root file system. A flash archive of a ZFS root pool contains the entire pool hierarchy, except for the swap and dump volumes, and any excluded datasets. The swap and dump volumes are created when the flash archive is installed. You can use the flash archive installation method as follows:
Generate a flash archive that can be used to install and boot a system with a ZFS root file system.
Perform a JumpStart installation of a system by using a ZFS flash archive. Creating a ZFS flash archive clones an entire root pool, not individual boot environments. Individual datasets within the pool can be excluded by using the flarcreate and flar commands' -D option.
Review the following limitations before you consider installing a system with a ZFS flash archive:
Only a JumpStart installation of a ZFS flash archive is supported. You cannot use the interactive installation option of a flash archive to install a system with a ZFS root file system. Nor can you use a flash archive to install a ZFS BE with Oracle Solaris Live Upgrade.
You can only install a flash archive on a system that has the same architecture as the system on which you created the ZFS flash archive. For example, an archive that is created on a sun4u system cannot be installed on a sun4v system.
Only a full initial installation of a ZFS flash archive is supported. You cannot install a differential flash archive of a ZFS root file system, nor can you install a hybrid UFS/ZFS archive.
Existing UFS flash archives can still only be used to install a UFS root file system. A ZFS flash archive can only be used to install a ZFS root file system.
Although the entire root pool, minus any explicitly excluded datasets, is archived and installed, only the ZFS BE that is booted when the archive is created is usable after the flash archive is installed. However, the flarcreate or flar command's -R rootdir option can be used to archive a root pool other than the one that is currently booted.
The name of the root pool that is created when a flash archive is installed must match the root pool name on the master system. The root pool name that is used to create the flash archive is the name that is assigned to the new pool that is created. Changing the pool name is not supported.
The flarcreate and flar command options used to include and exclude individual files are not supported in a ZFS flash archive. You can only exclude entire datasets from a ZFS flash archive.
The flar info command is not supported for a ZFS flash archive. For example:
# flar info -l zfs10u8flar
ERROR: archive content listing not supported for zfs archives.
After a master system is installed with or upgraded to at least the Solaris 10 10/09 release, you can create a ZFS flash archive to be used to install a target system. The basic process follows:
Install or upgrade to at least the Solaris 10 10/09 release on the master system. Add any customizations that you want.
Create the ZFS flash archive with the flarcreate command on the master system. All datasets in the root pool, except for the swap and dump volumes, are included in the ZFS flash archive.
Create a JumpStart profile to include the flash archive information on the installation server.
Install the ZFS flash archive on the target system.
The following archive options are supported for installing a ZFS root pool with a flash archive:
Use the flarcreate or flar command to create a flash archive from the specified ZFS root pool. If not specified, a flash archive of the default root pool is created.
Use flarcreate -D dataset to exclude the specified datasets from the flash archive. This option can be used multiple times to exclude multiple datasets.
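For example, the following sketch creates an archive that excludes two datasets. The rpool/export and rpool/export/home dataset names are assumptions based on the default installation layout:

# flarcreate -n zfsBE -D rpool/export -D rpool/export/home zfs10upflar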
After a ZFS flash archive is installed, the system is configured as follows:
The entire dataset hierarchy that existed on the system where the flash archive was created is recreated on the target system, minus any datasets that were specifically excluded at the time of archive creation. The swap and dump volumes are not included in the flash archive.
The root pool has the same name as the pool that was used to create the archive.
The boot environment that was active when the flash archive was created is the active and default BE on the deployed systems.
After the master system is installed or upgraded to at least the Solaris 10 10/09 release, create a flash archive of the ZFS root pool. For example:
# flarcreate -n zfsBE zfs10upflar
Full Flash
Checking integrity...
Integrity OK.
Running precreation scripts...
Precreation scripts done.
Determining the size of the archive...
The archive will be approximately 4.94GB.
Creating the archive...
Archive creation complete.
Running postcreation scripts...
Postcreation scripts done.

Running pre-exit scripts...
Pre-exit scripts done.
On the system that will be used as the installation server, create a JumpStart profile as you would to install any system. For example, the following profile is used to install the zfs10upflar archive.
install_type flash_install
archive_location nfs system:/export/jump/zfs10upflar
partitioning explicit
pool rpool auto auto auto mirror c0t1d0s0 c0t0d0s0
You can create a JumpStart profile to install a ZFS root file system or a UFS root file system.
A ZFS specific profile must contain the new pool keyword. The pool keyword installs a new root pool, and a new boot environment is created by default. You can provide the name of the boot environment as well as create a separate /var dataset with the bootenv installbe keywords and the bename and dataset options.
For general information about using JumpStart features, see Oracle Solaris 10 9/10 Installation Guide: Custom JumpStart and Advanced Installations.
If you will be configuring zones after the JumpStart installation of a ZFS root file system and you plan on patching or upgrading the system, see Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (Solaris 10 10/08) or Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (at Least Solaris 10 5/09).
The following keywords are permitted in a ZFS specific profile:
auto – Automatically specifies the size of the slices for the pool, swap volume, or dump volume. The size of the disk is checked to verify that the minimum size can be accommodated. If the minimum size can be accommodated, the largest possible pool size is allocated, given the constraints, such as the size of the disks, preserved slices, and so on.
For example, if you specify c0t0d0s0, the root pool slice is created as large as possible if you specify either the all or auto keywords. Or, you can specify a particular size for the slice, swap volume, or dump volume.
The auto keyword works similarly to the all keyword when used with a ZFS root pool because pools don't have unused disk space.
bootenv – Identifies the boot environment characteristics.
Use the following bootenv keyword syntax to create a bootable ZFS root environment:
bootenv installbe bename BE-name [dataset mount-point]
installbe – Creates a new BE that is identified by the bename option and BE-name entry and installs it.
bename BE-name – Identifies the BE-name to install. If bename is not used with the pool keyword, then a default BE is created.
dataset mount-point – Use the optional dataset keyword to identify a /var dataset that is separate from the root dataset. The mount-point value is currently limited to /var. For example, a bootenv syntax line for a separate /var dataset would be similar to the following:
bootenv installbe bename zfsroot dataset /var
pool – Defines the new root pool to be created. The following keyword syntax must be provided:
pool poolname poolsize swapsize dumpsize vdevlist
poolname – Identifies the name of the pool to be created. The pool is created with the specified pool size and with the specified physical devices (vdevs). The poolname value should not identify the name of an existing pool; otherwise, the existing pool is overwritten.
poolsize – Specifies the size of the pool to be created. The value can be auto or existing. The auto value allocates the largest possible pool size, given the constraints, such as the size of the disks, preserved slices, and so on. The existing value means that the boundaries of existing slices by that name are preserved and not overwritten. The size is assumed to be in MB, unless specified by g (GB).
swapsize – Specifies the size of the swap volume to be created. The auto value means that the default swap size is used. You can specify a particular size with a size value. The size is assumed to be in MB, unless specified by g (GB).
dumpsize – Specifies the size of the dump volume to be created. The auto value means that the default dump size is used. You can specify a particular size with a size value. The size is assumed to be in MB, unless specified by g (GB).
vdevlist – Specifies one or more devices that are used to create the pool. The format of vdevlist is the same as the format of the zpool create command. At this time, only mirrored configurations are supported when multiple devices are specified. Devices in vdevlist must be slices for the root pool. The any value means that the installation software selects a suitable device.
You can mirror as many disks as you like, but the size of the pool that is created is determined by the smallest of the specified disks. For more information about creating mirrored storage pools, see Mirrored Storage Pool Configuration.
This section provides examples of ZFS specific JumpStart profiles.
The following profile performs an initial installation, specified with install_type initial_install, in a new pool, identified with pool newpool, whose size is automatically set with the auto keyword to the size of the specified disks. The swap area and dump device are automatically sized with the auto keyword in a mirrored configuration of disks (with the mirror keyword and the disks specified as c0t0d0s0 and c0t1d0s0). Boot environment characteristics are set with the bootenv keyword to install a new BE with the keyword installbe, and a BE named s10-xx is created.
install_type initial_install
pool newpool auto auto auto mirror c0t0d0s0 c0t1d0s0
bootenv installbe bename s10-xx
The following profile performs an initial installation, specified with the keyword install_type initial_install, of the SUNWCall metacluster in a new pool called newpool, which is 80 GB in size. This pool is created with a 2-GB swap volume and a 2-GB dump volume, in a mirrored configuration of any two available devices that are large enough to create an 80-GB pool. If two such devices aren't available, the installation fails. Boot environment characteristics are set with the bootenv keyword to install a new BE with the keyword installbe, and a BE named s10-xx is created.
install_type initial_install
cluster SUNWCall
pool newpool 80g 2g 2g mirror any any
bootenv installbe bename s10-xx
JumpStart installation syntax enables you to preserve or create a UFS file system on a disk that also includes a ZFS root pool. This configuration is not recommended for production systems, but could be used for transition or migration needs on a small system, such as a laptop.
Consider the following issues before starting a JumpStart installation of a bootable ZFS root file system:
You cannot use an existing ZFS storage pool for a JumpStart installation to create a bootable ZFS root file system. You must create a new ZFS storage pool with syntax similar to the following:
pool rpool 20G 4G 4G c0t0d0s0
You must create your pool with disk slices rather than with whole disks as described in Oracle Solaris Installation and Oracle Solaris Live Upgrade Requirements for ZFS Support. For example, the mirror c0t0d0 c0t1d0 syntax in the following example is not acceptable because it specifies whole disks:
install_type initial_install
cluster SUNWCall
pool rpool all auto auto mirror c0t0d0 c0t1d0
bootenv installbe bename newBE
The mirror c0t0d0s0 c0t1d0s0 syntax in the following example is acceptable because it specifies disk slices:
install_type initial_install
cluster SUNWCall
pool rpool all auto auto mirror c0t0d0s0 c0t1d0s0
bootenv installbe bename newBE
Oracle Solaris Live Upgrade features related to UFS components are still available, and they work as in previous Solaris releases.
The following features are also available:
When you migrate your UFS root file system to a ZFS root file system, you must designate an existing ZFS storage pool with the -p option.
If the UFS root file system has components on different slices, they are migrated to the ZFS root pool.
You can migrate a system with zones but the supported configurations are limited in the Solaris 10 10/08 release. More zone configurations are supported starting in the Solaris 10 5/09 release. For more information, see the following sections:
If you are migrating a system without zones, see Using Oracle Solaris Live Upgrade to Migrate to a ZFS Root File System (Without Zones).
Oracle Solaris Live Upgrade can use the ZFS snapshot and clone features when you create a new ZFS BE in the same pool. So, BE creation is much faster than in previous Solaris releases.
For detailed information about Oracle Solaris installation and Oracle Solaris Live Upgrade features, see the Oracle Solaris 10 9/10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
The basic process for migrating a UFS root file system to a ZFS root file system follows:
Install the Solaris 10 10/08, Solaris 10 5/09, Solaris 10 10/09, or Oracle Solaris 10 9/10 release or use the standard upgrade program to upgrade from a previous Solaris 10 release on any supported SPARC based or x86 based system.
When you are running at least the Solaris 10 10/08 release, create a ZFS storage pool for your ZFS root file system.
Use Oracle Solaris Live Upgrade to migrate your UFS root file system to a ZFS root file system.
Activate your ZFS BE with the luactivate command.
For information about ZFS and Oracle Solaris Live Upgrade requirements, see Oracle Solaris Installation and Oracle Solaris Live Upgrade Requirements for ZFS Support.
Review the following issues before you use Oracle Solaris Live Upgrade to migrate your UFS root file system to a ZFS root file system:
The Oracle Solaris installation GUI's standard upgrade option is not available for migrating from a UFS to a ZFS root file system. To migrate from a UFS file system, you must use Oracle Solaris Live Upgrade.
You must create the ZFS storage pool that will be used for booting before the Oracle Solaris Live Upgrade operation. In addition, due to current boot limitations, the ZFS root pool must be created with slices instead of whole disks. For example:
# zpool create rpool mirror c1t0d0s0 c1t1d0s0
Before you create the new pool, ensure that the disks to be used in the pool have an SMI (VTOC) label instead of an EFI label. If the disk is relabeled with an SMI label, ensure that the labeling process did not change the partitioning scheme. In most cases, all of the disk's capacity should be in the slices that are intended for the root pool.
You cannot use Oracle Solaris Live Upgrade to create a UFS BE from a ZFS BE. If you migrate your UFS BE to a ZFS BE and you retain your UFS BE, you can boot from either your UFS BE or your ZFS BE.
Do not rename your ZFS BEs with the zfs rename command because the Oracle Solaris Live Upgrade feature cannot detect the name change. Subsequent commands, such as ludelete, will fail. In fact, do not rename your ZFS pools or file systems if you have existing BEs that you want to continue to use.
When creating an alternative BE that is a clone of the primary BE, you cannot use the -f, -x, -y, -Y, and -z options to include or exclude files from the primary BE. You can still use the inclusion and exclusion option set in the following cases:
UFS -> UFS
UFS -> ZFS
ZFS -> ZFS (different pool)
Although you can use Oracle Solaris Live Upgrade to upgrade your UFS root file system to a ZFS root file system, you cannot use Oracle Solaris Live Upgrade to upgrade non-root or shared file systems.
You cannot use the lu command to create or migrate a ZFS root file system.
The following examples show how to migrate a UFS root file system to a ZFS root file system.
If you are migrating or updating a system with zones, see the following sections:
The following example shows how to create a BE of a ZFS root file system from a UFS root file system. The current BE, ufsBE, which contains a UFS root file system, is identified by the -c option. If you do not include the optional -c option, the current BE name defaults to the device name. The new BE, zfsBE, is identified by the -n option. A ZFS storage pool must exist before the lucreate operation.
The ZFS storage pool must be created with slices rather than with whole disks to be upgradeable and bootable. Before you create the new pool, ensure that the disks to be used in the pool have an SMI (VTOC) label instead of an EFI label. If the disk is relabeled with an SMI label, ensure that the labeling process did not change the partitioning scheme. In most cases, all of the disk's capacity should be in the slice that is intended for the root pool.
# zpool create rpool mirror c1t2d0s0 c2t1d0s0
# lucreate -c ufsBE -n zfsBE -p rpool
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <ufsBE>.
Creating initial configuration for primary boot environment <ufsBE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <ufsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <ufsBE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t2d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfsBE>.
Source boot environment is <ufsBE>.
Creating boot environment <zfsBE>.
Creating file systems on boot environment <zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/zfsBE>.
Populating file systems on boot environment <zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </rpool/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfsBE>.
Making boot environment <zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-qD.mnt
updating /.alt.tmp.b-qD.mnt/platform/sun4u/boot_archive
Population of boot environment <zfsBE> successful.
Creation of boot environment <zfsBE> successful.
After the lucreate operation completes, use the lustatus command to view the BE status. For example:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      yes    yes       no     -
zfsBE                      yes      no     no        yes    -
Then, review the list of ZFS components. For example:
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             7.17G  59.8G  95.5K  /rpool
rpool/ROOT        4.66G  59.8G    21K  /rpool/ROOT
rpool/ROOT/zfsBE  4.66G  59.8G  4.66G  /
rpool/dump           2G  61.8G    16K  -
rpool/swap         517M  60.3G    16K  -
Next, use the luactivate command to activate the new ZFS BE. For example:
# luactivate zfsBE
A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE>.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************
.
.
.
Modifying boot archive service
Activation of boot environment <zfsBE> successful.
Next, reboot the system to the ZFS BE.
# init 6
Confirm that the ZFS BE is active.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      no     no        yes    -
zfsBE                      yes      yes    yes       no     -
If you switch back to the UFS BE, you must re-import any ZFS storage pools that were created while the ZFS BE was booted because they are not automatically available in the UFS BE.
If the UFS BE is no longer required, you can remove it with the ludelete command.
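For example, the following sketch imports a pool named datapool (a hypothetical name) that was created while the ZFS BE was active, and then removes the UFS BE:

# zpool import datapool
# ludelete ufsBE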
Creating a ZFS BE from a ZFS BE in the same pool is very quick because this operation uses ZFS snapshot and clone features. If the current BE resides on the same ZFS pool, the -p option is omitted.
If you have multiple ZFS BEs, do the following to select which BE to boot from:
SPARC: You can use the boot -L command to identify the available BEs and select a BE from which to boot by using the boot -Z command.
x86: You can select a BE from the GRUB menu.
For more information, see Example 5–9.
# lucreate -n zfs2BE
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name <zfsBE>.
Current boot environment is named <zfsBE>.
Creating initial configuration for primary boot environment <zfsBE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <zfsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <zfsBE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <zfs2BE>.
Source boot environment is <zfsBE>.
Creating boot environment <zfs2BE>.
Cloning file systems from boot environment <zfsBE> to create boot environment <zfs2BE>.
Creating snapshot for <rpool/ROOT/zfsBE> on <rpool/ROOT/zfsBE@zfs2BE>.
Creating clone for <rpool/ROOT/zfsBE@zfs2BE> on <rpool/ROOT/zfs2BE>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/zfs2BE>.
Population of boot environment <zfs2BE> successful.
Creation of boot environment <zfs2BE> successful.
You can upgrade your ZFS BE with additional packages or patches.
The basic process follows:
Create an alternate BE with the lucreate command.
Activate and boot from the alternate BE.
Upgrade your primary ZFS BE with the luupgrade command to add packages or patches.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      no     no        yes    -
zfs2BE                     yes      yes    yes       no     -
# luupgrade -p -n zfsBE -s /net/system/export/s10up/Solaris_10/Product SUNWchxge

Validating the contents of the media </net/install/export/s10up/Solaris_10/Product>.
Mounting the BE <zfsBE>.
Adding packages to the BE <zfsBE>.

Processing package instance <SUNWchxge> from </net/install/export/s10up/Solaris_10/Product>

Chelsio N110 10GE NIC Driver(sparc) 11.10.0,REV=2006.02.15.20.41
Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.

This appears to be an attempt to install the same architecture and
version of a package which is already installed.  This installation
will attempt to overwrite this package.

Using </a> as the package base directory.
## Processing package information.
## Processing system information.
   4 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user
permission during the process of installing this package.

Do you want to continue with the installation of <SUNWchxge> [y,n,?] y

Installing Chelsio N110 10GE NIC Driver as <SUNWchxge>

## Installing part 1 of 1.
## Executing postinstall script.

Installation of <SUNWchxge> was successful.
Unmounting the BE <zfsBE>.
The package add to the BE <zfsBE> completed.
You can use Oracle Solaris Live Upgrade to migrate a system with zones, but the supported configurations are limited in the Solaris 10 10/08 release. If you are installing or upgrading to at least the Solaris 10 5/09 release, more zone configurations are supported. For more information, see Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (at Least Solaris 10 5/09).
This section describes how to configure and install a system with zones so that it can be upgraded and patched with Oracle Solaris Live Upgrade. If you are migrating to a ZFS root file system without zones, see Using Oracle Solaris Live Upgrade to Migrate to a ZFS Root File System (Without Zones).
If you are migrating a system with zones or if you are configuring a system with zones in the Solaris 10 10/08 release, review the following procedures:
How to Configure a ZFS Root File System With Zone Roots on ZFS (Solaris 10 10/08)
How to Upgrade or Patch a ZFS Root File System With Zone Roots on ZFS (Solaris 10 10/08)
Resolving ZFS Mount-Point Problems That Prevent Successful Booting (Solaris 10 10/08)
Follow these recommended procedures to set up zones on a system with a ZFS root file system to ensure that you can use Oracle Solaris Live Upgrade on that system.
This procedure explains how to migrate a UFS root file system with zones installed to a ZFS root file system and ZFS zone root configuration that can be upgraded or patched.
In the steps that follow, the example pool name is rpool, and the example name of the active boot environment is s10BE*.
Upgrade the system to the Solaris 10 10/08 release if it is running a previous Solaris 10 release.
For more information about upgrading a system that is running the Solaris 10 release, see Oracle Solaris 10 9/10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
Create the root pool.
# zpool create rpool mirror c0t1d0s0 c1t1d0s0
For information about the root pool requirements, see Oracle Solaris Installation and Oracle Solaris Live Upgrade Requirements for ZFS Support.
Confirm that the zones from the UFS environment are booted.
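For example, list the configured zones and confirm that the zones you intend to migrate show a status of running:

# zoneadm list -cv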
Create the new ZFS boot environment.
# lucreate -n s10BE2 -p rpool
This command establishes datasets in the root pool for the new boot environment and copies the current boot environment (including the zones) to those datasets.
Activate the new ZFS boot environment.
# luactivate s10BE2
Now, the system is running a ZFS root file system, but the zone roots on UFS are still in the UFS root file system. The next steps are required to fully migrate the UFS zones to a supported ZFS configuration.
Reboot the system.
# init 6
Migrate the zones to a ZFS BE.
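A minimal sketch of this step follows, assuming a new BE named s10BE3: create another BE within the root pool, activate it, and reboot. The zone datasets are cloned along with the BE.

# lucreate -n s10BE3
# luactivate s10BE3
# init 6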
Resolve any potential mount-point problems.
Due to a bug in Oracle Solaris Live Upgrade, the inactive boot environment might fail to boot because a ZFS dataset or a zone's ZFS dataset in the boot environment has an invalid mount point.
Review the zfs list output.
Look for incorrect temporary mount points. For example:
# zfs list -r -o name,mountpoint rpool/ROOT/s10u6

NAME                               MOUNTPOINT
rpool/ROOT/s10u6                   /.alt.tmp.b-VP.mnt/
rpool/ROOT/s10u6/zones             /.alt.tmp.b-VP.mnt//zones
rpool/ROOT/s10u6/zones/zonerootA   /.alt.tmp.b-VP.mnt/zones/zonerootA
The mount point for the root ZFS BE (rpool/ROOT/s10u6) should be /.
Reset the mount points for the ZFS BE and its datasets.
For example:
# zfs inherit -r mountpoint rpool/ROOT/s10u6
# zfs set mountpoint=/ rpool/ROOT/s10u6
Reboot the system.
When the option to boot a specific boot environment is presented, either in the GRUB menu or at the OpenBoot PROM prompt, select the boot environment whose mount points were just corrected.
This procedure explains how to set up a ZFS root file system and ZFS zone root configuration that can be upgraded or patched. In this configuration, the ZFS zone roots are created as ZFS datasets.
In the steps that follow, the example pool name is rpool, and the example name of the active boot environment is s10BE. The name for the zones dataset can be any legal dataset name. In the following example, the zones dataset name is zones.
Install the system with a ZFS root, either by using the Solaris interactive text installer or the Solaris JumpStart installation method.
For information about installing a ZFS root file system by using the initial installation method or the Solaris JumpStart method, see Installing a ZFS Root File System (Initial Installation) or Installing a ZFS Root File System (Oracle Solaris JumpStart Installation).
Boot the system from the newly created root pool.
Create a dataset for grouping the zone roots.
For example:
# zfs create -o canmount=noauto rpool/ROOT/s10BE/zones
Setting the noauto value for the canmount property prevents the dataset from being mounted other than by the explicit action of Oracle Solaris Live Upgrade and system startup code.
Mount the newly created zones dataset.
# zfs mount rpool/ROOT/s10BE/zones
The dataset is mounted at /zones.
Create and mount a dataset for each zone root.
# zfs create -o canmount=noauto rpool/ROOT/s10BE/zones/zonerootA
# zfs mount rpool/ROOT/s10BE/zones/zonerootA
Set the appropriate permissions on the zone root directory.
# chmod 700 /zones/zonerootA
Configure the zone, setting the zone path as follows:
# zonecfg -z zoneA
zoneA: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zoneA> create
zonecfg:zoneA> set zonepath=/zones/zonerootA
You can enable the zones to boot automatically when the system is booted by using the following syntax:
zonecfg:zoneA> set autoboot=true
Install the zone.
# zoneadm -z zoneA install
Boot the zone.
# zoneadm -z zoneA boot
Use this procedure when you need to upgrade or patch a ZFS root file system with zone roots on ZFS. These updates can either be a system upgrade or the application of patches.
In the steps that follow, newBE is the example name of the boot environment that is upgraded or patched.
Create the boot environment to upgrade or patch.
# lucreate -n newBE
The existing boot environment, including all the zones, is cloned. A dataset is created for each dataset in the original boot environment. The new datasets are created in the same pool as the current root pool.
Select one of the following to upgrade the system or apply patches to the new boot environment:
Upgrade the system.
# luupgrade -u -n newBE -s /net/install/export/s10u7/latest
where the -s option specifies the location of the Solaris installation medium.
Apply patches to the new boot environment.
# luupgrade -t -n newBE -s /patchdir 139147-02 157347-14
Activate the new boot environment.
# luactivate newBE
Boot from the newly activated boot environment.
# init 6
Resolve any potential mount-point problems.
Due to a bug in the Oracle Solaris Live Upgrade feature, the inactive boot environment might fail to boot because a ZFS dataset or a zone's ZFS dataset in the boot environment has an invalid mount point.
Review the zfs list output.
Look for incorrect temporary mount points. For example:
# zfs list -r -o name,mountpoint rpool/ROOT/newBE

NAME                               MOUNTPOINT
rpool/ROOT/newBE                   /.alt.tmp.b-VP.mnt/
rpool/ROOT/newBE/zones             /.alt.tmp.b-VP.mnt/zones
rpool/ROOT/newBE/zones/zonerootA   /.alt.tmp.b-VP.mnt/zones/zonerootA
The mount point for the root ZFS BE (rpool/ROOT/newBE) should be /.
Reset the mount points for the ZFS BE and its datasets.
For example:
# zfs inherit -r mountpoint rpool/ROOT/newBE
# zfs set mountpoint=/ rpool/ROOT/newBE
Reboot the system.
When the option to boot a specific boot environment is presented, either in the GRUB menu or at the OpenBoot PROM prompt, select the boot environment whose mount points were just corrected.
You can use the Oracle Solaris Live Upgrade feature to migrate or upgrade a system with zones starting in the Solaris 10 10/08 release. Additional sparse-root and whole-root zone configurations are supported by Live Upgrade starting in the Solaris 10 5/09 release.
This section describes how to configure a system with zones so that it can be upgraded and patched with Oracle Solaris Live Upgrade starting in the Solaris 10 5/09 release. If you are migrating to a ZFS root file system without zones, see Using Oracle Solaris Live Upgrade to Migrate to a ZFS Root File System (Without Zones).
Consider the following points when using Oracle Solaris Live Upgrade with ZFS and zones starting in at least the Solaris 10 5/09 release:
To use Oracle Solaris Live Upgrade with zone configurations that are supported starting in at least the Solaris 10 5/09 release, you must first upgrade your system to at least the Solaris 10 5/09 release by using the standard upgrade program.
Then, with Oracle Solaris Live Upgrade, you can either migrate your UFS root file system with zone roots to a ZFS root file system or you can upgrade or patch your ZFS root file system and zone roots.
You cannot directly migrate unsupported zone configurations from a previous Solaris 10 release to at least the Solaris 10 5/09 release.
If you are migrating or configuring a system with zones starting in the Solaris 10 5/09 release, review the following information:
Supported ZFS with Zone Root Configuration Information (at Least Solaris 10 5/09)
How to Create a ZFS BE With a ZFS Root File System and a Zone Root (at Least Solaris 10 5/09)
How to Upgrade or Patch a ZFS Root File System With Zone Roots (at Least Solaris 10 5/09)
Review the supported zone configurations before using Oracle Solaris Live Upgrade to migrate or upgrade a system with zones.
Migrate a UFS root file system to a ZFS root file system – The following configurations of zone roots are supported:
In a directory in the UFS root file system
In a subdirectory of a mount point in the UFS root file system
A UFS root file system with a zone root in a UFS root file system directory or in a subdirectory of a UFS root file system mount point, plus a ZFS non-root pool with a zone root
The following UFS/zone configuration is not supported: UFS root file system that has a zone root as a mount point.
Migrate or upgrade a ZFS root file system – The following configurations of zone roots are supported:
In a dataset in the ZFS root pool. In some cases, if a dataset for the zone root is not provided before the Oracle Solaris Live Upgrade operation, a dataset for the zone root (zoneds) will be created by Oracle Solaris Live Upgrade.
In a subdirectory of the ZFS root file system
In a dataset outside of the ZFS root file system
In a subdirectory of a dataset outside of the ZFS root file system
In a dataset in a non-root pool. In the following example, zonepool/zones is a dataset that contains the zone roots, and rpool contains the ZFS BE:
zonepool zonepool/zones zonepool/zones/myzone rpool rpool/ROOT rpool/ROOT/myBE |
Oracle Solaris Live Upgrade snapshots and clones the zones in zonepool and the rpool BE if you use this syntax:
# lucreate -n newBE |
The newBE boot environment in rpool/ROOT/newBE is created. When activated, newBE provides access to the zonepool components.
In the preceding example, if /zonepool/zones were a subdirectory and not a separate dataset, then Live Upgrade would migrate it as a component of the root pool, rpool.
Migration or upgrade information with zones for both UFS and ZFS – Review the following considerations that might affect a migration or an upgrade of either a UFS or a ZFS environment:
If you configured your zones as described in Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (Solaris 10 10/08) in the Solaris 10 10/08 release and have upgraded to at least the Solaris 10 5/09 release, you should be able to migrate to a ZFS root file system or use Oracle Solaris Live Upgrade to upgrade to at least the Solaris 10 5/09 release.
Do not create zone roots in nested directories, for example, zones/zone1 and zones/zone1/zone2. Otherwise, mounting might fail at boot time.
Use this procedure after you have performed an initial installation of at least the Solaris 10 5/09 release to create a ZFS root file system. Also use this procedure after you have used the luupgrade feature to upgrade a ZFS root file system to at least the Solaris 10 5/09 release. A ZFS BE that is created using this procedure can then be upgraded or patched.
In the steps that follow, the example Oracle Solaris 10 9/10 system has a ZFS root file system and a zone root dataset in /rpool/zones. A ZFS BE named zfs2BE is created and can then be upgraded or patched.
Review the existing ZFS file systems.
# zfs list NAME USED AVAIL REFER MOUNTPOINT rpool 7.26G 59.7G 98K /rpool rpool/ROOT 4.64G 59.7G 21K legacy rpool/ROOT/zfsBE 4.64G 59.7G 4.64G / rpool/dump 1.00G 59.7G 1.00G - rpool/export 44K 59.7G 23K /export rpool/export/home 21K 59.7G 21K /export/home rpool/swap 1G 60.7G 16K - rpool/zones 633M 59.7G 633M /rpool/zones |
Ensure that the zones are installed and booted.
# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / native shared 2 zfszone running /rpool/zones native shared |
Create the ZFS BE.
# lucreate -n zfs2BE Analyzing system configuration. No name for current boot environment. INFORMATION: The current boot environment is not named - assigning name <zfsBE>. Current boot environment is named <zfsBE>. Creating initial configuration for primary boot environment <zfsBE>. The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID. PBE configuration successful: PBE name <zfsBE> PBE Boot Device </dev/dsk/c1t0d0s0>. Comparing source boot environment <zfsBE> file systems with the file system(s) you specified for the new boot environment. Determining which file systems should be in the new boot environment. Updating boot environment description database on all BEs. Updating system configuration files. Creating configuration for boot environment <zfs2BE>. Source boot environment is <zfsBE>. Creating boot environment <zfs2BE>. Cloning file systems from boot environment <zfsBE> to create boot environment <zfs2BE>. Creating snapshot for <rpool/ROOT/zfsBE> on <rpool/ROOT/zfsBE@zfs2BE>. Creating clone for <rpool/ROOT/zfsBE@zfs2BE> on <rpool/ROOT/zfs2BE>. Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/zfs2BE>. Population of boot environment <zfs2BE> successful. Creation of boot environment <zfs2BE> successful. |
Activate the ZFS BE.
# lustatus Boot Environment Is Active Active Can Copy Name Complete Now On Reboot Delete Status -------------------------- -------- ------ --------- ------ ---------- zfsBE yes yes yes no - zfs2BE yes no no yes - # luactivate zfs2BE A Live Upgrade Sync operation will be performed on startup of boot environment <zfs2BE>. . . . # init 6 |
Confirm that the ZFS file systems and zones are created in the new BE.
# zfs list NAME USED AVAIL REFER MOUNTPOINT rpool 7.38G 59.6G 98K /rpool rpool/ROOT 4.72G 59.6G 21K legacy rpool/ROOT/zfs2BE 4.72G 59.6G 4.64G / rpool/ROOT/zfs2BE@zfs2BE 74.0M - 4.64G - rpool/ROOT/zfsBE 5.45M 59.6G 4.64G /.alt.zfsBE rpool/dump 1.00G 59.6G 1.00G - rpool/export 44K 59.6G 23K /export rpool/export/home 21K 59.6G 21K /export/home rpool/swap 1G 60.6G 16K - rpool/zones 17.2M 59.6G 633M /rpool/zones rpool/zones-zfsBE 653M 59.6G 633M /rpool/zones-zfsBE rpool/zones-zfsBE@zfs2BE 19.9M - 633M - # zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / native shared - zfszone installed /rpool/zones native shared |
Use this procedure when you need to upgrade or patch a ZFS root file system with zone roots in at least the Solaris 10 5/09 release. These updates can either be a system upgrade or the application of patches.
In the steps that follow, zfs2BE is the example name of the boot environment that is upgraded or patched.
Review the existing ZFS file systems.
# zfs list NAME USED AVAIL REFER MOUNTPOINT rpool 7.38G 59.6G 100K /rpool rpool/ROOT 4.72G 59.6G 21K legacy rpool/ROOT/zfs2BE 4.72G 59.6G 4.64G / rpool/ROOT/zfs2BE@zfs2BE 75.0M - 4.64G - rpool/ROOT/zfsBE 5.46M 59.6G 4.64G / rpool/dump 1.00G 59.6G 1.00G - rpool/export 44K 59.6G 23K /export rpool/export/home 21K 59.6G 21K /export/home rpool/swap 1G 60.6G 16K - rpool/zones 22.9M 59.6G 637M /rpool/zones rpool/zones-zfsBE 653M 59.6G 633M /rpool/zones-zfsBE rpool/zones-zfsBE@zfs2BE 20.0M - 633M - |
Ensure that the zones are installed and booted.
# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / native shared 5 zfszone running /rpool/zones native shared |
Create the ZFS BE to upgrade or patch.
# lucreate -n zfs2BE Analyzing system configuration. Comparing source boot environment <zfsBE> file systems with the file system(s) you specified for the new boot environment. Determining which file systems should be in the new boot environment. Updating boot environment description database on all BEs. Updating system configuration files. Creating configuration for boot environment <zfs2BE>. Source boot environment is <zfsBE>. Creating boot environment <zfs2BE>. Cloning file systems from boot environment <zfsBE> to create boot environment <zfs2BE>. Creating snapshot for <rpool/ROOT/zfsBE> on <rpool/ROOT/zfsBE@zfs2BE>. Creating clone for <rpool/ROOT/zfsBE@zfs2BE> on <rpool/ROOT/zfs2BE>. Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/zfs2BE>. Creating snapshot for <rpool/zones> on <rpool/zones@zfs10092BE>. Creating clone for <rpool/zones@zfs2BE> on <rpool/zones-zfs2BE>. Population of boot environment <zfs2BE> successful. Creation of boot environment <zfs2BE> successful. |
Select one of the following to upgrade the system or apply patches to the new boot environment:
Upgrade the system.
# luupgrade -u -n zfs2BE -s /net/install/export/s10up/latest |
where the -s option specifies the location of the Solaris installation medium.
This process can take a very long time.
For a complete example of the luupgrade process, see Example 5–6.
Apply patches to the new boot environment.
# luupgrade -t -n zfs2BE -s /patchdir patch-id-02 patch-id-04 |
Activate the new boot environment.
# lustatus Boot Environment Is Active Active Can Copy Name Complete Now On Reboot Delete Status -------------------------- -------- ------ --------- ------ ---------- zfsBE yes yes yes no - zfs2BE yes no no yes - # luactivate zfs2BE A Live Upgrade Sync operation will be performed on startup of boot environment <zfs2BE>. . . . |
Boot from the newly activated boot environment.
# init 6 |
In this example, a ZFS BE (zfsBE), which was created on a Solaris 10 10/09 system with a ZFS root file system and a zone root in a non-root pool, is upgraded to the Oracle Solaris 10 9/10 release. This process can take a long time. Then, the upgraded BE (zfs2BE) is activated. Ensure that the zones are installed and booted before attempting the upgrade.
In this example, the zonepool pool, the /zonepool/zones dataset, and the zfszone zone are created as follows:
# zpool create zonepool mirror c2t1d0 c2t5d0 # zfs create zonepool/zones # chmod 700 zonepool/zones # zonecfg -z zfszone zfszone: No such zone configured Use 'create' to begin configuring a new zone. zonecfg:zfszone> create zonecfg:zfszone> set zonepath=/zonepool/zones zonecfg:zfszone> verify zonecfg:zfszone> exit # zoneadm -z zfszone install cannot create ZFS dataset zonepool/zones: dataset already exists Preparing to install zone <zfszone>. Creating list of files to copy from the global zone. Copying <8960> files to the zone. . . . |
# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / native shared 2 zfszone running /zonepool/zones native shared # lucreate -n zfsBE . . . # luupgrade -u -n zfsBE -s /net/install/export/s10up/latest 40410 blocks miniroot filesystem is <lofs> Mounting miniroot at </net/system/export/s10up/latest/Solaris_10/Tools/Boot> Validating the contents of the media </net/system/export/s10up/latest>. The media is a standard Solaris media. The media contains an operating system upgrade image. The media contains <Solaris> version <10>. Constructing upgrade profile to use. Locating the operating system upgrade program. Checking for existence of previously scheduled Live Upgrade requests. Creating upgrade profile for BE <zfsBE>. Determining packages to install or upgrade for BE <zfsBE>. Performing the operating system upgrade of the BE <zfsBE>. CAUTION: Interrupting this process may leave the boot environment unstable or unbootable. Upgrading Solaris: 100% completed Installation of the packages from this media is complete. Updating package information on boot environment <zfsBE>. Package information successfully updated on boot environment <zfsBE>. Adding operating system patches to the BE <zfsBE>. The operating system patch installation is complete. INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot environment <zfsBE> contains a log of the upgrade operation. INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot environment <zfsBE> contains a log of cleanup operations required. INFORMATION: Review the files listed above. Remember that all of the files are located on boot environment <zfsBE>. Before you activate boot environment <zfsBE>, determine if any additional system maintenance is required or if additional media of the software distribution must be installed. The Solaris upgrade of the boot environment <zfsBE> is complete. Installing failsafe Failsafe install is complete. # luactivate zfsBE # init 6 # lustatus Boot Environment Is Active Active Can Copy Name Complete Now On Reboot Delete Status -------------------------- -------- ------ --------- ------ ---------- zfsBE yes no no yes - zfs2BE yes yes yes no - # zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / native shared - zfszone installed /zonepool/zones native shared |
Use this procedure to migrate a system with a UFS root file system and a zone root to at least the Solaris 10 5/09 release. Then, use Oracle Solaris Live Upgrade to create a ZFS BE.
In the steps that follow, the example UFS BE name is c1t1d0s0, the zone root is in zonepool/zones, and the ZFS root BE is zfsBE.
Upgrade the system to at least the Solaris 10 5/09 release if it is running a previous Solaris 10 release.
For information about upgrading a system that is running the Solaris 10 release, see Oracle Solaris 10 9/10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
Create the root pool.
For information about the root pool requirements, see Oracle Solaris Installation and Oracle Solaris Live Upgrade Requirements for ZFS Support.
Confirm that the zones from the UFS environment are booted.
# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / native shared 2 zfszone running /zonepool/zones native shared |
Create the new ZFS boot environment.
# lucreate -c c1t1d0s0 -n zfsBE -p rpool |
This command establishes datasets in the root pool for the new boot environment and copies the current boot environment (including the zones) to those datasets.
Activate the new ZFS boot environment.
# lustatus Boot Environment Is Active Active Can Copy Name Complete Now On Reboot Delete Status -------------------------- -------- ------ --------- ------ ---------- c1t1d0s0 yes no no yes - zfsBE yes yes yes no - # luactivate zfsBE A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE>. . . . |
Reboot the system.
# init 6 |
Confirm that the ZFS file systems and zones are created in the new BE.
# zfs list NAME USED AVAIL REFER MOUNTPOINT rpool 6.17G 60.8G 98K /rpool rpool/ROOT 4.67G 60.8G 21K /rpool/ROOT rpool/ROOT/zfsBE 4.67G 60.8G 4.67G / rpool/dump 1.00G 60.8G 1.00G - rpool/swap 517M 61.3G 16K - zonepool 634M 7.62G 24K /zonepool zonepool/zones 270K 7.62G 633M /zonepool/zones zonepool/zones-c1t1d0s0 634M 7.62G 633M /zonepool/zones-c1t1d0s0 zonepool/zones-c1t1d0s0@zfsBE 262K - 633M - # zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / native shared - zfszone installed /zonepool/zones native shared |
In this example, an Oracle Solaris 10 9/10 system with a UFS root file system and a zone root (/uzone/ufszone), as well as a ZFS non-root pool (pool) and a zone root (/pool/zones/zfszone), is migrated to a ZFS root file system. Ensure that the ZFS root pool is created and that the zones are installed and booted before attempting the migration.
# zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / native shared 2 ufszone running /uzone/ufszone native shared 3 zfszone running /pool/zones/zfszone native shared |
# lucreate -c ufsBE -n zfsBE -p rpool Analyzing system configuration. No name for current boot environment. Current boot environment is named <zfsBE>. Creating initial configuration for primary boot environment <zfsBE>. The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID. PBE configuration successful: PBE name <ufsBE> PBE Boot Device </dev/dsk/c1t0d0s0>. Comparing source boot environment <ufsBE> file systems with the file system(s) you specified for the new boot environment. Determining which file systems should be in the new boot environment. Updating boot environment description database on all BEs. Updating system configuration files. The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID. Creating configuration for boot environment <zfsBE>. Source boot environment is <ufsBE>. Creating boot environment <zfsBE>. Creating file systems on boot environment <zfsBE>. Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/zfsBE>. Populating file systems on boot environment <zfsBE>. Checking selection integrity. Integrity check OK. Populating contents of mount point </>. Copying. Creating shared file system mount points. Copying root of zone <ufszone> to </.alt.tmp.b-EYd.mnt/uzone/ufszone>. Creating snapshot for <pool/zones/zfszone> on <pool/zones/zfszone@zfsBE>. Creating clone for <pool/zones/zfszone@zfsBE> on <pool/zones/zfszone-zfsBE>. Creating compare databases for boot environment <zfsBE>. Creating compare database for file system </rpool/ROOT>. Creating compare database for file system </>. Updating compare databases on boot environment <zfsBE>. Making boot environment <zfsBE> bootable. Creating boot_archive for /.alt.tmp.b-DLd.mnt updating /.alt.tmp.b-DLd.mnt/platform/sun4u/boot_archive Population of boot environment <zfsBE> successful. Creation of boot environment <zfsBE> successful. # lustatus Boot Environment Is Active Active Can Copy Name Complete Now On Reboot Delete Status -------------------------- -------- ------ --------- ------ ---------- ufsBE yes yes yes no - zfsBE yes no no yes - # luactivate zfsBE . . . # init 6 . . . # zfs list NAME USED AVAIL REFER MOUNTPOINT pool 628M 66.3G 19K /pool pool/zones 628M 66.3G 20K /pool/zones pool/zones/zfszone 75.5K 66.3G 627M /pool/zones/zfszone pool/zones/zfszone-ufsBE 628M 66.3G 627M /pool/zones/zfszone-ufsBE pool/zones/zfszone-ufsBE@zfsBE 98K - 627M - rpool 7.76G 59.2G 95K /rpool rpool/ROOT 5.25G 59.2G 18K /rpool/ROOT rpool/ROOT/zfsBE 5.25G 59.2G 5.25G / rpool/dump 2.00G 59.2G 2.00G - rpool/swap 517M 59.7G 16K - # zoneadm list -cv ID NAME STATUS PATH BRAND IP 0 global running / native shared - ufszone installed /uzone/ufszone native shared - zfszone installed /pool/zones/zfszone native shared |
During an initial Solaris OS installation or after performing an Oracle Solaris Live Upgrade migration from a UFS file system, a swap area is created on a ZFS volume in the ZFS root pool. For example:
# swap -l swapfile dev swaplo blocks free /dev/zvol/dsk/rpool/swap 256,1 16 4194288 4194288 |
During an initial Solaris OS installation or an Oracle Solaris Live Upgrade migration from a UFS file system, a dump device is created on a ZFS volume in the ZFS root pool. In general, a dump device requires no administration because it is set up automatically at installation time. For example:
# dumpadm Dump content: kernel pages Dump device: /dev/zvol/dsk/rpool/dump (dedicated) Savecore directory: /var/crash/t2000 Savecore enabled: yes Save compressed: on |
If you disable and remove the dump device, then you will need to enable it with the dumpadm command after it is recreated. In most cases, you will only have to adjust the size of the dump device by using the zfs command.
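For example, the following sketch recreates a dump volume and then re-enables it; the 2-GB size matches the example later in this section and should be adjusted to your system:
# zfs create -V 2G rpool/dump # dumpadm -d /dev/zvol/dsk/rpool/dump |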
For information about the swap and dump volume sizes that are created by the installation programs, see Oracle Solaris Installation and Oracle Solaris Live Upgrade Requirements for ZFS Support.
Both the swap volume size and the dump volume size can be adjusted during and after installation. For more information, see Adjusting the Sizes of Your ZFS Swap Device and Dump Device.
Consider the following issues when working with your ZFS swap and dump devices:
Separate ZFS volumes must be used for the swap area and the dump device.
Currently, using a swap file on a ZFS file system is not supported.
If you need to change your swap area or dump device after the system is installed or upgraded, use the swap and dumpadm commands as in previous Solaris releases. For more information, see Chapter 20, Configuring Additional Swap Space (Tasks), in System Administration Guide: Devices and File Systems and Chapter 17, Managing System Crash Information (Tasks), in System Administration Guide: Advanced Administration.
See the following sections for more information:
Adjusting the Sizes of Your ZFS Swap Device and Dump Device
Troubleshooting ZFS Dump Device Issues
Because of the differences in the way a ZFS root installation determines the size of swap and dump devices, you might need to adjust their size before, during, or after installation.
You can adjust the size of your swap and dump volumes during an initial installation. For more information, see Example 5–1.
You can create and size your swap and dump volumes before you perform an Oracle Solaris Live Upgrade operation. For example:
Create your storage pool.
# zpool create rpool mirror c0t0d0s0 c0t1d0s0 |
Create your dump device.
# zfs create -V 2G rpool/dump |
Enable the dump device.
# dumpadm -d /dev/zvol/dsk/rpool/dump Dump content: kernel pages Dump device: /dev/zvol/dsk/rpool/dump (dedicated) Savecore directory: /var/crash/t2000 Savecore enabled: yes Save compressed: on |
Select one of the following to create your swap area:
SPARC: Create your swap area. Set the block size to 8 KB.
# zfs create -V 2G -b 8k rpool/swap |
x86: Create your swap area. Set the block size to 4 KB.
# zfs create -V 2G -b 4k rpool/swap |
You must enable the swap volume when a new swap device is added or changed.
Add an entry for the swap volume to the /etc/vfstab file.
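For example, the following sketch activates the swap volume created in the previous step and shows the format of the /etc/vfstab entry to add; verify the entry against your configuration:
# swap -a /dev/zvol/dsk/rpool/swap # grep zvol /etc/vfstab /dev/zvol/dsk/rpool/swap - - swap - no - |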
Oracle Solaris Live Upgrade does not resize existing swap and dump volumes.
You can reset the volsize property of the dump device after a system is installed. For example:
# zfs set volsize=2G rpool/dump # zfs get volsize rpool/dump NAME PROPERTY VALUE SOURCE rpool/dump volsize 2G - |
You can resize the swap volume, but until CR 6765386 is integrated, it is best to remove the swap device first and then recreate it. For example:
# swap -d /dev/zvol/dsk/rpool/swap # zfs set volsize=2G rpool/swap # swap -a /dev/zvol/dsk/rpool/swap |
For information about removing a swap device on an active system, see this site:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
You can adjust the size of the swap and dump volumes in a JumpStart profile by using profile syntax similar to the following:
install_type initial_install cluster SUNWCXall pool rpool 16g 2g 2g c0t0d0s0 |
In this profile, the two 2g entries set the size of the swap volume and the dump volume to 2 GB each.
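For example, the following hypothetical variation of the profile lets the installer size the pool automatically while setting a 2-GB swap volume and a 4-GB dump volume; the field order is pool name, pool size, swap size, dump size, and devices:
install_type initial_install cluster SUNWCXall pool rpool auto 2g 4g c0t0d0s0 |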
If you need more swap space on a system that is already installed, just add another swap volume. For example:
# zfs create -V 2G rpool/swap2 |
Then, activate the new swap volume. For example:
# swap -a /dev/zvol/dsk/rpool/swap2 # swap -l swapfile dev swaplo blocks free /dev/zvol/dsk/rpool/swap 256,1 16 1058800 1058800 /dev/zvol/dsk/rpool/swap2 256,3 16 4194288 4194288 |
Finally, add an entry for the second swap volume to the /etc/vfstab file.
Review the following items if you have problems either capturing a system crash dump or resizing the dump device.
If a crash dump was not created automatically, you can use the savecore command to save the crash dump.
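For example, a minimal sketch of saving the crash dump manually from the dump device to the configured savecore directory, with verbose output:
# savecore -v |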
A dump volume is created automatically when you initially install a ZFS root file system or migrate to a ZFS root file system. In most cases, you will only need to adjust the size of the dump volume if the default dump volume size is too small. For example, on a large-memory system, the dump volume size is increased to 40 GB as follows:
# zfs set volsize=40G rpool/dump |
Resizing a large dump volume can be a time-consuming process.
If, for any reason, you need to enable a dump device after you create it manually, use syntax similar to the following:
# dumpadm -d /dev/zvol/dsk/rpool/dump Dump content: kernel pages Dump device: /dev/zvol/dsk/rpool/dump (dedicated) Savecore directory: /var/crash/t2000 Savecore enabled: yes |
A system with 128 GB or greater memory will need a larger dump device than the dump device that is created by default. If the dump device is too small to capture an existing crash dump, a message similar to the following is displayed:
# dumpadm -d /dev/zvol/dsk/rpool/dump dumpadm: dump device /dev/zvol/dsk/rpool/dump is too small to hold a system dump dump size 36255432704 bytes, device size 34359738368 bytes |
For information about sizing the swap and dump devices, see Planning for Swap Space in System Administration Guide: Devices and File Systems.
You cannot currently add a dump device to a pool with multiple top-level devices. You will see a message similar to the following:
# dumpadm -d /dev/zvol/dsk/datapool/dump dump is not supported on device '/dev/zvol/dsk/datapool/dump': 'datapool' has multiple top level vdevs |
Add the dump device to the root pool, which cannot have multiple top-level devices.
Both SPARC based and x86 based systems use the new style of booting with a boot archive, which is a file system image that contains the files required for booting. When a system is booted from a ZFS root file system, the path names of both the boot archive and the kernel file are resolved in the root file system that is selected for booting.
When a system is booted for installation, a RAM disk is used for the root file system during the entire installation process.
Booting from a ZFS file system differs from booting from a UFS file system because with ZFS, the boot device specifier identifies a storage pool, not a single root file system. A storage pool can contain multiple bootable datasets or ZFS root file systems. When booting from ZFS, you must specify a boot device and a root file system within the pool that was identified by the boot device.
By default, the dataset selected for booting is identified by the pool's bootfs property. This default selection can be overridden by specifying an alternate bootable dataset in the boot -Z command.
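For example, the following sketch displays the default boot dataset and then overrides it at the SPARC ok prompt; the zfs2BE dataset name is taken from the examples later in this chapter:
# zpool get bootfs rpool |
ok boot -Z rpool/ROOT/zfs2BE |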
You can create a mirrored ZFS root pool when the system is installed, or you can attach a disk to create a mirrored ZFS root pool after installation. For more information, see:
Review the following known issues regarding mirrored ZFS root pools:
CR 6668666 – You must install the boot information on the additionally attached disks by using the installboot or installgrub commands to enable booting on the other disks in the mirror. If you create a mirrored ZFS root pool with the initial installation method, then this step is unnecessary. For example, if c0t1d0s0 was the second disk added to the mirror, then the installboot or installgrub command syntax would be as follows:
SPARC:
sparc# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0 |
x86:
x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0 |
You can boot from different devices in a mirrored ZFS root pool. Depending on the hardware configuration, you might need to update the PROM or the BIOS to specify a different boot device.
For example, you can boot from either disk (c1t0d0s0 or c1t1d0s0) in the following pool.
# zpool status pool: rpool state: ONLINE scrub: none requested config: NAME STATE READ WRITE CKSUM rpool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 c1t0d0s0 ONLINE 0 0 0 c1t1d0s0 ONLINE 0 0 0 |
SPARC: Enter the alternate disk at the ok prompt.
ok boot /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/disk@0 |
After the system is rebooted, confirm the active boot device. For example:
SPARC# prtconf -vp | grep bootpath bootpath: '/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/disk@0,0:a' |
x86: Select an alternate disk in the mirrored ZFS root pool from the appropriate BIOS menu.
Then, use syntax similar to the following to confirm that you are booted from the alternate disk:
x86# prtconf -v|sed -n '/bootpath/,/value/p' name='bootpath' type=string items=1 value='/pci@0,0/pci8086,25f8@4/pci108e,286@0/disk@0,0:a' |
On a SPARC based system with multiple ZFS BEs, you can boot from any BE by using the luactivate command.
During the Solaris OS installation and Oracle Solaris Live Upgrade process, the ZFS root file system is automatically designated with the bootfs property.
Multiple bootable datasets can exist within a pool. By default, the bootable dataset entry in the /pool-name/boot/menu.lst file is identified by the pool's bootfs property. However, a menu.lst entry can contain a bootfs command, which specifies an alternate dataset in the pool. In this way, the menu.lst file can contain entries for multiple root file systems within the pool.
When a system is installed with a ZFS root file system or migrated to a ZFS root file system, an entry similar to the following is added to the menu.lst file:
title zfsBE bootfs rpool/ROOT/zfsBE title zfs2BE bootfs rpool/ROOT/zfs2BE |
When a new BE is created, the menu.lst file is updated automatically.
On a SPARC based system, two new boot options are available:
After the BE is activated, you can use the boot -L command to display a list of bootable datasets within a ZFS pool. Then, you can select one of the bootable datasets in the list. Detailed instructions for booting that dataset are displayed. You can boot the selected dataset by following the instructions.
You can use the boot -Z dataset command to boot a specific ZFS dataset.
If you have multiple ZFS BEs in a ZFS storage pool on your system's boot device, you can use the luactivate command to specify a default BE.
For example, the following ZFS BEs are available as described by the lustatus output:
# lustatus Boot Environment Is Active Active Can Copy Name Complete Now On Reboot Delete Status -------------------------- -------- ------ --------- ------ ---------- zfsBE yes no no yes - zfs2BE yes yes yes no - |
If you have multiple ZFS BEs on your SPARC based system, you can use the boot -L command to boot from a BE that is different from the default BE. However, a BE that is booted from a boot -L session is not reset as the default BE nor is the bootfs property updated. If you want to make the BE booted from a boot -L session the default BE, then you must activate it with the luactivate command.
For example:
ok boot -L Rebooting with command: boot -L Boot device: /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/disk@0 File and args: -L 1 zfsBE 2 zfs2BE Select environment to boot: [ 1 - 2 ]: 1 To boot the selected entry, invoke: boot [<root-device>] -Z rpool/ROOT/zfsBE Program terminated ok boot -Z rpool/ROOT/zfsBE |
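If you want the dataset selected in this session to become the default BE, a minimal sketch of activating it after the system is up is as follows:
# luactivate zfsBE # init 6 |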
On a SPARC based system, you can boot from the failsafe archive located in /platform/`uname -i`/failsafe as follows:
ok boot -F failsafe |
To boot a failsafe archive from a particular ZFS bootable dataset, use syntax similar to the following:
ok boot -Z rpool/ROOT/zfsBE -F failsafe |
The following entries are added to the /pool-name/boot/grub/menu.lst file during the Solaris OS installation process or Oracle Solaris Live Upgrade operation to boot ZFS automatically:
title Solaris 10 9/10 X86 findroot (rootfs0,0,a) kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS module /platform/i86pc/boot_archive title Solaris failsafe findroot (rootfs0,0,a) kernel /boot/multiboot kernel/unix -s -B console=ttya module /boot/x86.miniroot-safe |
If the device identified by GRUB as the boot device contains a ZFS storage pool, the menu.lst file is used to create the GRUB menu.
On an x86 based system with multiple ZFS BEs, you can select a BE from the GRUB menu. If the root file system corresponding to this menu entry is a ZFS dataset, the following option is added:
-B $ZFS-BOOTFS |
When a system boots from a ZFS file system, the root device is specified by the boot -B $ZFS-BOOTFS parameter on either the kernel or module line in the GRUB menu entry. This parameter value, similar to all parameters specified by the -B option, is passed by GRUB to the kernel. For example:
title Solaris 10 9/10 X86 findroot (rootfs0,0,a) kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS module /platform/i86pc/boot_archive title Solaris failsafe findroot (rootfs0,0,a) kernel /boot/multiboot kernel/unix -s -B console=ttya module /boot/x86.miniroot-safe |
The x86 failsafe archive is /boot/x86.miniroot-safe and can be booted by selecting the Solaris failsafe entry from the GRUB menu. For example:
title Solaris failsafe findroot (rootfs0,0,a) kernel /boot/multiboot kernel/unix -s -B console=ttya module /boot/x86.miniroot-safe |
The best way to change the active boot environment is to use the luactivate command. If booting the active environment fails due to a bad patch or a configuration error, the only way to boot from a different environment is to select that environment at boot time. You can select an alternate BE from the GRUB menu on an x86 based system or by booting it explicitly from the PROM on a SPARC based system.
Due to a bug in Oracle Solaris Live Upgrade in the Solaris 10 10/08 release, the inactive boot environment might fail to boot because a ZFS dataset or a zone's ZFS dataset in the boot environment has an invalid mount point. The same bug also prevents the BE from mounting if it has a separate /var dataset.
If a zone dataset has an invalid mount point, the mount point can be corrected by performing the following steps.
Boot the system from a failsafe archive.
Import the pool.
For example:
# zpool import rpool |
Look for incorrect temporary mount points.
For example:
# zfs list -r -o name,mountpoint rpool/ROOT/s10u6 NAME MOUNTPOINT rpool/ROOT/s10u6 /.alt.tmp.b-VP.mnt/ rpool/ROOT/s10u6/zones /.alt.tmp.b-VP.mnt//zones rpool/ROOT/s10u6/zones/zonerootA /.alt.tmp.b-VP.mnt/zones/zonerootA |
The mount point for the root BE (rpool/ROOT/s10u6) should be /.
If the boot is failing because of /var mounting problems, look for a similar incorrect temporary mount point for the /var dataset.
Reset the mount points for the ZFS BE and its datasets.
For example:
# zfs inherit -r mountpoint rpool/ROOT/s10u6 # zfs set mountpoint=/ rpool/ROOT/s10u6 |
Reboot the system.
When the option to boot a specific boot environment is presented, either in the GRUB menu or at the OpenBoot PROM prompt, select the boot environment whose mount points were just corrected.
Use the following procedure if you need to boot the system so that you can recover from a lost root password or similar problem.
You will need to boot failsafe mode or boot from alternate media, depending on the severity of the error. In general, you can boot failsafe mode to recover a lost or unknown root password.
If you need to recover a root pool or root pool snapshot, see Recovering the ZFS Root Pool or Root Pool Snapshots.
Boot failsafe mode.
On a SPARC system:
ok boot -F failsafe |
On an x86 system, select failsafe mode from the GRUB prompt.
Mount the ZFS BE on /a when prompted:
. . . ROOT/zfsBE was found on rpool. Do you wish to have it mounted read-write on /a? [y,n,?] y mounting rpool on /a Starting shell. |
Change to the /a/etc directory.
# cd /a/etc |
If necessary, set the TERM type.
# TERM=vt100 # export TERM |
Correct the passwd or shadow file.
# vi shadow |
Reboot the system.
# init 6 |
If a problem prevents the system from booting successfully or some other severe problem occurs, you will need to boot from a network install server or from a Solaris installation CD, import the root pool, mount the ZFS BE, and attempt to resolve the issue.
Boot from an installation CD or from the network.
SPARC:
ok boot cdrom -s ok boot net -s |
If you don't use the -s option, you will need to exit the installation program.
x86: Select the network boot or boot from local CD option.
Import the root pool and specify an alternate mount point. For example:
# zpool import -R /a rpool |
Mount the ZFS BE. For example:
# zfs mount rpool/ROOT/zfsBE |
Access the ZFS BE contents from the /a directory.
# cd /a |
Reboot the system.
# init 6 |
The following sections describe how to replace a disk in the root pool, create root pool snapshots, recreate the root pool and restore root pool snapshots, and roll back root pool snapshots.
You might need to replace a disk in the root pool for the following reasons:
The root pool is too small and you want to replace a smaller disk with a larger disk.
A root pool disk is failing. In a non-redundant pool, if the disk is failing such that the system won't boot, you must boot from an alternate media, such as a CD or the network, before you replace the root pool disk.
In a mirrored root pool configuration, you can attempt a disk replacement without booting from alternate media. You can replace a failed disk by using the zpool replace command. Or, if you have an additional disk, you can use the zpool attach command. See the procedure in this section for an example of attaching an additional disk and detaching a root pool disk.
Some hardware requires that you take a disk offline and unconfigure it before attempting the zpool replace operation to replace a failed disk. For example:
# zpool offline rpool c1t0d0s0 # cfgadm -c unconfigure c1::dsk/c1t0d0 <Physically remove failed disk c1t0d0> <Physically insert replacement disk c1t0d0> # cfgadm -c configure c1::dsk/c1t0d0 # zpool replace rpool c1t0d0s0 # zpool online rpool c1t0d0s0 # zpool status rpool <Let disk resilver before installing the boot blocks> SPARC# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0 x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0 |
On some hardware, you do not have to online or reconfigure the replacement disk after it is inserted.
You must identify the boot device path names of the current disk and the new disk so that you can test booting from the replacement disk, and so that you can boot manually from the existing disk if the replacement disk fails. In the example in the following procedure, the path name for the current root pool disk (c1t10d0s0) is:
/pci@8,700000/pci@3/scsi@5/sd@a,0 |
The path name for the replacement boot disk (c1t9d0s0) is:
/pci@8,700000/pci@3/scsi@5/sd@9,0 |
Physically connect the replacement (or new) disk.
Confirm that the new disk has an SMI label and a slice 0.
For information about relabeling a disk that is intended for the root pool, see the following site:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
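As a quick check, you can print the disk's VTOC to verify that an SMI label and a slice 0 are present; this sketch assumes the replacement disk from this procedure and uses slice 2, the conventional whole-disk slice:
# prtvtoc /dev/rdsk/c1t9d0s2 |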
Attach the new disk to the root pool.
For example:
# zpool attach rpool c1t10d0s0 c1t9d0s0 |
Confirm the root pool status.
For example:
# zpool status rpool pool: rpool state: ONLINE status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state. action: Wait for the resilver to complete. scrub: resilver in progress, 25.47% done, 0h4m to go config: NAME STATE READ WRITE CKSUM rpool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 c1t10d0s0 ONLINE 0 0 0 c1t9d0s0 ONLINE 0 0 0 errors: No known data errors |
After the resilvering is completed, apply the boot blocks to the new disk.
Use syntax similar to the following:
SPARC:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t9d0s0 |
x86:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t9d0s0 |
Verify that you can boot from the new disk.
For example, on a SPARC based system, you would use syntax similar to the following:
ok boot /pci@8,700000/pci@3/scsi@5/sd@9,0 |
If the system boots from the new disk, detach the old disk.
For example:
# zpool detach rpool c1t10d0s0 |
Set up the system to boot automatically from the new disk, either by using the eeprom command, by using the setenv command from the SPARC boot PROM, or by reconfiguring the PC BIOS.
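For example, a sketch that uses the replacement disk's device path from this procedure, set either at the boot PROM or from the running system:
ok setenv boot-device /pci@8,700000/pci@3/scsi@5/sd@9,0 |
# eeprom boot-device=/pci@8,700000/pci@3/scsi@5/sd@9,0 |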
You can create root pool snapshots for recovery purposes. The best way to create root pool snapshots is to perform a recursive snapshot of the root pool.
The following procedure creates a recursive root pool snapshot and stores the snapshot as a file in a pool on a remote system. If a root pool fails, the remote dataset can be mounted by using NFS and the snapshot file can be received into the recreated pool. You can instead store root pool snapshots as the actual snapshots in a pool on a remote system. Sending and receiving the snapshots from a remote system is a bit more complicated because you must configure ssh or use rsh while the system to be repaired is booted from the Solaris OS miniroot.
For information about remotely storing and recovering root pool snapshots, and for the most up-to-date information about root pool recovery, go to the following site:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
Validating remotely stored snapshots, whether they are stored as files or as actual snapshots, is an important step in root pool recovery. With either method, snapshots should be recreated on a routine basis, such as when the pool configuration changes or when the Solaris OS is upgraded.
In the following procedure, the system is booted from the zfsBE boot environment.
Create a pool and file system on a remote system to store the snapshots.
For example:
remote# zfs create rpool/snaps |
Share the file system with the local system.
For example:
remote# zfs set sharenfs='rw=local-system,root=local-system' rpool/snaps # share -@rpool/snaps /rpool/snaps sec=sys,rw=local-system,root=local-system "" |
Create a recursive snapshot of the root pool.
local# zfs snapshot -r rpool@0804 local# zfs list NAME USED AVAIL REFER MOUNTPOINT rpool 6.17G 60.8G 98K /rpool rpool@0804 0 - 98K - rpool/ROOT 4.67G 60.8G 21K /rpool/ROOT rpool/ROOT@0804 0 - 21K - rpool/ROOT/zfsBE 4.67G 60.8G 4.67G / rpool/ROOT/zfsBE@0804 386K - 4.67G - rpool/dump 1.00G 60.8G 1.00G - rpool/dump@0804 0 - 1.00G - rpool/swap 517M 61.3G 16K - rpool/swap@0804 0 - 16K - |
Send the root pool snapshots to the remote system.
For example:
local# zfs send -Rv rpool@0804 > /net/remote-system/rpool/snaps/rpool.0804 sending from @ to rpool@0804 sending from @ to rpool/swap@0804 sending from @ to rpool/ROOT@0804 sending from @ to rpool/ROOT/zfsBE@0804 sending from @ to rpool/dump@0804 |
In this procedure, assume the following conditions:
The ZFS root pool cannot be recovered.
The ZFS root pool snapshots are stored on a remote system and are shared over NFS.
All the steps are performed on the local system.
Boot from a CD/DVD or the network.
SPARC: Select one of the following boot methods:
ok boot net -s ok boot cdrom -s |
If you don't use the -s option, you will need to exit the installation program.
x86: Select the option for booting from the DVD or the network. Then, exit the installation program.
Mount the remote snapshot dataset.
For example:
# mount -F nfs remote-system:/rpool/snaps /mnt |
If your network services are not configured, you might need to specify the remote-system's IP address.
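For example, the following sketch substitutes a hypothetical IP address for the remote system's host name:
# mount -F nfs 192.168.1.10:/rpool/snaps /mnt |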
If the root pool disk is replaced and does not contain a disk label that is usable by ZFS, you must relabel the disk.
For more information about relabeling the disk, go to the following site:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide
Recreate the root pool.
For example:
# zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache rpool c1t1d0s0 |
Restore the root pool snapshots.
This step might take some time. For example:
# cat /mnt/rpool.0804 | zfs receive -Fdu rpool |
Using the -u option means that the restored archive is not mounted when the zfs receive operation completes.
Verify that the root pool datasets are restored.
For example:
# zfs list NAME USED AVAIL REFER MOUNTPOINT rpool 6.17G 60.8G 98K /a/rpool rpool@0804 0 - 98K - rpool/ROOT 4.67G 60.8G 21K /legacy rpool/ROOT@0804 0 - 21K - rpool/ROOT/zfsBE 4.67G 60.8G 4.67G /a rpool/ROOT/zfsBE@0804 398K - 4.67G - rpool/dump 1.00G 60.8G 1.00G - rpool/dump@0804 0 - 1.00G - rpool/swap 517M 61.3G 16K - rpool/swap@0804 0 - 16K - |
Set the bootfs property on the root pool BE.
For example:
# zpool set bootfs=rpool/ROOT/zfsBE rpool |
Install the boot blocks on the new disk.
SPARC:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0 |
x86:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0 |
Reboot the system.
# init 6 |
This procedure assumes that existing root pool snapshots are available. In the example, they are available on the local system.
# zfs snapshot -r rpool@0804 # zfs list NAME USED AVAIL REFER MOUNTPOINT rpool 6.17G 60.8G 98K /rpool rpool@0804 0 - 98K - rpool/ROOT 4.67G 60.8G 21K /rpool/ROOT rpool/ROOT@0804 0 - 21K - rpool/ROOT/zfsBE 4.67G 60.8G 4.67G / rpool/ROOT/zfsBE@0804 398K - 4.67G - rpool/dump 1.00G 60.8G 1.00G - rpool/dump@0804 0 - 1.00G - rpool/swap 517M 61.3G 16K - rpool/swap@0804 0 - 16K - |
Shut down the system and boot failsafe mode.
ok boot -F failsafe ROOT/zfsBE was found on rpool. Do you wish to have it mounted read-write on /a? [y,n,?] y mounting rpool on /a Starting shell. |
Roll back each root pool snapshot.
# zfs rollback rpool@0804 # zfs rollback rpool/ROOT@0804 # zfs rollback rpool/ROOT/zfsBE@0804 |
Reboot to multiuser mode.
# init 6 |