This chapter provides step-by-step procedures on how to create a ZFS boot environment when you use Solaris Live Upgrade.
Migrating from a UFS file system to a ZFS root pool or creating ZFS boot environments with Solaris Live Upgrade is new in the Solaris 10 10/08 release. To use Solaris Live Upgrade on a system with UFS file systems, see Part I, Upgrading With Solaris Live Upgrade of this book.
This chapter provides procedures for migrating a UFS root (/) file system to a ZFS root pool, creating a new ZFS boot environment within an existing root pool or in a new root pool, creating a boot environment from a source other than the currently running system, and falling back to the original boot environment.
For procedures on using ZFS when non-global zones are installed, see Chapter 14, Solaris Live Upgrade For ZFS With Non-Global Zones Installed.
This procedure describes how to migrate a UFS file system to a ZFS file system. Creating a boot environment provides a method of copying critical file systems from an active UFS boot environment to a ZFS root pool. The lucreate command copies the critical file systems to a new boot environment within an existing ZFS root pool. User-defined (shareable) file systems are not copied and are not shared with the source UFS boot environment. Also, /swap is not shared between the UFS file system and ZFS root pool. For an overview of critical and shareable file systems, see File System Types.
To migrate an active UFS root (/) file system to a ZFS root pool, you must provide the name of the root pool. The critical file systems are copied into the root pool.
Complete the following steps the first time you perform a Solaris Live Upgrade.
Using Solaris Live Upgrade to create new ZFS boot environments requires at least the Solaris 10 10/08 release to be installed. Previous releases do not have the ZFS and Solaris Live Upgrade software to perform the tasks.
Remove existing Solaris Live Upgrade packages on your system if necessary. If you are upgrading to a new release, you must install the packages from that release.
The three Solaris Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade by using Solaris Live Upgrade. These packages include existing software, new features, and bug fixes. If you do not remove the existing packages and install the new packages on your system before using Solaris Live Upgrade, upgrading to the target release fails.
# pkgrm SUNWlucfg SUNWluu SUNWlur
Install the new Solaris Live Upgrade packages from the release to which you are upgrading. For instructions, see Installing Solaris Live Upgrade.
Before installing or running Solaris Live Upgrade, you must install the required patches. These patches ensure that you have all the latest bug fixes and new features in the release.
Ensure that you have the most recently updated patch list by consulting SunSolve. Search for the info doc 206844 (formerly 72099) on the SunSolve web site.
Become superuser or assume an equivalent role.
From the SunSolve web site, obtain the list of patches.
If you are storing the patches on a local disk, create a directory such as /var/tmp/lupatches and download the patches to that directory.
Change to the patch directory.
# cd /var/tmp/lupatches
Install the patches with the patchadd command.
# patchadd patch_id
patch_id is the patch number or numbers. Separate multiple patch numbers with a space.
The patches need to be applied in the order that is specified in info doc 206844.
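For example, if info doc 206844 listed two patches in this order, the command would look similar to the following. The patch IDs shown here are placeholders for illustration only, not the actual required patches.

# patchadd 119081-25 121430-57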
Reboot the system if necessary. Certain patches require a reboot to be effective.
x86 only: Rebooting the system is required. Otherwise, Solaris Live Upgrade fails.
# init 6
Create a ZFS root pool.
The ZFS root pool must be on a single slice to be bootable and upgradeable.
# zpool create rpool c0t1d0s5
rpool: Specifies the name of the new ZFS root pool to be created.
c0t1d0s5: Creates the new root pool on the disk slice, c0t1d0s5.
For information about creating a new root pool, see the Solaris ZFS Administration Guide.
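Before continuing, you can confirm that the pool was created and is healthy by using standard ZFS commands, for example:

# zpool status rpool
# zpool list rpool

For a pool created as shown above, the zpool status output lists the single slice, c0t1d0s5, with no errors reported.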
Migrate your UFS root (/) file system to the new ZFS root pool.
# lucreate [-c ufsBE] -n new-zfsBE -p rpool
-c ufsBE: Assigns the name ufsBE to the current UFS boot environment. This option is not required and is used only when the first boot environment is created. If you run the lucreate command for the first time and you omit the -c option, the software creates a default name for you.
-n new-zfsBE: Assigns the name new-zfsBE to the boot environment to be created. The name must be unique on the system.
-p rpool: Places the newly created ZFS root (/) file system into the ZFS root pool defined in rpool.
The creation of the new ZFS boot environment might take a while. The UFS file system data is being copied to the ZFS root pool. When the inactive boot environment has been created, you can use the luupgrade or luactivate command to upgrade or activate the new ZFS boot environment.
(Optional) Verify that the boot environment is complete.
In this example, the lustatus command reports whether the boot environment creation is complete and bootable.
# lustatus
boot environment           Is        Active  Active     Can     Copy
Name                       Complete  Now     OnReboot   Delete  Status
-----------------------------------------------------------------
ufsBE                      yes       yes     yes        no      -
new-zfsBE                  yes       no      no         yes     -
(Optional) Verify the basic dataset information on the system.
The zfs list command displays the names of all datasets on the system. In this example, rpool is the name of the ZFS pool and new-zfsBE is the name of the newly created ZFS boot environment.
# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      9.29G  57.6G    20K  /rpool
rpool/ROOT                 5.38G  57.6G    18K  /rpool/ROOT
rpool/ROOT/new-zfsBE       5.38G  57.6G   551M  /tmp/.alt.luupdall.110034
rpool/dump                 1.95G      -  1.95G  -
rpool/swap                 1.95G      -  1.95G  -
The mount points listed for the new boot environment are temporary until the luactivate command is executed. The /dump and /swap volumes are not shared with the original UFS boot environment, but are shared within the ZFS root pool and boot environments within the root pool.
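If you want to examine only the dump and swap devices, which are implemented as ZFS volumes in the root pool, you can restrict the listing to volumes, for example:

# zfs list -t volume -r rpool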
You can now upgrade and activate the new boot environment. See Example 13–1.
In this example, the new ZFS root pool, rpool, is created on a separate slice, c0t0d0s4. The lucreate command migrates the currently running UFS boot environment, c0t0d0, to the new ZFS boot environment, new-zfsBE, and places the new boot environment in rpool.
# zpool create rpool c0t0d0s4
# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      9.29G  57.6G    20K  /rpool

# lucreate -c c0t0d0 -n new-zfsBE -p rpool
Analyzing system configuration.
Current boot environment is named <c0t0d0>.
Creating initial configuration for primary boot environment <c0t0d0>.
The device </dev/dsk/c0t0d0> is not a root device for any boot environment;
cannot get BE ID.
PBE configuration successful: PBE name <c0t0d0> PBE Boot Device </dev/dsk/c0t0d0>.
Comparing source boot environment <c0t0d0> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment;
cannot get BE ID.
Creating configuration for boot environment <new-zfsBE>.
Source boot environment is <c0t0d0>.
Creating boot environment <new-zfsBE>.
Creating file systems on boot environment <new-zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/new-zfsBE>.
Populating file systems on boot environment <new-zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </>.
Making boot environment <zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-cBc.mnt
updating /.alt.tmp.b-cBc.mnt/platform/sun4u/boot_archive
Population of boot environment <new-zfsBE> successful.
Creation of boot environment <new-zfsBE> successful.

# lustatus
boot environment           Is        Active  Active     Can     Copy
Name                       Complete  Now     OnReboot   Delete  Status
------------------------------------------------------------------------
c0t0d0                     yes       yes     yes        no      -
new-zfsBE                  yes       no      no         yes     -

# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      9.29G  57.6G    20K  /rpool
rpool/ROOT                 5.38G  57.6G    18K  /rpool/ROOT
rpool/ROOT/zfsBE           5.38G  57.6G   551M
rpool/ROOT/new-zfsBE       5.38G  57.6G   551M  /tmp/.alt.luupdall.110034
rpool/dump                 1.95G      -  1.95G  -
rpool/swap                 1.95G      -  1.95G  -
You can now upgrade or activate the new boot environment.
In this example, the new boot environment is upgraded by using the luupgrade command from an image that is stored in the location indicated with the -s option.
# luupgrade -n zfsBE -u -s /net/install/export/s10/combined.s10

51135 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </net/install/export/solaris_10/combined.solaris_10_wos
/Solaris_10/Tools/Boot>
Validating the contents of the media </net/install/export/s10/combined.s10>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains Solaris version <10_1008>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <zfsBE>.
Determining packages to install or upgrade for BE <zfsBE>.
Performing the operating system upgrade of the BE <zfsBE>.
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Adding operating system patches to the BE <zfsBE>.
The operating system patch installation is complete.
INFORMATION: The file /var/sadm/system/logs/upgrade_log on boot
environment <zfsBE> contains a log of the upgrade operation.
INFORMATION: The file /var/sadm/system/data/upgrade_cleanup on boot
environment <zfsBE> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment <zfsBE>. Before you activate boot
environment <zfsBE>, determine if any additional system maintenance is
required or if additional media of the software distribution must be
installed.
The Solaris upgrade of the boot environment <zfsBE> is complete.
The new boot environment can be activated anytime after it is created.
# luactivate new-zfsBE
**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Change the boot device back to the original boot environment by typing:

     setenv boot-device /pci@1f,0/pci@1/scsi@4,1/disk@2,0:a

3. Boot to the original boot environment by typing:

     boot

**********************************************************************

Modifying boot archive service
Activation of boot environment <new-zfsBE> successful.
Reboot the system to the ZFS boot environment.
# init 6
# svc.startd: The system is coming down.  Please wait.
svc.startd: 79 system services are now being stopped.
.
.
.
If you fall back to the UFS boot environment, you must import again any ZFS storage pools that were created in the ZFS boot environment, because those pools are not automatically available in the UFS boot environment. You will see messages similar to the following example when you switch back to the UFS boot environment.
# luactivate c0t0d0
WARNING: The following files have changed on both the current boot
environment <new-zfsBE> zone <global> and the boot environment
to be activated <c0t0d0>:
/etc/zfs/zpool.cache
INFORMATION: The files listed above are in conflict between the current
boot environment <zfsBE> zone <global> and the boot environment to be
activated <c0t0d0>. These files will not be automatically synchronized
from the current boot environment <new-zfsBE> when boot
environment <c0t0d0>
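To make such a pool available again from the UFS boot environment, use the zpool import command. Running the command with no arguments lists the pools that are available for import; supplying a pool name imports that pool. The pool name datapool below is a placeholder for illustration.

# zpool import
# zpool import datapool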
If you have an existing ZFS root pool and want to create a new ZFS boot environment within that pool, the following procedure provides the steps. After the inactive boot environment is created, the new boot environment can be upgraded and activated at your convenience. The -p option is not required when you create a boot environment within the same pool.
Complete the following steps the first time you perform a Solaris Live Upgrade.
Using Solaris Live Upgrade to create new ZFS boot environments requires at least the Solaris 10 10/08 release to be installed. Previous releases do not have the ZFS and Solaris Live Upgrade software to perform the tasks.
Remove existing Solaris Live Upgrade packages on your system if necessary. If you are upgrading to a new release, you must install the packages from that release.
The three Solaris Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade by using Solaris Live Upgrade. These packages include existing software, new features, and bug fixes. If you do not remove the existing packages and install the new packages on your system before using Solaris Live Upgrade, upgrading to the target release fails.
The SUNWlucfg package is new starting with the Solaris 10 8/07 release. If you are using Solaris Live Upgrade packages from a previous release, you do not need to remove this package.
# pkgrm SUNWlucfg SUNWluu SUNWlur
Install the new Solaris Live Upgrade packages. For instructions, see Installing Solaris Live Upgrade.
Before installing or running Solaris Live Upgrade, you must install the required patches. These patches ensure that you have all the latest bug fixes and new features in the release.
Ensure that you have the most recently updated patch list by consulting SunSolve. Search for the info doc 206844 (formerly 72099) on the SunSolve web site.
Become superuser or assume an equivalent role.
From the SunSolve web site, obtain the list of patches.
If you are storing the patches on a local disk, create a directory such as /var/tmp/lupatches and download the patches to that directory.
Change to the patch directory.
# cd /var/tmp/lupatches
Install the patches with the patchadd command.
# patchadd patch_id
patch_id is the patch number or numbers. Separate multiple patch numbers with a space.
The patches need to be applied in the order that is specified in info doc 206844.
Reboot the system if necessary. Certain patches require a reboot to be effective.
x86 only: Rebooting the system is required. Otherwise, Solaris Live Upgrade fails.
# init 6
Create the new boot environment.
# lucreate [-c zfsBE] -n new-zfsBE
-c zfsBE: Assigns the name zfsBE to the current boot environment. This option is not required and is used only when the first boot environment is created. If you run lucreate for the first time and you omit the -c option, the software creates a default name for you.
-n new-zfsBE: Assigns the name new-zfsBE to the boot environment to be created. The name must be unique on the system.
The creation of the new boot environment is almost instantaneous. A snapshot is created of each dataset in the current ZFS root pool, and a clone is then created from each snapshot. Snapshots are very disk-space efficient, and this process uses minimal disk space. When the inactive boot environment has been created, you can use the luupgrade or luactivate command to upgrade or activate the new ZFS boot environment.
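Conceptually, the cloning that lucreate performs is similar to taking a snapshot of the current root dataset and creating a clone from that snapshot, as in the following sketch. The dataset names match the example output shown later in this procedure; lucreate runs the equivalent operations for you, so you do not enter these commands yourself.

# zfs snapshot rpool/ROOT/zfsBE@new-zfsBE
# zfs clone rpool/ROOT/zfsBE@new-zfsBE rpool/ROOT/new-zfsBE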
(Optional) Verify that the boot environment is complete.
The lustatus command reports whether the boot environment creation is complete and bootable.
# lustatus
boot environment           Is        Active  Active     Can     Copy
Name                       Complete  Now     OnReboot   Delete  Status
------------------------------------------------------------------------
zfsBE                      yes       yes     yes        no      -
new-zfsBE                  yes       no      no         yes     -
(Optional) Verify the basic dataset information on the system.
In this example, the ZFS root pool is named rpool, and the @ symbol indicates a snapshot. The new boot environment mount points are temporary until the luactivate command is executed. The /dump and /swap volumes are shared with the ZFS root pool and boot environments within the root pool.
# zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
rpool                         9.29G  57.6G    20K  /rpool
rpool/ROOT                    5.38G  57.6G    18K  /rpool/ROOT
rpool/ROOT/zfsBE              5.38G  57.6G   551M
rpool/ROOT/zfsBE@new-zfsBE    66.5K      -   551M  -
rpool/ROOT/new-zfsBE          85.5K  57.6G   551M  /tmp/.alt.103197
rpool/dump                    1.95G      -  1.95G  -
rpool/swap                    1.95G      -  1.95G  -
You can now upgrade and activate the new boot environment. See Example 13–2.
The following commands create a new ZFS boot environment, new-zfsBE. The -p option is not required because the boot environment is being created within the same root pool.
# lucreate [-c zfsBE] -n new-zfsBE
Analyzing system configuration.
Comparing source boot environment <zfsBE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Creating configuration for boot environment new-zfsBE.
Source boot environment is zfsBE.
Creating boot environment new-zfsBE.
Cloning file systems from boot environment zfsBE to create
boot environment new-zfsBE.
Creating snapshot for <rpool> on <rpool>
Creating clone for <rpool>.
Setting canmount=noauto for <rpool> in zone <global> on <rpool>.
Population of boot environment zfsBE successful on <rpool>.

# lustatus
boot environment           Is        Active  Active     Can     Copy
Name                       Complete  Now     OnReboot   Delete  Status
------------------------------------------------------------------------
zfsBE                      yes       yes     yes        no      -
new-zfsBE                  yes       no      no         yes     -

# zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
rpool                         9.29G  57.6G    20K  /rpool
rpool/ROOT                    5.38G  57.6G    18K  /rpool/ROOT
rpool/ROOT/zfsBE              5.38G  57.6G   551M
rpool/ROOT/zfsBE@new-zfsBE    66.5K      -   551M  -
rpool/ROOT/new-zfsBE          85.5K  57.6G   551M  /tmp/.alt.103197
rpool/dump                    1.95G      -  1.95G  -
rpool/swap                    1.95G      -  1.95G  -
You can now upgrade and activate the new boot environment. For an example of upgrading a ZFS boot environment, see Example 13–1. For more examples of using the luupgrade command, see Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).
# luactivate new-zfsBE
**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************
In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Change the boot device back to the original boot environment by typing:

     setenv boot-device /pci@1f,0/pci@1/scsi@4,1/disk@2,0:a

3. Boot to the original boot environment by typing:

     boot

**********************************************************************

Modifying boot archive service
Activation of boot environment <new-zfsBE> successful.
Reboot the system to the ZFS boot environment.
# init 6
# svc.startd: The system is coming down.  Please wait.
svc.startd: 79 system services are now being stopped.
.
.
.
If you have an existing ZFS root pool and want to create a new ZFS boot environment in a new root pool, the following procedure provides the steps. After the inactive boot environment is created, the new boot environment can be upgraded and activated at your convenience. The -p option is required to specify where to place the new boot environment. The ZFS root pool for the new boot environment must be on a separate slice to be bootable and upgradeable.
Complete the following steps the first time you perform a Solaris Live Upgrade.
Using Solaris Live Upgrade to create new ZFS boot environments requires at least the Solaris 10 10/08 release to be installed. Previous releases do not have the ZFS and Solaris Live Upgrade software to perform the tasks.
Remove existing Solaris Live Upgrade packages on your system if necessary. If you are upgrading to a new release, you must install the packages from that release.
The three Solaris Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade by using Solaris Live Upgrade. These packages include existing software, new features, and bug fixes. If you do not remove the existing packages and install the new packages on your system before using Solaris Live Upgrade, upgrading to the target release fails.
The SUNWlucfg package is new starting with the Solaris 10 8/07 release. If you are using Solaris Live Upgrade packages from a previous release, you do not need to remove this package.
# pkgrm SUNWlucfg SUNWluu SUNWlur
Install the new Solaris Live Upgrade packages. For instructions, see Installing Solaris Live Upgrade.
Before installing or running Solaris Live Upgrade, you must install the required patches. These patches ensure that you have all the latest bug fixes and new features in the release.
Ensure that you have the most recently updated patch list by consulting SunSolve. Search for the info doc 206844 (formerly 72099) on the SunSolve web site.
Become superuser or assume an equivalent role.
From the SunSolve web site, obtain the list of patches.
If you are storing the patches on a local disk, create a directory such as /var/tmp/lupatches and download the patches to that directory.
Change to the patch directory.
# cd /var/tmp/lupatches
Install the patches with the patchadd command.
# patchadd patch_id
patch_id is the patch number or numbers. Separate multiple patch numbers with a space.
The patches need to be applied in the order that is specified in info doc 206844.
Reboot the system if necessary. Certain patches require a reboot to be effective.
x86 only: Rebooting the system is required. Otherwise, Solaris Live Upgrade fails.
# init 6
Create a ZFS root pool.
The ZFS root pool must be on a single slice to be bootable and upgradeable.
# zpool create rpool2 c0t1d0s5
rpool2: Specifies the name of the new ZFS root pool to be created.
c0t1d0s5: Creates the new root pool on the bootable slice, c0t1d0s5.
For information about creating a new root pool, see the Solaris ZFS Administration Guide.
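At this point the system contains two root pools. You can confirm that the new pool is online alongside the existing one, for example:

# zpool list

The output should include both the existing pool (rpool in this example) and rpool2.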
Create the new boot environment.
# lucreate [-c zfsBE] -n new-zfsBE -p rpool2
-c zfsBE: Assigns the name zfsBE to the current ZFS boot environment.
-n new-zfsBE: Assigns the name new-zfsBE to the boot environment to be created. The name must be unique on the system.
-p rpool2: Places the newly created ZFS root boot environment into the ZFS root pool defined in rpool2.
The creation of the new ZFS boot environment might take a while. The file system data is being copied to the new ZFS root pool. When the inactive boot environment has been created, you can use the luupgrade or luactivate command to upgrade or activate the new ZFS boot environment.
(Optional) Verify that the boot environment is complete.
The lustatus command reports whether the boot environment creation is complete and bootable.
# lustatus
boot environment           Is        Active  Active     Can     Copy
Name                       Complete  Now     OnReboot   Delete  Status
------------------------------------------------------------------------
zfsBE                      yes       yes     yes        no      -
new-zfsBE                  yes       no      no         yes     -
(Optional) Verify the basic dataset information on the system.
The following example displays the names of all datasets on the system. The mount points listed for the new boot environment are temporary until the luactivate command is executed. The new boot environment shares the rpool2/dump and rpool2/swap volumes with the boot environments in the rpool2 root pool.
# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool2                     9.29G  57.6G    20K  /rpool2
rpool2/ROOT/               5.38G  57.6G    18K  /rpool2/ROOT
rpool2/ROOT/new-zfsBE      5.38G  57.6G   551M  /tmp/.new.luupdall.109859
rpool2/dump                3.99G      -  3.99G  -
rpool2/swap                3.99G      -  3.99G  -
rpool                      9.29G  57.6G    20K  /.new.lulib.rs.109262
rpool/ROOT                 5.46G  57.6G    18K  legacy
rpool/ROOT/zfsBE           5.46G  57.6G   551M
rpool/dump                 3.99G      -  3.99G  -
rpool/swap                 3.99G      -  3.99G  -
You can now upgrade and activate the new boot environment. See Example 13–3.
In this example, a new ZFS root pool, rpool2, is created on a separate slice, c0t1d0s5. The lucreate command creates a new ZFS boot environment, new-zfsBE. The -p option is required, because the boot environment is being created in a different root pool.
# zpool create rpool2 c0t1d0s5
# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool2                     9.29G  57.6G    20K  /rpool2
rpool                      9.29G  57.6G    20K  /.new.lulib.rs.109262
rpool/ROOT                 5.46G  57.6G    18K  legacy
rpool/ROOT/zfsBE           5.46G  57.6G   551M
rpool/dump                 3.99G      -  3.99G  -
rpool/swap                 3.99G      -  3.99G  -

# lucreate -c rpool -n new-zfsBE -p rpool2
Analyzing system configuration.
Current boot environment is named <rpool>.
Creating initial configuration for primary boot environment <rpool>.
The device </dev/dsk/c0t0d0> is not a root device for any boot environment;
cannot get BE ID.
PBE configuration successful: PBE name <rpool> PBE Boot Device </dev/dsk/rpool>.
Comparing source boot environment <rpool> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment;
cannot get BE ID.
Creating configuration for boot environment <new-zfsBE>.
Source boot environment is <rpool>.
Creating boot environment <new-zfsBE>.
Creating file systems on boot environment <new-zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rpool2/ROOT/new-zfsBE>.
Populating file systems on boot environment <new-zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </>.
Making boot environment <new-zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-cBc.mnt
updating /.alt.tmp.b-cBc.mnt/platform/sun4u/boot_archive
Population of boot environment <new-zfsBE> successful.
Creation of boot environment <new-zfsBE> successful.

# lustatus
boot environment           Is        Active  Active     Can     Copy
Name                       Complete  Now     OnReboot   Delete  Status
------------------------------------------------------------------------
zfsBE                      yes       yes     yes        no      -
new-zfsBE                  yes       no      no         yes     -

# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool2                     9.29G  57.6G    20K  /rpool2
rpool2/ROOT/               5.38G  57.6G    18K  /rpool2/ROOT
rpool2/ROOT/new-zfsBE      5.38G  57.6G   551M  /tmp/.new.luupdall.109859
rpool2/dump                3.99G      -  3.99G  -
rpool2/swap                3.99G      -  3.99G  -
rpool                      9.29G  57.6G    20K  /.new.lulib.rs.109262
rpool/ROOT                 5.46G  57.6G    18K  legacy
rpool/ROOT/zfsBE           5.46G  57.6G   551M
rpool/dump                 3.99G      -  3.99G  -
rpool/swap                 3.99G      -  3.99G  -
If you have an existing ZFS root pool or UFS boot environment that is not currently used as the active boot environment, you can use the following example to create a new ZFS boot environment from this boot environment. After the new ZFS boot environment is created, this new boot environment can be upgraded and activated at your convenience.
If you are creating a boot environment from a source other than the currently running system, you must use the lucreate command with the -s option. The -s option works the same as for a UFS file system. The -s option provides the path to the alternate root (/) file system. This alternate root (/) file system is the source for the creation of the new ZFS root pool. The alternate root can be either a UFS (/) root file system or a ZFS root pool. The copy process might take time, depending on your system.
The following example shows how the -s option is used when creating a boot environment on another ZFS root pool.
The following command creates a new ZFS boot environment from an existing ZFS root pool. The -n option assigns the name to the boot environment to be created, new-zfsBE. The -s option specifies the boot environment, rpool3, to be used as the source of the copy instead of the currently running boot environment. The -p option specifies to place the new boot environment in rpool2.
# lucreate -n new-zfsBE -s rpool3 -p rpool2

# lustatus
boot environment           Is        Active  Active     Can     Copy
Name                       Complete  Now     OnReboot   Delete  Status
------------------------------------------------------------------------
zfsBE                      yes       yes     yes        no      -
zfsBE2                     yes       no      no         yes     -
zfsBE3                     yes       no      no         yes     -
new-zfsBE                  yes       no      no         yes     -

# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool2                     9.29G  57.6G    20K  /rpool2
rpool2/ROOT/               5.38G  57.6G    18K  /rpool2/ROOT
rpool2/ROOT/new-zfsBE      5.38G  57.6G   551M  /tmp/.new.luupdall.109859
rpool2/dump                3.99G      -  3.99G  -
rpool2/swap                3.99G      -  3.99G  -
rpool3                     9.29G  57.6G    20K  /rpool2
rpool3/ROOT/               5.38G  57.6G    18K  /rpool2/ROOT
rpool3/ROOT/zfsBE3         5.38G  57.6G   551M  /tmp/.new.luupdall.109859
rpool3/dump                3.99G      -  3.99G  -
rpool3/swap                3.99G      -  3.99G  -
rpool                      9.29G  57.6G    20K  /.new.lulib.rs.109262
rpool/ROOT                 5.46G  57.6G    18K  legacy
rpool/ROOT/zfsBE           5.46G  57.6G   551M
rpool/dump                 3.99G      -  3.99G  -
rpool/swap                 3.99G      -  3.99G  -
You can now upgrade and activate the new boot environment.
If a failure is detected after upgrading or if the application is not compatible with an upgraded component, you can fall back to the original boot environment with the luactivate command.
When you have migrated to a ZFS root pool from a UFS boot environment and you then decide to fall back to the UFS boot environment, you again need to import any ZFS storage pools that were created in the ZFS boot environment. These ZFS storage pools are not automatically available in the UFS boot environment. You will see messages similar to the following example when you switch back to the UFS boot environment.
# luactivate c0t0d0
WARNING: The following files have changed on both the current boot
environment <new-ZFSbe> zone <global> and the boot environment
to be activated <c0t0d0>:
/etc/zfs/zpool.cache
INFORMATION: The files listed above are in conflict between the current
boot environment <ZFSbe> zone <global> and the boot environment to be
activated <c0t0d0>. These files will not be automatically synchronized
from the current boot environment <new-ZFSbe> when boot
environment <c0t0d0>
For examples of falling back to the original boot environment, see Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks).
For additional information about the topics included in this chapter, see the resources listed in Table 13–1.
Table 13–1 Additional Resources
Resource | Location
---|---
For ZFS information, including overview, planning, and step-by-step instructions | Solaris ZFS Administration Guide
For using Solaris Live Upgrade on a system with UFS file systems | Part I, Upgrading With Solaris Live Upgrade of this book