All previous Solaris Live Upgrade features remain available. Features that relate to UFS components work as they did in previous Solaris releases.
The following features are available:
When you migrate your UFS root file system to a ZFS root file system, you must designate an existing ZFS storage pool with the -p option.
If the UFS root file system has components on different slices, they are migrated to the ZFS root pool.
Solaris Live Upgrade can use the ZFS snapshot and clone features when you create a new ZFS BE in the same pool, so BE creation is much faster than in previous Solaris releases.
For detailed information about Solaris installation and Solaris Live Upgrade features, see the Solaris Express Installation Guide: Solaris Live Upgrade and Upgrade Planning.
The basic process for migrating a UFS root file system to a ZFS root file system is as follows:
Install the required Solaris Live Upgrade patches, if needed. For a list of patches, see Required Solaris Live Upgrade Patch Information.
Install the SXCE, build 90 release or use the standard upgrade program to upgrade from a previous SXCE release to the SXCE build 90 release on any supported SPARC based or x86 based system.
When you are running the SXCE, build 90 release, create a ZFS storage pool for your ZFS root file system.
Use Solaris Live Upgrade to migrate your UFS root file system to a ZFS root file system.
Activate your ZFS BE with the luactivate command.
For information about ZFS and Solaris Live Upgrade requirements, see Solaris Installation and Solaris Live Upgrade Requirements for ZFS Support.
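Put together, the sequence of commands is similar to the following sketch, which assumes a mirrored root pool named rpool and BE names ufsBE and zfsBE (substitute your own devices and names; the full output is shown in the detailed example later in this section):

# zpool create rpool mirror c1t0d0s0 c1t1d0s0
# lucreate -c ufsBE -n zfsBE -p rpool
# luactivate zfsBE
# init 6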
For the SXCE build 90 release, the install images are compressed with the 7zip utility and must be unzipped with that utility. If you want to install the appropriate patches rather than upgrading or reinstalling to build 90, you must apply the following patches for Solaris Live Upgrade to succeed with the SXCE build 90 release:
137321-01 or later (Solaris 10 SPARC)
137322-01 or later (Solaris 10 x86)
137477-01 or later (Solaris 9 SPARC)
137478-01 or later (Solaris 9 x86)
This chapter does not cover Solaris 10 issues. However, if you are attempting to use Solaris Live Upgrade from a Solaris 10 release to the Nevada build 90 release, note that the Solaris 10 5/08 release has included the 7zip utility since build 5. The patches listed above are only necessary if you are running a release older than the Solaris 10 5/08 release.
If you want to use Solaris Live Upgrade from a Solaris 10 system with zones installed, you must also apply the following cpio patches:
127922-03 or later (Solaris 10 SPARC)
127923-03 or later (Solaris 10 x86)
If you want to use Solaris Live Upgrade from Nevada builds before build 79, you must install the SUNWp7zip package from the latest Nevada build.
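Solaris patches of this kind are applied with the patchadd command. For example, a sketch on a Solaris 10 SPARC system, assuming the patch has been downloaded and unpacked in /var/tmp (substitute the patch ID that matches your release and architecture):

# patchadd /var/tmp/137321-01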
Review the following list of issues before you use Solaris Live Upgrade to migrate your UFS root file system to a ZFS root file system:
The Solaris installation GUI's standard-upgrade option is not available for migrating from a UFS to a ZFS root file system. To migrate from a UFS file system, you must use Solaris Live Upgrade.
You must create the ZFS storage pool that will be used for booting before the Solaris Live Upgrade operation. In addition, due to current boot limitations, the ZFS root pool must be created with slices instead of whole disks. For example:
# zpool create rpool mirror c1t0d0s0 c1t1d0s0
Before you create the new pool, make sure that the disks to be used in the pool have an SMI (VTOC) label instead of an EFI label. If the disk is relabeled with an SMI label, make sure that the labeling process did not change the partitioning scheme. In most cases, all of the disk's capacity should be in the slices that are intended for the root pool.
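One way to apply an SMI label is with the format utility in expert mode. The following is only a sketch, and the exact prompts vary by release; note that relabeling a disk can destroy the data on it:

# format -e c1t0d0
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0
format> quit

You can then review the slice layout with a command such as prtvtoc /dev/rdsk/c1t0d0s2 before creating the pool.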
You cannot use Solaris Live Upgrade to create a UFS BE from a ZFS BE. If you migrate your UFS BE to a ZFS BE and you retain your UFS BE, you can boot from either your UFS BE or your ZFS BE.
Do not rename your ZFS BEs with the zfs rename command because the Solaris Live Upgrade feature is unaware of the name change. Subsequent commands, such as ludelete, will fail. In fact, do not rename your ZFS pools or file systems if you have existing BEs that you want to continue to use.
When you create an alternate BE that is a clone of the primary BE, you cannot use the -f, -x, -y, -Y, and -z options to include or exclude files from the primary BE. However, you can still use these inclusion and exclusion options in the following cases:
UFS -> UFS
UFS -> ZFS
ZFS -> ZFS (different pool)
Although you can use Solaris Live Upgrade to upgrade your UFS root file system to a ZFS root file system, you cannot use Solaris Live Upgrade to upgrade non-root or shared file systems.
If you are attempting to use Solaris Live Upgrade from a Solaris 10 release to the Nevada build 90 release, you might need to perform steps similar to the following:
# lucreate -n newBE -m /:cXdYsZ:ufs
# luupgrade -n newBE -u -s </path/to/snv_90>
# luactivate newBE
# init 6
You cannot use the lu command to create or migrate a ZFS root file system.
The following example shows how to create a ZFS root file system BE from a UFS root file system. The current UFS BE, which resides on c1t1d0s0, is given the name ufsnv109BE with the -c option. The new BE, zfsnv109BE, is identified by the -n option. A ZFS storage pool must exist before the lucreate operation, and it must be created with slices rather than whole disks to be upgradeable and bootable.
# zpool create mpool mirror c1t0d0s0 c1t2d0s0
# lucreate -c ufsnv109BE -n zfsnv109BE -p mpool
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <ufsnv109BE>.
Creating initial configuration for primary boot environment <ufsnv109BE>.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment;
cannot get BE ID.
PBE configuration successful: PBE name <ufsnv109BE> PBE Boot Device </dev/dsk/c1t1d0s0>.
Comparing source boot environment <ufsnv109BE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment;
cannot get BE ID.
Creating configuration for boot environment <zfsnv109BE>.
Source boot environment is <ufsnv109BE>.
Creating boot environment <zfsnv109BE>.
Creating file systems on boot environment <zfsnv109BE>.
Creating <zfs> file system for </> in zone <global> on <mpool/ROOT/zfsnv109BE>.
Populating file systems on boot environment <zfsnv109BE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfsnv109BE>.
Creating compare database for file system </mpool/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfsnv109BE>.
Making boot environment <zfsnv109BE> bootable.
Creating boot_archive for /.alt.tmp.b-0ob.mnt
updating /.alt.tmp.b-0ob.mnt/platform/sun4u/boot_archive
Population of boot environment <zfsnv109BE> successful.
Creation of boot environment <zfsnv109BE> successful.
After the lucreate operation completes, use the lustatus command to view the BE status. For example:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsnv109BE                 yes      yes    yes       no     -
zfsnv109BE                 yes      no     no        yes    -
Then, review the list of ZFS components. For example:
# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
mpool                  9.95G  41.2G    21K  /mpool
mpool/ROOT             7.45G  41.2G    19K  /mpool/ROOT
mpool/ROOT/zfsnv109BE  7.45G  41.2G  7.45G  /tmp/.alt.luupdall.5232
mpool/dump                2G  43.2G    16K  -
mpool/swap              517M  41.7G    16K  -
Next, use the luactivate command to activate the new ZFS BE. For example:
# luactivate zfsnv109BE
**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Change the boot device back to the original boot environment by typing:

     setenv boot-device /pci@1f,700000/scsi@2/disk@1,0:a

3. Boot to the original boot environment by typing:

     boot

**********************************************************************

Modifying boot archive service
Activation of boot environment <zfsnv109BE> successful.
Next, reboot the system to the ZFS BE.
# init 6
Confirm that the ZFS BE is active.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsnv109BE                 yes      no     no        yes    -
zfsnv109BE                 yes      yes    yes       no     -
If you switch back to the UFS BE, you will need to re-import any ZFS storage pools that were created while the ZFS BE was booted because they are not automatically available in the UFS BE.
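For example, running zpool import with no arguments lists the pools that are available for import, and a second command with the pool name imports one of them. A sketch, using a hypothetical pool named datapool:

# zpool import
# zpool import datapool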
If the UFS BE is no longer required, you can remove it with the ludelete command.
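For example, using the UFS BE name from the earlier example:

# ludelete ufsnv109BE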
Creating a ZFS BE from another ZFS BE in the same pool is very quick because this operation uses the ZFS snapshot and clone features. If the current BE resides in the same ZFS pool (mpool, in this example), the -p option is omitted.
If you have multiple ZFS BEs on a SPARC based system, you can use the boot -L command to identify the available BEs and select a BE from which to boot by using the boot -Z command. On an x86 based system, you can select a BE from the GRUB menu. For more information, see Example 5–6.
# lucreate -n zfsnv1092BE
Analyzing system configuration.
Comparing source boot environment <zfsnv109BE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <zfsnv1092BE>.
Source boot environment is <zfsnv109BE>.
Creating boot environment <zfsnv1092BE>.
Cloning file systems from boot environment <zfsnv109BE> to create boot environment <zfsnv1092BE>.
Creating snapshot for <mpool/ROOT/zfsnv109BE> on <mpool/ROOT/zfsnv109BE@zfsnv1092BE>.
Creating clone for <mpool/ROOT/zfsnv109BE@zfsnv1092BE> on <mpool/ROOT/zfsnv1092BE>.
Setting canmount=noauto for </> in zone <global> on <mpool/ROOT/zfsnv1092BE>.
Population of boot environment <zfsnv1092BE> successful.
Creation of boot environment <zfsnv1092BE> successful.
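On a SPARC based system, selecting between these BEs at boot time looks similar to the following sketch from the OpenBoot ok prompt (the boot -L listing output is omitted here, and the dataset name assumes the BEs created above):

ok boot -L
.
.
.
ok boot -Z mpool/ROOT/zfsnv1092BE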
You can upgrade your ZFS BE to a later build by using the luupgrade command. The following example shows how to upgrade a ZFS BE from build 109 to build 110.
The basic process is:
Create an alternate BE with the lucreate command.
Activate and boot from the alternate BE.
Upgrade your primary ZFS BE with the luupgrade command.
# luupgrade -n zfsnv109BE -u -s /net/install/export/nv/combined.nvs_wos/110
50687 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </net/install/export/nv/combined.nvs_wos/110/Solaris_11/Tools/Boot>
Validating the contents of the media </net/install/export/nv/combined.nvs_wos/110>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <11>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <zfsnv109BE>.
Determining packages to install or upgrade for BE <zfsnv109BE>.
Performing the operating system upgrade of the BE <zfsnv109BE>.
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Adding operating system patches to the BE <zfsnv109BE>.
The operating system patch installation is complete.
INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot
environment <zfsnv109BE> contains a log of the upgrade operation.
INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot
environment <zfsnv109BE> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment <zfsnv109BE>. Before you activate boot
environment <zfsnv109BE>, determine if any additional system maintenance
is required or if additional media of the software distribution must be
installed.
The Solaris upgrade of the boot environment <zfsnv109BE> is complete.