Oracle Solaris 10 9/10 Installation Guide: Solaris Live Upgrade and Upgrade Planning
Part I Upgrading With Solaris Live Upgrade
1. Where to Find Solaris Installation Planning Information
2. Solaris Live Upgrade (Overview)
3. Solaris Live Upgrade (Planning)
4. Using Solaris Live Upgrade to Create a Boot Environment (Tasks)
5. Upgrading With Solaris Live Upgrade (Tasks)
6. Failure Recovery: Falling Back to the Original Boot Environment (Tasks)
7. Maintaining Solaris Live Upgrade Boot Environments (Tasks)
8. Upgrading the Solaris OS on a System With Non-Global Zones Installed
9. Solaris Live Upgrade (Examples)
10. Solaris Live Upgrade (Command Reference)
Part II Upgrading and Migrating With Solaris Live Upgrade to a ZFS Root Pool
11. Solaris Live Upgrade and ZFS (Overview)
12. Solaris Live Upgrade for ZFS (Planning)
13. Creating a Boot Environment for ZFS Root Pools
Creating a Boot Environment Within the Same ZFS Root Pool
How to Create a ZFS Boot Environment Within the Same ZFS Root Pool
Creating a Boot Environment In a New Root Pool
How to Create a Boot Environment on a New ZFS Root Pool
Creating a Boot Environment From a Source Other Than the Currently Running System
Falling Back to a ZFS Boot Environment
14. Solaris Live Upgrade For ZFS With Non-Global Zones Installed
B. Additional SVR4 Packaging Requirements (Reference)
This procedure describes how to migrate a UFS file system to a ZFS file system. Creating a boot environment provides a method of copying critical file systems from an active UFS boot environment to a ZFS root pool. The lucreate command copies the critical file systems to a new boot environment within an existing ZFS root pool. User-defined (shareable) file systems are not copied and are not shared with the source UFS boot environment. Also, /swap is not shared between the UFS file system and ZFS root pool. For an overview of critical and shareable file systems, see File System Types.
Note - To migrate an active UFS root (/) file system to a ZFS root pool, you must provide the name of the root pool. The critical file systems are copied into the root pool.
The latest packages and patches ensure that you have all the latest bug fixes and new features in the release. Ensure that you install all the patches that are relevant to your system before proceeding to create a new boot environment.
The following substeps summarize the steps that are documented in SunSolve Infodoc 206844.
Note - Using Solaris Live Upgrade to create new ZFS boot environments requires at least the Solaris 10 10/08 release to be installed. Previous releases do not have the ZFS and Solaris Live Upgrade software to perform the tasks.
Note - Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
The three Solaris Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade by using Solaris Live Upgrade. These packages include existing software, new features, and bug fixes. If you do not remove the existing packages and install the new packages on your system before using Solaris Live Upgrade, upgrading to the target release fails. The SUNWlucfg package is new starting with the Solaris 10 8/07 release. If you are using Solaris Live Upgrade packages from a release previous to Solaris 10 8/07, you do not need to remove this package.
# pkgrm SUNWlucfg SUNWluu SUNWlur
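The replacement packages are then installed from the media or image of the release that you are upgrading to. The following is a minimal sketch, assuming the target release image is mounted at /cdrom/cdrom0 (a hypothetical path; substitute the location of your own media or network install image):
# pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu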
Ensure that you have the most recently updated patch list by consulting SunSolve. Search for Infodoc 206844 (formerly 72099) on the SunSolve web site.
If you are storing the patches on a local disk, create a directory such as /var/tmp/lupatches and download the patches to that directory.
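For example, the patch directory might be created as follows; the path matches the /var/tmp/lupatches suggestion above:
# mkdir -p /var/tmp/lupatches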
From the SunSolve web site, obtain the list of patches.
Change to the patch directory.
# cd /var/tmp/lupatches
Install the patches with the patchadd command.
# patchadd patch_id
patch_id is the patch number or numbers. Separate multiple patch numbers with a space.
Note - The patches need to be applied in the order that is specified in Infodoc 206844.
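For example, with two placeholder patch IDs (substitute the actual patch IDs from Infodoc 206844, in the order the Infodoc specifies):
# patchadd 111111-01 222222-02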
Reboot the system if necessary. Certain patches require a reboot to be effective.
x86 only: Rebooting the system is required; otherwise, Solaris Live Upgrade fails.
# init 6
You now have the packages and patches necessary for a successful migration.
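As an optional check, you can confirm that the Solaris Live Upgrade packages and patches are in place before continuing; the patch ID shown is a placeholder:
# pkginfo SUNWlucfg SUNWlur SUNWluu
# patchadd -p | grep 111111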
Create a ZFS root pool. The ZFS root pool must be on a single slice to be bootable and upgradeable.
# zpool create rpool c0t1d0s5
rpool
Specifies the name of the new ZFS root pool to be created.
c0t1d0s5
Creates the new root pool on the disk slice, c0t1d0s5.
For information about creating a new root pool, see the Oracle Solaris ZFS Administration Guide.
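Before continuing, you can optionally confirm that the pool is healthy and was created on the intended slice; the pool name rpool and slice c0t1d0s5 match the example above:
# zpool status rpool
# zfs list -r rpool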
# lucreate [-c ufsBE] -n new-zfsBE -p rpool
-c ufsBE
Assigns the name ufsBE to the current UFS boot environment. This option is not required and is used only when the first boot environment is created. If you run the lucreate command for the first time and you omit the -c option, the software creates a default name for you.
-n new-zfsBE
Assigns the name new-zfsBE to the boot environment to be created. The name must be unique on the system.
-p rpool
Places the newly created ZFS root (/) file system into the ZFS root pool defined in rpool.
The creation of the new ZFS boot environment might take a while. The UFS file system data is being copied to the ZFS root pool. When the inactive boot environment has been created, you can use the luupgrade or luactivate command to upgrade or activate the new ZFS boot environment.
In this example, the lustatus command reports whether the boot environment creation is complete and bootable.
# lustatus
boot environment   Is        Active  Active     Can     Copy
Name               Complete  Now     OnReboot   Delete  Status
-----------------------------------------------------------------
ufsBE              yes       yes     yes        no      -
new-zfsBE          yes       no      no         yes     -
The zfs list command displays the names of all datasets on the system. In this example, rpool is the name of the ZFS pool and new-zfsBE is the name of the newly created ZFS boot environment.
# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      9.29G  57.6G    20K  /rpool
rpool/ROOT                 5.38G  57.6G    18K  /rpool/ROOT
rpool/ROOT/new-zfsBE       5.38G  57.6G   551M  /tmp/.alt.luupdall.110034
rpool/dump                 1.95G      -  1.95G  -
rpool/swap                 1.95G      -  1.95G  -
The mount points listed for the new boot environment are temporary until the luactivate command is executed. The /dump and /swap volumes are not shared with the original UFS boot environment, but are shared within the ZFS root pool and boot environments within the root pool.
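If you want to confirm this, the swap and dump volumes can be listed separately from the file system datasets; a minimal check, assuming the pool name rpool from this example:
# zfs list -t volume -r rpool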
You can now upgrade and activate the new boot environment. See Example 13-1.
Example 13-1 Migrating a UFS Root (/) File System to a ZFS Root Pool
In this example, the new ZFS root pool, rpool, is created on a separate slice, c0t0d0s4. The lucreate command migrates the currently running UFS boot environment, c0t0d0, to the new ZFS boot environment, new-zfsBE, and places the new boot environment in rpool.
# zpool create rpool c0t0d0s4
# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      9.29G  57.6G    20K  /rpool
# lucreate -c c0t0d0 -n new-zfsBE -p rpool
Analyzing system configuration.
Current boot environment is named <c0t0d0>.
Creating initial configuration for primary boot environment <c0t0d0>.
The device </dev/dsk/c0t0d0> is not a root device for any boot environment;
cannot get BE ID.
PBE configuration successful: PBE name <c0t0d0> PBE Boot Device </dev/dsk/c0t0d0>.
Comparing source boot environment <c0t0d0> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment;
cannot get BE ID.
Creating configuration for boot environment <new-zfsBE>.
Source boot environment is <c0t0d0>.
Creating boot environment <new-zfsBE>.
Creating file systems on boot environment <new-zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/new-zfsBE>.
Populating file systems on boot environment <new-zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </>.
Making boot environment <zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-cBc.mnt
updating /.alt.tmp.b-cBc.mnt/platform/sun4u/boot_archive
Population of boot environment <new-zfsBE> successful.
Creation of boot environment <new-zfsBE> successful.
# lustatus
boot environment   Is        Active  Active     Can     Copy
Name               Complete  Now     OnReboot   Delete  Status
------------------------------------------------------------------------
c0t0d0             yes       yes     yes        no      -
new-zfsBE          yes       no      no         yes     -
# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      9.29G  57.6G    20K  /rpool
rpool/ROOT                 5.38G  57.6G    18K  /rpool/ROOT
rpool/ROOT/zfsBE           5.38G  57.6G   551M
rpool/ROOT/new-zfsBE       5.38G  57.6G   551M  /tmp/.alt.luupdall.110034
rpool/dump                 1.95G      -  1.95G  -
rpool/swap                 1.95G      -  1.95G  -
You can now upgrade or activate the new boot environment.
In this example, the new boot environment is upgraded by using the luupgrade command from an image that is stored in the location indicated with the -s option.
# luupgrade -n zfsBE -u -s /net/install/export/s10/combined.s10
51135 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </net/install/export/solaris_10/combined.solaris_10_wos
/Solaris_10/Tools/Boot>
Validating the contents of the media </net/install/export/s10/combined.s10>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains Solaris version <10_1008>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <zfsBE>.
Determining packages to install or upgrade for BE <zfsBE>.
Performing the operating system upgrade of the BE <zfsBE>.
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Adding operating system patches to the BE <zfsBE>.
The operating system patch installation is complete.
INFORMATION: The file /var/sadm/system/logs/upgrade_log on boot
environment <zfsBE> contains a log of the upgrade operation.
INFORMATION: The file /var/sadm/system/data/upgrade_cleanup on boot
environment <zfsBE> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment <zfsBE>. Before you activate boot
environment <zfsBE>, determine if any additional system maintenance is
required or if additional media of the software distribution must be
installed.
The Solaris upgrade of the boot environment <zfsBE> is complete.
The new boot environment can be activated anytime after it is created.
# luactivate new-zfsBE
**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Change the boot device back to the original boot environment by typing:

     setenv boot-device /pci@1f,0/pci@1/scsi@4,1/disk@2,0:a

3. Boot to the original boot environment by typing:

     boot

**********************************************************************

Modifying boot archive service
Activation of boot environment <new-zfsBE> successful.
Reboot the system to the ZFS boot environment.
# init 6
# svc.startd: The system is coming down.  Please wait.
svc.startd: 79 system services are now being stopped.
.
.
.
If you fall back to the UFS boot environment, you must re-import any ZFS storage pools that were created in the ZFS boot environment because they are not automatically available in the UFS boot environment. You will see messages similar to the following example when you switch back to the UFS boot environment.
# luactivate c0t0d0
WARNING: The following files have changed on both the current boot
environment <new-zfsBE> zone <global> and the boot environment
to be activated <c0t0d0>:
/etc/zfs/zpool.cache
INFORMATION: The files listed above are in conflict between the current
boot environment <zfsBE> zone <global> and the boot environment to be
activated <c0t0d0>. These files will not be automatically synchronized
from the current boot environment <new-zfsBE> when boot environment <c0t0d0>
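After booting the UFS boot environment, any storage pools other than the root pool that were created while the ZFS boot environment was active can be re-imported. A minimal sketch, using the hypothetical pool name tank: running zpool import with no arguments lists the pools that are available for import, and naming the pool imports it.
# zpool import
# zpool import tank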