Oracle Solaris 10 9/10 Installation Guide: Solaris Live Upgrade and Upgrade Planning
14. Solaris Live Upgrade For ZFS With Non-Global Zones Installed
Migrating From a UFS root (/) File System With Non-Global Zones Installed to ZFS Root Pool (Tasks)
How to Migrate a UFS File System to a ZFS Root Pool on a System With Non-Global Zones
This chapter provides step-by-step instructions for migrating from a UFS root (/) file system to a ZFS root pool on a system with non-global zones installed. In this procedure, no non-global zones are on a shared file system in the UFS file system.
The lucreate command creates a boot environment of a ZFS root pool from a UFS root (/) file system. A ZFS root pool must exist before the lucreate operation and must be created with slices rather than whole disks to be upgradeable and bootable. This procedure shows how an existing non-global zone associated with the UFS root (/) file system is copied to the new boot environment in a ZFS root pool.
Note - Using Solaris Live Upgrade to create new ZFS boot environments requires at least the Solaris 10 10/08 release to be installed. Previous releases do not have the ZFS and Solaris Live Upgrade software to perform the tasks.
The three Solaris Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade by using Solaris Live Upgrade. These packages include existing software, new features, and bug fixes. If you do not remove the existing packages and install the new packages on your system before using Solaris Live Upgrade, upgrading to the target release fails.
# pkgrm SUNWlucfg SUNWluu SUNWlur
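For example, if the target release media is mounted at /cdrom/cdrom0 (the path is an assumption; adjust it to your media or network install image), the replacement packages could be installed with pkgadd:

# pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu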
Ensure that you have the most recently updated patch list by consulting SunSolve. Search for Infodoc 206844 (formerly 72099) on the SunSolve web site.
Become superuser or assume an equivalent role.
Note - Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
If you are storing the patches on a local disk, create a directory such as /var/tmp/lupatches and download the patches to that directory.
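For example, a minimal sketch of creating the local patch directory (the path is only a suggestion):

# mkdir -p /var/tmp/lupatches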
From the SunSolve web site, obtain the list of patches.
Change to the patch directory.
# cd /var/tmp/lupatches
Install the patches with the patchadd command.
# patchadd patch_id
patch_id is the patch number or numbers. Separate multiple patch names with a space.
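For example, with hypothetical patch IDs (substitute the patch numbers listed in Infodoc 206844):

# patchadd 123456-01 234567-02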
Note - The patches need to be applied in the order that is specified in Infodoc 206844.
Reboot the system if necessary. Certain patches require a reboot to be effective.
x86 only: Rebooting the system is required. Otherwise, Solaris Live Upgrade fails.
# init 6
Create the ZFS root pool. The ZFS root pool must be on a single slice to be bootable and upgradeable.
# zpool create rpool c3t0d0s0
In this example, the name of the new ZFS root pool is rpool. The pool is created on a bootable slice, c3t0d0s0.
For information about creating a new root pool, see the Oracle Solaris ZFS Administration Guide.
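To confirm that the new pool exists and uses the intended slice, you can check it with zpool status (shown here only as a quick sanity check; output varies by system):

# zpool status rpool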
Create the new boot environment with the lucreate command.

# lucreate [-c ufsBE] -n new-zfsBE -p rpool
-c ufsBE

Assigns the name ufsBE to the current UFS boot environment. This option is not required and is used only when the first boot environment is created. If you run the lucreate command for the first time and you omit the -c option, the software creates a default name for you.

-n new-zfsBE

Assigns the name new-zfsBE to the boot environment to be created. The name must be unique on the system.

-p rpool

Places the newly created ZFS root (/) file system into the ZFS root pool defined in rpool.
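For example, with the names used in this procedure, the command line without the optional brackets looks like this:

# lucreate -c ufsBE -n new-zfsBE -p rpool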
All nonshared non-global zones are copied to the new boot environment along with critical file systems. The creation of the new ZFS boot environment might take a while. The UFS file system data is being copied to the ZFS root pool. When the inactive boot environment has been created, you can use the luupgrade or luactivate command to upgrade or activate the new ZFS boot environment.
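As a sketch only, assuming a Solaris installation image is available at a hypothetical path such as /net/installserver/export/s10u8, the new boot environment could later be upgraded with luupgrade:

# luupgrade -u -n new-zfsBE -s /net/installserver/export/s10u8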
The lustatus command reports whether the boot environment creation is complete and bootable.
# lustatus
boot environment           Is        Active  Active     Can    Copy
Name                       Complete  Now     OnReboot   Delete Status
------------------------------------------------------------------------
ufsBE                      yes       yes     yes        no     -
new-zfsBE                  yes       no      no         yes    -
The zfs list command displays the names of all datasets on the system. In this example, rpool is the name of the ZFS pool and new-zfsBE is the name of the newly created ZFS boot environment.
# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      9.29G  57.6G    20K  /rpool
rpool/ROOT                 5.38G  57.6G    18K  /rpool/ROOT
rpool/ROOT/new-zfsBE       5.38G  57.6G   551M  /tmp/.alt.luupdall.110034
rpool/dump                 1.95G      -  1.95G  -
rpool/swap                 1.95G      -  1.95G  -
The mount points listed for the new boot environment are temporary until the luactivate command is executed. The dump and swap volumes are not shared with the original UFS boot environment, but are shared among ZFS boot environments within the root pool.
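When you are ready to switch to the new boot environment, activate it and then reboot with init, as shown in the example that follows:

# luactivate new-zfsBE
# init 6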
Example 14-1 Migrating From a UFS root (/) File System With Non-Global Zones Installed to ZFS Root Pool
In the following example, the existing non-global zone, myzone, has its non-global zone root in a UFS root (/) file system. The zone zzone has its zone root in a ZFS file system in the existing ZFS storage pool, pool. Solaris Live Upgrade is used to migrate the UFS boot environment, c2t2d0s0, to a ZFS boot environment, zfs2BE. The UFS-based myzone zone migrates to a new ZFS storage pool, mpool, that is created before the Solaris Live Upgrade operation. The ZFS-based non-global zone, zzone, is cloned but retained in the ZFS pool pool and migrated to the new zfs2BE boot environment.
# zoneadm list -iv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   - myzone           installed  /zones/myzone                  native   shared
   - zzone            installed  /pool/zones                    native   shared

# zpool create mpool mirror c3t0d0s0 c4td0s0
# lucreate -c c1t2d0s0 -n zfs2BE -p mpool
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <c1t2d0s0>.
Creating initial configuration for primary boot environment <c1t2d0s0>.
The device </dev/dsk/c1t2d0s0> is not a root device for any boot environment;
cannot get BE ID.
PBE configuration successful: PBE name <c1t2d0s0> PBE Boot Device
</dev/dsk/c1t2d0s0>.
Comparing source boot environment <c1t2d0s0> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment;
cannot get BE ID.
Creating configuration for boot environment <zfsBE>.
Source boot environment is <c1t2d0s0>.
Creating boot environment <zfsBE>.
Creating file systems on boot environment <zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/zfsBE>.
Populating file systems on boot environment <zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </>.
Making boot environment <zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-cBc.mnt
updating /.alt.tmp.b-cBc.mnt/platform/sun4u/boot_archive
Population of boot environment <zfsBE> successful.
Creation of boot environment <zfsBE> successful.
When the lucreate operation completes, use the lustatus command to view the boot environment status as in this example.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
c1t2d0s0                   yes      yes    yes       no     -
zfsBE                      yes      no     no        yes    -
# zoneadm list -iv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   - myzone           installed  /zones/myzone                  native   shared
   - zzone            installed  /pool/zones                    native   shared
Next, use the luactivate command to activate the new ZFS boot environment. For example:
# luactivate zfsBE
**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Change the boot device back to the original boot environment by typing:

     setenv boot-device /pci@1f,0/pci@1/scsi@4,1/disk@2,0:a

3. Boot to the original boot environment by typing:

     boot

**********************************************************************

Modifying boot archive service
Activation of boot environment <ZFSbe> successful.
Reboot the system to the ZFS BE.
# init 6
# svc.startd: The system is coming down.  Please wait.
svc.startd: 79 system services are now being stopped.
.
.
.
Confirm the new boot environment and the status of the migrated zones as in this example.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
c1t2d0s0                   yes      yes    yes       no     -
zfsBE                      yes      no     no        yes    -
If you fall back to the UFS boot environment, then you again need to import any ZFS storage pools that were created in the ZFS boot environment because they are not automatically available in the UFS boot environment. You will see messages similar to the following when you switch back to the UFS boot environment.
# luactivate c1t2d0s0
WARNING: The following files have changed on both the current boot
environment <ZFSbe> zone <global> and the boot environment to be activated
<c1t2d0s0>:
 /etc/zfs/zpool.cache
INFORMATION: The files listed above are in conflict between the current boot
environment <ZFSbe> zone <global> and the boot environment to be activated
<c1t2d0s0>. These files will not be automatically synchronized from the
current boot environment <ZFSbe> when boot environment <c1t2d0s0> is activated.
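For example, a pool that was created in the ZFS boot environment, such as mpool in this example, could be made available again from the UFS boot environment with zpool import (a sketch; pool names vary):

# zpool import mpool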