How to Migrate a UFS File System to a ZFS Root Pool on a System With Non-Global Zones
This section provides step-by-step instructions for migrating from a UFS root (/) file system to a ZFS root pool on a system with non-global zones installed. This procedure assumes that no non-global zones reside on a shared file system in the UFS file system.
The lucreate command creates a boot environment of a ZFS root pool from a UFS root (/) file system. A ZFS root pool must exist before the lucreate operation and must be created with slices rather than whole disks to be upgradeable and bootable. This procedure shows how an existing non-global zone associated with the UFS root (/) file system is copied to the new boot environment in a ZFS root pool.
Note - Using Live Upgrade to create new ZFS boot environments requires at least the Solaris 10 10/08 release to be installed. Previous releases do not have the ZFS and Live Upgrade software to perform the tasks.
The three Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade by using Live Upgrade. These packages include existing software, new features, and bug fixes. If you do not remove the existing packages and install the new packages on your system before using Live Upgrade, upgrading to the target release fails. Remove the existing packages first:
# pkgrm SUNWlucfg SUNWluu SUNWlur
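As an optional check that is not part of the documented procedure, you can confirm that no Live Upgrade packages remain before installing the packages from the target release. If the packages were removed, the following command produces no output:
# pkginfo | grep SUNWlu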
Ensure that you have the most recently updated patch list by consulting My Oracle Support. Search for the knowledge document 1004881.1 - Live Upgrade Software Patch Requirements (formerly 206844) on My Oracle Support.
Become superuser or assume an equivalent role.
Note - Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
If you are storing the patches on a local disk, create a directory such as /var/tmp/lupatches and download the patches to that directory.
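For example, a minimal command to create the local patch directory, using the /var/tmp/lupatches path from this procedure, might be:
# mkdir -p /var/tmp/lupatches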
From the My Oracle Support web site, obtain the list of patches.
Change to the patch directory.
# cd /var/tmp/lupatches
Install the patches with the patchadd command.
# patchadd patch_id
patch_id is the patch number. To apply more than one patch, separate the patch numbers with spaces.
Note - The patches need to be applied in the order that is specified in the knowledge document 1004881.1 - Live Upgrade Software Patch Requirements (formerly 206844) on My Oracle Support.
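For example, an invocation that applies two patches in a single command might look like the following. The patch IDs shown here are placeholders only, not patches from the knowledge document:
# patchadd 123456-01 123457-02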
Reboot the system if necessary. Certain patches require a reboot to be effective.
x86 only: Rebooting the system is required. Otherwise, Live Upgrade fails.
# init 6
The ZFS root pool must be on a single slice to be bootable and upgradeable.
# zpool create rpool c3t0d0s0
In this example, the name of the new ZFS root pool to be created is rpool. The pool is created on a bootable slice, c3t0d0s0.
For information about creating a new root pool, see the Oracle Solaris ZFS Administration Guide.
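Optionally, you can verify that the new root pool was created on the intended slice. This is a sketch of one possible check, not part of the documented steps (output not shown):
# zpool status rpool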
# lucreate [-c ufsBE] -n new-zfsBE -p rpool
-c ufsBE
The name for the current UFS boot environment. This option is not required and is used only when the first boot environment is created. If you run the lucreate command for the first time and you omit the -c option, the software creates a default name for you.
-n new-zfsBE
The name for the boot environment to be created. The name must be unique on the system.
-p rpool
Places the newly created ZFS root (/) file system into the ZFS root pool defined in rpool.
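With the names used in this procedure, the fully expanded command (the brackets in the synopsis above only mark the -c option as optional) would be:
# lucreate -c ufsBE -n new-zfsBE -p rpool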
All nonshared non-global zones are copied to the new boot environment along with critical file systems. The creation of the new ZFS boot environment might take a while because the UFS file system data is copied to the ZFS root pool. When the inactive boot environment has been created, you can use the luupgrade or luactivate command to upgrade or activate the new ZFS boot environment.
The lustatus command reports whether the boot environment creation is complete and bootable.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      yes    yes       no     -
new-zfsBE                  yes      no     no        yes    -
The zfs list command displays the names of all datasets on the system. In this example, rpool is the name of the ZFS pool and new-zfsBE is the name of the newly created ZFS boot environment.
# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      11.4G  2.95G    31K  /rpool
rpool/ROOT                 4.34G  2.95G    31K  legacy
rpool/ROOT/new-zfsBE       4.34G  2.95G  4.34G  /
rpool/dump                 2.06G  5.02G    16K  -
rpool/swap                 5.04G  7.99G    16K  -
The mount points listed for the new boot environment are temporary until the luactivate command is executed. The /dump and /swap volumes are not shared with the original UFS boot environment, but they are shared among the boot environments within the ZFS root pool.
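If you want to inspect those temporary mount points before activation, one possible check is to list the mountpoint property for the datasets under the root pool, for example:
# zfs get -r mountpoint rpool/ROOT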
Example 13-1 Migrating From a UFS Root (/) File System With Non-Global Zones Installed to ZFS Root Pool
In the following example, the existing non-global zone, myzone, has its zone root in a UFS root (/) file system. The zone zzone has its zone root in a ZFS file system in the existing ZFS storage pool, pool. Live Upgrade is used to migrate the UFS boot environment, c1t2d0s0, to a ZFS boot environment, zfs2BE. The UFS-based myzone zone migrates to a new ZFS storage pool, mpool, which is created before the Live Upgrade operation. The ZFS-based non-global zone, zzone, is cloned but retained in the ZFS pool pool and migrated to the new zfs2BE boot environment.
The commands to create the boot environment are as follows:
# zoneadm list -iv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   - myzone           installed  /zones/myzone                  native   shared
   - zzone            installed  /pool/zones                    native   shared

# zpool create mpool mirror c3t0d0s0 c4t0d0s0
# lucreate -c c1t2d0s0 -n zfs2BE -p mpool
Checking GRUB menu...
Analyzing system configuration.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfs2BE>.
Source boot environment is <c1t2d0s0>.
Creating file systems on boot environment <zfs2BE>.
Creating <zfs> file system for </> in zone <global> on <mpool/ROOT/zfs2BE>.
Populating file systems on boot environment <zfs2BE>.
Analyzing zones.
Mounting ABE <zfs2BE>.
Generating file list.
Copying data from PBE <c1t2d0s0> to ABE <zfs2BE>.
100% of filenames transferred
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <zfs2BE>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <c1t2d0s0>.
Making boot environment <zfs2BE> bootable.
Updating bootenv.rc on ABE <zfs2BE>.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <zfs2BE> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <zfs2BE> in GRUB menu
Population of boot environment <zfs2BE> successful.
Creation of boot environment <zfs2BE> successful.
When the lucreate operation completes, use the lustatus command to view the boot environment status as in this example.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
c1t2d0s0                   yes      yes    yes       no     -
zfs2BE                     yes      no     no        yes    -
# zoneadm list -iv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   - myzone           installed  /zones/myzone                  native   shared
   - zzone            installed  /pool/zones                    native   shared
Next, use the luactivate command to activate the new ZFS boot environment. For example:
# luactivate zfs2BE
A Live Upgrade Sync operation will be performed on startup of boot environment <zfs2BE>.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Boot the machine to Single User mode using a different boot device
(like the Solaris Install CD or Network). Examples:

     At the PROM monitor (ok prompt):
     For boot to Solaris CD:  boot cdrom -s
     For boot to network:     boot net -s

3. Mount the Current boot environment root slice to some directory (like
/mnt). You can use the following command to mount:

     mount -Fufs /dev/dsk/c1t0d0s0 /mnt

4. Run <luactivate> utility with out any arguments from the current boot
environment root slice, as shown below:

     /mnt/sbin/luactivate

5. luactivate, activates the previous working boot environment and
indicates the result.

6. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Activation of boot environment <zfs2BE> successful.
Reboot the system to the ZFS BE.
# init 6
# svc.startd: The system is coming down.  Please wait.
svc.startd: 79 system services are now being stopped.
.
.
.
Confirm the new boot environment and the status of the migrated zones as in this example.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
c1t2d0s0                   yes      no     no        yes    -
zfs2BE                     yes      yes    yes       no     -
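To confirm the status of the migrated zones after the reboot, you can also list the zones again, for example:
# zoneadm list -iv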
If you fall back to the UFS boot environment, you need to import again any ZFS storage pools that were created in the ZFS boot environment, because they are not automatically available in the UFS boot environment. You will see messages similar to the following when you switch back to the UFS boot environment.
# luactivate c1t2d0s0
WARNING: The following files have changed on both the current boot
environment <zfs2BE> zone <global> and the boot environment to be
activated <c1t2d0s0>:
/etc/zfs/zpool.cache
INFORMATION: The files listed above are in conflict between the current
boot environment <zfs2BE> zone <global> and the boot environment to be
activated <c1t2d0s0>. These files will not be automatically synchronized
from the current boot environment <zfs2BE> when boot environment <c1t2d0s0>
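After booting back into the UFS boot environment, you can re-import any ZFS storage pool that was created while the ZFS boot environment was active. For example, a sketch for the mpool pool used in Example 13-1 (the pool name depends on your configuration):
# zpool import mpool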