Oracle Solaris 10 1/13 Installation Guide: Live Upgrade and Upgrade Planning
The following procedure provides abbreviated instructions for upgrading a system with non-global zones installed by using Live Upgrade. For detailed explanations of the steps, see Upgrading With Live Upgrade When Non-Global Zones Are Installed on a System (Tasks).
In this example, a new boot environment is created with the lucreate command on a system that is running the Oracle Solaris 10 release. The system has non-global zones installed, including a zone, zone1, that has a separate file system, zone1/root/export. The new boot environment is upgraded to the Oracle Solaris 10 8/11 release with the luupgrade command, and the upgraded boot environment is then activated with the luactivate command.
Note - This procedure assumes that the system is running Volume Manager. For detailed information about managing removable media with the Volume Manager, refer to System Administration Guide: Devices and File Systems.
Install required patches.
Ensure that you have the most recently updated patch list by consulting http://support.oracle.com (My Oracle Support). Search for the knowledge document 1004881.1 - Live Upgrade Software Patch Requirements (formerly 206844) on My Oracle Support. In this example, /net/server/export/patches is the path to the patches.
# patchadd /net/server/export/patches
# init 6
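The patch step above can be scripted with a basic safety check so that the system only reboots after the patches apply cleanly. This is a hedged sketch, not part of the official procedure: patchadd and init are Solaris utilities, so the script is guarded to degrade gracefully on other systems, and /net/server/export/patches is the example path from this procedure.

```shell
#!/bin/sh
# Sketch: apply the patches from the example directory, then reboot.
# patchadd and init are Solaris-only; the guard below lets the script
# exit harmlessly when run on a non-Solaris system.
PATCHDIR=${PATCHDIR:-/net/server/export/patches}

apply_patches() {
    if ! command -v patchadd >/dev/null 2>&1; then
        echo "patchadd not found; run this on the Oracle Solaris system"
        return 0
    fi
    # patchadd accepts a directory containing patches; do not reboot
    # unless it succeeds.
    if patchadd "$PATCHDIR"; then
        init 6    # reboot so the newly applied patches take effect
    else
        echo "patchadd failed; resolve the errors before rebooting" >&2
        return 1
    fi
}

apply_patches
```

Rebooting only on success avoids restarting into a partially patched environment.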
Remove the Live Upgrade packages from the current boot environment.
# pkgrm SUNWlucfg SUNWluu SUNWlur
Insert the Oracle Solaris DVD or CD. Then install the replacement Live Upgrade packages from the target release.
# pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu
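The package removal and reinstallation above can be combined into one scripted step with a verification at the end. This is a hedged sketch: pkgrm, pkgadd, and pkginfo are Solaris utilities, so the script is guarded for other systems, and the media path is the example path from this procedure.

```shell
#!/bin/sh
# Sketch: replace the Live Upgrade packages with the versions from the
# target-release media, then verify they are installed. pkgrm, pkgadd,
# and pkginfo are Solaris-only; guarded below for other systems.
LU_PKGS="SUNWlucfg SUNWluu SUNWlur"
MEDIA=${MEDIA:-/cdrom/cdrom0/Solaris_10/Product}

replace_lu_pkgs() {
    if ! command -v pkgadd >/dev/null 2>&1; then
        echo "pkgadd not found; run this on the Oracle Solaris system"
        return 0
    fi
    # Remove the old packages first (-n runs pkgrm non-interactively).
    pkgrm -n $LU_PKGS
    # Install the replacement packages from the target-release media.
    pkgadd -d "$MEDIA" $LU_PKGS
    # pkginfo exits non-zero if any listed package is missing.
    pkginfo $LU_PKGS && echo "Live Upgrade packages replaced"
}

replace_lu_pkgs
```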
Create a boot environment.
In the following example, a new boot environment named newbe is created. The root (/) file system is placed on c0t1d0s4, and all non-global zones in the current boot environment are copied to the new boot environment. For zone1, a separate file system had been created with the zonecfg add fs command. That file system, zone1/root/export, is placed on its own slice, c0t1d0s1, which prevents it from being shared between the current boot environment and the new boot environment.
# lucreate -n newbe -m /:/dev/dsk/c0t1d0s4:ufs -m /export:/dev/dsk/c0t1d0s1:ufs:zone1
Upgrade the new boot environment.
In this example, /net/server/export/Solaris_10/combined.solaris_wos is the path to the network installation image.
# luupgrade -n newbe -u -s /net/server/export/Solaris_10/combined.solaris_wos
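Before running luupgrade, it can help to confirm that the network installation image is reachable and looks like an operating system image. This is a hedged sketch under the assumption that a usable image contains a Solaris_10 product directory, as DVD images do; the path is the example path from this procedure.

```shell
#!/bin/sh
# Sketch: sanity-check the network installation image path before
# passing it to luupgrade -s. Assumes the image contains a Solaris_10
# directory, as Oracle Solaris 10 DVD images do.
IMAGE=${IMAGE:-/net/server/export/Solaris_10/combined.solaris_wos}

check_image() {
    if [ ! -d "$IMAGE/Solaris_10" ]; then
        echo "no Solaris_10 directory under $IMAGE; check the image path"
        return 1
    fi
    echo "image looks usable"
}

check_image
```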
(Optional) Verify that the boot environment is bootable.
The lustatus command reports whether the boot environment creation is complete.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
--------------------------------------------------------------------------
c0t1d0s0                   yes      yes    yes       no     -
newbe                      yes      no     no        yes    -
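The completeness check can also be automated by parsing the lustatus output, which is useful in upgrade scripts. This is a hedged sketch, not an official interface: it reads the tabular output on standard input (on Solaris you would pipe the real command, lustatus | be_is_complete newbe), and the sample below is the output shown in this procedure.

```shell
#!/bin/sh
# Sketch: check programmatically that a boot environment is marked
# "Complete" in lustatus output. Reads the table on stdin; on Solaris,
# pipe the real command: lustatus | be_is_complete newbe
be_is_complete() {
    # $1 = boot environment name; column 2 of the table is "Is Complete".
    awk -v be="$1" '$1 == be && $2 == "yes" { found = 1 }
                    END { exit !found }'
}

# Sample rows from the lustatus output in this example.
sample='c0t1d0s0                   yes      yes    yes       no     -
newbe                      yes      no     no        yes    -'

if printf "%s\n" "$sample" | be_is_complete newbe; then
    echo "newbe is complete and bootable"
fi
```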
Activate the new boot environment.
# luactivate newbe
# init 6
The boot environment newbe is now active.
(Optional) Fall back to a different boot environment. If the new boot environment is not viable or you want to switch to another boot environment, see Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks).
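When the new boot environment boots but you decide to switch back, the fallback is the reverse of the activation step. This is a hedged sketch for that simple case only; if the new boot environment fails to boot at all, follow the fuller recovery steps in Chapter 6. luactivate and init are Solaris utilities, so the script is guarded, and the name c0t1d0s0 matches the original boot environment in this example.

```shell
#!/bin/sh
# Sketch: fall back by re-activating the original boot environment and
# rebooting. Covers only the case where the new BE boots; a BE that
# fails to boot requires the recovery procedure in Chapter 6.
ORIG_BE=${ORIG_BE:-c0t1d0s0}   # original BE name from this example

fallback() {
    if ! command -v luactivate >/dev/null 2>&1; then
        echo "luactivate not found; run this on the Oracle Solaris system"
        return 0
    fi
    luactivate "$ORIG_BE" && init 6   # reboot into the original BE
}

fallback
```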