Solaris 10 5/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning

Upgrading a System With Non-Global Zones Installed (Example)

The following procedure provides an example with abbreviated instructions for upgrading with Solaris Live Upgrade.

For detailed explanations of steps, see Upgrading With Solaris Live Upgrade When Non-Global Zones Are Installed on a System (Tasks).

Upgrading With Solaris Live Upgrade When Non-Global Zones Are Installed on a System

The following example provides abbreviated descriptions of the steps to upgrade a system with non-global zones installed. In this example, a new boot environment is created by using the lucreate command on a system that is running the Solaris 10 release. The system has non-global zones installed, and one non-global zone, zone1, has a separate file system, zone1/root/export, that resides on a shared file system. The new boot environment is upgraded to the Solaris 10 5/09 release by using the luupgrade command. The upgraded boot environment is then activated by using the luactivate command.


Note –

This procedure assumes that the system is running Volume Manager. For detailed information about managing removable media with the Volume Manager, refer to System Administration Guide: Devices and File Systems.


  1. Install required patches.

    Ensure that you have the most recently updated patch list by consulting http://sunsolve.sun.com. Search for the info doc 206844 (formerly 72099) on the SunSolve web site. In this example, /net/server/export/patches is the path to the patches.


    # patchadd /net/server/export/patches
    # init 6
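
    To confirm which patches are already installed, either before applying new patches or after the system comes back up, you can list them with the patchadd -p command, which is equivalent to showrev -p. For example:


    # patchadd -p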
    
  2. Remove the Solaris Live Upgrade packages from the current boot environment.


    # pkgrm SUNWlucfg SUNWluu SUNWlur
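
    To verify that the Solaris Live Upgrade packages are present before removing them, or to confirm afterward that they were removed, you can query the package database. For example:


    # pkginfo | grep SUNWlu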
    
  3. Insert the Solaris DVD or CD. Then install the replacement Solaris Live Upgrade packages from the target release.


    # pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu
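
    To confirm that the replacement packages were installed from the target release, you can display their package information, including the version. For example:


    # pkginfo -l SUNWlucfg SUNWlur SUNWluu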
    
  4. Create a boot environment.

    In the following example, a new boot environment named newbe is created. The root (/) file system is placed on c0t1d0s4. All non-global zones in the current boot environment are copied to the new boot environment. A separate file system was created for zone1 with the zonecfg add fs command; an example zonecfg session is shown after the lucreate command below. This separate file system, zone1/root/export, is placed on a separate slice, c0t1d0s1. Specifying this file system with the -m option prevents it from being shared between the current boot environment and the new boot environment.


    # lucreate -n newbe -m /:/dev/dsk/c0t1d0s4:ufs -m /export:/dev/dsk/c0t1d0s1:ufs:zone1
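
    The following zonecfg session is a sketch of how a separate file system of this kind is typically added to a zone configuration with the add fs subcommand. The device names shown are examples only; substitute the slice that actually holds the zone's separate file system.


    # zonecfg -z zone1
    zonecfg:zone1> add fs
    zonecfg:zone1:fs> set dir=/export
    zonecfg:zone1:fs> set special=/dev/dsk/c0t0d0s7
    zonecfg:zone1:fs> set raw=/dev/rdsk/c0t0d0s7
    zonecfg:zone1:fs> set type=ufs
    zonecfg:zone1:fs> end
    zonecfg:zone1> verify
    zonecfg:zone1> commit
    zonecfg:zone1> exit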
    
  5. Upgrade the new boot environment.

    In this example, /net/server/export/Solaris_10/combined.solaris_wos is the path to the network installation image.


    # luupgrade -n newbe -u -s /net/server/export/Solaris_10/combined.solaris_wos
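
    (Optional) Before activating the new boot environment, you can mount it on an empty mount point, such as /mnt, and review the upgrade log. The path shown is the customary location of the Solaris upgrade log on the upgraded boot environment.


    # lumount newbe /mnt
    # more /mnt/var/sadm/system/logs/upgrade_log
    # luumount newbe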
    
  6. (Optional) Verify that the boot environment is bootable.

    The lustatus command reports whether boot environment creation is complete and whether the boot environment is bootable.


    # lustatus
    Boot Environment   Is         Active   Active      Can      Copy
    Name               Complete   Now      On Reboot   Delete   Status
    ------------------------------------------------------------------------
    c0t1d0s0           yes        yes      yes         no       -
    newbe              yes        no       no          yes      -

  7. Activate the new boot environment.


    # luactivate newbe
    # init 6
    

    The boot environment newbe is now active.
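
    To confirm that the new boot environment is active after the reboot and that the non-global zones are running in it, you can check the boot environment status and the zone states. For example:


    # lustatus
    # zoneadm list -cv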

  8. (Optional) Fall back to a different boot environment. If the new boot environment is not viable or you want to switch to another boot environment, see Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks).
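
    If the new boot environment boots but you decide to return to the original boot environment, reactivating the original boot environment and rebooting is sufficient. In this example, the original boot environment is named c0t1d0s0.


    # luactivate c0t1d0s0
    # init 6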