Upgrading a System With Non-Global Zones Installed (Example)

The following procedure provides an example with abbreviated instructions for upgrading with Live Upgrade.

For detailed explanations of steps, see Upgrading With Live Upgrade When Non-Global Zones Are Installed on a System (Tasks).

Upgrading With Live Upgrade When Non-Global Zones Are Installed on a System

The following example provides abbreviated descriptions of the steps to upgrade a system with non-global zones installed. In this example, a new boot environment is created by using the lucreate command on a system that is running the Oracle Solaris 10 release. This system has non-global zones installed, and one non-global zone, zone1, has a separate file system on a shared file system, zone1/root/export. The new boot environment is upgraded to the Oracle Solaris 10 8/11 release by using the luupgrade command. The upgraded boot environment is then activated by using the luactivate command.


Note - This procedure assumes that the system is running Volume Manager. For detailed information about managing removable media with the Volume Manager, refer to System Administration Guide: Devices and File Systems.


  1. Install required patches.

    Ensure that you have the most recently updated patch list by consulting http://support.oracle.com (My Oracle Support). Search for the knowledge document 1004881.1 - Live Upgrade Software Patch Requirements (formerly 206844) on My Oracle Support. In this example, /net/server/export/patches is the path to the patches.

    # patchadd /net/server/export/patches
    # init 6
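
    To confirm that a specific patch from the list is installed, you can check the patch database with the showrev command. The patch ID below is a placeholder; substitute the IDs listed in knowledge document 1004881.1.

    # showrev -p | grep 121430   # 121430 is a placeholder patch ID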
  2. Remove the Live Upgrade packages from the current boot environment.

    # pkgrm SUNWlucfg SUNWluu SUNWlur
  3. Insert the Oracle Solaris DVD or CD. Then install the replacement Live Upgrade packages from the target release.

    # pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu
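
    You can verify that the replacement packages were installed from the target media by using the pkginfo command:

    # pkginfo SUNWlucfg SUNWlur SUNWluu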
  4. Create a boot environment.

    In the following example, a new boot environment named newbe is created. The root (/) file system is placed on c0t1d0s4. All non-global zones in the current boot environment are copied to the new boot environment. A separate file system was created with the zonecfg add fs command for zone1. This separate file system, zone1/root/export, is placed on a separate slice, c0t1d0s1. This option prevents the separate file system from being shared between the current boot environment and the new boot environment.

    # lucreate -n newbe -m /:/dev/dsk/c0t1d0s4:ufs -m /export:/dev/dsk/c0t1d0s1:ufs:zone1
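
    To confirm how the separate file system is configured for the zone, you can display the zone's file system resources. This assumes zone1 was configured with the zonecfg add fs command as described above.

    # zonecfg -z zone1 info fs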
  5. Upgrade the new boot environment.

    In this example, /net/server/export/Solaris_10/combined.solaris_wos is the path to the network installation image.

    # luupgrade -n newbe -u -s /net/server/export/Solaris_10/combined.solaris_wos
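
    If luupgrade reports that it cannot locate the installation media, verify that the network installation image is reachable from the system. The path below matches the example above.

    # ls /net/server/export/Solaris_10/combined.solaris_wos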
  6. (Optional) Verify that the boot environment is bootable.

    The lustatus command reports whether the boot environment creation is complete and whether the boot environment is bootable.

    # lustatus
    Boot Environment   Is        Active   Active       Can      Copy
    Name               Complete  Now      On Reboot    Delete   Status
    ------------------------------------------------------------------------
    c0t1d0s0           yes       yes      yes          no       -
    newbe              yes       no       no           yes      -
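
    To also verify the file system layout of the new boot environment, including the separate slice that holds zone1's /export file system, you can use the lufslist command:

    # lufslist newbe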
  7. Activate the new boot environment.

    # luactivate newbe
    # init 6

    The boot environment newbe is now active.
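
    You can confirm the switch by running the lustatus command again. After the activation and reboot, newbe reports yes in both the Active Now and Active On Reboot columns.

    # lustatus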

  8. (Optional) Fall back to a different boot environment. If the new boot environment is not viable or you want to switch to another boot environment, see Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks).
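
    As a minimal sketch of a straightforward fallback, if the new boot environment boots but you decide to switch back, you can reactivate the original boot environment. This assumes the original boot environment is still named c0t1d0s0, as shown in the lustatus output above.

    # luactivate c0t1d0s0
    # init 6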