Solaris 10 10/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning

Chapter 8 Upgrading the Solaris OS on a System With Non-Global Zones Installed

This chapter describes using Solaris Live Upgrade to upgrade a system that has non-global zones installed.


Note –

This chapter describes Solaris Live Upgrade for UFS file systems. For procedures for migrating a UFS file system with non-global zones to a ZFS root pool, see Chapter 14, Solaris Live Upgrade For ZFS With Non-Global Zones Installed.



Upgrading With Solaris Live Upgrade and Installed Non-Global Zones (Overview)

Starting with the Solaris 10 8/07 release, you can upgrade or patch a system that contains non-global zones with Solaris Live Upgrade. If you have a system that contains non-global zones, Solaris Live Upgrade is the recommended program to upgrade and to add patches. Other upgrade programs might require extensive upgrade time, because the time required to complete the upgrade increases linearly with the number of installed non-global zones. If you are patching a system with Solaris Live Upgrade, you do not have to take the system to single-user mode, so you can maximize your system's uptime. The following sections summarize how Solaris Live Upgrade accommodates systems that have non-global zones installed.
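
For example, a minimal sketch of applying a patch to an inactive boot environment with the luupgrade -t option (the boot environment name, patch directory, and patch ID are placeholders):


# luupgrade -t -n newbe -s /var/tmp/lupatches 123456-01

Because the patches are applied to the inactive boot environment, the running system remains in multiuser mode until you activate the new boot environment and reboot.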

Understanding Solaris Zones and Solaris Live Upgrade

The Solaris Zones partitioning technology is used to virtualize operating system services and provide an isolated and secure environment for running applications. A non-global zone is a virtualized operating system environment created within a single instance of the Solaris OS, the global zone. When you create a non-global zone, you produce an application execution environment in which processes are isolated from the rest of the system.

Solaris Live Upgrade is a mechanism to copy the currently running system onto new slices. When non-global zones are installed, they can be copied to the inactive boot environment along with the global zone's file systems.

Figure 8–1 shows a non-global zone that is copied to the inactive boot environment along with the global zone's file system.

Figure 8–1 Creating a Boot Environment – Copying Non-Global Zones


Figure 8–2 shows that a non-global zone is copied to the inactive boot environment.

Figure 8–2 Creating a Boot Environment – Copying a Shared File System From a Non-Global Zone


Guidelines for Using Solaris Live Upgrade With Non-Global Zones (Planning)

When you plan to use Solaris Live Upgrade on a system with non-global zones installed, consider the limitations described in the following table.

Table 8–1 Limitations When Upgrading With Non-Global Zones

Problem: Consider these issues when using Solaris Live Upgrade on a system with zones installed. It is critical to avoid zone state transitions during lucreate and lumount operations.

Description:

  • When you use the lucreate command to create an inactive boot environment, if a given non-global zone is not running, the zone cannot be booted until the lucreate operation has completed.

  • When you use the lucreate command to create an inactive boot environment, if a given non-global zone is running, the zone should not be halted or rebooted until the lucreate operation has completed.

  • When an inactive boot environment is mounted with the lumount command, you cannot boot or reboot non-global zones, although zones that were running before the lumount operation can continue to run.

  • Because a non-global zone can be controlled by a non-global zone administrator as well as by the global zone administrator, halt all zones during lucreate or lumount operations to prevent any interaction (see the example after this table).

Problem: Problems can occur when the global zone administrator does not notify the non-global zone administrator of an upgrade with Solaris Live Upgrade.

Description: When Solaris Live Upgrade operations are underway, non-global zone administrator involvement is critical. The upgrade affects the work of these administrators, who must address the changes that occur as a result of the upgrade. Zone administrators should ensure that any local packages remain stable throughout the sequence, handle any post-upgrade tasks such as configuration file adjustments, and schedule time around the system outage.

For example, if a non-global zone administrator adds a package while the global zone administrator is copying the file systems with the lucreate command, the new package is not copied with the file systems, and the non-global zone administrator is unaware of the problem.
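
For example, before you begin an lucreate or lumount operation, the global zone administrator can check zone states, halt any running zones, and boot them again when the operation completes (the zone name, paths, and output shown are illustrative):


# zoneadm list -cv
  ID NAME     STATUS     PATH          BRAND    IP
   0 global   running    /             native   shared
   1 zone1    running    /zone1        native   shared
# zoneadm -z zone1 halt
# lucreate -n newbe -m /:/dev/dsk/c0t1d0s4:ufs
# zoneadm -z zone1 boot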

Creating a Boot Environment When a Non-Global Zone Is on a Separate File System

Creating a new boot environment from the currently running boot environment remains the same as in previous releases, with one exception: you can specify a destination disk slice for a shared file system within a non-global zone. This exception applies when the current boot environment contains a non-global zone whose separate file system was created with the zonecfg add fs command and that file system resides in a shared file system, such as /zone1/root/export.

To prevent this separate file system from being shared in the new boot environment, the lucreate command enables you to specify a destination slice for a non-global zone's separate file system. The argument to the -m option has a new optional field, zonename. This new field places the non-global zone's separate file system on a separate slice in the new boot environment. For more information about setting up a non-global zone with a separate file system, see zonecfg(1M).
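
For example, a minimal sketch, assuming a non-global zone named zone1 whose separate file system was created with the zonecfg add fs command and resides in the shared /export file system (the disk slices are placeholders):


# lucreate -n newbe -m /:/dev/dsk/c0t1d0s4:ufs \
  -m /export:/dev/dsk/c0t1d0s1:ufs:zone1

The zone1 field at the end of the second -m argument directs the zone's separate file system to its own slice in the new boot environment.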


Note –

By default, any file system other than the critical file systems (root (/), /usr, and /opt file systems) is shared between the current and new boot environments. Updating shared files in the active boot environment also updates data in the inactive boot environment. For example, the /export file system is a shared file system. If you use the -m option and the zonename option, the non-global zone's file system is copied to a separate slice and data is not shared. This option prevents non-global zone file systems that were created with the zonecfg add fs command from being shared between the boot environments.


Creating and Upgrading a Boot Environment When Non-Global Zones Are Installed (Tasks)

The following sections provide step-by-step procedures for upgrading when non-global zones are installed.

Procedure: Upgrading With Solaris Live Upgrade When Non-Global Zones Are Installed on a System (Tasks)

The following procedure provides detailed instructions for upgrading with Solaris Live Upgrade for a system with non-global zones installed.

  1. Before running Solaris Live Upgrade for the first time, you must install the latest Solaris Live Upgrade packages from installation media and install the patches listed in the SunSolve Infodoc 206844. Search for the Infodoc 206844 (formerly 72099) on the SunSolve web site.

    The latest packages and patches ensure that you have all the latest bug fixes and new features in the release. Ensure that you install all the patches that are relevant to your system before proceeding to create a new boot environment.

    The following substeps describe the steps in the SunSolve Infodoc 206844.

    1. Become superuser or assume an equivalent role.

    2. From the SunSolve web site, follow the instructions in Infodoc 206844 to remove and add Solaris Live Upgrade packages.

      The following instructions summarize the Infodoc steps for removing and adding the packages.

      • Remove existing Solaris Live Upgrade packages.

        The three Solaris Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade or patch by using Solaris Live Upgrade. These packages include existing software, new features, and bug fixes. If you do not remove the existing packages and install the new packages on your system before using Solaris Live Upgrade, upgrading or patching to the target release fails. The SUNWlucfg package is new starting with the Solaris 10 8/07 release. If you are removing Solaris Live Upgrade packages from a release previous to Solaris 10 8/07, this package is not present and does not need to be removed.


        # pkgrm SUNWlucfg SUNWluu SUNWlur
        
      • Install the new Solaris Live Upgrade packages.

        You can install the packages by using the liveupgrade20 command that is on the installation DVD or CD. The liveupgrade20 command requires Java software. If your system does not have Java software installed, use the pkgadd command to install the packages instead, as shown in the sketch at the end of these substeps. See the SunSolve Infodoc for more information.

        • If you are using the Solaris Operating System DVD, change directories and run the installer:

          • Change directories.


            # cd /cdrom/cdrom0/Solaris_10/Tools/Installers
            

            Note –

            For SPARC based systems, the path to the installer is different for releases previous to the Solaris 10 10/08 release:


            # cd /cdrom/cdrom0/s0/Solaris_10/Tools/Installers
            

          • Run the installer.


            # ./liveupgrade20
            

            The Solaris installation program GUI is displayed. If you are using a script, you can prevent the GUI from displaying by using the -noconsole and -nodisplay options.
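
            For example, a sketch of a non-interactive invocation suitable for a scripted installation, using the two options described above:


            # ./liveupgrade20 -noconsole -nodisplay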

        • If you are using the Solaris Software – 2 CD, you can run the installer without changing the path.


          % ./installer
          
        • Verify that the packages have been installed successfully.


          # pkgchk -v SUNWlucfg SUNWlur SUNWluu
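
        If your system does not have Java software and you use the pkgadd command instead, a minimal sketch might look like the following (the media path matches the pkgadd example later in this chapter; adjust it for your installation media):


        # pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu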
          
    3. If you are storing the patches on a local disk, create a directory such as /var/tmp/lupatches.

    4. From the SunSolve web site, obtain the list of patches.

    5. Change to the patch directory as in this example.


      # cd /var/tmp/lupatches
      
    6. Install the patches.


      # patchadd -M path-to-patches patch-id patch-id
      

      path-to-patches is the path to the patches directory, such as /var/tmp/lupatches. patch-id is the patch number or numbers. Separate multiple patch names with a space.
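
      For example, a sketch of installing two patches from the local patch directory (the patch IDs shown are placeholders; use the IDs and order given in Infodoc 206844):


      # patchadd -M /var/tmp/lupatches 123456-01 123457-02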


      Note –

      The patches need to be applied in the order specified in Infodoc 206844.


    7. Reboot the system if necessary. Certain patches require a reboot to be effective.

      x86 only: Rebooting the system is required. Otherwise, Solaris Live Upgrade fails.


      # init 6
      

      You now have the packages and patches necessary for a successful creation of a new boot environment.

  2. Create the new boot environment.


    # lucreate [-A 'BE_description'] [-c BE_name] \
     -m mountpoint:device[,metadevice]:fs_options[:zonename] [-m ...] -n BE_name
    
    -n BE_name

    The name of the boot environment to be created. BE_name must be unique on the system.

    -A 'BE_description'

    (Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.

    -c BE_name

    Assigns the name BE_name to the active boot environment. This option is not required and is only used when the first boot environment is created. If you run lucreate for the first time and you omit the -c option, the software creates a default name for you.

    -m mountpoint:device[,metadevice]:fs_options[:zonename] [-m ...]

    Specifies the file systems' configuration of the new boot environment in the vfstab. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.

    • mountpoint can be any valid mount point or – (hyphen), indicating a swap partition.

    • device field can be one of the following:

      • The name of a disk device, of the form /dev/dsk/cwtxdysz

      • The name of a Solaris Volume Manager volume, of the form /dev/md/dsk/dnum

      • The name of a Veritas Volume Manager volume, of the form /dev/md/vxfs/dsk/dnum

      • The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent

    • fs_options field can be one of the following:

      • ufs, which indicates a UFS file system.

      • vxfs, which indicates a Veritas file system.

      • swap, which indicates a swap volume. The swap mount point must be a – (hyphen).

      • For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors).

    • zonename specifies that a non-global zone's separate file system be placed on a separate slice. This option is used when the zone's separate file system is in a shared file system such as /zone1/root/export. This option copies the zone's separate file system to a new slice and prevents this file system from being shared. The separate file system was created with the zonecfg add fs command.

    In the following example, a new boot environment named newbe is created. The root (/) file system is placed on c0t1d0s4. All non-global zones in the current boot environment are copied to the new boot environment. The non-global zone named zone1 is given a separate mount point on c0t1d0s1.


    Note –

    By default, any file system other than the critical file systems (root (/), /usr, and /opt file systems) is shared between the current and new boot environments. The /export file system is a shared file system. If you use the -m option with the zonename field, the non-global zone's file system is placed on a separate slice and data is not shared. This option prevents zone file systems that were created with the zonecfg add fs command from being shared between the boot environments. See zonecfg(1M) for details.



    # lucreate -n newbe -m /:/dev/dsk/c0t1d0s4:ufs -m /export:/dev/dsk/c0t1d0s1:ufs:zone1
    
  3. Upgrade the boot environment.

    The operating system image to be used for the upgrade is taken from the network.


    # luupgrade -u -n BE_name -s os_image_path
    
    -u

    Upgrades an operating system image on a boot environment

    -n BE_name

    Specifies the name of the boot environment that is to be upgraded

    -s os_image_path

    Specifies the path name of a directory that contains an operating system image

    In this example, the new boot environment, newbe, is upgraded from a network installation image.


    # luupgrade -n newbe -u -s /net/server/export/Solaris_10/combined.solaris_wos
    
  4. (Optional) Verify that the boot environment is bootable.

    The lustatus command reports whether the boot environment creation is complete and the boot environment is bootable.


    # lustatus
    Boot Environment           Is        Active   Active     Can      Copy
    Name                       Complete  Now      On Reboot  Delete   Status
    ------------------------------------------------------------------------
    c0t1d0s0                   yes       yes      yes        no       -
    newbe                      yes       no       no         yes      -
  5. Activate the new boot environment.


    # luactivate BE_name
    

    BE_name specifies the name of the boot environment that is to be activated.


    Note –

    For an x86 based system, the luactivate command is required when booting a boot environment for the first time. Subsequent activations can be made by selecting the boot environment from the GRUB menu. For step-by-step instructions, see x86: Activating a Boot Environment With the GRUB Menu.


    To successfully activate a boot environment, that boot environment must meet several conditions. For more information, see Activating a Boot Environment.

  6. Reboot.


    # init 6
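
    Alternatively, a sketch using the shutdown command, which also switches boot environments correctly (the zero-second grace period is an arbitrary choice):


    # shutdown -y -g0 -i6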
    

    Caution –

    Use only the init or shutdown commands to reboot. If you use the reboot, halt, or uadmin commands, the system does not switch boot environments. The most recently active boot environment is booted again.


    The boot environments have switched and the new boot environment is now the current boot environment.

  7. (Optional) Fall back to a different boot environment.

    If the new boot environment is not viable or you want to switch to another boot environment, see Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks).

Upgrading a System With Non-Global Zones Installed (Example)

The following procedure provides an example with abbreviated instructions for upgrading with Solaris Live Upgrade.

For detailed explanations of steps, see Upgrading With Solaris Live Upgrade When Non-Global Zones Are Installed on a System (Tasks).

Upgrading With Solaris Live Upgrade When Non-Global Zones Are Installed on a System

The following example provides abbreviated descriptions of the steps to upgrade a system with non-global zones installed. In this example, a new boot environment is created by using the lucreate command on a system that is running the Solaris 10 release. This system has non-global zones installed, including a non-global zone with a separate file system that resides in a shared file system, /zone1/root/export. The new boot environment is upgraded to the Solaris 10 10/09 release by using the luupgrade command. The upgraded boot environment is activated by using the luactivate command.


Note –

This procedure assumes that the system is running Volume Manager. For detailed information about managing removable media with the Volume Manager, refer to System Administration Guide: Devices and File Systems.


  1. Install required patches.

    Ensure that you have the most recently updated patch list by consulting http://sunsolve.sun.com. Search for the Infodoc 206844 (formerly 72099) on the SunSolve web site. In this example, /net/server/export/patches is the path to the patches.


    # patchadd /net/server/export/patches
    # init 6
    
  2. Remove the Solaris Live Upgrade packages from the current boot environment.


    # pkgrm SUNWlucfg SUNWluu SUNWlur
    
  3. Insert the Solaris DVD or CD. Then install the replacement Solaris Live Upgrade packages from the target release.


    # pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu
    
  4. Create a boot environment.

    In the following example, a new boot environment named newbe is created. The root (/) file system is placed on c0t1d0s4. All non-global zones in the current boot environment are copied to the new boot environment. A separate file system was created with the zonecfg add fs command for zone1. This separate file system, /zone1/root/export, is placed on a separate slice, c0t1d0s1. This option prevents the separate file system from being shared between the current boot environment and the new boot environment.


    # lucreate -n newbe -m /:/dev/dsk/c0t1d0s4:ufs -m /export:/dev/dsk/c0t1d0s1:ufs:zone1
    
  5. Upgrade the new boot environment.

    In this example, /net/server/export/Solaris_10/combined.solaris_wos is the path to the network installation image.


    # luupgrade -n newbe -u -s /net/server/export/Solaris_10/combined.solaris_wos
    
  6. (Optional) Verify that the boot environment is bootable.

    The lustatus command reports whether the boot environment creation is complete.


    # lustatus
    Boot Environment           Is        Active   Active     Can      Copy
    Name                       Complete  Now      On Reboot  Delete   Status
    ------------------------------------------------------------------------
    c0t1d0s0                   yes       yes      yes        no       -
    newbe                      yes       no       no         yes      -
  7. Activate the new boot environment.


    # luactivate newbe
    # init 6
    

    The boot environment newbe is now active.

  8. (Optional) Fall back to a different boot environment. If the new boot environment is not viable or you want to switch to another boot environment, see Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks).

Administering Boot Environments That Contain Non-Global Zones

The following sections provide information about administering boot environments that contain non-global zones.

Procedure: To View the Configuration of a Boot Environment's Non-Global Zone File Systems

Use this procedure to display a list of file systems for both the global zone and the non-global zones.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Display the list of file systems.


    # lufslist -n BE_name
    
    BE_name

    Specifies the name of the boot environment whose file system specifics you want to view


Example 8–1 List File Systems With Non-Global Zones

The following example displays a list of file systems that include non-global zones.


# lufslist -n s3
boot environment name: s3
This boot environment is currently active.
This boot environment will be active on next system boot.

Filesystem              fstype    device size Mounted on Mount Options
------------------------------------------------------------------
/dev/dsk/c0t0d0s1         swap     2151776256   -        -
/dev/dsk/c0t0d0s3         ufs     10738040832   /        -
/dev/dsk/c0t0d0s7         ufs     10487955456   /export  -
                zone <zone1> within boot environment <s3>
/dev/dsk/c0t0d0s5         ufs      5116329984   /export  -

Procedure: To Compare Boot Environments for a System With Non-Global Zones Installed

The lucompare command now generates a comparison of boot environments that includes the contents of any non-global zone.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Compare the current and new boot environments.


    # /usr/sbin/lucompare -i infile (or) -t -o outfile BE_name
    
    -i  infile

    Compare files that are listed in infile. The files to be compared should have absolute file names. If the entry in the file is a directory, the comparison is recursive to the directory. Use either this option or -t, not both.

    -t

    Compare only nonbinary files. This comparison uses the file(1) command on each file to determine if the file is a text file. Use either this option or -i, not both.

    -o  outfile

    Redirect the output of differences to outfile.

    BE_name

    Specifies the name of the boot environment that is compared to the active boot environment.


Example 8–2 Comparing Boot Environments

In this example, the current boot environment (source) is compared to the second_disk boot environment, and the results are sent to a file.


# /usr/sbin/lucompare -i /etc/lu/compare/ -o /var/tmp/compare.out second_disk

Using the lumount Command on a System That Contains Non-Global Zones

The lumount command provides non-global zones with access to their corresponding file systems that exist on inactive boot environments. When the global zone administrator uses the lumount command to mount an inactive boot environment, the boot environment is mounted for non-global zones as well.

In the following example, the appropriate file systems are mounted for the boot environment, newbe, on /mnt in the global zone. For non-global zones that are running, mounted, or ready, their corresponding file systems within newbe are also made available on /mnt within each zone.


# lumount -n newbe /mnt

For more information about mounting, see the lumount(1M) man page.
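
When you no longer need access to the inactive boot environment's file systems, unmount them with the companion luumount command. A minimal sketch, using the same boot environment name as in the previous example:


# luumount newbe

For more information, see the luumount(1M) man page.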