Oracle Solaris 10 9/10 Installation Guide: Solaris Live Upgrade and Upgrade Planning
The following sections provide step-by-step procedures for upgrading when non-global zones are installed.
Upgrading With Solaris Live Upgrade When Non-Global Zones Are Installed on a System (Tasks)
For an example with abbreviated steps, see Upgrading a System With Non-Global Zones Installed (Example).
The following procedure provides detailed instructions for upgrading with Solaris Live Upgrade for a system with non-global zones installed.
The latest packages and patches ensure that you have all the latest bug fixes and new features in the release. Ensure that you install all the patches that are relevant to your system before proceeding to create a new boot environment.
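If you want to confirm whether a particular patch is already installed before you begin, you can list the installed patches with the showrev command; the patch ID shown here is only a placeholder for one of the patches named in the Infodoc.
# showrev -p | grep 123456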
The following substeps describe the steps in SunSolve Infodoc 206844.
Note - Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
The following instructions summarize the Infodoc steps for removing and adding the packages.
Remove existing Solaris Live Upgrade packages.
The three Solaris Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade or patch by using Solaris Live Upgrade. These packages include existing software, new features, and bug fixes. If you do not remove the existing packages and install the new packages on your system before using Solaris Live Upgrade, upgrading or patching to the target release fails. The SUNWlucfg package is new starting with the Solaris 10 8/07 release. If the existing Solaris Live Upgrade packages are from a release previous to Solaris 10 8/07, this package is not present and does not need to be removed.
# pkgrm SUNWlucfg SUNWluu SUNWlur
Install the new Solaris Live Upgrade packages.
You can install the packages by using the liveupgrade20 command that is on the installation DVD or CD. The liveupgrade20 command requires Java software. If your system does not have Java software installed, then you need to use the pkgadd command to install the packages. See the SunSolve Infodoc for more information.
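If you do need to fall back to pkgadd, a minimal invocation points the -d option at the package directory on the media and names the three packages. The path shown here is illustrative and depends on how your DVD or CD media is laid out; see the Infodoc for the exact procedure.
# pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu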
If you are using the Solaris Operating System DVD, change directories and run the installer:
Change directories.
# cd /cdrom/cdrom0/Solaris_10/Tools/Installers
Note - For SPARC based systems, the path to the installer is different for releases previous to the Solaris 10 10/08 release:
# cd /cdrom/cdrom0/s0/Solaris_10/Tools/Installers
Run the installer.
# ./liveupgrade20
The Solaris installation program GUI is displayed. If you are using a script, you can prevent the GUI from displaying by using the -noconsole and -nodisplay options.
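For example, a noninteractive invocation from a script might look like the following, using the options mentioned above.
# ./liveupgrade20 -noconsole -nodisplay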
If you are using the Solaris Software – 2 CD, you can run the installer without changing the path.
% ./installer
Verify that the packages have been installed successfully.
# pkgchk -v SUNWlucfg SUNWlur SUNWluu
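If you prefer a scriptable check, pkgchk exits with a nonzero status when it finds discrepancies, so a sketch along these lines can confirm that the packages installed cleanly.
# pkgchk SUNWlucfg SUNWlur SUNWluu && echo "Live Upgrade packages verified"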
# cd /var/tmp/lupatches
# patchadd -M path-to-patches patch-id patch-id
path-to-patches is the path to the patches directory, such as /var/tmp/lupatches. patch-id is the patch number or numbers. Separate multiple patch names with a space.
Note - The patches need to be applied in the order specified in Infodoc 206844.
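For example, if the patches have been downloaded to /var/tmp/lupatches, the command might look like the following; the patch IDs are placeholders for the patches named in the Infodoc.
# patchadd -M /var/tmp/lupatches 123456-01 123457-02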
x86 only: Rebooting the system is required. Otherwise, Solaris Live Upgrade fails.
# init 6
You now have the packages and patches necessary for a successful creation of a new boot environment.
# lucreate [-A 'BE_description'] [-c BE_name] \
-m mountpoint:device[,metadevice]:fs_options[:zonename] [-m ...] -n BE_name
-n BE_name
The name of the boot environment to be created. BE_name must be unique on the system.
-A 'BE_description'
(Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.
-c BE_name
Assigns the name BE_name to the active boot environment. This option is not required and is only used when the first boot environment is created. If you run lucreate for the first time and you omit the -c option, the software creates a default name for you.
-m mountpoint:device[,metadevice]:fs_options[:zonename]
Specifies the file systems' configuration of the new boot environment in the vfstab. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.
mountpoint can be any valid mount point or – (hyphen), indicating a swap partition.
device field can be one of the following:
The name of a disk device, of the form /dev/dsk/cwtxdysz
The name of a Solaris Volume Manager volume, of the form /dev/md/dsk/dnum
The name of a Veritas Volume Manager volume, of the form /dev/md/vxfs/dsk/dnum
The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent
fs_options field can be one of the following:
ufs, which indicates a UFS file system.
vxfs, which indicates a Veritas file system.
swap, which indicates a swap volume. The swap mount point must be a – (hyphen).
For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors).
zonename specifies that a non-global zone's separate file system be placed on a separate slice. This option is used when the zone's separate file system is in a shared file system such as /zone1/root/export. This option copies the zone's separate file system to a new slice and prevents this file system from being shared. The separate file system was created with the zonecfg add fs command.
In the following example, a new boot environment named newbe is created. The root (/) file system is placed on c0t1d0s4. All non-global zones in the current boot environment are copied to the new boot environment. The non-global zone named zone1 is given a separate mount point on c0t1d0s1.
Note - By default, any file system other than the critical file systems (root (/), /usr, and /opt file systems) is shared between the current and new boot environments. The /export file system is a shared file system. If you use the -m option, the non-global zone's file system is placed on a separate slice and data is not shared. This option prevents zone file systems that were created with the zonecfg add fs command from being shared between the boot environments. See zonecfg(1M) for details.
# lucreate -n newbe -m /:/dev/dsk/c0t1d0s4:ufs -m /export:/dev/dsk/c0t1d0s1:ufs:zone1
The operating system image to be used for the upgrade is taken from the network.
# luupgrade -u -n BE_name -s os_image_path
-u
Upgrades an operating system image on a boot environment.
-n BE_name
Specifies the name of the boot environment that is to be upgraded.
-s os_image_path
Specifies the path name of a directory that contains an operating system image.
In this example, the new boot environment, newbe, is upgraded from a network installation image.
# luupgrade -n newbe -u -s /net/server/export/Solaris_10/combined.solaris_wos
The lustatus command reports whether the boot environment creation is complete and the boot environment is bootable.
# lustatus
boot environment   Is        Active  Active    Can     Copy
Name               Complete  Now     OnReboot  Delete  Status
------------------------------------------------------------------------
c0t1d0s0           yes       yes     yes       no      -
newbe              yes       no      no        yes     -
# luactivate BE_name
BE_name specifies the name of the boot environment that is to be activated.
Note - For an x86 based system, the luactivate command is required when booting a boot environment for the first time. Subsequent activations can be made by selecting the boot environment from the GRUB menu. For step-by-step instructions, see x86: Activating a Boot Environment With the GRUB Menu.
To successfully activate a boot environment, that boot environment must meet several conditions. For more information, see Activating a Boot Environment.
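Continuing the running example, activating the boot environment created earlier in this procedure would look like the following.
# luactivate newbe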
# init 6
Caution - Use only the init or shutdown commands to reboot. If you use the reboot, halt, or uadmin commands, the system does not switch boot environments. The most recently active boot environment is booted again.
The boot environments have switched and the new boot environment is now the current boot environment.
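To confirm the switch, you can run lustatus again; the new boot environment should now show yes in the Active Now column of the output shown earlier.
# lustatus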
If the new boot environment is not viable or you want to switch to another boot environment, see Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks).